I have recently started working with GWT and, although by no means an expert, I can see clearly that this is how feature-rich, testable web UIs will be developed in the future.
The first thing I found when developing the client UI was that it was quite difficult to see what was going on from a logging perspective. After a quick look around, I found gwt-log, which solves this issue.
I have been using the Eclipse plugins to create my base projects, and it all runs quite nicely and seamlessly. So if you would like client-side logging, do the following.
Dependencies
1. gwt-log
This is the entry point and core logging implementation. It has two other dependencies:
2. commons-logging-1.1.jar
3. log4j-1.2.15.jar
Place these jars in the classpath of your GWT application, which in an Eclipse-generated GWT project would be /war/WEB-INF/lib.
Once you have done this, there are some configurations you will need to add, namely registering the module in the *.gwt.xml file and adding a servlet mapping.
So in your *.gwt.xml file, add the following lines.
The import declaration:
<inherits name="com.allen_sauer.gwt.log.gwt-log-OFF" />
Your log levels:
<extend-property name="log_level" values="INFO" />
<set-property name="log_level" value="INFO" />
This setting disables the in-page DivLogger, so that the output appears in the hosted mode console:
<set-property name="log_DivLogger" value="DISABLED" />
Then in your web.xml file, add the servlet and its mapping:
<servlet>
  <servlet-name>remoteLoggerServiceImpl</servlet-name>
  <servlet-class>com.allen_sauer.gwt.log.server.RemoteLoggerServiceImpl</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>remoteLoggerServiceImpl</servlet-name>
  <url-pattern>/[Application url]/gwt-log</url-pattern>
</servlet-mapping>
Once this is done, within your GWT client module you can reference the logger via the static methods in the com.allen_sauer.gwt.log.client.Log class, and your logging output will look something like this. Very handy when sifting through which events have been fired, when, and for what reason.
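For example, a client-side class might log like this (the presenter class, method and widget names are hypothetical; the static Log methods are the gwt-log API):

```java
import com.allen_sauer.gwt.log.client.Log;

// Hypothetical GWT client code illustrating the gwt-log static API.
public class WidgetPresenter {

    public void onWidgetSelected(String widgetId) {
        Log.debug("Widget selected: " + widgetId);
        try {
            loadWidget(widgetId);
            Log.info("Widget " + widgetId + " loaded");
        } catch (RuntimeException e) {
            // logs the message along with the stack trace
            Log.error("Failed to load widget " + widgetId, e);
        }
    }

    private void loadWidget(String widgetId) { /* ... */ }
}
```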
Wednesday 24 March 2010
Tuesday 29 September 2009
Transaction decisions
When creating a Spring/Hibernate application in a collaborative development environment, functionality can grow exponentially, and your business layer can end up with methods that execute 20 or more queries for a single operation. If those queries are not combined into one atomic operation, you risk data integrity issues and incremental functional failure of your application.
So now you have all your service objects working nicely when, suddenly, you need to integrate transaction support into your service layer. If you were clever you would have done this BEFORE you started to develop your service layer, but support for transactions is often not top of the list of priorities and gets left behind. Not a good thing, but it's not the end of the world.
You have a few options once in this situation.
a) Do nothing
This is the easiest option. If your application is not mission critical, volumes are low, and you have the time and resources to maintain data that is much more subject to integrity errors, then this may be your best option. It's certainly the cheapest, but it is not a technical solution; it's an operational one.
b) Programmatic transaction support.
In a nutshell, we need to write code, lots more code, to declare, commit or roll back transactions.
Within the Spring environment you have two ways to do this: the TransactionTemplate or the PlatformTransactionManager, either of which can be passed into the service layer via dependency injection.
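As a sketch of the TransactionTemplate route, the service below wraps a batch of writes in one atomic unit. The WidgetService, WidgetDao and Widget names are hypothetical illustrations; the Spring transaction classes are real:

```java
import java.util.List;

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

// Hypothetical domain types for the sketch
class Widget { }
interface WidgetDao { void save(Widget w); }

// Hypothetical service showing programmatic transaction management:
// the PlatformTransactionManager is injected and wrapped in a template.
public class WidgetService {

    private final TransactionTemplate txTemplate;
    private final WidgetDao widgetDao;

    public WidgetService(PlatformTransactionManager txManager, WidgetDao widgetDao) {
        this.txTemplate = new TransactionTemplate(txManager);
        this.widgetDao = widgetDao;
    }

    public void saveWidgets(final List<Widget> widgets) {
        txTemplate.execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // everything in here commits or rolls back as one unit;
                // an unchecked exception triggers the rollback
                for (Widget widget : widgets) {
                    widgetDao.save(widget);
                }
            }
        });
    }
}
```

Note how the transaction plumbing wraps every method body; this is exactly the invasive extra code per service method that makes this approach time consuming.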
In many service layers, the high-level logic of most methods is: execute the operations, commit if there is no exception. If this describes your application, then the declarative approach is almost certainly the better option.
The advantage of the programmatic approach is fine-grained access to transaction logic which, in my opinion, is not necessary in most business applications, while the extra code you need to write for each service layer method is invasive and time consuming.
Should you have conditions other than exceptions under which you would roll back the transaction, this way might be better; but then again, you could just create a checked exception for the condition and go with the next strategy I will discuss, which is declarative transaction management.
c) Declarative Transaction Management
Wouldn't it be great if you could declare a method in your service layer to be transactional, and the framework would then acknowledge that, execute your method and, once satisfied that the execution completed without error, commit all the changes to the persistent data store?
That pretty much sums up declarative transaction management, and in an application with consistent transaction logic it is probably the most elegant and least invasive way to go.
In Spring, aspects are used to achieve this, and pretty much the only design decision that needs to be taken is whether to use XML-driven or annotation-driven transaction declarations. I prefer annotations, as they are simpler and tie the transaction settings (check here for your options) to the target method explicitly.
The XML-driven notation requires pointcut expressions: a method falls into transactional scope if its signature is matched by the pointcut expression. The XML configuration route would be good if you wanted to make your entire service layer or an entire service object transactional. I prefer to use transactions only where there is a risk of data integrity errors, so mostly on writes and deletes, not reads.
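A minimal annotation-driven sketch might look like the following. OrderService, OrderDao, Order and InsufficientStockException are hypothetical names; @Transactional is the real Spring annotation:

```java
import org.springframework.transaction.annotation.Transactional;

// Hypothetical domain types for the sketch
class Order { }
class InsufficientStockException extends Exception { }
interface OrderDao {
    void reserveStock(Order o) throws InsufficientStockException;
    void save(Order o);
}

// Hypothetical service: Spring opens a transaction before the method runs
// and commits on normal return; a RuntimeException, or the checked
// exception listed in rollbackFor, causes a rollback instead.
public class OrderService {

    private final OrderDao orderDao;

    public OrderService(OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    @Transactional(rollbackFor = InsufficientStockException.class)
    public void placeOrder(Order order) throws InsufficientStockException {
        orderDao.reserveStock(order);
        orderDao.save(order);
    }
}
```

For the annotation to take effect, annotation-driven transaction support must be enabled in the Spring configuration, e.g. `<tx:annotation-driven transaction-manager="transactionManager"/>` in the XML.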
If you are interested in configuration details, here is the URL to the Spring reference documentation, which covers the technical aspects of implementing transaction management.
If your application is large and complex and does not support transactions, then once you have decided on the strategy you wish to adopt, I think having automated unit tests for your existing functionality is a must. Some things will just not work once wrapped in a transaction, or the service layer logic may not be conducive to supporting transactions, so the tests are the best way to figure out what breaks after the introduction of transaction support.
Friday 11 September 2009
Accessing static constants in EL
Ever wanted to compare a value to a constant using expression language in your UI? Usually I would hard-code the string representation in the JSP, which means that if you have 1000 references in your UI code and you want to change the constant's value, you have a lot of work to do.
So you could
a) add accessors to your class for your constants, which will work but is not very elegant when you end up with methods such as getAPPROVAL_STATUS_PENDING.
b) have a utility class which does the same as in a), and/or put all your constants in a map and import this class into the EL context.
I was not really happy with either of these methods, as they seemed clunky and inelegant. So I did what everyone does in this situation and used Google.
I came up with a tag library from Jakarta called "unstandard" which provided the functionality I needed. However, it appears to be deprecated and is only available as source code in one of their more obscure repository locations:
http://svn.apache.org/repos/asf/jakarta/taglibs/sandbox/unstandard/trunk
Since we have our own custom tag library on the project and I didn't really need all the functionality, I took the description for the "useConstants" tag from the TLD, integrated it with our tag library descriptor, and took the classes
- org.apache.taglibs.unstandard.TagUtils
- org.apache.taglibs.unstandard.ClassUtils
- org.apache.taglibs.unstandard.UseConstantsTag
and put them in our tag code library.
Now I can access constants in the UI expression language context in the following way. On the JSP page where I wish to use the constants, I add the declaration:
<mytaglibrary_prefix:useConstants classname="com.mypackagename.MyClass" var="MyClass" />
Then I can access my class's static constants using standard expression language syntax:
${MyClass.A_STATIC_FINAL_CONSTANT_VALUE}
I quite like this as it's nice and clean, negates the need for string literals corresponding to constant values in the code, and will work seamlessly if constant values change.
A thank you to the Jakarta taglibs team; my question is why this is not in the standard library, and why do these utils appear deprecated?
Labels:
EL,
expression language,
java,
JSP,
jstl,
static constants
Monday 31 August 2009
Outsourcing ... be afraid?
As a person who specialises in technology, living in what is by economic definition a first-world country, I would be lying if I maintained I had never felt threatened by the trend of outsourcing development functions to lower-cost destinations.
Owing to my stereotyped views of hordes of highly educated, ambitious graduates willing to work twice as hard for a quarter of my salary, I decided to do a little research and rationalise my views.
With reference to outsourcing, the trend presents opportunities as well as threats, and you need to look at the reflexive trends as well.
Outsourcing will provide a software engineer who has good planning skills and experience with collaborative development tools with new opportunities to work with offshore development teams as well as contribute to development. These days you need a full understanding of the goals of any IT undertaking, and you need to use technology to manage as well as to do the work yourself.
In essence the job itself is changing: no longer can a software engineer just immerse himself in technology. The need to understand the benefit the technology brings to an organisation is paramount, and being able to communicate that benefit to the organisation's management structure is also of primary importance. We all need to appraise technologies not only from a technical perspective but from a business and revenue-generating perspective as well.
The ability to assess how much effort a technology will take to implement, and its direct or indirect contribution to an organisation's revenue streams, is now not just a managerial function; it needs to be understood by the software engineers themselves.
If you accept for a minute the points I have made above, the following needs to be considered too. Technology is also empowering people in traditionally non-managerial roles to manage themselves, to set their own targets, and to report their progress in collaboration with their co-workers or co-participants in any unit of work. With a good implementation, senior management can access the data they need for progress reports and for drawing up budgets.
This puts pressure on the lower level of management in an organisation, who are neither as tech-savvy as their teams nor able to actually DO the work.
With regard to reflexive trends, outsourcing markets show two developments: wage inflation and more worker-centric employment legislation.
Demand for outsourcing services will inflate wages to the point where it becomes more cost-effective to operate at home, especially coupled with the relative wage stagnation in the technology sector in Western Europe and the US.
One of the other reasons companies outsource is that the destination country affords workers fewer employment rights, enabling productivity increases through factors such as unpaid overtime and easy, rapid dismissal and hiring processes. As the workforce gentrifies, political pressure to provide more employment rights will come to the fore, in many cases rendering outsourcing less profitable and effective than at present.
Outsourcing also drains the outsourcing company of intellectual property and of the ability to effectively secure data and IP rights, so this too has to be considered when assessing trends in the technology sector.
In my opinion, outsourcing decisions have in many cases been taken by managers who are under pressure to deliver short-term financial benefit, and who themselves don't understand, and are reporting to people who do not understand, the full implications of outsourcing.
I think we are entering a mature stage in the outsourcing cycle which will see its use level off or drop in some sectors.
In essence we have the following forces at play:
1) Rising costs in outsourcing destinations
2) Falling or stagnating costs in "outsourcer" destinations.
3) Political pressure to protect labour markets in traditionally "outsourcer" destinations
4) Increased expectations and political pressure in outsourcing nations to improve working conditions
5) Increased concerns for intellectual property rights and data security
6) Increased public awareness of data security and the resulting legislative and social accountability of corporations.
To sum up, I think outsourcing is less of a threat than I originally thought, and will be a declining trend.
This is cold comfort to those of you who have lost your jobs owing to outsourcing, but it is maybe worth looking at these factors and remembering that technology also influences one other thing: the rate of change. Changes in employment trends and in society at large are happening at such a pace these days that it is very difficult to stay ahead or even keep up, and unless we are very lucky, we all get caught out at some point.
Never a truer word was said: the only thing constant is change.
Thursday 20 August 2009
Groovy Precompiling
Maybe anybody who is interested in Groovy can make a few comments about this.
The requirement is that we offer functionality extending the core functionality of a Java web application around the persistence lifecycle.
So I added functionality which will execute a Groovy script that has been associated with an event and an object type, e.g. preupdate and Widget. The script also needs to be updatable within the session lifecycle.
This works well and adds loads of flexibility, but there will be an obvious performance hit owing to the creation of a Groovy shell and the execution of the script during the persistence lifecycle.
So my colleague suggested we look into precompiling the scripts into classes and executing these in a native Java environment, rather than creating a Groovy shell and running the script, which needs to be dynamically compiled every time it executes.
This was achieved, in essence, by creating an instance of the compiler, compiling the script to a class file and storing it somewhere pre-execution; then, when the script needs to be executed, loading the class (which will always be a subclass of the Script class) and calling its run() method.
So to compile, we do this, and the resultant output is a .class file:
CompilerConfiguration cc = new CompilerConfiguration();
cc.setTargetDirectory(destdir);
cc.setScriptBaseClass("groovy.lang.Script");
Compiler c = new Compiler(cc);
c.compile(myClassName,myScript);
Now we need to retrieve the class, and for that we need a ClassLoader. So I extended ClassLoader to create a FileClassLoader, which finds the file on the file system, creates a byte array from it, and passes that into the defineClass method of the ClassLoader class.
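A minimal sketch of such a FileClassLoader, with error handling kept to a minimum (the class and method names here are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal file-based class loader: reads a compiled .class file from
// disk and hands the raw bytecode to defineClass.
public class FileClassLoader extends ClassLoader {

    public Class<?> loadFromFile(String path, String className) throws IOException {
        byte[] bytecode = Files.readAllBytes(Paths.get(path));
        // defineClass turns the byte array into a usable Class object
        return defineClass(className, bytecode, 0, bytecode.length);
    }
}
```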
We run the script by creating a new instance of the class and calling the run() method on it:
FileClassLoader fcl = new FileClassLoader();
long start = System.currentTimeMillis(); // timing for the performance comparison
Class cls = fcl.loadFromFile(DIR + "//DIR//" + classname + ".class", classname);
Binding b = new Binding();
b.setProperty("obj","compiled");
Script scr = (Script) cls.newInstance();
scr.setBinding(b);
scr.run();
The result of this is the same as doing this:
GroovyShell gs = new GroovyShell();
gs.setVariable("obj","interpreted");
gs.evaluate(script);
The theory is that every time you pass the script as a string to a shell, the shell needs to compile and then execute it. Precompiling removes the need for this, providing a performance gain: essentially we are no longer executing a script that needs to be dynamically compiled, but executing precompiled code.
I tested for performance, and since I have the Groovy plugin installed in my Eclipse IDE, the script is compiled and cached even when using the shell, so there is no real performance gain when I test in the IDE. But will this be a more efficient way of using Groovy on the web server?
I will update this post when I have an answer.
Wednesday 5 November 2008
To shard or not to shard
I remember back in the day it was simple: HTML --> Server Side Scripting --> DB. That's it. It was easy.
It seems that, just as the gaming industry drove the development of hardware to handle the increased requirements of gaming software, data is driving the development of frameworks and designs to solve one issue:
"How do we create a reliable architecture that scales with increased storage and retrieval requirements?" That, to me, is the bottom line. To further segment those requirements with proposed solutions:
Storage
- More disk space
- More servers
- Sharding
- Archiving
Retrieval
- In-memory caches
- Indexing (DB)
- Indexing (application)
- Sharding
In our team's latest prototype, we have created an application which works extremely well using a suite of Java frameworks.
Briefly, it uses Spring MVC and IoC, Hibernate Core, Hibernate Search, Memcached and MySQL.
We have a DAO design pattern with Hibernate implementations, a single Hibernate Search (Lucene) index, and a Memcached object cache, with an indexed MySQL DB used for persistent storage.
After hammering the web application using JMeter, I have come to the conclusion that it performs very well.
Now enter Hibernate Sharding and the wheels are coming off!
My first blow came from my extensive use of DetachedCriteria.
My design all stems from a base object with a shardId in all object hierarchies, and a constraint ensuring that all objects descended from that base object will be in the same shard. My ShardSelectionStrategy takes care of that based on entity type.
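As a rough sketch of what such a strategy can look like against the Hibernate Shards ShardSelectionStrategy interface (the type-to-shard mapping below is a hypothetical illustration, not my actual implementation):

```java
import org.hibernate.shards.ShardId;
import org.hibernate.shards.strategy.selection.ShardSelectionStrategy;

// Hypothetical sketch: route every new entity to a shard derived from its
// type, so all instances of one entity type land on the same shard.
public class TypeBasedShardSelectionStrategy implements ShardSelectionStrategy {

    private final int shardCount;

    public TypeBasedShardSelectionStrategy(int shardCount) {
        this.shardCount = shardCount;
    }

    public ShardId selectShardIdForNewObject(Object obj) {
        // derive a stable bucket from the entity's class name
        int bucket = Math.abs(obj.getClass().getName().hashCode()) % shardCount;
        return new ShardId(bucket);
    }
}
```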
So I subclassed DetachedCriteria, and using a factory based on entity type I can retrieve the shard I need, which is restrictive but works for me.
I am now busy with the FullTextSession and I am running into big trouble.
Basically, EventSource isn't supported by Shards, and a ShardedSessionImplementor and a SessionImplementor are quite different. I created shard-aware classes for all the main players:
- FullTextSession
- FullTextQuery
- AbstractQuery
Now without rewriting the whole implementation, and going through that pain, how do I solve the issue of implementing a distributed fetch?
Labels:
hibernate-search,
hibernate-shards,
java,
memcached,
mysql,
spring