Light Work

February 17, 2012

A brief celebratory posting, something worked seamlessly and I now need to go away and think about the implications.

The context is that IBM recently acquired Worklight, a vendor of software addressing mobile platforms. As my team works in the mobile space we're all very interested to see the capabilities of the newly acquired portfolio. Worklight addresses the development, infrastructure and management of mobile applications. These capabilities complement those of IBM's existing mobile product set. The development features fit with technologies we currently use: PhoneGap for cross-device portability is a key component in the Worklight stack, and the Dojo JavaScript framework is supported.

Programming model

I started by looking at one aspect of Worklight’s server-side programming model. This allows us to create “Adapters” which are effectively RESTful wrappers of enterprise data, producing JSON data ready for consumption in the client. Out of the box there are adapters for JDBC database access and HTTP services.

The creation of an adapter is remarkably simple. The details are all described in the online tutorials, but I want to give a flavour of the degree of effort, so I'll summarise here.

First, there is some initial configuration of the server to connect to the data source, in my case a database. So I adjusted the server definition with a few lines of configuration.


Then a definition of the adapter in an XML file:

        <connectionPolicy xsi:type="sql:SQLConnectionPolicy">
            <loadConstraints maxConcurrentConnectionsPerNode="5" />
        </connectionPolicy>
        <procedure name="getAccountTransactions1"/>

And finally the implementation, which comprises the SQL to be executed and a JavaScript wrapper.

var getAccountsTransactionsStatement = WL.Server.createSQLStatement(
    "SELECT transactionId, fromAccount, toAccount, transactionDate, transactionAmount, transactionType " +
    "FROM accounttransactions " +
    "WHERE accounttransactions.fromAccount = ? OR accounttransactions.toAccount = ? " +
    "ORDER BY transactionDate DESC " +
    "LIMIT 20;"
);

//Invoke prepared SQL query and return invocation result
function getAccountTransactions1(accountId){
    return WL.Server.invokeSQLStatement({
        preparedStatement : getAccountsTransactionsStatement,
        parameters : [accountId, accountId]
    });
}

Just Work

Then I just select

Run As –> Invoke Worklight Procedure

and my code is deployed to the server and invoked. There's negligible build or deployment time and I see a JSON string displayed.

{
  "isSuccessful": true,
  "resultSet": [
    {
      "fromAccount": "12345",
      "toAccount": "54321",
      "transactionAmount": 180,
      "transactionDate": "2009-03-11T11:08:39.000Z",
      "transactionId": "W06091500863",
      "transactionType": "Funds Transfer"
    },
    {
      "fromAccount": "12345",
      "toAccount": null,
      "transactionAmount": 130,
      "transactionDate": "2009-03-07T11:09:39.000Z",
      "transactionId": "W214122/5337",
      "transactionType": "ATM Withdrawal"
    }, etc.

Now that whole development process took maybe 30 minutes, of which at least half was spent stumbling over Windows 7's security controls, which prevented me from updating the server configuration. I reckon the next query will take no more than 10 minutes to implement.

Conclusions and Open Questions

My previous articles have talked about using JAX/RS and JPA to achieve pretty much the same end result: a RESTful service obtaining some data from a database. I was pretty pleased with how easy that was to do: a couple of hours initially and probably 30 minutes for each additional query. Clearly the Worklight story offers significant effort savings. I will be using Worklight for rapid prototyping in future.

Two areas I want to investigate further:

  1. How efficient is the programming model? We’re executing JavaScript on the server. Are the overheads significant?
  2. What do we do when we are not just reading? Suppose we need transactional updates to different tables or even databases. For sure we can use stored procedures, but I’m uneasy about pushing business logic down to the database layer. Probably I need to use enterprise quality services perhaps implemented as an EJB, but in which case I can trivially expose those using JAX/RS. Do I need Worklight in those transactional scenarios?

So definitely another tool for the toolbox; I just need to figure out its optimal uses, and what other options there may be. Next, onwards to look at other Worklight features such as security and application management.


A brief posting describing how I got my Android device connected to my Windows 7 machine so that I could use the Android adb tool to install an application.

My objective is to be able to run the Android Debug Bridge (adb) on my Windows 7 machine, connecting to an Android device. This allows me to perform a number of useful administrative tasks such as deploy applications, look at application logs and start a unix shell session on the device.

We’re getting into territory here where incautious actions can damage your Android device. The instructions here can get you powerful access on your device, so “caveat lector”, don’t do any of this unless you take responsibility for unpleasant outcomes such as your device becoming as useful as a housebrick.

The starting point is that you have installed Eclipse and the Android SDK manager. References on how to do this are supplied with the IBM Mobile Technology Preview as mentioned in my previous article. There are a number of optional installation packages which can be seen in Eclipse if you select Window->Android SDK Manager. You need to ensure that the Google USB Driver is installed.


When you select this item the drivers are downloaded and placed in your SDK installation directory.


However, this does not conclude the installation procedure: when I plugged in my device it was not visible to adb. So there's one more step:

Installing the Driver for the Device

First, ensure that the device is attached to the USB port and that the device memory is not mounted as a disk. Then in the Windows Device Manager you should be able to find your device. In my case, I happen to be using a StorageOptions Scroll, which manifests as a device called TCC8900. Right-click on the device and you get the option to install a driver, and can browse to the just-downloaded material. Unfortunately I got a message saying that no suitable driver could be found.

It transpired that the driver supplied by Google was suitable, but the configuration file android_winusb.inf did not contain a matching entry. A bit of Googling (rather cyclic, no?) led to the solution: add matching entries to the .inf file:

As I'm using 64-bit Windows, I find the line

[Google.NTamd64]

(on 32-bit Windows it's [Google.NTx86]) and insert the lines

; Scroll – recovery
%SingleAdbInterface% = USB_Install, USB\VID_18D1&PID_DEED
%CompositeAdbInterface% = USB_Install, USB\VID_18D1&PID_DEED&MI_01
; Scroll – bootloader (fastboot)
%SingleBootLoaderInterface% = USB_Install, USB\VID_18D1&PID_D00D

just before the next entry

; HTC Dream

Then, retrying the installation works just fine. The device now appears under a more understandable name:


Magic Numbers

If you are using a different device then your entries in the inf file will be different. You can find the values by selecting your device, viewing the property tab and selecting hardware IDs.
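The mapping from a hardware ID to the inf entries is mechanical, so if you have several devices to register it can be scripted. Here's a small illustrative Java sketch; the VID/PID values are the ones for the Scroll used above, and the entry names follow the pattern in the snippet:

```java
public class InfEntries {
    // Build android_winusb.inf adb lines for a device, given the USB vendor
    // and product IDs as they appear under the "Hardware Ids" property tab.
    static String adbEntries(String vid, String pid) {
        String id = "USB\\VID_" + vid + "&PID_" + pid;
        return "%SingleAdbInterface% = USB_Install, " + id + "\n"
             + "%CompositeAdbInterface% = USB_Install, " + id + "&MI_01";
    }

    public static void main(String[] args) {
        // Values for the StorageOptions Scroll, as used in this article.
        System.out.println(adbEntries("18D1", "DEED"));
    }
}
```

Substitute your own device's IDs from the property tab described below.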

Now what can we do?

So now that the device is attached we can simply select an Android application project, rightClick->Run As->Android Application, and it is deployed to the device and launched, with any output appearing in the LogCat view. This is a great improvement on mounting a drive, copying an apk file and installing the app.

In my previous posting I described how to get started with the freely downloadable IBM Mobile Technology Preview. I took one of the samples provided and ran it in an Android emulation environment. That worked nicely to test some features of the samples, but the speed of emulation is quite slow. This doesn't make for a productive code/test cycle. In this article I want to describe an alternative approach using a Chrome plugin known as Ripple. Ripple was created by the gloriously named Tiny Hippos, who were acquired by Research In Motion this year.

Hybrid Applications

One key aspect of the IBM Mobile Technology Preview is the use of PhoneGap to enable building Hybrid applications: applications that run in a browser and yet exploit native device capabilities such as GeoLocation and the Accelerometer. In effect PhoneGap provides a device-independent JavaScript API hooking into a device-specific native implementation on each platform. The expectation is that the bulk of a Hybrid application is implemented in JavaScript that is portable across a wide range of devices. The Dojo mobile APIs will detect the particular device and hence allow suitable style-sheets to be selected, giving a visual appearance that fits the device standards. Hence we have a vision (to use an old phrase) of "write once, run anywhere". In my team's experience, with careful design, this vision can be realised.

When developing such Hybrid applications we can greatly accelerate the development/unit test cycle if we can simply run the JavaScript in a browser. This is where Ripple comes in: it provides an emulation of different device types and an implementation of the major PhoneGap APIs running directly in Chrome. I am going to show how to use Ripple to run the Mysurance sample, which uses GeoLocation.

One note: some aspects of the Technology Preview, notably the Notification framework, depend upon Java code. We cannot test these aspects using Ripple as it currently stands. I believe that Mysurance is typical of most Hybrid applications in having very significant portions of the code in JavaScript and hence benefiting from testing in Ripple.


The application assets (web pages, JavaScript, CSS and images) are held in an assets directory. We need those to be accessible to the browser. It simplifies some of my other testing if I serve these files from a web server such as Apache or IBM HTTP Server.

I add these entries to my httpd.conf:

Alias /mysurance "C:/IBM/MobileTechPreview/samples/mysurance/eclipse-project/assets/www"

<Directory "C:/IBM/MobileTechPreview/samples/mysurance/eclipse-project/assets/www">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

and restart Apache. I can now point my browser at


and see the Application

Ripple Installation

You can install Ripple into a running Chrome instance from this Ripple download site. On completion the Ripple icon is visible in the toolbar


The first time you access an application with Ripple installed you see this


which offers a number of possible runtimes. I choose PhoneGap, and then can select the Ripple icon and choose to Enable Ripple.


The Mysurance application now shows in a device emulator, along with various control capabilities.



If you explore the Ripple control capabilities you will see that you can select different devices, emulate the accelerometer and send events. Here I’m going to focus on the GeoLocation. Expanding that section I see


With a default location of Waterloo Station. We can change this if we know the latitude and longitude of our desired location. I will choose a place in Yorkshire, finding its coordinates from this site.


Ripple now shows the new location


Back in the application I pick Accident Toolkit and from this menu


select Call the Police. This brings up a map centred on the location we specified to Ripple, with the nearest police station identified.


Selecting the station gives us the contact details.



This does seem to be a very promising approach to testing some aspects of Hybrid applications.

In October 2011 IBM announced the Mobile Technology Preview, which you can download here. The preview enables development of "Hybrid" mobile applications that can exploit server-push notifications. A "Hybrid" application is developed in JavaScript, runs in a device's browser environment, and exploits a virtual device API (PhoneGap in this case) to access native device capabilities such as geolocation and the accelerometer. It is interesting to see that the preview also includes an Alpha release of the 8.5 version of the WebSphere Application Server, which is used for hosting the server-side code, and in particular the notification engine.

The promise of the Hybrid approach is that rich applications exploiting device capabilities can be developed more cost-effectively than developing device-specific Native applications.

In this article I want to describe how to build and run one of the samples provided with the preview. I encountered a few minor glitches in getting this going; I hope I can save you some time.


The preview documentation includes an article, ProjectSetup.html,  which describes how to set up your development environment.  This procedure requires you to also download Eclipse, the Android SDK and the emulators – the Android Virtual Devices (AVD) that you will use for testing.

I did my initial testing with the Eclipse Helios release, but more recently have been using Indigo. I had occasional stability problems with the latest AVD 4.0 release and so chose to install version 2.2.

On completing the installation my Eclipse Window menu now has some Android-related options.


Note that amongst the installation steps is an adjustment to your eclipse.ini file to use a Sun JRE when launching Eclipse. This is necessary because the build procedure to produce a deployment package for Android depends on a Sun-specific package.
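For reference, the eclipse.ini change is the standard Eclipse -vm entry, which must appear before the -vmargs line; the JDK path below is just an example of a typical Sun install location, so adjust it to your own:

```ini
-vm
C:\Program Files\Java\jdk1.6.0_21\bin\javaw.exe
```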

At present I have not managed to find how to launch Rational Application Developer (RAD) under a Sun JRE and hence I do not use RAD as my Android development environment.

Project Set Up

The tech preview includes a number of sample applications delivered in the form of eclipse project directories. The application build tools make certain assumptions about project structure; that structure is most easily created by using the Android-specific project creation options.

If you use a default Java project structure then your application will appear to build correctly but you will get an exception of this form:

10-31 16:28:53.982: E/AndroidRuntime(1200): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{}: java.lang.ClassNotFoundException:

So, use the following approach.

Import As An Android Project

In Eclipse select File->New->Other->Android->Android Project


Click Next and select Create project from existing source, and browse to the location of one of the sample projects, for example


You may see an error

    An Eclipse project already exists in this directory.
    Consider using File > Import > Existing Project instead.

It is important that you do not follow this advice! The File->Import option does not yield a correctly structured project.

Instead, in Windows Explorer, browse to that project directory and delete the .project file (do not delete the .classpath file) and retry creating the Android project, which will now succeed.

Click Next and select your chosen Build Target. By default the samples target Android 4, but so far I've been using Android 2.2.


Click Next again and you will see that the project’s AndroidManifest.xml has been understood.


Your project should now build cleanly. On some occasions I’ve seen compilation errors

   Syntax error, annotations are only available if source level is 1.5 or greater  

This happens despite the fact that the workspace is set up to use Java 1.6. To fix it, select your project, rightClick->Properties->Java Compiler, check the "Enable Project-Specific Settings" option, choose a different compiler level and click OK. This forces a rebuild. Then repeat the process, reverting to having "Enable Project-Specific Settings" unchecked. I would have expected a project clean or a rebuild to be as effective, but this recipe is the one that worked for me.

Running the Application

To run the project in the emulator, select the Project, rightClick->Run As->Android Application

This will launch the appropriate emulator, package the application, deploy it to the emulator and start it.


A couple of notes about this process

Startup Time

The emulator takes a considerable time to start (on my laptop over 5 minutes). It’s reassuring to watch the logs so that you can see progress. In Eclipse, Window->Show View->LogCat.


You should also open the Console view when you get to the stage of deploying the application.

Increase Timeout

You may also see an error of this form

11-02 20:30:02.950: I/System.out(391): onReceivedError: Error code=-6 Description=The connection to the server was unsuccessful. URL= file:///android_asset/www/demosite/demos/mobileGallery/demo.html

This is actually due to the emulator taking too long to retrieve the file. You can increase the timeout by adding a line of code to the application.


Set the loadUrlTimeoutValue by adding the code shown below

        this.setIntegerProperty("loadUrlTimeoutValue", 70000);
        super.loadUrl("file:///android_asset //etc

With that in place the sample ran correctly, if slowly.

This post is a minor celebration: I used some technology and it worked nicely. There's also a reminder of how to enable OpenJPA tracing in a WebSphere environment, which allowed me to have a look at the SQL generated by JPA.

Setting the Scene

This example is based on some work I was doing in a financial application, but I've reworked the problem in terms of health checks of some pieces of electrical equipment. The idea is that our system receives records describing the latest known "Health" of a piece of equipment. The records contain a time-stamp.

Equipment ID   Date             Health %
101            11th July 2011   98
101            12th July 2011   97
101            13th July 2011   98
351            11th July 2011   71
351            12th July 2011   47
351            13th July 2011   33

In the example table we see a piece of equipment, number 101, operating healthily, whereas equipment number 351 is less healthy and its health is falling over time.

Now we might also have a table with more information about the Equipment, and so our health class might look like

   @Entity public class Health {
     public Date timestamp;
     public int healthPercent;

     @ManyToOne   // relationship to the Equipment table
     public Equipment equipment;
   }

Simple JPA query

One thing we might reasonably do is implement a service to return the recent history of those records for a piece of equipment. Our RESTful service might have a URL like this


We would probably have some additional parameters to allow selection by date, but for the sake of simplicity let’s keep it to that.

In previous postings I’ve described how we can use JAX/RS to implement a service of this kind. Our JAX/RS implementation would probably call an EJB and eventually we’d end up invoking a JPA query

      SELECT h FROM Health h
                 WHERE = :equipmentId

We could then have an EJB with an injected entity manager

     public class RigEjb {

        @PersistenceContext
        private EntityManager m_em;

Then in the EJB a method to invoke the query

    public List<History> equipmentHistory(int equipmentId) {
         Query query = m_em.createNamedQuery("equipmentHistory"); // query name illustrative
         query.setParameter("equipmentId", equipmentId);
         return (List<History>) query.getResultList();
    }

All very simple, written in a few tens of minutes, and we get back a list of the history records for a piece of equipment, suitable for displaying in, for example, a graph. JPQL is doing well so far.

How Healthy Now?

Now historic trends of Health are interesting, and indeed I've worked on systems where predicting the future health of equipment from such data is of considerable business value. However there's probably a more important question to ask of this data: what's the latest view of the health of each piece of equipment?

For that we need to pick just one record for each piece of equipment: the latest one we have. When I first hit this problem I created a naive implementation: I just returned all the records to my Java application and iterated over them, identifying the latest record for each piece of equipment. This is not a scalable solution; with a large number of history records performance would not be good.
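To make the inefficiency concrete, the naive reduction looks something like this plain-Java sketch (the Health stand-in class and sample values are my own, not the real entity — the point is that every row crosses the wire before we discard most of them):

```java
import java.util.*;

public class LatestHealth {
    // Minimal stand-in for the Health entity.
    static class Health {
        int equipmentId; Date timestamp; int healthPercent;
        Health(int e, Date t, int h) { equipmentId = e; timestamp = t; healthPercent = h; }
    }

    // The naive reduction: fetch everything, keep the newest record per equipment.
    static Map<Integer, Health> latestPerEquipment(List<Health> all) {
        Map<Integer, Health> latest = new HashMap<>();
        for (Health h : all) {
            Health seen = latest.get(h.equipmentId);
            if (seen == null || h.timestamp.after(seen.timestamp)) {
                latest.put(h.equipmentId, h);
            }
        }
        return latest;
    }

    public static void main(String[] args) {
        List<Health> rows = Arrays.asList(
            new Health(101, new Date(1000), 98),
            new Health(101, new Date(2000), 97),
            new Health(351, new Date(1000), 71),
            new Health(351, new Date(3000), 33));
        Map<Integer, Health> latest = latestPerEquipment(rows);
        System.out.println(latest.get(101).healthPercent); // 97
        System.out.println(latest.get(351).healthPercent); // 33
    }
}
```

The work is O(n) over every history record ever stored, which is exactly what we want the database to avoid sending us.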

However JPQL is actually pretty powerful, and after some thought and a question on StackOverflow I came up with

  SELECT h FROM Health h
  WHERE like :type
  AND h.timestamp = (
     SELECT MAX(hmax.timestamp)
     FROM Health hmax
     WHERE =
  )

Here we identify the record whose date matches the maximum date for this piece of equipment. I'm impressed that the OpenJPA JPQL implementation delivered with WebSphere can deal with this and produce the desired answers.

However there’s even more we can accomplish. Let’s make the data a little more complicated, with multiple measurements on the same day, differentiated by a serial number.


Equipment ID   Date             Serial   Health %
101            11th July 2011   1        98
101            12th July 2011   1        97
101            12th July 2011   2        98
351            11th July 2011   1        71
351            11th July 2011   2        47
351            11th July 2011   3        33
351            12th July 2011   1        29

Now this may seem a little contrived, but in fact the data now matches very closely the financial data I was working with in my real project. In that project the record with the highest serial number each day was deemed to have the most significant “health” value.

So I need to select these records:


Equipment ID   Date             Serial   Health %
101            11th July 2011   1        98
101            12th July 2011   2        98
351            11th July 2011   3        33
351            12th July 2011   1        29

The query to do this is gratifyingly similar to our previous case

  SELECT s FROM State s
    WHERE = :equipmentId
     AND s.serial = (
         SELECT MAX(smax.serial)
         FROM State smax
         WHERE =
                AND =
     )

And this works very nicely. Out of curiosity I wanted to see what the actual SQL would be to implement this query, and that led me to look at enabling OpenJPA trace in WebSphere.
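As a sanity check on the query's semantics, the "highest serial per equipment per day" rule can be expressed in a few lines of plain Java (the State stand-in class is my own; the sample data comes from the tables above):

```java
import java.util.*;

public class TopSerial {
    // Minimal stand-in for the State entity: one measurement per row.
    static class State {
        int equipmentId; String date; int serial; int healthPercent;
        State(int e, String d, int s, int h) { equipmentId = e; date = d; serial = s; healthPercent = h; }
    }

    // Keep, for each (equipment, date) pair, the row with the highest serial.
    static Map<String, State> significant(List<State> all) {
        Map<String, State> best = new HashMap<>();
        for (State s : all) {
            String key = s.equipmentId + "|" +;
            State seen = best.get(key);
            if (seen == null || s.serial > seen.serial) best.put(key, s);
        }
        return best;
    }

    public static void main(String[] args) {
        List<State> rows = Arrays.asList(
            new State(101, "2011-07-11", 1, 98),
            new State(101, "2011-07-12", 1, 97),
            new State(101, "2011-07-12", 2, 98),
            new State(351, "2011-07-11", 1, 71),
            new State(351, "2011-07-11", 2, 47),
            new State(351, "2011-07-11", 3, 33),
            new State(351, "2011-07-12", 1, 29));
        Map<String, State> picked = significant(rows);
        System.out.println(picked.get("101|2011-07-12").healthPercent); // 98
        System.out.println(picked.get("351|2011-07-11").healthPercent); // 33
    }
}
```

The output matches the four rows selected in the second table, which is what the correlated subquery achieves inside the database.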

OpenJPA Trace

In some environments OpenJPA trace is controlled by an entry in your persistence.xml; to enable SQL trace you would add the line:

<property name="openjpa.Log" value="SQL=TRACE"/>
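For context, that property sits inside the persistence-unit element's properties section. A minimal persistence.xml would look something like this — the unit name is just an example, though jdbc/myapp matches the data source JNDI name used earlier:

```xml
<persistence xmlns="" version="1.0">
  <persistence-unit name="myapp">
    <jta-data-source>jdbc/myapp</jta-data-source>
    <properties>
      <property name="openjpa.Log" value="SQL=TRACE"/>
    </properties>
  </persistence-unit>
</persistence>
```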

In a WebSphere Application Server environment tracing is controlled through the RAS (Reliability, Availability, Serviceability) logging infrastructure. In my own code I use the java.util.logging APIs, which are also integrated with WebSphere's logging infrastructure.
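By way of illustration, application code using java.util.logging looks like this (the logger name is just an example); under WebSphere the same named loggers appear in the module tree described below, so their levels can be adjusted from the console rather than in code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class AppLogging {
    // A named logger; WebSphere can control its level via the admin console.
    private static final Logger LOG = Logger.getLogger("com.example.myapp");

    public static void main(String[] args) {
        LOG.setLevel(Level.FINE);               // normally set via server config, not code
        LOG.fine("about to run the JPA query"); // trace-level detail
        LOG.info("query returned");             // routine information
        System.out.println(LOG.isLoggable(Level.FINE));
    }
}
```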

Controlling this logging is a two-step process: first you specify a destination for your trace, and second you specify the logging levels for each module. One useful feature of WebSphere is that you can adjust logging levels dynamically at runtime.

I’ll describe doing this via the admin console, but you can also control logging via wsadmin scripts, and this is my preferred approach if I need to do much work with logging and tracing.

Logging Destinations

In the admin console select Troubleshooting, Logs and Trace, select your server and then Diagnostic Trace. This brings up the screen where you can specify the logging destination


In a high-performance situation the use of a memory buffer, which can then be seen in a core dump, is useful, but in normal usage I use a file as shown here.

Changes made to this specification do require a server restart; before doing that you may also want to change the default module logging levels. WebSphere allows you either to modify the logging levels temporarily (on the runtime tab) or to set the levels that take effect each time the server is started. I decided to make the change to those default settings and so selected Change Log Detail Levels.

Module Logging Levels

You can either specify a trace string directly or use a graphical UI.

The trace string can be entered directly


Here I set all modules to info, and then specifically the JPA SQL module to "all", which is the highest-volume setting.

If you don't know the trace string, then it is best to use the UI module tree. I have found that it is best to make sure all modules are initialised before changing the logging levels through the UI module tree. So first I ran my test program, which exercised JPA, then expanded the tree to show the openjpa module


And then clicked the SQL module to bring up the available levels


Note that this UI is also available on the runtime tab.

Having saved the changes and restarted the server, I reran my tests and could see the SQL in my trace file.

SELECT,, t0.serial,,,, t3.description, t3.field,, t2.type,
FROM OUI.State t0 LEFT OUTER JOIN OUI.Equipment t2 ON =
WHERE ( = ?
AND t0.serial = (SELECT MAX(t1.serial)
FROM OUI.State t1 WHERE ( = AND =))

JPA: Small Mysteries

July 13, 2011

The Java Persistence API (JPA) handles the mapping between Java objects and data in a relational database. A few quick annotations on our Java class and the instances can be persisted to the database with a couple of lines of code. A couple of lines of Java Persistence Query Language and we can retrieve some of those instances with not a line of JDBC code in sight. All very good stuff, and there's a great deal of cleverness down in the various available implementation layers to make this perform well. As we might expect, there are a few wrinkles to hinder the unwary. This article lists a few mysterious error messages I encountered when using the OpenJPA implementation that caused much head-scratching when first seen, and the annoyingly simple resolutions of these problems.

My development environment is Rational Application Developer 8.0.1, using a WebSphere 8 runtime and the OpenJPA implementation delivered with these products.

The RAD 8.0.1 tooling allows me to create the annotated Java classes corresponding to an existing database schema with a few mouse clicks. So developing the application took about an hour, and then I hit a couple of problems. The first happened when I tried to run my app: I got a complaint about a Connection Driver.


The error says 

A JDBC Driver or DataSource class name must be specified in the ConnectionDriverName property

The stack trace doesn't give much more of a hint; we can see it's when JPA is trying to get a connection to the database, but why is it failing?

[13/07/11 07:33:03:453 BST] 00000020 BusinessExcep E   CNTR0020E: EJB threw an unexpected (non-declared) exception during invocation of method "findRigByPK" on bean "BeanId(OuiServiceApp#OuiServiceWeb.war#RigEjb, null)". Exception data: <openjpa-2.1.0-SNAPSHOT-r422266:990238 fatal user error> org.apache.openjpa.persistence.ArgumentException: A JDBC Driver or DataSource class name must be specified in the ConnectionDriverName property.
    at org.apache.openjpa.jdbc.schema.DataSourceFactory.newDataSource(
    at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.createConnectionFactory(
    at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(
    at org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(

After some fruitless searching for where I might specify a JDBC Driver I thought to check my persistence.xml file. In there was the line


and I had no corresponding JDBC datasource created in my WebSphere Application Server.

So, one quick trip to the WebSphere console, create the Data Source with the JNDI entry jdbc/myapp and everything works.

Or at least it did for a while; then we began to see a peculiar error concerning Enhancement.

My Entities Lack Enhancement

The symptom was seen when testing in the WebSphere 8 test environment in RAD 8.0.1: I make some changes, my revised application is published to WebSphere, and when I try to run I see an error along the lines of:

The type "class Rig" has not been enhanced at org.apache.openjpa.meta.ClassMetaData.resolveMeta

The meaning of this is reasonably clear: we know that OpenJPA performs some interesting processing, or Enhancement, on the annotated Entity classes. Different JPA implementations do different things, as described in this Enhancement discussion, but OpenJPA does some "byte weaving". And for my classes this hasn't happened.

Now it seems that there are many ways to control Enhancement explicitly; see this article for some explanation. But I'd never needed to do this before, and I really didn't want to introduce needless complexity.

So being a software person (you all know the jokes about physicists, engineers and software chaps in road accidents?) my immediate reaction was "it's broken, let's see if it happens again!". And what do you know, it didn't!

So my recipe for recovering from this problem: in RAD, Server View, expand your server, select the application, and restart it. This seems to provoke enhancement. No compile or server restart needed. This recipe seems to work reliably.

I then proceeded to expand my database, adding a few new simple tables and did some manual mapping of those tables. All seemed pretty easy until I hit another mysterious error message:

Wrong result type column

The error showed up when I was trying to navigate a relationship between my two new tables. The error seems quite clear:

Error 500: <openjpa-2.1.0-SNAPSHOT-r422266:990238 fatal general error> org.apache.openjpa.persistence.PersistenceException: [jcc][t4][1092][11643][3.57.82] Invalid data conversion: Wrong result column type for requested conversion. ERRORCODE=-4461, SQLSTATE=42815 FailedObject: [java.lang.String]

Caused by: [jcc][t4][1092][11643][3.57.82] Invalid data conversion: Wrong result column type for requested conversion. ERRORCODE=-4461, SQLSTATE=42815

And so I spent quite some time comparing my Java class attributes and the columns in the corresponding database. The actual problem transpired to be that I had forgotten to add my new classes to the persistence.xml file.

This is a short post documenting a little procedure I needed to follow to enable WebSphere Integration Developer (WID) v7.0.0.3 to work with a Rational Team Concert v7 repository. This is another "it's obvious in hindsight" story, but maybe it will save someone else some time.

WID is a development environment for WebSphere Process Server (WPS) and WebSphere Enterprise Service Bus (WESB). Using WID you can develop and test BPEL processes and WESB Mediations. Until recently I was using WID v6.x and keeping my source code in CVS. Joining a new project, I upgraded to WID v7.0.0.3 and discovered that the project used Rational Team Concert (RTC). Now RTC has been around since about 2008, but this is the first time I've had the chance to use it. So before getting to my installation gotcha, a brief aside about RTC.

Rational Team Concert

Although my initial interest in RTC is just to store my source code and work with a couple of team members on a small project, a quick survey of the material at the Rational Team Concert site shows the scope is potentially much greater. Chatting to a colleague in the Rational team, the things that caught my attention were:

  • Support for agile development methods, parallel development and continuous integration
  • Highly configurable and extensible stream-based approach – you can write client or server-side plugins, OSGi style
  • Support for distributed development teams

I particularly like the concept of suspending a set of changes to temporarily work on something else. So, note to self “need to read more about this”.

Connecting WID and RTC

My version of WID came with the v1.0 RTC plugin; the repository the team are using needs v2.0. Should be easy: launch Installation Manager, get some updates, install the RTC v2.0 client. Installation complete. Relaunch WID, attempt to connect to the repository … and it fails! Same error: apparently I'm still a v1.0 client. Check in Help->About … and yes, I do indeed still have a v1.0 client.

Very odd. Let's uninstall the old version of the client and then install the new one. Still no joy, I'm still on v1.0! So I get suspicious: this seems like yet another case where launching Eclipse with -clean is needed. Eclipse has a plug-in cache which on occasion needs to be flushed. This article gives rather more detail about -clean and a few other wrinkles.

And still no joy. At this point I got help from my colleague Steve, who has been a Rational chap (in all senses) for many years. He’s got a nice article here about some RTC integration.

The Answer: the right notes in the right order

The answer was indeed to use -clean, but it seems that the order of actions is crucial. The steps we took were:

  1. Launch the IBM Install Manager, select Modify and choose to uninstall the v1 Rational Team Concert Client for Eclipse.
  2. Exit the Install Manager. Launch WID using the -clean option.
  3. Exit WID, launch the Install Manager again, select Modify, and install the v2 Rational Team Concert Client for Eclipse.
  4. Relaunch WID.

The crucial point being to perform the clean immediately after the v1 uninstall.


Previously I described my use of Flazr, an open-source streaming client, to test my media server. And I mentioned that I wanted to test the server’s capabilities to achieve better scalability by distributing requests across satellite servers. When the media server receives a request for content it chooses a satellite and then emits a redirection response in this style:

        <smil>
            <head>
                <meta base="rtmp://hostX/app/" />
            </head>
            <body>
                <video src="djna01/someContent.mp4" />
            </body>
        </smil>

This is SMIL format, albeit a very small subset of what SMIL can be used for – using full SMIL capabilities you can in effect build a complete animated presentation. That’s rather like having a PowerPoint for the Web.

Anyway, my client then needs to understand this response and open up the stream on the resulting URL: the meta base concatenated with the video src.


So in this article I’ll explain how I used JAX/B to parse the SMIL XML.


When faced with something as simple as that SMIL example it’s very tempting to use a few regular expressions (regexps) to extract the data we need. We could probably get something working quite quickly. However, in the general case XML complexity defeats regexp capability (see discussions such as this) and most of the time I need to deal with non-trivial XML. So, as I haven’t previously explored using the JAX/B APIs for parsing XML, now’s the chance!

It transpires that, using the Rational Application Developer tooling, it actually took about 20 minutes to write the JAX/B-based code. I doubt whether I could have got the regexp right as quickly.

Using JAX/B

My starting point was a sample XML file as shown above. I created a simple Java project and then took the following steps:

  1. Generated an XSD
  2. From the XSD generated annotated Java classes
  3. Wrote the few required lines of code to call the JAX/B API.

Generating the XSD

I have the sample XML file in my project, so I selected it and chose

rightClick -> Generate -> XML Schema

and accepted the defaults offered. The result was a schema:

  <xsd:element name="head">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="meta"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="body">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="video"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="meta">
    <xsd:complexType>
      <xsd:attribute name="base" type="xsd:string"/>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="video">
    <xsd:complexType>
      <xsd:attribute name="src" type="xsd:string"/>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="smil">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="head"/>
        <xsd:element ref="body"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

There are various options I could have selected to get finer control over the XSD. Alternatively I could have written the XSD by hand, or in more complex cases the service provider would already have published the XSD.


Generating Java Classes

I then need a set of Java classes corresponding to the XSD, these classes using JAX/B annotations to control the mapping between Java and XML. Again, I could write these by hand, but a simple set of annotated classes can be generated very easily: select the XSD and


This brings up the XSD to Java wizard. On the first page select

JAX/B Schema to Java Bean

and select Next, then on the next page specify a package name and click Finish. The result is a suitable set of classes.


Here’s part of the generated class:

@XmlType(name = "", propOrder = { "meta" })
@XmlRootElement(name = "head")
public class Head {

    @XmlElement(required = true)
    protected Meta meta;
    ...
}

I won’t elaborate here on the meanings of the JAX/B annotations, but it’s pretty clear that we’ve got a class which maps to this portion of the SMIL

        <meta base="rtmp://hostX/app/" />

and the other classes are annotated similarly. So after a few mouse clicks we now have a set of classes which correspond to the SMIL file. All that remains is the code to use those classes.

The JAX/B invocation code

In my case I have the URL of the redirection service, which returns the SMIL document to be parsed. So I can write this code

public Smil exampleGet(String url)
        throws JAXBException, MalformedURLException {
    // the argument is the name of the package containing the generated beans
    JAXBContext jc = JAXBContext.newInstance("");
    Unmarshaller u = jc.createUnmarshaller();

    Smil theSmil = (Smil) u.unmarshal(new URL(url));

    return theSmil;
}

So I have initialised the JAXBContext with the name of the package where my beans were generated, and then used that context to create an Unmarshaller. The unmarshaller accepts a URL parameter and parses the response.

And that’s it; four lines of code and the XML is parsed.
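Once unmarshalled, the client’s remaining job is to resolve the stream URL by joining the meta base to the video src. Here is a minimal, self-contained sketch of that resolution step; it uses the JDK’s DOM parser purely so the snippet runs without the generated beans, and the class and method names are my own, not from the real application:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class SmilRedirect {
    // Resolves the stream URL from a SMIL redirect: meta/@base + video/@src.
    // (With the JAX/B beans the same two values come from the Smil object graph.)
    static String targetUrl(String smil) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(smil.getBytes(StandardCharsets.UTF_8)));
        String base = doc.getElementsByTagName("meta").item(0)
                .getAttributes().getNamedItem("base").getNodeValue();
        String src = doc.getElementsByTagName("video").item(0)
                .getAttributes().getNamedItem("src").getNodeValue();
        return base + src;
    }

    public static void main(String[] args) throws Exception {
        String smil = "<smil><head><meta base=\"rtmp://hostX/app/\"/></head>"
                + "<body><video src=\"djna01/someContent.mp4\"/></body></smil>";
        System.out.println(targetUrl(smil));
        // rtmp://hostX/app/djna01/someContent.mp4
    }
}
```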


I have to admit that when I decided to use JAX/B rather than a simple regexp I thought I might have been making things unduly complex. I was surprised when all the above “just worked”. In fact when my application ran I spent a few minutes trying to find out where it had broken before realising that in fact it had worked seamlessly.

Recently I’ve been looking at setting up a POC environment for a solution involving streaming media. I’ve got some streaming media servers that deliver content over RTMP and some degree of infrastructure cleverness that claims to give improved performance. So how do I test that?

Well, I need the capability of submitting requests for content and evaluating the quality of service as I tweak the infrastructure. Features along these lines:

  • Simulating particular access patterns, for example a large number of users all requesting some popular content.
  • Defining extended test runs, some parts of the infrastructure take a while to “warm up”, and performance measures are best taken over extended periods of time.
  • Some way of determining KPIs such as the time taken to start streaming or the amount of “stutter” experienced.

Also I want efficient use of test client resources. I may be simulating tens or hundreds of users; I just need to retrieve the stream of content, I don’t actually need to have it rendered, so there’s no need for video graphics.

Now there are quite a few clients able to do these kinds of things. I chose Flazr, which is an open-source Java application. In this article I am going to

  1. Describe some simple uses of Flazr.
  2. Explain a problem I hit and give the code for the fix I developed.
  3. Show an extension I developed, which enables Flazr to be aware of some load-balancing capabilities in my infrastructure. This exploits a very small subset of SMIL.

Testing with Flazr

Initially I imported the Flazr 0.7 source into Rational Software Developer, my Eclipse-based development environment.


And added the libraries delivered with Flazr to my classpath.

I can then run the Rtmp client


Stream Content, Get Metrics

The simplest case is just to specify the URL for the stream to be played


I won’t here describe my Streaming Media Server, there are many possible products you can use for that purpose.

This streams the content and displays some useful metrics

first media packet for channel: [0 AUDIO c6 #1 t0 (0) s0], after 219ms


finished in 26 seconds, media duration: 11 seconds

From this I have a measure of the responsiveness of my server, and we also note that although the media duration was only 11s, it took 26s to stream it – lots of stutter there. And in fact if I stream this content through a conventional viewer there is indeed quite a bit of stutter.
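Those two figures give a crude “stutter” KPI of the kind mentioned earlier: wall-clock streaming time divided by media duration. A hypothetical helper (my own, not part of Flazr) makes the arithmetic explicit:

```java
public class StreamKpi {
    // Stutter factor: wall-clock time to stream divided by media duration.
    // 1.0 means perfectly smooth delivery; larger values mean more stalling.
    static double stutterFactor(double streamSeconds, double mediaSeconds) {
        return streamSeconds / mediaSeconds;
    }

    public static void main(String[] args) {
        // The run above: 26 seconds to stream 11 seconds of media.
        System.out.printf("stutter factor: %.2f%n", stutterFactor(26, 11));
        // stutter factor: 2.36
    }
}
```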

More Demanding Workloads

I can ramp up the workload by asking Flazr to spawn a number of simulated clients, each retrieving the stream

-load 5 rtmp://myhost/myapp/mycontent

These 5 streams are executed in parallel using the JSE 1.5 Executor capability.

We can adjust the degree of parallelism by controlling the thread pool size.

          -load 5 -threads 2 rtmp://myhost/myapp/mycontent

We then get 5 downloads completed, but done just two at a time, in the two parallel threads. And in the limiting case we can have just one thread and hence get sequential retrieval.
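The -load/-threads semantics can be sketched with a plain JSE fixed thread pool. This is illustrative only – the names, and the sleep standing in for a stream retrieval, are mine, not Flazr’s code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedLoad {
    // High-water mark of simultaneously active simulated "streams".
    static final AtomicInteger maxConcurrent = new AtomicInteger();

    // Runs 'load' simulated clients through a pool of 'threads' workers,
    // mirroring the -load / -threads options. Returns how many completed.
    static int run(int load, int threads) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        final AtomicInteger active = new AtomicInteger();
        final AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < load; i++) {
            executor.execute(new Runnable() {
                @Override public void run() {
                    maxConcurrent.accumulateAndGet(active.incrementAndGet(), Math::max);
                    try {
                        Thread.sleep(50); // stand-in for retrieving the stream
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    active.decrementAndGet();
                    completed.incrementAndGet();
                }
            });
        }
        executor.shutdown();
        executor.awaitTermination(30, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // The -load 5 -threads 2 case: all 5 complete, never more than 2 at once.
        System.out.println("completed=" + run(5, 2)
                + ", max parallel=" + maxConcurrent.get());
    }
}
```

The pool size, not the number of submitted jobs, is what bounds the parallelism; the remaining jobs simply queue.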

If you try this with Flazr 0.7 you will find that in fact the parallelism is not so controlled, and Flazr itself does not shut down when the last retrieval completes. I’ll explain how I fixed that in a moment, but first I want to mention one other invocation style.

Flazr Scripts

The “-load” option described above allows you to stream several copies of the same content in parallel. If you need to emulate a more mixed workload you can instead put a list of URLs in a file and then use a command such as

     -file myscript

to initiate these streams. You can again control the number of parallel streams by using the “-threads” option

    -threads 3 -file myscript

The Halting Problem

As mentioned earlier, when streaming in parallel, Flazr does not exit when the last stream completes. This is very inconvenient if you want to run Flazr as part of some larger test.

The reason for this behaviour is that Flazr is using an Executor, and this has a worker thread which waits for new work items to appear. It is necessary to issue a shutdown request in order for Flazr to exit.

I modified the client code in package com.flazr.rtmp.client. This is the modified code, which I’ll explain in the next couple of sections.

     if (options.getClientOptionsList() != null) {
            logger.info("file driven load testing mode, lines: {}",
                    options.getClientOptionsList().size());
            int line = 0;
            for (final ClientOptions tempOptions :
                    options.getClientOptionsList()) {
                line++;
                logger.info("running line #{}", line);
                for (int i = 0; i < tempOptions.getLoad(); i++) {
                    final int index = i + 1;
                    final int tempLine = line;
                    executor.execute(new Runnable() {
                        @Override public void run() {
                            logger.info("line #{}, spawned connection #{}",
                                    tempLine, index);
                            // ... connect and stream as before ...
                            logger.info("line #{}, finished connection #{}",
                                    tempLine, index);
                        }
                    });
                }
            }
            // by default the executor hangs around, ask it to go away
            logger.info("queueing shutdown request");
            executor.execute(new Runnable() {
                @Override public void run() {
                    logger.info("Turning out the lights ... ");
                    executor.shutdown();
                }
            });
        }

The most important change is to arrange for a shutdown to be requested.

Queue a Shutdown

The Flazr code creates an Executor request for each line in the script file. These requests are processed by the Executor in the order in which they are created. Hence if I add one last request to the list, a request to shut down, we know that this will be the last request to be actioned.

There is one corner case to consider: what happens if that shutdown request is issued while other threads are still active? Fortunately this is handled by the Executor framework: the executor will not allow any subsequent requests for new work to be started, but will wait for current requests to complete.

So we get the desired behaviour: the Flazr script completes and Flazr then stops.
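The queued-shutdown pattern is easy to demonstrate in isolation. This minimal sketch (my own, not Flazr’s code) uses a single-threaded executor so the FIFO ordering is obvious:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class QueuedShutdown {
    // Queues 'items' work requests and then, last of all, a shutdown request.
    // Returns how many work items had completed when the executor stopped.
    static int runAndStop(int items) throws InterruptedException {
        final ExecutorService executor = Executors.newSingleThreadExecutor();
        final AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < items; i++) {
            executor.execute(new Runnable() {
                @Override public void run() {
                    done.incrementAndGet(); // the "real" work
                }
            });
        }
        // queued last, so it only runs after every work item has run
        executor.execute(new Runnable() {
            @Override public void run() {
                executor.shutdown();
            }
        });
        executor.awaitTermination(30, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("items completed before shutdown: " + runAndStop(5));
        // prints 5: the shutdown was actioned last, after all the work
    }
}
```

Because the shutdown task is queued last, every work item submitted before it still runs; anything submitted afterwards would be rejected.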

Which Executor?

However, there is a further wrinkle. The original code had:

   executor.execute(new Runnable() {
       @Override public void run() {
           final ClientBootstrap bootstrap
                   = getBootstrap(executor, tempOptions);
           bootstrap.connect(new InetSocketAddress(
                   tempOptions.getHost(), tempOptions.getPort()));
       }
   });

Note that the executor is passed down to the ClientBootstrap. Under the covers the IO code will add additional executors, and this happens after the initiation of this job. This introduces a race condition with the shutdown request: we can hit the shutdown before the parallel IO execution is requested.

Hence I changed this code to use the bootstrap-creation method which Flazr uses elsewhere. This creates a dedicated, separate executor.


My infrastructure attempts to optimise performance by using a load distribution capability. The user requests


and receives an XML file, in SMIL format, which contains the URL that this client should use to stream the content. Hence different clients will get the same content from different places.

I added code to interpret these redirection responses, I’ll describe how in my next posting.

IBM Business Process Management products, and increasingly other IBM products, present user interfaces in an extensible Web 2.0 framework known as Business Space. The UI allows users to create their own views from visual components (widgets) supplied for each product. So, for example, in WebSphere Process Server there are widgets for presenting lists of tasks and working with the tasks. The widgets can be “wired” to each other so that actions in one widget can pass events to another.

The widgets are developed in JavaScript using the dojo framework, and conform to the iWidget specification, which predefines certain life-cycle events that the widget must support. You can develop your own widgets to be used in conjunction with the out-of-the-box widgets.

I’ve been working to create a custom widget to be used in conjunction with the ECM Widgets delivered with IBM FileNet 4.5.1. This uses a version of Business Space consistent with that found in WID/WPS 6.2. This article concerns some wrinkles I came across. You should note that creating custom widgets in later versions of Business Space is rather easier than in these versions: in WID v7 there is tooling for creating iWidgets and a much simpler deployment model.

Widgets and Endpoints

In order to make custom widgets available for use you create XML files containing catalogue entries. Placing these XML files in


will cause Business Space to add corresponding entries to the iWidget palette in the UI. It seems that different Business Space environments have subtly different requirements for the contents of this file. In my case I omitted one stanza, and when deploying to the FileNet environment my widgets were not being recognised. It seems that the following file format works across my WPS and FileNet test environments.

Example Registry File

<?xml version="1.0" encoding="UTF-8"?>
<tns:BusinessSpaceRegistry xmlns:tns="" xmlns:xsi="" xsi:schemaLocation=" BusinessSpaceRegistry.xsd ">


    <tns:name>Xyz Custom</tns:name>
    <tns:description>Custom Widgets for Xyz</tns:description>
      <tns:name>Xyz Custom</tns:name>
      <tns:description>Custom Widgets for Xyz</tns:description>

    <tns:name>Role Selection</tns:name>
    <tns:description>Role Selection and Event Emission</tns:description>
    <tns:tooltip>Role Select</tns:tooltip>
    <tns:serviceEndpointRef required="true">
      <tns:name>Role Selection</tns:name>
      <tns:description>Role Selection Widget</tns:description>
      <tns:tooltip>Role Select</tns:tooltip>


The key entry here is the Endpoint entry. It is possible to place this in a separate endpoints file – many examples have xxxWidgets.xml and xxxEndpoints.xml – but it seems also to be possible to combine the entries in a single file. We discovered that in a FileNet environment, if the endpoint entry is missing, the palette entry is not displayed. Curiously, in my WPS environment, the endpoint seems to be optional.

ECM Events

Many online examples of event emission use code such as this:

         var payload = {"name": data};
         this.iContext.iEvents.fireEvent("Receive Role", null, payload);

When firing events across to an ECM Widget we discovered that it was necessary to specify that second parameter, which is the type of the payload.

        this.iContext.iEvents.fireEvent("Receive Role", "JSON", payload);

ECM inBasket

That got the event sent, and we wired the ECM inBasket to receive the event. Our intention was to allow the user to pick a role and have that transmitted to the inBasket, but there was one more piece to having that take effect: you also need to correctly configure the inBasket. In the configuration panel of the inBasket you can select a chosen role; if you do that, then events are ignored. So instead you must select no role (an empty entry at the end of the list) in the inBasket configuration. With that done, the events are delivered to the inBasket and we get the desired effect.

It’s all in the … Timing

Having got the payload nicely transferred there was just one more problem. What happens when the page is first displayed? If the user has previously selected a role we want to make that the default. So I have used a cookie to record the current selection, and in my onLoad method I retrieve it:

     this.currentRole = dojo.cookie("XyzItems.currentRole");

Clearly, we want that current value to be transmitted to the inBasket, so I also explicitly fire an event across:

     var payload = {"name": ""+ role};
     this.iContext.iEvents.fireEvent("Receive Role", "JSON", payload);

And in my test environment this works just fine. To my annoyance, when deployed to a UAT environment the widget does not even load! That leads to two important learning points.

Make sure Exceptions are Handled

After some head-scratching I found that fireEvent() was throwing an exception, and as my onLoad() method had no exception handling the exception was causing onLoad() to fail. Hence my widget didn’t complete its initialisation.

So Lesson Number One (obvious, so why did I forget to do it?): don’t forget to have suitable exception handling.

But that’s not the end: why did we get an exception at all? In the test environment it was fine, why not in UAT?

Don’t do too much in onLoad

The exception was complaining that the receiving widget didn’t have an appropriate event handler. My inBasket doesn’t have an event handler? But there it is in the code! In my test environment it obviously does, it works!

Here we see a classic little race condition. Until the inBasket is properly initialised, the implementation class may not be available. In UAT I clearly had a rather different set of performance characteristics. My code, running in the onLoad() method of my widget, was assuming that all event recipients were ready to receive events. Manifestly, that’s not guaranteed while onLoad() is executing.

So what to do? Well, this problem is nicely solved in the dojo/iWidget environment: it is possible to install a second callback to be executed when the whole page is initialised. You add this code in your onLoad() method:

     dojo.addOnLoad(this, "_onReallyLoaded");

and then fire the event from the _onReallyLoaded() method.