Light Work

February 17, 2012

A brief celebratory posting, something worked seamlessly and I now need to go away and think about the implications.

The context is that IBM recently acquired Worklight, a vendor of software addressing mobile platforms. As my team works in the mobile space we’re all very interested to see the capabilities of the newly acquired portfolio. Worklight addresses the development, infrastructure and management of mobile applications. The capabilities here complement those of IBM’s existing mobile product set. The development features fit with technologies we currently use: PhoneGap for cross-device portability is a key component in the Worklight stack and the dojo JavaScript framework is supported.

Programming model

I started by looking at one aspect of Worklight’s server-side programming model. This allows us to create “Adapters” which are effectively RESTful wrappers of enterprise data, producing JSON data ready for consumption in the client. Out of the box there are adapters for JDBC database access and HTTP services.

The creation of an adapter is remarkably simple. The details are all described in the online tutorials, but I want to give a flavour of the degree of effort, so I’ll summarise here.

First, there is some initial configuration of the server to connect to the data source; in my case that’s a database. So I adjusted the server definition with a few lines of configuration

training-jndi-name=${custom-db.1.jndi-name}
custom-db.1.relative-jndi-name=jdbc/wl_training
custom-db.1.driver=com.mysql.jdbc.Driver
custom-db.1.url=jdbc:mysql://localhost:3306/wl_training
custom-db.1.username=aname
custom-db.1.password=apassword

Then a definition of the adapter in an XML file

<displayName>AccountTransactions</displayName>
    <description>AccountTransactions</description>
    <connectivity>
        <connectionPolicy xsi:type="sql:SQLConnectionPolicy">
            <dataSourceJNDIName>${training-jndi-name}</dataSourceJNDIName>
        </connectionPolicy>
        <loadConstraints maxConcurrentConnectionsPerNode="5" />
    </connectivity> 
<procedure name="getAccountTransactions1"/>

And finally the implementation, which comprises the SQL to be executed and a JavaScript wrapper.

var getAccountsTransactionsStatement = WL.Server.createSQLStatement(
    "SELECT transactionId, fromAccount, toAccount, transactionDate, transactionAmount, transactionType " +
    "FROM accounttransactions " +
    "WHERE accounttransactions.fromAccount = ? OR accounttransactions.toAccount = ? " +
    "ORDER BY transactionDate DESC " +
    "LIMIT 20;"
);

//Invoke prepared SQL query and return invocation result   
function getAccountTransactions1(accountId){
    return WL.Server.invokeSQLStatement({
        preparedStatement : getAccountsTransactionsStatement,
        parameters : [accountId, accountId]
    });
}

Just Work

Then I just select

Run As -> Invoke Worklight Procedure

and my code is deployed to the server and invoked. There’s negligible build or deployment time and I see a JSON string displayed.

{
  "isSuccessful": true,
  "resultSet": [
    {
      "fromAccount": "12345",
      "toAccount": "54321",
      "transactionAmount": 180,
      "transactionDate": "2009-03-11T11:08:39.000Z",
      "transactionId": "W06091500863",
      "transactionType": "Funds Transfer"
    },
    {
      "fromAccount": "12345",
      "toAccount": null,
      "transactionAmount": 130,
      "transactionDate": "2009-03-07T11:09:39.000Z",
      "transactionId": "W214122/5337",
      "transactionType": "ATM Withdrawal"
    }, etc.

Now that whole development process took maybe 30 mins, of which at least half was spent stumbling over Windows 7’s security controls preventing me from updating the server configuration. I reckon the next query will take no more than 10 minutes to implement.

Conclusions and Open Questions

My previous articles have talked about using JAX/RS and JPA to achieve pretty much the same end result: a RESTful service obtaining some data from a database. I was pretty pleased with how easy that was to do, a couple of hours initially and probably 30 mins for each additional query. Clearly the Worklight story offers significant effort savings. I will be using Worklight for rapid prototyping in future.

Two areas I want to investigate further:

  1. How efficient is the programming model? We’re executing JavaScript on the server. Are the overheads significant?
  2. What do we do when we are not just reading? Suppose we need transactional updates to different tables or even databases. For sure we can use stored procedures, but I’m uneasy about pushing business logic down to the database layer. Probably I need to use enterprise quality services perhaps implemented as an EJB, but in which case I can trivially expose those using JAX/RS. Do I need Worklight in those transactional scenarios?

So definitely another tool for the toolbox, I just need to figure out its optimal uses, and what other options there may be. Next, onwards to look at other Worklight features such as security and application management.


This post is a minor celebration: I used some technology and it worked nicely. There’s also a reminder of how to enable OpenJPA tracing in a WebSphere environment. This allowed me to have a look at the SQL generated by JPA.

Setting the Scene

This example is based on some work I was doing in a financial application, but I’ve reworked the problem in terms of health checks of some pieces of electrical equipment. The idea is that our system receives records describing the latest known “Health” of a piece of equipment. The records contain a time-stamp.

Equipment ID    Date             Health %
101             11th July 2011   98
101             12th July 2011   97
101             13th July 2011   98
351             11th July 2011   71
351             12th July 2011   47
351             13th July 2011   33

In the example table we see a piece of equipment, number 101, operating healthily,  whereas equipment number 351 is less healthy and its health is falling over time.

Now we might also have a table with more information about the Equipment, and so our health class might look like

   @Entity public class Health {
     public Date timestamp;
     public int healthPercent;

     @ManyToOne
     @JoinColumn(name="ID")
     public Equipment equipment;
  }

Simple JPA query

One thing we might reasonably do is implement a service to return the recent history of those records for a piece of equipment. Our RESTful service might have a URL like this

http://myhost/equipment/351/history

We would probably have some additional parameters to allow selection by date, but for the sake of simplicity let’s keep it to that.

In previous postings I’ve described how we can use JAX/RS to implement a service of this kind. Our JAX/RS implementation would probably call an EJB and eventually we’d end up invoking a JPA query

      SELECT h FROM Health h
                 WHERE h.equipment.id = :equipmentId
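
The EJB shown in a moment looks the query up by name, so the JPQL above would be registered as a named query on the entity; a minimal sketch of that registration (the name matches the later lookup, everything else is illustrative):

    @Entity
    @NamedQuery(
        name  = "listHistoryForEquipment",
        query = "SELECT h FROM Health h WHERE h.equipment.id = :equipmentId")
    public class Health {
        // fields as shown earlier
    }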

We could then have an EJB with an injected entity manager

    @Stateless
    @LocalBean
     public class RigEjb {

       @PersistenceContext
        private EntityManager m_em;

Then in the EJB a method to invoke the query

    public List<Health> equipmentHistory(int equipmentId) {
         Query query = m_em.createNamedQuery(
                         "listHistoryForEquipment"); 
         query.setParameter("equipmentId", equipmentId);       
         return (List<Health>) query.getResultList();
    }
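
To round the picture out, here is a hedged sketch of the JAX/RS resource that might front this EJB; the class name, paths and the assumption of a configured JSON provider are illustrative, and the standard javax.ws.rs imports are omitted:

    @Path("/equipment")
    public class EquipmentResource {

        @EJB
        private RigEjb rigEjb;

        // GET http://myhost/equipment/351/history
        @GET
        @Path("{id}/history")
        @Produces("application/json")
        public List<Health> history(@PathParam("id") int equipmentId) {
            return rigEjb.equipmentHistory(equipmentId);
        }
    }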

All very simple, written in a few tens of minutes, and we get back a list of the history records for a piece of equipment, suitable for displaying in, for example, a graph. JPQL is doing well so far.

How Healthy Now?

Now historic trends of health are interesting, and indeed I’ve worked on systems where predicting the future health of equipment from such data is of considerable business value. However there’s probably a more important question to ask of this data: what’s the latest view of the health of each piece of equipment?

For that we need to pick just one record for each piece of equipment: the latest one we have. When I first hit this problem I created a naive implementation. I just returned all the records to my Java application and iterated over them, identifying the latest record for each piece of equipment. This is not a scalable solution; with a large number of history records performance would not be good.
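
For illustration, this is roughly what that naive pass looked like, assuming the Equipment entity exposes an integer id field:

    // Fetch everything, then keep only the newest record seen for each equipment.
    public Map<Integer, Health> latestPerEquipment(List<Health> allRecords) {
        Map<Integer, Health> latest = new HashMap<Integer, Health>();
        for (Health h : allRecords) {
            Health current = latest.get(h.equipment.id);
            if (current == null || h.timestamp.after(current.timestamp)) {
                latest.put(h.equipment.id, h);
            }
        }
        return latest;
    }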

However JPQL is actually pretty powerful. And after some thought and a question on StackOverflow I came up with

  SELECT h FROM Health h
  WHERE h.equipment.type like :type
  AND h.date = (
     SELECT MAX(hmax.date)
     FROM Health hmax WHERE
           hmax.equipment.id = h.equipment.id
    )

Here we identify the record whose date matches the maximum date for that piece of equipment. I’m impressed that the OpenJPA JPQL implementation delivered with WebSphere can deal with this and produce the desired answers.
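
Invoking it from the same EJB is much as before; a sketch, with field names following the JPQL above rather than the simplified entity shown earlier:

    public List<Health> latestHealthByType(String type) {
        Query query = m_em.createQuery(
            "SELECT h FROM Health h " +
            "WHERE h.equipment.type LIKE :type " +
            "AND h.date = (SELECT MAX(hmax.date) FROM Health hmax " +
            "              WHERE hmax.equipment.id = h.equipment.id)");
        query.setParameter("type", type);
        return (List<Health>) query.getResultList();
    }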

However there’s even more we can accomplish. Let’s make the data a little more complicated, with multiple measurements on the same day, differentiated by a serial number.


Equipment ID    Date             Serial   Health %
101             11th July 2011   1        98
101             12th July 2011   1        97
101             12th July 2011   2        98
351             11th July 2011   1        71
351             11th July 2011   2        47
351             11th July 2011   3        33
351             12th July 2011   1        29

Now this may seem a little contrived, but in fact the data now matches very closely the financial data I was working with in my real project. In that project the record with the highest serial number each day was deemed to have the most significant “health” value.

So I need to select these records:


Equipment ID    Date             Serial   Health %
101             11th July 2011   1        98
101             12th July 2011   2        98
351             11th July 2011   3        33
351             12th July 2011   1        29

The query to do this is gratifyingly similar to our previous case

  SELECT s FROM State s
    WHERE s.equipment.id = :equipmentId
     AND s.id.serial = (
         SELECT MAX(smax.id.serial)
         FROM State smax WHERE
           smax.equipment.id  = s.equipment.id
                AND smax.id.date = s.id.date
         )

And this works very nicely. Out of curiosity I wanted to see what the actual SQL would be to implement this query, that led me to look at enabling OpenJPA trace in WebSphere.

OpenJPA Trace

In some environments OpenJPA trace is controlled by an entry in your persistence.xml; to enable SQL trace you would add the line:

<property name="openjpa.Log" value="SQL=TRACE"/>

In a WebSphere Application Server environment tracing is controlled through the RAS (Reliability, Availability, Serviceability) logging infrastructure. In my own code I use the java.util.logging APIs, which are also integrated with WebSphere’s logging infrastructure.

Controlling this logging is a two step process. First you specify a destination for your trace and second you specify the logging levels for each module. One useful feature of WebSphere is that you can adjust logging levels dynamically at runtime.

I’ll describe doing this via the admin console, but you can also control logging via wsadmin scripts, and this is my preferred approach if I need to do much work with logging and tracing.

Logging Destinations

In the admin console select Troubleshooting, Logs and Trace, select your server and then Diagnostic Trace. This brings up the screen where you can specify the logging destination

[Screenshot: Diagnostic Trace service settings showing the trace output destination]

In a high performance situation the use of a memory buffer, which can then be seen in a core dump, is useful, but in normal usage I use a file as shown here.

Changes made to this specification do require a server restart; before doing that you may also want to change the default module logging levels. WebSphere allows you either to modify the logging levels temporarily (on the Runtime tab) or to set the levels that take effect each time the server is started. I decided to change those default settings and so selected Change Log Detail Levels.

Module Logging Levels

You can either specify a trace string directly or use a graphical UI.

The trace string can be entered directly

[Screenshot: Change Log Detail Levels panel with the trace string entered directly]

Here I set all modules to info, and specifically the JPA SQL module to “all”, which is the highest volume setting.
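
A trace string of roughly this shape achieves that; the exact OpenJPA component name can vary between releases, so treat it as illustrative rather than a value to copy verbatim:

    *=info: openjpa.jdbc.SQL=all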

If you don’t know the trace string, it is best to use the UI module tree. I have found it wise to make sure all modules are initialised before changing the logging levels through that tree. So first I ran my test program, which exercised JPA, and then expanded the tree to show the openjpa module

[Screenshot: logging module tree expanded to show the openjpa module]

And then clicked the SQL module to bring up the available levels

[Screenshot: available logging levels for the openjpa SQL module]

Note that this UI is also available on the runtime tab.

Having saved the changes and restarted the server I reran my tests and could see the SQL in my trace file.

SELECT t0.date, t0.id, t0.serial, t2.id, t2.name, t3.id, t3.description, t3.field, t3.name, t2.type, t0.health
FROM OUI.State t0 LEFT OUTER JOIN OUI.Equipment t2 ON t0.id = t2.id
LEFT OUTER JOIN OUI.RIGS t3 ON t2.RIG_ID = t3.id
WHERE (t0.id = ?
AND t0.serial = (SELECT MAX(t1.serial)
FROM OUI.State t1 WHERE (t1.id = t0.id AND t1.date = t0.date) )) 

JPA: Small Mysteries

July 13, 2011

The Java Persistence API (JPA) handles the mapping between plain Java objects (POJOs) and data in a relational database. A few quick annotations on our Java class and its instances can be persisted to the database with a couple of lines of code. A couple of lines of Java Persistence Query Language and we can retrieve some of those instances with not a line of JDBC code in sight. All very good stuff, and there’s a great deal of cleverness down in the various available implementation layers to make this perform well. As we might expect there are a few wrinkles to hinder the unwary. This article lists a few mysterious error messages I encountered when using the OpenJPA implementation that caused much head-scratching when first seen, along with the annoyingly simple resolutions of these problems.

My development environment is Rational Application Developer 8.0.1, using a WebSphere 8 runtime and the OpenJPA implementation delivered with these products.

The RAD 8.0.1 tooling allows me to create the annotated Java classes corresponding to an existing database schema with a few mouse clicks. So developing the application took about an hour, and then I hit a couple of problems. The first happened when I tried to run my app: I got a complaint about a Connection Driver.

ConnectionDriverName

The error says 

A JDBC Driver or DataSource class name must be specified in the ConnectionDriverName property

The stack trace doesn’t give much more of a hint, we can see it’s when JPA is trying to get a connection to the database, but why is it failing?

[13/07/11 07:33:03:453 BST] 00000020 BusinessExcep E   CNTR0020E: EJB threw an unexpected (non-declared) exception during invocation of method "findRigByPK" on bean "BeanId(OuiServiceApp#OuiServiceWeb.war#RigEjb, null)". Exception data: <openjpa-2.1.0-SNAPSHOT-r422266:990238 fatal user error> org.apache.openjpa.persistence.ArgumentException: A JDBC Driver or DataSource class name must be specified in the ConnectionDriverName property.
    at org.apache.openjpa.jdbc.schema.DataSourceFactory.newDataSource(DataSourceFactory.java:76)
    at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.createConnectionFactory(JDBCConfigurationImpl.java:840)
    at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:598)
    at org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(MappingRepository.java:1486)

After some fruitless searching for where I might specify a JDBC Driver I thought to check my persistence.xml file. In there was the line

<jta-data-source>jdbc/myapp</jta-data-source>

and I had no corresponding JDBC datasource created in my WebSphere Application Server.

So, one quick trip to the WebSphere console, create the Data Source with the JNDI entry jdbc/myapp and everything works.

Or at least for a while, then we began to see a peculiar error concerning Enhancement.

My Entities Lack Enhancement

The symptom was seen when testing in the WebSphere 8 test environment in RAD 8.0.1: I would make some changes, my revised application would be published to WebSphere, and when I tried to run it I would see an error along the lines of:

The type "class Rig" has not been enhanced at org.apache.openjpa.meta.ClassMetaData.resolveMeta

The meaning of this is reasonably clear: we know that OpenJPA performs some interesting processing, or Enhancement, on the annotated Entity classes. Different JPA implementations do different things as described in this Enhancement discussion but OpenJPA does some “byte weaving”. And for my classes this hasn’t happened.

Now it seems that there are many ways to control Enhancement explicitly; see this article for some explanation. But I’d never needed to do this before, and I really didn’t want to introduce needless complexity.

So being a software person (you all know the jokes about physicists, engineers and software chaps in road accidents?) my immediate reaction was “it’s broken, let’s see if it happens again!”. And what do you know, it didn’t!

So my recipe for recovering from this problem: in RAD, Server View, expand your server, select the application, and restart it. This seems to provoke enhancement. No compile or server restart needed. This recipe seems to work reliably.

I then proceeded to expand my database, adding a few new simple tables and did some manual mapping of those tables. All seemed pretty easy until I hit another mysterious error message:

Wrong result type column

The error showed up when I was trying to navigate a relationship between my two new tables. The error seems quite clear:

Error 500: <openjpa-2.1.0-SNAPSHOT-r422266:990238 fatal general error> org.apache.openjpa.persistence.PersistenceException: [jcc][t4][1092][11643][3.57.82] Invalid data conversion: Wrong result column type for requested conversion. ERRORCODE=-4461, SQLSTATE=42815 FailedObject: com.ibm.helios.jpa.Transaction-21 [java.lang.String]

Caused by: com.ibm.db2.jcc.am.io: [jcc][t4][1092][11643][3.57.82] Invalid data conversion: Wrong result column type for requested conversion. ERRORCODE=-4461, SQLSTATE=42815

    at com.ibm.db2.jcc.am.bd.a(bd.java:676)

    at com.ibm.db2.jcc.am.bd.a(bd.java:60)

    at com.ibm.db2.jcc.am.bd.a(bd.java:120)

    at com.ibm.db2.jcc.am.gc.L(gc.java:1589)

    at com.ibm.db2.jcc.am.zl.getBlob(zl.java:1218)

    at com.ibm.ws.rsadapter.jdbc.WSJdbcResultSet.getBlob(WSJdbcResultSet.java:740)

And so I spent quite some time comparing my Java class attributes and the columns in the corresponding database. The actual problem turned out to be that I had forgotten to add my new classes to the persistence.xml file.
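
In case it helps anyone hitting the same message, the missing piece was simply the entity listing in the persistence unit; a sketch with illustrative names (the unit name is made up, the classes follow those mentioned in the errors above):

    <persistence-unit name="MyAppUnit" transaction-type="JTA">
        <jta-data-source>jdbc/myapp</jta-data-source>
        <!-- every new entity class needs listing here -->
        <class>com.ibm.helios.jpa.Transaction</class>
        <class>com.ibm.helios.jpa.Rig</class>
    </persistence-unit>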

This is a short post documenting a little procedure I needed to follow in enabling WebSphere Integration Developer (WID) v7.0.0.3 to work with a Rational Team Concert v7 repository. This is another “it’s obvious in hindsight” story, but maybe it will save someone else some time.

WID is a development environment for WebSphere Process Server (WPS) and WebSphere Enterprise Service Bus (WESB). Using WID you can develop and test BPEL processes and WESB Mediations. Until recently I was using WID v6.x and keeping my source code in CVS. Joining a new project I upgraded to WID v7.0.0.3 and discovered that the project used Rational Team Concert (RTC). Now RTC has been around since about 2008 but this is the first time I’ve had the chance to use it. So before getting to my installation gotcha, a brief aside about RTC.

Rational Team Concert

Although my initial interest in RTC is just to store my source code and work with a couple of team members on a small project, a quick survey of the material at the Rational Team Concert site shows the scope is potentially much greater. Chatting to a colleague in the Rational team, the things that caught my attention were:

  • Support for agile development methods, parallel development and continuous integration
  • Highly configurable and extensible stream-based approach – you can write client or server-side plugins, OSGi style
  • Support for distributed development teams

I particularly like the concept of suspending a set of changes to temporarily work on something else. So, note to self “need to read more about this”.

Connecting WID and RTC

My version of WID came with the v1.0 RTC plugin; the repository the team are using needs v2.0. Should be easy: run Installation Manager, get some updates, get an RTC v2.0 client. Installation complete. Relaunch WID, attempt to connect to the repository … and it fails! Same error; apparently I’m still a v1.0 client. Check in Help -> About … and yes, I do indeed still have a v1.0 client.

Very odd. Let’s uninstall the old version of the client and then install the new one. Still no joy, I’m still on v1.0! So I get suspicious; this seems like yet another case where launching Eclipse with -clean is needed. Eclipse has a plug-in cache which on occasion needs to be flushed. This article gives rather more detail about -clean and a few other wrinkles.

And still no joy. At this point I got help from my colleague Steve, who has been a Rational chap (in all senses) for many years. He’s got a nice article here about some RTC integration.

The Answer: the right notes in the right order

The answer was indeed to use -clean, but it seems that the order of actions is crucial. The steps we took were:

  1. Launch the IBM Install Manager, select Modify and choose to uninstall the v1 Rational Team Concert Client for Eclipse.
  2. Exit the Install Manager. Launch WID using the -clean option.
  3. Exit WID, launch the Install Manager again, select Modify, and install the v2 Rational Team Concert Client for Eclipse.
  4. Relaunch WID.

The crucial point being to perform the clean immediately after the v1 uninstall.

Introduction

Previously I described my use of Flazr, an open-source streaming client, to test my media server. And I mentioned that I wanted to test the server’s ability to achieve better scalability by distributing requests across satellite servers. When the media server receives a request for content it chooses a satellite and then emits a redirection response in this style:

<smil>
    <head>
        <meta base="rtmp://hostX/app/" />
    </head>
    <body>
        <video src="djna01/someContent.mp4" />
    </body>
</smil>

This is SMIL format, albeit a very small subset of what SMIL can be used for – using full SMIL capabilities you can in effect build a complete animated presentation. That’s rather like having a PowerPoint for the Web.

Anyway my client then needs to understand this response and open up the stream on

rtmp://hostX/app/djna01/someContent.mp4

So in this article I’ll explain how I used JAX/B to parse the SMIL XML.

Why JAX/B

When faced with something as simple as that SMIL example it’s very tempting to use a few regular expressions (regexp) to extract the data we need. We could probably get something working quite quickly. However in the general case XML complexity defeats regexp capability (see discussions such as this) and most of the time I need to deal with non-trivial XML. So as I haven’t previously explored using the JAX/B APIs for parsing XML, now’s the chance!

It transpires that, using the Rational Application Developer tooling,  it actually took about 20 minutes to write the JAX/B-based code. I doubt whether I could have got the regexp right as quickly.

Using JAX/B

My starting point was a sample XML file as shown above. I created a simple Java project and then took the following steps:

  1. Generated an XSD
  2. From the XSD generated annotated Java classes
  3. Wrote the few required lines of code to call the JAX/B API.

Generating the XSD

I had the sample XML file in my project; on it I chose

rightClick->Generate->Xml Schema

and accepted the defaults offered. The result was a schema

  <xsd:element name="head">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="meta"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="body">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="video"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="meta">
    <xsd:complexType>
      <xsd:attribute name="base" type="xsd:string"/>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="video">
    <xsd:complexType>
      <xsd:attribute name="src" type="xsd:string"/>
    </xsd:complexType>
  </xsd:element>
  <xsd:element name="smil">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="head"/>
        <xsd:element ref="body"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>

There are various options I could have selected to get finer control over the XSD. Alternatively I could have written the XSD by hand, or in more complex cases the service provider would already have published the XSD.


Generating Java Classes

I then need a set of Java classes corresponding to the XSD; these classes use JAX/B annotations to control the mapping between Java and XML. Again, I could write these by hand, but a simple set of annotated classes can be generated very easily: select the XSD and

rightClick->Generate->Java

This brings up the XSD to Java wizard. On the first page select

JAX/B Schema to Java Bean

and select Next, then on the next page specify a package name such as  org.djna.smil.data and click Finish. The result is a suitable set of classes

[Screenshot: the generated JAX/B classes in the org.djna.smil.data package]

Here’s part of the generated Head.java class:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = {
    "meta"
})
@XmlRootElement(name = "head")
public class Head {

    @XmlElement(required = true)
    protected Meta meta;

I won’t elaborate here on the meanings of the JAX/B annotations, but it’s pretty clear that we’ve got a class which maps to this portion of the SMIL

<head>
        <meta base="rtmp://hostX/app/" />
</head>

and the other classes are annotated similarly. So after a few mouse clicks we now have a set of classes which correspond to the SMIL file. All that remains is the code to use those classes.

The JAX/B invocation code

In my case I have the URL of the redirection service, which returns the SMIL document to be parsed. So I can write this code

public Smil exampleGet(String url)
        throws JAXBException, MalformedURLException{
        JAXBContext jc
           = JAXBContext.newInstance("org.djna.smil.data");
        Unmarshaller u = jc.createUnmarshaller();

        Smil theSmil = (Smil)u.unmarshal( new URL(url) );

        return theSmil;
    }

So I have initialised the JAXBContext with the name of the package where my Beans were generated.

    JAXBContext.newInstance("org.djna.smil.data");

and then use that context to create an Unmarshaller. The unmarshaller accepts a URL parameter and parses the response.

And that’s it; four lines of code and the XML is parsed.
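
From there, extracting the redirection target is just a couple of getter calls; a sketch, assuming the generated beans expose the usual bean-style getters:

    public String streamUrl(Smil smil) {
        String base = smil.getHead().getMeta().getBase();  // "rtmp://hostX/app/"
        String src  = smil.getBody().getVideo().getSrc();  // "djna01/someContent.mp4"
        return base + src;
    }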

Conclusion

I have to admit that when I decided to use JAX/B rather than a simple regexp I thought I might have been making things unduly complex. I was surprised when all the above “just worked”. In fact when my application ran I spent a few minutes trying to find out where it had broken before realising that in fact it had worked seamlessly.

Recently I’ve been looking at setting up a POC environment for a solution involving streaming media. I’ve got some streaming media servers that deliver content over RTMP and some degree of infrastructure cleverness that claims to give improved performance. So how do I test that?

Well, I need the capability of submitting requests for content and evaluating the quality of service as I tweak the infrastructure. Features along these lines:

  • Simulating particular access patterns, for example a large number of users all requesting some popular content.
  • Defining extended test runs, some parts of the infrastructure take a while to “warm up”, and performance measures are best taken over extended periods of time.
  • Some way of determining KPIs such as the time taken to start streaming or the amount of “stutter” experienced.

Also I want efficient use of test client resources. I may be simulating tens or hundreds of users; I just need to retrieve the stream of content, I don’t actually need to have it rendered, so no need for video graphics.

Now there are quite a few clients able to do these kinds of things. I chose Flazr, which is an open-source Java application. In this article I am going to

  1. Describe some simple uses of Flazr.
  2. Explain a problem I hit and give the code for the fix I developed.
  3. Show an extension I developed, which enables Flazr to be aware of some load-balancing capabilities in my infrastructure. This exploits a very small subset of SMIL.

Testing with Flazr

Initially I imported the Flazr 0.7 source into  my Rational Software Developer, Eclipse-based development environment.

[Screenshot: the Flazr 0.7 source imported into the Eclipse workspace]

And added the libraries delivered with Flazr to my classpath.

I can then run the Rtmp client

[Screenshot: running the RtmpClient class from the IDE]

Stream Content, Get Metrics

The simplest case is just to specify the URL for the stream to be played

       rtmp://myhost/myapp/mycontent

I won’t here describe my Streaming Media Server, there are many possible products you can use for that purpose.

This streams the content and displays some useful metrics

first media packet for channel: [0 AUDIO c6 #1 t0 (0) s0], after 219ms

and

finished in 26 seconds, media duration: 11 seconds

From this I have a measure of the responsiveness of my server, and we also note that although the media duration was only 11s, it took 26s to stream it – lots of stutter there. And in fact if I stream this content through a conventional viewer there is indeed quite a bit of stutter.

More Demanding Workloads

I can ramp up the workload by asking Flazr to spawn a number of simulated clients each retrieving the stream

-load 5 rtmp://myhost/myapp/mycontent

these 5 are executed  in parallel using the JSE 1.5 Executor capability.

We can adjust the degree of parallelism by controlling the thread pool size.

          -load 5 -threads 2 rtmp://myhost/myapp/mycontent

We then get 5 downloads completed, but done just two at a time, in the two parallel threads. And in the limiting case we can have just one thread and hence get sequential retrieval.

If you try this with Flazr 0.7 you will find that in fact the parallelism is not so controlled and Flazr itself does not shutdown when the last retrieval completes. I’ll explain how I fixed that in a moment, but first I want to mention one other invocation style.

Flazr Scripts

The “-load” option described above allows you to stream in parallel several copies of the same content. If instead you need to emulate a more mixed workload you can instead put a list of URLs in a file and then use a command such as

     -file myscript

to initiate these streams. You can again control the number of parallel streams by using the “-threads” option

    -threads 3 -file myscript

The Halting Problem

As mentioned earlier, when streaming in parallel, Flazr does not exit when the last stream completes. This is very inconvenient if you want to run Flazr as part of some larger test.

The reason for this behaviour is that Flazr is using an Executor, and this has a worker thread which waits for new work items to appear. It is necessary to issue a shutdown request in order for Flazr to exit.

I modified RtmpClient.java in package com.flazr.rtmp.client. This is the modified code, which I’ll explain in the next couple of sections.

     if(options.getClientOptionsList() != null) {
            logger.info("file driven load testing mode, lines: {}",
                    options.getClientOptionsList().size());
            int line = 0;
            for(final ClientOptions tempOptions :
                    options.getClientOptionsList()) {
                line++;
                logger.info("running line #{}", line);
                for(int i = 0; i < tempOptions.getLoad(); i++) {
                    final int index = i + 1;
                    final int tempLine = line;
                    executor.execute(new Runnable() {
                        @Override public void run() { 
                            logger.info("line #{}, spawned connection #{}"
                                    , tempLine, index);
                            connect(tempOptions);                          
                            logger.info("line #{}, finished connection #{}"
                                    , tempLine, index);
                        }
                    });
                }                         
            }
            // by default the executor hangs around, ask it to go away
            logger.info("queueing shutdown request");
            executor.execute(new Runnable() {
                    @Override public void run() {
                        logger.info("Turning out the lights … ");
                        executor.shutdown();
                    }
                });
            return;
        }

The most important change is to arrange for a shutdown to be requested.

Queue a Shutdown

The Flazr code creates an Executor request for each line in the script file. These requests are processed by the Executor in the order in which they are created. Hence if I add one last request to the list, a request to shut down, we know that this will be the last request to be actioned.

There is one corner case to consider: what happens if that shutdown request is issued while other threads are still active? Fortunately this is handled by the Executor framework; the executor will not allow any subsequent requests for new work to be started, but will wait for current requests to complete.
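
That contract is standard ExecutorService behaviour rather than anything Flazr-specific; an illustrative fragment, in which existingWork and lateWork are hypothetical Runnables:

    ExecutorService pool = Executors.newFixedThreadPool(2);
    pool.execute(existingWork);  // already-submitted work still runs to completion
    pool.shutdown();             // no new work accepted from this point
    // pool.execute(lateWork);   // would now throw RejectedExecutionException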

So we get the desired behaviour: the Flazr script completes and Flazr then stops.

Which Executor?

However, there is a further wrinkle. The original code had:

   executor.execute(new Runnable() {
      @Override public void run() {                           
         final ClientBootstrap bootstrap
                  = getBootstrap(executor, tempOptions);
         bootstrap.connect(
              new InetSocketAddress(
                  tempOptions.getHost(),
                  tempOptions.getPort()
                   ));
         }
    });

Note that the executor is passed down to the ClientBootstrap. Under the covers the IO code will add additional executors, and this happens after the initiation of this job. This introduces a race condition with the shutdown request: we can hit the shutdown before the parallel IO execution is requested.

Hence I changed this code to use the

      connect(tempOptions);

method, which Flazr uses elsewhere. This creates a dedicated, separate executor.

SMILing

My infrastructure attempts to optimise performance by using a load distribution capability. The user requests

     http://somehost/someapp/somecontent

and receives an XML file, in SMIL format, which contains the URL that this client should use to stream the content. Hence different clients will get the same content from different places.

I added code to interpret these redirection responses, I’ll describe how in my next posting.

IBM Business Process Management products, and increasingly other IBM products, present user interfaces in an extensible Web 2.0 framework known as Business Space. The UI allows users to create their own views from visual components (widgets) supplied for each product. So for example, in WebSphere Process Server, there are widgets for presenting lists of tasks and working with the tasks. The widgets can be “wired” to each other so that actions in one widget can pass events to another.

The widgets are developed in JavaScript using the dojo framework, and conform to the iWidget specification, which predefines certain life-cycle events that the widget must support. You can develop your own widgets to be used in conjunction with the out-of-the-box widgets.

I’ve been working to create a custom widget to be used in conjunction with the ECM Widgets delivered with IBM FileNet 4.5.1. This is using a version of Business Space consistent with that found in WID/WPS 6.2. This article concerns some wrinkles I came across. You should note that creating custom widgets in later versions of Business Space is rather easier than in these versions: in WID v7 there is tooling for creating iWidgets and a much simpler deployment model.

Widgets and Endpoints

In order to make custom widgets available for use you create XML files containing catalogue entries. Placing these XML files in

<profile>/BusinessSpace/registryData

will cause Business Space to add corresponding entries to the iWidget palette in the UI. It seems that different Business Space environments have subtly different requirements for the contents of this file. In my case, I omitted one stanza, and when deploying to the FileNet environment my widgets were not being recognised. It seems that the following file format works across my WPS and FileNet test environments.

Example Registry File

<?xml version="1.0" encoding="UTF-8"?>
<!-- START NON-TRANSLATABLE -->
<tns:BusinessSpaceRegistry xmlns:tns="http://com.ibm.bspace/BusinessSpaceRegistry" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://com.ibm.bspace/BusinessSpaceRegistry BusinessSpaceRegistry.xsd ">
<!-- END NON-TRANSLATABLE -->

  <tns:Endpoint>
       <tns:id>com.xyz.bspace.rootId</tns:id>
       <tns:type>com.xyz.bspace.rootId</tns:type>
       <tns:version>1.0.0.0</tns:version>
       <tns:url>XyzRoleWidget</tns:url>
        <tns:description></tns:description>
  </tns:Endpoint>

  <!-- START NON-TRANSLATABLE -->
  <tns:Category>
    <tns:id>{com.xyz.bspace}Xyz</tns:id>
    <tns:name>Xyz Custom</tns:name>
    <tns:description>Custom Widgets for Xyz</tns:description>
    <tns:tooltip>Xyz</tns:tooltip>
    <tns:localeInfo>
    <!-- END NON-TRANSLATABLE -->
      <tns:locale>en_US</tns:locale>
      <tns:name>Xyz Custom</tns:name>
      <tns:description>Custom Widgets for Xyz</tns:description>
      <tns:tooltip>Xyz</tns:tooltip>
    <!-- START NON-TRANSLATABLE -->
    </tns:localeInfo>
    <tns:order>5</tns:order>
  </tns:Category>
  <!-- END NON-TRANSLATABLE -->

  <!-- START NON-TRANSLATABLE -->
  <tns:Widget>
    <tns:id>{com.xyz.bspace}Role</tns:id>
    <tns:version>1.0.0.0</tns:version>
    <tns:name>Role Selection</tns:name>
    <tns:type>{com.ibm.bspace}iWidget</tns:type>
    <tns:description>Role Selection and Event Emission</tns:description>
    <tns:tooltip>Role Select</tns:tooltip>
    <tns:categoryId>{com.xyz.bspace}Xyz</tns:categoryId>
    <tns:widgetEndpointId>com.xyz.bspace.rootId</tns:widgetEndpointId>
    <tns:url>/iwidget/role.xml</tns:url>
    <tns:helpUrl></tns:helpUrl>
    <tns:iconUrl>images/generic_iWidget.gif</tns:iconUrl>
       <tns:owner>IBM</tns:owner>
    <tns:email>TBD</tns:email>
    <tns:serviceEndpointRef required="true">
      <tns:name>serviceUrlRoot</tns:name>
      <tns:refId>com.xyz.bspace.rootId</tns:refId>
      <tns:refVersion>1.0.0.0</tns:refVersion>
    </tns:serviceEndpointRef>
    <tns:localeInfo>
    <!-- END NON-TRANSLATABLE -->
      <tns:locale>en_US</tns:locale>
      <tns:name>Role Selection</tns:name>
      <tns:description>Role Selection Widget</tns:description>
      <tns:tooltip>Role Select</tns:tooltip>
    <!-- START NON-TRANSLATABLE -->
    </tns:localeInfo>
  </tns:Widget>
  <!-- END NON-TRANSLATABLE -->

<!-- START NON-TRANSLATABLE -->
</tns:BusinessSpaceRegistry>
<!-- END NON-TRANSLATABLE -->

The key entry here is the Endpoint entry. It is possible to place this in a separate endpoints file – many examples have xxxWidgets.xml and xxxEndpoints.xml – but it seems also to be possible to combine the entries in a single file. We discovered that if the endpoint entry is missing, in a FileNet environment the palette entry is not displayed. Curiously, in my WPS environment, the endpoint seems to be optional.

ECM Events

Many online examples of event emission use code such as this:

         var payload = {"name": data};
         this.iContext.iEvents.fireEvent("Receive Role", null, payload);

When firing the event across to an ECM Widget we discovered that it was necessary to specify that second parameter, which is the type of the payload.

        this.iContext.iEvents.fireEvent("Receive Role", "JSON", payload);

ECM inBasket

That got the event sent, and we wired the ECM inBasket to receive the event. Our intention was to allow the user to pick a role and have that transmitted to the inBasket, but there was one more piece to having that take effect: you also need to correctly configure the inBasket. In the configuration panel of the inBasket you can select a chosen role; if you do that then events are ignored. So instead you must select no role (an empty entry at the end of the list) in the inBasket configuration. With that done the events are delivered to the inBasket and we get the desired effect.

It’s all in the … Timing

Having got the payload nicely transferred, there was just one more problem. What happens when the page is first displayed? If the user has previously selected a role we want to make that the default. So I have used a cookie to record the current selection, and in my onLoad method I retrieve it:

     this.currentRole = dojo.cookie("XyzItems.currentRole");

Clearly, we want that current value to be transmitted to the inBasket so I also explicitly fire an event across:

     var payload = {"name": ""+ role};
     this.iContext.iEvents.fireEvent("Receive Role", "JSON", payload);

And in my test environment this works just fine. To my annoyance when deployed to a UAT environment the widget does not even load! That leads to two important learning points.

Make sure Exceptions are Handled

After some head-scratching I found that fireEvent() was throwing an exception and as my onLoad() method had no exception handling the exception was causing onLoad() to fail. Hence my widget didn’t complete its initialisation.

So Lesson Number One (obvious, so why did I forget to do it?) Don’t forget to have suitable Exception Handling.

But that’s not the end: why did we get an exception at all? In the test environment it was fine, why not in UAT?

Don’t do too much in onLoad

The exception was complaining that the receiving widget didn’t have an appropriate event handler. My inBasket doesn’t have an event handler? But there it is in the code! In my test environment it obviously does, it works!

Here we see a classic little race condition. Until the inBasket is properly initialised the implementation class may not be available. In UAT clearly I had a rather different set of performance characteristics. My code, running in the onLoad() method of my widget, was assuming that all event recipients were ready to receive events. Manifestly, that’s not guaranteed while onLoad() is executing.

So what to do? Well this problem is nicely solved in the dojo/iWidget environment: it is possible to install a second callback to be executed when the whole page is initialised. You add this code in your onLoad() method:

     dojo.addOnLoad(this, "_onReallyLoaded");

and then fire the event from the _onReallyLoaded() method.

My previous postings were concerned with developing Restful services delivering JSON payloads. Using a combination of JAX-RS and JAXB I was able to rapidly develop services and clients for my REST services.

In my current project I’m working with a non-restful service that manipulates JSON payloads, so I need to parse and produce JSON strings without the assistance of JAX-RS frameworks. My challenge is to make something like this happen:

    public String doSomeWork(String theJson) {

        InterestingObject payload = parseJson( theJson );

        // now I have a Java object I can work with …

        ResultObject answer =  worker.process(payload);

        return formatJson(answer);  // need to produce JSON format

    }

I’ve discovered that the open source Jackson parser lets me do this, so  in this article I’m going to describe some of the Jackson features I’ve used.

Set Up the Java Project

I’m using Rational Software Architect 7.5.3, and I create a simple Java project using the default Java environment,  JSE 1.6. I would expect any equivalent Eclipse-based environment to be equally suitable.

I download two Jackson components, core and mapper, from the Jackson download site. Jackson supports a number of binding models; I want to use the “Data Binding” approach, which maps Java Beans to and from JSON, and for that I need the mapper JAR in addition to the Jackson core functionality.

[Screenshot: the Jackson download page]

I am using the Apache-licenced versions of these JARs. I drag the downloaded JARs to my Java project and then add them to my classpath.

Select the project, rightClick->Properties, Java Build Path, Add JARs and in the JAR Selection dialogue select both JARs.

[Screenshot: the Java Build Path dialogue with the two Jackson JARs added]

The project’s referenced libraries now show the use of the two JARs. Note that by including the JARs here, the project becomes a self-contained entity that can be shared via a source control system such as CVS. In the long term we should probably avoid duplication and instead put the libraries into their own project.

[Screenshot: the Referenced Libraries node showing the two Jackson JARs]

The Data

The real data I’m dealing with is quite technical, relating to the IIF framework, telemetry on oil rigs and so on. So here’s a small, fictitious example with some similar features. It represents the response to a query of a music catalogue for works by particular artists.

{
      "success": true,
      "artists": [
         {
            "name": "Ashley Hutchings",
            "albumns": [
               {
                  "title" : "Copper, Russet and Gold",               
                  "properties": [
                     { "name": "artist",
                       "value" : "Ashley Hutchings, Ken Nicol"
                     },
                     { "name": "id",
                       "value" : "PRKCD109"
                     }                    
                  ]
               }, 
               {       
                  "title" : "Twanging and a Tradding",

      … etc.

The response comprises a success indicator and then an array of the artists that were found. Each artist contains an array of albums and they in turn have an array of name/value pair properties. That property set will present a little challenge later, but first let’s deal with the easier pieces.

The example JSON string I stash in a file in my Java project. My code will read it from there, though of course in real life the data is being delivered from remote services.

The complete test file:  Albumn List

The Application, untyped parsing

Jackson provides a very simple way to parse this JSON: it will produce a HashMap containing the parsed data with very few lines of code.

I present the whole code here. Towards the end there are a few non-obvious lines, so I’ll explain them in more detail.

package djna.jackson.eg;

import java.io.File;
import java.io.IOException;
import java.util.HashMap;

import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.type.TypeReference;

public class JsonExample {
    public static void main(String argv[]) {
        try {
            JsonExample jsonExample = new JsonExample();
            jsonExample.testJackson();
        } catch (Exception e){
            System.out.println("Exception " + e);
        }       
    }
    public void testJackson() throws IOException {       
        JsonFactory factory = new JsonFactory();
        ObjectMapper mapper = new ObjectMapper(factory);
        File from = new File("albumnList.txt");
        TypeReference<HashMap<String,Object>> typeRef
              = new TypeReference<
                     HashMap<String,Object>
                   >() {};
        HashMap<String,Object> o
             = mapper.readValue(from, typeRef);
        System.out.println("Got " + o);
    }   

}

The package, import and main() code are pretty much standard Java; I want to focus on the testJackson() method. Its purpose is to read the file containing a JSON string into a HashMap representation and print out the result. I’m using a file here but there are similar readValue() variants for reading from other sources such as InputStreams.
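
For example, the stream-based variant looks like this (a fragment, assuming the same mapper and typeRef as in the listing above):

    InputStream in = new FileInputStream("albumnList.txt");
    HashMap<String, Object> parsed = mapper.readValue(in, typeRef);
    in.close();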

The first thing we see is the creation of an instance of a class that is quite widely used in Jackson, the ObjectMapper.

       JsonFactory factory = new JsonFactory();
     ObjectMapper mapper = new ObjectMapper(factory);

A common pattern is to create a single ObjectMapper and reuse it but here I’m creating an instance for use just in this example method. The next line creates a File instance for our source of data. In my eclipse-based environment the default working directory will be my project, so the file is immediately accessible.

    File from = new File("albumnList.txt");

Now for the trickier bits of code. This line is where the parsing happens:

    HashMap<String,Object> o
             = mapper.readValue(from, typeRef);   

The general pattern here is that we ask the mapper to read from the File and produce an object; the result is assigned to a HashMap<String, Object>, so mapper.readValue() must produce an object of that type. How does it know what type? From the second parameter – typeRef. This is where the Java Generics capabilities impose a wrinkle. In concept we pass a class to readValue() to say “make me one of these”. But Java Generics don’t make that easy (read up on Type Erasure if you’re interested), so Jackson provides a TypeReference class to allow us to describe the Generic we want. Hence this line of code:

    TypeReference<HashMap<String,Object>> typeRef
              = new TypeReference<
                     HashMap<String,Object>
                   >() {};

Note that we are creating an instance of an anonymous class, which is sufficient to pass the necessary type information.
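
If you can live without the generic type information there is also the simpler class-based overload, which needs no TypeReference but only gives you a raw Map:

    Map<?, ?> raw = mapper.readValue(from, Map.class);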

Now when we execute the Java Application, the print

System.out.println("Got " + o);

Will print the HashMap and we get output like this:

Got {success=true, artists=[{name=Ashley Hutchings, albumns=[{title=Copper, Russet and Gold

This shows that the JSON string has been parsed, but on seeing that default HashMap.toString() output I wonder how readily I could instead produce a JSON string. Can we serialise the data we read back to JSON? It transpires that this is pretty easy too, and it leads to an understanding of how to configure some Jackson behaviours. So, a quick diversion via Jackson serialisation.

Serialising to JSON

The code to produce a JSON string is pleasingly simple:     

        mapper.writeValue(System.out, o);
        System.out.println("\nComplete.");

and this produces the following output:

    {"success":true,"artists":[{"name":"Ashley Hutchings  … etc

with one small surprise: that “Complete” line is not written. This is because by default the Jackson output streams are closed when serialization is complete, and that’s probably not what we want for System.out. We can alter this behaviour using the mapper’s configure method

     mapper.configure(
                JsonGenerator.Feature.AUTO_CLOSE_TARGET,
                false);   

Similar techniques can be used to configure other Jackson behaviours. As an example, suppose we prefer the JSON string to be formatted:

    mapper.getSerializationConfig().set(
        SerializationConfig.Feature.INDENT_OUTPUT,
        true);

This produces output like this

      (beginning omitted) … "title" : "Original Owners",
      "properties" : [ {
        "name" : "artist",
        "value" : "Michael Chapman, Rick Kemp"
      }, {
        "name" : "id",
        "value" : "RR002"
        } ]
      } ]
    } ]
  }
  Complete.

Jackson provides several configuration enums for configuring different aspects; each enum is called “Feature” and is declared in the appropriate class. To avoid import collisions import the class, not the enum:

   import org.codehaus.jackson.JsonGenerator;
   import org.codehaus.jackson.map.SerializationConfig;

So Far

That gets us started with Jackson, but we haven’t got a nice, typed representation of the data. More on that next time …