Reject dirty data! Don’t let it in, no matter what happens

More of a note to myself. I'm working on the JSF bindings of the soon-to-be-announced openEHR framework, and due to the nature of my persistence model, once bad data finds its way into the db, it messes up the whole form entry. It is possible to modify the persistence mechanism to make it immune to bad data, but is that something I should do? I guess not.

When bad data appears in the system, it should stop the execution. You (I) should find out why that happens, and fix it. I'll intentionally keep the persistence a little bit fragile. Better to discover potential problems now than to try to find them in a production system.
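To make the idea concrete, here is a minimal fail-fast sketch. None of these names come from the actual framework; FailFastPersister and its validation hook are assumptions for illustration only. The point is simply: refuse to store anything that does not validate, instead of teaching the persistence layer to tolerate it.

```java
// A minimal fail-fast sketch; FailFastPersister and its validation hook are made-up
// names, not the actual framework code.
public class FailFastPersister {

    public void persist(Object data) {
        if (!isValid(data)) {
            // Stop execution right here, so the root cause is found during development
            // instead of silently corrupting form entry later.
            throw new IllegalStateException("Refusing to persist invalid data: " + data);
        }
        store(data);
    }

    private boolean isValid(Object data) {
        // placeholder for the real archetype/constraint validation
        return data != null;
    }

    private void store(Object data) {
        // placeholder for the actual persistence call
    }
}
```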

Patch your software, or someone dies!

Sounds like an extreme statement, right? Well, not according to some people in some undisclosed US hospitals. Their machines running Windows OSs were infected with Conficker. I can understand how they felt. Patching systems that are running critical software is always trouble. I've been in this situation so many times. A web server running electronic claims software, another one running inventory management software, or a machine controlling an MRI device... You name it. People in charge of these machines know that if something goes wrong during the upgrade process, they are in big, big trouble. So they say: "don't fix it if it is not broken".

Of course this argument goes out of the window when a worm takes control of the machine. That's hell on earth. I won't go into Windows bashing here, since MS's penetration rate in operating systems makes them such a huge target that it is hard to miss in case you want to throw something at it (don't read this as saying it is not MS's fault).

In the old days, worms were not such an important threat, since an IT department that took its job seriously could simply disconnect the network from the rest of the world. Now all systems are either being built or re-engineered toward connectivity, and we have a problem. Sometimes I feel glad that I'm not running a production system anymore, at least not for the last two years or so.

Extending markup mechanisms in web tier for better archetype bindings

Ok,

This is probably a weird title, but when you face the same situation that I am facing at the moment, it will make sense. That situation is when you are working on a web-based application with a decent web-tier technology, and you are trying to bind the UI layer to a back-end layer. Almost all of the recent web stacks give you a declarative UI layer with some sort of markup language that allows you to refer to backing objects for bindings. In my case these objects are AOM instances (what a big surprise…).

What I am not happy with is the usual focus on back-end functionality: building most of the code in the layer just behind the UI layer, and using the markup in the UI layer to do only simple binding. Of course there is a reason that the markup in the UI layer is quite simple in most recent technologies: if it were not like that, you'd go back to the days of ASP and JSP, where code was very intimate with markup. That's a scary mess, but I still feel that we are not making use of the bindings in the UI layer as much as we could. With some tweaking of these markup features, we may get a better distribution of functionality, with some responsibilities shifted to the markup side of things. With some tooling support (cough: Eclipse) this may even lead to a small domain-specific language in the UI layer for archetype bindings. In the case of JSF, there is the Unified Expression Language extension mechanism, and for WPF, there are markup extensions for XAML. So if you are using one of these technologies and doing something like what I'm doing, you may want to take a look at these.
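To give a rough idea of the JSF side, here is a sketch of the kind of Unified EL extension I have in mind, not the actual framework code. The custom ELResolver below exposes an "archetypes" root variable to page markup, so an expression like #{archetypes['openEHR-EHR-OBSERVATION.blood_pressure.v1']} could resolve to a backing AOM instance. The in-memory map repository and the variable name are assumptions; a real implementation would look instances up from wherever they actually live.

```java
import java.beans.FeatureDescriptor;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import javax.el.ELContext;
import javax.el.ELResolver;

// Sketch of a custom Unified EL resolver that makes AOM instances addressable
// from UI markup. The repository map and "archetypes" root name are illustrative
// assumptions, not part of any existing framework.
public class ArchetypeELResolver extends ELResolver {

    private static final String ROOT_NAME = "archetypes";

    // hypothetical in-memory store of AOM instances, keyed by archetype id
    private final Map<String, Object> repository = new HashMap<String, Object>();

    @Override
    public Object getValue(ELContext context, Object base, Object property) {
        if (base == null && ROOT_NAME.equals(property)) {
            context.setPropertyResolved(true);
            // hand back the map; the standard MapELResolver then resolves the ['...'] part
            return repository;
        }
        return null;
    }

    @Override
    public Class<?> getType(ELContext context, Object base, Object property) {
        if (base == null && ROOT_NAME.equals(property)) {
            context.setPropertyResolved(true);
            return Map.class;
        }
        return null;
    }

    @Override
    public void setValue(ELContext context, Object base, Object property, Object value) {
        // the root variable itself is read-only; nothing to do
    }

    @Override
    public boolean isReadOnly(ELContext context, Object base, Object property) {
        return true;
    }

    @Override
    public Iterator<FeatureDescriptor> getFeatureDescriptors(ELContext context, Object base) {
        return null;
    }

    @Override
    public Class<?> getCommonPropertyType(ELContext context, Object base) {
        return base == null ? String.class : null;
    }
}
```

Such a resolver would be registered through the el-resolver element of faces-config.xml, and from that point the markup, rather than a backing bean, decides which archetype a component binds to, which is exactly the kind of responsibility shift I'm talking about.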

Notes from Healthcare Interoperability 2008

I was in Birmingham yesterday to attend Healthcare Interoperability 2008. I cannot say that it was as rich as I had expected, but it still hosted some interesting stands.

EMIS and their partners were the stars of the event, and I guess from a business perspective the message there was obvious: primary care is a very good market for healthcare informatics.

I guess it allows you to build controllable products and services. Controllable in the sense that the medical domain requirements you have to handle are much smaller compared to those in a secondary care institution, and the administrative and financial side of things is also tiny compared to hospital information systems. EMIS certainly knows how to tap into this domain, and their approach to business partners is encouraging. PAERS and EMIS together produce solutions that can take some burden off the shoulders of GP offices. I have to say the EMIS people were very helpful, and they made sure that any questions were answered.

There were some other interesting stands, with some quite specific solutions. I think in healthcare IT, targeting a well-differentiated market has huge benefits to offer. You can find a layer that fits your resources, and you can move between different scales as a vendor. That is what is missing in some other markets, like Turkey. Being able to produce "to the point" products is a very good way to get into the market, and this model needs to be encouraged. Needless to say, without an established system in primary care, you cannot do much.

After the HL7 UK conference

I spent two days of last week at the HL7 UK conference, and it was worth it. It was nice to see what the hot topics in the HL7 domain are, and some topics seemed to be a real focus of interest for the community.

A couple of thoughts about the feel of things: first of all, CDA is getting stronger. People are more focused on CDA than on any other method for messaging, and this is supported by the NHS's choice to use CDA for messaging. However, the HL7 community may want to consider why people are so focused on CDA when they have a huge amount of work channeled into building models based on the RIM. I mean, CDA has a more controlled environment of (kinda) its own, and that seems to take a lot of attention.

This was the conclusion I reached a couple of years ago, when I first started thinking about how to implement the national project in Turkey, and I can see that more or less everyone is moving in the same direction now. I consider this a signal for HL7 people all around the world to take a good look at the modeling practices and approaches people are taking. If people prefer CDA due to its simplicity, then there is something wrong with the other parts of the work. In general, I have doubts about the modeling approach of HL7 V3, especially about the specialization practices in RIM-based modeling. Throwing away attributes from the parent as you descend from it is not a good fit with object-oriented technologies, which will be a problem if you want to reflect the RIM in a Java ITS.
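To illustrate what I mean with made-up class names (these are not the actual RIM ITS types): a Java subclass cannot remove a field it inherits, so a specialization that "prohibits" a parent attribute can only reject it at runtime, and the constraint is invisible to the compiler and to client code.

```java
// Made-up classes purely for illustration; not the actual RIM ITS classes.
class Act {
    private String moodCode;
    private Boolean negationInd;

    public String getMoodCode() { return moodCode; }
    public void setMoodCode(String moodCode) { this.moodCode = moodCode; }

    public Boolean getNegationInd() { return negationInd; }
    public void setNegationInd(Boolean negationInd) { this.negationInd = negationInd; }
}

// A RIM-style specialization that "prohibits" negationInd cannot actually drop the
// inherited attribute; the best a subclass can do is throw at runtime, which means
// the model constraint never shows up in the static type system.
class ConstrainedObservation extends Act {
    @Override
    public void setNegationInd(Boolean negationInd) {
        throw new UnsupportedOperationException(
                "negationInd is prohibited in this specialization");
    }
}
```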

Work around the tooling efforts seems to be going in a better direction, and the demo of the Eclipse-based modeling solution was nice. I'd like to know how that will end up. I guess RIMBAA is another important initiative to follow, but in general I do not think I am happy with the offered solution. I somehow feel that it will bring more trouble than it eliminates, especially due to the work that will be necessary for integration with the local system.

Seeing openEHR discussed at the event was nice, and listening to Dipak Kalra was even nicer. HL7 is a messaging standard, that's how it is defined, but what happens after the message arrives is another huge question. openEHR has answers to this question, and I guess it would be nice if the HL7 domain would also consider the phases after successful messaging. There is more to write about the whole thing, especially about the relationship between terminology and information models, but that deserves its own post.