openEHR for policymakers

As of the end of 2018, I have been working on openEHR for almost fifteen years, beginning with my exposure to openEHR archetypes during a European Union research project around 2003.

During these fifteen years, I have tried to explain my (sometimes incorrect at the time) understanding of openEHR to many people in various positions: junior software developers, product managers, general managers, investors, academics, ministers of health, marketing professionals. It would be a long list.

Looking back, I can see that I have not been able to articulate some key points clearly when talking to policymakers: that is, the people who get to cast a vote or make a decision when it comes to choosing how technology is used in healthcare.

This post is an attempt to focus on the aspects of openEHR that are relevant to policymakers, but it should be of interest to people in many other roles, since we are all affected by the decisions of policymakers as patients, if not in our other roles in healthcare (IT).

Why the US will have better clinical decision support than the EU

For many, smarter healthcare through the use of computers is an exciting idea. It has been since the sixties, and I belong to the current generation of people trying to make it happen.

There are many misunderstood things about making computers help clinicians. Discussing them all would take a lot of space, a luxury neither the blogger nor the reader can afford, so let's stick to a key component of making clinical decision support (CDS) happen: data.

One form of CDS that benefits from data is based on building mathematical models from the data, then using those models to assess clinical scenarios. The more data you have, the more robust and accurate your decision-making models become. This is why the big data hype behaves like rents in London: there appears to be no limit to its rise.

Big data is basically breaking very large amounts of data that could not possibly fit into a single computer into smaller chunks and processing them across hundreds, and if necessary thousands, of computers. It is not conceptually new, but it became easier and cheaper in the last decade. With this improved capacity for processing larger data sets comes the possibility of better decision-making models, and (some) clinicians, vendors and investors begin to think: "This is it! We'll now be able to have sci-fi smart computers." Not so easy.
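For the technically curious, here is a toy Python sketch of that chunk-and-process idea. It is my own illustration, not any particular framework: real systems such as Hadoop or Spark do the same thing across thousands of machines, adding distributed storage and fault tolerance on top.

    # Toy sketch of "split into chunks, process in parallel", on one machine's
    # process pool instead of a cluster. The "readings" list is a stand-in for
    # data too large to process in one go.
    from multiprocessing import Pool

    def partial_stats(chunk):
        # Each chunk is processed independently: count the readings and sum
        # them, so the partial results can be combined into an overall mean.
        return len(chunk), sum(chunk)

    if __name__ == "__main__":
        readings = list(range(1_000_000))
        chunk_size = 100_000
        chunks = [readings[i:i + chunk_size]
                  for i in range(0, len(readings), chunk_size)]

        with Pool() as pool:
            results = pool.map(partial_stats, chunks)   # the "map" step

        total_count = sum(n for n, _ in results)        # the "reduce" step
        total_sum = sum(s for _, s in results)
        print("mean:", total_sum / total_count)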

The semantic web that never was. Will it be the same for smart healthcare IT?

I attended another Data Science London meeting last week. As usual, it was a good one. Speakers talked about their experiences with Twitter feeds that include Foursquare check-ins, and with scraping data from web sites. Scraping is basically extracting information from web pages, in a way simulating a human's use of a web site to get at the information it provides.
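In case scraping sounds abstract, here is a minimal Python sketch of the idea. The URL and the CSS selectors are hypothetical; a real scraper has to be adapted to each site's markup (and should respect its terms of use and robots.txt).

    # Fetch a page and pull structured data out of its HTML, the way a human
    # would read it off the screen. Everything site-specific below is made up.
    import requests
    from bs4 import BeautifulSoup

    response = requests.get("https://example.com/products")   # hypothetical page
    soup = BeautifulSoup(response.text, "html.parser")

    for item in soup.select("div.product"):                    # hypothetical markup
        name = item.select_one("h2.name").get_text(strip=True)
        price = item.select_one("span.price").get_text(strip=True)
        print(name, price)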

Both talks were interesting, and both had something in common: the people trying to access the data had no programmatic, well-defined method of doing so, so they resorted to other methods. The case of Lyst was especially interesting. They have gone through a lot of trouble to set up a system that collects data from lots and lots of online fashion retailers. Their infrastructure extracts information about tens (hundreds?) of thousands of products from many web sites and, surprising as it may be, they are actually keeping things under control, presenting a single site that lets people access the data as if it came from a single source.

A question someone in the audience asked was: "do you have any programmatic access to these sites?" As in, do they give you web services? The answer was something along the lines of "very few". It is usually a crawler extracting information from the web site that does the job (though they work with the consent of the web sites they are parsing). I think it was also someone from Lyst (or maybe the audience, I'm not sure) who said that this is pretty much the reality of the web we have today, despite all that hype about the semantic web.

Pulse evolves into SDC Cloud Connect, and becomes even cooler

I am a big admirer of Eclipse. It is an incredibly ambitious piece of work. It tackles the problem of creating a platform for software tooling: a platform that can generalize the features of most IDEs, reporting tools, scientific software and even regular desktop applications.

Not everybody agrees with me, of course, when it comes to calling Eclipse an impressive piece of work. I won't waste pages trying to convince those who disagree. Due to its generic infrastructure, Eclipse may not feel like a tool specific to Java development, Python development and so on. Even if you get over the slightly unintuitive feeling it gives you, it is hard to ignore the effort required to make Eclipse your home.

Home in the sense that your particular Eclipse installation supports Java, XML, Python, R, EMF or whatever you're interested in using (Haskell? Sure, why not?). You configure it, you find the links to update sites and add them to your Eclipse config, you change workbench settings based on your preferences. Then someone else wants to work on your code, or you jump to another computer. Or you find yourself in front of a computer that is not yours, but that you need to use for an hour or so to demonstrate something to someone.

Being able to manage your Eclipse installation through the cloud helps you in these cases. Imagine being able to share your IDE with your friends, colleagues, or simply with people who want to use your code through the exact same Eclipse setup you have, one that is known to work. Pulse was a product that enabled this for free. Was, because even though it is still available for a few more weeks, it is now being replaced by SDC Cloud Connect from Genuitec.

Genuitec is a company that understands how people use Eclipse, what kinds of problems they have and, more importantly, how those problems can be solved. Pulse was my favourite tool because of this. I have a new computer at UCL? No problem: I install Pulse, pull the installations I want from my profile, and get to work. SDC Cloud Connect replaces Pulse with an Eclipse plugin coupled with a clever web-based interface that does the same. It is still free until you hit a certain limit on the number of Eclipse instances you're hosting in the cloud. If you pay and go private, you get a lot more: a custom server behind your firewall that lets you deploy your company's version of Eclipse, and other nice things that people who pay for software get.

For me, Cloud Connect is a way of pushing my well-polished configurations to colleagues and friends who keep saying "I can't spend ages configuring Eclipse". Well, I've spent all the time required, and here is a link for you: go get my statistics benchmark installation. In the future, we may seriously consider this mechanism for distributing openEHR-based tools; it certainly beats explaining the plugin mechanism and so on to first-time Eclipse users.

So if you’re curious about the experience, go visit http://www.genuitec.com/sdc/cloud/ and play with the technology.

Open source in healthcare IT: being realistic about it

Dear reader, as you can see, the title raises the question: "aren't we realistic about it?". The answer, in my humble opinion, is no, and this is a major issue.

I just want to put down in writing a few things about open source software that have been on my mind for some time, so that I can give the URL of this post to people the next time I encounter the same situation.

The perception of open source among people in different positions in healthcare seems to swing from one extreme to the other. Unless we can build a healthy, realistic view of open source software, we probably won't be able to bring its benefits into healthcare IT, at least not at the level we would like.

There is a large group of people who are proponents of using open source software. In many contexts, I am a member of that group. However, this group is often branded as idealistic hackers and activists defending open source for the sake of principles. Even worse, its members are often branded as the stereotypical, paranoid enemies of the big software vendor, who hate corporations and believe that open source software is always better. This is wrong, and it is killing a lot of the good that could come from open source.

First of all, I do not think that open source can exist in a free market economy without clearly expressing its commercial aspects. Neglecting to assess and communicate the cost of open source hurts its credibility (a lot). The critical point in any discussion about open source is when you hear something along the lines of "and it has zero licence cost!". Whatever the context, at that point someone should underline the fact that having no licence cost does not mean having no cost at all. Yes, you would not pay licence fees, but it is quite likely that you would need to pay support fees or buy consultancy; from a cost perspective, if you manage to keep those expenses lower than in the proprietary software scenario, then you have a better deal with open source.

When offering an open source solution, one should always draw the whole picture with perfect clarity: this is how much you save in licence fees, and this is how much you will have to pay for support, training and consultancy. Once you do that, you have a proper business proposition with pros and cons, one that can be measured with the same metric as the proprietary option: money.
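To make that concrete, here is a toy comparison with entirely made-up numbers; it is a sketch of the reasoning, not real pricing for any product.

    # "Zero licence cost" is not "zero cost": compare the options on total
    # cost over the same period. All figures below are invented.
    years = 5

    proprietary = {"licences": 100_000, "support": 20_000 * years}
    open_source = {"licences": 0, "support": 30_000 * years,
                   "consultancy": 40_000, "training": 15_000}

    for name, costs in (("proprietary", proprietary),
                        ("open source", open_source)):
        print(name, "total over", years, "years:", sum(costs.values()))

Depending on the numbers you plug in, either option can come out ahead, which is exactly the point: the comparison has to be made, not assumed.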

Putting open source into this perspective actually saves everyone from the risk of bitter disappointment. Otherwise we end up with users, managers and policymakers, all misinformed, waiting for something wonderful to be delivered by the community effort of hard-working developers. That is not a realistic expectation, not at all. Here is a list of facts that everybody who is hearing about open source should know (the sooner, the better):

  • Software has to be developed by developers. Good software takes good developers, and good developers are rare and usually expensive.
  • Unless they have other means of support or no financial concerns at all, developers cannot escape the realities of making a living. Some people will put in more effort for less, but at the end of the day, successful projects take long-term commitments, and people can afford to commit only up to a certain limit.
  • One hour of work from 24 developers every day does not add up to one day of work. Good software always has leaders who dedicate focus and time continuously. Contributions do help, but most, if not all, successful projects have a core team, and many key open source projects have key people funded by governments, vendors, academia and so on.
  • Not all open source software is in the form of turnkey solutions. Sometimes you get a good starting point, but it will take effort and investment to create value out of it.
  • Not all open source plays nicely with established business models. Some open source licences (copyleft licences such as the GPL) require the party using the software to make their own software open source as well. A software vendor may not be able to use open source software even if they want to.
  • Having the source code is not an absolute guarantee of survival. Software gets complicated very quickly. Just because you can access the source code does not mean you have the capacity to change, improve or fix it. In fact, developers have to go through serious pain and apply a lot of discipline just to make sure they can read and fix their own code.
  • Maintenance is the single biggest cost item in the software life cycle. A group of five developers can develop a very impressive piece of software, but that does not mean they can support two thousand users in a hospital. Even if the savings from not paying licence fees are great, and even if you shift all of those savings to the team, they may simply not have the number of people necessary to run an operation beyond a certain scale.
  • The consumer-to-contributor ratio is incredibly high in open source: far more people use any given project than ever contribute to it.

Does this look negative? It should not, because it is only realistic. It probably does not fit the image you have in mind for open source, and that is exactly what we need to change. My own experience has taught me that it is absolutely vital to be objective about what you are offering as open source, and to explain and offer it based on the same principles that are applied to closed source alternatives. Everything else leads to disappointment. Having spent fifteen years in this domain, I am sure that there is a lot that can be done, but we have to be very realistic about how to do it.

I am going to release a key part of my PhD work as open source software pretty soon, and this time I am determined to be very clear about what I am releasing, and when and how it can be useful. Once the open source argument adopts this practice of presenting pros and cons in equal balance, it will certainly make waves. Facebook, Google, Skype, Microsoft, Oracle, IBM, Red Hat and many others are already creating huge value from an approach that was once considered their demise. There is much more to be said about the business models of open source, especially in a unique setting such as healthcare, but that deserves another post.