As of the end of 2018, I have been working on openEHR for almost 15 years, beginning with my exposure to openEHR archetypes during a European Union research project around 2003.
During these fifteen years, I tried to explain my (sometimes incorrect at the time) understanding of openEHR to many people who occupied various positions: junior software developers, product managers, general managers, investors, academics, ministers of health, marketing professionals. It would be a long list.
Looking back, I can see that I have not been able to articulate some key points clearly when I was talking to policy makers. That is, people who get to cast a vote or make a decision when it comes to choosing how to use technology in healthcare.
This post is an attempt to focus on the aspects of openEHR that are relevant to policy makers, but it should be of interest to people in many other roles as well, since we are all affected by the decisions of policy makers as patients, if not in our other roles in healthcare (IT).
This is a copy/paste of a few responses I sent to a discussion on the openEHR mailing lists. I am copying them here because the images in my responses, and the responses themselves, are not yet properly archived anywhere.
If you want more: I wrote a PhD thesis on this stuff, so if you want a deeper discussion of the topic, here it is, but I suggest you read the following first.
Here is the whole exchange from the openEHR mailing lists, with all responses, including mine:
For many, smarter healthcare through the use of computers is an exciting idea. This has been the case since the sixties, and I belong to the current generation of people who try to make this happen.
There are so many misunderstood things about making computers help clinicians. It would take a lot of space to discuss them all, a luxury neither the blogger nor the reader can afford, so let's stick to a key component of making clinical decision support (CDS) happen: data.
One form of CDS that benefits from data is based on building mathematical models from the data, then using those models to assess clinical scenarios. The more data you have, the more robust and accurate your decision-making models are. This is why the big data hype behaves like rents in London: there appears to be no limit to its rise. Big data is, at its core, breaking amounts of data that could not possibly fit into a single computer into smaller chunks and processing them across hundreds, and if necessary thousands, of computers. It is not conceptually new, but it became easier and cheaper in the last decade. With this improved approach to processing larger data sets, the possibility of better decision-making models arises, and (some) clinicians, vendors and investors begin to think: "This is it! We'll now be able to have sci-fi smart computers." Not so easy.
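The chunk-and-distribute idea behind big data can be sketched in a few lines. This is an illustrative toy only: it counts word frequencies in a made-up "dataset", using local worker processes as a stand-in for a cluster of machines; all names and the data itself are hypothetical.

```python
# Illustrative sketch: split data into chunks, process them in parallel
# (the "map" step), then combine the partial results (the "reduce" step).
from multiprocessing import Pool
from collections import Counter

def process_chunk(chunk):
    """Map step: each worker summarises its own slice of the data."""
    return Counter(word for line in chunk for word in line.split())

def chunked(lines, size):
    """Break the dataset into fixed-size chunks."""
    for i in range(0, len(lines), size):
        yield lines[i:i + size]

if __name__ == "__main__":
    # A toy dataset; in a real big data setting this would never fit
    # on a single machine, which is the whole point of chunking it.
    lines = ["fever cough", "cough headache", "fever fever"] * 1000
    with Pool(4) as pool:
        partials = pool.map(process_chunk, list(chunked(lines, 500)))
    # Reduce step: merge the per-chunk summaries into one overall model.
    totals = sum(partials, Counter())
    print(totals["fever"])  # 3000
```

The same map/reduce shape is what frameworks like Hadoop or Spark industrialise across real clusters; the conceptual step from this sketch to those systems is mostly about fault tolerance and data locality, not the algorithm.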
Necessary clarification: please note that the term 'implementation' in the text below refers to the development of a software platform based on openEHR. I realised that the term is overloaded in the health IT space, sometimes implying adoption of a standard. That is not what I mean by 'implementation'.
Recently, I found myself in more than one discussion during which I was trying to explain what openEHR is to someone. It is common to adopt a different explanation of key concepts based on the occupation of the audience. The modelling side of things matters most to clinicians and policy makers, so we talk about it in different terms than we would in a conversation between software developers, architects and so on.
The openEHR mailing lists also reflect this natural distinction: there are technical, modelling and implementers lists, among others. I think I've realised something, though: we end up having technical conversations with clinicians and implementation discussions with software developers. There is nothing wrong with that, of course, but I think the openness of the standard (it is in the name, after all) is causing some problems in its adoption. This post is meant to express my thoughts about this pattern, and it may or may not help you when you're trying to understand some aspects of openEHR.
I attended another Data Science London meeting last week. As usual, it was a good one. The speakers talked about their experience with Twitter feeds that include Foursquare check-ins, and with scraping data from web sites. Scraping is, essentially, extracting information from web pages by simulating a human's use of the web site.
Both talks were interesting, and both had something in common: the people trying to access data had no programmatic, well-defined method of doing so, so they resorted to other methods. The case of Lyst was especially interesting. They've gone to a lot of trouble to set up a system that can collect data from a large number of online fashion retailers. They have an infrastructure that extracts information about tens (hundreds?) of thousands of products from many web sites, and, as surprising as it may be, they are actually keeping things under control and presenting a single site that lets people access the data as if it came from a single source. A question from the audience was: "Do you have any programmatic access to these sites?" As in, do they give you web services? The answer was something along the lines of "very few". It is usually a crawler that extracts information from the web sites that does the job (though they work with the consent of the web sites they're parsing). I think it was also someone from Lyst (or maybe someone in the audience, I'm not sure) who said this is pretty much the reality of the web we have today, despite all the hype about the semantic web.
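To make the scraping idea concrete, here is a minimal sketch using only Python's standard library. The HTML snippet, the `product-name` class and the product names are all hypothetical; this is not how Lyst works, just an illustration of extracting structured data from markup meant for humans rather than from an API.

```python
# Illustrative sketch of scraping: pulling structured data out of HTML.
# The markup and CSS class below are made up for the example.
from html.parser import HTMLParser

class ProductParser(HTMLParser):
    """Collects the text of elements marked with class="product-name"."""
    def __init__(self):
        super().__init__()
        self._capture = False
        self.products = []

    def handle_starttag(self, tag, attrs):
        # Start capturing when we see a hypothetical product-name element.
        if ("class", "product-name") in attrs:
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.products.append(data.strip())
            self._capture = False

# A stand-in for a fetched retailer page; a real scraper would download it.
page = """
<html><body>
  <div class="product-name">Blue Wool Scarf</div>
  <div class="price">25</div>
  <div class="product-name">Leather Boots</div>
</body></html>
"""
parser = ProductParser()
parser.feed(page)
print(parser.products)  # ['Blue Wool Scarf', 'Leather Boots']
```

The fragility is obvious: the parser depends entirely on the page's markup, so any redesign of the site breaks it. That is exactly the maintenance burden a proper web service (programmatic access) would remove.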