For many, smarter healthcare through the use of computers is an exciting idea. It has been since the sixties, and I belong to the current generation of people trying to make it happen.
There is a lot that is misunderstood about making computers help clinicians. Discussing it all would take more space than either the blogger or the reader can afford, so let’s stick to a key ingredient of making clinical decision support (CDS) happen: data.
One form of CDS is based on building mathematical models from data, then using those models to assess clinical scenarios. The more data you have, the more robust and accurate your decision-making models become. This is why the big data hype is behaving like rents in London: there appears to be no limit to its rise. Big data basically means breaking amounts of data that could not possibly fit into a single computer into smaller chunks and processing them across hundreds, and if necessary thousands, of computers. It is not conceptually new, but it became easier and cheaper in the last decade. With this improved capacity for processing larger data, the possibility of better decision-making models arises, and (some) clinicians, vendors and investors begin to think: “This is it! We’ll now be able to have sci-fi smart computers.” Not so easy.
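To make the chunking idea concrete, here is a minimal sketch in Python (entirely my own illustration: the records, the glucose field and the chunk sizes are made up, and local worker processes stand in for a cluster). The pattern is the one frameworks like Hadoop or Spark implement at scale: split the data, process each chunk independently, then merge the partial results.

```python
# Minimal "split, process in parallel, merge" sketch. Real big data
# frameworks distribute chunks across many machines; local worker
# processes stand in for them here. All data is made up.
from multiprocessing import Pool

def summarise_chunk(chunk):
    # Process one chunk independently; return a mergeable partial result.
    readings = [r["glucose"] for r in chunk]
    return (sum(readings), len(readings))

if __name__ == "__main__":
    # Pretend this dataset is far too large for a single machine.
    records = [{"glucose": 80 + (i % 60)} for i in range(100_000)]
    chunks = [records[i:i + 10_000] for i in range(0, len(records), 10_000)]

    with Pool() as pool:  # conceptually: hundreds or thousands of computers
        partials = pool.map(summarise_chunk, chunks)

    # Merge step: combine the per-chunk results into one answer.
    total, count = (sum(x) for x in zip(*partials))
    print(f"mean glucose over {count} records: {total / count:.1f}")
```

The merge step is the important design point: each chunk’s result has to be combinable with the others, and that constraint is what shapes models that work well on big data.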
In order to realise the potential of big data, you need big data. Anybody who spends more than a few years in healthcare IT knows that healthcare indeed produces a lot of data, but it is all over the place. So it is incredibly important that you have a way of getting the pieces of data together. This is why fracking was such a game changer for the energy industry: you cannot set up a conventional oil rig on top of scattered little pockets of gas; you need a technique that turns all those small pockets into one large supply.
There is more than one way to join those bits of clinical data, and using healthcare IT standards is one of them. The idea is that software vendors create and exchange data in standards-compliant forms, so that when it is needed, data can be pooled with relatively little effort. In Europe, and especially in the UK, this is the most promising and pretty much the only way of making big data in healthcare happen (much more is needed, but this is a real precondition). Why? Because the second tier of health services, that is, the hospitals, run hundreds of systems from different vendors. There is no way other than standards adoption for these vendors to plug into big data.
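A toy sketch of why this matters (again my own, with an imaginary record shape; real standards such as HL7 FHIR are far richer): if every vendor exports the same agreed-upon structure, pooling is a trivial merge, with no per-vendor adapter code.

```python
# Toy illustration: pooling standards-compliant exports from different
# vendors. The record shape here is imaginary; the point is that no
# vendor-specific translation logic is needed.
import json

# Hypothetical exports from two different vendors' systems, both
# following the same agreed standard.
vendor_a = json.dumps([
    {"patient": "p1", "code": "blood-glucose", "value": 5.4, "unit": "mmol/L"}
])
vendor_b = json.dumps([
    {"patient": "p2", "code": "blood-glucose", "value": 6.1, "unit": "mmol/L"}
])

def pool_exports(*exports):
    # One generic loop handles every vendor, because the data is uniform.
    combined = []
    for export in exports:
        combined.extend(json.loads(export))
    return combined

print(pool_exports(vendor_a, vendor_b))
```

Without the shared structure, that one generic loop would become a bespoke adapter per vendor system, and with hundreds of systems per hospital the effort explodes.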
The EU and the UK are doing whatever they can to support adoption of standards; I won’t go into the details of that now. The important thing about e-health standards in the EU is that adoption is shaped as a government-led process. Everyone expects the government to set the direction, and then vendors will follow, to the extent that they can.
Things are very different in the US, for both the standards and the market structure. First of all, the US does not have committees that have to deal with adoption of standards across a number of countries; it is the industry that makes the decisions. There is minimal government intervention in setting the standards. So when a standard gains traction, it happens fast, and vendors take it forward.
When it comes to big (health) data, the real advantage of the US is its scale-driven healthcare IT market. A small number of vendors provide very large-scale solutions installed across hundreds of sites, delivering a lot of functionality. This is not the pattern you’d see in the EU. Why is this an advantage? Well, you have a much smaller number of systems that need to negotiate, and even when they don’t bother negotiating, they still produce a lot of data in a form that is completely under a single implementer’s control.
The other unique advantage the US market possesses is its massive cloud infrastructure and the legal framework that allows vendors to use that infrastructure. This can be summarised in two acronyms: HIPAA and AWS (go google these). There is nothing like this in the EU. With the EU regulations and laws in place, if you magically replaced every existing health IT solution with its standards-based twin from an alternate universe in the blink of an eye, you’d still not be able to join that data in a cloud infrastructure.
In the US, Epic, Cerner, Intermountain Healthcare, Allscripts etc. will build their own big data installations. They have AWS to give them cost-efficient infrastructure if they choose to use it, and the government has taken care of the legal framework, so everything is in place. They may or may not negotiate to build larger pools, but if they don’t want to, they don’t have to. So these large actors will be building a whole new form of intellectual capital: mathematical models based on large-scale health data, which will turn into products themselves. You can imagine how things will develop.
You don’t even have to be a large corporation to innovate in this manner, though the large ones will have a clear advantage. You can start a company based on a big data solution or algorithm you trust, and if you can convince your customers, both the cloud and the legal framework are there for you.
So the US has a market structure and a legal framework in place that allow vendors to move forward much faster than their counterparts in the EU. This is why I expect US healthcare to have better clinical decision support sooner.
Please don’t interpret any of the above as a claim of better healthcare delivery or outcomes in the US. That is an entirely different topic. There is also the issue of most big data people not having a clue about healthcare data and how it is produced, but there is only so much that can go into a blog post.