Are There Really More Sensors in a Car Than in a Patient?
A notable speaker at a recent digital health conference presented a TED-style PowerPoint talk with provocative infographics about how healthcare has fallen behind every other industry on the planet. One of the slides said this:
“Your car has 100 sensors. Your smartphone has 14 sensors. Your patient has ZERO sensors.”
The technophiliacs in the audience nodded in approval, some even snickering at the sheer insanity of failing to connect patients with just one measly sensor. You could practically see them high-fiving each other in the aisles.
As I listened to the rest of the talk, I heard claims about how wearable sensors are fundamentally transforming healthcare delivery, providing new insights never before possible, and delivering data to doctors to make better real-time decisions. As a technophile myself, and as someone who invented an FDA-approved wearable sensor, I was moved by this excitement.
But as a doctor who sees patients each week in a clinic, I had trouble bringing myself to fully join the bro-grammer high-five circle. The infographic just didn’t make sense.
Are there really more sensors in a car than in a patient?
I couldn’t help but think that there are already 86 billion “sensors” in the human brain alone – nearly half the total number of stars in the Milky Way galaxy. There are millions more onboard sensors that provide immediate and precise feedback about whether we’re hungry, thirsty, walking, running, sitting, working, relaxing, meditating, laughing, or just farting. Do you know if you have to urinate or defecate? Sure you do, because your body tells you so; you don’t need a wearable for that. Have you gotten enough exercise lately? Don’t tell me you need a Fitbit to answer that question; you just know if you’re treating your body right. Are you eating too much food? You know that answer too; an unfathomably vast network of neuro-hormonal sensors has already clued you in.
We are evolutionarily endowed with an array of sensors that blows away anything in a car, a phone, or even the Mars Rover. Zero sensors on a patient? Perhaps it’s more like a “1” with nine trailing zeros.
This doesn’t mean adding more sensors isn’t warranted. It doesn’t mean we can’t “hack” the body by offering creative new ways to present physiologic data to users. It doesn’t mean wearable sensors have a bleak future. It just means we need to separate the issue of how many and what type of sensors we have from what to do with their data.
I don’t believe the question is whether patients have enough biosensors (they do). The issue is whether patients are willing to pay attention to what those billions of sensors are saying. The number of biosensors in (or on) a patient’s body will probably not determine whether she makes wise health decisions; her readiness to change makes the difference. I believe that digital health is less about computer science and engineering, and more about social science and behavior.
This mindset helps us recognize that building a device is just the beginning. To make inroads against chronic diseases like diabetes, heart failure, and obesity, we need to change behavior. We already have billions of sensors in our body; the issue is whether we heed their clarion call to action. Often we don’t, even though we know better.
This highlights the difference between data and information. Wearable devices yield data – “zeros and ones”. The data points are filtered through algorithms to yield information – a higher level of understanding. But we know that information also isn’t enough. We next have to transform information into knowledge. Only then can we transform knowledge into wisdom – the marrow-deep understanding that drives behavior. That’s the traditional sequence found in health analytics textbooks:
Data → Information → Knowledge → Wisdom.
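To make the first arrow concrete, here is a minimal Python sketch of the data-to-information step: raw step counts (data) are reduced to a small weekly summary (information). The 7,000-step goal and the “on track” cutoff are invented for illustration, not clinical guidance.

```python
# A minimal sketch of the "data -> information" step: raw step counts (data)
# are reduced to a small weekly activity summary (information).
# The 7,000-step goal and the 5-day cutoff are illustrative, not clinical advice.
from statistics import mean

def summarize_steps(daily_steps, goal=7000):
    """Turn a week of raw step counts into a small piece of information."""
    days_met = sum(1 for steps in daily_steps if steps >= goal)
    return {
        "average_steps": round(mean(daily_steps)),
        "days_meeting_goal": days_met,
        "on_track": days_met >= 5,  # arbitrary cutoff, for illustration only
    }

if __name__ == "__main__":
    week = [3200, 8100, 7600, 4500, 9900, 6200, 7100]
    print(summarize_steps(week))
    # {'average_steps': 6657, 'days_meeting_goal': 4, 'on_track': False}
```

The point of the sketch is only that this first transformation is a purely computational one; the later moves toward knowledge and wisdom are not.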
Data is the result of engineering and computer science, and it’s great. Everything after data is a result of social and behavioral science. And that’s vital. How do we combine computer science with behavioral science? I am heavily influenced by Joseph Kvedar’s work at Partners HealthCare. Dr. Kvedar’s academic team not only builds and tests digital interventions, but also determines how to optimize its apps and sensors within a biopsychosocial framework using the transtheoretical model (TTM) of behavioral change. The TTM posits that patients find themselves along a spectrum of five “readiness to change” stages: precontemplation (not ready); contemplation (getting ready); preparation (ready); action; and maintenance. Where a patient is along the TTM spectrum will determine whether he acts on the results of a biosensor. It matters less whether the signals are coming from a sensor attached to his wrist or lodged deep within his cerebral or adrenal cortex; what matters more is whether he is ready to act on those signals.
Kvedar’s recent book, The Internet of Healthy Things, is a must-read to learn why digital health is essentially a behavioral science. Kvedar’s team not only personalizes its digital interventions, but hyper-personalizes them. By integrating everything from time of day, to step counts, to the local weather, to levels of depression or anxiety, to the patient’s TTM stage, Kvedar’s team sends pinpoint messages to patients and their providers at the right time and right place. As a result, they are making headway on some of medicine’s hardest behavioral challenges. We should all take note and learn how to translate the data from sensors – whether biological or man-made – into information, knowledge, and wisdom.
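To make that idea concrete, here is a hedged Python sketch of stage- and context-aware message selection. It is not Kvedar’s actual system: the five stages come from the TTM described above, while the contextual inputs, thresholds, and message wording are invented for illustration.

```python
# Illustrative sketch only: a toy message selector that combines a patient's
# TTM stage with a few contextual signals (step count, hour of day, weather).
# Stage names come from the transtheoretical model; everything else is invented.
from dataclasses import dataclass
from enum import Enum

class TTMStage(Enum):
    PRECONTEMPLATION = "precontemplation"  # not ready
    CONTEMPLATION = "contemplation"        # getting ready
    PREPARATION = "preparation"            # ready
    ACTION = "action"
    MAINTENANCE = "maintenance"

@dataclass
class Context:
    steps_today: int
    hour: int        # 0-23, local time
    raining: bool

def pick_message(stage, ctx):
    """Choose a nudge matched to readiness to change, not just to the raw data."""
    if stage is TTMStage.PRECONTEMPLATION:
        # Prescribing a workout to someone who isn't ready tends to backfire;
        # offer information rather than instruction.
        return "Did you know short walks can ease afternoon fatigue?"
    if stage is TTMStage.CONTEMPLATION:
        return "You mentioned wanting to move more. Want help setting a small goal?"
    if stage in (TTMStage.PREPARATION, TTMStage.ACTION):
        if ctx.steps_today < 4000 and not ctx.raining and ctx.hour < 19:
            return f"You're at {ctx.steps_today} steps. A 15-minute walk now would close the gap."
        return "Nice pace today. Keep it up."
    # MAINTENANCE: reinforce the habit, don't nag.
    return "Another consistent week. That's what keeps the habit."

if __name__ == "__main__":
    print(pick_message(TTMStage.ACTION, Context(steps_today=2500, hour=17, raining=False)))
```

The design point is that the same step count yields a different message depending on readiness to change and context, which is exactly where the behavioral science comes in.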
I think that is a more pressing issue than whether our patients have as many sensors as a car or a phone, because they do.
_- Commentary by Dr. Brennan Spiegel_