Dr. Chris Stout, VP, Clinical Research and Data Analytics, ATI Holdings, LLC
From my daily focus on the analytical side of sports medicine and orthopedic rehabilitation, I am very interested in, and have published on, complex systems and nonlinear relationships in how hospitals and healthcare systems function. With the near-daily improvement and refinement of big data sets, and their growing availability (see Registries below), we can move away from the prior orthodoxy of an IFTTT (IF This Then That) model of causality toward a more sophisticated (and realistic) one that emphasizes risk pattern recognition over risk factors alone. Initial work in this area concerns understanding and preventing sports injuries, but it is conceivably scalable to public health and personalized medicine.
The authors of a paper, Complex Systems Approach for Sports Injuries, note that “Injury prediction is one of the most challenging issues in sports and a key component for injury prevention. Sports injuries aetiology investigations have assumed a reductionist view in which a phenomenon has been simplified into units and analyzed as the sum of its basic parts and causality has been seen in a linear and unidirectional way. This reductionist approach relies on correlation and regression analyses and, despite the vast effort to predict sports injuries, it has been limited in its ability to successfully identify predictive factors. The majority of human health conditions are complex. In this sense, the multifactorial complex nature of sports injuries arises not from the linear interaction between isolated and predictive factors, but from the complex interaction among a web of determinants.”
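The quoted distinction between isolated predictive factors and a "web of determinants" can be made concrete with a deliberately artificial example. In the sketch below, all data are invented: injury follows a pure interaction between two binary factors, so examining either factor alone shows only the baseline injury rate, while the combination is perfectly predictive.

```python
import itertools

# Deliberately artificial data: 100 athletes, two binary factors.
# Injury occurs only when the factors are mismatched (an XOR pattern),
# chosen so that each factor alone carries zero information.
data = [(load, recovery, load ^ recovery)
        for load, recovery in itertools.product([0, 1], repeat=2)] * 25

def p_injury(condition):
    """Empirical injury rate among athletes matching a condition."""
    hits = [inj for load, rec, inj in data if condition(load, rec)]
    return sum(hits) / len(hits)

print(p_injury(lambda l, r: True))              # baseline rate: 0.5
print(p_injury(lambda l, r: l == 1))            # one factor alone: still 0.5
print(p_injury(lambda l, r: r == 1))            # still 0.5
print(p_injury(lambda l, r: l == 1 and r == 0)) # the combination: 1.0
```

A one-factor-at-a-time correlation screen would discard both variables as uninformative; only an analysis that considers combinations of factors recovers the pattern, which is the reductionism critique in miniature.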
Other researchers used predictive analytical tools to forecast injury likelihood in rugby players from each player’s training load, “…in order to field the best possible team throughout the season. The analysis (was used to) predict the likelihood of a particular player being injured, which then enabled the coaching team to adapt and modify each player's personalized training program to maximize their training load and minimize their risk of injury.”
Sports Injury Predictor is a patent-pending algorithm that determines the probability of an American football player being injured in a season. It applies machine learning to combined player injury data that includes “every injury that has taken place to skill position players in the NFL and college for the last 10 years. Includes type of injury, games missed, surgery required and more.” It combines that with player age, height, weight, “position, how many times players will touch the ball in a game, number of plays a player is on the field” and then runs an “injury correlation matrix to determine the statistical probability of an injury occurring based on previous injury.”
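The last step described, estimating injury probability conditioned on previous injury, can be illustrated generically. The sketch below is not the patented model itself, and every record and figure in it is invented; it only shows the kind of conditional tally such a correlation matrix builds on.

```python
from collections import defaultdict

# Hypothetical player histories: (prior_injury_type, reinjured_this_season).
# All entries are invented for illustration.
history = [
    ("hamstring", True), ("hamstring", True), ("hamstring", False),
    ("ACL", True), ("ACL", False), ("ACL", False), ("ACL", False),
    ("none", True), ("none", False), ("none", False), ("none", False),
]

# injury type -> [number reinjured, total players with that history]
counts = defaultdict(lambda: [0, 0])
for injury_type, reinjured in history:
    counts[injury_type][0] += reinjured
    counts[injury_type][1] += 1

for injury_type, (n_reinjured, total) in sorted(counts.items()):
    print(f"P(injury | prior {injury_type}): {n_reinjured / total:.2f}")
```

A production system would condition on many more variables (age, position, touches per game, and so on), but the core idea is the same: an empirical probability of injury given what has already happened to that class of player.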
It was just in 2016 that one of the most prestigious medical journals, the Journal of the American Medical Association, made mention of machine learning.
In this somewhat landmark paper, the authors speak to the Internet of Things and the quantified self: “Global adoption of mobile and wearable technology has added yet another dimension to machine learning, allowing the uploading of large amounts of personal data into learning algorithms. Now, within closed-loop feedback systems, mobile technology (e.g., a smartphone) is not just a biometric device (e.g., measuring blood glucose levels) but ultimately could become a platform from which to deliver tailored interventions based on algorithms that continually optimize for personal information in real time. Available for many years, implantable cardioverter-defibrillators have saved lives by using algorithms to detect ventricular fibrillation and immediately deliver a defibrillating shock to the heart. Now, wearable devices promise to improve diabetes care—a small glucose meter adherent to the upper arm can regularly sample glucose levels, which are then wirelessly fed to the patient’s smartphone to inform the patient and treating physician.”
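The closed-loop pattern the authors describe, a sensor reading turned into a tailored, immediate response, can be sketched in a few lines. The thresholds and messages below are illustrative placeholders only, not clinical guidance.

```python
def classify_glucose(mg_dl):
    """Map a glucose reading (mg/dL) to a tailored response.

    Thresholds are illustrative placeholders, not medical advice.
    """
    if mg_dl < 70:
        return "low: take fast-acting carbohydrate; notify clinician"
    if mg_dl > 180:
        return "high: review insulin dosing; notify clinician"
    return "in range: no action"

# Simulated sensor samples arriving at the smartphone.
for reading in [65, 110, 190, 140]:
    print(reading, "->", classify_glucose(reading))
```

Real closed-loop systems would also log readings for the treating physician and adapt thresholds to the individual patient over time; the loop structure, sense then classify then intervene, is the constant.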
A more recent article appearing in the New England Journal of Medicine noted three key ways that machine learning will be transformative in medicine. The authors’ point is that the impact will come less from big data per se than from better algorithms.
Basically, “Dr. Watson” is better than Dr. Kildare, because machine learning can process millions of variables and weight each one’s influence in combination with comorbidities and demographics.
Thanks to ever-increasing computational horsepower, “… computers can look for anomalies at the pixel level of radiographs,” for example, something tough for a human, even an expert. The three key disruptive areas noted are:
1. “Establishing a prognosis: Data drawn from electronic health records or claims databases can help refine these models. They say prognostic algorithms will be used within five years, though several more years of data will be needed for validation.
2. Taking over much of the work of radiologists and anatomical pathologists. They also see algorithms used on streaming data taking over aspects of anesthesiology and critical care within years, not decades.
3. Improving diagnostic accuracy, suggesting high-value tests, and reducing overuse of testing. This will happen more slowly, they say, because some conditions don’t present clear or binary standards as radiology does (malignant or benign), which makes it harder to train algorithms (because of the prevalence of unstructured data in EHRs and because each diagnosis would require its own model).”
Not everyone shares a rosy acceptance of, or enthusiasm for, medical applications of machine learning. Authors of the enjoyably titled paper Voodoo Machine Learning for Clinical Predictions examined two popular cross-validation methods and found that one of them massively overestimated the prediction accuracy of machine learning algorithms used to support clinical decision making. Furthermore, they found that studies using accelerometers, wearable sensors, or smartphones to predict clinical outcomes tended to use the more error-prone approach to cross-validation.
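The pitfall those authors identify is easy to reproduce in miniature. In the deliberately extreme sketch below (data and model invented for illustration), each subject’s sensor readings carry a subject-specific signature but no signal that generalizes across subjects. Record-wise cross-validation, which lets records from the same subject appear in both training and test folds, reports near-perfect accuracy; subject-wise cross-validation, which holds each subject out entirely, reveals that nothing real was learned.

```python
import random

random.seed(0)

# Toy wearable-sensor data: 10 subjects, 20 records each. Every subject
# has a well-separated sensor baseline (an identity signature) and a
# fixed per-subject outcome label with no cross-subject relationship
# to the signal.
records = []  # (sensor_value, label, subject_id)
for sid in range(10):
    baseline = sid * 10.0   # subject signature
    label = sid % 2         # outcome, unrelated to the signal across subjects
    for _ in range(20):
        records.append((baseline + random.gauss(0, 1), label, sid))

def one_nn_accuracy(data, exclude_same_subject):
    """Leave-one-out 1-nearest-neighbour accuracy over all records."""
    correct = 0
    for i, (x, y, sid) in enumerate(data):
        _, predicted = min(
            (abs(x - x2), y2)
            for j, (x2, y2, s2) in enumerate(data)
            if j != i and not (exclude_same_subject and s2 == sid)
        )
        correct += predicted == y
    return correct / len(data)

# Record-wise CV: the same subject leaks into train and test folds.
record_wise = one_nn_accuracy(records, exclude_same_subject=False)
# Subject-wise CV: each subject is evaluated against the others only.
subject_wise = one_nn_accuracy(records, exclude_same_subject=True)

print(f"record-wise accuracy:  {record_wise:.2f}")   # near-perfect (leakage)
print(f"subject-wise accuracy: {subject_wise:.2f}")  # no generalizable signal
```

The gap between the two numbers is the overestimation the paper warns about: record-wise validation rewards the model for recognizing the person rather than predicting the clinical outcome.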
The staff at DeepMind Health (a branch of Alphabet) is working in the UK with the National Health Service, perhaps the largest healthcare system in the world, which treats about a million patients every 36 hours! About 10 percent of people who go to a hospital will experience a medical error or some iatrogenic harm. DeepMind Health aims “…to support clinicians by providing the technical expertise needed to build and scale technologies that help them provide the best possible care to their patients.” It is applying machine learning to medical research and the analysis of medical data with the goal to “improve how illnesses are diagnosed and treated…” and to “…help clinicians to give faster, better treatment to their patients…”
Such potential, I believe, could scale to third-party payers such as private insurers and Medicare, who could in turn scale back case-management needs (and their costs and hassles) and provide more appropriate levels of reimbursement based on more accurate, value-based measures of quality of care, realizing cost savings along the way.