Human factors assessments of medical devices are becoming more important to health systems and health services researchers. Evolving priorities within healthcare underscore the importance of human factors and of giving practitioners access to key contextual information that supports a more holistic view of the system.
Designing for Human Needs in Healthcare
Relative to other high-risk industries, medicine and health systems have been slow to adopt a human factors approach. Looking back at the literature, the current discussion started around 2000 with the landmark Institute of Medicine report "To Err Is Human," which documented the scope of the problem: medical error and the patient harm occurring because of it, and the fact that most of these errors stem from poor system design. Then, in 2011, the FDA issued its draft guidance on applying human factors and usability engineering to medical devices to enhance safety and effectiveness. It is not a stretch to say human factors is still in its infancy in medicine.
By way of context, in the 1990s, the NIH and other federal research organizations realized that while they were doing a great job of commissioning the development of knowledge that could benefit patients, that knowledge was not necessarily being applied. A review published in 2000 demonstrated a 17-year gap between knowledge creation and application, and even then, only 14% of that knowledge is actually translated into improved patient care. This spurred a shift in biomedical research away from pure knowledge generation toward thinking about how to implement this knowledge into practice in a manner that benefits patients at a population level. Out of this was born the translational research paradigm, a process of discovering scientific concepts and then applying that knowledge in clinical practice, with the objective of improving patient care at a population level.
As it turns out, understanding how to get an evidence-based intervention into clinical practice in a way that yields health benefits is extremely challenging, which is why the discipline of implementation science was created. It cannot be assumed that because an intervention has been shown to have benefit in one setting, it will necessarily have benefit in another. Implementation science is concerned with ensuring that an intervention shown to have benefits in controlled environments is implemented in a manner that yields benefits in other settings. If, for instance, only a few clinical settings can implement and use an intervention, then its health impact will be limited regardless of its efficacy. Added to this, a robust, sustainable, functional healthcare system must also consider the Triple Aim, which involves simultaneously improving population health, enhancing the care experience, and reducing costs. On top of that, since the pandemic, a fourth aim of workforce well-being and safety and a fifth aim of advancing health equity have been added, and the framework is now referred to as the Quintuple Aim.
Burnout has not always been a problem but has grown since the late 80s and early 90s. It is tied to the growth of HMOs, when clinicians started to lose autonomy over their tasks. The early 2000s saw broader adoption of EHRs, which many studies have connected to the epidemic of burnout. EHRs contributed significantly to burnout in large part because they had not been designed through a human factors lens. Multiple studies now document that clinicians actually spend more time charting in the EHR than they do with patients. This shift in work from direct patient care to administrative tasks is one of the big drivers of burnout. Since the pandemic, the trajectory of burnout within clinical medicine has only worsened.
The clinical environment will become increasingly complex, with greater cognitive load and time pressure. This is in part because of the explosion of available medical knowledge, coupled with the incredible amount of data in the EHR, which, to be frank, is not presented in a cognitively friendly way. This information overload makes it much harder to reach optimal decisions for patients. Healthcare systems are built on the assumption that working memory (or short-term memory) is an infinite resource, but cognitive load theory explains that working memory has a limited capacity and that overloading it reduces effectiveness. Attention is the most limited resource in clinical practice. In modern hospitals, managing attention throughout the day to prioritize the right things, or optimizing cognitive ergonomics in clinical environments, is essential to patient care.
Optimizing Device Use
With that context in mind, it is easy to see why learning new things in the healthcare system is extremely difficult, and so the adoption of any new device is a challenge. In healthcare, anything that adds inefficiency to the system simply will not remain part of the system. If cognitive ergonomics are not optimally tailored to the system, efficiency will decrease and, therefore, adoption will be limited.
Let's look at point-of-care diagnostic devices, which are among the more complex examples of human factors design. They are being used more than ever; in fact, this market is expected to grow from $30 billion to $100 billion within a few years.
When we talk about human factors and diagnostic devices, complex interventions have multiple components that must all be implemented to deliver a health benefit. This is distinct from drugs: if you have a pill, you swallow the pill and get the benefit of the intervention. That is not the case with diagnostic tests. The entire test-treat strategy needs to be evaluated to understand why and how a device is effective. To have an impact on health, a diagnostic test must be ordered, performed, interpreted, and integrated appropriately with the other clinical data; finally, the clinician needs to act on the results appropriately. If any of those components is not implemented properly, the patient will not see the benefit. It is also important to note that complex interventions are always context-dependent, because there are so many components whose implementation can be affected by factors unique to each context.
Because they can be derailed by context, complex interventions cannot be adequately evaluated in a controlled study setting alone; the information will not be complete. A real-world example is an imaging study that examined the ability of PET scans to improve the management of patients with lung cancer. The idea was that including this test in the diagnostic workup would make it possible to identify which patients had advanced disease and thereby determine whether a thoracotomy, a major surgery, was needed. The study found that while the test characteristics were fantastic (this is a highly accurate and useful test), it did not change patient management because the surgeons did not believe the results. It was therefore an ineffective test, not because the test itself wasn't good, but because not enough work had been done to educate clinicians on how to use it.
There are several points where the implementation of a diagnostic test can go awry, decreasing its effectiveness. In many cases, it is important to understand the psychology of those in the system in order to determine which implementation strategies aimed at behavior change need to be employed to implement the test successfully. For instance, how strongly the clinician believes in the test result can affect the patient's confidence in, and adherence to, the treatment. These are all very human responses to the test and should be taken into consideration. A human factors perspective is needed to capture this critical data.
More rigorous evaluations of diagnostic tests are becoming more and more common, and they are needed to ensure these tests deliver value; such evaluations can greatly impact system efficiency and patient outcomes. It is not enough to implement diagnostic tests because they have been demonstrated under controlled research conditions to be more accurate than other kinds of tests. The industry needs to go further and evaluate process outcomes and clinical impact on the patient to really understand their value. It will be incumbent on health systems to make these sorts of evaluations possible.
There is recognition within health systems that in-context human factors assessments are needed for the optimal adoption of medical devices. Health systems will need to be more proactive in reducing the barriers to these assessments in collaboration with device makers. Device makers can enhance adoption by going beyond just risk assessments of medical devices by performing in-context assessments. Ideally, human factors practitioners will be working across academia and industry to optimize the use of medical devices in a manner that contributes to the Quintuple Aim.
About Anna Maw, MD, MS
Dr. Maw is an academic hospitalist, implementation scientist, and associate professor of medicine at the University of Colorado School of Medicine. Her research focuses on how dynamic and complex inpatient environments interact with human factors to impact the adoption of new technology by clinicians and health systems.