
Elkin: Human Factors Engineering in HI: So What? Who Cares? and What's in It for You?

Abstract

Objectives

Human factors engineering is a discipline that deals with computer and human systems and processes, and that provides a methodology for designing and evaluating systems as they interact with human beings. This article reviews important current and past efforts in human factors engineering in health informatics, placing them in the context of current trends in the field.

Methods

The methodology of human factors engineering, and of usability testing in particular, is reviewed in this article.

Results

This methodology arises from the field of human factors engineering, which uses principles from cognitive science and applies them to implementations such as the computer-human interface and to approaches such as user-centered design.

Conclusions

Patient safety and the best practice of medicine require a partnership between patients, clinicians, and computer systems that serves to improve the quality and safety of patient care. People approach work and problems with their own knowledge base and set of past experiences, and their ability to use systems properly and with low error rates is directly related to the usability, as well as the utility, of those systems. Unusable systems have been responsible for medical error and patient harm, and have even led to the death of patients and increased mortality rates. Electronic Health Record and Computerized Physician Order Entry systems, like any medical device, should come with a known safety profile that minimizes medical error and harm. This article reviews important current and past efforts in human factors engineering in health informatics in the context of current trends in the field.

I. Introduction

Computer systems are increasingly being implemented within healthcare. Although the main drivers toward computerization remain the provision of increased efficiencies in the business of medicine, these systems serve to record data about patients, which are often analyzed and displayed to help clinicians with their decision making. Many of us have long thought of the day when computers would serve in partnership with patients and clinicians.
However, as we advance the science of informatics and implement more systems within the practice of medicine, we find too often that these systems lead to what has been called health information technology (HIT)-related medical error [1]. Human factors shortcomings have often been cited as the explanation for why software was capable of inducing error [2].
Computerized Physician Order Entry (CPOE) has been extensively studied as a source of HIT-related medical error [3]. Authors have shown that many of these errors were human factors related and would indeed have been preventable had the systems been rigorously usability tested prior to implementation [4].
Human factors engineering is the formal study of people's interaction with their environment [5]. Here we focus on their interaction with computer systems and the processes that surround the use of computers in healthcare. In doing so, we advocate for the employment of user-centered design methodology for the authorship of software to be used in the healthcare environment.
User-centered software design employs users in the earliest phase of software design and testing and even includes methods for the monitoring of the software post implementation [6]. It has been made clear in the literature that for software systems to improve the quality of care they must be designed with those improvements in mind [7].
It has also become clear that the socio-technical framework within which systems sit can also have a profound effect on their performance. In some cases this can overshadow the effect of the software itself [8].
Usability testing in particular, when used in the context of the user-centered design methodology, has the potential to protect patients and providers against HIT-related medical error and to help us provide safer and more effective care for our patients.
The methodology of human factors engineering, and of usability testing in particular, has been well described, but will be reviewed briefly here for the reader [9].

II. Methodology

A model usability study evaluates how a particular process or product works for individuals (Figure 1) [10]. Optimally, one would test a population of individuals who are a representative sample of typical users of the process or product being tested. It should be stated clearly to participants that the purpose of the study is to evaluate the process or product, not the individual participant [11]. Usability sessions are videotaped from multiple angles (including the computer's screen image), and participants are encouraged to share their thoughts verbally as they progress through the scenarios provided ("think aloud") [12]. This helps to define the participants' behavior in terms of both their intentions and their actions [13]. For example, in our study, we had the user identify what information they were looking for before they initiated their search. We could monitor what was entered into the program, and we were able to view the information retrieved. Then we recorded the degree to which the clinician-user felt satisfied with the information they had obtained [14].
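By way of illustration (this is a hypothetical sketch, not the instrument used in our study; all names and fields are assumptions), the following Python fragment shows how one such think-aloud task might be captured, pairing the participant's stated intention with the observed query, the information retrieved, and the recorded satisfaction rating:

    # Hypothetical sketch of a think-aloud task log; the class and field
    # names are illustrative assumptions, not the study's actual instrument.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class TaskRecord:
        participant_id: str
        scenario_id: str
        stated_intention: str            # what the user said they were looking for
        query_entered: str = ""          # what was actually typed into the system
        information_retrieved: str = ""  # what the system displayed in response
        satisfaction: Optional[int] = None  # e.g., 1 (unsatisfied) to 5 (fully satisfied)
        logged_at: datetime = field(default_factory=datetime.now)

    record = TaskRecord(
        participant_id="P03",
        scenario_id="S1",
        stated_intention="most recent serum creatinine for this patient",
    )
    record.query_entered = "creatinine"
    record.information_retrieved = "lab panel showing the last three creatinine results"
    record.satisfaction = 4

Keeping the stated intention separate from the observed action is what later allows reviewers to classify a discrepancy as a usability problem rather than a knowledge gap.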
To accomplish a valid study, one must follow a specific protocol and have multiple participants (typically 6 to 12) interact with the system using the same set of scenarios [15]. It is important that the design team be able to observe multiple participants if they are to become informed by the study. The scenarios should reflect the way the system being tested is actually going to be utilized [16]. The closer the study design can mimic the true end user environment, the more validity the results of the study will have [17]. In this manner, developers ascertain characteristics of their Web environment that are functional, need improvement, fit user expectations, miss expectations, fail to function, or are opportunities for development [18].
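The range of 6 to 12 participants is not arbitrary; it follows from the problem-discovery model described by Nielsen in reference [15]. As a minimal sketch, assuming the average per-participant discovery rate of roughly 31% that Nielsen reported (real projects vary), about six participants can be expected to surface close to 90% of the usability problems, and twelve close to 99%:

    # Nielsen-Landauer problem-discovery model: expected proportion of
    # usability problems found by n participants, where lambda_ is the
    # average probability that one participant uncovers any given problem.
    # lambda_ = 0.31 is the average rate Nielsen reported; real projects vary.
    def proportion_found(n_participants: int, lambda_: float = 0.31) -> float:
        return 1.0 - (1.0 - lambda_) ** n_participants

    for n in (1, 3, 6, 9, 12):
        print(f"{n:2d} participants -> {proportion_found(n):.0%} of problems found")

This is why observing a handful of participants, rather than running a large trial, usually suffices to inform the design team.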
The usability laboratory is a suite of rooms that provides space for study planning, execution, and review. There is a conference room with whiteboard space for planning and evaluation. The facility used for executing the study includes the study lab, a control room, and a developers' observation booth. The study lab is a space which, in our study, included a desk and chair with a computer, screen, keyboard, and mouse on the desk. There are cameras in each of three corners of the room, and the back wall is a one-way mirror. The user sits in this space and works through the scenarios provided by the study team, after a short introduction to the facility and the purpose of the study by the study director (who is not part of the development team) (Figure 2). Behind the one-way mirror is a soundproof room with multiple monitors and video recording equipment. The control person directs the videotaping from the available source inputs (including a video feed of the screen). The study director has a microphone, which is used to communicate with the study participant. The development team, if present, sits in a third space separated by a soundproof enclosure located behind the control room. In this space, the development team has no contact with the participant but can easily observe the study and gain direct experience with the user's interaction with the Web environment [19].
This methodology arises from the field of human factors engineering, which uses principles from cognitive science and applies them to implementations such as the computer-human interface [20]. By critically evaluating the design of our Web environment, we move closer to the ideal design strategy, which is "user-centered design" [21].

III. So What?

Computer systems, when designed safely and accurately, have the potential to improve patient care and to make the practice of medicine more efficient. This requires user-centered system designs that employ formal usability testing methods. Such systems have more predictable human-computer interaction properties and thereby help clinicians to practice safer and more effective healthcare.

IV. Who Cares?

Healthcare organizations are increasingly implementing electronic systems. They are encouraged by governments, such as that of the United States, which see the electronification of healthcare as a way to monitor practice, institute pay-for-performance, and create more value for each healthcare dollar spent.
These advantages are dependent on systems that fit into the workflow of healthcare and that efficiently provide assistance to increasingly busy clinicians trying to provide the very best care for their patients.
Health informatics researchers who are attempting to design the next generation of health IT systems and processes should also be interested in minimizing human factors errors. This requires that our field embrace human factors, and in particular usability testing, to ensure that we practice what we preach: creating systems that are designed in conjunction with typical users and that produce consistent clinical and administrative outcomes.

V. What's in It for You?

Formal usability testing and the user-centered design paradigm will help you ensure that your work, once installed, functions as intended. They will also allow you to measure important clinical and functional outcomes that will support the quality of your work.
In the end, if we consistently employ human factors methods, we all get to work with better software and processes that will improve the quality, safety and efficiency of our work.

VI. Conclusion

Human factors engineering (HFE) for health informatics (HI) supports the user-centered design approach to software creation and to the environment in which that software functions. The many expectations that we have for computers to positively affect the care that we provide to patients are only possible if the human-computer interaction afforded by these systems is engineered to produce consistent and positive clinical outcomes.
The international informatics community has embraced human factors engineering as an important aspect of our field. In 2006, the International Medical Informatics Association (IMIA) started a working group (WG) on human factors engineering in HI. The WG was formed with liaisons to the socio-technical WG and the evaluation WG.
The IMIA WG has held many meetings over the years. The first, in Lille, France, introduced Evalab, an advanced usability testing laboratory at the University of Lille 2. There we started networks of collaboration in HFE that widened the circle of interested researchers and interesting projects. A conference held at the Mayo Clinic later in 2006 looked at how HFE affected the development of intelligent Electronic Health Records.
The next conference was held at Skejby Hospital in Denmark and was devoted to a wide range of HFE issues. As an output of the meeting, it became clear that standards in HFE for HI were needed; that work is ongoing, with the University of Amsterdam leading a Delphi method inquiry into standardizing both publications and usability reports. The next meeting was held in Amsterdam in 2008, where the now sophisticated working group helped in the evaluation of master's candidates whose primary interest was HFE in HI.
In 2009, a meeting was held in Sonoma, California, where international leaders and students in HFE in HI gathered to discuss advanced topics and to debate the relevance of the Statement on Reporting of Evaluation studies in Health Informatics (STARE-HI) criteria to HFE in HI [22].
In 2011, once again the IMIA WG on HFE in HI came together in Trondheim, Norway to discuss advancements in HFE that included a wide range of implementation studies and techniques.
In 2013, we are planning a meeting in coordination with MedInfo in Copenhagen, Denmark. The meeting is being developed in conjunction with the socio-technical working group of IMIA and will encourage contributions that discuss the context-sensitive nature of HI implementations [23].
The HFE in HI working group of IMIA welcomes your participation and hopes that you will be inspired to take on some of the important challenges remaining to be solved in the HFE of HIT systems. HFE is a methodology and a philosophy which holds that we need to work together to better understand the contexts and perspectives in which our HIT systems function, and it provides guidance on how to reduce unwanted variability in our design work. Through rigorous HFE techniques, we can and will create a brighter future for the healthcare of our patients and their families.

Figures and Tables

Figure 1
Some attributes of usefulness, as elucidated by bench testing. Here we depict the axes of usability. These depictions serve to emphasize the goals and challenges to the design of a well-formed Web (hypertext) environment.
Figure 2
This is a typical layout for an evaluation laboratory used for user interface and software evaluation. Recording and monitoring equipment is managed from the control room. There are cameras and microphones in the "lab" which capture the computer screen as well as the participant's actions and verbal observations. The lab (as noted in the diagram) is where each participant sits in front of a computer, at a desk configuration similar to his or her normal work environment, and performs the scenarios outlined in the Methods section. To avoid bias, developers typically "observe" from an observation room and do not themselves participate in the usability studies.

Notes

No potential conflict of interest relevant to this article was reported.

References

1. Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293(10):1197–1203.
2. Koppel R. Monitoring and evaluating the use of electronic health records. JAMA 2010;303(19):1918.
3. Nohr C, Sorensen M, Kushniruk A. Method for testing a CPOE system in the medication process in a cardiology ward. Stud Health Technol Inform 2010;160(Pt 1):183–187.
4. Aarts J, Nohr C. From safe systems to patient safety. Stud Health Technol Inform 2010;157:1–3.
5. Beuscart-Zephir MC, Elkin P, Pelayo S. Human factors engineering for clinical applications. Stud Health Technol Inform 2006;124:685–690.
6. Beuscart-Zephir MC, Aarts J, Elkin P. Human factors engineering for healthcare IT clinical applications. Int J Med Inform 2010;79(4):223–224.
7. Koppel R, Majumdar SR, Soumerai SB. Electronic health records and quality of diabetes care. N Engl J Med 2011;365(24):2338–2339.
8. Pelayo S, Anceaux F, Rogalski J, Elkin P, Beuscart-Zephir MC. A comparison of the impact of CPOE implementation and organizational determinants on doctor-nurse communications and cooperation. Int J Med Inform 2012 Sep 20 [Epub ahead of print]. http://dx.doi.org/10.1016/j.ijmedinf.2012.09.001.
9. Beuscart-Zephir MC, Elkin P, Pelayo S, Beuscart R. The human factors engineering approach to biomedical informatics projects: state of the art, results, benefits and challenges. Yearb Med Inform 2007:109–127.
10. Nielsen J. Usability engineering. Boston (MA): Academic Press; 1993.
11. Preece J, Rogers Y, Sharp H, Benyon D, Holland S, Carey T. Human-computer interaction. Reading (MA): Addison-Wesley; 1994.
12. Hix D, Hartson HR. Developing user interfaces: ensuring usability through product & process. New York (NY): John Wiley; 1993.
13. Coble JM, Karat J, Orland MJ, Kahn MG. Iterative usability testing: ensuring a usable clinical workstation. Proc AMIA Annu Fall Symp 1997:744–748.
14. Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics: cognitive approaches to evaluation of information systems and user interfaces. Proc AMIA Annu Fall Symp 1997:218–222.
15. Nielsen J. Estimating the number of subjects needed for a thinking aloud test. Int J Hum Comput Stud 1994;41(3):385–397.
16. Patel VL, Ramoni MF. Cognitive models of directional inference in expert medical reasoning. In: Feltovich PJ, Ford KM, Hoffman RR, editors. Expertise in context: human and machine. Cambridge (MA): MIT Press; 1997. p. 67–99.
17. Weir C, Lincoln MJ, Green J. Usability testing as evaluation: development of a tool. Proc AMIA Annu Fall Symp 1996:870.
18. Kushniruk A, Patel V, Cimino JJ, Barrows RA. Cognitive evaluation of the user interface and vocabulary of an outpatient information system. Proc AMIA Annu Fall Symp 1996:22–26.
19. Elkin PL, Mohr DN, Tuttle MS, Cole WG, Atkin GE, Keck K, et al. Standardized problem list generation, utilizing the Mayo canonical vocabulary embedded within the Unified Medical Language System. Proc AMIA Annu Fall Symp 1997:500–504.
20. Patel VL, Kushniruk AW. Interface design for health care environments: the role of cognitive science. Proc AMIA Symp 1998:29–37.
21. Patel VL, Kushniruk AW. Understanding, navigating and communicating knowledge: issues and challenges. Methods Inf Med 1998;37(4-5):460–470.
22. Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykanen P, Rigby M. STARE-HI: statement on reporting of evaluation studies in health informatics. Int J Med Inform 2009;78(1):1–9.
23. Context sensitive health informatics: human & sociotechnical approaches [Internet]. Geneva, Switzerland: International Medical Informatics Association; c2012 [cited 2012 Nov 16]. Available from: http://www.cshi2013.org.