Abstract
Objectives
The purpose of this study was to review evaluation studies of nursing management information systems (NMISs) and their outcome measures to examine system effectiveness.
Methods
For the systematic review, a literature search of the PubMed, CINAHL, Embase, and Cochrane Library databases was conducted to retrieve original articles published between 1970 and 2014. The Medical Subject Headings (MeSH) terms informatics, medical informatics, nursing informatics, medical informatics application, and management information systems were used for information systems, and evaluation studies and nursing evaluation research were used for evaluation research. Additionally, the truncated keywords (manag* OR admin*) and nurs* were combined with these terms. Title, abstract, and full-text reviews were completed by two reviewers, who then extracted year, author, type of management system, study purpose, study design, data source, system users, study subjects, and outcomes from the selected articles. The quality and risk of bias of the finally selected studies were assessed using the Risk of Bias Assessment Tool for Non-randomized Studies (RoBANS) criteria.
Results
Out of the 2,257 retrieved articles, a total of six articles were selected. These included two scheduling programs, two nursing cost-related programs, and two patient care management programs. The outcome measures included usefulness, time saving, satisfaction, cost, attitude, usability, data quality/completeness/accuracy, and personnel work patterns. User satisfaction, time saving, and usefulness mostly showed positive findings.
Conclusions
The study results suggest that NMISs were effective in saving time and were useful in nursing care. Because the reviewed studies lacked methodological quality, well-designed research, such as randomized controlled trials, should be conducted to evaluate the effectiveness of NMISs more objectively.
Introduction
As the number of hospital information systems (HISs) has rapidly increased, so has the number of systems for nursing. Since nursing care is a major operating cost within a hospital budget, nursing management is important for cost saving, and it contributes to the financial stability of hospitals [1]. Moreover, nursing management also affects clinical practice; it is responsible for managing nursing units, personnel (recruitment, selection of staff, development, and the working environment), budgets (budgeting, cost control, and financial results), nursing practice (introducing and maintaining standards), and the development of services [2,3]. For these activities, effective nursing management relies on the effective use of up-to-date information about patient flow and acuity, staffing, and costs. Thus, these systems should be evaluated to support cost management, activity planning, resource allocation, and quality assurance [4,5,6].
The outcome of such investment should be justified by evaluating effectiveness in terms of various factors, such as the money, time, and resources involved in the development and implementation of the systems [7]. Evaluation studies and projects involving information systems for nurses have been published. Generally, researchers evaluate the results against the expected outcomes or goals of the information system, or compare the results before and after system implementation [8]. Several evaluation studies of nursing management information systems (NMISs) have also been published. Some studies have focused on nursing financial or cost management systems [9,10], while others have been related to staffing or resource management systems [6,11]. In addition, some studies have focused on patient or data management systems [4,12].
Although several studies evaluating the effectiveness of NMISs have been conducted, to our knowledge, there has been no summary or synthesis of the existing evidence. Therefore, the purpose of the current study was to systematically review and synthesize the evidence on the effectiveness of NMISs used by nurses in clinical settings.
Methods
An extensive search for articles published from 1970 to May 2014 was conducted using the PubMed, CINAHL, Embase, and Cochrane Library databases. Because all of the databases we used index terms based on Medical Subject Headings (MeSH), we first selected MeSH terms, including (informatics OR nursing informatics OR medical informatics OR medical informatics application OR management information systems) for information systems, and (evaluation studies OR nursing evaluation research) for evaluation research. We combined these MeSH terms with other keywords, such as (manag* OR admin*) for management and nurs* for nursing, so that the wildcards would retrieve all relevant articles. Only studies written in English were included. The titles and abstracts returned by the search were read and assessed by two reviewers: one was a nursing professor who majored in nursing informatics, and the other was a doctoral student of nursing trained in systematic review.
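The full search string is not reproduced verbatim in this report; as an illustration only, the sketch below shows how the blocks described above could be assembled into a single PubMed query and run through NCBI's E-utilities using Biopython's Entrez module. The field tags, date parameters, and contact email are assumptions made for the example.

```python
# Illustrative only: assembles the search blocks described above into one
# PubMed query and asks NCBI E-utilities for the hit count via Biopython.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

# MeSH block for information systems (term forms and field tags are assumptions).
systems = ('"informatics"[MeSH] OR "nursing informatics"[MeSH] OR '
           '"medical informatics"[MeSH] OR "medical informatics applications"[MeSH] OR '
           '"management information systems"[MeSH]')
# MeSH block for evaluation research.
evaluation = '"evaluation studies"[MeSH] OR "nursing evaluation research"[MeSH]'
# Truncated keywords so the wildcards match all relevant variants.
keywords = "(manag* OR admin*) AND nurs*"

query = f"({systems}) AND ({evaluation}) AND {keywords}"

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="1970", maxdate="2014/05",
                        retmax=0)  # retmax=0: return only the total hit count
record = Entrez.read(handle)
handle.close()
print(record["Count"])  # number of matching PubMed records
```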
An article was included if it met all of the following criteria: 1) it included a nursing management system or a system developed as part of nursing management; 2) it was original research; and 3) it included nurses as system users or study subjects. An article was excluded if it met any of the following criteria: 1) it focused on systems used in nursing homes, the community, or long-term care facilities; 2) it simply evaluated an IT device; 3) it evaluated the technical aspects of a developed system; 4) it evaluated systems not directly related to nursing; or 5) it was a thesis, abstract, or part of conference proceedings.
From the six selected studies, we extracted the type of management system, study purpose, study design, data source, system users, and study subjects. Since the system users and study subjects were not necessarily the same, we extracted both types of information. Additionally, we extracted the outcomes used to evaluate the effectiveness of the NMISs. Disagreements were resolved through discussion between the two reviewers.
Study quality was independently assessed by the two reviewers using the Risk of Bias Assessment Tool for Non-randomized Studies (RoBANS 2.0) criteria [13]. The RoBANS criteria cover eight specific areas: comparability of participants, selection of participants, confounding variables, intervention measurement, blinding of outcome assessment, outcome evaluation, incomplete outcome data, and selective outcome reporting [13]. Each criterion was rated as 'low risk of bias', 'high risk of bias', or 'unclear'; if a study did not mention a certain criterion, we rated it as 'unclear'. Instances of disagreement were discussed with all authors.
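As a concrete illustration of this appraisal procedure, the hypothetical sketch below records one study's per-domain RoBANS judgments and tallies them; the domain names follow the list above, but the example ratings are invented and do not reproduce the actual appraisal in Table 3.

```python
# Hypothetical sketch: record one study's RoBANS judgments per domain and
# tally them, defaulting any unmentioned domain to 'unclear' as described above.
from collections import Counter

DOMAINS = [
    "comparability of participants", "selection of participants",
    "confounding variables", "intervention measurement",
    "blinding of outcome assessment", "outcome evaluation",
    "incomplete outcome data", "selective outcome reporting",
]

def tally(appraisal: dict) -> Counter:
    """Count 'low'/'high'/'unclear' judgments; missing domains become 'unclear'."""
    assert set(appraisal) <= set(DOMAINS), "unknown RoBANS domain"
    return Counter(appraisal.get(domain, "unclear") for domain in DOMAINS)

# Invented example, not the appraisal of any reviewed study.
example = {"comparability of participants": "low", "confounding variables": "high"}
print(tally(example))  # Counter({'unclear': 6, 'low': 1, 'high': 1})
```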
Results
The initial search retrieved a total of 2,257 studies: 807 from PubMed, 812 from CINAHL, 625 from Embase, and 13 from the Cochrane Library. From these, 253 duplicate articles were removed. Based on the inclusion and exclusion criteria, two members of the research team independently reviewed each article and reached a consensus regarding its exclusion. The review process progressed in three stages: title review, abstract review, and full-text review. We excluded 1,929 studies during the title and abstract review and a further 69 studies during the full-text review. Finally, a total of six articles were selected for this study. The retrieval and screening process is summarized in Figure 1.
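The screening flow above reduces to simple arithmetic; the following minimal sketch merely restates the counts reported in this paragraph.

```python
# Restates the screening counts reported above as simple arithmetic.
retrieved = {"PubMed": 807, "CINAHL": 812, "Embase": 625, "Cochrane Library": 13}
total = sum(retrieved.values())              # 2,257 records retrieved
after_dedup = total - 253                    # 2,004 after removing duplicates
after_title_abstract = after_dedup - 1929    # 75 after title/abstract review
included = after_title_abstract - 69         # 6 after full-text review
print(total, after_dedup, after_title_abstract, included)  # 2257 2004 75 6
```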
The selected studies included three types of NMISs: two scheduling programs (a perioperative system and a self-scheduling system) [11,12], two nursing cost-related programs (a nursing resource management system and a nursing financial management system) [6,9], and two patient care management programs (a data warehouse-based NMIS and a computerized nurse dependency management system) [4,10] (Table 1).
With regard to study design, we found two quantitative studies [9,10], three mixed-method studies with both quantitative and qualitative approaches [4,6,11], and one descriptive study [12]. For the data source, questionnaires were the most frequently used quantitative method, appearing in four studies [4,6,9,11]. Qualitative approaches, including semi-structured interviews and focus group interviews, were used in three studies [4,6,11]. Additionally, hospital data, such as fiscal and human resource data [9] as well as electronic nursing workload management reports [10], were used to evaluate system effectiveness.
For system users and study subjects, although the majority of users of the perioperative system were nurses, other professionals (e.g., physicians and lab technicians) were also system users [12]. The study subjects were operating room nurses, unit nurse managers or directors, unit head nurses, and experts in clinical nursing informatics (Table 1).
The outcome measures of NMISs in the six studies were classified into eight categories: usefulness, time saving, satisfaction, cost, attitude, usability, data quality/completeness/accuracy, and personnel work patterns. Most studies used multiple outcome measures, ranging from 2 to 7, with an average of 5.0 per study (Table 2).
All six studies evaluated 'usefulness', and most reported positive results. For example, the nursing financial management system helped make the nursing staff's work processes less complicated and improved productivity [9]. In addition, the perioperative system decreased cancellations and equipment conflicts in operating rooms and improved overall documentation [12]. In the nursing resource management information system evaluation study, nurses' mean perceived usefulness score was a high 26.7, on a scale ranging from -33.0 to +33.0 [6]. Furthermore, the data warehouse-based nursing management system improved care to meet actual care needs (40%), and appropriate care was delivered at the right time according to predefined clinical processes (20%) [4]. The computerized nurse dependency management system allowed nurse managers to use continuous patient dependency data to predict and allocate staff based on nursing care requirements [10]. However, one negative result was described in the self-scheduling system study: nurses could not predict what events would happen 2-3 months ahead of time, and self-scheduling was difficult when the schedule had to be changed [11].
'Time saving' and 'satisfaction' were the second most frequently evaluated outcomes, each included in five studies [4,6,9,11,12], and most of the results demonstrated positive effects. In the time saving category, the nursing financial management system evaluation study showed that staff spent 6% of their time on report generation with the system, compared to 52% with the manual system. Moreover, the time required for data collection, organization, and manipulation decreased by 88%, a savings of 2,410 hours annually [9]. In addition, the perioperative system decreased the number of hours nurses spent ordering supplies and reduced unnecessary inventory [12]. The nursing resource management information system also reduced expenditures for overtime and extra hours compared to the control group [6]. Users of the self-scheduling system perceived that they had more time to spend with their families and felt they provided better patient care as a result [11]. On the other hand, the 'time saving' outcome was associated with some negative aspects of the NMISs. For instance, the self-scheduling system created too much work for the nurse managers to organize scheduling [11]. In addition, the data warehouse-based NMIS included so much data content, across several dimensions, that data exploration became obscure, and it was found to be confusing and time consuming in everyday use (27%) [4].
In the 'satisfaction' category, nurse managers were satisfied with the nursing financial management system, which ensured that delays in information reporting did not occur [9]. Additionally, the perioperative system allowed for reporting of caseload, average surgical time, cost per case, room usage or turnover time, numbers of cases, average case costs, case cancellation reasons, and the number of cancellations [12]. With regard to the nursing resource management information system, nurses' mean satisfaction score was 54.7, on a scale of 12-60 [6]. In the study of the self-scheduling system, nurse users were able to control their schedules and felt more freedom in their personal lives (70%), although some competition occurred when selecting preferred shifts [11]. However, the data warehouse-based NMIS did not include important quality aspects of patient care (40%) or information describing personnel competency and educational needs (30%) [4].
'Cost' was evaluated as an outcome measure in four studies. The nursing financial management system eliminated salaries related to re-working, which led to a 122% return on investment [9]. In addition, the perioperative system decreased lost charges [12], while the nursing resource management information system improved the budget balance [6]. Furthermore, the computerized nurse dependency management system predicted a decreased average number of nursing hours per ward and per shift compared to the manual system, and it allowed staff allocation to meet patients' varying requirements for nursing hours and skill mix [10].
'Attitude' was evaluated as an outcome measure in three studies. In the study of the nursing resource management information system, the mean score for the effect of implementation on attitude (i.e., job performance) was 13.3, on a scale of -26 to +26 [6]. However, the self-scheduling system gradually decreased control and flexibility [11]. The data warehouse-based NMIS showed positive aspects in terms of the systematic production of information from available nursing databases [4].
'Usability' was also evaluated as an outcome measure in three studies. In the nursing resource management information system study, the mean ease-of-use score was 16.3, on a scale of -18 to +18 [6]. The data warehouse-based NMIS was usable (22%), and multi-professional use was available to ensure the total quality of patient care (40%) [4]. However, the perioperative system study described how the personnel module only allowed users to view 10 personnel and 2 weeks of the schedule on each screen [12].
Three of the selected studies measured 'data quality/completeness/accuracy'. With regard to the nursing resource management information system, the mean information accuracy score was 8.3, on a scale of 2-10 [6]. In the data warehouse-based NMIS study, the risk of misleading conclusions was 27% if users lacked competencies in either nursing management or statistical decision-making, and 40% of participants demanded that data from other HIS subsystems (e.g., hospital infection data) be added [4]. The computerized nurse dependency management system provided a detailed measure of the complexity of patients' needs and their dependency on nurses, while also electronically recording actual care; thus, it could predict the care required by individual patients [10].
Two of the selected studies measured the outcome of 'personnel work patterns'. The perioperative system helped to track continuing staff education and basic life support renewal dates [12]. In the interviews of the self-scheduling system users, participants described how the system provided a feeling of control over their own lives and allowed them to schedule work based on their personal needs without filling out multiple time request forms [11].
The quality of the included studies is summarized in Table 3. In appraising the risk of bias from inappropriate comparability and participant selection, only one study [6] was rated 'low risk of bias' on comparability, as it compared 4 test units and 6 control units through simultaneous parallel measurements. We rated the studies that did not specifically describe their participants as 'unclear' [4,9,10,11,12] and the one study that used convenience sampling as 'high risk of bias' on participant selection [6]. All studies were also 'unclear' regarding blinding of outcome assessment and incomplete outcome data. On the other hand, none of the studies considered confounding variables, so all were rated 'high risk of bias' in that domain. In terms of intervention measurement, three studies were rated 'low risk of bias' [9,10,12], and the others were rated 'high risk of bias' because they used non-standardized measurements, such as interviews [4,6,11]. For two studies [4,11], most of the criteria were rated 'unclear' or 'high risk of bias'.
Discussion
We conducted a systematic review of studies that have evaluated various NMISs in terms of their methods and outcome measures. We attempted to show not only the methods and outcome measures of the evaluation studies, but also the positive and negative aspects of the outcomes reported in the six articles that met the inclusion criteria of our literature search.
The evaluations covered two scheduling programs, two nursing cost-related programs, and two patient care management programs. Half of the studies used mixed-method designs combining quantitative and qualitative approaches (n = 3), and only one study adopted a test group and control group comparison [6]. Although experimental design is a valuable approach for evaluating the outcomes of NMISs, it is very difficult to develop and implement the applicable systems under experimental conditions; thus, experimental designs are not necessarily applicable to the evaluation of NMISs. Instead, evaluations should be multidimensional and should consider human, contextual, and cultural aspects, which usually requires methods more complex than what experimental designs can offer [14]. Several articles have recognized the need for qualitative assessment [15,16] and constructive assessment [17] to obtain deeper knowledge in the evaluation of health information systems, such as of their organizational and social aspects [18].
Among the data collection methods, questionnaires and chart/EMR reviews were frequently used, which is similar to findings from other studies [15,19]. This is also consistent with the finding that half of the studies used questionnaires, most of which were descriptive and correlational, for data collection. In addition, two studies extracted data from the system itself (e.g., hospital financial data, fiscal and human resource data, and electronic nursing workload management reports).
In addition, most studies had multiple evaluation outcomes, with an average of 5.3 outcomes per study. This is consistent with Ammenwerth and de Keizer [18], who reported that 48% of the studies they reviewed had two or more outcomes. Across the articles, usefulness was one of the most frequently measured outcomes; five out of six studies reported positive outcomes, such as improved staff productivity, while one study, based on interviews with nurses, reported a negative response [11]. Time saving and satisfaction were also frequently measured, each assessed in five articles. Although most of these outcomes were reported as positive, two studies included negative results, such as decreased patient care quality and personnel competency, as well as the view that the system was time consuming to use [4,11].
In the current study, the quality of the selected studies was unclear for most of the risk-of-bias criteria in the RoBANS appraisal tool. Friedman and Abbas [20] raised the issue that studies evaluating health information systems still require reliable and valid measurements to achieve scientific rigor; in their review of 25 quantitative studies, only 12% reported reliability and none reported validity [20]. We had similar results, in that reliability and validity were rarely reported. Furthermore, some of the reviewed articles did not report detailed information regarding research design, subjects, or sample sizes, suggesting a need to increase the scholarly rigor of evaluation studies to ensure their quality. This can be improved by adopting a formal evaluation framework. Since various evaluation frameworks have been suggested in several publications, it may also be more effective to assemble a multidisciplinary research team whose members can share their strengths with one another [15,16,19,21].
In addition, we had some difficulty categorizing the NMISs. This may be an inherent issue with review studies that attempt to categorize systems into certain types, because systems may have multiple aspects or dominant and non-dominant functions. Consequently, we considered the dominant aspect of each information system and categorized it accordingly to achieve the best fit; however, some readers might disagree with our categorizations. In addition, so few studies have developed and evaluated information systems focused on nursing management that the quality of the selected studies could not be assured.
Since NMISs are executed within complex and dynamic hospital environments, the interpretation of results may depend on the viewpoint applied in a given study. For example, in terms of evaluating time saving, the substantial number of data elements required by the information system was evaluated negatively [4]. However, compared to the problems that occur with a paper-based system, this negative evaluation can be interpreted as a failure of the computer-based system to satisfy expectations, not a failure to transcend the usefulness of a paper-based system. Likewise, various results should be analyzed and interpreted according to the particular situation and context. Although NMISs have been shown to improve many aspects of nursing, the issues raised in the evaluation studies should be considered in the future development of HISs used by nurses.
According to our review of studies evaluating NMISs in clinical settings, no study with a sufficiently dynamic design, including long-term follow-up (i.e., a longitudinal study) and patient outcomes, has yet been conducted. Accordingly, a plan for evaluation should be integrated at the beginning of the information system development process [15,17].
Acknowledgments
This study was supported in part by a research fund from the College of Nursing, the Catholic University of Korea.
References
1. Tillett J, Senger P. Determining the value of nursing care. J Perinat Neonatal Nurs. 2011; 25(1):6–7.
2. Skytt B, Ljunggren B, Sjoden PO, Carlsson M. The roles of the first-line nurse manager: perceptions from four perspectives. J Nurs Manag. 2008; 16(8):1012–1020.
3. Surakka T. The nurse manager's work in the hospital environment during the 1990s and 2000s: responsibility, accountability and expertise in nursing leadership. J Nurs Manag. 2008; 16(5):525–534.
4. Junttila K, Meretoja R, Seppala A, Tolppanen EM, Ala-Nikkola T, Silvennoinen L. Data warehouse approach to nursing management. J Nurs Manag. 2007; 15(2):155–161.
5. Lammintakanen J, Saranto K, Kivinen T. Use of electronic information systems in nursing management. Int J Med Inform. 2010; 79(5):324–331.
6. Ruland CM, Ravn IH. Usefulness and effects on costs and staff management of a nursing resource management information system. J Nurs Manag. 2003; 11(3):208–215.
7. Friedman CP, Wyatt J. Evaluation methods in medical informatics. 2nd ed. New York (NY): Springer;2006.
8. Burkle T, Ammenwerth E, Prokosch HU, Dudeck J. Evaluation of clinical information systems. What can be evaluated and what cannot? J Eval Clin Pract. 2001; 7(4):373–385.
9. Hlusko DL, Weatherly KS, Franklin KG, Wallace S, Williamson S. Computerization of a nursing financial management system using continuous quality improvement as a framework. Comput Nurs. 1994; 12(4):193–200.
10. Heslop L, Plummer V. Nurse staff allocation by nurse patient ratio vs. a computerized nurse dependency management system: a comparative cost analysis of Australian and New Zealand hospitals. Nurs Econ. 2012; 30(6):347–355.
11. Bailyn L, Collins R, Song Y. Self-scheduling for hospital nurses: an attempt and its difficulties. J Nurs Manag. 2007; 15(1):72–77.
12. Madrid EM. Perioperative system design and evaluation. Semin Perioper Nurs. 1997; 6(2):94–101.
13. Health Insurance Review & Assessment Service. DAMI & RoBANS version 2.0 [Internet]. Seoul: Health Insurance Review & Assessment Service; 2013 [cited 2014 May 15]. Available from: http://www.hira.or.kr/dummy.do?pgmid=HIRAA030067010000&cmsurl=/cms/law/03/08/03/1319759_25126.html&subject=DAMI%20&%20RoBANS%20version%202.0%20by%20HIRA#none.
14. Kaplan B. Evaluating informatics applications: some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform. 2001; 64(1):39–56.
15. Oroviogoicoechea C, Elliott B, Watson R. Review: evaluating information systems in nursing. J Clin Nurs. 2008; 17(5):567–575.
16. Currie LM. Evaluation frameworks for nursing informatics. Int J Med Inform. 2005; 74(11-12):908–916.
17. Ammenwerth E, Brender J, Nykanen P, Prokosch HU, Rigby M, Talmon J, et al. Visions and strategies to improve evaluation of health information systems. Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inform. 2004; 73(6):479–491.
18. Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: trends in evaluation research 1982-2002. Methods Inf Med. 2005; 44(1):44–56.
19. Van Der Meijden MJ, Tange HJ, Troost J, Hasman A. Determinants of success of inpatient clinical information systems: a literature review. J Am Med Inform Assoc. 2003; 10(3):235–243.