Phase I: development of the framework of an NM program
First step: We developed an initial draft of the NM framework based on a literature review of the 3-stage NM model and on our experience of conducting this course, and we allocated the 3 stages to 2 sections: theoretical and practical training.
NM model: The NM model included reading a narrative (“attending”), reflective writing (“representing”), and small-group discussions and sharing experiences with others (“affiliating”) [7]. We adapted the NM model by grounding it in Gagne’s theory and developed a draft of the framework.
Gagne’s instructional design model: Gagne explained that levels of learning include verbal knowledge and memory, mental abilities, cognitive approaches, performance abilities, and emotional beliefs, and that learning objectives should be stated at these levels. Gagne’s theory has 3 components: learning results, learning circumstances, and a set of 9 training activities. The training activities realize specific learning results under interior and exterior learning circumstances, and in each training activity, particular actions are taken to achieve the training results [8]. Gagne’s theory derives from behavioral learning theory and is one of the most popular models of instructional design.
We chose Gagne’s theory in order to match its 9 training activities with the steps of the NM model. In this model, the details of the training activities tailored to the NM sessions were well addressed. Furthermore, this approach emphasizes educational effectiveness through participatory learning in small groups and lifelong learning, themes that are also highlighted in NM.
Therefore, we integrated the 3-stage NM model used for reflective training with Gagne’s theory, which includes 9 steps of training activities (Supplement 2).
Second step: We developed a checklist containing indicators of instructional design components to validate the framework (Table 1). Next, following the RAND/UCLA appropriateness method (RAM), developed by the RAND Corporation and the University of California, Los Angeles for ranking indicators, we measured responses on a 9-point scale ranging from 1 (completely disagree) to 9 (completely agree). The criteria for reaching agreement (consensus) on each indicator were set as follows: 1–3, “inappropriate”; 4–6, “uncertain”; and 7–9, “appropriate” [9,10].
Table 1. Checklist of indicators (Gagne’s instructional design components), each rated on the 9-point appropriateness scale (1–3, inappropriate; 4–6, uncertain; 7–9, appropriate)

No. | Gagne’s instructional design component
1 | Cognitive approaches
2 | Performance abilities
3 | Emotional beliefs
4 | Exterior circumstances
5 | Interior circumstances
6 | Step 1. Attracting and attending
7 | Step 2. Raising learners’ awareness of objectives
8 | Step 3. Evoking memory of past knowledge
9 | Step 4. Presenting training materials
10 | Step 5. Providing a guide for learning
11 | Step 6. Examining performance
12 | Step 7. Giving feedback
13 | Step 8. Evaluating performance
14 | Step 9. Improving and transferring learning

(1) Agreement: If all ratings of an indicator fell within a single interval (1–3, 4–6, or 7–9), agreement was attained. (2) Disagreement: If the ratings were distributed across all 3 intervals, there was disagreement. (3) Acceptability: Acceptability was defined as a median rating in the range of 7–9 with no disagreement; otherwise, the indicator was considered inappropriate.
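These consensus rules can be sketched in code. The following is a minimal illustration of the classification logic described above; the function and label names are our own, not taken from the RAM literature.

```python
from statistics import median

def classify_indicator(ratings):
    """Classify one indicator's expert ratings (each from 1 to 9)
    using the RAM-style rules described above.

    Returns a tuple: the spread of ratings ('agreement',
    'disagreement', or 'mixed') and the acceptability decision
    ('appropriate' or 'inappropriate').
    """
    def interval(score):
        # Map a rating to its interval: 1-3 -> 0 (inappropriate),
        # 4-6 -> 1 (uncertain), 7-9 -> 2 (appropriate)
        return (score - 1) // 3

    intervals = {interval(r) for r in ratings}
    if len(intervals) == 1:
        spread = "agreement"      # all ratings fall in a single interval
    elif len(intervals) == 3:
        spread = "disagreement"   # ratings span all 3 intervals
    else:
        spread = "mixed"          # ratings span 2 intervals

    # Acceptable: median rating in 7-9 with no disagreement
    accepted = 7 <= median(ratings) <= 9 and spread != "disagreement"
    return spread, "appropriate" if accepted else "inappropriate"
```

For example, ratings of [7, 8, 9, 7, 8, 9, 7] from 7 experts all fall in the 7–9 interval, so the indicator shows agreement and is accepted, whereas ratings spread across all 3 intervals yield disagreement and rejection regardless of the median.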
Phase II: expert panel
To validate the NM framework, we formed an expert panel consisting of multidisciplinary experts qualified to comment on NM training, and we also incorporated instructional design expertise into this process. The framework approval process was conducted in 3 rounds. We applied RAM, a modified Delphi method, to determine the experts’ agreement [10]. This method combines Delphi techniques (mailed questionnaires) with nominal group techniques (face-to-face sessions); it is a dynamic process in that, in addition to providing a clear criterion for ranking the indicators, it takes advantage of group interactions and discussions [9]. Therefore, we used this method to avoid ambiguity and the possibility of overlooking important indicators or details.
We performed the first round through email. The second and third rounds were conducted through face-to-face meetings to promote interaction and discussion among the experts, with a 2-week interval between rounds.
First round (email): We explained the purpose of the NM educational framework to each of the 7 experts separately and asked them to participate in the expert panel to validate the framework. After the experts agreed to collaborate, we emailed them the initial draft and the checklist, along with an explanation of the ranking criteria and instructions for ranking the indicators. We asked the experts to rate the draft version on the indicators listed in the checklist and to present their comments on ways to supplement and improve the framework at the next meeting.
Second round (face to face): We held a face-to-face meeting with the 7 experts, in which a moderator guided the panel and facilitated group discussions to gather the experts’ comments. At the beginning of the session, we provided an overview of the goal of formulating the framework and ranking the criteria in the checklist. In this session, we compared the first-round ratings for each indicator, addressed them in a group discussion, and recorded the experts’ suggestions on each indicator. We then revised the indicators according to the experts’ comments (Tables 2, 3). After revising the draft, we sent the edited version to the experts and asked them to present their ratings and recommendations again. In addition, we invited them to attend a face-to-face meeting in the third round.
Table 2. Revised checklist of indicators after the second round, each rated on the 9-point appropriateness scale (1–3, inappropriate; 4–6, uncertain; 7–9, appropriate)

No. | Phase | Component | Indicator
1 | Before training | Learning results | Mental abilities
2 | Before training | Learning results | Cognitive approaches
3 | Before training | Learning results | Verbal knowledge and memory
4 | Before training | Learning results | Performance abilities
5 | Before training | Learning results | Emotional beliefs
6 | Before training | Learning circumstances | Exterior
7 | Before training | Learning circumstances | Interior
8 | Training | 9 Training activities | Step 1. Attracting and attending
9 | Training | 9 Training activities | Step 2. Raising learners’ awareness of objectives
10 | Training | 9 Training activities | Step 3. Evoking memory of past knowledge
11 | Training | 9 Training activities | Step 4. Presenting training materials
12 | Training | 9 Training activities | Step 5. Providing a guide for learning
13 | Training | 9 Training activities | Step 6. Examining performance
14 | Training | 9 Training activities | Step 7. Giving feedback
15 | Training | 9 Training activities | Step 8. Evaluating performance
16 | After training | 9 Training activities | Step 9. Improving and transferring learning

Table 3.
Recommendations of the experts in round 2
No. | Indicator in round 1 | Recommendations of the experts in round 2
1 | Cognitive approaches | Separate the learning results related to the cognitive domain based on Gagne’s instructional design model; add 2 indicators (mental abilities, and verbal knowledge and memory); place learning circumstances based on learning results in a separate table.
2 | Performance ability | Place learning circumstances based on learning results in a separate table.
3 | Emotional beliefs | Previous recommendation
4 | Exterior circumstances | Previous recommendation
5 | Interior circumstances | Previous recommendation
6 | Step 1. Attracting and attending | In writing the instructions for steps 1 to 9, make minor edits so that the wording is easy for clinical teachers to understand.
7 | Step 2. Raising learners’ awareness of objectives | Previous recommendation
8 | Step 3. Evoking memory of past knowledge | Previous recommendation
9 | Step 4. Presenting training materials | Previous recommendation
10 | Step 5. Providing a guide for learning | Previous recommendation
11 | Step 6. Examining performance | Previous recommendation
12 | Step 7. Giving feedback | Previous recommendation
13 | Step 8. Evaluating performance | Previous recommendation
14 | Step 9. Improving and transferring learning | Previous recommendation

Third round (face to face): We held the second face-to-face meeting to finalize the draft. The 7 experts suggested minor changes, and consensus was finally achieved. We extracted the median, minimum, and maximum ratings for each indicator. In the second round, we corrected the ambiguities pointed out in the first round, and in the third round, the draft was validated with minor edits to the wording.
The median of the ratings was calculated to determine the level of agreement among the experts for each indicator in the checklist. The flow diagram of this panel study is presented in Fig. 1.
Fig. 1.
Flow diagram of this panel study.
