
Shim, Lee, Han, Jeon, Hong, and Lee: Feasibility of Virtual Reality-Based Auditory Localization Training With Binaurally Recorded Auditory Stimuli for Patients With Single-Sided Deafness

Abstract

Objectives

To train participants to localize sound using virtual reality (VR) technology, appropriate auditory stimuli that contain accurate spatial cues are essential. The generic head-related transfer function underlying the programmed spatial audio in VR does not reflect individual variation in monaural spatial cues, which are critical for auditory spatial perception in patients with single-sided deafness (SSD). Because binaural difference cues are unavailable, impaired auditory spatial perception is a typical problem in the SSD population and warrants intervention. This study assessed the applicability of binaurally recorded auditory stimuli in VR-based training for sound localization in SSD patients.

Methods

Sixteen subjects with SSD and 38 normal-hearing (NH) controls underwent VR-based training for sound localization and were assessed 3 weeks after completing training. The VR program incorporated prerecorded auditory stimuli, recorded individually for each subject in the SSD group and over an anthropometric model in the NH group.

Results

Sound localization performance improved significantly in both groups after training, and the benefits were retained for an additional 3 weeks. Subjective improvements in spatial hearing were confirmed in the SSD group.

Conclusion

VR-based training for sound localization using binaurally recorded stimuli, recorded individually for the SSD group, was found to be effective and beneficial in individuals with SSD and in NH controls. Furthermore, VR-based training does not require sophisticated instruments or setups. These results suggest that this technique represents a new treatment option for impaired sound localization.

INTRODUCTION

Spatial hearing in the single-sided deafness (SSD) population exhibits wide variability, with some patients showing relatively good sound localization performance, while others cannot differentiate between sounds presented on the left or right [1]. The variable impact of hearing loss on sound localization in individuals with SSD suggests that adaptive processes in the higher-level auditory pathway might improve spatial hearing through auditory training. Evidence for the effectiveness of auditory localization training has been shown in hearing-impaired patients [2,3], subjects with normal hearing [4-6], and those with simulated asymmetric hearing [7-9]. Physical resources are a practical obstacle to applying this training in clinics, as the training requires a spacious sound-attenuated room with sophisticated instruments to control multiple calibrated speakers. The setup for multimodal protocols, such as coupling auditory tasks with visual feedback or motor responses (both of which significantly promoted the effect of training in previous studies), is even more complicated [9-11].
Virtual reality (VR) hardware, which has recently become available, allows multimodal integration with auditory tasks in an artificially constructed space. However, the virtual audio system based on generic head-related transfer functions (HRTFs) would not be optimal for use in patients with SSD [12,13]. To localize auditory sources, patients with SSD who are deprived of binaural difference cues largely depend on monaural spectral cues ignored by individuals with normal symmetric hearing [9,14,15]. Since factors including sizes and shapes of the torso, head, and pinna, which vary from person to person, contribute significantly to monaural HRTFs, individualized/custom HRTFs are needed for patients with SSD to perceive spatial location accurately. As previously shown in a simplified procedure with a set of binaural recordings of stimuli [16], the incorporation of binaurally recorded auditory stimuli into a VR-based training program could be applied to patients with spatial hearing deficits to receive sound localization training based on personally tailored stimuli. Training in a virtual environment requires only a consumer VR device rather than sophisticated instruments and equipped spaces.
In this study, we assessed the feasibility of using binaurally recorded spatial stimuli to train patients to carry out azimuthal sound localization in a virtual environment. For SSD patients, the binaurally recorded stimuli were matched to virtual speakers at the exact azimuthal location. Auditory spatial performance and subjective scores were evaluated before and after training to measure the effects of training, as well as 3 weeks after the end of training, to determine the persistence of the training effects. The acceptability of the training program was also tested in normal-hearing (NH) controls with non-individualized stimuli. Different training protocols were applied to the two groups (SSD and NH), as these groups use different auditory spatial cues and show different sound localization abilities at baseline.

MATERIALS AND METHODS

Participants

A total of 40 NH and 20 SSD participants were recruited for the VR training. Of these, 38 NH and 16 SSD subjects were included in the data analysis; 2 NH and 4 SSD subjects dropped out owing to personal scheduling issues. The SSD group was defined according to the international consensus on the audiological criteria of SSD: a pure-tone average (PTA) of 30 dB HL or better in the better ear and of 70 dB HL or worse in the impaired ear [17]. All subjects received monetary compensation based on the number of their visits.
The Institutional Review Board of Hallym University Sacred Heart Hospital approved the study protocol (No. 2020-10-013), and written informed consent was obtained from each participant.

Auditory stimuli

Subjects wore in-ear binaural microphones (SP-TFB-2, Sound Professionals Inc.) placed carefully at the entrance of the ear canal to minimize disruption of pinna cues, and were seated 1.1 m from a speaker at zero elevation in a clinical audio booth (2.05×1.90×1.85 m). Subjects were instructed to keep their head and body still while being rotated on a chair through nine angles (0°, ±15°, ±30°, ±45°, and ±60°) so that stereo stimuli reflecting each azimuthal source could be acquired, yielding a total of 81 stimuli per participant (9 Korean digits×9 angles). For the NH group, stimuli were created by recording binaurally over a standard anthropometric model of the head and torso (KEMAR-45BA, GRAS Sound & Vibration) to capture binaural difference cues; the KEMAR manikin was placed on the rotating chair, and the same recording procedure was performed. The recorded stimuli yielded a clear spatial percept in the frontal azimuth based on the available interaural level difference (ILD), interaural time difference (ITD), and spectral cues (Fig. 1A and B). The stimuli were edited using the Adobe Audition program (Adobe) at a sampling rate of 48,000 Hz.
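The spatial content of the recordings was verified by computing the ITD, ILD, and power spectrum per frequency band (Fig. 1B). As a rough illustration of that analysis, the Python sketch below estimates the ITD and ILD of one stereo recording within a single band; it is not the authors' code, and the file name and band edges are assumptions.

```python
# Minimal sketch of the ITD/ILD analysis summarized in Fig. 1B.
# The file name and the 500-1,500 Hz band are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import firwin, filtfilt, correlate

fs, stereo = wavfile.read("ssd01_az+45_ilgop.wav")  # hypothetical recording
left = stereo[:, 0].astype(float)
right = stereo[:, 1].astype(float)

def band_itd_ild(left, right, fs, lo_hz, hi_hz):
    """Estimate ITD (s) and ILD (dB, here) within one frequency band."""
    numtaps = int(0.2 * fs) + 1                 # ~0.2-s FIR, as in Fig. 1B
    taps = firwin(numtaps, [lo_hz, hi_hz], fs=fs, pass_zero=False)
    l = filtfilt(taps, [1.0], left, padlen=0)   # zero-phase (noncausal) filtering
    r = filtfilt(taps, [1.0], right, padlen=0)
    lag = np.argmax(correlate(l, r, mode="full")) - (len(l) - 1)
    itd = lag / fs                              # lag of maximum cross-correlation
    rms_l = np.sqrt(np.mean(l ** 2))
    rms_r = np.sqrt(np.mean(r ** 2))
    ild = 20 * np.log10(rms_l / rms_r)          # RMS ratio, expressed in dB
    return itd, ild

itd, ild = band_itd_ild(left, right, fs, 500, 1500)
print(f"ITD = {itd * 1e3:.2f} ms, ILD = {ild:.1f} dB")
```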

VR programming

Audiovisual VR training was implemented using an HTC Vive system, comprising one head-mounted display (HMD; resolution, 1,080×1,200 pixels per eye; field of view, 110°; refresh rate, 90 Hz), one controller, and two lighthouse base stations for outside-in head position tracking. All stimulus protocols and virtual speakers were developed with the Unity program (Unity Technologies).
The subjects sat on a chair facing a desk that held a monitor and two sensors, and wore the HMD in a quiet room (3×2.5 m). Auditory stimuli were delivered through the headphones integrated into the HMD. The virtual environment consisted of an empty square room outlined with horizontal/vertical lines, and the direction of the subject’s head was marked by a small white cross (Fig. 1C). Virtual speakers were positioned at ear level across 120° of the frontal azimuth, and the number of speakers varied from 3 to 9 to create different difficulty levels.
The created stimuli were matched to virtual speakers at identical azimuthal locations. To initiate a trial, the subject directed their head toward the center speaker, which turned green (Fig. 1C). A stimulus was then presented randomly from one of the virtual speakers. The subject was instructed to turn their head to point to the speaker perceived to emit the sound and to press the button on the handheld controller. Visual feedback was provided by changing the color of the virtual speaker according to response accuracy: blue for correct, red for incorrect, and orange for almost correct (i.e., selecting a speaker adjacent to the correct one) (Fig. 1C).
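To make the trial mechanics concrete, the following Python sketch reproduces the trial-and-feedback logic just described. The actual program was implemented in Unity; the function names, the nearest-speaker response rule, and the 7-speaker angle subset are assumptions rather than details from the authors' implementation.

```python
# Language-agnostic sketch of one training trial (the real program is in Unity).
import random

RECORDED_ANGLES = [-60, -45, -30, -15, 0, 15, 30, 45, 60]  # recording azimuths

SPEAKER_LAYOUTS = {
    3: [-60, 0, 60],
    5: [-60, -30, 0, 30, 60],
    7: [-45, -30, -15, 0, 15, 30, 45],  # assumed 7-angle subset
    9: RECORDED_ANGLES,
}

def run_trial(n_speakers, play_stimulus, head_azimuth_at_button_press):
    """One trial: play a stimulus from a random virtual speaker, read the
    head direction at the button press, and color-code the feedback."""
    angles = SPEAKER_LAYOUTS[n_speakers]
    target = random.choice(angles)                # random source speaker
    play_stimulus(target)                         # prerecorded stereo file for that azimuth
    response = head_azimuth_at_button_press()     # from HMD head tracking
    chosen = min(angles, key=lambda a: abs(a - response))  # nearest speaker (assumed rule)
    if chosen == target:
        color = "blue"                            # correct
    elif abs(angles.index(chosen) - angles.index(target)) == 1:
        color = "orange"                          # almost correct (adjacent speaker)
    else:
        color = "red"                             # incorrect
    return chosen == target, color
```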

VR-based training protocol

The VR-based sound localization training consisted of four levels according to the number of virtual speakers (3, 5, 7, and 9) (Fig. 1C). Each level comprised a training set of 10 trials per speaker and a test set of 20 trials, and the difficulty was adjusted based on concurrent performance evaluation. Training began at level 1, with three speakers. The training set was repeated until the rate of correct responses over the last 20 trials reached 70%; once a subject reached this criterion, the VR program automatically loaded the 20-trial test set for that level. If the correct response rate on the test set was 70% or higher, the subject proceeded to the next level; if not, training was repeated at the same level. If a subject scored over 90% on the test set in three consecutive sessions, that subject was allowed to skip the level at the next visit. VR-based training consisted of three 40-minute sessions per week, for 3 consecutive weeks in the NH group and 6 consecutive weeks in the SSD group.
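The progression rules can be summarized in code form. The sketch below is an assumed simplification in Python (the Unity state machine itself is not published); here run_trial is any callable that returns True for a correct response, and the per-speaker structure of the training set is collapsed into a single sliding criterion.

```python
# Assumed simplification of the level-progression rules described above.
from collections import deque

def run_level(n_speakers, run_trial):
    """Train until >=70% correct over the last 20 training trials,
    then return the score on the 20-trial test set."""
    recent = deque(maxlen=20)                     # sliding window of outcomes
    while not (len(recent) == 20 and sum(recent) >= 14):   # 14/20 = 70%
        recent.append(run_trial(n_speakers))
    return sum(run_trial(n_speakers) for _ in range(20)) / 20

def run_session(run_trial, levels=(3, 5, 7, 9)):
    """Advance through levels; a level repeats until its test is passed.
    (>=90% on three consecutive sessions would allow skipping a level at
    the next visit; that bookkeeping is omitted here.)"""
    for n_speakers in levels:
        while run_level(n_speakers, run_trial) < 0.70:
            pass                                  # repeat the same level
```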

Sound localization assessments in VR and the sound field

Sound localization and subjective spatial hearing were assessed before (Pre), after (Post), and 3 weeks after (Post-3w) the training. Sound localization in the VR environment was assessed at all three time points, except for 1 SSD subject at Post and 5 NH subjects at Post-3w. The test sets of 20 trials for all four levels of the VR-based training were used for this evaluation. The Pre-test was conducted before the first training session, and the Post-test a few days after the last training session.
Additionally, a sound localization test (1 session=50 trials: 5 speakers×10 repeats) was conducted in a sound-attenuated booth with five speakers at varying azimuthal locations (0°, ±30°, and ±60°), 1 m from the subject at ear level. In each trial, one of the nine Korean digits used in the VR program was presented randomly at the most comfortable level for each subject, and subjects reported the perceived source location among the five speakers using a Bluetooth keypad. Owing to a technical issue, only nine subjects in the SSD group completed the sound field test at all three time points (Pre, Post, and Post-3w). The test in the sound field was not repeated in the NH group, which showed ceiling performance at Pre.

Subjective evaluation of spatial ability: K-SSQ and visual analog scale scores

To evaluate subjective improvement, the Korean version of the Speech, Spatial, and Qualities of Hearing Scale (K-SSQ) [18] was administered to all 54 subjects at the three time points, except for 6 NH subjects at Post-3w. In the SSD group, a visual analog scale (VAS) was adopted to measure participants’ subjective perception of their sound localization in daily life; the VAS was administered at Post and Post-3w to track changes after training. Scores ranged from –10 (marked worsening) to +10 (marked improvement) relative to Pre. Subjects completed eight VAS ratings covering four types of sounds (nature sounds, human voices, environmental sounds, and machinery sounds) in two listening situations (indoors and outdoors), and the eight scores were averaged per subject and session.

Statistics

Statistical tests were conducted using R [19] with the rstatix package [20]. Normality was assessed using the Shapiro-Wilk test. Group differences in age and PTA were examined with t-tests, and the sex distribution with Fisher’s exact test. The lme4 package was used to fit a linear mixed-effects model testing whether sound localization performance in VR changed over time (fixed effect), with subject ID included as a random effect [21]. Post-hoc tests were performed using the emmeans package with Tukey’s honestly significant difference (HSD) method [22]. Correct response rates for sound localization in the sound field were assessed using nonparametric repeated-measures analysis of variance (ANOVA) in jmv [23]. Correct response rates on the two sound localization tests (VR and the sound field) were compared to the chance level with one-tailed t-tests. Changes in K-SSQ scores over time were analyzed using a linear mixed-effects model in the NH group and a nonparametric repeated-measures ANOVA in the SSD group. In the SSD group, VAS scores at Post were compared to those at Post-3w using a paired t-test. All data are expressed as mean±standard deviation unless otherwise stated.
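For readers without R, the mixed-effects specification (random intercept per subject, time as fixed effect) can be sketched with Python's statsmodels. The authors used lme4 in R; the data file and column names below are assumptions.

```python
# Roughly equivalent to the lme4 analysis described above, sketched with
# statsmodels. Hypothetical long-format table: one row per subject x time
# point, with columns "subject", "time" (Pre/Post/Post3w), "correct_rate".
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("localization_scores_level1.csv")  # hypothetical file

# Fixed effect of time; random intercept per subject.
model = smf.mixedlm("correct_rate ~ C(time)", df, groups=df["subject"]).fit()
print(model.summary())
```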

RESULTS

Demographic characteristics

The SSD group was significantly older than the NH group (t(22.2)=–2.19, P=0.040) (Table 1). The hearing threshold of the intact ear in the SSD group was within the normal range (range, –2 to 20 dB HL) (Fig. 2). However, as expected given the significant age difference, the hearing thresholds of the intact ear in the SSD group were significantly worse than those of the NH group’s right (t(23.8)=–6.9, P<0.001) and left (t(28.9)=–6.3, P<0.001) ears. The sex distribution did not differ significantly between the two groups (P=0.384).

Sound localization improved in the SSD group but with slower progress than in the NH group

In the first training session, 25% of the SSD group proceeded to level 2 by scoring over 70% at level 1, but no SSD subject was able to skip level 1 until the seventh session. In the last session, two SSD subjects skipped level 1, and three proceeded to level 3 by scoring over 70% at level 2; no SSD subject reached level 4. In the NH group, 26 of 38 subjects reached level 4 in the first session, all subjects reached level 4 by scoring over 70% at level 3 in the third session, and 35 subjects skipped level 1 in the fourth session. In the last session, only one NH subject started at level 1, and 16 trained at level 4 for the entire session.
The correct response rates on the sound localization test in the VR environment were compared across the three time points (Pre, Post, and Post-3w) and the four levels in each group (Fig. 3). The mixed-effects model revealed that the correct response rate in the VR environment varied significantly across time at all four levels, both in the SSD group (level 1: χ2(2)=30.6, P<0.001; level 2: χ2(2)=35.6, P<0.001; level 3: χ2(2)=16.3, P<0.001; and level 4: χ2(2)=17.6, P<0.001) and in the NH group (level 1: χ2(2)=21.1, P<0.001; level 2: χ2(2)=46.7, P<0.001; level 3: χ2(2)=46.9, P<0.001; and level 4: χ2(2)=66.0, P<0.001). Pairwise comparisons using Tukey’s HSD test indicated that the correct response rates increased significantly after training (both Post and Post-3w) compared to the scores before training (Pre) at all four difficulty levels in both groups (P<0.01 in the SSD group and P<0.001 in the NH group); in contrast, performance at Post did not differ significantly from that at Post-3w (P>0.05 at all levels in both groups). The correct response rates at Pre in the NH group were significantly higher than chance (Fig. 3A, dashed lines) at all four levels. In the SSD group, the correct response rates at Pre were significantly better than chance at levels 1–3 but not at level 4 (t(15)=0.41, P=0.344) (Fig. 3B); level 4 performance improved to significantly above chance at Post (t(15)=4.79, P<0.001) and Post-3w (t(15)=4.05, P<0.001).
In the nine SSD subjects who completed the sound field test at all three time points, nonparametric repeated-measures ANOVA revealed a significant difference in correct response rates over time (χ2(2)=8.40, P=0.015) (Fig. 3C). Post-hoc testing indicated that training significantly improved performance (Pre vs. Post: P=0.002; Pre vs. Post-3w: P=0.034). Performance at Post (45.6%±13.7%) was numerically better than at Post-3w (37.6%±10.9%), but the difference was not significant (P=0.184).

Subjective improvement in spatial hearing in the SSD group

Subjective improvement in spatial hearing was assessed with K-SSQ and VAS scores. On the K-SSQ, the NH group showed no difference over time on any of the three subscales (P>0.05), whereas the SSD group showed a significant change over time on the spatial subscale (χ2(2)=6.13, P=0.047) (Fig. 4); specifically, the SSD group’s scores at Post-3w were higher than those at Pre (P=0.019) and Post (P=0.045). On the VAS, all subjects but one reported improvement relative to Pre at Post (4.51±1.91) (Fig. 4C), and the scores did not change significantly at Post-3w (4.28±2.84; t(14)=0.28, P=0.783).

DISCUSSION

This study aimed to test the feasibility of VR training for SSD patients with auditory stimuli created by binaural recordings. Subject-specific stimuli were used for sound localization training in the SSD group to provide all possible spatial cues, as this group is vulnerable to potential distortion of spatial information when using audio programs with standard HRTFs. The results of this study demonstrated that subject-specific stimuli could be applied in VR-based localization training and suggested that this training may be an effective treatment option for improving spatial hearing in patients with SSD. After training with individualized stimuli, the SSD group exhibited a significant increase in sound localization performance as measured both in the VR environment and in the sound field (Fig. 3); these subjects also reported subjective improvement (Fig. 4). The training-induced improvements were retained after 3 weeks (Fig. 3), consistent with previous studies showing that training exerted sustained effects for up to one month afterward in individuals with hearing impairment [3].

Effective auditory localization training in SSD patients with individualized spatial stimuli

In previous studies, a longer duration [24] and earlier exposure [1] to single-sided hearing were associated with better sound localization in this population, suggesting that an adaptive process in the higher-level auditory pathway is facilitated by experience with single-sided hearing. When binaural hearing is artificially perturbed, cue weighting across auditory spatial cues changes to increase dependency on more reliable cues [25]. Dynamic changes in spatial hearing occur in response to acute monaural plugging and are modified by training, allocating perceptual weight preferentially to the head shadow effect and monaural spectral cues from the intact ear [7,9]. In patients with asymmetric hearing loss, the redistribution of cue weighting varies depending on the reliability of the remaining cues. For example, residual hearing at low frequencies strengthens ITD cues, and residual hearing at high frequencies strengthens spectral cues [14,25]. Considering the effects of experiences and the diverse audiological profiles of the clinical population with asymmetric hearing loss, a training program encompassing all possible spatial cues would enable trainees to develop the most efficient combination of spatial hearing strategies [11].
This study applied individualized binaural recordings to produce personalized spatial auditory stimuli for VR-based training in the SSD group. Individualized binaural recordings have been used to investigate the neurophysiological responses underlying auditory spatial processing in experimental paradigms in which the task is to locate spatially separated speakers. Prerecorded stereo stimuli presented via earphones produced distinct auditory cortical responses, contributing to the spatial perception that decodes the azimuthal location of the source [26].
To create an immersive VR environment, software development tools provide solutions for simulating 3D audio [27] based on a standard HRTF database created using anthropometric averages. As described in the Introduction, since the standard HRTF database does not incorporate individual variation in spectral cues, the benefits of individualized HRTF demonstrated in normal hearing subjects [12,13] are expected to be greater in patients with SSD. However, the generation of personal HRTFs based on individual measurement is highly time-consuming, technically difficult, and expensive [28]. Our method of binaural recording could easily capture individualized spatial information in the head-centered position. Therefore, our method could be helpful to compensate for the disadvantages of auditory training in the sound field and avoid the disadvantage of using a standardized HRTF database in VR-based training.

Benefits of VR-based training for individuals with hearing impairments

On-site auditory localization training requires a wide sound-attenuated booth with multiple calibrated speakers and a control system, and even more sophisticated instrumentation is needed to produce multimodal interactions, such as visual feedback or motor responses, which have been reported to promote the training effect [9,29]. As VR-related technology advances, consumer VR hardware enables multimodal feedback, including auditory stimuli, visual feedback, and motor responses, in auditory training programs without the need for additional instruments.
The present study provided different training protocols in two groups. The NH group received fewer training sessions with limited auditory spatial cues but showed faster progress to a higher level of difficulty than the SSD group. Although the purpose of this study was not to optimize training protocols, we observed that the SSD group needed more sessions to show effective results. In some patients, the improvement observed immediately after training was not retained after 3 weeks (Fig. 3C and 4C), indicating that more sustained training is required to maintain the benefits of training. In this regard, VR-based training that enables home training can be beneficial for minimizing the number of clinical visits required and reducing travel time and cost burdens.

Limitations of the study and future works

This study did not test whether training with stimuli based on the standard HRTF would also benefit the SSD group, although some benefit would be expected, albeit to a limited degree. The results of the NH group align with previous evidence showing that subjects with normal symmetrical hearing benefit from VR-based sound localization training with non-individualized stimuli [13,30]. In patients with SSD, training with standardized stimuli might facilitate the learning of spatial hearing strategies that rely on monaural level cues; nevertheless, the contribution of other spatial cues, such as interaural differences and monaural spectral cues, would be largely limited.
In this study, head movement was not allowed during the presentation of training stimuli: subjects needed to keep their head fixed at the center of the virtual environment to correctly receive the spatial information contained in the prerecorded stimuli. The method could be extended to moving auditory objects if stimuli were recorded with moving sources, with head movement still restricted during stimulus presentation. Allowing free head movement in VR would require real-time filtering based on individualized HRTF measurements.
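For context, such head-tracked rendering amounts to convolving the source signal with the left and right head-related impulse responses (HRIRs) for the current head-relative source direction, updated whenever the head moves. A minimal static sketch of this operation follows; the HRIRs here are crude placeholders (pure delay plus attenuation), whereas a real application would load measured, ideally individualized, HRIRs.

```python
# Minimal sketch of static binaural rendering by HRIR convolution; a
# head-tracked renderer would repeat this with an updated HRIR pair on
# every head movement. The HRIRs below are placeholders, not measurements.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with an HRIR pair so it is perceived at
    that pair's source direction; output is normalized to avoid clipping."""
    stereo = np.stack([fftconvolve(mono, hrir_left),
                       fftconvolve(mono, hrir_right)], axis=1)
    return stereo / np.max(np.abs(stereo))

fs = 48_000
hrir_l = np.zeros(256); hrir_l[0] = 1.0    # ear nearer the source
hrir_r = np.zeros(256); hrir_r[29] = 0.5   # ~0.6-ms ITD, ~-6 dB ILD
stereo = render_binaural(np.random.randn(fs), hrir_l, hrir_r)  # 1 s of noise
```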
The results of this study show that VR-based training for sound localization was effective in patients with SSD when performed using binaurally recorded stimuli. The SSD group exhibited significant improvements in terms of sound localization and subjective satisfaction; these improvements persisted for 3 weeks after training. Further research is required to fine-tune training protocols and to determine the long-term effects.

HIGHLIGHTS

▪ Sound localization training using virtual reality (VR) technology is helpful for patients with single-sided deafness (SSD) and does not require complicated instruments or setups.
▪ The generic head-related transfer function in VR, however, does not reflect individual variation in monaural spatial cues, which is critical for auditory spatial perception in patients with SSD.
▪ The feasibility of using binaurally recorded spatial stimuli was assessed for azimuthal sound localization training in a virtual environment.
▪ Sound localization performance improved after VR-based training in individuals with SSD and normal hearing.
▪ VR-based training represents a new treatment option for impaired sound localization.

Notes

No potential conflict of interest relevant to this article was reported.

AUTHOR CONTRIBUTIONS

Conceptualization: SKH, HJL. Methodology: JL, JHH. Software: JL, HJ, SKH. Formal analysis: JL. Data curation: LS, JHH, HJL. Visualization: LS, JL, HJ, HJL. Supervision: HJL. Funding acquisition: JL, JHH, HJL. Writing–original draft: LS, HJL. Writing–review & editing: JL, SKH, HJL.

ACKNOWLEDGMENTS

This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (NRF-2022R1A2C1004862, NRF-2019R1A2B5B01070129, NRF-2020R1A6A3A01099260, RS-2023-00244421, and RS-2023-00243712) and the Hallym University Research Fund and Hallym University Medical Center Research Fund.
The authors are grateful to Inyong Choi and Il Joon Moon for providing technical support and to all subjects for their participation.

REFERENCES

1. Firszt JB, Reeder RM, Holden LK. Unilateral hearing loss: understanding speech recognition and localization variability-implications for cochlear implant candidacy. Ear Hear. 2017; Mar/Apr. 38(2):159–73.
2. Luntz M, Brodsky A, Watad W, Weiss H, Tamir A, Pratt H. Sound localization in patients with unilateral cochlear implants. Cochlear Implants Int. 2005; Mar. 6(1):1–9.
3. Kuk F, Keenan DM, Lau C, Crose B, Schumacher J. Evaluation of a localization training program for hearing impaired listeners. Ear Hear. 2014; Nov-Dec. 35(6):652–66.
4. Mendonca C, Campos G, Dias P, Santos JA. Learning auditory space: generalization and long-term effects. PLoS One. 2013; Oct. 8(10):e77900.
5. Steadman MA, Kim C, Lestang JH, Goodman DF, Picinali L. Short-term effects of sound localization training in virtual reality. Sci Rep. 2019; Dec. 9(1):18284.
6. Hanenberg C, Schluter MC, Getzmann S, Lewald J. Short-term audiovisual spatial training enhances electrophysiological correlates of auditory selective spatial attention. Front Neurosci. 2021; Jul. 15:645702.
7. Hofman PM, Van Riswick JG, Van Opstal AJ. Relearning sound localization with new ears. Nat Neurosci. 1998; Sep. 1(5):417–21.
8. Yu F, Li H, Zhou X, Tang X, Galvin JJ 3rd, Fu QJ, et al. Effects of training on lateralization for simulations of cochlear implants and single-sided deafness. Front Hum Neurosci. 2018; Jul. 12:287.
9. Zonooz B, Van Opstal AJ. Differential adaptation in azimuth and elevation to acute monaural spatial hearing after training with visual feedback. eNeuro. 2019; Nov. 6(6):1–18.
10. Cai Y, Chen G, Zhong X, Yu G, Mo H, Jiang J, et al. Influence of audiovisual training on horizontal sound localization and its related ERP response. Front Hum Neurosci. 2018; Oct. 12:423.
11. Kumpik DP, Campbell C, Schnupp JW, King AJ. Re-weighting of sound localization cues by audiovisual training. Front Neurosci. 2019; Nov. 13:1164.
12. Jenny C, Reuter C. Usability of individualized head-related transfer functions in virtual reality: empirical study with perceptual attributes in sagittal plane sound localization. JMIR Serious Games. 2020; Sep. 8(3):e17576.
13. Stitt P, Picinali L, Katz BF. Auditory accommodation to poorly matched non-individual spectral localization cues through active learning. Sci Rep. 2019; Jan. 9(1):1063.
14. Agterberg MJ, Hol MK, Van Wanrooij MM, Van Opstal AJ, Snik AF. Single-sided deafness and directional hearing: contribution of spectral cues and high-frequency hearing loss in the hearing ear. Front Neurosci. 2014; Jul. 8:188.
15. Kumpik DP, King AJ. A review of the effects of unilateral hearing loss on spatial hearing. Hear Res. 2019; Feb. 372:17–28.
16. Kulkarni A, Colburn HS. Role of spectral detail in sound-source localization. Nature. 1998; Dec. 396(6713):747–9.
17. Van de Heyning P, Tavora-Vieira D, Mertens G, Van Rompaey V, Rajan GP, Muller J, et al. Towards a unified testing framework for single-sided deafness studies: a consensus paper. Audiol Neurootol. 2016; 21(6):391–8.
18. Kim BJ, An YH, Choi JW, Park MK, Ahn JH, Lee SH, et al. Standardization for a Korean version of the Speech, Spatial and Qualities of Hearing Scale: study of validity and reliability. Korean J Otorhinolaryngol-Head Neck Surg. 2017; Jun. 60(6):279–94.
19. R Core Team. R: a language and environment for statistical computing [Internet]. R Foundation for Statistical Computing;2021. [cited 2023 Jun 1]. Available from: https://www.R-project.org/.
20. Kassambara A. Rstatix: pipe-friendly framework for basic statistical tests [Internet]. R Foundation for Statistical Computing;2023. [cited 2023 Jun 1]. Available from: https://CRAN.R-project.org/package=rstatix.
21. Bates D, Machler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015; Oct. 67(1):1–48.
22. Lenth RV, Bolker B, Buerkner P, Gine-Vazquez I, Herve M, Jung M, et al. Emmeans: estimated marginal means, aka least-squares means [Internet]. R Foundation for Statistical Computing;2023. [cited 2023 Jun 1]. Available from: https://CRAN.R-project.org/package=emmeans.
23. Selker R, Love J, Dropmann D, Moreno V. jmv: The ‘jamovi’ analyses [Internet]. R Foundation for Statistical Computing;2022. [cited 2023 Jun 1]. Available from: https://CRAN.R-project.org/package=jmv.
24. Kim JH, Shim L, Bahng J, Lee HJ. Proficiency in using level cue for sound localization is related to the auditory cortical structure in patients with single-sided deafness. Front Neurosci. 2021; Oct. 15:749824.
25. Van Wanrooij MM, Van Opstal AJ. Sound localization under perturbed binaural hearing. J Neurophysiol. 2007; Jan. 97(1):715–26.
26. Derey K, Valente G, de Gelder B, Formisano E. Opponent coding of sound location (azimuth) in planum temporale is robust to sound-level variations. Cereb Cortex. 2016; Jan. 26(1):450–64.
27. Steam. Introducing Steam Audio [Internet]. Valve Corporation;2023. [cited 2023 Apr 1]. Available from: https://steamcommunity.com/games/596420/announcements/detail/521693426582988261.
28. Braren HS, Fels J. Towards child-appropriate virtual acoustic environments: a database of high-resolution HRTF measurements and 3D-scans of children. Int J Environ Res Public Health. 2021; Dec. 19(1):324.
29. Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farne A, et al. Reaching to sounds in virtual reality: a multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia. 2020; Dec. 149:107665.
30. Parseihian G, Katz BF. Rapid head-related transfer function adaptation using a virtual auditory environment. J Acoust Soc Am. 2012; Apr. 131(4):2948–57.

Fig. 1.
Stimuli and the virtual environment for training. (A) Source locations of the binaural recording (in azimuth). (B) Plots depict the interaural time difference (ITD), interaural level difference (ILD), and power spectrum of the binaural recordings at specified azimuthal locations from a representative recording (/ilgop/, “seven” in Korean). First, the fast Fourier transform of each stereo auditory stimulus was computed, from which the individualized power, ITD, and ILD were derived. Each sound stimulus was filtered with a bandpass filter 2 Hz in width (finite impulse response filter; noncausal zero-phase lag; filter order corresponding to 0.2 seconds). The ITD was then computed as the time of the maximum cross-correlation between the filtered left and right channels, and the ILD was calculated as the root mean square (RMS) value of the filtered left channel divided by the RMS of the filtered right channel. The power spectrum of the left channel of the binaural recording illustrates the availability of spectral cues in a specific frequency band of the recordings. The X-axis (frequency) of each plot is presented on a logarithmic scale. (C) Images of screens presented during the virtual reality-based training program at the four levels (details in the text).
Fig. 2.
The average hearing thresholds (±standard deviation) of the single-sided deafness (SSD) group.
Fig. 3.
Changes in sound localization performance in a virtual reality environment. The rates of correct sound localization on the four difficulty levels in the normal hearing (NH) group (A) and the single-sided deafness (SSD) group (B) improved after virtual reality (VR)-based training; these improvements persisted for 3 weeks. (C) Changes in the rate of correct sound localization in nine SSD subjects measured in the sound field. The horizontal dashed lines denote the chance level of performance for each of the four levels (color-coded) in the VR environment (A, B) and the sound field (C). Pre, before VR training; Post, after VR training; Post-3w, 3 weeks after completing VR training. *P<0.05, **P<0.01, ***P<0.001.
Fig. 4.
Changes in Korean version of the Speech, Spatial, and Qualities of Hearing Scale (K-SSQ) and visual analog scale (VAS) scores. (A) K-SSQ results in the normal hearing (NH) group and (B) the single-sided deafness (SSD) group. (C) VAS scores regarding changes in spatial hearing after completing virtual reality (VR) training (Post) and 3 weeks after completing VR training (Post-3w) in the SSD group. Pre, before VR training. *P<0.05.
Table 1.
Demographic characteristics of participants
Variable NH (n=38) SSD (n=16)
Age (yr) 36.8±10.4 45.4±14.1
Sex (male:female) 14:24 8:8
PTA (dB HL) L, –3.4±6.5; R, –3.9±5.2 Intact ear, 8.6±6.3; impaired ear, 94.8±21.0
Side of SSD (L:R) - 10:6
Age at SSD onset (yr) - 40.9±13.3
Duration of SSD (yr) - 4.3±4.4

Values are presented as mean±standard deviation.

NH, normal-hearing; SSD, single-sided deafness; PTA, pure-tone average; L, left; R, right.
