P34, Session 2 (Friday 12 January 2024, 09:00-11:30)
An interactive evaluation of gaze-directed beamforming in noisy conversations
Gaze-directed beamforming has potential for future hearing aids: it selects sound from the direction of the user's gaze, allowing the user to redirect the beam to the current speaker. Its overall effectiveness in continuous conversation, with natural exchanges of the conversational floor, has not, however, been evaluated. We evaluated it using a simulation of an 8-microphone array mounted on spectacles.

In phase 1, participants watched six segments (165 seconds each) of a Zoom call between two parties located at ±15°. Interfering talkers were added at 0°, ±60°, ±135° and 180°. One condition simulated binaural hearing over headphones using head-related transfer functions. The other condition used an eye-tracker to select filters, for each source and for each video frame, from a look-up table based on the predicted directionality of the microphone array in diffuse noise for a minimum-variance distortionless-response (MVDR) beamformer. The participant's gaze and the headphone output were recorded, and participants completed a short questionnaire about the conversation after listening to each segment.

In phase 2, the intelligibility of each sentence was measured formally: each phase-1 participant's recording was presented, sentence by sentence, to a new participant for transcription. An individually tailored video cut back and forth between the two talkers in accordance with the eye-tracking record of the corresponding phase-1 participant, so as to provide identical lip-reading cues. In phase 1, questionnaire scores approximately doubled with the beamformer and, in phase 2, the number of words correctly transcribed also doubled.
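For readers unfamiliar with the filter design underlying the look-up table, the core computation is the standard MVDR weight solution evaluated against a diffuse-noise coherence matrix. The sketch below is illustrative only: the array geometry, frequency, and regularisation value are assumptions, not details taken from the study.

```python
import numpy as np

def mvdr_weights(mic_pos, look_dir, f, c=343.0, diag_load=1e-3):
    """MVDR weights for one frequency, steering toward look_dir,
    assuming spherically isotropic (diffuse) noise.

    mic_pos  : (M, 3) microphone positions in metres
    look_dir : (3,) unit vector toward the desired source (far field)
    f        : frequency in Hz
    """
    # Far-field steering vector from inter-microphone delays.
    tau = mic_pos @ look_dir / c
    d = np.exp(-2j * np.pi * f * tau)
    # Diffuse-field coherence: sinc of inter-mic distance (np.sinc includes pi).
    dist = np.linalg.norm(mic_pos[:, None, :] - mic_pos[None, :, :], axis=-1)
    gamma = np.sinc(2.0 * f * dist / c) + diag_load * np.eye(len(mic_pos))
    # w = Gamma^{-1} d / (d^H Gamma^{-1} d): minimum noise variance,
    # unity (distortionless) gain in the look direction.
    gi_d = np.linalg.solve(gamma, d)
    return gi_d / (d.conj() @ gi_d)

# Hypothetical 8-mic broadside line array; talker at +15 degrees, f = 2 kHz.
mics = np.stack([np.linspace(-0.07, 0.07, 8), np.zeros(8), np.zeros(8)], axis=1)
look = np.array([np.cos(np.radians(15.0)), np.sin(np.radians(15.0)), 0.0])
w = mvdr_weights(mics, look, 2000.0)
```

In a gaze-driven system, such weights would be precomputed per frequency bin for each candidate look direction, and the eye-tracker would simply index the table frame by frame.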