P36 Session 2 (Friday 12 January 2024, 09:00-11:30)
Principal Components Analysis of amplitude envelopes from spectral channels: Comparison between music and speech.
Keywords: perception, cochlear implants, natural signal statistics, efficient coding
Introduction: The efficient coding approach predicts that perceptual systems are optimally adapted to natural signal statistics: sensory systems would have evolved to encode environmental signals so as to represent the greatest amount of information at the lowest possible resource cost.
Previous studies applied Factor Analysis (FA) to amplitude modulation channels extracted from natural speech signals in order to estimate optimal frequency boundaries between channels. While some authors argued that 4 channels would suffice to represent the main contrastive segmental information in natural clean speech, comparisons of speech statistics with perceptual performance led to the suggestion that 6 to 7 frequency bands would be required to optimally represent vocoded speech.
However, research on music perception in cochlear-implanted listeners sheds light on potential limits of this hypothesis. Indeed, performance on vocoded material, in normal-hearing listeners as well as in cochlear implant users, is systematically better for speech signals than for music. It is therefore crucial to compare the statistical properties of music and speech in order to better understand the relation between the characteristics of various auditory communication signals and their possible optimal coding in auditory perception.
We applied the same FA method to two different datasets: (1) a database of freely available music recordings (Free Music Archive, https://github.com/mdeff/fma) and (2) a free corpus of speech signals (Clarity Speech, doi:10.17866/rd.salford.16918180).
Method: Analyses were carried out in the Matlab environment and mirrored previous studies. Sample signals were passed through a gammatone filterbank (1/4-ERB bandwidth, approx. 100-120 channels depending on the upper frequency limit) and the energy envelope of each channel was extracted. The resulting amplitude modulation matrix was then submitted to FA, and the Principal Components (PCs) were independently rotated: channels whose amplitude envelopes covary should be grouped into a single PC. We developed methods for automatically determining the optimal number of PCs and for estimating the frequency boundaries between them. As our aim was to compare speech and music, whose typical signal bandwidths differ, two upper frequency limits were compared (8000 Hz vs. 16000 Hz).
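The pipeline above can be sketched in Python (the original analyses were run in Matlab). This is a minimal illustration, not the study's code: the ERB spacing, Hilbert-based envelope extraction, envelope rate, varimax rotation, and boundary heuristic are all assumptions chosen for the sketch, and a noise signal stands in for a real speech or music sample.

```python
import numpy as np
from scipy import signal
from sklearn.decomposition import FactorAnalysis

def erb_space(f_lo, f_hi, step=0.25):
    """Center frequencies spaced every `step` ERB (Glasberg & Moore scale)."""
    erb = lambda f: 21.4 * np.log10(1 + 0.00437 * f)
    inv = lambda e: (10 ** (e / 21.4) - 1) / 0.00437
    return inv(np.arange(erb(f_lo), erb(f_hi), step))

def envelope_matrix(x, fs, f_lo=80.0, f_hi=8000.0):
    """Filter x with a gammatone filterbank and extract each channel's
    amplitude envelope (Hilbert magnitude, downsampled)."""
    envs = []
    for fc in erb_space(f_lo, f_hi):                # ~120 channels at 1/4-ERB steps
        b, a = signal.gammatone(fc, 'iir', fs=fs)   # 4th-order IIR gammatone
        y = signal.lfilter(b, a, x)
        env = np.abs(signal.hilbert(y))             # envelope via analytic signal
        envs.append(env[::160])                     # ~100 Hz envelope rate at fs=16 kHz
    return np.array(envs).T                         # shape: (time frames, channels)

# Toy input: 2 s of noise stands in for a speech/music excerpt.
rng = np.random.default_rng(0)
fs = 16000
E = envelope_matrix(rng.standard_normal(2 * fs), fs)

# FA with rotated components (varimax here, as one common choice):
# channels whose envelopes covary load on the same component, and the
# channel index where the dominant component changes gives a candidate
# frequency-band boundary.
fa = FactorAnalysis(n_components=4, rotation='varimax').fit(E)
dominant = np.abs(fa.components_).argmax(axis=0)    # component each channel loads on
boundaries = np.flatnonzero(np.diff(dominant)) + 1  # channels where it switches
```

On real speech or music envelopes, `boundaries` would be mapped back to the corresponding gammatone center frequencies to obtain the band limits compared across signal types.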
Results: Focusing on a reduced set of PC counts comparable to previous conclusions on speech, according to which 4 to 7 PCs would be optimal, we find that cumulative explained variance lies in a similar range: between 35% and 50% for music and between 39% and 52% for speech. However, our estimates of frequency boundaries do not match those of previous studies. Boundaries are not fixed but depend on the type of natural signal (speech vs. music), with variation in (1) boundary location and (2) the relation between PC rank and frequency. Perceptual studies are in preparation to assess the validity of these measures.
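As an illustration of the cumulative-explained-variance measure reported above, the sketch below computes it for 4 to 7 components from the singular values of a centered envelope matrix. The matrix here is random toy data, so the numbers have no relation to those of the study.

```python
import numpy as np

# Toy envelope matrix: (time frames, spectral channels). In the study this
# would be the amplitude modulation matrix of a speech or music excerpt.
rng = np.random.default_rng(1)
E = rng.standard_normal((500, 120))

Ec = E - E.mean(axis=0)                           # center each channel
eigvals = np.linalg.svd(C := Ec, compute_uv=False) ** 2
var_ratio = eigvals / eigvals.sum()               # variance share per component
cumulative = np.cumsum(var_ratio)

# Variance captured by the first k components, k = 4..7
for k in range(4, 8):
    print(k, round(float(cumulative[k - 1]), 3))
```

Comparing these cumulative values between the speech and music envelope matrices, for a matched number of components, is what yields the ranges quoted in the results.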