Show simple item record

dc.contributor.author: Wong, Daniel D. E.
dc.contributor.author: Fuglsang, Søren A.
dc.contributor.author: Hjortkjær, Jens
dc.contributor.author: Ceolini, Enea
dc.contributor.author: Slaney, Malcolm
dc.contributor.author: de Cheveigné, Alain
dc.date.accessioned: 2019-02-19T15:40:17Z
dc.date.available: 2019-01-30T09:43:26Z
dc.date.available: 2019-02-19T15:40:17Z
dc.date.issued: 2018-08
dc.identifier.issn: 1662-453X
dc.identifier.issn: 1662-4548
dc.identifier.other: 10.3389/fnins.2018.00531
dc.identifier.uri: http://hdl.handle.net/20.500.11850/321262
dc.identifier.doi: 10.3929/ethz-b-000321262
dc.description.abstract: The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models depends on how the model parameters are estimated. A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods performs better than the others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
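
The abstract contrasts forward models (predicting EEG from the attended audio envelope) with backward models (reconstructing the envelope from EEG), and reports that regularization matters mainly in the backward direction. The sketch below is only an illustration of such a ridge-regularized backward/decoding setup, not the paper's actual implementation: the function names, lag count, and regularization value are assumptions, and the attended talker is chosen by correlating the reconstructed envelope with each candidate envelope.

    import numpy as np

    def fit_backward_model(eeg, envelope, n_lags=32, lam=1e2):
        """Ridge-regularized backward (decoding) model: reconstruct the speech
        envelope from time-lagged EEG.

        eeg      : (n_samples, n_channels) array
        envelope : (n_samples,) attended-speech envelope
        n_lags   : number of post-stimulus EEG lags used as features (assumed value)
        lam      : ridge regularization strength (assumed value)
        """
        n, c = eeg.shape
        # Design matrix: row t holds EEG samples at times t .. t+n_lags-1,
        # since the neural response follows the stimulus.
        X = np.zeros((n, c * n_lags))
        for k in range(n_lags):
            X[:n - k, k * c:(k + 1) * c] = eeg[k:]
        # Closed-form ridge solution: w = (X'X + lam*I)^{-1} X'y
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
        return w, X

    def classify_attended(X, w, env_a, env_b):
        """Label the attended talker as the one whose envelope correlates
        most strongly with the reconstruction X @ w."""
        recon = X @ w
        r_a = np.corrcoef(recon, env_a)[0, 1]
        r_b = np.corrcoef(recon, env_b)[0, 1]
        return "A" if r_a >= r_b else "B"

In the forward direction one would instead regress lagged envelope features onto each EEG channel and correlate predicted with measured EEG; per the abstract, regularization gave little benefit in that case.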
dc.format: application/pdf
dc.language.iso: en
dc.publisher: Frontiers Media
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: temporal response function
dc.subject: speech decoding
dc.subject: electroencephalography
dc.subject: selective auditory attention
dc.subject: attention decoding
dc.title: A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding
dc.type: Journal Article
dc.rights.license: Creative Commons Attribution 4.0 International
dc.date.published: 2018-08-07
ethz.journal.title: Frontiers in Neuroscience
ethz.journal.volume: 12
ethz.journal.abbreviated: Front Neurosci
ethz.pages.start: 531
ethz.size: 16 p.
ethz.version.deposit: publishedVersion
ethz.publication.place: Lausanne
ethz.publication.status: published
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02140 - Dep. Inf.technologie und Elektrotechnik / Dep. of Inform.Technol. Electrical Eng.::02533 - Institut für Neuroinformatik / Institute of Neuroinformatics
ethz.date.deposited: 2019-01-30T09:43:30Z
ethz.source: FORM
ethz.eth: yes
ethz.availability: Open access
ethz.rosetta.installDate: 2019-02-19T15:40:26Z
ethz.rosetta.lastUpdated: 2024-02-02T07:10:55Z
ethz.rosetta.versionExported: true