
Open access
Date
2021
Type
Conference Paper
ETH Bibliography
yes
Abstract
Automatic ICD coding is the task of assigning codes from the International Classification of Diseases (ICD) to medical notes. These codes describe the state of the patient and have multiple applications, e.g., computer-assisted diagnosis or epidemiological studies. ICD coding is a challenging task due to the complexity and length of medical notes. Unlike the general trend in language processing, no transformer model has been reported to reach high performance on this task. Here, we investigate ICD coding in detail using PubMedBERT, a state-of-the-art transformer model for biomedical language understanding. We find that the difficulty of fine-tuning the model on long pieces of text is the main limitation of BERT-based models on ICD coding. We run extensive experiments and show that, despite the gap to the current state of the art, pretrained transformers can reach competitive performance using relatively small portions of text. We point to better methods for aggregating information from long texts as the main need for improving BERT-based ICD coding.
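To make the setup concrete, below is a minimal sketch of the chunk-and-aggregate approach the abstract alludes to: a long medical note is split into 512-token chunks, each chunk is encoded with PubMedBERT, and the chunk embeddings are pooled before a multi-label sigmoid classifier (one output per ICD code). This is an illustrative sketch, not the paper's exact pipeline; the model identifier, mean pooling, `num_codes`, and the example note are assumptions.

```python
# Illustrative sketch (not the authors' method): chunked ICD coding
# with PubMedBERT, aggregating chunk embeddings by mean pooling.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

# Assumed Hugging Face model id for PubMedBERT.
MODEL = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"

class ChunkedICDCoder(nn.Module):
    def __init__(self, num_codes: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL)
        # One logit per ICD code: multi-label, not multi-class.
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_codes)

    def forward(self, input_ids, attention_mask):
        # input_ids: (num_chunks, seq_len) -- one long note split into chunks.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] per chunk: (num_chunks, hidden)
        doc = cls.mean(dim=0)               # aggregate chunks by mean pooling
        return self.classifier(doc)         # logits: (num_codes,)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
note = "Patient admitted with acute exacerbation of chronic heart failure ..."
# return_overflowing_tokens splits the note into 512-token chunks.
enc = tokenizer(note, truncation=True, max_length=512,
                return_overflowing_tokens=True,
                padding="max_length", return_tensors="pt")

model = ChunkedICDCoder(num_codes=50)  # num_codes is a placeholder
logits = model(enc["input_ids"], enc["attention_mask"])
probs = torch.sigmoid(logits)  # independent probability per ICD code
```

Mean pooling is only one possible aggregator; the abstract's point is precisely that better aggregation over long texts is the open problem for BERT-based ICD coding.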
Permanent link
https://doi.org/10.3929/ethz-b-000497651
Publication status
published
Book title
Proceedings of the 20th Workshop on Biomedical Language Processing
Publisher
Association for Computational Linguistics
Organisational unit
03604 - Wattenhofer, Roger