Metadata only
Date
2022
Type
Conference Paper
ETH Bibliography
yes
Abstract
The learning of the simplest possible computational pattern -- periodicity -- is an open problem in the research of strong generalisation in neural networks. We formalise the problem of extrapolative generalisation for periodic signals and systematically investigate the generalisation abilities of classical, population-based, and recently proposed periodic architectures on a set of benchmarking tasks. We find that periodic and "snake" activation functions consistently fail at periodic extrapolation, regardless of the trainability of their periodicity parameters. Further, our results show that traditional sequential models still outperform the novel architectures designed specifically for extrapolation, and that these are in turn trumped by population-based training. We make our benchmarking and evaluation toolkit, PerKit, available and easily accessible to facilitate future work in the area. Show more
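For context, the "snake" activation referenced in the abstract is defined in the literature (Ziyin et al., 2020) as snake_a(x) = x + sin²(ax)/a, where the parameter a sets the frequency of the periodic component and may be fixed or trained. A minimal sketch (the function name and defaults here are illustrative, not from the paper's PerKit toolkit):

```python
import math

def snake(x: float, a: float = 1.0) -> float:
    """Snake activation from Ziyin et al. (2020): x + sin^2(a*x)/a.

    Adds a periodic bump on top of the identity; for small |x| it
    behaves roughly linearly, while the sin^2 term injects an
    inductive bias toward periodicity. The frequency parameter `a`
    can be a fixed hyperparameter or a trainable per-neuron weight.
    """
    return x + math.sin(a * x) ** 2 / a
```

Per the abstract, the paper's finding is that activations of this family fail at periodic extrapolation regardless of whether a is trainable.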
Publication status
published
Book title
2022 IEEE Symposium Series on Computational Intelligence (SSCI)
Publisher
IEEE
Subject
Machine Learning (cs.LG); Artificial Intelligence (cs.AI); FOS: Computer and information sciences; neural networks; generalisation; extrapolation; periodicity
Organisational unit
03604 - Wattenhofer, Roger / Wattenhofer, Roger
Related publications and datasets
Is new version of: https://doi.org/10.48550/ARXIV.2209.10280