Unsupervised Task Clustering for Multi-Task Reinforcement Learning
(2020) Deep Reinforcement Learning Workshop, NeurIPS 2020, Accepted Papers. Meta-learning, transfer learning, and multi-task learning have recently laid a path towards more generally applicable reinforcement learning agents that are not limited to a single task. However, most existing approaches implicitly assume a uniform similarity between tasks. We argue that this assumption is limiting in settings where the relationship between tasks is unknown a priori. In this work, we propose a general approach to automatically ... Conference Paper
Asynchronous Byzantine Agreement in Incomplete Networks
(2020) The Byzantine agreement problem is considered a core problem in distributed systems. For example, Byzantine agreement is often used to build a blockchain, a totally ordered log of records. Blockchains are asynchronous distributed systems that are fault-tolerant against Byzantine nodes. In the literature, the asynchronous Byzantine agreement problem is studied in a fully connected network model where every node can directly send messages ... Conference Paper
Medley2K: A Dataset of Medley Transitions
(2020) Proceedings of MML 2020: 13th International Workshop on Machine Learning and Music at ECML/PKDD 2020. Conference Paper
Brief Announcement: Byzantine Agreement with Unknown Participants and Failures
(2020) Proceedings of the 39th Symposium on Principles of Distributed Computing. A set of participants that want to agree on a common opinion despite the presence of malicious or Byzantine participants need to solve an instance of the Byzantine agreement problem. This classic problem has been well studied, but most existing solutions assume that the participants are aware of n, the total number of participants in the system, and f, the upper bound on the number of Byzantine participants. In this paper, ... Conference Paper
The Append Memory Model: Why BlockDAGs Excel Blockchains
(2020) Proceedings of the 32nd ACM Symposium on Parallelism in Algorithms and Architectures. This paper presents a novel shared memory model that simplifies the analysis of consensus on a chain and a DAG. In this new model, referred to as the append memory model, nodes can write new values to the unordered memory but cannot overwrite already existing values. We show that although this model differs from the standard shared memory model with n shared read-write registers, many known results from the shared memory model ... Conference Paper
On the Hardness of Red-Blue Pebble Games
(2020) Proceedings of the 32nd ACM Symposium on Parallelism in Algorithms and Architectures. Red-blue pebble games model the computation cost of a two-level memory hierarchy. We present various hardness results for different red-blue pebbling variants, with a focus on the oneshot model. We first study the relationships between previously introduced red-blue pebble models (base, oneshot, nodel). We also analyze a new variant (compcost) to obtain a more realistic model of computation. We then prove that red-blue pebbling is NP-hard ... Conference Paper
Neural Symbolic Music Genre Transfer Insights
(2020) Communications in Computer and Information Science: Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2019. Transferring a song from one genre to another is most difficult when no instrumentation information is provided and the genre is defined only by the timing and pitch of the played notes. Inspired by the CycleGAN music genre transfer presented in [2], we investigate whether recent additions to GAN training, such as spectral normalization and self-attention, can improve the transfer. Our preliminary results show that spectral normalization improves audible ... Conference Paper
High-Throughput and Low-Latency Hyperloop
(2020) 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). Hyperloop pods are expected to travel faster than 1,000 km/h. Apart from high speed, high throughput and low latency are crucial to hyperloop's success. We show that hyperloop networks could transport as many passengers as train or plane networks. Our on-demand pod scheduling method keeps passenger waiting times to only a few minutes, even at peak times, which minimizes overall trip latencies. Further, our scheduling results in ... Conference Paper
Attentive Multi-Task Deep Reinforcement Learning
(2020) Lecture Notes in Computer Science: Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2019. Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a priori assumptions about the ... Conference Paper
Contrastive Graph Neural Network Explanation
(2020) Proceedings of the 37th Graph Representation Learning and Beyond Workshop at ICML 2020. Graph Neural Networks achieve remarkable results on problems with structured data but act as black-box predictors. Transferring existing explanation techniques, such as occlusion, fails, as removing even a single node or edge can lead to drastic changes in the graph. The resulting graphs can differ from all training examples, causing model confusion and wrong explanations. Thus, we argue that explainability must use graphs compliant with ... Conference Paper