Search Results
GraphChef: Learning the Recipe of Your Dataset
(2023) ICML 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH). Conference Paper.
We propose a new graph model, GraphChef, that enables us to understand graph datasets as a whole. Given a dataset, GraphChef returns a set of rules (a recipe) that describes each class in the dataset. Existing GNNs and explanation methods reason on individual graphs, not on the entire dataset. GraphChef uses decision trees to build recipes that are understandable by humans. We show how to compute decision trees in the message passing ...
Multidimensional Approximate Agreement with Asynchronous Fallback
(2023) SPAA '23: Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures. Conference Paper.
Multidimensional Approximate Agreement considers a setting of n parties, where each party holds a vector in ℝ^D as input. The honest parties are required to obtain very close outputs in ℝ^D that lie inside the convex hull of their inputs. Existing Multidimensional Approximate Agreement protocols achieve resilience against t_s < n/(D + 1) corruptions under a synchronous network where messages are delivered within some time δ, but become ...
From Distributed Algorithms to Machine Learning and Back
(2023) PODC '23: Proceedings of the 2023 ACM Symposium on Principles of Distributed Computing. Conference Paper.
In the realm of computer science, it may seem that distributed computing and machine learning exist on opposite ends of the spectrum. However, there are many connections between the two domains, both in theory and practice. Recently, machine learning research has become excited about graphs. And when machine learning meets graphs, researchers familiar with distributed algorithms may experience a sense of déjà vu, as many classic distributed ...
End-to-End Neural Permutation Program Synthesis
(2023) Conference Paper.
Permutations are the most general abstraction describing finitary actions affecting finitely many elements, yet also the simplest class of programs imaginable. We propose a neural model that, given input-output pair examples, finds both a minimal set of atomic operations and the programs mapping inputs to outputs in terms of these atoms. Our model, DISPER, achieves 100% program reconstruction accuracy when the atoms are known and performs ...
Discovering Graph Generation Algorithms
(2023) NeSy-GeMs Workshop @ ICLR 2023. Conference Paper.
We provide a novel approach to construct generative models for graphs. Instead of using the traditional probabilistic models or deep generative models, we propose to instead find an algorithm that generates the data. We achieve this using evolutionary search and a powerful fitness function, implemented by a randomly initialized graph neural network. This brings certain advantages over current deep generative models, for instance, a higher ...
Computing the Best Policy That Survives a Vote
(2023) AAMAS '23: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems. Conference Paper.
An assembly of n voters needs to decide on t independent binary issues. Each voter has opinions about the issues, given by a t-bit vector. Anscombe's paradox shows that a policy following the majority opinion in each issue may not survive a vote by the very same set of n voters, i.e., more voters may feel unrepresented by such a majority-driven policy than represented. A natural resolution is to come up with a policy that deviates ...
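Anscombe's paradox, which this abstract builds on, can be checked with the classic five-voter, three-issue instance. The following sketch is illustrative only (the voter profile is the textbook example, not data from the paper): it computes the issue-wise majority policy, then counts how many voters disagree with that policy on a strict majority of issues.

```python
# Anscombe's paradox: the issue-wise majority policy can leave a
# majority of voters unrepresented on a majority of issues.
# Classic 5-voter, 3-issue profile (illustrative, not from the paper).
voters = [
    (1, 1, 0),
    (0, 1, 1),
    (1, 0, 1),
    (0, 0, 0),
    (0, 0, 0),
]
n, t = len(voters), len(voters[0])

# Majority opinion on each issue (1 iff a strict majority votes 1).
majority_policy = tuple(
    1 if 2 * sum(v[i] for v in voters) > n else 0 for i in range(t)
)

# A voter feels unrepresented if they disagree with the policy
# on a strict majority of the t issues.
unrepresented = sum(
    1
    for v in voters
    if 2 * sum(v[i] != majority_policy[i] for i in range(t)) > t
)

print(majority_policy)                            # (0, 0, 0)
print(unrepresented, "of", n, "unrepresented")    # 3 of 5 unrepresented
```

Here the per-issue majority policy is (0, 0, 0), yet the first three voters each disagree with it on two of three issues, so three of five voters (a majority) feel unrepresented, which is exactly why such a policy may not survive a vote.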
DeFi and NFTs Hinder Blockchain Scalability
(2023) Conference Paper.
Many classical blockchains are known to have an embarrassingly low transaction throughput, down to Bitcoin's notorious seven transactions per second limit. Various proposals and implementations for increasing throughput emerged in the first decade of blockchain research. But how much concurrency is possible? In their early days, blockchains were mostly used for simple transfers from user to user. More recently, however, decentralized finance ...
Agent-based Graph Neural Networks
(2023) The Eleventh International Conference on Learning Representations (ICLR 2023). Conference Paper.
We present a novel graph neural network we call AgentNet, which is designed specifically for graph-level tasks. AgentNet is inspired by sublinear algorithms, featuring a computational complexity that is independent of the graph size. The architecture of AgentNet differs fundamentally from the architectures of traditional graph neural networks. In AgentNet, some trained neural agents intelligently walk the graph, and then collectively ...
Towards Foundation Models with Mathematical Understanding
(2023) ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo 2023). Conference Paper.
We investigate the ability of transformer models to build representations of integer sequences that are of utility to tasks where deeper mathematical understanding is needed. To that end, we train BERT-like transformer encoders to assess the impact of individual pre-training tasks on the quality of the resulting model, and evaluate them for sequence classification, continuation, unmasking, complexity prediction, and next sequence-part ...
Deep Learning-Powered Iterative Combinatorial Auctions with Active Learning
(2023) AAMAS '23: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems. Conference Paper.
Deep learning-powered iterative combinatorial auctions (DL-ICA) are auctions that utilize machine learning techniques. Unlike traditional auctions, bidders in DL-ICA do not need to report the valuations for all bundles upfront. Instead, they report their value for certain bundles iteratively, and the allocation of the items is determined by solving a winner determination problem. During this process, the bidder profiles are modeled with ...