Search Results
Cascade: CPU Fuzzing via Intricate Program Generation (2024)
Generating interesting test cases for CPU fuzzing is akin to generating programs that exercise unusual states inside the CPU. The performance of CPU fuzzing is heavily influenced by the quality of these programs and by the overhead of bug detection. Our analysis of existing state-of-the-art CPU fuzzers shows that they generate programs that are either overly simple or that execute only a small fraction of their instructions due to invalid control ...
Conference Paper
Bridging Diversity and Uncertainty in Active Learning with Self-Supervised Pre-Training (2024)
This study addresses the integration of diversity-based and uncertainty-based sampling strategies in active learning, particularly within the context of self-supervised pre-trained models. We introduce a straightforward heuristic called TCM that mitigates the cold start problem while maintaining strong performance across various data levels. By initially applying TypiClust for diversity sampling and subsequently transitioning to uncertainty ...
Other Conference Item
SUPClust: Active Learning at the Boundaries (2024)
Active learning is a machine learning paradigm designed to optimize model performance in a setting where labeled data is expensive to acquire. In this work, we propose a novel active learning method called SUPClust that seeks to identify points at the decision boundary between classes. By targeting these points, SUPClust aims to gather information that is most informative for refining the model's prediction of complex decision regions. ...
Other Conference Item
GraphChef: Decision-Tree Recipes to Explain Graph Neural Networks (2024)
The Twelfth International Conference on Learning Representations. We propose a new self-explainable Graph Neural Network (GNN) model: GraphChef. GraphChef integrates decision trees into the GNN message passing framework. Given a dataset, GraphChef returns a set of rules (a recipe) that explains each class in the dataset, unlike existing GNNs and explanation methods that reason on individual graphs. Thanks to the decision trees, GraphChef recipes are human understandable. We also present a new pruning ...
Conference Paper
Efficient and Scalable Graph Generation through Iterative Local Expansion (2024)
In the realm of generative models for graphs, extensive research has been conducted. However, most existing methods struggle with large graphs due to the complexity of representing the entire joint distribution across all node pairs and capturing both global and local graph structures simultaneously. To overcome these issues, we introduce a method that generates a graph by progressively expanding a single node to a target graph. In each ...
Conference Paper
Optimus: Warming Serverless ML Inference via Inter-Function Model Transformation (2024)
Conference Paper
General Versus Vocational Education in Recruitment: Which Factors Shape Employers’ Preferences Regarding Education? (2024)
Other Conference Item
MIMDRAM: An End-to-End Processing-Using-DRAM System for High-Throughput, Energy-Efficient and Programmer-Transparent Multiple-Instruction Multiple-Data Computing (2024)
2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA). Processing-using-DRAM (PUD) is a processing-in-memory (PIM) approach that uses a DRAM array's massive internal parallelism to execute very wide (e.g., 16,384- to 262,144-bit-wide) data-parallel operations in a single-instruction multiple-data (SIMD) fashion. However, the large and rigid granularity of DRAM rows limits the effectiveness and applicability of PUD in three ways. First, since applications have varying degrees of SIMD parallelism (which ...
Conference Paper