Recent Submissions 

  1. Survival of the Fastest: Enabling More Out-of-Order Execution in Dataflow Circuits 

    Elakhras A.; Guerrieri A.; Josipovic L.; et al. (2024)
    FPGA 2024 - Proceedings of the 2024 ACM/SIGDA International Symposium on Field Programmable Gate Arrays
    Dynamically scheduled HLS, through dataflow circuit generation, has proven successful at exploiting operation-level parallelism in several important situations where statically scheduled HLS fails. Yet, although existing dataflow circuits support out-of-order execution of different operations, they strictly confine successive instances of the same operation to execute sequentially in program order, which drastically affects the circuit's ...
    Conference Paper
  2. Suppressing Spurious Dynamism of Dataflow Circuits via Latency and Occupancy Balancing 

    Xu J.; Josipović L. (2024)
    FPGA 2024 - Proceedings of the 2024 ACM/SIGDA International Symposium on Field Programmable Gate Arrays
    Dataflow circuits produced via high-level synthesis (HLS) adapt their schedule at runtime to unpredictable data and control outcomes, thus promising superior performance to standard HLS solutions. However, their distributed handshake mechanism is extremely resource-expensive; there is a clear benefit in simplifying or removing it whenever it is unneeded for correctness and performance. Yet, even in such situations, transient and spurious ...
    Conference Paper
  3. FPGA-based real-time laser beam profiling and stabilization system for quantum simulation applications 

    Marti, Stefano; Mustafa, Enis; Bisson, Giacomo; et al. (2023)
    2023 26TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN, DSD 2023
    Lasers are imperative in quantum simulation experiments to trap and cool atoms. With the growing complexity of experimental setups, precise control and stability of the position and exotic shape of lasers have become essential for high-fidelity operations. However, disturbances such as temperature fluctuations, mechanical vibrations, deformation of materials, and acoustic noise pose challenges to fulfilling these requirements. In this paper, ...
    Conference Paper
  4. Reducing Load-Use dependency-induced performance penalty in the Open-Source RISC-V CVA6 CPU 

    Ottavi, Gianmarco; Zaruba, Florian; Benini, Luca; et al. (2023)
    2023 26TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN, DSD 2023
    Embedded CPUs play a critical role in many modern electronic devices and are commonly used in a range of applications, from IoT edge to automotive and industrial systems. In particular, application-class processors are designed to run operating systems such as Linux, providing a platform for running a broad range of software ecosystems. As such, the performance of these processors is critical for ensuring that these systems can operate ...
    Conference Paper
  5. Data-driven subgrouping of patient trajectories with chronic diseases: Evidence from low back pain 

    Naumzik, Christof; Kongsted, Alice; Vach, Werner; et al. (2024)
    Conference Paper
  6. Sequential Deconfounding for Causal Inference with Unobserved Confounders 

    Hatt, Tobias; Feuerriegel, Stefan (2024)
    Conference Paper
  7. A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models 

    Stolfo, Alessandro; Jin, Zhijing; Shridhar, Kumar; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    We have recently witnessed a number of impressive results on hard mathematical reasoning problems with language models. At the same time, the robustness of these models has also been called into question; recent works have shown that models can rely on shallow patterns in the problem description when generating a solution. Building on the idea of behavioral testing, we propose a novel framework, which pins down the causal effect of various ...
    Conference Paper
  8. An Ordinal Latent Variable Model of Conflict Intensity 

    Stoehr, Niklas; Hennigen, Lucas Torroba; Valvoda, Josef; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    Measuring the intensity of events is crucial for monitoring and tracking armed conflict. Advances in automated event extraction have yielded massive data sets of "who did what to whom" micro-records that enable data-driven approaches to monitoring conflict. The Goldstein scale is a widely-used expert-based measure that scores events on a conflictual-cooperative scale. It is based only on the action category ("what") and disregards ...
    Conference Paper
  9. Exploiting Biased Models to De-bias Text: A Gender-Fair Rewriting Model 

    Amrhein, Chantal; Schottmann, Florian; Sennrich, Rico; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    Natural language generation models reproduce and often amplify the biases present in their training data. Previous research explored using sequence-to-sequence rewriting models to transform biased model outputs (or original texts) into more gender-fair language by creating pseudo training data through linguistic rules. However, this approach is not practical for languages with more complex morphology than English. We hypothesise that ...
    Conference Paper
  10. Poor Man's Quality Estimation: Predicting Reference-Based MT Metrics Without the Reference 

    Zouhar, Vilem; Dhuliawala, Shehzaad; Zhou, Wangchunshu; et al. (2023)
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023
    Machine translation quality estimation (QE) predicts human judgements of a translation hypothesis without seeing the reference. State-of-the-art QE systems based on pretrained language models have been achieving remarkable correlations with human judgements, yet they are computationally heavy and require human annotations, which are slow and expensive to create. To address these limitations, we define the problem of metric estimation (ME) ...
    Conference Paper
  11. Convergence and Diversity in the Control Hierarchy 

    Butoi, Alexandra; Cotterell, Ryan; Chiang, David (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    Weir has defined a hierarchy of language classes whose second member (L-2) is generated by tree-adjoining grammars (TAG), linear indexed grammars (LIG), combinatory categorial grammars, and head grammars. The hierarchy is obtained using the mechanism of control, and L-2 is obtained using a context-free grammar (CFG) whose derivations are controlled by another CFG. We adapt Weir's definition of a controllable CFG to give a definition of ...
    Conference Paper
  12. Log-linear Guardedness and its Implications 

    Ravfogel, Shauli; Goldberg, Yoav; Cotterell, Ryan (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1
    Methods for erasing human-interpretable concepts from neural representations that assume linearity have been found to be tractable and useful. However, the impact of this removal on the behavior of downstream classifiers trained on the modified representations is not fully understood. In this work, we formally define the notion of log-linear guardedness as the inability of an adversary to predict the concept directly from the representation, ...
    Conference Paper
  13. What does a Text Classifier Learn about Morality? An Explainable Method for Cross-Domain Comparison of Moral Rhetoric 

    Liscio, Enrico; Araque, Oscar; Gatti, Lorenzo; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1
    Moral rhetoric influences our judgement. Although social scientists recognize moral expression as domain specific, there are no systematic methods for analyzing whether a text classifier learns the domain-specific expression of moral language or not. We propose Tomea, a method to compare a supervised classifier's representation of moral rhetoric across domains. Tomea enables quantitative and qualitative comparisons of moral rhetoric via ...
    Conference Paper
  14. XDailyDialog: A Multilingual Parallel Dialogue Corpus 

    Liu, Zeming; Nie, Ping; Cai, Jie; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1
    High-quality corpora are significant to the development of dialogue models. However, most existing corpora for open-domain dialogue modeling are limited to a single language. The absence of multilingual open-domain dialog corpora not only limits the research on multilingual or cross-lingual transfer learning but also hinders the development of robust open-domain dialogue systems that can be deployed in other parts of the world. In this ...
    Conference Paper
  15. Efficient Semiring-Weighted Earley Parsing 

    Opedal, Andreas; Zmigrod, Ran; Vieira, Tim; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    This paper provides a reference description, in the form of a deduction system, of Earley's (1970) context-free parsing algorithm with various speed-ups. Our presentation includes a known worst-case runtime improvement from Earley's O(N^3 |G| |R|), which is unworkable for the large grammars that arise in natural language processing, to O(N^3 |G|), which matches the runtime of CKY on a binarized version of the grammar G. Here N is the length ...
    Conference Paper
  16. Sentiment as an Ordinal Latent Variable 

    Stoehr, Niklas; Cotterell, Ryan; Schein, Aaron (2023)
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023
    Sentiment analysis has become a central tool in various disciplines outside of natural language processing. In particular, in applied and domain-specific settings with strong requirements for interpretable methods, dictionary-based approaches are still a popular choice. However, existing dictionaries are often limited in coverage, static once annotation is completed, and their sentiment scales differ widely; some are discrete, others continuous. ...
    Conference Paper
  17. Tokenization and the Noiseless Channel 

    Zouhar, Vilem; Meister, Clara; Gastaldi, Juan Luis; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    Subword tokenization is a key part of many NLP pipelines. However, little is known about why some tokenizer and hyperparameter combinations lead to better downstream model performance than others. We propose that good tokenizers lead to efficient channel usage, where the channel is the means by which some input is conveyed to the model and efficiency can be quantified in information-theoretic terms as the ratio of the Shannon entropy to ...
    Conference Paper
  18. Adaptive and Personalized Exercise Generation for Online Language Learning 

    Cui, Peng; Sachan, Mrinmaya (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1
    Adaptive learning aims to provide customized educational activities (e.g., exercises) to address individual learning needs. However, manual construction and delivery of such activities is a laborious process. Thus, in this paper, we study a novel task of adaptive and personalized exercise generation for online language learning. To this end, we combine a knowledge tracing model that estimates each student's evolving knowledge states from ...
    Conference Paper
  19. Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training 

    Zeng, Yan; Zhou, Wangchunshu; Luo, Ao; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    In this paper, we introduce Cross-View Language Modeling, a simple and effective pre-training framework that unifies cross-lingual and cross-modal pre-training with shared architectures and objectives. Our approach is motivated by a key observation that cross-lingual and cross-modal pre-training share the same goal of aligning two different views of the same object into a common semantic space. To this end, the cross-view language modeling ...
    Conference Paper
  20. When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP 

    Ni, Jingwei; Jin, Zhijing; Wang, Qian; et al. (2023)
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1
    Multi-task learning (MTL) aims at achieving a better model by leveraging data and knowledge from multiple tasks. However, MTL does not always work; sometimes negative transfer occurs between tasks, especially when aggregating loosely related skills, leaving open the question of when MTL works. Previous studies show that MTL performance can be improved by algorithmic tricks. However, what tasks and skills should be included is less well ...
    Conference Paper
