Alexander Stremitzer

Last Name

Stremitzer

First Name

Alexander

Organisational unit

09629 - Stremitzer, Alexander

Search Results

Publications 1 - 10 of 26
  • Having Your Day in Robot Court
    Item type: Working Paper
    Chen, Benjamin; Stremitzer, Alexander; Tobia, Kevin P. (2021)
    Center for Law & Economics Working Paper Series
  • Aspirational Rules
    Item type: Journal Article
    Leibovitch, Adi; Stremitzer, Alexander (2022)
    The Journal of Legal Studies
A long-standing puzzle in comparative constitutional law revolves around the negative correlation between the number of constitutional rights incorporated in constitutions and a country’s human rights situation. Is it only a correlation, driven by other sociological or historical factors? Does committing to more de jure rights hurt the protection of de facto human rights? Or are countries with worse human rights records more likely to amend their constitutions to include more rights? This article discusses an experiment that examines the existence and direction of a causal effect between setting overly ambitious goals and achieving outcomes, as well as the potential mechanisms underlying it. The main finding is that setting overly ambitious goals may not only be counterproductive in the domains in which such goals are set but may also have negative spillover effects on other domains.
  • Fan, Yu; Ni, Jingwei; Merane, Jakob; et al. (2025)
    arXiv
Long-form legal reasoning remains a key challenge for large language models (LLMs) in spite of recent advances in test-time scaling. To address this, we introduce LEXAM, a novel benchmark derived from 340 law exams spanning 116 law school courses across a range of subjects and degree levels. The dataset comprises 4,886 law exam questions in English and German, including 2,841 long-form, open-ended questions and 2,045 multiple-choice questions. Besides reference answers, the open questions are also accompanied by explicit guidance outlining the expected legal reasoning approach, such as issue spotting, rule recall, or rule application. Our evaluation of both open-ended and multiple-choice questions reveals significant challenges for current LLMs; in particular, they notably struggle with open questions that require structured, multi-step legal reasoning. Moreover, our results underscore the effectiveness of the dataset in differentiating between models with varying capabilities. Deploying an ensemble LLM-as-a-Judge paradigm with rigorous human expert validation, we demonstrate how model-generated reasoning steps can be evaluated consistently and accurately, closely aligning with human expert assessments. Our evaluation setup provides a scalable method to assess legal reasoning quality beyond simple accuracy metrics. We have open-sourced our code on GitHub and released our data on Hugging Face. Project page: https://lexam-benchmark.github.io/.
  • Monsch, Martin; Stremitzer, Alexander (2024)
    Deutsches Deliktsrecht aus ökonomischer Perspektive: 16. Travemünder Symposium zur ökonomischen Analyse des Rechts
  • Rutkowski, Karol; Stremitzer, Alexander (2025)
    Center for Law & Economics Working Paper Series
We consider a model of a single defendant and N plaintiffs where the total cost of litigation is fixed on the part of the plaintiffs and shared among the members of a suing coalition. By settling and dropping out of the coalition, a plaintiff therefore creates a negative externality on the other plaintiffs. Che and Spier (2008) showed that failure to internalize this externality can often be exploited by the defendant. However, if plaintiffs make sequential take-it-or-leave-it settlement offers, we can show that they will actually be exploited by one of their fellow plaintiffs rather than by the defendant. When we relax the assumptions, allowing each party to have positive bargaining power, the result lies between the two cases, and the plaintiffs can be exploited jointly by a fellow plaintiff and the defendant. Moreover, if litigation is a public good, as is the case in shareholder derivative suits, parties may fail to reach a settlement even under complete information. This may explain why we observe derivative suits in the US but not in Europe.
  • Chen, Benjamin; Hermstrüwer, Yoan; Langenbach, Pascal; et al. (2025)
    Center for Law & Economics Working Paper Series
    Are robot judges perceived as less fair than human judges? If so, how can this perceived judicial human-AI fairness gap be mitigated? We conduct a large online experiment with more than 4,800 observations to explore whether delegating judicial tasks to algorithms affects perceived procedural fairness and how human-in-the-loop interventions might offset any fairness gap. Participants are randomly assigned to assess one of the following conditions: a pure robot court (no human oversight), a pure human court (no algorithmic decision aids), or a hybrid court (a human judge assisted by algorithmic decision aids). Within the human and the hybrid conditions, we further vary whether the human judge conducts a thorough (high involvement) or brief (low involvement) review. Confirming prior research, we find robust evidence of a judicial human-AI fairness gap: participants perceive robot courts as less fair than human courts. Crucially, even low human involvement fully closes this gap. Although the fairness gap persists among Black participants, it is significantly smaller than the gap seen among White participants, highlighting an ethnic disparity. Overall, our results suggest that delegating judicial tasks to algorithmic systems may not undermine perceived procedural fairness if they remain subject to human review. However, our findings also raise the concern that nominal human oversight could be used to legitimize substantively unfair algorithmic procedures in the eyes of ordinary citizens.
  • Having Your Day in Robot Court
    Item type: Journal Article
    Chen, Benjamin Minhao; Stremitzer, Alexander; Tobia, Kevin (2022)
    Harvard Journal of Law & Technology
Should machines be judges? Some say “no,” arguing that citizens would see robot-led legal proceedings as procedurally unfair because the idea of “having your day in court” is thought to refer to having another human adjudicate one’s claims. Prior research established that people obey the law in part because they see it as procedurally just. The introduction of “robot judges” powered by artificial intelligence (“AI”) could undermine sentiments of justice and legal compliance if citizens intuitively view machine-adjudicated proceedings as less fair than the human-adjudicated status quo. Two original experiments show that ordinary people share this intuition: There is a perceived “human-AI fairness gap.” However, it is also possible to reduce, and perhaps even eliminate, this fairness gap through “algorithmic offsetting.” Affording litigants a hearing before an AI judge and enhancing the interpretability of AI decisions reduce the human-AI fairness gap. Moreover, the perceived procedural justice advantage of human over AI adjudication appears to be driven more by beliefs about the accuracy of the outcome and thoroughness of consideration than by doubts about whether a party had adequate opportunity to voice their opinions or whether the judge understood the perspective of the litigant. The results of the experiments can support a common and fundamental objection to robot judges: There is a concerning human-AI fairness gap. Yet, at the same time, the results also indicate that the public may not believe that human judges possess irreducible procedural fairness advantages. In some circumstances, people see a day in a robot court as no less fair than a day in a human court.
  • Cope, Kevin; Stremitzer, Alexander (2021)
    The Journal of Nuclear Medicine
  • Nielsen, Aileen; Skylaki, Stavroula; Norkute, Milda; et al. (2024)
    Journal of Empirical Legal Studies
Rapidly improving artificial intelligence (AI) technologies have created opportunities for human-machine cooperation in legal practice. We provide evidence from an experiment with law students (N = 206) on the causal impact of machine assistance on the efficiency of legal task completion in a private law setting with natural language inputs and multidimensional AI outputs. We tested two forms of machine assistance: AI-generated summaries of legal complaints and AI-generated text highlighting within those complaints. AI-generated highlighting reduced task completion time by 30% without any reduction in measured quality indicators compared to no AI assistance. AI-generated summaries produced no change in performance metrics. AI summaries and AI highlighting together improved efficiency, but not as much as AI highlighting alone. Our results show that AI support can dramatically increase the efficiency of legal task completion, but finding the optimal form of AI assistance is a fine-tuning exercise. Currently, AI-generated highlighting is not readily available from state-of-the-art, consumer-facing large language models, but our work suggests that this capability should be prioritized in the development of legal AI products.
  • Cope, Kevin; Stremitzer, Alexander (2021)
    Slate Magazine
    The concept of vaccine passports has opened a new fissure in the COVID culture wars. Many liberals and libertarians—and a slim majority of Americans—support them, while many conservatives call them unjust, even unconstitutional discrimination. On this last point, however, U.S. constitutional law is clear: The government can limit certain rights and privileges to the vaccinated, especially during a pandemic. So the more pertinent question now isn’t whether vaccine passports are allowed; it’s whether they are required—whether the Constitution entitles vaccinated people to receive exemptions from many COVID restrictions. We believe it does.