From Struggle (06-2024) to Mastery (02-2025): LLMs Conquer Advanced Algorithm Exams and Pave the Way for Editorial Generation
METADATA ONLY
Author / Producer
Date
2026
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
This paper presents a comprehensive evaluation of the performance of state-of-the-art Large Language Models (LLMs) on challenging university-level algorithms exams. By testing multiple models on both a Romanian exam and its high-quality English translation, we analyze LLMs' problem-solving capabilities, consistency, and multilingual performance. Our empirical study reveals that the most recent models not only achieve scores comparable to top-performing students but also demonstrate robust reasoning skills on complex, multi-step algorithmic challenges, even though difficulties remain with graph-based tasks. Building on these findings, we explore the potential of LLMs to support educational environments through the generation of high-quality editorial content, offering instructors a powerful tool to enhance student feedback. The insights and best practices discussed herein pave the way for further integration of generative AI in advanced algorithm education. © 2025 Elsevier B.V. All rights reserved.
Permanent link
Publication status
published
External links
Book title
Generative Systems and Intelligent Tutoring Systems
Journal / series
Volume
15723
Pages / Article No.
32 - 46
Publisher
Springer
Event
21st International Conference on Intelligent Tutoring Systems (ITS 2025)