Ayush Mishra



Organisational unit

09477 - Vanbever, Laurent

Search Results

Publications 1 - 5 of 5
  • Mishra, Ayush; Sun, Xiangpeng; Jain, Atishya; et al. (2019)
    Proceedings of the ACM on Measurement and Analysis of Computing Systems
    In 2016, Google proposed and deployed a new TCP variant called BBR. BBR represents a major departure from traditional congestion-window-based congestion control. Instead of using loss as a congestion signal, BBR uses estimates of the bandwidth and round-trip delays to regulate its sending rate. The last major study on the distribution of TCP variants on the Internet was done in 2011, so it is timely to conduct a new census given the recent developments around BBR. To this end, we designed and implemented Gordon, a tool that measures the exact congestion window (cwnd) of a congestion control algorithm at each successive RTT of a TCP connection. To compare a measured flow to the known variants, we created a localized bottleneck where we can introduce a variety of network changes, such as loss events, bandwidth changes, and increased delay, and normalize all measurements by RTT. An offline classifier then identifies the TCP variant from the cwnd trace over time. Our results suggest that CUBIC is currently the dominant TCP variant on the Internet, deployed on about 36% of the websites in the Alexa Top 20,000 list. While BBR and its variant BBR G1.1 are currently in second place with a 22% share by website count, their share of total Internet traffic volume is estimated to exceed 40%. We also found that Akamai has deployed a unique loss-agnostic, rate-based TCP variant on some 6% of the Alexa Top 20,000 websites, and there are likely other undocumented variants. The traditional assumption that TCP variants "in the wild" come from a small known set is no longer likely to hold. We predict that some variant of BBR will replace CUBIC as the next dominant TCP variant on the Internet.
  • Kirci, Ege Cem; Mishra, Ayush; Vanbever, Laurent (2025)
  • Better QUIC Implementations with Nesquic
    Item type: Conference Poster
    Brandner, Laurin; Marti, Kevin; Niekel, Bas; et al. (2025)
  • Better QUIC implementations with Nesquic
    Item type: Other Conference Item
    Brandner, Laurin; Marti, Kevin; Niekel, Bas; et al. (2025)
    ACM SIGCOMM 2025 Posters and Demos
    QUIC represents a re-design of the transport stack in response to the requirements of modern applications. While it provides a myriad of benefits, studies have shown that performance penalties in several contexts have kept it from wider adoption. While these studies provide a good overview of when QUIC can lag behind TCP, they do little to pinpoint where in the QUIC stack these slowdowns originate or how they can be fixed. To address this gap, we present Nesquic, a testing infrastructure for QUIC stacks. It collects library-internal metrics for each stack component without changing the library's source code. Our preliminary findings show that Nesquic not only helps developers pinpoint which components of their QUIC stacks underperform, but also gives them actionable insights into fixing these issues.
  • Hoffman, Benjamin; Dietmüller, Alexander; Mishra, Ayush; et al. (2025)
    PACMI '25: Proceedings of the 4th Workshop on Practical Adoption Challenges of ML for Systems
    Machine learning (ML)-based Adaptive Bitrate (ABR) algorithms often struggle to bridge the gap between simulation and reality. Their strong performance in synthetic environments frequently fails to generalize to real-world conditions. Researchers have therefore begun testing these algorithms over the Internet to incorporate real-world feedback into their design. In this paper, we show that since network conditions vary significantly across the globe, testing in individual real-world environments can suffer from the same generalization issues as lab-based testing. Existing testing platforms face (and might even be oblivious to) this limitation because they cover a small geographical region and rely on a narrow set of users affected by survivorship bias. As a result, their insights on an algorithm's performance generalize poorly to other deployments across the Internet, hindering the widespread adoption of ML-based ABR methods in practice. To address this gap, we present ABR-Arena, a global testing platform that enables researchers to evaluate the performance of ABR algorithms across a diverse set of regions around the globe. As a result of its worldwide coverage, ABR-Arena can reveal the performance shortcomings of several state-of-the-art ML-based approaches. It is extensible and easy to deploy in additional locations. We will make ABR-Arena available to the community to support the development of new ML-based approaches and to facilitate meaningful improvements to existing algorithms.
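The census abstract above describes classifying a TCP variant from how its per-RTT cwnd trace reacts to an injected loss event at a localized bottleneck. The following is a hypothetical toy sketch of that idea, not Gordon's actual classifier: the thresholds, function name, and traces are all illustrative assumptions.

```python
# Hypothetical sketch (NOT Gordon's real classifier): distinguish a
# loss-based variant from a loss-agnostic, rate-based one by how the
# congestion window (cwnd) reacts to a loss introduced at a known RTT.

def classify_cwnd_trace(cwnd_per_rtt, loss_rtt):
    """Classify a per-RTT cwnd trace given the RTT index of a loss event.

    cwnd_per_rtt: list of cwnd samples, one per RTT.
    loss_rtt: index at which a loss was introduced at the bottleneck.
    """
    before = cwnd_per_rtt[loss_rtt]
    after = cwnd_per_rtt[loss_rtt + 1]
    ratio = after / before
    if ratio <= 0.55:       # roughly halved: Reno-style back-off
        return "loss-based (multiplicative decrease)"
    elif ratio <= 0.85:     # milder cut, e.g. CUBIC's ~0.7 factor
        return "loss-based (CUBIC-like back-off)"
    else:                   # cwnd barely moves: loss-agnostic, rate-based
        return "rate-based (loss-agnostic)"

# A CUBIC-like flow drops cwnd to ~70% after the loss at RTT 3:
cubic_like = [10, 20, 30, 40, 28, 29, 30]
# A BBR-like flow keeps pacing from its bandwidth estimate:
bbr_like = [10, 20, 30, 40, 40, 41, 40]

print(classify_cwnd_trace(cubic_like, 3))  # loss-based (CUBIC-like back-off)
print(classify_cwnd_trace(bbr_like, 3))    # rate-based (loss-agnostic)
```

A real census must also normalize for bandwidth and delay changes, as the abstract notes; this sketch only covers the loss-response dimension.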
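The Nesquic abstract above describes collecting library-internal metrics per stack component without changing the library's source. One generic way to do that in a managed language is runtime wrapping; the sketch below shows the idea under that assumption. The names (`instrument`, `Packetizer`) are invented for illustration and are not Nesquic's API.

```python
import time
from collections import defaultdict

# Hypothetical sketch of the "no source changes" instrumentation idea
# (names and API are assumptions, not the real Nesquic tool): wrap a
# component's methods at runtime to record per-component call latencies.

timings = defaultdict(list)

def instrument(obj, method_name, component):
    """Monkey-patch obj.method_name to record how long each call takes."""
    original = getattr(obj, method_name)
    def wrapped(*args, **kwargs):
        start = time.perf_counter()
        result = original(*args, **kwargs)
        timings[component].append(time.perf_counter() - start)
        return result
    setattr(obj, method_name, wrapped)

# Stand-in for a QUIC library's packetization component:
class Packetizer:
    def build_packet(self, payload):
        return b"\x40" + payload  # fake short-header packet

pktz = Packetizer()
instrument(pktz, "build_packet", "packetizer")
pktz.build_packet(b"hello")
print(len(timings["packetizer"]))  # 1 timed call recorded
```

Aggregating such per-component timings is what lets a developer see which stage of the stack underperforms, which is the gap the poster targets.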
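The ABR-Arena abstract above argues that single-region testing can invert conclusions about which ABR algorithm is better. A minimal numeric sketch of that argument, with entirely made-up QoE scores (none of these numbers or names come from the paper):

```python
# Hypothetical illustration (made-up data, not ABR-Arena results): an
# algorithm that wins in one region can lose once diverse regions are
# included in the average.

qoe = {  # invented QoE scores per (algorithm, region)
    "ml_abr":       {"eu": 0.92, "us": 0.90, "asia": 0.55, "africa": 0.40},
    "buffer_based": {"eu": 0.85, "us": 0.84, "asia": 0.78, "africa": 0.70},
}

def mean_qoe(alg, regions):
    """Average QoE of an algorithm over the given set of test regions."""
    return sum(qoe[alg][r] for r in regions) / len(regions)

# Tested only in Europe, the ML policy looks clearly better...
print(mean_qoe("ml_abr", ["eu"]) > mean_qoe("buffer_based", ["eu"]))  # True
# ...but averaged across all regions, the simple heuristic wins.
all_regions = ["eu", "us", "asia", "africa"]
print(mean_qoe("ml_abr", all_regions) < mean_qoe("buffer_based", all_regions))  # True
```

This is the survivorship-bias point in miniature: a platform covering only the first row of regions would never observe the reversal.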