Florian Zaruba


Last Name: Zaruba
First Name: Florian
Organisational unit:

Search Results

Publications 1 - 10 of 22
  • Balkind, Jonathan; Lim, Katie; Schaffner, Michael; et al. (2020)
    ASPLOS '20: Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems
  • Riedel, Samuel; Schuiki, Fabian; Scheffler, Paul; et al. (2021)
    2021 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
    System simulators are essential for the exploration, evaluation, and verification of manycore processors and are vital for writing software and developing programming models in conjunction with architecture design. A promising approach to fast, scalable, and instruction-accurate simulation is binary translation. In this paper, we present Banshee, an instruction-accurate full-system RISC-V multi-core simulator based on LLVM-powered ahead-of-time binary translation that can simulate systems with thousands of cores. Banshee supports the RV32IMAFD instruction set. It also models peripherals, custom ISA extensions, and a multi-level, actively-managed memory hierarchy used in existing multi-cluster systems. Banshee is agnostic to the host architecture, fully open-source, and easily extensible to facilitate the exploration and evaluation of new ISA extensions. As a key novelty with respect to existing binary translation approaches, Banshee supports performance estimation through a lightweight extension, modeling the effect of architectural latencies with an average deviation of only 2% from their actual impact. We evaluate Banshee by simulating various compute-intensive workloads on two large-scale open-source RISC-V manycore systems, Manticore and MemPool (with 4096 and 256 cores, respectively). We achieve simulation speeds of up to 618 MIPS per core or 72 GIPS for complete systems, exhibiting almost perfect scaling, competitive single-core performance, and leading multi-core performance. We demonstrate Banshee’s extensibility by implementing multiple custom RISC-V ISA extensions.
  • Scheffler, Paul; Zaruba, Florian; Schuiki, Fabian; et al. (2023)
    IEEE Transactions on Parallel and Distributed Systems
    Sparse linear algebra is crucial in many application domains, but challenging to handle efficiently in both software and hardware, with one- and two-sided operand sparsity handled with distinct approaches. In this work, we enhance an existing memory-streaming RISC-V ISA extension to accelerate both one- and two-sided operand sparsity on widespread sparse tensor formats like compressed sparse row (CSR) and compressed sparse fiber (CSF) by accelerating the underlying operations of streaming indirection, intersection, and union. Our extensions enable single-core speedups over an optimized RISC-V baseline of up to 7.0×, 7.7×, and 9.8× on sparse-dense multiply, sparse-sparse multiply, and sparse-sparse addition, respectively, and peak FPU utilizations of up to 80% on sparse-dense problems. On an eight-core cluster, sparse-dense and sparse-sparse matrix-vector multiply using real-world matrices are up to 4.9× and 5.9× faster and up to 2.9× and 3.0× more energy efficient. We explore further applications for our extensions, such as stencil codes and graph pattern matching. Compared to recent CPU, GPU, and accelerator approaches, our extensions enable higher flexibility on data representation, degree of sparsity, and dataflow at a minimal hardware footprint, adding only 1.8% in area to a compute cluster. A cluster with our extensions running CSR matrix-vector multiplication achieves 9.9× and 1.7× higher peak floating-point utilizations than recent highly optimized sparse data structures and libraries for CPU and GPU, respectively, even when accounting for off-chip main memory (HBM) and on-chip interconnect latency and bandwidth effects. (A baseline CSR matrix-vector kernel illustrating the indirection pattern these extensions accelerate is sketched after the publication list.)
  • Di Mauro, Alfio; Zaruba, Florian; Schuiki, Fabian; et al. (2020)
    2020 IEEE International Symposium on Circuits and Systems (ISCAS)
    To provide high computational capabilities while minimizing power consumption, modern Systems-on-Chip (SoCs) target very low energy consumption per operation as a primary objective. This goal has been achieved in recent years by adopting simple, yet very effective strategies like aggressive voltage and frequency scaling. However, the process variations that affect highly scaled technology nodes represent a severe limitation to the application of such techniques [2], forcing digital designers to account for significant supply voltage margins to guarantee sign-off frequencies [1].
  • Benz, Thomas; Bertaccini, Luca; Zaruba, Florian; et al. (2021)
    ESSCIRC 2021 - IEEE 47th European Solid State Circuits Conference (ESSCIRC)
    We present Thestral, a 10-core RISC-V chip for energy-proportional parallel computing manufactured in 22 nm FD-SOI technology. Thestral contains a control core and a nine-core compute cluster. Each core features a single-precision floating-point unit (FPU) and an integer processing unit (IPU) and implements custom instruction set architecture (ISA) extensions to improve utilization. The chip features 20 fine-grain power domains: one for each FPU and IPU, as well as one for the entire acceleration cluster. Such aggressive power management granularity is valuable both for extreme-edge computing, where power gating reduces sleep power, and for high-performance computing, where leakage control is required to meet thermal design power constraints and to minimize idle power. We propose a fast and fine-grain power gating architecture with much finer granularity than the state of the art for multi-core computing platforms. A sub-10 ns power-up sequence allows for fine-tuning the compute cluster configuration, powering up only the computational units required for a specific application phase. Our solution enables up to 42% measured power savings for the extreme-edge scenario during sleep mode (@350 MHz, 0.6 V, 25 °C), which is 12.7% more than what can be achieved with aggressive clock-gating. On the other extreme, in an HPC setting, a Thestral-based many-core system running memory-bound applications (@850 MHz, 0.9 V, 75 °C) can save up to 41% power.
  • Schiavone, Pasquale D.; Sanchez, Ernesto; Ruospo, Annachiara; et al. (2018)
    2018 IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC)
  • PULPino: A RISC-V based single-core system
    Item type: Other Conference Item
    Traber, Andreas; Stucki, Sven; Zaruba, Florian; et al. (2015)
  • Mach, Stefan; Schuiki, Fabian; Zaruba, Florian; et al. (2021)
    IEEE Transactions on Very Large Scale Integration (VLSI) Systems
    The slowdown of Moore's law and the power wall necessitate a shift toward finely tunable precision (a.k.a. transprecision) computing to reduce energy footprint. Hence, we need circuits capable of performing floating-point operations on a wide range of precisions with high energy proportionality. We present FPnew, a highly configurable open-source transprecision floating-point unit (TP-FPU) capable of supporting a wide range of standard and custom FP formats. To demonstrate the flexibility and efficiency of FPnew in general-purpose processor architectures, we extend the RISC-V ISA with operations on half-precision, bfloat16, and an 8-bit FP format, as well as SIMD vectors and multiformat operations. Integrated into a 32-bit RISC-V core, our TP-FPU can speed up the execution of mixed-precision applications by 1.67× with respect to an FP32 baseline, while maintaining end-to-end precision and reducing system energy by 37%. We also integrate FPnew into a 64-bit RISC-V core, supporting five FP formats on scalars or 2-, 4-, or 8-way SIMD vectors. For this core, we measured the silicon manufactured in Globalfoundries 22FDX technology across a wide voltage range from 0.45 to 1.2 V. The unit achieves leading-edge measured energy efficiencies between 178 Gflop/s/W (on FP64) and 2.95 Tflop/s/W (on 8-bit mini-floats), and a performance between 3.2 and 25.3 Gflop/s. (A sketch of the kind of mixed-precision kernel these formats target follows the publication list.)
  • Kurth, Andreas; Rönninger, Wolfgang; Benz, Thomas; et al. (2022)
    IEEE Transactions on Computers
    On-chip communication infrastructure is a central component of modern systems-on-chip (SoCs), and it continues to gain importance as the number of cores, the heterogeneity of components, and the on-chip and off-chip bandwidth continue to grow. Decades of research on on-chip networks enabled cache-coherent shared-memory multiprocessors. However, communication fabrics that meet the needs of heterogeneous many-cores and accelerator-rich SoCs, which are not, or only partially, coherent, are a much less mature research area. In this work, we present a modular, topology-agnostic, high-performance on-chip communication platform. The platform includes components to build and link subnetworks with customizable bandwidth and concurrency properties and adheres to a state-of-the-art, industry-standard protocol. We discuss microarchitectural trade-offs and timing/area characteristics of our modules and show that they can be composed to build high-bandwidth (e.g., 2.5 GHz and 1024-bit data width) end-to-end on-chip communication fabrics (not only network switches but also DMA engines and memory controllers) with high degrees of concurrency. We design and implement a state-of-the-art ML training accelerator, where our communication fabric scales to 1024 cores on a die, providing 32 TB/s cross-sectional bandwidth at only 24 ns round-trip latency between any two cores. (A back-of-the-envelope check of these bandwidth figures follows the publication list.)
  • Zaruba, Florian; Schuiki, Fabian; Benini, Luca (2021)
    IEEE Micro
    Data-parallel problems demand ever-growing floating-point (FP) operations per second under tight area- and energy-efficiency constraints. In this work, we present Manticore, a general-purpose, ultra-efficient chiplet-based architecture for data-parallel FP workloads. We have manufactured a prototype of the chiplet's computational core in the Globalfoundries 22FDX process and demonstrate more than 5× improvement in energy efficiency on FP-intensive workloads compared to CPUs and GPUs. The compute capability at high energy and area efficiency is provided by Snitch clusters [1] containing eight small integer cores, each controlling a large floating-point unit (FPU). The core supports two custom ISA extensions: the Stream Semantic Register (SSR) extension elides explicit load and store instructions by encoding them as register reads and writes [2], and the Floating-point Repetition (FREP) extension decouples the integer core from the FPU, allowing floating-point instructions to be issued independently. These two extensions allow the single-issue core to minimize its instruction fetch bandwidth and saturate the instruction bandwidth of the FPU, achieving FPU utilization above 90%, with more than 40% of the core area dedicated to the FPU. (A baseline loop illustrating what these two extensions remove is sketched after the publication list.)
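
The sketches below illustrate workload patterns referenced in the abstracts above. They are generic illustrations written for this listing, not code from the cited papers. First, a minimal scalar CSR sparse-dense matrix-vector product, assuming the usual row-pointer/column-index/value layout: the indirect gather x[col_idx[j]] is the streaming-indirection pattern that the sparse ISA extensions of Scheffler et al. accelerate.

    /* Minimal sketch: scalar CSR sparse-dense matrix-vector product y = A*x.
     * The indirect load x[col_idx[j]] is the access pattern accelerated by the
     * streaming-indirection extension; names and layout are generic. */
    #include <stddef.h>

    void spmv_csr(size_t n_rows,
                  const size_t *row_ptr,   /* length n_rows + 1 */
                  const size_t *col_idx,   /* length nnz        */
                  const double *values,    /* length nnz        */
                  const double *x,         /* dense input vector  */
                  double *y)               /* dense output vector */
    {
        for (size_t i = 0; i < n_rows; ++i) {
            double acc = 0.0;
            for (size_t j = row_ptr[i]; j < row_ptr[i + 1]; ++j) {
                acc += values[j] * x[col_idx[j]];  /* indirect gather from x */
            }
            y[i] = acc;
        }
    }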
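
Next, a minimal sketch of the kind of mixed-precision kernel a transprecision FPU such as FPnew targets: narrow FP16 operands with a wider FP32 accumulator. The _Float16 type assumes a compiler and target that support it; this illustrates the workload class only, not FPnew's interface.

    /* Mixed-precision dot product: FP16 storage, FP32 accumulation. */
    #include <stddef.h>

    float dot_fp16_acc_fp32(size_t n, const _Float16 *a, const _Float16 *b)
    {
        float acc = 0.0f;                     /* wide accumulator preserves precision */
        for (size_t i = 0; i < n; ++i) {
            acc += (float)a[i] * (float)b[i]; /* narrow operands, widened multiply-add */
        }
        return acc;
    }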
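
A back-of-the-envelope check of the bandwidth figures quoted for the on-chip communication platform of Kurth et al.: a 1024-bit data bus at 2.5 GHz moving one beat per cycle yields 320 GB/s per channel, so the quoted 32 TB/s cross-section corresponds to on the order of one hundred such ideal channels. This assumes no protocol overhead and is purely illustrative.

    /* Bandwidth arithmetic for the figures quoted in the abstract above. */
    #include <stdio.h>

    int main(void)
    {
        const double width_bits = 1024.0;            /* data bus width      */
        const double freq_hz    = 2.5e9;             /* clock frequency     */
        const double cross_section_bytes_s = 32e12;  /* quoted 32 TB/s      */

        double per_channel_bytes_s = width_bits / 8.0 * freq_hz;        /* 320 GB/s */
        double channels = cross_section_bytes_s / per_channel_bytes_s;  /* ~100     */

        printf("per-channel bandwidth: %.0f GB/s\n", per_channel_bytes_s / 1e9);
        printf("equivalent ideal channels across the cross-section: %.0f\n", channels);
        return 0;
    }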
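
Finally, a plain scalar dot product showing what the SSR and FREP extensions of the Manticore/Snitch abstract remove: per-iteration loads and loop control around a single fused multiply-add. With SSRs the operand accesses become stream-register reads, and FREP repeats the FP instruction without the integer core re-issuing the loop. Again a generic illustration, not the Snitch cluster's actual code.

    /* Baseline scalar dot product; comments note what SSR/FREP would elide. */
    #include <stddef.h>

    double dot(size_t n, const double *a, const double *b)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; ++i) {   /* loop overhead FREP would eliminate   */
            acc += a[i] * b[i];            /* loads SSRs would turn into reg reads */
        }
        return acc;
    }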