Rui Pedro Zenhas Graça



Last Name: Zenhas Graça
First Name: Rui Pedro


Publications 1 - 10 of 11
  • Alsaad, Zinah M.; Logan, Julie V.; Morath, Christian P.; et al. (2024)
    Journal of Applied Physics
    Forthcoming infrared event-based sensors will have utility in the space-based surveillance domain where they could potentially perform traditional sensing and tracking functions with significantly enhanced temporal resolution and reduced downstream datalink demands and power consumption. As a first step toward extending event-based sensing technology into the mid-wave infrared, a DC simulation of the conventional event-based sensor unit cell's photoreceptor circuit is performed, and the results are compared with measurements of a printed circuit board implementation of the same circuit to assess what design freedom is available to interface the photoreceptor with a mid-wave infrared photodetector. Detailed analysis of the circuit and measurements provides insight into which fundamental properties of the transistors drive the photoreceptor's dynamic range and demonstrates several characteristics that are relevant to mid-wave infrared sensing. These characteristics include the capability to produce a stable, low voltage bias on the infrared photodetector to maximize sensitivity, and that operating the circuit below room temperature increases the photoreceptor's dynamic range. Measurements show that even the simplest implementation of the photoreceptor circuit exhibits a dynamic range of logarithmic compression of 150 dB at room temperature. However, the dynamic range is ultimately limited by the mid-wave infrared photodetector's dark current activation energy, which is significantly lower than the threshold voltage (energy) of the photoreceptor's feedback transistor and, thus, there is an incentive for the photodetector's dark current to be diffusion-limited.
  • Zenhas Graça, Rui Pedro; McReynolds, Brian; Delbrück, Tobias (2023)
    2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
The operation of the Dynamic Vision Sensor (DVS) event camera is controlled by the user through adjusting different bias parameters. These biases affect the response of the camera by controlling, among other parameters, the bandwidth, sensitivity, and maximum firing rate of the pixels. Besides determining the response of the camera to input signals, biases significantly impact its noise performance. Bias optimization is a multivariate process depending on the task and the scene, to which the user's knowledge about pixel design and non-idealities can be of great importance. In this paper, we go step-by-step along the signal pathway of the DVS pixel, shining light on its low-level operation and non-idealities, comparing pixel-level measurements with array-level measurements, and discussing how biasing and illumination affect the pixel's behavior. With the results and discussion presented, we aim to help DVS users achieve more hardware-aware camera utilization and modelling.
  • Zenhas Graça, Rui Pedro; McReynolds, Brian; Delbrück, Tobias (2023)
Under dim lighting conditions, the output of Dynamic Vision Sensor (DVS) event cameras is strongly affected by noise. Photon and electron shot noise cause a high rate of non-informative events that reduce the signal-to-noise ratio. DVS noise performance depends not only on the scene illumination, but also on the user-controllable biasing of the camera. In this paper, we explore the physical limits of DVS noise, showing that the DVS photoreceptor is limited to a theoretical minimum of 2x photon shot noise, and we discuss how biasing the DVS with high photoreceptor bias and adequate source-follower bias approaches optimal noise performance. We support our conclusions with pixel-level measurements of a DAVIS346 and analysis of a theoretical pixel model.
  • McReynolds, Brian; Zenhas Graça, Rui Pedro; Delbrück, Tobias (2022)
    Optical Engineering
    Dynamic vision sensors (DVS) represent a promising new technology, offering low power consumption, sparse output, high temporal resolution, and wide dynamic range. These features make DVS attractive for new research areas including scientific and space-based applications; however, more precise understanding of how sensor input maps to output under real-world constraints is needed. Often, metrics used to characterize DVS report baseline performance by measuring observable limits but fail to characterize the physical processes at the root of those limits. To address this limitation, we describe step-by-step procedures to measure three important performance parameters: (1) temporal contrast threshold, (2) cutoff frequency, and (3) refractory period. Each procedure draws inspiration from previous work, but links measurements sequentially to infer physical phenomena at the root of measured behavior. Results are reported over a range of brightness levels and user-defined biases. The threshold measurement technique is validated with test-pixel node voltages, and a first-order low-pass approximation of photoreceptor response is shown to predict event cutoff temporal frequency to within 9% accuracy. The proposed method generates lab-measured parameters compatible with the event camera simulator v2e, allowing more accurate generation of synthetic datasets for innovative applications.
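The pixel behavior this abstract parameterizes (logarithmic response, first-order low-pass photoreceptor, contrast threshold, refractory period) can be sketched as a toy event-generation model. The function below is an illustrative assumption, not the v2e implementation; all names, defaults, and the reset-to-current-value convention after an event are choices made for this sketch.

```python
import math

def dvs_events(samples, dt, theta_on=0.2, theta_off=0.2, f3db=100.0, t_ref=1e-3):
    """Toy first-order DVS pixel model (illustrative, not v2e).

    samples : photocurrent samples (arbitrary units, > 0), one every dt seconds.
    theta_on/theta_off : ON/OFF log-contrast thresholds.
    f3db : photoreceptor low-pass cutoff frequency (Hz).
    t_ref : refractory period (s).
    Returns a list of (time, polarity) events, polarity in {+1, -1}.
    """
    tau = 1.0 / (2.0 * math.pi * f3db)          # first-order time constant
    alpha = dt / (dt + tau)                     # discrete low-pass coefficient
    lp = math.log(samples[0])                   # low-pass filtered log intensity
    mem = lp                                    # value memorized at the last event
    last_event_t = -float("inf")
    events = []
    for i, s in enumerate(samples):
        t = i * dt
        lp += alpha * (math.log(s) - lp)        # photoreceptor low-pass response
        if t - last_event_t < t_ref:            # pixel still refractory
            continue
        if lp - mem >= theta_on:                # positive log-contrast crossing
            events.append((t, +1))
            mem = lp
            last_event_t = t
        elif mem - lp >= theta_off:             # negative log-contrast crossing
            events.append((t, -1))
            mem = lp
            last_event_t = t
    return events
```

A brightness step from 1.0 to 2.0 (log contrast ln 2 ≈ 0.69) produces a short burst of ON events whose count and timing depend on the threshold, cutoff frequency, and refractory period, which is the coupling between parameters that the measurement procedures above are designed to untangle.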
  • McReynolds, Brian J.; Zenhas Graça, Rui Pedro; O'Keefe, Daniel; et al. (2023)
Proceedings of SPIE: Unconventional Imaging, Sensing, and Adaptive Optics 2023
    Neuromorphic cameras, or Event-based Vision Sensors (EVS), operate in a fundamentally different way than conventional frame-based cameras. Their unique operational paradigm results in a sparse stream of high temporal resolution output events which encode pixel-level brightness changes with low-latency and wide dynamic range. Recently, interest has grown in exploiting these capabilities for scientific studies; however, accurately reconstructing signals from the output event stream presents a challenge due to physical limitations of the analog circuits that implement logarithmic change detection. In this paper, we present simultaneous recordings of lightning strikes using both an event camera and frame-based high-speed camera. To our knowledge, this is the first side-by-side recording using these two sensor types in a real-world scene with challenging dynamics that include very fast and bright illumination changes. Our goal in this work is to accurately map the illumination to EVS output in order to better inform modeling and reconstruction of events from a real-scene. We first combine lab measurements of key performance metrics to inform an existing pixel model. We then use the high-speed frames as signal ground truth to simulate an event stream and refine parameter estimates to optimally match the event-based sensor response for several dozen pixels representing different regions of the scene. These results will be used to predict sensor response and develop methods to more precisely reconstruct lightning and sprite signals for Falcon ODIN, our upcoming International Space Station neuromorphic sensing mission.
  • McReynolds, Brian J.; Zenhas Graça, Rui Pedro; Kulesza, Lucas; et al. (2024)
Proceedings of SPIE: Unconventional Optical Imaging IV
Biologically inspired event-based vision sensors (EVS) are growing in popularity due to performance benefits including ultra-low power consumption, high dynamic range, data sparsity, and fast temporal response. They efficiently encode dynamic information from a visual scene through pixels that respond autonomously and asynchronously when the per-pixel illumination level changes by a user-selectable contrast threshold ratio, θ. Due to their unique sensing paradigm and complex analog pixel circuitry, characterizing an EVS is non-trivial. The step-response probability curve (S-curve) is a key measurement technique that has emerged as the standard for measuring θ. Though the general concept is straightforward, obtaining accurate results requires a thorough understanding of pixel circuitry and non-idealities to correctly obtain and interpret results. Furthermore, the precise measurement procedure has not been standardized across the field, and resulting parameter estimates depend strongly on methodology, measurement conditions, and biasing, which are not generally discussed. In this work, we detail the method for generating accurate S-curves by applying an appropriate stimulus and sensor configuration to decouple 2nd-order effects from the parameter being studied. We use an EVS pixel simulation to demonstrate how noise and other physical constraints can lead to error in the measurement, and develop two techniques that are robust enough to obtain accurate estimates. We then apply best practices derived from our simulation to generate S-curves for the latest-generation Sony IMX636 and interpret the resulting family of curves to correct the apparent anomalous result of previous reports suggesting that θ changes with illumination.
Further, we demonstrate that with correct interpretation, fundamental physical parameters such as dark current and RMS noise can be accurately inferred from a collection of S-curves, leading to more accurate parameterization for high-fidelity EVS simulations.
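The S-curve described above plots the probability that a pixel fires on a step of a given log contrast; when the effective threshold fluctuates (mismatch plus noise), this probability traces a sigmoid whose 50% point estimates θ. The sketch below is a minimal Monte Carlo illustration under the simplifying assumption of a Gaussian effective threshold; the function names, the Gaussian model, and the interpolation estimator are assumptions of this sketch, not the paper's procedure.

```python
import random

def s_curve_point(contrast, theta=0.2, sigma=0.05, trials=2000, rng=None):
    """One S-curve point: Monte Carlo estimate of the probability that a
    step of the given log contrast fires an event, assuming the effective
    threshold is drawn from N(theta, sigma) (noise + mismatch, an
    illustrative assumption)."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    hits = sum(1 for _ in range(trials)
               if contrast >= rng.gauss(theta, sigma))
    return hits / trials

def estimate_theta(contrasts, probs):
    """Crude threshold estimate: the contrast where the S-curve crosses
    50%, found by linear interpolation between the bracketing samples."""
    for (c0, p0), (c1, p1) in zip(zip(contrasts, probs),
                                  zip(contrasts[1:], probs[1:])):
        if p0 < 0.5 <= p1:
            return c0 + (0.5 - p0) * (c1 - c0) / (p1 - p0)
    return None

# Sweep contrast, build the S-curve, and read off theta at the 50% point.
contrasts = [i * 0.01 for i in range(41)]
probs = [s_curve_point(c) for c in contrasts]
theta_hat = estimate_theta(contrasts, probs)
```

In this toy setup `theta_hat` lands close to the nominal θ = 0.2; in real measurements the same 50%-point readout is only as good as the decoupling of second-order effects that the abstract emphasizes.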
  • Kim, Kwantae; Gao, Chang; Zenhas Graça, Rui Pedro; et al. (2022)
Digest of Technical Papers, 2022 IEEE International Solid-State Circuits Conference (ISSCC)
Voice-controlled interfaces on acoustic Internet-of-Things (IoT) sensor nodes and mobile devices require integrated low-power always-on wake-up functions such as Voice Activity Detection (VAD) and Keyword Spotting (KWS) to ensure longer battery life. Most VAD and KWS ICs focused on reducing the power of the feature extractor (FEx) as it is the most power-hungry building block. A serial Fast Fourier Transform (FFT)-based KWS chip [1] achieved 510 nW; however, it suffered from a high 64-ms latency and was limited to detection of only 1-to-4 keywords (2-to-5 classes). Although the analog FEx [2]–[3] for VAD/KWS reported 0.2 μW to 1 μW and 10 ms to 100 ms latency, neither demonstrated >5 classes in keyword detection. In addition, their voltage-domain implementations cannot benefit from process scaling because the low supply voltage reduces signal swing, and the degradation of intrinsic gain forces transistors to have larger lengths and poor linearity.
  • Kim, Kwantae; Gao, Chang; Zenhas Graça, Rui Pedro; et al. (2022)
    IEEE Journal of Solid-State Circuits
This article presents the first keyword spotting (KWS) IC that uses a ring-oscillator-based time-domain processing technique for its analog feature extractor (FEx). Its extensive usage of time-encoding schemes allows the analog audio signal to be processed in a fully time-domain manner except for the voltage-to-time conversion stage of the analog front end. Benefiting from fundamental building blocks based on digital logic gates, it offers better technology scalability compared to conventional voltage-domain designs. Fabricated in a 65-nm CMOS process, the prototyped KWS IC occupies 2.03 mm² and dissipates 23 μW, including the analog FEx and digital neural network classifier. The 16-channel time-domain FEx achieves a 54.89-dB dynamic range for a 16-ms frame shift size while consuming 9.3 μW. The measurement result verifies that the proposed IC performs a 12-class KWS task on the Google Speech Command dataset (GSCD) with >86% accuracy and 12.4-ms latency.
  • Zenhas Graça, Rui Pedro; Zhou, Sheng; McReynolds, Brian; et al. (2024)
    2024 IEEE European Solid-State Electronics Research Conference (ESSERC)
    This paper reports a Dynamic Vision Sensor (DVS) event camera that is 6x more sensitive at 14x lower illumination than existing commercial and prototype cameras. Event cameras output a sparse stream of brightness change events. Their high dynamic range (HDR), quick response, and high temporal resolution provide key advantages for scientific applications that involve low lighting conditions and sparse visual events. However, current DVS are hindered by low sensitivity, resulting from shot noise and pixel-to-pixel mismatch. Commercial DVS have a minimum brightness change threshold of >10%. Sensitive prototypes achieved as low as 1%, but required kilo-lux illumination. Our SciDVS prototype fabricated in a 180 nm CMOS image sensor process achieves 1.7% sensitivity at chip illumination of 0.7 lx and 18 Hz bandwidth. Novel features of SciDVS are (1) an autocentering in-pixel preamplifier providing intrascene HDR and increased sensitivity, (2) improved control of bandwidth to limit shot noise, and (3) optional pixel binning, allowing the user to trade spatial resolution for sensitivity.
  • Delbrück, Tobias; Li, Chenghan; Zenhas Graça, Rui Pedro; et al. (2022)
    2022 IEEE International Conference on Image Processing (ICIP)
    Standard dynamic vision sensor (DVS) event cameras output a stream of spatially-independent log-intensity brightness change events so they cannot suppress spatial redundancy. Nearly all biological retinas use an antagonistic center-surround organization. This paper proposes a practical method of implementing a compact, energy-efficient Center Surround DVS (CSDVS) with a surround smoothing network that uses compact polysilicon resistors for lateral resistance. The paper includes behavioral simulation results for the CSDVS (see sites.google.com/view/csdvs/home). The CSDVS would significantly reduce events caused by low spatial frequencies, but amplify the informative high frequency spatiotemporal events.
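The center-surround idea in this abstract (a smoothing network supplies a local average that is subtracted from each pixel) can be illustrated with a toy digital analogue: iterative neighbor averaging stands in for the resistive grid, and the output is center minus surround. This is only a behavioral sketch under assumptions of this example; the CSDVS circuit itself is analog and event-driven.

```python
def center_surround(frame, iters=10, coupling=0.25):
    """Toy center-surround response for a 2-D list of floats.

    The surround is formed by repeatedly relaxing each value toward the
    mean of its 4-neighbors (a crude stand-in for a polysilicon resistive
    grid); the output is the input minus that smoothed surround, which
    suppresses low spatial frequencies and emphasizes high ones.
    """
    h, w = len(frame), len(frame[0])
    surround = [row[:] for row in frame]
    for _ in range(iters):
        nxt = [row[:] for row in surround]
        for y in range(h):
            for x in range(w):
                nbrs = []
                if y > 0:     nbrs.append(surround[y - 1][x])
                if y < h - 1: nbrs.append(surround[y + 1][x])
                if x > 0:     nbrs.append(surround[y][x - 1])
                if x < w - 1: nbrs.append(surround[y][x + 1])
                nxt[y][x] += coupling * (sum(nbrs) / len(nbrs) - surround[y][x])
        surround = nxt
    return [[frame[y][x] - surround[y][x] for x in range(w)]
            for y in range(h)]
```

A uniform input (pure low spatial frequency) yields zero output everywhere, while an isolated bright pixel produces a positive center with a negative surrounding ring, the antagonistic organization the paper borrows from biological retinas.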