Vincent Becker
13 results
Search Results
Publications 1 - 10 of 13
- Combining gaze estimation and optical flow for pursuits interaction
  Item type: Conference Paper
  ETRA '20 Full Papers: ACM Symposium on Eye Tracking Research and Applications
  Bâce, Mihai; Becker, Vincent; Wang, Chenyang; et al. (2020)
- Investigating universal appliance control through wearable augmented reality
  Item type: Conference Paper
  Proceedings of the 10th Augmented Human International Conference 2019
  Becker, Vincent; Rauchenstein, Felix; Sörös, Gábor (2019)
- GestEar: Combining audio and motion sensing for gesture recognition on smartwatches
  Item type: Conference Paper
  Proceedings of the 2019 International Symposium on Wearable Computers (ISWC '19), September 9–13, 2019, London, United Kingdom. ACM, New York, NY, USA
  Becker, Vincent; Fessler, Linus; Sörös, Gábor (2019)
- Facilitating Object Detection and Recognition through Eye Gaze
  Item type: Conference Paper
  Bâce, Mihai; Schlattner, Philippe; Becker, Vincent; et al. (2017)
  When compared to image recognition, object detection is a much more challenging task because it requires accurate real-time localization of an object in the target image. In interaction scenarios, this pipeline can be simplified by incorporating the user's point of regard. Wearable eye trackers can estimate the gaze direction, but lack processing capabilities of their own. We enable mobile gaze-aware applications by developing an open-source platform which supports mobile eye tracking based on the Pupil headset and a smartphone running Android OS. Through our platform, we offer researchers and developers a rapid prototyping environment for gaze-enabled applications. We describe the concept, our current progress, and research implications.
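The gaze-prior simplification this abstract describes can be illustrated with a short sketch (an illustrative example, not the platform's actual code; the function name, crop size, and clamping logic are assumptions): instead of searching the full frame, a detector would only run on a crop centred on the estimated gaze point.

```python
import numpy as np

def gaze_roi(image: np.ndarray, gaze_xy: tuple, size: int = 128) -> np.ndarray:
    """Crop a square region of interest around the estimated gaze point.

    Restricting a detector to this crop avoids searching the full frame;
    the crop origin is clamped so the ROI always lies inside the image.
    """
    h, w = image.shape[:2]
    half = size // 2
    gx, gy = gaze_xy
    x0 = min(max(gx - half, 0), max(w - size, 0))
    y0 = min(max(gy - half, 0), max(h - size, 0))
    return image[y0:y0 + size, x0:x0 + size]

frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = gaze_roi(frame, (600, 20))  # gaze near the top-right corner
print(roi.shape)                  # (128, 128, 3)
```

Even near the image border, the clamp keeps the crop at its full size, so the downstream detector always sees a fixed input resolution.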
- Augmented humans interacting with an augmented world
  Item type: Other Conference Item
  Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct
  Becker, Vincent (2018)
  Our research explores a seamless interaction with smart devices, which we divide into three stages: (i) device recognition, (ii) user input recognition, and (iii) inferring the appropriate action to be carried out on the device (cf. Figure 1). We leverage wearable computers and combine them into one interaction system. This makes interactions ubiquitous in two ways: while smart devices are becoming increasingly ubiquitous, our wearable system will be ubiquitously available to the user, making it possible to interact everywhere, at any time.
- Automatically estimating the savings potential of occupancy-based heating strategies
  Item type: Working Paper
  Becker, Vincent; Kleiminger, Wilhelm; Mattern, Friedemann (2017)
  A large fraction of the energy consumed in households is due to space heating. Especially during daytime, the heating is often running constantly, controlled only by a thermostat, even if the inhabitants are not present. Taking advantage of the inhabitants' absence to save heating energy by lowering the temperature thus poses a great opportunity. Since the concrete savings of an occupancy-based heating strategy strongly depend on the individual occupancy pattern, a fast and inexpensive method to quantify these potential savings would be beneficial. In this paper we present such a practical method, which builds upon an approach to estimate a household's occupancy from its historical electricity consumption data, as gathered by smart meters. Based on the derived occupancy data, we automatically calculate the potential savings. Besides occupancy data, the underlying model also takes into account publicly available weather data and relevant building characteristics.
  Using this approach, households with a high potential for energy savings can be quickly identified, and their members can be more easily convinced to adopt an occupancy-based heating strategy (either by manually adjusting the thermostat or by investing in automation), since their monetary benefits can be calculated and the risk of misinvestment is thus reduced. To prove the usefulness of our system, we apply it to a large dataset containing relevant building and household data, such as the size and age of several thousand households, and show that, on average, a household can save over 9% heating energy when following an occupancy-based heating regime, while certain groups, such as single-person households, can even save 14% on average.
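The savings arithmetic behind such an estimate can be sketched with a degree-hour heuristic (a minimal illustration, not the paper's building model; the setpoint, setback, and proportionality assumption are all hypothetical choices for the example): heating demand is taken as proportional to the indoor/outdoor temperature difference, and the setpoint is lowered during detected absence.

```python
def relative_savings(occupancy, outdoor_temp, setpoint=21.0, setback=4.0):
    """Estimate the relative heating savings of an occupancy-based setback.

    occupancy: per-hour booleans (True = someone is home)
    outdoor_temp: per-hour outdoor temperatures in degrees Celsius
    Demand per hour is approximated by max(setpoint - outdoor, 0).
    """
    baseline = sum(max(setpoint - t, 0.0) for t in outdoor_temp)
    reduced = sum(
        max((setpoint if home else setpoint - setback) - t, 0.0)
        for home, t in zip(occupancy, outdoor_temp)
    )
    return 1.0 - reduced / baseline if baseline else 0.0

# One day: absent 09:00-17:00, constantly 5 degrees C outside.
occ = [True] * 9 + [False] * 8 + [True] * 7
temps = [5.0] * 24
print(round(relative_savings(occ, temps), 3))  # 0.083
```

With eight absent hours out of twenty-four and a 4 K setback against a 16 K baseline difference, the heuristic yields about 8% savings, the same order of magnitude as the averages reported in the abstract.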
- Exploring zero-training algorithms for occupancy detection based on smart meter measurements
  Item type: Journal Article
  Computer Science, Research + Development
  Becker, Vincent; Kleiminger, Wilhelm (2018)
  Detecting occupancy in households is becoming increasingly important for enabling context-aware applications in smart homes. For example, smart heating systems, which aim at optimising the heating energy, often use occupancy to determine when to heat the home. The occupancy schedule of a household can be inferred from its electricity consumption, as its changes indicate the presence or absence of inhabitants. As smart meters become more widespread, the real-time electricity consumption of households is often available in digital form. For such data, supervised classifiers are typically employed as occupancy detection mechanisms. However, these have to be trained on data labelled with the occupancy ground truth. Labelling occupancy data requires high effort and may sometimes even be impossible, making these methods difficult to apply in real-world settings. Alternatively, one could use unsupervised classifiers, which do not require any labelled data for training. In this work, we introduce and explain several unsupervised occupancy detection algorithms. We evaluate these algorithms by applying them to three publicly available datasets with ground-truth occupancy data, and compare them to one existing unsupervised classifier and several supervised classifiers. Two of the unsupervised algorithms perform best, and we find that the unsupervised classifiers outperform the supervised ones we compared them to. Interestingly, we achieve similar classification performance on coarse-grained aggregated datasets and their fine-grained counterparts.
- Interacting with Smart Devices - Advancements in Gesture Recognition and Augmented Reality
  Item type: Doctoral Thesis
  Becker, Vincent (2020)
  Over the last decades, embedded computer systems have become powerful and widespread with remarkable success.
  Besides traditional computers, such as desktops, laptops, smartphones, and servers, such systems have become part of nearly every technical appliance, for example cars, televisions, and washing machines, and have thereby become an essential part of our lives. A common term for such appliances is "smart device", which encompasses equipment to which one can digitally connect in order to exchange information and commands. Such a device can potentially also sense its environment, process the measurements, and act upon them. Although computers and the devices containing them play such an important role, the way in which we interact with them has not changed much since the early days. Humans are required to control them in a manner very different from human-to-human communication. In particular, the possibilities to provide input to a smart device are mostly limited to traditional interfaces, such as buttons, knobs, or keyboards, or graphical representations thereof on displays. Technological progress in recent years in hardware, as well as in algorithmic methods, e.g. in machine learning, enables novel solutions for human-computer (or in fact human-device) interaction. Only recently has the use of speech become practical; it is now used on smartphones as well as for home devices, incorporating a modality into the interaction process that is innate to humans and rich in expressiveness. However, in natural communication other modalities, such as gestures, complement speech, and they may do so in human-computer communication as well, enabling simple and spontaneous interactions and avoiding the known social awkwardness of having to talk to devices. The adoption of speech, and of other commonly used interaction methods such as touch input on smartphones, indicates the relevance of considering further modalities in addition to the traditional ones.
  This dissertation contributes to further bridging the interaction gap between humans and smart devices by exploring solutions in the following areas:
  (i) Wearable gesture recognition based on electromyography (EMG): The touch of our fingers is widely used for interaction. However, most approaches only consider binary touch events. We present a method which classifies finger touches using a novel neural network architecture and estimates their force based on data recorded from a wireless EMG armband. Our method runs in real time on a smartphone and allows for new interactions with devices and objects, as any surface can be turned into an interactive surface and additional functionality can be encoded through single fingers and the force applied.
  (ii) Wearable gesture recognition based on sound and motion: Besides motion, gestures may also emit sound. We develop a recognition method for sound-emitting gestures, such as snapping, knocking, or clapping, employing only a standard smartwatch. Besides the motion information from the built-in accelerometer and gyroscope, we exploit audio data recorded by the smartwatch microphone as input. We propose a lightweight convolutional neural network architecture for gesture recognition, specifically designed to run locally on resource-constrained devices. It achieves a user-independent recognition accuracy of 97.2% for nine distinct gestures. We find that the audio input drastically reduces the false positive rate in continuous recognition compared to using motion alone.
  (iii) Device representations in wearable augmented reality (AR): While AR technology is becoming increasingly available to the public, ways of interacting in the AR space are not yet fully understood. We investigate how users can control smart devices in AR. Connected devices are augmented with interaction widgets representing them; for example, a widget can be overlaid on a loudspeaker to control its volume. We explore three ways of manipulating the virtual widgets in a user study: (1) in-air finger pinching and sliding, (2) whole-arm gestures rotating and waving, and (3) incorporating physical objects in the surroundings and mapping their movements to interaction primitives. We find significant differences in the users' preferences, the speed of executing commands, and the granularity of the type of control. While these methods only apply to controlling a single device at a time, in a second step we create a method which also takes potential connections between devices into account. Users can view, create, and manipulate connections between smart devices in AR using simple gestures.
  (iv) Personalizable user interfaces from simple materials: User interfaces rarely adapt to specific user preferences or the task at hand. We present a method that allows the quick and inexpensive creation of personalized interfaces from paper. Users can cut out shapes and assign control functions to these paper snippets via a simple configuration interface. After configuration, control takes place entirely through the manipulation of the paper shapes, providing the experience of a tailored tangible user interface. The shapes, which are monitored by a camera with depth sensing, can be dynamically changed during use.
  The proposed methods aim at a more natural interaction with smart devices through advanced sensing and processing in the user's environment or on the body itself. As these interactions could be made ubiquitously available through wearable computers, our methods could help improve the usability of the growing number of smart devices and make them more easily accessible to more people.
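The audio-gating idea in contribution (ii) can be sketched minimally (an illustration of the fusion principle, not the dissertation's convolutional network; the energy features and thresholds are assumptions for the example): motion energy proposes gesture candidates, and only a simultaneous audio onset confirms them, suppressing the false positives that everyday arm movement would otherwise trigger.

```python
import numpy as np

def detect_gesture_events(accel_energy, audio_energy,
                          motion_thresh=1.5, audio_thresh=0.5):
    """Flag frames where BOTH motion and sound exceed their thresholds.

    Motion alone also fires on ordinary arm movement; requiring a
    simultaneous audio onset (snap, knock, clap) gates those frames out.
    """
    accel_energy = np.asarray(accel_energy)
    audio_energy = np.asarray(audio_energy)
    return (accel_energy > motion_thresh) & (audio_energy > audio_thresh)

motion = [0.2, 2.0, 2.1, 0.3, 1.8]  # arm movement in frames 1, 2, 4
audio = [0.1, 0.9, 0.1, 0.0, 0.7]   # sound onsets only in frames 1 and 4
print(detect_gesture_events(motion, audio).tolist())
# [False, True, False, False, True]
```

Frame 2 has strong motion but no sound and is rejected, which is exactly the false-positive reduction the abstract attributes to the audio input.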
- TouchSense: Classifying Finger Touches and Measuring their Force with an Electromyography Armband
  Item type: Conference Paper
  ISWC '18 Proceedings of the 2018 ACM International Symposium on Wearable Computers
  Becker, Vincent; Oldrati, Pietro; Barrios, Liliana; et al. (2018)
- Estimating the savings potential of occupancy-based heating strategies
  Item type: Journal Article
  Energy Informatics ~ Proceedings of the 7th DACH+ Conference on Energy Informatics
  Becker, Vincent; Kleiminger, Wilhelm; Coroamă, Vlad C.; et al. (2018)
  Because space heating causes a large fraction of the energy consumed in households, occupancy-based heating systems have become more and more popular in recent years. However, there is still no practical method to estimate the potential energy savings before installing such a system. While substantial work has been done on occupancy detection, previous work does not combine it with heating simulation in order to provide an easily applicable method for estimating this savings potential. In this paper we present such a combination of an occupancy detection algorithm based on smart electricity meter data and a building heating simulation, which only requires publicly available weather data and some relevant building characteristics. We apply our method to a dataset containing such data for several thousand households and show that, when taking occupancy into account, a household can save over 9% heating energy on average, while certain groups, such as employed single-person households, can even save 14% on average. Using our approach, households with high potential for energy savings can be quickly identified, and their inhabitants could be more easily convinced to adopt an occupancy-based heating strategy.
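A minimal zero-training occupancy baseline in the spirit of this line of work (an illustration only, not one of the algorithms evaluated in these papers; the margin factor and standby estimate are assumptions) thresholds smart-meter readings against the household's always-on base load: consumption well above the standby level suggests that someone is home and actively using appliances.

```python
def detect_occupancy(consumption_watts, margin=1.5):
    """Label each reading occupied if it exceeds margin x the standby load.

    The standby load is estimated as the smallest observed reading,
    i.e. the always-on base consumption (fridge, router, ...).
    No labelled training data is needed, hence "zero-training".
    """
    standby = min(consumption_watts)
    return [w > margin * standby for w in consumption_watts]

readings = [120, 115, 450, 600, 130, 980]  # watts, e.g. 15-minute averages
print(detect_occupancy(readings))
# [False, False, True, True, False, True]
```

This is deliberately crude (a long trip with the fridge cycling harder would fool it), which is why the published evaluations compare several unsupervised algorithms against supervised classifiers on datasets with ground-truth occupancy.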