Barnabas Gavin Cangan
Last Name: Cangan
First Name: Barnabas Gavin
ORCID
Organisational unit: 09689 - Katzschmann, Robert
Publications 1 - 10 of 12
- Model-Based Capacitive Touch Sensing in Soft Robotics: Achieving Robust Tactile Interactions for Artistic Applications
  Item type: Journal Article
  IEEE Robotics and Automation Letters
  Silva-Plata, Carolina; Rosel, Carlos; Cangan, Barnabas Gavin; et al. (2025)
  In this letter, we present a touch technology to achieve tactile interactivity for human-robot interaction (HRI) in soft robotics. By combining a capacitive touch sensor with an online solid mechanics simulation provided by the SOFA framework, contact detection is achieved for arbitrary shapes. Furthermore, the implementation of the capacitive touch technology presented here is selectively sensitive to human touch (conductive objects), while it is largely unaffected by the deformations created by the pneumatic actuation of our soft robot. Multi-touch interactions are also possible. We evaluated our approach with an organic soft robotic sculpture created by a visual artist. In particular, we show that the touch localization capabilities remain robust under deformation of the device. We discuss the potential this approach has for the arts and entertainment as well as other domains.
- Multi-tap Resistive Sensing and FEM Modeling enables Shape and Force Estimation in Soft Robots
  Item type: Journal Article
  IEEE Robotics and Automation Letters
  Tian, Sizhe; Cangan, Barnabas Gavin; Navarro, Stefan Escaida; et al. (2024)
  We address the challenge of reliable and accurate proprioception in soft robots, specifically soft robots with tight packaging constraints that rely only on internally embedded sensors. While various sensing approaches with single sensors have commonly been tried under a constant curvature assumption, we look into sensing local deformations at multiple sensor locations. In our approach, we multi-tap an off-the-shelf resistive sensor by creating multiple electrical connections onto the resistive layer of the sensor, and we insert the sensor into a soft body. This modification allows us to measure changes in resistance at multiple segments throughout the length of the sensor, providing improved resolution of local deformations in the soft body. These measurements inform a model based on the finite element method (FEM) that estimates the shape of the soft body and the magnitude of an external force acting at a known arbitrary location. Our model-based approach estimates soft body deformation with approximately 3% average relative error while taking into account internal fluidic actuation. Our estimate of external force disturbance has an 11% relative error within a range of 0 to 5 N. For instance, the combined sensing and modeling approach can be integrated into soft manipulation platforms to enable features such as identifying the shape and material properties of an object being grasped. Such manipulators can benefit from their inherent softness and compliance while being fully proprioceptive, relying only on embedded sensing and not on external systems such as motion capture. Such proprioception is essential for the deployment of soft robots in real-world scenarios.
- Autonomous Vision-based Rapid Aerial Grasping
  Item type: Working Paper
  arXiv
  Bauer, Erik; Cangan, Barnabas Gavin; Katzschmann, Robert K. (2022)
  In a future with autonomous robots, visual and spatial perception is of utmost importance for robotic systems. Particularly for aerial robotics, there are many applications where utilizing visual perception is necessary for any real-world scenario. Robotic aerial grasping using drones promises fast pick-and-place solutions with a large increase in mobility over other robotic solutions. Utilizing Mask R-CNN scene segmentation (detectron2), we propose a vision-based system for autonomous rapid aerial grasping that does not rely on markers for object localization and does not require the size of the object to be previously known. With spatial information from a depth camera, we generate a point cloud of the detected objects and perform geometry-based grasp planning to determine grasping points on the objects. In real-world experiments, we show that our system can localize objects with a mean error of 3 cm compared to a motion capture ground truth, for distances from the object ranging from 0.5 m to 2.5 m. In experiments, grasping efficacy remains similar to that of a system using motion capture for object localization. With our results, we show the first use of geometry-based grasping techniques with a flying platform and aim to increase the autonomy of existing aerial manipulation platforms, bringing them further towards real-world applications in warehouses and similar environments.
- Getting the Ball Rolling: Learning a Dexterous Policy for a Biomimetic Tendon-Driven Hand with Rolling Contact Joints
  Item type: Conference Paper
  2023 IEEE-RAS 22nd International Conference on Humanoid Robots (Humanoids)
  Toshimitsu, Yasunori; Forrai, Benedek; Cangan, Barnabas Gavin; et al. (2023)
  Biomimetic, dexterous robotic hands have the potential to replicate many of the tasks that a human can do and to serve as a general manipulation platform. Recent advances in reinforcement learning (RL) frameworks have achieved remarkable performance in quadrupedal locomotion and dexterous manipulation tasks. Combined with GPU-based, highly parallelized simulations capable of simulating thousands of robots in parallel, RL-based controllers have become more scalable and approachable. However, to bring RL-trained policies to the real world, we require training frameworks that output policies compatible with physical actuators and sensors, as well as a hardware platform that can be manufactured with accessible materials yet is robust enough to run interactive policies. This work introduces the biomimetic tendon-driven Faive Hand and its system architecture, which uses tendon-driven rolling contact joints to achieve a 3D-printable, robust, high-DoF hand design. We model each element of the hand and integrate it into a GPU simulation environment to train a policy with RL, achieving zero-shot transfer of a dexterous in-hand sphere rotation skill to the physical robot hand.
- Model-Based Disturbance Estimation for a Fiber-Reinforced Soft Manipulator using Orientation Sensing
  Item type: Conference Paper
  2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  Cangan, Barnabas Gavin; Escaida Navarro, Stefan; Yang, Bai; et al. (2022)
  To aid in real-world situations, soft robots need to be able to estimate their state and external interactions based on proprioceptive sensors. Estimating disturbances allows a soft robot to perform desirable force control. However, even in the case of rigid manipulators, force estimation at the end-effector is seen as a non-trivial problem. Indeed, current approaches to address this challenge have shortcomings that prevent their general application. They are often based on simplified soft dynamic models, such as those relying on a piecewise constant curvature approximation or matched rigid-body models that do not represent enough details of the problem. This severely limits applications in complex human-robot interaction. Finite element method (FEM) based modeling allows for predictions of soft robot dynamics in a more generic fashion. Here, using the soft robot modeling capabilities of the framework SOFA, we built a detailed FEM model of a multi-segment soft continuum robotic arm composed of compliant deformable materials and fiber-reinforced pressurized actuation chambers. In addition, a model for sensors that provide orientation output is presented. This model is used to establish a state observer for the manipulator. The sensor model is adequate for representing the output of flexible bend sensors as well as orientations provided by IMUs or coming from tracking systems, all of which are popular choices in soft robotics. Model parameters were calibrated to match imperfections of the manual fabrication process using physical experiments. We then solve a quadratic programming inverse statics problem to compute the components of external force that explain the pose mismatch. Our experiments show an average force estimation error of around 1.2%.
  As the methods proposed are generic, these results are encouraging for the task of building soft robots exhibiting complex, reactive, sensor-based behavior that can be deployed in human-centered environments.
- Autonomous Marker-Less Rapid Aerial Grasping
  Item type: Conference Paper
  2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  Bauer, Erik; Cangan, Barnabas Gavin; Katzschmann, Robert K. (2023)
  In a future with autonomous robots, visual and spatial perception is of utmost importance for robotic systems. Particularly for aerial robotics, there are many applications where utilizing visual perception is necessary for any real-world scenario. Robotic aerial grasping using drones promises fast pick-and-place solutions with a large increase in mobility over other robotic solutions. Utilizing Mask R-CNN scene segmentation (detectron2), we propose a vision-based system for autonomous rapid aerial grasping that does not rely on markers for object localization and does not require the appearance of the object to be previously known. Combining segmented images with spatial information from a depth camera, we generate a dense point cloud of the detected objects and perform geometry-based grasp planning to determine grasping points on the objects. In real-world experiments on a dynamically grasping aerial platform, we show that our system can replicate the performance of a motion capture system for object localization up to 94.5% of the baseline grasping success rate. With our results, we show the first use of geometry-based grasping techniques with a flying platform and aim to increase the autonomy of existing aerial manipulation platforms, bringing them further towards real-world applications in warehouses and similar environments.
- Task-defined Pulley Design for Nonlinearly Coupled Tendon-driven Actuation
  Item type: Conference Paper
  2024 IEEE 7th International Conference on Soft Robotics (RoboSoft)
  Zhang, Wensi; Cangan, Barnabas Gavin; Buchner, Thomas Jakob Konrad; et al. (2024)
  Coupled actuation is a common strategy for reducing the number of actuators in robots with high degrees of freedom. Coupled actuation leads to an underactuation that lowers system complexity and weight. However, an approach for a controlled nonlinear coupling of multiple tendons does not exist for tendon-driven actuation. We propose a method to enable task-defined nonlinearly coupled actuation by designing eccentric pulleys of variable radii. Given a target profile of tendon length changes, we perform piecewise fitting of the target curve to generate a composite pulley geometry. Theoretical verification of the method with 200 randomly generated tendon profiles shows stable results with an average of 2.19% relative error. In addition, we built a practical demonstration by implementing nonlinearly coupled control of a tendon-driven finger. The six tendons that control the robotic finger were all actuated using pulleys connected to a single motor, resulting in a 5.35% average fingertip orientation tracking error. In conclusion, the pulley-shaping method proposed in this project enables the task-defined nonlinear coupling of multiple degrees of freedom to a single actuator of tendon-driven systems. We believe that the algorithm's generalizability will enable further practical use in tendon-driven dexterous hands, animatronics, manipulators, and legged robots, among various other possible applications.
- Vision-controlled jetting for composite systems and robots
  Item type: Journal Article
  Nature
  Buchner, Thomas Jakob Konrad; Rogler, Simon; Weirich, Stefan; et al. (2023)
  Recreating complex structures and functions of natural organisms in a synthetic form is a long-standing goal for humanity. The aim is to create actuated systems with high spatial resolutions and complex material arrangements that range from elastic to rigid. Traditional manufacturing processes struggle to fabricate such complex systems. It remains an open challenge to fabricate functional systems automatically and quickly with a wide range of elastic properties, resolutions, and integrated actuation and sensing channels. We propose an inkjet deposition process called vision-controlled jetting that can create complex systems and robots. Here, a scanning system captures the three-dimensional print geometry and enables a digital feedback loop, which eliminates the need for mechanical planarizers. This contactless process allows us to use continuously curing chemistries and, therefore, print a broader range of material families and elastic moduli. The advances in material properties are characterized by standardized tests comparing our printed materials to the state of the art. We directly fabricated a wide range of complex high-resolution composite systems and robots: tendon-driven hands, pneumatically actuated walking manipulators, pumps that mimic a heart, and metamaterial structures. Our approach provides an automated, scalable, high-throughput process to manufacture high-resolution, functional multimaterial systems.
- ViSE: Vision-Based 3D Real-Time Shape Estimation of Continuously Deformable Robots
  Item type: Working Paper
  arXiv
  Zheng, Hehui; Pinzello, Sebastian; Cangan, Barnabas Gavin; et al. (2022)
  The precise control of soft and continuum robots requires knowledge of their shape. The shape of these robots has, in contrast to classical rigid robots, infinite degrees of freedom. To partially reconstruct the shape, proprioceptive techniques use built-in sensors, which yield inaccurate results and increase fabrication complexity. Exteroceptive methods so far rely on placing reflective markers on all tracked components and triangulating their position using multiple motion-tracking cameras. Tracking systems are expensive and infeasible for deformable robots interacting with the environment due to marker occlusion and damage. Here, we present a regression approach for 3D shape estimation using a convolutional neural network. The proposed approach takes advantage of data-driven supervised learning and is capable of real-time marker-less shape estimation during inference. Two images of a robotic system are taken simultaneously at 25 Hz from two different perspectives and are fed to the network, which returns for each pair the parameterized shape. The proposed approach outperforms marker-less state-of-the-art methods by a maximum of 4.4% in estimation accuracy while at the same time being more robust and requiring no prior knowledge of the shape. The approach can be easily implemented because it requires only two color cameras without depth and does not need an explicit calibration of the extrinsic parameters. Evaluations on two types of soft robotic arms and a soft robotic fish demonstrate our method's accuracy and versatility on highly deformable systems in real time. The robust performance of the approach against different scene modifications (camera alignment and brightness) suggests its generalizability to a wider range of experimental setups, which will benefit downstream tasks such as robotic grasping and manipulation.
- Vision-Based Online Key Point Estimation of Deformable Robots
  Item type: Journal Article
  Advanced Intelligent Systems
  Zheng, Hehui; Pinzello, Sebastian; Cangan, Barnabas Gavin; et al. (2024)
  The precise control of soft and continuum robots requires knowledge of their shape, which has, in contrast to classical rigid robots, infinite degrees of freedom. To partially reconstruct the shape, proprioceptive techniques use built-in sensors, resulting in inaccurate results and increased fabrication complexity. Exteroceptive methods so far rely on expensive tracking systems with reflective markers placed on all components, which are infeasible for deformable robots interacting with the environment due to marker occlusion and damage. Here, a regression approach is presented for three-dimensional key point estimation using a convolutional neural network. The proposed approach uses data-driven supervised learning and is capable of online markerless estimation during inference. Two images of a robotic system are captured simultaneously at 25 Hz from different perspectives and fed to the network, which returns for each pair the parameterized key point or piecewise constant curvature shape representations. The proposed approach outperforms markerless state-of-the-art methods by a maximum of 4.5% in estimation accuracy while being more robust and requiring no prior knowledge of the shape. Online evaluations on two types of soft robotic arms and a soft robotic fish demonstrate the method's accuracy and versatility on highly deformable systems.
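Several of the abstracts above parameterize a soft arm's shape with a piecewise constant curvature (PCC) representation. The sketch below illustrates the geometry of that representation only; it is not code from any of the listed papers. The planar (2D) simplification and the example segment lengths and curvatures are illustrative assumptions — in the papers, such parameters are estimated, e.g., by a neural network from camera images.

```python
import math

def pcc_points(segments):
    """End points of a planar piecewise-constant-curvature chain.
    Each segment is (arc_length, curvature); curvature 0 means straight."""
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for length, kappa in segments:
        if kappa == 0.0:
            # Straight segment: advance along the current heading.
            x += length * math.cos(heading)
            y += length * math.sin(heading)
        else:
            # Circular arc of radius 1/kappa, swept angle kappa * length;
            # the chord is rotated into the world frame by the heading.
            r = 1.0 / kappa
            dtheta = kappa * length
            x += r * (math.sin(heading + dtheta) - math.sin(heading))
            y += r * (math.cos(heading) - math.cos(heading + dtheta))
            heading += dtheta
        pts.append((x, y))
    return pts

# Hypothetical arm: a straight base of length 1, then a quarter-circle
# bend of radius 2 (curvature 0.5, arc length pi).
arm = [(1.0, 0.0), (math.pi, 0.5)]
print(pcc_points(arm))  # tip ends at approximately (3.0, 2.0)
```

A full 3D version adds a bending-plane angle per segment, but the same idea holds: a handful of (length, curvature) parameters stands in for the infinitely many degrees of freedom of the continuum body, which is what makes the representation convenient as a regression target.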