Tiny insects could lead to big changes in robot design
Jeff Zehnder | Feb. 24, 2025

Sean Humbert and Leopold Beuken inspecting sensors on a fixed wing UAS.

Sean Humbert is unlocking the biological secrets of the common housefly to make major advances in robotics and uncrewed aerial systems (UAS).

A professor in the Paul M. Rady Department of Mechanical Engineering and director of the Robotics Program at the University of Colorado Boulder, Humbert is working to understand how tiny biological systems process sensory information as they move through the world.

This basic-sounding concept involves extremely complex science and engineering.

“Insects aren’t built like robots,” Humbert said. “If I have a robot and I want it to perceive the environment, I tend to put a larger, high fidelity lidar system on it. Flies instead have small, low-quality sensors throughout their bodies. Due to the way that the measurements are processed by the nervous system, you can extract similar levels of information as bulky robotic sensors. We want to take advantage of what nature does.”

The research has drawn the interest of the U.S. Air Force Research Laboratory, which awarded Humbert a five-year, $909,000 grant to advance the work.

He also published a paper in the Institute of Electrical and Electronics Engineers (IEEE) Access journal, proposing a mathematical framework for understanding the connections between the flight physics and visual physiology of flies and applying them to robotics.

Flies may seem unlikely creatures to study for enhancing robots, but Humbert’s co-author, postdoctoral researcher Zoe Turin, says the insects have a lot to offer roboticists.

“If you've ever tried to catch or swat a fly, you know that they can be quite capable fliers, despite a lack of computational power,” Turin says. “If we apply principles from how insects operate, then we may be able to develop robots that have similar capabilities at a much smaller size than traditional robots. This has potential applications across a wide variety of industries.”

Although flies are only 6-7 millimeters long and have brains the size of a poppy seed, Humbert said their abilities have eluded researchers' understanding until now. A key element of how flies work is distributed sensing.

Zoe Turin and Eugene Rush in front of a white board with a small UAS.

“It’s taken years to arrive at a model of how the fly’s sensory structure is set up the way it is and to be able to figure out the math behind it,” Humbert said. “This has so much potential going forward. An F-22 fighter jet has a small number of high fidelity, big, expensive sensors that require a lot of backend processing and computation to generate quality measurements. Nature is the exact opposite. It’s small, low fidelity, lightweight, and distributed.”
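The statistical intuition behind that contrast can be sketched in a few lines of Python. The example below is purely illustrative and is not the framework from Humbert's paper: it shows that averaging many independent, low-quality readings drives the error of the fused estimate down by roughly the square root of the number of sensors, so a swarm of cheap sensors can rival one expensive one.

```python
# Illustrative sketch (not from the paper): fusing many low-quality sensor
# readings can match a single high-fidelity sensor's accuracy.
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0          # e.g., an optic-flow-derived quantity, arbitrary units

# One expensive sensor: low noise (sigma = 0.05)
high_fidelity = true_value + rng.normal(0, 0.05, size=10_000)

# 100 cheap, distributed sensors: each very noisy (sigma = 0.5),
# but their average has a standard error of 0.5 / sqrt(100) = 0.05.
cheap_readings = true_value + rng.normal(0, 0.5, size=(10_000, 100))
fused = cheap_readings.mean(axis=1)

print(f"high-fidelity sensor error (std):       {high_fidelity.std():.3f}")
print(f"fused low-fidelity estimate error (std): {fused.std():.3f}")
```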

By unlocking the principles and mathematical optimizations that exist in flies, Turin said researchers will be able to explore similar techniques for robots.

“Understanding more about how insects are able to do what they do has only made them more amazing to me. This framework will hopefully help our engineered systems to react more quickly to unexpected disturbances, such as a sudden gust of wind, while reducing the computational power required,” Turin said.

Over the course of the grant, Humbert will take the models he has developed on flies to conduct proof-of-concept demonstrations and then experimental research using robotic sensor technology.

“This is a wonderful, cool biological principle and we now have the model to explore what nature has constructed,” Humbert said. “It will revolutionize how we think about the design cycle of robotic systems.”

Rentschler, Aspero Medical awarded $4.5M for endoscopy advancement
Jeff Zehnder | Feb. 11, 2025

It’s been six years since the launch of startup company Aspero Medical, co-founded by Professor Mark Rentschler of the Paul M. Rady Department of Mechanical Engineering. The company has seen great success, including the development of a medical device designed to enable more efficient procedures in the small bowel region.

Today, with the help of a $4.5 million award through the Anschutz Acceleration Initiative (AAI), Rentschler and his colleagues are working to bring two new products to the market that will transform these types of procedures even further.

“We brought our first product out on the market in 2024,” said Rentschler, also a faculty member in biomedical engineering (BME) and robotics. “We are planning to bring a second and third product to the market in 12-18 months, and we are extremely excited to get these devices in the hands of interventional endoscopists.”

 

Professor Mark Rentschler holding Aspero Medical's patented Ancora-SB balloon overtube.

In 2023, Aspero received clearance from the Food and Drug Administration (FDA) to market and sell the Ancora-SB device. The product is used during endoscopy procedures to diagnose and treat small bowel diseases.

According to Rentschler, operating within the small intestine can be time consuming and technically challenging. Equipped with a patented micro-textured balloon, the Ancora-SB overtube is designed to provide more traction and anchoring consistency than smooth latex or smooth silicone balloon overtube competitors.

“Balloon overtubes for small bowel procedures have been around for about a decade,” said Rentschler. “We’re not looking to change the small bowel enteroscopy procedure, but instead improve balloon anchoring performance during these procedures in the small bowel.”

Ancora-SB has allowed Aspero to prove its worth in hospitals. The company’s next products expand on this concept with additional features that can facilitate less invasive interventional procedures than traditional open surgery.

The next generation balloon overtube will be used to remove cancerous lesions in the large bowel region. It features an extra working channel that allows for an additional tool to be utilized alongside the visualization scope. This offers physicians more control, access, and stabilization when maneuvering through the colon and performing advanced interventional procedures.

“Conceptually, these devices will enable triangulated surgery with two tools and centralized visualization so that physicians can more efficiently perform surgery from inside the lumen,” Rentschler said. “Instead of historically invasive procedures, where the patient is cut open, and the cancerous bowel region is removed, we’re assisting physicians as they remove the cancer from the inside of the lumen during an outpatient procedure.

“It's much less invasive, with potentially tremendous cost savings, and numerous benefits for the patient.”

Aspero’s third product will be another balloon overtube, this time with a working channel that enables minimally invasive cancer removal in the esophagus and stomach regions of the gastrointestinal tract.

 

Rentschler showcasing all three of the medical devices in Aspero Medical's multi-product platform, including their two new highly anticipated devices.

Rentschler and his team say the two upcoming devices have the potential to replace a large, and growing, number of today’s conventional surgical procedures in the gastrointestinal region by enhancing safety and efficiency while reducing patient recovery time. Moving procedures from inpatient surgery to outpatient endoscopy can generate potential cost savings of up to 50 percent or more.

“Everyone knows this is the direction we need to go. Clinical outcomes from these types of procedures are incredibly strong, but the techniques and devices aren’t widely available yet,” said Rentschler. “We are creating products that help physicians and patients feel safe and comfortable without overcomplicating things. The paradigm is rapidly shifting, and we endeavor to push endoscopy forward.”

The company is currently finalizing the design of the second product. It’s about six months further along in development than the third product, but Rentschler says they are looking to have both devices FDA cleared by the end of 2026.

When all three devices hit the market, Aspero will look to market a portfolio of products, rather than a single tool. But further innovation is on the horizon, this time incorporating the Ancora balloon technology with a robotic element.

“Ancora is a multi-product platform focusing on the small bowel, large bowel, stomach and esophageal regions,” Rentschler said. “Our next potential venture will be in flexible robots. We’ll continue with our balloon overtubes, but as anchoring platforms to be used with flexible robotic endoscopy systems.”

Until then, Rentschler and company are full steam ahead on these next products. The $4.5 million AAI grant is being offered over a four-year span, but they anticipate spending that money much sooner so they can get the devices out on the market and begin positively impacting patients and physicians everywhere.

But that’s not their only goal. With so much of Colorado involved in the company’s technology, Rentschler hopes to also tell another story.

“I started Aspero Medical with Dr. Steven Edmundowicz at CU Anschutz. We’ve received a number of grants from the state of Colorado and everyone involved is invested in our vision,” said Rentschler. “We believe that a rising tide raises all boats, and when we think of Aspero, we want it to be a successful Colorado story.”

CS robotics research to help strengthen domestic battery supply chain
Jeff Zehnder | Dec. 5, 2024

Computer science professor Nikolaus Correll and his lab at CU Boulder have been awarded $1.8 million by the U.S. Department of Energy Advanced Research Projects Agency-Energy (ARPA-E) to help establish a circular supply chain for domestic electric vehicle (EV) batteries.

The percentage of EV passenger vehicles on the road is projected to grow to 28% by 2030 and 58% by 2040, globally.

The existing supply chain for EV batteries relies mostly on recycling to recover critical minerals such as cobalt, nickel or copper.

However, conventional battery recycling methods are energy-intensive, produce significant quantities of greenhouse gases, and lead to large volumes of waste deposited in landfills.

CU Boulder joins 12 other projects around the country working to change this dynamic through ARPA-E's CIRCULAR program.

Correll's project focuses on autonomous robotic disassembly of EV lithium-ion battery packs. Humanoid robots will work together with robotic arms to manipulate wire harnesses and remove screws and other components before dismantling commercial battery packs with a heavy-duty industrial arm.

Correll explained that people are interested in using robots for the task due to the hazardous nature of the work.

"The batteries are quite dangerous to handle due to the risk of electrocution and spontaneous ignition," Correll said.

The Correll Lab's project will use state-of-the-art perception models and large language models to consider the physics and context of each battery.
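As a rough sketch of how such a pipeline might be structured (the function names, prompt format and `query_llm` callable below are illustrative assumptions, not the Correll Lab's actual code), a perception model's detections can be summarized into a prompt and handed to a language model that proposes the next safe disassembly step:

```python
# Hypothetical sketch of combining a perception model with an LLM planner
# for battery-pack disassembly. Names and prompt format are assumptions,
# not the lab's published pipeline.
from dataclasses import dataclass

@dataclass
class DetectedPart:
    name: str            # e.g., "wire harness", "M6 screw", "module cover"
    position_mm: tuple   # (x, y, z) in the workcell frame
    confidence: float

def build_prompt(parts: list[DetectedPart]) -> str:
    """Summarize the scene so a language model can reason about physics and context."""
    listing = "\n".join(
        f"- {p.name} at {p.position_mm} (conf {p.confidence:.2f})" for p in parts
    )
    return (
        "You are planning safe disassembly of an EV battery pack.\n"
        f"Detected components:\n{listing}\n"
        "Propose the single next removal step, noting electrocution and "
        "thermal-runaway risks. Answer as: ACTION | TARGET | TOOL."
    )

def plan_next_step(parts: list[DetectedPart], query_llm) -> str:
    # `query_llm` is any text-in/text-out callable (an API client, a local model, ...).
    return query_llm(build_prompt(parts))
```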

By advancing the efficiency and ability of battery disassembly systems, component recycling could be done at a commercial scale more safely and cost-effectively, leading to less waste in landfills and more material available for new EV batteries.

The director of ARPA-E, Evelyn N. Wang, said, "I look forward to seeing how these CIRCULAR projects develop regeneration, repair, reuse, and remanufacture technologies to create a sustainable EV battery supply chain." 

Robots can’t outrun animals (yet). A new study explores why
April 29, 2024

The question may be the 21st century’s version of the fable of the tortoise and the hare: Who would win in a foot race between a robot and an animal?

In a new perspective article, a team of engineers from the United States and Canada, including CU Boulder roboticist Kaushik Jayaram, set out to answer that riddle. The group analyzed data from dozens of studies and came to a resounding conclusion: in almost all cases, biological organisms, such as cheetahs, cockroaches and even humans, seem to be able to outrun their robot counterparts.


The researchers recently published their findings.

“As an engineer, it is kind of upsetting,” said Jayaram, an assistant professor in the Paul M. Rady Department of Mechanical Engineering at CU Boulder. “Over 200 years of intense engineering, we’ve been able to send spacecraft to the moon and Mars and so much more. But it’s confounding that we do not yet have robots that are significantly better than biological systems at locomotion in natural environments.”

He hopes the study will inspire engineers to learn how to build more adaptable, nimble robots. The researchers concluded that the failure of robots to outrun animals doesn’t come down to shortfalls in any one piece of machinery, such as batteries or actuators. Instead, where engineers might falter is in making those parts work together efficiently.  

This pursuit is one of Jayaram’s chief passions. His lab on the CU Boulder campus is home to a lot of creepy crawlies, including several furry wolf spiders that are about the size of a half dollar.

“Wolf spiders are natural hunters,” Jayaram said. “They live under rocks and can run over complex terrain with incredible speed to catch prey.”

He envisions a world in which engineers build robots that work a bit more like these extraordinary arachnids.

“Animals are, in some sense, the embodiment of this ultimate design principle—a system that functions really well together,” he said.


A cockroach alongside the HAMR-Jr robot. (Credit: Kaushik Jayaram)

 

Cockroach energy

The question of “who can run better, animals or robots?” is complicated because running itself is complicated. 

In previous research, Jayaram and his colleagues at Harvard University designed a line of robots that seek to mimic the behavior of the oft-reviled cockroach. The team’s HAMR-Jr robot fits on top of a penny and sprints, relative to its size, at speeds equivalent to that of a cheetah. But, Jayaram noted, while HAMR-Jr can bust a move forward and backward, it doesn’t move as well side-to-side or over bumpy terrain. Humble cockroaches, in contrast, have no trouble running over surfaces from porcelain to dirt and gravel.

To understand why such versatility remains a challenge for robots, the authors of the new study broke these machines down into five subsystems: power, frame, actuation, sensing and control. To the group’s surprise, few of those subsystems seemed to fall short of their equivalents in animals.


Kaushik Jayaram, right, with graduate student Heiko Kabutz, left, in Jayaram's lab on the CU Boulder campus. (Credit: Casey Cass/CU Boulder)


High-quality lithium-ion batteries, for example, can deliver as much as 10 kilowatts of power for every kilogram (2.2 pounds) they weigh. Animal tissue, in contrast, produces around one-tenth that. Muscles, meanwhile, can’t come close to matching the absolute torque of many motors. 

“But at the system level, robots are not as good,” Jayaram said. “We run into inherent design trade-offs. If we try to optimize for one thing, like forward speed, we might lose out on something else, like turning ability.”

Spider senses

So, how can engineers build robots that, like animals, are more than just the sum of their parts? 

Animals, Jayaram noted, aren’t split into separate subsystems in the same way as robots. Your quadriceps, for example, propel your legs like HAMR-Jr’s actuators move its limbs. But quads also produce their own power by breaking down fats and sugars, and they incorporate neurons that can sense pain and pressure.

Jayaram thinks the future of robotics may come down to “functional subunits” that do the same thing: Rather than keeping power sources separate from your motors and circuit boards, why not integrate them all into a single part? In a 2015 paper, CU Boulder computer scientist Nikolaus Correll, who wasn’t involved in the current study, proposed such theoretical “robotic materials” that work more like your quads. 

Engineers are still a long way from achieving that goal. Some, like Jayaram, are taking steps in this direction, such as through his lab’s Compliant Legged Articulated Robotic Insect (CLARI) robot, a multi-legged robot that moves a little like a spider. Jayaram explained that CLARI relies on a modular design, in which each of its legs acts like a self-contained robot with its own motor, sensors and controlling circuitry. The team’s latest version can move in all directions in confined spaces, a first for four-legged robots.

It's one more thing that engineers like Jayaram can learn from those perfect hunters, wolf spiders.

“Nature is a really useful teacher.”

A delicate touch: teaching robots to handle the unknown
April 2, 2024

William Xie, a first-year PhD student in computer science, is teaching a robot to reason how gently it should grasp previously unknown objects by using large language models (LLMs). 

DeliGrasp, Xie's project, is an intriguing step beyond the custom, piecemeal solutions currently used to avoid pinching or crushing novel objects.

In addition, DeliGrasp helps the robot translate what it can 'touch' into meaningful information for people.

"William has gotten some neat results by leveraging common sense information from large language models. For example, the robot can estimate and explain the ripeness of various fruits after touching them," said his advisor, Professor Nikolaus Correll.

Let's learn more about DeliGrasp, Xie's journey to robotics, and his plans for an upcoming conference in Japan and beyond.

 

How would you describe this research? 

As humans, we’re able to quickly intuit how exactly we need to pick up a variety of objects, including delicate produce or unwieldy, heavy objects. We’re informed by the visual appearance of an object, what prior knowledge we may have about it, and most importantly, how it feels to the touch when we initially grasp it. 

Robots don’t have this all-encompassing intuition though, and they don’t have end-effectors (grippers/hands) as effective as human hands. So solutions are piecemeal: the community has researched “hands” across the spectrum of mechanical construction, sensing capabilities (tactile, force, vibration, velocity), material (soft, rigid, hybrid, woven, etc…). And then the corresponding machine learning models and/or control methods to enable “appropriately forceful” gripping are bespoke for each of these architectures.

Embedded in LLMs, which are trained on an internet’s worth of data, is common sense physical-reasoning that crudely approximates a human’s (as the saying goes: “all models are wrong, some are useful”). We use the LLM-estimated mass and friction to simplify the grasp controller and deploy it on a two-finger gripper, a prevalent and relatively simple architecture. Key to the controller working is the force feedback sensed by the gripper as it grasps an object, and knowing at what force threshold to stop—the LLM-estimated values directly determine this threshold for any arbitrary object, and our initial results are quite promising.
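A minimal sketch of that idea, assuming a two-finger gripper whose pads hold objects by friction (this is not the published DeliGrasp code, and the `gripper` interface below is a placeholder): the LLM-estimated mass and friction coefficient set the normal-force threshold at which the gripper stops closing.

```python
# Minimal sketch of the idea described above, not the published DeliGrasp code:
# an LLM-estimated mass and friction coefficient set the force threshold at
# which a two-finger gripper stops closing.
G = 9.81  # m/s^2

def grip_force_threshold(mass_kg: float, friction_coeff: float,
                         safety_factor: float = 1.5) -> float:
    """Normal force (N) each finger must apply so friction supports the weight.

    Two finger pads share the load: 2 * mu * F_normal >= m * g.
    """
    return safety_factor * mass_kg * G / (2.0 * friction_coeff)

def close_until_threshold(gripper, threshold_n: float, step_mm: float = 0.5):
    """Close in small steps, stopping when sensed contact force reaches the threshold.

    `gripper` is assumed to expose `.force()` and `.close_by(mm)`; these are
    placeholders for whatever force-sensing gripper API is available.
    """
    while gripper.force() < threshold_n:
        gripper.close_by(step_mm)
    return gripper.force()

# Example: an LLM might estimate a ripe plum at ~0.07 kg with mu ~ 0.6,
# giving a gentle target of roughly 0.86 N per finger.
print(f"{grip_force_threshold(0.07, 0.6):.2f} N")
```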

How did you get inspired to pursue this research?

I wouldn’t say that I was inspired to pursue this specific project. I think, like a lot of robotics research, I had been working away at a big problem for a while, and stumbled into a solution for a much smaller problem. My goal since I arrived here has been to research techniques for assistive robots and devices that restore agency for the elderly and/or mobility-impaired in their everyday lives. I’m particularly interested in shopping (but eventually generalist) robots—one problem we found is that it is really hard to determine ripeness, let alone pick ripe fruits and produce, with a typical robot gripper and just a camera. In early February, I took a day to try out picking up variably sized objects via hand-tuning our MAGPIE gripper’s force sensing (an affordable, open-source gripper developed by the Correll Lab). It worked well; I let ChatGPT calibrate the gripper, which worked even better, and it evolved very quickly into DeliGrasp.

What would you say is one of your most interesting findings so far?

LLMs do a reasonable job of estimating an arbitrary object’s mass (friction, not as well) from just a text description. This isn’t in the paper, but when paired with a picture, they can extend this reasoning for oddballs—gigantic paper airplanes, or miniature (plastic) fruits and vegetables.

With our grasping method, we can sense the contact forces on the gripper as it closes around an object—this is a really good measure of ripeness, it turns out. We can then further employ LLMs to reason about these contact forces to pick out ripe fruit and vegetables!

What does the day-to-day of this research look like?

Leading up to submission, I was running experiments on the robot and picking up different objects with different strategies pretty much every day. A little repetitive, but also exciting. Prior to that, and now that I’m trying to improve the project for the next conference, I spend most of my time reading papers, thinking/coming up with ideas, and setting up small, one-off experiments to try out those ideas.

How did you come to study at CU Boulder? 

For a few years, I’ve known that I really wanted to build robots that could directly, immediately help my loved ones and community. I had a very positive first research experience in my last year of undergrad and learned what it felt like to have true personal agency in pursuing work that I cared about. At the same time I knew I’d be relocating to Boulder after graduation. I was very fortunate that Nikolaus accepted me and let me keep pursuing this goal of mine.

It’d be unfathomable if I could keep doing this research in academia or industry, though of course that would be ideal. But I’m biased toward academia, particularly teaching. I’ve been teaching high school robotics for 5 years now, and now teaching/mentoring undergrads at CU—each day is as fulfilling as the first. I have great mentors across the robotics faculty and senior PhD students. We work in ECES 111, a giant, well-equipped space that three robotics labs share, and it’s great for collaboration and brainstorming.

What are your hopes for this international conference (and what conference is it?)

The venue is a workshop at the 2024 International Conference on Robotics and Automation (ICRA 2024), happening in Yokohama, Japan from May 13-17. The name of the workshop is a mouthful: Vision-Language Models for Navigation and Manipulation (VLMNM).

A workshop is detached from the main conference, and kind of is its own little bubble (like a big supermarket—the conference—hosting a pop-up food tasting event—the workshop). I'm really excited to meet other researchers and pick their brains. As a first-year, I’ve spent the past year reading papers from practically everyone on the workshop panel, and from their students. I’ll probably also spend half my time exploring (eating) around the Tokyo area.

CU Boulder robotics research showcased in Advanced Intelligent Systems
Jan. 9, 2024

Kaushik Jayaram's bioinspired robotics research is featured on the cover of the latest issue of the journal Advanced Intelligent Systems.

The article, "Design of CLARI: A Miniature Modular Origami Passive Shape-Morphing Robot," discusses the design and creation of Jayaram's compliant legged articulated robotic insect.

Jayaram is an assistant professor in the Robotics Program and the Paul M. Rady Department of Mechanical Engineering. He is an expert in robotics, systems design, materials, and work at the micro- and nanoscale.

The cover shows a 2.59-gram, 3.4-centimeter-long modular origami robot capable of passive shape morphing.

These tiny robots provide unique abilities to access confined environments and have potential for applications such as search-and-rescue and high-value asset inspection.

Xu's 'cyborg jellyfish' highlighted in Nature
Dec. 12, 2023

Assistant Professor Nicole Xu recently spoke with Nature for a feature about biohybrid robots and their real-world applications.

Xu and her collaborators have been working on a jellyfish-inspired robot that can help monitor climate change and ecological shifts in the Earth's oceans. 

"Jellyfish have a number of appealing characteristics for roboticists. They are energy-efficient swimmers, and are able to descend to great depths. Compared with current mechanical submersibles, Xu says, jellyfish are less likely to cause damage to marine environments. Their natural appearance and quietness also make them unremarkable — during the ocean tests, fish swam right up to them."

Xu's lab works at the intersection of robotics, fluid dynamics and biology. Its methods include laboratory experiments, theoretical modeling and field work.

Is the World Ready for Self-Driving Cars?
Nov. 27, 2023

In August, the California Public Utilities Commission voted to allow two self-driving car companies, Waymo and Cruise, to commercially operate their “robotaxis” around the clock in San Francisco.

Within hours, Cruise reported at least 10 incidents where vehicles stopped short of their destination, blocking city streets. The commission demanded they recall 50% of their fleet. 

Despite these challenges, other cities — including Las Vegas, Miami, Austin and Phoenix — are allowing autonomous vehicle startups to conduct tests on public roads.

 

"Self-driving car proponents see the jump from laboratories to real-world testing as a necessary step that has been a long time coming."

 

Self-driving car proponents see the jump from laboratories to real-world testing as a necessary step that has been a long time coming. The first autonomous vehicle was tested on the Autobahn in Germany in 1986, but the advances stalled in the 1990s due to technology limitations. 

After a 2007 competition held by the Defense Department’s Advanced Research Projects Agency (DARPA), it seemed like the era of driverless cars had finally arrived. The competition kickstarted a Silicon Valley race to develop the first commercial driverless car. Optimism abounded, with engineers, investors and automakers predicting there would be as many as 10 million self-driving cars on the road by 2020.

“The question for the last 30 years is — how long is this going to take?” said Javier von Stecher (PhDPhys’08), senior software engineer at Nvidia who has worked on self-driving car technology at companies including Uber and Mercedes-Benz. “I think a lot of people were oversold on the idea that we could get this working fast. The biggest shift I’ve seen over the past decade is people realizing how hard this problem really is.” 

The stakes may be high, but that’s not deterring CU Boulder researchers. From creating systems and models to studying human-machine interactions, university teams are working to advance the field safely and responsibly as self-driving cars become a fixture in our society. 

Their next big question: Can we learn to trust these vehicles?

Cruise Control

The idea behind autonomous vehicles is simple. An artificial intelligence system pulls in data from an array of sensors including radar, high-resolution cameras and GPS, and uses this data to navigate from point A to point B while avoiding obstacles and obeying traffic laws. Sounds simple? It’s not.
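A deliberately tiny, self-contained toy shows what that sense-plan-act loop looks like in code. This is a teaching sketch in one dimension, not anyone's real autonomy stack: a simulated car drives toward a goal and stops when its (pretend) sensor reports an obstacle inside a safety gap.

```python
# Toy 1-D illustration of the sense-plan-act cycle described above.

def sense(car_pos, obstacle_pos, noise=0.0):
    """Pretend sensor: returns a (possibly noisy) distance to the obstacle ahead."""
    return (obstacle_pos - car_pos) + noise

def plan(distance_to_obstacle, cruise_speed=2.0, safe_gap=5.0):
    """Decision logic: cruise if the gap is safe, otherwise stop."""
    return cruise_speed if distance_to_obstacle > safe_gap else 0.0

def act(car_pos, speed, dt=0.1):
    """Apply the chosen speed for one control step."""
    return car_pos + speed * dt

car, goal, obstacle = 0.0, 50.0, 20.0
while car < goal:
    gap = sense(car, obstacle)
    speed = plan(gap)
    if speed == 0.0:          # the planner decided to stop; hold position
        print(f"Stopped at {car:.1f} m, {gap:.1f} m behind the obstacle.")
        break
    car = act(car, speed)
```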

When a self-driving car encounters an unexpected obstacle, it makes split-second judgment calls — should it brake or swerve around it? — that develop naturally in humans but are still beyond even the most sophisticated AI systems. 

Moreover, there will always be an edge case that the AI-powered car hasn’t seen before, which means the key to safe autonomous vehicles is building systems that can correctly favor safe choices in unfamiliar situations.

Majid Zamani, an associate professor of computer science, studies how to create software for autonomous systems such as cars, drones and airplanes. In autonomous vehicles’ AI systems, data flows into the AI and helps it make decisions. But how the AI arrives at those decisions is a mystery. This, said Zamani, makes it difficult to trust the AI system — and yet trust is critically important in high-stakes applications like autonomous driving.

“These are what we call safety critical applications because system failure can cause loss of life or damage to property, so it’s really important that the way those systems are making decisions is provably correct,” Zamani said. 

In contrast to AI systems that use data to create models that are not intelligible to humans, Zamani advocates for a bottom-up approach where the AI’s models are derived from fundamental physical laws, such as acceleration or friction, which are well-understood and unchanging.

“If you derive a model using data, you have to be able to ensure that you can quantify how much error there is between that model and the actual system that uses it,” Zamani said.
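A small numerical sketch of Zamani's point, with made-up numbers: fit a model to noisy stopping-distance data, measure how far the model's predictions deviate from the observations, and carry that worst-case error forward as a safety margin.

```python
# Sketch of quantifying the error of a data-derived model. Numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
speeds = np.linspace(5, 30, 40)                            # m/s
mu, g = 0.7, 9.81
true_stop = speeds**2 / (2 * mu * g)                       # physics-based stopping distance
measured = true_stop + rng.normal(0, 1.0, speeds.size)     # noisy "data"

# Data-derived model: a simple quadratic fit to the measurements.
coeffs = np.polyfit(speeds, measured, deg=2)
predicted = np.polyval(coeffs, speeds)

# Quantify the gap between the learned model and the observed system,
# then use the worst case as a safety margin added to every prediction.
residuals = np.abs(measured - predicted)
print(f"worst-case model error: {residuals.max():.2f} m")
print(f"safe stopping estimate at 25 m/s: "
      f"{np.polyval(coeffs, 25) + residuals.max():.1f} m")
```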

Mathematically demonstrating the safety of the models used by autonomous vehicles is important for engineers and policymakers who need to guarantee safety before they’re deployed in the real world. But this raises some thorny questions: How safe is “safe enough,” and how can autonomous vehicles communicate these risks to drivers? 

Computer, Take the Wheel 

Each year, more than 40,000 Americans die in car accidents, and, according to the National Highway Traffic Safety Administration (NHTSA), about 90% of U.S. auto deaths and serious crashes are attributable to driver error. The great promise of autonomous vehicles is to make auto deaths a relic of history by eliminating human errors with computers that never get tired or distracted.

The NHTSA designates six levels of “autonomy” for self-driving cars, which range from Level 0 (full driver control) to Level 5 (fully autonomous). For most of us, Level 5 is what we think of when we think of self-driving cars: a vehicle so autonomous that it might not even have a steering wheel and driver’s seat because the computer handles everything. For now, this remains a distant dream, with many automakers pursuing Level 3 or 4 autonomy as stepping stones. 

“Most modern cars are Level 2, with partial autonomous driving,” said Chris Heckman, associate professor and director of the Autonomous Robotics and Perception Group in CU Boulder’s computer science department. “Usually that means there’s a human at the wheel, but they can relegate some functions to the car’s software such as automatic braking or adaptive cruise control.”

While these hybrid AI-human systems can improve safety by assisting a driver with braking, acceleration and collision avoidance, limitations remain. Several fatal accidents, for example, have resulted from drivers’ overreliance on autopilot, which stems from issues of human psychology and AI understanding.

Fostering Trust 

This problem is deeply familiar to Leanne Hirschfield, associate research professor at the Institute of Cognitive Science and the director of the System-Human Interaction with NIRS and EEG (SHINE) Lab at CU Boulder. Hirschfield’s research focuses on using brain measurements to study the ways humans interact with autonomous systems, like self-driving cars and AI systems deployed in elementary school classrooms.

 

"When an autonomous vehicle can show the driver information about how it’s making decisions or its level of confidence in its decisions, the driver is better equipped to determine when they need to grab the wheel."

 

Trust, Hirschfield said, is defined as a willingness to be vulnerable and take on risks, and for decades the dominant engineering paradigm for autonomous systems has been focused on ways to foster total trust in autonomous systems. 

“We’re realizing that’s not always the best approach,” Hirschfield said. “Now, we’re looking at trust calibration, where users often trust the system but also have enough information to know when they shouldn’t rely on it.”

The key to trust calibration, she said, is transparency. When an autonomous vehicle can show the driver information about how it’s making decisions or its level of confidence in its decisions, the driver is better equipped to determine when they need to grab the wheel. 

Studying user responses is challenging in a laboratory setting, where it’s difficult to expose drivers to real risks. So Hirschfield and researchers at the U.S. Air Force Academy have been using a Tesla modified with a variety of internal sensors to study user trust in autonomous vehicles. 

“Part of what we’re trying to do is measure someone’s level of trust, their workload and emotional states while they’re driving,” Hirschfield said. “They’ll have the car whipping around hills, which is how you need to study trust because it involves a sense of true risk compared to a study in a lab setting.” 

Although Hirschfield said that researchers have made a lot of progress in understanding how to design autonomous vehicles to foster driver trust, there is still a lot of work to be done.

Human-Centered Design 

Sidney D’Mello, a professor at the Institute of Cognitive Science, studies how human-computer interactions shift the way we think and feel. For D’Mello, it’s unclear whether the current crop of self-driving cars can shift from today’s engineering-forward approach to a new driver-focused paradigm.

“I think we need an entirely new methodology for the self-driving car context,” D’Mello said. “If you really want something you can trust, then you need to design these systems with users starting from day one. But every single car company is kind of stuck in this engineering mindset from 50 years ago where they build the tech and then they present it to the user.”

The good news, D’Mello said, is that automakers are starting to take this challenge seriously. A collaboration between Toyota and the Institute of Cognitive Science focused on designing autonomous vehicles that foster trust in the user.

“The autonomous model typically implies the AI is in the center with the human hovering around it,” said D’Mello. “But this needs to be a model with the human in the center.” 

Even when users learn to trust autonomous vehicles, living with driverless cars and reconceptualizing how they relate to them is complex. But there’s a lot we can apply from research on prosthetics, said Cara Welker, assistant professor in biomechanics, robotics and systems design.

Much like autonomous vehicles analyze surroundings to make navigation and control decisions, robotic prostheses monitor a wearer’s movements to understand appropriate behavior. And just as teaching users to trust prosthetics requires strong feedback loops and predictable prosthetic behavior, teaching drivers to trust autonomous vehicles means providing drivers with information about what the AI is doing — and it requires drivers to reconceptualize vehicles as extensions of themselves. 

“There’s a difference between users being able to predict the behavior of an assistive device versus having some kind of sensory feedback,” Welker said. “And this difference has been shown to affect whether the people think of it as ‘me and my prosthesis’ instead of just ‘me, which includes my prosthesis.’ And that’s incredibly important in terms of how users will trust that device.” 

How, then, will drivers evolve to experience cars as extensions of themselves? 

Next Exit

In 2018, a pedestrian was struck and killed by a self-driving Uber in Arizona, marking the first fatality attributed to an autonomous vehicle. Although the driver pleaded guilty in the case, the question of who is responsible when autonomous vehicles kill is far from settled.

Today, there is limited regulation dictating autonomous vehicle safety and liability. One problem is that vehicles are regulated at the federal level while drivers are regulated at the state level — a division of responsibility that doesn’t account for a future where the driver and vehicle are more closely aligned. 

Researchers and automakers have voiced frustration with existing autonomous driving regulations, agreeing that updated regulations are necessary. Ideally, regulations would ensure driver, passenger and pedestrian safety without quashing innovation. But what these policies might look like is still unclear. 

The challenge, said Heckman, is that the engineers don’t have complete control over how autonomous systems behave in every circumstance. He believes it’s critical for regulations to account for this without insisting on impossibly high safety standards. 

“Many of us work in this field because automotive deaths seem avoidable and we want to build technologies that solve that problem,” Heckman said. “But I think we hold these systems [to] too high of a standard — because yes, we want to have safe systems, but right now we have no safety frameworks, and automakers aren’t comfortable building these systems because they may be held to an extremely high liability.” 

Other industries may offer a vision for how to regulate the autonomous driving industry while providing acceptable safety standards and enabling technological development, Heckman said. The aviation industry, for example, adopted rigorous engineering standards and fostered trust in engineers, pilots, passengers and policymakers. 

“There’s an engineering principle that trust is a perception of humans,” Heckman said. “Trust is usually built through experience with a system, and that experience confers trust on the engineering paradigms that build safe systems. 

“With airplanes, it took decades for us to come up with designs and engineering paradigms that we feel comfortable with. I think we’ll see the same in autonomous vehicles, and regulation will follow once we’ve really defined what it means for them to be trustworthy.” 

Building next generation autonomous robots to serve humanity
Nov. 17, 2023

One thousand feet underground, a four-legged creature scavenges through tunnels in pitch darkness. With vision that cuts through the blackness, it explores a spider web of paths, remembering its every step and navigating with precision. The sound of its movements echoes eerily off the walls, but it is not to be feared – this is no wild animal; it is an autonomous rescue robot.

It was initially designed to find survivors in collapsed mines, caves and damaged buildings, but that is only part of what it can do.

Created by a team of University of Colorado Boulder researchers and students, the robots placed third as the top U.S. entry and earned $500,000 in prize money at the Defense Advanced Research Projects Agency (DARPA) Subterranean Challenge in 2021.

Going Further

Two years later, they are pushing the technology even further, earning new research grants to expand the technology and create new applications in the rapidly growing world of autonomous systems.

“Ideally you don’t want to put humans in harm’s way in disaster situations like mines or buildings after earthquakes; the walls or ceilings could collapse and maybe some already have,” said Sean Humbert, a professor of mechanical engineering and director of the Robotics Program at CU Boulder. “These robots can be disposable while still providing situational awareness.”

The team developed an advanced system of sensors and algorithms to allow the robots to function on their own – once given an assignment, they make decisions autonomously on how to best complete it.

Advanced Communication

A major goal is to get them from engineers directly into the hands of first responders. Success requires simplifying the way the robots transmit data into something approximating plain English, according to Kyle Harlow, a computer science PhD student.

“The robots communicate in pure math. We do a lot of work on top of that to interpret the data right now, but a firefighter doesn’t have that kind of time,” Harlow said.

To make that happen, Humbert is collaborating with Chris Heckman, an associate professor of computer science, to change both how the robots communicate and how they represent the world. The robots’ eyes – LiDAR sensors – create highly detailed 3D maps of an environment, 15 cm at a time. That’s a problem when they try to relay information – the sheer amount of data clogs up the network.

“Humans don’t interpret the environment in 15 cm blocks,” Humbert said. “We’re now working on what’s called semantic mapping, which is a way to combine contextual and spatial information. This is closer to how the human brain represents the world and is much less memory intensive.”
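A back-of-the-envelope comparison shows why this matters for memory. The figures below are illustrative assumptions, not the team's measurements: a dense 15-centimeter voxel map of a modest space runs to megabytes, while a semantic summary of the same space fits in kilobytes.

```python
# Rough sketch of why semantic maps are lighter than dense voxel maps.
# All figures are illustrative assumptions.

# Dense map: a 100 m x 100 m x 5 m space at 15 cm resolution, 1 byte per voxel.
res = 0.15
voxels = (100 / res) * (100 / res) * (5 / res)
dense_bytes = voxels * 1

# Semantic map: the same space summarized as labeled objects
# (e.g., "doorway at (12.3, 4.1)"), say 500 objects at ~64 bytes each.
semantic_bytes = 500 * 64

print(f"dense voxel map: {dense_bytes / 1e6:.0f} MB")
print(f"semantic map:    {semantic_bytes / 1e3:.0f} KB")
```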

High Tech Mapping

The team is also integrating new sensors to make the robots more effective in challenging environments. The robots excel in clear conditions but struggle with visual obstacles like dust, fog, and snow. Harlow is leading an effort to incorporate millimeter wave radar to change that.

“We have all these sensors that work well in the lab and in clean environments, but we need to be able to go out in places such as Colorado where it snows sometimes,” Harlow said.

Where some researchers are forced to suspend work when a grant ends, members of the subterranean robotics team keep finding new partners to push the technology further.

Autonomous Flight

Eric Frew, a professor of aerospace at CU Boulder, is using the technology for a new National Institute of Standards and Technology competition to develop aerial robots – drones – instead of ground robots, to autonomously map disaster areas indoors and outside.

“Our entry is based directly on the Subterranean Challenge experience and the systems developed there,” Frew said.

Some teams in the competition will be relying on drones navigated by human operators, but Frew said CU Boulder’s project is aiming for an autonomous solution that allows humans to focus on more critical tasks.

Although numerous universities and private businesses are advancing autonomous robotic systems, Humbert said other organizations often focus on individual aspects of the technology. The students and faculty at CU Boulder are working on all avenues of the systems and for uses in environments that present extreme challenges.

“We’ve built world-class platforms that incorporate mapping, localization, planning, coordination – all the high level stuff, the autonomy, that’s all us,” Humbert said. “There are only a handful of teams across the world that can do that. It’s a huge advantage that CU Boulder has.”

Jayaram and team win IROS Best Paper Award on Safety, Security, and Rescue Robotics
Oct. 31, 2023

Assistant Professor Kaushik Jayaram’s Animal Inspired Movement and Robotics Laboratory recently won the Best Paper Award on Safety, Security, and Rescue Robotics at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), rising above around 3,000 other academic papers submitted to the conference. Jayaram is the lab’s PI; PhD student Heiko Kabutz was the paper’s lead researcher, and PhD students Alex Hedrick and Parker McDonnell were coauthors.

Their paper improves upon the lab’s earlier CLARI robot, demonstrating the ability to passively change shape and squeeze through narrow gaps in multiple directions. This is a new capability for legged robots, let alone insect-scale systems, that enables significantly enhanced maneuverability in cluttered environments and has the potential to aid first responders after major disasters.

Kabutz and Jayaram’s latest version is scaled down 60% in length and 38% in mass, while maintaining 80% of the actuation power. The robot weighs less than a gram but can support over three times its body weight as an additional payload. It is also over three times as fast as its predecessor reaching running speeds of 60 millimeters per second, or three of its body lengths per second.
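Those figures are internally consistent if "scaled down 60% in length" is read as mCLARI being about 60% of the original CLARI's 3.4-centimeter body length (roughly 2 centimeters), an assumption made explicit in this quick check:

```python
# Sanity check on the speed figures above, under the assumption (not stated in
# the article) that mCLARI's body length is about 60% of CLARI's 3.4 cm.
mclari_length_mm = 34.0 * 0.60      # ~20.4 mm, assumed body length
speed_mm_s = 60.0                   # reported top speed
print(f"{speed_mm_s / mclari_length_mm:.1f} body lengths per second")  # ~2.9
```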

A video of mCLARI in action is available from the lab.

With the latest breakthrough that Jayaram and Kabutz have now achieved with their research, they are able to scale their design down (or up) without sacrificing design integrity, bringing such robots closer in size to real-world application needs.

“Since these robots can deform, you can still have slightly larger sizes,” Jayaram said. “If you have a slightly larger size, you can carry more weight, you can have more sensors, you'll have a longer lifetime and be more stable. But when you need to be, you can squish through and go through those specific gaps.”

Kabutz, who leads the design of mCLARI, has surgeon-like hands that allow him to build and fold the tiny legs of the robot. Kabutz grew up fascinated by robots and competed in robotics competitions in high school.

“Initially, I was interested in building bigger robots,” said Kabutz, “but when I came to Jayaram’s lab, he really got me interested in building bioinspired robots at the insect scale.”

Jayaram’s research team studies concepts from biology and applies them to the design of real-world engineered systems. In his lab, you can find robots modeled after the body morphologies of various arthropods including cockroaches and spiders. 

“We are fundamentally interested in understanding why animals are the way they are and move the way they do,” said Jayaram, “and how we can build bioinspired robots that can address social needs, like search and rescue, environmental monitoring, or even use them during surgery.”
