Self-Driving Vehicle Success is Tied to Machine Vision

Rideshare company Uber made a big splash when it introduced self-driving cars in Pittsburgh to pick up riders. Autonomous vehicles have moved off the drawing board and into the streets. During this test phase, Uber will keep engineers behind the wheel until the system is fully developed. Passenger pick-ups in the city of bridges and winding streets have so far proven successful.

Uber Self-Driving Car (Image source: uber.com)

Self-driving cars being developed for consumers shift safety systems from reacting to crashes, as air bags do, to preventing crashes in the first place. These active safety systems are opening many opportunities for makers of machine vision systems.

One reason why vision is so powerful is because it allows us to interact with the environment and to make decisions without being in physical contact with the objects around us (Issues on Machine Vision, 1989).

Vehicles that constantly monitor their environment with machine vision systems, as noted in the article Autonomous Car Industry Comes Knocking on Machine Vision’s Front Door, operate in a mode of accident prevention rather than accident reaction.

Expect cars to use multiple imaging devices and components including sensors, cameras, LIDAR (Light Detection and Ranging), and radar. These devices will be tasked with monitoring and eventually controlling everything from lane departure to parking.
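Lane-departure monitoring of the kind mentioned above can be reduced to a simple idea: a camera-based vision system estimates the vehicle's lateral distance to the detected lane markings and warns when either distance falls below a safety margin. The following is a minimal sketch of that logic; the function name, distances, and threshold are illustrative assumptions, not any vendor's actual API.

```python
# Toy lane-departure check, illustrating the kind of decision a
# camera-based machine vision system might drive. All names and
# thresholds here are hypothetical.

def lane_departure_warning(left_line_m: float, right_line_m: float,
                           margin_m: float = 0.3) -> bool:
    """Return True if the vehicle has drifted too close to a lane line.

    left_line_m / right_line_m: lateral distances (meters) from the
    vehicle's centerline to the detected left and right lane markings,
    as estimated from camera images.
    margin_m: minimum acceptable clearance before warning.
    """
    return left_line_m < margin_m or right_line_m < margin_m

# Roughly centered in a ~3.6 m lane: no warning.
print(lane_departure_warning(1.8, 1.8))   # False
# Drifting toward the left marking: warn.
print(lane_departure_warning(0.2, 3.4))   # True
```

A production system would of course fuse camera estimates with LIDAR and radar and track drift over time, but the decision boundary is this simple at its core.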

But all good ideas have challenges to overcome. In the development of autonomous vehicles, power and size constraints need to be addressed at the design stage. The cables alone that connect the various components can add considerable weight to a vehicle and negatively impact its fuel efficiency.

Read more by Association for Advancing Automation, October 5, 2016