
LIDAR Versus Camera

If you were shopping for a self-driving car today, would you opt for Tesla’s camera-and-radar technology or go all in on LIDAR? Either way, both approaches have their strengths and weaknesses; neither is superior to the other. Better yet, the two can be integrated to provide maximum performance and safety on the road.

Modern robocars need accurate sensors and a reliable perception and localization system for optimum performance. The perception system detects nearby cars, cyclists, and other objects on the road, while the localization system relies on a Global Positioning System (GPS) receiver to determine the car’s geographical location. The effectiveness of the perception system depends largely on the accuracy of the sensors.

Those sensors are typically either laser-based (LIDAR) or cameras. A LIDAR sensor uses a laser beam to determine an object’s distance and velocity, and its performance does not depend on lighting conditions. Cameras, by contrast, provide a rich understanding of the scene and operate much like the human visual cortex.

 

LIDAR

As mentioned above, LIDAR works on a simple principle: fire a beam of light at an object, measure how long the beam takes to return to the source to work out the object’s distance, and build a 3D image from many such measurements. Data collection and processing is extremely fast, since light travels through air at an effectively constant speed.
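To make the arithmetic concrete, here is a minimal sketch of that time-of-flight calculation; the round-trip time used is an illustrative value, not output from any particular sensor.

```python
# Minimal time-of-flight sketch: distance from a LIDAR pulse's round-trip time.
# The round-trip time below is an illustrative value, not real sensor output.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second (in vacuum; nearly the same in air)

def distance_from_round_trip(seconds: float) -> float:
    """Half the round-trip path gives the one-way distance to the target."""
    return SPEED_OF_LIGHT * seconds / 2.0

# A pulse that returns after about 500 nanoseconds hit something ~75 m away.
round_trip_s = 500e-9
print(f"Target distance: {distance_from_round_trip(round_trip_s):.1f} m")
```

One nanosecond of round-trip time corresponds to roughly 15 cm of distance, which is why precise timing electronics sit at the heart of every LIDAR unit.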

Because a LIDAR sensor supplies its own laser light, it maintains optimal performance in varied lighting conditions, day or night. A LIDAR system can be integrated with cameras, but an advanced unit can cost more than the rest of the car.

Camera

A camera operates on the same principle as the human visual system. It does not perform the same by day and by night; at night it needs extra light, so it has to adapt to the lighting conditions. Like the human visual cortex, a camera sees the objects around it in different colours, and its object recognition capability is superior to LIDAR’s. Unlike LIDAR, cameras are relatively cheap and affordable, and one can comfortably install many of them.

Tesla’s astounding success is often attributed to a simple principle, “less is more,” which is one of the primary reasons Elon Musk has dismissed the use of LIDAR on self-driving cars.

Comparing LIDAR and camera


  • Amount of processing power consumed

However much Musk touts Tesla’s cost-effective approach to building robocars, the cameras that drive its perception system demand a great deal of processing power. By contrast, LIDAR sensors do not require nearly as much compute to drive the perception system.
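As a rough back-of-envelope comparison, the sketch below contrasts the raw data rates a perception system would have to ingest from a camera suite versus a spinning LIDAR. The camera count, resolution, frame rate, and points-per-second figures are illustrative assumptions, not the specifications of any particular vehicle.

```python
# Back-of-envelope comparison of the raw data rates feeding the perception
# system. Every figure below is an illustrative assumption, not the
# specification of any real vehicle.

def camera_rate_bytes_per_s(num_cameras, width, height, fps, bytes_per_pixel=3):
    """Raw pixel throughput for a suite of cameras."""
    return num_cameras * width * height * fps * bytes_per_pixel

def lidar_rate_bytes_per_s(points_per_s, bytes_per_point=16):
    """Raw point throughput, e.g. x, y, z, intensity as 4-byte floats."""
    return points_per_s * bytes_per_point

cams  = camera_rate_bytes_per_s(num_cameras=8, width=1280, height=960, fps=30)
lidar = lidar_rate_bytes_per_s(points_per_s=1_200_000)

print(f"Cameras: ~{cams / 1e6:.0f} MB/s of raw pixels")
print(f"LIDAR:   ~{lidar / 1e6:.0f} MB/s of raw points")
```

Raw data rate is only a proxy for compute, but it gives a feel for why a camera-heavy perception stack leans so hard on dedicated processing hardware.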

  • Image resolution capability

A LIDAR’s detection system produces 3D imagery whose resolution is coarser than what cameras deliver. Because a camera samples its surroundings far more densely and possesses object recognition capability, the images it produces have very high resolution.
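One way to see the resolution gap is to compare how far apart adjacent samples land at a given range. The angular resolutions below are illustrative assumptions for a mid-range spinning LIDAR and a typical automotive camera.

```python
# Rough comparison of LIDAR point spacing versus camera pixel footprint at the
# same range. The angular resolutions used here are illustrative assumptions.
import math

def sample_spacing_m(angular_resolution_deg: float, range_m: float) -> float:
    """Distance between adjacent samples at a given range, in meters."""
    return 2 * range_m * math.tan(math.radians(angular_resolution_deg) / 2)

RANGE_M = 50.0
lidar_spacing  = sample_spacing_m(0.20, RANGE_M)   # ~0.2 deg between beams
camera_spacing = sample_spacing_m(0.02, RANGE_M)   # ~0.02 deg per pixel

print(f"LIDAR point spacing at {RANGE_M:.0f} m:    {lidar_spacing * 100:.1f} cm")
print(f"Camera pixel footprint at {RANGE_M:.0f} m: {camera_spacing * 100:.1f} cm")
```

At 50 m, points spaced tens of centimetres apart capture far less detail than pixels spaced a couple of centimetres apart.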

  • Object recognition capability

Unlike LIDAR, which only detects the presence of surrounding objects, a camera’s ability to see objects in their respective colours equips it to recognize different objects, such as cars, pedestrians, and traffic lights. It can also tell an object’s brightness, direction, size, and so on.
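As a hedged illustration of camera-based recognition (not any carmaker’s actual perception stack), the sketch below runs an off-the-shelf pretrained detector from torchvision over a single image; the image file name is hypothetical.

```python
# A sketch of camera-based object recognition using an off-the-shelf pretrained
# detector from torchvision (not any carmaker's actual perception stack).
# "road_scene.jpg" is a hypothetical file name.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Pretrained Faster R-CNN returns bounding boxes, COCO class labels, and scores.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = convert_image_dtype(read_image("road_scene.jpg"), torch.float)
with torch.no_grad():
    predictions = model([image])[0]

for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score > 0.7:  # keep only confident detections
        print(f"class id {int(label)} at {[round(v, 1) for v in box.tolist()]} "
              f"(score {score:.2f})")
```

The COCO label set used by this model happens to include cars, people, and traffic lights, which is exactly the kind of semantic information LIDAR alone cannot provide.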

  • Dependency on ambient light

LIDAR relies entirely on the beam of light it emits, so its ability to see and detect objects is not limited by the time of day, although heavy fog, rain, or snow can scatter the beam and degrade its returns. On the flip side, a camera sees differently under different lighting conditions; at night, for example, it must be aided with light in order to see.

  • Scene depth accuracy

Because of 3D LIDAR’s technological sophistication, the landscape elevation and height information it obtains is very accurate and reliable. A 3D LIDAR sweeps its beam across an object’s vertical plane as well as horizontally, so every return carries X, Y, and Z coordinates, which enhances accuracy. A camera, on the other hand, is limited to short-range objects, understanding the scene, and recognizing its surroundings.
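For reference, each LIDAR return is essentially a range plus two angles; the sketch below shows the standard spherical-to-Cartesian conversion that yields the X, Y, and Z coordinates mentioned above. The sample values are illustrative.

```python
# Turning a single LIDAR return (range plus two angles) into the X, Y, and Z
# coordinates mentioned above, via the standard spherical-to-Cartesian
# conversion. The sample values are illustrative.
import math

def lidar_return_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one laser return to Cartesian coordinates in the sensor frame."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left / right
    z = range_m * math.sin(el)                  # up / down (height information)
    return x, y, z

# A return 40 m away, 10 degrees to the left, 2 degrees above the sensor plane.
print(lidar_return_to_xyz(40.0, 10.0, 2.0))
```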

  • Cost

If you want to enjoy the benefits of LIDAR technology, you must be prepared to spend at least $1,000 on a single sensor unit, and an advanced 3D LIDAR sensor is estimated to cost more than the robocar itself. The processing power it saves, its ability to operate in any lighting conditions, and the depth accuracy of the scenes it captures are some of the features that make a LIDAR system so expensive. But, like any other electronic product, its cost is likely to come down considerably.

On the other hand, a camera is relatively cheap and affordable, and a number of them can be installed on a single robocar to enhance its performance. Tesla’s self-driving cars, for example, carry eight cameras in total, plus a radar. Tesla’s “less is more” approach of using cost-effective cameras and computer vision technology in its robocars has contributed to its tremendous success.

  • Object range capability

LIDAR technology can easily see and detect objects 70 to 100 meters away. Some companies claim their units can see up to 200 meters, though that claim is a stretch. The farther an object is from the LIDAR unit, the fewer returns it produces and the less accurate the resulting data. Conversely, a camera’s useful range is much more limited than a LIDAR system’s.
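The range fall-off is easy to estimate geometrically: with a fixed angular resolution, fewer beams land on a car-sized target as it moves away. The beam spacings and target size below are illustrative assumptions.

```python
# Why distant objects return fewer points: with a fixed angular resolution,
# fewer beams land on a car-sized target as range grows. Beam spacings and
# target size are illustrative assumptions.
import math

H_RES_DEG = 0.2   # horizontal spacing between beams, in degrees
V_RES_DEG = 0.6   # vertical spacing between beams, in degrees
TARGET_W, TARGET_H = 1.8, 1.5   # rough width/height of a car, in meters

def returns_on_target(range_m: float) -> float:
    """Approximate number of laser returns landing on the target at a range."""
    angular_w = math.degrees(2 * math.atan(TARGET_W / (2 * range_m)))
    angular_h = math.degrees(2 * math.atan(TARGET_H / (2 * range_m)))
    return (angular_w / H_RES_DEG) * (angular_h / V_RES_DEG)

for r in (30, 70, 100, 200):
    print(f"{r:>3} m: ~{returns_on_target(r):.0f} returns on a car-sized target")
```

Going from roughly eighty returns at 30 m to a handful at 100 m, and only one or two at 200 m, is why long-range claims deserve scrutiny.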

  • Presence of moving parts

Unlike a camera, a LIDAR sensor typically has moving parts that sweep the beam so it can scan and detect target objects quickly for 3D imaging. The fewer moving parts a LIDAR system has, the more expensive it tends to be, as with solid-state flash LIDARs. A camera has no moving parts unless they are specifically required.

  • Ability to see traffic lights

As much as LIDAR companies can pride themselves on the advances made in LIDAR systems over the last decade, LIDAR still cannot detect and read traffic and brake lights, and this is a significant safety concern. On the flip side, a camera works as a human eye does: it scans the scene, recognizes and sees objects in their respective colours, and provides a detailed understanding of its surroundings. That capability lets a camera see traffic and brake lights so the car can take appropriate action.
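To illustrate why colour sensing matters here, the toy sketch below classifies a cropped traffic-light image as red or green by counting pixels inside approximate HSV hue ranges with OpenCV. It is nothing like a production perception pipeline; the file name and thresholds are illustrative assumptions.

```python
# A toy illustration of why colour matters: classifying a cropped traffic-light
# image as red or green by counting pixels inside approximate HSV hue ranges.
# "traffic_light_crop.jpg" and the thresholds are illustrative assumptions.
import cv2

crop = cv2.imread("traffic_light_crop.jpg")        # hypothetical cropped image
hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)

# Red hue wraps around 0/180 on OpenCV's hue scale; green sits mid-range.
red_mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
green_mask = cv2.inRange(hsv, (45, 120, 120), (90, 255, 255))

state = "red" if cv2.countNonZero(red_mask) > cv2.countNonZero(green_mask) else "green"
print(f"The light appears to be {state}")
```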

The bottom line

Whereas LIDAR and camera systems can be integrated for optimum performance, they exhibit stark differences in their features. LIDAR uses a laser beam to gather 3D data and estimate an object’s size, distance, and velocity, so its performance does not depend on lighting. A camera, conversely, has to scan the scene, detect and recognize objects, and provide the understanding needed for appropriate action; for it to work effectively, suitable lighting conditions must be available.

Moreover, these two technologies differ in their image resolution capabilities, the processing power required, scene depth accuracy, et cetera.
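Finally, as a small illustration of how the two sensors can be integrated in practice, the sketch below projects a LIDAR return into the camera image with a pinhole model so that a camera detection can be paired with an accurate LIDAR depth. The camera intrinsics and the sample point are illustrative assumptions.

```python
# A minimal sketch of one common integration pattern: projecting a LIDAR return
# into the camera image with a pinhole model, so a camera detection can be
# paired with LIDAR's accurate depth. Intrinsics and the sample point are
# illustrative assumptions.
import numpy as np

# Assumed camera intrinsics: focal lengths and principal point, in pixels.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_point(point_in_camera_frame: np.ndarray):
    """Project a 3D point already expressed in the camera frame
    (x right, y down, z forward) onto the image plane."""
    u, v, w = K @ point_in_camera_frame
    return u / w, v / w

# A LIDAR return 20 m ahead, 2 m to the right, 0.5 m below the camera axis.
u, v = project_lidar_point(np.array([2.0, 0.5, 20.0]))
print(f"Projects to pixel ({u:.0f}, {v:.0f}); pair it with the detection found there")
```

Pairing each detected bounding box with the LIDAR points that project inside it is one simple way to give camera detections the depth accuracy they lack on their own.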
