This turns out to be an interesting article:
Musk’s prediction cast a spotlight on a rapidly growing divide in the world of AV (autonomous vehicle) development: whether to aim for vehicles that, like human drivers, can navigate the world through sight alone or if sensors like LiDAR are still necessary to counter-balance some of the limitations of computer vision.
Is Elon Wrong About LiDAR? (Scale)
While the world is 3D, a self-driving car needs a 2D “map” to navigate. Lidar uses the same technique as radar, but with reflected laser light instead of radio waves, to aid in that 3D-to-2D conversion; video alone has difficulty measuring precise ranges.
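The range-from-reflection idea can be made concrete. The sketch below is a minimal, hypothetical illustration (the function name and angle convention are my assumptions, not any vendor's API): a lidar return's round-trip time gives a range, and the beam's pointing angles place that range as a 3D point relative to the sensor.

```python
import math

def lidar_point(time_of_flight_s, azimuth_deg, elevation_deg):
    """Convert one lidar return into a 3D point relative to the sensor.

    The range comes from the round-trip time of the laser pulse
    (same principle as radar, but with light); the beam's azimuth
    and elevation angles then place that range in 3D space.
    """
    C = 299_792_458.0              # speed of light, m/s
    r = C * time_of_flight_s / 2.0 # round trip -> one-way range
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)

# A pulse returning after ~66.7 ns traveled ~20 m round trip,
# so the target is roughly 10 m away.
print(lidar_point(66.7e-9, 30.0, 0.0))
```

Projecting the resulting point cloud onto the ground plane (dropping `z`) is, in essence, the 3D-to-2D conversion described above.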
This is very similar to the issue discussed on the thread about the third mate on the Exxon Valdez. While a pilot, being in familiar waters, can convert the 3D view he sees into the 2D “map” in his head, the third mate has to use a literal map (the chart) to do the same task.
Just as the self-driving car may (or may not) need Lidar, the third mate needs radar ranges and compass bearings to make the conversion.
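The navigator's version of the conversion is just polar-to-Cartesian arithmetic. Here is a minimal sketch, with names and conventions of my own choosing (bearings measured clockwise from true north, chart coordinates as east/north): given a radar range and compass bearing to a charted landmark, the ship's position is plotted on the reciprocal of that bearing.

```python
import math

def chart_fix(landmark_east, landmark_north, bearing_deg, range_nm):
    """Plot own-ship position from a bearing and range to a charted landmark.

    The bearing is taken FROM the ship TO the landmark (clockwise from
    true north), so the ship lies back along the reciprocal of that
    bearing, at the measured radar range.
    """
    b = math.radians(bearing_deg)
    # Step from the landmark back to the ship along the reciprocal bearing.
    east = landmark_east - range_nm * math.sin(b)
    north = landmark_north - range_nm * math.cos(b)
    return (east, north)

# Landmark bears 090° true at 3.0 nm: the ship is 3 nm due west of it.
print(chart_fix(10.0, 5.0, 90.0, 3.0))  # → (7.0, 5.0)
```

This is exactly the mechanical work the pilot's mental map makes unnecessary, and it is why the third mate's task is slower and more error-prone.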
What does it mean that humans struggle to annotate these types of 2D scenes (while managing to walk around our houses and drive to work without a second thought)? It’s really a testament to just how differently our brains go about perceiving the world compared to the software of a self-driving car. When it comes to planning physical movements, we don’t need to perform mathematical calculations in our head about the environment to know to hit the brakes when a driver runs a red light in front of us.
If I needed a “chart” of the inside of my house and had to plot bearings and ranges to navigate it, the task of going to the kitchen and back for a cup of coffee would be much more difficult. Likewise, the third mate has a much more difficult task than the pilot.