HD Live Map Part 2: A literal trunk-load of data

Intel recently projected that a self-driving car will produce four terabytes of data a day. The ability to process that much data in real time will require some heavy lifting. Thankfully, the HD Live Map is built to manage both the data flow and the processing power needed to keep an autonomous car safely on track.

In Part 1 of this series, we looked at how standard definition maps grew to become HD Maps, and the various data layers that came together to make the HD Live Map. The HD Live Map gives us a highly precise map with roads, lanes, signs, poles, and a plethora of other objects. The next step is producing a vehicle system that can use all this data efficiently, and in real time.

Imagine you’re driving down the road, coming up on a four-way intersection. This particular intersection is one you drive through every day, so you know there’s a stop sign for you and for oncoming traffic. You also know there’s a crosswalk at this intersection, and you’re always careful to stop properly behind it. As you approach it today, you see a car coming the other way, slowing down for its own stop.

This is a relatively simple scenario for you. As a driver, it’s likely that you can easily dismiss most of the information without worry, and move right through the intersection after you’re sure the way is clear.


By comparison, an autonomous car’s navigation system is collecting far more data than you are. As the car approaches the intersection, its sensors are continuously measuring the vehicle’s speed. They’re also tracking where the stop sign is, where the crosswalk is, the light pole on the corner, the lane markers, the car across the street, and more. To add some complexity, the car is checking and re-checking these things up to 30 times per second.
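To make the timing concrete, here is a minimal sketch of what a fixed-rate perception loop could look like. This is purely illustrative: `read_sensors` and all of its values are hypothetical stand-ins, not any real vehicle API; the only point is the 30 Hz re-check cycle described above.

```python
import time

SENSOR_RATE_HZ = 30            # re-check the scene up to 30 times per second
CYCLE = 1.0 / SENSOR_RATE_HZ   # time budget per cycle, in seconds

def read_sensors():
    """Hypothetical stand-in for the car's sensor stack (values are made up)."""
    return {
        "speed_mps": 11.2,
        "stop_sign_m": 42.0,      # distance to the stop sign
        "crosswalk_m": 38.5,      # distance to the crosswalk
        "oncoming_car_m": 55.0,   # distance to the car across the street
    }

def perception_loop(cycles):
    """Poll the sensors at a fixed rate, sleeping off any leftover budget."""
    snapshots = []
    for _ in range(cycles):
        start = time.monotonic()
        snapshots.append(read_sensors())
        # sleep for whatever remains of the 1/30-second budget
        time.sleep(max(0.0, CYCLE - (time.monotonic() - start)))
    return snapshots

snapshots = perception_loop(cycles=3)
print(len(snapshots))  # one snapshot per cycle
```

Each snapshot then becomes the input for the comparison against the map, which is where the real work happens.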

Handling that much data, and marshaling the power needed to put it to good use, is a big task.

One way to solve this would be to build a supercomputer into your trunk. That approach isn’t a scalable solution just yet, though time and Moore’s Law may deliver it in the near future. To solve the problem now, you need a car with a robust array of sensors connected to a cloud network powerful enough to take in all that data. The cloud can then process the information intelligently and direct the car to take the appropriate action.


This is precisely what the HD Live Map is doing. To put it simply: the car detects its environment and compares that information with the map. The car also communicates with the cloud network, where the heavy processing takes place; the cloud then sends back the proper commands so the car, in coordination with its own sensors, can navigate effectively. The heavy lifting is done in the cloud, and the HD Live Map is always informing what’s happening next with the freshest information possible.
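The detect-and-compare step above can be sketched in a toy form. Assume the HD map stores the known positions of landmarks like the stop sign and the light pole; matching them against what the sensors just observed tells the system how far off its position estimate is. All names and coordinates here are illustrative, not HERE’s actual data model.

```python
import math

# Landmark positions from the HD map for this intersection (illustrative, metres)
map_landmarks = {
    "stop_sign":  (10.0, 4.0),
    "light_pole": (12.5, -3.0),
}

# The same landmarks as the car's sensors just observed them
observed = {
    "stop_sign":  (10.4, 4.2),
    "light_pole": (12.8, -2.9),
}

def position_error(map_pts, seen_pts):
    """Average offset between observed landmarks and their mapped positions."""
    dists = [
        math.dist(map_pts[name], seen_pts[name])
        for name in seen_pts
        if name in map_pts
    ]
    return sum(dists) / len(dists)

err = position_error(map_landmarks, observed)
print(round(err, 2))  # average landmark offset in metres
```

In a real system this comparison feeds a localization filter rather than a simple average, but the principle is the same: the map supplies the expected world, and the sensors supply the observed one.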

In the future, car systems will grow more advanced, and on-board computers will grow smaller and more powerful. As those computers develop, more processing responsibility can move into the car itself, and data transmission will focus on the most important information…

… like what happens if there is an undetected object in the road?

In part three of this series, we’ll look at data aggregation, and how the HD Live Map heals itself.

Click here to download the HD Live Map tech brief!

