Lidar and cameras get most of the spotlight in the push toward autonomous vehicle technology. To cross the finish line, however, autonomous vehicle companies are likely going to need some geospatial help.
To create an autonomous vehicle (AV) that will travel on existing public roads, there are a few key technological steps. Companies developing AVs need some kind of visual system that can do two things – know where the vehicle is (localization), and know what it is looking at (object classification). The latter gets a lot of attention in the press, with a cohort of startup companies aiming to create either the best sensors, the best interpretive algorithms, or both. Lidar, infrared and camera systems are all being developed, some meant to be used in concert with each other to determine what is in the road ahead.
However, beyond telling the difference between a squirrel and a shadow, autonomous vehicles will also need to make decisions that involve destination planning and reacting to traffic signals, signs and road conditions. In addition to requiring precise positioning of the vehicle itself, the surroundings and infrastructure need to be ingested into AV systems to determine which actions should be taken. The geospatial data used by these vehicles needs to reference absolute or relative positioning – which is a significant challenge, especially in urban areas or GPS-poor locations.
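To make the absolute-versus-relative distinction concrete, here is a minimal sketch (not any vendor's actual code) of converting an absolute coordinate from a map into a position relative to the vehicle. It uses a simple equirectangular approximation, which is only reasonable over short distances; production systems use full geodetic transforms.

```python
import math

# Illustrative only: project an absolute (lat, lon) map feature into a
# local east/north frame centered on the vehicle. The equirectangular
# approximation below is a sketch, valid only over short distances.
EARTH_RADIUS_M = 6_371_000.0

def to_local_enu(origin_lat, origin_lon, lat, lon):
    """Approximate east/north offsets (meters) of (lat, lon) from the origin."""
    lat0 = math.radians(origin_lat)
    east = math.radians(lon - origin_lon) * EARTH_RADIUS_M * math.cos(lat0)
    north = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    return east, north

# A map feature 0.001 degrees due north of the vehicle lands roughly
# 111 m ahead in the local frame.
east, north = to_local_enu(51.5, -0.1, 51.501, -0.1)
```

The same arithmetic run in reverse is what makes positioning errors so costly: a fraction of a degree of drift in the absolute fix translates into many meters of error in the vehicle's local frame.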
A need for more than GPS
To accomplish this type of positioning, you can take one of two routes. The first is to gather and store such mapping information in advance of your trip (e.g., to “ghost drive” it) and then use the pre-gathered data to continuously check the route as it is driven. For large-scale applications, however, this can be labor intensive.
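A hypothetical sketch of that continuous check might look like the following: compare the vehicle's live position against waypoints recorded on a prior "ghost drive" and flag when it drifts beyond a tolerance. The route, coordinates, and tolerance here are all invented for illustration.

```python
import math

# Hypothetical "ghost drive" check: waypoints gathered on a prior
# mapping run, expressed as (x, y) in meters in a local frame.
ROUTE = [(0.0, 0.0), (10.0, 0.2), (20.0, 0.1), (30.0, -0.1)]

def route_deviation(route, x, y):
    """Distance (meters) from the live position to the nearest recorded waypoint."""
    return min(math.hypot(x - wx, y - wy) for wx, wy in route)

def on_route(route, x, y, tolerance_m=1.0):
    """True if the live position is within tolerance of the surveyed route."""
    return route_deviation(route, x, y) <= tolerance_m
```

The labor-intensive part is not the check itself but keeping `ROUTE` fresh: every repaved lane or moved barrier requires re-driving and re-processing the corridor.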
In 2017, the startup Mapper.ai set out to pay people to collect scans as they drove – hiring them as "gig workers" to build a huge foundational map for use with AVs. By 2019, the company had been acquired by Velodyne Lidar, which plans to use Mapper's map interpretation software to strengthen its own lidar sensors.
This map-as-you-go approach is still being pursued by others, including Intel subsidiary Mobileye, which plans to create high-definition maps from crowdsourced mapping efforts joined together through advanced cloud processing.
Many AV developers, however, are looking toward a different solution: sourcing high-definition geospatial data that already exists. This approach uses external mapping data that is frequently refreshed and accurate enough to reduce errors and increase safety. There are limitations (how often the data is refreshed, whether it is publicly available or proprietary, whether the sources are reliable enough to be used consistently, etc.), but when combined with scans generated during the drive itself, such data can help create safer scenarios for passengers.
In 2020, UK-based Zenzic (an autonomous vehicle think tank) released a report detailing what types of HD geospatial data will be necessary to meet the safety requirements for autonomy, concluding that reliably obtaining this high-accuracy data will be key to autonomous vehicle rollout.
“We are now starting to see a shift in self-driving simulation technology from validation of safety cases towards its use for certification and regulation. Longer term, geospatial data for simulation will play a vital role in efficient and effective operation of self-driving technology. Particularly, if near real-time and highly accurate data is needed for safe operation and navigation. Now is the time to examine not only what types of geospatial data are needed across the industry to deliver simulation services for testing and development of self-driving technology, but also to further explore standardisation and sharing of this data in preparation for operational deployments.” (Zenzic Report)
This need has been reflected in where the money is going as well. According to Allied Market Research, the global digital map market is projected to reach $3.67 billion by 2023, growing at a CAGR of 12.6% from 2017 to 2023.
Solving pieces of the positioning puzzle
Just as there is a scrum of companies competing to develop sensors, there is a suite of geospatial companies adapting their products to serve the needs of AVs. These efforts are not limited to mapping companies. Big technology has taken notice, too, with Microsoft spending significant time and energy pursuing better maps for smart cities and automation via Azure Maps and Azure Digital Twins.
At CES last month, HERE Technologies debuted several new mapping products aimed at providing detailed 3D maps for autonomous driving. HERE now offers an HD GNSS positioning capability that gives its HD live maps an accuracy of under one meter, making lane-level positioning possible with lower latency than traditional GPS.
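Why sub-meter accuracy is the threshold for "lane-level" becomes clear with a little arithmetic. The sketch below (not HERE's API – the function, lane width, and conventions are assumptions for illustration) resolves a lateral offset from the left road edge to a lane index, assuming standard ~3.5 m lanes; with errors larger than about half a lane width, this assignment becomes ambiguous.

```python
# Illustrative only: typical highway lanes are roughly 3.5 m wide, so a
# position fix accurate to <1 m can place the vehicle in a specific lane.
LANE_WIDTH_M = 3.5

def lane_index(lateral_offset_m, num_lanes):
    """Return the 0-based lane (0 = leftmost) for an offset measured
    from the left road edge; None if the offset falls off the roadway."""
    if lateral_offset_m < 0 or lateral_offset_m >= num_lanes * LANE_WIDTH_M:
        return None
    return int(lateral_offset_m // LANE_WIDTH_M)
```

A 5 m offset on a three-lane road resolves to the middle lane; a traditional GPS error of several meters could put the same vehicle in any of the three.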
HERE's Premier 3D Cities offering is also interesting – a collection of high-fidelity 3D city models backed by geospatial data. Rather than relying on raw scans alone, HERE has endeavored to create accurate 3D-modeled versions of 75 cities around the world, which can then be applied in vehicle technologies.
Even for companies that do not map themselves, there are others making maps more AV-friendly. Earlier this year, a team of data visualization engineers (formerly at Uber) created a spin-off company based on their efforts to tackle a particularly tricky problem in providing updated, detailed maps to such services – how to transmit and manage terabytes of map data for relatively small applications. Their new company, Unfolded.ai, brings geospatial datasets together and allows them to be seamlessly served to third-party applications.
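Unfolded.ai's internals are not public, but a common way to serve terabytes of map data to small clients is spatial tiling: the map is cut into a zoom/x/y pyramid and clients fetch only the tiles they need. The sketch below computes the standard Web Mercator ("slippy map") tile containing a coordinate, the convention used by OpenStreetMap and most web mapping stacks.

```python
import math

# Standard Web Mercator tiling: at zoom z the world is a 2^z by 2^z grid,
# so a client near a given coordinate requests a handful of small tiles
# instead of downloading the whole dataset.
def lat_lon_to_tile(lat, lon, zoom):
    """Return the (x, y) indices of the tile containing (lat, lon) at this zoom."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

Because each tile is addressable by (zoom, x, y), tiles can be cached, versioned, and refreshed independently – which is what makes frequent updates to huge map datasets tractable for small client applications.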
No matter which approach is taken, the value of collecting and processing geospatial data has been apparent for decades. Geospatial data companies may well be on their way to becoming the backbone of a more autonomous future.