Autonomous navigation is being refined worldwide, but before autonomous vehicles become ubiquitous, a number of problems must be solved. One of them is severe weather, which can degrade the sensors autonomous vehicles currently use to determine their location and their position relative to other moving objects. A new technology developed by Geophysical Survey Systems (GSSI), named TerraVision, uses ground penetrating radar to improve navigation for autonomous vehicles.
The benefit of ground penetrating radar is that it is not affected by weather conditions the way the usual AV sensors are, explains Dr. David Cist, GSSI’s VP of Engineering. “In rain, fog and mud, sensors can become blinded when windows get dirty or when heavy rain or snow clutters up the signal. Snow-covered roads obscure lane paint, road signs and other markers used for navigation. In addition to weather, of course, GPS is great until there are buildings or trees. Night-time conditions can be a challenge, especially with rain. Additionally, seasons change the maps a great deal.”
Creating a digital fingerprint of the subsurface
GSSI’s LGPR technology works by sending radio waves into the ground, creating a digital fingerprint of the subsurface that is used to determine the position of a vehicle equipped with TerraVision (Figure 2). The technology works because radio waves penetrate about 3m down and reflect off rocks, roots, soils and pipes. The reflected signals are used to create a 3D map of what’s below the surface. Stitching many of these images together creates a full 3D fingerprint that any LGPR-equipped vehicle can use to know exactly where it is. Additionally, creating the full map, above ground and below, raises the probability that autonomous vehicles can localize and navigate roadways in any conditions, says Cist.
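The slice-stitching idea can be sketched in a few lines. The following is a toy illustration, not GSSI’s implementation: the array shapes, the `stitch_fingerprint` name and the simulated reflection data are all assumptions. Each radar scan is treated as a 2D depth-by-width slice, and the reference map is simply a stack of those slices along the direction of travel:

```python
import numpy as np

def stitch_fingerprint(slices):
    """Stack successive GPR scans (each depth x width) along the
    travel axis to form a 3D subsurface fingerprint of the road."""
    return np.stack(slices, axis=0)  # shape: (along_track, depth, width)

# Toy example: 50 simulated subsurface reflection slices.
rng = np.random.default_rng(0)
reference_map = stitch_fingerprint(
    [rng.normal(size=(64, 16)) for _ in range(50)]
)
print(reference_map.shape)  # (50, 64, 16)
```

In a real survey each along-track index would also carry a georeferenced position, which is what turns the stack of images into a navigable map.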
“Nearly all autonomous navigation relies on 3D maps, which the cameras and lidars create. Cameras and lidar sensors on autonomous cars then use maps of road marks, street signs, buildings and the like to know where they are for localization purposes. TerraVision is no different, except that the 3D map is below ground. Using these maps, any TerraVision-equipped vehicle knows where it is geographically. TerraVision vehicles know their location when their data slice or fingerprint finds a match with some slice in the reference map. As the vehicle moves, each next slice finds a new match within the map. With each new match, the heading and velocity can be calculated to keep the vehicle on track. As with other maps, the TerraVision reference map needs to be georeferenced in some way, either by accurate GPS or by integration into the other above-ground maps, to make a powerful navigation guide that reduces failure modes.”
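The matching step Cist describes, comparing each live slice against the reference map and tracking successive matches, can be sketched with normalized cross-correlation. This is a minimal sketch under assumed data shapes; `locate_slice` is a hypothetical helper, not TerraVision’s API:

```python
import numpy as np

def locate_slice(reference_map, query, prev_idx=None, window=None):
    """Return the along-track index whose map slice best matches the
    query slice, scored by normalized cross-correlation. Optionally
    restrict the search to a window around the previous match."""
    n = reference_map.shape[0]
    lo, hi = 0, n
    if prev_idx is not None and window is not None:
        lo, hi = max(0, prev_idx - window), min(n, prev_idx + window + 1)
    q = (query - query.mean()) / (query.std() + 1e-12)
    scores = []
    for i in range(lo, hi):
        s = reference_map[i]
        s = (s - s.mean()) / (s.std() + 1e-12)
        scores.append(float((q * s).mean()))
    return lo + int(np.argmax(scores))

# Toy data: a pre-surveyed fingerprint map and a noisy live scan.
rng = np.random.default_rng(1)
ref = rng.normal(size=(100, 64, 16))
noisy = ref[42] + 0.1 * rng.normal(size=(64, 16))
print(locate_slice(ref, noisy))  # matches slice 42 despite the noise
```

Because each along-track index corresponds to a georeferenced survey position, the spacing between successive matches divided by the elapsed time yields the speed and heading that keep the vehicle on track.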
Years of experimental testing at MIT have demonstrated LGPR’s potential, and GSSI has since refined it with faster hardware, smarter software and a smaller mechanical design: the company redesigned the RF switching, cut power requirements by a factor of 4, reduced emissions by more than 100x, significantly shrank the unit, made it more weather-resistant and improved its performance. In late June, TerraVision was successfully integrated into Level 2 test automation in closed-loop field trials in Devens, MA. Level 2 is defined as control of both steering and acceleration, with a human in the driver’s seat who can take control of the car at any time.
GSSI has decided to focus initially overseas, given the uncertainties around FCC restrictions in the United States, and because several firms around the world, including two large Japanese and German companies working on AV navigation, had asked it to develop and test LGPR. Performance testing in Germany may establish the technology’s effectiveness by year’s end.
With many years of know-how and data, GSSI can show that GPR maps remain rock-solid for decades, and it has the geophysics experience needed to make LGPR maps stable and reliable. LGPR testing has shown in-lane localization accuracy of about 4cm at highway speeds, equal to or better than other AV navigation sensors: GPS navigation, for example, gives about 30cm accuracy, except in cities, forests and tunnels. Other sensors, such as lidar, radar and cameras, scan the surface features of the road and its environment to achieve about 10cm accuracy, but their navigation can easily break down in rain, snow, dust, fog and even fallen leaves.
Adding LGPR to AV navigation would compensate for known sensor failure modes that put lives at risk. If international LGPR field trials confirm MIT’s and GSSI’s results so far, it would be a huge win for the industry, since user acceptance of AVs requires fail-safe localization everywhere and in all conditions: “All sensors have limitations, and TerraVision is no different. But if AV is ever going to get past Level 3, it is vital that sensors can compensate for each other’s shortcomings in all conditions.”