GPUs and Virtualization Boost Performance for ADAS Platforms
Over the course of decades, the graphics processing unit (GPU) has evolved from its origins as a video display adapter in arcade games to a computing powerhouse that drives artificial intelligence and machine learning, accelerating computational workloads in a wide array of fields from oil and gas exploration to natural language processing. Specifically, GPUs play an increasingly critical role in the fast-evolving technologies for autonomous driving and advanced driver-assistance systems (ADAS).
How did the GPU find its way from the video arcade to the cutting edge of scientific research and self-driving cars?
The GPU’s rise as the go-to processor for Big Data workloads is due to some basic architectural differences between the traditional central processing unit (CPU) and the GPU. The GPU is a specialized type of microprocessor, originally designed for rendering visual effects and sophisticated 3D graphics for gaming, which requires intense computing power to display real-time action. To deliver that capacity, a GPU combines thousands of small, efficient cores into a massively parallel architecture that can process vast amounts of data simultaneously.
In contrast, a typical CPU consists of just a few cores with abundant cache memory and is usually designed to process only a few software threads at a time. CPUs are optimized for sequential, serial processing, which is sufficient for most general-purpose computing workloads. However, when it comes to simultaneous processing of vast amounts of data, the GPU wins.
A GPU with hundreds of cores processing thousands of threads in parallel can accelerate some software by 100X compared to a typical CPU. And increasingly, the really challenging computational problems that we expect computers to solve for us have inherently parallel structures. Think of the enormous volumes of video-processing, image-analysis, signal-processing, and machine-learning work that must occur reliably and in real time to operate a self-driving vehicle. In power-constrained systems such as a battery-powered electric vehicle, it also matters that a GPU typically achieves this processing speed with greater power and cost efficiency than a CPU.
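To make the parallelism concrete, here is a minimal OpenCL C sketch of the kind of data-parallel work a GPU handles well (OpenCL is one common way to program embedded GPUs for compute; the kernel and buffer names are illustrative rather than taken from any particular SDK). Each work-item processes a single pixel, and the hardware runs thousands of work-items concurrently across its cores:

```c
/*
 * Minimal data-parallel image operation: each of the num_pixels work-items
 * converts one RGBA pixel to greyscale. A GPU executes thousands of these
 * work-items at once, whereas a CPU would typically loop over the pixels a
 * few threads at a time.
 */
__kernel void rgba_to_grey(__global const uchar4 *rgba,  /* input frame  */
                           __global uchar        *grey,  /* output frame */
                           const int              num_pixels)
{
    int i = get_global_id(0);      /* unique index of this work-item */
    if (i >= num_pixels)
        return;

    uchar4 p = rgba[i];
    /* Integer approximation of Rec.601 luma: Y = 0.299R + 0.587G + 0.114B */
    grey[i] = (uchar)((77 * p.x + 150 * p.y + 29 * p.z) >> 8);
}
```

The host simply enqueues the kernel over as many work-items as there are pixels; the GPU decides how to spread that work across its cores.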
GPUs are Tailor-Made for Autonomous Vehicles
The processing requirements of autonomous vehicles and ADAS technologies sit squarely within the GPU's wheelhouse, especially in the areas of image analysis and parallel signal processing. Image processing is a natural problem domain for the made-for-gaming GPU. Indeed, almost any kind of computationally dense parallel computation is a good fit.
ADAS platforms can leverage the GPU's compute capability to process and analyze sensor data in real time. The discrete sensors feeding these platforms include:
- Light detection and ranging (LiDAR), which measures the distance to a target with a pulsed laser light.
- Radio detection and ranging (radar), which is similar to LiDAR but uses radio waves instead of a laser.
- Infrared (IR) camera systems, which use thermal imaging to see in darkness.
These all enable ADAS to better interpret the environment and improve the system’s ability to support the driver and maintain the safety of an autonomous vehicle.
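As a concrete illustration of this kind of real-time sensor processing, the OpenCL C sketch below (with hypothetical kernel and buffer names) assigns one work-item to each LiDAR return and flags the points that fall inside a braking corridor directly ahead of the vehicle, so an entire scan of tens of thousands of points can be screened in a single parallel pass:

```c
/*
 * Illustrative sketch: one work-item per LiDAR return. The range, bearing,
 * and output buffers are assumed to be filled and consumed by a host-side
 * sensor pipeline; the corridor dimensions are passed in as parameters.
 */
__kernel void flag_points_in_corridor(__global const float *range_m,
                                      __global const float *bearing_rad,
                                      __global int         *in_corridor,
                                      const int             num_points,
                                      const float           max_range_m,
                                      const float           half_width_m)
{
    int i = get_global_id(0);
    if (i >= num_points)
        return;

    /* Convert the polar LiDAR return to forward (x) and lateral (y) offsets. */
    float x = range_m[i] * cos(bearing_rad[i]);
    float y = range_m[i] * sin(bearing_rad[i]);

    in_corridor[i] = (x > 0.0f && x < max_range_m && fabs(y) < half_width_m);
}
```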
As self-driving systems become more prevalent and advanced, the GPU will increase in importance—and in power. The GPU is set to be the workhorse of the autonomous vehicle, delivering the compute capability that lets cars of the future become more aware of, and responsive to, their environment so they can operate dependably, efficiently, and safely.
Virtualizing the GPU
The level of performance demanded by ADAS platforms will require increasingly larger and more powerful GPUs, adding to the manufacturing bill of materials for autonomous vehicles. To mitigate this expense, platform vendors will look to increase the value and functionality of the GPU by using it to perform multiple workloads in the vehicle.
Most modern vehicles already have GPUs on board to drive digital instrument clusters and dashboard displays, with multiple high-resolution screens showing maps, forecasts, and other visual information. 1080p resolution is now common in mid-range cars, and 4K screens are increasingly specified for luxury and executive cars.
As we've already discussed, a single physical GPU is capable of tremendous processing performance. Virtualizing the GPU with specialized software goes a step further: it abstracts the processing potential of the physical GPU and transforms it into multiple virtual instances. A single physical GPU can host multiple virtual workloads, all operating independently of each other yet emanating from the same hardware, without any of the virtual instances being aware of, or in any way affecting, the others.
Virtualized GPUs have obvious applicability for autonomous-vehicle and ADAS scenarios, as a single GPU can power multiple applications, from map visualization and infotainment to processing environmental sensor data to identify roadway obstacles.
However, enabling multiple virtual operations from a single GPU in automotive applications is only safe and effective if the GPU has rock-solid support for hardware-accelerated virtualization.
Virtualization software is most dependable when hardware enforces entirely separate managed address spaces for each virtual instance, and enables the restart, or flushing, of an instance that’s not operating correctly. This workload isolation is key to allowing shared use of the GPU, while keeping critical software, such as driver-assistance systems, from being corrupted by any other process.
Imagine a situation in which a fault in the dashboard software could affect the correct operation of the driver-assistance system: the consequences would be disastrous. Hardware-supported virtualization for GPUs provides protected execution contexts to ensure that this situation doesn't arise.
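Conceptually, the arrangement looks something like the C sketch below. The types and functions are hypothetical stand-ins, not a real driver or hypervisor API: each virtual GPU context has its own hardware-enforced address space and priority, and a watchdog can flush and restart a misbehaving context without touching the others.

```c
/* Conceptual sketch only: hypothetical types and functions, not a real API. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;             /* e.g. "ADAS", "cluster", "infotainment"   */
    int         priority;         /* safety-critical work gets top priority   */
    unsigned    address_space_id; /* hardware-enforced, separate per context  */
    bool        responsive;       /* cleared if the context misses heartbeats */
} vgpu_context;

/* Hypothetical hypervisor hook: reset only the faulty context. */
static void reset_context(vgpu_context *ctx)
{
    printf("flushing and restarting '%s' (address space %u)\n",
           ctx->name, ctx->address_space_id);
    ctx->responsive = true;
}

static void watchdog_tick(vgpu_context *contexts, int count)
{
    for (int i = 0; i < count; ++i) {
        if (!contexts[i].responsive)
            reset_context(&contexts[i]);  /* the other contexts keep running */
    }
}

int main(void)
{
    vgpu_context contexts[] = {
        { .name = "ADAS",         .priority = 0, .address_space_id = 1, .responsive = true  },
        { .name = "cluster",      .priority = 1, .address_space_id = 2, .responsive = true  },
        { .name = "infotainment", .priority = 2, .address_space_id = 3, .responsive = false },
    };
    watchdog_tick(contexts, 3);
    return 0;
}
```

In a real platform the heartbeats, resets, and scheduling would be handled by the GPU hardware and hypervisor rather than by application code; the point is that the ADAS context never shares state with, or waits on, the infotainment context.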
From an ADAS platform developer's point of view, hardware-based virtualization offers a further benefit. It provides a safer environment in which to deliver various applications and services, without concerns about the vehicle's electronic systems being taken down by a rogue piece of software. It also means that rather than a traditional hardware box with fixed software for the infotainment and engine-management systems, the car becomes a flexible, configurable software platform that can be updated over the air. OEMs can swap paid-for services in and out easily, without disrupting the car's operation, opening up potential new revenue streams.
Imagination’s GPU Solutions
PowerVR GPUs developed by Imagination address the data-processing and trusted-architecture challenges facing developers of autonomous-vehicle platforms. PowerVR GPUs support full hardware virtualization, completely isolating the virtual instances that share the GPU. They also provide the muscle required to manage and prioritize these virtual operations, powering the ADAS platform architecture with the performance headroom needed for safe, dependable outcomes.
Lower power consumption is also critical for autonomous vehicles, as most self-driving cars will be electric and operate on batteries. Lower power requirements for the vehicle's control and compute platform translate into improved overall vehicle performance.
The core compute architecture inside PowerVR GPUs was designed from the ground up to offer fast performance and low power consumption through reduced-precision computation, especially half-precision floating point (FP16). Running at lower precision (where lower is usually classed as less than 32 bits) is one of the best ways to reduce power dissipation in an embedded GPU without significant loss of accuracy.
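The difference is easiest to see side by side. The two OpenCL C kernels below perform the same multiply-accumulate at FP16 and FP32 (the kernel names are illustrative; half-precision arithmetic in OpenCL C requires the cl_khr_fp16 extension). The FP16 version moves half as many bytes per value and can use narrower arithmetic datapaths, which is typically where the power savings come from:

```c
/* Half-precision arithmetic needs this extension where the device supports it. */
#pragma OPENCL EXTENSION cl_khr_fp16 : enable

/* 16-bit storage and arithmetic: half the memory traffic of the FP32 version. */
__kernel void mac_fp16(__global const half *a, __global const half *b,
                       __global half *out, const int n)
{
    int i = get_global_id(0);
    if (i < n)
        out[i] = a[i] * b[i] + out[i];
}

/* Full-precision reference version of the same operation. */
__kernel void mac_fp32(__global const float *a, __global const float *b,
                       __global float *out, const int n)
{
    int i = get_global_id(0);
    if (i < n)
        out[i] = a[i] * b[i] + out[i];
}
```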
Imagination designed the FP16 hardware as a separate data path from the full-precision FP32 hardware. Shared data-path designs are common because they are simpler in many ways, but having discrete hardware for each pathway lets the company offer the best possible power consumption and efficiency, as each data path can be optimized for its task with fewer design compromises.
Imagination also offers a toolset to support the development, optimization, and deployment of neural networks across its GPUs and AI accelerators. The design environment provides a single, unified toolchain that lets developers take networks of many types, built in multiple frameworks, and convert them into a format that can be deployed on:
- The GPU as a compute engine.
- The PowerVR Series2NX and Series3NX neural network accelerators.
- A mixture of the two, where the GPU's flexibility in implementing a new network layer is complemented by running the remaining layers on a highly optimized, high-performance, dedicated convolutional-neural-network (CNN) accelerator, as sketched below.
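The mixed deployment in the last item amounts to partitioning a network by layer. The C sketch below is purely conceptual, with hypothetical types and names that are not part of Imagination's toolchain: layers the fixed-function accelerator supports are mapped to it, and anything it does not yet handle falls back to the GPU compute path.

```c
/* Conceptual layer-partitioning sketch; all names here are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { LAYER_CONV, LAYER_POOL, LAYER_CUSTOM } layer_kind;

typedef struct {
    const char *name;
    layer_kind  kind;
} layer;

/* Hypothetical capability check for a fixed-function CNN accelerator. */
static bool accelerator_supports(const layer *l)
{
    return l->kind == LAYER_CONV || l->kind == LAYER_POOL;
}

int main(void)
{
    layer net[] = {
        { "conv1",      LAYER_CONV   },
        { "pool1",      LAYER_POOL   },
        { "custom_nms", LAYER_CUSTOM },  /* a new layer type, not yet in hardware */
    };
    int n = (int)(sizeof net / sizeof net[0]);

    for (int i = 0; i < n; ++i) {
        printf("%-12s -> %s\n", net[i].name,
               accelerator_supports(&net[i]) ? "CNN accelerator" : "GPU compute");
    }
    return 0;
}
```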
According to Imagination, ADAS platform designers can trust PowerVR GPUs as proven components in the overall system architecture of the autonomous vehicle, with best-in-class power efficiency and memory-bandwidth usage, as well as a balanced GPU design that fits the car's technology needs.
Those needs include improved performance for the systems the driver and passengers interact with most, on larger and higher-resolution displays, along with a design that lends itself to safer, more dependable next-generation ADAS applications.
Bryce Johnstone is Director of Automotive Segment Marketing at Imagination Technologies.