Monday, October 24, 2016

Autonomous Cars – Part 3: Technology Consolidation

Kaivan Karimi
SVP of Strategy and Business Development
BlackBerry Technology Solutions (BTS)


The amount of software in a car is mushrooming: a modern car contains 100 to 150 million lines of code, more than almost any other system.

Today cars are controlled by hardware electronic control units (ECUs) running those millions of lines of code. Most newer cars contain 60 to 100 ECUs, and that number is growing; high-end cars can have even more. The new trend is to consolidate ECUs into a smaller number of domain/area controllers. The idea is to reduce the complexity of software development, the weight of the car, and the overall cost. Consolidation also makes software upgrades less complex, so that functionality can be enhanced to extend the life of a platform and offer a very large return on investment.

Another benefit is that software can be more easily upgraded Over-The-Air (OTA) to deliver minor or major fixes, respond to security issues, and provide other enhancements without bringing the car to the dealership. This not only saves time, but also adds to the safety, security, and reliability of the car while lowering its overall maintenance cost. According to research firm IHS, about 4.6 million cars received OTA software updates for telematics applications last year, and by 2022, 43 million cars are expected to use OTA services. That is clearly a huge increase.
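As a rough illustration of what an OTA client must do before flashing anything, the sketch below checks whether a newer image is available and verifies its integrity first. This is a hypothetical toy, not any vendor's API: the function names, dotted-version scheme, and use of a bare SHA-256 digest (a real system would verify a cryptographic signature) are all my own assumptions.

```python
import hashlib

def needs_update(installed: str, available: str) -> bool:
    """Compare dotted version strings numerically, e.g. '2.1.0' < '2.2.0'."""
    return tuple(map(int, installed.split("."))) < tuple(map(int, available.split(".")))

def verify_image(image: bytes, expected_sha256: str) -> bool:
    """Accept the OTA image only if its digest matches the expected one.
    (Stand-in for real signature verification.)"""
    return hashlib.sha256(image).hexdigest() == expected_sha256

def apply_ota(installed: str, available: str, image: bytes, expected_sha256: str) -> str:
    """Return the version that ends up active after the update attempt."""
    if needs_update(installed, available) and verify_image(image, expected_sha256):
        return available   # image accepted; new version becomes active
    return installed       # no update needed, or integrity check failed
```

The key property the sketch shows is fail-safe behavior: a corrupted or tampered image leaves the installed version untouched.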
Some of the other technology components of Advanced Driver Assistance Systems (ADAS) are noted below:

Maps
While most people do not consider maps a component of an ADAS system, in the future they will play a key role in helping drivers operate vehicles safely and adapt to driving changes based on location, such as changing which side of the road you drive on when you reach a border crossing. Maps provide a necessary input to augment the information provided by the various sensors in the car. This is not just macro-level geographic data for finding directions, but also data for augmenting functions such as camera-based detection of traffic signs and roadway information, as well as infrastructure information. Cloud-based processing will then integrate the data sent by all vehicles into a global map that is updated cooperatively by all drivers, covering pot-holes to avoid, newly added roadway signs, or rerouting due to construction.
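To make the cooperative-update idea concrete, here is a toy sketch of how a cloud backend might merge vehicle reports into the shared map. Everything here is an assumption for illustration (the function names, the report format, and the rule of requiring three independent reports before publishing a hazard); a real mapping service would be far more sophisticated.

```python
from collections import Counter

REPORT_THRESHOLD = 3  # assumed: a hazard needs 3 independent reports to be published

def merge_reports(global_map: set, reports: list) -> set:
    """Add a hazard location to the shared map once enough vehicles report it.

    `reports` is a list of hashable locations (e.g. (lat, lon) tuples);
    requiring multiple corroborating reports filters out one-off sensor noise.
    """
    counts = Counter(reports)
    confirmed = {loc for loc, n in counts.items() if n >= REPORT_THRESHOLD}
    return global_map | confirmed
```

A pothole reported by three cars makes it into the map; a single spurious detection does not.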

Sensor Fusion
Sensor fusion means combining information and data from different sensors, leveraging the individual advantages of each sensor to complement and cover the weaknesses of the others. The whole is greater than the sum of the parts, that is, the individual sensors' functions. This is very similar to what our brain does: you do not need to touch a pot of boiling water to know it is very hot, because your eyes see the bubbling water and the steam rising from the pot. The same thing happens in an ADAS system: sensor inputs are fused so that the ADAS domain controller can formulate a conclusive opinion about an event with better situational awareness, rather than relying on any single sensor's data. This notion is at the heart of how any robot operates, but it is especially important for the mission-critical functions of connected autonomous cars.
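One textbook way to fuse two noisy readings of the same quantity, say a camera and a radar both estimating the distance to the car ahead, is inverse-variance weighting: the less noisy sensor gets proportionally more weight, and the fused estimate is more certain than either input alone. This minimal sketch is illustrative only; production ADAS stacks use much richer machinery such as Kalman filters.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion.

    `estimates` is a list of (value, variance) pairs, one per sensor.
    Returns the fused value and its (smaller) variance.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

With a noisy camera reading of 50 m (variance 4.0) and a sharper radar reading of 48 m (variance 1.0), the fused estimate lands near the radar's value, with lower variance than either sensor alone.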

HW & SW Roadmap to Consolidation
As modern CPUs increase in processing power and decrease in electrical power consumption thanks to smaller process geometries, one might conclude that consolidating multiple ECU functions onto one physical processor will yield significant cost savings. While that is true, consolidation needs to be balanced against a few important factors:

  1. The increase in leakage current as semiconductor process geometries get smaller (a downside of scaling under Moore's Law)
  2. Thermal issues that grow as clock speeds increase
  3. The extent to which the software can be multithreaded to take advantage of new multi-core processors
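The third point is essentially Amdahl's law: if part of the workload stays serial, adding cores quickly stops helping. A quick back-of-the-envelope sketch makes the ceiling visible:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup when only `parallel_fraction` of the work
    can be spread across `cores`; the rest runs serially."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)
```

If 90% of the code parallelizes, four cores give only about a 3.1x speedup; and if half the code is serial, even a million cores cannot reach 2x. This is why ECU consolidation is as much a software problem as a silicon one.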

The auto industry will go through a transformation, consolidating ECUs into single powerful multi-core processors, similar to what happened in the early 2000s in the networking industry. At that time I had a front seat to the networking debate, as I was driving some products at a large semiconductor company. Most network and baseband processor semiconductor suppliers for both the wired and wireless infrastructure business moved from single- to dual- to quad-core processors. I remember a day when people were planning to pack as many as 80 cores into a single chip.
There is, however, a huge difference between the software requirements for multi-core processing in the networking and automotive industries.

The elephant in the automotive room is the need to combine mission-critical and non-mission-critical functionality on the same processor, while effectively separating and isolating these functions from each other from a safety and security perspective. This single fundamental requirement becomes the basis for the type of software framework and architecture that needs to be used.

Multi-core Processing
At a very high level, all multi-core processors pack multiple processing units (cores) into a single physical package, just like it sounds. But this is where the similarities end. Other architectural factors come into play and determine the application fit, throughput, bandwidth, effective horsepower, and software architectures suitable for an optimal processing environment. Some of the considerations are noted below:

  • Choice and configuration of interconnect buses and shared memory schemes
  • Choice of homogeneous multi-core systems, with identical cores sharing the same instruction set, vs. heterogeneous multi-core systems with non-identical cores (some sharing an instruction set, and some with different ones)

  • Heterogeneous multi-core systems that mix different types of processor cores for application specific use cases (e.g. mix of MPUs, DSPs, GPUs, etc.).

  • Mix of the above cores with localized memories and predefined high-level functions such as micro-coded engines and vector processors

  • Mix of cores and architectures that allow control and data path processing in a single core for communication applications 
  • Choice of architectural implementations such as VLIW, vector, or multithreaded processors, fine-grained vs. coarse-grained processors, etc.

The performance improvement from multi-core processors can only be realized if the software running on the processor takes advantage of every last cycle the multi-core device can offer. It also assumes that the interconnect buses and interfaces between the cores, between the cores and the world outside the chip, and between the cores and the memory architecture are properly modeled and designed for the end application, so that no design bottlenecks are introduced.

This situation is analogous to adding multiple streets and multiple lanes in and out of a parking lot. If the electronic door into and out of that parking lot is too slow to accommodate the extra traffic, you will cause bad congestion, and throughput in and out of the parking lot will be only as good as the speed of that electronic door. You may need to open the gate altogether, but have a traffic cop coordinate the flow of traffic in and out of different entrances and into different parking spots. That is exactly what you need in the world of software: a traffic cop for the processes running on a given multi-core architecture. That is where a hypervisor comes in, acting as that traffic cop.
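The traffic-cop idea can be sketched in a few lines: a single gate serializes guests onto a shared resource, much as a hypervisor arbitrates guest partitions contending for shared hardware. This is a toy illustration of the arbitration concept only (the class and function names are made up), not how a real hypervisor is built:

```python
import threading

class TrafficCop:
    """Toy arbiter: admits one guest at a time to a shared resource."""
    def __init__(self):
        self._gate = threading.Lock()
        self.log = []  # order in which guests were admitted

    def admit(self, guest: str, work):
        with self._gate:        # only one guest past the gate at a time
            self.log.append(guest)
            return work()

def run_guests(cop, n_guests=4, iterations=100):
    """Several guest threads hammer a shared counter through the arbiter."""
    counter = [0]
    def job():
        counter[0] += 1         # safe: always executed under the gate
    def guest_loop(g):
        for _ in range(iterations):
            cop.admit(f"guest{g}", job)
    threads = [threading.Thread(target=guest_loop, args=(g,)) for g in range(n_guests)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]
```

Because every access passes through the gate, the shared counter ends up exactly at `n_guests * iterations`, with no updates lost to contention.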

QNX offers a hypervisor and other safety- and mission-critical software to make connected autonomous cars safe, reliable, secure, and trusted.

The next blog will address the hypervisor/traffic cop and describe how it makes the software-defined future more autonomous and safe.



_______________________________________________________________________________
Kaivan Karimi is the SVP of Strategy and Business Development at BlackBerry Technology Solutions (BTS). His responsibilities include operationalizing growth strategies, product marketing and business development, eco-system enablement, and execution of business priorities. He has been an IoT evangelist since 2010, bringing more than two decades of experience in cellular, connectivity, networking, sensors, and microcontroller semiconductor markets. Kaivan holds graduate degrees in engineering (MSEE) and business (MBA). Prior to joining BlackBerry, he was the VP and General Manager of Atmel's wireless MCU and IoT business unit.

