On the Cusp of Self-Driving Vehicles

Ziqi is a Sr. Systems Engineer at Faraday Future, developing Advanced Driver-Assistance Systems (ADAS) and self-driving features for its first production car, the FF91. He likes to explore and experience new things, like surfing near his new home in Los Angeles.

The author posing with future self-driving vehicles

2017 is off to a busy start in the automotive industry.

  • Tesla launched Autopilot 2.0 and claimed it would complete a self-driving trip from California to New York by the end of the year.
  • Faraday Future revealed the FF91, which will hit the market in 2018, with a live self-driving demo.
  • The NIO EP9 set an autonomous race record at Circuit of the Americas with a top speed of 160 mph.
  • Apple is back in self-driving development and has obtained a test permit in California.
  • Baidu announced the Apollo project, opening up its autonomous driving platform.
  • Intel acquired Mobileye for $15B, Daimler partnered with Bosch on autonomous technology, Ford invested $1B in Argo AI, and Uber’s self-driving cars were unfortunately involved in accidents.
An example of one of Tesla’s recent innovations in autonomous vehicles

Clearly, many growing startups as well as some big players are digging into autonomous driving technology, but as far as we have come, there are still many concerns. I’d like to address how a self-driving vehicle works — knowing where it is, what’s around it, and how it should drive itself.

To inform a vehicle of its exact location, map companies like HERE and TomTom have already started building high-definition maps that include not only road type, location, and lane information, but also road altitude, curvature, and width, and even real-time map data and LiDAR-scanned data. Through a differential global positioning system (DGPS), they aim to achieve location accuracy down to the centimeter. With additional camera-based computer vision for traffic signs and lane markers, a vehicle will know which lane it is in and make precise turns for door-to-door navigation in the near future.
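To make the idea concrete, here is a minimal sketch of how a centimeter-grade DGPS fix might be snapped to a lane in an HD map. The lane data and the brute-force nearest-point search are illustrative assumptions, not how HERE or TomTom actually structure their maps; production systems match against continuous centerline geometry, not sample points.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS-84 points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_lane(position, lane_centerlines):
    """Snap a DGPS fix to the nearest sampled lane-centerline point.

    lane_centerlines: {lane_id: [(lat, lon), ...]} -- hypothetical HD-map data.
    Returns (lane_id, distance_in_meters) for the closest point found.
    """
    lat, lon = position
    best = None
    for lane_id, points in lane_centerlines.items():
        for plat, plon in points:
            d = haversine_m(lat, lon, plat, plon)
            if best is None or d < best[1]:
                best = (lane_id, d)
    return best

# Two parallel lanes roughly 3.5 m apart (longitude offset at this latitude).
lanes = {
    "lane_1": [(37.7749, -122.41940), (37.7750, -122.41940)],
    "lane_2": [(37.7749, -122.41944), (37.7750, -122.41944)],
}
lane_id, dist = match_lane((37.77495, -122.41941), lanes)  # fix sits nearer lane_1
```

The key point is that lane-level matching only works because both the map and the DGPS fix are accurate to well under a lane width; with plain GPS error of several meters, the two lanes above would be indistinguishable.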

Sensor technology lets a vehicle know what is around it, and there are four typical sensors used:

1) Camera, which uses computer vision to identify road markers and traffic signs as well as categorize objects

2) Radar, which uses radio waves reflected off surrounding objects, particularly those made of metal

3) Ultrasonic sensor, which uses sound waves to detect an object’s reflection

4) LiDAR, a new technology that uses active laser reflection to get high-accuracy 3D scans
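No single one of these sensors is trusted alone; their readings are typically fused, weighting each sensor by how noisy it is. The sketch below shows one classic fusion rule, inverse-variance weighting, applied to range estimates of the same object. The noise figures are made up for illustration (LiDAR tight, radar looser, camera loosest) and do not come from any particular sensor datasheet; real systems use full Kalman-filter-style fusion over time, not a single snapshot.

```python
def fuse_ranges(measurements):
    """Inverse-variance weighted fusion of independent range estimates.

    measurements: list of (range_m, std_dev_m) pairs, one per sensor.
    Returns (fused_range_m, fused_std_dev_m). Noisier sensors get less weight.
    """
    weights = [1.0 / (s ** 2) for _, s in measurements]
    total = sum(weights)
    fused = sum(w * r for w, (r, _) in zip(weights, measurements)) / total
    return fused, (1.0 / total) ** 0.5

# Illustrative (made-up) readings for one object: (range in m, std dev in m).
readings = [
    (20.1, 0.05),  # LiDAR: high accuracy
    (19.6, 0.50),  # radar: moderate accuracy
    (21.0, 1.00),  # camera depth estimate: lowest accuracy
]
dist, sigma = fuse_ranges(readings)  # fused estimate lands near the LiDAR value
```

Note that the fused uncertainty is smaller than any single sensor’s, which is the statistical argument for carrying redundant sensor types in the first place.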

Companies like Tesla, Nvidia, and Mobileye propose camera-based solutions. Tesla’s Autopilot 2.0 is so far the most popular camera-based solution, as it supports semi-autonomous driving; the company claims it will achieve Level 4 fully autonomous driving with the same hardware by the end of 2017. Other companies still think LiDAR is necessary to achieve full autonomy. This is because LiDAR’s high-accuracy, high-resolution data allows better detection and categorization of the surroundings under most weather conditions, which may be what covers the last few percentage points of autonomous-driving failure cases.

Startup Faraday Future has already revealed that its FF91 will have a LiDAR system. Audi also announced that the next-generation A8 will have LiDAR on board to enable Level 3 autonomous driving. Ford and Baidu invested $150M in Velodyne, hoping to improve LiDAR performance, meet automotive standards, and cut costs down for customer affordability.

The first LiDAR units on Google’s self-driving vehicles were priced around $85,000, but the company is moving toward a production unit that is smaller, lower-powered, and cheaper. Other startups such as Quanergy and Luminar, and even some Chinese startups like Hesai and RoboSense, are developing solid-state LiDAR to compete with Velodyne. These companies claim to be cheaper than Velodyne and project that production costs could fall as low as a few hundred dollars.

Lastly, there are different approaches to instructing the vehicle on how to drive. Some companies would like to pre-program each anticipated situation with a preset vehicle behavior. It is extremely difficult to cover every possible case in a real-world driving environment, and any missed case means full autonomy is not achieved. Other companies would like to use machine learning or deep learning with neural networks, letting vehicles learn from human behavior in order to drive like a human. However, there is much uncertainty and risk when the rules a vehicle has learned conflict with human judgment, which may result in unexpected accidents.
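The pre-programmed approach can be sketched as a hand-written behavior table: one rule per situation the engineers anticipated. All of the thresholds and action names below are illustrative assumptions, not values from any production system, and the gap the sketch can’t express is exactly the article’s point: every situation outside these rules goes unhandled, whereas a learned policy generalizes but can conflict with human judgment.

```python
def rule_based_policy(lead_gap_m, lead_speed_mps, ego_speed_mps):
    """Toy pre-programmed car-following policy: one rule per anticipated case.

    Thresholds (10 m, 2 s, 3 s) are illustrative, not from a real system.
    Returns one of: "brake_hard", "brake", "accelerate", "hold_speed".
    """
    # Time gap: how many seconds until we reach the lead car's current position.
    time_gap_s = lead_gap_m / max(ego_speed_mps, 0.1)
    if lead_gap_m < 10.0:
        return "brake_hard"          # dangerously close, regardless of speeds
    if time_gap_s < 2.0 and lead_speed_mps < ego_speed_mps:
        return "brake"               # closing in on a slower lead car
    if time_gap_s > 3.0:
        return "accelerate"          # plenty of room ahead
    return "hold_speed"              # comfortable following distance

# 25 m gap at 20 m/s gives a 1.25 s time gap behind a slower car -> brake.
action = rule_based_policy(lead_gap_m=25.0, lead_speed_mps=15.0, ego_speed_mps=20.0)
```

A learning-based system replaces this explicit table with a model trained on human driving data, which removes the need to enumerate cases but makes it harder to predict what the car will do in a situation its training never covered.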

With all these approaches to letting cars drive themselves, no matter which way companies choose to go, there are ethical problems that first need to be addressed. Should the vehicle attempt to save the pedestrian or the occupant in a dangerous situation? Should it put itself in harm’s way to avoid killing people?

These questions also apply to human drivers, but we never know the answer until we face them. Whether it is Volvo claiming to eliminate fatal accidents in its cars by 2020 or Daimler claiming its vehicles will save the occupant first, there may not always be a clear answer to every scenario.

Self-driving cars will also need to pass current regulations, and push for the further development of tests and regulations in every state, to finally get on the road. From California’s automated vehicle test permit to Michigan’s autonomous driving test facility, Mcity, these efforts are pioneering the history of autonomous vehicles. There are still many kinks to iron out and learn from, and only the future will tell if we ever get to that point. Just as Marty dreamt about the days of 2015, and how everything could be affected by a simple mistake from the past, in “Back to the Future,” no one wants to make mistakes that will risk human lives. The technology revolution has to get involved in these big discussions if it wants to succeed. Now more than ever, it’s time to move carefully and think through our options thoroughly as we look to the future.




Master of Engineering at UC Berkeley with a focus on leadership. Learn more about the program through our publication.
