Imagine a self-learning, lightweight algorithm of the near future capable of reporting an accident or a traffic jam without a single tap, with the potential to become part of car infotainment systems in the pre-self-driving era.
Last month, at the TU Automotive Detroit 2019 expo, we showcased a live demo of our artificial-intelligence-powered Real View Navigation, which uses the smartphone camera to augment routing information onto the real world. And boy, was it an attention drawer. Now, after just a couple of months of development, we are ready to show it to the world as one of our most promising innovations, still in an early stage of development.
The showcase, built on the new Sygic Automotive SDK, ran live image segmentation on a mobile phone placed in front of a large flat-screen TV playing a pre-recorded real-world route, to show off lane and traffic sign recognition.
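Sygic has not published implementation details of the demo, but the idea of turning a per-pixel segmentation mask into navigation hints can be sketched roughly as follows. The class names, IDs, and pixel threshold below are illustrative assumptions, not the actual SDK API:

```python
# Hypothetical sketch: summarizing a per-pixel segmentation mask into
# detections a navigation overlay could use. Class IDs and the minimum
# pixel threshold are illustrative, not Sygic's actual values.
CLASS_NAMES = {0: "road", 1: "lane_marking", 2: "traffic_sign", 3: "vehicle"}
MIN_PIXELS = 8  # ignore classes covering too few pixels (likely noise)

def summarize_mask(mask):
    """mask: 2D list of class IDs, one per pixel (a model's argmax output)."""
    counts = {}
    for row in mask:
        for class_id in row:
            counts[class_id] = counts.get(class_id, 0) + 1
    return {CLASS_NAMES[c]: n for c, n in counts.items()
            if c in CLASS_NAMES and n >= MIN_PIXELS}

# Toy 10x20 frame: mostly road, a lane-marking strip, a small sign blob.
frame = [[0] * 20 for _ in range(10)]
for r in range(10):
    frame[r][9] = 1                      # vertical lane-marking strip (10 px)
for r in range(2, 5):
    for c in range(14, 17):
        frame[r][c] = 2                  # 3x3 traffic-sign blob (9 px)

print(summarize_mask(frame))
# -> {'road': 181, 'lane_marking': 10, 'traffic_sign': 9}
```

In the demo, a summary like this would feed the overlay renderer, which decides where to draw lane guidance and highlighted signs on top of the camera image.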
Integrating image recognition into a navigation app running on a smartphone is an interesting concept, but it can be quite limiting. If you want to take advantage of Real View Navigation with AI, you need to mount your smartphone on the windshield.
The real prize is the possibility of using data from vehicle cameras while still benefiting from the personalized navigation experience of your Sygic Navigation app, on your phone, connected to your vehicle's infotainment system.
Accessing the built-in cameras in cars
The next big step we are exploring is working with data from the built-in cameras at the front and rear of the vehicle. As a pioneer supporting almost all connectivity standards available on the market, including Apple CarPlay and Smart Device Link (SDL), we are working with connectivity providers to explore what we call deeper vehicle integration. It will allow apps to source data from the vehicle in order to deliver an enhanced user experience.
Currently, only a few connectivity standards allow two-way, app-to-vehicle communication. Among them are Smart Device Link (SDL) and MirrorLink.
With access to a car's built-in cameras, we will have two options:
Fully integrated
The solution is lightweight: small in size and capable of working offline as part of almost any current infotainment system. You just start the car, open the app on the in-dash display, select a route, and see the routing information overlaid on the real world.
Partially integrated
We will use the infotainment system as a thin client for the solution installed on your mobile phone. You simply connect your phone to the in-dash unit via cable or Wi-Fi; the built-in cameras provide the picture, and the smartphone app renders the routing information directly on your in-dash display.
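The thin-client flow described above can be sketched as a simple message exchange: the in-dash unit streams camera frames to the phone, and the phone replies with overlay instructions to draw. The message types and field names below are our own illustrative assumptions, not part of the SDL or MirrorLink specifications:

```python
from dataclasses import dataclass

# Hypothetical message types for the thin-client flow. The in-dash unit
# sends CameraFrame messages; the phone answers with OverlayInstruction
# messages telling the in-dash display what to draw. Fields are illustrative.
@dataclass
class CameraFrame:
    camera: str          # "front" or "rear"
    timestamp_ms: int
    pixels: bytes        # encoded image data from the built-in camera

@dataclass
class OverlayInstruction:
    kind: str            # e.g. "lane_arrow", "sign_highlight"
    x: int               # position on the in-dash screen
    y: int
    label: str

def handle_frame(frame: CameraFrame) -> list:
    """Runs on the phone: interpret the frame and return what to draw.
    A real implementation would run the segmentation model here."""
    if frame.camera != "front":
        return []  # routing overlays only make sense for the front camera
    # Placeholder result standing in for actual model output:
    return [OverlayInstruction("lane_arrow", 320, 400, "keep right")]

out = handle_frame(CameraFrame("front", 0, b""))
print(out[0].kind, out[0].label)  # -> lane_arrow keep right
```

The design point is that the heavy computation stays on the phone, which is replaced every few years, while the in-dash unit only has to decode frames and draw overlays.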
No more tapping while driving: your navigation just became intelligent
If you are a Waze user, you have probably come across an accident on the road that you wanted to report. You tap the report button, then face 11 icons and, often at high speed, have to pick the right one. In the near future, we expect the machine to do it for you, so there is a smaller chance that you become part of an accident by reporting one.
We believe that, in time, Real View Navigation with AI will be capable of recognizing not only signs near the road but also accidents and stopped cars, and will help you decide what action to take in hazardous situations. All of this will be reported automatically, without you taking unnecessary risks, giving other drivers heading the same way valuable information about what to expect on the road ahead.
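Tap-free reporting boils down to mapping detector output to a report type only when the model is confident enough. A minimal sketch of such a rule, with made-up incident categories and a made-up threshold:

```python
# Hypothetical auto-reporting rule: fire a report only when the detector's
# confidence clears a threshold, so the driver never has to pick an icon.
# The categories and the 0.85 threshold are illustrative assumptions.
REPORTABLE = {"accident", "stopped_car", "traffic_jam"}
CONFIDENCE_THRESHOLD = 0.85

def auto_report(detections):
    """detections: list of (label, confidence) pairs from the vision model.
    Returns the labels to report automatically, best-confidence first."""
    hits = [(label, conf) for label, conf in detections
            if label in REPORTABLE and conf >= CONFIDENCE_THRESHOLD]
    hits.sort(key=lambda item: item[1], reverse=True)
    return [label for label, _ in hits]

# A low-confidence accident is ignored; a confident stopped car is reported;
# an ordinary traffic sign is never reported, however confident.
print(auto_report([("accident", 0.60), ("stopped_car", 0.92),
                   ("traffic_sign", 0.99)]))  # -> ['stopped_car']
```

The threshold is the safety valve here: a false report misleads other drivers, so it would be tuned conservatively in practice.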
Self-driving is the future, so why would I need something like this?
Although most automakers have committed to bringing self-driving cars to market in the 2020s, we think that until at least 2040 there will still be more cars controlled by humans than by robots. For obvious reasons: there are still too many technological, ethical, and legal obstacles to solve before cars go full "Skynet". At the same time, we believe that our Real View Navigation with AI, if widely used, can become a great source of data for the self-driving future.