Overview

Automobile manufacturers are working to make cars ever safer. Monitoring a driver's actions with computer vision techniques to detect driving mistakes in real time, and then planning autonomous maneuvers to avoid collisions, is one of the central problems studied in machine vision and Intelligent Transportation Systems (ITS). The main goal of this study is to prevent accidents caused by fatigue, drowsiness, and driver distraction. To that end, this paper proposes an integrated safety system that continuously monitors the driver's attention and the vehicle's surroundings, and decides whether the current steering control status is safe. For this purpose, we equipped an ordinary car, called FARAZ, with a vision system consisting of four mounted cameras, along with a universal car tool for communicating with the surrounding factory-installed sensors and other car systems and for sending commands to actuators. The proposed system combines a scene understanding pipeline, built around a deep convolutional encoder-decoder network, with a driver state detection pipeline. We have also been identifying and assessing domestic capabilities for developing these technologies on ordinary vehicles, with the aim of manufacturing smart cars and providing an intelligent system that increases safety and assists the driver in various conditions and situations.
1 Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
2 Charles University in Prague, Czech Republic
3 Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
* Corresponding author
  Email: ansari at iasbs dot ac dot ir
An autonomous car, also known as a self-driving car, is a vehicle that has the characteristics of a traditional car and, in addition, is capable of driving itself without human intervention. A driverless car system drives the vehicle by perceiving the environment and, based on that dynamic perception, steering and navigating the car to safety.

The research done on the way to self-driving cars has, in practice, produced today's driver assistance systems. Conversely, a complete vehicle control system is meaningless without examining the various driver assistance systems, as well as the use of intelligent highways.

Our central goal in this work is to create a semi-autonomous car by integrating state-of-the-art approaches from computer vision and machine learning to assist the driver during critical and risky moments in which he or she would be unable to steer the vehicle safely.
Our Team. Left to right: Hadi Abdi Khojasteh, Ebrahim Ansari, Parvin Razzaghi and Alireza Abbas Alipour

Code and Extras

Find additional resources on GitHub, including:
  • Test code (uses C++/OpenCV)
  • Live demo code
  • Hardware communication resources
  • Project-related resources
Follow project updates on ResearchGate and check out the arXiv paper.

Bibtex

@article{khojasteh2018safetysystem,
  author        = {{Abdi Khojasteh}, Hadi and {Abbas Alipour}, Alireza and Ansari, Ebrahim and Razzaghi, Parvin},
  title         = {An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles},
  journal       = {CoRR},
  volume        = {abs/1812.03953},
  year          = {2018},
  url           = {https://arxiv.org/abs/1812.03953},
  archivePrefix = {arXiv},
  eprint        = {1812.03953}
}

The Instrumented Vehicle

The instrumented vehicle and drone carry a vision system consisting of four mounted cameras and a drone camera, along with a universal car tool for communicating with the vehicle and sending commands to it.
The front and rear wide-angle HD cameras are mounted close to the center of the windshield and the rear window, respectively. The driver-facing camera is mounted at the center of the driver's roadway view, and the car cabin camera is mounted at the center of the headliner so that its view includes the driver's body.
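
As a rough illustration of how such a rig can be polled, the following minimal sketch opens four camera streams with OpenCV (the project's test code uses C++/OpenCV). The device indices, window names, and single-threaded preview loop are illustrative assumptions, not the actual FARAZ capture code.

    // Minimal sketch: poll four in-vehicle cameras with OpenCV.
    // Device indices 0-3 are hypothetical; the real wiring may differ.
    #include <opencv2/opencv.hpp>
    #include <string>
    #include <vector>

    int main() {
        const std::vector<std::string> names = {
            "front", "rear", "driver-face", "cabin"};
        std::vector<cv::VideoCapture> cams;
        for (int i = 0; i < 4; ++i) {
            cv::VideoCapture cap(i);        // hypothetical device index
            if (!cap.isOpened()) return 1;  // camera missing or busy
            cams.push_back(cap);
        }
        cv::Mat frame;
        while (true) {
            for (size_t i = 0; i < cams.size(); ++i) {
                if (cams[i].read(frame))
                    cv::imshow(names[i], frame);
            }
            if (cv::waitKey(1) == 27) break;  // Esc exits the preview loop
        }
        return 0;
    }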

Driving Scene Perception / Driver State Detection

The figure shows the overall scene understanding pipeline, together with the architecture of the convolutional encoder-decoder network used for scene segmentation and lane detection. The pipeline consists, in order, of geometric transformation, the encoder-decoder network, free-space detection, perspective transform, masking, filtering, edge detection, lane assignment, and tracking.
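
The segmentation network itself is beyond the scope of this page, but the classical stages that follow it can be sketched compactly. The snippet below is a minimal illustration of the perspective transform, masking, filtering, and edge detection steps, assuming the network has already produced an 8-bit free-space mask; the warp corner points and thresholds are placeholder values, and Hough line extraction stands in for whatever lane assignment and tracking the full system uses.

    // Minimal sketch of the post-network lane steps: bird's-eye warp,
    // masking to the drivable region, smoothing, edge detection, and
    // line extraction. Corner points and thresholds are illustrative.
    #include <opencv2/opencv.hpp>
    #include <vector>

    std::vector<cv::Vec4i> detectLanes(const cv::Mat& frame,
                                       const cv::Mat& roadMask) {  // CV_8U mask
        // 1. Warp to a bird's-eye view so lanes appear roughly parallel.
        std::vector<cv::Point2f> src = {{560,450},{720,450},{1180,700},{100,700}};
        std::vector<cv::Point2f> dst = {{200,0},{1080,0},{1080,720},{200,720}};
        cv::Mat H = cv::getPerspectiveTransform(src, dst);
        cv::Mat warped, warpedMask;
        cv::warpPerspective(frame, warped, H, frame.size());
        cv::warpPerspective(roadMask, warpedMask, H, roadMask.size());

        // 2. Keep only the free space reported by the network.
        cv::Mat masked;
        warped.copyTo(masked, warpedMask);

        // 3. Smooth, then extract edges.
        cv::Mat gray, edges;
        cv::cvtColor(masked, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5,5), 0);
        cv::Canny(gray, edges, 50, 150);

        // 4. Fit candidate lane segments; downstream code would assign
        //    them to left/right lanes and track them over time.
        std::vector<cv::Vec4i> lines;
        cv::HoughLinesP(edges, lines, 1, CV_PI/180, 50, 40, 100);
        return lines;
    }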
Driver gaze, head pose, drowsiness, and distraction detection are implemented, together with a real-time model that estimates the driver's body-foot keypoints from the car cabin camera's RGB output. The keypoints are rendered as a human skeleton, with colored lines connecting the head, wrists, elbows, and shoulders.
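
For illustration, rendering such a skeleton overlay is straightforward in OpenCV. The sketch below assumes a pose estimator has already produced named 2D keypoints; the joint names and the listed bone pairs are an illustrative subset, not the model's actual output format.

    // Minimal sketch: draw cabin-camera body keypoints as a colored
    // skeleton (green bones, red joints). Pose estimation itself is
    // assumed to have filled `pts` with detected joints.
    #include <opencv2/opencv.hpp>
    #include <map>
    #include <string>
    #include <vector>

    void drawSkeleton(cv::Mat& frame,
                      const std::map<std::string, cv::Point>& pts) {
        const std::vector<std::pair<std::string, std::string>> bones = {
            {"head", "l_shoulder"}, {"head", "r_shoulder"},
            {"l_shoulder", "l_elbow"}, {"l_elbow", "l_wrist"},
            {"r_shoulder", "r_elbow"}, {"r_elbow", "r_wrist"}};
        for (const auto& b : bones) {
            auto a = pts.find(b.first), c = pts.find(b.second);
            if (a == pts.end() || c == pts.end()) continue;  // joint missing
            cv::line(frame, a->second, c->second, cv::Scalar(0, 255, 0), 3);
        }
        for (const auto& p : pts)
            cv::circle(frame, p.second, 5, cv::Scalar(0, 0, 255), cv::FILLED);
    }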

In-vehicle Communication Device

The figure shows the top, bottom, and left views of the Universal Vehicle Diagnostic Tool (known as UDIAG), which connects to the vehicle's diagnostic port and establishes communication with the in-vehicle network. The vehicle network interface, power supply, processing unit, data storage, wireless adapter, and Micro USB socket are labeled in the figure.
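
This page does not document UDIAG's wire protocol, so the sketch below is a hedged illustration only: it shows how a host might query vehicle speed through a generic ELM327-style serial adapter on the standard OBD-II diagnostic port. The device path, baud rate, and request framing are assumptions about that generic adapter, not UDIAG specifics.

    // Minimal sketch: request OBD-II mode 01, PID 0D (vehicle speed)
    // over a serial diagnostic adapter. A single read() is a
    // simplification; a real client would parse a framed response.
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);  // hypothetical port
        if (fd < 0) { perror("open"); return 1; }

        termios tty{};
        tcgetattr(fd, &tty);
        cfmakeraw(&tty);                    // raw 8-bit, no echo/translation
        cfsetispeed(&tty, B38400);          // common ELM327 baud rate
        cfsetospeed(&tty, B38400);
        tty.c_cflag |= (CLOCAL | CREAD);
        tcsetattr(fd, TCSANOW, &tty);

        const char* req = "010D\r";         // mode 01, PID 0D: vehicle speed
        write(fd, req, strlen(req));

        char buf[64] = {0};
        ssize_t n = read(fd, buf, sizeof(buf) - 1);  // e.g. "41 0D 3C" = 0x3C km/h
        if (n > 0) printf("raw response: %s\n", buf);

        close(fd);
        return 0;
    }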