ECE5725 Project

Intelligent Dual-camera Dashboard Camera
By Naiqin Zhou & Chun Chen


Demonstration Video

Watch video here

Introduction

The project is a Raspberry Pi-based dual-camera system with an embedded 3-axis accelerometer. The front camera is a webcam that performs eye tracking and detection: the openness of the driver's eyes is continuously monitored to determine drowsiness. The rear camera is a high-resolution PiCamera module that supports up to 4K video at 30 fps and performs lane detection. Detected lanes are highlighted so that the driver can clearly see the vehicle's position between them. The last component, the 3-axis accelerometer, provides acceleration data used to determine the current driving status and to support future behavior analysis.

Throughout the project, LEDs are used as part of the warning system. The driver sees a red LED, along with a text warning on the piTFT screen, when drifting left or right. A green LED turns on when eye openness drops below the preset threshold. All threshold values have been tuned through road tests to be sensitive to minor changes.

The vehicle recording system automatically records acceleration data, speed, and the corresponding timestamp into the driver's log file. Another file, the behavior log, records risky behaviors such as sudden acceleration or hard braking. Since a major change in acceleration may be caused by an accident, our intelligent dashcam also identifies such patterns and saves a picture for further analysis, or as evidence if needed. Video recording could later be implemented with a cloud-based system to provide more storage space.
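As a minimal sketch of the logging format (the file name matches our code appendix; the helper function itself is illustrative):

import time

def append_drive_log(ax, ay, az, speed, path="drivelog.dat"):
    """Append one timestamped row of acceleration (m/s^2) and speed (m/s)."""
    stamp = time.strftime('%Y/%m/%d %H:%M:%S', time.localtime())
    row = "  ,  ".join([stamp, "%.3f" % ax, "%.3f" % ay, "%.3f" % az, str(speed)])
    with open(path, "a") as f:
        f.write(row + '\n')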


Project Objective

Fatigue has a huge impact on driving ability and was the main cause of more than 30% of traffic accidents in 2016. Combating fatigued driving is an urgent problem, especially in lower-end vehicles where few safety features are included. Our solution is an affordable, intelligent dual-lens dashboard camera that features eye/gaze tracking, lane departure warning, vehicle parameter recording, and driving behavior analysis, all in real time.


Design and Testing

The rear camera is a PiCamera purchased from Amazon. We chose it because it features an adjustable focal length and can record high-resolution images and video in real time. The PiCamera works seamlessly with the Raspberry Pi and has clear advantages over other camera modules: negligible latency, a higher frame-processing rate, and low energy consumption and CPU usage. It is ideal for lane detection.
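The capture setup below mirrors the configuration used in our code appendix (320x240 at 32 fps, streamed through PiRGBArray):

from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (320, 240)  # small frames keep per-frame processing fast
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))

# Each iteration yields one BGR frame, ready for OpenCV processing
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    img = frame.array
    # ... process img here ...
    rawCapture.truncate(0)      # clear the stream for the next frame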

The image processing step consists of pre-filtering and lane drawing. Our algorithm involves several computer vision techniques, such as the Hough transform. The image below shows a frame after Gaussian blurring and grayscale conversion; a sketch of the pre-filtering code follows the figure.

Generic placeholder image

Gaussian Blurring with grayscale conversion
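A minimal sketch of this pre-filtering stage, using the same kernel size and Canny thresholds as the appendix code:

import cv2

def prefilter(img, kernel_size=17):
    """Grayscale, blur, and edge-detect one BGR frame."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Gaussian blur suppresses sensor noise before edge detection
    blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 3)
    # Canny thresholds (50, 80) were tuned on our pre-recorded road videos
    return cv2.Canny(blur_gray, 50, 80)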

The image below is the RGB color picture after the lines are drawn. The drawing is updated on each frame at a rate of 32 frames per second, and the actual effect works smoothly under real-life conditions. The algorithm draws lines between the endpoints returned by the Hough transform and removes any line outside a certain angle range; the idea is to discard near-horizontal lines and keep the near-vertical lane lines. Based on our tests on multiple pre-recorded videos, lines at angles below 28 degrees from horizontal are removed. A sketch of this filtering step follows the figure.

To test the performance, we used five different challenging videos with dashed lines on the left or right side; one even more challenging video contains shadowed regions. The performance was impressive, and the algorithm tracked the lanes quite accurately.

Generic placeholder image

Conversion to RGB color with lines drawing
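A minimal sketch of the filtering-and-drawing step, using the 28-degree threshold and the Hough parameters from our appendix code:

import cv2
import numpy as np

def draw_lane_lines(img, edges, angle_thresh=28):
    """Draw Hough lines steeper than angle_thresh degrees onto img."""
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                            minLineLength=20, maxLineGap=5)
    if lines is None:           # no lines detected in this frame
        return img
    for line in lines:
        for x1, y1, x2, y2 in line:
            angle = np.arctan2(y2 - y1, x2 - x1) * 180 / np.pi
            # Keep near-vertical lane lines; drop horizontal clutter
            if abs(angle) >= angle_thresh:
                cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return img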

The front camera is a webcam module used for eye tracking. The eye tracking algorithm performs a pre-filtering procedure similar to lane detection. It extracts eye information, in this case the eye length (openness), and compares it with an ideal value stored in a separate txt/dat/csv file. When the length drops below a threshold, drowsiness is detected; when the eye opens again, the length rises above the threshold and the status changes back.
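The sketch below illustrates the thresholding idea only; it uses OpenCV's stock Haar eye cascade as a stand-in for our feature extraction, and the pixel threshold is a hypothetical value, not our tuned parameter:

import cv2

# Stand-in detector: OpenCV ships this cascade with its Python package
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_eye.xml')

def eyes_look_closed(gray_frame, open_height=18):
    """Return True when no sufficiently open eye is found in the frame."""
    eyes = eye_cascade.detectMultiScale(gray_frame, 1.3, 5)
    for (ex, ey, ew, eh) in eyes:
        if eh >= open_height:   # hypothetical openness threshold (pixels)
            return False
    return True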

One issue here is latency in capturing real-time images. The webcam is connected to the Raspberry Pi through a USB port; we chose a webcam because the Raspberry Pi cannot support two PiCameras at the same time. Compared with the PiCamera, the webcam's disadvantages are obvious: higher latency, weaker image processing throughput, lower resolution, and higher CPU usage. Our tests show the webcam produces a six-to-eight-second delay in displaying images, and the display is not smooth because it reads only 8 frames per second. When the webcam is enabled, CPU usage typically jumps to somewhere between 45% and 55%, while the PiCamera alone consumes less than 20%. Since the webcam was the only option available to us, we investigated further and found that other Raspberry Pi users have the same issue and there is no software workaround for the webcam's limited throughput. We believe a faster CPU or a more powerful processor would solve the issue and enable real-time tracking.

The test was conducted by monitoring the LED and text warnings. We adjusted our eye parameters to optimize performance. Based on our tests, the algorithm works well on the Raspberry Pi, except for the latency of about six seconds; the functionality itself works as expected.

Generic placeholder image

Webcam module

Generic placeholder image

3-axis accelerometer

The I2C accelerometer is attached to the GPIO pins and collects acceleration data, which is central to driving-status monitoring: the left, right, accelerating, and slowing states are all determined from the accelerometer. Considering that different cars have suspension systems with different stiffness, we set the threshold values for maximum sensitivity so that even minor changes are reflected and recorded. The vehicle speed is calculated by integrating the accelerometer data. At first we had an issue computing the correct speed because we ignored deceleration and did not reset the speed to zero when the vehicle stopped; this was easily fixed, and the speed data is now accurate. Driver behavior analysis is also based on the accelerometer. When a change of state is detected, for example from slowing down to a sudden speedup, a bad behavior is recorded and counted. As the drive continues, the system accumulates the number of bad driving behaviors and displays it at the bottom of the screen.
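A minimal sketch of one step of the speed integration, following the logic in the appendix (the 0.27 s term is our measured per-loop latency; the adxl345 driver and its getAxes() call are the ones we used):

import time
from adxl345 import ADXL345

LOOP_DELAY = 0.27        # measured per-loop processing latency (s)
adxl345 = ADXL345()
v = 0.0                  # estimated speed (m/s)

t1 = time.time()
axes = adxl345.getAxes(True)     # readings in g
a = -axes['z'] * 9.8             # longitudinal acceleration (m/s^2);
                                 # the sign and axis follow our mounting
te = time.time() - t1
# The vehicle keeps moving during the loop delay, so include it
v += a * (te + LOOP_DELAY)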


System Integration

The pictures below show the front and rear views of our system. The external battery is attached to the bottom of the system. The webcam and PiCamera are both connected to the Raspberry Pi, the webcam via a USB port. The 3-axis accelerometer is embedded in the protective case.

Generic placeholder image

Front view

Generic placeholder image

Rear view


Result

We finished the project as planned, and its usability and functionality work as expected; we also fixed a minor issue with the vehicle speed display. The system was tested with five different videos, three of which are quite challenging, containing dashed lines and shadowed regions. Our algorithm was able to track visible lanes, whether solid or dashed, correctly and precisely. All warning functions are sensitive after our adjustments: the LED and text warnings work as expected for each driving state, and the driver behavior count shown at the bottom of the screen updates correctly whenever a change of driving state is detected. The webcam records eye movement and displays the status on the screen. Our team members completed equal portions of the work, and the system integration was successful.


Conclusion

Our project aims to provide an affordable solution that enhances the safety features of lower-end vehicles. The product we developed is an innovative dual-lens dashboard camera that provides real-time data analysis and extends to applications in different circumstances. The combination of front and rear cameras gives users the best possible protection and can effectively prevent accidents. Through this project, we were able to solve most of the technical issues. However, one thing that remains unsolved is running the script automatically when the device reboots; there may be an issue with our edits to the .bashrc file, but we haven't figured out the cause. Overall, the completeness and functionality of this project met our expectations, and we enjoyed the learning and development experience along the way.
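One likely fix, which we have not yet verified on our system: ~/.bashrc only runs for interactive shells, so launching from /etc/rc.local (or a systemd unit) is usually more reliable for boot-time startup. The script path below is illustrative:

# In /etc/rc.local, before the final 'exit 0':
python /home/pi/LaneDetection.py &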


Future work

Our future work will add data streaming so that users can view images and videos on their mobile devices. We also plan to add a GPS module for more accurate location data. As for eye detection, we believe there is room for improvement: from a computer vision standpoint, pupil locating would be a more robust eye-tracking method. Our current algorithm identifies the eye by measuring the similarity between extracted features and pre-written eye parameters; with pupil locating, the eye center would be tracked within each frame and feature extraction would be performed more efficiently, enabling true real-time eye tracking. Lane detection could also be extended by calculating curvature and lateral shift percentage, and by warning the user through color changes and arrow guidance. This would provide the better user experience that a successful product demands.


Work Distribution

Generic placeholder image

Naiqin Zhou

nz248@cornell.edu

Designed the overall software architecture, built the prototype, and wrote the initial program.

Generic placeholder image

Chun Chen

cc2632@cornell.edu

Improved the algorithm and conducted the overall software and hardware system testing.


Parts List

Total: $81.18


References

Face and Eye detection using OpenCV with raspberry pi
Advanced lane detection
ADXL345 Triple Axis Accelerometer
Picamera
OpenCV Documentation

Code Appendix

Download code as a zip

LaneDetection.py

# Camera and sensor interfaces
from picamera.array import PiRGBArray
from picamera import PiCamera
from adxl345 import ADXL345
import RPi.GPIO as GPIO

# Image processing and utilities
import numpy as np
import cv2
import os
import time

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.OUT)   # LED for lane-drift warnings
GPIO.setup(26, GPIO.OUT)   # second warning LED

# Route SDL output to the piTFT screen and touchscreen
os.putenv('SDL_VIDEODRIVER', 'fbcon')
os.putenv('SDL_FBDEV', '/dev/fb1')
os.putenv('SDL_MOUSEDRV', 'TSLIB')
os.putenv('SDL_MOUSEDEV', '/dev/input/touchscreen')


np.seterr(divide='ignore')

# Rear PiCamera: 320x240 @ 32 fps, rotated to match the mounting orientation
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
camera.rotation = -90
rawCapture = PiRGBArray(camera, size=(320, 240))


def grayscale(img):
    """Return a single-channel grayscale version of img.
    (Use plt.imshow(gray, cmap='gray') to view it.)
    Frames arrive as BGR; RGB2GRAY still yields a usable gray image,
    only the channel weights are swapped."""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)


def canny(img, low_threshold, high_threshold):
    """Apply Canny edge detection with the given thresholds."""
    return cv2.Canny(img, low_threshold, high_threshold)


def gaussian_blur(img, kernel_size):
    """Apply a Gaussian noise kernel."""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)


# ---------------- State variables ----------------
v = 0             # estimated vehicle speed (m/s), integrated from acceleration
t0 = time.time()  # start time of this run
r0 = 0            # acceleration magnitude from the previous frame

# Bad-behavior counters and previous-state flags; the flags let each
# event be counted once rather than once per frame
counter0 = -1     # sudden speedups
counter1 = 0      # hard brakes
counter2 = 0      # right drifts
counter3 = 0      # left drifts
ps0 = 1
ps1 = 1
ps2 = 0
ps3 = 0

# Single accelerometer instance for the whole run
adxl345 = ADXL345()

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    img = frame.array

    # -------- Pre-filtering: grayscale conversion + Gaussian blur --------
    gray = grayscale(img)
    kernel_size = 17
    blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 3)

    # -------- Canny edge detection + probabilistic Hough transform --------
    edges = cv2.Canny(blur_gray, 50, 80, apertureSize=3)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                            minLineLength=20, maxLineGap=5)
    # ------ Read the accelerometer once per frame (axes are remapped ------
    # ------ below to match the sensor's mounting orientation) ------
    t1 = time.time()
    axes = adxl345.getAxes(True)                      # readings in g
    r2 = (axes['y'] ** 2 + axes['z'] ** 2) ** 0.5     # in-plane magnitude
    dr = abs(r0 - r2)      # change in magnitude since the previous frame
    r0 = r2

    # ------ Accident detection: a large jump in acceleration magnitude ------
    if dr > 0.2:
        if dr > 3:
            # Very large jump: warn the driver and save a photo as evidence
            cv2.putText(img, "Life Threatening Accident: Recording...",
                        (50, 70), cv2.FONT_HERSHEY_PLAIN, 0.9, (255, 0, 255))
            cv2.putText(img, "Are you Okay?  Press any key to confirm...",
                        (50, 85), cv2.FONT_HERSHEY_PLAIN, 0.9, (255, 0, 255))
            cv2.imwrite('dangerous.jpg', img)
        else:
            cv2.putText(img, "Accident/Suspicious Activity: Recording...",
                        (30, 150), cv2.FONT_HERSHEY_PLAIN, 0.7, (255, 0, 255))
            cv2.imwrite('suspiciousactivity.jpg', img)

    # Display strings: x = lateral, y = longitudinal, z = vertical (m/s^2)
    x = " %.3f" % (axes['y'] * 9.8)
    y = " %.3f" % (axes['z'] * 9.8)
    z = " %.3f" % (axes['x'] * 9.8)

    
    # ---------------- Longitudinal: accelerating ----------------
    if float(y) <= 0:
        r = -float(y)   # forward acceleration, m/s^2
        ps1 = 0

        if float(y) <= -0.025 * 9.8:
            if float(y) >= -0.12 * 9.8:
                # Moderate acceleration: integrate speed over this frame.
                # 0.27 s is the measured per-loop latency; the vehicle keeps
                # moving during that time, so it is included.
                t2 = time.time()
                te = abs(t2 - t1)
                v = v + r * (te + 0.27)
                cv2.putText(img, "Accelerating", (10, 180),
                            cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
                ps0 = 0

            if float(y) < -0.12 * 9.8:
                # Sudden speedup: count it once per event and save a photo
                if ps0 != 1:
                    counter0 += 1
                cv2.putText(img, "Risk detected [FAST]: Recording...", (50, 100),
                            cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
                cv2.imwrite('overspeed.jpg', img)
                ps0 = 1

    # ---------------- Longitudinal: braking ----------------
    if float(y) >= 0:
        r = -float(y)
        ps0 = 0

        if float(y) >= 0.025 * 9.8:
            if float(y) <= 0.12 * 9.8:
                # Moderate braking: integrate the (negative) speed change
                t2 = time.time()
                te = abs(t2 - t1)
                v = v + r * (te + 0.27)
                cv2.putText(img, "Slowing", (260, 180),
                            cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
                ps1 = 0
            if float(y) >= 0.12 * 9.8:
                # Hard brake: count it once per event and save a photo
                if ps1 != 1:
                    counter1 += 1
                cv2.putText(img, "Risk detected [SLOW]: Recording...", (50, 100),
                            cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
                cv2.imwrite('overbrake.jpg', img)
                ps1 = 1
    # A sustained acceleration near 1 g on a horizontal axis suggests the
    # vehicle is on its side or upside down
    if abs(float(x)) > 0.8 * 9.8 or abs(float(y)) > 0.8 * 9.8:
        cv2.putText(img, "EMERGENCY 9-1-1", (100, 125),
                    cv2.FONT_HERSHEY_PLAIN, 1.0, (0, 0, 255))
    
        
   
    # ---------------- Lateral: right drift ----------------
    if float(x) > 0.015 * 9.8:
        if abs(float(x)) > 0.08 * 9.8:
            cv2.putText(img, "Right Warning", (220, 40),
                        cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
            GPIO.output(17, GPIO.HIGH)
            cv2.imwrite('Rdrift.jpg', img)
            if ps2 != 1:   # count each drift event once
                counter2 += 1
            ps2 = 1
        else:
            GPIO.output(17, GPIO.LOW)
            cv2.putText(img, "Going Right", (220, 40),
                        cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 255, 0))
            ps2 = 0

    # ---------------- Lateral: left drift ----------------
    if float(x) < -0.015 * 9.8:
        if abs(float(x)) > 0.08 * 9.8:
            cv2.putText(img, "Left Warning", (10, 40),
                        cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
            GPIO.output(17, GPIO.HIGH)
            cv2.imwrite('Ldrift.jpg', img)
            if ps3 != 1:
                counter3 += 1
            ps3 = 1
        else:
            GPIO.output(17, GPIO.LOW)
            cv2.putText(img, "Going left", (15, 40),
                        cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 255, 0))
            ps3 = 0
        
    # ---------------- On-screen telemetry and drive log ----------------
    u = time.strftime(' %Y/%m/%d %H:%M:%S', time.localtime(time.time()))

    cv2.putText(img, "x = %.2fG" % axes['y'], (140, 165),
                cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
    cv2.putText(img, "y = %.2fG" % axes['z'], (140, 177),
                cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
    cv2.putText(img, "z = %.2fG" % axes['x'], (140, 189),
                cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
    cv2.putText(img, u, (175, 15), cv2.FONT_HERSHEY_PLAIN, 0.75, (0, 0, 255))

    # Append timestamp, acceleration (m/s^2), and speed (m/s) to the drive log
    filetext = str(u) + "  ,  " + str(x) + "  ,  " + str(y) + "  ,  " \
               + str(z) + "  ,  " + str(v)
    with open("drivelog.dat", "a") as f:
        f.write(filetext + '\n')

    cv2.putText(img, "Speed is: " + str('%.2f' % (v * 2.237)) + " mph",
                (180, 25), cv2.FONT_HERSHEY_PLAIN, 0.75, (0, 0, 255))

    if v < -5:
        cv2.putText(img, "Backing up too fast! Brake now", (50, 150),
                    cv2.FONT_HERSHEY_PLAIN, 0.8, (0, 0, 255))
    # ---------------- Draw the detected lane lines ----------------
    # Keep only lines steeper than 28 degrees from horizontal, which
    # removes horizontal clutter and leaves the lane markings
    try:
        for line in lines:
            for x1, y1, x2, y2 in line:
                dx, dy = x2 - x1, y2 - y1
                angle = np.arctan2(dy, dx) * (180 / np.pi)
                if angle >= 28 or angle <= -28:
                    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    except TypeError:
        # HoughLinesP returns None when no lines are found
        pass
    
    # ---------------- Behavior summary and behavior log ----------------
    if counter1 >= 1 or counter0 >= 1:
        cv2.putText(img, str(counter1 + counter0) + " bad behaviors in last "
                    + str('%.2f' % (time.time() - t0)) + "s", (65, 203),
                    cv2.FONT_HERSHEY_PLAIN, 0.7, (200, 150, 250))
        cv2.putText(img, "You have " + str(counter1) + " bad brake " + " & "
                    + str(counter0) + " sudden speedup", (35, 212),
                    cv2.FONT_HERSHEY_PLAIN, 0.7, (200, 150, 250))
        filetext2 = ("As of now, " + str(u) + " ,you have had " + str(counter1)
                     + " bad brake " + " & " + str(counter0) + " sudden speedup "
                     + " in last " + str('%.2f' % (time.time() - t0)) + "s")
        with open("behaviorlog.dat", "a") as f:
            f.write(filetext2 + '\n')
    
    # Show the annotated frame and clear the stream for the next capture
    cv2.imshow("original", img)
    key = cv2.waitKey(1) & 0xFF
    rawCapture.truncate(0)

    if key == ord("q"):
        break

GPIO.cleanup()
cv2.destroyAllWindows()

    
