Security Monitoring Robot

Mengceng He (mh2387) Peidong Qi (pq32)

Objective

The objective of this project is to design and implement a security monitoring robot that can detect surrounding motion and help the user watch personal belongings. The robot is designed to be operated remotely: the user controls it from a computer over a WIFI network. The robot has two running modes, the Control mode and the Patrol mode. In the Control mode, the robot's movement is controlled by the user in real time, so it can be deployed to any position the user wants. In the Patrol mode, the user defines a routine and the robot follows it back and forth.





Introduction

In this project, we designed and implemented a security robot that can monitor personal belongings and detect motion in ordinary indoor environments. The information processing core, or brain, of the robot is a Raspberry Pi, which processes commands from the user and visual information from the surrounding environment. The visual information is collected by a Pi camera mounted on the front end of the robot. The movement module consists simply of two rear wheels driven by two servo motors and a rotatable front steel ball, which is enough for indoor environments where the ground is usually flat and smooth.

The two most important features of the robot are motion detection and stuff watching. The motion detection feature allows the robot to detect any motion in the nearby environment. The stuff watching feature enables the robot to watch an object marked with a special sticker; when the object is taken out of the robot's sight, a prompt message is sent to the user. Both features are implemented with specially designed code based on the OpenCV library, with parameters tuned for indoor (especially home) use. The robot has two running modes: the Control mode and the Patrol mode. In the Control mode, the user manipulates the movement of the robot in real time, which allows the robot to be deployed to any desired position. In the Patrol mode, the user defines a routine; the robot follows the routine and in this way can watch up to two objects automatically. The robot connects over WIFI to realize remote control.

Draft sketches of the mode design (left) and the hardware structure (right)


Hardware design




Pi Camera


PiCamera and installation



For our project, we used the Pi camera. We connected the camera to the CSI port on the Raspberry Pi by sliding its ribbon cable into the port with the blue side facing the Ethernet port. After the camera was installed on the Pi, we mounted it to the front of the robot frame.
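
As a quick sanity check after installation, a minimal sketch along the following lines can capture a test still with the picamera library (the resolution and output file name here are arbitrary choices, not values from the project):

from picamera import PiCamera
import time

camera = PiCamera()              # open the camera attached to the CSI port
camera.resolution = (320, 240)
time.sleep(2)                    # give the sensor time to warm up
camera.capture('test.jpg')       # write a still image to the current directory
camera.close()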



Servo


For our project, we used Parallax continuous rotation servo motors to drive the robot. The GPIO pins of the Raspberry Pi output pulse-width modulated (PWM) signals to the servos. PWM encodes a signal in the variation of its pulse width (the duration for which the signal is high). According to the servo manual, the pulses received by the servos must have a 20 ms gap between two adjacent high pulses. As shown in the figures below, the rotation speed of a servo changes with the high pulse width, which varies in the range 1.3 ms to 1.7 ms: 1.3 ms gives the maximum clockwise speed and 1.7 ms the maximum counterclockwise speed. When the high pulse width is 1.5 ms, the servo stops rotating.


(a) Servo stops when the pulse width is 1.5 ms


(b) Servo reaches maximum clockwise speed when the pulse width is 1.3 ms


(c) Servo reaches maximum counterclockwise speed when the pulse width is 1.7 ms
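
These timing values translate directly into the frequency and duty cycle passed to the RPi.GPIO PWM interface. A minimal sketch of the conversion (the helper name pwm_params is our own; the 20 ms gap and the pulse widths come from the servo manual):

LOW_TIME = 20.0 / 1000.0  # 20 ms gap between adjacent high pulses, in seconds

def pwm_params(pulse_width_s):
    # return (frequency in Hz, duty cycle in %) for a given high-pulse width
    period = pulse_width_s + LOW_TIME
    return 1.0 / period, 100.0 * pulse_width_s / period

freq_stop, dc_stop = pwm_params(1.5 / 1000.0)  # ~46.5 Hz, ~7.0%: servo stops
freq_cw,   dc_cw   = pwm_params(1.3 / 1000.0)  # maximum clockwise speed
freq_cc,   dc_cc   = pwm_params(1.7 / 1000.0)  # maximum counterclockwise speed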

The circuit for connecting a servo to the Raspberry Pi is shown in the figure below. The servos require a 4 to 6 VDC supply, for which we used a four-pack of AA batteries. The servos and the Pi were connected to the same ground to avoid ground voltage shifts. We added a 1 kΩ resistor between the GPIO pin and the servo's white signal wire, which limits the current through the line to a value the Pi can safely accept.


Servo circuit connection



Software design

Design of the user interface


The user interface for remote access is built with pygame. A flowchart showing the hierarchy of the menus is given below.


Menu hierarchy design

In Menu 1, the user is prompted to choose either the Control mode or the Patrol mode. If the Control mode is entered, the program goes to Menu 5, which indicates that the Control mode is currently running, and a window showing the stream from the camera opens. By pressing the 'q' key on the keyboard, the program exits the Control mode and enters Menu 2. This menu offers three options: run motion detection, run stuff monitoring, or go back to Menu 1. If motion detection or stuff monitoring is chosen, the user menu does not change, but a window showing the processed camera feed is displayed. Pressing 'q' on the keyboard quits these two functions.

If the Patrol mode is chosen in Menu 1, the program goes to Menu 3, where the user designs the patrol routine. The menu displays four buttons representing the movement directions: 'up', 'down', 'left' and 'right'. The user starts a segment of the routine by clicking one of them and ends the segment by clicking the 'stop' button; further segments are added by repeating this procedure. The routine design is finished by clicking 'commit'. The program then goes to Menu 4 and executes the Patrol mode along the designed routine, back and forth. At the end of each pass of the routine, stuff detection runs for a fixed period. In Menu 4, the program shows a message indicating that the Patrol mode is executing; the user can quit and go back to Menu 1 by pressing 'q' on the keyboard.
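
For reference, the transitions described above can be condensed into a small table (a sketch of the structure only; the actual GUI dispatches on mouse clicks and key presses, as in the code appendix):

TRANSITIONS = {
    (1, 'control'): 5,  # Menu 1 -> Control mode (live stream and driving)
    (1, 'patrol'):  3,  # Menu 1 -> routine design
    (5, 'q'):       2,  # leave Control mode -> detection menu
    (2, 'back'):    1,  # detection menu -> main menu
    (3, 'commit'):  4,  # commit a routine -> Patrol mode running
    (4, 'q'):       1,  # quit Patrol mode -> main menu
}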



Movement function


The movement function move( ) receives user commands from the keyboard and changes the state of the servos accordingly. Because the whole user interface is built in pygame, everything runs inside a repeating while loop; in each frame, the program sets the servos according to which arrow key is pressed. One thing worth noticing is that our code uses pygame.key.get_pressed( ) to detect key presses, and this only works if pygame.event.pump( ) is called in the same loop to refresh pygame's internal event state. The flowchart below shows the basic logic.



The basic logic flow of the movement function
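
A minimal sketch of this polling pattern is shown below; set_wheels is a placeholder for the servo calls, not a function from the project code.

import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))

while True:
    pygame.event.pump()              # refresh pygame's internal key state
    keys = pygame.key.get_pressed()  # would be stale without the pump() above
    if keys[pygame.K_q]:
        break                        # 'q' leaves the Control mode loop
    elif keys[pygame.K_UP]:
        pass                         # set_wheels('forward')
    elif keys[pygame.K_DOWN]:
        pass                         # set_wheels('backward')
    else:
        pass                         # no arrow key held: stop both servos
pygame.quit()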




Motion detection


We use computer vision methods to do motion detection. In detect(), we let the camera capture frames continuously. Image processing requires grayscale images, but the frames captured from the camera are in BGR, so the first step is to convert each frame from BGR to grayscale.

Next, for each frame except the first one, we calculate the difference in pixel values between the two adjacent frames. To remove salt-and-pepper noise in the difference result, we apply median filtering. After that, we threshold the result (with a threshold of 10) to enhance the difference; thresholding sets regions containing motion to white (255). We want to find motion regions of reasonable size and ignore small regions, so as to exclude the influence of changes in lighting and of vibrations. To handle this, morphological operations (closing) are carried out.

Finally, we sketch contours enclosing the remaining regions. If a region enclosed by a contour has more than 1000 pixels, the program informs the user that a motion has been detected. If a region has more than 4000 pixels, the program draws a rectangle in the original image and writes it to a jpg file. The values 1000 and 4000 were obtained through tuning. One problem here is that if every such frame were written out, a single large motion would generate a huge number of image files. To solve this, we employ a cooldown mechanism: any two image file writes must be at least 50 frames apart.


The basic logic flow of motion detection
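
The whole pipeline can be condensed into a short sketch (kernel sizes here are illustrative rather than our tuned values; old_im and new_im are consecutive grayscale frames):

import cv2

def motion_contours(old_im, new_im, min_area=1000):
    diff = cv2.absdiff(new_im, old_im)             # pixel-wise frame difference
    diff = cv2.medianBlur(diff, 5)                 # suppress salt-and-pepper noise
    mask = cv2.threshold(diff, 10, 255, cv2.THRESH_BINARY)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (20, 20))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # merge nearby blobs
    # [-2] keeps this working across OpenCV versions with different return arity
    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return [c for c in cnts if cv2.contourArea(c) > min_area]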




Stuff monitoring




The procedures of stuff monitoring


We use specially made green stickers to simplify the problem. A green sticker is attached to the object the user wants the robot to watch. If the sticker is present in the robot's sight and is recognized, the robot considers the object not taken away.

The algorithm for stuff monitoring is based on color recognition. The frames captured by the camera are first converted to the HSV color mode, in which the parameters can easily be tuned to a range suitable for detecting the color of the stickers. The three letters of HSV stand for hue, saturation and value. The lower and upper limits of the HSV range we set for the green sticker are (30, 100, 50) and (70, 255, 255).

The OpenCV library provides the function cv2.inRange( ), which marks the region within the HSV range with white pixels (255) and everything outside the range with black pixels (0). The algorithm marks the green sticker's region using this function.

However, after the previous two steps, small regions caused by noise may also be marked, and they should be neglected. In the third step, we apply morphological operations (opening) to erase them; the remaining marked region should then be the region of the sticker. Finally, the program draws a contour in the unprocessed image frame to show the user that the object has not been taken away.
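
Put together, the check reduces to a few OpenCV calls (a sketch; image is a BGR frame from the camera, and min_area is an illustrative noise floor rather than a tuned value):

import cv2

GREEN_LOWER = (30, 100, 50)   # lower HSV bound quoted above
GREEN_UPPER = (70, 255, 255)  # upper HSV bound

def sticker_visible(image, min_area=50):
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LOWER, GREEN_UPPER)  # in-range pixels -> 255
    mask = cv2.erode(mask, None, iterations=2)         # opening: erode, then
    mask = cv2.dilate(mask, None, iterations=2)        # dilate, to remove specks
    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return any(cv2.contourArea(c) > min_area for c in cnts)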



Patrol routine design


Before the robot enters the Patrol mode, the user can define a routine for the robot to follow; the code carries out the routine designed in Menu 3. We store the routine information in lists: for each routine, one list records the directions, and another records the duration (in milliseconds) for which the robot moves in each direction. For instance, if a user defines a routine that goes forward for 2 seconds, then turns left for 0.5 seconds and finally goes backwards for 1 second, one list contains 'up', 'left', 'down' and the other contains 2000, 500, 1000.

After finishing the user-defined routine, the robot executes the routine in reverse and in this way returns to its origin. The reversed routine information is stored in another two lists, analogous to the forward ones. For the example in the paragraph above, one list contains 'up', 'right', 'down' and the other contains 1000, 500, 2000. The following flowchart shows the idea of this part.


The procedures of defining the routine
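
The reversal itself is a simple list transformation; a sketch using the example from the text:

OPPOSITE = {'up': 'down', 'down': 'up', 'left': 'right', 'right': 'left'}

def reverse_routine(routine, routine_time):
    anti_routine = [OPPOSITE[d] for d in reversed(routine)]  # swap each direction
    anti_time = list(reversed(routine_time))                 # reverse the durations
    return anti_routine, anti_time

print(reverse_routine(['up', 'left', 'down'], [2000, 500, 1000]))
# -> (['up', 'right', 'down'], [1000, 500, 2000])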



Testing

Patrol mode testing


For the Patrol mode test, the user entered the Patrol mode and set up a patrol routine for the robot. The robot then followed the designed path on patrol.


Testing Patrol mode



Control mode testing


For the Control mode test, the user entered the Control mode. Live streaming was turned on, and the user could drive the robot with the direction keys on the keyboard.


Testing Control mode



Motion detection testing


When the user entered the motion detection mode, the live streaming was turned on. Once the camera detected a movement, it took a picture and marked the movement with a rectangle. In addition, the user was sent a message showing that a motion was detected.


Testing motion detection



Stuff detection testing


When the user entered the stuff detection mode, the camera took a picture and checked whether there was a green spot in it. When the green spot was detected, the Pi sent a message saying that the stuff was here and marked the green spot in the picture. When it was not, the Pi sent a message saying that the stuff was gone.


Testing stuff detection



Results


Our final product achieved all the goals we set at the beginning of the project. Our security monitoring robot can successfully perform motion detection and stuff detection, and it worked well in both the Control mode and the Patrol mode. Once the robot was set up and connected to WIFI, the user could open our GUI on a laptop and control the robot remotely.

The robot starts by letting the user choose one of two modes: the Patrol mode and the Control mode.

The user could enter either mode by clicking the corresponding button. In the Patrol mode, the user could design routines for the robot. After the routine design was finished, the robot moved along the routine on patrol. When it reached the desired position, it started stuff detection; afterwards, it followed the reversed path back to the starting point, where it performed stuff detection again. This commute continued until the user pressed "q" to quit the Patrol mode and return to the main menu.

When the user entered the Control mode, the robot could be driven with the arrow keys on the keyboard while a live video stream was displayed. The user could use the camera view to adjust the position of the robot and, once it reached the desired spot, press "q" to stop driving. The user could then enter the motion detection or stuff detection mode. In the motion detection mode, the camera took images of moving objects and saved them to disk. In the stuff detection mode, the robot looked for the green spot: if it was detected, the message "your stuff is here" was shown; if not, the message "your stuff is gone" was shown.



Conclusion


We successfully achieved all our goals for this project. During the final project, we learned how to use OpenCV to process images. We also learned that the Raspberry Pi has its limitations. For instance, when we designed the stuff detection, our original plan was to use the Hough transform: we could design a special shape for the Pi to detect, which would be more accurate than detection based on color. However, the Hough transform requires a lot of computation, and due to the performance limitations of the Raspberry Pi, the process can take a rather long time. We realized that the Hough transform needs a more powerful platform and hardware.

We used Parallax continuous rotation servo motors to drive our robot, and ran into a significant issue: the robot could not go straight. After talking to Professor Skovira, we realized that this was due to software PWM, and that no two servos are ever identical.

Over these four weeks of the project, from brainstorming the idea to finishing the product, we are delighted that we were able to pull it off.

Future work


1. The major issue with the Patrol mode is that the robot cannot follow the user-defined path perfectly. For example, the robot could not keep going straight when moving forward over a long distance. The reason is that the two servos always differ slightly in rotation speed, no matter how they are calibrated. In the future, a function could be added that adjusts the robot's position to offset the difference in servo speeds. An alternative is to replace the servos with more accurate motors that meet our requirements.

2. Currently, we can only control the robot from a laptop through SSH. We want to make it possible to run the interface on other platforms such as tablets or mobile phones.

3. The stuff detection of our robot needs improvement. Currently, the robot can only detect the color green, so it is easily confused by other green objects. We want to add more accurate identification methods, such as shape recognition, to make stuff detection more reliable.

Cost list

Parts                               Cost
Raspberry Pi 3                      $35 (not included)
Raspberry Pi Camera                 $30
Servo Motor x2                      $25
Robot Frame                         --
AA Battery Pack                     $5
Breadboard                          $5
Mobile Battery                      $10
Miscellaneous (wires, resistors)    --
Total                               $75

Work contribution

Peidong was responsible for the hardware design: he built the robot frame and wired the circuit. Peidong also wrote the program for stuff detection and helped improve the GUI.

Mengceng was responsible for the software design. He created the GUI and motion detection code. He also designed the patrol routine algorithm.

Acknowledgement

Great thanks to Prof. Joseph Skovira for his guidance. Every time we hit a roadblock or went down the wrong path, Prof. Skovira guided us back onto the right one. We would also like to thank all the TAs of ECE 5725 for their help. Finally, we want to thank Mark (Moonyoung) Lee and Peter A. Slater, the developers of 'Robotic Candy Sorter' from the fall 2016 offering of this course; their project inspired us to use color detection in OpenCV for our stuff detection.

Code Appendix

###############################################################
#                                                             #
# file: security_bot.py                                       #
#                                                             #
# authors:  Peidong  Qi  (pq32)                               #
#           Mengceng He  (mh2387)                             #
#                                                             #
# date:     Dec 9 2017                                        #
#                                                             #
# Here in this file is our final version code to run the      #
# security monitoring robot with full functions.              #
#                                                             #
#                                                             #
###############################################################

import sys, pygame
import os
import RPi.GPIO as GPIO

from picamera.array import PiRGBArray
from picamera import PiCamera
import cv2
import time
import numpy as np
###############################################################

#GPIO settings
GPIO.setmode(GPIO.BCM)

GPIO.setup(5, GPIO.OUT)   #GPIO pins 5 and 6 are used by 
GPIO.setup(6, GPIO.OUT)   #the servos

###############################################################

#pygame initialization
pygame.init()
pygame.mouse.set_visible(True)

size = width, height = 320, 240
screen_center=(width/2,height/2)
black = 0, 0, 0
white= 255, 255, 255
red = 255,0,0
green = 0,255,0
screen = pygame.display.set_mode(size)

###############################################################
#menu1 buttons positioning

quitx=250
quity=220

patrolx=70
patroly=100

controlx=250
controly=100

###############################################################
#menu2 buttons positioning

backx=250
backy=220

motion_detectx=70
motion_detecty=180

stuff_detectx=70
stuff_detecty=220

cx=150
cy=120

###############################################################
#menu3 buttons positioning

stop_x=60
stop_y=210

up_x=160
up_y=80

down_x=160
down_y=160

left_x=80
left_y=160

right_x=240
right_y=160

commit_x=250
commit_y=210

###############################################################
#menu4 buttons positioning

qx=150
qy=120

###############################################################
#menu5 buttons positioning

ax=150
ay=50

bx=150
by=100

###############################################################
#dict of button positions and texts
my_font=pygame.font.Font(None, 30)
my_buttons_1={'quit':(quitx,quity),'patrol':(patrolx,patroly),'control':(controlx,controly)}
my_buttons_2={'motion detect':(motion_detectx,motion_detecty),'back':(backx,backy), 'press q to quit detecting':(cx,cy), 'stuff detect':(stuff_detectx,stuff_detecty)}
my_buttons_3={'stop':(stop_x,stop_y),'up':(up_x,up_y),'down':(down_x,down_y), 'left':(left_x,left_y), 'right':(right_x,right_y),'commit':(commit_x,commit_y)}
menu4_display={'press q to quit':(qx,qy)}
menu5_display={'press q to quit moving':(bx,by), 'press direction key to control':(ax,ay)}

###############################################################

menu=1 #set the initial menu to menu1

###############################################################
#global variables

mousex=0 #default mouse cursor position
mousey=0

Clock = pygame.time.Clock()
tick_val=10

#Two servos are separately calibrated
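#(the left servo uses the nominal 1.3/1.7 ms limits; the right one was
# tuned to 1.40/1.62 ms so that both wheels run at matching speeds)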
pulse_width_cw_max_l=1.3/1000.0
pulse_width_cc_max_l=1.7/1000.0
pulse_width_l=1.5/1000.0
period_stop_l=pulse_width_l+20.0/1000.0
period_cw_l=pulse_width_cw_max_l+20.0/1000.0
period_cc_l=pulse_width_cc_max_l+20.0/1000.0
dc_stop_l=pulse_width_l*100/period_stop_l
dc_cw_l=pulse_width_cw_max_l*100/period_cw_l
dc_cc_l=pulse_width_cc_max_l*100/period_cc_l
freq_stop_l=1.0/period_stop_l
freq_cw_l=1.0/period_cw_l
freq_cc_l=1.0/period_cc_l

pulse_width_cw_max_r=1.40/1000.0
pulse_width_cc_max_r=1.62/1000.0
pulse_width_r=1.5/1000.0
period_stop_r=pulse_width_r+20.0/1000.0
period_cw_r=pulse_width_cw_max_r+20.0/1000.0
period_cc_r=pulse_width_cc_max_r+20.0/1000.0
dc_stop_r=pulse_width_r*100/period_stop_r
dc_cw_r=pulse_width_cw_max_r*100/period_cw_r
dc_cc_r=pulse_width_cc_max_r*100/period_cc_r
freq_stop_r=1.0/period_stop_r
freq_cw_r=1.0/period_cw_r
freq_cc_r=1.0/period_cc_r

p1 = GPIO.PWM(5,freq_stop_r)
p2 = GPIO.PWM(6,freq_stop_l)

#for servo_op()
s_num=1
direct=0
p1_direct=0 
p2_direct=0

#for menu 3
tc_flag=0
t_start=0
routine=[] #record routine directions
routine_time=[] #record the time for each segment of the routine
anti_routine=[] #the reversed routine directions
anti_time=[] #reverse of routine_time

#for menu 4
reverse_dir=1
t_step_start=0
doing_step=0
i=0
wait_flag=0
wait_start_time=0
initial_flag=1
stop_flag =0
###############################################################
#Set up the Pi camera
camera=PiCamera()
camera.resolution=(320,240)
camera.rotation=180
camera.framerate=15
rawCapture=PiRGBArray(camera,size=(320,240))




###############################################################
#stuff monitoring function
def stuff_monitor(time_to_detect):
    global menu, stop_flag  #these globals are reassigned when 'q' is pressed
    
    colorLB=(30,100,50)
    colorUB=(70,255,255)    #Set the color range in HSV color mode

    start_time=pygame.time.get_ticks()

    for frame in camera.capture_continuous(rawCapture, format = "bgr", use_video_port = True ):
        if pygame.key.get_pressed()[pygame.K_q]==True:
            menu=1
            stop_flag =1
            cv2.waitKey(1)
            rawCapture.truncate(0)
            cv2.destroyAllWindows()
            break
                    
        image = frame.array
        imageRaw = image.copy()
        pygame.event.pump()

        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        #convert the frame to HSV

        mask = cv2.inRange(hsv, colorLB, colorUB)
        #Mark region with the color range
        
        mask = cv2.erode(mask, None, iterations = 2)
        mask = cv2.dilate(mask, None, iterations = 2)
        #Use morphological operations to remove small regions that can be ignored
        
        contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        
        #draw a contour for the remaining region
        for c in contours:
            if cv2.contourArea(c) > 0:
                cv2.drawContours(image, [c], -1, (0,255,255), 2)
        
        #check if any pixels of the target color remain in the mask
        if np.sum(mask) > 0:
            print("Your stuff is here")
        else:
            print("Your stuff is gone")
                    
    
        current_time=pygame.time.get_ticks()

        if current_time-start_time>time_to_detect and time_to_detect!=0:
            cv2.imshow("overlay",image)
            cv2.waitKey(100)
            rawCapture.truncate(0)
            break

        cv2.waitKey(1)
        rawCapture.truncate(0)

###############################################################
# Used to change the states of servos
def servo_op(s_num,direct):
    if s_num==1:
        if direct==1:
            p1.ChangeFrequency(freq_cw_r)
            p1.ChangeDutyCycle(dc_cw_r)
        if direct==2:
            p1.ChangeFrequency(freq_cc_r)
            p1.ChangeDutyCycle(dc_cc_r)
        if direct==0:
            p1.ChangeFrequency(freq_stop_r)
            p1.ChangeDutyCycle(dc_stop_r)
    if s_num==2:
        if direct==1:
            p2.ChangeFrequency(freq_cw_l)
            p2.ChangeDutyCycle(dc_cw_l)
        if direct==2:
            p2.ChangeFrequency(freq_cc_l)
            p2.ChangeDutyCycle(dc_cc_l)
        if direct==0:
            p2.ChangeFrequency(freq_stop_l)
            p2.ChangeDutyCycle(dc_stop_l)

###############################################################
# Movement function for the Control mode
def move():
    p1.start(dc_stop_r)
    p2.start(dc_stop_l)
  
    for frame in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
        if pygame.key.get_pressed()[pygame.K_UP]==True:
            servo_op(1,1)
            servo_op(2,2)
            print("up")
            #If up arrow pressed, go forward
            
        elif pygame.key.get_pressed()[pygame.K_DOWN]==True:
            servo_op(1,2)
            servo_op(2,1)
            print("down")
            #If down arrow pressed, go backward
        
        elif pygame.key.get_pressed()[pygame.K_LEFT]==True:
            servo_op(1,1)
            servo_op(2,1)
            print("left")
            #If left arrow pressed, go leftward
        
        elif pygame.key.get_pressed()[pygame.K_RIGHT]==True:
            servo_op(1,2)
            servo_op(2,2)
            print("right")
            #If right arrow pressed, go rightward
            
        elif pygame.key.get_pressed()[pygame.K_q]==True:
            rawCapture.truncate(0)
            break
        
        else:
            servo_op(1,0)
            servo_op(2,0)
            
        pygame.event.pump()
        
        frame=frame.array
        cv2.imshow("Colored",frame)
        cv2.waitKey(1)
        rawCapture.truncate(0)
        
    
    p1.stop()
    p2.stop()

###############################################################
#motion detection function
def detect(time_to_detect):
    global menu
    global stop_flag       
    p1.stop()
    p2.stop()
    
    i=1 #the image frame count
    first_flag=0 #see if the frame is the first frame
    camera_wait_count=0

    start_time=pygame.time.get_ticks()
    
    for frame in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
        if pygame.key.get_pressed()[pygame.K_q]==True:
            menu=1
            cv2.waitKey(1)
            rawCapture.truncate(0)
            cv2.destroyWindow("Colored")
            break

        current_time=pygame.time.get_ticks()
        
        if current_time-start_time>time_to_detect and time_to_detect!=0:
            stop_flag=0
            cv2.waitKey(1)
            rawCapture.truncate(0)
            break
        
        pygame.event.pump()

        new1_im=frame.array
        new_im=cv2.cvtColor(new1_im,cv2.COLOR_BGR2GRAY)
        
        if first_flag == 0:   #If the frame is the first frame, do nothing
            old_im=new_im
            first_flag=1
            cv2.waitKey(1)
            rawCapture.truncate(0)
            continue
        
        diff=cv2.absdiff(new_im,old_im)   #Calculate the difference between two adjacent images
        diff_thre=cv2.medianBlur(diff,31) #Remove salt-and-pepper noise with a median filter
        diff_thre=cv2.threshold(diff_thre,10,255,cv2.THRESH_BINARY)[1] #threshold the filtered difference
        kernel1=cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(20,20))
        diff_thre=cv2.dilate(diff_thre,kernel1,iterations=1)
        kernel2=cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
        diff_thre=cv2.erode(diff_thre,kernel2,iterations=1) #do morphological operations
        (cnts,_)=cv2.findContours(diff_thre.copy(),cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
        #sketch contours for all motion regions
        for cnt in cnts:
            if pygame.key.get_pressed()[pygame.K_q]==True:
                menu=1
                break
            if cv2.contourArea(cnt)>1000:
                #if the region is >1000,inform the user
                detec_flag=1
                print("Motion detected %s"%i)
                
                if cv2.contourArea(cnt)>9000:
                #if the region is >9000, rectangle the region and write to a file
                    (x,y,w,h)=cv2.boundingRect(cnt)
                    cv2.rectangle(new1_im,(x,y),(x+w,y+h),(255,0,0),2)
                    if camera_wait_count==0:
                        camera.capture("./im%s.jpg"%i)
                        pic_name="./finalproj/12_3/im"+str(i)+".jpg"
                        cv2.imwrite(pic_name,new1_im)
                        camera_wait_count=50

        
        i=i+1
        if camera_wait_count>0:
            print(camera_wait_count)
            camera_wait_count-=1
        
        old_im=new_im
        
        cv2.imshow("Colored",new1_im)
        cv2.waitKey(1)
        rawCapture.truncate(0)
    
    print("stop detection")
    cv2.destroyWindow("Colored")
    
###############################################################
#execute the patrol routine
def patrol():
    global wait_flag
    global reverse_dir
    global t_step_start
    global doing_step
    global i
    global wait_start_time
    global initial_flag
    global menu
    global stop_flag
    global routine
    global anti_routine
    global routine_time
    global anti_time
    
    if pygame.key.get_pressed()[pygame.K_q]==True:
        return
    
    if wait_flag==0:  #if not at the end of a routine
        if initial_flag==1:
            print("test")
            p1.start(dc_stop_r)
            p2.start(dc_stop_l)
            initial_flag=0
        if reverse_dir==0: #if not following the reversed routine
            if doing_step==0: #if doing a segment of a routine
                t_step_start=pygame.time.get_ticks()
                doing_step=1

                if routine[i]=="up":
                    servo_op(1,1)
                    servo_op(2,2)
                    print("up")
                    
                elif routine[i]=="down":
                    servo_op(1,2)
                    servo_op(2,1)
                    print("down")
                
                elif routine[i]=="left":
                    servo_op(1,1)
                    servo_op(2,1)
                    print("left")
                
                elif routine[i]=="right":
                    servo_op(1,2)
                    servo_op(2,2)
                    print("right")
                    

            #if a segment of a routine is finished
            else:
                if pygame.time.get_ticks()-t_step_start>routine_time[i]:
                    doing_step=0
                    i=i+1 #go to the next segment
                    servo_op(1,0)
                    servo_op(2,0)

                    if i>=len(routine): #if at the end of a routine
                        i=0
                        reverse_dir=1 #change the routine direction
                        wait_flag=1
                        wait_start_time=pygame.time.get_ticks()
                        stuff_monitor(5000) #do stuff monitoring
                        
        elif reverse_dir==1: #if following the reversed routine
            if doing_step==0:
                t_step_start=pygame.time.get_ticks()
                doing_step=1
                if anti_routine[i]=="up":
                    servo_op(1,1)
                    servo_op(2,2)
                    print("up")
                    
                elif anti_routine[i]=="down":
                    servo_op(1,2)
                    servo_op(2,1)
                    print("down")
                
                elif anti_routine[i]=="left":
                    servo_op(1,1)
                    servo_op(2,1)
                    print("left")
                
                elif anti_routine[i]=="right":
                    servo_op(1,2)
                    servo_op(2,2)
                    print("right")

            else:
                if pygame.time.get_ticks()-t_step_start>anti_time[i]:
                    doing_step=0
                    i=i+1
                    servo_op(1,0)
                    servo_op(2,0)

                    if i>=len(routine):
                        i=0
                        reverse_dir=0
                        wait_flag=1
                        wait_start_time=pygame.time.get_ticks()
                        stuff_monitor(1000)
                            
    elif wait_flag==1:
        p1.start(dc_stop_r)
        p2.start(dc_stop_l)
        wait_flag=0
       

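###############################################################
#main loop: draw the current menu and dispatch mouse/keyboard events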
while True:
    Clock.tick(tick_val)
    screen.fill(black)
      
    if menu==1:
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONDOWN:
                pos=pygame.mouse.get_pos()
            elif event.type == pygame.MOUSEBUTTONUP:
                pos=pygame.mouse.get_pos()
                mousex,mousey=pos
                        
                if mousey>controly-10 and mousey<controly+10:
                    if mousex>controlx-20 and mousex<controlx+20:
                            print("control button pressed")
                            menu=5
                            
                if mousey>patroly-10 and mousey<patroly+10:
                    if mousex>patrolx-20 and mousex<patrolx+20:
                            print("patrol button pressed")
                            menu=3
                            
                            
                if mousey>quity-10 and mousey<quity+10:
                    if mousex>quitx-20 and mousex<quitx+20:
                            print("quit button pressed")
                            GPIO.cleanup()
                            sys.exit()
                                    
        for my_text,text_pos in my_buttons_1.items():
            text_surface=my_font.render(my_text,True,white)
            rect=text_surface.get_rect(center=text_pos)
            screen.blit(text_surface,rect)
            
    if menu==2:
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONDOWN:
                pos=pygame.mouse.get_pos()
            elif event.type == pygame.MOUSEBUTTONUP:
                pos=pygame.mouse.get_pos()
                mousex,mousey=pos
                
                if mousey>backy-10 and mousey<backy+10:
                    if mousex>backx-20 and mousex<backx+20:
                            print("back button pressed")
                            menu=1
                
                if mousey>motion_detecty-10 and mousey<motion_detecty+10:
                    if mousex>motion_detectx-20 and mousex<motion_detectx+20:
                            print("motion detect button pressed")
                            detect(0)
                            
                if mousey>stuff_detecty-10 and mousey<stuff_detecty+10:
                    if mousex>stuff_detectx-20 and mousex<stuff_detectx+20:
                            print("stuff detect button pressed")
                            stuff_monitor(1000)
                            
        for my_text,text_pos in my_buttons_2.items():
            text_surface=my_font.render(my_text,True,white)
            rect=text_surface.get_rect(center=text_pos)
            screen.blit(text_surface,rect)
    
    if menu==3:
        
        if initial_flag==1:
            p1.start(dc_stop_r)
            p2.start(dc_stop_l)
            initial_flag=0
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONDOWN:
                pos=pygame.mouse.get_pos()
            elif event.type == pygame.MOUSEBUTTONUP:
                pos=pygame.mouse.get_pos()
                mousex,mousey=pos

                if mousey>up_y-20 and mousey<up_y+20:
                    if mousex>up_x-20 and mousex<up_x+20:
                        if tc_flag==0:
                            print("up")
                        
                            servo_op(1,1)
                            servo_op(2,2)
                            tc_flag=1
                            routine.append("up")
                            t_start=pygame.time.get_ticks()
                            
                if mousey>down_y-20 and mousey<down_y+20:
                    if mousex>down_x-20 and mousex<down_x+20:
                        if tc_flag==0:
                            print("down")
                            servo_op(1,2)
                            servo_op(2,1)
                            tc_flag=1
                            routine.append("down")
                            t_start=pygame.time.get_ticks()   
                
                if mousey>left_y-20 and mousey<left_y+20:
                    if mousex>left_x-20 and mousex<left_x+20:
                         if tc_flag==0:
                            print("left")
                            servo_op(1,1)
                            servo_op(2,1)
                            tc_flag=1
                            routine.append("left")
                            t_start=pygame.time.get_ticks()   
                
                if mousey>right_y-20 and mousey<right_y+20:
                    if mousex>right_x-20 and mousex<right_x+20:
                         if tc_flag==0:
                            print("right")
                            servo_op(1,2)
                            servo_op(2,2)
                            tc_flag=1
                            routine.append("right")
                            t_start=pygame.time.get_ticks()
                            
                if mousey>stop_y-20 and mousey<stop_y+20:
                    if mousex>stop_x-20 and mousex<stop_x+20:
                         if tc_flag==1:
                            print("stop")
                            servo_op(1,0)
                            servo_op(2,0)
                            tc_flag=0
                            t_current=pygame.time.get_ticks() 
                            routine_time.append(t_current-t_start)
                            print(routine)
                            print(routine_time)
                
                if mousey>commit_y-20 and mousey<commit_y+20:
                    if mousex>commit_x-20 and mousex<commit_x+20:
                         if tc_flag==0:
                            print("commit")
                            
                            for direction in list(reversed(routine)):
                                if direction == "up":
                                    anti_routine.append("down")
                                elif direction  =="down":
                                    anti_routine.append("up")
                                elif direction  =="left":
                                    anti_routine.append("right")
                                elif direction  =="right":
                                    anti_routine.append("left")
                            
                            anti_time=list(reversed(routine_time))
                            menu=4
                         
                            print(routine)
                            print(routine_time)
                            print(anti_routine)
                            print(anti_time)
                    
        for my_text,text_pos in my_buttons_3.items():
            text_surface=my_font.render(my_text,True,white)
            rect=text_surface.get_rect(center=text_pos)
            screen.blit(text_surface,rect)
    
    if menu==4:
        for my_text,text_pos in menu4_display.items():
            text_surface=my_font.render(my_text,True,white)
            rect=text_surface.get_rect(center=text_pos)
            screen.blit(text_surface,rect)
       
        patrol()
        
        if pygame.key.get_pressed()[pygame.K_q]==True:
            routine=[]
            anti_routine=[]
            routine_time=[]
            anti_time=[]
            
            menu=1
            stop_flag=0
            initial_flag=1
            
    if menu==5:
        screen.fill(black)
        menu=2
       
        for my_text,text_pos in menu5_display.items():
            text_surface=my_font.render(my_text,True,white)
            rect=text_surface.get_rect(center=text_pos)
            screen.blit(text_surface,rect)
            pygame.display.flip()   
        move()
        
        
    pygame.display.flip()                              


GPIO.cleanup()

Contact

Mengceng He mh2387@cornell.edu
Peidong Qi pq32@cornell.edu