A Project By Daniel Fayad and Neil Farnham.
The Mobile Item Sorter is a robot capable of identifying, extracting, and sorting items by color. It uses a camera to detect colored boxes on the floor and guide the robot to them, and a paper scoop to lift them up to a sorting region in the robot's chassis. A Raspberry Pi mounted on top of the frame serves as the robot's computer, running Python code to navigate and to actuate the sorting and lifting motors.
The design was divided into four main parts: tracking, navigation, collection, and sorting. Tracking uses the camera and computer vision libraries to detect the location and color of each object. Navigation uses a constant stream of location data to steer toward the object, continuously updating the PWM signals sent to the servos to keep moving in the object's direction. The collection mechanism is a DC-motor-controlled scoop made of paper. The sorting mechanism uses a script to control a stepper motor that steers a deflection fin, guiding the blocks into the correct bin. The scripts communicate with each other through FIFOs (named pipes), with the navigation script acting as a central hub: every other script communicates only with the navigation script.
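The hub-and-spoke FIFO pattern can be sketched in miniature. The toy example below (a temporary path stands in for the real /home/pi/video_fifo_* pipes, and a thread stands in for the second script) shows how one side writes "A"-tagged coordinates, as the tracking script does, and the other parses out the most recent value:

```python
import os
import tempfile
import threading

def write_coord(path, value):
    # one process writes a tagged coordinate; open blocks until a reader appears
    with open(path, 'w') as f:
        f.write('A' + str(value))

def read_latest_coord(path):
    # the other process reads; several updates may be queued, e.g. "A120A133",
    # so keep only the newest value after splitting on the "A" tags
    with open(path, 'r') as f:
        raw = f.read()
    return int(raw.split('A')[-1])

fifo = os.path.join(tempfile.mkdtemp(), 'video_fifo_x_red')
os.mkfifo(fifo)
writer = threading.Thread(target=write_coord, args=(fifo, 133))
writer.start()
latest = read_latest_coord(fifo)
writer.join()
print(latest)
```

This is also why the scripts split on runs of "A" with `re.split("A+", ...)`: if the reader falls behind, multiple writes pile up in the pipe and only the last coordinate is current.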
Due to the hasty construction and design of many elements of the project, there are many improvements that could be made.
In navigating the robot toward items, there was only one closed-loop control structure: the loop from the camera to the navigation motors. While this certainly works, it must run quite slowly to give reliable navigational input. Implementing a closed-loop controller on the wheel velocity specifically would make the robot's motion far more predictable and smooth, helping to remove some of the innate heaving as well. Furthermore, reliable and predictable motion opens up the possibility of more advanced navigational techniques.
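As a sketch of what wheel-level closed-loop control might look like, the snippet below runs a simple PI velocity loop against a toy first-order wheel model. The gains, the time step, and the plant model are illustrative assumptions; on the real robot the "measured" speed would come from a wheel encoder we did not install.

```python
class WheelPI:
    """Toy PI controller for one wheel's velocity (gains are illustrative)."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt        # integral term removes steady-state error
        return self.kp * err + self.ki * self.integral

pid = WheelPI(kp=0.8, ki=2.0, dt=0.05)
speed = 0.0                                   # stand-in for an encoder reading, wheel at rest
for _ in range(200):
    cmd = pid.update(1.0, speed)              # chase a 1.0 rad/s setpoint
    speed += (cmd - speed) * 0.2              # crude first-order wheel response to the command
print(round(speed, 2))                        # settles at the setpoint
```

The integral term is what would absorb battery-voltage sag: as the pack droops, the same error produces a growing command until the measured speed matches the setpoint again.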
With predictable robot motion and reliable camera data, it may be possible to construct a new navigation system that implements a Kalman filter. Essentially, because of the near one-to-one mapping between positions in the camera's image space and locations on the floor, combined with predictable robot motion, it is possible to predict where an object will appear in the camera frame after a given maneuver. A control structure that weighs both the measured data and the model's prediction yields an adaptive estimator that can handle erroneous data from the camera. Furthermore, the predictive nature of this structure would allow estimates to be made between frames of the video stream. Essentially, this sort of structure could permit accurate data interpolation, opening up the possibility of a more continuous servo controller.
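The idea can be illustrated with a one-dimensional Kalman filter tracking an object's x coordinate in the camera frame. The motion model ("a maneuver shifts the object by u pixels per step") and the process/measurement noise values q and r are illustrative assumptions, not measured quantities:

```python
def kalman_step(x, p, z, u, q=1.0, r=4.0):
    # predict: apply the expected image-space shift caused by the maneuver
    x_pred = x + u
    p_pred = p + q                     # uncertainty grows with each prediction
    # update: blend the prediction with the (noisy) camera measurement
    k = p_pred / (p_pred + r)          # Kalman gain: how much to trust the camera
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred           # uncertainty shrinks after a measurement
    return x_new, p_new

x, p = 200.0, 25.0                     # initial pixel estimate and its variance
for z in [205, 211, 214, 221, 226]:    # simulated pixel measurements
    x, p = kalman_step(x, p, z, u=5.0) # maneuver expected to shift the image ~5 px
print(round(x, 1), round(p, 2))
```

Between camera frames, the predict step alone (`x + u`, `p + q`) provides the interpolated estimate mentioned above; a spurious measurement moves the estimate only by the gain-weighted innovation, which is what makes the filter tolerant of erroneous camera data.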
Because the robot's planar motion is decoupled, it is possible to have a navigation servo control system that commands a forward velocity and a turn simultaneously. This would be difficult to implement well in the current system due to the discrete nature of the command inputs, but with upgraded navigation logic that allows for data prediction (like the Kalman filter) and actual velocity control on the wheels, it may be possible to implement a system that continuously varies the PWM signals to the navigation motors to generate smooth, fast robot motion. Essentially, the controller would overlap velocity and steering commands to provide a continuously varying PWM signal to each motor.
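A minimal sketch of that mixing, under the conventions already used in the robot script: 1.5 ms is the continuous-rotation servos' neutral pulse, the right wheel is mounted mirrored so its sense is inverted, and the 0.2 ms full-scale span is an assumption chosen to match the "pedal to the metal" pulses (1.30/1.70 ms):

```python
def mix(forward, turn, span_ms=0.2):
    """Blend forward, turn in [-1, 1] into (right_ms, left_ms) pulse widths."""
    # standard differential-drive mixing, clamped to the servos' range
    right = max(-1.0, min(1.0, forward - turn))
    left = max(-1.0, min(1.0, forward + turn))
    # right wheel is mirrored, so its pulse moves opposite to the left's
    return 1.5 - right * span_ms, 1.5 + left * span_ms

print(mix(1.0, 0.0))    # full forward
print(mix(0.0, 0.05))   # gentle spin in place
```

Replacing the current discrete "go fwd / go left / go right" steps with continuously varying `forward` and `turn` inputs to a mixer like this is what would let the robot arc toward a target instead of stepping toward it.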
With regard to the scooper, the open-loop nature of the controller meant that the motor had unreliable velocity and position control; implementing a closed-loop control structure here could greatly improve the scooper's performance and significantly lessen the impact of voltage changes on the system. Feedback could come either from an optical wheel encoder or from a potentiometer, the latter being the better choice due to its continuous output and more accurate positional data. However, a potentiometer would almost certainly require an analog-to-digital converter to be useful, adding significant complexity to the design.
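A hypothetical sketch of that closed loop: a potentiometer on the scoop shaft is sampled through an ADC (for example an MCP3008; the `read_adc` and `set_motor` callables below are stand-ins, and the gain and tolerance are illustrative), and a proportional loop drives the motor until the measured position reaches the target. A toy simulation stands in for the hardware so the loop can be exercised:

```python
def drive_scoop(read_adc, set_motor, target, kp=0.02, tol=5, max_steps=500):
    """Drive the scoop motor until the pot reading is within tol of target."""
    for _ in range(max_steps):
        pos = read_adc()                       # raw 0..1023 potentiometer count
        err = target - pos
        if abs(err) <= tol:
            set_motor(0.0)                     # close enough: stop the motor
            return pos
        set_motor(max(-1.0, min(1.0, kp * err)))  # proportional command, clamped
    set_motor(0.0)                             # give up after max_steps
    return read_adc()

# toy plant standing in for the real motor + potentiometer
state = {'pos': 100.0, 'cmd': 0.0}
def fake_adc():
    state['pos'] += state['cmd'] * 20.0        # motor command moves the pot
    return state['pos']
def fake_motor(cmd):
    state['cmd'] = cmd

final = drive_scoop(fake_adc, fake_motor, target=600)
print(final)
```

Because the command scales down as the error shrinks, the scoop decelerates into position instead of slamming into its end stop, and a sagging battery only slows convergence rather than changing where the scoop ends up.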
Regarding the use of computer vision to detect small objects: our system was fairly robust, reliably detecting objects within a window of 26 inches in front of the robot and 20 inches to either side, as long as the room was evenly lit. We noticed that glare and shadows degrade performance, shrinking the detection window and sometimes even forcing us to readjust the color boundaries. We also noticed that red was easier to detect than green, probably due to color contrast or our chosen range boundaries, and that the system works best when detecting against a uniform, dark background.
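The sensitivity to lighting comes down to where a pixel lands in HSV space. The snippet below mimics the `cv2.inRange` test from the tracking script using only the standard library; the bounds here are illustrative, not our tuned ones. Glare pushes value up and saturation down, which is exactly how a pixel slides out of a window like this:

```python
import colorsys

def in_hsv_range(rgb, lower, upper):
    """Check an RGB pixel against HSV bounds, OpenCV-style (H 0-179, S/V 0-255)."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    hsv = (h * 179, s * 255, v * 255)
    return all(lo <= c <= hi for c, lo, hi in zip(hsv, lower, upper))

# a saturated red pixel passes a red-ish window; a dim gray one does not
print(in_hsv_range((200, 30, 30), (0, 100, 80), (10, 255, 255)))   # True
print(in_hsv_range((90, 90, 90), (0, 100, 80), (10, 255, 255)))    # False
```

Keeping a saturation floor in the lower bound is what rejects washed-out glare and gray shadows, and is one reason a dark uniform background works so well: background pixels simply fail the saturation test.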
In charge of the Raspberry Pi and camera module interaction. Worked on the script to detect colors, mark their contours, identify their centers, and track them continuously. Also helped with testing, debugging, and tuning the robot. Additionally, built the website.
Created the navigation algorithm, the collecting and sorting mechanisms, and the scripts to control the actuators. Constructed the robot and created the FIFO-based communication system. Tested and debugged the robot.
#stream_tracking.py
#import packages
from collections import deque
import numpy as np
import argparse
import imutils
import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import os
import RPi.GPIO as GPIO
import subprocess

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)
isRunning = 1

def GPIO27_cb(channel):
    print("quitting feed")
    global isRunning
    isRunning = 0

GPIO.add_event_detect(27, GPIO.FALLING, callback=GPIO27_cb, bouncetime=300)

#initialize camera
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 24
rawCapture = PiRGBArray(camera, size=(640, 480))

#define boundaries for each color in HSV, then initialize list of tracked points
#y_Lower=(0,119,84)
#y_Upper=(223,208,155)
r_Lower = (130, 22, 67)
r_Upper = (250, 197, 255)
g_Lower = (37, 32, 119)
g_Upper = (158, 133, 255)
#t_Lower=(7,58,148)
#t_Upper=(125,255,185)
#pts=deque(maxlen=args["buffer"])
pts = deque(maxlen=16)

#allow camera to warm up
print("warming up...")
time.sleep(2.5)

#capture frames from camera
for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    #grab the NumPy array representing the image
    frame = f.array
    #resize the frame and convert it to the HSV color space
    frame = imutils.resize(frame, width=400)
    #blurred = cv2.GaussianBlur(frame, (11,11), 0)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    #FIND RED--------------------------------
    #construct a mask for the color red, then perform a series of
    #erosions and dilations to remove any small blobs left in the mask
    mask_R = cv2.inRange(hsv, r_Lower, r_Upper)
    mask_R = cv2.erode(mask_R, None, iterations=2)
    mask_R = cv2.dilate(mask_R, None, iterations=2)

    #FIND CONTOURS IN RED
    #find contours in the mask and initialize the current (x, y) center
    cnts_R = cv2.findContours(mask_R.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    center = None

    #loop over RED contours
    MIN_THRESH = 80
    MAX_THRESH = 50000
    x = ''
    y = ''
    for c in cnts_R:
        if cv2.contourArea(c) > MIN_THRESH and cv2.contourArea(c) < MAX_THRESH:
            #compute the bounding box and centroid of the contour
            (x, y, w, h) = cv2.boundingRect(c)
            M = cv2.moments(c)
            cX = int(M["m10"] / M["m00"])
            cY = int(M["m01"] / M["m00"])
            #draw on frame
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.circle(frame, (cX, cY), 7, (255, 255, 255), -1)
            cv2.putText(frame, "center red", (cX - 20, cY - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
            os.system('echo A' + str(x) + ' > /home/pi/video_fifo_x_red &') # sending coordinates to fifos
            os.system('echo A' + str(y) + ' > /home/pi/video_fifo_y_red &')

    #FIND GREEN-----------------------------------
    mask_G = cv2.inRange(hsv, g_Lower, g_Upper)
    mask_G = cv2.erode(mask_G, None, iterations=2)
    mask_G = cv2.dilate(mask_G, None, iterations=2)
    #find contours in green
    cnts_G = cv2.findContours(mask_G.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    x = ''
    y = ''
    #loop over GREEN contours
    for c in cnts_G:
        if cv2.contourArea(c) > MIN_THRESH and cv2.contourArea(c) < MAX_THRESH:
            #compute the bounding box and centroid of the contour
            (x, y, w, h) = cv2.boundingRect(c)
            M = cv2.moments(c)
            cX = int(M["m10"] / M["m00"])
            cY = int(M["m01"] / M["m00"])
            #draw on frame
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.circle(frame, (cX, cY), 7, (255, 255, 255), -1)
            cv2.putText(frame, "center green", (cX - 20, cY - 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
            os.system('echo A' + str(x) + ' > /home/pi/video_fifo_x_green &') # sending coordinates to fifos
            os.system('echo A' + str(y) + ' > /home/pi/video_fifo_y_green &')

    cv2.imshow("Feed", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
    if isRunning == 0:
        break
    rawCapture.truncate(0)
#if len(cnts)>0:
#find largest contour in the mask then use it to compute
#the minimum enclosing circle and centroid
#c=max(cnts, key=cv2.contourArea)
#((x,y), radius)=cv2.minEnclosingCircle(c) #CHANGE THIS TO RECTANGLE LATER!!!!
#M=cv2.moments(c)
#center=(int(M["m10"]/M["m10"]), int(M["m01"]/M["m00"]))
#only proceed if the radius meets a minimum size
#if radius>10:
#draw the circle and centroid on the frame,
#then update the list of tracked points
# cv2.circle(frame,(int(x),int(y)), int(radius),
# (0,255,255),2)
# cv2.circle(frame, center, 5, (0,0,255), -1)
# update the points queue
#pts.appendleft(center)
#loop over the set of tracked points
#for i in xrange(1,len(pts)):
#if either of the tracked points are None, ignore them
#if pts[i-1] is None or pts[i] is None:
#continue
#otherwise, compute the thickness of the line and draw the connecting lines
#thickness=int(np.sqrt(args["buffer"]/float(i+1))*2.5)
#cv2.line(frame, pts[i-1],pts[i], (0,0,255), thickness)
#cleanup the camera and close any open windows
camera.close()
cv2.destroyAllWindows()
#sg90_v2.py
#to control the SG90 sorting servo w/ fifo support
import RPi.GPIO as GPIO
import time
import subprocess
import os

GPIO.setmode(GPIO.BCM)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

T = 0.020
sg90_pwm = GPIO.PWM(16, T)
sg90_pwm.start(100 * 0.0015 / T)
isRunning = 1
cur_pos = 0

def change_pwm(sig):
    global sg90_pwm
    sig = sig / 1000
    sg90_pwm.ChangeFrequency(1 / T)
    sg90_pwm.ChangeDutyCycle(100 * sig / T)

def GPIO27_cb(channel):
    print("quitting sg90")
    global isRunning
    isRunning = 0

GPIO.add_event_detect(27, GPIO.FALLING, callback=GPIO27_cb, bouncetime=300)

os.system('echo 0 > /home/pi/sg90_fifo &')
input_fifo = open('sg90_fifo', 'r', 0)
change_pwm(1.225)

while isRunning == 1:
    sg90_pos_raw = input_fifo.read()
    input_fifo.flush()
    try:
        sg90_pos = int(sg90_pos_raw)
    except:
        sg90_pos = cur_pos
    if sg90_pos != cur_pos:
        cur_pos = sg90_pos
        if sg90_pos == 1: # to green bin
            change_pwm(1.05)
            print('to green bin')
        elif sg90_pos == -1: # to red bin
            change_pwm(1.40)
            print('to red bin')
        else:
            change_pwm(1.225)
            print('default')
    time.sleep(0.05)

sg90_pwm.stop()
GPIO.cleanup()
#robot_v2.py
#robot navigation w/ fifo support
import RPi.GPIO as GPIO
import time
import subprocess
import os
import re

GPIO.setmode(GPIO.BCM)
GPIO.setup(5, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

left = GPIO.PWM(5, 0.0215)
right = GPIO.PWM(13, 0.0215)
sig_s = 0.0015
base_s = 0.02
left.start(sig_s * 100 / 0.0215)
right.start(sig_s * 100 / 0.0215)
isRunning = True

def change_pwm(sig, side):
    global left
    global right
    sig = sig / 1000
    if side == 1:
        T = base_s + sig
        right.ChangeFrequency(1 / T)
        right.ChangeDutyCycle(sig * 100 / T)
    if side == 2:
        T = base_s + sig
        left.ChangeFrequency(1 / T)
        left.ChangeDutyCycle(sig * 100 / T)

def GPIO27_cb(channel):
    global isRunning
    print('quitting robot')
    isRunning = False

GPIO.add_event_detect(27, GPIO.FALLING, callback=GPIO27_cb, bouncetime=300)

# actual loop stuff
change_pwm(1.5, 1)
change_pwm(1.5, 2)

#opening fifos
input_fifo_redx = open('video_fifo_x_red', 'r', 0)
input_fifo_redy = open('video_fifo_y_red', 'r', 0)
input_fifo_greenx = open('video_fifo_x_green', 'r', 0)
input_fifo_greeny = open('video_fifo_y_green', 'r', 0)

iTR = False
iTG = False
Cap = False
step_wander = 1
spot_red = 0
spot_green = 0
inCap_range = 0
while isRunning:
    set_s = False
    spot_red = 0 # need to be zero entering wander stage
    spot_green = 0
    os.system('echo 0 > /home/pi/sg90_fifo &')

    # WANDERING
    while (not iTR) & (not iTG) & (not Cap) & (isRunning):
        print('in wander')
        set_time = -1
        max_time = 0.0
        if (step_wander == 1) & (not set_s): # move fwd
            change_pwm(1.55, 1)
            change_pwm(1.45, 2)
            max_time = 0.1
            set_s = True
            set_time = time.time()
        if (step_wander == 2) & (not set_s): # move left
            change_pwm(1.45, 1)
            change_pwm(1.45, 2)
            max_time = 0.1
            set_s = True
            set_time = time.time()
        if (step_wander == 3) & (not set_s): # move right
            change_pwm(1.45, 1)
            change_pwm(1.45, 2)
            max_time = 0.1
            set_s = True
            set_time = time.time()
        if (time.time() - set_time) > max_time:
            set_s = False
            step_wander = step_wander + 1
            if step_wander > 2:
                step_wander = 1
        time.sleep(0.3) # sleep for refresh rate

        # pull coordinates from the fifos for both colors
        red_input_xs = input_fifo_redx.read()
        red_input_ys = input_fifo_redy.read()
        green_input_xs = input_fifo_greenx.read()
        green_input_ys = input_fifo_greeny.read()
        input_fifo_redx.flush()
        input_fifo_redy.flush()
        input_fifo_greenx.flush()
        input_fifo_greeny.flush()
        try: # for reds
            red_xs_list = re.split("A+", red_input_xs)
            red_centerX = int(red_xs_list[-1])
            spot_red = spot_red + 1
            print('I see something... red!')
        except:
            red_centerX = 1000
            red_centerY = 1000
            spot_red = spot_red - 1
        try: # for greens
            green_xs_list = re.split("A+", green_input_xs)
            green_centerX = int(green_xs_list[-1])
            spot_green = spot_green + 1
            print('I see something... green!')
        except:
            green_centerX = 1000
            green_centerY = 1000
            spot_green = spot_green - 1
        if spot_red < 0:
            spot_red = 0
        if spot_green < 0:
            spot_green = 0
        # if a color is seen on enough recent frames, consider it spotted
        if (spot_red > 1) & (not iTG):
            iTR = True
            print('going to red tracking...')
        if (spot_green > 1) & (not iTR):
            iTG = True
            print('going to green tracking...')
        print(' ')
    #TRACKING...
    while (iTR | iTG) & isRunning:
        if iTR:
            # give SG90 cmd to sort to red
            os.system('echo -1 > /home/pi/sg90_fifo &')
            print('Tracking RED')
        else:
            # give SG90 cmd to sort to green
            os.system('echo 1 > /home/pi/sg90_fifo &')
            print('Tracking GREEN')
        xTrackL_bound = 165 #from camera's perspective
        xTrackR_bound = 210
        red_centerX_raw = input_fifo_redx.read()
        red_centerY_raw = input_fifo_redy.read()
        green_centerX_raw = input_fifo_greenx.read()
        green_centerY_raw = input_fifo_greeny.read()
        input_fifo_redx.flush()
        input_fifo_redy.flush()
        input_fifo_greenx.flush()
        input_fifo_greeny.flush()
        spot_saturation = 6
        if iTR: # for RED
            try:
                red_xs_list = re.split("A+", red_centerX_raw)
                red_ys_list = re.split("A+", red_centerY_raw)
                red_centerX = int(red_xs_list[-1])
                red_centerY = int(red_ys_list[-1])
                spot_red = spot_red + 1
                print('tracking...')
            except:
                red_centerX = 1000
                red_centerY = 1000
                spot_red = spot_red - 1
                print('lost sight')
            centerY = red_centerY
            centerX = red_centerX
            if spot_red > spot_saturation:
                spot_red = spot_saturation
            if spot_red < 0: # lost sight of red
                spot_red = 0
                iTR = False
                print('going to wander...')
        else: # for GREEN
            try:
                green_xs_list = re.split("A+", green_centerX_raw)
                green_ys_list = re.split("A+", green_centerY_raw)
                green_centerX = int(green_xs_list[-1])
                green_centerY = int(green_ys_list[-1])
                spot_green = spot_green + 1
                print('tracking...')
            except:
                green_centerX = 1000
                green_centerY = 1000
                spot_green = spot_green - 1
                print('lost sight')
            centerY = green_centerY
            centerX = green_centerX
            if spot_green > spot_saturation:
                spot_green = spot_saturation
            if spot_green < 0: # lost sight of green
                spot_green = 0
                iTG = False
                print('going to wander...')
        # if the object center is inside the capture box near the lifter
        # 85 < y < 125, 165 < x < 212
        if (centerY > 85) & (centerY < 125):
            if (centerX > 165) & (centerX < 212):
                inCap_range = inCap_range + 1
            else:
                inCap_range = 0
        if inCap_range > 1:
            iTR = False
            iTG = False
            Cap = True
            print('going to cap...')
        # maybe make more sophisticated...
        # if shape center is on left side, go right one step (inverted cam)
        if centerX < xTrackL_bound:
            change_pwm(1.51, 1)
            change_pwm(1.51, 2)
            print('go right')
        # if shape center is in the middle, go fwd
        elif centerX < xTrackR_bound:
            change_pwm(1.47, 1)
            change_pwm(1.53, 2)
            print('go fwd')
        # else shape center is on right side, go left one step
        elif centerX < 480: # so basically if it exists
            change_pwm(1.48, 1)
            change_pwm(1.48, 2)
            print('go left')
        else:
            change_pwm(1.53, 1)
            change_pwm(1.47, 2)
            print("can't see, bkwd")
        print(' ')
        time.sleep(0.3) # run at refresh rate
    while Cap & isRunning:
        # pedal to the metal for 1 sec, raise the lifter, then pause and reset
        change_pwm(1.30, 1)
        change_pwm(1.70, 2)
        print('go go go go!')
        time.sleep(1)
        os.system('echo 1 > lifter_fifo &')
        time.sleep(3)
        Cap = False
        iTR = False
        iTG = False

left.stop()
right.stop()
GPIO.cleanup()
#lifter_v2.py
#large servo control w/ fifo support
import RPi.GPIO as GPIO
import time
import subprocess
import os

GPIO.setmode(GPIO.BCM)
GPIO.setup(19, GPIO.OUT)
GPIO.setup(26, GPIO.OUT)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)
isRunning = 1

def GPIO27_cb(channel):
    print("quitting big servo")
    global isRunning
    isRunning = 0

GPIO.add_event_detect(27, GPIO.FALLING, callback=GPIO27_cb, bouncetime=300)

os.system('echo 0 > /home/pi/lifter_fifo &')
input_fifo = open('lifter_fifo', 'r', 0)

while isRunning == 1:
    manuever_raw = input_fifo.read()
    input_fifo.flush()
    try:
        doManuever = int(manuever_raw)
    except:
        doManuever = 0
    time.sleep(0.05)
    if doManuever != 0:
        # for going up
        start_time = time.time()
        max_time = 0.3
        while ((time.time() - start_time) < max_time) and isRunning == 1:
            GPIO.output(26, False)
            GPIO.output(19, True)
            time.sleep(0.005)
            GPIO.output(26, False)
            GPIO.output(19, False)
            time.sleep(0.002)
        start_time = time.time()
        max_time = 0.25
        while ((time.time() - start_time) < max_time) and isRunning == 1:
            GPIO.output(26, False)
            GPIO.output(19, True)
            time.sleep(0.004)
            GPIO.output(26, False)
            GPIO.output(19, False)
            time.sleep(0.002)
        time.sleep(1)
        # for going down
        start_time = time.time()
        max_time = 0.12
        while ((time.time() - start_time) < max_time) and isRunning == 1:
            GPIO.output(26, True)
            GPIO.output(19, False)
            time.sleep(0.01)
            GPIO.output(26, False)
            GPIO.output(19, False)
            time.sleep(0.01)

GPIO.cleanup()
#!/bin/bash
# launch the servo, lifter, and navigation scripts in the background,
# then run the tracking script in the foreground
echo "heY"
python /home/pi/sg90_v2.py &
python /home/pi/lifter_v2.py &
python /home/pi/robot_v2.py &
python /home/pi/stream_tracking.py