ECE5725: Mobile Item Sorter

A Project By Daniel Fayad and Neil Farnham.


Demonstration Video


Introduction

The Mobile Item Sorter is a robot capable of identifying, extracting, and sorting items by color. The mobile sorter uses a camera to detect and guide the robot to colored boxes on the floor and uses a paper scoop to lift them up to a sorting region in the robot’s chassis. The robot uses a Raspberry Pi located on top of the frame as its computer, running code written in Python to navigate and actuate the sorting and lifting motors.

Generic placeholder image
Figure 1. The robot is shown in the picture above. The Raspberry Pi can be seen on the top left, and the scooper is on the right in black.

Project Objectives:

  • Successfully detect small objects and identify their colors.
  • Implement the controls to navigate to the objects.
  • Pick them up and sort them into different bins.

Design

The design was divided into four main parts: tracking, navigation, collection, and sorting. Tracking uses the camera and computer vision libraries to detect the location and color of the object. Navigation uses a constant stream of location data to drive toward the object, continuously updating the PWM signals sent to the servos to keep moving in the object's direction. The collection mechanism is a paper scoop driven by a DC motor. The sorting mechanism uses a script to control a small servo motor that steers a deflection fin, guiding the blocks into the correct bin. The scripts communicate using FIFOs, with the navigation script acting as a central hub; every other script communicates only with the navigation script.
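
To make the inter-script plumbing concrete, the sketch below shows the general pattern we use: the writing side pushes a short token into a named pipe with a backgrounded echo, and the reading side polls the pipe and keeps only the latest value. The path and helper names here are illustrative; the production scripts in the Code Appendix follow the same idea with their own FIFO names.

#fifo_sketch.py -- minimal illustration of the FIFO messaging pattern (not the production code)
import os

FIFO_PATH = '/home/pi/video_fifo_x_red'   #illustrative path in the same style as our scripts

#writer side (e.g. the tracking script): non-blocking, backgrounded shell echo
def publish_x(pixel_x):
    os.system('echo A' + str(pixel_x) + ' > ' + FIFO_PATH + ' &')

#reader side (e.g. the navigation script): read whatever arrived, keep the newest token
def read_latest_x(fifo):
    raw = fifo.read()
    try:
        return int(raw.split('A')[-1])    #tokens look like "A<number>"
    except ValueError:
        return None                       #nothing useful in the pipe this cycle

if __name__ == '__main__':
    if not os.path.exists(FIFO_PATH):
        os.mkfifo(FIFO_PATH)
    publish_x(203)
    with open(FIFO_PATH, 'r') as fifo:
        print(read_latest_x(fifo))        #prints 203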

Tracking and Classifying

The tracking algorithm consists of a single Python script that uses the OpenCV computer vision library and NumPy. Data is collected from a tilted camera mounted at the front of the robot; the tilt lets the camera see up to 1.5 meters ahead. The camera mount is shown in Figures 2 and 3.
Generic placeholder image Generic placeholder image
Figure 2 and 3. Front and side view of the camera.
To detect the colors, the Python script needs a set of boundaries in the HSV color space that define our red and green colors. The ranges were determined by taking a picture of the objects we were trying to detect and then running Adrian Rosebrock's range-detector script (available here: https://github.com/jrosebr1/imutils/blob/master/bin/range-detector). An example of finding the upper and lower boundaries of a color is shown in Figure 4.
Generic placeholder image
Figure 4. Using the range-detector script to find the HSV color boundaries for the color red.

Once the boundaries have been found for each color, we iterate over each frame the camera captures: we resize the frame for faster processing, convert it to the HSV color space, and find the pixels within each color's range. To make the next step easier, we also erode and dilate the mask, which cleans it up so the contours of the objects are easier to find. Once we have found the contours, we iterate over each contour and calculate its center. This center is associated with the color that was detected first, and the value is then passed to the navigation script. We do all of this for red first and then green, so our tracking algorithm prioritizes red objects. To avoid accidentally detecting very small objects whose colors happen to fall within the red or green boundaries, we also define a minimum and maximum contour size and ignore any contour outside those limits. An example of the running script is shown in Figure 5.

Generic placeholder image
Figure 5. The tracking script identifying the location of the red object on the right and displaying it on the monitor.
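
As a condensed illustration of this pipeline (the full script, stream_tracking.py, is in the Code Appendix), the helper below runs the resize, HSV conversion, masking, erosion/dilation, and contour-center steps for a single color. The HSV bounds and area thresholds are the ones from our script, but the function itself is just an illustrative wrapper, not the code that runs on the robot.

#color_center_sketch.py -- condensed version of the per-frame detection step
import cv2
import imutils

r_Lower = (130, 22, 67)     #lower HSV bound used for red in stream_tracking.py
r_Upper = (250, 197, 255)   #upper HSV bound
MIN_THRESH = 80             #ignore tiny specks...
MAX_THRESH = 50000          #...and absurdly large blobs

def find_red_center(frame):
    frame = imutils.resize(frame, width=400)          #shrink for faster processing
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)      #work in HSV space
    mask = cv2.inRange(hsv, r_Lower, r_Upper)         #keep only in-range pixels
    mask = cv2.erode(mask, None, iterations=2)        #remove speckle noise
    mask = cv2.dilate(mask, None, iterations=2)       #grow the surviving blobs back
    cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in cnts:
        if MIN_THRESH < cv2.contourArea(c) < MAX_THRESH:
            M = cv2.moments(c)
            return (int(M["m10"]/M["m00"]), int(M["m01"]/M["m00"]))   #(cX, cY)
    return None                                       #no red object in this frame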

Testing and debugging the tracking script took a long time: determining which blurring, dilation, and erosion image processing techniques were adequate had to be done by trial and error. The PyImageSearch blog was a great resource for this part of the project; the following posts were especially helpful:

  • Detecting colors in a video stream: https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/
  • Computing the center of a contour: https://www.pyimagesearch.com/2016/02/01/opencv-center-of-contour/
  • Capturing frames continuously with the Raspberry Pi camera module: https://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv/


Navigation

The navigation logic consists of a script that receives the pixel locations of the colored blocks (through FIFOs) and converts them into actions for the robot to perform. The script is divided into three parts: a wandering stage, a tracking stage, and a capture stage. Starting in the wander stage, if the robot detects a red or green object, control passes to the tracking stage, which guides it to the object. If the tracking algorithm successfully guides the robot to the item, the tracking stage transfers control to the capture stage. If the tracking stage loses sight of the object for too long, control reverts back to the wander stage.

The wandering stage is the robot's default state: the algorithm polls the computer vision output stream for possible detections while iterating through a list of motor commands. Because erroneous color sightings are possible, the logic implements a sighting confidence level for each color, ranging from 0 to 3. Every time the camera detects a red or green object, the system increases that color's confidence level by 1; once it reaches maximum confidence, control passes to the tracking logic. If the system does not see an object, the confidence level drops by 1.
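
The confidence bookkeeping itself is only a clamped counter. A minimal sketch, with illustrative names, looks like this:

#confidence_sketch.py -- the sighting-confidence counter used while wandering
MAX_CONFIDENCE = 3

def update_confidence(confidence, object_seen):
    #go up one on a sighting, down one on a miss, clamped to [0, MAX_CONFIDENCE]
    confidence += 1 if object_seen else -1
    return max(0, min(MAX_CONFIDENCE, confidence))

#the wander loop hands control to the tracking stage only at full confidence, e.g.
#if update_confidence(conf, saw_red) == MAX_CONFIDENCE: enter_tracking('red')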

The tracking stage is the state in which the robot actively tries to approach the detected object. Pulling from the same video stream as the wandering state, the logic attempts to center the object in view while also moving forward. In this state, the loop passes the color to the sorter motor control script, allowing the fin to deflect in the proper direction. Essentially, if the object is on the left side of the view, the robot turns slightly left, and similarly for the right side; if the object is in the center of the view, it moves forward, and if it cannot see the object it moves backwards. Using the same confidence method as the wandering state, except on a scale from 0 to 5, the robot verifies after every movement whether it has sufficient confidence to continue tracking the object. If the confidence level falls to zero, it returns to the wander state. Over several successful iterations of the tracking loop, we expect the object to end up directly in front of the scooper. If the object is found within a small defined area in front of the scooper twice in a row, the tracking stage passes control to the capture stage.
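
The steering decision reduces to comparing the object's horizontal pixel position against two bounds. The sketch below mirrors the bounds in robot_v2.py (on the 400-pixel-wide, inverted camera image, where a small pixel value triggers a right turn); the command strings are illustrative.

#steering_sketch.py -- mapping the object's pixel position to a drive command
xTrackL_bound = 165   #same bounds as robot_v2.py
xTrackR_bound = 210

def steering_command(centerX):
    if centerX is None:
        return 'backward'        #lost sight: back up so the object leaves the blind spot
    if centerX < xTrackL_bound:
        return 'turn right'      #image is inverted, so the left of the frame is the robot's right
    if centerX < xTrackR_bound:
        return 'forward'         #roughly centered: drive at it
    return 'turn left'

print(steering_command(180))     #prints 'forward'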

The capture stage is the state in which the robot attempts to capture the object. It has no real control logic; instead, it issues a series of timed commands that push the item onto the scooper and then raise the scooper. Once the sequence has finished, the robot waits briefly, resets the logic to its default state, and returns control to the wander stage.
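
Since this is purely a timed sequence, it can be summarized in a few lines. In the sketch below, drive_full_speed() and raise_scooper() are hypothetical stand-ins for the actual PWM commands and the echo into the lifter FIFO; the delays follow robot_v2.py.

#capture_sketch.py -- the timed capture maneuver
import time

def capture(drive_full_speed, raise_scooper):
    drive_full_speed()    #ram the object onto the scooper
    time.sleep(1)
    raise_scooper()       #tell the lifter script to run its lift-and-lower cycle
    time.sleep(3)         #give the scooper time to finish
    #control then falls back to the wander stage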

Overall, the implementation of the navigation code was quite straightforward. Aside from some fine-tuning of the left and right bounds on the camera output (i.e., at what point the script should decide to turn left or right rather than drive forward), there were no significant difficulties in its creation.

Nevertheless, there was an issue with blind spots created by the chassis. Originally, the code called for the robot to keep moving forward even when it could not detect the object, on the assumption that it was simply having a hard time seeing a far-away object; however, this rarely occurred. It was far more likely that the camera had lost the object because it had entered a blind spot created by the front forks, and moving forward only made things worse. Making the robot move backwards instead allowed it to bring the object back into view and adjust accordingly.

Generic placeholder image
Figure 6: Flow chart of navigation logic.

Collecting

The collecting portion of the design consists of the paper scooper, the dc motor, motor controller, and the control logic.

The paper scooper was designed to present a flat scooping surface when lowered and a downward-curving slope when raised. The downward slope was necessary to give the object enough energy to slide through the sorting section. A continuous-rotation DC motor controlled the orientation of the scooper. The motor was powered by a 6 V battery and controlled through a half-H bridge, which required two GPIO pins. At 6 V the motor was far too strong and ended up tossing the object onto the bench across the aisle; at 4.5 V, however, it was not strong enough to pick up the items. To compensate, the motor was fed a pulsed control signal with varying high and low times, producing motion slow enough not to toss the object but strong enough to actually pick it up.

The control script is essentially an infinite loop containing a conditional section with a series of timed commands. Within the conditional, the script pulses the appropriate GPIO pins to generate the intermittent signal required for the slow, high-torque motion needed for lifting.
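
As a rough sketch of that pulsing pattern (the real sequence lives in lifter_v2.py), the helper below drives one side of the half-H bridge in short bursts; the pin numbers match the appendix code, while the on/off times shown are only representative of the values we tuned by hand.

#pulsed_lift_sketch.py -- slow, high-torque motion by pulsing the H-bridge input
import time
import RPi.GPIO as GPIO

LIFT_PIN = 19     #H-bridge input used for lifting (as in lifter_v2.py)
LOWER_PIN = 26    #H-bridge input used for lowering

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIFT_PIN, GPIO.OUT)
GPIO.setup(LOWER_PIN, GPIO.OUT)

def pulsed_lift(duration, on_time=0.005, off_time=0.002):
    #full battery voltage during each burst (high torque), coasting in between (low average speed)
    start = time.time()
    while time.time() - start < duration:
        GPIO.output(LOWER_PIN, False)
        GPIO.output(LIFT_PIN, True)
        time.sleep(on_time)
        GPIO.output(LIFT_PIN, False)
        time.sleep(off_time)

if __name__ == '__main__':
    try:
        pulsed_lift(0.3)   #raise the scooper with ~0.3 s of pulsed drive
    finally:
        GPIO.cleanup()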

Generic placeholder image Generic placeholder image
Figures 7 and 8: Figure 7 shows a side view of the scooper assembly; Figure 8 shows the circuit implementation. The motor is located below the H-bridge configuration shown.

Because of the open-loop nature of the script (everything is timed), the motor's motion was very sensitive to the battery voltage: the timing of the signal pulses to the H-bridge only worked over a small range of battery potential. Over the testing period, retuning the control was quite common, generally twice per hour. Fortunately, due to the nonlinear nature of the resistive forces on the scooper, it was possible to set the motor to move slightly faster than desired and let friction from the floor or the misaligned axle stop its motion. Even so, controlling the motion was still difficult.

Generic placeholder image
Figure 9: Circuit diagram of the H-bridge. Pin 7 (2A) is wired to GPIO 26, while pin 2 is wired to GPIO 6. Vcc is held at the battery voltage.

Originally, the design called for a conveyor belt to bring the items to the sorting stage, with a continuously spinning swiper in front to load items onto the belt. With a laser-cut acrylic frame, carved wooden dowels, steel axles, bronze bearings, and a plastic belt, the conveyor system would transport items from the floor to the sorting region and into the bins. Power would be transferred throughout the system by the conveyor belt and rubber bands between axles, with a single power input point on the sorting conveyor. However, due to excessive slippage in the belt material and axle misalignments, not enough power was generated and transmitted to run the system. The entire idea had to be scrapped, and some of the parts were scavenged for the scooper method. Primarily, the continuous DC motor that was going to drive the belt became the motor that drives the scooper, and the sorting conveyor section had its belt removed and was tilted upwards so items could slide through. This proved to be a very large setback in the construction of the robot, leading to the half-paper design currently implemented.

Sorting

The sorting portion of the design consists of a paper deflection fin, an SG90 servo motor, and the control logic. The sorting stage uses the kinetic energy of the item to carry it through the region and into the correct bin. The fin has three orientations: a neutral orientation that does not favor either direction, and left and right orientations for the red and green bins respectively. When the robot is tracking a red object, the navigation script signals the sorting controller to deflect the object into the red bin, and likewise for green objects; during the wandering stage, the fin is set to the neutral position.

The SG90 servo requires a pulse-width-modulated (PWM) signal to set its orientation. The motor pulls power directly from the battery pack, and its control line is connected directly to a GPIO pin. The control script consists of a continuous loop that constantly polls the input FIFO for direction changes; if a change is detected, the script moves the paper deflection fin to match the new signal.

Overall, the sorting portion of the design worked well, provided the collecting stage did its job; occasionally an item would get stuck in the sorting stage because it did not have enough energy to slide through. Originally, a conveyor belt was going to transport items through the sorting stage, but for the reasons described earlier that idea had to be scrapped. This was a large setback during the construction phase, initiating a near-complete redesign of the system. The sorting section frame was salvaged, though, and the SG90 servo configuration remained essentially untouched.
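
For reference, the fin positioning boils down to picking one of three pulse widths on a standard 20 ms servo frame. The sketch below uses the same pin and pulse widths as sg90_v2.py; treat the exact millisecond values as numbers tuned for our fin rather than SG90 constants.

#fin_sketch.py -- commanding the three deflection-fin positions
import RPi.GPIO as GPIO

FIN_PIN = 16          #PWM control line to the SG90 (as in sg90_v2.py)
PERIOD = 0.020        #20 ms servo frame

GPIO.setmode(GPIO.BCM)
GPIO.setup(FIN_PIN, GPIO.OUT)
pwm = GPIO.PWM(FIN_PIN, 1/PERIOD)        #50 Hz
pwm.start(100*0.0015/PERIOD)             #start near the neutral pulse width

def set_fin(pulse_ms):
    #convert a pulse width in milliseconds to a duty cycle percentage
    pwm.ChangeDutyCycle(100*(pulse_ms/1000.0)/PERIOD)

POSITIONS = {'green': 1.05, 'neutral': 1.225, 'red': 1.40}   #pulse widths (ms) from sg90_v2.py

set_fin(POSITIONS['red'])    #deflect the fin toward the red bin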

Future Work

Due to the hasty construction and design of many elements of the project, there are many improvements that could be made.

In navigating the robot towards items, there was only one closed-loop control structure: the camera and navigation motor system. While this certainly works, it has to run quite slowly to give reliable navigational input. By implementing a closed-loop control structure on the wheel velocity specifically, the motion of the robot could be made far more predictable and smooth, helping to remove some of the innate heaving as well. Furthermore, having reliable and predictable robot motion opens up the possibility of more advanced navigational techniques.

With predictable robot motion and reliable camera data, it may be possible to build a new navigation system around a Kalman filter. Because there is a nearly one-to-one mapping between positions in the camera image and locations on the floor, and because the robot's motion would be predictable, it is possible to predict where an object will appear in the camera frame after a given maneuver. A control structure that weighs both the measured data and the model's prediction yields an adaptive controller that can handle erroneous data from the camera. Furthermore, the predictive nature of this kind of controller would allow estimates to be made between frames of the video stream; this sort of data interpolation opens the possibility of a more continuous servo controller.
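
As a concrete (but unimplemented) illustration of what that controller might look like, the sketch below runs one predict/update cycle of a one-dimensional Kalman filter on the object's horizontal pixel position; every matrix value here is an assumption chosen for the example, not a measured parameter of the robot.

#kalman_sketch.py -- one predict/update step for the proposed navigation upgrade
import numpy as np

x = np.array([200.0, 0.0])      #state: [pixel position, pixel velocity per frame]
P = np.eye(2)*50.0              #estimate covariance
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])      #constant-velocity model over one frame
Q = np.eye(2)*1.0               #process noise (uncertainty in the robot's motion)
H = np.array([[1.0, 0.0]])      #the camera measures position only
R = np.array([[25.0]])          #measurement noise of the vision pipeline

def predict():
    global x, P                 #where the model expects the object after the maneuver
    x = F.dot(x)
    P = F.dot(P).dot(F.T) + Q

def update(measured_px):
    global x, P
    y = measured_px - H.dot(x)                  #innovation: camera vs. model
    S = H.dot(P).dot(H.T) + R
    K = P.dot(H.T).dot(np.linalg.inv(S))        #Kalman gain: how much to trust the camera
    x = x + K.dot(y)
    P = (np.eye(2) - K.dot(H)).dot(P)

predict()          #between frames, rely on the model
update(205.0)      #when a frame arrives, blend in the measurement
print(x)           #fused estimate of position and velocity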

Because the robot's planar motion is decoupled, a navigation servo control system could command a forward velocity and a turn simultaneously. This would be difficult to fit into the current system given the discrete nature of its command inputs, but with upgraded navigation logic that allows for data prediction (like the Kalman filter) and actual velocity control on the wheels, it may be possible to continuously vary the PWM signals to the navigation motors and generate smooth, fast robot motion. Essentially, the controller would overlap velocity and steering commands into a continuously varying PWM signal for each motor.
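
A simple way to picture this mixing is to compute both wheel pulse widths from a forward command and a turn command at the same time. The sketch below is only illustrative; the 1.5 ms neutral point and the mirrored left/right convention follow the continuous-rotation servo commands used in robot_v2.py, while the 0.2 ms span is an assumed limit.

#mixer_sketch.py -- blending forward speed and turn rate into two servo pulse widths
NEUTRAL_MS = 1.5    #pulse width that stops a continuous-rotation servo
SPAN_MS = 0.2       #assumed maximum deviation around neutral

def clamp(v):
    return max(-1.0, min(1.0, v))

def mix(forward, turn):
    #forward and turn are in [-1, 1]; the right wheel is mirrored, as on the robot
    left_ms  = NEUTRAL_MS + SPAN_MS*clamp(forward + turn)
    right_ms = NEUTRAL_MS - SPAN_MS*clamp(forward - turn)
    return left_ms, right_ms

print(mix(0.5, 0.0))   #straight ahead at half speed
print(mix(0.5, 0.3))   #forward while gently turning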

With regards to the scooper, the open-loop nature of the controller meant that the motor had unreliable velocity and position control; implementing a closed-loop control structure here could greatly improve the scooper's performance and would significantly lessen the impact of voltage changes on the system. Feedback could come from either an optical wheel encoder or a potentiometer, the latter being the better choice due to its continuous and more accurate positional data. However, such a system would almost certainly require an analog-to-digital converter to be useful, adding significant complexity to the design.


Results

Tracking and Classification

Our computer vision system was fairly robust at detecting small objects, reliably detecting them within a window of 26 inches in front of the robot and 20 inches to either side, as long as the room was evenly lit. We noticed that glare and shadows led to poor performance, shrinking the detection window and sometimes even forcing us to readjust the color boundaries. We also noticed that red was easier for our system to detect than green, probably because of the color contrast or the chosen range boundaries, and that the system worked best against a uniform, dark background.

Generic placeholder image Generic placeholder image
Figures 10 and 11 show the maximum range of reliable detection and classification for our tracking system. The black background seen in the pictures is also the one that led to the best performance.

Navigation

The navigation system worked sufficiently well. While it was a bit slow and the control scheme led to jerky movement, it reliably positioned the object directly in front of the scooper, allowing the capture stage of the code to execute. Despite the temporary issue of objects ending up in blind spots and the necessary tuning of the motor commands, there were few issues with the navigation. That said, likely because the motors were not properly calibrated, the craft tended to heave to the left, generally causing the item to end up on the forward/right boundary of the camera image.

Collecting and Sorting

Overall, the collecting system did not work particularly well. Primarily, the robot had a difficult time getting the objects fully into the scooper. It was rare for the system to properly pick up the object and let it slide down into the sorting region; however, if the item was pushed in manually, the system worked.
The sorting system worked quite well despite the issues with the scooper. When an item was pushed into the scooper properly, it reliably slid down through the sorting stage and was deflected into the proper bin.

Work Distribution

Daniel (contact at dhf63@cornell.edu)

In charge of the Raspberry Pi and Camera Module interaction. Wrote the script to detect colors, mark their contours, identify their centers, and track them continuously. Also helped with testing, debugging, and tuning the robot. Additionally built the website.

Neil (contact at naf48@cornell.edu)

Created the navigation algorithm, the collecting and sorting mechanisms, and the scripts to control the actuators. Constructed the robot and created the FIFO based communication system. Tested and debugged the robot.


Parts List

Generic placeholder image

References

PiCamera Document
Tower Pro Servo Datasheet
Bootstrap
OpenCV Library
R-Pi GPIO Document

Code Appendix

stream_tracking.py


#import packages
from collections import deque
import numpy as np
import argparse
import imutils
import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import os
import RPi.GPIO as GPIO
import subprocess

GPIO.setmode(GPIO.BCM)
GPIO.setup(27, GPIO.IN, pull_up_down = GPIO.PUD_UP)

isRunning = 1

def GPIO27_cb(channel):
    print("quitting feed")
    global isRunning
    isRunning = 0

GPIO.add_event_detect(27,GPIO.FALLING, callback=GPIO27_cb,bouncetime = 300)

#initialize camera
camera = PiCamera()
camera.resolution=(640,480)
camera.framerate=24
rawCapture = PiRGBArray(camera, size=(640,480))


#define HSV boundaries for the red and green objects, then initialize list of tracked points
#y_Lower=(0,119,84)
#y_Upper=(223,208,155)

r_Lower=(130,22,67)
r_Upper=(250,197,255)

g_Lower=(37,32,119)
g_Upper=(158,133,255)

#t_Lower=(7,58,148)
#t_Upper=(125,255,185)

#pts=deque(maxlen=args["buffer"])
pts=deque(maxlen=16)

#allow camera to warmup
print("warming up...")
time.sleep(2.5)

#capture frames from camera
for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    #global isRunning
    #if isRunning == 0:
        #print("HEY")
        #break
    #grab the NumPy array representing the image and initialize
    frame=f.array

    #resize the frame, blur it, and convert it to the HSV color space
    frame=imutils.resize(frame, width=400)
    #blurred = cv2.GaussianBlur(frame, (11,11),0)
    hsv=cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)

    #FIND RED--------------------------------
    #construct a mask for the color red, then perform
    #a series of dilations and erosions to remove any small blobs left in the mask
    mask_R=cv2.inRange(hsv, r_Lower, r_Upper)
    mask_R=cv2.erode(mask_R, None, iterations=2)
    mask_R=cv2.dilate(mask_R,None, iterations=2)

    #FIND CONTOURS IN RED
    #find contours in the mask and initialize the current (x,y) center of the object
    cnts_R=cv2.findContours(mask_R.copy(),cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)[-2]
    center=None

    #loop over RED contours
    MIN_THRESH=80
    MAX_THRESH=50000
    x = ''
    y = ''
    for c in cnts_R:
        #MIN_THRESH=50
        #MAX_THRESH=50000
        if cv2.contourArea(c)>MIN_THRESH and cv2.contourArea(c)<MAX_THRESH:
            #compute the center and bounding box of the contour
            M=cv2.moments(c)
            cX=int(M["m10"]/M["m00"])
            cY=int(M["m01"]/M["m00"])
            (x,y,w,h)=cv2.boundingRect(c)
            #os.system('echo ' + str(cX) + ' > test_fifo')
            #draw on frame
            cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)
            cv2.circle(frame, (cX,cY),7,(255,255,255),-1)
            cv2.putText(frame, "center red", (cX-20,cY-20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255),2)

    os.system('echo A' + str(x) + ' > /home/pi/video_fifo_x_red &') # sending stuff to fifos
    os.system('echo A' + str(y) + ' > /home/pi/video_fifo_y_red &')
    #FIND GREEN-----------------------------------        
    mask_G=cv2.inRange(hsv, g_Lower, g_Upper)
    mask_G=cv2.erode(mask_G, None, iterations=2)
    mask_G=cv2.dilate(mask_G, None, iterations=2)

    #find contours in green
    cnts_G=cv2.findContours(mask_G.copy(),cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)[-2]

    x = ''
    y = ''
    #loop over GREEN contours
    for c in cnts_G:
        if cv2.contourArea(c)>MIN_THRESH and cv2.contourArea(c)<MAX_THRESH:
            #compute the center and bounding box of the contour
            M=cv2.moments(c)
            cX=int(M["m10"]/M["m00"])
            cY=int(M["m01"]/M["m00"])
            (x,y,w,h)=cv2.boundingRect(c)
            #draw on frame
            cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)
            cv2.circle(frame, (cX,cY),7,(255,255,255),-1)
            cv2.putText(frame, "center green", (cX-20,cY-20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255),2)

    os.system('echo A' + str(x) + ' > /home/pi/video_fifo_x_green &') # sending stuff to fifos
    os.system('echo A' + str(y) + ' > /home/pi/video_fifo_y_green &')   

    cv2.imshow("Feed",frame)

    key=cv2.waitKey(1)&0xFF
    if key==ord("q"):
        break
    if isRunning == 0:
        break
    rawCapture.truncate(0)

    #if len(cnts)>0:
        #find largest contour in the mask then use it to compute 
        #the minimum enclosing circle and centroid
        #c=max(cnts, key=cv2.contourArea)
        #((x,y), radius)=cv2.minEnclosingCircle(c) #CHANGE THIS TO RECTANGLE LATER!!!!
        #M=cv2.moments(c)
        #center=(int(M["m10"]/M["m10"]), int(M["m01"]/M["m00"]))

        #only proceed if the radius meets a minimum size
        #if radius>10:
            #draw the circle and centroid on the frame,
            #then update the list of tracked points
        #    cv2.circle(frame,(int(x),int(y)), int(radius),
        #            (0,255,255),2)
        #    cv2.circle(frame, center, 5, (0,0,255), -1)

        # update the points queue
        #pts.appendleft(center)

        #loop over the set of tracked points
        #for i in xrange(1,len(pts)):
            #if either of the tracked points are None, ignore them
            #if pts[i-1] is None or pts[i] is None:
                #continue

            #otherwise, compute the thickness of the line and draw the connecting lines
            #thickness=int(np.sqrt(args["buffer"]/float(i+1))*2.5)
            #cv2.line(frame, pts[i-1],pts[i], (0,0,255), thickness)

#cleanup the camera and close any open windows
camera.close()
cv2.destroyAllWindows()
              

sg90_v2.py


#sg90_v2.py
#to control sg90_v2 w/ fifo support
import RPi.GPIO as GPIO
import time
import subprocess
import os

GPIO.setmode(GPIO.BCM)
GPIO.setup(16, GPIO.OUT)
GPIO.setup(27, GPIO.IN, pull_up_down = GPIO.PUD_UP)

T = 0.020
sg90_pwm = GPIO.PWM(16, 1/T) #GPIO.PWM expects a frequency in Hz (50 Hz for the 20 ms servo period), not the period itself
sg90_pwm.start(100*0.0015/T)

isRunning = 1
cur_pos = 0
def change_pwm(sig):
    global sg90_pwm
    sig = sig/1000

    sg90_pwm.ChangeFrequency(1/T)
    sg90_pwm.ChangeDutyCycle(100*sig/T)

def GPIO27_cb(channel):
    print("quitting sg90")
    global isRunning
    isRunning = 0

GPIO.add_event_detect(27,GPIO.FALLING, callback=GPIO27_cb,bouncetime = 300)

os.system('echo 0 > /home/pi/sg90_fifo &')
input_fifo = open('sg90_fifo','r',0)

change_pwm(1.225)
while isRunning == 1:
    sg90_pos_raw = input_fifo.read()
    input_fifo.flush()

    try:
        sg90_pos = int(sg90_pos_raw)
    except:
        sg90_pos = cur_pos

    if sg90_pos != cur_pos:
        cur_pos = sg90_pos

        if sg90_pos == 1: # to green bin
            change_pwm(1.05)
            print('to green bin')
        elif sg90_pos == -1: # to red bin
            change_pwm(1.40)
            print('to red bin')
        else:
            change_pwm(1.225)
            print('default')
    time.sleep(0.05)
sg90_pwm.stop()
GPIO.cleanup()
              

robot_v2.py


#robot_v2
#robot modified
import RPi.GPIO as GPIO
import time
import subprocess
import os
import re

GPIO.setmode(GPIO.BCM)
GPIO.setup(5, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

left = GPIO.PWM(5,1/0.0215) #GPIO.PWM expects a frequency in Hz (~46.5 Hz for the 21.5 ms period used below)
right = GPIO.PWM(13,1/0.0215)

sig_s = 0.0015
base_s = 0.02
left.start(sig_s*100/0.0215)
right.start(sig_s*100/0.0215)
isRunning = True

def change_pwm(sig,side):
    global left
    global right
    sig = sig/1000

    if side == 1:
        T = base_s + sig
        right.ChangeFrequency(1/T)
        right.ChangeDutyCycle(sig*100/T)
    if side == 2:
        T = base_s + sig
        left.ChangeFrequency(1/T)
        left.ChangeDutyCycle(sig*100/T)

def GPIO27_cb(channel):
    global isRunning
    print('quitting robot')
    isRunning = False

GPIO.add_event_detect(27,GPIO.FALLING, callback=GPIO27_cb,bouncetime = 300)

# actual loop stuff
change_pwm(1.5,1)
change_pwm(1.5,2)

#opening fifo
input_fifo_redx = open('video_fifo_x_red','r',0)
input_fifo_redy = open('video_fifo_y_red','r',0)
input_fifo_greenx = open('video_fifo_x_green','r',0)
input_fifo_greeny = open('video_fifo_y_green','r',0)

iTR = False
iTG = False
Cap = False

step_wander = 1
spot_red = 0
spot_green = 0
inCap_range = 0

while isRunning:
    set_s = False

    spot_red = 0 # need to be zero entering wander stage
    spot_green = 0
    
    os.system('echo 0 > /home/pi/sg90_fifo &')
    # WANDERING
    while (not iTR) & (not iTG)  & (not Cap) & (isRunning):
        print('in wander')
        set_time = -1
        max_time = 0.0

        if (step_wander == 1) & (not set_s): # move fwd 0.5 sec
            change_pwm(1.55,1)
            change_pwm(1.45,2)

            max_time = 0.1
            set_s = True
            set_time = time.time()

        if (step_wander == 2) & (not set_s): # move left 0.1 sec
            change_pwm(1.45,1)
            change_pwm(1.45,2)

            max_time = 0.1
            set_s = True
            set_time = time.time()

        if (step_wander == 3) & (not set_s): # move right 0.1 sec
            change_pwm(1.45,1)
            change_pwm(1.45,2)

            max_time = 0.1
            set_s = True
            set_time = time.time()
        if (time.time() - set_time) > max_time:
            set_s = False
            step_wander = step_wander + 1
        if step_wander > 2:
            step_wander = 1

        time.sleep(0.3) # sleep for refresh rate


        # pull from fifos, just red for now
        # repeat for green
        red_input_xs = input_fifo_redx.read()
        red_input_ys = input_fifo_redy.read()
        green_input_xs = input_fifo_greenx.read()
        green_input_ys = input_fifo_greeny.read()
        input_fifo_redx.flush()
        input_fifo_redy.flush()
        input_fifo_greenx.flush()
        input_fifo_greeny.flush()

        try: # for reds
            red_xs_list = re.split("A+",red_input_xs)
            red_centerX = int(red_xs_list[-1])
            
            spot_red = spot_red + 1
            print('I see something... red!')
        except:
            red_centerX = 1000
            red_centerY = 1000
            spot_red = spot_red - 1

        try: # for greens
            green_xs_list = re.split("A+",green_input_xs)
            green_centerX = int(green_xs_list[-1])
            
            spot_green = spot_green + 1
            print('I see something... green!')
        except:
            green_centerX = 1000
            green_centerY = 1000
            spot_green = spot_green - 1

        if spot_red < 0:
            spot_red = 0
        if spot_green < 0:
            spot_green = 0

        #if the confidence counter is high enough (> 1), consider the object spotted
        if (spot_red > 1) & (not iTG):
            iTR = True
            print('going to red tracking...')
        if (spot_green > 1) & (not iTR):
            iTG = True
            print('going to green tracking...')
        print(' ')

    #TRACKING...
    while (iTR | iTG) & isRunning:
        
        if iTR:
            # give SG90 cmd to sort to red or green
            os.system('echo -1 > /home/pi/sg90_fifo &')
            print('Tracking RED')
        else:
            os.system('echo 1 > /home/pi/sg90_fifo &')
            print('Tracking GREEN')

        xTrackL_bound = 165 #from camera's perspective
        xTrackR_bound = 210
        
        red_centerX_raw = input_fifo_redx.read()
        red_centerY_raw = input_fifo_redy.read()
        green_centerX_raw = input_fifo_greenx.read()
        green_centerY_raw = input_fifo_greeny.read()
        input_fifo_redx.flush()
        input_fifo_redy.flush()
        input_fifo_greenx.flush()
        input_fifo_greeny.flush()
        
        spot_saturation = 6
        
        if iTR: # for RED
            try:
                red_xs_list = re.split("A+",red_centerX_raw)
                red_ys_list = re.split("A+",red_centerY_raw)
                red_centerX = int(red_xs_list[-1])
                red_centerY = int(red_ys_list[-1])
            
                spot_red = spot_red + 1
                print('tracking...')
            except:
                red_centerX = 1000
                red_centerY = 1000
                spot_red = spot_red - 1
                print('lost sight')

            centerY = red_centerY
            centerX = red_centerX
        
            if spot_red > spot_saturation:
                spot_red = spot_saturation

            if spot_red < 0: # lost sight of red
                spot_red = 0
                iTR = False
                print('going to wander...')
        else: # for GREEN
            try:
                green_xs_list = re.split("A+",green_centerX_raw)
                green_ys_list = re.split("A+",green_centerY_raw)
                green_centerX = int(green_xs_list[-1])
                green_centerY = int(green_ys_list[-1])
            
                spot_green = spot_green + 1
                print('tracking...')
            except:
                green_centerX = 1000
                green_centerY = 1000
                spot_green = spot_green - 1
                print('lost sight')
       
            centerY = green_centerY
            centerX = green_centerX

            if spot_green > spot_saturation:
                spot_green = spot_saturation

            if spot_green < 0: # lost sight of green
                spot_green = 0
                iTG = False
                print('going to wander...')

        #if centerX and is inside some box near lifter
        # 120 < y < 160 
        # 165 < x < 210
        if (centerY > 85) & (centerY < 125):
            if (centerX > 165) & (centerX < 212):
                inCap_range = inCap_range + 1
        else:
            inCap_range = 0
        
        if inCap_range > 1:
            iTR = False
            iTG = False
            Cap = True
            print('going to cap...')
        
        # maybe make more sophisticated...
        # if shape center is on left side, go right one step, inverted cam
        if centerX < xTrackL_bound:
            change_pwm(1.51,1)
            change_pwm(1.51,2)
            print('go right')
        # if shape center in center go fwd
        elif centerX < xTrackR_bound:
            change_pwm(1.47,1)
            change_pwm(1.53,2)
            print('go fwd')
        # else shape center is on right side, go left one step
        elif centerX < 480: # so basically if it exists
            change_pwm(1.48,1)
            change_pwm(1.48,2)
            print('go left')
        else:
            change_pwm(1.53,1)
            change_pwm(1.47,2)
            print("can't see, bkwd")
        print(' ')
        time.sleep(0.3)# run at refresh rate

    while Cap & isRunning:
        # essentially, pedal to the metal for 1 sec and then raise lifter, then pause and reset
        change_pwm(1.30,1)
        change_pwm(1.70,2)
        print('go go go go!')
        time.sleep(1)
        os.system('echo 1 > lifter_fifo &')
        time.sleep(3)
        Cap = False
        iTR = False
        iTG = False

left.stop()
right.stop()
GPIO.cleanup()
              

lifter_v2.py


#lifter_v2.py
#large servo cntr_v2 w/ fifo support
import RPi.GPIO as GPIO
import time
import subprocess
import os

GPIO.setmode(GPIO.BCM)
GPIO.setup(19, GPIO.OUT)
GPIO.setup(26, GPIO.OUT)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

isRunning = 1

def GPIO27_cb(channel):
    print("quitting big servo")
    global isRunning
    isRunning = 0

GPIO.add_event_detect(27,GPIO.FALLING, callback=GPIO27_cb,bouncetime = 300)


os.system('echo 0 > /home/pi/lifter_fifo &')
input_fifo = open('lifter_fifo','r',0)



while isRunning == 1:
    
    manuever_raw = input_fifo.read()
    input_fifo.flush()

    try:
        doManuever = int(manuever_raw)
    except:
        doManuever = 0

    time.sleep(0.05)


    if doManuever != 0:
        # for going up
        start_time = time.time()
        max_time = 0.3
        while( (time.time() - start_time) < max_time) & isRunning == 1:
            GPIO.output(26, False)
            GPIO.output(19, True )
            time.sleep(0.005)
            GPIO.output(26, False)
            GPIO.output(19, False)
            time.sleep(0.002)

        start_time = time.time()
        max_time = 0.25
        while( (time.time() - start_time) < max_time) & isRunning == 1:
            GPIO.output(26, False)
            GPIO.output(19, True )
            time.sleep(0.004)
            GPIO.output(26, False)
            GPIO.output(19, False)
            time.sleep(0.002)

        time.sleep(1)

        # for going down
        start_time = time.time()
        max_time = 0.12
        while( (time.time() - start_time) < max_time) & isRunning == 1:
            GPIO.output(26, True)
            GPIO.output(19, False )
            time.sleep(0.01)
            GPIO.output(26, False)
            GPIO.output(19, False)
            time.sleep(0.01)

GPIO.cleanup()
              

master_script.py


#!/bin/bash
#launch the controller scripts in the background and the tracker in the foreground
echo "heY"
python /home/pi/sg90_v2.py &
python /home/pi/lifter_v2.py &
python /home/pi/robot_v2.py &
python /home/pi/stream_tracking.py