Dimensioning Robot

May 18, 2018
Project By Audrey Fuhr (aef99) and Roanja Milo (rm692).


Demonstration Video


Introduction

Our team built and programmed a robot to measure and calculate the dimensions of a 3-D object. The robot is first configured through an interface that collects information about the object, such as whether it has a circular or rectangular cross-section. The robot then moves around the object, maintaining a constant distance from it, until it returns to the starting position. Post-processing creates an x-y plot of the traversed path and saves calculations, such as cross-sectional area, to a folder.

This project had many successful components. The frame and mechanical components, such as the encoders, functioned as intended. The robot was able to move around an object while maintaining a constant distance, although the motion was not entirely smooth. Furthermore, the robot did not always stop at the correct position after circling the object, so a manual end button was added. Post-processing was successful, producing accurate calculations and plots despite the challenges encountered in this step.


Project Objective

Create a robot that can determine the dimensions of a three-dimensional object. The robot will need to use sensors or a camera to help it move around the object. To trace the perimeter of the object, it should maintain a constant distance from the object, recall its starting location and the location of every step, and know when it has traveled all the way around the object.


Mechanical Design

The mechanical design used the components provided in Lab 3: the plastic base of the robot, two wheels, and a continuous-rotation servo for each wheel. We used a Raspberry Pi 3 powered by a power bank, and PWM signals to control the speed and direction of each wheel. A battery pack with 4 AA batteries was added to power the servos. We used an encoder wheel printout like the one shown, whose alternating black and white sections the optical encoder distinguishes between.

Generic placeholder image

Figure 1: Wheel Encoder Black and White Spokes
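
The PWM drive mentioned above can be sketched without hardware. In this sketch (the frequency and duty-cycle values are the calibrated ones from our scripts; `pulse_width_ms` is an illustrative helper, not part of our code), the duty cycle maps to a servo pulse width near the 1.5 ms neutral point of a Parallax continuous-rotation servo:

```python
# A continuous-rotation servo is commanded with a PWM signal whose pulse
# width sets speed and direction. The values below are the calibrated
# forward settings from our scripts; the two servos use mirrored duty
# cycles because they face opposite directions.

def pulse_width_ms(freq_hz, duty_pct):
    """Pulse width (ms) implied by a PWM frequency and duty cycle."""
    period_ms = 1000.0 / freq_hz
    return period_ms * duty_pct / 100.0

# At ~46.5 Hz, duty cycles of ~6.6% and ~7.1% give pulses of ~1.42 ms and
# ~1.54 ms, just either side of the servo's ~1.5 ms neutral point.
wheel_a = pulse_width_ms(46.685, 6.629)
wheel_b = pulse_width_ms(46.425, 7.149)
```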

We connected two encoders, each mounted close to and facing a wheel, to track the motion of the robot. The encoders were screwed onto the motor frame, approximately 2 cm from the wheels. One of our encoders worked only intermittently, so we had to use one 15 cm encoder and one 10 cm encoder. We resolved this issue by placing the 15 cm encoder farther from the wheel.

Generic placeholder image

Figure 2: 15 cm rotary encoder mounted to motor observing wheel rotation

We mounted an ultrasonic sensor on the front left of the robot to measure the distance from the robot to the object being measured. We chose a location near the front of the robot because the sensor determines when the robot needs to turn, and mounting it forward gives the wheels, which are farther back, enough time to respond.

Generic placeholder image

Figure 3: Sonar mounted on front and left side of the robot


Software and Post-Processing

We organized our code into modular blocks, with a function for each component, so that individual components could be tested whenever anything failed. We implemented encoders.py, sonar.py, and motion.py scripts, and finally a final.py script that incorporated code from the other three. We began by coding the optical encoders. The encoders used callbacks: each time the left or right encoder was triggered, an interrupt incremented the tick count for that wheel. We maintained right-count and left-count variables to track the total ticks each wheel traversed. Our encoders are guided by the following code:

Generic placeholder image

Figure 4: Encoder call-back function
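
The callback logic above can be sketched without the GPIO hardware as follows (the real version, registered with `RPi.GPIO.add_event_detect`, appears in the Code Appendix); the tick-to-distance conversion uses our 7.3 cm wheel diameter and 20 spokes per revolution:

```python
import math

DIAM_W = 7.3        # wheel diameter (cm)
TICKS_PER_REV = 20  # spokes on the encoder wheel printout

count_L = 0
count_R = 0

def on_left_tick(channel=None):   # signature mirrors an RPi.GPIO callback
    """Interrupt handler: one edge on the left encoder = one tick."""
    global count_L
    count_L += 1

def on_right_tick(channel=None):
    global count_R
    count_R += 1

def distance_cm(ticks):
    """Distance traveled for a given tick count."""
    return DIAM_W * math.pi * ticks / TICKS_PER_REV

# Simulate 20 edges on the left encoder: one full wheel revolution.
for _ in range(20):
    on_left_tick()
```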

Sonar readings were used to guide the robot's motion. The sonar logic allows the robot to maintain a set distance from the object: if the object is within the set distance band, the robot goes straight; if the distance falls below 10 cm, the robot turns left; otherwise, it turns right. This worked well for motion around circular cross-sections, but not for rectangular ones, because the sharp 90-degree corners occasionally caused the robot to run into the wall. We therefore added an option for analyzing rectangular shapes: if the sonar does not detect the object for multiple consecutive iterations, indicating a sharp corner, the robot turns more gradually. The sonar was set up using trigger and echo pins.

Generic placeholder image

Figure 5: Sonar set up
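
A hardware-free sketch of the sonar computation and steering thresholds described above (the 17150 cm/s factor and the 10 cm / 14.5 cm thresholds are the values from our code; the function names are illustrative):

```python
# HC-SR04 distance: the echo pin stays high for the sound pulse's
# round-trip time, so distance (cm) is duration times half the speed of
# sound (~34300 cm/s / 2 = 17150 cm/s).

def sonar_distance_cm(pulse_start, pulse_end):
    """Convert echo-pulse timestamps (seconds) to a distance in cm."""
    pulse_duration = pulse_end - pulse_start
    return round(pulse_duration * 17150, 2)

def steer(son_dist):
    """Steering decision from the latest sonar reading (cm)."""
    if 10 < son_dist < 14.5:
        return "straight"       # inside the target band
    elif son_dist < 10:
        return "left"           # too close: turn away from the object
    else:
        return "right"          # too far: turn back toward the object
```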

A key requirement of this project was determining the global coordinates of the robot at all times, so that it knows where it begins and where it ends. The only information used is the number of ticks triggered on each wheel in a given time interval. The global coordinates (x and y in the starting reference frame) are obtained by applying frame rotations at each time step. The following logic was used:

Generic placeholder image

Figure 6: Logic for determining global x, y and theta
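
The update in Figure 6 can be sketched as a pure function (constants taken from our code: 7.3 cm wheel diameter, 12.5 cm wheel base, 20 ticks per revolution; the function name is illustrative):

```python
import math

DIAM_W = 7.3                      # wheel diameter (cm)
WHEEL_BASE = 12.5                 # distance between wheels (cm)
M_COUNT = math.pi * DIAM_W / 20   # cm traveled per encoder tick

def update_pose(x, y, theta, diff_L, diff_R):
    """Dead-reckoning update from one 0.2 s interval of tick differences."""
    dist_L = diff_L * M_COUNT
    dist_R = diff_R * M_COUNT
    if diff_L == diff_R:                       # straight segment
        x += dist_L * math.cos(theta)
        y += dist_L * math.sin(theta)
    else:                                      # arc segment
        turn = (dist_R - dist_L) / WHEEL_BASE
        radius = WHEEL_BASE * (dist_R + dist_L) / 2.0 / (dist_R - dist_L)
        x += radius * (math.sin(theta + turn) - math.sin(theta))
        y -= radius * (math.cos(theta + turn) - math.cos(theta))
    theta += (dist_R - dist_L) / WHEEL_BASE
    if theta > math.pi:                        # keep theta in (-pi, pi]
        theta -= 2.0 * math.pi
    elif theta < -math.pi:
        theta += 2.0 * math.pi
    return x, y, theta
```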

The differences in counts within each 0.2 s interval are defined as diff_L and diff_R. The distances traveled in that time, dist_L and dist_R, are then computed from the wheel diameter and the 20 spokes per revolution. If the left and right tick differences are equal, the robot has traveled in a straight line; otherwise, it has turned, and arc-length equations determine the new x and y positions. From the x and y coordinates, the area of the shape is then computed using the following equation:

Generic placeholder image

Figure 7: Area formula obtained from coordinates of a polygon
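
Figure 7 is the shoelace formula for the area of a polygon. A minimal implementation over the recorded path coordinates (our main.py computes the same sum term by term):

```python
def polygon_area(xs, ys):
    """Shoelace area of the closed polygon through (xs[i], ys[i])."""
    n = len(xs)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n          # wrap around to close the polygon
        s += xs[i] * ys[j] - xs[j] * ys[i]
    return abs(s) / 2.0
```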

Using the x, y, and theta arrays, post-processing was then completed. After the robot returns to its initial position, the post-processing interface appears on the piTFT screen. One option is to view the completed calculations: clicking this button shows the user the area (in cm^2), computed from the x, y, and theta arrays. Since the robot travels about 10 cm from the object, we account for this offset to find the area of the object itself. For a circular object, we recover the radius from A = pi*r^2, where A is the measured area. For a rectangular object, the length and width are found from the spans (maximum minus minimum) of the y and x coordinates, respectively.
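
The dimension recovery described above can be sketched as (function names are illustrative; the formulas are the ones used on our calculation screen):

```python
import math

def circle_radius(area):
    """Radius from a measured area, via A = pi * r^2."""
    return math.sqrt(abs(area) / math.pi)

def rect_dimensions(xs, ys):
    """(length, width) from the spans of the y and x path coordinates."""
    return max(ys) - min(ys), max(xs) - min(xs)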

Another button on the post-processing screen lets the user view the x-y plot of the path the robot traversed. The plot is created using the matplotlib library.
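
The plotting step is roughly as follows (a condensed sketch of the plotting code in the Code Appendix; the filename and function name are illustrative):

```python
import matplotlib
matplotlib.use('Agg')            # headless backend, as needed on the piTFT
import matplotlib.pyplot as plt

def save_path_plot(xs, ys, filename='plot1.png'):
    """Save the traversed x-y path as a PNG for the geometry screen."""
    plt.figure(figsize=(3, 3))
    plt.title('Geometry')
    plt.xlabel('x-axis')
    plt.ylabel('y-axis')
    plt.plot(xs, ys, color='b', linestyle='--', marker='o', label='geometry')
    plt.legend(loc='upper left')
    plt.savefig(filename, format='png')
    plt.close()
```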

Generic placeholder image

Figure 8: Geometry screen options

Generic placeholder image

Figure 9: Area and radius display screen

Generic placeholder image

Figure 10: Final screen displaying options for viewing geometry or selecting a new object


Testing

To begin, we tested each component of our hardware and software individually with a base code or oscilloscope.

The motion of the robot’s wheels was tested using a separate script. This allowed the robot to move in a relatively straight path with the wheels moving at similar speeds. It also helped us find the optimal wheel speeds during straight and turning motions, by tuning the frequency and duty cycle for each wheel and direction of motion.

Furthermore, we tested the sensor at several stages of the design process. We first tested the sensor separately by placing objects at known distances and checking the calculated values. Once the robot’s wheels were working and the motion code was added, we tested how the robot moved around different objects, observing whether the motion was smooth and whether it maintained a consistent distance to the object. One challenge was objects with sharp corners, such as rectangles, because the robot would initially turn too soon and end up too close to the next surface. We therefore added code to detect such edges and react appropriately.

Properly placing the encoders was a challenge in this project because they must sit at a very specific angle and position relative to the wheels. Initially, we placed the encoders at an arbitrary distance and position near each wheel. Once the robot and code were first completed, we could detect errors between the encoder measurements and the distances actually traveled by the robot. One fix was to paste printed patterns onto the wheels, giving the encoders more rising and falling edges to detect than the wheel spokes alone. We then used an oscilloscope to refine the placement: each encoder was positioned where turning the wheel at a constant speed produced a symmetrical, consistent signal under visual inspection. With this placement, a test run of about 50 cm gave much more accurate results. Testing the encoders was a major part of the testing for this project because of the accuracy required; this suggests that a redesign might be necessary to ensure the encoders cannot shift.

To test the robot’s accuracy in measuring dimensions, we primarily used a rectangular and a cylindrical object with known measurements. This let the team work initially under constrained, optimal conditions. Testing involved running the code many times to check that the robot moved accurately, stopped at the starting position, and post-processed the information correctly.


Results

The robot succeeds in moving as intended, stopping at the initial position, and measuring the dimensions of the cross-section of a 3-dimensional object. Further work could reduce the error, such as tightening the tolerance of the robot's motion and adding another sensor to the system.

The final deliverables include a map of the traveled path in addition to information about the shape traversed; for example, the robot reports the cross-sectional area of the shape. Data was saved as images of the path traversed by the robot. The following figure shows a successful iteration of the robot moving around a cylinder with a radius of approximately 40 cm.

Generic placeholder image

Figure 11: Successful iteration of robot motion around a circle of radius 40 cm

Some issues arose during the demo, including possible miscalibration of the encoders and temporary hardware malfunctions. At one point, a random malfunction caused the odometry to act up. The following image was supposed to represent the odometry of the robot traveling around a circle.

Generic placeholder image

Figure 12: Unsuccessful iteration of robot motion around a circle of radius 40 cm


Conclusion

Our robot successfully used encoders to track its location as it traversed a path. We learned that encoders are useful and accurate for determining both the distance the robot has moved and its location relative to its starting reference frame. We also learned that encoder placement is very important; however, once correct placement is achieved and held stable, the encoders work reliably.

We learned how to design an embedded operating system project at the big-picture level, in addition to implementing various electrical, mechanical, and software components.

In summary, even though there were challenges, we are very satisfied with the results of our dimensioning robot. It has practical applications that could be useful in real systems.


Future Work

There are many future steps that could add functionality to the robot. For example, the robot could be optimized to run at a faster speed. Additionally, the robot could measure the height of the object, allowing the height and volume to be displayed. This could be implemented using a camera that detects the change in depth and thereby determines the height of the object. Another method of determining height would be to sweep an ultrasound sensor on a standard rotating motor: once the sensor no longer detects the object, it knows it has reached the top of the object. Using a camera, the robot could also be programmed to measure several objects in a row and compare them.


Work Distribution

Generic placeholder image

Roanja Milo

rm692@cornell.edu

Focus on implementation of encoder and sonar.

Generic placeholder image

Audrey Fuhr

aef99@cornell.edu

Focus on processing of data and interface.


Audrey Contribution

Roanja Contribution


Parts List

Total: $87.00



Thank you

We would like to thank Professor Skovira and the course TAs for guiding us throughout this semester. We are glad to have had the opportunity to work with them to gain experience and knowledge of embedded operating systems.


References

“Implementing Dead Reckoning by Odometry on a Robot with R/C Servo Differential Drive,” An Introduction to Back-Propagation Neural Networks. [Online]. Available: http://www.seattlerobotics.org/encoder/200010/dead_reckoning_article.html. [Accessed: 19-May-2018].
“How to Build a Robot Tutorials,” Society of Robots. [Online]. Available: http://www.societyofrobots.com/sensors_encoder.shtml. [Accessed: 19-May-2018].
Chiel and Kerns, “SparkFun Logic Level Converter - Bi-Directional,” SEN-13266 - SparkFun Electronics, 01-Jan-1969. [Online]. Available: https://www.sparkfun.com/products/12009. [Accessed: 19-May-2018].
Ultrasonic Sensor - HC-SR04
Parallax Continuous Rotation Servo Motor Datasheet

Code Appendix: Motion.py


# Motion test script: drives both wheels via PWM while counting encoder ticks
import RPi.GPIO as GPIO
import time
import math

GPIO.setmode(GPIO.BCM) #set up GPIO

# Motors set up and start
GPIO.setup(13,GPIO.OUT)
GPIO.setup(6,GPIO.OUT)

freq1=46.685
freq2=46.425
dc1=6.629
dc2=7.149
pL=GPIO.PWM(13,freq1)
pR=GPIO.PWM(6,freq2)

pR.start(dc2) #where dc is the duty cycle (1<=dc<=100)
pL.start(dc1) #where dc is the duty cycle (1<=dc<=100)

# Encoder set up
GPIO.setup(26,GPIO.IN) #Right Encoder
GPIO.setup(19,GPIO.IN) #Left Encoder


count_R=0
count_L=0
diam_w= 7.3 # Wheel Diameter (cm)

#Encoder callbacks

def GPIO26_callback(channel):
    global count_R
    count_R=count_R+1
    print('count_R:' ,count_R)
    dist_R= diam_w*math.pi*(count_R/20.0)
    print('dist_R' ,dist_R)

def GPIO19_callback(channel):
    global count_L
    count_L=count_L+1
    print('count_L: ' ,count_L)
    dist_L= diam_w*math.pi*(count_L/20.0)
    print('dist_L' ,dist_L)


GPIO.add_event_detect(26, GPIO.BOTH, callback=GPIO26_callback, bouncetime=50)
GPIO.add_event_detect(19, GPIO.BOTH, callback=GPIO19_callback, bouncetime=50)




try:

    time.sleep(30)

except KeyboardInterrupt:
    GPIO.cleanup() # clean up GPIO on CTRL+C exit

encoder.py

# Encoder test script: counts edges from both optical encoders
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM) #set up GPIO

# Encoder set up
GPIO.setup(26,GPIO.IN) #Right Encoder
GPIO.setup(19,GPIO.IN) #Left Encoder


count_R=0
count_L=0

#Encoder callbacks

def GPIO26_callback(channel):
    global count_R
    count_R=count_R+1
    print('count_R:' ,count_R)

def GPIO19_callback(channel):
    global count_L
    count_L=count_L+1
    print('count_L: ' ,count_L)


GPIO.add_event_detect(26, GPIO.FALLING, callback=GPIO26_callback, bouncetime=100)
GPIO.add_event_detect(19, GPIO.FALLING, callback=GPIO19_callback, bouncetime=100)

try:

    time.sleep(30)

except KeyboardInterrupt:
    GPIO.cleanup() # clean up GPIO on CTRL+C exit

main.py

import pygame # Import pygame graphics library
from pygame.locals import *
import os
import RPi.GPIO as GPIO
import time
import numpy as np
import math


import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

start_time=time.time()

os.putenv('SDL_VIDEODRIVER', 'fbcon') # Display on piTFT
os.putenv('SDL_FBDEV', '/dev/fb1')
os.putenv('SDL_MOUSEDRV', 'TSLIB') # Track mouse clicks on piTFT
os.putenv('SDL_MOUSEDEV', '/dev/input/touchscreen')
pygame.init()
GPIO.setmode(GPIO.BCM) #set up GPIO

black=0,0,0
WHITE = 255, 255, 255
pygame.mouse.set_visible(False)
screen = pygame.display.set_mode((320, 240))
my_font = pygame.font.Font(None, 35)
start_button={'START': (160, 60)} # define start button
stop_button={'STOP': (160, 60)} # define start button
return_button={'return': (260, 60)} # define start button


red = pygame.image.load('/home/pi/Lab3/red.png')
red=pygame.transform.scale(red,(80,60))
green = pygame.image.load('/home/pi/Lab3/green.png')
green=pygame.transform.scale(green,(85,65))

redrect = red.get_rect()
greenrect = green.get_rect()
redrect.top=90
redrect.left=120
greenrect.top=90
greenrect.left=120
#screen.blit(red, redrect) # Combine Ball surface with workspace surface
screen.blit(green, greenrect)

for my_text, text_pos in start_button.items():
    text_surface = my_font.render(my_text, True, WHITE)
    rect = text_surface.get_rect(center=text_pos)
    screen.blit(text_surface, rect)
    pygame.display.flip()
global a
a=1
dd=True
dd2=True
while dd:
    for event in pygame.event.get():
        if(event.type is MOUSEBUTTONDOWN):
            pos1 = pygame.mouse.get_pos()

        elif(event.type is MOUSEBUTTONUP):
            pos2 = pygame.mouse.get_pos()
            x1,y1 = pos2
            #Start button pressed
            if dd2:
                if y1>100 and y1<180 and x1>120 and x1<180:

                    screen.fill(black)

                    my_buttons = {'Click geometry object':(160,40),'circle/ellipse': (100, 100),'square/rectangle': (100, 150),'other shape': (100, 200)}
                    my_font = pygame.font.Font(None, 30)
                    for my_text, text_pos in my_buttons.items():
                        text_surface = my_font.render(my_text, True, WHITE)
                        rect = text_surface.get_rect(center=text_pos)
                        screen.blit(text_surface, rect)

                    pygame.display.flip()
                    dd2=False
            else:
                if y1>0 and y1<125 and x1>0 and x1<200:
                    a=1
                    dd=False
                elif y1>125 and y1<175 and x1>0 and x1<200:
                    dd=False
                    a=2
                elif y1>175 and y1<230 and x1>0 and x1<200:
                    dd=False
                    a=3
                screen.fill(black)
                screen.blit(red, redrect)
                for my_text, text_pos in stop_button.items():
                    text_surface = my_font.render(my_text, True, WHITE)
                    rect = text_surface.get_rect(center=text_pos)
                    screen.blit(text_surface, rect)
                pygame.display.flip()
                dd=False


## Motion_encoder code
# Motors set up and start
GPIO.setup(13,GPIO.OUT)
GPIO.setup(6,GPIO.OUT)
freq=46.5
freq2=80
dc=6.98
pL=GPIO.PWM(13,48)
pR=GPIO.PWM(6,freq)

#pR.start(dc) #where dc is the duty cycle (1<=dc<=100)
#pL.start(dc) #where dc is the duty cycle (1<=dc<=100)

# Encoder set up
GPIO.setup(26,GPIO.IN) #Right Encoder
GPIO.setup(19,GPIO.IN) #Left Encoder


count_R=0
count_L=0
diam_w= 7.3 # Wheel Diameter (cm)
r=12.5 #distance between wheels (cm)
M_count=math.pi*diam_w/20
# Set up the x and y arrays
init_R=0
init_L=0
global x
global y
x= [0]
y=[0]
theta=[0]


#Sonar set up

TRIG= 16
ECHO = 5

# Pin 16 - TRIG ( far side of Rpi)
GPIO.setup(TRIG,GPIO.OUT)
# Pin  5 - ECHO (near side of Rpi)
GPIO.setup(ECHO,GPIO.IN)


#Encoder callbacks

def GPIO26_callback(channel):
    global count_R
    count_R=count_R+1
    print('count_R:' ,count_R)
   # dist_R= diam_w*math.pi*(count_R/8)
   # print('dist_R' ,dist_R)

def GPIO19_callback(channel):
    global count_L
    count_L=count_L+1
  #  print('count_L: ' ,count_L)
   # dist_L= diam_w*math.pi*(count_L/8)
  #  print('dist_L' ,dist_L)


GPIO.add_event_detect(26, GPIO.FALLING, callback=GPIO26_callback, bouncetime=100)
GPIO.add_event_detect(19, GPIO.FALLING, callback=GPIO19_callback, bouncetime=100)


x1=0
y1=0
first_time=time.time()
m=False
def check(): #panic stop button pressed
    global m
    global x1
    global y1
    for event in pygame.event.get():
        if(event.type is MOUSEBUTTONDOWN):
            pos1 = pygame.mouse.get_pos()

        elif(event.type is MOUSEBUTTONUP):
            pos2 = pygame.mouse.get_pos()
            x1,y1 = pos2


bool_stop=True
rr=True
nn=True
START_TIME2=time.time()
son_dist_array=[0]
try:

    while rr:

    # Sonar Trigger and Echo set up
        GPIO.output(TRIG, False)
        print ("Waiting for sensor to settle")
        time.sleep(0.5)

        GPIO.output(TRIG, True)
        #Send 10us pulse to trigger
        time.sleep(0.00001)
        GPIO.output(TRIG, False)
        start_time= time.time()
        pulse_start = start_time # fallback in case the echo loop times out
        pulse_end = start_time

        while (GPIO.input(ECHO) == 0 and (time.time()-start_time<30) ):
            pulse_start = time.time()

        while (GPIO.input(ECHO) == 1 and (time.time()-start_time<30) ):
            pulse_end = time.time()


       # Sonar distance from object
        pulse_duration = pulse_end - pulse_start
        son_dist = pulse_duration * 17150
        son_dist = round(son_dist,2)
        son_dist_array.append(son_dist)


        if (10<son_dist and son_dist < 14.5): #go straight
            dc=6.629
            freq=46.685
            pR.ChangeFrequency(freq)
            pR.ChangeDutyCycle(dc)
            pR.start(dc)

            dc=7.149
            freq=46.425
            pL.ChangeFrequency(freq)
            pL.ChangeDutyCycle(dc)
            pL.start(dc)

        elif (son_dist <10): #turn left
            pR.stop()
            dc=7.149
            freq=46.425
            pL.ChangeFrequency(freq)
            pL.ChangeDutyCycle(dc)
            pL.start(dc)

        ## RECTANGLE
        elif (a==2 and len(son_dist_array)>5 and son_dist_array[len(son_dist_array)-1] >30 and son_dist_array[len(son_dist_array)-2]>30):
            # Go straight first then turn at sharp edge
            # Go straight
            dc=6.629
            freq=46.685
            pR.ChangeFrequency(freq)
            pR.ChangeDutyCycle(dc)
            pR.start(dc)

            dc=7.149
            freq=46.425
            pL.ChangeFrequency(freq)
            pL.ChangeDutyCycle(dc)
            pL.start(dc)


            pL.stop()
            dc=6.629
            freq=46.685
            pR.ChangeFrequency(freq)
            pR.ChangeDutyCycle(dc)
            pR.start(dc)



        else: #turn right

            pL.stop()
            dc=6.629
            freq=46.685
            pR.ChangeFrequency(freq)
            pR.ChangeDutyCycle(dc)
            pR.start(dc)

        if (time.time()-first_time>0.2):
            # difference in counts in 0.2 s increments
            diff_R=count_R-init_R
            diff_L=count_L-init_L

            dist_R= diff_R*M_count
            dist_L= diff_L*M_count

            cos_current = math.cos(theta[len(theta)-1])
            sin_current = math.sin(theta[len(theta)-1])

            # if the left count and right count are the same, the robot has moved forward
            if (diff_R == diff_L):
                x_new = x[len(x)-1]+dist_L * cos_current
                x.append(x_new)
                y_new = y[len(y)-1]+ dist_L*sin_current
                y.append(y_new)
            # if the left count and right count vary, the robot has moved in an arch
            else:
                expr1 = 12.5*(dist_R+dist_L)/2.0/(dist_R-dist_L)
                right_minus_left = dist_R- dist_L
                x_new = x[len(x)-1]+ expr1*(math.sin(right_minus_left/12.5+theta[len(theta)-1])-sin_current)
                x.append(x_new)
                y_new = y[len(y)-1] - expr1*(math.cos(right_minus_left/12.5+theta[len(theta)-1])-cos_current)
                y.append(y_new)

            # Calculate new orientation
            theta_new = theta[len(theta)-1] +(dist_R-dist_L)/12.5

            #Keep in the range of -pi to pi
            if (theta_new> math.pi):
                theta_new = theta_new-2.0*math.pi
            elif (theta_new<-math.pi):
                theta_new = theta_new + 2.0 *math.pi
            theta.append(theta_new)


            print('count_R', count_R)
            print('count_L', count_L)

            print('diff_R', diff_R)
            print('diff_L', diff_L)

            print('dist_L', dist_L)
            print('dist_R', dist_R)

            print('x_new', x_new)
            print('y_new', y_new)
            print('theta_new', theta_new)

            init_R=count_R
            init_L=count_L
            first_time=time.time()


        # how to end code
        if math.sqrt(math.pow(x[len(x)-1],2)+math.pow(y[len(y)-1],2))<28 and time.time()>START_TIME2+7:
            rr=False
        check()
        if y1>0 and x1>0:
            print("Leaving the code")
            pL.stop()
            pR.stop()
            rr=False

#post-processing
    while nn and not rr:
        pL.stop()
        pR.stop()
        bottom_buttons={'Quit':(260,200), 'Calculation':(100,200), 'Image of Geometry':(100,100)}
        my_font2 = pygame.font.Font(None, 20)

        screen.fill(black)

        for my_text, text_pos in bottom_buttons.items():
            text_surface = my_font.render(my_text, True, WHITE)
            rect = text_surface.get_rect(center=text_pos)
            screen.blit(text_surface, rect)

        pygame.display.flip()

        i=0
        global xd
        global yd
        xd=[]
        yd=[]
        while( i < len(x)):
            xd.append(x[i]-10*math.sin(theta[i]))
            yd.append(y[i]-10*math.cos(theta[i]))
            i=i+1

        x1=0
        y1=0
        check()
        c=True
        if x1>50 and x1<110 and y1>180 and y1<300:
            while c==True: #calculation
                global sub_area
                global area

                global bbb
                bbb=True
                sub_area=0
                area=0
                i=0
                j=0
                print('first_position',sub_area)

                while (i<len(x)-1 and j<len(y)-1):
                   sub_area = sub_area+(x[i]*y[j+1]-y[j]*x[i+1]) # shoelace term
                   i=i+1
                   j=j+1
                   print(sub_area)

                print('here1')
                last_value=x[len(x)-1]*y[0]-x[0]*y[len(y)-1]
                area=(sub_area+last_value)/2
                area_string=str(area)
                x1=0
                y1=0
                print('here')

                if a==1:
                    radius=math.sqrt(abs(area/math.pi))
                    radius_string=str(radius)
                    my_buttons = {'area (cm^2) ='+area_string:(160,40),'return':(260, 200), 'radius (cm)='+radius_string:(160,90)}
                if a==2:
                    length=max(y)-min(y)
                    width=max(x)-min(x)
                    length_string=str(length)
                    width_string=str(width)
                    my_buttons = {'area (cm^2)='+area_string:(160,40),'return':(260,200),'length (cm) = '+length_string:(160,80),'width (cm) = '+width_string:(160,120)}

                if a==3:

                    my_buttons = {'area (cm^2)='+area_string:(160,40),'return':(260,200)}

                screen.fill(black)
                print('here',sub_area)

                my_font = pygame.font.Font(None, 30)
                for my_text, text_pos in my_buttons.items():
                    text_surface = my_font.render(my_text, True, WHITE)
                    rect = text_surface.get_rect(center=text_pos)
                    screen.blit(text_surface, rect)

                pygame.display.flip()

                while bbb==True:
                    x1=0
                    y1=0
                    check()
                    if x1>250 and x1<290 and y1>190 and y1<220:
                    # add return button
                        c=False
                        bbb=False
                        x1=0
                        y1=0

        elif x1>0 and x1<110 and y1>0 and y1<110:
            c=True
            while c==True:

       # if x1<120 and x1>120 and y1>70 and y1<120:
                screen.fill(black)

                print('here figure')
                plt.figure(num=1, figsize=(3,3))
                plt.title('Geometry', size=14)
                plt.xlabel('x-axis', size=14)
                plt.ylabel('y-axis', size=14)
                plt.plot(x,y, color='b', linestyle='--', marker='o', label='geometry')
                plt.legend(loc='upper left')
                plt.savefig('plot1.png', format='png')


                saved = pygame.image.load('/home/pi/Project/plot1.png')
                saved=pygame.transform.scale(saved,(320,200))
                savedrect = saved.get_rect()
                savedrect.top=0
                savedrect.left=0
                screen.blit(saved, savedrect)

                my_font = pygame.font.Font(None, 30)
                my_buttons={'return':(60, 220)}
                for my_text, text_pos in my_buttons.items():
                    text_surface = my_font.render(my_text, True, WHITE)
                    rect = text_surface.get_rect(center=text_pos)
                    screen.blit(text_surface, rect)


                pygame.display.flip()

                bbb=True
                while bbb==True:
                    x1=0
                    y1=0
                    check()
                    if x1>0 and x1<90 and y1>190 and y1<220:
                        # add return button
                        x1=0
                        y1=0
                        c=False
                        bbb=False



        elif y1>180 and x1>240: #quit button
            m=True
            print("Leaving the code")
            break


except KeyboardInterrupt:
    GPIO.cleanup()