The Raspberry Pi is a powerful and robust computer that is excellent for portable embedded systems. We used it to build an autonomous WALL-E robot that navigates around obstacles using an ultrasonic sensor and can also be controlled manually, either by voice commands or by pressing buttons on a separate host computer.
We built a client-server system in which the host computer acted as the client, sending commands, and the Raspberry Pi acted as the server, executing those commands and sending acknowledgements back to the client. We implemented this by writing commands over secure shell (SSH) to a FIFO (named pipe) on the Raspberry Pi.
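As a rough sketch of the server loop on the Pi (the FIFO path and the handle_command() dispatcher below are illustrative names, not our exact code):

import os

FIFO_PATH = "/tmp/robot_fifo"  # illustrative path

def handle_command(command):
    # Placeholder: dispatch to the motor/servo routines and send an ack.
    print("Running:", command)

# Create the named pipe once, then serve commands forever.
if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

while True:
    # Opening the FIFO for reading blocks until a client writes to it.
    with open(FIFO_PATH) as fifo:
        for line in fifo:
            command = line.strip()
            if command:
                handle_command(command)

On the client side, each button press then reduces to a one-liner over SSH, e.g. ssh pi@&lt;robot-address&gt; 'echo forward > /tmp/robot_fifo'.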
We used the Google speech recognition API to recognize spoken commands, then parsed the recognizer's output and compared it against a set of keywords to check whether it matched a command the server could run.
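A minimal sketch of this step, built on the SpeechRecognition library [5] (the keyword set below is illustrative):

import speech_recognition as sr

KEYWORDS = ("forward", "backward", "left", "right", "stop")  # illustrative

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    # recognize_google() sends the audio to Google's speech API [4].
    text = recognizer.recognize_google(audio).lower()
except (sr.UnknownValueError, sr.RequestError):
    text = ""

# Act only on exact keyword matches; anything else (e.g. "Left" heard
# as "Loft", Fig. 5) is rejected and the previous state is restored.
if text in KEYWORDS:
    print("Recognized command:", text)
else:
    print("Unrecognized input:", repr(text))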
We also implemented an autonomous mode in which the robot explores the room, navigating around obstacles using an ultrasonic sensor. The sensor measures obstacle distances fairly accurately by emitting inaudible ultrasonic pulses and timing their reflections off nearby surfaces.
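The measurement follows the standard HC-SR04 trigger/echo procedure described in [6]; a sketch, with assumed GPIO pin numbers, looks like:

import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # assumed BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # A 10-microsecond pulse on TRIG starts an ultrasonic burst.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # Time how long ECHO stays high: the round-trip flight time.
    pulse_start = pulse_end = time.time()
    while GPIO.input(ECHO) == 0:
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:
        pulse_end = time.time()

    # Sound travels ~34300 cm/s; halve the round trip for the one-way distance.
    return (pulse_end - pulse_start) * 34300 / 2

print("Obstacle at %.1f cm" % read_distance_cm())
GPIO.cleanup()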
The objective of our project was to build a system that was autonomous, remote-controlled, and voice-controlled.
We initially wanted to build this on a quadruped robot frame (Fig. 1), but the inexpensive frame we ordered from eBay was unfortunately too unstable to walk steadily, so we used a WALL-E frame (Fig. 2) instead :(.
Figure 1: Quadruped robot
Figure 2: WALL-E
Figure 3: GUI in initial state
Figure 4: GUI in Voice Recognition mode
Figure 5: Case where "Left" is recognized as "Loft" and the previous state is restored
Figure 6: Servo connections
Figure 7: Ultrasound sensor
Figure 8: Ultrasound connections
Figure 9: Ultrasound
Overall our components worked as planned, but we ran into two main issues:
We wanted to have WALL-E follow faces using the OpenCV library and the Pi camera, but we had spent so much time adjusting the frame that we could not complete this feature before the demo (a hypothetical sketch of what we had planned appears below).
If we had had a higher budget, we would also have bought a custom-made, stable quadruped frame and used it instead of WALL-E :(.
Finally, we would also have liked to add a wheel speed sensor to aid the robot's turning movements, so that it would not deviate so easily from its track.
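For reference, the face-detection step we had planned (but never implemented) might have looked like the hypothetical sketch below, using OpenCV's bundled Haar cascade; the camera index and steering rule are illustrative:

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Steer toward the face: compare its center to the frame center.
        if x + w // 2 < frame.shape[1] // 2:
            print("turn left")
        else:
            print("turn right")

camera.release()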
Leke worked on the code, tested each component, wrote the lab report document, and converted it into a webpage. Xiaobin assembled the robot, verified that everything functioned properly after assembly, and co-wrote the lab report document.
[1] SG90 Servo User Guide. http://www.micropik.com/PDF/SG90Servo.pdf
[2] HC-SR04 Datasheet, Micropik. http://www.micropik.com/PDF/HCSR04.pdf
[3] HC-SR04 User's Manual v1.0. https://docs.google.com/document/d/1Y-yZnNhMYy7rwhAgyL_pfa39RsB-x2qR4vP8saG73rE/edit
[4] Speech, Google Cloud Platform. https://cloud.google.com/speech/
[5] SpeechRecognition Library. https://github.com/Uberi/speech_recognition
[6] HC-SR04 Ultrasonic Range Sensor on the Raspberry Pi, ModMyPi. http://www.modmypi.com/blog/hc-sr04-ultrasonic-range-sensor-on-the-raspberry-pi
[7] mePed Quadruped Robot. https://spiercetech.com/shop/content/8-meped
Thanks to Professor Joe Skovira and the TAs for their constant support throughout the semester :)