Three In A Row

Hongyi Deng (hd294)

Chang Liu (cl2428)

Chengkun Shen (cs2327)

Objective

1. Design a robot that could play against a human on a real game board.

2. Gain experience with a design that involves multiple interacting systems.

3. Have some fun.

Introduction

The goal of this project was to design a human-machine interactive system around a small game called “Three in a Row”. The rules are simple: the two players (in this case a human and the robot) take turns placing a piece on an empty cell of the 3x3 game board, and whoever first gets three pieces in a row wins. In the end, we successfully built a robot that can recognize the game state on the board and use a robotic arm to grab and place game pieces. Although it is not very smart, it is at least able to give the user some fun.

System Overview

The system functionality involved three essential parts: board image processing using OpenCV, game play using a simple AI, and robotic arm control using inverse kinematics. The work was distributed across two systems: a Raspberry Pi performed the image processing and the virtual game play, and an Arduino controlled the robotic arm movement. Communication between the two systems was achieved through a serial USB connection.



Figure 1. FSM diagram


The robot used FSM-based control, as shown above in Figure 1. When the game starts, the user is prompted to place a piece on the board. After making a move, the user presses a button to notify the system. As soon as the button is pressed, the camera captures an image of the board. The captured image is then processed by the Raspberry Pi, and as long as the user's move has not won the game, a command is generated and sent to the Arduino. The Arduino maps the received command to a destination on the physical board and controls the robotic arm to pick up a piece and place it at that destination. Once the arm movement is done, the Arduino signals the Raspberry Pi, and if there is still no winner and the board is not full yet, the user is prompted to make another move. The game ends whenever one player wins or the board is full.
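The loop above can be modeled as a small transition function. The following is a minimal sketch only; the state and event names here are illustrative placeholders, not taken from our actual code.

```python
# Hedged sketch of the FSM in Figure 1 as a pure transition function.
PROMPT_USER, CAPTURE, PROCESS, MOVE_ARM, GAME_OVER = range(5)

def next_state(state, event):
    """Return the next FSM state for the current state and an event."""
    if state == PROMPT_USER and event == "button_pressed":
        return CAPTURE
    if state == CAPTURE and event == "image_captured":
        return PROCESS
    if state == PROCESS:
        if event in ("user_won", "board_full"):
            return GAME_OVER
        if event == "move_chosen":
            return MOVE_ARM
    if state == MOVE_ARM:
        if event == "robot_won":
            return GAME_OVER
        if event == "arm_done":
            return PROMPT_USER
    return state  # ignore events irrelevant to the current state
```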

Hardware Design



Figure 2. Overview of the project


We spent some time searching for a decent-looking container that could hold the entire setup appropriately and was stable enough. When we came across a wood crate at Walmart, we decided to use it as the base for our playground. We removed the original base and put the crate on its side, so that there were large openings on both new sides to allow a clear view of the system and make it easy for the user to put pieces onto the board. The crate-based design had some additional advantages. First, it has a ceiling where we could mount our camera and get a clear view of the game board. Second, there were gaps between the wood pieces on the ceiling, so we could place our Raspberry Pi on top of the box while keeping the camera inside, since the connecting cable could easily pass through a slit in the ceiling. This made it easier for us to debug the Raspberry Pi, and also made it possible for the system to convey messages to the user through the PiTFT screen and accept user input from the buttons beside the screen. A picture showing how we mounted the camera is shown below in Figure 3.



Figure 3. View of the ceiling from Inside


The setup on the floor of the playground contained a 3x3 game board, a robotic arm to move the pieces, and an Arduino to control the servos on the arm. Before designing these components, we first had to decide on the game pieces, which affected the design specifications of the other components. For the game pieces, we initially considered the traditional stones used in Go and Five-in-a-Row games. The problem with Go stones is that they are round and usually slippery, which would make them hard to pick up with a claw-based robotic arm. The alternative solution we came up with was a magnet-based design. To be more specific, we decided to use game pieces made of steel, and to use an electromagnet instead of a claw at the end of the robotic arm for pick-up and drop-off. We did some research and found that the Grove Electromagnet best fit our needs, as it provided an appropriate amount of suction force and had a well-designed breakout board that supported the electromagnet stably and offered a nice interface to the Arduino controller.

For the pieces, the most readily available steel objects suitable as game pieces were hex nuts. The issue with hex nuts was that they have holes in the center, and the hole diameter increases with the nut size. The hex nuts that could be attracted by the electromagnet were too small to be used as game pieces, while the nuts of the desired size had hole diameters larger than the diameter of the electromagnet coil, making them hard to attract. Our final solution was to use small nuts and add additional layers of a suitable diameter.



Figure 4. Game pieces


Coins were attached to the flat surfaces of the nuts, so that the whole piece was attractable by magnets because of the small hex nut in the center, and it looked more like a real game piece because of the larger size provided by the coins on its surfaces. We used two types of coins, pennies and dimes, as their different colors could be used to distinguish the two players.


For the robotic arm, since we couldn't find any designs within budget that could hold an electromagnet, we decided to use the MeArm v1.1, a mature claw-based arm design widely used in college design projects, and modify it to meet our needs. Initially we tried to modify the CAD files and print the parts on our own, but we soon realized that the mechanical design process was too time-consuming for inexperienced electrical engineering students. We ended up getting the laser-cut pieces of the original design, removing the claw during assembly, and taping our electromagnet onto the arm as an expedient. A close look at the end of the arm is shown below in Figure 5. It looks quite makeshift; given more time, we would redesign it to look better.



Figure 5. Electromagnet end-effector


The body of the robotic arm contained three servos, which controlled the base, shoulder, and elbow joints of the arm. Details about the arm motion are discussed in the software section. Both the servos and the electromagnet were controlled by the Arduino.


To ensure that the pieces dropped by the robotic arm landed at the correct position, we created the 3x3 game board from a thin square piece of poster paper, drew lines to divide it into 9 evenly sized cells, and attached permanent magnets to the back of the paper, positioned at the center of each of the 9 cells. This design helped ensure that even if a piece was picked up or dropped off slightly away from the desired position, it would still be attracted by the permanent magnets to the right spot. The board was fixed to the base of the crate so that its position relative to the camera was also fixed, which simplified the software design.


At this point, the hardware implementation was finished. Note that in the current design we defined a fixed spot for the robotic arm to grab game pieces, so every time a piece was picked up, we had to manually refill that spot.

Software Design

As described before, the system contains three major parts: board image capture and processing, game play control, and robotic arm control.


Board image capture happened when the user pressed a button on the side of the PiTFT screen, which was connected to Raspberry Pi GPIO pin 27. In the main loop of the program on the Raspberry Pi, the system first polled for a button-press event on pin 27. Once a press was detected, the capture() function provided by the PiCamera library was called to capture the image.
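The polling step can be sketched as below. This is a hedged illustration, not our exact code: the pin number matches our setup, but the pull-up configuration and file name are assumptions, and the hardware calls are left as comments so the edge-detection logic stands alone.

```python
# Falling-edge detection for a polled button; our button pulls the
# pin low when pressed (assumed active-low wiring).
def pressed(prev_level, level):
    """Return True when the pin went from high (1) to low (0)."""
    return prev_level == 1 and level == 0

# On the Pi, the surrounding loop would look roughly like:
#   import RPi.GPIO as GPIO
#   from picamera import PiCamera
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)
#   camera = PiCamera()
#   prev = 1
#   while True:
#       cur = GPIO.input(27)
#       if pressed(prev, cur):
#           camera.capture('board.jpg')  # file name is illustrative
#       prev = cur
```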


To process the captured image and interpret the board status, we used functions provided by the Python OpenCV 3.0 library. As a recap, the game board in this project was a 3x3 grid placed at a fixed position relative to the camera. After some trial and error, we obtained the bounding box of the game board in the captured image, so we could easily calculate the pixel range spanned by each grid cell, given that the number of rows and columns was known and that each row and column had the same width. This served as the foundation of our detection algorithm. Since each game piece had a circular shape, game piece recognition was achieved with the Hough circle detection algorithm, which detects circular contours and returns their center positions in pixel coordinates on the input image. To account for inaccuracy and potential noise in the detection process, we adjusted the parameters of the Hough circle function to limit the circle radius to a certain range. A sample result of circle detection is shown below in Figure 6.
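The cell lookup described above can be sketched as follows. The bounding-box values used in the comments and tests are placeholders; the real numbers came from trial runs with our fixed camera position.

```python
# Map a detected circle center to a board cell index (0..8).
# In the real code the circles would come from something like:
#   circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
#                              minDist=20, minRadius=10, maxRadius=30)
def circle_to_cell(cx, cy, x0, y0, size):
    """Return the cell index for a circle center (cx, cy) in pixels,
    given the board bounding box (top-left x0, y0 and side length),
    or None if the center lies outside the board."""
    if not (x0 <= cx < x0 + size and y0 <= cy < y0 + size):
        return None
    col = int((cx - x0) // (size / 3))
    row = int((cy - y0) // (size / 3))
    return row * 3 + col
```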




Figure 6. Sample Result of Circle Detection


As shown in the figure, since the area captured by the camera was much larger than the actual board, many of the detected circles were not needed. To obtain the desired list of detected circles, we added an additional filter that discarded everything outside the game board. Then we iterated through the circles and compared the center location of each circle to the pixel ranges spanned by the grid cells to determine in which cell the piece was placed. Once the pieces were detected, the next step was to determine which pieces belonged to which player. This was done by obtaining the RGB value of each circle and matching it to the corresponding label. In our design, we used the value 1 to denote the silver color (a dime) and the value 2 to denote the bronze color (a penny). A 0 was used if no piece was placed in a cell. The labels were stored in a 9-element array, which was passed to the game play AI to determine the robot's response.
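The filtering and labeling steps can be sketched as below. The color threshold in classify() is an illustrative stand-in for our tuned RGB comparison, and the helper names are hypothetical.

```python
# Build the 9-element board array from detected circles.
def classify(r, g, b):
    """Label a piece by color: 2 = bronze (penny), 1 = silver (dime).
    A penny is noticeably redder than the other channels (assumed rule)."""
    return 2 if r > g and r > b else 1

def build_board(circles, cell_of, color_of):
    """circles: iterable of (cx, cy) centers in pixels.
    cell_of(cx, cy) -> cell index 0..8, or None if outside the board.
    color_of(cx, cy) -> (r, g, b) sampled at that center."""
    board = [0] * 9  # 0 means the cell is empty
    for cx, cy in circles:
        cell = cell_of(cx, cy)
        if cell is None:
            continue  # filter out circles detected outside the board
        board[cell] = classify(*color_of(cx, cy))
    return board
```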


The game play used a simple AI algorithm with a single objective: to stop the user from winning the game, without attempting to win the game itself.

As discussed in the board recognition process above, information about the board and pieces was stored in a one-dimensional array, where each array element corresponded to a single cell of the 3x3 grid. The indexing of the cells is shown below in Figure 7.



Figure 7. Board Array Representation


When the game first started, the array was initialized to all zeros, denoting that no piece had yet been placed on the board. In our design, the default setting was that the user always used the penny-based pieces and the robot used the dime-based pieces. This decision was made because the electromagnet couldn't pick up the penny-based pieces as reliably as the dime-based ones. As a result, in our current design a piece placed by the user always had the label “2”, while a piece placed by the robot always had the label “1”.


A global variable always stored the board array from the previous round. Whenever the user made a move and pressed the button, the new board array created by the recognition step was compared to the previous copy to detect where the change took place; that change signified the user's play. As soon as the user's play was recognized, the robot's next step had to be determined. For our primitive blocking algorithm, there were three cases to consider: if the user already had three pieces in a row, the robot should do nothing and the game was over; if the user had two pieces in a row and the third spot in that row was still empty, the robot should place a piece on that spot; otherwise, the robot should pick a random available spot.

Specifically, the system scanned all rows and columns, as well as the two diagonals, and checked whether there were three “2”s in a line. If so, the game ended, and a message was printed on the console announcing that the user had won. Otherwise, it checked whether there were two “2”s and a “0” in a line. If so, the AI selected that empty spot as the response. A trick was used here to make this simple code even simpler: to detect two “2”s and one “0”, we calculated the sum and the product of the three values in a line; in this case, the sum should be 4 and the product should be 0. Failing both checks, a random spot was chosen among the “0” spots in the array, and that spot was used as the robot's response. An additional check then determined whether this move led to the robot winning the game. If the robot won after this move, the game ended as soon as the robotic arm movement was made; otherwise, the system went back to prompting the user to make another move. In either case, the global array variable was updated with both the new user move and the robot's response, and the index of the robot's response was sent to the Arduino through the serial USB connection to control the movement of the robotic arm.
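The blocking logic above, including the sum/product trick, can be sketched as follows. This is a minimal illustration; function and constant names are ours, not from the actual program.

```python
import random

# All eight winning lines on the 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def robot_move(board):
    """Return the cell index the robot should play, or None if the
    user already won or no empty cell remains."""
    # Case 1: the user already has three in a row -> game over.
    for a, b, c in LINES:
        if board[a] == board[b] == board[c] == 2:
            return None
    # Case 2: two user pieces plus an empty cell in one line.
    # Trick: such a line has sum 4 and product 0 (values are 0/1/2).
    for a, b, c in LINES:
        vals = (board[a], board[b], board[c])
        if sum(vals) == 4 and vals[0] * vals[1] * vals[2] == 0:
            return (a, b, c)[vals.index(0)]  # block the open spot
    # Case 3: otherwise pick a random empty cell.
    empties = [i for i, v in enumerate(board) if v == 0]
    return random.choice(empties) if empties else None
```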


The serial connection between the Raspberry Pi and the Arduino was achieved using the PySerial library. The library was straightforward to use, but we still encountered some problems in the beginning due to a compatibility issue: in Python 2, one could directly send a string through the serial connection, but in Python 3, the string must be encoded before it is sent through the channel. When running the Python program in the OpenCV virtual environment, we still used the Python 2 convention, forgetting that our OpenCV library was installed under Python 3, and it took us a significant amount of time to figure out the error. Another issue with the serial connection was that it was impossible to send a “0” without special treatment, so our solution was to change the indexing scheme to start at 1 instead of 0.
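The two fixes can be sketched together. The helper name and the device path in the comment are illustrative assumptions, not our exact code.

```python
# Encode a 0-based board index as the bytes actually written to the
# serial port. The +1 offset reflects the 1-based workaround for "0",
# and .encode() is the Python 3 str-to-bytes conversion we had missed.
def encode_move(index):
    return str(index + 1).encode()

# With PySerial the send would then look roughly like:
#   import serial
#   ser = serial.Serial('/dev/ttyACM0', 9600)  # path/baud are assumed
#   ser.write(encode_move(index))
```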


The basic idea of the robotic arm control on the Arduino was that the physical coordinates of the center of each cell, relative to the base of the robotic arm, were mapped to the cell's index number. Whenever an index value was received on the Arduino from the Raspberry Pi, it was decoded into a set of x, y, z coordinates: the destination at which the robotic arm should drop off the piece. In our design, we used a fixed spot for the robotic arm to pick up the game piece. For convenience, we set the home position of the robotic arm to be 4 cm above the pick-up spot. In other words, before and after every picking event, the end of the robotic arm (the electromagnet) always stayed 4 cm above the place where a game piece was picked up. This setting avoided unnecessary movements, and the location was also helpful for the board recognition process, since it couldn't be seen inside the bounding box of the game board from the camera's perspective.

In summary, the basic flow was: when idle, the end of the arm stays at the home position; as soon as a request is received, the arm lowers its end to pick up the game piece, and the electromagnet is turned on; the arm then rises, moves to 4 cm above the destination spot, and lowers its end, at which moment the electromagnet is turned off. After dropping off the game piece, the arm returns to the home position, and a message is sent to the Raspberry Pi through the serial connection signaling that the action is done. There was still one missing step in this process: the conversion of the x, y, z coordinates into the actual movements of the servos on the robotic arm. This conversion was achieved with inverse kinematics, which uses a set of kinematic equations to determine the movement of each joint so that the end-effector reaches a desired position, and which was quite complicated from the perspective of an electrical engineering student. Luckily, since the MeArm was widely used in college design projects and had a large crowd of users, an open-source library had been developed that provided helper functions implementing the kinematics and could generate the required joint angles given a set of x, y, z coordinates. With the angles calculated, we could map each angle to the necessary PWM signal and drive each servo accordingly.
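The index-to-coordinate mapping, plus one step of the planar two-link inverse kinematics that the MeArm library handles for us, can be sketched in Python for illustration (the real code runs on the Arduino in C++). All dimensions below are made-up placeholders, not our measured values.

```python
import math

CELL_SIZE = 4.0             # cm between cell centers (placeholder)
BOARD_ORIGIN = (8.0, -4.0)  # cm offset of cell 0 from the arm base (placeholder)

def index_to_xyz(index):
    """Map a cell index 0..8 to (x, y, z) on the board plane."""
    row, col = divmod(index, 3)
    x = BOARD_ORIGIN[0] + row * CELL_SIZE
    y = BOARD_ORIGIN[1] + col * CELL_SIZE
    return (x, y, 0.0)

def two_link_ik(r, z, l1, l2):
    """Shoulder and elbow angles (radians) for a two-link arm of
    lengths l1, l2 to reach (r, z) in its vertical plane, via the
    law of cosines (one of the two possible elbow configurations)."""
    d2 = r * r + z * z
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(z, r) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

The base rotation is simply `atan2(y, x)`; the library then converts each angle to the PWM pulse width the corresponding servo expects.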


At this point, the design process for the project was complete.

Testing

To test the system, we first tested the components individually, and finally tested the system as a whole. There were two major pieces to test: the OpenCV detection and the robotic arm movement. For the OpenCV detection, we first drew circles on a piece of paper and examined the results to understand what the OpenCV functions could provide. Then we tested the colors, checking pieces of different colors to see whether the RGB difference was clear enough to differentiate between the two. To test the robotic arm, we let it move back and forth between two locations and observed whether the arm reached the same spot each time. Then we expanded the range to cover the entire board, to make sure the arm would work in all cases. To test the communication between the two systems, we first experimented in the Python command line, and then moved the commands into the real program.

Results and conclusions

In the end, we successfully finished a robot that was able to detect the board state precisely, and to pick up and drop off pieces at the desired locations on the board without error. More importantly, we have enjoyed playing against this robot, even though its functionality is still quite primitive. The design didn't fully match the plans we made at the beginning, since we initially intended to build a five-in-a-row bot; the robotic arm we could find had a limited range, which limited the size of our game board. But it would still be possible to expand the design given more time, budget, and mechanical support.

Code Appendix
