RPi Smart Door System


Project by Rongguang Wang (rw564), Yuan He (yh772)

December 6th 2017



   Project Overview
Generic placeholder image
    Back View
Generic placeholder image
    GUI Visitor Mode
Generic placeholder image
    Speaker Recognition
Generic placeholder image
    Web Monitor Mode
Generic placeholder image
    Web Log-in


Generic placeholder image
GUI: Face Recognition

Project Objective


  • A smart door system that uses human physical features as keys, with remote control of door access through a web interface.
  • User-friendly GUI and a secured-login web front-end.
  • Modular design that embraces future improvements.
  • Back end and front end communicating via sockets under the Flask framework.
  • An improvised FIFO that pipelines multiple processes together without time-synchronization problems.
  • Modeled on modern banks, the identity authentication process consists of three levels (face, speaker, and fingerprint verification), better guarding against spoofing.


Demonstration Video

In the video, the voice-message segments are truncated to keep the video short.


Introduction



We built a smart door system, together with a web server, on a Raspberry Pi. It grants access to the house through face, speaker, and fingerprint verification, and it also lets the owner answer the door remotely when someone visits. By logging into the web interface, the owner can check the visitor voice-mail box, get notified when someone knocks on the door, communicate with visitors over video streaming, and control the door remotely via network sockets. The design is fitted into a miniature chipboard model to demonstrate the full functionality of the system.

Generic placeholder image
Fingerprint Recognition


Hardware Design



Generic placeholder image
General View of the System

Circuit Design



The hardware modules are connected as shown below. To keep the wiring clean, the Raspberry Pi, servo, microphone, fingerprint sensor, and PiCamera are fixed to the wall from the inside by self-made chipboard holders (superglue and pins), penetrating the wall to interact with users. The indoor door switch stretches out of the breadboard and is attached to the chipboard wall. The PiTFT is omitted from the diagram below because it simply sits on top of the Raspberry Pi (and is hard to draw in Fritzing :( ).


Generic placeholder image

Circuit Schematic of Smart Door System



Software Design


Web Framework



The web server was developed with the Flask framework, within which several functions were implemented: log-in/out, video streaming, R-Pi GPIO control, voice-message playback, and new-message notification. For log-in/out, the server renders between web pages after the user ID and password are entered correctly or the log-out button is pressed. The MJPG library continuously sends JPEG pictures captured by the PiCamera to the browser, which is the key technique behind video streaming. GPIO control was realized with JavaScript for a door-control button: once the button is pressed, a POST request is sent from the template to the web server, which calls the servo-control function to open the door. Voice playback fetches audio files recorded at the back end. New-message notification is handled by the messaging module, which keeps updating the states of the "Knock-Knock" and "Leave a Voice Message" modes at the back end; when a new message arrives, a red dot appears in the sidebar to notify the user.
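As a sketch of how the door-control POST is wired up (the route name, form field, and helper below are illustrative placeholders, not the project's actual identifiers):

```python
from flask import Flask, request

app = Flask(__name__)

# One-bit shared state file, mirroring the project's messaging scheme.
# The file name here is illustrative.
DOOR_STATE_FILE = "door_state.txt"

def open_door():
    # Stand-in for the servo-control call; here we only flip the state bit.
    with open(DOOR_STATE_FILE, "w") as f:
        f.write("1")

@app.route("/door", methods=["POST"])
def door():
    # The JavaScript button in the template POSTs here.
    if request.form.get("action") == "open":
        open_door()
        return "door opened"
    return "ignored", 400
```

In the real server, the handler would invoke the servo-control function directly instead of just writing the shared state bit.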


Generic placeholder image
Web Voice Message Page





Generic placeholder image
GUI Home

GUI

A user-friendly GUI was implemented with Pygame. It guides users with clear, straightforward prompts on the PiTFT. (There are currently two owners and one visitor in the database, for robustness testing of the system.) On boot, the GUI program initializes the PiCamera, trains the classifiers for face recognition, and clears the caches and buffers left by the last boot. It has two modes, Owner and Visitor. Owner mode runs the verifications in sequence: every success proceeds to the next level, and every failure offers a retry or a return to the homepage. After passing all three tests, the door opens. In Visitor mode, a visitor can choose to knock on the door or leave a voice message; both requests are sent to the owner's website. If Knock-Knock is chosen, the visitor is verified (without retry) against the user-group records, and this verification info is sent together with the knocking request to the owner's website. If the owner responds, the visitor is notified and starts communicating with the owner through video streaming; if not, the visitor is redirected to leave a voice message with a selfie, which later becomes available on the website for the owner to review. Worth mentioning: each level of the GUI has retry / remake-selfie / remake-message, back-to-home, and delete-on-back-to-home buttons, in case someone changes their mind in the middle of trying to visit someone else's home.



Generic placeholder image

Flow Chart of GUI



Face Recognition

We utilized the powerful libraries in OpenCV to process incoming images and implement face recognition. The key to face recognition is training a classifier that differentiates a specific object from its surroundings (for instance, picking your face out of ten people). In theory, we could train classifiers to detect any object we want, but in practice generating a classifier is time-consuming (it took us more than one week) and failure-prone (ours still failed). We therefore avoided generating and exporting an ".xml" classifier file for our faces. Instead, we followed another tutorial (the second link in the References) that avoids exporting a classifier: it extracts a person's face features from ten different pictures (as shown below with Yuan's face), covering sad, laughing, left-lit, center-lit expressions and so on, and uses these features to build a temporary classifier that is never exported (an eleventh picture is used to validate the algorithm). This approach requires the system to retrain the classifiers on every boot, so a "Loading..." screen was added at the very beginning of start-up. Following this tutorial, Yuan He's face was recognized with lower confidence values (indicating a better match), while Rongguang Wang's face produced relatively higher confidence values (perhaps because the photos were not taken properly, or lacked varied facial expressions). With Yuan's classifier, the algorithm was verified to work correctly after trying it out with several of our friends.
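The acceptance decision reduces to a per-subject cut-off on the LBPH confidence value (lower is a better match). A sketch of the rule, using the subject IDs and cut-offs from our GUI.py appendix:

```python
# LBPH "confidence" is a distance: lower means a closer match.
# Subject IDs and cut-offs below are the ones used in GUI.py.
THRESHOLDS = {6: 130, 1: 70, 7: 70}

def accept_face(subject_id, confidence):
    """True when the predicted subject passes its confidence cut-off."""
    limit = THRESHOLDS.get(subject_id)
    return limit is not None and confidence <= limit
```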

Fingerprint Recognition

The fingerprint sensor, of the kind installed in home-surveillance and secure-verification systems, further removes the need for keys. Fingerprints are enrolled and recognized by image processing through the built-in optical sensor. The tutorials we referred to are listed as links 6 and 7 in the References. The available libraries differ with the sensor's wiring: wired directly to GPIO, only C libraries apply (and they are rare); with a TTL-USB converter, there are many more tutorials to follow. The fingerprint database is stored on the sensor's own chip, which makes it hard to keep separate identity records (e.g., in MySQL) for different people; we work around this by mapping matches to different directories that represent different identities.
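Since the sensor's on-chip database only yields a matched template position, the per-identity bookkeeping reduces to mapping position ranges to directories. A sketch of that mapping (the slot ranges below are illustrative, not our actual enrolment layout):

```python
import os

# Template positions on the sensor chip, grouped by user group.
# These ranges are illustrative placeholders.
OWNER_SLOTS = range(0, 10)
VISITOR_SLOTS = range(10, 20)

def identity_dir(position):
    """Translate a matched template position into an identity directory."""
    if position in OWNER_SLOTS:
        return os.path.join("records", "owner")
    if position in VISITOR_SLOTS:
        return os.path.join("records", "visitor")
    return None  # no match in the database
```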

Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image
Generic placeholder image



Face Recognition Training Graphs

Messaging

There are more than three processes and over ten Python scripts running on the Raspberry Pi for this system, which makes communication between processes rather challenging. FIFOs and pipes worked fine at the primitive stage, but as the system grew we found that the while loop (for mouse-event detection) in the GUI script made them hard to synchronize with the other processes. We therefore imitated the working principle of a FIFO with .txt files that carry the "message" for each process. Reading and writing these files gives the communication much more flexibility. For instance, both the web server and the GUI need to flip the door, and the door's state must stay consistent between them, so a .txt file containing a single bit (0/1) records the door's state. New-message notification between the web server and the back end works the same way.
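A minimal sketch of the scheme (function and file names are illustrative): each channel is one tiny text file, every write replaces the whole file, and every read returns the latest complete value, so the GUI's blocking event loop never stalls another process:

```python
import os

def file_write(path, state):
    """Publish a state value by atomically replacing the channel file."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(state))
    os.rename(tmp, path)  # atomic on POSIX: readers never see a partial write

def file_read(path, default=0):
    """Read the latest state; fall back to a default before the first write."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (IOError, ValueError):
        return default
```

The atomic rename is a refinement over plain truncate-and-write: a reader that races a writer either sees the old value or the new one, never a half-written file.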

Speaker Recognition

The “PiWho” speaker-recognition library provides text-independent speaker identification based on MARF. There are two phases: training and recognition. In the training phase, a “.gzbin” model and a “speakers.txt” file are created: the model stores the features extracted from the training audio files, and “speakers.txt” stores each file name with its speaker label and ID. In the recognition phase, the function “identify_speakers()” returns a list of the best-recognized speakers from the trained model; the first element is the recognized speaker, and the second is the next-closest match. The recognition distance is also printed to assist debugging: the lower the distance, the closer the speaker is to the pre-recorded audio.
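The decision logic around the recognizer's output can be sketched like this (the cut-off value and names are illustrative; PiWho itself only supplies the ranked candidate list and the distance):

```python
# Accept only when the top-ranked candidate belongs to the expected user
# group AND the recognition distance is under a cut-off. MAX_DISTANCE is an
# illustrative number, not a PiWho constant.
MAX_DISTANCE = 50.0

def accept_speaker(candidates, distance, expected_group):
    """candidates: ranked speaker names; lower distance = closer match."""
    if not candidates:
        return False
    return candidates[0] in expected_group and distance <= MAX_DISTANCE
```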



Generic placeholder image

Front View of Smart Door System



Testing

The smart door system consists of the graphical user interface (GUI) and the web server. The GUI integrates the face-recognition, speaker-recognition, and fingerprint-recognition modules, the servo controller, and the messaging module; the web server runs separately to support the remote website. The functions of face recognition, speaker recognition, fingerprint recognition, servo control, messaging, voice messages, video streaming, remote door control, and audio playback were tested thoroughly.

Face Recognition Module

In this module, the owner-group and visitor-group photos were first trained separately through the OpenCV library. Then the photos taken in the GUI were used to predict the recognition result. When there is no match, or the confidence value is too high, the program does not proceed: a "retry" prompt appears in the GUI and a failure message is printed to the console, indicating that the training photos were not taken properly or that the image pre-processing needs to adapt to higher resolutions.

Speaker Recognition Module

In this module, the owner-group and visitor-group voice records were trained beforehand through the PiWho library. Then the audio recorded by the microphone in the GUI was used to predict the recognition result. If there is no matching result or the distance is too high, a failure message is returned in the console. One point worth stressing: when recording the audio models, speak in your normal voice and tone (don't get too high-pitched or too deep; just practice being yourself!).

Fingerprint Recognition Module

This is the most robust section of our system (probably because the library we used is quite mature!). In this module, the owner-group and visitor-group fingerprints were enrolled beforehand. Then, whenever an input fingerprint is not valid in the database, the program returns a failure message, which is almost always caused by the finger not being placed properly on the sensor.

GUI

There are two modes in this part: owner and visitor. To distinguish the owner user group from the visitor user group, the messaging module writes temporary log files (a.k.a. our self-implemented FIFO) indicating the corresponding user group. In owner mode, the face-recognition, speaker-recognition, and fingerprint modules execute sequentially; if the owner passes all three tests, the servo-controller module is called to open the door. In visitor mode there are two sub-modes: "Knock-Knock" and "Voice Message". If "Knock-Knock" is chosen, the verification phase runs all over again, its results are recorded and sent to the web server through the messaging module, and the messaging module is then called again to notify the web server that a visitor is waiting for a response. If the owner responds on the website, the GUI asks the visitor to look into the camera for manual verification; otherwise, the GUI redirects the user to "Voice Message" mode, where the visitor is asked to leave a ten-second voice message together with a selfie. The GUI consists of 17 layers that implement the different functionalities across these modes. The synchronization between layers must be handled carefully: for instance, the camera cannot be initialized twice by two different libraries, so before one library calls init() on the camera, make sure the other library has closed it. Synchronization of the other hardware devices is also worth attention, even though they maintain thread safety themselves.

Servo Controller

The continuous servo was controlled by a PWM signal with different frequencies and duty cycles for counter-clockwise and clockwise rotation. The state of the servo was recorded in a log file for messaging. Additionally, to implement a door latch, a stepper servo that rotates over specific angles (step distances) would be a better choice than a continuous servo.
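The pulse-width arithmetic behind the PWM control can be sketched as follows (the pulse widths are the usual hobby-servo conventions, not calibrated values from our hardware):

```python
FREQ_HZ = 50.0                # standard 20 ms servo PWM period
PERIOD_MS = 1000.0 / FREQ_HZ

def duty_cycle(pulse_ms):
    """Convert a pulse width in milliseconds to an RPi.GPIO duty-cycle %."""
    return 100.0 * pulse_ms / PERIOD_MS

# For a continuous servo, pulses shorter than ~1.5 ms spin one way and
# longer ones spin the other; 1.5 ms stops it.
CLOCKWISE = duty_cycle(1.3)
STOP = duty_cycle(1.5)
COUNTER_CLOCKWISE = duty_cycle(1.7)
```

On the Pi these percentages would be fed to `GPIO.PWM(pin, FREQ_HZ)` via `ChangeDutyCycle()`.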

Messaging Module

In this module, a simple .txt file was used as a log file (our simple version of a FIFO) to record the user-group state, intermediate results, and notification info. The file-writing and file-reading methods are implemented inside the module.

Web Server

The Flask web framework was used to implement the web server. The server renders the web templates: home page, log-in page, index pages, and log-out page. The video-streaming function is integrated into the index-1 page, which inherits from the index page; the MJPG library continuously sends JPEG pictures to the browser. The door-open button is realized in JavaScript, and the voice-message playback function is integrated into the index-2 page. A red dot representing a new-message notification appears on the "Visitor Monitor" and "Voice Message" sidebar entries whenever there is an update.


Project Result


Generic placeholder image

Smart Door System Model


The smart door system turned out to be consistent with our expectations, both locally as the offline GUI and remotely as the online website. In the offline GUI, the owner can open the door through three steps: face recognition, speaker recognition, and fingerprint recognition. If a step fails, the user is asked to retry or go back to the homepage; on success, the system proceeds to the next step. In visitor mode, the visitor has two options, "Knock-Knock" and "Voice Message". If "Knock-Knock" is chosen, the visitor also goes through the three tests, but without retries (saving time at the door). This verification indicates how closely the visitor matches the pre-recorded user group, helping the house owner decide whether to open the door. The visitor then waits for the owner's response from the website (on timeout, the visitor is redirected to the "Voice Message" layer). If "Voice Message" is chosen, the visitor is asked to leave a 10-second voice message along with a selfie.


Generic placeholder image

Interactive Front-end


As for the online website, the owner first logs in with a user ID and password. In the "Visitor Monitor" section, the owner can check the verification results of the current visitor and video-stream with him/her before deciding whether to open the door. In the voice-message section, the owner can play back the audio messages together with the visitors' selfies. Once a knock-knock message or a new voice message is uploaded to the server, the owner is notified by the red dots on the sidebar.


Conclusions


The final project was planned, developed, and demonstrated as expected. First, face recognition was implemented, since it was considered the hardest part of the project. Then a simple version of the web server was developed to render the different templates with log-in and log-out. Next, fingerprint recognition was realized; the sensor's wiring problem was resolved with a TTL-to-USB converter. The video-streaming function was then implemented on the web server, and speaker recognition was realized after searching through a great many libraries (including APIs). Finally, the graphical user interface (GUI) integrated all the functionality together: face, speaker, and fingerprint recognition, servo control, and messaging. The most challenging part was synchronizing the init() calls of the different PiCamera libraries, because the camera cannot be initialized twice by different processes (video streaming, video display, face recognition, etc.); this was resolved by closing the camera before each process's init() in sequence. The problem of messaging between the GUI script and the web server script was solved by maintaining log files. The result is a well-tested, robust smart door system that serves both at home and online.


Future Work


Many further improvements could be made to perfect our smart door system. First, the training pictures, voice records, fingerprint records, user IDs, and passwords could be stored in a database (either self-built or MySQL tables) to better manage the user groups, which would also make it easy to add more users to the system. Second, the owner can currently only watch the video stream without audio, which limits the communication between owner and visitor. Additionally, to implement a door latch, a stepper servo that rotates over specific angles (step distances) would be a better choice than a continuous servo. Finally, the latency of the video streaming is somewhat high because it is currently based on TCP; a UDP-based video stream would be worth trying for faster transport. Alternatively, improvements could be made by refining the current streaming algorithm or finding a more efficient streaming library.


Work Distribution


Generic placeholder image

Project Group Picture



Generic placeholder image

Rongguang Wang

rw564@cornell.edu

Designed the front-end web interface and constructed the miniature model. Wrote the interactive Flask application that handles POST requests and converts them into messages sent over the network. Additionally, implemented the speaker-recognition function of the system.

Generic placeholder image

Yuan He

yh772@cornell.edu

Designed the graphical user interface (GUI) and the hardware system, including the miniature house and the layout of the electrical components in it. Implemented the face-recognition and fingerprint-recognition functionality and the communication between local processes. Conducted system robustness testing.


Parts List

  • Raspberry Pi Camera Module V2        $19.37
  • Adafruit Electret Microphone Amplifier      $7.99
  • KOOKYE Optical Fingerprint Reader Sensor  $22.99
  • USB 2.0 to TTL 6PIN CP2102 Converter    $5.99
  • Chip board (for modeling)            $5.00
  • Servo, Resistors and Wires - Provided in lab

Total: $61.34


References

PiCamera Documentation
Face Recognition
OpenCV Documentation
Flask Documentation
MJPG Streamer
Fingerprint Python Library
Fingerprint Sensor User Manual
Speaker Recognition
RPi GPIO Documentation

Code Appendix

GUI.py

offline user interface

import sys
import RPi.GPIO as GPIO
import os
import shutil
import pygame
import time
import cv2
import pygame.camera
import numpy as np
import speaker_recognition
import finger_recognition
import door_control
import message_control
import user_control
import visitor_verification_upload
from PIL import Image
from io import BytesIO
from picamera.array import PiRGBArray
from picamera import PiCamera
from sound_recorder import record
from pygame.locals import *


GPIO.setmode(GPIO.BCM)
GPIO.setup(6, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)

os.putenv('SDL_VIDEODRIVER','fbcon')
os.putenv('SDL_FBDEV', '/dev/fb1')
os.putenv('SDL_MOUSEDRV', 'TSLIB')
os.putenv('SDL_MOUSEDEV', '/dev/input/touchscreen')

pygame.init()
pygame.mouse.set_visible(False)
 
#Setup frame's size
size = width, height = 320,240
screen  = pygame.display.set_mode(size)

# Countdown interval
ct = 0.6

# Color Library
WARM = (254,221,120)
CREAM = (255,234,180)
BLACK = (0,0,0)
HORIZON = (160,186,205)
LAKE = (177,212,219)
HOME = (125,180,205)
WHITE = (255,255,255)
RED = (255, 0 , 0)

#Fonts
font1 = '/home/pi/ttf_font/DK Gamboge.otf'

#home level
level = 0

# Define buttons
home_buttons={'Owner':(80,120), 'Visitor':(240,120)}
visitor_buttons={'Knock-Knock':(160, 65), 'Leave A Message':(160, 175)}
message_buttons={'Remake voice message':(150, 40), 'Delete and back to Home':(160, 120),'continue and leave a selfie':(160, 200)}
selfie_buttons={'Retake a selfie':(150, 40), 'Delete and back to Home':(160, 120),'Done':(160, 200)}

# index for message
ii = 0

''' Text Drawing '''
def draw_text(text, pos, font_size, color):
    my_font = pygame.font.Font(font1, font_size)
    text_surface = my_font.render(text, True, color)
    rect = text_surface.get_rect(center=pos)
    screen.blit(text_surface, rect)

######### Loading.... Clearing Cache ####################

screen.fill(LAKE)
draw_text("Loading...", (160,120), 40, WHITE)
pygame.display.flip()

shutil.rmtree('/home/pi/smart_door_system/static/media/audio')
os.mkdir('/home/pi/smart_door_system/static/media/audio')
shutil.rmtree('/home/pi/smart_door_system/static/media/image')
os.mkdir('/home/pi/smart_door_system/static/media/image')

####### Loading.... Initialize door_state and user_state ##########

file = open("Global_variable_for_door.txt","w") 
file.write(str(0) + "," + str(0))
file.close()
message_control.file_write(0)
user_control.file_write(0)
visitor_verification_upload.file_write(0,0,0)

##### Loading... Face recognition Training ###################

cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
recognizer = cv2.createLBPHFaceRecognizer()
recognizer2 = cv2.createLBPHFaceRecognizer()

def get_images_and_labels(path):
    image_paths = [os.path.join(path, f) for f in os.listdir(path) if not f.endswith('.sad')]
    images = []
    labels = []
    for image_path in image_paths:
        image_pil = Image.open(image_path).convert('L')
        image = np.array(image_pil, 'uint8')
        nbr = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        faces = faceCascade.detectMultiScale(image)
        for (x, y, w, h) in faces:
            images.append(image[y: y + h, x: x + w])
            labels.append(nbr)
    return images, labels

            
def face_train(path, recognizer):
    images, labels = get_images_and_labels(path)
    cv2.destroyAllWindows()
    recognizer.train(images, np.array(labels))

    image_paths = [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.sad')]
    for image_path in image_paths:
        predict_image_pil = Image.open(image_path).convert('L')
        predict_image = np.array(predict_image_pil, 'uint8')
        faces = faceCascade.detectMultiScale(predict_image)
        for (x, y, w, h) in faces:
            nbr_predicted, conf = recognizer.predict(predict_image[y: y + h, x: x + w])
            nbr_actual = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
            if nbr_actual == nbr_predicted:
                print "{} is Correctly Recognized with confidence {}".format(nbr_actual, conf)
            else:
                print "{} is Incorrectly Recognized as {}".format(nbr_actual, nbr_predicted)
            
            
face_train('./cornellfaces',recognizer)
face_train('./visitorfaces',recognizer2)
            
                      
draw_text("Loading...", (160,120), 40, LAKE)
pygame.display.flip()

############# Functions ###########################################

'''face_recognition'''
def face_recognize(flag):
    global recognizer2, recognizer
    if flag == -1:
        reco = recognizer2
        path = './visitorfaces'
    else:
        reco = recognizer
        path = './cornellfaces'
        
    t_start = time.time()
    fps = 0
    face_recognized = 0
    # Capture frames from the camera
    for frame in camera.capture_continuous( rawCapture, format="bgr", use_video_port=True ):
        image = frame.array    
        gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
    # Detect the face in the image
        face_income = faceCascade.detectMultiScale(gray)
        image_paths = [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.sad')]    
        for (x, y, w, h) in face_income:
            nbr_predicted, conf = reco.predict(gray[y: y + h, x: x + w])        
            for image_path in image_paths:
                #nbr_actual = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
                print "Confidence {}".format(conf)
                if nbr_predicted == 6 and conf <= 130:
                    face_recognized += 1
                elif nbr_predicted == 1 and conf <= 70:
                    face_recognized += 1
                elif nbr_predicted == 7 and conf <= 70:
                    face_recognized += 1
        if face_recognized >= 1:
            #print "CONGRATULATIONS!!!{} is Correctly Recognized with confidence {}".format(nbr_predicted, conf)
            print "CONGRATULATIONS!!!"
            return "CONGRATULATIONS!"#{} is Correctly Recognized with confidence {}"#.format(nbr_actual1, conf)
        fps = fps + 1
        rawCapture.truncate( 0 )
        if fps >= 2:
            #print "SORRY...Incorrect Recognized as {} with confidence {}".format(nbr_predicted, conf)
            print "SORRY..."
            return "Please retry."

    
''' Count-Down Animation '''  
def count_down(text):
    draw_text(text, (160,80), 30, WHITE)
    draw_text("in ", (140,120), 40, WHITE)
    for t in range(1,4):
        t = 4-t
        if t < 4:
            screen.fill(LAKE)
            draw_text(text, (160,80), 30, WHITE)
            draw_text("in ", (140,120), 40, WHITE)
        draw_text(str(t), (170,120), 40, WHITE)
        pygame.display.flip()
        time.sleep(ct)
        #delay_and_home(ct)


''' Init camera through pygame library '''
def smooth_init():
    pygame.camera.init()
    cam_list = pygame.camera.list_cameras()
    cam = pygame.camera.Camera(cam_list[0],(320,240))
    cam.start()
    return cam


''' Return back to home directory '''
def delay_and_home(dt): # ct is the delay interval
    pygame.draw.rect(screen,HORIZON,[0,170,320,240],0) #0 means finishing filling
    draw_text("Home", (160,205),28, WHITE)
    pygame.display.flip()
    global level
    ts = time.time()
    while time.time()- ts < dt:
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONDOWN:
                pos = pygame.mouse.get_pos()
                x,y = pos
                if y > 170:
                    print "'Home' pressed"
                    level = 0
                    break

############### Main Method #######################

start_time = time.time()
door_state = True

while  time.time() - start_time < 1800:
    
    screen.fill(LAKE)
    
    if ( not GPIO.input(6) ):
        print "Button 6 pressed"
        door_control.door_ctrl()
        #Debounce
        time.sleep(0.1)
        
    if ( not GPIO.input(27) ):
        print "Button 27 pressed,quit"
        quit()

    ############# HomePage #############################
    
    if level == 0:
        
        pygame.draw.rect(screen,HORIZON,[0,0,160,240],0) #0 means finishing filling
        user_control.file_write(0)

        for  event  in  pygame.event.get():
            if event.type == pygame.MOUSEBUTTONUP:
                pos = pygame.mouse.get_pos()
                x,y = pos
                if x > 300:
                    print "'Quit' pressed"
                    quit()
                if x > 160:
                    print "'Visitor' pressed"
                    user_control.file_write(-1)
                    level = 2                    
                else:
                    print "'Owner' pressed"
                    user_control.file_write(1)
                    level = 1                   
        for  text, pos  in  home_buttons.items():
            draw_text(text, pos, 40, WHITE)
    
    ################ Owner ##################################
    
    if level == 1:
        
        screen.fill(LAKE)
        draw_text("Put your head in frame",(160,120), 30, WHITE)
        pygame.display.flip()
        time.sleep(4*ct)
        count_down("Tap anywhere when ready")
        cam = smooth_init()
        camera = cam
        
        level = 3
                          
    ############## Owner Mirror #############################
    
    if level == 3:
        
        global cam
        image1 = cam.get_image()
        image1 = pygame.transform.scale(image1,(640,480))
        ecli = pygame.draw.ellipse(image1, WHITE, [70, 0, 180, 240], 4)
        screen.blit(image1,(0,0))
        pygame.display.update()
        
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONUP:
                print "'Face Recognition Authentation' pressed"
                cam.stop()
                global camera
                camera = PiCamera()
                camera.resolution = ( 640, 480 )
                camera.framerate = 20
                rawCapture = PiRGBArray( camera, size=( 640, 480 ) )
                level = 4
                     
    ################ Face-Recognition ######################
    
    if level == 4:
        #screen.fill(LAKE)
        flag = user_control.file_read()
        text = face_recognize(flag)
        #draw_text(text, (170,110), 40, WHITE)
        
        if text=="CONGRATULATIONS!" :
            global camera
            camera.close()
            if flag == -1:
                visitor_verification_upload.column_write(0,1)
            level = 5
        else:
            
            if flag == -1:
                visitor_verification_upload.column_write(0,-1)
                global camera
                camera.close()
                level = 5
            else:
                level = 7
                              
    ############ Face-Recognition-retry ######################
    
    if level == 7:
        screen.fill(LAKE)
        draw_text("Please retry.", (170,110), 40, WHITE)
        
        pygame.draw.rect(screen,HOME,[160,170,160,240],0) #0 means finishing filling
        draw_text("Retry", (240,205),28, WHITE)
        
        pygame.draw.rect(screen,HORIZON,[0,170,160,240],0) #0 means finishing filling
        draw_text("Home", (80,205),28, WHITE)
        
            
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONUP:
                pos = pygame.mouse.get_pos()
                x,y = pos
                
                if y > 170:
                    if x > 160:
                        print "back to last level pressed"
                        global camera
                        camera.close()
                        global cam
                        cam = smooth_init()
                        level = 3
                    else:
                        print "'Home' pressed"
                        global camera
                        camera.close()
                        level = 0
                            
    ############# Speaker-Recognition ######################            
                      
    if level == 5:
        screen.fill(LAKE)
        count_down("Start speaking")
        screen.fill(LAKE)
        draw_text("Start speaking", (160,80), 30, WHITE)
        pygame.display.flip()
        flag = user_control.file_read()
        text = speaker_recognition.find_speaker(flag)
        
        if text == "CONGRATULATIONS!":
            if flag == -1:
                visitor_verification_upload.column_write(1,1)
            level = 6
        else:
            
            if flag == -1:
                visitor_verification_upload.column_write(1,-1)
                level = 6
            else:
                level = 8
            
            
    ########## Speaker-Recognition-retry ####################
    
    if level == 8:
        screen.fill(LAKE)
        draw_text("Please retry.", (170,110), 40, WHITE)
        
        pygame.display.flip()
        
        delay_and_home(4*ct)
       
        if level != 0:
            level = 5
        
    ######### Fingerprint-Recognition ########################
        
    if level == 6:
        screen.fill(LAKE)
        count_down("Put finger on device")
        screen.fill(LAKE)
        draw_text("Put finger on device", (160,80), 30, WHITE)
        pygame.display.flip()
        
        flag = user_control.file_read()
        text, owner = finger_recognition.finger(flag)
        
        if text == "CONGRATULATIONS!":
            if flag == -1:
                level = 11
                visitor_verification_upload.column_write(2,1)
            else:
                level = 10
        else:
            
            if flag == -1:
                visitor_verification_upload.column_write(2,-1)
                level = 11
            else:
                level = 9

            
    ############ Finger-Recognition-retry ####################
    
    if level == 9:
        screen.fill(LAKE)
        draw_text("Please retry.", (170,110), 40, WHITE)
     
        pygame.display.flip()
        
        delay_and_home(4*ct)
        
        if level != 0:
            level = 6
        
    ################ Welcome! ###############################
        
    if level == 10:
        global owner
        if owner == 0:
            people = "Yuan He"
        elif owner == 1:
            people = "Rongguang Wang"
        screen.fill(LAKE)
        draw_text("Welcome home : )", (160,90), 35, WHITE)
        draw_text(people, (160,150), 35, WHITE)
        pygame.display.flip()
        
        #open the door, let'em in, and close the door
        door_control.door_ctrl()
        time.sleep(4*ct)
        door_control.door_ctrl()
        
        level = 0
            
    ################ Visitor ################################
  
    if level == 2:
        
        screen.fill(LAKE)
        pygame.draw.rect(screen,HORIZON,[0,0,320,120],0) # width 0 fills the rectangle
        
        for text, pos in visitor_buttons.items():
            draw_text(text, pos, 40, WHITE)
        
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONUP:
                pos = pygame.mouse.get_pos()
                x,y = pos
                if y < 120:
                    level = 1
                else:
                    level = 12
                         
    ############## Knock-knock #############################
    
    if level == 11:
        screen.fill(LAKE)
        draw_text("Your request was sent!", (160,80), 30, WHITE)
        draw_text("Waiting for response...", (160,120), 30, WHITE)
        pygame.display.flip()
        #message sent
        message_control.file_write(1)
        # wait for response
        time.sleep(10*ct)
        if message_control.file_read() == 1:
            screen.fill(LAKE)
            draw_text("Owner not online", (160,80), 30, WHITE)
            draw_text("please leave a message", (160,120), 30, WHITE)
            pygame.display.flip()
            time.sleep(6*ct)
            level = 12
        else:
            screen.fill(LAKE)
            draw_text("Owner is responding", (160,80), 30, WHITE)
            draw_text("Please look at the camera", (160,120), 30, WHITE)
            pygame.display.flip()
            time.sleep(20*ct)
            level = 0
        
    ############# Leave a voice-message #####################
    
    if level == 12:
        
        count_down("Leave a 10s voice message")
        screen.fill(LAKE)
        draw_text("Leave a 10s voice message", (160,80), 30, WHITE)
        pygame.display.flip()
        
        global ii
        ii+=1
        record(10,'/home/pi/smart_door_system/static/media/audio/' + str(ii) + '.wav')
        
        level = 13

    ########## re-leave a message or continue? ###############
    
    if level == 13:
        
        screen.fill(LAKE)
        pygame.draw.rect(screen,HORIZON,[0,80,320,160],0)
        pygame.draw.rect(screen,HOME,[0,160,320,240],0)
        
        for text, pos in message_buttons.items():
            draw_text(text, pos, 30, WHITE)
        
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONUP:
                pos = pygame.mouse.get_pos()
                x,y = pos
                if y < 80:
                    print "re-leave a voice message"
                    os.remove('/home/pi/smart_door_system/static/media/audio/' + str(ii) + '.wav')
                    ii-=1
                    level = 12
                elif y < 160:
                    print "delete and back to home"
                    os.remove('/home/pi/smart_door_system/static/media/audio/' + str(ii) + '.wav')
                    ii-=1
                    level = 0
                else:
                    level = 14
        
    ############### Leave a selfie ##########################
    
    if level == 14:
        count_down("tap anywhere to selfie")
        cam = smooth_init()
        camera = cam
        level = 15
        
    ############# Leave a selfie ###########################
    
    if level == 15:
        global cam
        #cam = smooth_init()
        image1 = cam.get_image()
        image1 = pygame.transform.scale(image1,(640,480))
        screen.blit(image1,(0,0))
        pygame.display.update()
        
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONUP:
                image = cam.get_image()
                pygame.image.save(image,'/home/pi/smart_door_system/static/media/image/' + str(ii) + '.jpg')
                screen.fill(WHITE)
                pygame.display.flip()
                time.sleep(0.05)
                screen.blit(image,(0,0))
                time.sleep(0.05)
                pygame.display.flip()
                cam.stop()
                level = 16
                
    ############# re-take a selfie #######################
    
    if level == 16:
        
        screen.fill(LAKE)
        pygame.draw.rect(screen,HORIZON,[0,80,320,160],0)
        pygame.draw.rect(screen,HOME,[0,160,320,240],0)
        
        for text, pos in selfie_buttons.items():
            draw_text(text, pos, 30, WHITE)
        
        for event in pygame.event.get():
            if event.type == pygame.MOUSEBUTTONUP:
                pos = pygame.mouse.get_pos()
                x,y = pos
                if y < 80:
                    print "re-take a selfie"
                    os.remove('/home/pi/smart_door_system/static/media/image/' + str(ii) + '.jpg')
                    cam = smooth_init()
                    camera = cam
                    level = 15
                elif y < 160:
                    print "delete and back to home"
                    os.remove('/home/pi/smart_door_system/static/media/audio/' + str(ii) + '.wav')
                    os.remove('/home/pi/smart_door_system/static/media/image/' + str(ii) + '.jpg')
                    ii-=1
                    level = 0
                else:
                    print "Bye-bye"
                    level = 17
                    
    ############## Bye-bye ##############################
    
    if level == 17:
    
        screen.fill(LAKE)
        draw_text("Thx, bye-bye ; )", (160,120), 35, WHITE)
        pygame.display.flip()
        
        time.sleep(3*ct)
        
        level = 0
    
    
    pygame.display.flip()
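The GUI above is a single state machine driven by the `level` variable, with each numbered section handling one screen. As a reference, the states shown in the listing can be summarized in a dict (our summary of the sections above, not code from the project; levels 1 and 3 belong to sections outside this excerpt):

```python
# Screens of the GUI state machine in GUI_2.py, keyed by `level`.
LEVELS = {
    0: "home",
    2: "visitor menu",
    4: "face recognition",
    5: "speaker recognition",
    6: "fingerprint recognition",
    7: "face retry",
    8: "speaker retry",
    9: "fingerprint retry",
    10: "welcome / open door",
    11: "knock-knock (notify owner)",
    12: "leave a voice message",
    13: "re-record or continue",
    14: "selfie countdown",
    15: "selfie preview / capture",
    16: "re-take or finish",
    17: "bye-bye",
}
print(LEVELS[10])
```

Because each section is a plain `if level == N:` block inside one loop, a failed check simply routes to the matching retry state (7, 8, or 9) on the next pass.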

web.py

Flask web server

from importlib import import_module
import os, shutil
from flask import Flask, render_template, redirect, \
    url_for, request, session, flash, Response
from functools import wraps
from camera_pi import Camera
import finger_recognition
import door_control
import message_control
import visitor_verification_upload
import time


music_dir = '/home/pi/smart_door_system/static/media/audio'
image_dir = '/home/pi/smart_door_system/static/media/image'

music_files_original = [f for f in os.listdir(music_dir) if f.endswith('wav')]
music_files_number_original = len(music_files_original)

data = {'message': 'false'}

def message_b(num):
    music_files = [f for f in os.listdir(music_dir) if f.endswith('wav')]
    music_files_number = len(music_files)
    global data2
    if music_files_number != num:
        data2 = {'message': 'true'}
    else:
        data2 = {'message': 'false'}
        
def message_a():
    global data1
    if message_control.file_read() == 1:
        data1 = {'message': 'true'}
    else:
        data1 = {'message': 'false'}

# create the application object
app = Flask(__name__)

# config
app.secret_key = 'smart door'

# login required decorator
def login_required(f):
    @wraps(f)
    def wrap(*args, **kwargs):
        if 'logged_in' in session:
            return f(*args, **kwargs)
        else:
            flash('You need to login first.')
            return redirect(url_for('login'))
    return wrap


# use decorators to link the function to a url
@app.route('/')
#@login_required
@app.route('/home')
def home():
    return render_template('home.html')  
    # return "Hello, World!"  # return a string


# route for handling the login page logic
@app.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    if request.method == 'POST':
        if (request.form['username'] != 'rw564') \
                or request.form['password'] != 'rw564':
            error = 'Invalid Credentials. Please try again.'
        else:
            session['logged_in'] = True
            global music_files_number_original
            message_b(music_files_number_original)
            global data2
            message_a()
            global data1
            return render_template('index.html',
                                   title = 'Message',
                                   data1 = data1,
                                   data2 = data2)
    return render_template('login.html', error=error)


@app.route('/logout')
#@login_required
def logout():
    session.pop('logged_in', None)
    flash('You are logged out now.')
    return render_template('logout.html')  # render a template


@app.route('/index_1')
#@login_required
def index_1():
    global music_files_number_original
    message_b(music_files_number_original)
    global data2
    data1 = {'message': 'false'}
    message_control.file_write(0)
    a,b,c=visitor_verification_upload.file_read()
    return render_template('index_1.html',
                           data1 = data1,
                           data2 = data2,
                           a=a,
                           b=b,
                           c=c)  # render a template

def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')


@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')


@app.route('/index_2')
#@login_required
def index_2():
    music_files = [f for f in os.listdir(music_dir) if f.endswith('wav')]
    music_files_number = len(music_files)
    image_files = [f for f in os.listdir(image_dir) if f.endswith('jpg')]
    image_files_number = len(image_files)
    global music_files_number_original
    music_files_number_original = music_files_number 
    data2 = {'message': 'false'}
    message_a()
    global data1
    return render_template("index_2.html",
                        title = 'Message',
                        music_files_number = music_files_number,
                        music_files = music_files,
                        image_files_number = image_files_number,
                        image_files = image_files,
                        data1 = data1,
                        data2 = data2)


@app.route('/open', methods=['GET','POST'])
def open():
    print('Open the door!')
    door_control.door_ctrl()
    return "Door opening..."


@app.route('/delete_visitor', methods=['GET','POST'])
def delete_visitor():
    print('Delete visitor!')
    visitor_verification_upload.file_write(0,0,0)
    return "Deleting..."


@app.route('/delete_voice', methods=['GET','POST'])
def delete_voice():
    print('Delete message!')
    shutil.rmtree('/home/pi/smart_door_system/static/media/audio')
    os.mkdir('/home/pi/smart_door_system/static/media/audio')
    shutil.rmtree('/home/pi/smart_door_system/static/media/image')
    os.mkdir('/home/pi/smart_door_system/static/media/image')
    return "Delete...message"


# start the server with the 'run()' method
if __name__ == '__main__':
    app.run(host="0.0.0.0", debug=True, threaded=True)
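The video stream works because each chunk yielded by `gen()` is a complete multipart part: the browser splits the stream on the `frame` boundary declared in the response mimetype and swaps each new JPEG into the `<img>` tag. A standalone sketch of that byte layout (the fake JPEG bytes are placeholders):

```python
def mjpeg_part(frame):
    """One part of a multipart/x-mixed-replace stream, matching the
    byte layout yielded by gen() in web.py."""
    return (b'--frame\r\n'
            b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

part = mjpeg_part(b'\xff\xd8 fake jpeg data \xff\xd9')
# Each part starts with the boundary named in the Response mimetype
# ('multipart/x-mixed-replace; boundary=frame').
print(part.split(b'\r\n')[0])
```

The boundary string in each part must match the `boundary=frame` parameter of the mimetype exactly, or the browser will never split the stream.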

speaker_recognition.py

speaker recognition function

from piwho import recognition
from piwho import vad
from sound_recorder import record

def find_speaker(flag):
    
    recog = recognition.SpeakerRecognizer()

    # Record voice until silence is detected
    # save WAV file
    #vad.record()
    record(5,'test.wav')

    # use the newly recorded file for recognition
    name = recog.identify_speaker('test.wav')
    dictn = recog.get_speaker_scores()
    
    print(name[0])
    print(dictn)
    
    if flag == 1:
        if float(dictn[name[0]]) < 0.5:
            print ('Congratulations!!! with distance ' + dictn[name[0]])
            return "CONGRATULATIONS!"
        else:
            print ('Sorry... with distance ' + dictn[name[0]])
            return "Please retry."
    elif flag == -1:
        if float(dictn[name[0]]) < 0.4:
            print ('Congratulations!!! with distance ' + dictn[name[0]])
            return "CONGRATULATIONS!"
        else:
            print ('Sorry... with distance ' + dictn[name[0]])
            return "Please retry."

sound_recorder.py

sound recording function

import pyaudio
import wave

def record(t,path):
 
    FORMAT = pyaudio.paInt16
    CHANNELS = 1
    RATE = 44100
    CHUNK = 8192
    RECORD_SECONDS = t
    WAVE_OUTPUT_FILENAME = path #"test.wav"
 
    audio = pyaudio.PyAudio()
    for i in range(audio.get_device_count()):
        dev = audio.get_device_info_by_index(i)
        print((i,dev['name'],dev['maxInputChannels']))
    # start Recording
    stream = audio.open(format=FORMAT, channels=CHANNELS,
                    rate=RATE, input=True,
                    input_device_index = 0,
                    frames_per_buffer=CHUNK)
    print "recording..."
    frames = []
 
    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data = stream.read(CHUNK)
        frames.append(data)
    print "finished recording"
 
 
    # stop Recording
    stream.stop_stream()
    stream.close()
    audio.terminate()
 
    waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
    waveFile.setnchannels(CHANNELS)
    waveFile.setsampwidth(audio.get_sample_size(FORMAT))
    waveFile.setframerate(RATE)
    waveFile.writeframes(b''.join(frames))
    waveFile.close()
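One subtlety in the loop bound `int(RATE / CHUNK * RECORD_SECONDS)`: under Python 2 (which this project runs), `RATE / CHUNK` is integer division and truncates to 5 chunks per nominal second, so the captured clip comes out slightly shorter than `t`. A quick check of the actual duration:

```python
RATE, CHUNK = 44100, 8192

def recorded_seconds(t):
    """Audio actually captured by record(t, ...) under Python 2, where
    RATE / CHUNK in the loop bound truncates to 44100 // 8192 == 5."""
    chunks = (RATE // CHUNK) * t          # Python-2 semantics of the loop bound
    return chunks * CHUNK / float(RATE)

# a nominal 5 s recording captures 25 chunks = 204800 samples
print(round(recorded_seconds(5), 2))      # ~4.64 s
```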

finger_recognition.py

fingerprint recognition function

import hashlib
from pyfingerprint.pyfingerprint import PyFingerprint


def finger(flag):

    # initialize the sensor
    try:
        f = PyFingerprint('/dev/ttyUSB0', 57600, 0xFFFFFFFF, 0x00000000)

        if ( f.verifyPassword() == False ):
            raise ValueError('The given fingerprint sensor password is wrong!')

    except Exception as e:
        print('The fingerprint sensor could not be initialized!')
        print('Exception message: ' + str(e))
        exit(1)

    # get some sensor information
    print('Currently used templates: ' + str(f.getTemplateCount()) +'/'+ str(f.getStorageCapacity()))

    # search the finger and calculate hash
    try:
        while True:
            print('Waiting for finger...')

            # wait until a finger is read
            while ( f.readImage() == False ):
                pass

            # convert the read image to characteristics and store them in charbuffer 1
            f.convertImage(0x01)

            # search for a matching template
            result = f.searchTemplate()

            positionNumber = result[0]
            accuracyScore = result[1]

            if ( positionNumber == -1 ):
                print('No match found, please try again')
                return "Please retry.", positionNumber
            #exit(0)
            else:
                print('Found template at position #' + str(positionNumber))
                print('The accuracy score is: ' + str(accuracyScore))

                # load the found template into charbuffer 1
                f.loadTemplate(positionNumber, 0x01)

                # download the characteristics of the template in charbuffer 1
                characteristics = str(f.downloadCharacteristics(0x01)).encode('utf-8')

                # hash the template characteristics
                print('SHA-2 hash of template: ' + hashlib.sha256(characteristics).hexdigest())
                #exit(0)
                
                if flag == 1:
                    if positionNumber == 0 or positionNumber == 1:
                        return "CONGRATULATIONS!", positionNumber
                    else:
                        return "Please retry.", positionNumber
                elif flag == -1:
                    if positionNumber == 2:
                        return "CONGRATULATIONS!", positionNumber
                    else:
                        return "Please retry.", positionNumber
                
    except Exception as e:
        print('Operation failed!')
        print('Exception message: ' + str(e))
        exit(1)
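`searchTemplate()` returns the slot number of the best-matching stored template (`-1` for no match), so identity comes straight from where each print was enrolled: slots 0 and 1 hold the two owners, slot 2 the registered visitor. Restated as a pure function for clarity (the helper name is ours):

```python
def finger_accepted(position, flag):
    """Match rule from finger(): searchTemplate() gives the slot of the
    best match (-1 = none). Slots 0/1 = owners, slot 2 = visitor."""
    if flag == 1:                  # owner mode
        return position in (0, 1)
    if flag == -1:                 # visitor mode
        return position == 2
    return False

print(finger_accepted(1, 1))       # owner slot in owner mode
print(finger_accepted(2, 1))       # visitor slot in owner mode
```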

camera_pi.py

camera initialization function for video streaming

import io
import time
import picamera
from base_camera import BaseCamera


class Camera(BaseCamera):
    @staticmethod
    def frames():
        with picamera.PiCamera() as camera:
            # let camera warm up
            time.sleep(2)
            stream = io.BytesIO()
            for foo in camera.capture_continuous(stream, 'jpeg',
                                                 use_video_port=True):
                # return current frame
                stream.seek(0)
                yield stream.read()
                # reset stream for next frame
                stream.seek(0)
                stream.truncate()

base_camera.py

camera control function for video streaming

import time
import threading
try:
    from greenlet import getcurrent as get_ident
except ImportError:
    try:
        from thread import get_ident
    except ImportError:
        from _thread import get_ident


class CameraEvent(object):
    """An Event-like class that signals all active clients when a new frame is
    available.
    """
    def __init__(self):
        self.events = {}

    def wait(self):
        """Invoked from each client's thread to wait for the next frame."""
        ident = get_ident()
        if ident not in self.events:
            # this is a new client
            # add an entry for it in the self.events dict
            # each entry has two elements, a threading.Event() and a timestamp
            self.events[ident] = [threading.Event(), time.time()]
        return self.events[ident][0].wait()

    def set(self):
        """Invoked by the camera thread when a new frame is available."""
        now = time.time()
        remove = None
        for ident, event in self.events.items():
            if not event[0].isSet():
                # if this client's event is not set, then set it
                # also update the last set timestamp to now
                event[0].set()
                event[1] = now
            else:
                # if the client's event is already set, it means the client
                # did not process a previous frame
                # if the event stays set for more than 5 seconds, then assume
                # the client is gone and remove it
                if now - event[1] > 5:
                    remove = ident
        if remove:
            del self.events[remove]

    def clear(self):
        """Invoked from each client's thread after a frame was processed."""
        self.events[get_ident()][0].clear()


class BaseCamera(object):
    thread = None  # background thread that reads frames from camera
    frame = None  # current frame is stored here by background thread
    last_access = 0  # time of last client access to the camera
    event = CameraEvent()

    def __init__(self):
        """Start the background camera thread if it isn't running yet."""
        if BaseCamera.thread is None:
            BaseCamera.last_access = time.time()

            # start background frame thread
            BaseCamera.thread = threading.Thread(target=self._thread)
            BaseCamera.thread.start()

            # wait until frames are available
            while self.get_frame() is None:
                time.sleep(0)

    def get_frame(self):
        """Return the current camera frame."""
        BaseCamera.last_access = time.time()

        # wait for a signal from the camera thread
        BaseCamera.event.wait()
        BaseCamera.event.clear()

        return BaseCamera.frame

    @staticmethod
    def frames():
        """"Generator that returns frames from the camera."""
        raise RuntimeError('Must be implemented by subclasses.')

    @classmethod
    def _thread(cls):
        """Camera background thread."""
        print('Starting camera thread.')
        frames_iterator = cls.frames()
        for frame in frames_iterator:
            BaseCamera.frame = frame
            BaseCamera.event.set()  # send signal to clients
            time.sleep(0)

            # if there hasn't been any clients asking for frames in
            # the last 10 seconds then stop the thread
            if time.time() - BaseCamera.last_access > 10:
                frames_iterator.close()
                print('Stopping camera thread due to inactivity.')
                break
        BaseCamera.thread = None
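The heart of `CameraEvent` is one `threading.Event` per client thread, keyed by thread id: the camera thread sets every event when a frame arrives, and each client clears only its own after reading, so one slow client never blocks the others. A minimal self-contained demonstration of that pattern (synthetic "frame", no camera needed; the 0.2 s sleep is just to let both clients register):

```python
import threading, time

events = {}        # one Event per client, as CameraEvent keeps per ident
received = []
lock = threading.Lock()

def client(name):
    ev = events[name] = threading.Event()
    ev.wait()                     # block until the producer signals a frame
    with lock:
        received.append(name)
    ev.clear()                    # this client is ready for the next frame

threads = [threading.Thread(target=client, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
time.sleep(0.2)                   # let both clients register and wait
for ev in events.values():        # producer: a new frame is available
    ev.set()
for t in threads:
    t.join()
print(sorted(received))
```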

door_control.py

door control function


import RPi.GPIO as GPIO
import time
import os
import re

def file_read():
    file = open("Global_variable_for_door.txt","r")
    line = file.readline()
    ds = line.split(",")[0]
    i = line.split(",")[1]
    file.close()
    return int(ds), int(i)
    
def file_write(ds,i):
    file = open("Global_variable_for_door.txt","w") 
    file.write(str(ds) + "," + str(i))
    file.close()


def door_ctrl():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(5, GPIO.OUT)
    
    door_state,i = file_read()
    
    if i == 0:
        door_state = 1
        i+=1
        
    if door_state == 1:
        
        #open the door
        
        p = GPIO.PWM(5,46.5509)
        p.start(6.982)
        
    elif door_state == -1:
        #close the door
        p = GPIO.PWM(5,46.782)
        p.start(6.364)
        
##        p = GPIO.PWM(5,46.712)
##        p.start(6.540)
        
    file_write(door_state*(-1), i)
   
    print str(door_state), "           ", str(i)
    
    start_time_1 = time.time()
    
    while (time.time() - start_time_1) < 0.4:
        pass
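The odd-looking frequency/duty pairs in `door_ctrl()` make more sense as pulse widths, which is what the door servo actually responds to: at ~46.55 Hz the period is ~21.5 ms, so 6.982% duty yields a ~1.50 ms pulse, while 6.364% at 46.782 Hz yields ~1.36 ms. A quick check of that arithmetic:

```python
def pulse_ms(freq_hz, duty_pct):
    """High-pulse width produced by GPIO.PWM(pin, freq_hz) at
    duty_pct percent duty cycle, in milliseconds."""
    period_ms = 1000.0 / freq_hz
    return duty_pct / 100.0 * period_ms

print(round(pulse_ms(46.5509, 6.982), 2))   # 'open' setting -> ~1.5 ms
print(round(pulse_ms(46.782, 6.364), 2))    # 'close' setting -> ~1.36 ms
```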

visitor_verification_upload.py

visitor verification result upload function

import os
import re

def file_read():
    file = open("visitor_verification.txt","r")
    line = file.readline()
    i = line.split(",")[0]
    ii = line.split(",")[1]
    iii = line.split(",")[2]
    file.close()
    return int(i), int(ii), int(iii)
    
def file_write(i,ii,iii):
    file = open("visitor_verification.txt","w") 
    file.write(str(i) + "," + str(ii) + "," +str(iii))
    file.close()
    
    
def column_write(index, value):
    
    i, ii, iii = file_read()
    
    file = open("visitor_verification.txt","w")
    
    if index == 0:
        file.write(str(value) + "," + str(ii) + "," +str(iii))
    elif index == 1:
        file.write(str(i) + "," + str(value) + "," +str(iii))
    elif index == 2:
        file.write(str(i) + "," + str(ii) + "," +str(value))
    file.close()
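`column_write` lets the three verification stages report independently into one comma-separated line (`face,speaker,fingerprint`, with 1 = passed, -1 = failed, 0 = no result yet). The same protocol can be tried off the Pi against a throwaway file; this sketch re-creates the helpers with a temp path rather than the project's fixed filename:

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "visitor_verification.txt")

def file_write(i, ii, iii):
    with open(path, "w") as f:
        f.write("%d,%d,%d" % (i, ii, iii))

def file_read():
    with open(path) as f:
        return tuple(int(x) for x in f.readline().split(","))

def column_write(index, value):
    """Rewrite a single column, preserving the other two."""
    cols = list(file_read())
    cols[index] = value
    file_write(*cols)

file_write(0, 0, 0)      # reset, as the web route delete_visitor() does
column_write(0, 1)       # face verification passed
column_write(1, -1)      # speaker verification failed
print(file_read())       # (1, -1, 0)
```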

user_control.py

owner/visitor user-mode flag helper (flag 1 = owner, -1 = visitor)

import time
import os
import re

def file_read():
    file = open("user.txt","r")
    line = file.readline()
    file.close()
    return int(line)
    
def file_write(line):
    file = open("user.txt","w") 
    file.write(str(line))
    file.close()

message_control.py

knock-notification flag shared between the GUI and the web server

import time
import os
import re

def file_read():
    file = open("message_for_video.txt","r")
    line = file.readline()
    file.close()
    return int(line)
    
def file_write(line):
    file = open("message_for_video.txt","w") 
    file.write(str(line))
    file.close()

smart_door_system.sh

bash script

#!/bin/bash

# load the V4L2 driver so the Pi camera appears as /dev/video0
sudo modprobe bcm2835-v4l2

# launch the Flask web server in the background
sudo python web.py &

# launch the Pi-TFT GUI in the foreground
sudo python GUI_2.py