
Friday, 6 October 2017

Gesture driven Virtual Keyboard using OpenCV + Python

Hello Readers, long time no see. In this tutorial, I will be teaching you how to create a gesture driven Virtual Keyboard with OpenCV and Python. I am not sure whether I can call this an Augmented Reality keyboard (please tell me if you know!), but it is still an awesome project.

Outcome

[Demo video]
So before going on and blabbering about this project, I would like you to see the end result. The actual video is 1 minute and 40 seconds long; it has been sped up to compress it into 30 seconds.
You can see that the keyboard contains only the 26 letters and a spacebar, and that I am wearing a piece of yellow paper on my finger, with which I simulate a keyboard click.

Requirements

  1. A computer with a good camera
  2. A yellow (though any other color can be used) piece of paper to be worn on a finger (for color segmentation).
  3. OpenCV for Python 3
  4. PyAutoGui for Python 3
  5. Python 3
  6. Text Editor like Sublime Text 3 or Atom
  7. A little bit of knowledge in Maths

Steps that we are going to take

  1. Get the corner coordinates of each and every key that is used to design the keyboard.
  2. Write a function to recognize the click.

Let's start coding...

Before designing our project we first need to know our constraints. Obviously we have some restrictions regarding the GUI of the keyboard that is to be designed. Let me list them:-
  1. Every key needs to be of a fixed width and height
  2. Square keys look much better than any other shape
  3. We need equal margins on both sides, i.e. the left and right side of each row
  4. The total width of a row should not exceed the width of the frame. The same goes for the height.
Now that we know our constraints, we can go on to design the keyboard. Since we will be using OpenCV's rectangle() function, we need only the two opposite corner coordinates of a key. The text, i.e. the key label, is drawn with the putText() function and placed at the center of the key. So for each key we need 4 things (an example entry follows the list). They are:-
  • Key Label
  • Top Left Corner coordinate
  • Bottom Right Corner coordinate
  • Center coordinate
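To make this concrete, here is what one such entry could look like for the "q" key, assuming a hypothetical 640x480 frame (so key_width = 64 and the keyboard's top edge sits at y = (480 - 4 * 64) / 2 = 112):

# one entry of row_keys: [label, top left corner, bottom right corner, label anchor]
["q", (0, 112), (64, 176), (27, 154)]   # anchor = ((0 + 64) // 2 - 5, (112 + 176) // 2 + 10)
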
Let us first define our global variables and then we can design a function that will return the above 4 parameters.

import cv2
import pickle
import numpy as np
import pyautogui as gui

with open("range.pickle", "rb") as f:   # range.pickle is generated by range-detector.py
    t = pickle.load(f)

cam = cv2.VideoCapture(0)
hsv_lower = np.array([t[0], t[1], t[2]])
hsv_upper = np.array([t[3], t[4], t[5]])
width = cam.get(cv2.CAP_PROP_FRAME_WIDTH)    # width of the video frame captured by the webcam
height = cam.get(cv2.CAP_PROP_FRAME_HEIGHT)  # height of the video frame captured by the webcam
max_keys_in_a_row = 10                       # max number of keys in any row, i.e. the first row which contains qwertyuiop
key_width = int(width / max_keys_in_a_row)   # width of one key; width is divided by 10 as the max number of keys in a single row is 10

hsv_lower and hsv_upper are initialized automatically if we use the range-detector.py script which I have included in the repo. The easiest way to use it is to put the yellow paper in front of the camera, then slowly increase the lower parameters (H_MIN, S_MIN, V_MIN) one by one, and then slowly decrease the upper parameters (H_MAX, S_MAX, V_MAX). Once the adjusting is done, only the yellow paper will show up as a white patch and the rest of the image will be dark.
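
If you would rather skip range-detector.py, you can write the six values yourself in the order the loading code above expects (lower H, S, V followed by upper H, S, V). A minimal sketch, with hypothetical values for yellow:

import pickle

# hypothetical HSV range for yellow: [H_MIN, S_MIN, V_MIN, H_MAX, S_MAX, V_MAX]
with open("range.pickle", "wb") as f:
    pickle.dump([20, 100, 100, 40, 255, 255], f)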

With our global variables all set, we can now proceed to define the function that will return the above-mentioned properties of all the keys. I have named the function get_keys(). Before going into much detail, let me first discuss the steps that I have taken.
  1. Since the max number of keys in one row is 10 i.e the first row we can divide the width by 10 to get the key_width.
  2. So the total width of any row is key_width * (number of keys in that row).
  3. Hence total width of 1st row  i.e row1_key_width = key_width * 10.
  4. Similarly, for the 2nd row it is key_width * 9, for the third row key_width * 7, and for the space bar I have decided to keep it at key_width * 5.
  5. To determine the corner coordinates of the first key of the first row i.e the "q" key we do the following:
      • We know that the height of each key is key_width. Since we have 4 rows the total height i.e height of the keyboard is 4 * key_width.
      • To keep equal top and bottom margins for the keyboard we can do the following operation (height - 4 * key_width) / 2.
      • Let us set the above value to y1.
      • And x1 is set to 0, so that the first row begins from the left border of the frame
      • Hence (x1, y1) gives us the top left corner of the "q" key
      • Since the keys are square, the opposite corner coordinate will be (x2, y2) = (key_width + x1, key_width + y1)
  6. For the next key i.e "w" only the x1 and x2 will change. Both of them will be increased by key_width.
Let us see that in code:-
def get_keys():
    """
    Designs the keyboard and returns, for every key, the 4 parameters
    needed to draw it: key label, top left corner coordinate,
    bottom right corner coordinate, and center coordinate.
    """
    row1_key_width = key_width * 10   # width of first row of keys
    row2_key_width = key_width * 9    # width of second row
    row3_key_width = key_width * 7    # width of third row
    row4_key_width = key_width * 5    # width of spacebar
    row_keys = []   # stores the keys along with their 2 corner coordinates and the center coordinate

    # for the first row
    x1, y1 = 0, int((height - key_width * 4) / 2)   # 4 rows in total; y1 is set so the keyboard has equal top and bottom margins
    x2, y2 = key_width + x1, key_width + y1
    c1, c2 = x1, y1   # keep a copy of x1, y1
    c = 0
    keys = "qwertyuiop"
    for i in range(len(keys)):
        row_keys.append([keys[c], (x1, y1), (x2, y2), (int((x2 + x1) / 2) - 5, int((y2 + y1) / 2) + 10)])
        x1 += key_width
        x2 += key_width
        c += 1
    x1, y1 = c1, c2   # restore from c1, c2

For the second row we can proceed with similar steps, the only difference being (x1, y1). In this case x1 = (row1_key_width - row2_key_width) / 2 and y1 = key_width + y1. The third row is handled the same way.
In the case of the 4th row we have a space bar, which is 5 times wider than any other key, hence only in that case x2 = 5 * key_width + x1.
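
As a quick sanity check with hypothetical numbers (a 640-pixel-wide frame, so key_width = 64):

key_width = 64                                    # hypothetical: int(640 / 10)
row1_key_width = key_width * 10                   # 640, spans the full frame
row2_key_width = key_width * 9                    # 576
x1_row2 = (row1_key_width - row2_key_width) // 2  # 32 px margin on each side
print(x1_row2)                                    # 32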

The function will look like:
def get_keys():
    """
    Designs the keyboard and returns, for every key, the 4 parameters
    needed to draw it: key label, top left corner coordinate,
    bottom right corner coordinate, and center coordinate.
    """
    row1_key_width = key_width * 10   # width of first row of keys
    row2_key_width = key_width * 9    # width of second row
    row3_key_width = key_width * 7    # width of third row
    row4_key_width = key_width * 5    # width of spacebar
    row_keys = []   # stores the keys along with their 2 corner coordinates and the center coordinate

    # for the first row
    x1, y1 = 0, int((height - key_width * 4) / 2)   # 4 rows in total; y1 is set so the keyboard has equal top and bottom margins
    x2, y2 = key_width + x1, key_width + y1
    c1, c2 = x1, y1   # keep a copy of x1, y1
    c = 0
    keys = "qwertyuiop"
    for i in range(len(keys)):
        row_keys.append([keys[c], (x1, y1), (x2, y2), (int((x2 + x1) / 2) - 5, int((y2 + y1) / 2) + 10)])
        x1 += key_width
        x2 += key_width
        c += 1
    x1, y1 = c1, c2   # restore from c1, c2

    # for the second row
    x1, y1 = int((row1_key_width - row2_key_width) / 2) + x1, y1 + key_width   # x1 leaves equal margins on the left and right
    x2, y2 = key_width + x1, key_width + y1
    c1, c2 = x1, y1
    c = 0
    keys = "asdfghjkl"
    for i in range(len(keys)):
        row_keys.append([keys[c], (x1, y1), (x2, y2), (int((x2 + x1) / 2) - 5, int((y2 + y1) / 2) + 10)])
        x1 += key_width
        x2 += key_width
        c += 1
    x1, y1 = c1, c2

    # for the third row
    x1, y1 = int((row2_key_width - row3_key_width) / 2) + x1, y1 + key_width
    x2, y2 = key_width + x1, key_width + y1
    c1, c2 = x1, y1
    c = 0
    keys = "zxcvbnm"
    for i in range(len(keys)):
        row_keys.append([keys[c], (x1, y1), (x2, y2), (int((x2 + x1) / 2) - 5, int((y2 + y1) / 2) + 10)])
        x1 += key_width
        x2 += key_width
        c += 1
    x1, y1 = c1, c2

    # for the space bar (5 keys wide)
    x1, y1 = int((row3_key_width - row4_key_width) / 2) + x1, y1 + key_width
    x2, y2 = 5 * key_width + x1, key_width + y1
    row_keys.append([" ", (x1, y1), (x2, y2), (int((x2 + x1) / 2) - 5, int((y2 + y1) / 2) + 10)])

    return row_keys

With our row_keys in hand we can now design the function that will be simulating the key press. Let's call this function do_keypress(). This function will take 3 inputs:
  • The image object
  • The position/coordinate of the click gesture
  • row_keys
The logic here is very simple. If the position of the click is (x, y), then for a key with corner coordinates (x1, y1) and (x2, y2) the following condition must be satisfied: x1 <= x <= x2 and y1 <= y <= y2. In code this looks like:

def do_keypress(img, center, row_keys_points):
    # this function presses a key and marks the pressed key with blue color
    for row in row_keys_points:
        arr1 = list(np.int0(np.array(center) >= np.array(row[1])))   # center has greater value than the top left corner point of a key
        arr2 = list(np.int0(np.array(center) <= np.array(row[2])))   # center has less value than the bottom right corner point of a key
        if arr1 == [1, 1] and arr2 == [1, 1]:
            gui.press(row[0])
            cv2.fillConvexPoly(img, np.array([np.array(row[1]),
                                              np.array([row[1][0], row[2][1]]),
                                              np.array(row[2]),
                                              np.array([row[2][0], row[1][1]])]),
                               (255, 0, 0))
    return img
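
The NumPy comparisons above are just a vectorised way of writing that condition; an equivalent plain-Python hit test (a sketch, not from the original code) would be:

def is_inside(center, top_left, bottom_right):
    # returns True when the click position lies within the key's rectangle
    x, y = center
    (x1, y1), (x2, y2) = top_left, bottom_right
    return x1 <= x <= x2 and y1 <= y <= y2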

Now let us design the main() function. The main() function does the following tasks:-
  1. Recognizes the yellow paper
  2. Gets the corner coordinates of every key by calling get_keys()
  3. Detects the center position of the yellow paper
  4. If the change in the paper's area is small, it is ignored (no click)
  5. If the paper's center moves too much between frames, the click gesture is ignored (it is movement, not a click)
  6. If a valid click gesture is formed it calls do_keypress()
Let's look at its code:-
def main():
    row_keys_points = get_keys()
    new_area, old_area = 0, 0
    c, c2 = 0, 0   # c counts iterations for the area difference, c2 counts iterations for the center difference
    flag_keypress = False   # True while a key press is being registered

    while True:
        img = cam.read()[1]
        img = cv2.flip(img, 1)
        imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(imgHSV, hsv_lower, hsv_upper)
        blur = cv2.medianBlur(mask, 15)
        blur = cv2.GaussianBlur(blur, (5, 5), 0)
        thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
        contours = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[1]   # OpenCV 3.x return order

        if len(contours) > 0:
            cnt = max(contours, key=cv2.contourArea)
            if cv2.contourArea(cnt) > 350:
                # draw a rectangle and a center
                rect = cv2.minAreaRect(cnt)
                center = list(rect[0])
                box = cv2.boxPoints(rect)
                box = np.int0(box)
                cv2.circle(img, tuple(np.int0(center)), 2, (0, 255, 0), 2)
                cv2.drawContours(img, [box], 0, (0, 0, 255), 2)

                # calculation of the differences of area and center
                new_area = cv2.contourArea(cnt)
                new_center = np.int0(center)
                if c == 0:
                    old_area = new_area
                c += 1
                diff_area = 0
                if c > 3:   # the area difference is calculated after every 3rd iteration
                    diff_area = new_area - old_area
                    c = 0
                if c2 == 0:
                    old_center = new_center
                c2 += 1
                diff_center = np.array([0, 0])
                if c2 > 5:   # the center difference is calculated after every 5th iteration
                    diff_center = new_center - old_center
                    c2 = 0

                # setting some thresholds
                center_threshold = 10
                area_threshold = 200
                if abs(diff_center[0]) < center_threshold or abs(diff_center[1]) < center_threshold:
                    print(diff_area)
                    if diff_area > area_threshold and flag_keypress == False:
                        img = do_keypress(img, new_center, row_keys_points)
                        flag_keypress = True
                    elif diff_area < -(area_threshold) and flag_keypress == True:
                        flag_keypress = False
                else:
                    flag_keypress = False
            else:
                flag_keypress = False

        # displaying the keyboard
        for key in row_keys_points:
            cv2.putText(img, key[0], key[3], cv2.FONT_HERSHEY_DUPLEX, 1, (0, 255, 0))
            cv2.rectangle(img, key[1], key[2], (0, 255, 0), thickness=2)

        cv2.imshow("img", img)
        if cv2.waitKey(1) == ord('q'):
            break

    cam.release()
    cv2.destroyAllWindows()

So the full code is simply the pieces above put together in one file: the global setup, get_keys(), do_keypress() and main(), followed by a call to main().


And that's about it. At the beginning of the tutorial I asked whether you could tell me if this is an Augmented Reality project. Well, I ask that again; comment down below. Two of my friends said that it is an Augmented Reality project; one said it is not. I am not sure who to believe. Get the full code here.
Bye.....

Tuesday, 26 September 2017

Motion Gesture Recognition within 200 lines using OpenCV + Python

Hello Readers. In this tutorial, I will be teaching you gesture recognition in OpenCV + Python using only image processing, with no machine learning or neural networks.

What is Gesture Recognition

Gesture recognition is the mathematical interpretation of a human motion by a computing device. There are many algorithms out on the Internet that give very good and accurate results on gesture recognition, but we are not going to see them here. I developed a very simple and naive algorithm to recognize gestures that are made up of straight lines. Let's see....
Gesture Recognition

Give me the code already...

Ok. I got you. Here is the code https://github.com/EvilPort2/SimpleGestureRecognition.

Why no Machine Learning or Neural Net?

The answer to this question is very simple. I do not know much about them. Though I have some knowledge about machine learning, I have little to no knowledge about neural networks.

Requirements

  1. A computer with a good camera
  2. A yellow (though any other colour can be used) piece of paper to be worn on a finger (for image segmentation)
  3. OpenCV for Python 3
  4. PyAutoGui for Python 3
  5. Python 3
  6. Text Editor like Sublime Text 3 or Atom
  7. A little bit of knowledge in Maths

Steps we are taking

Since I am using only image processing for this project, I will be using only the direction of movement to determine the gesture.
  1. Take one frame at a time and convert it from RGB colour space to HSV colour space for better yellow colour segmentation.
  2. Use a mask for yellow colour.
  3. Blur and threshold the mask.
  4. If a yellow colour is found and it crosses a reasonable area threshold, we start to create a gesture.
  5. The direction of movement of the yellow cap is calculated by taking the difference between the old centre and the new centre of the yellow colour after every 5th iteration or frame.
  6. Take the directions and store them in a list until the yellow cap disappears from the frame.
  7. Process the direction list; the processed list is then used to take an action such as a keyboard shortcut.
 

Let's get our hands dirty...

gesture_action.py

Let us begin with all the important imports and a few global variables
import cv2
import numpy as np
from collections import deque
import pyautogui as gui
from gesture_api import do_gesture_action

cam = cv2.VideoCapture(0)                # camera object
yellow_lower = np.array([7, 96, 85])     # HSV yellow lower bound
yellow_upper = np.array([255, 255, 255]) # HSV yellow upper bound
screen_width, screen_height = gui.size()
camx, camy = 360, 240                    # resize resolution
buff = 128
line_pts = deque(maxlen=buff)            # deque that stores the recent locations of the centre point of the yellow patch

The gesture_api is a different file that I created; do_gesture_action is a function in that file. The yellow_lower and yellow_upper values can be determined by using this python program, so in your case these values might differ under different lighting conditions. The easiest way to use it is to put the yellow paper in front of the camera, then slowly increase the lower parameters (H_MIN, S_MIN, V_MIN) one by one, and then slowly decrease the upper parameters (H_MAX, S_MAX, V_MAX). Once the adjusting is done, only the yellow paper will show up as a white patch and the rest of the image will be dark. Now let's get into the main function and some of its local variables
def gesture_action():
    centerx, centery = 0, 0           # present location of the centre of the yellow patch
    old_centerx, old_centery = 0, 0   # previous location of the centre of the yellow patch
    area1 = 0                         # area of the yellow patch
    c = 0                             # iteration counter used to update the centre after every 5th frame
    flag_do_gesture = 0               # set to 1 once a completed gesture has been acted on
    flag0 = True                      # True while no yellow object has been seen yet (reset after each gesture)
    created_gesture_hand1 = []        # stores the directions of the movement

With that out of the way, we can now grab each frame and perform the required operations. These are the steps we will be taking
    1. Get a frame
    2. Flip and resize the image to 360*240 for faster processing
    3. Convert the frame from RGB colour space to HSV colour space
    4. Now we will be using the yellow colour mask to segment the yellow colour
    5. Every camera has some flaws which introduce noise into the frame, so we need to reduce that noise; the easiest way to do so is to heavily blur the frame.
    6. Now if we set the colour threshold to anything above black, we get the almost exact shape of the yellow patch.
    7. Take the contour of the thresholded frame.
    8. Repeat the above steps for every frame

    while True:
        _, img = cam.read()
        # resize for faster processing, flip for better orientation
        img = cv2.flip(img, 1)
        img = cv2.resize(img, (camx, camy))
        # convert to HSV for better colour segmentation
        imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        # mask for yellow colour
        mask = cv2.inRange(imgHSV, yellow_lower, yellow_upper)
        # blurring to reduce noise
        blur = cv2.medianBlur(mask, 15)
        blur = cv2.GaussianBlur(blur, (5, 5), 0)
        # thresholding
        _, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cv2.imshow("Thresh", thresh)
        _, contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)   # OpenCV 3.x return order
After getting the contours we can have 2 cases-
  1. Number of contours is greater than zero then yellow colored objects are in the frame.
  2. Number of contours is zero then no yellow colored objects are in the frame.

Case 1- Yellow colored objects in the frame

  1. Assign 0 to flag_do_gesture.
  2. Take the contour that has the maximum area. Let us call this max_contour.
  3. Find a minimum area rectangle that surrounds the max_contour.
  4. Take the width and height of the rectangle.
  5. Find the area of the rectangle by width*height.
  6. If the area crosses a reasonable threshold then start making a gesture. I found the threshold by experimenting with different values; in my case it was 450.
  7. If the area of the contour crosses the threshold then find the center of the yellow object.
  8. Draw a rectangular box around it.
  9. Draw a dot at the center.
  10. Append the center to the deque line_pts.
  11. Update the center after every 5th iteration or frame.
  12. At the 5th iteration, take the difference between the old center (x1, y1) and the new center (x2, y2): diffx = x2 - x1 and diffy = y2 - y1.
  13. The values of diffx and diffy give us the direction of movement (see the worked example after this list).
  14. If the flag0 is False then append the direction to the created_gesture_hand1 list.
  15. Draw a line for all the points in line_pts
  16. Assign False to flag0.
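
A quick worked example with hypothetical centres, to make steps 12 and 13 concrete:

old_centerx, old_centery = 100, 120   # hypothetical previous centre
centerx, centery = 135, 118           # hypothetical centre 5 frames later
diffx = centerx - old_centerx         # 35: strong movement to the right
diffy = centery - old_centery         # -2: negligible vertical movement
# with the thresholds used in the code below (diffx > 15 and abs(diffy) <= 15),
# this iteration is recorded as an "E" (east) move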

Case 2- No yellow colored objects in the frame

  1. Empty the deque line_pts.
  2. Process created_gesture_hand1 by removing the 'St' entries and consecutive duplicate directions. Let us call the result processed_gesture_hand1.
  3. If flag_do_gesture is 0 and processed_gesture_hand1 is not empty, take the action corresponding to that gesture.
  4. Assign 1 to flag_do_gesture. This ensures the gesture action runs only once and not repeatedly.
  5. Empty created_gesture_hand1.
  6. Assign True to flag0.
Enough said..... In code it looks something like this
        if len(contours) == 0:
            # completion of a gesture
            line_pts = deque(maxlen=buff)   # empty the deque
            processed_gesture_hand1 = tuple(process_created_gesture(created_gesture_hand1))
            if flag_do_gesture == 0:   # makes sure the gesture action runs only once and not repeatedly
                if processed_gesture_hand1 != ():
                    do_gesture_action(processed_gesture_hand1)
                    flag_do_gesture = 1
                    print(processed_gesture_hand1)   # for debugging purposes
            created_gesture_hand1 = []
            flag0 = True
        else:
            flag_do_gesture = 0
            max_contour = max(contours, key=cv2.contourArea)
            rect1 = cv2.minAreaRect(max_contour)
            (w, h) = rect1[1]
            area1 = w * h
            if area1 > 450:
                center1 = list(rect1[0])
                box = cv2.boxPoints(rect1)   # to draw a rectangle
                box = np.int0(box)
                cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
                centerx = center1[0] = int(center1[0])   # centre of the rectangle
                centery = center1[1] = int(center1[1])
                cv2.circle(img, (centerx, centery), 2, (0, 255, 0), 2)
                line_pts.appendleft(tuple(center1))
                if c == 0:
                    old_centerx = centerx
                    old_centery = centery
                c += 1
                diffx, diffy = 0, 0
                if c > 5:   # the new centre is checked after every 5 iterations
                    diffx = centerx - old_centerx
                    diffy = centery - old_centery
                    c = 0
                if flag0 == False:
                    # the difference between the old centre and the new centre gives the direction of the movement
                    if abs(diffx) <= 10 and abs(diffy) <= 10:
                        created_gesture_hand1.append("St")
                    elif diffx > 15 and abs(diffy) <= 15:
                        created_gesture_hand1.append("E")
                    elif diffx < -15 and abs(diffy) <= 15:
                        created_gesture_hand1.append("W")
                    elif abs(diffx) <= 15 and diffy < -15:
                        created_gesture_hand1.append("N")
                    elif abs(diffx) <= 15 and diffy > 15:
                        created_gesture_hand1.append("S")
                    elif diffx > 25 and diffy > 25:
                        created_gesture_hand1.append("SE")
                    elif diffx < -25 and diffy > 25:
                        created_gesture_hand1.append("SW")
                    elif diffx > 25 and diffy < -25:
                        created_gesture_hand1.append("NE")
                    elif diffx < -25 and diffy < -25:
                        created_gesture_hand1.append("NW")
                # draw the trail of the tracked centre
                for i in range(1, len(line_pts)):
                    if line_pts[i - 1] is None or line_pts[i] is None:
                        continue
                    cv2.line(img, line_pts[i - 1], line_pts[i], (0, 255, 0), 2)
                flag0 = False

The process_created_gesture function looks like this
def process_created_gesture(created_gesture):
    """
    Removes every "St" (stationary) entry and collapses directions
    that occur consecutively into a single entry.
    """
    if created_gesture != []:
        for i in range(created_gesture.count("St")):
            created_gesture.remove("St")
        for j in range(len(created_gesture)):
            for i in range(len(created_gesture) - 1):
                if created_gesture[i] == created_gesture[i + 1]:
                    created_gesture.remove(created_gesture[i + 1])
                    break
    return created_gesture
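
To see what it does, here is a hypothetical trace of a roughly rectangular motion and what the function reduces it to:

trace = ["N", "N", "St", "W", "W", "S", "St", "S", "E"]   # hypothetical recorded directions
print(process_created_gesture(trace))                     # ['N', 'W', 'S', 'E']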
So the whole file gesture_action.py is simply the pieces above put together: the imports and globals, process_created_gesture(), and gesture_action(). The only part not yet shown is the very end of the while loop, which displays the frame and handles quitting:
        cv2.imshow("IMG", img)
        if cv2.waitKey(1) == ord('q'):
            break
    cv2.destroyAllWindows()
    cam.release()

gesture_action()

gesture_api.py

This file contains nothing but the gesture directions and the keyboard shortcuts they need to emulate. So a square can be made using directions like (North, West, South, East). Now let's say that when a square is made we need to emulate the keyboard shortcut winkey (for Windows) or altleft+f1 (for KDE), and so on. We have 2 cases for the keyboard shortcut emulation.
  • Only one key press needs to be emulated e.g winkey
  • More than one key press needs to be emulated e.g winkey + l, alt + f4 etc.
For the first case, we need to just press the key. For the second case, we need to hold all the keys except the last key, press the last key and then un-hold the keys. In code this can be accomplished by-
import pyautogui as gui
import os

GEST_START = ("N", "E", "S", "W")
GEST_CLOSE = ("SE", "N", "SW")
GEST_COPY = ("W", "S", "E")
GEST_PASTE = ("SE", "NE")
GEST_CUT = ("SW", "N", "SE")
GEST_ALT_TAB = ("SE", "SW")
GEST_ALT_SHIFT_TAB = ("SW", "SE")
GEST_MAXIMISE = ("N",)
GEST_MINIMISE = ("S",)
GEST_LOCK = ("S", "E")
GEST_TASK_MANAGER = ("E", "W", "S")
GEST_NEW_FILE = ("N", "SE", "N")
GEST_SELECT_ALL = ("NE", "SE", "NW", "W")

# gesture set mapping the directions to the key press actions
GESTURES = {GEST_CUT: ('ctrlleft', 'x'),
            GEST_CLOSE: ('altleft', 'f4'),
            GEST_ALT_SHIFT_TAB: ('altleft', 'shiftleft', 'tab'),
            GEST_PASTE: ('ctrlleft', 'v'),
            GEST_ALT_TAB: ('altleft', 'tab'),
            GEST_COPY: ('ctrlleft', 'c'),
            GEST_NEW_FILE: ('ctrlleft', 'n'),
            GEST_SELECT_ALL: ('ctrlleft', 'a')}

# Windows PCs
if os.name == 'nt':
    GESTURES[GEST_START] = ('winleft',)
    GESTURES[GEST_LOCK] = ('winleft', 'l')
    GESTURES[GEST_TASK_MANAGER] = ('ctrlleft', 'shiftleft', 'esc')
# Linux using KDE
else:
    GESTURES[GEST_START] = ('altleft', 'f1')
    GESTURES[GEST_LOCK] = ('ctrlleft', 'altleft', 'l')
    GESTURES[GEST_TASK_MANAGER] = ('ctrlleft', 'esc')

def do_gesture_action(gesture):
    if gesture in GESTURES.keys():
        keys = list(GESTURES[gesture])
        last_key = keys.pop()   # get the last key press
        if len(keys) >= 1:      # case 2: hold all the keys except the last one
            for key in keys:
                gui.keyDown(key)
        gui.press(last_key)     # press the last key; for case 1 it is the only key
        if len(keys) >= 1:      # un-hold the keys in reverse order
            keys.reverse()
            for key in keys:
                gui.keyUp(key)
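
A quick way to test the mapping without running the camera loop (assuming gesture_api.py is on the path):

from gesture_api import do_gesture_action

# ("W", "S", "E") is GEST_COPY above, so this emulates Ctrl+C
do_gesture_action(("W", "S", "E"))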
 

Conclusion

Yes, and that's about it. Using only 2 files and nothing but image processing, we have successfully implemented a very simple and naive gesture recognition system, all within only 200 lines of code. Get the full code here.
Bye.....
