• Beer Goggles Code Running Windows

    DaxMoonJuice05/02/2016 at 21:17 0 comments

    In order to test full body detection (and see just how fast I can get OpenCV to run on a relatively beefy gaming PC) I decided to edit my code to run on Windows using a hacked Eye Toy webcam.

    Getting the Eye Toy to run was easy enough: several drivers written by the community make it relatively easy to install on Windows 10. I found a helpful guide to getting it running at http://metricrat.co.uk/ps2-eyetoy-on-windows-8-64-bit-working/.

    Porting the code to Windows was also straightforward. Instead of using the Raspberry Pi camera library to capture images, I rewrote it to use the VideoCapture method in OpenCV:

    import time

    import numpy as np
    import cv2


    face_cascade = cv2.CascadeClassifier('C:/Users/Joshes Computer/Documents/opencv/build/etc/haarcascades/haarcascade_frontalface_alt.xml')
    print(face_cascade)

    # VideoCapture object works on Windows/Linux/Mac
    cap = cv2.VideoCapture(0)

    # allow the camera to warm up
    time.sleep(0.1)

    # load the overlay image once, outside the loop
    s_img = cv2.imread('C:/Users/Joshes Computer/Documents/Hacking Folder/Retro VR googles/Machine Vision Scripts/smileyface.jpg')
    s_img = cv2.cvtColor(s_img, cv2.COLOR_BGR2GRAY)

    # capture frames from the camera
    # (this replaces the pi-camera method of looping through the video stream)
    while True:
        # grab a frame from the camera as a NumPy array
        ret, image = cap.read()

        # Face Detection -- 1. convert the image to greyscale so the Haar cascade works
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # detect faces; each face is returned as (x, y, w, h)
        faces = face_cascade.detectMultiScale(gray, 1.5, 5)
        print("Found " + str(len(faces)) + " faces")

        # for each face in faces, draw a rectangle with dimensions w,h at the face co-ord x,y
        for (x, y, w, h) in faces:
            cv2.rectangle(gray, (x, y), (x + w, y + h), (255, 0, 0), 2)

            # resize the overlay to the face box and paste it in
            # (NumPy slicing is [rows, columns], i.e. [y-range, x-range])
            face_patch = cv2.resize(s_img, (w, h), interpolation=cv2.INTER_AREA)
            gray[y:y + h, x:x + w] = face_patch

        # show the frame
        cv2.imshow("Frame", gray)

        # grab the low byte of the last keypress
        key = cv2.waitKey(1) & 0xFF

        # if the `q` key was pressed, break from the loop
        if key == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()
    

  • New "Headset"

    DaxMoonJuice05/02/2016 at 19:57 0 comments

    So today I built (or rather bodged together) a headset, so now the project actually uses goggles! Or rather a goggle (the screen only covers one eye). Here are some photos.

    This one does a good job of showing why more people don't use CRT displays for AR goggles. The combined weight and strap positioning puts an uncomfortable amount of force on the bridge of my nose, in addition to making the "headset" rather more cumbersome than it needs to be. In the future I plan to build another headset that addresses this, possibly by using two straps arranged in an X shape to better distribute the weight.

    I plan to power the headset off a 16Wh battery pack; I just need to buy or build the holder and some way of locking the tabs into place. Charging for now is done using a separate Li-Po charger.

  • Ideas for Further Development/Code Explanation

    DaxMoonJuice04/26/2016 at 16:44 0 comments

    Ideas for expansion.

    -Swapping the face detection function for a body detection function. OpenCV provides Haar cascade files for this; implementation should be as simple as swapping the files.
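    A minimal sketch of that swap (this assumes a pip-installed opencv-python, which exposes the bundled cascade folder via cv2.data.haarcascades; adjust the path to your install as in the script above, and use a real camera frame in place of the blank stand-in):

    ```python
    import cv2
    import numpy as np

    # Load the full-body cascade; it ships in the same haarcascades folder
    # as the face cascades (path here assumes a pip-installed opencv-python).
    body_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_fullbody.xml')

    # A blank greyscale "frame" stands in for a camera frame here.
    gray = np.zeros((480, 640), dtype=np.uint8)

    # Exactly the same call as face detection -- only the cascade file changed.
    bodies = body_cascade.detectMultiScale(gray, 1.1, 3)

    for (x, y, w, h) in bodies:
        cv2.rectangle(gray, (x, y), (x + w, y + h), (255, 0, 0), 2)

    print("Found " + str(len(bodies)) + " bodies")
    ```

    The rest of the loop (drawing rectangles, overlaying images) needs no changes, since bodies come back in the same (x, y, w, h) form as faces.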

    -Randomizing the image which is placed over the detected face
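    A quick sketch of that idea, using random.choice over a pool of overlays (the in-memory arrays here are hypothetical stand-ins; in practice each would come from cv2.imread on a different jpg, converted to greyscale):

    ```python
    import random
    import numpy as np

    # Hypothetical pool of overlay images -- stand-ins for loaded jpgs.
    overlays = [np.full((64, 64), v, dtype=np.uint8) for v in (0, 128, 255)]

    # Pick a random overlay for each detected face instead of always
    # loading smileyface.jpg.
    s_img = random.choice(overlays)
    ```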

    -Experimenting with face-swapping. In OpenCV a detected face is just a rectangular region of the image's NumPy array. Overlaying an image on a face is therefore simple: you scale the image to match the size of the face box and then copy its pixels into the face box's slice of the array.

    # Code to overlay a smaller image on the main image
    
    s_img = cv2.imread("smileyface.jpg")
    s_img = cv2.cvtColor(s_img, cv2.COLOR_BGR2GRAY)
    
    # w and h are the width and height of the face box
    dim = (w, h)
    
    # resize s_img to the dimensions of the face box,
    # with interpolation set to cv2.INTER_AREA
    s_img = cv2.resize(s_img, dim, interpolation=cv2.INTER_AREA)
    
    # x, y is the top-left corner of the face box
    c1 = y
    r1 = x
    
    # copy the contents of s_img into the face box region of mainImage
    # (NumPy slicing is [rows, columns], i.e. [y-range, x-range])
    mainImage[c1:c1 + h, r1:r1 + w] = s_img
    
    So if inserting a new image over the main image is that easy, and faces are stored as NumPy arrays, then face swapping would just involve copying the first face and storing it, pasting the second face over the first face, and finally placing the stored face over the second face.
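    Those three steps might look something like this (a minimal sketch on a synthetic greyscale image; the two face boxes are hard-coded stand-ins for detectMultiScale results, and each paste resizes so mismatched box sizes still fit):

    ```python
    import cv2
    import numpy as np

    # Synthetic greyscale "frame" with two stand-in face regions.
    frame = np.zeros((200, 200), dtype=np.uint8)
    frame[20:70, 30:80] = 100      # pretend this region is face 1
    frame[110:170, 100:180] = 200  # pretend this region is face 2

    face1 = (30, 20, 50, 50)    # (x, y, w, h), as returned by detectMultiScale
    face2 = (100, 110, 80, 60)

    def grab(img, box):
        x, y, w, h = box
        # .copy() so later pastes don't clobber the stored pixels
        return img[y:y + h, x:x + w].copy()

    def paste(img, box, patch):
        x, y, w, h = box
        img[y:y + h, x:x + w] = cv2.resize(patch, (w, h),
                                           interpolation=cv2.INTER_AREA)

    # 1. copy the first face and store it
    stored = grab(frame, face1)
    # 2. paste the second face over the first
    paste(frame, face1, grab(frame, face2))
    # 3. place the stored first face over the second
    paste(frame, face2, stored)
    ```

    Note the .copy() in grab: without it, step 2 would overwrite the very pixels stored in step 1, since NumPy slices are views into the original array.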