Monthly Archives: February 2012

First Proof of Concept: Multi-touchable Surface with Kinect, SKT and Kivy

Slowly but surely. That’s our mantra in this project, since it involves a lot of coding and handling very different technologies, some of them (I really mean all of them) in the very early stages of development. Anyway, we finally decided to implement our multi-touchable surface using a projector, a Kinect device and the libraries SKT and Kivy. As the next video shows, we are making some progress, but development goes slower than we expected.

And finally, our first proof of concept of everything working together. It’s a very basic application that is able to handle several interactions at the same time. At the moment, it only allows drawing lines and identifying how many different contact points there are, using one color for each of them.
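
Since gamex itself isn’t published yet, here is a minimal sketch of how this kind of per-contact drawing looks in Kivy (the class and app names are hypothetical, not our actual code):

import random

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.graphics import Color, Line

class TouchCanvas(Widget):
    def on_touch_down(self, touch):
        # Give every new contact point its own randomly colored line
        with self.canvas:
            Color(random.random(), random.random(), random.random())
            touch.ud['line'] = Line(points=[touch.x, touch.y])

    def on_touch_move(self, touch):
        # Kivy dispatches one event per contact, so each line grows on its own
        if 'line' in touch.ud:
            touch.ud['line'].points += [touch.x, touch.y]

class GamexApp(App):
    def build(self):
        return TouchCanvas()

if __name__ == '__main__':
    GamexApp().run()

Because Kivy treats every TUIO cursor as an independent touch, the same code works with the mouse and with the Kinect-driven screen.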

Behind the scenes, what happens is described below:

  1. Simple Kinect Touch (SKT) allows us to define the boundaries of the projected screen.
  2. We adjust a bunch of depth-related parameters in order to focus only on the coordinates that mean something is touching the screen.
  3. Then, that information is transformed and sent, following the TUIO protocol, to a local server. Now we have a service streaming data about the touches and movements on the screen.
  4. At this point, we run our Kivy client application, which we call gamex, setting its input interface to the TUIO server instead of the mouse (see the snippet after this list).
  5. Black magic.
  6. Profits!
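
For reference, telling Kivy to listen to a TUIO stream is a one-line configuration change. A minimal sketch, assuming SKT publishes on the default TUIO port 3333 on the same machine (the provider name skt_touch is arbitrary):

from kivy.config import Config

# Register a TUIO input provider before the app starts: 'tuio' is the
# provider type and 127.0.0.1:3333 is where SKT streams the touch events
Config.set('input', 'skt_touch', 'tuio,127.0.0.1:3333')

The same line can also live permanently in the [input] section of ~/.kivy/config.ini.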

And that’s it. We really hope to have time to focus on the development of the machine learning application now that the most difficult technical issues seem almost solved. However, projecting on a bed sheet or a table cover is going to be kind of traumatic when setting the parameters of the Kinect recognition layer. We will need a very rigid wooden frame or something similar to make the surface as smooth as we can.


Thinking about an exhibit project

This week has been, again, a lot of fun. I participated in the building of a MakerBot 3D printer. I have to say that it is easier than I thought, and everything you need comes inside the box. I couldn’t be there all the days needed to finish the printer because I had classes to attend, but the days I did go I enjoyed a lot.

Finishing the MakerBot 3D Printer

Moving on to something else, Roberto and I were talking with Prof. William Turkel about some ideas for our final project. After listening to a lot of very good ideas and proposals, and given the fact that our time is finite, we’ve almost defined our project. It has three different parts, which I am going to detail:

  1. We are going to build a big multi-touchable screen using a table cover. Yes, using a table cover. To do that we’ll use a Microsoft Kinect device. The Kinect, as many of you already know, is able to get the depth of the scene it is pointed at. We also have a projector that will be connected to one of our computers. Thanks to the Simple Kinect Touch library that we found, we can define the area of the projected screen as a delimited flat surface. Once that is done, any change in the surface is properly detected by the depth camera of the Kinect. There are many ways to transform that information into a single point to emulate a mouse click. Maybe we’ll calculate the middle point of the region of the surface distorted by the press (see the first sketch after this list). But what screen? The table cover onto which the projector will shine.
  2. The step above briefly explains a way to turn an arbitrary surface into a touchable screen. After this, we are going to show images on the screen. But not random images. In a parallel process, using maybe OpenCV and some improvements to its face recognition algorithms, we extract the position of the bounding box of each face in certain Baroque paintings. However, this process is not always as good as we expect, and that’s the reason for step three.
  3. In the final step, we use the bounding boxes extracted for each image. With that information, we project an image on the big screen and, using Kivy or Processing, we ask users to touch the devil, angel, saint, virgin or Jesus faces, one of each type at a time. Then we can use that information as features to train a machine learning algorithm with scikit-learn (see the second sketch after this list). We can even propose different ways of touching the faces: pray for angels, punch devils, etc. In this way, we give feedback (formally, training data) to recognise different kinds of faces, because our first algorithm is only able to detect faces in general and doesn’t do very well with the faces in paintings.
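
To make the first part concrete, here is a minimal sketch of the middle-point idea: given the calibrated baseline depth map and a new depth frame from the Kinect, take the centroid of the pixels that got measurably closer as the touch coordinate. Everything here (the names, the 10 mm threshold) is an assumption for illustration, not SKT’s actual implementation:

import numpy as np

def touch_point(baseline, frame, threshold=10):
    # True wherever the cover got pushed at least `threshold` millimeters
    # closer to the Kinect than the calibrated baseline depth map
    # (cast to int so the uint16 depth values don't underflow)
    pressed = (baseline.astype(int) - frame.astype(int)) > threshold
    ys, xs = np.nonzero(pressed)
    if len(xs) == 0:
        return None  # nothing is touching the screen
    # Middle point of the dent, in depth-image coordinates
    return xs.mean(), ys.mean()

And for the third part, a minimal sketch of the scikit-learn side, with placeholder data standing in for the face crops and the visitors’ touches (every name and number here is hypothetical):

import numpy as np
from sklearn.svm import SVC

# Placeholder features: one flattened 64x64 grayscale crop per face
X_train = np.random.rand(20, 64 * 64)
# Placeholder labels: the kind of touch each face received on the screen
y_train = np.random.choice(['angel', 'devil'], size=20)

clf = SVC(kernel='linear')
clf.fit(X_train, y_train)

# Classify a face crop extracted from a new painting
X_new = np.random.rand(1, 64 * 64)
print(clf.predict(X_new))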

And that’s it. The idea is to have this in some museum or public place, letting people play with the screen by selecting the type of “touch” to give and, in the background, providing very useful information for face recognition in artworks.

But, due to the difficulty of getting things working on a 64-bit Linux OS, we are moving to 32 bits. It’s really better to focus on the task at hand than on the installation of requirements.

We also already have some materials to build the screen. I hope to have a very first version of the code during this week; it will be hosted at GitHub. The name, for lack of a better one, is Gamex: Game Exhibits for Machine Learning.


The pain of having a 64-bit Linux laptop and OpenCV

This is the story of a very big frustration. I’ve been an open source evangelist for almost 8 years now. One of the first things I always defend is the freedom of a free operating system. Of course, with great power comes great responsibility, but that never really worried me, because I’ve actually always enjoyed dealing with the system. There is something special in the satisfaction of getting things working in Linux. Even, and secretly, it is one of the reasons we like Linux: when something goes wrong, we have the power to fix it. And we love this.

But this mantra could have become a mess. At some point I actually thought I wasn’t able to get things done. In this case, I was trying to install the open source computer vision library, OpenCV, in order to use it from Processing. But my laptop has a 64-bit processor, and that’s where the problem started. Most libraries are compiled and distributed as binaries for 32-bit systems, so compiling the libraries myself was supposed to be enough. But that’s not always as easy as it looks. After trying not one, two, three or four, but six different methods, I got the OpenCV libraries working on my machine with Python bindings, and also with OpenNI and PrimeSense’s NITE enabled, which is supposed to be good if you want to connect a Kinect (in the future I would like to use the Point Cloud Library, aka PCL). I didn’t have the same luck with CUDA. Once OpenCV was working, of course, I discovered the fantastic OpenCV PPA with 64-bit libraries packaged for Ubuntu 11.10.

After OpenCV, I needed to bind the library with Java first, and then with Processing. But this step wasn’t easy at all. I tried the common OpenCV for Processing library, I tried the recommendations made by my classmate Roberto (whose laptop has a 32-bit processor), regenerating libOpenCV.so following several sets of instructions, and I even tried a different library called JavacvPro. But nothing: I always got the error

wrong ELF class: ELFCLASS64

That error simply means that a 32-bit process (the JVM, in this case) is refusing to load a 64-bit shared library. So, if I wanted to build a proof of concept of our idea, a green LED blinking when the camera detects a face and a red one when it doesn’t, I had to use the almighty Python, for which OpenCV was already working pretty well. I took the typical facedetect.py example from the OpenCV samples and added a couple of lines for talking to the Arduino, so the new code looks like this:

import numpy as np
import cv2
import cv2.cv as cv
import serial
# Helper modules shipped with the OpenCV Python samples
from video import create_capture
from common import clock, draw_str

# Open the serial port where the Arduino is listening (9600 baud)
arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)

help_message = '''
USAGE: facedetect.py [--cascade <cascade_fn>] [--nested-cascade <cascade_fn>] [<video_source>]
'''

def detect(img, cascade):
    # Run the Haar cascade over the frame
    rects = cascade.detectMultiScale(img, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30), flags = cv.CV_HAAR_SCALE_IMAGE)
    if len(rects) == 0:
        arduino.write("N")  # no face: tell the Arduino to light the red LED
        return []
    rects[:,2:] += rects[:,:2]  # convert (x, y, w, h) into (x1, y1, x2, y2)
    arduino.write("F")  # face found: tell the Arduino to light the green LED
    return rects

# The rest of the code is the same

And with the following Arduino program loaded:

// Pins for the two status LEDs
#define GREEN 8
#define RED 7

int val = 0;    // last byte read from the serial port
int face = 70;  // ASCII "F": a face was detected
int none = 78;  // ASCII "N": no face detected

void setup() {
  pinMode(GREEN, OUTPUT);
  pinMode(RED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    val = Serial.read();
    if (val == face) {
        // Face detected: green on, red off
        digitalWrite(GREEN, HIGH);
        digitalWrite(RED, LOW);
    } else if (val == none) {
        // No face: green off, red on
        digitalWrite(GREEN, LOW);
        digitalWrite(RED, HIGH);
    }
    Serial.println(val);  // echo the byte back for debugging
  }
}

And here you can watch the amazing result of all this work 😀
