A Tentative Architecture for Gamex

The other day, during the weekly class, I realized there is no document explaining the architecture of our project. Roberto and I have been talking a lot about it, but we never wrote anything detailed from a technical point of view. In past entries I talked about the idea behind the project, called Gamex, and how it is going to work. Because we are really excited about the project, sometimes we lose ourselves in the universe of coding (and fighting with Kivy) and forget to document technical issues. I hope to fix that with this blog post.

General Architecture of Gamex

[Diagram: general architecture of Gamex]

The architecture of the project, as depicted in the image above, has several parts, which I’m going to describe by walking through a complete application cycle:

  1. In a previous phase, in order to save some time and increase performance, we pre-process a set of baroque paintings, applying different techniques to recognize the faces in the pictures. All the data is stored as JSON files for the metadata and JPEG files for the images (there is a sketch of this step right after the list).
  2. Then the user (I mean, a member of the exhibition’s audience) walks in front of the screen.
  3. The screen, thanks to a projector, shows a slideshow of baroque paintings and calls the user to action: it demands interactivity, maybe with some fancy blinking text or similar. Each phrase corresponds to a different game and a different process to collect data. These are the games we are currently developing (the scheme is very easy to extend to collect data about, for example, just the virgins, the saints, or the children):
    • “Hey! Punch these people in the face,” to collect information about the positions of the heads in the painting.
    • “Dude, better if you get these people’s eyes out,” to collect the pairs of points corresponding to the positions of the eyes.
  4. When the user punches or touches the screen, the depth of the screen surface changes. This deformation is observed by a Microsoft Kinect device.
  5. The signal is encoded as a point or a set of points using STK and sent to a TUIO server.
  6. The main application, written in Kivy, is listening to that server. When a point is received by the TUIO server, the Kivy application translates it into a mouse click in the application frontend (see the Kivy configuration sketch after this list).
  7. At that moment, the information about the point is stored in MongoDB, a NoSQL document store (see the storage sketch after this list). We create something similar to a table for every image, with two different lists:
    1. The first one stores a list of points and a timestamp for the face-punching game.
    2. The second one is intended for the get-the-eyes-out game and stores pairs of points.
  8. If the point provided by the user is inside an area previously calculated in our metadata, we award a high score to give the user some feedback. If the point is new, we record that information and show a different score.
  9. When the number of points the user provides is close to the number of faces detected in the pre-processing stage, the slideshow moves on to another image and the cycle starts again.
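
To make step 1 more concrete, here is a minimal sketch of the pre-processing stage using OpenCV’s stock Haar cascade face detector. The file names and the JSON layout are illustrative, not our exact format:

    # Sketch of the pre-processing stage: detect faces in a painting
    # and store the bounding boxes as JSON metadata (illustrative layout).
    import json
    import cv2

    # Frontal-face Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def preprocess(image_path, json_path):
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        metadata = {
            "image": image_path,
            "faces": [{"x": int(x), "y": int(y), "w": int(w), "h": int(h)}
                      for (x, y, w, h) in faces],
        }
        with open(json_path, "w") as f:
            json.dump(metadata, f)

    preprocess("baroque_painting.jpg", "baroque_painting.json")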
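
For steps 5 and 6, Kivy ships a TUIO input provider, so listening to the TUIO stream is mostly a matter of configuration. A minimal sketch (the port and the widget are illustrative):

    # Sketch: a Kivy app that receives TUIO cursors as ordinary touches.
    from kivy.config import Config
    # Register Kivy's TUIO input provider before the app starts
    # (listening on all interfaces, port 3333; the port is illustrative).
    Config.set('input', 'tuio_screen', 'tuio,0.0.0.0:3333')

    from kivy.app import App
    from kivy.uix.widget import Widget

    class PaintingWidget(Widget):
        def on_touch_down(self, touch):
            # A TUIO cursor arrives here just like a mouse click.
            print("poke at", touch.x, touch.y)
            return True

    class GamexApp(App):
        def build(self):
            return PaintingWidget()

    if __name__ == "__main__":
        GamexApp().run()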
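
And for steps 7 and 8, a sketch of how the points could be stored and scored with pymongo. The database, collection, and field names are made up for illustration, and the score values are placeholders:

    # Sketch: one document per painting, with separate lists per game,
    # plus the point-in-face check used for scoring (names illustrative).
    from datetime import datetime
    from pymongo import MongoClient

    db = MongoClient("localhost", 27017).gamex

    def store_punch(image_id, x, y):
        # Face-punching game: append a point and a timestamp.
        db.paintings.update_one(
            {"_id": image_id},
            {"$push": {"punches": {"x": x, "y": y, "ts": datetime.utcnow()}}},
            upsert=True)

    def store_eyes(image_id, left, right):
        # Get-the-eyes-out game: append a pair of points.
        db.paintings.update_one(
            {"_id": image_id},
            {"$push": {"eyes": {"left": left, "right": right}}},
            upsert=True)

    def score(x, y, faces):
        # High score if the poke lands inside a known face box,
        # a lower one if it is a new point (which we keep as fresh data).
        for f in faces:
            if f["x"] <= x <= f["x"] + f["w"] and f["y"] <= y <= f["y"] + f["h"]:
                return 100
        return 10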

The second game is a way to tune the information we are collecting. For that game, Gamex is going to show paintings together with information about where the faces are supposed to be. With the positions of the faces in the painting and the positions of the eyes, we can estimate the size of each head and even create a probability heat map of where the faces are (a rough sketch of the calculation is below). In the end we will be able to enhance the algorithms used to detect faces.
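
The head-size estimate could be as simple as scaling the distance between the eyes. The ratio below is an assumed placeholder, precisely the kind of value the collected data should let us tune:

    # Sketch: estimate head size from a pair of eye points.
    import math

    EYES_TO_HEAD_RATIO = 2.5  # assumed placeholder, to be tuned with real data

    def head_size(left_eye, right_eye):
        # Distance between the eyes, scaled to an approximate head width.
        eye_distance = math.hypot(right_eye["x"] - left_eye["x"],
                                  right_eye["y"] - left_eye["y"])
        return eye_distance * EYES_TO_HEAD_RATIO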

Of course, this project is pointless if we are not able to attract a lot of users, and that will only happen if interacting requires minimal effort on the user’s side. Let’s see what we are able to do.

Filed under Tasks

4 Responses to A Tentative Architecture for Gamex

  1. This looks like a really fun project. I have a question and some suggestions.

    How do you handle the Kinect calibration for each user? When I played with a Kinect attached to an XBox game system it needed to calibrate each player to handle different human form factors :-)

    Do you know about Skeltrack http://arstechnica.com/gadgets/news/2012/03/igalia-releases-open-source-kinect-skeleton-tracking-library.ars ? It was released recently and it may be useful for your system.

    Also, you may consider using some Hematocritico titles for the paintings in order to keep your audience engaged :-) http://hematocritico.tumblr.com/

    • And it is! What happens is that we don’t need to calibrate the Kinect for the different users. To avoid that tedious task every time and for every user, we are using a rear-projection screen between the user and the Kinect, so the Kinect only has to work with the deformation of the screen. To achieve this we had to build a stretchable screen and calibrate just the maximum depth level that translates a poke into a mouse click (and other things like filtering by blob size and count, the Kinect is very sensitive). I’ll talk a bit about the screen in the next post.
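
      In case it helps to picture it, a minimal sketch of the idea, assuming the depth frame arrives as a NumPy array (the threshold and blob limits are illustrative):

        # Sketch: turn screen deformation in a Kinect depth frame into points.
        import numpy as np
        from scipy import ndimage

        DEPTH_DELTA = 30               # mm past the resting plane (illustrative)
        MIN_BLOB, MAX_BLOB = 50, 2000  # blob size limits in pixels (illustrative)

        def pokes(depth, resting_depth):
            # Pixels pushed deeper than the calibrated resting screen plane.
            mask = (depth - resting_depth) > DEPTH_DELTA
            labels, n = ndimage.label(mask)
            points = []
            for i in range(1, n + 1):
                blob = labels == i
                if MIN_BLOB <= blob.sum() <= MAX_BLOB:  # filter noise by size
                    points.append(ndimage.center_of_mass(blob))
            return points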

      I didn’t know about Igalia’s library; it looks very useful for skeleton detection. Thanks for the link.

      BTW, +1 to the Hematocritico titles XD

  2. Pingback: Final Post: Gamex and Faces in Baroque Paintings | In my humble opinion…

  3. Pingback: Baroque Faces: Final Post | The Digital Fingerprint of the Brush
