The other day, during the weekly class, I realized there is no document explaining the architecture of our project. Roberto and I have talked a lot about it, but we never wrote anything detailed from a technical point of view. In past entries I talked about the idea behind the project, called Gamex, and how it is going to work. Because we are really excited about the project, sometimes we lose ourselves in the universe of coding (and fighting with Kivy) and we forget to document technical issues. I hope to fix that with the current blog post.
The architecture of the project, as depicted in the image above, has different parts, which I'm going to describe by walking through a complete application cycle:
- In a previous phase, in order to save time and increase performance, we pre-process a set of baroque paintings, applying different techniques to recognize the faces in each picture. All the data is stored as JSON files for the metadata and JPEG files for the images.
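To give an idea of what that pre-processing output could look like, here is a minimal sketch of a metadata writer: given the face rectangles a detector found in one painting, it stores them in a JSON sidecar file next to the JPEG. The function name, field names, and directory layout are my illustration here, not the project's actual schema.

```python
import json
from pathlib import Path

def save_face_metadata(image_name, faces, out_dir="metadata"):
    """Write the detected face rectangles for one painting to a JSON file.

    `faces` is a list of (x, y, width, height) rectangles, for example as
    returned by a face detector such as OpenCV's Haar cascades.
    """
    record = {
        "image": image_name,  # the JPEG file the rectangles refer to
        "faces": [{"x": x, "y": y, "w": w, "h": h} for (x, y, w, h) in faces],
    }
    out_path = Path(out_dir) / (Path(image_name).stem + ".json")
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(record, indent=2))
    return out_path
```

One JSON file per painting keeps the metadata easy to inspect and regenerate without touching the images themselves.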
- Then the user (I mean, the audience of the exhibition) walks in front of the screen.
- The screen, thanks to a projector, shows a slideshow of baroque paintings and calls for action: it demands interactivity, maybe with some fancy blinking text or similar. Each phrase corresponds to a different game and a different process to collect data. These are the games we are currently developing (it would be very easy to extend the system to collect data about, for example, just the virgins, or the saints, or the children, etc.):
- “Hey! Punch these people in the face,” to collect information about the position of the heads in the painting.
- “Dude, better if you get these people’s eyes out,” to collect the pairs of points corresponding to the positions of the eyes.
- The first one stores a list of points and a timestamp for the face-punch game.
- The second one is intended for the get-the-eyes-out game and stores a pair of points.
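The records produced by both games could be sketched like this: each interaction becomes a small dictionary with the game it came from, a timestamp, and the touched screen coordinates. Again, the field names are an assumption for illustration, not Gamex's real format.

```python
import time

def make_game_record(game, points):
    """Build one interaction record for a Gamex game session.

    `game` is either "punch" (one touch per face) or "eyes" (pairs of
    touches, one per eye); `points` is a list of (x, y) screen
    coordinates as reported by the touch framework.
    """
    return {
        "game": game,
        "timestamp": time.time(),  # when the interaction happened
        "points": [{"x": x, "y": y} for (x, y) in points],
    }
```

Keeping the two games in one record shape makes it simple to dump everything into the same JSON store and filter by the `game` field later.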
The second game is a way to fine-tune the information we are collecting. For that game, Gamex is going to show paintings annotated with where the faces are supposed to be. With the position of the faces in the painting and the position of the eyes, we can calculate the size of each head and even build a probability heat map of where the faces are. Finally, we will be able to use that data to improve the algorithms we use to detect faces.
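Both calculations are straightforward. A common proxy for face size is the interocular distance, scaled by some factor; and the heat map can start as a simple grid that counts how many touches fall in each cell. The `scale` factor and the helper names below are assumptions of mine, to be tuned against the punch-game data.

```python
import math

def estimate_head_size(left_eye, right_eye, scale=2.5):
    """Estimate head width from the two eye positions.

    The distance between the eyes is scaled by a factor (assumed here;
    roughly head width / eye distance) that would be calibrated against
    the rectangles collected in the punch game.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return scale * math.hypot(dx, dy)

def face_heat_map(points, width, height, cells=10):
    """Accumulate touch points into a coarse cells x cells grid,
    normalized so the values form a simple probability heat map
    of where faces tend to appear."""
    grid = [[0] * cells for _ in range(cells)]
    for (x, y) in points:
        col = min(int(x / width * cells), cells - 1)
        row = min(int(y / height * cells), cells - 1)
        grid[row][col] += 1
    total = len(points)
    return [[count / total for count in row] for row in grid]
```

A heat map like this, aggregated over many paintings, is exactly the kind of prior that could bias a face detector toward the regions where baroque painters usually placed their figures.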
Of course, this project makes no sense unless we can attract a lot of users, and that means interacting must require only a minimal effort on the user's side. Let’s see what we are able to do.