Slowly but surely. That’s our mantra in this project, since it involves a lot of coding and handling very different technologies, some of them (I really mean all of them) in very early stages of development. Anyway, we finally decided to implement our multi-touch surface using a projector, a Kinect device, and the SKT and Kivy libraries. As the next video shows, we are making some progress, but development is slower than we expected.
And finally, our first proof of concept with everything working together. It is a very basic application that can handle several interactions at the same time. For now it only lets you draw lines and shows how many different contact points there are, using one color for each.
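The per-contact coloring can be sketched roughly like this. This is an illustrative helper, not the actual gamex code: `touch_color` and `max_touches` are names we made up here, and we simply spread hues evenly around the color wheel so each contact point looks distinct.

```python
import colorsys

def touch_color(touch_index, max_touches=10):
    """Assign a distinct RGB color to each contact point by
    spreading hues evenly around the color wheel."""
    hue = (touch_index % max_touches) / max_touches
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

# Each new contact gets the next hue in the cycle, so the lines
# drawn by simultaneous touches are easy to tell apart.
colors = [touch_color(i) for i in range(3)]
```

In a Kivy app, this kind of function would typically be called from `on_touch_down`, using the new touch's index to pick the color for its line.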
Behind the scenes, this is what happens:
- Simple Kinect Touch (SKT) lets us define the boundaries of the projected screen.
- We adjust a bunch of depth-related parameters so that we focus only on the coordinates that mean something is actually touching the screen.
- Then that information is transformed and sent, following the TUIO protocol, to a local server. Now we have a service streaming data about the touches and movements on the screen.
- At this point, we run our Kivy client application, which we call gamex, setting its input interface to the TUIO server instead of the mouse.
- Black magic.
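For the last step, pointing Kivy at the TUIO stream is usually done through an entry in Kivy's config file (`~/.kivy/config.ini`). The provider name on the left is arbitrary, and 3333 is TUIO's conventional port; adjust the host and port to wherever the SKT server is streaming:

```ini
[input]
; Listen for TUIO cursor events from the local tracker
; instead of (or alongside) the default mouse provider.
mytuio = tuio,127.0.0.1:3333
```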
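To make the black magic slightly less black: the touch data travels as OSC messages following the TUIO 1.1 cursor profile. This is not SKT's code, just a sketch of the message layout for one frame (`tuio_frame` is our illustrative name; velocity and acceleration are zeroed for simplicity, while a real tracker fills them in):

```python
def tuio_frame(cursors, frame_id):
    """Build the OSC messages for one TUIO 1.1 frame of 2D cursors.

    cursors: dict mapping session id -> (x, y), with coordinates
    normalized to [0, 1] relative to the calibrated screen area.
    """
    # "alive" lists the session ids currently touching the surface.
    messages = [["/tuio/2Dcur", "alive"] + sorted(cursors)]
    for sid, (x, y) in sorted(cursors.items()):
        # "set": session id, position, velocity (X, Y), acceleration (m).
        messages.append(["/tuio/2Dcur", "set", sid, x, y, 0.0, 0.0, 0.0])
    # "fseq" closes the frame with a sequence number.
    messages.append(["/tuio/2Dcur", "fseq", frame_id])
    return messages
```

A client like Kivy keeps a cursor alive while its session id keeps appearing in `alive`, and treats its disappearance as a touch-up event.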
And that’s it. We really hope to have time to focus on developing the machine learning application, now that the most difficult technical issues seem almost solved. However, projecting onto a bed sheet or tablecloth is going to be somewhat traumatic when setting the parameters in the Kinect recognition layer. We will need a very rigid wooden frame or something similar to make the surface as smooth as we can.