Category Archives: Tasks

Thinking about an exhibit project

This week has been, again, a lot of fun. I participated in the building of a MakerBot 3D printer. I have to say that it is easier than I thought and everything you need comes inside the box. I couldn't be there all the days needed to finish the printer because I had classes to attend, but the days I did attend I enjoyed a lot.

Finishing the MakerBot 3D Printer

Moving on to something else, Roberto and I were talking to Prof. William Turkel about some ideas for our final project. After listening to a lot of very good ideas and proposals, and given the fact that our time is finite, we've almost defined our project. It has three different parts, which I am going to detail:

  1. We are going to build a big multi-touch screen using a table cover. Yes, using a table cover. To do that we'll use a Microsoft Kinect device. The Kinect, as many of you already know, is able to capture the depth of the scene it is pointed at. We also have a projector that will be connected to one of our computers. Thanks to the Simple Kinect Touch library that we found, we can define the area of the projected screen as a delimited flat surface. Once that is done, any change in the surface is properly detected by the depth camera of the Kinect. There are many ways to transform that information into a single point to emulate a mouse click; maybe we'll calculate the middle point of the surface distorted by pressing the screen (there is a small sketch of this idea right after this list). But what screen? The table cover onto which the projector is aimed.
  2. The step above briefly explains a way to turn an arbitrary surface into a touchable screen. After this, we are going to show images on the screen. But not random images. In a parallel process, and maybe using OpenCV with some improvements to its face recognition algorithms, we extract the positions of the bounding boxes of the faces in certain Baroque paintings. However, this process is not always as good as we expect. And that's the reason for step three.
  3. In the final step, we use the bounding boxes extracted for each image. With that information, we project an image onto the big screen and, using Kivy or Processing, we ask users to touch the faces of devils, angels, saints, virgins or Jesus, one of each type at a time. We can then use that information as features to improve a machine learning algorithm built with scikit-learn. We can even propose different ways of touching the faces: pray for angels, punch for devils, etc. In this way, users give feedback (formally, training data) to recognize different kinds of faces, because our first algorithm is only able to detect faces in general and it doesn't work very well for faces in paintings.
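
As a rough idea of what step 1 could look like in code, here is a minimal Python sketch of the touch-point calculation. It is only a sketch under assumptions: it takes two NumPy arrays with the depth map of the calibrated surface and of the current Kinect frame, and the function name and the threshold value are placeholders.

import numpy as np

def touch_point(reference_depth, current_depth, threshold=10):
    # Depth maps usually come as unsigned integers, so cast before subtracting
    pressed = (reference_depth.astype(int) - current_depth.astype(int)) > threshold
    if not pressed.any():
        return None
    # The middle point of the distorted area acts as the "mouse click"
    ys, xs = np.nonzero(pressed)
    return int(xs.mean()), int(ys.mean())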

And that's it. The idea is to have this in some museum or public place, letting people play with the screen and select the type of "touch" to give. And, in the background, it would provide very useful information for recognizing face patterns in artworks.

But, due to the difficulty of getting things working on a 64-bit Linux OS, we are moving to 32 bits. It's really better to focus on the task at hand instead of on the installation of requirements.

We also already have some materials to build the screen. I hope to have a very first version of the code during this week, which will be hosted at GitHub. The name, for lack of a better one, is Gamex: Game Exhibits for Machine Learning.

2 Comments

Filed under Tasks

The pain of having a 64 bit Linux laptop and OpenCV

This is the story of a very big frustration. I've been an evangelist of Open Source for almost 8 years now. One of the first things I always defend is the freedom of a free operating system. Of course, with great power comes great responsibility, but that never really worried me because I've actually always enjoyed dealing with the system. There is something special in the satisfaction of getting things working in Linux. Secretly, it is even one of the reasons why we like Linux: when something goes wrong, we have the power to fix it. And we love this.

But this mantra could have turned into a mess. At some point I actually thought I wasn't going to get things done. In this case, I was trying to install the open library for computer vision, called OpenCV, in order to use it from Processing. But my laptop has a 64-bit processor, and there the problem started. Most libraries are compiled and distributed as binaries for 32-bit systems, so in theory it should be enough to compile the libraries myself. But that's not always as easy as it looks. After trying not one, two, three or four, but six different methods, I got the OpenCV libraries working on my machine with Python bindings, and also with OpenNI and the NITE PrimeSensor modules enabled, which is supposed to be useful if you want to connect a Kinect (in the future I would like to use the Point Cloud Library, aka PCL). I didn't have the same luck with CUDA. Once OpenCV was working, of course, I discovered the fantastic OpenCV PPA with 64-bit libraries packaged for Ubuntu 11.10.

After OpenCV, I needed to bind the library with Java first, and then with Processing. But I had no luck at all with this step. I tried the common OpenCV for Processing library, I also tried the recommendations made by my classmate Roberto (whose laptop has a 32-bit processor), regenerating libOpenCV.so following several sets of instructions, and even a different library called JavacvPro. But nothing; I always got the error:

wrong ELF class: ELFCLASS64

So, if I wanted to build a proof of concept of our idea (a green LED blinking when the camera detects a face, and a red one when it doesn't), I had to use the almighty Python, for which OpenCV was already working pretty well. So I took the typical facedetect.py example and added a couple of lines for connecting to the Arduino, and the new code looks like:

import numpy as np
import cv2
import cv2.cv as cv
import serial
from video import create_capture
from common import clock, draw_str

# Open the serial connection to the Arduino
arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)

help_message = '''
USAGE: facedetect.py [--cascade <cascade_fn>] [--nested-cascade <cascade_fn>] [<video_source>]
'''

def detect(img, cascade):
    rects = cascade.detectMultiScale(img, scaleFactor=1.3, minNeighbors=4, minSize=(30, 30), flags = cv.CV_HAAR_SCALE_IMAGE)
    if len(rects) == 0:
        # No faces found: tell the Arduino to light the red LED
        arduino.write("N")
        return []
    rects[:,2:] += rects[:,:2]
    # At least one face found: tell the Arduino to light the green LED
    arduino.write("F")
    return rects

# The rest of the code is the same

And with the following Arduino program loaded:

#define GREEN 8
#define RED 7

int val = 0;
int face = 70; // "F"
int none = 78; // "N"

void setup() {
  pinMode(GREEN, OUTPUT);
  pinMode(RED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    val = Serial.read();
    if (val == face) {
        digitalWrite(GREEN, HIGH);
        digitalWrite(RED, LOW);
    } else if (val == none) {
        digitalWrite(GREEN, LOW);
        digitalWrite(RED, HIGH);
    }
    Serial.println(val);
  }
}

And here we can watch the amazing result of the work 😀

2 Comments

Filed under Tasks

Arduino, twitter & Python

The second week in the fascinating world of Arduino left us very interesting things. The class was about connecting new kinds of peripherals, like buzzers or LCD displays. The idea my classmate Roberto and I had was to show the latest tweet from Twitter on an LCD display. After that, and due to the time limitations of the class, we changed our minds and decided to analyze the latest tweet for positive or negative sentiment, and then light a blue LED for a positive message or a red LED for a negative one. But sadly, this was also too much for an hour and a half.

So, what we finally did was to connect the Arduino device to Twitter through the serial port using a Python program. The workflow is as follows: a Python program in an infinite loop gets the latest tweet, then analyzes the positive or negative sentiment of the text (using the ViralHeat API), and after that sends a signal to the Arduino device, which is listening on a specific port. Depending on the signal received, plus the message, the Arduino lights up the right LED.

The message is not trivial, and it has to be defined. In our case, we send all the characters of the text first (for a future implementation on the LCD display), followed by a special separator token and then the signal for blue, red or both (when the sentiment could not be determined). The separator token should be a non-printable value or one you are sure cannot appear in a tweet.

So here you can find the Python code for the program. As you can see, we used the twython library for connecting to Twitter. Also, if you are using Linux, in order to know the right port to connect to, go to the Arduino window → Tools → Serial Monitor, and see what is written in the title of the window. It is usually something like

/dev/ttyACM0
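
By the way, recent versions of pyserial can also list the available serial ports for you; this is just a minimal sketch, assuming pyserial 2.6 or newer:

from serial.tools import list_ports

# Print every serial device the system knows about
for port in list_ports.comports():
    print port

And here is the program itself: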

import serial
import time
import urllib
import requests

from json import loads
from twython import Twython

# Separator token (a value we don't expect to appear in the text)
SEPARATOR_TOKEN = 666
# Sentiment API
url = "http://www.viralheat.com/api/sentiment/review.json?text=%s&api_key=<KEY>"
# Connect to arduino via serial port
arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)

# Write to Arduino
def writeToArduino():
    twitter = Twython()
    public_timeline = twitter.getPublicTimeline()
    last_tweet = public_timeline[-1]
    text = last_tweet["text"]
    # Get the numeric code of each character
    codes = [ord(c) for c in text]
    # Get the sentiment
    response = requests.get(url % urllib.quote(text))
    sentiment_response = loads(response.content)
    if sentiment_response["mood"].lower() == "negative":
        sentiment = 0
    elif sentiment_response["mood"].lower() == "positive":
        sentiment = 1
    else:
        sentiment = 2
    # Add characters codes and sentiment value
    arduino.write(codes + [SEPARATOR_TOKEN, sentiment])

while True:
    writeToArduino()
    # Wait a bit before asking Twitter and ViralHeat again
    time.sleep(5)

And finally the Arduino code.

#define GREEN 8
#define RED 12

const int sepToken = 666;
int val = 0;
int lastVal = 0;

void setup() {
  pinMode(GREEN, OUTPUT);
  pinMode(RED, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    val = Serial.read();
    if (lastVal == sepToken) {
      if (val == 0) {
        // Negative tweet: light the red LED
        digitalWrite(GREEN, LOW);
        digitalWrite(RED, HIGH);
      }
      if (val == 1) {
        // Positive tweet: light the green LED
        digitalWrite(GREEN, HIGH);
        digitalWrite(RED, LOW);
      }
      if (val == 2) {
        // Sentiment could not be determined: light both LEDs
        digitalWrite(GREEN, HIGH);
        digitalWrite(RED, HIGH);
      }
    } else {
      // Print "val" to the LCD Display
    }
    Serial.println(val);
    lastVal = val;
  }
}

Leave a Comment

Filed under Tasks

Arduino: First Contact

This week I had, at last, my first Interactive Exhibit Design class. From the first moment, and working in pairs, we were given an Arduino UNO device, with the corresponding trainer, wires, light-emitting diodes (aka LEDs), light sensors, one button and the USB cable to connect it to the laptop. I can't say I wasn't excited. I've been hearing about Arduino for almost 4 years and yesterday I could finally use it.

As a guy with some old-school Computer Science background, it has been amazing to see the simplicity of Arduino and the magic of the on-board microcontroller. I remember the times when I had to pay a lot of attention in order to get my circuit working. If you made a mistake, you had to rethink your wiring, connect the oscilloscope and analyze what the hell was going on. Now, with Arduino, you can lay out the components of the circuit and build as many programs on it as you want. For me, it is like magic.

Arduino and its components

This first contact was naïve. My classmate and I did the typical first experiment: making an LED light up. After that, using the software development kit also provided, which is based on Processing, we plugged in a light sensor and some resistors and built a proximity alarm: if you bring your hand closer to the light sensor, it receives less light, and our code was designed to make a diode blink inversely to the amount of light the sensor was receiving. Really fun and instructive.

Our "proximity" detector

Our "proximity" detector

So now, I am excited again for the next class and to see what Arduino has in store for us!

2 Comments

Filed under Analysis, Tasks

The Futurible Closet

As the very first assignment of History 9832B: Interactive Exhibit Design, William Turkel proposes a blog post about what he calls History Appliances. The steps are "easy": take a common object, do some miracle, think about the way it operates, and connect it to the historical flow. It looks really interesting.

However, and besides this assignment, I know the course has some readings about totally awesome new tools: Interaction Design and Visualization, Making and Hacking, Coding, Electronics, Physical Computing, Microcontrollers, Desktop Fabrication, Digital Representations and 3D Photography, Scanning, Visualization and Printing. As a computer scientist turned humanist, these are fields of great interest to me.

The closet wasn't always what we think it was for

I started thinking about my closet, since it is one of the appliances I use every day. Actually, almost every day, because the usual chair or couch performs the functions of the closet more often than I would like. As a historical appliance, the use of the closet started in the Roman Age. In Herculaneum we find the first closets, but they were used as a place in which to put war weapons and armor, so the Latin name, armarĭum (armario in Spanish), comes from arma, which means weapon. Roman people also used it to store portraits made of wax, and books, as the main storage system for libraries. Eventually it became a good place to keep any kind of object, but it wasn't until the 15th century that the closet was used as it is today, to store clothes.

That said, I think it would be interesting to have a closet that allows you to swap your clothes with the objects kept in it in any other past or future age. I would like to open my closet tomorrow to put my "folded" clothes in it and find that it only contains Roman papyri, war armor, or pompous French clothes. And the same thing could happen with the future: who knows what the closet of the future will have inside? Even our present is the future of our ancestors. In a way, this idea reminds me of the book and movie "The Chronicles of Narnia: The Lion, the Witch and the Wardrobe", in which a few kids enter a new magical world, in a different time and space, through a wardrobe. Kids have always used the closet as a hiding and magical place.

On the other hand, our closets are essentially the same ones our ancestors used. No technological improvements have been carried out. Maybe we put in some light bulbs, added decorations or made the doors automatic. But they are, actually, the same appliances as in Rome. So, what could we do to enhance our traditional closets? From my point of view, as a guy who is a bit lazy about domestic tasks, I dislike the workflow of clothes: I wear something, the garment gets dirty, I put the garment in the dirty clothes box, wait for the dirty clothes box to get full, carry it to the washing machine, put the clean clothes in the tumble dryer and, the worst part, do the ironing where necessary, fold the clothes and place them on hangers in the closet in an ordered way. Exactly the same process people followed five centuries ago. I hope to have, in the near future, some high-tech closet where I can just put dirty clothes in and, the next day, grab fresh, clean and folded garments. I don't think I'm hoping for too much. But a man can dream…

Leave a Comment

Filed under Tasks

Science-Fiction in the Eaton’s Fall and Winter Catalogue

I'm a bit impressed by the contents of the Eaton's Fall and Winter Catalogue. It's a very old Canadian mail order catalogue, in a way the ancestor of websites such as Amazon. I was thinking that this kind of service from the past is not very common today, because most e-commerce sites are quite specific and centered on only one type of product. For example, you can use Best Buy for electronics or iTunes for music, but there are not many sites like Amazon where you can order almost everything you want, from books to movies, from clothes to electronics. And it keeps spreading more and more. Probably Amazon is going to be the perfect replacement for the ancient mail order catalogues.

I tried to find some products in the Eaton's Fall and Winter Catalogue, with no luck. The search terms "robot", "droid", "tv", "radio", "future", "nuclear", "war", "science" and "maths" returned no results at all. This is because the culture of our generation mainly uses concepts that appeared after World War II. We cannot live today without terms like "science fiction" in our lives. The books most closely related to "science fiction" are in the section called "Mechanical Books for Home Study," for the "science" part, and then we need to browse another section called "High-Class Recent Fiction" to find the "fiction" part of the term. In the first one, there are titles like "Light and Heavy Timber Framing Made Easy," "Hodgson's Modern Estimator and Contractors," or "The Steel Square," all of them available online in some form, and authored by Frederick Thomas Hodgson, who seems to have written several books of that kind in the catalogue.

Light and Heavy Timber Framing Made Easy

An original edition of "A practical treatise on the steel square and its application to everyday use"

In the second section, and despite the big sets of Bible-related books, there are some books that attracted my attention. Besides, this section has more titles than the previous one. Here are some of them.

"Satan Sanderson", by Hallie Erminie Rives

"Satan Sanderson", by Hallie Erminie Rives

"Coniston," by Wiston Churchill

"Coniston," by Wiston Churchill

There has been science fiction in my life for as long as I can remember, so it is pretty hard for me to imagine a life with no science fiction at all. Definitely, we live in interesting times (and with better readings than in the past).

And finally, just as a curiosity, I would like to say that I find the section about musical instruments, and the prices you can find in it, really interesting. Pianos are on sale for just half a dollar!

2 Comments

Filed under Tasks

Creating a Globe of Data

Before starting, you can see the final result of this post on World Poverty.

Some months ago, I was impressed by the Chrome Experiments website. On that site you can find a lot of experiments made using the new WebGL technology, which is supposed to work in most new browsers. WebGL is the most recent standard for 3D representations on the Web. So, with WebGL, a new form of data representation is now possible. In fact, there are artists, scientists, game designers, statisticians and so on creating amazing visualizations of their data.

Google WebGL Globe

One of these new forms of representation was made by Google. It's called the WebGL Globe and it allows you to show statistical geo-located data. The only thing you need to do is split your data up into several series of latitude, longitude and magnitude in JSON format, as the next example illustrates:

var data = [
  [
    'seriesA', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ],
  [
    'seriesB', [ latitude, longitude, magnitude, latitude, longitude, magnitude, ... ]
  ]
];

JSON, an acronym for JavaScript Object Notation, is not only a format to represent data in JavaScript; it's also the data type that the WebGL Globe needs in order to work. In this format, a list is enclosed between brackets, "[" to start and "]" to end. Therefore, the data series for the WebGL Globe is a list of lists. Each of these lists has two elements. The first one is the name of the series and the second one is another list containing the data. The data is written comma separated, so you must express your information in sets of three elements: the first is the geographical coordinate for latitude, the second is the same for longitude, and the third is the value of the magnitude you would like to represent.

Let's say we want to represent information from the Human Poverty Index. The first thing we need is to download the data in the format provided by the United Nations' site for the Multidimensional Poverty Index, which has replaced the old Human Poverty Index. Once we have the spreadsheet document, it's time to open it and collect just the data we need: go to page 5 of the workbook, and copy and paste the cells into a clean spreadsheet. We remove everything we don't need, like titles, captions, extra columns, etc., and leave just the country names, the second "Value" column under the cell "Multidimensional Poverty Index", the population under poverty in thousands, and the "Intensity of deprivation" column. The next step is to remove the rows with no data for those indicators, marked as "..". After doing this, we should have a document with 4 columns and 109 rows.

Spreadsheet before getting coordinates for countries
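
If you prefer to do that clean-up step in code instead of by hand, a minimal sketch with Python's csv module could look like this; the file names and the column order are just assumptions based on the description above:

import csv

# Hypothetical file names: the raw export and the cleaned version
reader = csv.reader(open("mpi_raw.csv", "rb"))
writer = csv.writer(open("mpi_clean.csv", "wb"))

for row in reader:
    country, mpi, thousands, deprivation = row
    # Drop the rows with no data for the indicators, marked as ".."
    if ".." in (mpi, thousands, deprivation):
        continue
    writer.writerow([country, mpi, thousands, deprivation])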

But, although we have the names of the countries, we need their geographical coordinates. There are several services that provide the latitude and longitude for a given address. In the case of having just the name of a country, the coordinates of its capital are usually provided. We will use geopy, a Python library able to connect to different providers and get several kinds of information. To install geopy, a terminal or console is needed, and it is very easy with just one command.

$ easy_install geopy

After that, we can open a terminal or an interactive console like IPython and get the latitude and longitude of, for instance, "Spain", with the following commands:

>>> from geopy import geocoders

>>> g = geocoders.Google()

>>> g.geocode("Spain")
(u'Spain', (40.463667000000001, -3.7492200000000002))

In this way, we can build a list of our countries and pass it to the next script:

>>> from geopy import geocoders

>>> g = geocoders.Google()

>>> countries = ["Slovenia", "Czech Republic", ...]
>>> for country in countries:
...     try:
...         placemark = g.geocode(country)
...         print "%s,%s,%s" % (placemark[0], placemark[1][0], placemark[1][1])
...     except:
...         print country
...
Slovenia,46.151241,14.995463
Czech Republic,49.817492,15.472962
United Arab Emirates,23.424076,53.847818
...

Now, we can select all the results corresponding to the latitudes and longitudes of every country and copy them with Ctrl-C or right-click and copy. Then we go to our spreadsheet, to the first row of a new column, and paste everything. We should see a dialogue for pasting the data; in it, check the right option in order to get the values separated by commas.

Paste the result comma separated

With this done, we have almost all the coordinates for all the countries. Still, there could be some locations for which the script didn't get the right coordinates, like "Moldova (Republic of)" or "Georgia". For these countries, and after careful supervision, the best thing to do is to run several tries with fixed names (trying "Moldova" instead of "Moldova (Republic of)") or just to look the location up on Wikipedia; for Georgia, for example, Wikipedia provides a link in the information box on the right side with the exact coordinates. When the process is over, we remove the columns with the names and reorder the columns so that latitude comes first, longitude second, and the rest of the columns after that. We almost have the data prepared. Now we need to save the spreadsheet as a CSV file so it can be processed by a Python script that converts it into the JSON format that the WebGL Globe is able to handle. The script that processes the CSV file and produces the JSON output is detailed next:

import csv
lines = csv.reader(open("poverty.csv", "rb"))
mpis = []  # Multidimensional Poverty Index
thousands = []  # People, in thousands, in a poverty situation
deprivations = []  # Intensity of Deprivation
for lat, lon, mpi, thousand, deprivation in lines:
    mpis += (lat, lon, mpi)
    thousands += (lat, lon, thousand)
    deprivations += (lat, lon, deprivation)
print """
[
["Multidimensional Poverty Index", [%s]],
["People affected (in thousands)", [%s]],
["Intensity of Deprivation", [%s]]
""" % (",".join(mpis),
       ",".join(thousands),
       ",".join(deprivations))

And the output will look like:

[
["Multidimensional Poverty Index", ["46.151241", "14.995463", "0", ... ]
...

Now, if we copy that output into a file called poverty.json, we have our input data for the WebGL Globe. So, the last step is to set up the Globe and the data input file together. We need to download the webgl-globe.zip file and extract the directory named "globe". Into it, we copy our poverty.json file and then edit index.html in order to replace the occurrences of "population909500.json" with "poverty.json", and make some other additions like the names of the series. Finally, to see the result, you can put all the files on a static web server and browse to the URL. Another option, just for local debugging, is to run the next command from the directory itself:

$ python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...

And then, go to http://localhost:8000 to see the result.

Globe before normalization

It seems like there is something wrong with two of the series: the population in poverty conditions, and the intensity of the poverty. This is because we need to normalize the values in order to get values in the range 0 to 1. To do that, we open our CSV file again as a spreadsheet, calculate the sum of the columns that we want to normalize, and then create a new column in which every single cell is the result of dividing the old cell value by the total sum of all the values in the old column. We repeat the process with the other column and replace the old columns with just the values of the new ones. Now, we can generate the JSON file again and retry.
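
For reference, the same normalization could be done with a few lines of Python instead of in the spreadsheet. This is only a sketch and assumes the CSV has the five columns in the order used by the script above:

import csv

rows = list(csv.reader(open("poverty.csv", "rb")))
# Columns: latitude, longitude, MPI, people in thousands, intensity of deprivation
thousands_total = sum(float(row[3]) for row in rows)
deprivation_total = sum(float(row[4]) for row in rows)

for row in rows:
    # Divide each value by its column total to get values between 0 and 1
    row[3] = str(float(row[3]) / thousands_total)
    row[4] = str(float(row[4]) / deprivation_total)

csv.writer(open("poverty_normalized.csv", "wb")).writerows(rows)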

Now, you can click on World Poverty to see everything properly working.

3 Comments

Filed under Tasks

“Piracy” will save the Culture

Until 50 years ago, the only way to store information was through physical means. These tangible objects reproduced the thoughts and the history of writing, painting, music or architecture. However, the technological revolution that happened at the end of the twentieth century involved the creation of new formats to confine information. Since then, any kind of information requires a comparatively infinitesimal amount of space to be stored.

The study of the Humanities is focused, on one side, on the analysis of existing documentation about the culture (understood under any of its definitions [1, 2]) and knowledge of an age, and thus about history. But even being as purist as we can, it is not possible to deny the evidence that nowadays cultural production has undergone a very profound renovation. Motivated by the development of telecommunications, this change complements each and every one of the parts that compose the culture of our time. Social networks, blogs, microblogging, comments, gadget reviews, video games, etc. play a fundamental role as human creations that deserve to be studied as influential artifacts in life. Not in vain, there are those who consider blogging or programming code (also known as Critical Code Studies) as genres [3]. These new forms of expression constitute not only a new form of entertainment; they are an intrinsic part of the manner in which culture is consumed. These new ways will also mold the culture faced by future generations. Nevertheless, the current entertainment industry all too often enforces its rules over multimedia production through proprietary information formats, overly restrictive licenses, and rigid policies on intellectual property and content distribution.

The most common feature of all technological development is quick expiration. This turns the amazing and innovative formats of yesterday into the obsolete and old-fashioned ones of today. The eternal race against Moore's Law produces casualties along the way: physical media fall into disuse, formats that cannot be translated, conversations lost in huge datacenters, etc. Therefore, in the near future it will be virtually impossible to perform studies based on them. And that is because, even assuming there won't be legal pitfalls, the problems with the obsolescence of the formats used will always be there.

Fortunately, if the lack of continuity of technology is one of its essential characteristics, it could be argued that the badly named "piracy" is its positive counterpart. Referring to the act of copyright infringement itself, the use of the expression "piracy" to describe unauthorized copying is becoming more and more common. This exaggeration pretends to put the act of free sharing (hackers) on the same level as the violence of pirates of the sea (buccaneers), as a way to criminalize users. In some countries, such as Spain or Italy, the situation is even worse: users who buy new storage devices, blank DVD-ROMs or MP3 players have to pay a special royalty, called "canon", just in case they potentially use them to store illegal content. Perhaps it is a good idea to avoid the debate raised by the industry about the profitable side of this activity. However, the ultimate goal of "piracy", and of the communities that support it, has always been to facilitate access to the proprietary information of big contemporary "cultural" companies. To achieve it, "piracy" promotes total openness with no objections.

Only through this kind of transparency and universality of technology will we be able, sooner rather than later, when the historic perspective allows us, to carry out humanistic studies about our time. Because the supports and formats in which the information is kept will also be freely exposed and made available to the general public. Of course, this will only be possible thanks to developments implemented in pursuit of "piracy". Maybe we are not totally conscious of what is happening. Something apparently as simple as saving personal documents in the .doc or .docx formats could eventually be an obstacle to accessing those contents in the future. Besides, the standardization process for Office Open XML by Microsoft was not very clear, in contrast to the OpenDocument Format promoted by the OASIS consortium. This is one of the dangers of entrusting file specifications to private companies: they can change the rules whenever they want. Our data belong to us, regardless of the format or service in which they are stored. Nevertheless, there are plenty of old files stored in proprietary formats that could be problematic when trying to analyze what happened in the past. We should be aware of this today. If traditional historians used to face stacks of papers and books, maybe future ones are going to do research using hard drives and files in very old and non-free formats.

The problem may not apply to traditional contents and documents, because they actually are physical objects rather than virtual and intangible entities like bits. But we are already producing more digital content today than all the books ever written in the past. So, what is going to happen with born-digital content? We can take a really good example: museums and video games. Museums used to be state or private property, and at this point other players (nation-states and companies) come into the debate about the control and management of culture and history. But what will happen when these companies, the current lobbies, disappear or simply stop being interested in maintaining the formats or providing support for them? Then the only way to survive will be "piracy", the same piracy that now allows us to play PlayStation video games on PCs, the same that created emulators for Neo-Geo. Undoubtedly, an exhibit about video games will have only two paths: the first one will be to reach an agreement with the company; the second one, in the case of extinct companies, will be "piracy".


UPDATE @ Nov 3rd: I have just read When Data Disappear and, just like Juan Luis said, maybe it would be better to use the name "curatorial activism". I was astonished by how similar my thoughts are to The New York Times' article. In conclusion: a new movement should be started.


Note: In fact, to what extent can a company be considered the owner of a cultural icon of a generation such as Super Mario Bros or Pac-Man? How long will we let them have the power to decide about this? This is material for another post.


[1] Geertz, C. The Interpretation of Cultures: Selected Essays. Basic Books, New York, 1973.
[2] Tylor, E.B. Primitive Culture, vol. 1. Harper, 1958.
[3] Herring, S.C., Scheidt, L.A., Bonus, S. and Wright, E. Bridging the gap: A genre analysis of weblogs. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences, p. 11, 2004.

2 Comments

Filed under Tasks