Category Archives: Analysis

Now I have a MOOC platform, what physical stuff do I need?

It’s been a while since I wrote my last MOOC-related post. But now, after the crazy days of starting the first MOOC class I have the honour to participate in, I can write a bit about the second main aspect of a MOOC: what you need to create those awesome videos. Note that these posts are not about content, but about the things you need to get started. About the content and the political or philosophical implications of teaching a MOOC, there are already plenty of conversations out there that can fit your interests and answer your questions. For me, it is simply an interesting trend that universalizes access to higher education, so as an academic member of a university, it’s a must to at least give it a try.

That said, let’s talk about physical stuff. In the last post, I talked about infrastructure needs. Well, we finally forked the OpenMOOC engine and started our own development, which includes an all-in-one solution (registration, users, discussion, etc.) with a very easy installation process –stay tuned for detailed instructions on deploying it on your own server. And now that the course has started, we are producing the videos as fast as we can. In an ideal world, you would buy one of those amazing Wacom tablets that already do all the work for you, but if you don’t have $4,000, and we don’t, since resources in our lab are limited, you should use what you already have. So far, this is what we are using:

  • Digital camera recorder. The Panasonic HC-V700 ($460), but any modern camera, a good DSLR or even a small digital camera, is able to record in good quality (1080p) and is not that expensive.
  • Tabletop monopod. This time we bought one from Amazon, a Sharpics SPMP16 ($30), to record what the teacher writes.
  • Lamp. To avoid annoying hand shadows when writing, we got a basic swing-arm lamp ($25).
  • iPad ($399). We already had one, so there’s not much more to say.
  • Stylus. We are using a Bamboo Stylus Solo ($30), but there are cheaper options out there. It’s mostly about how comfortable you feel with it.
And I think that’s all. The process we are following on the cheap, in order to get results as close as possible to Udacity‘s videos (shown above), in which the hand never hides the written content, is the following:
  1. Write a small script for the video, which is called a nugget in OpenMOOC terminology.
  2. Fix the iPad to the desk under the camera lens, using the monopod, and light it with the lamp.
  3. Write the content in Paper or Sketchbook Pro ($4.99) and record the whole thing.
  4. At the same time, we screencast the iPad using screen mirroring through AirServer ($14.99) and Camtasia ($99).
  5. Also in Camtasia, using chroma key, we overlay the text and diagrams on top of the hand, creating a similar effect.

But we still need a lot more practice 😀

On the other hand, we are also streaming the classes, so we can record and cut each session into pieces and make more concept videos. So far, we are not using videos for homework, but Dr. Glearning, a service that enables you to create homework that your students can do on their phones. I wish you could see the students’ faces after telling them they will do homework on their phones; it’s simply priceless. But, although the Dr. Glearning app is already available on the iTunes and Google Play stores, it is still in beta for teachers who want to create their courses. In addition, in our OpenMOOC fork we developed a basic integration, so you can embed Dr. Glearning courses into your MOOC course. Awesome, isn’t it?

4 Comments

Filed under Analysis

What if I decide to teach a MOOC? Well, then I should learn some Python :)

Well, I think that today everybody knows what a MOOC is. MOOC stands for Massive Open Online Course. You may have heard the term first from Stanford University, and then from Udacity, Coursera, edX, or TEDEd. So there is a lot of hype around the concept and the idea of MOOCs, although it is not as new as we might think. The Open Learning Initiative was probably one of the first to explore this trend, or more recently, P2P University and Khan Academy. However, when you decide to teach your content following the MOOC model, there are some steps to overcome.

The first question is whether you need your own platform or can just use one of the available ones. If your answer to this question is something like “what are you talking about?”, then you can put your content on sites like Udemy, CourseSites by Blackboard or iTunesU, and forget about systems administration, user registration, machine requirements, bandwidth, etc. But you will be tied to a company and its constraints. Or, if you are part of a bigger institution, you can beg your boss to join one of the big consortia mentioned above. Let me tell you something: this is not going to happen quickly (or at all; the wheels of bureaucracy turn slowly), so you’d better find a new approach. On the other hand, if you have a passable server with acceptable bandwidth, some tech guys with free time (which is an oxymoron), and a lot of energy and passion, you can also set up your own infrastructure. If this is your case, what are your options? Well, just a few, which I will enumerate.

  • OpenMOOC aims to be an open source platform (Apache License 2.0) that implements a fully open MOOC solution. It is fully video-centered, following in the steps of the first Stanford AI-Class experiment. It is a new approach, but it makes it harder to add traditional questions not based on videos, or even essay submission. It is prepared to be used with an IdP in order to build an identity federation for big sites. It is able to automatically process YouTube videos and extract the last frame as the question if required. Because we don’t particularly need the federation, we removed that feature and added some more in our own fork, just to try the solution. It is also able to connect to AskBot for a forum-like space for questions and answers. Successfully deployed at UNED COMA.
  • Class2Go is easier to install and get running, but kind of complex to manage. It integrates very well with services such as Amazon SES (which we added to our OpenMOOC fork), Piazza, the Khan Academy HTML-based exercise framework, and Amazon AWS. Used by Stanford.
  • Course Builder is pretty beautiful, but hard to deploy or add content to. Used by Google for some of its free courses.
  • Learnata is without doubt the best documented and the easiest to install. It is the underlying system of P2PU, and it counts on an active, real community behind it. It has an awesome badges system, a detailed dashboard, an API, and a bunch of modules (actually Django applications). But it doesn’t manage videos as well as the others.

All of them are built using Python and, except for Course Builder, Django as the core technology. It just so happens that here at CulturePlex Lab we use Python and Django a lot. That’s why we are currently forking everywhere and creating our own MOOC system. And that’s the magic of Open Source: we can fork OpenMOOC, take some features from Class2Go and others from Learnata and, as long as we respect the licenses, release a new MOOC system, the CulturePlex Courses (still under heavy testing).

Next post? Some notes about what you need in physical terms: a camera, a monopod, a tablet, etc.

12 Comments

Filed under Analysis

More ideas on the Virtual Cultural Laboratory

These days, after reading the article about the VCL, “A Virtual Laboratory for the Study of History and Cultural Dynamics” (Juan Luis Suárez and Fernando Sancho, 2011), for the first session of our incipient lab reading group, some ideas came to my mind. The article presents a tool to help researchers model and analyze historical processes and cultural dynamics.

The tool defines a model with messages, agents, and behaviours. Very briefly, a message is the most basic unit of information that can be stored or exchanged. There are three types of agents: individuals, mobile members of a social group that can exchange messages among themselves or acquire new ones; repositories, like individuals but fixed in space; and cultural items, a way to store an immutable message to transfer, also immobile. Finally, we find four ways in which agents can behave: reception, memory, learning and emission. Every kind of agent has a different set of behaviours. Cultural items do not receive information and always emit the same message; repositories work as a limited collection of messages: when the repository is full, a message is selected for elimination. And individuals can be Creative, Dominant or Passive, according to the levels of attentionality and engagement they show towards the messages. These three simple models make the VCL a really versatile cultural simulator. However, as the authors say in the article, the VCL is a beta version and could be improved a bit.

I am lucky enough to be able to talk to the authors, and we are having a really interesting discussion about new ways to expand the VCL. On my side, I have been quite influenced by the book La evolución de la cultura (Luigi Luca Cavalli-Sforza, 2007) and the previously mentioned Maps of Time (David Christian, 2011), in such a way that demography and concept networks have become very significant factors from my point of view.

The idea is to use graphs to represent and store the culture of the individuals, and also graphs to represent the different cultures, trying to shift everything a bit towards the domain of Graph Theory. We would be able to store the whole universe of concepts, defined through the semantic relationships among them. In this scenario, we can figure out a degree-based pruning to get the different connected components that represent the cultures, while always keeping the source graph. This pruning function could be a measure over the relationships, for example ‘remove relationships between nodes with this value of betweenness centrality’, or even a random way to get connected components. But it is better if the removed relationships make sense in semantic terms.
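As a rough sketch of this idea (plain dictionaries instead of a real graph library; the pruning predicate below is just an illustrative stand-in for a betweenness-based measure):

```python
from collections import deque

def connected_components(adjacency):
    """Connected components of an undirected graph given as a
    dict mapping each node to the set of its neighbours."""
    seen, components = set(), []
    for start in adjacency:
        if start in seen:
            continue
        queue, component = deque([start]), set()
        while queue:
            node = queue.popleft()
            if node not in component:
                component.add(node)
                queue.extend(adjacency[node] - component)
        seen |= component
        components.append(component)
    return components

def prune(adjacency, should_remove):
    """Copy the graph, dropping every relationship for which
    should_remove(u, v) is true; the source graph stays intact."""
    return {
        u: {v for v in neighbours if not should_remove(u, v)}
        for u, neighbours in adjacency.items()
    }
```

Each connected component that survives the pruning would then stand for one culture, while the untouched source graph keeps the whole universe of concepts.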

After we have the different culture graphs, we put them all in different places. Then we can take culture sub-graphs and store them in the individuals in order to give them a cultural feeling of membership to a certain culture. Sub-graphs from the same culture could overlap each other, but sub-graphs from different cultures should be disjoint. Now, individuals start to move across the world. I would also introduce the notion of innovations for culture sub-graphs: an innovation is a detached concept with no relationships to any concept of the sub-graph, but with at least one relationship if we consider the set of relationships of the original graph. Somehow, this implies that everything is already in the world, but it is an interesting assumption to experiment with. Maybe the original graph could be dynamic and gain new concepts over time.

So, individuals could show specific behaviours with regard to innovations: Conservative, Conformist and Liberal. And another property to model the feeling of belonging to a group distinct from the one the individual was born into. This value is somewhat similar to permeability to ideas, but different: while permeability works during the whole life of the individual, the membership feeling could operate until it is satisfied, so we can use it as a way to stop individuals, or to define the equilibrium.

Well, these are just ideas. Another approach could be to use population pyramids as inputs for the simulation. Yes, it’s me and demography again. If we do this, given a culture and a number of individuals that changes across time according to the population pyramid, we could see, and this is the point, how concepts move through cultures and, even more importantly, what the culture of the individuals is when the simulation stops. Calculating this is as easy as checking which sub-graphs are a subset of the existing cultures. This idea of using a population pyramid seems interesting to me because it allows us to analyze the importance of the individuals’ loss of permeability to innovations. Therefore, we could find out what the elements of vertical cultural transmission are (traditional, familial, and ritual), in opposition to horizontal transmission (which does not imply kinship, but relations between individuals).
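Checking the final culture of an individual reduces to a sub-graph containment test; a minimal sketch with the same dict-of-sets representation used for the culture graphs:

```python
def is_subculture(individual, culture):
    """True if every concept and every relationship carried by the
    individual also appears in the culture graph, i.e. the
    individual's sub-graph is contained in the culture."""
    return all(
        concept in culture and neighbours <= culture[concept]
        for concept, neighbours in individual.items()
    )
```

Running this test against every existing culture when the simulation stops would classify each individual, or show that their sub-graph no longer fits any single culture.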

And one more idea! This one is the craziest, I think. We could use a biology-inspired model for the concepts, so a concept would be defined by a vector that quantifies it using previously established knowledge fields. For instance, let’s say that an idea, i, is formed by 20% Literature, 20% Physics, and 0% Biology, so the resulting vector will be i = [20, 20, 0]. Also, ideas are related to each other through a graph. Following this biological analogy, we could set the vector to have 23 pairs of values, in such a way that individuals can adopt new ideas and modify them according to random changes in the last pair of values… or maybe this is too much craziness. Let’s see!
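Under that analogy, a concept is just a vector over knowledge fields, and adopting an idea could perturb its last component; a toy sketch (the field names and the mutation range are made-up assumptions):

```python
import random

FIELDS = ["Literature", "Physics", "Biology"]  # illustrative knowledge fields

def adopt(concept, rng=random):
    """Return the individual's own copy of a concept vector, with the
    last component randomly nudged to model 'mutation' on adoption,
    clamped to the 0-100 percentage range."""
    adopted = list(concept)
    adopted[-1] = max(0, min(100, adopted[-1] + rng.randint(-10, 10)))
    return adopted

i = [20, 20, 0]  # 20% Literature, 20% Physics, 0% Biology
```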

Leave a Comment

Filed under Analysis

Raiders of the Lost Thesis: A Proposal for Big Culture?

Well, well, well. It’s been a while since the last entry in this blog, mainly due to the end of the last academic year, my awesome vacation in August and, why not, the fact that I didn’t feel like communicating or writing.

The year has already started and my main goal is to start the thesis. “But, hey! Before writing anything, you should read a lot”, someone could think. And they would be right. I have never been a reader of essays or articles, but it is almost the only way to go, it seems. “But, hey again! First you need a topic”, somebody could also say. And they would be damn right again! I don’t have a “topic”, as people usually do. However, I expect the topic to emerge from the readings. We could say that my research is focused on Culture, with an upper- or lowercase “C”, the frontiers or borders that delimit it, and how it evolves. With the hope, of course, of finding some interesting results or conclusions.

For the time being, I have read “Mainstream Culture” by Frédéric Martel, I am finishing “Maps of Time” by David Christian, and starting “Things and Places” by Zenon W. Pylyshyn. It is not that much, but it is a beginning. At this point of my research, and with a lot of weird and strange thoughts and connections in my mind, I started to think about something that could be interesting: Big Culture. Let me explain.

In the last of the books mentioned above, I am discovering how the mind is able to link perception and the world. It is a tough start, but it is needed to unveil the mechanisms that operate in the brain, and to understand how demonstrative thoughts and perception are related. As Pylyshyn cites, John Perry “argued that such demonstratives are essential in thoughts that occasion action”, actions performed by the motor system of the body. And to make this possible, humans need some frame of reference, which does not necessarily have to be global; it can be local. This could be a good starting point for a cultural reference system.

On the other hand, we have civilizations. In “Maps of Time”, David Christian summarizes the history of everything, including us, as a cycle of manipulation of energy and the emergence of more and more complex orders: life itself, which seems to go against the Second Law of Thermodynamics. This is not a negative criticism, quite the contrary: he does an extremely brilliant exercise of synthesis, from the creation of the Universe until our days. This idea of energy consumption and production and Malthusian cycles is really valid for pre-modern civilizations, like agrarian or pastoral ones. But in the last two or three hundred years, when the modern concept of time was invented, commercial networks –as one of the big drivers of innovation– were followed by cultural transmission. And at the same time, innovation was one of the causes of the biggest increase in population in history. In the current mega-cities, all the “natural” purposes and preoccupations of humans are somewhat hidden. This provokes, first, what Émile Durkheim calls anomie, and secondly, blurred definitions of identity and cultural unity.

Finally, we have our current crazy world, moved by economic interests, egos, and supposedly superior morals: the mainstream culture, as defined by Frédéric Martel in “Mainstream Culture”. This huge piece of research exposes how delicate, vague and artificial all cultures actually are. The complexity of information networks, joined to global-scale commercial networks, defines what we understand by culture. However, while reading this excellent book, a thought came to my mind: maybe, instead of everything being local and global at the same time, humans have developed an unusual skill for handling cultural scopes across time.

So, I think it may be a good idea to organize a good set of thoughts about what Culture is, why it exists and what it means. From its origin in cognitive studies and the neuroscience of the brain, to the daily world governed by complex networks. Without forgetting the process by which we became cultural beings, from our ancestors until today.

Leave a Comment

Filed under Analysis

Final Post: Gamex and Faces in Baroque Paintings

Face recognition algorithms (the ones used in digital cameras) allowed us to detect faces in paintings. This has given us the possibility of building a collection of faces from a particular epoch (in this case, the Baroque). However, the results of the algorithms are not perfect when applied to paintings instead of photographs. Gamex gives us the chance to clean this collection. This is very important, since these paintings are the only visual historical inheritance we have from the period. A period that started after the meeting of two worlds.

1. Description

Gamex was born from the merging of different ideas we had at the very beginning of the Interactive Exhibit Design course. It basically combines motion detection, face recognition and games to produce an interactive exhibit of Baroque paintings. The user interacts with the game by touching, or more properly poking, the faces, eyes, ears, noses, mouths and throats of the characters in the painting. We score the user according to whether or not there is a face already recognized at those points. Beforehand, the database holds a repository with all the information the face recognition algorithms have detected. With this idea, we are able to clean the mistakes that the automatic face recognition has introduced.

The Gamex Set

2. The Architecture

A Tentative Architecture for Gamex explains the general architecture in more detail. Basically, we have four physical components:

  • A screen. Built with a wooden frame and stretch fabric, onto which the images are projected from the back and where the user interacts by poking it.
  • The projector. It just projects the image onto the screen from behind (rear projection).
  • Microsoft Kinect. It captures the deformations of the fabric and sends them to the computer.
  • Computer. It captures the deformations sent by the Kinect device and translates them into touch events (similar to mouse clicks). These events are used in a game to mark different parts of the faces of people in Baroque paintings. All the information is stored in a database, and we use it to refine a previously calculated set of faces obtained through face recognition algorithms.

3. The Technology

There were several important pieces of technology that were involved in this project.

Face Recognition

Recent technologies offer us the possibility of recognizing objects in digital images. In this case, we were interested in recognizing faces. To achieve that, we used the OpenCV and SimpleCV libraries. The latter just allowed us to use OpenCV from Python, the glue of our project. There are several posts in which we explain in a bit more detail this technology and how we used it.
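A rough sketch of both halves, the detection and the in-game hit test (the cascade filename is an assumption, OpenCV ships several under its data directory; `cv2` is imported lazily so the hit test runs without OpenCV installed):

```python
def point_hits_face(point, faces):
    """Return the detected rectangle (x, y, w, h) containing the
    poked point, or None -- the basic scoring test of the game."""
    px, py = point
    for x, y, w, h in faces:
        if x <= px < x + w and y <= py < y + h:
            return (x, y, w, h)
    return None

def detect_faces(image_path, cascade_path="haarcascade_frontalface_default.xml"):
    """Detect faces in a painting with OpenCV's Haar-cascade
    classifier and return them as (x, y, w, h) rectangles."""
    import cv2  # lazy import: only needed for the detection itself
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    return [tuple(r) for r in
            cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)]
```

Every poke can then be recorded as a confirmation or a rejection of a stored rectangle, which is exactly the cleaning signal the game collects.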

Multi Touch Screen

One of the biggest parts of our work involved working with multi-touch screens. Probably because it is still a very new technology where things haven’t settled down much, we had several problems, but fortunately we managed to solve them all. The idea is to have a rear-projection screen using the Microsoft Kinect. Initially conceived for the Microsoft Xbox 360 video-game system, a lot of people are creating hacks (such as Simple Kinect Touch) to take advantage of this artifact’s ability to capture depth. Using infrared light and some arithmetic, the device is able to capture the distance from the Kinect to the objects in front of it. It basically returns an image in which each pixel is the depth of the object relative to the Kinect. All sorts of magic tricks can be performed, from recognizing gestures or faces to detecting deformations in a piece of fabric. This last idea is the heart of our project. Again, there are some posts explaining how (and how not) to use this technology.
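The depth image makes the touch detection surprisingly simple in principle; a pure-Python sketch (a real pipeline like Simple Kinect Touch also calibrates and clusters the pixels, and the 15 mm threshold here is an arbitrary assumption):

```python
def touch_points(depth, baseline, threshold=15):
    """Compare a depth frame against a baseline frame of the idle
    fabric (both 2-D lists of distances in mm, as seen from behind
    the screen) and return the pixels pushed at least `threshold`
    mm closer to the sensor -- the raw material for touch events."""
    return [
        (row, col)
        for row, depths in enumerate(depth)
        for col, d in enumerate(depths)
        if baseline[row][col] - d >= threshold
    ]
```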

Calibrating the multi-touch screen

Games

Last but not least, Kivy. Kivy is an open source framework for developing applications that make use of innovative user interfaces, such as multi-touch applications. So it fits our purposes. As programmers, we have developed interfaces on many different platforms, with Java, Microsoft Visual Studio, Python, C++ and HTML. We found Kivy to be very different from anything we knew before. After struggling for two or three weeks, we came up with our interface. The thing about Kivy is that it uses a very different approach which, apart from relying on its own language, the developers claim to be very efficient. In the end, we started to like it and, to be fair, it has only been out for a year, so it will probably improve a lot. Finally, it has the advantage that it is straightforward to produce a version for Android and iOS devices.

4. Learning

There has been a lot of personal learning in this project. We had never used the three main technologies involved before. We also included a relatively new NoSQL database system called MongoDB. So that makes four different technologies. However, Javier and I agree that one of the most difficult parts was building the frame. We tried several approaches: from using my loft bed as a frame to a monstrously big frame (with massive pieces of wood carried from downtown to the university on my bike) that the psycho duck would bring down with the movement of its wings.

It is also interesting how ideas change over time; some of them we probably forgot. Others we tried and they didn’t work as expected. Most of them changed a little bit, but the spirit of our initial concept is in our project. I guess the creative process is a long road between a driving idea and the hacks needed to get to it.

5. The Exhibition

Technology fails on the big day, and on the day of the presentation we couldn’t show our video, but ThatCamp is coming soon: a new opportunity to see users in action. The video of the final result, although not public yet, is attached here. More will come soon!

6. Future Work

This has been a long post, but there are still a few more things to say. And probably much more in the future. We liked the idea so much that we are continuing to work on it, and we would like to mention some ideas that need to be polished and some pending work:

  • Game scoring. We want to build a better scoring system. Our main problem is that the data we score against is incomplete and imperfect (who has all the right answers anyway?). We want to give a fair solution to this. Our idea is to work with fuzzy logic to lessen the damage in case the computer is not right.
  • Graphics. We need to improve our icons. We consider some of them very cheesy and they need to be refined. Also, we would like to adapt the size of the icon to the size of the face the computer recognized, so the image would fit almost perfectly.
  • Sounds. A nice improvement, but also a lot of work to gather a good collection of MIDI or MP3 files if we don’t find any publicly available.
  • Mobile versions. Since Kivy offers this possibility, it would be silly not to take advantage of it. After all, we know addictive games are the key to entertaining people on buses. This would turn the application into a real crowdsourcing project, even if it implies building a better system for storing the information, following REST principles, with OAuth and API keys.
  • Cleaning the collection. Finally, after gathering enough data, it will be the right time to collect the faces and build the first repository of “The Baroque Face”. This will give us a spectrum of what people from the 16th to the 18th centuries looked like. Exciting, isn’t it?
  • Visualizations. We will also be able to do some interesting visualizations, like heat maps of where people touched expecting a mouth, an ear, or a head.

7. Conclusions

In conclusion, we can say that the experience has been awesome. Even better was seeing the really high level of our classmates’ projects. In honour of the truth, we must say that we have a background in Computer Science, so somehow we played with a bit of an advantage. Anyway, the presentation of all the projects was an amazing experience. We really liked the course and we recommend it to future students. Let’s see what the future has prepared for Gamex!

Some of the very interesting projects

This post was written and edited together with my classmate Roberto, so you can also find it on his blog.

4 Comments

Filed under Analysis, Tasks

Arduino: First Contact

This week I finally had my first Interactive Exhibit Design class. From the first moment, working in pairs, we were given an Arduino UNO device, with the corresponding breadboard, wires, light-emitting diodes (aka LEDs), light sensors, a button and the USB cable to connect it to the laptop. I cannot say I wasn’t excited. I’ve been hearing about Arduino for almost 4 years, and yesterday I could finally use it.

As a guy with some old-school Computer Science background, it has been amazing to see the simplicity of Arduino and the magic of the on-board microcontroller. I remember the times when I had to pay a lot of attention in order to get my circuit working. If you made a mistake, you had to rethink your wiring, connect the oscilloscope and analyze what the hell was going on. Now, with Arduino, you can lay out the components of the circuit and build as many programs on it as you want. For me, it is like magic.

Arduino and its components

This first contact was naïve. My classmate and I did the typical first experiment: making an LED light up. After that, using the software development kit also provided, which is based on Processing, we plugged in a light sensor and some resistors and built, this time, a proximity alarm: if you bring your hand closer to the light sensor, it receives less light, and our code was designed to make a diode blink inversely to the light the sensor was receiving. Really fun and didactic.

Our "proximity" detector

So now I am excited again about the next class and about seeing what Arduino has in store for us!

2 Comments

Filed under Analysis, Tasks

Word Frequency and Sentiment Analysis of the Spanish Elections Manifestos for #20N

Yesterday was an important date for all Spaniards. It was polling day: the first elections since the onset of the global crisis. Two days before, we had the so-called Reflection Day, named that way to invite all voters to think about their decision. However, the common feeling these months has been the indignation of the Spanish people across the country and, why not, across the world. Well, as a Spaniard living in Canada, what I did on my Reflection Day was a quick-and-dirty word frequency and sentiment analysis of the election manifestos of some of the most “important” parties in Spain.

To achieve it, I mechanically followed a series of steps and applied them to the official manifestos. Of course, some of them, for example the EAJ-PNV manifesto, were not in text format, but in image format. Those files were not processed. The parties analyzed are IU, PP, PSOE, Esquerra, UPyD, CC, EQUO and Geroa Bai.

Once I had downloaded all the PDF files, some of them really heavy, I used the pandoc tool to extract just the text. That done, I created a little Python script to split the text into single sentences, join sentences broken across two lines, and clean several things, like page numbers or extra dots. After that, the script connects to the Sentiment API from ViralHeat to get the positive or negative feeling of every sentence in the manifesto. With the result in JSON format properly stored in a file, one line per sentence, I used another Python script to extract just the numbers in CSV format, to be imported into a spreadsheet and calculate some statistics.
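The cleaning step can be sketched in a few lines (the original script is not shown in the post, so the page-number and sentence-splitting heuristics here are assumptions):

```python
import re

def sentences(raw_text):
    """Split pandoc's text output into single sentences: drop lines
    that are only page numbers, re-join sentences broken across
    lines, and split on sentence-ending punctuation."""
    lines = [line.strip() for line in raw_text.splitlines()]
    lines = [line for line in lines if line and not line.isdigit()]
    text = " ".join(lines)  # re-join sentences split across lines
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
```

Each resulting sentence would then be sent to the sentiment API and its score written out, one line per sentence.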

The last part of the analysis was creating a visualization of the data. For this, I chose the Nightingale’s Rose from the Protovis visualization toolkit, and the Wordle tool to create tag clouds. The result of the first can be seen below.

Diagram of Sentiments of the Political Manifestos in the Spanish Elections

Every slice in the diagram has two areas. The blue one represents the total number of sentences with a positive sentiment; the red one, the total number with a negative sentiment. The length of the manifestos ranges from 2,250 sentences in Esquerra’s to the much smaller 623 in UPyD’s. In relative terms, we find the following results.

Percentage of Positive and Negative Sentences

It seems like UPyD is the most realistic party, because its speech has a higher percentage of negative sentences than the rest (~16%), while still preserving a good number of positive ones. On the other side, CC and PP have the most optimistic manifestos, with a percentage of positive sentences higher than 95%. But what kinds of words are most used in their respective manifestos? Let’s see…
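The clouds below come from Wordle, but the word counts behind them are trivial to reproduce; a minimal sketch (the tiny stop-word list is only illustrative, a real one would be much longer):

```python
import re
from collections import Counter

STOPWORDS = {"de", "la", "el", "los", "las", "y", "en", "que"}  # illustrative

def word_frequencies(text, top=10):
    """Lower-case the manifesto, keep only alphabetic words
    (including Spanish accented letters), drop stop words, and
    return the `top` most common words with their counts."""
    words = re.findall(r"[a-záéíóúüñ]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(top)
```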

PSOE Election Manifesto Tag Cloud

This one is the cloud of the party currently in government. Its manifesto seems to center around the words social, employment (“empleo”), system (“sistema”), politics (“política”), economy (“economía”), equality (“igualdad”) and companies (“empresas”); precisely the topics in which they have notably failed.

PP Election Manifesto Tag Cloud

The PP is the main opposition party and allegedly the more right-wing one (actually, both have shown the same social policies in the past). Its manifesto is strongly focused on the word change (“cambio”), followed by employment (“empleo”), society (“sociedad”), stability (“estabilidad”), reforms (“reformas”), better (“mejor”), European (“europea”), welfare (“bienestar”), and the future tense of to boost (“impulsaremos”). Of course it is a really positive speech. Who wouldn’t vote for them, with all that happiness and improvement? It won’t be me…

IU Election Manifesto Tag Cloud

On the other hand, the historically more left-wing party focuses on highlighting the words, again, left (“izquierda”), proposals (“propuestas”), rights (“derecho”), united (“unida”, just because the party’s name means something like United Left), social in many forms (“social”), elections (“electoral”), public in yet another bunch of forms (“público”, “pública” and so on) and services (“servicios”). In my opinion, not a very strong manifesto, and maybe a little bit fainthearted.

UPyD Election Manifesto Tag Cloud

It looks like our most realistic party has no prominent word. Instead, it focuses on communities (“comunidades”), development (“desarrollo”), the autonomous regions (“autónomas”) and administration (“administración”). Perhaps it’s the most heterogeneous manifesto I have analyzed.

Esquerra Election Manifesto Tag Cloud

Esquerra is a Catalan party and I couldn’t find its manifesto in Spanish. Anyway, they seem to center around state (“estat”), people (“persones”), the name of their region, Catalonia (“Catalunya”), social (“social”), action (“acció”) and politics (“política”).

EQUO Election Manifesto Tag Cloud

EQUO is a newly created party, founded by the former director of Greenpeace Spain, with a strong focus on the environment and global warming. That’s why we find words like health (“salud”), sustainable (“sostenible”) or development (“desarrollo”).

CC Election Manifesto Tag Cloud

Geroa Bai Election Manifesto Tag Cloud

In these last two clouds we can see the name of the party and, more importantly, the names of the corresponding autonomous regions: Canarias and Navarra. The rest of the words are barely used. Maybe they are trying to win voters in their own regions, because the whole manifesto revolves around the names of those regions.

Sadly, the worst has come. And what is that? It’s not about having a hard right-wing party for the next 4 years. It’s about granting a party the power to rule alone in the government, with an absurd, and most of the time counterproductive, absolute majority.

Final Congress Results 2011 (Source: elpais.com)

Leave a Comment

Filed under Analysis