Thursday, September 27, 2007

Evolution according to Tomas Persson & Co

[email] Hi Jeroen, Happened upon your blog. Thought you might enjoy this paper on a proposed iconic-gestural origin of language. Or perhaps another of the publications [from SEDSU]. All the best, Tomas Persson
SEDSU Frontpage illustration from the paper. Well, I checked it out, and for all those interested in evolution it might be nice to do the same. The paper's full title is 'Bodily mimesis as “the missing link” in human cognitive evolution', by Jordan Zlatev, Tomas Persson and Peter Gärdenfors. First impression: Strange how people tend to think that the topic of their study (in Lund's case it is a workpackage on 'imitation and mimesis') is the one decisive factor in human evolution. Not that I have a shred of evidence to prove them wrong. But it will be interesting to read their case in more detail. {I, for one, believe that it is our ability to blog that sets us apart from other animals. And, of course, I mean blogging in a broad sense. For what is blogging if not the continual provision of unsolicited non-information on how we feel about things and about what we know? Humans have always 'blogged', even before the internet and before the alphabet. We filled the world with our own thoughts and listened to ourselves, not to anyone else. This constant egoistic reflection created an evolutionary pressure whereby only individuals who could sustain this confrontation with the inner blogger were still confident enough to reproduce. Since then, most of the strains of humanity who had any shame or humility left have died out in (relative) silence. What is left is what we are now: wanderers of the web, captains of comments, and slaves to our next posting.}
[SEDSU's main hypothesis:] There remains, despite centuries of debate, no consensus about what makes human beings intellectually and culturally different from other species, and even less so concerning the underlying sources of these differences. The main hypothesis of the project Stages in the Evolution and Development of Sign Use (SEDSU) is that it is not language per se, but an advanced ability to engage in sign use that constitutes the characteristic feature of human beings; in particular the ability to differentiate between the sign itself, be it gesture, picture, word or abstract symbol, and what it represents, i.e. the “semiotic function” (Piaget 1945).
Substantial work has of course been done on gesture (or sign language) with primates (see this entire issue of Gesture). In some cases chimpanzees or gorillas were taught to use gestures or pictures as signs (with a semiotic function). How does that fit into SEDSU's picture? Intuitively, I would sooner propose that it is our ability to create 'systems of systems' of signs that sets us apart. Or maybe our ability to create and remember such large quantities and varieties of signs. I think even most animals and perhaps (what the hell) plants can be argued to 'gesture'. Do they differentiate between a signal and that which it represents? I think they do. Any animal that warns its group against predators is sending out a signal. The group members see the signal, not the predator, right? Or perhaps they can only communicate about what is actually present and not refer to things in other times and places? Enough speculation. It is time to read. I expect your reactions to the paper within this week... PS. Did you wonder about the semiotic function of the {curly brackets} as used above? Then you must be human. The answer: I signaled a humorous intermezzo.

Monday, September 24, 2007

Air Guitar Toy by Mannak

Ronald Mannak, a former colleague, is now developing toys at his own 1uptoys. His current toys are the SilverLit V-Beat AirDrums, AirGuitar and BoomBox. This week our university's 'newspaper' has an interview with him: Luchtgitaar met Geluid [Air Guitar with Sound]. And here he is in a video demonstrating his AirGuitar: Still a long way to go before he can compete for the air guitar world championship, I think. But the product is interesting to consider. At first I thought it looked quite nice and cool. But then I wondered: why would anyone want to actually have an AirGuitar? Isn't the point of playing air guitar that you don't have to have the damn thing? If I am going to buy something to play guitar I might as well, or even better, buy a real (toy) guitar, right? Is this going to be cheaper than a real guitar? I would guess that the additional electronics will not be cheaper than the bits of extra wood, metal or plastic needed for a physical guitar. But then again, microelectronics can be cheap if they are sold in large quantities. So, is this going to provide a better experience? I think that by definition that is impossible. The point of playing air guitar is to imitate the actual playing, to go through the motions and almost 'feel like' you are really playing. In other words, it can never be better than the real thing, or can it? Maybe it can. Maybe it can help people who cannot play guitar 'feel more like' they are playing guitar. Maybe the AirGuitar can take care of the difficult stuff, like putting your fingers in the right position on the strings and remembering the chords and licks, and leave the exciting stuff to you, like strumming wildly, creating vibrato or smashing it. That would be neat. Ronald, if you read this: can you make it so it can be smashed?

Art of Gesture on Stage

Here is a nice article on the art of gesture in theatre: Music students help revive the art of Baroque gesture. Paris Judgment Reviving an ancient art: students from the University’s Faculty of Music worked with theatre director Helga Hill to present a fully-staged and gestured season of Eccles’ The Judgment of Paris: Above, Paul Bentley as Paris and Janelle Hopman as Venus. [Photo: Mark Wilson] (source) Johann Jakob Engel (DE) wrote in a very interesting way about gestures, especially in Ideen zu einer Mimik. From the perspective of actors on stage, he analyzed how gestures function. I read only the paper by Sara Fortuna (2003) in Gesture: Gestural expression, perception and language. A discussion of the ideas of Johann Jakob Engel. It is intriguing reading material. A bit difficult to summarize in a few sentences here, so I will not try. An open mind, keen on philosophical musings, is a good companion while chewing on Engel's thoughts. If we go further back in time, the work of Quintilian (and Cicero) is related. They wrote for orators, who were actors as much as they were politicians and lawyers. Wittgenstein is also referenced a lot.

Microsoft Surface

Microsoft is making a big deal out of their Surface. Basically, it is a regular computer with some fancy software that works together with a new type of table-sized touchscreen. It enables people to work with the ten fingers of two hands or with artefacts (multi-touch) and it is sensitive to pressure. This idea was most eloquently presented by Jeff Han earlier; maybe Microsoft bought the idea? Anyway, here it is, one of the most expensive tables you will ever desire: Microsoft Surface parody (source) See also Ianus' Cabinet and Palette, the Studiolab Surface.

Tuesday, September 18, 2007

Deafblind Haptic Sign Language

A Dutch local newspaper, Leidsch Dagblad, has written a good report on the annual holiday gathering organized by the national foundation for the Deafblind (De Nederlandse Stichting voor Doofblinden). About 70 deafblind people (and their interpreters) apparently had a good time there. Impressions of the gathering (source) I previously wrote about their four hands sign language (vierhandengebarentaal): 'signs that made a life worth living'. I think this is a language that is not very well known or understood. So, wouldn't it be great to set up some research on this haptic sign language? There are plenty of people who are interested in sign language because it provides insight into the human language capacity. They compare how people listen and talk (and gesture) to how they watch and sign (and gesture). General human language processing must be separated from modality-dependent processing (though it is actually more like oral/auditory+visual/gestural vs. visual/gestural). Very interesting nevertheless. Lots of brain research with fMRI scanners... Just imagine what we could learn by studying deafblind people while letting them 'talk' or 'listen' in haptic sign language. They should probably go two-by-two? Or else, what would be the stimulus material to which they must respond? Prepared haptic sign language material? Hmm, maybe some observations should be the first step, or recordings using video or perhaps datagloves? Anyway, I would love to see more of it. Investigate how deafblind people manage to defy the odds and together create a language of their own. They are apparently already telling jokes. When shall we see/feel the first haptic sign language poem? And how can it be captured, transcribed or annotated? What sort of grammar does it have? Does iconicity play a role in sign formation and language use? Is iconicity achieved using similar strategies as in gesture and sign language?
An ambitious man could write a research proposal for a nice post-doc position about it. Sometimes you don't have to go to small villages in Africa or the Middle East to find interesting languages. Sometimes you just need to hold out your hand. A professor in Utrecht who does haptic research: Astrid Kappers. Professors in Nijmegen who study language: Levinson - Hagoort.

Saturday, September 15, 2007

In Love with SiSi

A wonderful bit of news has been hitting the headlines:
BBC News: Technique links words to signing: Technology that translates spoken or written words into British Sign Language (BSL) has been developed by researchers at IBM. The system, called SiSi (Say It Sign It) was created by a group of students in the UK. SiSi will enable deaf people to have simultaneous sign language interpretations of meetings and presentations. It uses speech recognition to animate a digital character or avatar. IBM says its technology will allow for interpretation in situations where a human interpreter is not available. It could also be used to provide automatic signing for television, radio and telephone calls.
Read the full story at IBM: IBM Research Demonstrates Innovative 'Speech to Sign Language' Translation System Demo or scripted scenario? Serendipity. Just this week a man called Thomas Stone inquired whether he could get access to the signing avatars of the eSign project. I passed him on to Inge Zwitserlood. She first passed him on to the eSign coordinator at Hamburg University, which was a dead end. Finally, he was pointed to the University of East Anglia, to John Glauert. And who is the man behind the sign synthesis in SiSi? From the press release from IBM:
John Glauert, Professor of Computing Sciences, UEA, said: "SiSi is an exciting application of UEA's avatar signing technology that promises to give deaf people access to sign language services in many new circumstances." This project is an example of IBM's collaboration with non-commercial organisations on worthy social and business projects. The signing avatars and the award-winning technology for animating sign language from a special gesture notation were developed by the University of East Anglia and the database of signs was developed by RNID (Royal National Institute for Deaf People).
Well done professor Glauert, thank you for keeping the dream alive. Now for some criticism: the technology is not very advanced yet. It is not at a level where I think it is wise to make promises about useful applications. The signing is not very natural and I think much still needs to be done to achieve a basic level of acceptability for users. But it is good to see that the RNID is on board, although they chose their words of praise carefully. It is amazing how a nice technology story gets so much media attention so quickly. Essentially these students have just linked a speech recognition module to a sign synthesis module. The inherent problems with machine translation (between any two languages) are not even discussed. And speech recognition only works under very limited conditions and produces limited results.
IBM says: "This type of solution has the potential in the future to enable a person giving a presentation in business or education to have a digital character projected behind them signing what they are saying. This would complement the existing provision, allowing for situations where a sign language interpreter is not available in person".
First, speech recognition is incredibly poor in a live event like a business presentation (just think of interruptions, sentences being rephrased, all the gesturing that is linked to the speech, etc.) and second, the idea that it will be (almost) as good as an interpreter is ludicrous for at least the next 50 years. The suggestion alone will probably be enough to put off some Deaf people. They might (rightly?) see it as a way for hearing people to try to avoid the costs of good interpreters. I think the media just fell in love at first sight with the signing avatar and the promises it makes. I also love SiSi, but as I would like to say to her and to all the avatars I've loved before: My love is not unconditional. If you hear what I say, will you show me a sign?

Thursday, September 13, 2007

Doodling, Gesture, and Language Origins, the Movie

Here is a very entertaining video (nice music) that tells the tale of gesture and the origins of language in a nutshell. Much has been written about how the language capability may have evolved in humans with gesture as a stepping stone, or how Man's first language may have been a signed language. Recent brain research findings (gesture+speech, mirror neurons, lateralization, sign language aphasia) have added more indirect 'evidence' for these theories. It is still hard to really prove anything about pre-historic events though... One thing that struck me is how the author talks about how people might be aided in their thinking when they gesture, doodle, or fidget. A reference to fidgeting! Hooray! Should I point out that I think gesture and fidgeting are quite different? No, I will just let it be.

Sunday, September 09, 2007

Lead Guitar Body Language

Here is the Air Guitar World Champion 2007, Ochi "Dainoji" Yosuke (Japan) performing at the Air Guitar World Championships 2007, Oulu, Finland: What a nice gesture performance: the pantomime, the gestures, the emotional expressions, the mimicry of the actual guitar play, and of course the dramatic gestures of a lead guitar player on stage. It makes me realize that a language may be found in every hidden corner of human activity. In this case Dainoji shows a hilarious command of the body language of lead guitarists. It also makes me wonder what exactly would remain of 'musical gestures' if all of a musician's 'body language' were hidden from the audience? I guess something would remain, and that would then be the real musical gesture.

Friday, September 07, 2007

Baby Sign Mini Dictionary Flash

Here is a wonderful flash animation from Babystrology, featuring a signing baby: Lord knows, I am not the world's biggest fan of baby signing, but this is positively funny. I hope the creators keep treating baby sign with the same sense of humor. It is far too important a subject to ever talk seriously about. *Another nice example of using flash to present sign language online: Avon and Somerset Police. *Another nice example of animated kids signing: XV Congreso Mundial de la WFD.

Thursday, September 06, 2007

Plains Indian Sign Language, Browning 1930

A wonderful collection of videos with Plains Indian Sign Language has been put on YouTube by Tommy Foley, including a short 'teaser' with subtitles. The videos were recorded in 1930 in Browning, Montana, when sign talkers from 14 different Plains nations gathered as participants in a conference organized by General Hugh L. Scott for the purpose of demonstrating their use of sign language. The first four videos (see this playlist) contain material from the participants at the conference themselves: Indians telling stories. Another six videos are a video version of a dictionary of the language (see this playlist).
Following the 1930 Plains Indian Sign Language Conference, General Scott intended to produce a cinematic dictionary of over thirteen hundred signs. Due to the Great Depression, it would have been too difficult to get a second appropriation bill passed through Congress to finish the cinematic dictionary. He did manage to get over three hundred signs filmed. (Note from Tommy Foley)
An important documenter of Plains Indian Sign Language was Col. Garrick Mallery. He wrote 'Sign Language Among North American Indians Compared With That Among Other Peoples And Deaf-Mutes', a report for the Smithsonian Institution published in 1881, which is available for free download as an e-book via Project Gutenberg.

Monday, September 03, 2007

Wiki Interactive Gestures

Dan Saffer of Adaptive Path, a member of and contributor to the Interaction Design Association (IxDA), calls upon interaction designers to share their knowledge on gestural interaction in this new Interactive Gestures wiki. Read his 'Call to Arms' for more info and some nice links to web resources. Today's screen capture of the Interactive Gestures wiki. I think it is a good idea, which is why I am repeating the news here. Share and enjoy.