Sunday, December 28, 2008

Textbook - Section 6

Hello.

This is the final section of the textbook chapter. There are also Further Readings and Homework and Discussion sections, but I doubt any of you are that serious.

From the Language Computer Back to the Language Human

This chapter has focused so far on how we as humans use computers to talk to each other, and we’ve seen that the more we ask of computers, the more like us they have to become. On an everyday basis, this means that language engineers often spend their time reading about humans and language. All the areas discussed in other chapters of this book (grammar, phonetics, psychology of language, society and language, etc.) can serve as evidence for how to build a computer that does what we do. However, researchers in these human-focused areas of language study have increasingly been looking to computers for insight as well. Just as we might figure out how to build a computer by studying people, seeing how a computer does something can help us figure out how a person does it. There are two main components to this: corpus linguistics and computational linguistics, particularly computational modeling.

We’ve already run into corpora (corpora is the plural of corpus) in this chapter. A corpus, again, is a body of language data. Technically, a corpus can be any size: our one-sentence text message was a corpus, and we studied it to find patterns of language use in text messaging. A larger corpus containing tens of thousands of text messages would allow much bigger questions to be asked. How common are consonant-only abbreviations like "thx" and "pls"? How common are smileys? Are text messages from women distinct from those of men? Is text messaging ever used to address someone of higher status, or only people of the same status or lower? Software tools can help us scan millions of messages, which would be infeasible to do by hand.
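
If you're curious what such a scan looks like in practice, here is a minimal sketch in Python. The file name sms_corpus.txt, the one-message-per-line format, and the two search patterns are all assumptions made for illustration, not something from the chapter; a real study would use a curated SMS corpus and more careful patterns.

    # Scan a file of text messages for consonant-only abbreviations
    # and smileys. All names and patterns here are illustrative.
    import re
    from collections import Counter

    consonant_only = re.compile(r"^[bcdfghjklmnpqrstvwxz]{2,}$", re.IGNORECASE)
    smiley = re.compile(r"[:;]-?[)(DP]")

    abbrevs = Counter()
    smileys = 0
    messages = 0

    with open("sms_corpus.txt", encoding="utf-8") as f:  # one message per line
        for line in f:
            messages += 1
            smileys += len(smiley.findall(line))
            for word in line.split():
                if consonant_only.match(word):
                    abbrevs[word.lower()] += 1

    print(messages, "messages scanned")
    print("smileys per message:", smileys / max(1, messages))
    print("top consonant-only abbreviations:", abbrevs.most_common(10))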

Large corpora have often revealed facts about our language that went unnoticed before. For instance, there have been claims that children never hear certain types of grammatical patterns, and so there would be no way for a child to learn them. However, analysis of large databases of real language reveals just those grammatical patterns. Corpora can open up a world of language detail that we’ve never had access to in the past. You can try some basic analysis on one of the world’s largest corpora today if you wish, namely Google. Want to find out whether two words ever occur together in actual usage? Enter them into Google and see.
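
The same kind of lookup can be done offline on a traditional corpus. Here is a rough sketch using the Brown corpus that ships with the Python library NLTK; the library and corpus are my own choices for illustration (they require a one-time nltk.download('brown')), and the example word pair is the well-known contrast between "strong tea" and the odd-sounding "powerful tea".

    # Count how often two words occur within a few words of each other
    # in the Brown corpus. Requires: pip install nltk, then running
    # nltk.download('brown') once.
    from nltk.corpus import brown

    words = [w.lower() for w in brown.words()]

    def cooccur(word1, word2, window=5):
        """Count occurrences of word1 with word2 within `window` words."""
        hits = 0
        for i, w in enumerate(words):
            if w == word1 and word2 in words[max(0, i - window):i + window + 1]:
                hits += 1
        return hits

    # Compare a familiar collocation against its odd-sounding variant.
    print("strong tea:", cooccur("strong", "tea"))
    print("powerful tea:", cooccur("powerful", "tea"))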

Linguists are also using computational models more and more in their study of language use. One reason for this is eminently practical: the strict formalism we’ve been discussing in earlier sections forces a linguist to find out whether they really know what they’re talking about. To put it another way, we might think we have a very clear idea of what a grammatical subject is. Then we try to tell an unthinking computer how to find the subject and discover there are a lot of details we forgot about. As another example, suppose we have a hypothesis about speaking that involves 1) coming up with some sort of meaning to express, 2) putting that meaning into the right grammatical structure, and then 3) finding the words to fill out that structure. It may seem very clear to us as we sit in our armchairs thinking about it. But when we go to teach a computer that clear idea, we discover that our ideas on how grammar relates to words were fuzzy, or that our notion of the meaning of verbs is incompatible with our notion of the meaning of sentences.
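
To see how quickly the armchair version runs out of details, here is a deliberately naive Python sketch of that three-stage hypothesis. The grammatical frame and the tiny lexicon are invented for illustration; no linguist would defend them as they stand, which is exactly the point.

    # Stage 1: meaning arrives as a predicate plus its arguments.
    # Stage 2: the meaning is slotted into a grammatical frame.
    # Stage 3: words are looked up to fill each slot.
    lexicon = {
        ("DOG", "subject"): "the dog",
        ("CAT", "object"): "the cat",
        ("CHASE", "verb"): "chases",
    }

    def produce(meaning):
        predicate, agent, patient = meaning          # stage 1
        frame = [(agent, "subject"),                 # stage 2: only one
                 (predicate, "verb"),                # frame exists here!
                 (patient, "object")]
        return " ".join(lexicon[slot] for slot in frame)  # stage 3

    print(produce(("CHASE", "DOG", "CAT")))  # "the dog chases the cat"
    # Now try tense ("chased"), plurals ("the dogs"), or a passive
    # ("the cat is chased by the dog") and watch the clear idea blur.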

Beyond using the computer’s formalism to straighten out our own thinking, computational models can sometimes predict what humans would do in a similar situation. Language is an enormously complicated system, and it is often difficult to see how tweaking one bit here will affect the system as a whole. If a psychologically plausible model can be created for some small area of language use, we can run simulations of our linguistic psychology right on the computer.

This spiral from human to computer and back to human again is perhaps best seen in connectionist models of language. Connectionist models, also called neural networks, are computational models based on some properties of the human brain. In these networks, neuron-like units become activated by the data they encounter and then "wire together" to learn the patterns in that data. Such models can of course be trained on language data and then used to make predictions about how humans would react to the same language. One example is the study of the effects of brain lesions on language comprehension and production. It is unethical to deliberately destroy part of a real person’s brain to see what effect damage to that area (called a brain lesion) would have on language. However, one can simulate lesions in a neural network without going to jail, and therefore safely test hypotheses.
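
Here is a toy version of such a lesion study, again a sketch under invented assumptions rather than anything from the research literature: a tiny network learns a made-up pattern-association task, half of its hidden units are then silenced, and we watch how much of the learned behavior survives.

    # Train a small network, then "lesion" it by silencing hidden
    # units. The task, sizes, and rates are illustrative inventions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: associate random 8-bit "word" patterns with 4-bit "meanings".
    X = rng.integers(0, 2, size=(30, 8)).astype(float)
    Y = rng.integers(0, 2, size=(30, 4)).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

    # One hidden layer of 16 neuron-like units, trained by plain backprop.
    W1 = rng.normal(0.0, 0.5, size=(8, 16))
    W2 = rng.normal(0.0, 0.5, size=(16, 4))

    for _ in range(5000):
        H = sigmoid(X @ W1)
        out = sigmoid(H @ W2)
        d_out = (Y - out) * out * (1 - out)
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 += 0.5 * (H.T @ d_out)
        W1 += 0.5 * (X.T @ d_hid)

    def accuracy(w1, w2):
        pred = sigmoid(sigmoid(X @ w1) @ w2) > 0.5
        return (pred == Y.astype(bool)).mean()

    print("intact network:  ", accuracy(W1, W2))

    # The "lesion": permanently silence a random half of the hidden units.
    lesioned = W1.copy()
    lesioned[:, rng.random(16) < 0.5] = 0.0
    print("lesioned network:", accuracy(lesioned, W2))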

In all realms, we reveal who we are by what we do. One might also say that we reveal who we are by what we create. As we attempt to create more sophisticated tools to accomplish tasks that humans alone, as far as we know, can accomplish, we have to put more and more of ourselves into the tool. With computers, particularly in the realm of language, this is not just an abstract idea, but an accurate description of what is happening in language labs and industry offices all around the world. We study ourselves to understand computers and we study computers to understand ourselves.

2 comments:

McKoala said...

Darn, you're smart.

And there endeth my input.

fairyhedgehog said...

I held off from reading this because I thought it was going to be too difficult but it wasn't at all and I really enjoyed it.

I'm fascinated with the way computers can tell us things about language that we just didn't know, and the way that having to explain something to a computer shows up flaws in our thinking.

Thanks for posting all these, it's been fun.