Google creates a tool to probe 'genome' of English words for cultural trends

How many words in the English language never make it into dictionaries? How has the nature of fame changed in the past 200 years? How do scientists and actors compare in their impact on popular culture?

These are just some of the questions that researchers and members of the public can now answer using a new online tool developed by Google with the help of scientists at Harvard University. The massive searchable database is being hailed as the key to a new era of research in the humanities, linguistics and social sciences that has been dubbed "culturomics".

The database comprises more than 5m books – both fiction and non-fiction – published between 1800 and 2000, representing around 4% of all the books ever printed. Dr Jean-Baptiste Michel and Dr Erez Lieberman Aiden of Harvard University have developed the search tool, which they say will give researchers the ability to quantify a huge range of cultural trends in history.

"Interest in computational approaches to the humanities and social sciences dates back to the 1950s," said Michel, a psychologist in Harvard's Program for Evolutionary Dynamics. "But attempts to introduce quantitative methods into the study of culture have been hampered by the lack of suitable data. We now have a massive dataset, available through an interface that is user-friendly and freely available to anyone."

In their initial analysis of the database, the team found that around 8,500 new words enter the English language every year and that the lexicon grew by 70% between 1950 and 2000. But most of these words do not appear in dictionaries. "We estimated that 52% of the English lexicon – the majority of the words used in English books – consists of lexical 'dark matter' undocumented in standard references," they wrote in the journal Science (the full paper is available with free online registration).

The researchers were also able to trace how words had changed in English, for example a trend that began in the US away from irregular verb forms such as 'burnt', 'smelt' and 'spilt' towards the regular '-ed' forms. "The [irregular] forms still cling to life in British English. But the -t irregulars may be doomed in England too: each year, a population the size of Cambridge adopts 'burned' in lieu of 'burnt'," they wrote. "America is the world's leading exporter of both regular and irregular verbs."

The team also investigated the changing nature of fame over the past two centuries. By looking at the frequency of famous people's names in literature, they showed that celebrities born in the mid-20th century tended to be younger and more famous than those of the 19th century, but their fame lasted for a shorter period of time. By 1950, celebrities were achieving fame, on average, when they were 29, compared with 43 for celebrities around 1800. "People are getting more famous than ever before," wrote the researchers, "but are being forgotten more rapidly than ever."

"Mark Twain is among the most famous writers and among the most famous people," said Michel. "Among the American presidents, it's Theodore Roosevelt."

Aiden warned against straightforward comparisons of historical figures, however. "It's comparing apples and oranges comparing presidents from the mid to late 20th century and those that precede them. The reason is that they haven't really had the full opportunity to reach the height of their fame trajectory. By virtue of having been around longer, someone in the mid-19th century is going to have accrued a lot of fame."

By the mid-20th century, the most famous actors tended to achieve fame at around 30 years of age, while writers had to wait until they were 40. For politicians, fame didn't tend to arrive until they reached 50 or above.

"Science is a poor route to fame. Physicists and biologists eventually reached a similar level of fame as actors but it took them far longer," wrote the researchers. "Alas, even at their peak, mathematicians tend not to be appreciated by the public."

For anyone tracking the cultural spread of specific thinkers, it is worth noting that "Freud" appears more often in the digitised books than "Galileo", "Darwin" or "Einstein".

The database can also reveal patterns of censorship in the literature of individual countries. The Jewish artist Marc Chagall, for example, was mentioned only once in the entire German-language literature from 1936 to 1944, even though his appearances in English-language books grew around fivefold over the same period. There is also evidence of censorship in Chinese literature when it comes to Tiananmen Square, and in Russian books with regard to Leon Trotsky.

Claire Warwick, director of the Centre for Digital Humanities at University College London, said that humanities researchers had been using the word-frequency techniques being described by Michel and Aiden for several decades. But the sheer size of their dataset marked it out from the usual tools. "What's different is that this allows people to not just look at several hundred thousand words or several million words but several million books. So the overview is much bigger. That may bring out some hitherto unexpected ideas."

The database of 500bn words is thousands of times bigger than any existing research tool, with a sequence of letters that is 1,000 times longer than the human genome. The vast majority, around 72%, is in English with small amounts in French, Spanish, German, Chinese, Russian, and Hebrew.

"In science, huge datasets which people have used super-computing on have led to some fascinating new discoveries that otherwise wouldn't be possible," said Warwick. "Whether that's going to be the same in the arts and humanities, I don't know yet."

Aiden said that while the database will allow people to track the discussion of a topic in history, it is not yet possible to gauge sentiment or meaning. For example, you might want to know not only that there was more discussion of slavery in books, but also whether people's thoughts about it were changing from positive to negative.

This refinement of the database would be possible, he said, but there is a problem for most of the works published in the 20th century because they are still in copyright. For the 19th century, however, Aiden expects this extra functionality to become available within a few years.

Michel said they also hoped to include more books in more languages, as well as the contents of newspapers, manuscripts, letters, websites and blogs.

To coincide with the publication of the Science paper, Google has released a tool that allows members of the public to see how often a word or phrase has appeared in its scanned literature and how this usage has changed over time.
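Alongside the browsable viewer, Google released the underlying n-gram counts as downloadable tab-separated files. As a rough illustration only – assuming the published row format of ngram, year, match count and volume count, with the sample numbers below invented for demonstration – a few lines of Python show the kind of year-by-year tally the tool performs, here comparing 'burned' against 'burnt':

```python
import csv
import io

# Invented sample rows in the assumed Ngram export format:
# ngram <TAB> year <TAB> match_count <TAB> volume_count
SAMPLE = """\
burnt\t1950\t120\t80
burnt\t1951\t110\t75
burned\t1950\t300\t150
burned\t1951\t340\t160
"""

def counts_by_year(tsv_text, word):
    """Return {year: match_count} for one word from Ngram-style TSV rows."""
    counts = {}
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for ngram, year, match_count, _volume_count in reader:
        if ngram == word:
            counts[int(year)] = int(match_count)
    return counts

# Compare the regular and irregular forms year by year.
burnt = counts_by_year(SAMPLE, "burnt")
burned = counts_by_year(SAMPLE, "burned")
for year in sorted(burnt):
    print(year, round(burned[year] / burnt[year], 2))
```

A real analysis would of course run over the full multi-gigabyte dataset and normalise by the total words printed each year, which is how the viewer turns raw counts into the frequency curves described above.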

"One of the ways to use this is to suggest ideas," said Warwick. "You can look at something like this and say, how fascinating that a certain term seems to occur so commonly and I wonder why that should be."

Michel agreed that mining the culturomics dataset should be thought of as just the beginning of a piece of cultural research. "These are starting points for discovery and they don't tell the whole story. They could be linguistic changes, social changes. One should be very careful in how we analyse these data."

Comment on this article in our story tracker, and email updates and links for inclusion to [email protected]. We'd like to hear about your own research using the new tool. What trends have you unearthed? The paper's authors have agreed to analyse some of the best ones for us.
