Current projects

Complex word reading and individual differences

How do we come to recognize words built out of multiple parts? The concept of 'alehouse' contains more information than the word form alone might suggest: at a theoretical level, it may denote a relationship between the concepts of 'ale' and 'house', e.g., a 'house' in which 'ale' is consumed. The bulk of my current projects focuses on whether conceptual structure like this is learned and drawn upon during language comprehension. I use eye-tracking to build statistical models of word recognition that take into account the semantic-conceptual structure of complex word meanings and individual differences in reading skill; a minimal sketch of this kind of model appears below. This work is part of a SSHRC-funded project in collaboration with principal investigators Thomas Spalding and Christina Gagné.
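As a rough illustration of the kind of statistical model this work relies on (not the actual analysis), the sketch below fits a linear mixed-effects regression to hypothetical gaze durations, with predictors for a conceptual-relation measure and reading skill. All file, column, and variable names are invented for illustration.

    # Illustrative only: a linear mixed-effects model relating eye-tracking
    # measures to conceptual structure and reading skill.
    # Column names (gaze_duration, relation_strength, reading_skill, ...) are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("eyetracking_fixations.csv")  # hypothetical data file

    # Gaze duration on a compound word, predicted by how strongly its parts are
    # conceptually related and by the reader's skill, with random intercepts
    # for participants and item variability as a variance component.
    model = smf.mixedlm(
        "gaze_duration ~ relation_strength * reading_skill + word_frequency",
        data=df,
        groups=df["participant"],
        vc_formula={"word": "0 + C(word)"},  # approximate item (word) random effects
    )
    result = model.fit()
    print(result.summary())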

Our most recent publication in Psychonomic Bulletin & Review can be found here.

The time-course of word recognition

When reading a word, how does the process unfold in time? On a simplistic level, the timeline begins when the eye registers patterns of light, which are translated into individual letters and whole word forms, which the brain automatically treats as meaningful linguistic symbols, which in turn unlocks the word's meaning(s). Even though this is a highly simplified view of reading, it is fascinating that the whole process can occur within 250 ms (and perhaps even faster!). The goal of this project is to trace the time course of the cognitive processes involved in this highly automatized and seamless skill.

Using Twitter to examine dialectal differences at the England-Scotland border

Do lines in the sand correspond to lines in the mind? This study investigates the influence of a national border on geo-linguistic variation. Using computational linguistics tools, we examine the correspondence between dialectal lexical preferences and the location of the national boundary. We analyzed millions of geo-tagged Twitter posts from speakers who live close to the England-Scotland border. The project (1) uses an open-vocabulary approach to detect the most extreme divergences in lexical usage between English and Scottish dialects (e.g., wee vs. little; maw vs. mam), and (2) uses generalized additive mixed-effects models to predict where these divergences occur as a function of the longitude and latitude of the tweet locations; a sketch of the divergence step follows below. Our findings reveal that the true lexical boundary between England and Scotland does not correspond to the location of the national border, and they enable us to identify dialects within and across England and Scotland. To view an interactive plot of this project's findings, visit http://geotwit.mcmaster.ca/scot/. Collaborators in this project are Bryor Snefjella and Victor Kuperman.
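To give a flavour of step (1), the sketch below computes a smoothed log-odds divergence between two regional corpora. This is a generic open-vocabulary statistic, not necessarily the exact measure used in the project, and the toy tweets are invented.

    # Rough sketch of an open-vocabulary divergence measure between two regional
    # corpora; words with the largest |log-odds| are the strongest dialect markers.
    import math
    from collections import Counter

    def lexical_divergence(tweets_a, tweets_b, min_count=50):
        """Smoothed log-odds of word use in corpus A vs. corpus B."""
        counts_a = Counter(w for t in tweets_a for w in t.lower().split())
        counts_b = Counter(w for t in tweets_b for w in t.lower().split())
        total_a, total_b = sum(counts_a.values()), sum(counts_b.values())
        scores = {}
        for word in set(counts_a) | set(counts_b):
            ca, cb = counts_a[word], counts_b[word]
            if ca + cb < min_count:
                continue  # ignore rare words
            # add-one smoothing so unseen words do not produce infinities
            odds_a = (ca + 1) / (total_a - ca + 1)
            odds_b = (cb + 1) / (total_b - cb + 1)
            scores[word] = math.log(odds_a / odds_b)
        return sorted(scores.items(), key=lambda kv: kv[1])

    # Toy example: most Scottish-leaning vs. most English-leaning words
    scottish_tweets = ["that wean is awfy wee", "away tae see ma maw"]
    english_tweets = ["the little one is off to see his mam", "going home now"]
    ranked = lexical_divergence(scottish_tweets, english_tweets, min_count=1)
    print(ranked[:5], ranked[-5:])

In the full analysis, the words flagged this way would then feed into the generalized additive mixed-effects models over longitude and latitude.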

Do national character stereotypes correspond to a nation’s distinctive words?

What are national character stereotypes grounded in? Differences in linguistic behaviour between nations might provide an answer. Using tweets from Canada and the US, the goal of this project is to use data-driven techniques to find the words that are most representative of each country. We then use a nation's most distinctive vocabulary as a window into national character stereotypes, particularly the positivity of a nation's outlook and the personality profile that characterizes it. The findings of this project suggest that distinctively Canadian words are more positive, more agreeable, more conscientious, and less neurotic. To view an interactive plot of the most distinctive word usage along the East Coast of Canada and the US, visit http://geotwit.mcmaster.ca/usca/. Collaborators in this project are Bryor Snefjella and Victor Kuperman.
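As a minimal sketch of how distinctive vocabulary can be linked to positivity or personality, the snippet below averages norm ratings over each country's most distinctive words. The norm values and word lists are invented placeholders, not the project's actual data or results.

    # Illustrative sketch: average an affective norm over a country's most
    # distinctive words. All values below are placeholders, not real norms.
    def mean_norm(words, norms):
        """Average norm rating over the words that have a rating."""
        rated = [norms[w] for w in words if w in norms]
        return sum(rated) / len(rated) if rated else float("nan")

    valence_norms = {"sorry": 5.9, "hockey": 6.8, "awesome": 7.9, "traffic": 3.4}  # placeholder ratings
    distinctly_canadian = ["sorry", "hockey", "toque"]    # hypothetical output of the divergence step
    distinctly_american = ["awesome", "traffic", "y'all"]

    print("Canadian positivity:", mean_norm(distinctly_canadian, valence_norms))
    print("American positivity:", mean_norm(distinctly_american, valence_norms))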