Year-end celebration

As the LCL looked back on this year, we focused on the Good Things that have happened! Here is a (non-exhaustive) list of things to celebrate (even over Zoom):

  1. Research went really well (considering, well, a pandemic and the complete closure of the lab since March 2020)! For instance,
    • Ebru Evcen (with the help of Lea and Miguel) managed to collect beautiful eye-tracking data online, using people’s webcams. A revolution born out of necessity, but impressive—and very practical—nonetheless!
    • Josh Wampler got his first single-author paper accepted, in Glossa!
    • Five new papers have come out of the lab since the beginning of the academic year (Fall 2020), in Cognition and Glossa, among others, and four more are under review!
  2. We gained back an old friend! Miguel Meija, one of LCL’s first Research Assistants, came back as a volunteer. We are extremely lucky to have him, his enthusiasm, and his expertise back!
  3. Our people are off to Great Things!
    • Ruoqi Wei will be attending USC for a PhD in theoretical linguistics. Currently, she’s been working on research in courtroom pragmatics, as well as the pragmatics of verbs of veracity (it is right that, it is correct that, etc.), but during graduate school, she hopes to expand her research beyond just courtroom data and explore more formal semantics. Congratulations, Ruoqi!
    • Lea Zaric got accepted into the “Elite Master Program in Neuro-Cognitive Psychology” at Ludwig-Maximilians-Universität in Munich, Germany, as one of only 12-13 students out of nearly 1000 applicants. Having discovered a passion for cognitive science and linguistics at UCSD, Lea is now seeking to deepen her understanding of the brain and to further explore her interests in such areas of cognition as language, memory, and their dysfunction resulting from neuropathologies. Congratulations, Lea!
    • Mohit Gurumukhani will be joining the Computer Science PhD program at Cornell University, generously supported by a first-year fellowship. Mohit is broadly interested in studying computational complexity theory and pseudorandomness (who isn’t, really?). He will investigate questions such as: What are the minimal resources required for solving natural computational tasks? What is the power of randomness in speeding up computation and in low-memory computation? Can we create faster theoretical algorithms for satisfying Boolean formulas? Congratulations, Mohit!
    • Our first-year graduate student Ebru Evcen will be participating in a fully-funded four-day summer school on anaphora and presupposition at the University of Göttingen, Germany. The summer school offers both theoretical and empirical courses covering the two phenomena from cross-linguistic and cross-modular perspectives. Ebru aims to become familiar with dynamic semantics and strengthen the theoretical motivation of her research on counterfactuals by learning more about how a set of possible worlds in counterfactuals is dependent on context and evolves with the discourse. Congratulations, Ebru!

…and congratulations to all our graduating seniors!

We wish everyone a restful, relaxed, and interesting summer — thank you!

New paper in Glossa!

Wittenberg, E., Momma, S., & Kaiser, E. (2021). Demonstratives as bundlers of conceptual structure. Glossa: A Journal of General Linguistics, 6(1), 33. DOI: http://doi.org/10.5334/gjgl.917

Abstract: Pronoun resolution has long been central to psycholinguistics, but research has mostly focused on personal pronouns (“he”/“she”). However, much of linguistic reference is to events and objects, in English often using demonstrative pronouns, like “that”, and the non-personal pronoun “it”, respectively. Very little is known about potential form-specific preferences of non-personal and demonstrative pronouns and the cognitive mechanisms involved in reference using demonstratives. We present a novel analysis arguing that the bare demonstrative “that” serves a different function by bundling, and making linguistically accessible, complex conceptual structures, while the non-personal pronoun “it” has a form-specific preference to refer to noun phrases mentioned in the previous discourse. In two English self-paced reading studies, each replicated once with slight variations, we show that readers read the demonstrative more slowly throughout, independently of the frequency or complexity of the referent, as a reflection of differences in processing demonstratives vs. pronouns. These findings contribute to two distinct but connected research areas: First, they are compatible with an emergent experimental literature showing that pronominal reference to events is preferentially done with demonstratives. Second, our model of demonstratives as conceptual bundlers provides a unified framework for future research on demonstratives as operators on the interface between language and broader cognition.

Upcoming talks

Eva is giving two talks in the next few weeks:

April 13th, 2021: Linguistic theory-building from behavioral data: How far can we go?
Language and Cognition Colloquium, Harvard University, USA.

May 10th, 2021: What’s in a bit of a diminutive? Experiments in dialect psycholinguistics.
Department Colloquium Linguistics, University of Mainz, Germany.

Slides available upon request!

New paper: Hindi light verbs!

Complex predicates like the light verb constructions “take a look” or “give a call” aren’t rare in English, but they’re not the most common way to form a predicate either — usually, in English we just use simple verbs to talk about an action, like “look” or “call”; there is a simple-verb preference in English.

Many other languages, like Hindi, have the opposite preference: there, complex predicates are the preferred way to encode an action. In a new paper coming out in the Journal of South Asian Linguistics, Ashwini Vaidya and Eva Wittenberg show in a series of four experiments that, as with so many things in life, practice makes perfect: the processing costs of light verb constructions that we had found in English and German are undetectable in Hindi.

What does that mean for our linguistic theory-building? Take a look!

Upcoming talks!

Eva will be giving a number of talks in the next few months, and thanks to remote everything, they’re all online!

If you’re interested in joining, please email Eva for access information!

LCL at NACCL!

Catherine gave a talk at the 32nd North American Conference on Chinese Linguistics (NACCL-32) about her findings on verbal reduplication in Mandarin Chinese!

Arnett, C. & Wittenberg, E. (2020). Conceptual effects of verbal reduplication in Mandarin Chinese.

Abstract. Full or partial reduplication of words has long been known to induce non-truth-conditional effects on how people conceptualize a referent (e.g., Ghomeshi et al., 2004; Inkelas and Zoll, 2005), but the conditions and mechanisms of this effect are in some cases not very well understood. In this paper, we explore how verbal reduplication affects the way Mandarin speakers conceptualize events. Reduplication is frequent in Chinese, and has traditionally been analyzed as inducing a diminishing, ‘fast’ meaning: According to Melloni and Basciano (2018) and Arcodia et al. (2015), walk-walk around the pond would denote a faster and shorter event than walk around the pond. However, the meaning of reduplication may also vary across Chinese dialects (Fu and Hu, 2012; Arcodia et al., 2014), and potentially, interpretation is influenced by the emotive content of the verb (Arcodia et al., 2015). This last factor is connected to semantic specificity – frequent, basic-level words may pattern differently from expressions denoting a more complex semantic content (Rice and Bode, 1993). This work investigates the empirical validity of these claims, and proposes tentative routes towards explanations of the data pattern: 1) Dialectal differences, 2) emotive content of the events themselves, and 3) semantic specificity.

LCL at SAFAL-1 and AMLaP!

The lab has four presentations this week, one at the First South Asian Forum on the Acquisition and Processing of Language (SAFAL), and three at AMLaP!