[Grads] Laura Gwilliams on the computational architecture of speech comprehension

Malcolm Slaney malcolm at ccrma.Stanford.EDU
Sun Oct 1 15:40:56 PDT 2023


CCRMA Folks, especially those of you who are new to CCRMA,

I want to introduce you to the CCRMA Hearing Seminar.  We normally meet Fridays at 10:30AM to hear and discuss a presentation about a topic related to hearing. We have a very active audience, and that brings good speakers to CCRMA.  I don’t expect every talk will be of interest to you, but I’m sure we’ll cover some topics that matter to you.  After all, what is music without hearing?  Upcoming talks and a history of the topics we’ve covered are at
	https://ccrma.stanford.edu/hearing-seminars

The talks are listed on the CCRMA front page, or you can sign up for the mailing list by going to this URL
	https://cm-mail.stanford.edu/mailman/listinfo/hearing-seminar
The mailing list is relatively low traffic--I usually send one announcement out the week before, and then one the night before.

Do let me know if you know of somebody interesting coming to campus; I’m happy to help host a presentation on any topic related to hearing.

See you in the Seminar Room Fridays at 10:30.  (Except when an out-of-town visitor needs a different time... watch your email and the CCRMA front page.)

- Malcolm


I'm really happy to welcome Prof. Laura Gwilliams to Stanford and the Hearing Seminar.

How do our brains translate sound into language and semantic concepts?  How and where do the different types of linguistic concepts (phonetics, words, etc.) show up in the brain?  And when?  What tools can we use to decipher these parts of the brain?

Laura trained as a linguist, recently finished a neuroscience postdoc with Prof. Eddie Chang at UCSF, and is now a newly minted professor of Psychology here at Stanford. Eddie’s lab has done a lot of amazing work with many different kinds of invasive recording techniques, and through that work we are coming to understand how the human brain processes and generates language.  I remember being blown away by the breadth of neuroscience techniques that Laura was able to apply to her linguistic questions.

Who:	Prof. Laura Gwilliams (Stanford Psych)
When:	Friday, October 6 at 10:30AM
What: 	Computational architecture of speech comprehension
Where:	CCRMA Seminar Room, Top Floor of the Knoll at Stanford (behind the elevators)
Why:	Understanding how we understand speech is one of the hardest and best problems in the auditory brain

Covid:  Covid is a bit more common now than it was, and we do meet in a rather small room for a rather vigorous discussion.  I know we’re all tired of the pandemic, but if you are worried, this might be a good time to bring your mask.

Welcome back to CCRMA for the first Hearing Seminar of the quarter. We’ve got some great talks coming up and I’m looking forward to seeing you all again.  Our upcoming schedule is online at
	https://ccrma.stanford.edu/hearing-seminars

— Malcolm

Computational architecture of speech comprehension

Laura Gwilliams
Stanford University

Humans understand speech with such speed and accuracy that it belies the complexity of transforming sound into meaning. The goal of my research is to develop a theoretically grounded, biologically constrained, and computationally explicit account of how the human brain achieves this feat. In my talk, I will present a series of studies that examine neural responses at different spatial scales: from population ensembles, using magnetoencephalography and electrocorticography, to the encoding of speech properties in individual neurons across the cortical depth, using Neuropixels probes in humans. The results provide insight into (i) what auditory and linguistic representations serve to bridge between sound and meaning; (ii) what operations reconcile auditory input speed with neural processing time; and (iii) how information at different timescales is nested, in time and in space, to allow information exchange across hierarchical structures. My work showcases the utility of combining cognitive science, machine learning, and neuroscience for developing neurally constrained computational models of spoken language understanding.


Laura Gwilliams received her BA in Linguistics from Cardiff University (UK) and a Master’s degree in Cognitive Neuroscience of Language from the BCBL (Basque Country, Spain). Laura then joined NYU, first as a research assistant and then as a PhD student, working with David Poeppel, Alec Marantz, and Liina Pylkkanen. There she used magnetoencephalography (MEG) to study the neural computations underlying dynamic speech understanding. Laura’s dissertation, “Towards a mechanistic account of speech comprehension,” combines insight from theoretical linguistics, neuroscience, and machine learning, and has received recognition from Meta, the William Orr Dingwall Foundation, the Martin Braine Fellowship, the Society for Neuroscience, and the Society for the Neurobiology of Language. As a postdoctoral scholar with Eddie Chang, Laura added intracranial EEG and single-unit recordings to her repertoire of techniques. She used her time in the Chang Lab to understand how the spiking of single neurons across the cortical layers encodes speech properties in humans. Now, as director of the Gwilliams Laboratory of Speech Neuroscience (the GLySN Lab) at Stanford University, her group aims to understand the neural representations and computations that give rise to successful speech comprehension, using a range of recording methodologies and analytical techniques.

