Friday 3 November 2017, 9 a.m. – 6 p.m., London Science Museum
Register now – this link will take you to the Oxford University web store
The registration fee is £30 and includes coffee breaks, lunch, and a wine reception.
Sound Talking is a one-day event at the London Science Museum that seeks to explore the complex relationships between language and sound, both historically and in the present day. It aims to identify the perspectives and methodologies of current research in the ever-widening field of sound studies, and to locate productive interactions between disciplines.
Bringing together audio engineers, psychiatrists, linguists, musicologists, and historians of literature and medicine, we will be asking questions about sound as a point of linguistic engagement. We will consider the terminology used to discuss sound, the invention of words that capture sonic experience, and the use and manipulation of sound to emulate linguistic descriptions. Talks will address singing voice research, the history of onomatopoeias, new music production tools, auditory neuroscience, sounds in literature, and the sounds of the insane asylum.
|9.00 – 9.20||ARRIVAL & REGISTRATION|
|9.20 – 9.30||WELCOME & INTRODUCTION|
|9.30 – 10.15||Maria Chait, UCL Ear Institute||The auditory system as the brain’s early warning system|
|10.15 – 11.00||Jonathan Andrews, Newcastle University||Bedlam as soundscape: Noise at early modern Bethlem|
|11.00 – 11.30||COFFEE BREAK|
|11.30 – 12.15||Melissa Dickson, University of Oxford||Sounding out the body: The nineteenth-century stethoscope and the language of the heart|
|12.15 – 1.00||Mariana Lopez, University of York||The language of sound: Creating accessible film experiences for visually impaired audiences|
|1.00 – 1.45||LUNCH|
|1.45 – 2.15||Aleks Kolkowski, Recording Angels||Historic artifact display|
|2.15 – 3.00||David M. Howard, Royal Holloway University of London||The sound of voice and voice of sound|
|3.00 – 3.45||Brecht De Man, Queen Mary University of London||“A bit more ooomph”: The language of music production|
|3.45 – 4.00||BREAK|
|4.00 – 4.45||Mandy Parnell, Black Saloon Studios||Artistic direction: The various languages|
|4.45 – 5.30||Trevor Cox, Salford University||Categories for quotidian sounds|
|5.30||CLOSING REMARKS & WINE RECEPTION|
Dana Research Centre, London Science Museum, 165 Queen’s Gate, Kensington, London SW7 5HD
The Dana Library and Research Centre is part of the Science Museum. The entrance to the centre is at 165 Queen’s Gate, South Kensington, London SW7 5HD.
The nearest tube stations are Gloucester Road, an 8-minute walk from the Centre, and South Kensington, a 9-minute walk from the Centre. Both stations are on the District, Circle, and Piccadilly lines.
Check Transport for London for the latest tube status updates.
Bus routes 14, 49, 70, 74, 345, 360, 414, 430 and C1 stop outside South Kensington Underground Station.
Bus routes 9, 10, 52, 452 and 70 stop outside the Royal Albert Hall on Kensington Gore.
The Science Museum does not have car parking facilities and local parking is very limited.
The nearest pay and display car parking is in Prince Consort Road and Queen’s Gate. There is no visitor parking in Exhibition Road.
The Royal Borough of Kensington and Chelsea has more information about parking near the Museum.
A small number of disabled parking spaces are available outside the Museum on Exhibition Road.
Blue Badge holders may park here for four hours between 08.30 and 18.30.
The auditory system as the brain’s early warning system
In my laboratory we use behavioural, brain imaging and eye tracking methods to understand auditory perception and how the brain analyses and represents the dynamics of our surrounding acoustic environment. My presentation will focus on our efforts to uncover the auditory system’s role as the brain’s ‘early warning system’. Hearing is sensitive to a wider space than the other senses (above, below, behind, in the dark…) and is therefore often hypothesized to serve as a monitor – continuously scanning the unfolding acoustic scene for behaviourally-relevant events even when attention is focused elsewhere. Beyond basic science, understanding this capacity of the auditory system is central for designing human-computer interfaces and is crucial for evaluating and quantifying the perceptual consequences of aging and hearing impairment.
Maria Chait is Professor of Auditory Cognitive Neuroscience at the Ear Institute, UCL. She moved to UCL in 2007 as a Marie Curie research fellow, following a short post-doc at the École Normale Supérieure, Paris. Her PhD research (2006) was conducted in the Neuroscience and Cognitive Science program at the University of Maryland, College Park, USA, under the supervision of Jonathan Simon and David Poeppel. She has an undergraduate background in Computer Science, Economics, and East Asian Studies.
Professor Chait is co-director of the Sensory Systems, Technologies and Therapies (SenSyT) PhD program and the Dual Masters in Brain and Mind Sciences, and a member of the UCL Neuroscience domain steering group.
Bedlam as soundscape: Noise at early modern Bethlem
This paper considers auditory factors relevant to comprehending the historical representation and experience of ‘Bedlam’, or Bethle[he]m Hospital for Lunatics in the early modern era, concentrating primarily on the period ca. 1676–1815. Taking inspiration from recent historical, sociological, musicological and ethnographic work on institutional soundscapes, this paper considers what Bethlem’s environment sounded like, and what wider medical, social and cultural meanings were chiefly associated with that sound. As the English nation’s first and, for centuries, only specialist hospital housing the insane, and moreover one rather spectacularly exposed to a heterogeneous mass of tourists and visitors, Bethlem evidently had a significant influence not only on how the general public saw the insane and thought about madness, but on what they heard, expected to hear or imagined they heard of madness, and how more precisely they heard it. I examine the semantic formats used to depict Bedlam noise, exploring its fundamental phenomenological character, representational meaning and impact, not just from the perspective of the spectating and wider public but also from that of patients themselves. In addressing the preoccupation of a range of seventeenth- and eighteenth-century texts with the allegedly discordant din and cacophony of Bedlam and of the insane it housed, this paper elucidates the origins of the etymological, metaphorical and cultural association of Bedlam with chaos, uproar and noise. It seeks to shed light on, and grant audibility to, elements of phenomenological reality and also of hyperbole and cultural construction in this enduring and powerful association over centuries of the hospital’s history.
Jonathan Andrews is a Reader in the History of Psychiatry in the School of History, Classics and Archaeology at Newcastle University. His research interests reside primarily in the history of mental illness, learning disabilities and the history of psychiatry in Britain, from roughly 1600 to 1914. He has published three monographs in the field, most recently (with Andy Scull) Customers and Patrons of the Mad Trade (2003) and Undertaker of the Mind (2001), and before that (with Roy Porter et al.) The History of Bethlem (1997). He has published numerous edited collections and articles, including most recently two special issues of History of Psychiatry entitled “Lunacy’s Last Rites” (2012) and “Histories of Asylums, Insanity and Psychiatry in Scotland” (with Chris Philo, 2017). Since 2012 he has been focusing on a Leverhulme-funded research project on “Fashionable Diseases: Medicine, Literature and Culture, ca. 1660–1832”.
Sounding out the body: The nineteenth-century stethoscope and the language of the heart
René Laennec’s invention of the stethoscope in 1816 heralded a new era in modern clinical diagnosis. Through this simple wooden tube, physicians might know, understand, and diagnose the body through the medium of sound. However, the relationships between abstract bodily sounds and diseases of the human body were by no means self-evident. Laennec sought to illuminate these relationships in a long and painstaking process of describing and matching the sounds he detected during his assessment of patients with the physical changes in diseased organs that could be observed during autopsy. Through prolonged trials, he gradually developed a new vocabulary in order to systematise and describe as precisely as possible each of the hisses, buzzes, and pulses emanating from his patients’ bodies, and assign to each a specific pathology. This metalanguage, although intended to provide an independently verifiable framework for acoustic signals representing pathological phenomena, nonetheless drew upon subjective, even rather idiosyncratic, metaphors, musical references, and the broader cultural imaginary. Art and science, or music and medicine, were thus, from the outset, intimately intertwined in the development of the stethoscope.
Melissa Dickson completed a PhD in English at King’s College London in 2013. She is currently a Postdoctoral Researcher on ‘Diseases of Modern Life’, an ERC-funded project based at St Anne’s College, Oxford, which investigates nineteenth-century cultural, literary, and medical understandings of stress, overwork, and other disorders associated in the period with the problems of modernity. Her major contribution to this project is a monograph on explorations of the body’s physiological and psychological responses to sound and music in the nineteenth century. She has recently published articles on the interconnections between Victorian literature, science, and material culture, and is currently co-editing a volume on the relationship between medicine and modernity in the nineteenth century, which is forthcoming with Manchester University Press. In January 2018, Melissa will take up a new role as Lecturer in Victorian Literature at the University of Birmingham.
The language of sound: Creating accessible film experiences for visually impaired audiences
The AHRC-funded Enhancing Audio Description project seeks to provide accessible audio-visual experiences to visually impaired audiences by using sound design strategies.
The film grammars we are familiar with have developed throughout film history, but these languages matured with sighted audiences in mind, on the assumption that seeing is more important than hearing. This talk will challenge such assumptions by demonstrating how the use of sound effects and first-person narration, as well as the breaking of sound-mixing rules, can allow us to create accessible versions of films that are true to the filmmaker’s conception.
Mariana has a background in music, sound design and acoustics. Her MA dissertation (University of York) focused on exploring the creation of a new format of sonic art entitled ‘audio film’ that may be considered as an alternative to Audio Description for visually impaired audiences. In 2013 she completed her PhD at the University of York on the importance of virtual acoustics to further our understanding of medieval drama.
In 2014 she joined the Cultures of the Digital Economy Research Institute (CoDE, Anglia Ruskin University) as a Senior Research Fellow. In 2016 she moved to York to start a position as Lecturer in Sound Production and Post Production at the Department of Theatre, Film and Television. She is also active in the field of sound design, having worked on a number of short films, theatre productions and installations. She is currently the Vice-chair of the Audio Engineering Society British section.
David M. Howard
The sound of voice and voice of sound
The human voice has evolved (in conjunction with our hearing system) to enable us to communicate complex ideas via speech and singing, even in the presence of competing acoustic noise. Whilst speech and singing are in so many diverse ways vital and special to our life experience, they are basically a series of acoustic variations that are perceived as voice. This lecture will look at the sound of voice, some of the descriptors we use for it, the special place of voice in our world and the voice of sound itself. To understand the voice of the lecture, just bring a pair of ears.
Professor David Howard, Founding Head of the Department of Electronic Engineering at Royal Holloway University of London, is building the department from scratch (the first intake arrived in September 2017) under the strapline “Creativity first, science second” for the creation of excellent engineering solutions. His research focuses on the analysis and synthesis of singing, speech and music. Specific areas of interest include digital speech and singing synthesis based on replicating virtual vocal tracts acquired from magnetic resonance imaging (MRI), voice pitch analyses for singing development, detection of babbling in infants to encourage speech learning, the Vocal Tract Organ (a keyboard- or human-gesture-controlled instrument that uses 3D-printed vocal tracts on loudspeakers to recreate human vowel sounds), the theoretical and practical analysis of tuning and pitch drift in a cappella (unaccompanied) choral singing, and a better understanding of the thirteenth-century scientific writings of Robert Grosseteste.
David was elected Fellow of the Royal Academy of Engineering (FREng) in 2016, and he is a Chartered Engineer (CEng), Fellow of the Institution of Engineering and Technology (FIET), and Fellow of the Institute of Acoustics (FIOA).
Brecht De Man
“A bit more ooomph”: The language of music production
Music production software has come a long way since it first entered the recording studio, offering ever increasing computing power and higher numbers of simultaneously playing sources. Yet, manufacturers overwhelmingly cater to those who desire an emulation of a 1980s recording studio, requiring expert knowledge from the operator. Cautious attempts at more intuitive interfaces – including less esoteric knob labels – have emerged in recent years, though a more radical rethinking of the sound engineer’s workflow has yet to occur.
The main challenge in providing more accessible control over the recording’s sonic signature is the lack of understanding of the complex process of multi-source post-production. The talk will provide an overview of recent work where differently processed versions of the same source material were compared both analytically and subjectively, connecting digital signals to perceptual constructs. Supported by audio examples and associated descriptions, it demonstrates several approaches to defining sonic terms from ‘air’ to ‘zing’.
Brecht De Man is a postdoctoral researcher at the Centre for Digital Music at Queen Mary University of London. He completed a PhD at the same institution, during which he published and presented research on the perception of recording and mixing engineering, the development of intelligent audio effects, and the analysis of music production practices. He received an MSc in Electronic Engineering from the University of Ghent, Belgium, in 2012. Since 2014, Brecht has been working closely with Yamaha Corporation on the topic of Semantic Mixing.
Brecht is the organiser of many events and serves on several committees, including those of the Workshop on Intelligent Music Production, and a number of technical committees of the Audio Engineering Society. More info about past and current projects can be found on www.brechtdeman.com.
Artistic Direction: The Various Languages
When working on a project, it is important for the engineer to listen to everyone’s opinions, regardless of their technical ability. This can require the engineer to speak on many different levels, finding different ways of communicating that allow them to realise the vision of the producer or artist. In this talk, Mandy draws on her 30+ years of experience working with artists including Björk, Feist, Tom Jones, Aphex Twin and Brian Eno, each with unique approaches to creative input, whether through technical conversation or by more abstract means.
Mandy Parnell became interested in recorded music at the age of 5, listening to records on a portable Dansette player – the iPod of the time. She studied music and music technology through her school & college years, trained and worked in recording studios until landing an internship, which led to her becoming a world renowned mastering engineer. Mandy then decided to launch her own facility – Black Saloon Studios.
Mandy Parnell’s 33 years of experience have allowed her to discover and develop her philosophies in analogue and digital audio, while working with an amazing array of artists including Björk, Feist, The xx, Herbert, Frightened Rabbit, Sigur Rós and Brian Eno. Mandy’s unique style as a mastering engineer has earned her respect from all areas of the industry. She has mastered countless records that have achieved silver, gold, platinum and Grammy status around the world, and this year she won the prestigious MPG Mastering Engineer of the Year award for the third time.
More recently, Mandy has worked with Annie Lennox, Jungle, Philip Selway, Glass Animals and Ghostpoet, and has mastered Syro, Aphex Twin’s first release in 13 years, as well as Björk’s critically acclaimed Vulnicura, which has since toured the globe as an interactive virtual reality exhibition. She has also been featured by the BBC, Resolution magazine, Sound On Sound, Prosound News, NME and Audio Musica & Technologia.
As a firm believer in educating the next generation of producers and engineers, Mandy frequently lectures on mastering and the music industry at universities, colleges and organizations.
Categories for quotidian sounds
What words do people use to describe everyday sounds, and what does that tell us about how people perceive sound? Specialist vocabularies are often used by expert groups such as experimental psychologists, musicologists and sound designers, but what about the general public? Categorization is a core cognitive process that helps us make sense of the world and react appropriately. Classification is also central to much science: chemistry has the periodic table and biology has numerous taxonomies. How do people categorise everyday sounds, what words do they use to label the groups, and what does this tell us about listening? This has been examined using a series of online sorting and category-labelling experiments that elicit, rather than prescribe, descriptive words. When asked to sort sounds within soundscapes, 81% of participants labelled groups according to the identified sound source, 15% described the acoustic features of the signals, and 4% described affect judgements. Two further sorting tasks, one based on man-made sounds and the other on nature sounds, found a similar dominance of labelling driven by source identification. Mammalian hearing first evolved to identify threats and prey and for simple conspecific signalling, which would appear to explain the dominance of source identification. However, other sorting tasks produced different categorisation strategies. For dog vocalisations, affect judgements were most commonly used, suggesting possible anthropomorphization or the appropriation of categorisation strategies from human vocalisations. For engine sounds, acoustic features underpinned the most popular approach. A final sorting task with onomatopoeia found a mixture of source and signal features being used by participants. The results show there are distinct and different strategies for categorizing and labelling quotidian sounds.
Trevor Cox is a Professor of Acoustic Engineering at the University of Salford, where he carries out research, teaching and commercial activities in acoustic engineering, focusing on room acoustics, signal processing and perception. Professor Cox was an EPSRC Senior Media Fellow, has presented on BBC radio and wrote Sonic Wonderland (Bodley Head). He is a former President of the Institute of Acoustics (IOA), and was awarded the prestigious Tyndall Award by the IOA as well as their award for Promoting Acoustics to the Public.
Historic artifact display (Aleks Kolkowski)
Participants will be invited to make acoustic recordings on wax cylinders, hear them played back through horns and listening tubes; strike tuning forks and listen to beat frequencies; bow a monochord and other scientific instruments used for the study of acoustics.
Aleksander Kolkowski is a composer, violinist and researcher who uses historical sound recording and reproduction apparatus and obsolete media to make contemporary mechanical-acoustic music. His numerous international projects in this field have combined wax cylinder phonographs, gramophones and vintage disc recording machines together with live musicians. His performances and installations often emphasise archaic techniques of sound amplification and modes of listening. After completing a PhD at Brunel University in 2012, he was appointed as the first sound artist-in-residence at the Science Museum, London, and has since held research associateships at the Royal College of Music and the Science Museum. In 2016, he was a composer-in-residence at the British Library Sound Archive, and his latest installation work, Boy Wireless, is featured as part of the Library’s current Listen: 140 Years of Recorded Sound exhibition.
Sound Talking is a workshop organised by Dr Melissa Dickson, a postdoctoral researcher on the Diseases of Modern Life project based at St Anne’s College, Oxford, and Dr Brecht De Man, a postdoctoral researcher at the Centre for Digital Music, Queen Mary University of London.