The LIVELab Facility at McMaster University in Hamilton, Ontario
The following Q&A is about the LIVELab facility at McMaster University in Hamilton, Ontario. Answers were supplied by Dan Bosnyak, PhD, technical director of the MIMM LIVELab, and Laurel Trainor, PhD, director of the LIVELab at the McMaster Institute for Music and the Mind and professor of psychology, neuroscience & behavior at McMaster University.
The facility includes a number of instrumented “audience” seats that allow many subjects to be assessed simultaneously, with a wide range of data compiled for any number of projects and research studies. Reverberation times, signal-to-noise ratios, and a wide range of music and speech stimuli can be controlled with the flip of a software switch on a wireless tablet.
1. Can you give me some background information about the LIVELab? What is its purpose and how did it get started?
The LIVE (Large Interactive Virtual Environment) Lab is a unique 106-seat Research Performance Hall designed to investigate the experience of music, dance, multimedia presentations, and human interaction. It is one of the few research centres built around a purpose-built concert hall specifically designed for research. As we designed the infrastructure of the lab, we thought of it as both a working small concert venue and a working neuroscience lab.

Provisions were made to allow us to collect massive amounts of data from audience members and performers while a concert takes place in a realistic setting: data such as brain waves (EEG), heart rate, and various other physiological measures that let us gauge the impact of a performance on an audience member, or the cognitive effort of a performer. The entire volume of the lab is covered by an infrared, marker-based motion capture system, allowing us to measure the movement of performers or audience members down to the millimetre level.

We also paid a great deal of attention to the sound quality of the room. The LIVELab is constructed to very strict noise-isolation standards, allowing us to study the impact of background noise at levels as low as NC10. The Active Acoustics system allows us to change the reverberation time digitally to recreate any type of environment, such as a cathedral, concert hall, or classroom. A wireless tablet system allows us to collect feedback from the audience as a performance unfolds, and instruments such as a KEMAR acoustic mannequin let us capture sound from the perspective of a hearing-aid user seated in the audience. Many of our researchers are interested in hearing loss, and these capabilities let us study it in more ecologically valid environments.
2. I recently visited the LIVELab web site (http://livelab.mcmaster.ca) but I could not find any information about the types of ambient noise that can be generated within the LIVELab. Can you tell me about its capabilities in this respect?
The Meyer Constellation Active Acoustics system has 75 separately addressable speakers and subwoofers distributed throughout the space. We can take in up to 64 channels of audio and route them to any speaker or combination of speakers while simultaneously maintaining the simulated acoustics. This allows us to create complex, immersive soundscapes.
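To illustrate the routing idea in general terms (a hypothetical sketch of the concept, not the Constellation system's actual software or API), a gain matrix can map a block of input channels onto an arbitrary combination of loudspeaker feeds:

```python
import numpy as np

# Hypothetical illustration only (not the Meyer Constellation API): routing a block
# of multichannel audio to a larger set of loudspeakers with a gain matrix.
N_INPUTS = 64      # incoming audio channels
N_SPEAKERS = 75    # separately addressable speakers and subwoofers

# gains[s, i] is the level of input channel i sent to speaker s; several non-zero
# entries in a column send that input to a combination of speakers.
gains = np.zeros((N_SPEAKERS, N_INPUTS))
gains[10, 0] = 1.0          # input 0 -> speaker 10 at full level
gains[11, 0] = 0.5          # ...and speaker 11 at roughly -6 dB
gains[40:44, 1] = 0.7       # input 1 spread across speakers 40-43

def route(block: np.ndarray) -> np.ndarray:
    """Mix one block of input audio (samples x N_INPUTS) into speaker feeds (samples x N_SPEAKERS)."""
    return block @ gains.T

dry = np.random.randn(1024, N_INPUTS)   # placeholder audio block
speaker_feeds = route(dry)               # shape (1024, 75)
```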
3. I have heard LIVELab's acoustic characteristics described as "near anechoic quality." Is there an actual specification for the lab's performance?
The interior of the lab is about 12 m by 15 m with a volume of around 450 m³. With the Active Acoustics disabled, we have a measured RT60 of less than 500 ms across the whole spectrum, which is very dead for a room this size. Coupled with the extremely low background noise (NC10), it certainly gives you the impression of being in an anechoic space. We achieved this with a lot of acoustic paneling: 4 inches of absorption on the entire ceiling and 2 inches almost everywhere else. The ability to control the acoustics of the room depends on having a very good base to work with; you can only add noise and reverberation to a room electronically, you cannot remove them.
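As a rough back-of-the-envelope check (our own illustration, not a figure supplied by the lab), Sabine's formula relates reverberation time to room volume and total absorption; the stated volume and RT60 imply on the order of 145 m² of equivalent absorption area, plausible for a heavily panelled room whose ceiling alone is about 180 m²:

```latex
% Sabine's approximation: T_60 ≈ 0.161 V / A  (V in m^3, A in m^2 of equivalent absorption)
\[
A \;\approx\; \frac{0.161\, V}{T_{60}}
\;=\; \frac{0.161 \times 450\ \mathrm{m^3}}{0.5\ \mathrm{s}}
\;\approx\; 145\ \mathrm{m^2}
\]
```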
4. I believe that about 30 of the 100 seats in the LIVELab auditorium are "live" seats, which can monitor and record certain physiological parameters of individuals sitting in these seats. Exactly which parameters can be monitored and how do you plan to use this information in hearing loss research?
We can measure a number of things from the audience members. We have the ability to monitor up to 8 channels of brain-wave (EEG) data from each person, as well as heart rate, heart rate variability, breathing rate, and galvanic skin response. These measures give us different ways of looking at things generally related to 'arousal', which in turn is related to cognitive effort. If a group of participants with hearing loss are having a difficult time comprehending a speech or music signal in a given acoustic environment, we expect to measure a markedly different set of physiological responses from them than in a more congenial acoustic environment. Of course this could be done without the LIVELab, but because we can switch between environments more or less instantly, we have much more control than we would if we had to move participants from one environment to another.
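To make one of these measures concrete, here is a minimal sketch (our own illustration, not the LIVELab's actual analysis pipeline) of how heart rate and a common heart-rate-variability metric, RMSSD, can be derived from the inter-beat (R-R) intervals a heart-rate sensor provides:

```python
import numpy as np

def heart_metrics(rr_intervals_s: np.ndarray) -> dict:
    """rr_intervals_s: successive R-R intervals in seconds, e.g. from an ECG beat detector."""
    heart_rate_bpm = 60.0 / rr_intervals_s.mean()
    successive_diffs = np.diff(rr_intervals_s)
    # RMSSD: root mean square of successive differences, a standard short-term HRV metric
    rmssd_ms = np.sqrt(np.mean(successive_diffs ** 2)) * 1000.0
    return {"heart_rate_bpm": heart_rate_bpm, "rmssd_ms": rmssd_ms}

# Example: a relaxed listener vs. one working harder to follow degraded speech
calm = np.array([0.85, 0.88, 0.83, 0.90, 0.86])       # slower, more variable heartbeat
effortful = np.array([0.70, 0.71, 0.70, 0.69, 0.70])  # faster, more regular heartbeat
print(heart_metrics(calm))
print(heart_metrics(effortful))
```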
5. Is LIVELab strictly a university research facility, or can external institutions gain access to it for their own research?
We are always open to potential research partnerships with industry. We have a large cross-disciplinary team of experts to whom we can direct external partners, either as collaborators or simply as consultants. We are also open to external institutions using the facility for private purposes, such as market research or new-product testing where the results are not published. Interested parties can quickly obtain data on a large sample of subjects.
6. The LIVELab web site says that the magnitude and duration of reverberation in the LIVELab auditorium can be electronically adjusted from "nearly dead" to that of a "huge cathedral". How do you see this feature being used to the benefit of hard-of-hearing people?
We know that many hearing aid algorithms work well in the quiet of an audiologist's office but tend to be less acceptable in a more realistic auditory environment. Background noise, of course, is a big issue, and we can manipulate that in our lab; we can also manipulate the amount of reverberation, a characteristic of real rooms that would be difficult to reproduce in a small lab setting. By studying things like musical enjoyment during a concert or speech comprehension during a dramatic play, with different reverberation times applied at different points, we can home in on the ideal hearing aid settings for those situations. We can also learn how rooms such as theatres and concert halls should be set up in the first place to optimize the acoustics for the growing number of people with mild to moderate hearing loss in our aging population.
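To illustrate the general principle of dialling reverberation up digitally (a minimal sketch under our own assumptions, not the Constellation system's actual processing), a dry recording can be convolved with a synthetic impulse response whose exponential decay is set from the desired RT60:

```python
import numpy as np
from scipy.signal import fftconvolve

def synthetic_reverb(dry: np.ndarray, fs: int, rt60_s: float) -> np.ndarray:
    """Convolve a dry signal with exponentially decaying noise to simulate a room with the given RT60."""
    n = int(fs * rt60_s)
    t = np.arange(n) / fs
    decay = np.exp(-6.91 * t / rt60_s)   # amplitude falls by 60 dB (factor 10^-3) over rt60_s
    ir = np.random.randn(n) * decay       # diffuse-tail impulse response
    wet = fftconvolve(dry, ir)[: len(dry)]
    return wet / np.max(np.abs(wet))      # normalize to avoid clipping

fs = 16000
dry = np.random.randn(fs)                 # placeholder for a 1 s speech recording
classroom_like = synthetic_reverb(dry, fs, rt60_s=0.5)
cathedral_like = synthetic_reverb(dry, fs, rt60_s=4.0)
```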