Positions Available

We are pleased to announce the following positions in the Shared Reality Lab at McGill University. Instructions to apply can be found below.

Post-doctoral fellows or research associates in Computational Neuroscience, AI and Acoustics

Join our "bionic ears" project and help us tackle the cocktail party problem! We are recruiting research associates or post-doctoral fellows to conduct research on improving the intelligibility of speech in noisy environments, using cognitive load measurements obtained from EEG indicators. We expect this approach to improve the sound quality delivered by hearing aids, which at present cannot distinguish between important sounds to enhance and distracting noise to suppress.

The cocktail party problem poses a challenge to hearing aids: because a device cannot tell which sounds matter to its user, it enhances speech and noise alike. We aim to solve this problem by developing a miniaturized system that allows head-worn devices to enhance a user's desired sounds and improve perceived speech in environments with multiple competing sounds, without manual intervention. To do so, the system must estimate the mental effort the user is expending to follow the desired sound, that is, their cognitive load, which we posit can be done purely from brain signals. The audio parameters can then be tuned automatically to lower that load, easing listening effort and improving speech intelligibility. The results of this project will benefit anyone who wants to improve speech intelligibility in noisy environments, but will be of particular value to the hard-of-hearing population, who need the support of hearing assistance devices.

The project involves a fusion of technologies, including brain and bio-signal analysis, advanced acoustic processing, artificial intelligence, and embedded system and hardware design. The system first decodes the user's cognitive load from brain signals, such as EEG, and bio-signals, using only unobtrusive sensors placed in and around the ears, as would be expected for a hearing assistance device. A complementary acoustic technology then tunes the audio parameters to improve speech intelligibility and delivers the resulting sounds, enhancing the target sound relative to the noise. Performance will be evaluated using objective and subjective analysis techniques based on signal-to-noise ratio (SNR), delay, user satisfaction, and other measures. From this research, we want to understand how the two elements can be combined, with EEG markers of cognitive overload used to tune the rendered sounds and thereby increase speech intelligibility in a real-time, closed-loop system.
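To illustrate the closed-loop idea described above, the following minimal Python sketch shows one way an EEG-derived cognitive-load estimate could drive the noise-suppression and speech-gain settings of a head-worn device. It is not project code: the function names, features, thresholds, and parameter ranges are placeholders chosen purely for illustration.

    # Illustrative sketch only: a simplified closed loop in which an EEG-derived
    # cognitive-load estimate adjusts hypothetical audio parameters each window.
    import numpy as np

    def estimate_cognitive_load(eeg_window: np.ndarray) -> float:
        # Placeholder: map a window of ear-EEG samples to a load score in [0, 1].
        # A real system would use a trained model on spectral/temporal features.
        feature = float(np.mean(eeg_window ** 2))      # stand-in for a band-power feature
        return float(np.clip(feature / 10.0, 0.0, 1.0))

    def tune_audio_parameters(load: float, params: dict) -> dict:
        # Placeholder control rule: increase noise suppression and target-speech
        # gain when estimated listening effort is high; relax them when it is low.
        if load > 0.7:
            params["noise_suppression_db"] = min(params["noise_suppression_db"] + 1.0, 20.0)
            params["speech_gain_db"] = min(params["speech_gain_db"] + 0.5, 10.0)
        elif load < 0.3:
            params["noise_suppression_db"] = max(params["noise_suppression_db"] - 1.0, 0.0)
        return params

    params = {"noise_suppression_db": 6.0, "speech_gain_db": 0.0}
    for _ in range(100):                       # one iteration per EEG analysis window
        eeg_window = np.random.randn(256)      # stand-in for samples from in-ear sensors
        load = estimate_cognitive_load(eeg_window)
        params = tune_audio_parameters(load, params)

In practice, the project would replace the placeholder estimator with a trained model of cognitive load and evaluate the loop against the SNR, delay, and user-satisfaction measures mentioned above.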

We are recruiting:

1. Computational Neuroscientist: primary activities involve collecting participant data for electroencephalography (EEG) studies and developing artificial intelligence models to process and analyze these data.

2. AI and Acoustics researcher: primary activities involve working with machine learning and deep learning models for processing acoustic time-series data, as well as signal analysis and interpretation of EEG data.

Candidates in both positions will be expected to collaborate and participate actively in research dissemination, including the preparation of research publications. A travel budget is available to support presentation of your work in top-tier venues. This project is being conducted with industry partner AAVAA, and is supported by an NSERC Alliance and MEDTEQ Partenar-IA grant.

Graduate students and post-docs in ADvanced AIRspace Usability (ADAIR)

Aviation is rapidly changing: in the next 15 years, the number of air passengers and the number of aircraft will double compared to their 2017 levels. New flight deck technologies are urgently required to sustain this expected growth. Polytechnique Montreal, McGill University, and Ryerson University have joined together to design the flight deck of the future. We work in human factors, user-centered design, mixed reality, and avionics systems, and you can be part of that team.

Research activities include:

Further details are available here.


How to Apply

To apply for any of these positions, please email Jeremy Cooperstock, preferably including:

  1. A brief letter of application, describing your qualifications and experience relevant to the position of interest, along with your dates of availability.
  2. Detailed CV with links to online papers and/or project portfolios.
  3. Three (3) reference letters (sent separately).

The positions are available immediately, with a reasonably flexible start date. Informal inquiries are welcome.

About us: The Shared Reality Lab conducts research in audio, video, and haptic technologies, building systems that leverage their capabilities to facilitate and enrich both human-computer and computer-mediated human-human interaction. The lab is part of the Centre for Intelligent Machines and Department of Electrical and Computer Engineering of McGill University. McGill, one of Canada's most prestigious universities, is located in Montreal, a top city to live in, especially for students.

McGill University is committed to equity in employment and diversity. It welcomes applications from women, Aboriginal persons, persons with disabilities, ethnic minorities, persons of minority sexual orientation or gender identity, visible minorities, and others who may contribute to further diversification. In Quebec, "Postdoctoral Fellow" is a regulated category of trainee. Notably, a postdoctoral candidate must be within five years of graduating with a Ph.D. For more information, please consult www.mcgill.ca/gps/postdocs/fellows.