Dr Emily Corrigan-Kavanagh

Research Fellow in Design Research

Emily Corrigan-Kavanagh is a research fellow in design research at the Centre for Vision, Speech, and Signal Processing (CVSSP) at the University of Surrey. Emily is currently working on the EPSRC-funded fellowship "AI for Sound", awarded to Prof Mark Plumbley.

Prior to this, Emily was a research fellow on the EPSRC-funded "Next Generation Paper" project at Surrey, exploring new augmented paper technologies for travel and tourism, and completed a fully funded PhD on designing for home happiness at Loughborough University.

Her main research interests include design for happiness and wellbeing, social and sustainable design, applied AI for societal good, creative research methods, and participatory approaches.

Exploring Responsible Sound Sensing Technology for Improving Urban Living

Following the successful application of AI and machine learning technologies to the recognition of speech and images, computer systems can now automatically analyse and recognise everyday real-world sound scenes and events. This new AI technology presents promising applications in environmental sound sensing and urban living. It could be used to monitor and improve the soundscapes experienced by people in towns and cities, helping to identify strategies for enhancing quality of life, such as through future urban planning and development.

Nevertheless, current use cases are often unrealistic, lacking appropriate end-user feedback and engagement during the development of such technologies. In response, this research employs a series of participatory approaches with stakeholders and end-users, such as world cafés and soundwalks, to explore how people feel about sounds in their locality, how they would like to change them, and what kinds of technology could facilitate this responsibly.

This presentation reports on the "virtual world cafés" we ran to engage with local residents while adhering to UK national lockdown restrictions. A world café is typically set in a café-style environment where participants, in small groups of up to five, take part in three 20-minute discussions on a posed question, ending with a harvest session where everyone collates the conclusions drawn from their conversations. Our virtual world cafés followed this format using a video-conferencing tool, with virtual Breakout Rooms dividing participants on the same call into small conversation groups to explore how their local soundscapes could be improved through AI for sound.

This presentation provides highlights from these sessions and plans for forthcoming research.
