Guest Lectures
Occasionally, courses offered at CCRMA bring in a guest lecturer. Often, those lectures are open not only to CCRMA students, staff, faculty, and researchers, but also to the public. Such events are listed below.
Recent Guest Lectures
Total variation in popular rap vocals from 2009 to 2023
Date: Mon, 07/22/2024, 3:00pm - 4:00pm
Location: CCRMA Classroom
Event Type: Guest Lecture
FREE. Open to the Public.

Harmonicity and Inharmonicity in Instruments of the Percussion/Resonance Family in Interaction with Electronics
Date: Thu, 07/11/2024, 1:00pm - 2:30pm
Location: CCRMA Stage
Event Type: Guest Lecture
FREE. Open to the Public.

NeuralNote: An Audio-to-MIDI Plugin Using Machine Learning
Date: Tue, 05/28/2024, 4:30pm - 6:50pm
Location: CCRMA Classroom [Knoll 217] and Zoom
Event Type: Guest Lecture
FREE. Open to the Public.

Abstract: NeuralNote is an open-source audio-to-MIDI VST/AU plugin that uses machine learning for accurate audio-to-MIDI transcription. This talk will begin with an in-depth look at BasicPitch, the machine learning model from Spotify that powers NeuralNote. We will explore its internal workings and how it processes audio to generate MIDI data. Next, we will cover the integration of BasicPitch into the NeuralNote plugin, implemented in C++ using the JUCE framework. We will discuss the challenges of incorporating neural network inference in audio plugins, focusing on real-time processing, thread safety, and performance. A comparison of the ONNXRuntime and RTNeural libraries will highlight the options for neural network integration in this domain.
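For those who want to experiment with the underlying model before the talk, Spotify publishes BasicPitch as the open-source basic-pitch Python package; a minimal offline transcription sketch (the audio file path is a placeholder) might look like this:

# Minimal sketch of offline audio-to-MIDI transcription with Spotify's
# BasicPitch, via the basic-pitch package (pip install basic-pitch).
# The input file path is a placeholder.
from basic_pitch.inference import predict
from basic_pitch import ICASSP_2022_MODEL_PATH

model_output, midi_data, note_events = predict(
    "input.wav",                                # any audio file librosa can read
    model_or_model_path=ICASSP_2022_MODEL_PATH,
)

# midi_data is a pretty_midi.PrettyMIDI object; save the transcription.
midi_data.write("input_transcribed.mid")

# note_events holds (start_s, end_s, midi_pitch, amplitude, pitch_bends) tuples.
for start, end, pitch, amplitude, _ in note_events[:5]:
    print(f"{start:6.2f}s - {end:6.2f}s  pitch={pitch}  amp={amplitude:.2f}")

NeuralNote itself runs this inference inside a C++/JUCE plugin, which is where the real-time and threading concerns mentioned in the abstract come in.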
[CANCELLED!] Tempo vs. Pitch: Understanding Self-Supervised Tempo Estimation
Date: Fri, 08/25/2023, 11:00am - 12:00pm
Location: Classroom
Event Type: Guest Lecture
FREE. Open to the Public.

Giovana Morais (NYU) joins us to talk about her recent ICASSP paper.

ABSTRACT: Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels, alleviating the need for time-consuming annotations. These methods have been applied in computer vision, natural language processing, environmental sound analysis, and recently in music information retrieval, e.g. for pitch estimation. Particularly in the context of music, there are few insights about the fragility of these models regarding different distributions of data, and how they could be mitigated. In this paper, we explore these questions by dissecting a self-supervised model for pitch estimation adapted for tempo estimation via rigorous experimentation with synthetic data.
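To give a flavor of the kind of pretext task such models use (an illustration, not the paper's exact setup): two time-stretched views of the same clip differ by a known rate ratio, which can supervise relative tempo without any human labels. A rough sketch with librosa:

# Rough sketch of a label-free pretext pair for tempo estimation
# (illustrative only, not the exact setup from the paper).
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))    # any mono audio clip

rate_a = np.random.uniform(0.8, 1.2)           # random stretch factors
rate_b = np.random.uniform(0.8, 1.2)
view_a = librosa.effects.time_stretch(y, rate=rate_a)
view_b = librosa.effects.time_stretch(y, rate=rate_b)

# A network t(x) estimating tempo can be trained so that
# t(view_a) / t(view_b) matches the known ratio rate_a / rate_b,
# with no tempo annotations required.
target_ratio = rate_a / rate_b
print(f"relative tempo target: {target_ratio:.3f}")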
Sound localization using a deep graph signal-processing model for acoustic imaging
Date: Wed, 08/23/2023, 3:30pm - 4:30pm
Event Type: Guest Lecture
FREE. Open to the Public.
ABSTRACT: Our research explores ways to leverage the architecture of DeepWave, originally used as an acoustic camera, to enable precise localization of sound sources. While DeepWave inherently generates spherical maps in the form of sound intensity fields, it has not been utilized for determining precise localization coordinates of sound sources.
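One simple way to turn such a spherical intensity map into a direction estimate (a hypothetical post-processing step, not necessarily the approach the speakers take) is to pick the peak over the directions at which the map is sampled:

# Hypothetical post-processing sketch: extract a source direction from
# a spherical intensity map of the kind DeepWave produces, by taking
# the peak over the sampled directions. Random data stands in for the
# network's output.
import numpy as np

rng = np.random.default_rng(0)
n_dirs = 512                                   # sample directions on the sphere
azimuth = rng.uniform(-np.pi, np.pi, n_dirs)   # radians
elevation = rng.uniform(-np.pi / 2, np.pi / 2, n_dirs)
intensity = rng.random(n_dirs)                 # stand-in for the intensity field

peak = np.argmax(intensity)                    # strongest direction
az, el = azimuth[peak], elevation[peak]

# Unit vector pointing toward the estimated source.
direction = np.array([
    np.cos(el) * np.cos(az),
    np.cos(el) * np.sin(az),
    np.sin(el),
])
print(f"estimated source: azimuth={az:.2f} rad, elevation={el:.2f} rad")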
Exploring Approaches to Multi-Task Automatic Synthesizer Programming

Date: Mon, 08/21/2023, 3:30pm - 4:30pm
Location: Classroom
Event Type: Guest Lecture
FREE. Open to the Public.
Automatic Synthesizer Programming is the task of transforming an audio signal that was generated from a virtual instrument into the parameters of a sound synthesizer that would generate this signal. In the past, this could only be done for one virtual instrument. In this paper, we expand the current literature by exploring approaches to automatic synthesizer programming for multiple virtual instruments. Two different approaches to multi-task automatic synthesizer programming are presented. We find that the joint-decoder approach performs best. We also evaluate the performance of this model.
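As a toy illustration of what a joint-decoder model for this task could look like (inferred from the abstract, not taken from the paper): a shared audio encoder feeds a single decoder that is conditioned on the target synthesizer and emits its parameter vector.

# Toy joint-decoder sketch for multi-task automatic synthesizer
# programming, inferred from the abstract rather than the paper:
# one shared encoder, one decoder conditioned on the synthesizer.
import torch
import torch.nn as nn

class JointDecoderASP(nn.Module):
    def __init__(self, n_mels=128, n_synths=3, max_params=32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(          # shared across synthesizers
            nn.Linear(n_mels, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.synth_embed = nn.Embedding(n_synths, hidden)
        self.decoder = nn.Sequential(          # joint decoder, conditioned
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, max_params), nn.Sigmoid(),  # normalized params
        )

    def forward(self, mel_frame, synth_id):
        h = self.encoder(mel_frame)
        c = self.synth_embed(synth_id)
        return self.decoder(torch.cat([h, c], dim=-1))

model = JointDecoderASP()
mel = torch.randn(4, 128)                      # batch of mel-spectrogram frames
synth = torch.tensor([0, 1, 2, 0])             # target synthesizer per example
params = model(mel, synth)                     # shape: (4, 32)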
Retrieving musical information from neural data: how cognitive features enrich acoustic ones

Date: Fri, 08/18/2023, 3:30pm - 4:30pm
Location: Classroom
Event Type: Guest Lecture
FREE. Open to the Public.
Various features, from low-level acoustics, to higher-level statistical regularities, to memory associations, contribute to the experience of musical enjoyment and pleasure. Recent work suggests that musical surprisal, that is, the unexpectedness of a musical event given its context, may directly predict listeners’ experiences of pleasure and enjoyment during music listening. Understanding how surprisal shapes listeners’ preferences for certain musical pieces has implications for music recommender systems, which are typically content-based (acoustic or semantic) or metadata-based.
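Surprisal is commonly quantified as the negative log-probability of an event under a predictive model; a minimal sketch with a toy next-note distribution (illustrative only, not the speaker's model):

# Minimal sketch of musical surprisal as negative log-probability,
# using a toy conditional note distribution (illustrative only).
# Rare continuations carry high surprisal.
import math

# Hypothetical P(next_note | context) from some sequence model.
p_next = {"G4": 0.50, "E4": 0.30, "C4": 0.15, "F#4": 0.05}

def surprisal(p: float) -> float:
    """Surprisal in bits: -log2 P(event | context)."""
    return -math.log2(p)

for note, p in p_next.items():
    print(f"{note}: P={p:.2f}  surprisal={surprisal(p):.2f} bits")
# F#4, the least expected continuation, has the highest surprisal.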
Insights into Soundscape Synthesis and Energy Consumption of Sound Event Detection Systems

Date: Thu, 08/17/2023, 10:00am - 11:00am
Location: Classroom
Event Type: Guest Lecture
FREE. Open to the Public.

The Sound of AI Accelerator
Date: Wed, 08/16/2023, 11:00am - 12:00pm
Location: Classroom
Event Type: Guest Lecture
FREE. Open to the Public.

"The Sound of AI Accelerator: From Idea to Music AI Startup." Are you interested in starting a music AI company? In this talk, Valerio will introduce The Sound of AI Accelerator, the first startup accelerator focused on music, audio, and voice AI.

Deep learning for symbolic music representations
Date: Tue, 08/15/2023, 3:30pm - 4:30pm
Location: Classroom
Event Type: Guest Lecture
FREE. Open to the Public.
Abstract: The talk will discuss the specific challenges of symbolic music representations for deep learning, with a particular emphasis on harmony and tonal analysis (although the methods discussed are applicable to other domains too). Valuable resources will be provided, including access to symbolic music datasets, essential software libraries, effective workflows, and practical insights for symbolic music data manipulation. The talk will also briefly discuss popular papers on the topic, as well as Néstor's research.