Program

The SMC-17 conference will open on Wednesday, July 5, 2017, at 10 am. The closing ceremonies will take place on Saturday, July 8, 2017, at around 3 pm.

Click here to download the SMC-17 conference program in PDF format.

Talks will take place in the TUAS building (Maarintie 8, Espoo), Room AS2.

The conference proceedings and full music program are now available in PDF format.


Keynote Speakers

SMC-17 is proud to present this year's keynote speakers!

Toshifumi Kunimoto (Yamaha)

History of Yamaha's electronic music synthesis

This keynote presentation will cover Yamaha's history as an electronic keyboard instrument manufacturer. Yamaha entered the electronic organ business in 1959 with the release of the Electone D-1. In the beginning, Yamaha's organs were designed using analog circuitry. Nowadays, Yamaha uses state-of-the-art digital signal processing (DSP) technologies in their products. This knowledge of DSP technologies for audio and musical applications can be applied not only to musical instruments, but also to audio equipment. This presentation will discuss the issues that arose during Yamaha's transition from analog to digital, as well as introduce some of the latest products Yamaha has developed with their cutting-edge DSP technologies.

About the speaker: Toshifumi Kunimoto was born in Sapporo, Hokkaido, Japan, in 1957. He received B.S. and M.S. degrees from the Faculty of Engineering, Hokkaido University, Sapporo, Japan, in 1980 and 1982, and a Ph.D. in 2017 for his work on ARMA digital filter design. In 1982 he joined Yamaha, where he has designed large-scale integration (LSI) systems for numerous musical instruments such as the Electone, Yamaha's trademark electronic organ line. He has created many of the signal processing technologies used in Yamaha synthesizers and pro-audio equipment. Among his designs are the famous Yamaha VL1 Virtual Acoustic Synthesizer and the CP1 electronic piano. He has also contributed articles to several Japanese specialized music and audio publications describing the behavior of analog audio equipment such as guitar stompboxes. Since 2008, he has worked at Yamaha's Center for Research & Development in Hamamatsu, Japan.


Anssi Klapuri (Yousician)

Learnings from developing the recognition engine for a musical instrument learning application

Yousician develops an educational platform that helps our users learn to play a musical instrument. In a typical setting, the user has a mobile device in front of her and plays a real acoustic instrument to interact with the application running on the device. The application listens to the user's performance via the device's built-in microphone and gives real-time feedback about the performance, while also showing written music on the screen and playing an accompanying backing track.

In this talk, I will discuss the learnings we had while building the music recognition engine for our application. There are several technical challenges in using a musical instrument as a "game controller", particularly in a cross-platform environment with a very heterogeneous set of devices. The most obvious challenge is to recognise the notes that the user is playing, as the quality and acoustic characteristics of instruments vary widely. The noise conditions for the recognition are often quite demanding: the accompanying music from the device speakers tends to leak back into the microphone and correlates strongly with what the user is supposed to play. At the same time, the application should be robust to background noise such as someone talking nearby and potentially other students playing a few meters away in a classroom setting. Timing and synchronisation pose another challenge: giving feedback about performance timing requires synchronisation accuracy down to roughly 10 ms, whereas the audio I/O latency uncertainties are often an order of magnitude larger on an unknown device.

I will discuss some of our solutions to these challenges and our workflow for developing and testing the recognition DSP. I will also briefly discuss the audio processing architecture that we use and the reasons why we decided to build our own audio engine (on top of JUCE) to be in full control of the audio capabilities. I will end on a more personal note as someone who moved from academia to a startup roughly six years ago, reflecting on the most satisfying and most frustrating moments, and the most interesting or surprising aspects of that time.
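To make the latency problem in the abstract concrete: feedback timing needs roughly 10 ms accuracy while a device's audio round-trip latency may be uncertain by 100 ms or more. The sketch below illustrates one common calibration idea (this is an illustrative sketch, not Yousician's actual method): play a known probe signal and estimate the playback-to-capture delay from the peak of its cross-correlation with the microphone signal. Here the capture is simulated so the example runs without audio hardware; all function names and parameters are assumptions made for illustration.

```python
import numpy as np

def make_probe(sample_rate=48_000, duration_s=0.05):
    """Generate a short linear chirp to use as a calibration probe."""
    t = np.arange(int(sample_rate * duration_s)) / sample_rate
    f0, f1 = 500.0, 4000.0  # sweep from 500 Hz to 4 kHz
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration_s))
    return np.sin(phase)

def estimate_latency_ms(probe, recording, sample_rate):
    """Estimate playback-to-capture delay in milliseconds.

    The index of the cross-correlation peak between the known probe
    and the captured signal gives the round-trip delay in samples.
    """
    corr = np.correlate(recording, probe, mode="valid")
    delay_samples = int(np.argmax(corr))
    return 1000.0 * delay_samples / sample_rate

if __name__ == "__main__":
    sr = 48_000
    probe = make_probe(sr)

    # Simulate a device with ~120 ms round-trip latency plus background noise.
    true_delay = int(0.120 * sr)
    recording = np.zeros(sr)  # one second of simulated microphone capture
    recording[true_delay:true_delay + len(probe)] += 0.5 * probe
    recording += 0.05 * np.random.randn(len(recording))

    print(f"estimated latency: {estimate_latency_ms(probe, recording, sr):.1f} ms")
```

On a real device, the probe would be played through the speaker and captured via the microphone, and the estimated offset could then be subtracted before scoring the timing of the user's notes.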

About the speaker: Anssi Klapuri received his Ph.D. degree from Tampere University of Technology (TUT), Finland, in 2004. He was a post-doctoral researcher at École Centrale de Lille, France, and at Cambridge University, UK, in 2005 and 2006, respectively. He led the Audio Research Group at TUT until 2009 and then worked as a Lecturer in the Centre for Digital Music at Queen Mary University of London in 2010–2011. Since 2011, he has worked as the CTO of Yousician, developing educational applications for learning to play musical instruments. His research interests include audio signal processing, auditory modeling, and machine learning.


Simon Waters (Queen's University Belfast)

Revisiting the ecosystemic approach

It is ten years since the publication of my "Performance Ecosystems: Ecological approaches to musical interaction" in the EMS07 Proceedings, a paper in which I attempted to think through the implications of regarding player, instrument and environment as contiguous and interpenetrating in musical activity. As I acknowledged at the time, this drew on the work of my colleague John Bowers and on that of my community of research students at the University of East Anglia. In the intervening period, the studios and research centre there have been closed down, and the utility of thinking in such a manner has been tested by my move to the (historically) more "interaction design"-focused context of the Sonic Arts Research Centre in Belfast.

Speaking at SMC-17 affords me the opportunity to reflect both on antecedents to the ideas in the original paper and on the implications of ecosystemic thought beyond musical performance, in the related disciplines of design and computing, and in human conduct more generally. As my musical concerns have, during the same period, increasingly focused on improvisation, this also leads to a consideration of time, timing and timescales in such conduct, and of the immense resource represented by surviving acoustic musical instruments, which embody histories and ideas that extend their capacity beyond mere instrumentality.

About the speaker: Dr. Simon Waters joined the staff of the Sonic Arts Research Centre at Queen's University Belfast in September 2012, moving from his previous role as Director of Studios at the University of East Anglia (1994–2012). He initially established a reputation as an electroacoustic composer, with awards, commissions and broadcasts in the UK and abroad, working with many contemporary dance and physical theatre companies and visual artists, including Ballet Rambert, Adventures in Motion Pictures and the Royal Opera House Garden Venture, and with the pioneering multimedia theatre practitioners Moving Being.

From a focus on electroacoustic works for contemporary dance in the 1980s, his work has shifted from studio-based acousmatic composition to a position which reflects his sense that music is at least as concerned with human action as with acoustic fact. His current research explores continuities between tactility and hearing, and is particularly concerned with space as informed by and informing human presence, rather than as a benign "parameter". He has supervised over fifty research students in areas as diverse as improvising machines, inexpertise and marginal competence in musical interaction, silence and silencedness, soundscape and acoustic ecology, and real-time audio-visual composition and performance systems design, as well as in more explicitly "compositional" areas.

His publications are similarly diverse; recent examples explore empathy and ethics in improvised conduct, early nineteenth-century woodwind instrument production in London, the notion of the "local" in a ubiquitously digitized world, and hybrid physical/virtual instrument design.