Max Summer School 2024

Evening Sessions (tentative)

All evening sessions will be held in English; no Japanese interpretation will be provided.
*This page will be updated as the evening session program changes. Please check this website for the latest details.

July 29, 18:00 - 19:30 Improtech lecture session #1

  • Miller Puckette (UCSD), An inside view of an instrument
  • Marc Chemillier (EHESS), Keeping the swing, AI cocreative musicianship in collective idiomatic settings

July 30, 18:00 - 19:30 Improtech lecture session #2

  • Shlomo Dubnov (UCSD, Qualcomm Institute), Advanced Machine Learning and Music Information dynamics for Deep and Shallow CoCreative Systems
  • Steve Lehman, Professor of Music at CalArts, Current Trends in Computer-Driven Interactivity with Tempo-Based Rhythm

July 31, 18:00 - 19:30 Improtech lecture session #3

  • Nao Tokui (Qosmo Inc.), Surfing musical creativity with AI — what DJing with AI taught me
  • Mari Kimura (UC Irvine), MUGIC®: endless possibilities extending musical expression

August 1, 19:00 Performance at Konnoh Hachimangu Shrine in Shibuya

  • Tokyo Bout à Bout
    Georges Bloch (composer, generative electronics), Taketeru Kudo (Butoh dancer), Takashi Seo (bass)

Abstracts

Miller Puckette (UCSD) & Irwin, An inside view of an instrument

Signal delays are very bothersome to live musicians, especially percussionists. As a duo using percussion, we have worked out a way to avoid having to send audio signals between computers, which would always add some delay. Instead, we work as a duo within one computer by making plug-ins that can be remotely controlled. The plug-ins can be any kind of patch, either Max or Pure Data, and can be hosted by any digital audio workstation. The result is a single software percussion instrument played live by a musician but simultaneously played by a second performer using controllers that act within one or several plug-ins in a single signal chain.
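
For readers who want to try the remote-control idea themselves, here is a minimal sketch, assuming python-osc and a plug-in that listens for OSC; the host address and the /plugin/1/gain parameter address are hypothetical, not the duo's actual setup.

```python
# Minimal sketch: a second performer remote-controls a plug-in hosted on the
# percussionist's computer, so only control messages cross the network.
# Control data tolerates latency far better than audio does.
# Requires python-osc (pip install python-osc); the plug-in host address and
# the /plugin/1/gain parameter address are hypothetical.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.0.10", 9000)  # machine hosting the DAW

# Ramp a single parameter; in practice a hardware controller would drive this.
for i in range(101):
    client.send_message("/plugin/1/gain", i / 100.0)
    time.sleep(0.01)
```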

Marc Chemillier (EHESS), Keeping the swing, AI cocreative musicianship in collective idiomatic settings

Artificial intelligence can be seen as antagonistic to certain traditional activities, particularly music. We challenge this stereotype by showing how machine learning can be used with orally transmitted music. During the REACH project, we developed improvisation software programmed in Max/MSP whose particularity is that it takes a regular pulse into account. All pulse-based musical sequences captured by the software can be reused after the learning phase while retaining the same culturally relevant rhythmic position, so the software is able to play in the style of native players. The outputs of the program are good enough to allow duets between a musician and the computer. Musicians reacting to the outputs of the machine can shed new light on the analysis of their repertoires: by refining the generation parameters, we can get closer to an optimal characterization of the music studied. We will show examples of experiments with musicians from Madagascar. Moreover, the system can also explore various degrees of hybridization. One can inject solos generated from other traditions (for instance, jazz) into the context of Malagasy music and study how well they fit that context from the native musicians' point of view, which can illuminate the boundaries of a given musical tradition.
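
As a toy illustration of the pulse idea (not the actual REACH software), one can index captured events by their position within the beat and recombine only events that share that position, so regenerated material keeps its culturally relevant rhythmic placement:

```python
# Toy illustration of pulse-aware recombination (not the REACH software):
# events are bucketed by their phase inside the beat, and generation only
# substitutes events that occupy the same phase, preserving each event's
# rhythmic placement relative to the pulse.
import random
from collections import defaultdict

# (onset_in_beats, pitch) pairs captured during the learning phase.
captured = [(0.0, 60), (0.5, 62), (1.0, 64), (1.5, 65), (2.0, 67), (2.5, 64)]

by_phase = defaultdict(list)
for onset, pitch in captured:
    by_phase[onset % 1.0].append(pitch)

def generate(n_beats, subdivisions=(0.0, 0.5)):
    """Emit one event per subdivision, reusing pitches heard at that phase."""
    return [(beat + phase, random.choice(by_phase[phase]))
            for beat in range(n_beats) for phase in subdivisions]

print(generate(4))
```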

Shlomo Dubnov (UCSD, Qualcomm Institute), Advanced Machine Learning and Music Information dynamics for Deep and Shallow CoCreative Systems

In this talk, Shlomo Dubnov will survey his recent research on advanced generative music AI methods, with an emphasis on diffusion methods and information theory. He will then describe creative applications of text-to-music, voice conversion and multi-track synthesis, and the analysis of polyphonic music in terms of multi-information dynamics. Questions of co-creativity, artistic sensibility, and Kansei in AI will be discussed.
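
As a back-of-the-envelope illustration of the information-dynamics idea (not the models discussed in the talk), one can measure how much knowing the previous symbol reduces uncertainty about the next one in a symbolic sequence:

```python
# Estimate predictive information I = H(X) - H(X | X_prev) from bigram
# counts over a symbolic sequence. This is a deliberately crude stand-in
# for music information dynamics, which applies far richer models.
import math
from collections import Counter, defaultdict

seq = "abacabadabacabae"  # stand-in for a quantized musical surface

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Bigram statistics: which symbol follows which context.
contexts = defaultdict(Counter)
for prev, nxt in zip(seq, seq[1:]):
    contexts[prev][nxt] += 1

n = len(seq) - 1
h_x = entropy(Counter(seq[1:]))                # marginal uncertainty H(X)
h_cond = sum(sum(c.values()) / n * entropy(c)  # conditional H(X | X_prev)
             for c in contexts.values())
print(f"I = H(X) - H(X|prev) = {h_x - h_cond:.3f} bits")
```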

Nicolas Brochec, Marco Fiorini (Geidai, Ircam), Real-Time Recognition of Instrument Playing Techniques for Mixed Music and CoCreative Interaction

We are going to detail the techniques, methodologies, and outcomes that led to the development of an interactive system based on real-time Instrumental Playing Technique (IPT) recognition. Starting from exploratory studies on the flute, we will discuss soundbank recording, data format, and data augmentation, as well as state-of-the-art machine learning model architectures developed in our research. By connecting our model to the co-creative AI system Somax2, we are able to interact with generative agents by means of real-time recognition of IPT classes, adding a new dimension to its interaction paradigm and addressing potential scenarios of co-creative human-machine interaction in mixed music for improvisation and composition.
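
The interaction loop can be sketched in a few lines; in this sketch the classifier is a crude placeholder and the OSC port and address are assumptions, so it shows only the plumbing, not the authors' models:

```python
# Skeleton of a recognition-to-agent loop (plumbing only, not the authors'
# system): classify short audio frames into playing-technique labels and
# forward each label to a listening agent over OSC.
# Requires sounddevice and python-osc; the OSC port and address are assumed.
import numpy as np
import sounddevice as sd
from pythonosc.udp_client import SimpleUDPClient

SR, FRAME = 44100, 2048
LABELS = ["ordinario", "flatterzunge", "aeolian"]  # example flute IPT classes
client = SimpleUDPClient("127.0.0.1", 12000)       # hypothetical agent port

def classify(frame):
    """Placeholder for the trained model: a crude energy heuristic."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    return LABELS[0] if rms > 0.05 else LABELS[2]

def callback(indata, frames, time_info, status):
    client.send_message("/ipt/class", classify(indata[:, 0]))

with sd.InputStream(channels=1, samplerate=SR, blocksize=FRAME,
                    callback=callback):
    sd.sleep(10_000)  # listen for ten seconds
```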

Mari Kimura (UC Irvine), MUGIC®: endless possibilities extending musical expression

MUGIC® is a 9-axis motion sensor similar to other generic 9-axis sensors available on the market. What sets MUGIC® apart is its comprehensive, user-friendly design. Created by violinist and composer Mari Kimura, MUGIC® is a turnkey product that lets musicians create their art immediately, without requiring extensive programming or electrical-engineering skills. The first version of MUGIC® sold out following a significant bulk order from Lincoln Center in NYC this spring. With MUGIC® v.2 under development, Kimura will demonstrate the importance of fostering a community around new technology and show how MUGIC® users are expanding its applications not only in music but also in other art forms and beyond.

Jose-Miguel Fernandez and Lara Morciano (Ircam), Composition and Interaction with Somax2

In this presentation, we will discuss the integration of Somax2 into musical composition through the works of Lara Morciano and José Miguel Fernández. We will also present the Somax2Collider environment for Spatial Interactive Agents, which is a preliminary approach to using agents in the context of spatialized improvisation using the SuperCollider software and a system of wireless connected speakers.
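
Purely to fix ideas (the Somax2Collider protocol itself is not described here), steering an agent around a ring of speakers could reduce to OSC messages like the following; the /agent/azimuth address and message layout are assumptions:

```python
# Illustrative only: sweep a generative agent's position around a speaker
# ring by sending its azimuth to a SuperCollider patch. sclang listens for
# OSC on port 57120 by default; the address and fields here are assumptions,
# not the actual Somax2Collider protocol.
import math
import time
from pythonosc.udp_client import SimpleUDPClient

sc = SimpleUDPClient("127.0.0.1", 57120)  # sclang's default OSC port

for step in range(200):
    azimuth = (step / 200) * 2 * math.pi             # one slow circle, in radians
    sc.send_message("/agent/azimuth", [1, azimuth])  # agent id, angle
    time.sleep(0.05)
```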

Nao Tokui (Qosmo Inc.), Surfing musical creativity with AI — what DJing with AI taught me

Nao Tokui discusses the progression of his AI DJ project, which incorporates machine learning systems for live performances, and shares the insights he gained from it. He also explores the potential implications of the latest AI technology in music improvisation.

Steve Lehman, Professor of Music at CalArts, Current Trends in Computer-Driven Interactivity with Tempo-Based Rhythm

Steve Lehman will present a survey of current trends in experimental musics that draw from tempo-based modalities of rhythm, with a particular focus on their application to computer-driven models for real-time interaction.

Improtech

Improtech Paris - Tokyo 2024 is part of the ERC REACH project (Raising Co-creativity in Cyber-Human Musicianship), which receives financial support from the European Research Council under the Horizon 2020 programme (Grant 883313). It is hosted by Tokyo University of the Arts as part of the Max Summer School 2024.

https://improtech.ircam.fr/

Additional support by

  • IRCAM
  • French Ministry of Culture
  • Maison Franco-Japonaise in Tokyo

Improtech Paris - Tokyo chairs

  • Pr Dr Suguru Goto
  • Pr Marc Chemillier
  • Gérard Assayag, DR

Improtech Paris - Tokyo organising team

  • Marco Fiorini
  • Nicolas Brochec
  • Vasiliki Zachari
  • Mikhail Malt