Technological Co-Creation of AI and Improvisation — Shun Ishiwaka × Kei Matsumaru Special Talk, Part 2

Percussionist and drummer Shun Ishiwaka and the Yamaguchi Center for Arts and Media [YCAM] collaborated on a performance event entitled Echoes for unknown egos—manifestation of sound.

The event was held at YCAM over two days, June 4 and 5 this year, after a year and a half of joint research and development by Ishiwaka, YCAM, and AI researchers. During the performance, Ishiwaka played an improvised session with an AI (artificial intelligence) agent trained on a data set of his own playing, and he was joined by saxophonist Kei Matsumaru on the second day.

We interviewed Ishiwaka and Matsumaru after the second performance. In the first part, we asked them about the reason for having Matsumaru as a collaborator, their creation process complemented by a trial live performance, the AI's peculiar sense of time and seemingly human will that they felt during the performance, and their respective attitudes toward solo improvisation.

The second part highlights how their perception of improvisation changed through the creation process, their ideas on how they utilize the system created this time, and their insights into the future of music education.

The way of thinking about improvisation changed through the performance

–Shun, in a public talk after the performance on the first day, you said, “My way of thinking about improvisation has changed (through this creation).” What exactly was that change?

Shun Ishiwaka (Ishiwaka): In the process of creating co-performers using AI and other technologies, I verbalized my improvisation method so that the AI could learn it. In doing so, I was confronted with the question, “Can we really call what we have verbalized an improvised performance?” I went back and forth: “If players other than myself can play it, isn’t it actually composition? No, even so, it is still improvisation. No, no, it may not be improvisation.”

As for improvisation, sometimes I play with an idea of the kind of sound I want to make, and other times I play without any particular idea in mind. Even when I am trying to create the sound I am aiming for, coincidence sometimes works in the right direction, and unexpected, interesting developments may occur. I realized once again that my improvisation operates across many different layers like these, and this production was an opportunity for me to think about such things in detail.

When I improvise, I look forward to seeing and hearing things I have never seen or heard before. But I used to perceive only intuitively what that is, what was happening during improvisation, and why it could be interesting. It was just a sort of fantasy. In the process of teaching such things to my mechanical collaborators, I began to break down my improvisations into their component parts and verbalize them, which gave birth to many new questions. But in any case, it deepened my understanding.

Kei Matsumaru (Matsumaru): By participating in this project, I feel I’ve been faced with very important questions about what improvisation is and what moments in music make people feel that what they hear is “good.” Kazuhisa Uchihashi once made a strong statement in conversation about the difference between “good improvisation” and “bad improvisation,” which I could really empathize with. In other words, I think there is a vague yet somewhat common understanding about what kind of improvisation is not good.

However, it is difficult (and dangerous) to verbalize what is good and what is bad. At the same time, the beauty of improvisation as a method may lie in this very difficulty. Music with a clearly defined style is rather easy to verbalize. It is relatively unchallenging to extract its characteristics, teach them to others in the academic world, and perform in a similar style. With improvised music, it is quite difficult to do so. Nevertheless, there are definite differences between “good improvisations” and “bad improvisations,” which I am always trying to figure out.

Ishiwaka: Mr. Uchihashi said, “Most of the improvisation that exists in the world is fake.” And I can sympathize with that. But it is difficult to teach “good improvisation” to someone else. For example, this time, I had a hard time thinking, “When I am playing like this, what kind of performance by which agent would be appropriate?” I myself don’t necessarily play quietly just because my collaborator starts playing quietly, but I sometimes do. So I wondered how to teach such things to the machine, which may sound like a Zen riddle (laughs), but I wanted to create a state where you never know what will happen once the performance starts.

Difference between jazz improvisation and free improvisation

–For example, jazz music also has an improvisation aspect, doesn’t it? Do you feel that there is a difference between free improvisation like this and jazz improvisation in terms of “how you can teach others how to improvise” even though both of them are forms of improvised performance?

Ishiwaka: Jazz is based on a vast amount of music accumulated by our great predecessors. And there is a lot of data that says, “This is how you should play a solo on this piece of music.” I think that by studying and practicing such data, we become able to improvise jazz.

Matsumaru: I think jazz is a music that focuses on history, even if the performers themselves are not conscious of it. It uses historically established musical vocabulary, and the types and tendencies of chord progressions are to some extent fixed. Improvisation in jazz is part of that history. But in the case of free improvisation, I am not trying to dedicate it to any kind of history, nor am I trying to focus on history. Of course, there may be musicians who want to play music in the context of free improvisation as a genre, but that is not the case with me.

Ishiwaka: Since I am very much someone who likes to create music with computers, I had the sense that I was designing something that could be freely improvised and played in sessions. For example, neither I nor the rhythm AI play eighth-note beats in a straightforward manner. Technically, I do make beats, but programming a fixed rhythm pattern into the computer to make music is not what I want to do.

Matsumaru: If it had been a drummer other than Shun, there might have been moments where a rock-style rhythm suddenly popped up, for example.

Ishiwaka: Even though I call it free improvisation, I sort of had the form of music I wanted to play in mind while I was working on the project. What was at the core was to simply develop the idea of “wanting to perform with myself.”

The idea of “wanting to perform with myself”

–Kei, do you also feel that you would like to perform with yourself?

Matsumaru: Not necessarily. Most likely because of the nature of the saxophone as an instrument. I simply don’t like the musical texture of two saxophones playing simultaneously. In jazz as well, but especially in improvised sessions, two saxophones are a bit too much.

Ishiwaka: In terms of drums, free sessions with twin drums are fairly common.

Matsumaru: Yes, with drums, it’s listenable. There’s more blank space, no matter how dense the rhythm is. On the other hand, saxophones have a clearly audible pitch range, and every note is very assertive, so when there are two saxophonists playing at the same time… That’s probably one of the reasons why I wouldn’t really think of performing with myself.

–Shun, you mentioned that one of the reasons why you find it interesting to perform with yourself is that you can look at yourself objectively. For example, recording your own performance and listening back to it also leads to “objectivity,” but what do you feel is the difference between that and having an AI learn from you?

Ishiwaka: It’s all about real-time performance. That is what I focused on when creating the agent. At the beginning of the creation process, after performing once at the Black Swan (blkswn welfare center), we performed again for the same length of time based solely on memory, and overlapped the recordings and videos of those two performances. I then felt there was some kind of commonality between the two, and a sort of connection to myself. However, I wanted to do this interactively, not with my past performance but with something happening at the same time. In other words, I wanted to create ears that would listen to my performance. And I wanted it not only to listen to me, but also to learn my performance and respond to it in real time.

“I want other musicians to use the system.”

–Shun, you mentioned that you would like other musicians to use the AI system you created this time.

Ishiwaka: I used six different agents this time, including a meta-agent, and I am very interested in how the system would sound if other musicians had each agent learn their own improvisations. This is how it turned out this time, but I think other musicians will have different sounds they want to express on the computer and different ways of having it learn their performance. In that case, I wonder what kind of sounds the rhythm AI or the melody AI would produce. Someone might also think, “I want to make this kind of instrument to produce this kind of sound,” and ideas for new agents might emerge.

–Kei, you said earlier that you don’t feel like performing with yourself. But do you have any desire to utilize this system?

Matsumaru: Definitely. For example, I think it would be very interesting if what I play came out as different sounds from different instruments. I am not really interested in having saxophones perform with me, but it was very stimulating to perform with the automated cymbals driven by my performance data, as I did in the foyer at the end of this performance. When data acquired from one instrument is output through another kind of instrument, ideas may emerge that normally are not, or cannot be, realized on the output instrument. I think there are new possibilities there.

For example, with the saxophone, there must be some rhythmic elements specific to performing on this instrument, and if that data were extracted and output through drums, the performance would be different from normal drumming. Of course, in reality, there would be a more complex conversion process, but either way, I feel it would be an opportunity to discover new possibilities for the instrument.

“I came to see again what I cannot do.”

–Shun, don’t you think that the great achievement of this project was the fact that it became not only an opportunity for you to “perform with yourself” on the stage, but also an opportunity for you to “look at yourself” even during the production process?

Ishiwaka: Yeah, absolutely. This was what we talked about at the after-party, but the first thing I thought after the performance was, “I want to practice more and broaden the range of my techniques and ideas!” (laughs). There were many moments when I felt, “I want to play this kind of sound in response to the development of the session, but my technique is not enough.”

Also, listening for a long time to the percussive sounds produced by the rhythm AI, I wondered how a human could play them in a cooler manner. More specifically, the feedback sound of the cymbals, whose tone gradually changed as if one were tweaking a synth filter without changing pitch, made me wonder what kind of playing technique could express that kind of sound change. I am sure that if other musicians do what I did with AI, they will also discover something new and come up with ideas and views they have never had before.

I also found the speed of technological development to be much faster than I had imagined. Just comparing now with two years ago, when I started working on this project, things have already changed significantly. So I think that by continuing this kind of technological collaboration in some form or another, we will be able to make new attempts in the years to come. However, even if we do not necessarily stick to technology, I think it is interesting to keep searching for something new and different to acquire, rather than aiming at some destination and settling for the results, no matter what kind of music it is. This time, the focus simply happened to be on AI and other technologies, and on improvisation.

Thinking about the future of music education through improvisation

Matsumaru: Thinking deeply about improvisation like we did today is the same as thinking deeply about music itself, rather than about a specific genre or context. I felt that this performance in particular could develop into something educationally significant. In the field of music education, most of what is done is routine-based, and at least in my experience, it rarely touches the core aspects of music.

Even children who were originally curious about many things develop a preconception that “this is how music is supposed to be” after taking such classes, and as a result they come to have music they dislike without reason, music they find difficult, and music they find easy to listen to. However, if we can develop this kind of performance into an educational tool, we may be able to break free from such ways of thinking.

Ishiwaka: Yeah, you are right. I think this work has many layers. From how we perceive music, to how we improvise, to how we choose sounds, there are parts that are usually done without explanation, but during the creation process, we had to explain each of these layers in detail. If this kind of explanation could be introduced into music education, I think it would change the way young children interact with music. Those things should come first, before, for example, the stage where children learn to play Do Re Mi on the recorder or to sing together in a chorus.

Matsumaru: Yeah, I agree with that. It would be better to teach things like, “This is a historically important piece of music,” at a later stage, after what you just mentioned. I think one of the wonderful things about music is that it can stimulate and expand the imagination. It develops not only our imagination for music, but also our imagination for many other things. A person who receives a musical education that stifles the imagination may grow up with that kind of mindset about other aspects of life as well.

In a sense, for me, getting into improvisation was a way to break free from the stereotypes I had developed as a child. I grew up in a closed community in Papua New Guinea, in a very biased environment. At some point, I tried to move my fixed thinking in a different direction, and I took up improvisation alongside that effort.

Ishiwaka: If showing such performances can prompt people to think about the future of music education, I think it is important for me to create opportunities to present them. I was introduced to Takeo Moriyama’s free jazz when I was a child, and I later studied orchestral and contemporary music, but even now, what underlies my playing is free music. Since I have lived my life this way, I feel anew that I must continue to present my own music and that I have a mission to create opportunities for that purpose.

Shun Ishiwaka
Born in 1992 in Hokkaido, Japan, Shun Ishiwaka graduated from Tokyo University of the Arts after studying percussion at the high school attached to the Faculty of Music of Tokyo University of the Arts. Upon graduation, he received the Acanthus Music Award and the Doseikai Award. In addition to leading Answer to Remember, SMTK, and Songbook Trio, he has participated in numerous live performances and productions by Kururi, CRCK/LCKS, Kid Fresino, Kimijima Ozora, Millennium Parade, and many others. As a recent practice, he presented Sound Mine, a new concert piece by Miyu Hosoi + Shun Ishiwaka + YCAM at the Yamaguchi Center for Arts and Media [YCAM], under the theme of evoking memories through sound and echoes.
Official website:
Twitter: @shunishiwaka

Kei Matsumaru
Though born in Japan in 1995, Kei Matsumaru was raised in a small village in the highlands of Papua New Guinea, which he calls home. From there, he moved to Boston to study music in 2014, and relocated to Japan in late 2018.
Kei is currently based in Tokyo and has been active mostly in the jazz and improvised music scene, but has increasingly been collaborating with artists from other musical genres and creative disciplines, such as contemporary dance, visual arts, and various media arts. Kei is a member of SMTK, a rock/free jazz/instrumental band, as well as mºfe (em-doh-feh), an electro-acoustic trio. In 2020, he released Nothing Unspoken Under the Sun as his quartet’s first album.
He also periodically presents “dokusō”, a series of live 90-minute solo saxophone performances through which he explores the relationship between time, space, body, and instrument and how the performance affects cognition and perception of these elements in both the audience and himself.
Recent collaborators: Eiko Ishibashi, Tatsuhisa Yamamoto, Jim O’Rourke, Otomo Yoshihide, Kazuhisa Uchihashi, Dos Monos, etc.
His 2nd album The Moon, Its Recollections Abstracted is set to release on October 19, 2022.
Official website:
Instagram: @kmatsumaru
Twitter: @keimatsumaru

Photography Yasuhiro Tani / Courtesy of Yamaguchi Center for Arts and Media [YCAM]
Translation Shinichiro Sato (TOKION)

Narushi Hosoda

Born in 1989, Narushi is a writer and music critic. He edited “AA: Fifty Years Later Albert Ayler” (COMPANYSHA, 2021). His best-known pieces include “New wave of improvised music – a disk guide for first encounters or initial thoughts” and “Towards the inevitable ‘non-being of sound’ – The concrete and real music of Asian Meeting Festival.” He plans and hosts a series of events on contemporary improvised music at Kokubunji M’s. Twitter: @HosodaNarushi