Shun Ishiwaka is a percussionist active in the Japanese music scene, transcending genres from jazz to pop. In early June of this year, Ishiwaka held a collaborative performance event titled “Echoes for unknown egos―manifestations of sound” at the Yamaguchi Center for Arts and Media [YCAM], known as a venue for exploring cutting-edge expression using media and technology.
The two-day event, held June 4-5, centered on Ishiwaka’s idea of performing with himself: improvised sessions between Ishiwaka and AI (artificial intelligence) agents, devices that record his performance data, extract performance characteristics, and play autonomously or semi-autonomously based on that data. On the second day, saxophonist Kei Matsumaru joined an interactive improvisation session with the variety of sound-installation-like automatic instruments placed around the venue, just as on the first day.
This event was realized after about a year and a half of joint research and development between Ishiwaka, YCAM, and AI researchers. What possibilities did “improvisation with AI” open up? We asked Ishiwaka and Matsumaru for their thoughts immediately after the performance.
Reasons for inviting Kei Matsumaru as a co-performer
–In this performance, the main theme was Ishiwaka-san’s collaboration with himself. So why did you bring in Matsumaru-san as a co-performer?
Shun Ishiwaka (Ishiwaka): My initial idea was to show the “before and after of Shun Ishiwaka” through the two days of performance. In other words, I was going to focus on how I change between the first and second days, but the focus of the second half, which was simply going to feature “me after I changed,” shifted to “me performing with a co-performer after I changed.” Well, I had been working with YCAM on this project for more than a year and had spent a long time playing with the computer, so I wondered what would happen if another artist joined and played with me, and what new discoveries might be made there.
When I thought about who might be interested in this kind of work using leading-edge technology, the first name that came to mind was Kei Matsumaru. We had performed together countless times, and since I have participated in his quartet and performed with him in SMTK, I feel a great deal of affinity with the music he is trying to make and the methods he uses to realize it. I wanted someone with a view of music similar to mine to be involved, and I also thought that, given this kind of opportunity, Kei would think about many different things, which might lead to even greater development of the performance.
Kei, I wanted to ask you: do you think my way of performing has changed through this creation?
Kei Matsumaru (hereafter, Matsumaru): Well, I can’t really say yet. Maybe I haven’t noticed much of a change because we’ve been playing together for so long.
Ishiwaka: I personally think I have changed a lot.
Matsumaru: When we look back after a while, I might be able to see this creation as a turning point in hindsight, but I can’t really give a clear answer now. It’s not that I think there haven’t been any changes; it may be similar to the way you don’t really notice when someone you live with loses or gains a little weight, for example.
“The experience of the time passage is different from usual.”
— Did you feel any difference between your usual duo sessions and this performance with the AI agents?
Ishiwaka: Kei and I talked during the rehearsal about how the experience of the passage of time was different from usual, because the co-performer in the session is not a human being. While we are creating something with the AI, the meta-agent gives commands and switches among the five different agents, so the thing we are building may suddenly end midway, or we may be made to feel that we should continue. What was going on during the performance felt totally different from sessions with human performers.
Matsumaru: I have done many duo improvisation performances, not only with Shun but with many other people, and I can say with confidence that this performance felt considerably closer to improvising in a trio than in a duo. When there are only two performers in a session, decisions are made on one end or the other, in turn or at the same time. But when a third party is added, it not only adds one more option for decision-making; it exponentially broadens the scope of the relationships that can happen simultaneously. Beyond three players, you don’t really feel the difference, which is why I think improvisations can be broadly categorized into “solo,” “duo,” and “trio or more,” each becoming a different kind of music. This time it was closer to the feeling of a trio than a duo.
Matsumaru: Playing as a trio means that three patterns of duos may be created within the trio. A duo and a trio therefore feel very different, almost in the way a two-dimensional thing becomes three-dimensional. Moments of engaging with another player with intention, and situations where multiple textures exist in parallel, suddenly become a lot more complex. In this performance, there were only two human players, myself and Shun, but the meta-agent was also present as another player, and there was a sense that the three of us were creating something together.
After a trial show, the performance became more “free”.
–One month prior to the performance, a trial show with Matsumaru was held at Shibuya Koen-dori Classics. How did you brush up the performance after that?
Ishiwaka: The live performance at Classics was just an experiment, more like a demonstration. We focused on playing with each agent in the first set, and in the second set, a person behind the stage switched the agents manually during the performance to create a musical flow. Based on the results, we created the meta-agent, a higher-level entity that lets the computer switch between the agents itself.
Matsumaru: Also, at the time of Classics, my performance data was not reflected in the agents, so it didn’t feel like a trio improvisation as it did in the actual performance. An interactive duo situation was created between me and Shun, and between Shun and the agent, but there was no interactive communication between me and the agent. We may have influenced each other indirectly, but we never communicated directly as a trio, which was a definite difference from the final performance. In the performance at YCAM, what I played on the saxophone was also reflected in the meta-agent.
Ishiwaka: It was great that we were able to play more and more freely in the process leading up to the final performance. During the show at Classics, there were times when we had to change the way we played for each scene, and at other times I felt that we had taught the agents too many techniques and they had come too close to the human side.
Matsumaru: If we had not had the trial performance at Classics, perhaps we wouldn’t have been completely satisfied with the final performance.
Something close to a human will felt in the meta-agent
— During the performance, did you ever sense anything like human will in the meta-agent?
Ishiwaka: Yeah, I did, a lot. I think that if a player has a certain sound image he or she wants to make, we may consequently feel as if the meta-agent has a will. For me, there are times when I want to create a sound with a beat, times when I value coincidence and generate ideas randomly, and times when I create a very quiet sound. To express these different sounds, each agent is allocated a specific role, and the meta-agent, which has a bird’s-eye view of the whole, creates music as if it were playing with a will. In other words, we gave the agents “ears” as well.
Matsumaru: That kind of human-like will was an element that we didn’t feel at the time of the performance at Classics. I was a little surprised at how different it felt after introducing the meta-agent in the performance, because it felt like we were making music together.
–Among the agents, the “Rhythm AI,” which produces percussion sounds, was the automatic instrument closest to Ishiwaka-san’s own playing. Did you ever feel “Shun Ishiwaka’s character” in the sound the AI produced?
Ishiwaka: Yes, I did. Especially on the first day, I felt a swingin’ feel. When I had the Rhythm AI learn, I set the tempo and the number of bars and tried various patterns of swing rhythms on an electronic drum kit, the Roland V-Drums. I had it learn as many of my favorite phrases as I could think of. Sometimes, listening to the agent’s performance while playing, I could recognize the swing rhythms I had taught it.
Matsumaru: Especially with the Rhythm AI, I could feel elements similar to Shun’s improvisation in the density of the sounds it produced and the way that density changed. Instead of improvising with the same rhythmic density all the time, there was a wide range, just like in Shun’s playing. At times, certain parts of the agent’s rhythm were much less dense than others, and at other times a very small part of the rhythm had high density. These shades of density, repeating in certain cycles, may have reflected his characteristics.
The sound from the sampler had few touches of humanity
— Were there any scenes in which you felt that a co-performer was not human?
Matsumaru: I was always conscious of the fact that the agents I was working with were not human. One thing that was unexpected, though, was how the sampler functioned. There was an agent with a sampling machine that played fragments of our past recordings through speakers. We thought this would easily bring out human-like qualities because it played actual recordings, but it gave us the opposite impression. The human quality was rather weak precisely because it merely played back samples. The way it chose the sounds and played them didn’t feel human at all.
Ishiwaka: When I first decided to use a sampler, I had the idea of creating a raw sensation, the subtle fluctuations in sound and changes in texture that only humans can create. It was interesting that, although that was my original goal, when I actually tried it, it sounded less human-like.
Matsumaru: If the agent had learned from the performance data of a musician who used a sampler as their main tool, its choice of sounds might have sounded more human-like. In our performance, the sampler just extracted clips from past recordings whose sound characteristics resembled the real-time performance.
–The sampler sounds had a slightly lo-fi texture, so you can tell at first listen that it is a sampler, even when it plays the sound of the same instrument, right?
Ishiwaka: That’s right. All of the other agents had mechanisms for physically striking instruments to produce sound on the spot, but in the case of the sampler, the sound is played back through speakers, so the source can be processed. So that the sampler’s sound would not be confused with the live sound we were producing in real time, we deliberately applied effects to change its texture.
In search of a view we have never seen before
–This event can be seen as an extension of Ishiwaka’s solo performance. Are there any barriers that you have felt you wanted to break through in your regular solo improvisation performances?
Ishiwaka: The main problem for me is when I get bored with myself. If, during a performance, I feel that what I am looking at now is the same as what I saw before, I may suddenly realize that I am bored with myself. To avoid this, I experiment with various ideas and try to create situations that feel fresh by keeping my ears sensitive. I always want to go somewhere I have never seen before.
Matsumaru: In my case, I sometimes imagine there being something beyond this type of boredom. I am interested in how far I can go with repeating an idea. When I get really tired of listening to an idea, my perception of that musical idea starts to deform and unravel, sometimes to the point where I can’t think about what will happen next. So, for example, I may think about how many times I can repeat the exact same phrase during a solo, and I believe this kind of patience also plays an important role in improvisation. I’ll sometimes explore this not only in solo performances but also in duos, depending on the person.
Ishiwaka: There was a time when I challenged myself to keep playing the same drumming phrase over and over again. As I kept drumming the same phrase, my ears gradually became more sensitive. I could then sense subtle changes in the accents, and I would explore what I might be able to see if I stretched this or that part. But there is a procedure and a pattern, and I am looking for something I can only see by continuing the prescribed movements, so I felt it was improvised but also composed; I named the performance and presented it as a concert piece. Recently, I tend to approach improvisation differently. Maybe what we look for in improvisation differs depending on the general character, role, and features of our instruments. Drummers basically repeat fixed patterns all the time in order to generate beats in everyday performance. To escape from that situation, I think improvisation for a drummer is an attempt to expand the range of the drums’ expression in various ways.
Matsumaru: That’s true. In that sense, saxophone players are the opposite. We’re often required to play melodies in specific parts, develop solos in specified places, and play ideas that are fairly non-repetitive. Of course, there are many different saxophonists, but in my case, perhaps it is precisely because of the instrument’s characteristics and given role that I’m curious about repeating ideas when I improvise.
(Continued in Part 2)
Born in 1992 in Hokkaido, Japan, Shun Ishiwaka graduated from Tokyo University of the Arts after studying percussion at the high school attached to the Faculty of Music of Tokyo University of the Arts. Upon graduation, he received the Acanthus Music Award and the Doseikai Award. In addition to leading Answer to Remember, SMTK, and Songbook Trio, he has participated in numerous live performances and productions by Kururi, CRCK/LCKS, Kid Fresino, Kimijima Ozora, Millennium Parade, and many others. As a recent practice, he presented Sound Mine, a new concert piece by Miyu Hosoi + Shun Ishiwaka + YCAM at the Yamaguchi Center for Arts and Media [YCAM], under the theme of evoking memories through sound and echoes.
Official website: http://www.shun-ishiwaka.com
Though born in Japan in 1995, Kei Matsumaru was raised in a small village in the highlands of Papua New Guinea, which he calls home. From there, he moved to Boston in 2014 to study music, and relocated to Japan in late 2018.
Kei is currently based in Tokyo and has been active mostly in the jazz and improvised music scene, but has increasingly been collaborating with artists from other musical genres and creative disciplines, such as contemporary dance, visual arts, and various media arts. Kei is a member of SMTK, a rock/free jazz/instrumental band, as well as mºfe (em-doh-feh), an electro-acoustic trio. In 2020, he released Nothing Unspoken Under the Sun as his quartet’s first album.
He also periodically presents “dokusō”, a series of live 90-minute solo saxophone performances through which he explores the relationship between time, space, body, and instrument and how the performance affects cognition and perception of these elements in both the audience and himself.
Recent collaborators: Eiko Ishibashi, Tatsuhisa Yamamoto, Jim O’Rourke, Otomo Yoshihide, Kazuhisa Uchihashi, Dos Monos, etc.
His second album, The Moon, Its Recollections Abstracted, is set for release on October 19, 2022.
Official website: https://www.keimatsumaru.com
Photography Yasuhiro Tani / Courtesy of Yamaguchi Center for Arts and Media [YCAM]
Translation Shinichiro Sato (TOKION)