Technological Co-Creation of AI and Improvisation — Shun Ishiwaka × Kei Matsumaru Special Talk, Part 2
TOKION, January 17, 2023
https://tokion.jp/en/2023/01/17/shun-ishiwaka-x-kei-matsumaru-vol2/

What possibilities does AI bring to improvisation and performance? This is the second part of the special conversation between percussionist Shun Ishiwaka and saxophonist Kei Matsumaru, who held a session with an AI that learned their performance at the Yamaguchi Center for Arts and Media [YCAM] last June.

Percussionist and drummer Shun Ishiwaka and the Yamaguchi Center for Arts and Media [YCAM] collaborated on a performance event entitled Echoes for unknown egos—manifestation of sound.

The event was held at YCAM over two days, June 4 and 5 of this year, after a year and a half of joint research and development by Ishiwaka, YCAM, and AI researchers. During the performance, Ishiwaka held an improvisational session with an AI (artificial intelligence) agent that had been trained on a data set of his own playing, and he was joined by saxophonist Kei Matsumaru on the second day.

We interviewed Ishiwaka and Matsumaru after the second performance. In the first part, we asked them about the reason for having Matsumaru as a collaborator, their creation process complemented by a trial live performance, the sense of time and the semblance of human will specific to the AI that they felt during the performance, and their respective attitudes toward solo improvisation.

The second part highlights how their perception of improvisation changed through the creation process, their ideas on how to utilize the system they created, and their insights into the future of music education.

The way of thinking about improvisation changed through the performance

–Shun, in a public talk after the performance on the first day, you said, “My way of thinking about improvisation has changed (through this creation).” What exactly was that change?

Shun Ishiwaka (Ishiwaka): In the process of creating co-performers using AI and other technologies, I verbalized my improvisation method to have the AI learn it. Doing so, I was confronted with the question, “Can we really call what we have verbalized an improvised performance?” I went back and forth: “If players other than myself can play it, isn’t it actually composition? No, even so, it is still improvisation. No, no, it may not be improvisation.”

As for improvisation, sometimes I play with an idea of the kind of sound I want to make, and other times I play without any particular idea in mind. Even when I try to create the sound I am aiming for, coincidence sometimes works in the right direction, and unexpected and interesting developments may occur. I realized once again that my improvisation operates across many different layers like these, and this production was an opportunity for me to think about such things in detail.

When I improvise, I look forward to seeing and hearing things I have never seen or heard before. But I used to perceive only intuitively what that is, what was happening during improvisation, and how it could be interesting. It was just a sort of fantasy. In the process of teaching such things to my mechanical collaborators, I began to break my improvisations down into their component parts and verbalize them, which gave birth to many new questions. But in any case, it increased my level of understanding.

Kei Matsumaru (Matsumaru): By participating in this project, I feel I’ve been faced with very important questions about what improvisation is and what moments in music make people feel that what they hear is “good.” Kazuhisa Uchihashi once made a strong statement in conversation about the difference between “good improvisation” and “bad improvisation,” which I could really empathize with. In other words, I think there is a vague yet somewhat common understanding about what kind of improvisation is not good.

However, it is difficult (and dangerous) to verbalize what is good and what is bad. At the same time, the beauty of improvisation as a method may lie in this very difficulty. Music with a clearly defined style is rather easy to verbalize: it is relatively unchallenging to extract its characteristics, teach them to others in the academic world, and perform in a similar style. With improvised music, it is quite difficult to do so. Nevertheless, there are definite differences between “good improvisations” and “bad improvisations,” which I am always trying to figure out.

Ishiwaka: Mr. Uchihashi said, “Most of the improvisation that exists in the world is fake,” and I can sympathize with that. But it is difficult to teach “good improvisation” to someone else. For example, this time, I had a hard time thinking, “When I am playing like this, what kind of performance by which agent would be appropriate?” I don’t necessarily play quiet sounds just because my collaborator starts playing quietly, but sometimes I do. I wondered how to teach such things to the machine, which may sound like a Zen riddle (laughs), but I wanted to create a state where you never know what will happen once the performance starts.

Difference between jazz improvisation and free improvisation

–For example, jazz music also has an improvisational aspect, doesn’t it? Do you feel there is a difference between free improvisation like this and jazz improvisation in terms of how improvisation can be taught to others, even though both are forms of improvised performance?

Ishiwaka: Jazz is based on a vast amount of music accumulated by our great predecessors. And there is a lot of data that says, “This is how you should play a solo on this piece of music.” I think that by studying and practicing with such data, we become able to improvise jazz.

Matsumaru: I think jazz is a music that focuses on history, even if the performers themselves are not conscious of it. It uses historically established musical vocabulary, and the types and tendencies of chord progressions are to some extent fixed. Improvisation in jazz is part of that history. But in the case of free improvisation, I am not trying to dedicate it to any kind of history, nor am I trying to focus on history. Of course, there may be musicians who want to play music in the context of free improvisation as a genre, but that is not the case with me.

Ishiwaka: Since I am very much at the point where I like to create music with computers, I had the sense that I was designing something that could be freely improvised and jammed with. For example, neither I nor the rhythm AI play eighth-note beats in a straightforward manner. Technically, I do make beats, but programming a fixed rhythm pattern into the computer to make music is not what I want to do.

Matsumaru: If it had been a drummer other than Shun, there might have been moments where a rock-style rhythm suddenly popped up, for example.

Ishiwaka: Even though I call it free improvisation, I sort of had the form of music I wanted to play in mind while I was working on the project. What was at the core was to simply develop the idea of “wanting to perform with myself.”

The idea of “wanting to perform with myself”

–Kei, do you also feel that you would like to perform with yourself?

Matsumaru: Not necessarily, most likely because of the nature of the saxophone as an instrument. I simply don’t like the musical texture of just two saxophones playing simultaneously. In jazz as well, but especially in improvised sessions, two saxophones are a bit too much.

Ishiwaka: With drums, on the other hand, there are often free sessions with twin drums.

Matsumaru: Yes, with drums, it’s listenable. There’s more blank space, no matter how dense the rhythm is. On the other hand, saxophones have a clearly audible pitch range, and every note is very assertive, so when there are two saxophonists playing at the same time… That’s probably one of the reasons why I wouldn’t really think of performing with myself.

–Shun, you mentioned that one of the reasons why you find it interesting to perform with yourself is that you can look at yourself objectively. For example, recording your own performance and listening back to it also leads to “objectivity,” but what do you feel is the difference between that and having an AI learn from you?

Ishiwaka:It’s all about real-time performance. That is what I focused on when creating the agent. At the beginning of the creation process, after performing once at the Black Swan (blkswn welfare center), we performed again for the same period of time solely based on memory, and overlapped the recordings and videos of those two performances. Then I felt there is some kind of commonality between these two and sort of connection to myself. However, I wanted to do this interactively, not with my past performance, but with something that was going on at the same time. In other words, I wanted to create ears that would listen to my performance. And I wanted it to not only listen to me, but also to learn my performance and respond to it in real time.

“I want other musicians to use the system.”

–Shun, you mentioned that you would like other musicians to use the AI system you created this time.

Ishiwaka: I used six different agents this time, including a meta-agent, and I am very interested in how the system would sound if other musicians had each agent learn their own improvisations. This is how it turned out this time, but I think other musicians will have different sounds they want to express on the computer and different ways of having the computer learn their performance. In that case, I wonder what kind of sounds the rhythm AI or melody AI would produce. Someone might also think, “I want to make this kind of instrument to produce this kind of sound,” and ideas for new agents might emerge.

–Matsumaru, you said earlier that you don’t feel like performing with yourself. But do you have any desire to utilize this system?

Matsumaru: Definitely. For example, I think it would be very interesting if what I play came out as different sounds from different instruments. I am not really interested in having saxophones perform with me, but it was very stimulating to perform with the automated cymbals that used my performance data, as I did in the foyer at the end of this performance. When data acquired from one instrument is output through another kind of instrument, ideas may be produced that are not, or cannot, normally be played on the output instrument. I think there are new possibilities there.

For example, with the saxophone, there must be rhythmic elements specific to the performance of this instrument, and if that data is extracted and output through drums, the performance would be different from normal drumming. Of course, in reality, there would be a more complex conversion process, but either way, I feel it would be an opportunity to discover new possibilities for the instrument.

“I came to see again what I cannot do.”

–Shun, don’t you think that the great achievement of this project was the fact that it became not only an opportunity for you to “perform with yourself” on the stage, but also an opportunity for you to “look at yourself” even during the production process?

Ishiwaka: Yeah, absolutely. This was something we talked about at the after-party, but the first thing I thought after the performance was, “I want to practice more and broaden the range of my techniques and ideas!” (laughs). There were many moments when I felt, “I want to play this kind of sound in response to the development of the session, but my technique is not enough.”

Also, listening to the percussive sounds produced by the rhythm AI for a long time, I wondered how a human could play them in a cooler manner. More specifically, the feedback sound of the cymbals, which gradually changed its tone as if one were tweaking a synth filter without changing pitch, made me wonder what kind of playing technique could express this kind of sound change. I am sure that if other musicians do what I did with AI, they will also discover something new and come up with ideas and views they have never had before.

I also found the speed of technological development to be much faster than I had imagined. Just comparing now with two years ago, when I started working on this project, things have already changed significantly. So I think that by continuing this kind of technological collaboration in some form or another, we will be able to make new attempts in the years to come. Even if we do not necessarily stick to technology, though, I think it is interesting to keep searching for something new and different to acquire, rather than aiming at some destination and ending up with some results, no matter what kind of music it is. This time, the focus just happened to be AI and other kinds of technology, and improvisation.

Thinking about the future of music education through improvisation

Matsumaru: Thinking deeply about improvisation like we did today is the same as thinking deeply about music itself, rather than about a specific genre or context. I felt that this performance in particular could develop into something significant educationally. In the field of music education, most of what is done is routine-based, and at least in my experience, it rarely touches the core aspects of music.

Even children who were originally curious about many things develop a preconception that “this is how music is supposed to be” after taking such classes, and as a result, they come to have music they dislike without reason, music they find difficult, and music they find easy to listen to. However, if we can develop this kind of performance into an educational tool, we may be able to break free from such ways of thinking.

Ishiwaka: Yeah, you are right. I think this work has many layers. From how we perceive music, to how we improvise, to how we choose sounds, there are parts that are usually done without explanation, but during the creation process, we had to explain each of these layers in detail. If such explanations could be introduced into music education, I think it would change the way young children interact with music. Those things should come before, for example, the stage where children learn to play do-re-mi on the recorder or to sing together in a chorus.

Matsumaru: Yeah, I agree with that. It would be better to teach things like “this is a historically important piece of music” at a later stage, after what you just mentioned. I think one of the wonderful things about music is that it can stimulate and expand the imagination, not only our imagination associated with music, but our imagination for many other things as well. A person receiving a musical education that stifles the imagination may grow up to be a person with that kind of mindset about other aspects of life.

In a sense, for me, getting into improvisation was a way to break free from the stereotypes I had developed as a child. I grew up in a closed community in Papua New Guinea, in a very biased environment. There was a time when I tried to move my fixed thinking in a different direction, and I started to work on improvisation alongside that.

Ishiwaka: If showing such performances can be a chance to think about the future of music education, I think it is important to create opportunities to present them myself. I was introduced to Takeo Moriyama’s free jazz when I was a child, and I later studied orchestral and contemporary music, but even now, what underlies my playing is free music. Since I have lived my life in this way, I feel anew that I must continue to present my own music and that I have a mission to create opportunities for that purpose.

Shun Ishiwaka
Born in 1992 in Hokkaido, Japan, Shun Ishiwaka graduated from Tokyo University of the Arts after studying percussion at the high school attached to the Faculty of Music of Tokyo University of the Arts. Upon graduation, he received the Acanthus Music Award and the Doseikai Award. In addition to leading Answer to Remember, SMTK, and Songbook Trio, he has participated in numerous live performances and productions by Kururi, CRCK/LCKS, Kid Fresino, Kimijima Ozora, Millennium Parade, and many others. Recently, he presented Sound Mine, a new concert piece by Miyu Hosoi + Shun Ishiwaka + YCAM, at the Yamaguchi Center for Arts and Media [YCAM], under the theme of evoking memories through sound and echoes.
Official website: http://www.shun-ishiwaka.com
Twitter: @shunishiwaka

Kei Matsumaru
Though born in Japan in 1995, Kei Matsumaru was raised in a small village in the highlands of Papua New Guinea, which he calls home. From there, he moved to Boston to study music in 2014, and relocated to Japan in late 2018.
Kei is currently based in Tokyo and has been active mostly in the jazz and improvised music scene, but has increasingly been collaborating with artists from other musical genres and creative disciplines, such as contemporary dance, visual arts, and various media arts. Kei is a member of SMTK, a rock/free jazz/instrumental band, as well as mºfe (em-doh-feh), an electro-acoustic trio. In 2020, he released Nothing Unspoken Under the Sun as his quartet’s first album.
He also periodically presents “dokusō”, a series of live 90-minute solo saxophone performances through which he explores the relationship between time, space, body, and instrument and how the performance affects cognition and perception of these elements in both the audience and himself.
Recent collaborators: Eiko Ishibashi, Tatsuhisa Yamamoto, Jim O’Rourke, Otomo Yoshihide, Kazuhisa Uchihashi, Dos Monos, etc.
His second album, The Moon, Its Recollections Abstracted, is set for release on October 19, 2022.
Official website: https://www.keimatsumaru.com
Instagram: @kmatsumaru
Twitter: @keimatsumaru

Photography Yasuhiro Tani / Courtesy of Yamaguchi Center for Arts and Media [YCAM]
Translation Shinichiro Sato (TOKION)


Technological Co-Creation between AI and Improvisation — Shun Ishiwaka × Kei Matsumaru Special Conversation, Part 1
TOKION, October 21, 2022
https://tokion.jp/en/2022/10/21/shun-ishiwaka-x-kei-matsumaru-vol1/

What possibilities does AI bring to improvisation and performance? This is the first part of the conversation between percussionist Shun Ishiwaka and saxophonist Kei Matsumaru, who held a session with an AI that learned their own performance at the Yamaguchi Center for Arts and Media [YCAM] last June.

Shun Ishiwaka is a percussionist active in the Japanese music scene, transcending genres from jazz to pop. In early June of this year, Ishiwaka held a collaborative performance event titled “Echoes for unknown egos―manifestations of sound” at the Yamaguchi Center for Arts and Media [YCAM], known as a venue for exploring cutting-edge expression using media and technology.

The two-day event, held June 4-5, centered on Ishiwaka’s idea of performing with himself: he improvised sessions with AI (artificial intelligence) agents, devices that record Ishiwaka’s performance data, extract its characteristics, and perform autonomously or semi-autonomously based on that data. On the second day, saxophonist Kei Matsumaru joined an interactive improvisation session with a variety of sound-installation-like automatic instruments placed throughout the venue, just as on the first day.

This event was realized after about a year and a half of joint research and development by Ishiwaka, YCAM, and AI researchers. What possibilities did “improvisation with AI” open up? We asked Ishiwaka and Matsumaru about their thoughts immediately after the performance.

Reasons for inviting Kei Matsumaru as a co-performer

–In this performance, the main theme was Ishiwaka-san’s collaboration with himself. So Ishiwaka-san, why did you bring in Matsumaru as a co-performer?

Shun Ishiwaka (Ishiwaka): My initial idea was to show the “before and after of Shun Ishiwaka” through the two days of performance. In other words, I was going to focus on how I change between the first and second days, but the focus of the second half, which was simply going to feature “me after I changed,” shifted to “me performing with a co-performer after I changed.” I had been working with YCAM on this project for more than a year and had long experience playing with the computer, so I wondered what would happen if another artist joined and played with me, and what new discoveries might be made there.

When I thought about who might be interested in this kind of work using leading-edge technology, the first name that came to mind was Kei Matsumaru. We had performed together countless times, and since I have participated in his quartet and performed with him in SMTK, I myself feel a great deal of sympathy for the music he is trying to make and the methods he uses to realize that music. I wanted to have someone with a similar view of music to me be involved, and I also thought that if Kei had this kind of opportunity, he would think about many different things, which might lead to even greater development of the performance.

Kei, I wanted to ask you this, but do you think my way of performing has changed through this creation?

Kei Matsumaru (Matsumaru): Well, I can’t really say as of yet. Maybe I haven’t noticed much of a change because we’ve been playing together for a long time.

Ishiwaka: I personally think I have changed a lot.

Matsumaru: When we look back after a while, I might start to see this creation as a turning point in hindsight, but I can’t really give a clear answer now. It’s not that I think there haven’t been any changes; it may be similar to the way you don’t really notice when someone you live with loses or gains a little weight, for example.

“The experience of the passage of time is different from usual.”

— Did you feel any difference between your usual duo sessions and this performance with the agents, including the AI?

Ishiwaka: During rehearsal, Kei and I talked about how the experience of the passage of time was different from usual, because the co-performer in the session is not a human being. We are creating something with the AI, and when the meta-agent gives commands and the five different agents switch among themselves, the piece we are creating may suddenly end midway, or we may be made to feel that we should continue. I felt what was going on during the performance was totally different from a session with ordinary human performers.

Matsumaru: I have done many duo improvisation performances, not only with Shun but with many other people, and I can say with confidence that this performance felt considerably closer to improvising in a trio setting than a duo. When there are only two performers in a session, decisions are made on one end or the other, in turn or at the same time. But when a third party is added, it not only adds one more option for decision-making, it exponentially broadens the scope of the relationships that can happen simultaneously. Beyond three players, you don’t really feel the difference, which is why I think improvisations can be broadly categorized into “solo,” “duo,” and “trio or more,” with each becoming a different kind of music. This time it was closer to the feeling of a trio than a duo.

Ishiwaka:That’s true.

Matsumaru: Playing as a trio means that three patterns of duos may be created within the trio. A duo and a trio therefore have very different feelings, almost in the way a two-dimensional thing becomes three-dimensional. Moments of engaging with another player with intention, and situations where multiple textures exist in parallel, suddenly become a lot more complex. In this performance, there were only two human players, myself and Shun, but the meta-agent was present as another player, and there was a sense that the three of us were creating something together.

After a trial show, the performance became more “free”.

–One month prior to the performance, a trial show was held at Shibuya Koen-dori Classics with Matsumaru. How did you brush up the performance after that?

Ishiwaka: The live performance at Classics was just an experiment, more like a demonstration. We focused on playing with each agent in the first set, and in the second set, a person backstage switched the agents manually during the performance to create a musical flow. Based on the results, we created a meta-agent, a higher-level entity with which the computer can switch between the agents itself.

Matsumaru: Also, at the time of Classics, my performance data was not reflected in the agent, so it didn’t feel like a trio improvisation as in the actual performance. An interactive duo situation was created between me and Shun, and between Shun and the agent, but between me and the agent, there was no interactive communication. We may have influenced each other indirectly, but we never directly communicated as a trio, which was a definite difference from the actual performance. In the performance at YCAM, what I played on the saxophone was also reflected in the meta-agent.

Ishiwaka: It was great that we were able to play more and more freely in the process leading up to the final performance. During the one at Classics, there were times when we had to change the way we performed for each scene, and at other times I felt we had taught the agents too many techniques and they were too close to the human side.

Matsumaru: If we had not had the trial performance at Classics, perhaps we wouldn’t have been completely satisfied with the final performance.

Something close to a human will felt in the meta-agent

— During the performance, did you ever sense anything like human will in the meta-agent?

Ishiwaka: Yeah, I did a lot. I think if a player has a certain sound image he or she wants to make, we may consequently feel as if the meta-agent has a will. For me, there are times when I want to create a sound with a beat, times when I value coincidence and generate an idea randomly, and times when I create a very quiet sound. In order to express these different sounds, each agent is allocated a specific role, and the meta-agent, which has a bird’s-eye view of the whole, creates music as if it were playing with a will. In other words, we gave the agents “ears” as well.

Matsumaru: That kind of human-like will was an element we didn’t feel at the performance at Classics. I was a little surprised at how different it felt after introducing the meta-agent, because it felt like we were making music together.

–Among the agents, the rhythm AI, which produces percussion sounds, was the automatic instrument closest to Ishiwaka-san’s performance. Did you ever feel “Shun Ishiwaka’s character” in the sound the AI produced?

Ishiwaka: Yes, I did. Especially on the first day, I felt a swingin’ feel. When I had the rhythm AI learn, I set the tempo and the number of bars and tried various patterns of swing rhythms on an electronic drum kit, the Roland V-Drums. I had it learn as many of my favorite phrases as I could think of. Sometimes, listening to the agent’s performance while playing, I could recognize the swing rhythms I had had it learn.

Matsumaru: Especially with the rhythm AI, I could feel elements similar to Shun’s improvisation in the density of the sounds produced and the way that density changed. Instead of improvising with the same rhythmic density all the time, there was a wide range, just like in Shun’s performance. At times, certain parts of the agent’s rhythm were much less dense than others, and at other times, a very small part of the rhythm was highly dense. These shades of density, repeating in certain cycles, may have reflected his characteristics.
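The architecture the two describe, several role-specific agents overseen by a meta-agent with “ears,” can be roughed out in a few lines. Everything in this sketch is invented for illustration: the agent names, the loudness threshold, and the element of chance; the actual YCAM system is far more elaborate:

```python
import random

# Hypothetical role-specific agents the meta-agent can switch between.
AGENTS = {
    "rhythm": "steady beat material",
    "random": "chance-based gestures",
    "quiet":  "very soft textures",
}

class MetaAgent:
    """Toy controller with a bird's-eye view: it surveys what the human
    players are doing and decides which agent should play next."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)

    def choose(self, human_loudness):
        # "Ears": the choice follows the humans only loosely, so the
        # outcome stays unpredictable once the performance starts.
        if human_loudness < 0.2 and self.rng.random() < 0.7:
            return "quiet"  # usually, but not always, match quiet playing
        return self.rng.choice(["rhythm", "random"])

meta = MetaAgent(seed=7)
for loudness in (0.05, 0.5, 0.9):
    name = meta.choose(loudness)
    print(f"human loudness {loudness}: {name} agent -> {AGENTS[name]}")
```

The deliberate randomness mirrors Ishiwaka’s remark that he did not want the agents to simply play quietly whenever their collaborator does.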

The sound from the sampler had few touches of humanity

— Were there any scenes in which you felt that a co-performer was not human?

Matsumaru: I was always conscious of the fact that the agents I was working with were not human. However, one thing that was unexpected was how the sampler functioned. There was an agent with a sampling machine that played fragments of our past recordings through speakers. We thought this would easily bring out human-like qualities because it played actual recordings, but it gave us the opposite impression. The human quality was rather weak precisely because it merely played back samples. The way it chose sounds and played them didn’t feel human at all.

Ishiwaka: When I first decided to use a sampler, I had the idea of creating a raw sensation, the subtle fluctuations in sound and changes in texture that only humans can create. It was interesting that, although that was my original goal, when I actually tried it, it sounded less human-like.

Matsumaru: If the agent had learned from the performance data of a musician who uses a sampler as their main tool, the choice of sounds played by the agent might have sounded more human-like. In our performance, the sampler just extracted clips from past recordings whose sound characteristics were similar to the real-time performance.

–The sampler sounds had a slightly lo-fi texture, so you could tell at first listen that it was a sampler sound, even if it was the sound of the same instrument, right?

Ishiwaka: That’s right. All of the other agents had a mechanism for physically tapping the instruments to produce sound on the spot, but in the case of the sampler, the sound is played back from the speakers, so the sound source can be processed. In order not to confuse the sampler sound with the live sound we were producing in real time, we dared to apply effects to the sampler sound to change its texture.

In search of a view we have never seen before

–This event can be seen as an extension of Ishiwaka’s solo performance. Are there any barriers that you have felt you wanted to break through in your regular solo improvisation performances?

Ishiwaka: The main problem for me is when I get bored with myself. If I feel that what I am looking at now is the same as what I saw before during the performance, I may suddenly realize that I am bored with myself. To avoid this, I experiment with various ideas and try to create a situation that feels fresh by keeping my ears sensitive. I always want to go somewhere I have never been before.

Matsumaru: In my case, I sometimes imagine there being something beyond this type of boredom. I am interested in how far I can go with repeating an idea. When I get really tired of listening to an idea, my perception of that musical idea starts to deform and unravel, sometimes to the point where I can’t think about what will happen next. So, for example, I may think about how many times I can repeat the exact same phrase during a solo, and I believe this kind of patience also plays an important role in improvisation. I’ll sometimes explore this not only in solo performances but also in duos, depending on the person.

Ishiwaka: There was a time when I challenged myself to keep playing the same drumming phrase over and over again. When I keep drumming the same phrase, my ear gradually becomes more sensitive: I start to notice subtle changes in the accents, and I explore what I might be able to see if I stretched this or that part. But since there is a procedure and a pattern, and I am searching for what I can see by continuing the prescribed movements, the piece is improvised yet also composed, so I gave the performance a name and presented it as a concert piece. Recently, I tend to approach improvisation differently. Maybe what we are looking for in improvisation differs depending on the general character, role, and features of our instruments. Drummers basically repeat fixed patterns all the time in order to generate beats in their everyday performance. In order to escape from such a situation, I think that improvisation for drummers is an attempt to expand the range of expression of the drums in various ways.

Matsumaru: That’s true. In that sense, saxophone players are the opposite. We’re often required to play melodies in specific parts, develop solos in specified places, and play ideas that are fairly non-repetitive. Of course, there are many different saxophonists, but in my case, perhaps it is because of the instrument’s characteristics and given role that I’m curious about repeating ideas when I improvise.

(Continued in Part 2)

Shun Ishiwaka
Born in 1992 in Hokkaido, Japan, Shun Ishiwaka graduated from Tokyo University of the Arts after studying percussion at the high school attached to the Faculty of Music of Tokyo University of the Arts. Upon graduation, he received the Acanthus Music Award and the Doseikai Award. In addition to leading Answer to Remember, SMTK, and Songbook Trio, he has participated in numerous live performances and productions by Kururi, CRCK/LCKS, Kid Fresino, Kimijima Ozora, Millennium Parade, and many others. As a recent project, he presented Sound Mine, a new concert piece by Miyu Hosoi + Shun Ishiwaka + YCAM, at the Yamaguchi Center for Arts and Media [YCAM] under the theme of evoking memories through sound and echoes.
Official website: http://www.shun-ishiwaka.com
Twitter: @shunishiwaka

Kei Matsumaru
Though born in Japan in 1995, Kei Matsumaru was raised in a small village in the highlands of Papua New Guinea, which he calls home. From there, he moved to Boston to study music in 2014, after which he relocated to Japan in late 2018.
Kei is currently based in Tokyo and has been active mostly in the jazz and improvised music scene, but has increasingly been collaborating with artists from other musical genres and creative disciplines, such as contemporary dance, visual arts, and various media arts. Kei is a member of SMTK, a rock/free jazz/instrumental band, as well as mºfe (em-doh-feh), an electro-acoustic trio. In 2020, he released Nothing Unspoken Under the Sun as his quartet’s first album.
He also periodically presents “dokusō”, a series of live 90-minute solo saxophone performances through which he explores the relationship between time, space, body, and instrument and how the performance affects cognition and perception of these elements in both the audience and himself.
Recent collaborators: Eiko Ishibashi, Tatsuhisa Yamamoto, Jim O’Rourke, Otomo Yoshihide, Kazuhisa Uchihashi, Dos Monos, etc.
His second album, The Moon, Its Recollections Abstracted, is set for release on October 19, 2022.
Official website: https://www.keimatsumaru.com
Instagram: @kmatsumaru
Twitter: @keimatsumaru

Photography Yasuhiro Tani / Courtesy of Yamaguchi Center for Arts and Media [YCAM]
Translation Shinichiro Sato (TOKION)

The post Technological Co-Creation between AI and Improvisation — Shun Ishiwaka × Kei Matsumaru Special Conversation, Part 1 appeared first on TOKION - Cutting edge culture and fashion information.
