Event report

June 8, 2025

The evolution of AI raises questions about the nature of human intellect: New forms of creativity explored through play and anthropology

Tomoko Ohata

Writer

Tokyo

On Wednesday, May 22, 2024, FabCafe Tokyo held a talk session titled “AI from an Anthropological Perspective – What do people create with AI?”
At this event, Daniel Koppen and Saki Maruyama of the design/art unit “Playfool”, media artist and game developer Kyo Kihara, and anthropologist Akinori Kubo joined us to think about creating “with” AI from the perspectives of play and anthropology.
In the first half, Playfool and Kihara introduced their projects using AI and technology. In the second half, anthropologist Kubo spoke about the relationship between anthropology and AI, drawing on the example of Shogi software.

 

What if AI had developed a perspective other than the human way of thinking?

First, Maruyama introduced Playfool’s work and shared some of the questions of their current interest.

Second from left, Saki Maruyama of Playfool

Playfool is a design and art unit run by Daniel Koppen and Saki Maruyama. They have launched various projects while exploring the relationship between society and technology through the medium of play.

Forest Crayons, which are made from wood

Forest Crayons, a product made from wood, was born with the aim of connecting the forestry industry, which has developed to maintain and preserve forests, the forest environment that is changing as a result, and our daily lives. Maruyama introduced the product by saying, “Wood is often thought of as simply being brown, but in fact a wide range of colors exist.”

Regional Renku sticker

In the Regional Renku event, strangers living in various regions of Japan composed renku poems anonymously and collaboratively. Renku stickers printed with QR codes were placed in Shibuya, Shimokitazawa, and Koto Ward during the COVID-19 pandemic. Through the renku, differences in the scenery of each area emerged, and “it became an opportunity to change the way we look at the area,” Maruyama shared.

Playfool’s current interest is how our understanding of intelligence changes when we view AI from a non-anthropocentric standpoint.

“Today’s AI is built on our data and imitates humans. But if it were possible to escape human intervention, how would we define intelligence? We are exploring this question.”

Based on these questions, Playfool has been conducting several experiments. One of these is a project called CREATURE SPECULATRIX (tentative name).

Maruyama recalled, “My inspiration was a researcher named Grey Walter. He lived at the same time as Alan Turing, the father of the computer, and created an analog turtle-shaped robot that moved in the direction of light.”

Mechanical Tortoise (1951)

While Turing judged machines based on whether they thought like humans, Walter focused on whether they appeared to exhibit the free will typical of living organisms. Maruyama shared: “AI has developed by focusing on whether it is human-like or better than humans. However, we want to explore what would have happened if machines had developed based on non-human organisms, as Walter did.”

However, because intelligence and free will are defined through human perception, we cannot recognize forms of intelligence or free will that lie beyond what humans can perceive. Playfool will be working on this project over the next three months to address these issues.

What happens when you use AI to visualize the choices in your life?

The next speaker was Kyo Kihara, a media artist and game developer. He creates experimental games and installations rooted in play that encourage people to ask new questions.

Media Artist and Game Developer Kyo Kihara

For the past few years, Kihara has explored how AI and other technologies will affect our lives. One of the projects that emerged from this exploration is Future Collider.

“Fictitious signs and billboards can be placed in the city using AR. For example, what would you think if a signboard saying ‘Face Recognition in Progress’ were placed in front of a garbage dumpster, in a future with advanced AI surveillance? We are discussing this kind of urban future together with the general public.”

Future Collider uses AR to install signs and billboards from possible futures in the city.

Kihara also developed Diary of Tomorrow(s) based on the question, “What would happen if we let AI explore the impact of our life decisions in advance?” By loading your own past data, such as diaries and calendar appointments, into a large language model, you can generate an imaginary diary of your future that branches off from a major decision such as “Do I quit my job?”
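As a rough illustration of the mechanism described here, the sketch below passes past diary and calendar entries to a large language model and asks it to write a speculative diary for each branch of a decision. The model name, prompt wording, and sample entries are assumptions made for the sake of the example, not details of Kihara’s actual system.

```python
# Illustrative sketch of the idea behind Diary of Tomorrow(s): feed past
# personal records into a large language model and generate a branching
# "future diary" for each side of a major decision.
# NOTE: this is not Kihara's implementation; the model name, prompts, and
# sample data below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

past_entries = [
    "2024-03-02: Finished installing the new exhibition piece in Shibuya.",
    "2024-03-15: Long call with the London gallery about next winter.",
    "2024-04-01: Wrote that the dark London winters were hard on me.",
]

decision = "Do I base myself in Tokyo or move to London?"

def imagine_future_diary(choice: str) -> str:
    """Generate a speculative one-month diary that follows a given choice."""
    prompt = (
        "Here are excerpts from my past diary and calendar:\n"
        + "\n".join(past_entries)
        + f"\n\nDecision under consideration: {decision}\n"
        f"Assume I choose to {choice}.\n"
        "Write a plausible diary for the following month, in my voice."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable LLM would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for choice in ("stay based in Tokyo", "move to London"):
    print(f"--- If I {choice} ---")
    print(imagine_future_diary(choice))
```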

Diary of Tomorrow(s) weaves together one’s own future.

“When I look at my calendar, I see a turning point: I will be based in either Tokyo or London. If I choose to be based in Tokyo, I focus on my artistic activities, such as exhibiting new works, and appear to enjoy it. On the other hand, if I choose to move to London, an appointment with a counsellor appears on my calendar, indicating that I am experiencing a mental and physical breakdown. This may have been generated based on my description of how hard the dark winters were when I lived in London.”

Through this project, Kihara realized that even though people think they are making decisions freely of their own will, they may in fact be choosing from a very limited range of options. Through Diary of Tomorrow(s), he will continue to simulate the biases in our own life choices and explore how we can relativize the process of making them.

Exploring new forms of human expression that deviate from AI

Playfool and Kihara are both involved in various projects on the theme of technology and play. Next, they introduced two projects born from their collaboration, both undertaken with the goal of achieving a symbiotic relationship between humans and AI through play.

One is a game installation called ‘How (not) to get hit by a self-driving car’. Players must find ways to reach the goal while skilfully dodging and avoiding detection by an AI-equipped camera. They can, for example, stack cones on top of each other so that they resemble a flame, or cover their bodies with a coat to make themselves unrecognizable as humans. Every time a player wins, it highlights a flaw in current image recognition algorithms: a case in which a pedestrian goes undetected.
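As a rough sketch of the kind of detection loop such an installation could be built around, the example below uses an off-the-shelf object detector to check whether a “person” appears in a camera frame; the player slips through whenever no person is detected. The choice of detector, the confidence threshold, and the file-based input are assumptions, not the installation’s actual code.

```python
# Illustrative detection loop: a pretrained detector looks for the COCO
# "person" class in each frame, and the player "escapes" whenever no person
# is found. This is a reconstruction of the general idea, not the game's code.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

PERSON_CLASS = 1  # COCO label index for "person" in this detector

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def player_detected(frame: Image.Image, threshold: float = 0.7) -> bool:
    """Return True if the detector finds a person in the frame."""
    with torch.no_grad():
        prediction = model([to_tensor(frame)])[0]
    for label, score in zip(prediction["labels"], prediction["scores"]):
        if label.item() == PERSON_CLASS and score.item() >= threshold:
            return True
    return False

# In the installation, frames would come from a live camera; here we assume
# a saved frame on disk (hypothetical file name).
frame = Image.open("camera_frame.jpg").convert("RGB")
print("Caught by the AI camera!" if player_detected(frame) else "You slipped past.")
```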

The other project is outdraw.ai. Each time a new technology emerges to replace human modes of expression, people deviate from it and create original expressions that the technology cannot replicate. This project was launched from the question: “What is a new form of expression created together with AI?”

The rules are simple. One person draws a picture in response to a given topic. The picture must be something that the AI cannot recognize but humans can. If only the humans give the correct answer, the drawer wins. At the event, the speakers and audience tried outdraw.ai together.
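The win condition itself can be stated in a few lines of code. The sketch below simply restates the rules as described above; how the AI’s guess and the audience’s guesses are actually collected is not covered in the talk, so the inputs here are placeholders.

```python
# Illustrative restatement of the outdraw.ai win condition: the drawer wins
# only when a human guesses the topic correctly and the AI does not.
# How guesses are gathered (image model, audience input) is assumed here.

def judge_round(topic: str, human_guess: str, ai_guess: str) -> str:
    """Apply the rule: only-human-correct means the drawer wins."""
    human_correct = human_guess.strip().lower() == topic.strip().lower()
    ai_correct = ai_guess.strip().lower() == topic.strip().lower()
    if human_correct and not ai_correct:
        return "Drawer wins: only a human understood the picture."
    if ai_correct:
        return "The AI understood the picture: the drawer loses."
    return "No one guessed it: a tie with the AI, as in the CD round."

# The round described in the report: the topic was a CD, the audience guessed
# "LP" and "light", and the AI also answered incorrectly (its actual guess is
# not given in the report, so the value below is a placeholder).
print(judge_round("CD", human_guess="LP", ai_guess="frisbee"))
```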

The topic was objects. Everyone was looking at the pictures on the screen and thinking frantically.

A picture of a CD drawn in the upper right corner of the screen. The audience guessed “LP” and “light”.

The correct answer was a CD. Unfortunately, both AI and humans answered incorrectly.

Maruyama said, “No one came up with the right answer, but we have not yet reached a conclusion on how to regard a tie with the AI. As Kubo pointed out earlier, a truly new expression may lie within this tie,” leaving room for further exploration.

 

Changes in the creative process resulting from the interaction between humans and technology

Next came a presentation by anthropologist Kubo. Kubo began his research on the anthropology of robots and AI after being influenced by anthropologist Claude Lévi-Strauss’s Mythologiques, which he read as a graduate student.

Akinori Kubo, Anthropologist

“In Mythologiques, Lévi-Strauss describes how, in Native American myths, particularly origin myths, there was once no distinction between animals and humans. They spoke the same language, had the same technology, and could marry and raise children in the same way. Over time, however, the boundary between animals and humans is thought to have emerged.

“Predictions about the future, such as a society of human-robot symbiosis and the singularity hypothesis, suggest that machines and humans will either become equals or that machines will surpass humans. My research is based on the idea that these visions may resemble the mythical, boundary-less state between animals and humans.”

From an anthropological perspective, technology can be viewed as the process by which social systems are shaped through interactions with non-human entities that humans cannot fully control. Anthropological thinking about technology considers how we change through our interactions with non-human entities. To illustrate this, Kubo used the example of Shogi (Japanese chess) software.

The “Den-O-sen” series, held from 2012 to 2015, featured matches between professional Shogi players and computer Shogi software. These matches revealed significant differences in the way humans and Shogi software think.

Koru Abe, the 6-dan player who won the first game of the second Den-O-sen, commented on the differences in thought processes between humans and Shogi software as follows: “A human is afraid of changes that might be disadvantageous, so they don’t want to consider them and try to take the safer path. Computers, however, are not afraid; they read those lines and act on them. That is what makes them strong. They don’t get scared or tired, and they don’t give up until the end, even when they are losing. I realized that all of these are things a human Shogi player really needs.”

Kubo recalled, “During the Den-O-sen, when a professional player was defeated by the Shogi software, there were some intense emotional scenes, such as a female professional player in charge of the live broadcast breaking down in tears. Even the slightest emotional reaction can ruin a game of Shogi, which is why professionals usually need to suppress their emotions and play calmly. Shogi software has characteristics that humans do not have, and professionals have changed their approach through playing against it.”

“Shogi software was not just a tool, but a ‘medium’ through which humans could create new emotions and concepts,” he said. This suggests something about the relationship between AI, robots, and humans: AI is not just a tool or an independent entity; it can also be seen as a medium that creates new possibilities by interacting with humans.

 

Finally, Kubo concluded as follows.

In modern Shogi, the interaction between professional players and Shogi software is a mediating relationship between entities that perceive the board in markedly different ways. Through it, new emotions (such as the software’s ‘lack of fear’) and new concepts (‘an unfortified fortress’) emerge, and human players come to be shaped by the non-human entity of Shogi software. It is precisely because the issue of ‘strength’ is at stake that this unique phenomenon of direct confrontation between humans and machines has occurred.

This suggests that even when certain technologies are labelled as AI or robots rather than just software or hardware, they can act as mediating agents, rather than merely tools or autonomous beings, in their interactions with humans.

If the mediational phrase ‘what to create with AI’ has a different effect from the instrumental phrase ‘what to create using AI’, then perhaps it is through the interaction between creators and AI that the transformation of emotions and concepts surrounding the very act of ‘creating’ is driven and made tangible.

 

“To put it in terms of the Shogi example, making something with AI can be seen as transforming our emotions and concepts about making itself, through the interaction between the creator and the entity called AI. Conversely, a technology that does not sway emotion may not actually create much interaction with humans. The emotional side is easily left out of predictions about the future, and little is said about what is happening to human emotions and sentiments. It is in changing and concretizing emotions and concepts that we may find the real value of the idea of what we create with AI.”

Lumping everything together as “AI” and questioning “intelligence”

During the crosstalk session, a variety of discussions took place, starting with questions posed by the moderator and speakers. First, moderator Kanaoka asked the speakers, “Why do people want to lump machines and technology together and call them ‘AI’?”

In response to this question, Kubo answered, “We have become unable to imagine that non-human entities will create the future. This is because we have come to make a clear distinction between humans and other entities in terms of intelligence. That is why the image of AI as a machine that is created by humans but can possess an intelligence that surpasses humans is exceptionally appealing and unsettling when talking about the future.”

Kihara pointed out that “there is a tendency to call technologies that have not yet been realized ‘artificial intelligence’ for the time being,” and that “the term ‘artificial intelligence’ carries the risk of humans finding their own will in machines, or of shifting responsibility for problems onto the machines.” He therefore suggested that using a term such as “automatic decision-maker” instead of “artificial intelligence” would make it clearer what kind of decision-making is being transferred from humans to machines.

Maruyama of Playfool then asked, “What is intelligence?” and “Why do humans feel threatened by AI when there is still so much we do not understand about plants and animals?”

According to Kubo, people are afraid of machines precisely because they are no longer afraid of plants and animals. For example, if you fight a bear in the forest with your bare hands, you may die; with a gun, you may survive; and with information technology, you can avoid the dangerous situation in advance. “It is precisely because machines stand between humans and animals and plants that we are afraid of them,” he said.

Kubo continued:

“It is often said that since machines are made by humans, humans should be able to control them. But machines are also the product of interactions between humans and non-human entities. The same is true of natural disasters and nuclear power generation, and I think it is quite unreasonable to assume that humans can completely control such interactions. As long as we start from the premise that we should be able to control them, we cannot settle the debate over whether machines will surpass humans, or whether the result will be a utopia or a dystopia. To stop thinking this way is to stop identifying ourselves with ‘human beings’ as a whole. The expression ‘Anthropology After Humans’ in the subtitle of my book Machine Cannibalism refers to the possibilities and difficulties of such an approach.”

When we think about AI and intelligence, we may have to stop treating humans as the default standard of evaluation.

As AI continues to permeate our daily lives and evolve further, we may be able to break free from a relationship with AI built on human-centered standards of evaluation. By first recognizing ourselves as “ourselves” rather than as “humans” as a whole, we may be able to face AI on equal terms, much as Grey Walter once did in his speculations about intelligence.


Author

  • Tomoko Ohata

    Writer

    Born in 1999 in Kanagawa, Japan. Specializes in writing recruitment articles, case studies, and event reports, mainly in the fields of design and business.
