Survival Strategy in an AI-driven Society: How will humans live in an artificial super-intelligence society?
HULFT Technology Days 2024 was held on Thursday, October 10, 2024, at Bellesalle Tokyo Nihonbashi, and online for two days from Wednesday, October 16 to Thursday, October 17. On Thursday, October 17, Kazuya Ogawa, founder and CEO of a venture company who also conducts artificial intelligence research at Hokkaido University, took the stage. In his talk, entitled "Survival Strategies in an AI-Based Society: How Can Humans Survive in an Artificial Superintelligence Society?", he spoke about how to survive and advance business in an AI-driven society, tracing the past to look ahead to the future.
▼Profile of Kazuya Ogawa
Grand Design Co., Ltd. President and CEO
Visiting Professor, Hokkaido University
*Titles and affiliations are those at the time of interview.
What is the difference between the Industrial Revolution and the evolution of computer technology, including AI?
Since the Industrial Revolution began in the late 18th century, the technology surrounding humanity has evolved continuously for roughly 200 years. However, the 50 years following the end of World War II brought advances that surpass those of the preceding two centuries.
Humanity has evolved gradually over more than four million years. From the Stone Age, when stone tools and fire were the leading technologies, came the three major inventions of the 14th century: the compass, gunpowder, and movable type printing. This was followed by the Industrial Revolution in Britain from the mid-18th to the 19th century, and then the Second Industrial Revolution in the late 19th century, fueled by advances in science, oil, steel, and other fields, which made mass production of consumer goods possible through mechanized manufacturing. The 20th century brought communication and transportation technologies, which in turn gave rise to computers and the internet. Then, in the 21st century, advances in intelligence technologies led to the rapid development of AI, genome technology, and quantum computers, the themes of this article.
The biggest difference between the Industrial Revolution and today's computer technology, centered on artificial intelligence, is the shift from capital-intensive to knowledge-intensive. We are moving from a capital-intensive model, where resources are materials and value is things, to a knowledge-intensive model, where resources are replaced by data and knowledge/information is value.
The fundamental principle of future prediction is "inevitability"
We must first recognize that evolution and innovation do not always follow neat straight lines: they are nonlinear, discontinuous, and often sudden. Evolution is irregular, and this is something we need to understand well when thinking about the future.
Looking ahead, many people are probably wondering how to approach new services and technologies, including the metaverse. They may have also faced situations where they wondered whether they should tackle the issue themselves or commercialize it.
In times like these, we need to focus on whether or not there is a "necessity," which is the essence of science and technology, and I believe this serves as a universal message. Mobile phones were originally monochrome devices used only for calls, but through continued human use they evolved into the smartphones we know today. When evaluating new trends, it is important to always ask whether they answer a necessity for humanity at that time; this is the fundamental principle of my future predictions.
What is AI?
Let's start by summarizing the basics about AI. There are many different definitions of AI, but most refer to it as the artificial reproduction by computers of the thinking, reasoning, learning, and judgment activities that are carried out by the human brain. In fact, artificial intelligence has undergone repeated trial and error for over 50 years, going through many periods of decline, and it was around the time of deep learning that it finally became widely recognized as being usable in the workplace. Rather than a boom, we believe that AI is now entering a period of establishment.
When people think of AI these days, many think of generative AI, which is itself a product of deep learning. Within artificial intelligence sits machine learning, a set of techniques in which a system improves itself by learning from data, including through reinforcement learning. Deep learning is one of these machine learning techniques, and since its advent it has become even smarter through the use of multi-layer neural networks. Generative AI exists as an extension of this. This is a fairly broad classification, and a more detailed explanation would be needed to pin down exactly what AI refers to, but today we will proceed within this broad framing.
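To make the "multi-layer neural network" idea above concrete, here is a minimal sketch of a feed-forward network in plain NumPy. The layer sizes and random weights are arbitrary assumptions for illustration; real deep learning systems train these weights from data rather than leaving them random.

```python
import numpy as np

def relu(x):
    # Non-linear activation: stacking linear layers alone would collapse
    # into a single linear map, so each hidden layer applies a non-linearity.
    return np.maximum(0, x)

def forward(x, layers):
    """Pass input x through a stack of (weight, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)   # hidden layers: linear transform + activation
    w, b = layers[-1]
    return x @ w + b          # output layer: raw scores, no activation

rng = np.random.default_rng(0)
# A small 3-layer stack: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
sizes = [(4, 8), (8, 8), (8, 2)]
layers = [(rng.normal(size=s), np.zeros(s[1])) for s in sizes]

out = forward(rng.normal(size=4), layers)
print(out.shape)  # (2,)
```

The "deep" in deep learning refers simply to stacking many such layers; training adjusts the weights so the composed function maps inputs to useful outputs.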
The current major trend is what is called specialized AI, which is AI that specializes in specific tasks or fields and is able to perform them to a certain extent. However, it is difficult for specialized AI to adapt to diverse situations as flexibly as humans, and specialized AI is sometimes referred to as "weak AI."
As this technology evolves, it will develop into a general-purpose artificial intelligence capable of handling a variety of tasks, and it is predicted that we will enter the era of general-purpose AI in the 2030s. This is sometimes referred to as "strong AI." Furthermore, in the 2040s, it will surpass the power of the human brain and, thanks to its ability to recursively self-improve, will continue to grow on its own, becoming an artificial superintelligence that carries both the possibility and the risk of becoming independent of human control. When such an artificial superintelligence is born, how should humans interact with AI and how should they live their lives?
AI: the artificialization of the brain and the blurring of its boundaries
Simply put, AI accelerates the artificialization of the brain and the blurring of its boundaries. At its core, AI is mathematics: intelligence is expressed as formulas and algorithms. But because building AI means imitating the human brain, the brain itself is in effect being made artificial, and the boundary between the living human brain and artificial intelligence blurs. For example, one approach when creating AI is to consider what would happen if the functions of the frontal lobe were replaced by artificial intelligence.
Although there are various theories, I, like many others, believe that our brains are complex. As we build AI, however, the view is emerging that the brain may actually be relatively simple. Some scholars even believe that the higher functions of the human cerebral cortex, such as recognition, language comprehension, thinking, and decision-making, could be reasonably realized with just a few dozen modules. Even if the brain itself is complex, it is not unreasonable to think that artificial intelligence could achieve human-like results by combining a number of such modules.
Eventually, AI will make virtual worlds feel more real, making virtual reality more accessible than ever before. AI will adapt virtual experiences to individual feelings and thoughts, and faster processing will enable real-time responses, eliminating communication gaps and making the virtual world feel as real as the user's physical environment.
What the borderless world of virtual and real life brings
I believe that as AI and virtual reality are used in various social situations, the borders between virtual and real will become increasingly blurred. One symbolic example is "Virtual Being," the main theme of my research. Virtual Beings are virtual yet substantial entities, such as avatars in virtual reality (VR), to which AI gives reality. One example of Virtual Being is artificial life. It may be possible that artificial life modeled on life using artificial intelligence will exist in virtual spaces, coexisting and thriving alongside real living things.
In fact, in 2020 an artificial life form called a "xenobot" was created from the stem cells of African clawed frogs; although built from living cells, it shows that artificial life has technically been realized. It can even reproduce and produce offspring. If this line of work evolves further, it is possible that animals and humans themselves will become artificial life forms.
Another factor contributing to the borderlessness of virtual and real life is the advancement of genome editing technology. Genome editing, which manipulates the genetic information that makes up a human, advanced significantly thanks to the two researchers who won the Nobel Prize in Chemistry in 2020. AI technology is accelerating the analysis of billions of genome sequences, and it is predicted that by 2030 human genome information will be decoded ultrafast and free of charge. A full sequence of the human genome was only achieved fairly recently, in the early 2000s, and much of the field's evolution since can be attributed to AI technology. While ethical issues remain, genome editing has already been applied to human reproduction. The concept of posthumans, beings beyond human whose genomes have been optimally edited at human will, has also emerged. I believe that AI-driven genome editing will eventually lead to posthumans.
I am hopeful that quantum computers, which will be equipped with dramatic computational capabilities by applying the principles of quantum mechanics, will be put into practical use in the 2040s. The combination of AI and quantum computers will bring about tremendous changes and evolutions that go beyond previous dimensions.
From the 2030s onwards, as AI becomes more advanced, it is thought that reshaping humans themselves will become a realistic possibility, including through the application of artificial materials to living organisms using genome editing technology. Designer babies were already born in 2018, so this is at least technically possible, though for now it remains under human oversight. Experiments on "chimeras," in which human cells are injected into monkey embryos to combine different cell types, were announced in 2021, bringing this science-fiction-like world within technical reach.
We should ask ourselves what human capabilities are
As we enter an era where the virtual and the real merge, how will we be required to deal with this fusion in our work? Many companies are already taking steps such as utilizing digital twins, expanding into virtual spaces such as the metaverse, and introducing hybrid work styles. Strengthening data privacy and security is also important, and many companies are working to improve the customer experience, which matters when communicating with customers online. As AI becomes more human-like, lists of jobs that could be replaced by AI are being published in the media and attracting attention.
To begin with, humans are constantly influenced by the external environment as they live. In the hunter-gatherer era, it was nature; in the industrial age, it was a machine-driven mass production society; and in the information age and the current age of AI, artificial objects such as clones and avatars, as discussed above, are increasingly part of the external environment. It is expected that artificial objects will outnumber living organisms and this trend will continue to accelerate. As genome editing technology and the connection between humans and computers spread, barriers will disappear, so it could be said that we are currently on the brink of deciding whether to put a stop to the artificial design of humans themselves or to allow it to evolve.
Although there should be a fundamental principle that humans are natural creatures, technological curiosity is taking precedence amidst various challenges. The pace of technological evolution is so rapid that laws and ethics tend to lag behind, which I find both exciting and worrying. What will eventually become important is how we should nurture the internal state of humans as the external environment becomes increasingly artificial.
In other words, we need to ask ourselves once again what human capabilities are and what makes humans special. Rather than being taken away by AI, I personally believe that AI is giving us an opportunity to improve our quality as humans. I am currently working hard to improve my quality as a manager, while also questioning whether my management decisions are better than those of AI.
An approach to work that draws on human capabilities such as empathy, cooperation, and love
In this sense, it will be important to cultivate and increase the internal information that humans hold and AI cannot access, such as personal experience, unpublished information, and word of mouth. For example, research has shown that serving warm tea leads to higher negotiation success rates than cold tea, reflecting the uniquely human perception that warmth is comforting. Humans also have layered sensibilities and perceptions: the "first taste" of hospitality felt when entering a restaurant, the "middle taste" of the deliciousness of the actual meal, and the "aftertaste" felt when receiving a thank-you letter afterwards. The old expression "mother's cooking" likewise shows that our sense of taste is shaped by feelings and mood, a perception unique to humans.
Now is the time to return to physicality and nurture the inherent information and values within humans. Although we often refer to humanity as a whole, in reality various species of human, including the Neanderthals, went extinct, leaving us, Homo sapiens, as the survivors. One theory holds that Homo sapiens survived this struggle for survival because they possess remarkable abilities such as empathy, cooperation, and love. We, the surviving Homo sapiens, must have possessed these abilities from the beginning. When considering jobs that only humans can do, I believe it is important to create work that makes use of the abilities humans already possess.
I also think that one thing that is essential is the element of play. In an AI-driven and artificial society, many elements are composed of rational and logical principles, but it is important to create an environment that incorporates a sense of playfulness.
Rather than simply dividing roles between humans and AI, we are focusing on how to use AI as a counterpart that draws out human potential, and on creating what we call Humanistic AI: AI that serves as a partner for interaction and co-creation.
Lastly
Finally, I believe we need to return to the perspective of Homo sapiens. Our capacity for love and cooperation is a major reason we have survived. However, as AI and information technology blur the boundaries between the external environment, the real, and the virtual, I believe we will move toward a society where clones, artificial intelligence, and robots take on many roles. When the very existence of human beings is called into question in such an environment, creating jobs and environments that return to the abilities unique to Homo sapiens will, I believe, be the key to shaping the future. While building communities, collaborative spaces, and social designs unique to Homo sapiens, we must renew our awareness of those abilities, such as love and cooperation. This will surely be our strength for survival, and I want to keep these principles in mind when creating jobs and organizations.