How to adapt generative AI to the realities of corporate activities

In recent years, generative AI has rapidly gained attention among Japanese companies, and expectations are high for its use in business activities. To use it effectively, however, companies must understand the characteristics of generative AI and adapt it to their specific needs. Risk management around the handling of corporate data and privacy is crucial here.
In this article, we explore the basic concepts and potential of generative AI, the problem known as hallucination, and concrete methods companies can use to apply this technology to their work.

The Potential and Limitations of Generative AI

Generative AI produces content by interpreting natural language, i.e., human language. For example, generative AI services such as ChatGPT, Microsoft Copilot, and DeepSeek let users pose questions to a chatbot in everyday speech and generate a variety of content: searching and organizing information, summarizing large amounts of text, writing programs, and more.

Large language models (LLMs), the technology behind most of what is commonly called "generative AI," generate sentences by repeatedly outputting the token (a unit of text processing) with the highest probability of following the preceding tokens, based on patterns learned from a large amount of training data. Because answers are produced simply by probabilistically calculating and connecting tokens, the factual accuracy of the content cannot be guaranteed. As a result, "hallucination" occurs: the generative AI sometimes produces answers that differ from the facts.
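The token-by-token generation described above can be illustrated with a toy sketch. The probability table below is invented purely for illustration; a real LLM learns such probabilities from massive training data and conditions on the whole preceding context, not just the last token.

```python
# Toy next-token probability table (invented for illustration only).
next_token_probs = {
    "The":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "sat":  {"down": 0.8, "up": 0.2},
    "down": {"<end>": 1.0},
}

def generate(start: str, max_tokens: int = 10) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        probs = next_token_probs.get(tokens[-1])
        if probs is None:
            break
        # Greedy decoding: pick the most probable next token.
        best = max(probs, key=probs.get)
        if best == "<end>":
            break
        tokens.append(best)
    return tokens

print(" ".join(generate("The")))  # The cat sat down
```

Note that nothing in this loop checks whether the output is true; the model only follows the probabilities, which is exactly why hallucination can occur.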

Hallucination is a major barrier to utilizing generative AI in corporate activities. Although generative AI has the potential to streamline various corporate tasks, it does not possess company-specific information such as the actual business situation or how the work is carried out. Therefore, commonly available foundation models (for example, large language models with general-purpose generative capabilities such as GPT-4) often produce irrelevant answers that do not reflect the actual business situation. For this reason, companies must be aware of the risks involved in using generative AI and manage it appropriately.

▼I want to know more about generative AI
Generative AI | Glossary
▼I want to know more about large-scale language models (LLMs)
Large Language Model (LLM) | Glossary

How to adapt generative AI to business operations

So how can generative AI be adapted to the realities of corporate activities? Generally, three approaches are used to mitigate hallucination in generative AI: (1) prompt engineering, (2) fine-tuning, and (3) RAG (Retrieval-Augmented Generation).

What is Prompt Engineering?

A prompt is an instruction a user gives to a generative AI. The generative AI generates an answer based on the content of the natural-language prompt provided by the user. Prompt engineering is the practice of crafting this prompt to guide the generative AI's answer toward the expected content. By incorporating company-specific information into the prompt, it is possible to obtain highly accurate answers while reducing risk.

Regarding this technique, LLM Mavericks, a generative AI research team at Saison Technology, has published "15 Tips to Improve Prompts" on the following website. Please take a look.

Related article: 15 Tips for Improving Your Prompts

To enable the generative AI to respond based on company-specific information, that information is added to the prompt alongside the instruction. By providing reference information about the business or operations after the instruction, the generative AI calculates token probabilities conditioned on that information and produces output that better reflects the company's situation. This prompt-engineering method, also known as "in-context learning," is both easy to use and effective.
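The pattern above can be sketched as a simple prompt-construction helper. This is a minimal illustration, not a specific vendor's API: the instruction, policy text, and question are invented examples, and in practice the resulting string would be sent to a generative AI service as the user message.

```python
# Minimal sketch of in-context learning: company-specific reference
# material is placed in the prompt so the model's answer is conditioned
# on it. All content strings below are invented examples.

def build_prompt(instruction: str, reference: str, question: str) -> str:
    return (
        f"{instruction}\n\n"
        f"### Reference information\n{reference}\n\n"
        f"### Question\n{question}"
    )

prompt = build_prompt(
    instruction="Answer using only the reference information below.",
    reference="Expense reports must be submitted by the 5th business day of each month.",
    question="When is the expense report deadline?",
)
print(prompt)
```

Because the reference text appears before the question, the model's token probabilities are steered toward the company's own facts rather than its generic training data.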

What is Fine Tuning?

Fine-tuning is a technique for further training a pre-trained generative model, such as a foundation model, on additional datasets. If you want to develop a generative AI uniquely tailored to your company, fine-tuning is considered the most effective method. It makes it possible to adjust the model's output to suit your business and provide accurate information based on your company's data.

However, the hurdles to adoption are higher than with the other methods, as it requires preparing a dataset and training the model multiple times. From a risk-management perspective, it is important to ensure the quality and security of the data used for training.
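As a sketch of the dataset-preparation step, many fine-tuning services accept a JSON Lines file of chat-style training examples. The exact schema varies by provider, and the Q&A pairs below are invented, so treat this format as illustrative only.

```python
# Sketch of preparing a fine-tuning dataset in a chat-style JSONL format.
# The schema is an assumption for illustration; check your provider's
# documentation for the exact format it requires.
import json

examples = [
    {"messages": [
        {"role": "user", "content": "What is the standard warranty period?"},
        {"role": "assistant", "content": "Our standard warranty is two years."},
    ]},
    {"messages": [
        {"role": "user", "content": "How do I request paid leave?"},
        {"role": "assistant", "content": "Submit a leave request through the HR portal."},
    ]},
]

# JSONL convention: one JSON object per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Reviewing every pair in such a file before training is also where the data-quality and security checks mentioned above take place.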

▼I want to know more about fine tuning
Fine Tuning | Glossary

What is RAG (Retrieval-Augmented Generation)?

RAG is a method in which generative AI references a company's proprietary data to generate answers that take the company's specific information into account. The generative AI accesses a knowledge base (information source), allowing it to provide answers grounded in the company's actual situation and facts. This in turn requires developing a knowledge base and managing the data securely.

Knowledge bases use a variety of data, including company-specific information, such as documents stored on file servers and in cloud storage, as well as data accumulated in databases such as customer management systems and human resources systems. This allows generative AI to generate answers based on business-specific conditions and support efficient decision-making.
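The retrieve-then-generate flow can be sketched as follows. This toy version uses naive keyword overlap to rank documents; real RAG systems typically use vector embeddings and a vector database, and the knowledge-base entries here are invented examples.

```python
# Minimal RAG sketch: retrieve relevant documents from a small in-memory
# knowledge base, then build a prompt grounded in them. The documents
# below are invented examples.

knowledge_base = [
    "Expense reports are due on the 5th business day of each month.",
    "The VPN can be accessed via the corporate portal.",
    "New employees receive laptops from the IT service desk.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    # Naive retrieval: rank documents by shared lowercase words.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_rag_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, knowledge_base))
    return (
        "Answer based only on the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_rag_prompt("When are expense reports due?"))
```

Swapping the keyword retriever for embedding-based search changes the quality of retrieval, but the overall structure, retrieve then generate, stays the same.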

▼I want to know more about RAG (Retrieval Augmented Generation)
Retrieval Augmented Generation (RAG) | Glossary

In Closing

We hope you enjoyed this article. This time, we introduced some typical methods for adapting generative AI to the realities of corporate activities. Prompt engineering is easy to get started with, but using it well takes practice and refinement. Fine-tuning a company's own large language model is expensive, so careful planning is required before implementing it.

The last method we introduced, RAG, is one way to utilize generative AI with internal data, but preparing that data as a knowledge base is essential. By utilizing corporate data safely and effectively, the introduction of generative AI will contribute to strengthening future competitiveness.

In the next article, I would like to introduce some key points to consider when developing the data infrastructure needed to realize RAG.

About the author

Affiliation: Data Integration Consulting Department, Data & AI Evangelist

Shinnosuke Yamamoto

After joining the company, he worked as a data engineer, designing and developing data infrastructure, primarily for major manufacturing clients. He then became involved in business planning for the standardization of data integration and the introduction of generative AI environments. Since April 2023, he has been working as a pre-sales representative, proposing and planning services related to data infrastructure, while also giving lectures at seminars and acting as an evangelist in the "data x generative AI" field. His hobbies are traveling to remote islands and visiting open-air baths.
(Affiliations are as of the time of publication)
