Retrieval Augmented Generation (RAG)


This glossary explains keywords that help build the mindset needed for data utilization and successful DX.
This installment looks at how generative AI, currently attracting so much attention, can be put to practical use.

What is Retrieval Augmented Generation (RAG)?

Retrieval Augmented Generation (RAG) is a method of using generative AI such as ChatGPT that improves responses by combining them with information retrieved from external sources.
Generative AI such as ChatGPT is built on large language models (LLMs), but LLMs are typically trained on general data and tend to have only general knowledge. By supplementing the prompt with information retrieved from external sources, RAG is expected to let the LLM respond more appropriately on topics it has not learned (such as information about your company).

Why is the use of generative AI not progressing as expected?

Generative AI is attracting attention worldwide, and its use in business is drawing interest as well, yet there seem to be few stories of it producing truly satisfactory results.

For example, many people who tried ChatGPT came away with the strong impression that they might be able to do something with it, and hearing about other companies using it for business made that feeling even stronger.

However, although it is used for auxiliary purposes such as "summarizing documents" and "assisting with idea generation," it often stays at that level. Despite that strong first impression, its use in business is not progressing as far as it could.

Why is it not playing a more active role?

Why has its use not progressed much? First, the hallucination problem, in which the AI confidently states things that are not true, is widely known, making it difficult to use in applications that require accuracy and accountability. Another limitation is its restricted ability to respond: ask a question about your own company, and you will often get only a general answer.

In other words, even if you have a question you want the AI to answer or want to use it in your business, it can be hard to use because the AI lacks the prerequisite knowledge needed to answer. For example, if you ask ChatGPT a question that requires knowledge of your company's internal regulations, it cannot respond properly.

If you want generative AI to play an active role in your company's business, it naturally needs to give answers that take your company's circumstances into account, but it cannot.

Your company's data is not included in the training data

The reason is that the training data used to create the large language model (LLM) behind ChatGPT consists only of publicly available information.

It can therefore answer questions about general matters found on the web or in dictionaries, but since the training data does not include your company's internal regulations, it naturally cannot answer questions about them.

So what can we do?

Given this reality, how should we utilize generative AI?

First, you can work within what ChatGPT already knows and can handle, and look for uses in your business that fit those limits. This is exactly where the common use cases of "summarization" and "idea generation assistance" come in.

Another approach is to somehow "make the generative AI understand data related to your company and expand its scope of use." If it cannot answer because it lacks the knowledge, then prepare something that does have the knowledge.

Since generative AI such as ChatGPT is built on large language models (LLMs), the usual machine-learning approaches would be either to create your own LLM or to use fine-tuning to additionally train an existing one.

For more information on machine learning in general, see these articles:
Machine Learning | Glossary
Fine Tuning | Glossary

Unfortunately, at the time of writing it is not realistic to create a new LLM specifically for your company by training it on your own data: it would take far too much time and cost. The alternative, fine-tuning an existing LLM on your company's own data, is also said not to work well in many cases (at present).

In other words, preparing your own LLM is difficult, and for now you have little choice but to use a general-purpose one. Instead, attention has turned to expanding the scope of generative AI by using RAG.

What is RAG (Retrieval Augmented Generation)?

RAG is an abbreviation for "Retrieval Augmented Generation," sometimes translated as "search-augmented generation." This approach uses a general-purpose generative AI (LLM) as-is, without modifying the AI itself, and supplements it with your company's data from outside at the time of use, allowing it to give answers that take your company's circumstances and proprietary data into account.

  • The user inputs a question or other text.
  • [Search phase] Based on the question, external information sources (such as internal documents outside ChatGPT) are searched, and additional information to give the generative AI is prepared from the results.
  • [Generation phase] The additional information prepared from the search results is included in the prompt alongside the user's question, which changes the generative AI's response so that it reflects the content of the external information source.
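The two phases above can be sketched end-to-end in a few lines. Everything here is a made-up stand-in for illustration: the document store, the naive keyword-overlap scoring, and the prompt wording are assumptions, not a real search engine or product API.

```python
# Toy end-to-end RAG sketch: search phase + prompt assembly for the
# generation phase. The documents and scoring are illustrative only.

DOCUMENTS = {
    "travel_expenses.txt": ("Travel expenses are reimbursed for the route "
                            "from the regular workplace to the destination."),
    "paid_leave.txt": "Employees are entitled to 20 days of paid leave per year.",
}

def search(question: str, top_k: int = 1) -> list[str]:
    """Search phase: rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        DOCUMENTS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Generation phase (input side): embed the retrieved text in the prompt."""
    return ("Answer the question by referring to the supplementary information.\n"
            f"Question: {question}\n"
            "Supplementary information:\n" + "\n".join(context))

question = "How are travel expenses reimbursed?"
prompt = build_prompt(question, search(question))
print(prompt)  # this prompt would then be sent to the generative AI
```

Note that the generative AI itself is untouched; only the prompt handed to it changes.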

Searching for related information

The system searches external sources for information related to the text the user entered. For example, for the question "Based on company regulations, how should I apply for travel expenses in this case?", it would search internal documents for keywords or topics such as "company regulations" and "travel expense application" to find related documents.

So how do you find the relevant documents?

You may have heard the term "vectorization" in connection with RAG. Technologies such as "vectorization" and "vector databases" are often used to build the process that searches for text whose meaning is similar to the question. For more information, see the articles below.

Vectorization / Embedding | Glossary
Vector database | Glossary

When RAG is explained, "vectorization" is sometimes presented as an almost magical cutting-edge technology, but it is nothing unimaginable: at its core, it is simply part of the process of "searching for related information."
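The idea can be shown with a toy example: texts become vectors, and "similar meaning" becomes geometric closeness (here, cosine similarity). Real systems use trained embedding models; the hand-rolled word-count vector below is an assumption made purely to show the underlying mechanism.

```python
# Toy "vectorization": turn text into a vector, then compare vectors
# geometrically. A trained embedding model would replace to_vector()
# in a real system; this bag-of-words version is illustrative only.
import math
from collections import Counter

def to_vector(text: str) -> Counter:
    """A crude stand-in for an embedding: a word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity: 1.0 = same direction, 0.0 = nothing in common."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

q = to_vector("travel expense application")
doc1 = to_vector("how to apply for travel expense reimbursement")
doc2 = to_vector("annual health checkup schedule")

# The query ranks closer to the travel-expense document than to the other.
print(round(cosine(q, doc1), 3), round(cosine(q, doc2), 3))
```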

Furthermore, even though it can "search for things with similar meanings," it does not understand meaning the way humans do, so related-document search using vectorization may not work well. As a result, even after introducing RAG, it may not produce the expected results or deliver answers with practical accuracy.

Moreover, the effort to "search and utilize documents and knowledge accumulated within a company" did not start recently; existing technologies address it. For example, related documents can be found with a "full-text search engine (enterprise search)," which has a long track record in "knowledge management" initiatives.

If your company already has such an "information utilization system" in place and uses it effectively, it can serve as the foundation for RAG. The key is not vectorization itself but being able to "search for and find the information you need" (and various technologies can be combined).

Make it into a form that can be input to a generative AI

The retrieved data is processed into a form that the generative AI being used can accept as input. This may be vectorized data, or text extracted from the retrieved document. Alternatively, if the generative AI can read Word or Excel files directly, the data may be usable as-is.

In the previous example, if you search for related documents on "company regulations regarding transportation expenses" using a full-text search engine and find a Word document on the company's file server, you can extract the contents of the Word file and process it into text data.

When the relevant information is normalized tabular data stored in a database (such as a DWH or RDB), its format may not suit the generative AI as-is. In such cases, you may need to devise preprocessing so that the AI can give a good answer.
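One common form of such preprocessing is turning rows of tabular data into plain sentences. The sketch below does this with Python's standard library; the column names and figures are made-up examples, not real data.

```python
# Sketch of preprocessing normalized tabular data (e.g. rows from a
# DWH/RDB) into plain sentences a generative AI can read more easily.
# The column names and values are invented for illustration.
import csv
import io

raw = """employee,route,amount_jpy
Tanaka,Nagoya-Osaka,6680
Suzuki,Tokyo-Osaka,14720
"""

rows = list(csv.DictReader(io.StringIO(raw)))
sentences = [
    f"{r['employee']} claimed {r['amount_jpy']} yen for the {r['route']} route."
    for r in rows
]
context = "\n".join(sentences)
print(context)  # text like this is easier for an LLM to use than raw rows
```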

Give relevant information to the generative AI to answer questions

Finally, the generative AI is given both the "text entered by the user" and the "related information retrieved from external sources" as input, and generates a response.

Technically, instead of pre-training the LLM, the generative AI's behavior is changed dynamically through "in-context learning," which produces behavior at the time of use as if additional training had been performed.

⇒In-Context Learning (ICL) | Glossary

Methods for improving RAG performance are being researched every day, and more elaborate procedures may be developed, but the "basic implementation steps" are as explained above.

Trying out how RAG works, by hand

Actually developing a RAG system would take real work, so instead let's do something similar "by hand."

Asking without RAG

Before trying RAG, let's first ask the question without it.

If I take a vacation at my parents' house in Nagoya on Sunday and then travel directly from Nagoya to Osaka on a business trip, how should I process my travel expenses claim in accordance with company regulations?

The specific details of company regulations vary from company to company, but I will explain based on a general case. When applying for travel expenses for a business trip from your parents' home in Nagoya to Osaka, you can follow these steps.

  1. Check company regulations:
    First, check your company's travel regulations and travel expense reimbursement rules. Many companies base travel expenses on the cost of travel from your regular workplace to the business trip destination, but some also have rules for travel that starts directly from another location during holidays or vacations.
  2. Consult your supervisor or the HR department:
    If the rules are unclear or an exception is needed, consult your supervisor or the human resources department (or the accounting department). It helps to explain why you are traveling directly from your parents' home in Nagoya to Osaka, and how the transportation cost differs from the usual travel expenses from your workplace (head office, etc.) to Osaka.
  3. Compare transportation costs:
    Compare the travel expenses from your usual workplace to the business trip destination with the actual expenses (here, Nagoya to Osaka). Most companies reimburse the lower amount, but some reimburse the full actual cost.
  4. Prepare the application documents:
    Prepare the documents needed for the travel expense claim (claim form, copies of receipts and tickets, etc.). If there are special circumstances, include an explanation.
  5. Submit your application:
    Once the documents are ready, apply for travel expenses through the prescribed procedure.

It is important to check company regulations in advance and consult with your supervisor or relevant department if necessary. In addition, when applying for travel expenses, you must prepare accurate information and the necessary documents.

It does its best to answer in general terms (which is appreciated), but it does not tell you what to do under your company's specific rules. ChatGPT tends to end up like this because it does not know those rules.

Image of providing relevant information in RAG

Next, let's look at an example of what it means to "provide related information and have the AI answer." Note that this is just an illustrative example.

Please answer the following question by referring to the supplementary information. If you answer by referring to the supplementary information, please also explain which part of the information you referred to and why.

question:
If I take a vacation at my parents' house in Nagoya on Sunday and then travel directly from Nagoya to Osaka on a business trip, how should I process my travel expenses claim in accordance with company regulations?

Supplementary information:
(Text data extracted from the travel expenses section of the company regulations document)

The user does not type all of this. The only part the user actually enters is the question itself (the same one as before); the rest is added automatically by RAG.

RAG performs a search based on the question and references external information; in the example above, a prompt is created with the related information embedded in the "Supplementary information" section.

Asking with related information provided

As an example, let's ask the same question while pasting in, as text data, the "Staff Travel Expense Payment Regulations" of the Tokyo Metropolitan Government from the URL below as our external information source.

Staff Travel Expense Payment Regulations (Tokyo Metropolitan Government)

Please answer the following question by referring to the supplementary information. If you answer by referring to the supplementary information, please also explain which part of the information you referred to and why.

Question:
If I take a vacation at my parents' house in Nagoya on Sunday and then travel directly from Nagoya to Osaka on a business trip, how should I process my travel expenses claim in accordance with company regulations?

Supplementary information:
(The full text of the Tokyo Metropolitan Government regulations has been copied and pasted: It is too long, so I have omitted it.)

If you are taking a vacation at your parents' home in Nagoya and then traveling directly from Nagoya to Osaka, you should refer to the sections "(Electromagnetic Records and Electromagnetic Methods)" and "(Travel Expense Reimbursement Procedures)" to apply for travel expenses. However, these sections do not provide direct information on how to apply for specific business trips or business trips from specific departure points. Therefore, you should apply in accordance with general procedures and rules.

Furthermore, by referring to Schedule 1 and Schedule 2, you can find detailed information on the travel expense reimbursement regulations related to business trips, but these tables also do not include information specific to direct business trips from Nagoya to Osaka. According to the "(Calculation of Route)" section, the route used to calculate travel expenses is determined according to the classification of rail, waterway, and land route; for a direct business trip from Nagoya to Osaka, rail or land (car) would generally be used.

Therefore, when applying for travel expenses for a business trip from your hometown in Nagoya directly to Osaka, it is important to consider the following points:

  1. Travel expenses must be claimed in accordance with "(Form for travel expense claim procedures, etc.)" using the prescribed form. For domestic business trips, this is "Form No. 4 (Domestic travel expense claim breakdown sheet and receipt)".
  2. Calculate the distance from Nagoya to Osaka using "(Route Calculation)" and calculate the cost of the trip based on that. Rail or road fares, and express or special express charges may apply if necessary.

Travel expenses are reimbursed and claimed in accordance with the accounting and internal regulations of the employee's organization, including information about the purpose, schedule, and necessity of the trip.

This answer is based on the supporting information you provided, but it's best to check with your organization's human resources or accounting department for specific application procedures and required documentation.

Although the answer is not entirely satisfying (prompting techniques such as "changing how the question is asked" or "asking follow-up questions" might elicit what is needed), even with a prompt that was not particularly polished and was simply run as a trial, the answer was reasonably good.

In any case, note that the answer is no longer a generic statement: we succeeded in getting ChatGPT to "answer with specific circumstances in mind." The answer is based on the "Tokyo Metropolitan Government regulations" we supplied from an external source.

Used this way, RAG lets even the general-purpose ChatGPT give answers that take your company's circumstances into account.

"RAG Swamp" (RAG Reality)

For these reasons, RAG has become a highly anticipated tool for putting generative AI to work in practice, and many companies are now working to "introduce RAG into their own companies."

However, the reality (at the time of writing) is that RAG initiatives often struggle to achieve the expected results. The difficulty of implementation and the prolonged struggles that follow have even earned it the name "RAG swamp." So what, specifically, tends to happen?

I would like to try RAG

Suppose your company has just started working on generative AI, learns about something called RAG, and decides to give it a try. Volunteer engineers within the company may take the initiative, or you may consult an external vendor.

Rushing into something you do not fully understand is an easy way to fail, so you start small. First comes an explanation of what RAG is (including the technology known as in-context learning), then a simple demo like the travel-expense example above, so everyone understands how it works.

Next, a PoC (Proof of Concept) is run within a limited department to verify whether RAG can be used in actual operations. The PoC shows promising results, so the company decides to introduce a RAG system for company-wide use.

And the point is not that many companies fail by "introducing RAG carelessly": the problems below can occur even when the introduction is carefully planned and carried out step by step.

Not used because the accuracy of the answers is too low

Expectations within the company are high, and system development and data preparation for the company-wide RAG rollout proceed, although development consumes more person-hours and budget than expected.

Finally the company's RAG goes live, but the people who try it give it a harsh verdict: "This is useless." The system ends up unused despite being introduced.

A hasty investigation into why it was not being used finds that even for work-related questions, it "too often fails to give an appropriate answer." Looking more closely at answer accuracy, at best only a few percent of questions were answered appropriately.

  • RAG is often introduced with high expectations, but it seems that the lack of accuracy in responses often becomes a major problem.

"RAG Swamp"

Why can't the RAG system answer appropriately? Investigation reveals that the related documents needed to produce an appropriate answer are often not retrieved. It becomes clear that the problem is not the generative AI itself: the system fails because it searches out the wrong documents and then feeds them in as the basis for the answer.

To improve answer accuracy, search accuracy must improve. But even in discussions with vendors, it becomes clear that no single fix will solve the problem at once, and there is no clear path to practical accuracy through systematic improvements either.

Still, without better accuracy, the RAG introduced with so much effort would go to waste. So the team keeps making "small, tedious fixes" with no clear outlook. Accuracy seems to improve little by little, yet never becomes sufficient, and the seemingly endless series of countermeasures against an inaccurate RAG drags on.

  • Improving response accuracy is often difficult, and can lead to a difficult situation known as the "RAG Swamp."

But you still need RAG

Most widely used technologies work if properly set up and implemented according to procedure. It is not a good situation that, at the time of writing, you sometimes have to wade through a swamp just to get this technology to function as intended.

However, it is also true that without RAG there are limits to how widely generative AI can be used, and to advance its use there are situations where you have no choice but to take on the "RAG swamp" that awaits.

Other concerns about RAG

There are other things to consider besides the "RAG swamp."

Sometimes it's better to use existing search technology

RAG is closer to "finding the answer by search" than to "generative AI creating the answer." When accuracy is lacking, the cause is usually not the generative AI itself but a failure in the search process, such as in the vector database.

If so, there may be times when it is better to use "other existing search technologies." And RAG is a means, not an end: for example, it may be more effective in practice for users to search internal documents themselves with a long-established "full-text search system (enterprise search)" and read and judge the documents directly, rather than routing them through a generative AI that may hallucinate.

What you find when you search may not always provide the answer you need.

The search starts from the "question": based on it, the system finds related documents containing similar keywords or meanings. However, the documents containing the information needed to produce the "desired answer" do not necessarily resemble the question.

What's needed is a great answer, not related information that resembles the question. The external documentation needed to generate such an answer may resemble the answer rather than the question (or neither).

For example, if you ask, "Are there any countries other than Japan where this kind of project would be suitable?", what you need is not "materials about Japan" but a much wider range of information. Even when you want a creative answer, information similar to the question does not necessarily produce a good response. Expectations for RAG sometimes run too high; there are tasks it suits and tasks it does not.
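One mitigation studied in the literature (sometimes called "HyDE," Hypothetical Document Embeddings) is to search with a draft answer instead of the question. The sketch below is a toy illustration of that idea under stated assumptions: `generate_draft` is a stub standing in for an LLM call, and the keyword-overlap scoring stands in for vector search.

```python
# Toy sketch of searching by a hypothetical answer rather than the
# question (the idea behind "HyDE"-style retrieval). generate_draft()
# is a stub for an LLM call; score() is a toy stand-in for vector search.

def generate_draft(question: str) -> str:
    # Stub: an LLM would draft a plausible (possibly wrong) answer here.
    return "Countries with similar market conditions include Germany and Brazil."

DOCS = [
    "Market report: Germany consumer electronics outlook.",
    "Japan domestic sales figures for last year.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

question = "Are there countries other than Japan suited to this project?"
draft = generate_draft(question)

best_by_question = max(DOCS, key=lambda d: score(question, d))  # resembles the question
best_by_draft = max(DOCS, key=lambda d: score(draft, d))        # resembles an answer
print(best_by_question, "|", best_by_draft)
```

Here, searching by the question surfaces the Japan document, while searching by the draft surfaces the more useful Germany document.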

Another option: feed all information into the generative AI

Some people are of the opinion that instead of searching and narrowing down relevant information, it would be better to just input all external information sources into the generative AI, and that RAG would not be necessary.

The amount of data that generative AI can receive at one time is steadily increasing, and it can now easily read text data the size of a book. If you're struggling to improve search accuracy, it's becoming more realistic to have it read all of your company's documents without searching.

However, since generative AI is often charged by the amount of data read, reading everything is costly. There are also studies (research papers, etc.) suggesting that narrowing the input to only the necessary information can yield higher-quality answers than providing everything unfiltered.
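The cost difference can be shown with a back-of-envelope calculation. The "roughly four characters per token" rule of thumb and the per-token price below are assumptions for illustration; real tokenizers and pricing vary by service.

```python
# Crude illustration of the cost gap between "feed everything" and
# "feed only the search hits." The chars-per-token heuristic and the
# price are assumptions, not any vendor's actual figures.

def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic, not a real tokenizer

all_docs = "x" * 400_000   # stand-in for every internal document
retrieved = "x" * 4_000    # stand-in for only the retrieved passages

price_per_1k_tokens = 0.01  # hypothetical price in USD
cost_all = rough_tokens(all_docs) / 1000 * price_per_1k_tokens
cost_rag = rough_tokens(retrieved) / 1000 * price_per_1k_tokens
print(f"all documents: ${cost_all:.2f}  retrieved only: ${cost_rag:.4f}")
```

Per question the difference looks small, but it scales with every query from every user.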

Another advantage of RAG is control over what is loaded: for example, you can exclude information that a user's permissions do not allow them to access. In other words, compared with fine-tuning, RAG lets you control and operate on the data used each time a question is answered.
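The permission control mentioned above can be sketched as a filter applied before anything reaches the generative AI. The ACL structure and group names here are assumptions for illustration.

```python
# Sketch of permission-aware retrieval: documents the user may not
# access are excluded BEFORE anything is sent to the generative AI.
# The ACL fields and group names are invented for illustration.

DOCS = [
    {"title": "Public travel policy", "acl": {"everyone"}},
    {"title": "Executive compensation", "acl": {"hr", "executives"}},
]

def retrieve_for(user_groups: set[str]) -> list[str]:
    """Return only documents whose ACL overlaps the user's groups."""
    return [d["title"] for d in DOCS if d["acl"] & user_groups]

print(retrieve_for({"everyone", "engineering"}))
```

With fine-tuning, by contrast, everything baked into the model is potentially available to every user.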

Will it help prevent hallucination?

RAG is sometimes described as a technology to combat hallucination. While it may be a mitigation measure in some cases, it is not a technology that can be expected to fundamentally eliminate the problem.

RAG simply feeds in data from an external source; it does not change the behavior of the generative AI itself. If the external source contains the answer, the AI may quote it and answer accurately, but nothing intervenes in the underlying behavior that causes hallucination, so the AI may still actively invent things that do not exist.

Will it prevent data leaks?

It is said that RAG can prevent information leakage more effectively than fine tuning, etc. However, if you use external generative AI as a cloud service, data will still be transmitted externally.

It is true that RAG does not use your data to "train" the generative AI, so nothing leaves the company through training. However, the data is sent as part of the prompt whenever the system is used, so it is transmitted externally in a different form.

If RAG is considered acceptable because the cloud service states it will not use input data for other purposes such as training, then fine-tuning should be equally acceptable when the service states that the training data will not be used for other purposes.

That said, unlike fine-tuning, RAG does not send all of the company's data externally at once; it sends only the portion found by each search, so the amount of externally transmitted data can be localized and reduced.

What is actually necessary to "make RAG work"

There is one more thing to consider when using RAG: how to prepare the data to be searched. Since answers are generated by searching data, it will not work unless the necessary data is prepared and can be searched as needed.

To make it "searchable," it is necessary to organize internal data

For example, even if a company hears that RAG makes ChatGPT effective and procures a RAG-capable system, without internal data management it will not get the full benefit, because external information sources cannot be searched effectively.

You need to make your company's data "searchable": check where what data exists and improve the data-utilization environment. You also need to be able to select the necessary data from the search results, preprocess it appropriately, and pass it to the generative AI, which may require improving data quality and formats.

In some cases the data you want to use is not digitized and remains on paper, so you must start from digitization. And if old and new data coexist, both will be retrieved and fed into the generative AI together, which will likely prevent a good response.
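The old-and-new-data problem can be handled with a deduplication step before building the prompt, as in this sketch. The field names (`doc`, `revised`) are assumptions for illustration.

```python
# Sketch: when old and new revisions of the same document both match
# a search, keep only the newest before building the prompt.
# The record fields are invented for illustration.
from datetime import date

hits = [
    {"doc": "travel_rules", "revised": date(2019, 4, 1), "text": "old rule"},
    {"doc": "travel_rules", "revised": date(2023, 4, 1), "text": "new rule"},
]

latest: dict[str, dict] = {}
for h in hits:
    current = latest.get(h["doc"])
    if current is None or h["revised"] > current["revised"]:
        latest[h["doc"]] = h

context = [h["text"] for h in latest.values()]
print(context)  # only the newest revision survives
```

This only works, of course, if revision dates are recorded consistently, which is itself part of "improving the internal data environment."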

In other words, in order to make the most of RAG, it is necessary to first "improve the internal data usage environment."

We must continue to develop an environment where data can be used

Even if you outsource the setup of your internal data usage environment to an external vendor, there are still problems, as the data usage environment needs to be continuously maintained.

For example, if there is an internal organizational change and a new department is created, or if a new business venture is undertaken, new data (external information sources) may be generated, and new needs may arise for data utilization (questions to ask using generative AI).

In other words, to keep RAG functioning, it is necessary to continue to improve the internal data usage environment so that new data can be searched for new needs in line with changes in the organizational and business situation. Otherwise, for example, you may not be able to answer questions about data related to a newly established business division.

"Connecting" technology that helps develop the data environment necessary for RAG

To keep RAG functioning in this way, you must maintain an internal data environment in which the necessary information can be searched appropriately, and in which search results can be suitably preprocessed for use by the generative AI. Trial and error may be needed to refine search and preprocessing methods so that RAG produces good results for business needs.

If it takes time and effort to outsource the work to an external vendor every time data preparation work is performed or trial and error in utilization is required, it may be difficult to make effective use of RAG.

In other words, in order to make effective use of RAG, it is necessary to prepare a means to smoothly and efficiently establish an internal data utilization environment.

Please utilize "connecting" technology

There is a way to efficiently build these capabilities: connecting to data across a wide variety of systems and clouds, reading, processing, and transferring data as needed, and improving the data environment using only a GUI. These are "connecting" technologies such as "DataSpider" and "HULFT Square," also known as "EAI," "ETL," and "iPaaS."

Can be used with GUI only

Unlike regular programming, there is no need to write code. By placing and configuring icons on the GUI, you can achieve integration with a wide variety of systems, data, and cloud services.

Being able to develop using a GUI is also an advantage

No-code development using only a GUI may seem like a compromise compared with full-scale programming. However, it allows on-site personnel to work proactively on cloud integration themselves, and on-site personnel are the ones who know the business best.

Full-scale processing can be implemented

There are many products that claim to allow development using only a GUI, but some people may have a negative impression of such products as being too simple.

It is true that problems like "easy to build, but it can only do simple things," "it crashed when we tried to run full-scale processing," or "it lacked the reliability and stable throughput to support business operations" do tend to occur.

"DataSpider" and "HULFT Square" are easy to use, but also allow you to create processes at the same level as full-scale programming. They have the same high processing power as full-scale programming, as they are internally converted to Java and executed, and have a long history of supporting corporate IT. They combine the benefits of "GUI only" with full-scale capabilities.

No need to operate in-house as it is iPaaS

DataSpider can be operated securely on a system under your own management. With HULFT Square, a cloud service (iPaaS), this "connecting" technology itself can be used as a cloud service without the need for in-house operation, eliminating the hassle of in-house implementation and system operation.

Related keywords (for further understanding)


Keywords related to data integration and system integration

  • EAI
    • A concept of "connecting" systems through data integration: a means of freely connecting various data and systems, in use since long before the cloud era as a way to make effective use of IT.
  • ETL
    • In the recent trend of actively working on data utilization, the majority of the work is not the data analysis itself, but rather the collection and preprocessing of data scattered around, from on-premise to cloud. This is a means to carry out such processing efficiently.
  • MFT(Managed File Transfer)
    • An integration platform for file-based processing with the high levels of "safety, security, and reliability" that corporate activities require. Beyond simply transferring files, it guarantees that transfers complete, secures them, and keeps proper transfer logs so that file transfers can be checked and managed.
  • iPaaS
    • A cloud service that "connects" various clouds with external systems and data simply by operating on a GUI is called iPaaS.

Are you interested in "iPaaS" and "connecting" technologies?

Try out our products that allow you to freely connect various data and systems, from on-premise IT systems to cloud services, and make successful use of IT.

The ultimate "connecting" tool: data integration software "DataSpider" and data integration platform "HULFT Square"

"DataSpider," data integration tool developed and sold by our company, is a "connecting" tool with a long history of success. "HULFT Square," a data integration platform, is a "connecting" cloud service developed using DataSpider technology.

Another feature is that development can be done using only the GUI (no code) without writing code like in regular programming, so business staff who have a good understanding of their company's business can take the initiative to use it.

Try out the "connecting" technology of DataSpider / HULFT Square:

There are many simple collaboration tools on the market, but this tool can be used with just a GUI, is easy enough for even non-programmers to use, and has "high development productivity" and "full-fledged performance that can serve as the foundation for business (professional use)."

It can smoothly solve the problem of "connecting disparate systems and data" that hinders successful IT utilization. We offer a free trial version and regularly hold hands-on sessions where you can try it out for free, so we hope you will give it a try.


Why not try a PoC to see if "HULFT Square" can transform your business?

Why not try verifying how "connecting" can be utilized in your business, the feasibility of solving problems using data integration, and the benefits that can be obtained?

  • I want to automate data integration with SaaS, but I want to confirm the feasibility of doing so.
  • We want to move forward with data utilization, but we have issues with system integration
  • I want to consider data integration platform to achieve DX.

