Container
This glossary explains various keywords that will help you understand the mindset necessary for data utilization and successful DX.
This time, let's take a look at "containers," a technology that is becoming widely used in modern software development, especially in software development on the cloud.
What is a container?
Containers are a technology that creates execution environments for multiple independent application systems within a single OS instance. Containers perform OS-level virtualization, in contrast to the hardware-level virtualization that virtual machines have traditionally provided.
Compared with virtual machines, containers incur less virtualization overhead, so they are expected to be more resource-efficient and to start up faster. For this reason, they are increasingly used as a way to implement microservices and as a foundational technology for cloud-native software architectures.
The history of application environments before the advent of containers
"Containers" are often discussed as a new technology for the cloud era, but it can be hard to grasp what they actually are. Even once you understand how they work, it may be unclear why they attract so much attention. Meanwhile, there is also a fair amount of hype declaring containers the future without much grasp of the actual situation.
First, I will explain the technical background leading up to the emergence of containers, and then explain why they are a hot topic now.
First: A usage model in which software runs directly on the hardware
When you do something with IT, you need "software" and an "environment to run the software." Just as a game console without games is useless, software is useless without an environment to run it, and an environment is useless without software to run.
I gave the example of a game console, but traditionally, software and the hardware that runs it were thought of as a set, like game software and a game console. For example, if you were introducing a business system into your company, you would purchase or develop the packaged software, and then purchase, install, and use a server machine, which is the hardware that runs it.
In other words, each software system occupies its own physical execution environment.
But there was a problem
However, this method requires introducing new hardware each time a new system is introduced, and if the hardware's processing power exceeds or falls short of what the software system requires, waste and inconvenience easily result.
For example, suppose the accounting department introduces an IT system and a server machine is allocated to it. Next, a production management system is introduced and another server machine is allocated. Every time IT is introduced, the number of server machines grows, which is a burden in itself. Raising the specifications to be safe costs money, while cutting costs risks insufficient specifications that hinder business operations.
Suppose that after implementation, the accounting system turned out to have excess hardware capacity, so server performance was wasted, while the production management system was used more heavily than expected and, as a result, ran slowly and frequently caused trouble due to insufficient capacity.
You might think it would be easier to put the accounting system and the production management system on one server, but this was not so simple. Each system is not designed (or guaranteed) to run properly while sharing the same environment with other applications.
"Virtual Machine" using hardware virtualization technology
This kind of waste and inconvenience was common in corporate IT systems, and this is where hardware virtualization technology came into use. The idea is to run software on a "virtual machine" created by hardware virtualization technology, rather than directly on a physical server. In the example above:
- Do not run software directly on physical servers
- Use hardware virtualization technology to run on a virtual machine
- Accounting system: Installed on a dedicated virtual machine
- Production management system: likewise installed on its own dedicated virtual machine
A virtual machine is a software re-creation of a PC's hardware environment (CPU, memory, hard disk, and so on). From each application's perspective, it looks almost identical to running on a physical machine, but because the virtualized hardware is just data, there is nothing physical to manage, which makes administration easier. For example, it is easy to back up the entire machine.
For example, you can run a virtual machine for an accounting system and a virtual machine for a production management system on one physical machine. Since they are separate virtual machines, their execution environments are independent, so there is no need to worry about them interfering with each other even if they are running on the same physical machine.
If the production management system is short on capacity while the accounting system has a surplus, you can reduce the physical resources allocated to the accounting system's virtual machine and increase the allocation to the production management system's. Specs can thus be transferred between virtual machines, so the physical machine's capacity is used without waste.
- Use virtual machines to separate the direct dependency between the physical execution environment and the application system execution environment.
- By running multiple virtual machines on a single physical machine, multiple execution environments can coexist.
- Since the allocation of hardware resources can be adjusted for each virtual machine, it is possible to transfer resources from areas with excess specs to areas with insufficient specs. This prevents waste due to excess specs and problems with operation due to insufficient specs.
- Backups and migration of operating environments (such as migration to a high-spec operating environment) can also be easily performed.
The key point is that virtualization technology "separates the physical execution environment from the logical execution environment from the application's perspective."
The emergence of "containers" that eliminate the drawbacks of virtual machines
Ordinary PC users rarely encounter this kind of "virtualization" with virtual machines, but the technology has many advantages beyond those explained so far, and with cloud services, direct use of physical resources has become rare.
However, virtual machines have a problem: high overhead (waste). Whereas software running directly on a physical machine uses the hardware without loss, a virtual machine incurs:
- Resources spent emulating and running virtual hardware on the physical machine
- Additional resources consumed by the operating system (Linux, Windows, etc.) that must also run inside each virtual machine
This overhead is in fact quite large, and when using virtual machines, the physical machine's original capacity often cannot be fully utilized.
Virtual machines are a useful technology, but thinking about it objectively, we do not really want separate hardware environments (we do not want to separate things at the hardware level); we just want to separate the application execution environments. So why not separate only the environments? That is how "containers" came about.
- They are not virtual machines and share the same physical hardware (no overhead)
- The OS is also the same Linux (no overhead)
- However, from inside each application's environment, it appears as though that application has the Linux system all to itself
For example, consider the file system. If applications A and B are installed on the same ordinary Linux system, A can see B's files, and B can see A's files and data. B can also see the settings A makes to Linux and the processes A is running. Running A and B together is therefore risky, because their operations may interfere with each other and neither may work properly.
What if, instead:
- A cannot see B's files; from A's perspective, only the OS's own files and A's own files appear to exist.
- From A, B's processes are invisible, and changes made by B are blocked from affecting anything outside B. From A's perspective, the environment can be used as if nothing but A exists in it.
Although this merely limits what each application can see and what changes it can make, it still achieves isolation of application execution environments without a heavyweight mechanism like a virtual machine. This is what "containers" accomplish, using functions available in Linux (and UNIX).
Their defining features: they separate only the execution environment, they conveniently isolate the environment per application, and they do so with very low overhead.
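The isolation described above can be observed directly. The following is a minimal sketch, assuming Docker is installed; the container names `app-a` and `app-b` are hypothetical:

```shell
# Start two independent containers from the same Linux image.
docker run -d --name app-a alpine sleep 300
docker run -d --name app-b alpine sleep 300

# Inside app-a, only app-a's own processes are visible; app-b's are not.
docker exec app-a ps aux

# app-a likewise sees only its own filesystem, not app-b's files.
docker exec app-a ls /

# Clean up.
docker rm -f app-a app-b
```

Both containers share the host's kernel, yet neither can see the other's processes or files: the isolation is in visibility, not in hardware.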
Why are containers such a hot topic?
By now you should have a basic technical understanding of what a container is; perhaps you had already vaguely heard of the "environment only" idea. But understanding how it works still leaves the question: why is it such a hot topic?
It has become the foundation for realizing advanced development styles such as "microservices."
Virtual machines were a convenient technology, but their high overhead meant they had to be used sparingly. Containers, by contrast, are a low-overhead technology. While generating large numbers of virtual machines was impractical, building a system out of large numbers of containers became technically feasible.
As a result, a new development style was born in which systems are built with architectures that use many containers. One example is "microservices." This differs from the conventional style: rather than merely isolating whole application environments so they do not interfere, the system itself is literally composed of many small containers.
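As a hedged illustration of this style, a system made of several small containers might be described in a docker-compose.yml like the following (the service and image names are hypothetical, not a real product's configuration):

```yaml
# Hypothetical sketch: one system composed of several small containers.
services:
  accounting-api:
    image: example/accounting-api:1.0   # hypothetical microservice image
    ports:
      - "8081:8080"
  production-api:
    image: example/production-api:1.0   # hypothetical microservice image
    ports:
      - "8082:8080"
  gateway:
    image: nginx:alpine                 # routes incoming requests to the services
    ports:
      - "80:80"
    depends_on:
      - accounting-api
      - production-api
```

Each service runs in its own isolated container, yet all of them start together with low overhead on one machine, which is what makes this many-small-pieces architecture practical.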
The adoption of this kind of architecture by cutting-edge cloud players such as AWS and Netflix drew attention, and the conversation shifted to containers and microservices as leading-edge cloud initiatives. In other words, containers are talked about as a fundamental technology underpinning modern, cutting-edge system architecture.
New "Common Software Execution Environment"
Container technology itself has become standardized, and just as there are applications that are compatible with running on Windows 10, applications that are compatible with running in a container environment are now being developed.
For example, if the number of Windows 10 users is increasing worldwide and many applications for Windows 10 are being released, management will likely decide that it is necessary to support Windows 10. Similarly, the number of users of container environments is increasing, and many software assets are being created for container environments.
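Software assets "created for container environments" are typically packaged with a build recipe. As a minimal, hypothetical sketch (the application and file names are illustrative only), a Dockerfile might look like this:

```dockerfile
# Hypothetical sketch: packaging a small Python web app for container environments.
FROM python:3.12-slim            # start from a standard base image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]         # app.py is a hypothetical entry point
```

Because the recipe bundles the app with its dependencies, the resulting image runs the same way in any standardized container environment, which is what makes such assets portable across the ecosystem.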
Additionally, engineers are acquiring skills based on the assumption that a container environment exists, so the number of people with skill sets that assume the use of containers will increase. Similarly, cloud services are beginning to develop services that assume the widespread use of containers. The existence of a container ecosystem is another reason to pay attention to containers.
But containers aren't the ultimate technology
At the time of writing, container technology is a hot topic, but it is not the ultimate technology that should be used in all situations, and it is possible that other technologies will become mainstream in the future. Below, we will briefly introduce some possibilities other than container technology.
You can do the same thing with a virtual machine
Separating the execution environment from the physical environment can also be done with virtual machines, and many of the things that are being done with containers can also be achieved with virtual machines.
In addition, because virtual machines virtualize and isolate at the hardware level, their environments are strongly separated, which provides strong isolation in terms of security, for example. Since containers share one OS, a malicious user's container coexisting in the same environment creates a risk of attack; virtual machine isolation is safer in this respect.
In addition, development of virtual machines with even lower overhead is underway, and virtual machines may become as easy to use as containers. Similarly, container technology with the same security considerations as virtual machines is also under development.
Other technologies such as WebAssembly
In the future, technologies other than containers may become mainstream, and there may be cases where other technologies are superior depending on the application.
Containers virtualize at the level of the Linux environment, but isolation does not have to happen at that level. If you just need Java, isolation at the level of the Java execution environment may be sufficient; if you are only writing a web application in PHP, isolation at the PHP level may suffice and may even be more reasonable.
In addition, alongside today's demand for containers, a technology called "WebAssembly," which has its roots in the JavaScript execution engines of web browsers, is attracting attention as a future way to "decouple environments." Just as containers are now discussed as a technology of the cloud era, WebAssembly may in the near future be discussed as the technology that comes next.
Related keywords (for further understanding)
- Container Orchestration
- An automation technology that manages, operates, and controls a large number of containers in system development and operation using container technology.
- Microservices
- This is a new system development style that has become popular thanks to container technology. It is a concept in which system components are actively created as microservices, which are small, independent units, and the entire system is composed of multiple microservices.
- Cloud Native
- The development of cloud services or systems on cloud services is based on very different assumptions than traditional system development, which often involved developing software that runs on individual machines. This means that approaches suited to the cloud era are now required, using different ways of thinking, different methods, and different means.
HULFT 10: File Integration Middleware for the Cloud and Container Era
From the days of mainframes to the present day cloud computing era, HULFT has achieved seamless data integration across a variety of IT system development styles and technical environments.
HULFT 10, the de facto standard file sharing middleware in Japan
Please try out HULFT, the pinnacle of domestic MFT products with an overwhelming track record in Japan and the de facto standard for file integration platforms.
HULFT emerged during the technological transition from the mainframe era to the UNIX era as a highly practical technology that could link old and new environments. With its extremely high reliability, it was adopted in financial institutions' systems. It has since evolved into a means of data integration that absorbs the differences between all environments, supporting Windows and Linux, and now also supporting seamless integration with cloud services.
File integration may seem like an old technology, but it is now being considered for use with cloud services, and the latest version, "HULFT 10," has been developed to enable smooth use in container environments.
While mainframes remain in use today, there are also fields with different technologies, mindsets, and priorities: Windows, Linux, integration with cloud object storage such as Amazon S3, and the world of container-based development. HULFT serves as a means for engineers in each of these fields to integrate data with one another effortlessly.
⇒ Learn about file transfer mechanisms through HULFT product introduction and online seminars
⇒ HULFT-WebConnect Product Introduction and Online Seminar
⇒ HULFT 10 for Container Services
