NVIDIA has shared details about "AI Factories," an industrialized approach to creating products and services driven by artificial intelligence, leveraging advanced computing resources and generative AI. By automating everything from data collection and processing to the deployment of AI models, these factories enable rapid scalability to meet growing demands, playing a key role in continuous innovation and in creating highly personalized solutions at scale. Read on to understand!
What are AI Factories?
These factories automate processes from data collection and processing to the creation and deployment of AI models, enabling rapid scalability to meet growing demands. Relying on vast computing resources, often provided by cloud data centers, AI Factories can perform complex AI tasks at scale, training sophisticated models that require intensive processing and the storage of large volumes of data.
Generative AI, which creates new content such as text, images, videos and music from input data (a prompt), is a centerpiece of AI Factories. It enables products such as virtual assistants, recommendation systems and personalized content. The product lifecycle in an AI Factory includes development, testing, validation, deployment, maintenance and continuous updating, ensuring high-quality, high-performance products. Big tech companies like Google and Amazon, and many startups, especially in niches like fintech and healthcare, operate their own AI Factories.
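The automated lifecycle described above — data collection, processing, training, deployment — can be pictured as a staged pipeline. The sketch below is a minimal, hypothetical illustration of that idea in Python; the stage names and functions are our own assumptions for the example, not an actual NVIDIA API:

```python
# Minimal sketch of an "AI Factory" style pipeline: each stage is a plain
# function, and the factory runs them in order, feeding each stage's output
# into the next. All names here are illustrative, not a real NVIDIA API.

def collect_data(_):
    # Stand-in for automated data collection (logs, prompts, sensors...).
    return ["Raw Sample 1", "Raw Sample 2"]

def process_data(samples):
    # Stand-in for cleaning/normalizing the collected data.
    return [s.strip().lower() for s in samples]

def train_model(dataset):
    # Stand-in for model training; here it just records the dataset size.
    return {"model": "demo", "trained_on": len(dataset)}

def deploy_model(model):
    # Stand-in for deployment: mark the trained model as live.
    return {**model, "status": "deployed"}

def run_factory(stages):
    """Run each pipeline stage, passing its result to the next stage."""
    result = None
    for stage in stages:
        result = stage(result)
    return result

if __name__ == "__main__":
    pipeline = [collect_data, process_data, train_model, deploy_model]
    print(run_factory(pipeline))
```

In a real AI Factory the same chained-stage structure would be backed by orchestration tooling and GPU clusters, with monitoring feeding results back into the "continuous updating" phase of the lifecycle.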
The main benefits of AI Factories include continuous innovation, reduced operational costs and the ability to create highly personalized solutions at scale. However, these factories face significant challenges, such as ensuring data quality, complying with privacy and ethics requirements, and managing complex technical infrastructures.
The next industrial revolution has already begun. Companies and countries are partnering with NVIDIA to shift traditional trillion-dollar data centers to accelerated computing and build a new type of data center – AI factories – to produce a new commodity: artificial intelligence. From server, networking and infrastructure manufacturers to software developers, the entire industry is preparing for Blackwell to accelerate AI-driven innovation across all fields.
— said Jensen Huang, founder and CEO of NVIDIA, during his talk at COMPUTEX.
NVIDIA MGX modular architecture
The NVIDIA MGX (Modular GPU Expansion) modular architecture is a hardware platform developed to support the intensive computing workloads required in artificial intelligence and machine learning environments — making it well suited to the operation of AI Factories. Based on interchangeable modules, the MGX architecture allows systems to be easily configured and reconfigured, so individual components can be updated without replacing the entire system.
Although NVIDIA MGX is not exactly new, it remains the solution the brand offers to companies with this kind of AI infrastructure need. Its modularity promotes economy and sustainability, allowing companies to adapt their IT infrastructure as demands change.
The flexibility of the MGX architecture allows companies to customize their configurations according to specific needs, whether related to graphics processing, storage, networking or other components. This customization makes MGX suitable for a variety of applications, including AI computing, data analysis, scientific simulations, and graphics rendering. By combining different modules, companies can create tailored solutions for diverse workload scenarios.
Another key feature of the NVIDIA MGX modular architecture is its scalability. Systems can grow gradually by adding new modules as demand for resources increases, maintaining a high level of performance even after expansions. This scalability allows companies to serve intensive and variable workloads efficiently, ensuring their infrastructures can evolve without significant interruptions.
Furthermore, the MGX architecture is designed to be energy efficient, helping to reduce energy consumption and operating costs. Energy efficiency not only promotes savings, but also contributes to more sustainable and ecological practices. Compatibility and deep integration with other NVIDIA technologies and solutions, such as advanced GPUs and AI software, ensure optimized performance and a superior user experience.
Manufacturers start with a basic system structure for their server chassis and then customize the selection of GPU, DPU and CPU to meet the specific needs of different workloads. To date, more than 90 systems from more than 25 partners have been launched or are in development using the MGX reference architecture — a significant increase from the previous year, when there were just 14 systems from six partners.
Using the MGX architecture can cut development costs by up to three-quarters and reduce development time by two-thirds, shortening the production cycle to just six months.
AMD and Intel are collaborating on the MGX architecture, introducing their own CPU host processor module designs for the first time. These include the next-generation AMD Turin platform and the Intel Xeon 6 processor with P-cores. The reference designs can be used by any server system manufacturer, saving development time and ensuring consistency in design and performance.
NVIDIA's latest platform, the GB200 NVL2, combines the MGX and Blackwell architectures. Featuring a scalable, single-node design, the GB200 NVL2 offers a variety of system configurations and networking options, enabling accelerated computing to be integrated into existing data center infrastructure. The GB200 NVL2 joins the Blackwell product lineup, which includes NVIDIA Blackwell Tensor Core GPUs, GB200 Grace Blackwell superchips, and the GB200 NVL72. This line provides robust solutions for accelerated computing demands across a variety of data center scenarios.
AI Factories Applications

The MGX architecture finds use across many sectors. In data centers, it facilitates the construction of infrastructures capable of handling large volumes of data and variable workloads. In the creative industries, it is used by film, design and animation studios for high-quality graphics rendering. In research and development, it supports complex scientific simulations and big data analysis, accelerating discoveries and innovations. And in industrial automation, it enables advanced control and automation systems in smart factories.
Jensen Huang revealed that leading companies in Taiwan are quickly adopting Blackwell technology to integrate artificial intelligence into their operations. Chang Gung Memorial Hospital, a prominent medical center in Taiwan, plans to incorporate the Blackwell-architecture computing platform into its biomedical research. This initiative aims to accelerate image and language processing, optimizing clinical procedures and, ultimately, raising the standard of patient care.
Foxconn, one of the global electronics giants, is meanwhile directing its efforts toward applying NVIDIA Grace Blackwell technology. Its projects include intelligent solutions for AI-driven electric vehicles and robotic platforms. In addition, it is expanding its offering of language-based services, aiming to provide more personalized experiences to customers.
And you, what did you think of the news? Tell us in the comments!
See also:
NVIDIA CUDA-Q brings quantum computing to current supercomputers.
With information from: Dell.
Reviewed by Glaucon Vital on 2/6/24.