OpenAI launches MRC (Multipath Reliable Connection), an open-source network protocol released through the Open Compute Project (OCP) that improves the resilience and performance of supercomputers dedicated to large-scale AI model training.
OpenAI innovates with MRC: a network protocol for AI supercomputers
OpenAI has just released a new network protocol called Multipath Reliable Connection (MRC), specifically designed for large-scale computing infrastructures used in training artificial intelligence models. This technology, made public through the Open Compute Project (OCP) consortium, aims to address the major challenges of reliability and performance encountered in massive supercomputer clusters.
The MRC protocol establishes a new standard in managing communications between intensive computing nodes by leveraging multiple network paths simultaneously to ensure continuity and speed of exchanges. This innovative approach is particularly suited to the growing demands of contemporary AI models, which require high bandwidth and increased fault tolerance.
What MRC delivers in practice: improved robustness and throughput
In practice, MRC spreads data flows across multiple parallel network connections, avoiding bottlenecks and interruptions. This multipath communication provides dynamic redundancy: if one path encounters a problem, the others take over without interrupting model training.
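The failover behavior described above can be sketched in a few lines. This is a hypothetical model for illustration only — the article does not disclose MRC's actual wire protocol or API, so the `Path` and `MultipathSender` names and the round-robin policy are assumptions:

```python
# Minimal sketch of multipath failover: packets are spread across
# healthy paths; if one path fails, traffic continues on the others.
# All names here are illustrative, not the real MRC interface.

class Path:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.sent = []          # packets delivered over this path

    def send(self, packet):
        if not self.healthy:
            raise ConnectionError(f"path {self.name} is down")
        self.sent.append(packet)

class MultipathSender:
    def __init__(self, paths):
        self.paths = paths
        self._next = 0

    def send(self, packet):
        # Try paths round-robin, skipping failed ones; training
        # traffic is only interrupted if *every* path is down.
        for _ in range(len(self.paths)):
            path = self.paths[self._next % len(self.paths)]
            self._next += 1
            if path.healthy:
                path.send(packet)
                return path.name
        raise ConnectionError("all paths down")

paths = [Path("nic0"), Path("nic1")]
sender = MultipathSender(paths)
sender.send(b"grad-chunk-0")     # goes over nic0
paths[0].healthy = False         # simulate a link failure
sender.send(b"grad-chunk-1")     # transparently rerouted to nic1
```

The key property is that the sender, not the application, absorbs the failure: the training job never sees the broken link.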
This ability to maintain a reliable network even in the event of partial failure is a significant improvement over traditional protocols, often limited to a single communication path vulnerable to hardware failures or congestion. Furthermore, MRC optimizes latency and overall throughput, which is crucial for deep learning algorithms requiring near-instantaneous data exchange.
Compared to the classic network solutions used in AI data centers, MRC stands out for its combination of performance and reliability, making it possible to train ever larger and more complex models.
Under the hood: architecture and technical mechanisms of MRC
The MRC protocol relies on a multipath architecture that exploits several physical and virtual channels in parallel. This structure is complemented by an intelligent flow management system capable of adapting packet distribution in real time according to congestion and link quality.
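The adaptive flow management described above can be illustrated with a toy scheduler. The scoring function below (queue depth divided by link quality) is an assumption chosen for clarity; the article does not specify how MRC actually weighs congestion against link quality:

```python
# Sketch of congestion-aware flow scheduling: each packet goes to the
# path whose (queue depth / link quality) score is currently lowest.
# The scoring heuristic is illustrative, not MRC's real algorithm.

def pick_path(paths):
    """paths: list of dicts with 'queue' (pending bytes) and
    'quality' (a 0..1 link-quality estimate)."""
    # Lower score = less loaded relative to the link's quality.
    return min(paths, key=lambda p: p["queue"] / max(p["quality"], 1e-6))

def schedule(packets, paths):
    for size in packets:
        best = pick_path(paths)
        best["queue"] += size      # account for the in-flight bytes

paths = [
    {"name": "A", "queue": 0, "quality": 1.0},
    {"name": "B", "queue": 0, "quality": 0.5},
]
schedule([100, 100, 100], paths)
# The higher-quality path A ends up carrying more of the load.
```

In a real implementation the queue and quality figures would come from live telemetry (for example, ECN marks or RTT measurements), so the distribution adapts continuously as conditions change.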
Technically, MRC integrates advanced error-correction and data-reassembly mechanisms, preserving the integrity of exchanged information even amid network fluctuations. The protocol is thus built for high availability, which is essential for training runs that can last several weeks across thousands of interconnected GPUs.
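The reassembly side of this can be sketched as follows. The header layout (sequence number plus checksum) is hypothetical — the article does not describe MRC's wire format — but it shows why reordering logic is unavoidable: when packets race over multiple paths, they routinely arrive out of order:

```python
# Sketch of in-order reassembly with integrity checks: packets carry a
# sequence number and a CRC; out-of-order arrivals (normal when several
# paths race) are buffered until they can be delivered in order.
# The packet structure is an assumption for illustration.
import zlib

def make_packet(seq, payload):
    return {"seq": seq, "crc": zlib.crc32(payload), "data": payload}

class Reassembler:
    def __init__(self):
        self.expected = 0
        self.buffer = {}        # seq -> payload, held until in order
        self.output = []

    def receive(self, pkt):
        if zlib.crc32(pkt["data"]) != pkt["crc"]:
            return              # corrupted: drop; sender retransmits
        self.buffer[pkt["seq"]] = pkt["data"]
        # Flush every contiguous packet starting at the expected seq.
        while self.expected in self.buffer:
            self.output.append(self.buffer.pop(self.expected))
            self.expected += 1

r = Reassembler()
# Packets 1 and 2 arrive before 0 (they took a faster path).
for pkt in [make_packet(1, b"B"), make_packet(2, b"C"), make_packet(0, b"A")]:
    r.receive(pkt)
# r.output now holds [b"A", b"B", b"C"], restored to original order.
```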
Moreover, MRC is designed to be compatible with current supercomputer standards, facilitating its integration into existing infrastructures without requiring major redesign.
Accessibility and deployment: who can use MRC?
OpenAI has chosen to publish MRC under an open license via the Open Compute Project, making the protocol accessible to all sector players, whether industrial, academic, or AI-specialized startups. This openness encourages rapid adoption and collaboration around continuous improvement of the technology.
Infrastructures wishing to deploy MRC will need multipath-capable network equipment and a suitable cluster architecture, in line with the standards of modern computing centers. French users, notably in AI hubs such as Paris-Saclay and Grenoble, can therefore consider adopting MRC to strengthen their training capacity.
Implications for the AI sector: a strategic advance
The launch of MRC comes at a time when the race for computing power is crucial to maintain a competitive advantage in artificial intelligence. By improving the robustness and performance of supercomputer networks, MRC paves the way for larger and more efficient models while reducing risks related to interruptions.
This innovation strengthens OpenAI's position not only as a leader in AI software but also as a key player in hardware infrastructure optimization. For French and European actors, access to this technology via OCP represents an opportunity to modernize their training platforms and stay at the forefront of research and development.
Our perspective: a promising protocol, but one to watch
MRC promises to be a major breakthrough in the field of networks for AI supercomputers, particularly thanks to its ability to combine performance and resilience. Nevertheless, its success will depend on industrial adoption and compatibility with the specific architectures of different computing centers.
It will also be necessary to observe benchmarks and feedback during the first large-scale deployments to confirm the actual gains in throughput and reliability. For now, this new protocol opens a promising path to support the scaling up of AI models in the coming years.
Historical context and evolution of networks for supercomputers
Since the first generations of supercomputers dedicated to artificial intelligence, network infrastructures have constituted a major bottleneck. Historically, classic protocols proved insufficient to handle massive data volumes and intense communication between thousands of GPU nodes. Early AI clusters mainly used star or tree architectures, where a single link could cause the entire system to fail in case of malfunction.
Faced with these constraints, several innovations were introduced, such as InfiniBand networks, which offered improved throughput and latency but remained limited in redundancy and dynamic flow management. MRC continues this evolution with a more flexible and resilient multipath approach, suited to the current and future challenges of massive AI model training.
Operational stakes and impact on AI cluster management
Operationally, integrating MRC into supercomputers profoundly changes how network engineering teams manage infrastructures. The ability to dynamically distribute flows over multiple paths reduces the need for manual interventions in case of congestion or failure, improving service availability.
This automation and increased resilience allow data scientists and researchers to focus more on model optimization rather than system maintenance. Moreover, reducing interruptions results in greater efficiency of training campaigns, often costly in time and energy.
Future prospects and development of the MRC ecosystem
In the medium term, deploying MRC could foster the emergence of new communication standards in the AI supercomputer domain, with possible extensions toward distributed computing applications beyond AI. OpenAI and the OCP community are already considering protocol evolutions to integrate additional features, such as better network security management and energy optimization.
Furthermore, growing adoption of MRC could stimulate collaboration between public and private actors, notably in Europe, where strengthening AI infrastructures is a strategic priority. By facilitating compatibility between different hardware and software via an open standard, MRC helps create a more dynamic and innovative ecosystem.
In summary
The Multipath Reliable Connection (MRC) protocol proposed by OpenAI represents a significant advance in networks for supercomputers dedicated to artificial intelligence. By combining multipath communication, robustness, and flow optimization, it meets the growing needs for performance and reliability of modern AI infrastructures. Published under an open license via the Open Compute Project, MRC offers a valuable opportunity for industrial and academic players to modernize their clusters and support the development of increasingly ambitious models. However, its large-scale adoption and future feedback will be decisive to confirm its real impact in the global AI ecosystem.
Source: OpenAI Blog, May 5, 2026