Today we announce a multi-year agreement with AMD to power our AI infrastructure with up to 6GW of AMD Instinct GPUs, the silicon used to support the latest AI models.
At Meta, we are working to build the next generation of AI and give everyone personal superintelligence. Achieving this requires large, scalable computing power that can meet the growing demands of AI workloads. Our partnership with AMD, which builds on existing collaborations, will help meet these needs.
Collaborating with industry leaders
Under the new agreement, we will also work with AMD to align roadmaps across silicon, systems, and software, enabling vertical integration across the infrastructure stack. This collaboration across both hardware and software enables innovation at speed and at scale.
“We are proud to expand our strategic partnership with Meta as they push the boundaries of AI at an unprecedented scale,” said Dr. Lisa Su, Chairman and CEO of AMD. “This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs, and rack-scale AI systems aligns our roadmap to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerates the industry’s largest AI deployments, and positions AMD at the center of global AI construction.”
The first GPU deployments will begin shipping in late 2026 and will be built on the Helios rack-scale architecture, which we developed and unveiled with AMD at last year’s Open Compute Project Global Summit.
“We are excited to build a long-term partnership with AMD to bring efficient inference compute and deliver personal superintelligence,” said Mark Zuckerberg, Founder and CEO of Meta. “This is an important step in diversifying Meta’s compute, and I look forward to AMD being a valued partner for many years to come.”
A portfolio-based approach
The agreement with AMD is part of our Meta Computing initiative, an effort to massively scale our infrastructure for the era of personal superintelligence and future-proof our leadership in AI. By diversifying our partnerships and technology stack, we are building a more resilient and flexible infrastructure. We combine hardware sourced from a variety of partners with our rapidly advancing proprietary Meta Training and Inference Accelerator (MTIA) silicon program.
We believe this portfolio approach will enable us to advance and innovate at an unparalleled pace as we deploy powerful and efficient new hardware co-designed with our software stack to support our massive growth. We look forward to working with AMD to power AI innovation and ensure our ability to deliver world-class AI experiences to billions of people around the world.
This post contains forward-looking statements involving Meta’s business. Do not rely on these statements as predictions of future events. Additional information regarding potential risks and uncertainties is contained in our most recent Form 10-K filed with the Securities and Exchange Commission. Meta undertakes no obligation to update these statements as a result of new information or future events.