AMD Strikes Chips-for-Stock Agreement With Meta To Boost AI

Advanced Micro Devices has signed a chips-for-stock deal with Meta Platforms, deepening the companies’ relationship around artificial-intelligence computing and highlighting the intensifying battle to supply the hardware powering the AI boom.
The arrangement ties AMD more closely to one of the world’s biggest buyers of AI infrastructure, as Meta continues building out data centers for training and running large AI systems. The deal also underscores AMD’s push to win business that has largely flowed to Nvidia, the dominant provider of high-end AI chips used across the industry.
The agreement combines a supply of AMD’s AI chips with an equity component tied to Meta. The companies have positioned the pact as part of a broader effort to expand Meta’s compute capacity while giving AMD a higher-profile role inside a major hyperscale customer’s AI stack.
Meta, led by CEO Mark Zuckerberg, has been spending heavily on AI infrastructure as it develops and deploys AI across products including its social platforms and messaging services. Those investments have turned AI chips into a strategic chokepoint, where access to high-performance processors can influence product timelines and research capabilities.
For AMD, landing deeper commitments from a customer like Meta is meaningful because major cloud and internet companies buy at enormous scale. A large, visible deployment can also serve as a proof point for other customers evaluating alternatives to Nvidia hardware for training and inference workloads.
The development comes as the competitive dynamics in AI hardware remain in flux. Nvidia has set the pace with its GPUs and tightly integrated software ecosystem, which has become a standard for many AI teams. AMD has been working to expand its footprint with its own accelerators and associated software tools, aiming to lower the friction for customers migrating or running mixed fleets.
For Meta, diversifying suppliers can help secure capacity and reduce dependence on any single vendor as it builds out compute-intensive AI systems. Meta has been public about the scale of its infrastructure ambitions, and hardware procurement is central to meeting those plans.
What happens next will be closely watched across the AI supply chain. AMD will need to deliver chips on schedule and meet performance expectations in Meta’s data centers, while Meta will integrate the hardware into its training and deployment pipelines. Investors and competitors will also be watching whether the deal leads to follow-on commitments or broader adoption of AMD accelerators by other large AI buyers.
The partnership signals that the market for AI compute is large enough to support multiple major suppliers, but the race to become a default choice for the biggest customers is far from settled.
