Strategic Partnership
Impact: Major
Strength: High
Meta-Broadcom Multi-Year 2nm AI Chip Partnership, Initial 1GW+ Deployment
Summary
Meta and Broadcom have announced a multi-year, multi-generation strategic partnership to co-develop MTIA (Meta Training and Inference Accelerator) chips through 2029. The initial deployment exceeds 1GW, with multi-gigawatt expansion planned. The chip is billed as the industry's first 2nm AI compute accelerator, built on Broadcom's XPU platform. Meta has planned MTIA 300/400/450/500 iterations targeting recommendation, ranking, and large-scale inference workloads. Broadcom CEO Hock Tan will step down from Meta's board and transition to a strategic advisor role.
Key Takeaways
Key data: 1GW of compute capacity can power roughly 750,000 US homes simultaneously. Meta's 2026 capex plan reaches $135B for AI infrastructure. MTIA chips are for internal use only and are not sold externally (unlike Google TPU and Amazon Trainium). Broadcom also collaborates with Google on TPU development, cementing its position as the core custom AI chip supplier. Separately, Meta signed a $100B+ AMD GPU deal (6GW) and extended its $21B CoreWeave order to 2032, reflecting a multi-channel compute acquisition strategy.
Why It Matters
This is a critical step in Meta's custom silicon strategy, following the trajectory of Google TPU and Amazon Trainium. The hyperscaler chip self-development trend is deepening, with Broadcom emerging as the largest ASIC outsourcing beneficiary. Adoption of the 2nm process signals a new phase in the AI chip manufacturing race. Meta aims to reduce its dependence on NVIDIA and AMD and to optimize inference costs through purpose-built silicon. MTIA is optimized for Meta's workloads and is expected to outperform general-purpose GPUs in cost efficiency. Broadcom provides an integrated compute-networking solution spanning Ethernet switching, PCIe, and optical interconnects.