Alphabet’s Google is in discussions with Marvell Technology to co-develop two new AI chips: a memory processing unit and a new TPU optimized for large language models. The initiative focuses on improving the efficiency of running AI models.
The company aims to finalize the memory processing unit design within the next year, then move into test production. The collaboration reflects a broader push toward specialized hardware for evolving AI workloads.
Reducing Dependence on Existing Partners
The discussions signal Google’s effort to diversify beyond its long-standing partnership with Broadcom, which has handled the transition of Google’s TPU designs into manufacturable chips, with TSMC managing fabrication.
Google is now exploring additional partners to improve flexibility and performance. Marvell would contribute design expertise, particularly in high-speed interconnects, potentially enhancing both cost efficiency and system performance.
Rising Role of Custom AI Silicon
Marvell continues to expand its presence in the AI chip ecosystem: Amazon Web Services remains its largest custom-silicon customer, Microsoft collaborates with it on AI accelerators, and Nvidia has established a strategic partnership to integrate custom chips with its networking systems.
At the same time, demand for custom AI chips continues to grow rapidly, and the data center ASIC market is projected to reach significant scale in the coming years.
Focus Shifts Toward AI Inference
The collaboration also reflects a broader industry shift toward inference-focused hardware. Rather than prioritizing training alone, companies are optimizing chips for running deployed models, and the proposed TPU and memory unit target high-performance inference workloads.
With large-scale infrastructure investments underway, Google is expanding its supplier base so it can support diverse workloads while maintaining competitive performance across its AI systems.