Why AMD and Broadcom Stocks Could Beat Nvidia Over the Next Five Years
Investors see AMD and Broadcom gaining ground on Nvidia as AI chip demand moves beyond GPUs into custom hardware, networking, and data center CPUs.

Nvidia continues to lead the global artificial intelligence hardware market, holding more than 90% of the market for graphics processing units. Its CUDA software platform has locked in a vast developer base, making Nvidia hardware the default choice for AI training across cloud providers, research labs, and large enterprise customers.
But the company’s dominance is now facing limits imposed by size. After crossing a $4 trillion market valuation, Nvidia’s growth rate is under scrutiny. Its data center revenue reached $39.1 billion in the most recent quarter, up sharply from prior years, but sustaining this pace will be more difficult as the market matures and competitors expand into adjacent categories.
AMD has increased its focus on AI inference—the deployment of trained AI models in real-world applications such as search, personalized recommendations, and generative text services. One of the largest AI development firms globally has adopted AMD’s GPUs for daily inference workloads, and large cloud service providers have begun incorporating AMD chips for specific AI services.
Nvidia remains the leader in AI training, but AMD is gaining ground in inference as cost and chip availability become deciding factors for cloud operators. AMD’s ROCm software platform, while not as mature as CUDA, has become sufficient for many inference workloads. The financial gap between the two companies highlights AMD’s growth potential: in the last quarter, AMD reported $3.7 billion in data center revenue—a fraction of Nvidia’s, but a base from which even small market share gains could translate into significant revenue growth.
AMD is also building momentum in data center CPUs, where it has increased share against Intel. These chips handle memory management, orchestration, and other computing tasks that GPUs do not process directly. As AI workloads expand, demand for high-performance CPUs is expected to grow alongside demand for accelerators. AMD’s existing position in server CPUs gives it another route to benefit from the AI infrastructure cycle.
A further development is AMD’s participation in the UALink Consortium, which is working to create an open standard for high-speed connections between AI chips. Nvidia’s proprietary NVLink technology currently dominates this space, forcing data center operators to build around Nvidia hardware. If UALink succeeds, companies may be able to integrate processors from multiple vendors, which would erode one of Nvidia’s key competitive advantages and give AMD new access to high-performance AI clusters.
Broadcom is taking a different path into the AI market by focusing on the hardware that connects and supports large-scale AI systems. The company supplies Ethernet switches and optical interconnects, which move massive volumes of data between processors inside data centers. As AI models grow larger, networking capacity has become a limiting factor, and Broadcom has benefited from this trend. The company reported a 70% increase in AI networking revenue in its last earnings period, driven by orders from cloud operators and hyperscale data centers.
Broadcom is also expanding its custom chip business. The company designs application-specific integrated circuits—ASICs—for tech companies that require processors optimized for specific workloads. These custom chips typically offer better performance and lower power usage than general-purpose GPUs for targeted AI tasks. Broadcom contributed to the development of Google’s Tensor Processing Units and is now working with other data center operators on large-scale custom AI chips.
Management expects that three of Broadcom’s largest custom chip customers will each deploy AI clusters of up to one million chips by fiscal 2027. Those deployments represent a revenue opportunity estimated between $60 billion and $90 billion, depending on final production volume and rollout speed. Broadcom has also signed additional chip design deals with new clients in the consumer technology sector, expanding its pipeline beyond hyperscale data centers.
In addition, Broadcom’s acquisition of VMware has positioned the company to sell infrastructure management software that supports AI deployment. VMware’s Cloud Foundation product helps large enterprises manage AI workloads across private data centers and public cloud environments, reducing complexity for companies running AI applications on mixed hardware. This complements Broadcom’s hardware business by providing an integrated solution for AI customers managing hybrid and multi-cloud setups.
While Nvidia’s position in AI hardware remains strong, the company’s rapid growth over the last two years is unlikely to continue at the same rate. Its data center revenue expanded more than ninefold over that period, a pace that is difficult to maintain as the market stabilizes.
By contrast, AMD and Broadcom are starting from lower revenue bases in AI hardware, making their growth trajectories steeper if they continue to gain traction in their respective markets. AMD’s progress in inference chips and data center CPUs gives it direct exposure to rising demand for real-world AI applications. Broadcom’s combination of networking hardware, custom ASIC design, and virtualization software offers a multi-channel approach to AI infrastructure that aligns with how data centers are scaling their AI capabilities.
Both companies are positioned to capture new spending in AI hardware categories where Nvidia has less direct control. For investors looking beyond the largest AI stock in the market, AMD and Broadcom present growth cases tied to specific shifts in how AI computing infrastructure is being built.