Invest in AMD?

Published on 05/18/21 | Saurav Sen | 5,201 Words

The BuyGist:

  • This is the full-fledged investment thesis on AMD. 
  • The thesis presentation was posted earlier - subscribers have access here.
  • We go through the following sections: 
    • Competitive Advantage
    • Strategy & Moat
    • Growth Drivers
    • Competition
    • Strategy Risk
    • Key Risk/Threats
  • We end with a definitive conclusion - Buy or Watch.
  • Subscribers have full access.

Competitive Advantage: The Castle

Summary: With the Xilinx merger, AMD becomes a unique one-stop shop for a full stack of logic chips – CPU + GPU + FPGA + ACAP – for an AI-heavy, Heterogeneous Computing world.

AMD is the third-largest semiconductor company by market capitalization (we’re leaving TSMC and Samsung out of this for now because they’re mostly just semiconductor fabs, not designers). There are only 3 dominant companies that design CPUs (Central Processing Units) and GPUs (Graphics Processing Units) – AMD, Nvidia and Intel.

AMD makes both CPUs and GPUs, which means they’re competing with both Intel and Nvidia. That’s why we’ve never invested in AMD before – they were going up against the undisputed dominators in CPUs (Intel) and GPUs (Nvidia). We always like a scrappy outsider story but trying to outcompete two global dominators is a tough ask.

In the last couple of years, AMD has proved us wrong. They’ve not only gained market share from the two Goliaths, but they’ve also become a free-cash-flow-generating company.

So, what did AMD get right? In CPUs, they simply executed better than Intel. They remained focused when they had to – in 2017-18 – when Intel was distracted with non-CPU acquisitions. Intel’s distraction was AMD’s advantage. AMD has a different execution model – it focuses on designing chips, not manufacturing them. Intel does both. AMD outsources manufacturing to companies like TSMC (our top holding since inception). This has served them well. They moved fast in design. TSMC kept up in production. The result: they beat Intel to the latest generation of CPUs – the transition to 7-nanometer chips.

In GPUs, AMD’s success has been more surprising. There is no apparent execution advantage here. Nvidia follows the same model – it focuses on design and leaves production headaches to TSMC. But Nvidia (another one of our successful holdings since 2018) has also been somewhat distracted. They’ve spent a lot of energy and resources on capturing the Cloud Datacenter market, maybe at the cost of their original bread-and-butter business – GPUs for PCs and Gaming Consoles. This is the part of the GPU market where AMD has been able to gain market share. Why? Are their GPUs better? It’s hard to tell because each company – AMD and Nvidia – has a myriad of chips in production. But maybe the answer lies in cost. We don’t have exact pricing data – as in, we don’t know what AMD’s contract with, say, Sony PlayStation stipulates – but maybe this gross margin comparison paints a clear picture.

We do expect this Gross Margin gap to close in the next few years (more on that later). But so far, we’re not convinced about AMD’s competitive advantage and Moat. We don’t really put a lot of stock in the “nanometer wars”. AMD boasts of beating Intel to 7nm chips using TSMC’s cutting-edge technology. But we’ve read reports that put Intel’s 10nm chips and TSMC’s 7nm chips on equal footing. So, maybe it does mostly come down to price and, maybe, some marginal improvement in performance vs. competition. We can’t hinge our AMD thesis on this.

Going forward, we think it’s dangerous to hang our hats (or our investment thesis) on a perceived technological edge in AMD’s CPUs and GPUs. We think these advantages are temporary. As long-term investors, we need to look beyond point-in-time competitive advantages that are not necessarily durable. Let’s assume that Intel and Nvidia know what they’re doing even if there have been temporary hiccups. Let’s assume that Intel’s node “disadvantage” will be rectified. Let’s focus on competitive advantages that are durable. Let’s focus on things that competitors don’t do or can’t or won’t.

AMD’s acquisition of Xilinx (yet to be set in stone due to pending regulatory approvals) caught our attention late last year. We’ve wanted to find a palatable entry point, and the recent sell-off in Tech stocks may be just the opportunity. The Xilinx acquisition sets up AMD (or AMDX, as we like to call the combined entity) to deliver something others don’t or won’t or can’t – a full-stack logic chipset.

AMD’s competitive advantage as a stand-alone company was unclear to us. AMDX’s competitive advantage is much more convincing. This is a subjective call. There are 2 main concepts that underpin this subjective call:

  1. Xilinx’s dominance in FPGA chips.
  2. The advent of Heterogeneous Computing.

First, Xilinx dominates the FPGA world. FPGAs are Field Programmable Gate Array chips or, for us non-engineers, programmable chips. These chips have been around for a while, but it’s only recently – with the advent of AI, Machine Learning, and 5G – that these programmable chips finally have a big enough raison d’être. The only other credible competitor to Xilinx is Altera, which was acquired by Intel a few years ago. Here’s how they stack up:

Why are programmable chips suddenly hot? The simple answer is that computing is changing, and workloads are changing with it. There’s a lot more data to handle, especially a lot more unstructured data (like literature, images, sounds, etc.). Not so long ago, a CPU was good enough for everything we needed to do on a computer or even on a server. Intel dominated this world of all-purpose, one-size-fits-all CPUs.

Then in the 1990s, video games evolved by leaps and bounds with the advent of gaming consoles like the PlayStation. PCs became more powerful as well. And so, games for PCs became more powerful. It was a virtuous cycle. It turned out that gaming was too much for good old Intel CPUs to process. So, the GPU (Graphics Processing Unit) was invented. Nvidia dominated this market for a long time. It still does.

GPUs are built differently. They’re good at specific types of computations, unlike a CPU, which is a jack of all trades. A few years ago, it turned out that GPUs were good for some types of computations that are needed for Machine Learning. This gave GPUs (and Nvidia) a massive new lease on life. Concurrently, FPGAs, the third kind of chip, piggybacked off the success of GPUs and had their moment too.
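To make the CPU-vs-GPU distinction concrete, here is a toy Python sketch of our own (not actual GPU code, and not from AMD or Nvidia): GPUs shine on “data-parallel” work, where the same simple arithmetic is applied independently to millions of values – the pattern behind the matrix math in Machine Learning – while CPUs shine on branchy, step-by-step logic.

```python
def data_parallel_scale(values, weight):
    """GPU-friendly pattern: one identical multiply per element, with no
    dependency between elements, so in principle all of them could run at
    once across thousands of GPU cores."""
    return [v * weight for v in values]


def sequential_decide(events):
    """CPU-friendly pattern: each step branches on accumulated state, so
    the work is inherently one-at-a-time."""
    balance = 0
    for e in events:
        if e > 0 and balance < 100:
            balance += e
        elif balance > 0:
            balance -= 1
    return balance


print(data_parallel_scale([1, 2, 3], 0.5))   # every element independent
print(sequential_decide([50, 60, -1, -1]))   # each step depends on the last
```

The first function is trivially parallelizable; the second is not. That asymmetry, not raw speed, is why GPUs carved out a market the CPU couldn’t serve.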

Another massive development took place at the same time – Cloud Computing. We can’t stress this enough – AI and Machine Learning applications have grown exponentially over the last 3-4 years because of Cloud Computing. All that heavy computation happens on a heavy-duty server, and insights from that computation can be distributed instantaneously, anywhere, anytime. Some of that computation, however, can happen on our laptops or phones (Edge Devices). This is the concept of Heterogeneous Computing.

At the end of the 20th century, almost all computing was done locally on our desktops and laptops. Then some of it shifted to server racks, but mostly for storage. Most of the computational tasks were still done on PCs. All that is changing now. Computing can now be done either on a Cloud server or on your laptop or phone. On a Cloud server, data processing can be done on a CPU, a GPU, an FPGA, or an ASIC (Application Specific Integrated Circuit). That’s Heterogeneous Computing – computing distributed over many chips, on many devices. In 2018, we took a crack at explaining the spectrum of logic chips out there – this article should help you figure out where Xilinx stands in the gamut.
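The idea above can be sketched as a toy scheduler – again, our own hypothetical illustration, with made-up workload names, not how any real Cloud scheduler works: each job is routed to the chip type best suited to it, which is the whole point of owning CPU, GPU, FPGA, and ASIC designs under one roof.

```python
# Hypothetical mapping of workload types to chip families. Real Cloud
# schedulers are far more sophisticated; these names are illustrative only.
CHIP_FOR_WORKLOAD = {
    "general_logic":   "CPU",   # jack-of-all-trades control flow
    "ml_training":     "GPU",   # massive parallel matrix math
    "custom_pipeline": "FPGA",  # reprogrammable for a specific data flow
    "fixed_function":  "ASIC",  # hard-wired for one task, maximum efficiency
}


def dispatch(workloads):
    """Route each named workload to a chip type; default to the CPU."""
    return {w: CHIP_FOR_WORKLOAD.get(w, "CPU") for w in workloads}


plan = dispatch(["ml_training", "general_logic", "custom_pipeline"])
print(plan)
```

A vendor that designs only one of these chip families can serve only one row of that table; the thesis is that AMDX could serve all of them.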

AMD is making a gutsy move. They’re envisioning the world 10 years ahead and course-correcting to ensure that they thrive in this new world. Heterogeneous Computing is the future. AMD made strides in CPUs and GPUs for PCs and Consoles. More recently, they’ve gained market share in the Cloud Datacenter CPU battle. They’re also trying to break into Nvidia’s GPU strongholds in Cloud Datacenters.

Until last year, AMD seemed to be content with GPUs for Gaming. But they soon realized what a huge market Cloud Computing is. Sure, Gaming GPUs can be used for AI workloads in the Cloud. But as these applications get more complicated, more “intelligent”, Gaming chips won’t cut it. We were, therefore, glad to see in their 2020 Investor Day presentation that AMD is determined to build specifically for the Cloud. This slide shows that AMD will now have 2 different types of GPUs – each specific for a certain kind of AI workload:

With the Xilinx acquisition, AMD (or AMDX rather) would take this one step further in the race to offer a full suite of logic chips for a heterogeneous computing world. Intel and Nvidia are also trying to get there (more on that in the Competition section), but AMDX looks very promising now.

We believe AMDX would have a serious competitive advantage over its competitors, especially Intel, but the battle has just begun. Maintaining that competitive advantage, say, 5 years from now is going to be tough. It’s up to Management to widen the Economic Moat. How can they do that?

Here’s what we’ll discuss in the next few sections:

  1. Management Strategy & Moat
  2. Growth Drivers
  3. Competition
  4. Strategy Risks
  5. Key Risks
