
How ARM Became The World’s Default Chip Architecture (with ARM CEO Rene Haas)

ACQ2 Episode

December 2, 2024

ARM is an incredibly unlikely story. They were founded in Cambridge, England in 1990 to design a new chip architecture just for low-power devices (like the Apple Newton!), leaving the “serious computing” on desktops and servers to Intel’s x86. Now, more than three decades later, ARM is the dominant architecture in all of computing today.

ARM is in your phone, your car, data centers, the most advanced AI chips… there are hundreds (or thousands!) of ARM chips you encounter in your everyday life. In this episode, ARM Holdings CEO Rene Haas joins us to tell the story of how ARM became so dominant, weaving through the iPod, smartphone, and AI eras. Plus, their wild corporate story of going public, getting bought by SoftBank, going public again, and nearly being acquired by NVIDIA!


Transcript: (disclaimer: may contain unintentionally confusing, inaccurate and/or amusing transcription errors)

Ben:  Hello, Acquired listeners. Today we have with us Rene Haas, the CEO of ARM Holdings. ARM is the company that develops the instruction set architecture and many of the designs underpinning CPUs all over your life today, from our phones to our cars. Dave and I actually did an episode way back in 2019 on the history of the company which had a fascinating start out of Cambridge University. The company was publicly traded, then taken private in 2016 by SoftBank, then last year went public again, and is now valued at around $150 billion. Rene has quite the career himself in semiconductors. He's been at ARM for the last 11 years and before that was a VP at NVIDIA, reporting to Jensen. Rene, welcome to Acquired.

Rene: Thank you very much. Pleasure to be here with you both.

David: Thrilled to have you.

Ben: Pleasure is all ours. I thought a fun way to start us off, since there are a lot of people listening to this who are going to see ARM Holdings and say, I know exactly what that is, I know about the strategic shift they have going on, I've been following every earnings call since they went public, and there are people who are going to say, ARM, what is that? Maybe to level set everyone on how important ARM is in the world, what are the types of devices where the ARM instruction set architecture and ARM designs are used?

Rene: ARM does CPUs, and what is a CPU? The CPU is the digital brain of every modern electronic device. That is your television set, your thermostat, your car. We were chatting about maybe talking through a day in the life inside ARM, and I can walk through some of the ARM devices inside my home. The simplest way to think about it is we do CPUs, and that CPU is the digital brain of every modern electronic device.

Ben: What is your relationship then with, in my head, Apple makes the CPU in my phone, it's the A18, or in my Mac, the M4. What do you mean, ARM does CPUs?

Rene: Drilling down one level deeper, we do the design, the ISA, which is the instruction set architecture. We either license the instruction set architecture to a partner, who can develop their own CPU based on ARM, which is what Apple does, or we design and build our own CPUs and license those CPUs to companies like Samsung, MediaTek, Tesla, Qualcomm, Amazon, et cetera. We deliver it in two different ways, but those CPUs that you mentioned inside your iPhone and inside your MacBook are all ARM-based.

David: Where we were going with this, today, if you were to imagine your house, my house, Ben's house, any house, how many devices have an ARM chip in them?

Ben: And is that a different question than how many ARM chips are floating around my house?

Rene: It's a hard question to answer in terms of just how many ARM chips are in my house or how many ARM chips get delivered in terms of a typical application space because it really varies. Again, let's go back to first principles. ARM designs the CPU and that is the digital brain of every device, which means it runs all the complex software that either runs the dashboard, it runs the operating system, or it runs an application.

I was thinking about the question and I'm going to drop a bunch of brand names here, but let's just walk through. I pull my Audi into the garage. That Audi has ARM processors. Those ARM processors are what you see running the display, that digital dashboard; they're also helping with some of the driver assist, and they're probably in the power locks, power windows, et cetera. I have a Nest doorbell camera. That's ARM, and it's ARM that basically runs the camera, interfaces with the doorbell, et cetera.

Walking by the LG refrigerator or Wolf stove, I can assure you both of those have ARM inside too. They're probably running the temperature controls in the stove, they're definitely running the displays, they're running everything in terms of the oven. Turn on the television set, which is a Samsung. That Samsung digital TV is actually running an operating system, so when you run all those apps and everything that you see shows up on there, that's a version of Android. That's all ARM.

Let's say I want to go downstairs and do some gaming. My PS5 has ARM inside, most likely running some of the display controllers and some of the stuff with the game controller. If I want to flip through my Pixel phone, that is ARM inside running Android, and I've got my iPad next to me. That's all ARM. You can imagine just about everything that you interacted with that does something, that either runs an application, recognizes your face, gives you some display information, ARM did all of that.

Ben: I think it's probably true that there are hundreds of ARM chips, or devices with ARM chips. How would you describe it? Are there hundreds of instances of ARM around my house?

Rene: Probably hundreds, yeah. The more your home is connected, all those connected things have ARM inside. It's hard to avoid it, because you'd almost have to go back to those old mechanical types of controls on machines that don't have anything digital, because if it's digital, I can pretty much assure you that it's ARM.

Ben: It's pretty wild. One stat that I pulled just from your last quarter's financial presentation is that in FY2024, by their estimate, almost 29 billion ARM chips shipped. That is, for every human on Earth, about four ARM-based chips shipped in the last 12 months.

Rene: It's a crazy number, right? When you think about the laptop market, which is a big market everyone wants to ship into, that's 200 million units plus or minus a year, which is a fraction of that 29 billion.

David: A very small fraction.

Rene: A very small fraction. You look at it and say, how is that possible, because laptop computers seem to be pretty ubiquitous? Just walk through those eight or nine examples I just gave you inside the house and then you start to see, how do you avoid it? ARM is in aircraft. You go to an airport, you check in for your flight, and you look up at those displays listing the gate information and the flight information; that's all ARM-powered, running that stuff in the background. It's everywhere.

Ben: At this point, counterintuitively, it's also in all the cloud infrastructure that is running the web services that all of these devices are communicating with. That is, as we'll get into later in the episode, a narrative violation from the way the world thought about ARM a decade ago versus what is true today.

Rene: That's right. The identity of the company, we grew up, as you mentioned in the opening, 30 plus years ago out of Cambridge, and the company's original product that we were designed into was the Apple Newton. For those who may or may not remember, that was a PDA before anything had a right to be a PDA, before there was the internet, before you had voice recognition, before you had fingerprint recognition.

The chip that was designed based upon ARM had two important characteristics. It had to run off a battery. As a result, it had to be defined to be low power. Secondly, performance and cost were really important. Back in the day, they used to build chips in two different ways. They had plastic packages, which were cheaper but not that great in terms of thermals, and ceramic packages, which were much better in terms of heat dissipation but were costly. One of the directives in terms of the original design was, let's get it in a plastic package. As a result, from the very early days, the early ARM processor, the ARM1 that was defined, was designed basically to run off of a battery.

Ben: Yeah, which at the time didn't feel as critical to the world since all computers were basically plugged in all the time, or most of the computing that people did was at computers that were plugged in all the time, and now obviously that's very different.

Rene: Absolutely. If I think back in time to the first time that one could take one of those large satellite phones and walk around with them for 20 minutes without having to plug them in, it just seemed like magic back in the day. If you could get 30 to 40 minutes of battery life off of anything that was doing something sophisticated, it was considered to be just a complete game changer because mobility was simply not something that was very ubiquitous back in the early days.

If I think of stories around this, one of the jobs I had in my career was a field applications engineer. I'm going to date myself here, but we used to call into the office for messages. In fact, we would be driving from account to account. We'd find ourselves getting to a pay phone. Once we got to that pay phone, we could then dial the office. The office would list a whole bunch of messages. The detail of those messages was something like, call me back, I'm not exactly sure what you asked, or I'm busy.

When we suddenly had a phone in our cars that would allow us to do all these things remotely, we thought, oh, my gosh, this is the ultimate productivity gain relative to what seemed like Western Union looking back in terms of making these phone calls back and forth. True story, my first field applications job, that's how we used to correspond with a home office.

Ben: The arc that we're going to keep calling back to over the course of this episode: as you mentioned, the original ARM processor was designed with extremely low heat and power requirements in mind, in order to not quickly drain a battery in a very inefficient world with far fewer advances in lithium-ion than what we have today. You'd think this crappy processor architecture that's extremely limited in its capabilities would never be the dominant architecture used in all of the most sophisticated and advanced computing applications in the world, and yet it is.

Over the next hour, we're going to wander through, how did we get here? But in Acquired fashion, I want to go way back to the beginning and introduce this idea of the reduced instruction set computer. I wanted to turn it over to you. You are wildly overqualified to do this as the CEO of ARM, but maybe play computer science professor, or computer science history professor, with us for a little bit. What was the development of RISC versus CISC, the complex instruction set computer, like?

Rene: The concepts of RISC were, I think, originally conceived by professors at the University of California, Berkeley, like David Patterson. The whole notion around RISC versus CISC was that these original processors that were invented, and we're going to go way back in time to processor architectures such as the x86 or the 68000 from Motorola, were deemed CISC processors, which stands for complex instruction set computers. That basically meant that they had lots and lots of instructions that they had to carry forward, because the software that was written for them prior relied on them.

They were carrying a lot of baggage to do these very complicated instructions, which burned a lot of power, because the simplest way to think about a CISC implementation is that an instruction, by definition, because it's complex, has to run multiple operations from a clock standpoint to execute, which means those transistors are running more than they probably should, and you're burning up a bunch of power.

Ben: For example, at any given clock cycle, I need to allow for the possibility of doing something complicated like, in this instruction, in this one operation, I'm going to go fetch something from memory and load it into a register so that it can be added and I can return the answer, all within the same instruction. You have extra bandwidth everywhere to accommodate doing complicated things in one simple assembly language line.

Rene: Yeah, that's a good way to describe it. Another way to think of a complex instruction set, a complex instruction is, go three steps forward, two steps to your left, diagonally two steps, right three steps. If you can find an operation that benefits from that specialized activity, that's pretty good, but not a lot of programs can. Once a program has been written and relies on that instruction, then by definition, the architecture has to carry that forward. You've got all this heavyweight stuff that's involved.

The concepts with RISC were really around simple movements. Move one step forward, one step backward, one step to your left, one step to your right. I'm oversimplifying, of course, but these are things like add, subtract, et cetera. The idea there being that you have a simpler set of instructions that can be combined in such a way as to be much more efficient. This is the concept of RISC versus CISC, and I'm thinking now probably back to the 1980s, when MIPS was invented and things of that nature that were the original RISC processors. This was all around reducing instruction set complexity.
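
To make the contrast concrete, here is a minimal sketch of how one line of C lowers differently on each style of machine. The instruction sequences in the comments are representative patterns, not any particular compiler's exact output:

```c
/* Illustrative sketch: the same C statement on CISC (x86-64)
 * vs. RISC (AArch64). Sequences are representative, not exact
 * compiler output. */
void bump(int *counts, long i, int x) {
    counts[i] += x;
    /* CISC (x86-64): one instruction reads memory, adds, and
     * writes back -- several internal operations in one opcode:
     *     add dword ptr [rdi + rsi*4], edx
     *
     * RISC (AArch64): the same work is three simple instructions,
     * each doing exactly one thing:
     *     ldr w8, [x0, x1, lsl #2]   // load counts[i]
     *     add w8, w8, w2             // add x
     *     str w8, [x0, x1, lsl #2]   // store back
     */
}
```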

Interestingly enough, that was back in the day when a lot of programs were written on mainframes or minicomputers with previous architectures. It was really interesting. If I look back at that time, you had a lot of energy being spent on developing new processor techniques when actually, you didn't have nearly the mountain of software that you have today.

Yes, if you go back in time, RISC was seen as a much more efficient way to do computing. One of the benefits you had of that was not only lower power systems, but also, going back in time, one of the more expensive things was actually the memory needed to run all these programs. If you could fit the program in a smaller memory footprint, which, again, with a RISC machine you can do, there was some benefit to that. That was way, way back, I would say probably the 70s and 80s timeframe.

Ben: It's so interesting. You can totally see why CISC was conceived of first, or at least in the early days believed to be better. It's this really incredibly powerful system, where any given instruction can actually do a lot of cool stuff behind the scenes. When you juxtapose it with RISC, which in its early days had very few instructions, a simple operation would be load. Hey, go grab this thing from memory, put it in a register. Oh, I can't do anything else. That is all we've allowed for in this instruction. Just load. That's it. Oh, add. Oh, that's also it.

Rene: Especially if these larger programs were compiled and then made use of those big instructions from an assembly standpoint. The other thing that was happening was a change from everything being done in what was called low-level programming, assembly language, to higher-level programming models such as Fortran, Pascal, and then C and C++.

When you're programming in the higher-level languages, you have these compilers. What does the compiler do? The compiler takes that high-level language and translates it into the lower-level language, which is what these instructions are. The compilers end up making use of these heavy instructions. As a result, you just got heavier and more inefficient code. Again, one of the things that people were trying to do back in the day was get to smaller memory footprints.
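
A classic, concrete case of a compiler "making use of heavy instructions" (again an illustrative sketch; real code generation varies by compiler and flags) is a memory copy: on x86, the whole copy loop can collapse into the single rep movsb instruction, while a RISC target spells it out as an explicit loop of simple loads and stores.

```c
#include <string.h>

/* Sketch: one library call, two very different lowerings. */
void copy(char *dst, const char *src, size_t n) {
    memcpy(dst, src, n);
    /* x86 (CISC): a single instruction that loops internally,
     * copying bytes from [rsi] to [rdi] until the count runs out:
     *     rep movsb
     *
     * A RISC lowering is an ordinary loop of simple operations
     * (schematically, AArch64):
     *   loop: ldrb w3, [x1], #1    // load a byte, post-increment
     *         strb w3, [x0], #1    // store it, post-increment
     *         subs x2, x2, #1      // count down
     *         b.ne loop
     */
}
```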

David: Everything you're describing of the old CISC world sounds like you said, fits perfectly with the mainframe and the mini computer era, big iron, big architecture. Nobody's worried about power requirements. Complexity is fine, IBM designs the whole thing. You would think naively that the shift to the PC era would have created the right opening for RISC, but actually CISC continued through the PC era. What happened? Was RISC developed just slightly too late? Did ARM not exist yet?

Rene: Let's continue down this history lesson here for a moment here. One of the most amazing things that took place with the IBM computer, IBM PC, back in the day was IBM, which was the world's leader in computing if you go back in time, if you think about IBM 360, the IBM mainframes, and the IBM minis, IBM was a one stop shop. IBM did the software, IBM did the service, IBM did the hardware. IBM was everything. In 1981, IBM decides, I'm going to enter the PC market. And they were behind.

If you go back to the late seventies, early 1980s, Apple has invented the first "home computer" based on the Motorola architecture. Back in the day, you had lots of, I wouldn't call them toy computers, but things like the TRS-80 and the Commodore. They all had these smaller, weird little processors in them. The irony of the whole IBM story was that IBM, the behemoth of computing, decides that we're going to now enter into building computers for the home.

What does IBM decide to do? IBM decides to not in-house the processor, nor do they decide to in-house the operating system. They decide that they're going to make this platform "open". They need an operating system, DOS, something that can run off the disk. They started talking to a company that was actually not Microsoft.

David: Yeah, Seattle Computer Products.

Rene: The classic CP/M-80. They were talking to Gary Kildall and his company about doing that, but they chose Microsoft. They were also looking at Motorola, which was considered the kingpin at the time, to do the processor. For various different reasons, they decided on Intel and the 8086.

The IBM PC is born. The irony of it is that there's nothing about it that's very IBM-like, because it uses external memory, it uses external hard drives, it uses an Intel processor, and it uses an operating system from Microsoft. A little crazy if you look back in time; you'd look at it and say, why would IBM actually do that? This is what took off with the birth of all the clones, because you could build a clone of that system: if you bought a processor from Intel, you bought a hard drive from Conner, Maxtor, or Seagate, you bought a monitor from one of the third parties in Taiwan, and you got a license to DOS, you're in business and off you go.

To your question though, in terms of, okay, why didn't somebody do something on RISC, therein lies the magic of software compatibility and software legacy, because all these early programs, stuff like Lotus 1-2-3, are now written to run on the x86 processor and optimized on that. What happened was, over the 1980s, as the IBM PC-compatible market started to take off, you had all this software that was written for that platform. The dirty little secret about CPU architectures, and there have been lots of them over the years, whether it's, again, back to SPARC, MIPS, or Arc, or Tensilica where I used to work, the 29000, the 68000, the DEC Alpha: a CPU is only as good as the software that's written on it and how long that software survives.

The IBM PC and its clones, ultimately built by companies like Compaq, by Dell, by Gateway, and all these other companies that are long gone, AST Research, if you remember those guys, is what created the birth of not only the IBM PC platform, obviously, but the Intel x86 architecture. That's why, as a default, "CISC", because that's what x86 was, is the de facto standard. It really wasn't that RISC wasn't better, because it probably was, but it didn't matter. Once IBM selected the 8086, DOS was optimized for that, and then subsequently Windows, and off it went. One company that was quite interesting, that has probably made the most pivots in this area, was Apple, because Apple was originally based on the Motorola 68000. They created a consortium for the PowerPC with IBM and Motorola.

David: Ironically with IBM.

Rene: With IBM, yeah, exactly, which was a RISC-CISC hybridy thing.

Ben: That's very 90s Apple to have something that is neither RISC nor CISC, but entirely reinvented and proprietary.

Rene: Yeah, and that's what Power was. That was a big switching cost. I think the other thing that's interesting about that is it was a large switching cost because of the amount of software work that was there, but not nearly the amount of software that exists today. I was having a discussion on a podcast that I did with Jensen.

Ben: Yeah, you guys just launched your own podcast, right?

Rene: We did. Jensen made a comment on the podcast that software never dies. That continues to be a very true theme relative to the amount of heavy lifting required to switch an architecture. A long-winded answer to your question: if you go back in time, why did CISC make it? It was the IBM PC. Once that took off, that's been a very sticky platform.

David: It's so funny because RISC was there and arguably would have been better for PCs. Hey, new paradigm, a lot of new software going to get written. But it was that decision to go with x86 that locked CISC in for the PC era.

Rene: In CPUs, and I would argue for any programmable architecture, to get to something that drives a major switching cost, you need a fairly large paradigm shift in terms of benefits on power or benefits on cost. People will talk about, you need the 10x advantage to make the switch. I'm not sure it's 10x, but it's not 15%. It's got to be something that's quite material that's going to change in terms of lift, and/or it has to drive a level of innovation that could not be done before, which goes all the way back to the Newton.

There was no way that an x86 could have been an option. You simply could not have built the product. You either have to start in a space where something is very new and you need some very unique computing paradigm, and/or you've got to drive some different level of innovation.

Ben: To quantify the "if it's not 10x, what is it": I bet if you just go look at the Geekbench scores of whatever Apple's latest and greatest Intel-based MacBook Pros were before switching to the M1, that's probably the exact quantification of how much better something needs to be, within an existing paradigm, to switch from one horse to another.

Rene: That's about right. Yup.

Ben: Okay. We've perfectly set the table for CISC in the PC era, pretty locked in, not going anywhere. ARM is founded, it's using a RISC based approach. What is ARM doing for its first couple decades in existence? What markets does it serve?

Rene: Go back to the invention of ARM. One of the unique things that ARM drove back in the day, that I think couldn't be done today, but perfect time, perfect place, perfect strategy, and all of this is also luck and timing: all of those processors that I just described to you, the x86, the 68000, the AMD 29000, the list goes on, were all vertically integrated. Believe it or not, a lot of people used to spend a lot of time designing their own microprocessors.

ARM had an idea: that's a lot of work, that's a lot of effort, and there's not a lot of differentiation that one microprocessor can have versus another microprocessor. Why don't we come up with a business model where, rather than building our own and trying to enter a market that is very crowded, I'm going to license it, and I'm going to make it available to companies rather than them developing their own, and they just run on ARM. I'm going to license it. I'm not going to charge, no pun intended, an arm and a leg for it. I'm going to have a business model that requires an upfront licensing fee, which is modest, and I'll take a royalty when you ship in production.

The idea back then was a shared success model, which I think, again, credit to the founders, to people like Robin Saxby and Tudor Brown, was really a rather brilliant idea, because the notion was: pay me an upfront license fee, which is a proxy for R&D. In other words, you're not going to spend the money on the engineers anymore to do the development. The licensing fee is a proxy for R&D, so it's not an exorbitant fee, and more importantly, it's money you would have been spending anyway. By licensing the technology, you're not going to need to hire the engineers to develop the products, because I've already done that for you.

On the back end, if you ship a whole bunch of products, which is good for you, then pay me a percentage of it, because it's good for me too. It's a shared success model. You look back and say, wow, brilliant. Of course, why wouldn't everybody do that?
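
As a back-of-the-envelope sketch of the shared success model (every number below is hypothetical, invented purely for illustration; real terms aren't public): the licensee pays a fixed up-front fee, standing in for R&D they no longer have to do, and royalties only accrue as units actually ship.

```c
#include <stdio.h>

/* Hypothetical illustration of a license-plus-royalty model.
 * All figures are made up for the sketch. */
int main(void) {
    double license_fee  = 5e6;   /* one-time fee, a proxy for R&D spend */
    double chip_asp     = 10.0;  /* licensee's average selling price    */
    double royalty_rate = 0.02;  /* ~2% of ASP, paid per unit shipped   */

    for (long units = 0; units <= 100000000; units += 25000000) {
        double royalties = units * chip_asp * royalty_rate;
        printf("units=%9ld  royalties=$%11.0f  total to licensor=$%11.0f\n",
               units, royalties, license_fee + royalties);
    }
    return 0;
}
```

At zero shipments the licensor sees only the license fee, which matches the early years Rene describes later in the conversation, where licensing dwarfed royalties; once volume arrives, the royalty line dominates.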

Back when ARM started in the early 1990s, one of the things that was really not there yet was all of the tools, methodologies, and flows needed in the ecosystem to make it work. Synopsys and high-level design languages, pretty new. Cadence doing back-end design, where you could just take someone else's design and integrate it into an overall flow, pretty new. A set of software tools that were as involved, all pretty new. ARM was really driving a lot of innovation.

Because we were so new, again, going back to the superpower of a CPU really being the software, we had no software. There was no application ecosystem that ran on ARM. There weren't operating systems that ran on ARM. It was very difficult in the early days to get some stickiness from a software standpoint. Our very first design win that made the company, and again, it's a classic story of accidental empires, right time, right place: mobile phones are taking off in the mid-1990s.

Texas Instruments is one of the largest suppliers of baseband chips for 2G and GSM phones. What they needed inside the phone was a small microprocessor that could help the baseband machine run. The idea of the processor was not to run any kind of applications because back in the 1990s, there were no applications that ran on a GSM phone.

David: The application was the phone.

Rene: The application was the phone. The customer was TI, but the big customer was Nokia. It was the first Nokia GSM phones that used the TI chip that had an ARM CPU inside. TI chose ARM because they looked around everything they had, and they didn't really have anything that was as elegant as ARM. They thought, well, why would I design my own CPU? The value back then with TI's product was in the radio, it wasn't really in the processor. Every company, if you look back in the chip world, has a design that was the market maker for them. That was it for us.

David: It really reminds me of the TSMC story and journey just a couple of years later, of starting with, okay, we're going to take a layer of the stack here, at the lowest-level layer of production, and you guys one layer up from that. We're going to make it available to all these people who want chips, but we're not going after the PC market. We're not going after anything big that's going to be what it is today. We'll start with this small stuff and applications like these TI CPUs, and a component that's not leading edge, in fab terms. Great, we'll take that. It's just amazing over the next 20-30 years how far it's come.

Ben: It's the same echo of the Windows story, which is: it's fine to not make that much money early on, but once everyone standardizes on you, you have a lot of power in the market.

Rene: Exactly right. Once we found our way into the TI handset chipset that went into the Nokia phone, now we have traction. Now, as other folks are trying to build baseband chips for GSM phones, ARM becomes the de facto standard. Not so much, quite frankly, because we ran any operating system or any apps, because there were none. It was just simply, hey, it works pretty well, it's got the right power, it's got the right performance, and off you go, which is a lot of the way that designs ultimately take off. You get into what was the lift, if you will, underneath the wings of the architecture.

Fast forward, these GSM phones got a little bit smarter. They began to run an operating system called Symbian. We actually began to have some level of stickiness in terms of there was a software community and development ecosystem that started to learn and run on ARM. I would say, if I was to look back and say, what was the design that took ARM completely into the next level, it was the iPhone. If you look back at the iPhone, because ARM now had some street cred, if you will, in terms of low power, and we had street cred in terms that we could run small operating systems and small applications, we were chosen as the engine inside the first iPod.

Ben: I didn't realize that.

Rene: Yeah, if you go back to early 2000s when the first iPod came out.

Ben: Yeah, those early little Toshiba hard drives that had no other use case except for...

Rene: That's right. If you remember that iPod, that iPod had a crude display.

David: It had an operating system.

Rene: It had a little operating system. It had a thumb wheel, so you had a UI. It had all the things of a tiny little computer. The iPod was based on ARM. Fast forward now, this is the early 2000s. As the 2000s are moving forward and Apple starts to futz around with, are we going to build a phone, are we going to build an iPad, revisionist history, there are all kinds of stories about which one they were going to build first, but that's probably less important than that they had a decision to make in terms of what the processor was going to be inside the iPhone. The legend is that they did talk to Intel about using Intel. Intel's processor of choice back then was something called the Atom.

Ben: Which was their low power or attempted low power device.

Rene: Respectfully, it was not really so low power and it was not really so low cost. It was a very stripped down x86. All this history, I'm going back in time here. You guys probably remember a product called a netbook.

David: Yes, of course.

Ben: Yeah. The PC industry was lined up that netbooks were the future, and that was just flat out wrong.

David: It was right until the iPhone.

Rene: It was right until the iPhone, and Atom was the chip inside the netbook. Intel was coming from a very lofty place of selling very high performance and very good Core i7s, Core i5s. This is the classic innovator's dilemma, innovate from the bottom versus the top. Intel was having to come all the way down from i7, i5, i3, Pentium, Celeron, down to a little itty bitty Atom, which was designed for the netbook and was probably okay for a stripped down, low power laptop. But for a phone that needs to run at even lower power, not great. But Intel has got all of the street cred inside of Apple at the time, because Apple has by now made the transition away from Power to x86, so all the laptops inside of Apple are running on x86.

Ben: Which is on its own a miracle. They changed a compiler to make it so that applications written targeting the Power platform, the Power architecture, could suddenly now, with some changes, compile to Intel. Oh, my god, that is a compiler miracle.

Rene: Massive amount of work, years and years of work by Apple. You can imagine the debates inside of Apple in 2006-2007.

David: The stated goal for the operating system, for the phone, tablet, whatever it was supposed to be initially, was basically to run OS X, or a version of it, on a mobile device. OS X ran on Intel at that point in time.

Rene: Yeah. You guys are bringing back all kinds of stuff that I completely thought I had forgotten in my memory. There's a whole different exercise here on how neural nets work, because you guys are uncovering all this other stuff. You had operating systems like Leopard, Snow Leopard, and all these things that were pretty powerful, hefty operating systems. They're all running on x86.

Intel and Apple have made the shift now in the mid-2000s away from Power onto Intel. You have all this investment that's been made in these Mac operating systems, as I mentioned, all of these Tigers and Leopards that are all optimized to Intel. You have a big franchise inside of Apple that is all based on Intel and the Mac operating system, and then you've got this futsy little iPod that runs on ARM with a crude display.

Ben: Which is basically an embedded system.

Rene: Which is basically an embedded system. You can imagine that an easy choice would be: we're going to build this on Atom, we're going to have the operating system of Mac OS, and this new thing will look the same, because software will be easier; we'll basically take our laptop and desktop operating system, strip it down to the phone, and run it on Intel. Or we can build up from this iPod, use ARM, and build something called iOS, which is the operating system for the phone. It's going to be different than the Mac OS, but you know what, this market is very different. It's going to require a different level of efficiency, a different level of power. If we clean-sheet it and do it right this way, and the bias at the time from the iPod team was that this was the right way to do it, we'll end up with a better product at the end of the day. That was the debate inside. Ultimately, the iPod team won.

Ben: Right? Didn't they split the baby where it was an ARM processor, but it was a version of macOS's kernel that had a new compiler written to target ARM?

Rene: Yeah, for sure, but they didn't start from scratch. They started cutting things down to simplify it and build it up. But yes, that was the key design win for us. Once that happened, then very quickly, you had followers from the Android ecosystem, the Samsungs of the world, and if you go back in time, companies like HTC. As Andy Rubin and Android started to take off, ARM was now seen as the de facto standard. You had a lot of work that was already being done around Linux and such. We had the one-two punch of having the iPhone and ultimately the Android ecosystem designing around ARM. This is the 2007-2008 timeframe.

Ben: At this point in time, just so listeners can anchor on what ingredient to the stew does ARM provide, they were standardizing Apple and these Android vendors on ARM as the instruction set architecture. Who was actually making the processor in the first iPhone or in these other Android phones?

Rene: If 1981 is 2007, ARM is Intel, except the benefit that ARM has is that instead of Intel being Intel, in other words, Intel builds the x86 and owns the architecture, ARM is licensing the architecture to companies like Samsung. To your question, if you go all the way back in time, believe it or not, that first iPhone chip I think was built by Samsung for Apple. Ultimately, I think Apple went to TSMC.

The chip vendors back in the day are companies like Samsung, Qualcomm, and, believe it or not, NVIDIA; the Tegra stuff was all ARM-based. It was crowded, and why not? You have this smartphone market that's now starting to take off, and chip vendors now have an opportunity to build chips for these phones based on ARM. Again, if I do my IBM PC parallel, it would have been as if Intel had licensed x86, and they did, to one guy, AMD, because they were forced to. This is interesting if you just do the parallels.

Because IBM was so worried about sourcing, because the x86 was such a critical part, they insisted, and I think I have this right, on a second source for x86. What you had with ARM was multiple sources. You can see why the business model suddenly became very powerful. To your point, what did we provide in the stew? Whatever the most basic ingredient in stew is, I don't like stew personally, so I don't know what the best ingredient is, but let's assume it's water, that without water you have nothing: we supplied the water. There was no way anybody could do anything to enter the smartphone market unless you went through ARM.

Ben: There's an element of portability. It's beautiful. If you're Apple and you want to design the next version of your phone, you're thinking, well, there are a bunch of ARM-based processors out there. As long as we pick ARM, we have this whole sea of vendors, including eventually ourselves, after we acquire PA Semi, that we can pick as our chip vendor.

Rene: That's right. They can either pick companies that build ARM chips, or, if they're brave enough, talented enough, and smart enough, ARM will give them the rights to build an ARM-compatible chip themselves. Rather than buying the chip from Samsung, who used one of our designs, you can just go build your own, which is what Apple did.

Ben: All right. Now that we're in these early 2010s period, this is probably a good place to explain the dual ARM business models. At least at that point in history, how does ARM make money?

Rene: Back to the simple concept of licensing and royalties. Our business model way back in the day, and it still pretty much holds, is that we have an upfront licensing fee and then royalties. As you can imagine, when you're starting out and a lot of companies aren't actually shipping any volume, the vast majority of your revenues come from licensing, and the proxy for that in the chip world is design wins. You get a lot of design wins, you get people committed to the architecture, but they don't actually ship any volume, so you don't really get to a mix of royalties until you're in volume. It took a long time, but licensing was bigger than royalties for many years. You could look at that glass half full and say, wow, the future is going to be bright if you ever get there.

David: Glass half empty would be, hey, this stuff doesn't work.

Rene: Yeah, I'm betting on the front end. Are these things ever going to see the light of day? Another version of the business model is the license. You can license a core that we built; we call that an implementation. We basically do the blueprint that says, the house looks like this.

Ben: This means in-house: you have your own chip designers, they're using Cadence and Synopsys, and they're floor planning. They're doing what everyone imagines NVIDIA is doing over there.

Rene: That's right. There were a set of customers that believed that either due to the link between hardware and software or the ability of their engineers to develop something that would be higher performance than what we could build, we had these architectural licenses, and it allowed customers to build their own implementation.

One of the things that sometimes gets confused about these licenses is that, are they able to run software that's not ARM compliant? In other words, can they add some special instructions that nobody else has which gives them a unique advantage? They're not allowed to do that, and the reason is very simple. Once instructions look different across a number of different architectures that a customer has, software can't understand it. Let me drill on that a little bit further.

If customer A has an instruction that says accelerate, and customer B has an instruction that says accelerate 2X, and customer C has an instruction that says accelerate 3X, if I'm a software developer and I'm writing software for ARM, I really don't want to have my program taking advantage of the 3X instruction, because I don't know that everybody has it. I end up going to something we call, inside, the lowest common denominator approach, where the software developer does not make use of those instructions. It's one of the great things the company did in its early days, and we've maintained it certainly since I've been running it.

We're never going to break the ISA. We're not going to allow people to add custom instructions, because once you do that, you break software compatibility, which is one of the superpowers of ARM. If you think about why x86 stayed so sticky on the IBM PC, it's because Intel was the only game in town. Of course the software was going to run. That's why Compaq, Dell, and all these other clone guys were able to copy the PC, because the software just ran. If they were not able to do it in exactly the way IBM did it, they could never have been successful.
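
In practice, software targeting ARM handles optional (but architecturally defined) extensions by probing for them at runtime and falling back to the baseline otherwise, which is exactly the lowest-common-denominator behavior described above. A minimal sketch for Linux on AArch64 follows; the two encrypt functions are hypothetical stand-ins:

```c
#include <sys/auxv.h>

#ifndef HWCAP_AES
#define HWCAP_AES (1UL << 3)   /* AArch64 bit, normally from <asm/hwcap.h> */
#endif

/* Baseline path: uses only instructions every ARMv8-A chip has. */
static void encrypt_generic(unsigned char *buf, unsigned long n) {
    for (unsigned long i = 0; i < n; i++) buf[i] ^= 0xA5;  /* stand-in */
}

/* Stand-in for a build that uses the optional AES instructions. */
static void encrypt_aes_ext(unsigned char *buf, unsigned long n) {
    encrypt_generic(buf, n);   /* real code would use AESE/AESMC, etc. */
}

void encrypt(unsigned char *buf, unsigned long n) {
    /* The AES extension is defined by the ARM architecture, not a
     * vendor one-off, so one portable runtime probe works on every
     * licensee's chip: */
    if (getauxval(AT_HWCAP) & HWCAP_AES)
        encrypt_aes_ext(buf, n);
    else
        encrypt_generic(buf, n);   /* lowest-common-denominator path */
}
```

Because the extension belongs to the architecture rather than to a single licensee, every chip either has it or cleanly doesn't; a vendor-private instruction would have no portable story like this, which is why ARM forbids them.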

We offer these licenses, they're architecture licenses, but all they really do is allow people to build their own implementations. I will say, just adding on to that, and I know we're going back and forth between the future and the past, we used to do a lot of them, because customers used to believe that (1) they could build a better design than ARM, and/or (2) there was something specific in the software they wanted to take advantage of. Not many people do them anymore. They're really hard.

Back to the 10%-15% advantage or even 5% advantage, the ROI isn't all that high. If you're going to have three or four engineers designing an ARM CPU that you can buy from ARM anyway, why not take those 300 or 400 engineers and put them on IP that you do as a customer, that only you do?

David: Nobody's building a CPU with three or four engineers, right? It's 300, 400, or a thousand.

Rene: 300 or 400, not three or four. If I said three or four, that was off in a big way. 300 or 400 at least. It's a lot of work. It's hard.

David: I imagine for probably almost every customer out there now, the ecosystem and compatibility of software across all vendors, all applications out there, is worth so much that they wouldn't even consider going out and altering the instruction set because then they would lose compatibility with the rest of the ecosystem.

Rene: I know we're hopping around in terms of history dates, but that's one of the things that I think gets lost in terms of what's gone on with CPUs and software compatibility over the last 15-20 years. As we were talking about, in the 1980s and early 1990s I mentioned a lot of microprocessors: the 68000, PowerPC, the 29000, DEC Alpha, SPARC. There's a pretty large graveyard of CPUs. They were very good products, very good in terms of performance, very good in terms of their design, and they've all entered the graveyard of CPUs.

You say to yourself, why do they all die off? Once the flywheel of software gets built onto a certain architecture, it's very difficult if you're developing a new piece of hardware to say, I'll choose one of the ones I just mentioned, because there really isn't a software story around it, so they all began to wither away. Once the internet took off and particularly as you got into the dot-com era and a little bit after it, huge amounts of investment started to go into software companies.

Software as a service, subscriptions, SaaS models, recurring revenue, everything around the software industry, which was wonderful. Two things happened with that. (1) It drove increased innovation and investment in software: all levels of software, complexity of software, software stacks that run the cloud, that run in a network switch, that run in an automobile. At the same time, semiconductor investment, which is changing a little bit now, began to wane. Very little venture money was going into startups in the space, semiconductor startups in particular.

That's the fertile ground where new innovation happens, whether it's around new compute architectures, including CPUs. You had very little innovation taking place with companies building CPUs as startups. In fact, I was with one of the very last ones funded in the late 1990s, a company called Tensilica. We were a bunch of ex-Synopsys and ex-MIPS guys building configurable processors, the idea there being that you could build a custom processor with your own custom extensions, et cetera. We started in 1997, I think, and I left in 2004. The company was ultimately bought by Cadence, I think in 2012. It shipped a lot of cores, I think maybe over a billion cores. The point was, after Tensilica and another company called Arc that was doing the same thing, there was very little innovation or investment taking place in semiconductor CPU startups.

Ben: The great irony of the namesake of Silicon Valley is that if you were a silicon startup, you could no longer raise venture capital dollars there.

Rene: Yes, exactly. What you have is, as all these architectures waned away, with the amazing amount of investment now going into the software industry in general, and all of the investment going into the cloud, two architectures really ultimately remain: x86, which has been around for 40-plus years, and ARM. We were talking earlier about the data center.

Why ARM in the data center? Two things. (1) The choices aren't massive. It's not like there are 17 different choices as we just talked about. (2) One of the things that's becoming extremely important in the data center is power efficiency because when you're running these extremely large loads, whether it's general purpose compute and now with the advent of running accelerated compute with AI models, you need incredible efficiency in the processor space. I think we've arrived at this place both as a combination of having (1) really good, low power architecture, (2) an incredible amount of software innovation that's been done on ARM, and (3) just optionality has gone away because investment has waned.

Ben: That last one is just last man standing. Why is there a winner? There was going to be a winner as all the competitors fell by the wayside. It's almost tautological that whoever becomes the winner, there was going to be one left standing, or two in this case.

Rene: I would argue it's not one of these industries where last man standing has occurred because the market is uninteresting. It's actually the reverse. The market's never been more interesting, but because of the massive amount of investment required from a software standpoint, optionality is limited, because if you were to rock up today and say, I want to go build a system on chip based upon the Motorola 68000 architecture, what software exists that's going to run on it?

David: It's so funny. It really is just like the fab industry. The capital investment required and the software investment required is so massive that you get to where we are now, where you've got TSMC, you've got Samsung.

Ben: Yeah, GlobalFoundries.

David: But at the leading edge, that's all that's left, right?

Rene: There are definite parallels. The fab industry is direct capex. You'd look at it and say, if I'm going to build a two-nanometer fab and beyond, I'm going to have $30-$35 billion of capex. Our industry is not that. But on the flip side, it's not unlike that when you think about the opex of all of the 20-million-plus developers that have developed on ARM. You'd actually have to tilt all of that.

Ben: Incredible momentum there. I'm still floored that this architecture, originally built to be super low power so it wouldn't melt plastic, ended up becoming, and I'm sure you have better stats than I do, a dominant architecture running in data centers doing this heavy compute-load AI training and inference. Maybe, Rene, I could ask for your most honest assessment: where is there still a place for x86 architectures? Should the whole world be ARM? Is it just actually better, or are there different use cases for each?

Rene: I'm going to try hard to be unbiased, even though my job is CEO of ARM. There are a lot of things in our favor. One of them is, quite frankly, the fact that we have an open model, where our products can be built at any fab by any chip company. If you're looking at x86, you're looking at two people who build it. One of them builds at TSMC, AMD, and the other one builds in-house, Intel, although they build a whole bunch of stuff at TSMC too these days. It's just two people. Not only are you betting on those two people, but the IP around the chip that they build, whether it's around communications, whether it's around accelerated computing, whether it's around network storage, you're banking on that to bring a lot to the party. One might look at it and say, why don't Intel and AMD just license x86 and flatten out the playing field? Maybe that playbook could have been run a while ago.

Ben: Also, when you have a high margin business model, it's very hard to switch to a low margin business model.

Rene: Bingo. ARM came from a very different place. As a result, we have a huge advantage just with our model. In the data center, we have another fairly significant advantage in that if you look at customers like Microsoft, Google, or AWS, all of whom have custom chip efforts on ARM, all of whom have talked about getting a 60% benefit in terms of performance on a like-for-like basis, that's not just the ARM ISA. That's not just the fact that we are more efficient than x86. They can build a custom SoC with a custom piece of memory, let's say, or custom storage, a custom blade, custom interconnect, or custom offload, where from a TCO standpoint, their optionality is incredible.

As a result, they have incredible flexibility in terms of building something that is absolutely right for an Azure estate, a GCP estate, or an AWS estate, because they have the volume and spend that can drive that. Again, one of the benefits we get with the hyperscalers is that, no pun intended, the scale is so large. Doing custom chips, they can get an ROI on it. You can't do that with x86.

David: Right. You go to Intel and they say, here's my product for you.

Rene: Here's my product. You've got to put the pieces together and see how it all fits. That in itself gives us a big advantage. We have optionality with people like Ampere, for example, who do standard products. That optionality means there's a standard-market play, a custom play, or Grace, for example, the CPU from NVIDIA, whether standalone or, the way they ship it today, increasingly with Grace Blackwell, where it's highly integrated.

Again, why Grace Blackwell versus Intel plus Blackwell or AMD plus Blackwell? If you look at the architecture and some of the things that they do with NVLink, how they couple the CPU to the GPU, and how they interface between HBM memory and CPU memory, they just can't do that in an x86 world. By the way, in a Grace Blackwell system, the other benefit you have is that Grace can run all major pieces of the operating system. You can run an AI cluster, an AI cloud, and the software stacks that run natively on ARM general purpose compute can run in your AI cluster. That in itself gives huge optionality. I don't know how we started on this. I was supposed to be advocating on what to do about x86, and I started talking about ARM all day, but it's just hard.

Ben: Yeah, it makes total sense. Okay, we'll call a spade a spade. We're at the present. We have come forward to today, and I want to talk to you about a couple of things. (1) How the business model has evolved, how you deal with your customers differently, the products you sell to them now, and the way in which you work with them. The other of which is (2) last quarter, as of recording, you did $939 million in revenue, so right around a run rate of $4 billion.

The market cap is about $150 billion. Investors think the future is very bright for this company as we move into this world of AI and connected devices everywhere. Why are people so insanely bullish on ARM? What does the incredible future hold, and why is that valuation the valuation?

Rene: We've been talking for 40 minutes or so, but hopefully these last 40 minutes have been helping build that case study.

Ben: Yes, very much so.

Rene: I think it goes back to the fundamental advantages, both from a technology standpoint and, probably more importantly, as it tends to be in this world, the market forces that are in our favor. If you just start with the fact that more and more chips are shipped every year, more and more of those chips are based on ARM, and you look at the end markets, whether it's the examples I gave you in my house, from my car to my camera to my stove, they are all ARM-based and they are all in growth mode. You look at it and say, gosh, there's a ton of tailwind associated with this company.

Maybe what people are a bit more excited about since the IPO, I don't know, is the fact that AI has created this next level of compute need. One can argue incessantly around, gosh, $40 for Copilot, am I really getting the ROI on that? And what are the near-term economic models?

Ben: You sound like Marc Benioff.

Rene: I just think the near-term economic models on AI are the wrong way to think about it. I look at it much more in the parallels of the automobile, the industrial revolution, the smartphone revolution, the internet revolution. For a company like ARM, AI requires a next level of compute capacity and capability. It's not just running Strawberry training models and the massive amount of compute required to train all these next-generation LLMs, or even beyond large language models, video-related models; it's actually then running those applications, the inference, in your car, on your stove, in your headset, on your wearable. Inference is going to run across all those workspaces, and that all requires a lot of compute.

One of the things that we used to talk about when I was at NVIDIA was, what is death for anybody who's in either the computing category or the accelerated computing category? It's when you get to "good enough". I remember being in good enough eras. I've been in the semiconductor industry since I got out of school in 1984 and started at TI. There have definitely been periods of good enough. I think the late 2000s, early 2010s felt like good enough.

Netbooks were a good definition of good enough, where at that time, it didn't seem like you had the application space and area to drive the need for more compute. What did you end up building? A little crummy $199 computer because it could do everything your big computer did. We've definitely had periods in our industry where good enough has existed and the need for compute innovation has slowed. It never stopped, but it's slowed.

With AI, in the foreseeable future, you look at it and say, this appears to be almost unabated, because when you think about the benefits that AI could bring, whether it's around education, drug research, investment, it's mind boggling. ARM is going to be in the center of that. Whether it's in the data center, whether it's in your automobile, whether it's on your smartphone, whether it's in your wearable, the AI compute path is going to run through ARM in some way, shape, or form.

David: It's like the Bezos comment of I can't imagine a future where my customers ever say, gosh, I wish this were a little more expensive. You can't imagine a future where, gosh, I wish GPT-7 were just a little dumber.

Rene: I actually like the fact that people look at it and say, I'm not really seeing much benefit from this yet, because that actually says, oh, my gosh, what a fantastic opportunity to innovate and do more. A big part of it is the hardware that you're seeing today, particularly the edge-based hardware. That was designed a couple of years ago, when these large language models weren't even needing to run locally. You have completely unoptimized architectures everywhere to take advantage of the AI capability that we're going to unharness.

To me, I look at this and it's like white space in terms of the compute opportunity. Back to the question that Ben asked in terms of why people are so bullish on the company, I'd like to think that's why. We play in a super large market. Semiconductors will be a trillion-dollar market by the end of the decade. You said we're four billion dollars. We could probably take a bigger chunk of that one-trillion-dollar market at some point in time because of the importance of the company.

Ben: This is a good lead into this question I have for you. I've heard you espouse this idea. I'm sure there's a way to rationalize these two things, but it almost feels heretical. You opened the episode by saying, we do CPUs. The whole industry over the last five or 10 years, including David and I on our NVIDIA episodes, had this obsession with GPUs, with accelerated computing, with get those stupid serial workloads off the CPU, get them onto the GPU, where you can do pure magic with it, that's enabled the whole AI revolution.

You're the CPU company, and I've heard you talk about how, now that we know some of the use cases that are happening on GPUs, history has shown us that those tend to migrate back to the CPU over time, and the definition of the CPU changes. How do you view the state of things right now, with everyone so excited about GPUs and the incredibly parallel GPUs of the future, while CPUs are fine but a known quantity?

Rene: I think accelerated computing and the advent of GPUs is fantastic for ARM. What it indicates is that there's lots of compute out there, and more compute needs to run in such a way that you have not only base compute but accelerated compute. Here's why I think that framing is oversimplified. I've met with investors who have put this question to us and say, well, everything's moving to the GPU, do you need a CPU anymore? It's almost like saying, well, I've got this V6 engine going to a V8, I don't need tires and a steering wheel anymore, do I? It's nonsensical. Just think about the architecture of it.

What the advent of all of these accelerated computing models is doing, and again, let's just be very real about this, it's primarily happening in the data center, is creating a fantastic outcome for CPUs. Why is that? (1) All these data centers need CPUs, obviously. I just gave the example of Grace Blackwell and why that's a great positioning piece for ARM. (2) More importantly, all of that training converts into inference. If training is the teacher, inference is the student. There are far more students than teachers in the universe, and that's why there will be far more inference workloads than training.

That's going to run everywhere, down to the smallest devices, whether it's wearables or a headset for augmented reality. You're not going to run a hundred-watt GPU on your head. I'm sorry, it's not going to happen. You're going to have to get into very, very different form factors.

Naturally, a CPU is going to be there. You can't have an accelerator out there without something that's running the main system software. That's a fantastic opportunity for ARM because it means a couple of things for us. We can solve that in a few ways. We can add more and more capability to our CPUs, which we are doing today, around extensions that help with AI acceleration. This goes back to RISC versus CISC and the extensions we can add that will help with AI. But also, back to the customization, you could add small AI acceleration, which we do today with our Ethos NPUs that are four TOPS, eight TOPS, et cetera. That will do some level of offload.
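
[Editor's note: as a minimal sketch of the kind of CPU vector extension Rene is describing, here is a float32 dot product written with ARM's NEON intrinsics from the standard arm_neon.h header. The kernel itself is our own illustration, not anything ARM ships; it assumes an AArch64 target and a vector length that's a multiple of four.]

```c
#include <arm_neon.h>
#include <stddef.h>

/* Dot product of two float arrays using NEON SIMD lanes.
 * Assumes n is a multiple of 4 to keep the example short. */
float dot_f32(const float *a, const float *b, size_t n) {
    float32x4_t acc = vdupq_n_f32(0.0f);        /* four parallel accumulators */
    for (size_t i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);      /* load 4 floats from a */
        float32x4_t vb = vld1q_f32(b + i);      /* load 4 floats from b */
        acc = vfmaq_f32(acc, va, vb);           /* acc += va * vb (fused multiply-add) */
    }
    return vaddvq_f32(acc);                     /* horizontal sum across the 4 lanes */
}
```

[Dot products like this are the inner loop of most neural-network inference, which is why vector and matrix extensions on the CPU matter so much for on-device AI.]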

I think the model for these edge devices will be to run in conjunction with the cloud, where you have some processing happening locally and some processing happening in the cloud. You're going to need some level of security, authentication, and attestation locally, so that the models know it's you and not somebody else, and the information is kept private to you. Game on. All this GPU-accelerated compute is wonderful for us because it's just going to drive incredible demand. The idea that the only way you'll ever run a computer is through a large GPU in the data center is just not the way the world works. The last thing I'll say on this, and I love Jensen, he's done a brilliant job with the company: remember, he tried to buy ARM.
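
[Editor's note: a hypothetical sketch of the edge/cloud split Rene describes. Every name, type, and threshold here is illustrative; this is not a real ARM or vendor API.]

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of a hybrid-inference request. */
typedef struct {
    double gflops_needed;   /* estimated work for this request */
    bool   privacy_bound;   /* must the data stay on-device?   */
} InferenceRequest;

#define LOCAL_GFLOPS_BUDGET 8.0   /* what a small on-device NPU might handle */

/* Stubs standing in for platform hooks a real system would provide. */
static bool device_attested(void) { return true; }  /* e.g. a TEE-backed identity check */
static void run_on_npu(const InferenceRequest *r)    { printf("local NPU: %.1f GFLOPs\n", r->gflops_needed); }
static void send_to_cloud(const InferenceRequest *r) { printf("cloud:     %.1f GFLOPs\n", r->gflops_needed); }

/* Keep private or small workloads local; offload the rest, but only
 * after the device has attested its identity to the service. */
static void dispatch(const InferenceRequest *r) {
    if (r->privacy_bound || r->gflops_needed <= LOCAL_GFLOPS_BUDGET) {
        run_on_npu(r);
    } else if (device_attested()) {
        send_to_cloud(r);
    } else {
        run_on_npu(r);  /* degrade gracefully if attestation fails */
    }
}

int main(void) {
    InferenceRequest small = { 2.0, false }, big = { 400.0, false };
    dispatch(&small);
    dispatch(&big);
    return 0;
}
```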

David: I was going to say, there's no better data point for this than the fact that NVIDIA tried to buy ARM.

Rene: When he tried to buy ARM, ARM was a $2 billion company and he was a $25 billion company. He certainly didn't do it because he wanted to be revenue accretive. He knew the importance of what ARM meant to the industry.

Ben: Was that really the valuations of both companies?

Rene: No, those were the revenue run rates. NVIDIA tried to buy us for $40 billion back in 2020. I think their market cap was $350-$400 billion; it wasn't anything close to what it is now. If you looked at it from the outside in back then, in 2020, we had not yet gone public, and we hadn't really started the turnaround in our core businesses yet.

There were a lot of people at the time looking at the deal. Masa bought ARM in 2016 for $32 billion and basically sold it four years later for $32 billion plus some change: $40 billion. There were a lot of critics of the deal who said, NVIDIA overpaid for this thing because it's not really a growth company.

Ben: Do you mean SoftBank overpaid?

Rene: No, I'm sorry. NVIDIA overpaid for ARM.

David: So had the acquisition gone through, NVIDIA would have been overpaying for it?

Rene: Yeah, I'm sorry. The price that they put down was $40 billion, and there was a lot of criticism that they had overpaid back in the day. You look back at it now and it seems laughable: their market cap is probably 10x, their revenue is 4x.

By the way, the last thing I would say about that acquisition: first off, a lot of people thought NVIDIA overpaid. Secondly, a lot of people hated it. There was a lot of opposition that we got from regulators, customers, and ecosystem partners, which I think underscored the importance of the company. In a roundabout way it said, gosh, this is a company being bought for this amount of money at this valuation, and so many people are against it. Maybe the company is more important than folks had originally given us credit for.

Ben: This seems like an area where regulation did exactly what it's supposed to do. You were a broad horizontal provider that served a whole bunch of customers, was integral to an industry, and is essential for the further advancement of humankind, truly, in our most important innovation area. One of your customers wanted to own all of it, which over time presumably means all the other customers wouldn't quite have the same access to it.

Rene: It was a fascinating case study, because I learned a lot about M&A and regulatory review. One of the things that surprised our teams advocating for the deal was where most of the blocking takes place. Let me back up and say it this way: it was a vertical merger, not a horizontal merger.

Typically in a vertical merger, people will object if it forecloses a market or stifles competition in a given market. But at the time, we were predominantly smartphone revenue, and NVIDIA is not a smartphone company. Folks looked at it and said, well, because it doesn't really violate the vertical integration concerns, and regulators tend to care more about the near term than the long term, this should be okay. What the regulators actually did in that case was care much more about the long term, about what may happen someday, than about what they thought would happen in the near term.

David: I'm curious, actually, if you know, since you were at NVIDIA for a long time. The ARM journey for NVIDIA also seems like an improbable one, because NVIDIA started as, obviously, a graphics card company for PCs, which ran on x86, and then made this incredible shift into the data center. But at the time they were making that shift, the data center was also an x86 environment. When did the company start really realizing, hey, this ARM platform is going to be a lot more than just not melting plastic and 2G phones?

Rene: NVIDIA has been an amazing partner for ARM. When I was working there, we made a very distinct pivot to try to accelerate our mobile business with Tegra and really accelerate everything we were doing with ARM. NVIDIA bought a company by the name of PortalPlayer. You may remember those guys; they were actually doing the audio chip for the iPod back in the day. We at NVIDIA were actually doing the SoC for the Zune. I don't know if you guys remember that.

David: That's right.

Rene: Yeah, that was the Microsoft equivalent of the iPod. We had been flirting with all kinds of stuff that was ARM-based, whether it was Microsoft with Windows CE and the Zune, but the real things that had NVIDIA double down on ARM were (1) when the smartphone thing really took off, and (2) when Microsoft made the commitment to do Windows on ARM. This was the business I was managing at the time, in the 2009-ish timeframe. We felt at NVIDIA that we were very well-positioned in that market because of all the history NVIDIA had with the Windows ecosystem and all the work they had done with PC gaming.

David: DirectX, yeah.

Rene: Yeah. I was running the business for all laptops back then for NVIDIA. I took over all the Windows on ARM stuff, so I was doing that firsthand myself.

Ben: Windows on ARM is like another miracle if you can make it happen. All that translation layer, all those compilers, everything that's been written for decades specifically for x86 chips. Theoretically, you're going to be able to press one button, compile your code differently, and now it runs on ARM. That is quite the promise.
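
[Editor's note: a minimal sketch of what "compile your code differently" means in practice. The compiler invocations in the comments assume a typical GCC/Clang cross-toolchain; the binary-translation layer Ben mentions is a separate mechanism for x86 binaries that can't simply be recompiled.]

```c
/* The same portable C source can target either ISA; only the compiler
 * invocation changes. Typical invocations (toolchain names vary):
 *   x86-64:  gcc -O2 hello.c -o hello-x86
 *   AArch64: aarch64-linux-gnu-gcc -O2 hello.c -o hello-arm
 */
#include <stdio.h>

int main(void) {
#if defined(__aarch64__)
    puts("Compiled for a 64-bit ARM CPU");     /* predefined GCC/Clang macro */
#elif defined(__x86_64__)
    puts("Compiled for a 64-bit x86 CPU");
#else
    puts("Compiled for some other architecture");
#endif
    return 0;
}
```

[The hard part isn't code like this; it's the decades of x86-specific assembly, intrinsics, and binary-only dependencies underneath real applications, which is where translation layers and native ports come in.]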

Rene: Yeah. A lot of the native stuff now has been ported to ARM, and that really benefited from the work on mobile. Think about all the Microsoft apps that run on iPads today, Office and whatnot; we got a huge benefit from that. Going back to your question, David, in terms of NVIDIA, they stuck with ARM for quite some time. We stuck with Windows on ARM. After I left, ARM became the default platform for everything they're doing in automotive. If you look at the NVIDIA DRIVE platform, and everything NVIDIA does around robotics, that's all ARM-based.

Everything that they do that uses "accelerated computing", that whole software stack, all runs on ARM, which is why ARM is so ubiquitous in automotive. If you look at work done by Renesas, by NVIDIA, or by Qualcomm, a lot of those software stacks are now native and all run on ARM. It's why we're so strong in the automotive space. Back to your NVIDIA question, they were very committed to ARM for a long, long time: a combination of A, Tegra, B, Windows, and then all the stuff on auto.

David: And then the data center probably really starting to come online.

Rene: The data center really started to take off. Back to the customization, the way they architected Grace Hopper and now Grace Blackwell gives them a degree of innovation that they can't get any other way.

Ben: All right. On a closing topic, I want to ask about what seems to me to be a little bit of a strategic evolution. Can you tell us what you're doing with subsystems and how that came to be?

Rene: Yeah. Subsystems are a natural extension of an IP business model. The core model is doing CPUs, and when I say we do CPUs, I'm oversimplifying; there are a lot of other products we do inside the company. We do GPUs, we do NPUs for AI, and we do all of the complex interconnect that's required to build a server chip, CMNs, which are coherent mesh networks. These are essentially the plumbing. If you're building an SoC that has 128 CPUs, you need this mesh network that connects the CPUs together and then interfaces them to memory. It's just a lot of plumbing.

This is the analogy, and I know your audience is pretty technical, but it's the one I used during our roadshow. Think of all those things as disparate Lego blocks. To very sophisticated customers, you can basically sell or provide these Lego blocks, and they will build a beautiful copy of the Statue of Liberty. Or you can basically say, look, connect everything exactly this way in this particular form, and you will get the Statue of Liberty a heck of a lot faster than if you designed it yourself.

David: It really is like Lego.

Rene: That's what compute subsystems are. We basically take the 128 CPUs, the coherent mesh network, and the other controllers and memory interfaces, and not only do we stitch them together, we also verify that it's all going to be functionally correct, so that when you put it into your design, it just works. That can save three, six, nine months of engineering time. You can get a product out to market a heck of a lot sooner.
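
[Editor's note: a hypothetical sketch of what a pre-verified compute subsystem looks like if you model it in code. Every field and name here is illustrative; this is not ARM's actual deliverable or configuration format.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the pieces Rene lists: CPU clusters, a coherent
 * mesh network (CMN), and memory/IO interfaces, delivered pre-stitched. */
typedef struct {
    uint32_t cpu_count;            /* e.g. 128 cores                    */
    uint32_t mesh_rows, mesh_cols; /* coherent-mesh topology            */
    uint32_t memory_channels;      /* DDR/HBM controller interfaces     */
    uint32_t io_controllers;       /* PCIe and similar                  */
    bool     integration_verified; /* stitched and functionally checked */
} ComputeSubsystem;

/* The value proposition in one predicate: a licensee drops the block in
 * only if the plumbing has already been validated, skipping months of
 * verification work. The capacity check is purely illustrative. */
static bool ready_to_integrate(const ComputeSubsystem *css) {
    uint32_t mesh_ports = css->mesh_rows * css->mesh_cols;
    return css->integration_verified && css->cpu_count <= mesh_ports * 4;
}

int main(void) {
    ComputeSubsystem css = { 128, 8, 8, 12, 4, true };
    return ready_to_integrate(&css) ? 0 : 1;
}
```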

We can take that a step deeper, which we do, in that we may work with a TSMC, a Samsung, or an Intel and say, if you build it this way with these types of characteristics, we will guarantee that you will get 4.4 gigahertz of frequency output. We know that you can get this performance. We are taking it much further than we have before. It's almost, I would say, a virtual chipset. Not quite to the point of building the final chip, but it's pretty darn close. You say, well, why would you do that?

Ben: Yeah, this is a lot of integration. It's a lot of bundling, from just the instruction set architecture, to designs, to now this complete solution: hey, connect it all this way and 4.4 gigahertz is yours.

Rene: Yup. I'll call it packaging instead of bundling, but it's a way of providing a full solution that simply allows customers to get to market a heck of a lot faster. It provides us a lot of benefit because we can do early software prototyping sooner, but for customers, the big benefit is they get to market much faster than they would otherwise. Back to the IP standpoint: connecting up all the CPUs and stitching together the IP that we deliver, that's not really value-add for an end customer.

An end customer that's building a phone chip wants to focus on the ISP and the camera. If you're a cloud customer, you may want to focus on the accelerator or something in analog IO. For us, our position is: if it's around the compute, essentially what's running the main software of the system, and how that performs in a certain fab, we're probably in the best position to define what the best performance output will look like.

Ben: All right, you now have essentially a reference design for how to make an amazing chip. Are we ever going to see ARM call up TSMC and say, hey, go make a few million of these?

Rene: Nothing I can say about that today.

David: Fair enough.

Ben: Great. Well, Rene, this has been awesome. Thank you so much.

Rene: That was great. Thank you.

Ben: Awesome, listeners, we'll see you next time.

David: We’ll see you next time.

Note: Acquired hosts and guests may hold assets discussed in this episode. This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
