Hot Chips for AI: Day 2


Some of the chips discussed at Hot Chips: Intel Lunar Lake, IBM Telum II, Nvidia Blackwell, and AMD MI300X.

Although I refer to this as Day 2 of Hot Chips, it is really the first day of the main program, which focuses on the chips and technologies from the participating companies. In the past, the two days would be packed with new technology, architecture, and chip announcements. As many technology companies have transitioned to hosting their own product launch events, Hot Chips, like many other technology conferences, is now often less about new announcements and more about previously announced products, such as a deep dive into a processing core or architecture. Even when a session is largely a recap of details previously announced, there is still value in gleaning new details from the presentations, the Q&A, or offline discussions with the product managers and architects. With that said, there is too much information to cover in one article; it would take a book to cover everything presented in a single day at Hot Chips. I will summarize it here and encourage everyone to review the presentations and/or seek information from the respective companies. Note that Hot Chips is also a virtual conference, so you can participate remotely and have full access to the presentation materials for a nominal fee. Eventually, the presentations will be released into the public domain.

Besides the great California weather, the beautiful Stanford University campus, and the nerdy discussions about chips and technology, one of the other benefits of attending Hot Chips is getting a feel for the industry and what's changing. The most notable change evident at Hot Chips this year is the impact of AI. AI is driving the market for new chips and chip architectures, and increasingly, it is being used in the development of new chips and systems. Another trend is the need to architect “balanced” designs. In the past, the first generation of a new processor architecture was often somewhat inefficient. The long list of proposed requirements, the limited development window, and the high cost of silicon respins led to what I would call less-than-optimal designs. It was usually the second or even third generation of an architecture where missing features were added and the characteristics of the design were refined and balanced into a more efficient solution. Companies are now talking about delivering a balanced solution in the first generation, which makes the upfront analysis, simulation, and design work more critical. Much of this is being enabled by better EDA tools, some of which leverage AI, and by the use of AI agents to perform specific tasks in the design and validation process.

If this year’s high attendance at Hot Chips is any indication, there is growing interest both in the AI solutions available today and in how to design better ones. It should also be noted that the conference is popular with students and professors, and academic participation remains fairly strong.

The first official day of the conference featured four sessions: one on high-performance processors, one on specialized processors, and two on AI processors. To be honest, they all seem to bleed together in one aspect or another. Once again, there was too much information to cover all the presentations in detail, so the following is a brief summary of the presentation topics.

  • Qualcomm provided a deep dive into its new Arm-compatible Oryon CPU core, which was first announced for the Snapdragon X Elite SoC for thin-and-light mobile AI PCs but will be used in everything from XR to automotive in the future.
  • Intel provided an overview of Lunar Lake, its next-generation Core Ultra processor for thin-and-light mobile AI PCs, and its multi-tile design.
  • IBM introduced the new Telum II processor and Spyre AI accelerators for the next-generation IBM Z systems.
  • Tenstorrent provided an overview of the Blackhole AI processor architecture and programming model.
  • SK Hynix discussed an accelerator-in-memory (AiM) computing architecture in development.
  • Intel discussed the Xeon 6 server processor architectures, which include the recently announced Sierra Forest and the upcoming Granite Rapids products.
  • OpenAI discussed the challenges of scaling compute infrastructure to support the rapid growth of AI.
  • Nvidia provided a deep dive into certain aspects of its Blackwell platform and GPU architecture, including early performance data.
  • SambaNova provided details on its new SN40L RDU AI processor.
  • In its second presentation of the day, Intel discussed the architecture and programming model for the forthcoming Gaudi 3 AI accelerator.
  • AMD provided a deep dive into the MI300X AI accelerator architecture and its chiplet designs.
  • Broadcom discussed a new co-packaged optics (CPO) platform for in-system optical interconnects to scale past 100 Tbps.
  • And last, but not in any way least, FuriosaAI provided details about its new RNGD processor and programming models for AI processing.

It was a full day of presentations packed with technical details, and it was only the first day of product and technology presentations. Day 3 of Hot Chips will feature more sessions on AI, networking, and high-performance processors, as well as a perspective on pervasive AI from retiring AMD President and former Xilinx CEO Victor Peng. Look for that recap here on LinkedIn. After the conclusion of Hot Chips, my colleague Kevin Krewell will join me in a video recap that will be available on EETimes.com.

