OpenAI Set To Finalize First Custom Chip Design This Year (reuters.com)
OpenAI is pushing ahead on its plan to reduce its reliance on Nvidia for its chip supply by developing its first generation of in-house AI silicon. From a report: The ChatGPT maker is finalizing the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co, sources told Reuters. The process of sending a first design through a chip factory is called "taping out."
The update shows that OpenAI is on track to meet its ambitious goal of mass production at TSMC in 2026. A typical tape-out costs tens of millions of dollars and takes roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. There is no guarantee the silicon will function on the first tape-out, and a failure would require the company to diagnose the problem and repeat the tape-out step. Inside OpenAI, the training-focused chip is viewed as a strategic tool to strengthen OpenAI's negotiating leverage with other chip suppliers, the sources said.
Why didn't ChatGPT design its own chips? (Score:2)
The chip is being designed by OpenAI’s in-house team led by Richard Ho
Did anyone tell Richard and the team they can be replaced with the LLM?
Re:Why didn't ChatGPT design its own chips? (Score:4, Funny)
I guess they couldn't find a torrent with leaked nvidia designs to train ChatGPT on - you can't plagiarise stuff you've never seen.
Re: (Score:2)
Automated design tools have been a thing in the semiconductor world for some time. Not exactly the same as using an LLM for everything, but unless you're a team at Intel, you aren't doing everything by hand (and even they aren't necessarily doing it old school anymore).
Re: (Score:2)
My bad, I was just being cheeky. You are, as they say, "the best kind of correct."
Re: (Score:2)
Although... Were you to ask AI what sort of computing architecture it would prefer to run on, you might get a different answer to what humans are suggesting.
e.g. the Itanic is no more, but one writer suggests revisiting VLIW -- letting the AI dynamically optimize its own code morphs.
https://thilthomas.medium.com/... [medium.com]
Re: (Score:2)
Wow we might actually get a decent compiler for Itanium.
Re: (Score:2)
No worries. Early automated design tools for semiconductor design were considered to be sketchy, and it took a long time for some design teams to get good results with them. Remember AMD's Bulldozer?
https://arstechnica.com/gadget... [arstechnica.com]
It's completely understandable why people would be making jokes about AI designing a CPU.
Could they onshore this? (Score:2)
Micron makes chips near me. So it can be done.
OpenAI is so sure they're going to eliminate my job, it'd be nice if they gave something back.
Oh, boy (Score:2)
Iterative AI (Score:2)
Semiconductor design and testing already use a huge amount of automation. In principle they could train an LLM to design and validate its own hardware using existing libraries and tools. They'll get a buggy v0.1 from TSMC, feed the results back into the LLM, and iterate.
Pretty basic stuff? (Score:2)
As I understand it, AI chips are essentially big arrays of low-precision (2-, 4-, or 6-bit) multiply-add units. Not a super big challenge.
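The comment above can be sketched in a few lines. This is a minimal illustrative model, not any vendor's actual design: it emulates one multiply-accumulate (MAC) lane with signed 4-bit operands feeding a wide integer accumulator, which is how real NPUs keep narrow math from overflowing. The function names are made up for the sketch.

```python
# Sketch of a single MAC lane in an AI accelerator: low-precision
# multipliers, wide accumulator. Purely illustrative.

def quantize4(x):
    """Clamp an integer to the signed 4-bit range [-8, 7]."""
    return max(-8, min(7, x))

def mac_array(weights, activations):
    """Dot product of 4-bit values accumulated in a wide (Python) integer,
    standing in for the 32-bit accumulators hardware would use."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += quantize4(w) * quantize4(a)  # each product fits in 8 bits
    return acc

print(mac_array([3, -8, 7], [2, 1, -4]))  # 3*2 + (-8)*1 + 7*(-4) = -30
```

The hard part in real silicon isn't this arithmetic; it's replicating it tens of thousands of times and keeping every lane fed with data.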
Catching up to Google (Score:2)
So OpenAI is going to catch up to Google, doing in two years what Google needed more than ten years to do. That would be quite the accomplishment. The basic theory of building an AI processor seems fairly straightforward, and yet not a single hyperscaler has been able to eliminate their dependence on Nvidia GPUs. In practice, the challenge is far harder than it appears. Math processing is the easy part. Moving data efficiently and quickly to where it needs to be is the real hardware challenge.
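The point about data movement can be made with back-of-the-envelope arithmetic (the throughput and bandwidth figures below are illustrative round numbers, not any real chip's specs). For a square N x N matrix multiply, compute grows as N cubed while data moved grows as N squared, so small workloads starve the math units waiting on memory:

```python
# Arithmetic intensity (flops per byte) of an N x N matrix multiply,
# assuming fp16 elements and ideal reuse (read A and B once, write C once).

def arithmetic_intensity(n, bytes_per_elem=2):
    flops = 2 * n**3                         # one multiply + one add per term
    bytes_moved = 3 * n**2 * bytes_per_elem  # read A, read B, write C
    return flops / bytes_moved

# A hypothetical chip with 100 TFLOP/s of compute and 1 TB/s of bandwidth
# only keeps its math units busy above ~100 flops/byte:
for n in (64, 512, 4096):
    print(n, round(arithmetic_intensity(n), 1))
```

At N=64 the intensity is about 21 flops/byte and the chip idles on memory; only around N=512 and beyond does the math become the bottleneck. That crossover is why interconnects, memory hierarchy, and scheduling, not the multiply-add arrays, are where the design effort goes.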