AI Technology

OpenAI Set To Finalize First Custom Chip Design This Year (reuters.com)

OpenAI is pushing ahead with its plan to reduce its reliance on Nvidia for its chip supply by developing its first generation of in-house AI silicon. From a report: The ChatGPT maker is finalizing the design for its first in-house chip in the next few months and plans to send it for fabrication at Taiwan Semiconductor Manufacturing Co, sources told Reuters. The process of sending a first design through a chip factory is called "taping out."

The update shows that OpenAI is on track to meet its ambitious goal of mass production at TSMC in 2026. A typical tape-out costs tens of millions of dollars and takes roughly six months to produce a finished chip, unless OpenAI pays substantially more for expedited manufacturing. There is no guarantee the silicon will function on the first tape-out; a failure would require the company to diagnose the problem and repeat the tape-out step. Inside OpenAI, the training-focused chip is viewed as a strategic tool to strengthen the company's negotiating leverage with other chip suppliers, the sources said.



Comments:
  • The chip is being designed by OpenAI’s in-house team led by Richard Ho

    Did anyone tell Richard and the team they can be replaced with the LLM?

  • Micron makes chips near me. So it can be done.

    OpenAI is so sure they're going to eliminate my job, it'd be nice if they gave something back.

  • Is it wrong for me to hope they let ChatGPT design the chips?
  • Semiconductor design and testing already use a huge amount of automation. In principle they could train an LLM to design and validate its own hardware using existing libraries and tools. They'll get a buggy v0.1 from TSMC, feed the results back into the LLM, and iterate.
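
    For illustration, a toy Python sketch of that generate-validate-iterate loop. This is a hypothetical outline, not a real flow: llm_generate and run_validation are made-up stand-ins for an actual model and real EDA simulation/lint tools.

        # Toy loop: generate a design, validate it, feed failures back, repeat.
        # Everything here is a stand-in; no real model or EDA tool is called.
        def llm_generate(spec, feedback=()):
            # Pretend the model fixes one reported bug per piece of feedback.
            remaining = max(0, 2 - len(feedback))
            return {"rtl": f"// RTL for {spec}", "bugs": remaining}

        def run_validation(design):
            # Pretend the test bench reports each remaining bug.
            return [f"bug-{i}" for i in range(design["bugs"])]

        def iterate_design(spec, max_revisions=5):
            feedback = []
            for rev in range(max_revisions):
                design = llm_generate(spec, feedback)
                failures = run_validation(design)
                if not failures:
                    return design, rev  # clean run: ready to tape out
                feedback.extend(failures)  # feed results back into the model
            raise RuntimeError("did not converge; another costly tape-out revision")

        print(iterate_design("int4 matmul accelerator"))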

  • As I understand it, AI chips are just big arrays of low-precision (2/4/6-bit) multiply/add units. Not a super big challenge.
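
    A minimal Python illustration of that idea, assuming signed 4-bit operands feeding a wide accumulator (roughly how real MAC arrays avoid overflow); the helper names and numbers are made up for the example:

        # One low-precision multiply-accumulate (MAC) "unit": 4-bit signed
        # inputs, wide accumulator. An AI chip is essentially a big grid of these.
        def clamp_int4(x):
            return max(-8, min(7, x))  # signed 4-bit range: [-8, 7]

        def mac_dot(weights, activations):
            acc = 0  # accumulator stays wide (e.g. int32 in hardware)
            for w, a in zip(weights, activations):
                acc += clamp_int4(w) * clamp_int4(a)
            return acc

        print(mac_dot([3, -2, 7], [1, 4, -8]))  # 3*1 + -2*4 + 7*-8 = -61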

  • So OpenAI is going to catch up to Google, doing in two years what Google needed more than ten years to do. That would be quite the accomplishment. The basic theory of building an AI processor seems fairly straightforward, and yet not a single hyperscaler has been able to eliminate its dependence on Nvidia GPUs. In practice, the challenge is far harder than it appears. Math processing is the easy part. Moving data efficiently and quickly to where it needs to be is the real hardware challenge.

"Now this is a totally brain damaged algorithm. Gag me with a smurfette." -- P. Buhr, Computer Science 354

Working...