Facing More Nimble Rivals, OpenAI Won't Bend (semafor.com)
Customers have asked to run OpenAI models on non-Microsoft cloud services or on their own local servers, but OpenAI has no immediate plans to offer such options, Semafor reported Wednesday, citing people familiar with the matter. From the report: That means there's one area where rivals of the ChatGPT creator have an edge: flexibility. To use OpenAI's technology, paying customers have two choices: They can go directly through OpenAI or through investment partner Microsoft, which has inked a deal to be the exclusive cloud service for OpenAI.
Microsoft will not allow OpenAI's models to be available on other cloud providers, according to a person briefed on the matter. Companies that exclusively use rivals, such as Amazon Web Services, Google Cloud or Oracle, can't be OpenAI customers. But Microsoft would allow OpenAI models to be offered "on premises" in which customers build their own servers. Creating such solutions would pose some challenges, particularly around OpenAI's intellectual property. But it is technically feasible, this person said.
must RENT / BUY OUR disks at X5-X10 markup (Score:2)
infrastructure (Score:3, Informative)
> Microsoft would allow OpenAI models to be offered "on premises" in which customers build their own servers. Creating such solutions would pose some challenges
Yeah, mostly around power and cooling. Machines with a bunch of Nvidia cards in them have very high power density. You are not filling your standard rack offering with these units, not even close. You are going to have to bring in special power, need a larger footprint, and need more cooling.
Re: (Score:1)
Re: (Score:2)
You clearly haven't been in hyperscaling datacenters.
AWS's standard rack position is 30 kVA; the next iteration will be 40 kVA, and future AI/ML solutions will be multi-rack.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
https://resources.nvidia.com/e... [nvidia.com]
Each H100 SXM card can use up to 700 W. You can do the math on how many of these cards you can put in one chassis, and maybe include some power use for the chassis/host itself, before you exhaust a full rack's worth of normal datacenter power. You are going to need 240 V to start; your 120 V PDU won't do.
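Doing that math as a quick sketch: the 700 W per-GPU figure is from NVIDIA's H100 datasheet, but the rack budgets and the 2 kW host overhead below are illustrative assumptions, not any provider's actual specs.

```python
# Back-of-the-envelope: how many 700 W H100 SXM GPUs fit in a given
# rack power budget? Host overhead and rack sizes are assumptions.
H100_SXM_WATTS = 700         # per-GPU max, NVIDIA H100 SXM datasheet
HOST_OVERHEAD_WATTS = 2000   # assumed CPUs/fans/NICs per 8-GPU chassis

def gpus_per_rack(rack_watts, gpus_per_chassis=8):
    """Return (total GPUs that fit, watts per chassis)."""
    chassis_watts = gpus_per_chassis * H100_SXM_WATTS + HOST_OVERHEAD_WATTS
    chassis_fit = rack_watts // chassis_watts
    return chassis_fit * gpus_per_chassis, chassis_watts

# Legacy ~10 kW rack vs. the 30/40 kVA positions mentioned above
for rack_kw in (10, 30, 40):
    gpus, chassis_w = gpus_per_rack(rack_kw * 1000)
    print(f"{rack_kw} kW rack: {gpus} GPUs ({chassis_w} W per 8-GPU chassis)")
```

Under these assumptions a legacy ~10 kW rack holds exactly one 8-GPU chassis, which is the commenter's point: you don't fill a standard rack with these boxes.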
Re: (Score:2)
https://lambdalabs.com/deep-le... [lambdalabs.com]
6x 3000W power supplies for one chassis, 210-240Vac / 16-14.5A / 50-60 Hz for each one.
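A quick sanity check of those numbers: input current runs above the ideal P/V because the supplies are not 100% efficient; the 0.94 efficiency factor here is an assumption for illustration, not a published spec.

```python
# Sanity-check the chassis figures: six 3000 W supplies at 210-240 Vac.
PSU_WATTS = 3000
N_PSUS = 6
EFFICIENCY = 0.94  # assumed PSU efficiency, for illustration only

for volts in (210, 240):
    amps = PSU_WATTS / EFFICIENCY / volts
    print(f"{volts} V: ~{amps:.1f} A per supply")   # lands near 15.2 / 13.3 A

total_kw = N_PSUS * PSU_WATTS / 1000
print(f"max chassis draw: {total_kw:.0f} kW")       # 18 kW for one box
```

That ~13-15 A per supply lines up with the 16-14.5 A in the spec sheet, and 18 kW for a single chassis is far beyond what a standard 120 V rack PDU can deliver.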
Re: (Score:3)
Re: (Score:2)
Nothing to see here! (Score:4, Informative)
Re: Nothing to see here! (Score:3)
PunGPT (Score:1)
I see what they did there.
bollocks (Score:1)
They won't let you look under the hood or risk anyone else having direct access because they know it's built on copyrighted data sets, pirated books and the like, plus I expect heavy usage of GPL software. Nobody gets to see under the hood, because then the game is over. "You don't have the right equipment" and other such excuses are just bluff and hand-waving to delay the inevitable discovery.
Re: (Score:1)
Re: (Score:2)
The weights won't tell you what material the model was trained on any more than API access would.
Anyway, training on copyrighted materials, including books or GPL source code, does not infringe copyright any more than a student who studies that material in class infringes copyright because they learned how to do something.