AI Technology

Facing More Nimble Rivals, OpenAI Won't Bend (semafor.com) 17

Customers have asked to run OpenAI models on non-Microsoft cloud services or on their own local servers, but OpenAI has no immediate plans to offer such options, Semafor reported Wednesday, citing people familiar with the matter. From the report: That means there's one area where rivals of the ChatGPT creator have an edge: flexibility. To use OpenAI's technology, paying customers have two choices: They can go directly through OpenAI or through investment partner Microsoft, which has inked a deal to be the exclusive cloud service for OpenAI.

Microsoft will not allow OpenAI's models to be made available on other cloud providers, according to a person briefed on the matter. Companies that exclusively use rivals, such as Amazon Web Services, Google Cloud or Oracle, can't be OpenAI customers. But Microsoft would allow OpenAI models to be offered "on premises," in which case customers build their own servers. Creating such solutions would pose some challenges, particularly around OpenAI's intellectual property, but it is technically feasible, this person said.

  • must RENT / BUY OUR disks at 5x-10x markup

  • infrastructure (Score:3, Informative)

    by awwshit ( 6214476 ) on Wednesday July 26, 2023 @02:46PM (#63716580)

    > Microsoft would allow OpenAI models to be offered "on premises" in which customers build their own servers. Creating such solutions would pose some challenges

    Yeah, mostly around power and cooling. Machines with a bunch of Nvidia cards in them have very high power density. You are not filling your standard rack offering with these units, not even close. You are going to bring in special power, need a larger footprint, and need more cooling.

    • Bullshit. You use the same data center and fewer servers per rack, with exactly the same cooling as the hot, inefficient 2010-era DL380s. Every data center I have built, been in, or rented ran 1.2-2 kVA per rack for compute and 2-4 kVA for storage and superscalar areas. Perhaps you can push switches and patch bays down to 0.5 kVA per tile. The private data centers of the 2010s are awfully empty, with the former Microsoft farms having moved to the vendors' clouds. 6 kVA where the mainframes used to be is not rare.
      • by r1348 ( 2567295 )

        You clearly haven't been in hyperscale datacenters.

        AWS's standard rack position is 30 kVA, the next iteration will be 40 kVA, and future AI/ML deployments will be multi-rack.

        • In the whole world there are maybe three hyperscalers (the big three cloud providers), and even the tier-2 players can't catch them. So maybe get a grip and realize he is right: the standard power draw in most data centers is fine for a typical Fortune 500 that wants to build a basic model.
          • Hyperscale means exactly that. You could easily fit a 100 kW datacenter in two thousand sq ft. Also, these companies don't want to train models; they want to consume models.
      • https://resources.nvidia.com/e... [nvidia.com]

        Each H100 SXM card can use up to 700 W. You can do the math on how many of these cards you can put in one chassis, and maybe include some power use for the chassis/host itself, before you exhaust a full rack's worth of normal datacenter power; a quick sketch of that math is below. You are going to need 240 V to start; your 120 V PDU won't do.
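
        A minimal back-of-the-envelope sketch in Python, using the 700 W per-card figure from the linked spec; the host overhead and the rack power budgets are illustrative assumptions, not measured numbers:

        # Back-of-the-envelope rack power math for 8-GPU H100 SXM chassis.
        CARD_W = 700          # max draw per H100 SXM card (Nvidia spec sheet)
        HOST_W = 1500         # assumed CPU/fan/NIC overhead per chassis (illustrative)
        GPUS_PER_CHASSIS = 8  # typical HGX/DGX-style layout

        def chassis_per_rack(rack_budget_kw):
            """How many 8-GPU chassis fit in a given rack power budget."""
            chassis_w = GPUS_PER_CHASSIS * CARD_W + HOST_W  # ~7.1 kW each
            return int(rack_budget_kw * 1000 // chassis_w)

        for budget_kw in (2, 4, 15, 30):  # legacy compute, legacy storage, dense, hyperscale
            n = chassis_per_rack(budget_kw)
            print(f"{budget_kw:>2} kW rack -> {n} chassis, {n * GPUS_PER_CHASSIS} GPUs")

        On those assumptions, a legacy 2-4 kVA rack holds zero chassis, while a 30 kW hyperscale position holds four (32 GPUs), which is roughly the disagreement in this thread.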

      • https://lambdalabs.com/deep-le... [lambdalabs.com]

        6x 3000 W power supplies for one chassis, 210-240 Vac / 16-14.5 A / 50-60 Hz each; a rough reading of what that implies is sketched below.
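
        A quick sanity check on those supplies, assuming a 2N-redundant configuration (an assumption; the linked page would confirm the actual redundancy scheme):

        # Nameplate vs. usable power for a 6x 3000 W chassis.
        PSUS, PSU_W = 6, 3000
        nameplate_kw = PSUS * PSU_W / 1000  # 18.0 kW total nameplate
        usable_kw = nameplate_kw / 2        # 9.0 kW if supplies are 2N redundant (assumed)
        print(nameplate_kw, usable_kw)      # 18.0 9.0

        Either way you read the redundancy, a single chassis draws several legacy racks' worth of power.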

      • You clearly have no idea about how neural net training works.
      • As a data point from 3 years ago: to run a full in-house GPT instance you'd need two DGX A100s, which drew 6.5 kW each. A rack of 8 of them would pull 52 kW. I don't know about the current kit.
  • Nothing to see here! (Score:4, Informative)

    by oldgraybeard ( 2939809 ) on Wednesday July 26, 2023 @02:50PM (#63716600)
    Just Microsoft being who they have always been.
  • > on their own local servers, but OpenAI has no immediate plans to offer such ... That means there's one area where rivals of the ChatGPT creator have an edge: flexibility.

    I see what they did there.

  • by Anonymous Coward

    They won't let you look under the hood or risk anyone else having direct access, because they know that it's built on copyrighted data sets, pirated books and the like, plus I expect heavy usage of GPL software. Nobody gets to see under the hood, because then the game is over; "you don't have the right equipment" and other such excuses are just bluff and hand-waving to delay the inevitable discovery.

    • People want applications, so they get shiny new applications. How many times have you heard someone publicly declare how irrelevant the OSI model is below layer 6? A goal of technology, ultimately, is to not even know you are using it.
    • The weights won't tell you what material it was trained on any more than API access would.

      Anyway, training on copyrighted materials, including books or GPL source code, does not infringe copyright any more than a student who studies that material in class infringes copyright because they learned how to do something.

"We don't care. We don't have to. We're the Phone Company."

Working...