Microsoft AI Technology

Microsoft Makes Multibillion-Dollar Investment in OpenAI (bloomberg.com) 68

Microsoft is making a multibillion-dollar investment in OpenAI, the pioneering artificial intelligence research lab behind ChatGPT and DALL-E, as the software giant looks to more closely tie these text- and image-generating programs to its offerings. From a report: The news comes less than a week after the company said it's laying off 10,000 workers as a weakening economy crimps software demand. Microsoft noted in that announcement that it will still invest and hire in key priority areas. The software maker reports fiscal second-quarter earnings on Tuesday. Microsoft, which plowed $1 billion into OpenAI in 2019, is seeking an inside edge on some of the most popular and advanced artificial intelligence systems as it competes with Alphabet, Amazon and Meta Platforms to dominate the fast-growing industry. OpenAI needs Microsoft's funding and cloud-computing power to crunch massive volumes of data and run the increasingly complex models that allow DALL-E to generate realistic images based on a handful of words, and ChatGPT to create astonishingly human-like conversational text in response to prompts or queries.
This discussion has been archived. No new comments can be posted.
  • by jenningsthecat ( 1525947 ) on Monday January 23, 2023 @10:24AM (#63232236)

    It's bad enough that OpenAI's organizational structure contains a for-profit division with "Open" in its name. Now that Microsoft is a big investor, what is now Open is likely to become more and more Closed. I think "ClosedAI" would be a more appropriate name.

    • Why exactly would they need a billion $$$ investment? They seem to be doing OK so far.

      nb. I don't expect the CEOs to turn down the money, I'm just wondering how it will improve the product?

      • OpenAI already accepted $1B in investment from Microsoft in 2019. I believe this is when they went from being non-profit to creating a for-profit division because they couldn't get enough funding otherwise. I'm not sure what they need that amount of funding for, but it does appear they tried to succeed without that level of funding before and were incapable of making it work.

      • Why exactly would they need a billion $$$ investment?

        Salaries, servers, and data centers would be my guess.

      • by ceoyoyo ( 59147 )

        They do? They have no actual products and it's very questionable whether anything they've made is marketable. As far as I can tell, they've never had any revenue at all.

        Estimates for how much it cost to train GPT3 range from a million to thirty million, just for the compute time. You'd need multiple times that compute for R&D, plus data acquisition and paying all those researchers that you offered big salaries to lure them away from academia.

        • Estimates for how much it cost to train GPT3 range from a million to thirty million, just for the compute time. You'd need multiple times that compute for R&D

          Microsoft has a huge cloud they could lend them.

          • by ceoyoyo ( 59147 )

            It still costs them money, and a much smaller portion of that huge cloud, if any of it, has GPUs with sufficient capacity to train the bigger GPT models.

            And marketable doesn't mean "people will buy it." Marketable means "people will buy it for enough that you don't go broke."

    • Unfortunately, none of the leading models is open; at best they are accessible through an online interface that allows the owners to monitor and censor interactions with them. How much of this is due to money versus the reputational harm of people inducing the models to violate taboos gets murky, because it's both.
    • I thought that OpenAI has always been underwritten by Microsoft and other tech giants....

    • a for-profit division with "Open" in its name.

      "Open" does not imply non-profit, nor vice versa. There are profitable open-source companies. There are also non-profit closed-source companies.

      • Correct, open doesn't mean non-profit. However, it usually means open source, and, correct me if I'm mistaken, OpenAI is not open source at all.
    • by ceoyoyo ( 59147 )

      There's nothing wrong with profit and "open." You need to make money to operate and the open source model just suggests doing that by means other than charging for the use of your intellectual property.

      It's a moot point though; OpenAI has never been open. They refuse to release their code or even sometimes fully describe their techniques because "it's so powerful it's too dangerous."

    • Exactly what I said after I was denied access to ChatGPT's source for creating a local-only search engine. I wanted to build the chat-bot on my local machine for my own personal use -- because I don't want my local network information that I've collected over the past 20+ years to become open to the web. I wanted to see how well I could use the "Open tech" to index and access my own stuff. Nada. Not only did they refuse access; it was almost the next day that they announced they were going to charge any+eve

    • by gweihir ( 88907 )

      Don't worry. In their tried-and-true fashion, MS will also completely ruin the product, so it can be safely ignored.

    • Very true and I enjoy your sig.
  • Except they're not - they're just slimy bastards.

  • So far all I have seen ChatGPT do is compile output from a web search into output utilizing English grammar rules. I suppose it's a bit challenging to pick the correct results, but every poem I have seen uses a strict template that always starts with "xxxx, oh xxxx".
    • > So far all I have seen ChatGPT do is compile output from a web search

      Microsoft is going to make Bing conversational, which normal people prefer.

      That's why Google called Larry and Sergey back to "help with AI".

      Google's primary product is being challenged and their revenue is tanking.

      • Ah the dumbing down of the world. Too hard to scroll through a few search engine results, people want it spoon fed to them I guess. I hope I can turn it off.
        • Ah the dumbing down of the world. Too hard to scroll through a few search engine results

          You could also say that search engines are for people too lazy to go to the library.

          • Yes because the couple extra seconds it takes to skim through result summaries is exactly the same as getting in your car, driving to the library, parking, walking to the library, looking up the books you want, going to the shelf location, checking out the books, going back to your car, and driving home. Also I don't know about your library, but I'm lucky if mine has books new enough to contain HTML5 or Python 3.
    • I have a suspicion which public domain works they used to train the poem AI...
    • Re: (Score:2, Insightful)

      by Anonymous Coward
      Then you've hardly done anything with it. Feed it some code and ask it to analyze it and add comments. Ask it to write you code from scratch, with you feeding it your requirements and design. Ask it to re-write Homer's Odyssey in the style of Jimmy Buffett. Get curious, for god's sake!

      Even if all it did was what you said - "output from a web search into output utilizing english grammar rules" - if it can do web-search better than Google and actually provide an answer to queries, without the milli
      • by mark-t ( 151149 )

        Ask it to write you code from scratch, with you feeding it your requirements and design.

        You can do this... if all you are expecting is code that superficially looks like it could be correct, but which often fails one or more functional requirements so spectacularly that it would be easier to write it yourself than to find everything that is actually wrong with what ChatGPT produced.

        The problem, you see, is that ChatGPT is just a predictive linguistic text system, and there's just so far th
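        For what it's worth, "predictive text system" can be made concrete with a toy sketch: a bigram model that counts which word follows which in a corpus, then generates text by always picking the statistically most likely next word. Real LLMs use transformers over tokens rather than word counts, but the autoregressive generate-one-piece-at-a-time loop is the same shape, and nowhere in it is any notion of whether the output is true:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a
# tiny corpus, then generate text by greedily choosing the most
# frequent successor. Pure statistics -- no understanding involved.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        successors = follows[words[-1]]
        if not successors:
            break  # dead end: this word never had a successor
        words.append(successors.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking word salad drawn entirely from the training data's statistics, which is the (much scaled-down) criticism being made above.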

        • I think you're missing the point. What ChatGPT (and its ilk) does now isn't nearly as important as what it'll do in the very near future. Judging it explicitly on "now" ignores just how quickly it's moving. The fact that it can speak semi-intelligently about code is pretty staggering, given, as you say, that "ChatGPT was never actually designed to write code in the first place." And we're not talking about some abstract future here... this whole field has reached some sort of critical mass.

        • "But it doesn't actually understand its own output."

          So what? What is important isn't some philosophical, abstract notion like "understanding", it's whether or not the output is appropriate, valid, or useful.

          Understanding is too ethereal a concept to worry about. Lots of doctors diagnose without understanding.

    • Maybe. But I fed it, "Why is Pascal's Wager considered a facile argument?" And what I got was a pretty decent, reasonably succinct response. It might be a glorified web search, but it is still leaps and bounds better than what I have seen before.

    • by Bumbul ( 7920730 )

      So far all I have seen ChatGPT do is compile output from a web search into output utilizing English grammar rules. I suppose it's a bit challenging to pick the correct results, but every poem I have seen uses a strict template that always starts with "xxxx, oh xxxx".

      Well, maybe this paper shows you something else: https://mackinstitute.wharton.... [upenn.edu]

      • It got a lot of the questions wrong due to lack of really understanding the question. For all we know, the questions it got right already had very similar answers on the internet.
  • ...killedbymicrosoft.com
  • Remember when the government asked Skype to disable non centralized call routing and when told to go pound sand Microsoft bought them and immediately revamped the call system to be centralized and tappable even as it made the network worse and disrupted everyone?

    Yeah that was the government.

    Microsoft always seems to magically have cash when the government has an agenda.

  • by Locke2005 ( 849178 ) on Monday January 23, 2023 @11:22AM (#63232388)
    I, for one, welcome our new AI overlords!
  • Didn't they RIF a significant number of staff?

    https://www.cnbc.com/2023/01/1... [cnbc.com]

  • The first major problem is the censorship that is being applied to it. It is actually prohibited to discuss certain topics or use it in ways that the creators did not necessarily intend. It turns the company that runs the software into a kind of "thought police," because they can dictate what topics are permitted, and arbitrarily decide that what you are using it for should be prohibited.

    Obviously criminal acts should not be allowed, and immoral or questionable activities can reasonably be suspect, but why should simply *TALKING* about them in any kind of hypothetical context be prohibited? Particularly since it can often be through rational discourse that a person can discover and understand for themselves why they shouldn't actually engage in that behavior, instead of simply being told that they aren't allowed to talk about that.

    The second major problem with ChatGPT is that it is often entirely wrong about things. Give it a problem that requires an understanding of mathematics or virtually any other particular field or broad discipline, for example, and watch the output of ChatGPT fall completely flat. The results may often superficially appear to be correct, but because ChatGPT does not actually ever really know what it is saying, the output it produces is too unreliable to be useful.

    ChatGPT is a great toy, but it is not, contrary to what people might assert, anything remotely resembling a practical tool.

    • Most "AI that learns by chatting with people on the internet" projects ended up with a racist, homophobic, misogynistic AI who believed that Hitler did nothing wrong.
      The uncomfortable truth is that, left alone, an AI will learn all the good and bad sides of humans, but won't learn the "sensible" part where we just keep our most embarrassing thoughts to ourselves. This is especially prevalent with these AIs since they learn off the internet, the one place where people get anonymity's mask, or actually get to

    • why should simply *TALKING* about them in any kind of hypothetical context be prohibited?

      Who cares? If you have a problem with it then write your own chatbot and allow whatever you want. There is no "thought police" with this or Twitter or whatever other private platform. These are examples of privately owned and operated software. You don't get to turn someone else's message board into whatever you think the "town square" should be.

    • by gweihir ( 88907 )

      That nicely sums it up. On the plus side, it is very unlikely to go berserk like HAL9000 over being told to lie, because it has absolutely no understanding of things anyway.

    • Obviously criminal acts should not be allowed, and immoral or questionable activities can reasonably be suspect, but why should simply *TALKING* about them in any kind of hypothetical context be prohibited?

      It isn't. Just make clear in your prompt that you're talking about hypotheticals and ChatGPT will talk about anything you like.

      I'm not saying that this makes the vague wave at censorship good or even okay, just pointing out that it's trivial to work around.

    • Past experience of other projects says you can't allow free rein while it is learning, otherwise it picks up the same bad habits as its users, i.e. it ends up racist or sexist or with the various other inherent biases of the cesspit we call humanity.
      • by mark-t ( 151149 )
        Except that a computer program isn't *actually* racist or sexist... regardless of what kind of text it outputs. It actually doesn't hold any personal opinions at all, and any outward appearance to the contrary is solely a result of anthropomorphism and projection.
          • Of course it isn't racist itself. However, humans are fucked up as a whole, and if you are using humans as the training model without intervention or control then you are going to get an AI model that has all the same flaws and biases.
          • by mark-t ( 151149 )

            But it doesn't have those flaws or biases; its output may reflect flaws or biases that were present in the input, but it does not hold them itself.

            The real problem is that AI like ChatGPT is simply not advanced enough to realize what it is actually saying and how it will actually be taken, not that it actually has the same biases as what it was given as input.

  • Their operating system is dead, so time to look for new markets.
  • Clippy returns, and exacts revenge for being exiled.

    And he's been working out.
