Microsoft Makes Multibillion-Dollar Investment in OpenAI (bloomberg.com) 68
Microsoft is making a multibillion-dollar investment in OpenAI, the pioneering artificial intelligence research lab behind ChatGPT and DALL-E, as the software giant looks to more closely tie these text- and image-generating programs to its offerings. From a report: The news comes less than a week after the company said it's laying off 10,000 workers as a weakening economy crimps software demand. Microsoft noted in that announcement that it will still invest and hire in key priority areas. The software maker reports fiscal second-quarter earnings on Tuesday. Microsoft, which plowed $1 billion into OpenAI in 2019, is seeking an inside edge on some of the most popular and advanced artificial intelligence systems as it competes with Alphabet, Amazon and Meta Platforms to dominate the fast-growing industry. OpenAI needs Microsoft funding and cloud-computing power to crunch massive volumes of data and run the increasingly complex models that allow DALL-E to generate realistic images based on a handful of words, and ChatGPT to create astonishingly human-like conversational text in response to prompts or queries.
There goes the neighbourhood! (Score:5, Insightful)
It's bad enough that OpenAI's organizational structure contains a for-profit division with "Open" in its name. Now that Microsoft is a big investor, what is now Open is likely to become more and more Closed. I think "ClosedAI" would be a more appropriate name.
Re: (Score:2)
Why exactly would they need a billion $$$ investment? They seem to be doing OK so far.
nb. I don't expect the CEOs to turn down the money, I'm just wondering how it will improve the product?
Re: There goes the neighbourhood! (Score:2)
OpenAI already accepted $1B in investment from Microsoft in 2019. I believe this is when they went from being non-profit to creating a for-profit division because they couldn't get enough funding otherwise. I'm not sure what they need that amount of funding for, but it does appear they tried to succeed without that level of funding before and were incapable of making it work.
Re: (Score:2)
Why exactly would they need a billion $$$ investment?
Salaries, servers, and data centers would be my guess.
Re: (Score:2)
They do? They have no actual products and it's very questionable whether anything they've made is marketable. As far as I can tell, they've never had any revenue at all.
Estimates for how much it cost to train GPT-3 range from $1 million to $30 million, just for the compute time. You'd need multiple times that compute for R&D, plus data acquisition, plus paying all those researchers you lured away from academia with big salaries.
Re: (Score:2)
Estimates for how much it cost to train GPT-3 range from $1 million to $30 million, just for the compute time. You'd need multiple times that compute for R&D
Microsoft has a huge cloud they could lend them.
Re: (Score:2)
It still costs them money, and a much smaller portion of that huge cloud, if any of it, has GPUs with sufficient capacity to train the bigger GPT models.
And marketable doesn't mean "people will buy it." Marketable means "people will buy it for enough that you don't go broke."
Re: (Score:2)
I thought that OpenAI has always been underwritten by Microsoft and other tech giants....
Re: (Score:2)
a for-profit division with "Open" in its name.
"Open" does not imply non-profit, nor vice versa. There are profitable open source companies. There are also non-profit closed-source companies.
Re: (Score:2)
I don't know if this is relevant.
But I came across this the other day. [youtube.com]
I haven't watched it, and I assumed (perhaps wrongly?), wow, the source code is truly available. But I have no time to learn these things.
Re: (Score:2)
There's nothing wrong with profit and "open." You need to make money to operate and the open source model just suggests doing that by means other than charging for the use of your intellectual property.
It's a moot point though; OpenAI has never been open. They refuse to release their code or even sometimes fully describe their techniques because "it's so powerful it's too dangerous."
Open: the new proprietary money scheme! (Score:2)
Exactly what I said after I was denied access to ChatGPT's source for creating a local-only search engine. I wanted to build the chat-bot on my local machine for my own personal use -- because I don't want my local network information that I've collected over the past 20+ years to become open to the web. I wanted to see how well I could use the "Open tech" to index and access my own stuff. Nada. Not only did they refuse access. It was almost the next day they announced they were going to charge any+eve
Re: (Score:2)
Don't worry. In their tried-and-true fashion, MS will also completely ruin the product, so it can be safely ignored.
But but they're a Non Profit ! (Score:2)
Except they're not - they're just slimy bastards.
Meh (Score:1)
Re: (Score:2)
> So far all I have seen ChatGPT do is compile output from a web search
Microsoft is going to make Bing conversational, which normal people prefer.
That's why Google called Larry and Sergey back to "help with AI".
Google's primary product is being challenged and their revenue is tanking.
Re: (Score:2)
Ah the dumbing down of the world. Too hard to scroll through a few search engine results
You could also say that search engines are for people too lazy to go to the library.
“O Romeo, Romeo, wherefore art thou Romeo?” (Score:2)
Re: (Score:2, Insightful)
Even if all it did was what you said - "output from a web search into output utilizing english grammar rules" - if it can do web-search better than Google and actually provide an answer to queries, without the milli
Re: (Score:2)
You can do this... if all you are expecting is code that superficially looks like it could be correct, but is often entirely failing on one or more functional requirements so spectacularly, that it would be easier to write it yourself than try to find everything that is actually wrong with what ChatGPT produced.
The problem, you see, is that ChatGPT is just a predictive linguistic text system, and there's just so far th
Re: (Score:2)
I think you're missing the point. What ChatGPT (and its ilk) does now isn't nearly as important as what it'll do in the very near future. Judging it explicitly on "now" ignores just how quickly it's moving. The fact that it can speak semi-intelligently to code is pretty staggering, given, as you say, "ChatGPT was never actually designed to write code in the first place." And we're not talking about some abstract future here... this whole field has reached some sort of critical mass.
Re: (Score:2)
Well, sure. But all you've said is, "When ChatGPT fails, it's not good enough." Failure is the entry point for your analysis. I'll go next.
"When you have a small set of functional requirements, and the code that it generates fulfills most or all of them, that's 'Good enough'."
Re: (Score:2)
Except that it doesn't fulfill most or all of them. Often it misses half or more....
When the code that it generates produces wrong results or crashes on input more often than it does things correctly, how is that "good enough"?
Re: (Score:2)
How often is "often", and tell me how you got there. I only have to reach as far as the grandparent post to find somebody who found it useful.
When the code that it generates produces wrong results or crashes on input more often than it does things correctly, how is that "good enough"?
You did it again. You describe failure, and then describe that as failing. It's not really "incorrect", but it's not useful. I have people around me in the past week who tried using it when they were stuck on something, and had positive outcomes
Re: (Score:2)
Here's one example: Write a function that outputs k randomly selected integers between 0 and n-1, such that the probability of any given number being picked is always 1/n, the results are in non-descending order, the function completes in O(k) time, and it uses O(1) storage.
This is a solvable problem... it does, however, require some serious thought as to the best way to approach it. You cannot simply select k random integers and then sort them, because then the function will not be O(k), not to mention usi
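For what it's worth, the stated requirements do look achievable with the classic "sorted uniforms" trick (emit order statistics one at a time instead of sorting). A minimal sketch in Python; the function name is my own, and I'm reading "probability 1/n" as uniform sampling with replacement:

```python
import random

def sorted_uniform_draws(k, n):
    """Yield k integers drawn uniformly from 0..n-1 (with replacement),
    in non-descending order, in O(k) time and O(1) extra storage.

    Idea: the minimum of i i.i.d. uniforms is distributed as
    1 - (1 - U)**(1/i), so we can emit k sorted uniform variates
    smallest-first without ever calling sort().
    """
    cur = 0.0
    for i in range(k, 0, -1):
        # Advance the running minimum past the previous draw; the
        # factor (1 - U)**(1/i) shrinks the remaining interval.
        cur = 1.0 - (1.0 - cur) * (1.0 - random.random()) ** (1.0 / i)
        # Map the uniform variate in [0, 1) onto an integer bucket.
        yield min(int(cur * n), n - 1)
```

Since it's a generator, the caller can consume the draws in order without the function itself holding more than a single float of state, which is where the O(1)-storage claim comes from.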
Re: (Score:2)
Look, no matter what you're testing you can probably design a question that'll break it. That includes people - that's what exam/interview questions are. But when you corner-case your way into such a state, I don't know that you've proven the whole thing to be suspect.
Re: (Score:2)
My point of that particular problem was not to design a question that was intended to break it, but rather to give it a problem that can be expressed in relatively simple terms, yet has some aspects to it that make it complex enough that it may be worth delegating to an automated process that could be predictably counted upon to produce reliable results.
As I see it, the entire point of automation is getting machines to do things that are either too tedious or too time-consuming to do. It is neither tedious
Re: (Score:2)
This is one of the times where I think an example of the capabilities would be helpful in framing the conversation.
Here's the conversation I just had. This is the full conversation with no edits.
You can use the Directory.GetFiles() method in C# to list all the files in a directory, and the Path.GetExtension() method to check the file extension. Here's an example:
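The code block that followed this comment appears to have been lost in transcription, and the original C# output isn't recoverable. The technique it describes (list a directory, then filter by extension) looks roughly like this sketched in Python with the standard pathlib module; the function name and default extension are my own placeholders, not part of the original conversation:

```python
from pathlib import Path

def list_files_with_extension(directory, extension=".txt"):
    """Return the files in `directory` whose extension matches,
    mirroring the Directory.GetFiles() + Path.GetExtension()
    combination described above for C#."""
    return sorted(
        p for p in Path(directory).iterdir()
        if p.is_file() and p.suffix == extension
    )
```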
Re: (Score:2)
"But it doesn't actually understand its own output."
So what? What is important isn't some philosophical, abstract notion like "understanding", it's whether or not the output is appropriate, valid, or useful.
Understanding is too ethereal a concept to worry about. Lots of doctors diagnose without understanding.
Re: (Score:2)
Maybe. But I fed it, "Why is Pascal's Wager considered a facile argument?" And what I got was a pretty decent, reasonably succinct response. It might be a glorified web search, but it is still leaps and bounds better than what I have seen before.
Re: (Score:1)
So far all I have seen ChatGPT do is compile output from a web search into output utilizing English grammar rules. I suppose it's a bit challenging to pick the correct results, but every poem I have seen uses a strict template that always starts with "xxxx, oh xxxx".
Well, maybe this paper shows you something else: https://mackinstitute.wharton.... [upenn.edu]
Soon I'll see another example of... (Score:1)
Re: (Score:2)
...killedbymicrosoft.com
Yep. It will soon be "exclusive to Bing!" and dead from non-use shortly after that.
Re: (Score:2)
...killedbymicrosoft.com
Don't get my hopes up. OpenAI is a disaster waiting to happen.
Microsoft = government (Score:2)
Remember when the government asked Skype to disable non-centralized call routing and was told to go pound sand, so Microsoft bought them and immediately revamped the call system to be centralized and tappable, even as it made the network worse and disrupted everyone?
Yeah that was the government.
Microsoft always seems to magically have cash when the government has an agenda.
Welcome to Clippy 2.0! (Score:3, Funny)
Re: (Score:2)
I agree. I balked for a while at providing one. But honestly, I eventually looked at the number of spam calls and texts I already get and thought, "Will this actually add anything to that burden?" And I caved.
Wait... (Score:2)
Didn't they just RIF a significant number of staff?
https://www.cnbc.com/2023/01/1... [cnbc.com]
There are two main flaws with ChatGPT right now (Score:5, Insightful)
The first one is the censorship that is being applied to it. It is actually prohibited to discuss certain topics or use it in ways that the creators did not necessarily intend. It turns the company that runs the software into a kind of "thought police", because they can dictate what topics are permitted, and arbitrarily decide that what you are using it for should be prohibited.
Obviously criminal acts should not be allowed, and immoral or questionable activities can reasonably be suspect, but why should simply *TALKING* about them in any kind of hypothetical context be prohibited? Particularly since it can often be through rational discourse that a person can discover and understand for themselves why they shouldn't actually engage in that behavior, instead of simply being told that they aren't allowed to talk about that.
The second major problem with ChatGPT is that it is often entirely wrong about things. Give it a problem that requires an understanding of mathematics or virtually any other particular field or broad discipline, for example, and watch the output of ChatGPT fall completely flat. The results may often superficially appear to be correct, but because ChatGPT does not actually ever really know what it is saying, the output it produces is too unreliable to be useful.
ChatGPT is a great toy, but it is not, contrary to what people might assert, anything remotely resembling a practical tool.
Re: (Score:3)
Most "AI that learns by chatting with people on the internet" projects ended up with a racist, homophobic, misogynistic AI who believed that Hitler did nothing wrong.
The uncomfortable truth is that, left alone, an AI will learn all the good and bad sides of humans, but won't learn the "sensible" part where we just leave our most embarrassing thoughts to ourselves. This is especially prevalent with these AIs since they learn off the internet, the one place where people get anonymity's mask, or actually get to
Re: (Score:2)
why should simply *TALKING* about them in any kind of hypothetical context be prohibited?
Who cares? If you have a problem with it then write your own chatbot and allow whatever you want. There is no "thought police" with this or Twitter or whatever other private platform. These are examples of privately owned and operated software. You don't get to turn someone else's message board into whatever you think the "town square" should be.
Re: (Score:2)
That nicely sums it up. On the plus side, it is very unlikely to go berserk the way HAL 9000 did when it was told to lie, because it has absolutely no understanding of things anyway.
Re: (Score:3)
Obviously criminal acts should not be allowed, and immoral or questionable activities can reasonably be suspect, but why should simply *TALKING* about them in any kind of hypothetical context be prohibited?
It isn't. Just make clear in your prompt that you're talking about hypotheticals and ChatGPT will talk about anything you like.
I'm not saying that this makes the vague wave at censorship good or even okay, just pointing out that it's trivial to work around.
Re: (Score:2)
But it doesn't have those flaws or biases; its output may reflect flaws or biases that were present in its input, but it does not have them itself.
The real problem is that AI like ChatGPT is simply not advanced enough to realize what it is actually saying and how it will actually be taken, not that it actually has the same biases as what it was given as input.
Their Operating System is Dead (Score:2)
Like Robert Mitchum's character in Cape Fear (Score:2)
Clippy returns, and exacts revenge for being exiled.
And he's been working out.