Google Shared AI Knowledge With the World - Until ChatGPT Caught Up (washingtonpost.com) 33

For years Google published scientific research that helped jump-start its competitors. But now it's lurched into defensive mode. From a report: In February, Jeff Dean, Google's longtime head of artificial intelligence, announced a stunning policy shift to his staff: They had to hold off sharing their work with the outside world. For years Dean had run his department like a university, encouraging researchers to publish academic papers prolifically; they have pushed out nearly 500 studies since 2019, according to Google Research's website. But the launch of OpenAI's groundbreaking ChatGPT three months earlier had changed things. The San Francisco start-up kept up with Google by reading the team's scientific papers, Dean said at the quarterly meeting for the company's research division. Indeed, transformers -- a foundational part of the latest AI tech and the T in ChatGPT -- originated in a Google study.

Things had to change. Google would take advantage of its own AI discoveries, sharing papers only after the lab work had been turned into products, Dean said, according to two people with knowledge of the meeting, who spoke on the condition of anonymity to share private information. The policy change is part of a larger shift inside Google. Long considered the leader in AI, the tech giant has lurched into defensive mode -- first to fend off a fleet of nimble AI competitors, and now to protect its core search business, stock price, and, potentially, its future, which executives have said is intertwined with AI. In op-eds, podcasts and TV appearances, Google CEO Sundar Pichai has urged caution on AI. "On a societal scale, it can cause a lot of harm," he warned on "60 Minutes" in April, describing how the technology could supercharge the creation of fake images and videos. But in recent months, Google has overhauled its AI operations with the goal of launching products quickly, according to interviews with 11 current and former Google employees, most of whom spoke on the condition of anonymity to share private information.

Comments Filter:
  • The CEO is speedrunning the destruction of the company.
    • They could always do what Xerox Parc did and give tours to Apple. See how well that worked for them.

      • Xerox, sadly for them, couldn't figure out how to successfully commercialize their ideas. The only computer systems they built on them cost $INFINITY dollars. OK, not really, but the Alto series was grossly overpriced: you could only really process documents with it, and it cost far more than the similarly specced Unix workstations that followed, and it's not like those were much later, either.

        • From interviews I've read, the problem was more that the suits in charge didn't understand what the value in those products was and just wouldn't part with the resources to take ideas from lab to showroom. The PARC boys were coming up with technological marvels, and the cigar chompers on the board were all "Who the fuck wants a computer? We sell photocopiers!"

          Something I suspect is familiar to many of us who've come up with a neat idea only for the boss to be mystified by it.

  • by AleRunner ( 4556245 ) on Thursday May 04, 2023 @01:49PM (#63497496)

    I've said it before with their browser. I said it with Android. Now it's happened again. Google hates copyleft and has tried to destroy it, and now that's come back to bite them. Just at the moment when they could get the greatest benefit and use aggressive open sourcing to stay ahead of the competition, their own code is being used to destroy them, and they have to give up their greatest potential advantage by shutting their community out.

    Well deserved, in this case.

    • Copyright / copyleft? I swapped them in my title, but it's the lack of the combination that will kill them.

    • Except in the case of AI it prospers with a strong hardware base. More so as the models get bigger and bigger and the training times get longer.

      • Except in the case of AI it prospers with a strong hardware base. More so as the models get bigger and bigger and the training times get longer.

        Absolutely correct, though please read the memo [semianalysis.com] that Okian Warrior linked to [slashdot.org]: that belief has been a trap for Google, one that open-sourcing their secret sauce would have saved them from. However, that hardware element is the same as a mobile operating system prospering with careful hardware/software integration. They could have used a very strong copyleft which forced everyone to share code and still have kept their advantage based on having the biggest hardware and ensuring that the software was always upgrad

        • The code isn't really the value equation though, the ideas are. You can write a transformer LLM that can do most of what ChatGPT does in a few hundred lines of code with PyTorch or whatever. It was the *idea* behind transformer LLMs that changed everything in ML, and that came from a Google paper (specifically "Attention Is All You Need"). Well, that and the actual trained model, but Google and OpenAI aren't sharing those anymore. Curiously, Meta actually did (the llama model which is roughly as powe
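
          The "few hundred lines" claim is plausible. As a rough illustration, here is a minimal sketch of the scaled dot-product self-attention at the heart of "Attention Is All You Need" (assuming PyTorch; toy-sized, untrained, and single-headed, with names like SelfAttention being mine, so it shows the idea rather than a production implementation):

```python
# Minimal single-head self-attention, the core of "Attention Is All You Need".
# Illustrative sketch only: a real transformer adds multiple heads, residual
# connections, layer norm, and an MLP on top of this.
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        # One linear projection each for queries, keys, and values.
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # softmax(Q K^T / sqrt(d_model)) V
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        return torch.softmax(scores, dim=-1) @ v

x = torch.randn(1, 8, 64)          # one sequence of 8 tokens, 64-dim embeddings
print(SelfAttention(64)(x).shape)  # torch.Size([1, 8, 64])
```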

      • by sapgau ( 413511 )
        Mod parent up. Google was a pioneer in server hardware and interconnected server farms.
        The barrier to entry is the enormous need for hardware to back these AI systems.
        Google had ample time to expand their services, but they got distracted by the sweet ad money.
    • by Okian Warrior ( 537106 ) on Thursday May 04, 2023 @02:15PM (#63497546) Homepage Journal

      A recent leaked memo [semianalysis.com] from Google shows this quite nicely.

      In summary, LLaMA (from Meta) was leaked to the public, and then the public jumped on it and made tremendous advances in the technology - far more than Google could have ever imagined, in the space of two months.

      Yep. LLaMA was leaked at the beginning of March, and already people have got the large language model running on phones, running personalized versions, and training variants in a few hours.

      Someone got ChatGPT running on a RasPi by restricting the floating-point values to 4 bits of precision (if I understand it correctly; I didn't look in detail).

      The Google memo (from the link above) is a pretty interesting read. They *wanted* to have a "moat" around their software, a secret sauce that no one else had, to give them dominance in the field. What happened instead is that the public leapt ahead of them, and now they are trying to play nice with the community in an attempt to catch up.

      • It's a good read. But the net result is really just a reversion to the longstanding status quo that held until about a year ago: AI was never thought to be the exclusive province of a handful of huge companies. Then, briefly, it was. So another way to make the same point is that all the concern about big tech cornering AI was massively overblown in the first place.
        • Not really: AI advancement is extremely expensive, so only big companies can do it. They were willing to share results with each other to find a path to profitability, and now that we are there, expect the walls to come up.
      • by narcc ( 412956 )

        Has anyone been able to verify the authenticity of the "leaked" memo? It reads like fan fiction.

        Someone got ChatGPT running on a RasPi by restricting the floating-point values to 4 bits of precision (if I understand it correctly;

        The term you're searching for is quantization. No, no one got ChatGPT running on a Raspberry Pi, though you can get a similar chat thing running on one.

        You should probably know that you lose a ton of information when you use a 4-bit quantization. The model, consequently, will not perform nearly as well.
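
        To put rough numbers on that loss, here is a naive sketch, assuming NumPy. (Hedge: real 4-bit schemes quantize weights in small blocks with per-block scale factors and smarter rounding, so they lose considerably less than this whole-tensor version.)

```python
# Naive whole-tensor 4-bit quantization, to illustrate the information loss.
# This is a sketch, not how production 4-bit formats actually work.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(100_000).astype(np.float32)  # stand-in "weights"

# Map each float onto one of 2**4 = 16 evenly spaced levels.
lo, hi = weights.min(), weights.max()
step = (hi - lo) / 15
quantized = np.round((weights - lo) / step) * step + lo

print("distinct values kept:", np.unique(quantized).size)       # at most 16
print("mean absolute error: ", np.abs(weights - quantized).mean())
```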

          The term you're searching for is quantization. No, no one got ChatGPT running on a Raspberry Pi, though you can get a similar chat thing running on one.

          You should probably know that you lose a ton of information when you use a 4-bit quantization. The model, consequently, will not perform nearly as well.

          You're right, good catch. Thanks.

          Note that the quantization in the human visual system is much coarser than we imagine it would need to be. For example, the human eye registers only 12 levels of brightness. The sensitivity range is very wide (starlight up to direct sunlight), but within any specific environment the eye only notes 12 levels of brightness.

          Similarly, there are only about 1 million nerves in the optic bundle, so maybe image recognition could get away with less spatial resolution. The human eye has fewer bl

    • Google may have been bitten, but the world has prospered, or at least the world has advanced in AI and phones, because of the competition Android added.

      For me, life isn't about screwing every last cent you can out of everyone you can, but about making a better world for everyone.

      Whether better AI or more cell phones has made the world better is debatable, but the fact that Google's sharing of knowledge accelerated their development is clear.

  • by Pollux ( 102520 ) <speter@@@tedata...net...eg> on Thursday May 04, 2023 @02:20PM (#63497562) Journal

    They've been resting on their laurels for over 20 years now. It's about time someone else finally gave them some competition.

    No joke: just the other day, I was at a tech workshop where we were playing around with ChatGPT. A bunch of colleagues were typing in "2+2=5" just to see if they could get a rise out of it. I told them to stop and instead put in the prompt, "Can 2+2 = 10?" Now, of course it can, in base-4 math, but I truly expected ChatGPT to fail at the inquiry. But I'll be darned: ChatGPT gave a paragraph response to the tune of "Under most circumstances, 2+2 equals 4, not 10. But if we were to use base-4 numbers, then 2+2 does equal 10."

    Then I entered "Does 2+2 equal 10" into Google. The results were atrocious [google.com]. But if I changed the query to "does 2+2=10", it finds a Quora result that does answer the question. Strange how Google can't find the answer when I write the word "equal". [google.com]

    • by S_Stout ( 2725099 ) on Thursday May 04, 2023 @03:32PM (#63497706)
      The Google model does not want you to find the answer right away. They want you clicking ads. If they just gave you the answer, they would struggle to make money off you. They already have your data and are selling it, but they want more tracking data, and they want to send you to someone else for money.
    • But a fair comparison would be to enter that question in bard.google.com, which is Google's equivalent to ChatGPT, not in Google search.

      When doing so, Bard's answer to "Can 2 + 2 = 10?" is

      "2+2=4 in base 10, the most common number system. However, in base 4, 2+2=10. This is because in base 4, there are only four digits: 0, 1, 2, and 3. Therefore, 10 in base 4 is equal to 4 in base 10."

      And then it goes into a long explanation of how to convert between base 4 and base 10.
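
      Bard's arithmetic checks out, and it is easy to verify, since Python's built-in int() parses a numeral in any base from 2 to 36:

```python
# The numeral "10" read in base 4 is four, so 2 + 2 is written "10" in base 4.
print(int("10", 4))           # 4
print(2 + 2 == int("10", 4))  # True
```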

  • by Guspaz ( 556486 ) on Thursday May 04, 2023 @02:53PM (#63497630)

    At this point, it's too little, too late. Google released their paper on transformer architectures in 2017, and OpenAI announced GPT-1 in 2018. You could argue that OpenAI continued leaning on Google research as they developed GPT-2, GPT-3, GPT-3.5, and GPT-4, but the most important thing from a business perspective is not the advancements of the LLMs themselves, but of the integration. OpenAI and Microsoft have certainly not been cribbing any of that from Google, since Google appears to have done little to nothing in that regard.

    While Google is busy showing off an LLM that seems to be on par with GPT-3 at best (and if everybody has been cheating off Google's homework, why is Google a generation or two behind?), OpenAI and Microsoft have been showing integration with applications and real-time data lookups. Microsoft is now well on their way to having GPT-4 integrated into their search engine, their web browser, their office suite, and their business communications platform.

    The whole "OpenAI is only successful because Google did all the hard work first" thing always rang hollow to me in the same way that you could say that computer GUIs were only successful because Xerox did all that work at PARC. I mean, sure, they did, but they didn't do anything with it and quickly became irrelevant in that market.

    • The whole "OpenAI is only successful because Google did all the hard work first" thing always rang hollow to me

      Bro, I remember the same thing in the 1990s with Linux versus commercial UNIX. Folks claimed Linux just "copied" all their ideas and was really inferior to Solaris or AIX. I mean, people said that shit with a straight face and told me Linux and *BSD were just a flash in the pan. What a load of shit. The truth is Linux and *BSD did copy a lot of their ideas. That's how you build something better! You take the best of what's out there and add some more good stuff to it. Of course the actual problem was th

    • if everybody has been cheating off Google's homework, why is Google a generation or two behind

      Because that is how humans innovate and develop: we build on the work of people who came before us. Just because one person came up with an idea doesn't mean the next person cannot improve on it, or that the next person is more or less smart than the last.

      This is what bothers me about intellectual property: we all learn from others, and society works together for the betterment of all. Claiming a patent or copyright, however, just says "mine." As for the homework analogy, there is copying homework from a learning point of vi

    • Don't count Google out. They weren't the first successful search engine, they weren't the first successful email provider, the first successful map provider, or the first smartphone maker. If they are good at anything, they are really good at being a *second* mover.

      • by Guspaz ( 556486 )

        What was the last in-house Google product that became a successful major product and is still around today? Chromebooks in 2011? Chromecast in 2013? Has Google launched a single major in-house product in the last ten years that is a success?

        Google has launched and discontinued several hundred products in the past ten years, but that doesn't help the argument that they shouldn't be counted out.

    • Research is not the same as development. In a wealthy monopoly environment like Google, the bean counters like the cachet of unbridled research. But when it comes to development, they want proof it will make money. That is what has happened over and over at these monopolies: Bell Labs (AT&T), IBM, RCA, Xerox, Kodak, ...

  • They started with help from open source like Linux and dmoz. I hope some temporary hype does not deter them from continuing contributions to the open-source ecosystem.
  • This technology is too powerful to be the property of anyone, and too dangerous to boot. AI research and products should be nationalized, and in fact, internationalized. Our government representatives, who are so dumb that they think TikTok is a national security threat, are of course never going to do anything remotely like this; primarily because their largest donors are heavily invested in this technology.
  • I'm surprised they think they were completely open with their research. I think Sun Microsystems or even IBM are more famous for their research papers.
    They might have had great research ideas, but others implemented them and are now being rewarded for their efforts.
    Why didn't they follow through? OpenAI was also publishing their research.
