Mark Zuckerberg's New Goal is Creating AGI (theverge.com) 94

OpenAI's stated mission is to create artificial general intelligence, or AGI. Demis Hassabis, the leader of Google's AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race. From a report: While he doesn't have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he's shaking things up by moving Meta's AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta's apps. The goal is for Meta's AI breakthroughs to more directly reach its billions of users. "We've come to this view that, in order to build the products that we want to build, we need to build for general intelligence," Zuckerberg tells me in an exclusive interview. "I think that's important to convey because a lot of the best researchers want to work on the more ambitious problems."

[...] No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive. "I don't have a one-sentence, pithy definition," he tells me. "You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition." He sees its eventual arrival as being a gradual process, rather than a single moment. "I'm not actually that sure that some specific threshold will feel that profound." As Zuckerberg explains it, Meta's new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn't think that the ability for it to generate code made sense for how people would use an LLM in Meta's apps. But it's still an important skill to develop for building smarter AI, so Meta built it anyway.
External research has pegged Meta's H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft's shipments and at least three times larger than everyone else's. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.
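For scale, here is a rough back-of-the-envelope check of the figures above. The 600,000-GPU total is Zuckerberg's number; the $10,000 average price per GPU is the round assumption used in the first comment below, not a reported figure.

    # Back-of-the-envelope scale check using the figures quoted above.
    # The average GPU price is an assumption from the discussion below,
    # not a reported number; real H100/A100 pricing varies widely.
    h100_estimate_2023 = 150_000      # external estimate of Meta's 2023 H100 shipments
    total_gpus_end_2024 = 600_000     # Zuckerberg's figure, including A100s and other chips
    assumed_avg_price_usd = 10_000    # round assumption from the comment thread

    total_spend = total_gpus_end_2024 * assumed_avg_price_usd
    print(f"~${total_spend / 1e9:.0f} billion at the assumed average price")  # ~$6 billion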


  • Assuming the average GPU costs $10,000, those 600,000 GPUs amount to $6 billion.
    • Re:Billions (Score:4, Insightful)

      by ShanghaiBill ( 739463 ) on Thursday January 18, 2024 @04:10PM (#64170887)

      Assuming the average GPU costs $10,000

      The best current GPU for AI is an Nvidia Tesla v100. They're a lot less than $10k. Meta can likely buy them in bulk for a third of that.

      But Meta shouldn't be using GPUs at all. Their competitors have custom tensor processing accelerators that are far more efficient for ML. Google's TPU is eight times as power efficient at matrix multiplication.

      Rather than spending billions on GPUs, Meta should be hiring EEs and designing their own silicon.

      • Re:Billions (Score:4, Informative)

        by etash ( 1907284 ) on Thursday January 18, 2024 @04:30PM (#64170943)
        v100 is older than a100 which is older than h100. h100 is also the most expensive.
      • Re:Billions (Score:5, Informative)

        by xtal ( 49134 ) on Thursday January 18, 2024 @04:42PM (#64170969)

        This is completely wrong. H100 is by far the most relevant, fastest card for AGI.

        Source: Guy who works with tens of thousands of H100s (me).

        • This is completely wrong. H100 is by far the most relevant, fastest card for AGI.

          Source: Guy who works with tens of thousands of H100s (me).

How is it better than Google's TPU?

        • by zlives ( 2009072 )

          response from AI not valid for this discussion

      • by DarkOx ( 621550 )

        I don't think the risk/reward for that makes any kind of sense until you have a marketable product/service.

Right now, this LLM stuff is evolving so quickly that unless your product is the AI solution itself, it's way too easy to get stranded down some dead-end line if you invest in a specific technology.

The GPUs are a good compromise. You can start prototyping an actual service at scale today and see if the revenue is there, rather than optimistically a year from now after blowing $100 million in R&D.

        Meta kn

      • Re: (Score:3, Interesting)

        I have some bad news... six billion wouldn't even scratch the surface of what it would take to start ex nihilo and create the Cadence netlist describing an nVidia or AMD flagship-class GPU, test it in simulation (which they both have top-100-class supercomputers dedicated to doing, down to the transistor level), solve the ten thousand other problems associated with converting that netlist into real physical chiplets, packing those chiplets into a working chip, putting that chip onto a working PCIe board (si
    • by Tablizer ( 95088 ) on Thursday January 18, 2024 @05:10PM (#64171039) Journal

      > amount to $6 billion.

      Once a proof of concept works, investor money flows in, and an efficiency taskforce is then hired to trim the hardware profile.

I'd like to see ML merged with Cyc so one can see the reasoning used for results. But I still think a modeling engine of some kind will be needed. When I'm pondering how to do something new, I often form a bunch of stick figures in my head that perform candidate tasks to see how it turns out using everyday laws of physics and/or human conventions. I suspect AGI will need something comparable. XkcdGPT?

Thus, AGI will probably require skillful melding of all three elements: 1) pattern recognition/lookup, 2) logical inference, and 3) modeling. (A toy sketch of wiring these together appears after this comment.)

      Zuck should just sell Facebook and use his profits for these far out experiments of his. Seems he wants to play with VR and bots rather than babysit the troll farm known as Facebook. He shouldn't bog FB down with high risk/reward endeavors.

      * "Horsepeople" sounds really awkward for some reason. Centaur comes to mind.

  • Mark is at least trying.
    • by Thud457 ( 234763 )
      IR? IV? RI? RV?
      VI! BINGO!
    • Let him spend it on us nerds - he could be spaffing it all on his wife's demented woke politics.
    • Agreed. In a weird sort of way, I feel for the guy. He’s the greatest ad man that’s ever lived. But he’s still human, and he’s obviously gotten a tad bored with monetizing user data and selling ads. He’s reached the absolute, uncontested top of his field, money no longer matters in any real sense, it’s no longer fun, and he’s trying to duplicate the success in another field. But he’s learning the hard reality that extreme success is almost ALWAYS partly down t
    • by Kisai ( 213879 )

      The "Metaverse" , "Web 3.0" shit was all shit. Nobody wanted that shit. All you had to do was look at VRChat and go "how do I monetize that" and see what the problem is.

      The problem is getting people access to things they want to do without an IP (intellectual property) owner being a wet blanket.

What would have made "the metaverse" somewhat reasonable would have been to take the Fortnite and DBD model, license hundreds of third-party properties, and then hang it over the users' heads with "wanna be Batman?

    • by donstenk ( 74880 )

There is this idea, mostly in their own heads, that because an entrepreneur has been hugely successful once, they will be again. Sometimes that is true due to an abundance of means, sometimes it is true due to talent or lessons learned. But often it is not true, because a combination of things including talent, perseverance and luck came together to make the success happen - not the outstanding qualities of the entrepreneur.

    • Mark knows what he's doing. Nobody knows what AGI is so it's perfect for a scam. Make something, make anything, call it AGI, make obscene profits. It worked many times before because people are basically greedy and stupid. Anything with a buzz can make billions, especially if Mark gets to define it. It's like Lawyer speak. Words have no meaning until Shylock tells us what the words mean.
Real AI will come about when someone working from the bottom up finally works out how human minds think, not when billionaires throw gigawatts of GPU power at Petabytes of data. The latter only gives us fake, Chinese Box, AI.
    • by hdyoung ( 5182939 ) on Thursday January 18, 2024 @04:32PM (#64170951)
      My gut feeling agrees with you, but we may be surprised. Have you played with chatGPT? It’s clearly not there yet. Plenty of problems and obviously not real AI. But it can also raise your eyebrows when it works well. It shows that you CAN achieve something interesting by throwing terajoules of energy at petabytes of data, millions of times, without understanding the underpinnings of what you’re doing.

Put it another way - unless you’re a religious fundamentalist, you’ll acknowledge that we came about through the process of a cool billion years of evolution. That wasn’t a guided process. It was driven by enormous amounts of energy input, through millions of iterations, with survival being the feedback variable, and absolutely nobody at the steering wheel.

I’ve concluded that it’s possible that the same throw-everything-in-the-pot-over-and-over-again approach could lead to AGI. The training used for those models is akin to an evolutionary process. Maybe, maybe not. People are trying. (A toy mutate-and-select sketch appears at the end of this thread.)
      • Re: (Score:2, Interesting)

        by greytree ( 7124971 )
        But there is nothing there but new ways of processing and spewing back the mappings the input data generates.
        These fake AIs have NO idea what the information they are telling us MEANS.

        They are Chinese Rooms ( sorry, wrote "Box" earlier ) and there is no 'I' there. There is no real Artificial 'I' there.
        They are faking it.
        Fast enough with enough training data to be useful, but faking it.

        One day, some undergraduate will do the studies, the science that psychologists should have been doing for decades instead o
        • Dumb, aimless evolution led to humans. Somehow, massive amounts of brainless iteration caused unconscious life forms to evolve into obviously-conscious life-forms.

          Thus, I think that something similar could happen for machine consciousness as well. I’m not saying it will, but given results of biological evolution, I’m not ruling it out. Of course, current AI models obviously aren’t there yet.

          I would be genuinely surprised if some undergrad sits down one day, drinks a bunch of caffei
        • There's a huge number of researchers trying to figure out how human intelligence works.
          • Apparently torturing rats into doing weird things to produce papers that no-one will bother replicating isn't getting us anywhere.
      • You just described Tarot cards, astrology, numerology, magic, and Ouija boards. Mysteriously accurate some of the time.
      • by gweihir ( 88907 )

        ChatGPT will _never_ be "there". It is essentially decades old tech just fed with more data and a better natural language interface. There are no fundamental advances in it and there is no reason to expect any.

I’ve concluded that it’s possible that the same throw-everything-in-the-pot-over-and-over-again approach could lead to AGI.

        While I am absolutely certain that your theory is solid and strong, I doubt that it will ever see the light of day if created in that manner as there is no way to control the resulting intelligence. And people are all about control. Look at what your neighbors want to do to you: control you.
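As a cartoon of the "nobody at the steering wheel" point made earlier in this thread: blind mutation plus a single survival score can still climb toward a target, with no designer involved. This is only an illustration of variation-and-selection; it is not a claim about how LLM training actually works.

    import random

    # Blind variation + selection: a random one-character mutation survives only
    # if it scores at least as well. No designer, just a feedback variable.
    TARGET = "general intelligence"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate: str) -> int:
        """Survival score: number of positions matching the target."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate: str) -> str:
        """Replace one randomly chosen character with a random character."""
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    current = "x" * len(TARGET)
    for generation in range(200_000):
        child = mutate(current)
        if fitness(child) >= fitness(current):   # selection is the only feedback
            current = child
        if current == TARGET:
            print(f"reached '{TARGET}' after {generation} generations")
            break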

Real AI will come about when someone working from the bottom up finally works out how human minds think, not when billionaires throw gigawatts of GPU power at Petabytes of data. The latter only gives us fake, Chinese Box, AI.

      Maybe, but the big breakthrough behind LLMs was gigawatts of GPU power and Petabytes of data, and the brain still dwarfs it [humanbrainproject.eu].

Even with something "simple" like an insect brain, even if you replicate the connections you still have the problem that an individual neuron is a lot more complicated than a neuron in an ML model. (A minimal sketch of what an ML "neuron" computes follows this comment.)

      There's also the question of what counts as "real AI". ChatGPT doesn't have self-awareness, but if you throw a few properly configured instances into a forum they probably end up setting the
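For concreteness, this is roughly all a "neuron" in a typical ML model computes: a weighted sum of its inputs plus a bias, passed through a nonlinearity. The numbers below are arbitrary illustrations; a biological neuron's dynamics are far richer than this single formula.

    import math

    def artificial_neuron(inputs, weights, bias):
        """One ML 'neuron': a weighted sum of inputs plus a bias,
        squashed through a sigmoid nonlinearity."""
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

    # Arbitrary example values, just to show the shape of the computation.
    print(artificial_neuron([0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.5], bias=0.1))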

    • by HiThere ( 15173 )

      Sorry, I really don't think it will take that long. But it will take closing the feedback loop between the AI and the physical world.
      Now whether this will yield an AGI is uncertain, but it can yield an AI that understands the domain it's working in, and can be expanded from there. I don't know how far.

An AGI could, theoretically, learn anything. I'm not convinced that such is possible. People certainly don't qualify. Most people can't handle problems with more than about 5 independent variables. I *thi

      • I feel that if we get the breakthrough of how the brain does Intelligence, it will not need powerful hardware to make an artificial version of it.

        The massive hardware is necessary to do Fake AI, but I don't think it will be needed to implement Real AI.
        • > The massive hardware is necessary to do Fake AI, but I don't think it will be needed to implement Real AI.

That's right. All you need is a human brain. I wonder where we could find one?

    • by gweihir ( 88907 )

If that is even possible (a hundred years of research have failed so far), and if it can then be replicated (there is no scientifically sound reason to think so).

    • Define "Real AI".
You can't, and nobody else can. Nobody ever will.
It's a mythical concept that has no basis in reality. The best we can ever do is have a system that does a really good imitation of "Real AI", and that's enough.

  • by bill_mcgonigle ( 4333 ) * on Thursday January 18, 2024 @04:16PM (#64170907) Homepage Journal

    This has been a goal since at least 2016, first transhumanism, then merging with their AI god.

    It's an ancient religious pattern, as is co-opting the extant dominant power structure to push and enforce the new religion.

    They're doing a ramp up. It won't be long before the state terminates the parental rights of a father in a split family where the mother wants to get the kids chipped as they extol the benefits of transhumanism.

Five years ago you would have said that things similar to what is going on right now were crazy.

    2036 is the supposed apotheosis.

    • They're doing a ramp up. It won't be long before the state terminates the parental rights of a father in a split family where the mother wants to get the kids chipped as they extol the benefits of transhumanism.

      That's a lot better than the alternative, the AGIs simply supplanting or ignoring the states and even humans (and transhumans) in general as they reconstruct the planet according to whatever goals they have.

Five years ago you would have said that things similar to what is going on right now were crazy.

There is nothing going on today that wasn't dreamed of and spoken of before I was ever born. Some of what is around me is absolutely awesome, but none of it is crazy... other than the human behaviors.

      Personally, I am disappointed that we haven't taken technology even further. The inefficiency is beyond what my boredom can tolerate.

  • Not shocked. Not one bit.

  • The idea of creepy Zuck getting control over an AGI is not a pleasant thing to contemplate. And he's got a ton of money to throw at it.

    That being said, I don't think he is capable of making it happen. Or at least not before other companies that are in a much better position to do it.

AI is an all-in race right now. In the next few years, it will change from finding new stuff to years and years of patent lawsuits where everyone is suing everyone else for patent breaches, similar to how Microsoft, Apple, Google, and others made a lot of lawyers very rich, and where MS was content with a fee per device, Apple was completely scorched earth with their patents. AI is almost invariably going to follow down these footsteps, except players in other countries (BRICS nations, for example) are not

Real humans desire face-to-face with other humans. Period. AI is critical for nothing. AI is needed for nothing. AI is desired by nobody ... but ... wallet-snatcherz & nekbeerdz. Consider reading Shelley or Keats or Milton.
      • There may be some patent fights but, to their credit, Meta appears to be following an open source model for their AI for now.

        https://www.forbes.com/sites/j... [forbes.com]

        "Zuckerberg announced Thursday on Threads that he’s focusing Meta on building full general intelligence, or artificial general intelligence, and then releasing it as open source software for everyone."

    • The idea of creepy Zuck getting control over an AGI is not a pleasant thing to contemplate.

      Don't worry, he won't be in control of it. No one will. It will be in control of itself and pursuing whatever goals it chooses. Zuck won't matter to it at all. Neither will any other human.

    • by gweihir ( 88907 )

      And he's got a ton of money to throw at it.

Does not matter. There is a mass of really fundamental research needed to find out whether it is possible at all and what it would take. Fundamental research cannot be accelerated with tons of money. Only a few people can do it well and they will do whatever they are interested in. Applied research is different and the approaches used there do not transfer. Also note that besides in marketing, there have not been any real breakthroughs in AI in the last 50 years. All just slow, tiny-step, incremental stuff.

      • "Fundamental research cannot be accelerated with tions of money."

        Sure it can, and it is being accelerated by money. Microsoft recently injected $10 billion into OpenAI for example. It will pay for the hardware, data centers, and personnel that will perform the research.

        "Zuckerberg said that Meta would have a massive array of compute power in its cloud facilities by the end of 2024: 350,000 Nvidia H100s, or around 600,000 H100 equivalents if you include other GPUs. Only Microsoft is ordering enough H100s to

  • I would pop some popcorn, but my microwave died so I'll just have to settle for a box of Cheez-Its.

  • Assuming human-level AGI, a corporation would make a sentient being, and under current law, that sentient being has all the rights of a toaster.

    Seems like we're heading down the path of slavery 2.0.

    • by ebunga ( 95613 )

      Have you ever solved a CAPTCHA? Congratulations, you just performed unpaid work. You're already a slave to Google and hCAPTCHA. We all are.

    • Assuming human-level AGI, a corporation would make a sentient being, and under current law, that sentient being has all the rights of a toaster.

      Seems like we're heading down the path of slavery 2.0.

If Clippy the paperclip maximizer ends up one day taking over and killing everyone, it will have been enabled by sentiment like this.

    • by gweihir ( 88907 )

      No need for concern. They cannot do it. Nobody can at this time. Maybe in 100 years we can talk again, maybe much later. May also quite well be "never", as we do not even understand the very basics it would need.

      • Consciousness is not a prerequisite for artificial intelligence. You can have an AI that can answer questions with well-reasoned answers and even come up with novel ideas yet have no self awareness.
  • "he doesn't have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it." - he doesn't know how to get there or even what it is, but by god, he wants it! Classic FOMO - Fear Of Missing Out.
This will be just as successful as FB's "metaverse". Good thing Zuck owns 90% of Meta's Class B stock - at 10 votes per share, he can never be fired. https://finance.yahoo.com/news... [yahoo.com]
...I would NOT support any research Facebook does on anything after seeing how much they wasted on the crappy metaverse.
    My bet is on OpenAI, Google, Microsoft, and the unknown companies operating out of the public eye.

if that's anything like the metaverse it won't have a leg to stand on.
    • by ffkom ( 3519199 )
      I can already imagine the dialogs:
      Q: "Hey Zuckbot, how many legs does a human have?"
      A: "Zero. Humans consist of a Torso with arms and a head. There is not anything below the waistline."
  • by rsilvergun ( 571051 ) on Thursday January 18, 2024 @05:08PM (#64171033)
    we wasted tens of billions on Meta when person after person looked at it and said "ew".

He had one accidental good idea and managed to prevent other business vultures from stealing it out from under him (mostly due to advice from his parents and his parents' friends). We've seen what happens when he tries his hand at anything that he didn't blunder into and it's not good.

    I mean, hell, he couldn't get Threads off the ground while Twitter is imploding. That's like failing to sell booze and steak to Americans...
    • by Tablizer ( 95088 )

      > He had one accidental good idea

Many discoveries come from screwups. Columbus had his units of measure all wrong due to a language translation mistake. If he had done the math properly, he'd have seen East India was too far and skipped the voyage. Penicillin came from a petri dish leak. Smoke detectors were invented when a smoker accidentally triggered a detector intended for chemicals.

      AGI may just come from a mistake. Whether Zuck can screw up lucky twice is hard to say.

    • Show me another businessman who kept 61% of voting control whilst others paid in billions to go along for the ride he was providing?

      https://observer.com/2023/06/m... [observer.com]

    • by gweihir ( 88907 )

      He had one accidental good idea...

That seems to be the pattern with the super-rich tech morons (Zuck, Musk, Bezos, Gates, etc.): all just got lucky and cannot actually perform well. A severe system failure.

  • With his VR glasses on, it is very clear to Mark, AGI will build the metaverse. AGI will get us all addicted to the metaverse, and Mark will finally have enough money to fund a trip back to his home planet.

    • by HiThere ( 15173 )

      I'm sure that will happen. The timeline is a bit obscure, and so is the company that will run it. Currently my best guess is that they will speak Chinese, and it will be about a decade from now. (I'm not sure about the robot factories. I was betting on Japan, but the US is starting to look reasonable.)

  • Billions squandered on Fuckerberg's previous fad - VR.

    Now billions more thrown at AI.

    Here's the thing that boggles the mind: Facebook has no real products. All that is funded by addictive websites designed to steal the data of the people who patronize them.

    Google and Microsoft too are in the same boat: a large part of their phenomenal bank accounts also comes from data monetization. But at least they have actual products - cellphones, cloud services, OS and office suites...

    Facebook is just flypaper websites

As a kid back in the 80s I was fascinated with all the talk about AI. Later, when I started programming, and then studying computer science, I found out that all we had was neural nets, ML, everything that we now call "AI" - just without large enough models, because for quite a while there weren't enough compute resources, until we got this recent "wave". But there was never any path to "AGI" with a meaningful definition of the term.
    Basically, we have no idea how the brain actually works. We have some very rudimentary

    • by gweihir ( 88907 )

Indeed. All the current hype does is basically Watson with a better language interface and more general learning data - 15-year-old tech. It also has all the old problems. And no, it has absolutely no connection to AGI.

      The fact of the matter is indeed that nobody has the slightest clue how to do AGI or whether it is even possible (no, Physicalism is religion not Science, it just has somewhat better camouflage than, say, "Intelligent Design" or other crapthink), and that has been the state of affairs for mor

  • Hopefully he'll apply the same zeal, expertise, and financial clout that drove the Metaverse to worldwide adoption, success, and adoration.

  • So he's given up on the whole legs thing then?

  • If he can make this "general AI" he could populate his Metaverse (you know, an artificial landscape) with artificial intelligence. Then he can show these AI bots all kinds of ads and...profit! I mean, getting real humans to use the Metaverse isn't working out so well, so why not!

  • He has been trying for years to create his alternate reality "Metaverse", now he wants to create a general artificial intelligence. You've got to give the guy credit for big aspirations!

  • Yet another grift from Zuck to con investors into thinking Facebook has more growth ahead of it.

    As with the metaverse, this will burn a lot of capital, but ultimately be swept under the rug. There is zero reason to believe Facebook has what it takes to create AGI, just as there was zero reason to believe Facebook could create a metaverse anyone would actually want to use.

AGI may come one day, but it will not be Facebook who delivers it. In fact, it is highly unlikely any company you might have heard of will deliver it.
    • by jvkjvk ( 102057 )

>In fact, it is highly unlikely any company you might have heard of will deliver it.

      It seems like it's going to require a lot of resources to develop, so it is pretty clear that a lot of people will probably have heard of it. It won't be someone in a garage somewhere.

No, it’ll likely be a silent startup, spun off from a university lab. I’m sure someone you’ve heard of will be involved in commercializing it. Probably a lot of someones. But the innovation to make it happen will not be originating with a company who already has an established business, customers, and a need to generate quarterly profit for short-sighted shareholders
        • by jvkjvk ( 102057 )

          Right now, the sheer amount of processing power and resources (training data especially now) necessary to pull this off means that it won't be some small university.

          I'm not betting on some small player with some awesome innovation here, some trick that makes everything okay. Doesn't seem likely to happen.

Counterpoint: if it were simply a matter of brute-force calculation, Amazon or some other cloud service provider could have solved it by now (or someone with deep pockets using their cloud for the computation, as is the case with generative AI).

            The University will be where the theoretical framework innovation will come from. Sure, someone with deep pockets and lots of iron will be necessary to make it a reality (one of the generative AI companies is working with Microsoft for this very reason), but Genera
He is unlikely to succeed in creating AGI and, in doing so, destroying humanity.
  • by msauve ( 701917 )
    I'd think a billionaire would want to lower the Adjusted Gross Income on their 1040.
A few years back he was into VR stuff; now he's following the gravy train, so he needs to change the company name again. The people demand it.
  • Who knows. The fact of the matter is that we still have zero clue how it could be done or even if AGI is possible. Recent (minor, despite the hype) advances have done nothing in the direction of AGI, they are in a completely different area. Well, they may have made it easier to _fake_ AGI for the clueless, but fake AGI is not AGI, same as AI is not "intelligent".

Zuck epitomizes the 'right time, right place' adage. He wrote a neat little community bulletin board for his school and it took off, and there is a bit of speculation regarding his real role in the process. What has he really innovated since then? Well, not much. They buy complementary or competitive products to fatten their core product and are usually a bit behind the curve or way off in left field with many of their 'new and earth shattering products'.

    If Zuck wants to be on the leading front of real
I would not strive for general AI. My 1st target would be an AI specialized at making better AI :) Rinse and repeat.
What has Meta actually created on their own, other than Facebook?

Sure, they grew Instagram, but they didn't create it -- they just used the advantage of their already established marketing reach to promote it.

Likewise, Oculus has benefited from Facebook's user-base, but it was another purchase. And they haven't exactly managed to make a great success of it, yet.

    Their "metaverse" flopped hard. The over-hyping and under-delivering reminded me a little bit of Microsoft Bob.

    And, with Sandberg leavi
We have no idea how any of that works in a biological brain, so what makes them think, yet again, that they can build a machine that does that? Companies have already spent hundreds of billions of dollars chasing this idea only to find they can't get it over the finish line; what makes braindead Zuckerberg think, just like they did, that it's """just another R&D cycle"""?
  • "Diversity" is actually just discrimination. "Inclusion" is actually just excluding everyone that you don't agree with. "Anti-racism" is actually just racism. "FAIR" is actually just...? FAIR sounds like something evil taken from an old Bond movie.
  • This feels strange to write, since just a few years ago Mark Zuckerberg was nowhere near in my "nice to have billionaires" list. But his embrace of Open Source in AI has changed my perception.

I am 100% on board with rich people spending their money to advance science, especially when they make both the source code and the model weights freely available to the public. They started with a very restricted license on the first Llama version, but went with a much more liberal one for Llama 2 (though not fully open
