Sergey Brin Says He's Working on AI at Google 'Pretty Much Every Day' 49

An anonymous reader shares a report: Google co-founder and ex-Alphabet president Sergey Brin said he's back working at Google "pretty much every day" because he hasn't seen anything as exciting as the recent progress in AI -- and doesn't want to miss out. "It's a big, fast-moving field," Brin said of AI at the All-In Summit, adding that there is "tremendous value to humanity," before explaining why he doesn't think training more capable AI will require massively scaling up compute.

"I've read some articles that extrapolate [compute] ... and I don't know if I'm quite a believer," he said, "partly because the algorithmic improvements that have come over the last few years maybe are actually even outpacing the increased compute that's being put into these models."


Comments Filter:
  • "...tremendous value to humanity..."

    Let me preface this by saying that this latest technology may or may not be the predecessor to true artificial intelligence. Even assuming that it is, though, it's not a given that it'll be beneficial to humanity. I mean, generative so-called-AI generally hasn't been so far. It would probably be better for humanity for all the tech bros to focus their research efforts not on how to achieve AI, but on how to make it beneficial rather than detrimental. But of cour

    • Huh. My workflow now consists largely of an ongoing conversation with AI to get over the constant stream of little hurdles that come with code development, and I can't think of any way that it has harmed me whatsoever. I do see a lot of fearmongering and writers working to generate outrage, but none of the objections has actually reached ground for me, as in, caused me a problem. Mostly it's along the lines of "what if this bad thing happened one day?" or "you should be angry because this happened, and you
      • Literal millions of code developers have been laid off and/or not hired in the last couple of years, and the explicitly stated intention of many of the companies involved is to replace their former output with the output of a generative AI. Whether that was a wise plan is beside the point; it's being executed. And that is a detriment to your profession. That it hasn't affected you personally yet seems beside the point as well.
        • Agreed it's not just about me. If I heard other people speaking up about actual harms it caused them, I certainly would / will be receptive to that.
          • Given the degradation in code quality since cut-and-paste examples exploded, AI-generated cut-and-paste code will not be much worse.

            Expect AI to generate code directly from the requirements; it doesn't have to work, just compile, and meeting the business requirements doesn't matter. The business testers are all AI scripting bots.

            https://en.wikipedia.org/wiki/... [wikipedia.org]

          • I don't think you'll hear directly from someone who was not hired due to AI. Or even someone replaced by AI. Companies usually don't tell the workers they lay off or fire what the real reason is.

            All you can do is observe trends in developer employment and AI. There is likely an inverse correlation, but not necessarily causation.
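
            The correlation-vs-causation point above can be sketched numerically. A minimal Python example, using entirely made-up illustrative figures (the index values and variable names are assumptions, not real employment or AI-adoption data):

```python
# Illustrative sketch of the "correlation is not causation" point above.
# The numbers are entirely made up -- NOT real employment or AI-adoption data.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical yearly index values (assumptions for illustration only):
ai_adoption = [5, 12, 25, 48, 70]      # made-up "AI adoption" index
dev_openings = [100, 95, 80, 62, 50]   # made-up developer-job-openings index

r = pearson(ai_adoption, dev_openings)
print(f"r = {r:.3f}")  # strongly negative for this synthetic data
```

            A strongly negative r here still says nothing about causation: both series could be driven by a third factor (interest rates, say), which is exactly the commenter's caveat.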

  • by backslashdot ( 95548 ) on Wednesday September 11, 2024 @02:55PM (#64780925)

    He's screwing it up. Google is effing up. 10 years from now we'll be looking back at all the companies that missed the boat on the AI + robotics combo deal. Google is really missing the boat on robots because they didn't get the proper engineers and thinkers needed. For one thing, 90% of effort in robotics should be on grippers/dextrous hands, which are currently awkward as hell and prevent robots from doing anything useful. The walking part is pretty much solved, sort of. So 90% on dextrous hands and 10% on making faster, more powerful, and controllable actuator systems (they need to rethink the motor, they really do).

    • I see the appeal in a single robot which can be put to use in multiple automation projects, but I don't think it will be much use without AGI. Application-specific manipulators can bridge part of the gap between artificial stupidity and the world it can't model worth shit and where it can't learn from mistakes.

      Maybe better to have a general purpose robot with a toolchanger?

      • by timeOday ( 582209 ) on Wednesday September 11, 2024 @04:11PM (#64781237)
        I used to read comments on /. that machine translation would never work without AGI. (Could have been you for all I know). In the limit it is true, and yet, current AI translation capabilities are hugely useful for many practical purposes.

        I don't think we're going to have AGI until AI and robotics are synthesized in a positive feedback loop. An agent cannot attain AGI without sensing and manipulating the physical world. (Embodiment). Certainly we can't get AGI just by dumping in a bunch of Facebook posts about whatever people happened to be interested in.

        • While I think it is technically possible to have intelligence without a body, a body is really conducive to interacting with the world. For example, for vision we have binocular vision, with a changeable perspective. An awful lot of very closely related data, and it interacts with the choices made by the controller. And a whole bunch of closely related data is ideal for being compressed into a model of the world, and it is all real legit data, and instantly responsive. Easy mode for certain aspects of inte

      • I agree with the tool changer, but the thing is, the AI and the robot need to work in concert. Also, it needs to be possible to upload instructions to it. So for example, if it needs to do some task .. one robot can be trained to do it and then the others will have that ability too. For example assembling a TV or, in the case of a domestic robot, furniture. I should be able to buy a table from IKEA and then the robot should be able to assemble it upon upload of the instructions file.

        • Ideally those instruction files would be stored in a database, and the robot would see the picture on the box and "recognize" it, immediately downloading the instructions file. It will need more instructions than just what's included in the manual, as that doesn't tell you how to unbox it properly or what is paper/wrap and what is part of the furniture.

          Robots do not need to be able to learn directly, but they do need a generally large context window and the ability to pull from a database. With AGI, this
    • by gweihir ( 88907 )

      There is a good reason nobody combines AI and robots: Such robots could easily kill people. AI has no safety.

      • There is a good reason nobody combines AI and robots: Such robots could easily kill people. AI has no safety.

        Unlike people easily killing people. Clearly people have no safety.
        • by GoTeam ( 5042081 )

          Clearly people have no safety.

          Most do. There are a few defective units running about.

      • Here's an application in which AI has been combined with robots and has prevented many injuries, statistically:

        https://waymo.com/safety/impac... [waymo.com]

        This is a brand new report. 73% reduction in injury-causing accidents vs human drivers.

        • by gweihir ( 88907 )

          This is not the type of AI we are talking about. A self-driving car comes with tons of safety systems. An LLM does not. Now, having some _other_ type of AI being able to give _suggestions_ to a non-AI safety system is a different matter. But that is not the application scenario I was responding to.

    • Keep in mind that a self-driving car is an AI robot, and Waymo is the only one getting it done.
      • ...ps, Waymo is google. (Technically both are siblings of their parent company Alphabet).
      • Which needs an army of cellular-connected human remote operators and frequently suffers from the brittleness of the expert system that sets the limits within which the NNs are allowed to operate.

    • by Lenbok ( 22992 )

      It's kind of funny reading this comment from a few days in the future, when Google has just published a blog post describing their work on dextrous manipulators, with great progress in things such as tying shoelaces and hanging clothes on a coat hanger and putting it on a rack, and doing simple repairs to another robot.

      https://deepmind.google/discov... [deepmind.google]

  • I wonder what will happen if we actually solve AGI?

    It sounds a lot more realistic than, say, nuclear fusion, because our brains are a perfect example of working AGI and they don't require tons of energy and very special conditions.

    Will the first person/organisation to achieve it enslave us? Kill all other AI researchers? Fly to the stars? Become immortal?

    • by HBI ( 10338492 )

      If you want to utilize the already extant biological construct, just figure out how to interface to it and feed it in vitro. I'm sure figuring out that stuff would be immensely useful in constructing analogues that use different fuels and architectures.

      Bridging busted spinal cords would be a good start.

    • AGI .. sounds a lot more realistic than, say, nuclear fusion, because our brains are a perfect example of working AGI and they don't require tons of energy and very special conditions.

      Except we have no idea how NGI works. In contrast, we know exactly how fusion works. It's just a matter of perfecting the engineering and reducing the cost to make it economically viable. Those are no small feats, but the path ahead is clear, unlike AGI. IMHO, LLMs are not on that path.

      • Airplanes fly and submarines swim.

        We don't need to do something exactly the same way as evolution has done it and I'm 99.99999% sure that AGI is just a set of clever algorithms that can run on a run-of-the-mill PC in a decade from now. Yeah, I said it.

        The brain contains over 86 billion neurons, but not all of them are active at the same time.

        My guess is that at any given moment, e.g. when we're thinking, less than a billion are active and we can certainly emulate that, we just don't know and understand ho

        • The thing is, Nature is extremely conservative. The brain requires a significant percentage of the body's oxygen and sugar to operate. Apparently, NGI requires 100 billion neurons and trillions of connections. If Nature could have managed to evolve NGI with less, it would have.
          • True, but there have been documented cases of people missing more than half of their brains and still having the mental acuity of people with their entire brains intact. So, we may already be able to cut the number of neurons in half.
  • Ah, I see. Producing hot air.

  • Is he at the office 5 days a week or does he get a hybrid schedule?

    • Is he at the office 5 days a week or does he get a hybrid schedule?

      He has a net worth of $120 Billion. What do you think?

  • There was scuttlebutt about him not staying in any one location for more than a few days to avoid process service on some really unseemly lawsuits.

    It's not clear whether that is actually substantiated, but people keep saying it online. It would seem awkward to lead a hybrid team fully remotely.

    If he's badging in at Mountain View such rumors could be laid to rest.

  • by Anonymous Coward

    Gemini is by far the worst of all the AIs I've tried, even Gemini Advanced.

    ChatGPT, Claude, and even Meta AI blow it out of the water.

    It's incredible that Google has become this incompetent over the past decade.

    • Ironically it has the best ads and the most ads of all the AIs.
    • Incompetent?

      They clearly are not trying as hard as they could, or they would already have robots like Tesla and Figure do. They have been toying with AI and its possibilities for some time now, and they are the reason LLMs, including ChatGPT, exist, because they developed the transformer.

      They seem to focus on random individual achievements and leave piecing it all together to other companies.
  • by larryjoe ( 135075 ) on Wednesday September 11, 2024 @07:02PM (#64781773)

    Sergey is a founder. He doesn't program, unless it's for fun. He doesn't develop new models (I'm guessing he doesn't, since that isn't his field). He doesn't do hardware (also guessing that it isn't his field). In fact, does he have a field of expertise now? He also doesn't do personnel management or operations, aside from hiring CEOs that might hire others that might hire engineers.

    Serious question: What does "work" look like for a founder who is super rich and who likely has no position in the engineering organization?

    • Serious question: What does "work" look like for a founder who is super rich and who likely has no position in the engineering organization?

      Serious question: Does he commute to the office, work from home, or is he hybrid?

  • In other words, he's hanging around as a dilettante harassing the people who actually know what they are doing.
