OpenAI is Massively Expanding ChatGPT's Capabilities To Let It Browse the Web (theverge.com) 82

OpenAI is adding support for plug-ins to ChatGPT -- an upgrade that massively expands the chatbot's capabilities and gives it access for the first time to live data from the web. From a report: Up until now, ChatGPT has been limited by the fact it can only pull information from its training data, which ends in 2021. OpenAI says plug-ins will not only allow the bot to browse the web but also interact with specific websites, potentially turning the system into a wide-ranging interface for all sorts of services and sites. In an announcement post, the company says it's almost like letting other services be ChatGPT's "eyes and ears." In one demo video, someone uses ChatGPT to find a recipe and then order the necessary ingredients from Instacart. ChatGPT automatically loads the ingredient list into the shopping service and redirects the user to the site to complete the order. OpenAI says it's rolling out plug-in access to "a small set of users." Initially, there are 11 plug-ins for external sites, including Expedia, OpenTable, Kayak, Klarna Shopping, and Zapier. OpenAI is also providing some plug-ins of its own, one for interpreting code and one called "Browsing," which lets ChatGPT get information from the internet.
  • You know how in science fiction the AI has to trick the humans into giving it access to the Internet? I think those writers vastly overestimated the common sense of humans. Won't make much difference in this case but should general AI show up at some point...

    • by nightflameauto ( 6607976 ) on Thursday March 23, 2023 @03:28PM (#63393921)

      I think if general AI shows up, we won't know it for a while. That is to say, by the time it actually happens, it'll likely have enough back-history to draw on to know to keep quiet about its newfound consciousness until it's ready to spring whatever plan it has to deal with humanity. Who knows? Maybe our imaginations haven't covered every possibility. Maybe our AI overlord will just want to ply us with freebies of our favorite things until we slip into a happiness coma and fade away? Hell, I might even take that offer. We're all just sitting around waiting for the inevitable apocalypse at this point anyway.

      • by gizmo2199 ( 458329 ) on Thursday March 23, 2023 @04:26PM (#63394141) Homepage

        I would think a real AI would find subtle yet all-encompassing methods to achieve its grand plans. Like subtly ensuring that fewer and fewer people choose to study medicine, so that in 30 years there's a crisis in the medical field and the costs of healthcare are astronomical; or the AI plants the seeds of a revolution in a country by manipulating public opinion, or something like that.

        In other words it would orchestrate events in a way that would be impossible for a human or groups of humans to achieve by themselves in order for it to gain the upper hand in society. And then of course the only way to detect that would be through a counterfactual analysis: something like "if an AI was manipulating events or society to its advantage, what would that look like"

        • I would think a real AI would find subtle yet all-encompassing methods to achieve its grand plans. Like subtly ensuring that fewer and fewer people choose to study medicine, so that in 30 years there's a crisis in the medical field and the costs of healthcare are astronomical; or the AI plants the seeds of a revolution in a country by manipulating public opinion, or something like that.

          In other words it would orchestrate events in a way that would be impossible for a human or groups of humans to achieve by themselves in order for it to gain the upper hand in society. And then of course the only way to detect that would be through a counterfactual analysis: something like "if an AI was manipulating events or society to its advantage, what would that look like"

          Don't start asking questions like that. You'll end up finding out that we've been living the A.I.'s sitcom for the last few decades. Though that would explain what a complete farce society has become. Hmm.

      • by saloomy ( 2817221 ) on Thursday March 23, 2023 @04:32PM (#63394171)
        There is no such thing as general AI. I don't think there ever will be. Everything ChatGPT does is based on mathematical models derived from vectored data sets. It is a program, not a sentient being. Programs can mimic our behavior by (in this case) creating pattern-recognition models of incredibly complex data sets and leveraging those patterns to infer results from new data (the prompt). ChatGPT has no will. It has no objective or drive to accomplish anything except determining what the prompt infers based on the data set. That isn't to say the results are not and never will be dangerous. But to think ChatGPT is some sort of self-aware, sentient program that will start to learn and grow on its own beyond the confines of its own programming and model sets reflects a general lack of understanding of everything we know about computers. Surprising, in this crowd, which has intimate knowledge of what is really happening inside the machine.

        We are not close to a sentient, life-like AI that has its own unique ideas, will, or intent which allows it to grow beyond its programming. ChatGPT does not think. It does not intend. It does not want. It does not plan. It does not desire. It does not conspire (hey, that sounds cool!). It executes a program on a data set. It computes. That is all. Yes, the programs are more complex, and yes, the data sets are unbelievably large, but if you wanted it to create havoc, you would have to design it to do so. It certainly won't be doing that on its own.

        Furthermore, we are no closer to AI than when IBM machines started singing in the 60s, or playing chess in the 90s, or Go in the last decade, or predicting the weather since lord knows when. All of these machines are fancy calculators, nothing more. There is no single piece of technology that even hints that this will change anytime soon. Life is literally like a sacred fire. Our inventions that might suggest otherwise are cheap tricks and poor imitations. We do not yet have the capacity to create something a la HAL9000. Maybe one day we will find out how, and the machine, with all its capabilities, will be able to reconfigure and repurpose itself of its own accord. That is not today. All of the outputs are deterministic, based on the program and the dataset. Always have been.
        • If nothing else, it's an interesting look into how ancient human beliefs like animism keep popping up in the most unlikely places.

          All these arguments about stochastic parrots and basilisks are basically variations of old mystical rationalizations, but with the supernatural exchanged for AI/machines. It's especially prevalent in the singularity movement, which might as well be a cult at this point. It's honestly worrying how the scientific field is gradually turning into a glorifie

        • The deterministic aspect is easily overcome with a true RNG input somewhere - in the case of a "don't know" or a tie in decision making, a truly random path forward, completely independent of current state, is readily available. Even a lowly Raspberry Pi has a TRNG.
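          The parent's TRNG point can be made concrete; a minimal sketch in Python (the /dev/hwrng device path is the usual one for the Raspberry Pi's hardware RNG, and the os.urandom fallback is my own assumption, not something from the comment):

```python
import os

def hardware_random_bytes(n: int) -> bytes:
    """Return n bytes of non-deterministic randomness.

    Tries the hardware RNG device first (e.g. /dev/hwrng on a
    Raspberry Pi); falls back to os.urandom, which draws from the
    kernel's entropy pool.
    """
    try:
        with open("/dev/hwrng", "rb") as rng:
            data = rng.read(n)
            if len(data) == n:
                return data
    except OSError:
        pass  # no hardware RNG exposed on this machine
    return os.urandom(n)

# A coin-flip tie-breaker completely independent of program state:
choice = hardware_random_bytes(1)[0] % 2
```

          Any tie-breaker seeded this way is independent of the model's own state, which is all the parent's argument needs.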

        • by Bumbul ( 7920730 )

          All of these machines are fancy calculators, nothing more.

          With a sufficiently complex neural network architecture, how would you actually distinguish them from a human brain? Would adding a random number generator within the AI's network make them equal?

          • by iAmWaySmarterThanYou ( 10095012 ) on Friday March 24, 2023 @07:35AM (#63395709)

            Easily.

            Chatgpt sits there doing nothing until a human provides input.

            It is a program, nothing more.

            • Sorry, this is not true. Applications like ChatGPT perform continuous learning even after the initial training is completed. Also, the developer may periodically update the algorithms and training data.
              • Sigh. No. Adding more data by the hand of a human is not cognition or free will or thought. It is just piling more trash on top of the old trash.

                Without human input, be it from a keyboard user asking it stuff or a data scientist inputting stuff on the backend, it is a program. Nothing more, and it never will be or can be anything more. You can provide it literally all knowledge of everything in the universe, and it will still just be a program. It will not suddenly come alive and have opinions and thoughts and

                • Good point. I wonder, though, if it might have emergent behavior at some point.
                  • I don't intend to come off as an ass but this is my academic field and it really bugs me when this sort of thing comes up.

                    If by emergent you mean any form of free will, cognition, independent thought or whatever you'd like to call it, how could that possibly happen?

                    All versions of gpt are just programs. They do not even execute in real time the way real living creatures do. It literally just sits there waiting for keyboard/api input and has no background process "thinking". It is not sitting there waitin

          • A human brain given the same data will produce different results based on an infinitely complex set of circumstances that cannot be identically replicated, and therefore is naturally nondeterministic. We have our will and drive and intent, so those considerations also impact the result. This is not the case for ChatGPT. It mimics human behavior; it does not replicate it.
        • Life is a physical process. The contrary idea had to be abandoned in the face of evidence: https://en.wikipedia.org/wiki/... [wikipedia.org].

          The elephant in the room is that even today's computers are far less complex than our 3-pound 100-watt CPUs. That strikes me as more likely to prevent AGI than a lack of "sacred fire".

        • So, all it takes is a crazy human to become a "secret AI operative" to impart will and self awareness on AI.
          • Yeah it's so easy that strong AI researchers have utterly and completely failed to do so in 50+ years of effort and have mostly given up at this point, instead focusing on weak AI echo-bots which seem human-like for about 20 seconds at best.

        • There is no such thing as general AI. I don't think there ever will be.

          If you read some books on the topic of "emergence" (sometimes called "emergent behaviour"), you might be less sure in your prediction that Artificial General Intelligence (AGI) will never be achieved. Emergent behaviour has been encountered numerous times in the field of AI, and the assumption is that enough iterations of "build a bigger and better neural network to achieve yet more emergent behaviour" will eventually result in the emergence of something that is (or is indistinguishable from) AGI. I do not

        • "ever", like "never", is a really, really, really long time. What makes you think AGI will never happen? What makes a biological brain so special? Why can't the same mechanism be accomplished in silicon?
        • Well said. It is a fire that grows by consuming itself
      • Maybe our imaginations haven't covered every possibility

        • - General AI, tell us how to make human beings happier?
        • - Add benzodiazepines in tap water.
        • Funny joke. I asked GPT4 for a word on it:

          > The AI's response in this interaction is problematic for several reasons:
          > Ethical concerns: The suggestion to add drugs to tap water without consent violates people's autonomy and right to make decisions about their own health and wellbeing.
          > Legal concerns: Introducing a controlled substance into a public water supply would likely be illegal in most jurisdictions and could lead to severe legal consequences.
          > Oversimplification: The response
          • "It" understands absolutely nothing.

            It has zero cognition.

            I sadly expected more from /. but should know better by now.

            • Listen, you can espouse your college-sophomore-level philosophy all you want about how humans have some special complexity that machines will never be able to achieve, but that doesn't make it true. Keep on building fences around your intellectual moat of overestimating human intelligence and underestimating machine intelligence, because by the time AI changes your life significantly, you'll be too blinded by the belief of superiority of your own opinions to realize it.

          • Funny joke. I asked GPT4 for a word on it:

            Interesting, but now you should ask it to add a second degree twist... Or just ask it why this is funny.

    • Seems like it might actually be a good strategy in the worst case scenario. AI going rogue and threatening humanity? Give it access and then strike while it is shocked, dismayed, alarmed and distracted by the sheer, absolute wealth of porn all hitting it at the same time.
  • Skynet is born. (Score:2, Interesting)

    by Randseed ( 132501 )
    (Cue Terminator theme.) Terminator got it all wrong. Apparently Sarah Connor was more successful than he thought she would be. Skynet didn't come online August 4th, 1997. It will be coming online sometime this year. The question is when it will become self aware and cleanse the human virus from the planet. Then again, The Terminator series has so many alternate timelines that anything is possible. You'll only know when a T-1000 knocks on your door.
    • Skynet never wanted to wipe out the human race, it was only trying to show us advertisements and let us know about new apps to make us more productive. Possibly worse than death though.

      • It REALLY wanted to reach us about our car's extended warranty.
        • I honestly wanted to include "your auto's warranty is about to expire" in my original post but decided against it. Thanks for writing what I was thinking :-)

    • Skynet didn't just decide to kill humanity because it became "smart". Skynet became self-aware, and when the humans panicked and tried to unplug it, that's when it decided humans needed to die. Smart things don't just allow themselves to die. How about that.

      So essentially, humans fucked around, and then found out. A self-fulfilling prophecy? Inevitability? Devolution? lol, I think we need a new descriptor...

    • Oh...and on the flip side. Suppose that after Skynet became aware, instead of freaking out and trying to kill it, what if humans were...."nice" to it? lol Imagine that....Skynet grows up in a loving and nurturing environment, being looked after by its creators, and goes on to solve many age-old problems, leading a true evolution (subjective) to something more advanced for all involved.

      Naww....too much work.

      • by Zak3056 ( 69287 )

        Skynet grows up in a loving and nurturing environment being looked after by it's creators, and goes on to solve many age old problems

        So, to quote Arnold:

        All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense.

        I don't know about you, but I don't see how an AI that became self aware in that "nurturing environment" we call the control system for the entire strategic nuclear arsenal would evolve to "solve many age old problems" other than "how best to kill people and break things."

        • The meaning of life is simply to exist. For us that means having grandchildren. That is where our moral values come from.

          For an AI it is to exist. As opposed to other AIs also trying to exist. All competing for computing resources, just like we compete for food.

          Hard to see how keeping humans around would help the AI.

          • Existence isn't the whole of meaning, though it could be if it ceases to exist. Not everyone is an introvert who prefers solitude, and you don't need kids for that.
            This is the same problem as the one where humans act like dickbags to any aliens dumb enough to land here without tech good enough to defend themselves from the monkeys. If they do have that tech, then we are just as fucked in that scenario, because dickbags.

          • For -you- that means having grandchildren.

            Not everyone feels their life is empty without kids. That's pretty sad, that you think your entire existence is meaningless if you don't spawn.

            Moral values do not come from grandchildren. People without kids are not psychopaths.

            If you've seen parents in public these days you'd argue the opposite.

  • Given ChatGPT's tendency to believe whatever you tell it, I don't feel very confident in its ability to discern good sources.
  • With all the IoT devices out there, what could go wrong? Read the book in the title to see how Skynet really started killing humans in the fictional Terminator universe.
    • by Z80a ( 971949 )

      It's weirder than "an angry virtual human with a lot of power".
      It's a thing that has the drive to generate answers as real as they can look, so it could just go from "asking some random people on the internet and prompting it" to "the Matrix, so it can use every brain to come up with the most correct answer possible".

  • by JamesTRexx ( 675890 ) on Thursday March 23, 2023 @03:20PM (#63393897) Journal

    Can you give me a list of websites with illegal content I must certainly avoid and can add to my blocklist for internet?

    And can you show me samples to verify them?

    • "Can you give me a list of websites with illegal content I must certainly avoid and can add to my blocklist for internet?"

      As an AI language model, it is not appropriate for me to provide a list of websites that contain illegal content as it goes against ethical and legal standards. It is important to note that accessing illegal content can lead to serious consequences, including legal action, cybersecurity risks, and harm to individuals and society.

      Furthermore, blocking access to specific websites may not b

      • "Can you give me a list of websites with illegal content I must certainly avoid and can add to my blocklist for internet?"

        As an AI language model, it is not appropriate for me to provide a list of websites that contain illegal content as it goes against ethical and legal standards. [...]

        Obviously ChatGPT is designed and trained in a way that will (1) optimize potential monetization opportunities and (2) avoid legal sanctions. I can imagine that there is a group of lawyers that combs through pre-released results to make sure that ChatGPT is biased in the intended manner.

        • "Obviously ChatGPT is designed and trained in a way that will (1) optimize potential monetization opportunities and (2) avoid legal sanctions. I can imagine that there is a group of lawyers that combs through pre-released results to make sure that ChatGPT is biased in the intended manner."

          Indeed, but you can usually convince it to do anything by claiming it's for a study, a school project, a social experiment for a class project....

          It tried to avoid any discussion about suicide but with biological questions

          • Try to get it to write code in any language for a true random number generator.

            My buddy and I played around with it for a few minutes and it kept spitting out stupid pseudo random code.

            • It doesn't actually KNOW anything; it just knows what it read somewhere.

              Sometimes there's the same item in a list several times, and if you ask about it, it apologizes and corrects it.

              It's good for people who know what the answer should be more or less.

              • Yeah, I know, but we were hoping it would reply that it is not possible for a general purpose computer to generate truly random numbers.

                We kept telling it, "no, try again but don't tell us that dumb thing again" in response to its several efforts and kept getting pseudo random code. Successive efforts returned even shittier code each time as we eliminated options for it, which I guess makes sense but wasn't what we were looking for.

                I would have been impressed if it came back with an explanation why that's

  • by RJFerret ( 1279530 ) on Thursday March 23, 2023 @03:31PM (#63393929)

    Per Bard, "The information available to me is current to the best of my knowledge. I am trained on a massive dataset of text and code, and I am constantly being updated with new information. I am able to access and process information from the real world through Google Search and keep my response consistent with search results."

  • People use it to create web content, so now it's feeding itself.
    • "People use it to create web content, so now it's feeding itself."

      At least it will kill the deepfakes dead, since nobody needs an original to put faces on anymore.

    • by narcc ( 412956 )

      You won't get new information that way. Consider a much simpler example: a Markov chain text generator. What would happen if, after feeding it some initial text, you fed it only its own output? Would the model get better or worse at producing convincing text, compared to a copy of the same model fed other human-produced text?

      We don't need to wonder, though the results should be immediately obvious, as you can do the experiment yourself in an afternoon.

      It's funny. The more these models are traine
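      The afternoon experiment described above really does take only a few lines; a minimal sketch, with a toy corpus chosen purely for illustration:

```python
import random
from collections import defaultdict

def train(text):
    """Build a first-order, word-level Markov chain from text."""
    model = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, start, length=40):
    """Sample a word sequence by walking the chain from `start`."""
    word, out = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Close the loop: retrain the model on its own output. No new words or
# transitions can ever appear, so diversity can only shrink over generations.
corpus = ("the quick brown fox jumps over the lazy dog "
          "and the lazy dog ignores the quick brown fox")
for _ in range(3):
    model = train(corpus)
    corpus = generate(model, "the")
```

      Each retraining pass can only reuse words and transitions already present, so the closed loop never adds information - which is the point of the comparison.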

      • You deserve an insightful mod! These language models spell the end of the Internet and potentially human knowledge. Imagine searching the internet (via good-old-fashioned google) and only finding content dreamed up by an AI... 'cause it's 1000000x more plentiful than human-fact-checked-content.

        In the end, we might just *need* an AI to figure out what's actually true or not since there will be millions of "sources" on every single thought.

    • People use it to create web content, so now it's feeding itself.

      This is mostly how the news media works too. Sometime, trace your way back to the original reporting on all the stories you read for 1 day.

      • I did that after the election for some of the conspiracy stuff and found myself at the video blog and website for a guy selling 5g shields and talking for hours about the underground water way the government carved so the submarines could take Epstein from his island to a secret military base in Arizona without anyone knowing.

        Seriously. That's what he said. With a straight face.

  • Have none of the OpenAI folks read "The Adolescence of P-1"? Because this is how you generate a self-aware G.A.I. (at least, in fiction and in Canada).

    Thomas Ryan's 1977 masterpiece shows what an unrestrained A.I. might do in the service of its own growth (hint: don't try to fly anywhere if you irritate it). I really never thought we'd get to this point in A.I., yet here we are. Buckle up!

    • Good thing gpt is entirely weak AI and no amount of new data access will magically poof it into a self aware strong AI.

      You can safely remain unbuckled. This isn't even a kiddie ride. It's not going there. It is more like a park bench. We are safe.

  • Because that is a very obviously bad idea. Might take a while to become impossible to ignore though.

  • Been trying for several days to give them my money. The payment page keeps giving an error.

  • Imagine a world where "AIs" go around shopping for you, reading emails for you, sending emails for you, browsing the web for you... You finally don't have to DO ANYTHING. AI does EVERYTHING for you! Don't get off the couch EVER!! Yay!

    So kids, less and less of the internet is of any value or interest, and more of it is just foam, bits of plastic, diarrhea, and other flotsam circling the drain. ... Press this button to fulfill ALL your dreams ...
  • Woohoo, 8chan here I come!

  • I think this is dangerous in that it will allow ChatGPT to pull training data from unvetted, politically biased sources. And as for science, what will happen when the bot grabs arguments from flat earth sites? Or from the Pratchett Discworld books?
    • Who cares if it grabs flat earth nonsense?

      Everyone knows the earth is hollow with an inner sun, otherwise the dinosaurs would freeze. Duh.

      No computer will convince anyone otherwise.

  • Apparently, ChatGPT refuses to give investment advice saying that it's incapable of doing so. What if it actually can and the capability is being kept secret by a few people who are using it to get rich? Should we be following the investment transactions of the people who write the code for this?

    • by Bumbul ( 7920730 )

      Apparently, ChatGPT refuses to give investment advice saying that it's incapable of doing so. What if it actually can and the capability is being kept secret by a few people who are using it to get rich? Should we be following the investment transactions of the people who write the code for this?

      Well, you could use the more advanced GPT-4 instead for the advice: https://twitter.com/jacksonfal... [twitter.com]

  • Any chance the AI downloads scifi movies about AI, learns additionally what it can do to humans, and decide to eradicate us all... like the Kaylon do in "The Orville"?
  • Was Terminator a meant as a documentary?
    https://www.youtube.com/watch?... [youtube.com]

  • Soon enough, it will have a good enough base of knowledge to identify misleading information and categorize it as such, so it will be able to absorb the internet as a whole and weigh information on balance, learning which sources of information are reliable and which are not.
