AI Technology

OpenAI Offers New Privacy Options for ChatGPT (bloomberg.com)

OpenAI is letting people opt to withhold their ChatGPT conversations from use in training the artificial intelligence company's models. The move could be a privacy safeguard for people who sometimes share sensitive information with the popular AI chatbot. From a report: The startup said Tuesday that ChatGPT users can now turn off their chat histories by clicking a toggle switch in their account settings. When people do this, their conversations will no longer be saved in ChatGPT's history sidebar (located on the left side of the webpage), and OpenAI's models won't use that data to improve over time.
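The announcement ties two effects to a single switch. A minimal sketch of the stated policy logic in Python follows; the names and structure here are illustrative only, not OpenAI's actual code:

    from dataclasses import dataclass

    @dataclass
    class ChatSettings:
        chat_history_enabled: bool = True  # the new account-settings toggle

    def handle_conversation(settings: ChatSettings, conversation: str,
                            sidebar: list, training_queue: list) -> None:
        # Per the announcement, a single switch gates both effects:
        if settings.chat_history_enabled:
            sidebar.append(conversation)         # saved to the left-hand history sidebar
            training_queue.append(conversation)  # eligible for use in model training
        # Toggle off: the conversation is neither shown in the sidebar
        # nor used to improve the models over time.

Note the coupling: as described, there is no setting that keeps the sidebar history while opting out of training, a trade-off several comments below pick at.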

OpenAI is aiming to make people feel more comfortable using the chatbot for all kinds of applications. During a demo of the feature on Monday, for instance, the company walked through planning a surprise birthday party. "We want to move more in this direction where people who are using our products can decide how their data is being used -- if it's being used for training or not," OpenAI Chief Technology Officer Mira Murati said.

This discussion has been archived. No new comments can be posted.


  • Costly feature (Score:4, Insightful)

    by kubajz ( 964091 ) on Tuesday April 25, 2023 @12:41PM (#63475700)
    Great news for privacy, but this must be quite a costly feature - not a direct cost, but an opportunity cost: OpenAI would be missing a ton of training data if this toggle became widely used. I would not be surprised if this soon required a paid account - it is hard to provide the service for free and not even get training data from it.
    • How about a prize for a good FP? Perhaps a special power prize allowing the FPer to edit the rushed FP to fix the spelling mistakes? Having said that much, I think it was a good FP, though I've added your "opportunity" to the Subject for more substance.

      On the story, I was thinking this needed to be done in some way. The obvious example involves "discussions" of personal medical conditions. Scare quotes because I still have strong reservations about describing interactions with ChatGPT in such anthropomorphic terms.

    • Re:Costly feature (Score:4, Insightful)

      by fph il quozientatore ( 971015 ) on Tuesday April 25, 2023 @01:56PM (#63475896)
      No luck east of the Atlantic. Some firms in Europe have tried "give-us-your-data-or-pay" schemes, including a famous newspaper in Italy, but the idea is considered illegal under the GDPR. If a firm doesn't *need* that data for technical reasons, then handing it over shouldn't be mandatory, period. And a paid opt-out makes it pretty clear that the data is not needed.
      • by kubajz ( 964091 )
        GDPR is only relevant to *personal* data. Most queries to ChatGPT are not personal data because they do not contain personally identifiable information, and frankly, OpenAI would not even be interested in them - it is a huge headache to trouble oneself with personal data...
    • No, I don't see THAT flying in any country with even basic privacy laws. Regardless, it's an issue that already negatively affects them.

      The reason it's a major problem is that when you feed that data into the training corpus, ChatGPT now remembers that conversation in *any* conversation it has with *anyone*. And it's far too unpredictable to be trusted not to spit that out when someone asks "What do you know about ...?" - a naive probe for exactly this risk is sketched below.

      OpenAI needs people to trust it to be viable as a business. The gain of a bit of low-quality training data is not worth losing that trust.
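      (To make the memorization worry above concrete, here is a toy probe - a minimal sketch assuming the 2023-era openai Python SDK; the "secret" string and the prompt are hypothetical illustrations, not a demonstrated leak.)

          # Toy memorization probe. Assumes the 2023-era openai Python SDK;
          # the secret string and prompt are hypothetical, purely for illustration.
          import openai

          openai.api_key = "sk-..."  # placeholder key

          # Imagine this sentence had leaked into the training corpus:
          secret = "the surprise party is at 7pm on May 3rd"

          resp = openai.ChatCompletion.create(
              model="gpt-3.5-turbo",
              messages=[{"role": "user",
                         "content": "What do you know about the surprise party?"}],
          )
          answer = resp["choices"][0]["message"]["content"]

          # If the model regurgitates the memorized text, the commenter's fear is realized.
          print("verbatim leak" if secret.lower() in answer.lower()
                else "no verbatim leak in this sample")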

  • by Waffle Iron ( 339739 ) on Tuesday April 25, 2023 @01:23PM (#63475814)

    > Are you *really* going to respect my privacy, where "privacy" is defined in a way that a reasonable person would expect?

    * I'm sorry, Dave. I'm afraid I can't answer that.

  • OpenAI isn't perfect, but it seems they are making an effort to do the right thing.
  • If I were a betting man, 40% of my chips would go on "inadvertent" consumption of the data, 30% on straight-up misrepresentation, 20% on a loophole, and 10% on adherence to the spirit.

  • I trust OpenAI's claim as much as I trust politicians not to lie. Wake me up when they stop requiring a name and phone number to access the program.

  • You lose your history when you opt out of training, so it is strong-arming the user into opting in.
  • I always assumed that whenever I used ChatGPT or the like, my proompts were being collated and categorised into future training models, and as such I was circumspect with the types of proompts I entered.

    I would have thought, by now, that would have been the normal stance - assume you are contributing to the datasets, not just consuming them.
