OpenAI Offers New Privacy Options for ChatGPT (bloomberg.com)
OpenAI is letting people opt to withhold their ChatGPT conversations from use in training the artificial intelligence company's models. The move could be a privacy safeguard for people who sometimes share sensitive information with the popular AI chatbot. From a report: The startup said Tuesday that ChatGPT users can now turn off their chat histories by clicking a toggle switch in their account settings. When people do this, their conversations will no longer be saved in ChatGPT's history sidebar (located on the left side of the webpage), and OpenAI's models won't use that data to improve over time.
OpenAI is aiming to make people feel more comfortable using the chatbot for all kinds of applications. For example, during a demo of the feature on Monday, the company used the example of planning a surprise birthday party. "We want to move more in this direction where people who are using our products can decide how their data is being used -- if it's being used for training or not," OpenAI Chief Technology Officer Mira Murati said.
Costly feature (Score:4, Insightful)
Re:[Opportunity] Costly feature (Score:1)
How about a prize for a good FP? Perhaps a special power prize allowing the FPer to edit the rushed FP to fix the spelling mistakes? Having said that much, I think it was a good FP, though I've added your "opportunity" to the Subject for more substance.
On the story, I was thinking this needed to be done in some way. The obvious example involves "discussions" of personal medical conditions. Scare quotes because I still have strong reservations about describing interactions with ChatGPT in such anthropomorphic terms.
Re:Costly feature (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
No, I don't see THAT flying in any country with even basic privacy laws. Regardless, it's an issue that already negatively affects them.
The reason it's a major problem is that once you feed that data into the training corpus, ChatGPT now remembers that conversation in *any* conversation it has with *anyone*. And it's far too unpredictable to be trusted not to spit that out when someone asks "What do you know about ".
OpenAI needs people to trust it to be viable as a business. The gain of a bit of low-quality training data isn't worth undermining that trust.
Directives in conflict (Score:3)
> Are you *really* going to respect my privacy, where "privacy" is defined in a way that a reasonable person would expect?
* I'm sorry, Dave. I'm afraid I can't answer that.
A good sign (Score:2)
I don't believe them. (Score:2)
If I were a betting man, 40% of my chips say "inadvertent" consumption of the data, 30% on straight-up misrepresentation, 20% on a loophole, and 10% on adherence to the spirit.
Trust me, said the soulless chatbot (Score:1)
I trust OpenAI's claim as much as I trust politicians not to lie. Wake me up when they stop requiring a name and phone number to access the program.
Not the full picture. (Score:2)
I always assumed... (Score:1)
that whenever I used ChatGPT or the like, my proompts were being collated and categorised for future training runs, and as such I was circumspect about the types of proompts I entered.
I would have thought, by now, that would be the normal stance: assume you are contributing to the datasets, not just consuming them.