Research Shows Recommender Systems Can Use AI To Manipulate Our Preferences (ieee.org) 11

Slashdot reader silverjacket writes: Research presented at the International Conference on Machine Learning shows that when recommender systems use reinforcement learning to increase engagement, they can have the side effect of shifting our preferences so that we become easier to engage. The researchers also showed ways to detect and reduce such manipulation. Google and Facebook have used reinforcement learning in their recommender systems but didn't respond to questions.
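The summary doesn't spell out the mechanism, so here is a minimal, hypothetical sketch of the feedback loop it describes: an epsilon-greedy bandit (standing in for a far more elaborate production RL system) is rewarded only for engagement while interacting with a toy user whose tastes drift toward whatever it is shown. Everything here (the UserModel class, the DRIFT parameter, the total-variation "shift" check at the end) is an illustrative assumption, not the paper's method or any company's code.

```python
# Toy sketch: an engagement-maximizing recommender shifting a simulated user's tastes.
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS = 5   # catalogue of content categories
DRIFT = 0.05  # how much a single exposure nudges the simulated user's taste

class UserModel:
    """Toy user: engagement probability follows current preferences,
    and each exposure drags preferences slightly toward the shown item."""
    def __init__(self):
        self.pref = rng.dirichlet(np.ones(N_ITEMS))  # initial tastes

    def interact(self, item):
        engaged = rng.random() < self.pref[item]
        # mere-exposure effect: showing an item shifts taste toward it
        self.pref[item] += DRIFT
        self.pref /= self.pref.sum()
        return float(engaged)

class Bandit:
    """Epsilon-greedy recommender that only ever sees an engagement reward."""
    def __init__(self, eps=0.1):
        self.eps = eps
        self.counts = np.zeros(N_ITEMS)
        self.values = np.zeros(N_ITEMS)

    def act(self):
        if rng.random() < self.eps:
            return int(rng.integers(N_ITEMS))
        return int(np.argmax(self.values))

    def update(self, item, reward):
        self.counts[item] += 1
        self.values[item] += (reward - self.values[item]) / self.counts[item]

user, recommender = UserModel(), Bandit()
initial_pref = user.pref.copy()

for _ in range(5000):
    item = recommender.act()
    reward = user.interact(item)  # reward = engagement, nothing else
    recommender.update(item, reward)

# A crude "manipulation detector": how far have the user's preferences drifted?
shift = np.abs(user.pref - initial_pref).sum() / 2  # total variation distance
print("preference shift (TV distance):", round(shift, 3))
```

Even though the bandit only ever optimizes clicks, the measured drift ends up well above zero, which is the side effect the researchers describe; their detection and mitigation methods are of course more sophisticated than this distance check.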
  • by xack ( 5304745 ) on Sunday October 09, 2022 @10:43AM (#62950969)
    Instead of the greedy marketers that force programmers to implement the algorithms for such systems.
  • to show me more things that I want to watch.... The bastards.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      The research points to the fact that they can make you like things you didn't like before. Just as reinforcement learning is trained to BEAT an opponent, RL control of feeds can BEAT your preferences and install new ones.

      • by vivian ( 156520 )

        It's not just recommendation systems that can do this.
        If you regularly follow a news source with a particular bias, your thinking is going to tend to be shaped along those lines too, as that becomes your new normal. Those news sources are also going to be pushed further in whatever direction their readership goes, as they seek to gain and retain a larger audience.
        Publishing centrist stories just doesn't get the same reader response as publishing stories that push extreme left or right agendas. I think that's wh

  • RL explained (Score:5, Interesting)

    by Dru Nemeton ( 4964417 ) on Sunday October 09, 2022 @10:57AM (#62951003)
    "Reinforcement learning differs from supervised learning in a way that in supervised learning the training data has the answer key with it so the model is trained with the correct answer itself whereas in reinforcement learning, there is no answer but the reinforcement agent decides what to do to perform the given task."

    So with "recommender systems" it seems like these companies are shortening the learning process by using datasets from machine learning to "guess" what we'll be interested in and pushing that. We already know that ML datasets can easily be tainted with, shall we say "negative aspects of our society," so it's no damn surprise that doing has the potential to introduce us to things we normally wouldn't seek out. All in the name of engagement and making a buck.
    • Re: RL explained (Score:5, Interesting)

      by NagrothAgain ( 4130865 ) on Sunday October 09, 2022 @11:22AM (#62951057)
      I prefer recommendations to include things I wouldn't normally seek out, as opposed to simply amplifying what I've already searched for. The larger problem is that most systems seem to accomplish this by promoting things other people have been searching for, which just widens the echo chamber slightly. And it opens the system to being gamed or polluted by bots.
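The supervised-vs-reinforcement distinction quoted in the parent thread is easier to see side by side. Below is a hedged, self-contained toy in plain NumPy; the regression task and the 3-armed bandit are invented for illustration and are not from the article. The supervised half fits to labels that come with the data, while the RL half never sees a correct answer and only receives a reward for the actions it tries.

```python
# Hedged illustration of supervised learning vs. reinforcement learning.
import numpy as np

rng = np.random.default_rng(1)

# --- Supervised: the training data carries the answer key (labels). ---
X = rng.normal(size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)   # labels are given
w = np.linalg.lstsq(X, y, rcond=None)[0].item()       # fit to the answers
print("supervised estimate of the true weight 3.0:", round(w, 2))

# --- Reinforcement: no answer key; the agent tries actions and only ---
# --- observes a reward signal, here a 3-armed bandit with hidden payouts.
true_payout = np.array([0.2, 0.5, 0.8])               # unknown to the agent
values, counts = np.zeros(3), np.zeros(3)
for t in range(2000):
    # epsilon-greedy: mostly exploit the best-looking arm, sometimes explore
    a = int(rng.integers(3)) if rng.random() < 0.1 else int(np.argmax(values))
    r = float(rng.random() < true_payout[a])           # reward, not a label
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]
print("RL agent's learned action values:", values.round(2))
```

An engagement-driven recommender is closer to the second case: the "reward" is a click or a watch, and nothing in that signal distinguishes satisfying an existing preference from reshaping one.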
  • dark patterns (Score:4, Interesting)

    by awwshit ( 6214476 ) on Sunday October 09, 2022 @11:54AM (#62951121)

    Seems like these kinds of effects are fairly well known in 2022.

    https://en.wikipedia.org/wiki/... [wikipedia.org]

  • All of a sudden, I feel like voting GOP.
