Suit To Let Researchers Break Website Rules Wins a Round (axios.com) 71

An anonymous reader writes: Anyone following Facebook's recent woes with Cambridge Analytica might be surprised to hear that there's a civil liberties argument for swiping data from websites, even while violating their terms of service. In fact, there's a whole world of situations where that thinking could apply: bona fide academic research. On Friday, a judge in a D.C. federal court ruled that an American Civil Liberties Union-backed case trying to guarantee researchers the ability to break sites' rules without being arrested could move forward, denying the government's motion to dismiss. "What we're talking about here is research in the public interest, finding out if there is discrimination," Esha Bhandari, an ACLU attorney representing the academics, told Axios.


  • You could justify (Score:3, Insightful)

    by macxcool ( 1370409 ) on Tuesday April 03, 2018 @01:02PM (#56373615)
    just about anything if you start with 'in the public interest' and 'finding out if there is discrimination'. Also, can people just say 'unjust discrimination' instead? Discrimination is what we do as human beings; we can't function without it. (Sorry, pet peeve.)
    • by CrimsonAvenger ( 580665 ) on Tuesday April 03, 2018 @01:12PM (#56373697)
      I read the summary, and this was the first thing that occurred to me.

      ANYTHING can be defined as "in the public interest", if your lawyers are halfway decent. Including having the government spy on everyone, all the time.

      This looks like the beginning of a very slippery slope, and we aren't going to enjoy the ride to the bottom even a little bit....

      DO remember that even if you approve of this sort of thing when done to your enemies (political and otherwise), it won't be nearly so much fun when they use it against you by and by.

      And they will....

      • Yeah... this is a minefield.

        What Cambridge Analytica did was "research". Ostensibly, their research was "in the public interest" because they thought the best thing for the public was for Trump to win the election. At the same time, yes, there are places out there doing legitimately bad things, and if their TOS is enforceable, investigative researchers won't be inclined to look into those transgressions, because they might be sued or face other undue consequences.

        It might be a bit more appropriate to have some

      • There are very few, and possibly no, scenarios in which I think breaking a website's TOS should be prosecutable as a criminal offense.
      • by Anonymous Coward on Tuesday April 03, 2018 @03:37PM (#56374657)

        The summary is terrible. The short version of the argument is that private companies shouldn't be able to write overbearing ToS and turn violations into a federal felony under the CFAA. All this suit does is make it so private companies can't attack people with felony charges over some stupid ToS on their website. They could still go after you at the civil (but not criminal) level for damages related to any breach of your agreements; the main difference is that they can't get you thrown in jail for violating some nonsense they wrote on their web page.

        If you want to defend privacy, it's better to get actual privacy laws, so that the hundred thousand other companies who misused the Facebook friends API to suck in your social graph can't misuse it. Yes, I realize the only thing CA did wrong was to break Facebook's ToS, but making that into a federal felony is a bad idea, because a ton of you have likely broken their ToS in some trivial way and don't belong in prison. And the suit is especially aimed at non-disparagement clauses. Would you like everyone with a Facebook account to be forced, under pain of federal felony charges, to never say anything bad about Facebook again?

        Because that's the kind of crap you're asking for if you defend this use of the CFAA.

  • All an algorithm cares about is whether it's profitable. Rest assured it will be biased. Why? Because of exactly that. All it takes is for some algorithm to determine that $minority, as a group, has a higher chance of destroying something, not paying rent, or generally being something you don't want as a landlord. And there you go.

    This is near certain. Yes, that's unfair. Algorithms don't care about fairness, though. They "care" (strange word with computers. Or corporations, for that matter...) about profit.

    • An algorithm cares about nothing

      FTFY. Algorithms don't care about anything. They don't care about valid or invalid input. Either they will work or they won't, but they still don't care.

  • TFA notes this as well. This is an area where I say the ACLU is wrong: if their cause has a good case, then they can make an arrangement with the service provider rather than flout a claimed right to 'break the rules'.
    • TFA notes this as well. This is an area where I say the ACLU is wrong: if their cause has a good case, then they can make an arrangement with the service provider rather than flout a claimed right to 'break the rules'.

      You should read the original news from the ACLU [aclu.org] itself. Slashdot shouldn't be using the current link to a blogger anyway...

  • To clarify: (Score:5, Informative)

    by Anonymous Coward on Tuesday April 03, 2018 @01:08PM (#56373667)

    This suit is over whether breaking a site's TOS constitutes a criminal offense under the CFAA, notably 18 USC 1030(a)(4-6) [cornell.edu].

    There is a circuit split on this issue, which this suit attempts to clarify.

    This suit does not have any impact on civil or contractual claims researchers might face for breaking TOS; it only concerns whether doing so is a federal criminal offense under this specific law.

    • by AmiMoJo ( 196126 )

      Thanks, I figured it was like Iron Man or something.

    • Aaron Swartz, but we need a public defender willing to stand up through a long case with EULAs as the contracts, and no $100K+ bails.

  • Is this not the same argument used by doctors and governments throughout time for medical experimentation on prisoners and people who don't know they are being experimented on? Why would the ACLU of all organizations not see this?
    • Because the intent is different. The intent here is social justice, so the law shouldn't be allowed to be used against that.

  • by clovis ( 4684 ) on Tuesday April 03, 2018 @01:48PM (#56373923)

    I don't see anything in the actual lawsuit about swiping collected data, nor does the suit suggest accessing website data other than through the normal access a person typing at a keyboard and using the site in a normal way would have. In other words, it isn't about mass data grabbing from the servers behind the web site.
    What the complaint covers is very narrowly defined behavior.

    Here is the actual ACLU Sandvig v. Lynch - Complaint
    https://www.aclu.org/legal-doc... [aclu.org]

    It's about accessing websites in violation of TOS that forbid dummy accounts, bots for testing, or scraping (saving screenshots in this case), or that include non-disparagement clauses.
    The complaint says that online access that may violate a TOS should not be covered under the CFAA, and that the penalties are far too harsh.

    Here's what they're talking about: researchers want to use dummy accounts with names that appear to belong to some minority group, so that they can see if that group is being discriminated against. Airbnb, VRBO and the like are prime examples of where that sort of discrimination may be in play. Many sites require real names, and non-disparagement clauses would obviously be violated if the research turned up anything.

    I especially object to non-disparagement clauses on sites that have an open interface to the public, and although I think that requiring real names is a valid stipulation for using a website, I cannot support making the use of an alias a criminal act. The website has the option of cancelling your account if they don't like you, much in the same way that the mall can kick you out for not wearing shoes.
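
    For anyone curious how the analysis side of such an audit study usually works, here is a minimal sketch in Python. The numbers and profile groups are purely hypothetical and nothing here touches a real site; it just compares acceptance rates between two pools of otherwise-identical test profiles with a two-proportion z-test.

      # Minimal audit-study analysis sketch (hypothetical data).
      # Group A: control names, group B: minority-sounding names; the profiles
      # are assumed to be identical in every other respect.
      from math import sqrt, erfc

      def two_proportion_ztest(accepted_a, total_a, accepted_b, total_b):
          """Return (z, two-sided p) for the difference in acceptance rates."""
          p_a = accepted_a / total_a
          p_b = accepted_b / total_b
          pooled = (accepted_a + accepted_b) / (total_a + total_b)
          se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
          z = (p_a - p_b) / se
          return z, erfc(abs(z) / sqrt(2))

      # Hypothetical results from 160 inquiries per group.
      z, p = two_proportion_ztest(accepted_a=112, total_a=160,
                                  accepted_b=81, total_b=160)
      print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # small p: gap unlikely to be chance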

    • Here's what they're talking about: researchers want to use dummy accounts with names that appear to belong to some minority group, so that they can see if that group is being discriminated against.

      However, as a precedent, it would be applied much more broadly.

      Automated access to the websites I run is an abuse of those sites, and such access is covered by a robots.txt rule. I don't care if an "academic researcher" thinks that making thousands of hits per hour is going to help him learn something; it's still abuse, and his desire is not an excuse.

      I faced this issue back when robots were just taking over. I had a site that ran an ocean tide prediction program that I made available as part of other
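
      For contrast, here is a minimal sketch (hypothetical base URL, user agent, and path) of what a non-abusive fetcher looks like: it checks robots.txt before every request and throttles itself to a handful of hits per minute instead of thousands per hour.

        # Polite, robots.txt-respecting fetch sketch; all names are placeholders.
        import time
        import urllib.robotparser
        import urllib.request

        BASE = "https://example.org"      # hypothetical site
        AGENT = "research-bot/0.1"        # hypothetical, clearly identified user agent

        robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
        robots.read()                     # fetch and parse the site's robots.txt

        def polite_fetch(path, delay_seconds=10.0):
            """Fetch one page only if robots.txt allows it, then pause."""
            url = BASE + path
            if not robots.can_fetch(AGENT, url):
                print("skipping", url, "- disallowed by robots.txt")
                return None
            req = urllib.request.Request(url, headers={"User-Agent": AGENT})
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
            time.sleep(delay_seconds)     # stay well under a few hits per minute
            return body

        page = polite_fetch("/tides")     # hypothetical path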

      • by clovis ( 4684 )

        Basically I agree with you about abusive bots, and the case document does state that these researchers' scripts are designed to mimic an actual person typing and clicking, and that their automated scripts should cause minimal or no impact (point 96 or so).

        Excessive/abusive access (whether by bots or by groups of people) does need better rules regulating it. Right now the CFAA addresses that too vaguely.
        And that should not be something in a TOS for one site but not another, it should be for all sites.
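
        As a rough illustration of the "mimic an actual person" pacing the complaint describes, here's a minimal sketch with hypothetical timings: randomized, human-scale pauses between page actions so the script loads the site no faster than one person browsing would.

          # Human-paced automation sketch; the action list is a placeholder.
          import random
          import time

          def human_pause(min_s=2.0, max_s=8.0):
              """Sleep for a randomized, human-ish interval between actions."""
              time.sleep(random.uniform(min_s, max_s))

          for step in ["open listing", "view photos", "check dates", "close tab"]:
              print("simulating:", step)  # stand-in for the actual page interaction
              human_pause()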

        • And that should not be something in a TOS for one site but not another, it should be for all sites.

          Of course not. There may be sites that are happy to have robots come index them, and they should not be limited by laws prohibiting that. Neither should laws permit robot indexing in any blanket manner, since there are sites that have decided they do not want this.

          As for "Determining if 'uber' or 'airbnb' are illegally discriminating is not an 'academic research' problem," you're just plain wrong there.

          BY DEFINITION, if someone is trying to determine if an illegal activity is being performed, it is not an academic research issue. It is a legal issue.

          You're right that it is a legal problem, but studying laws and their impact on society

          This alleged research project is not studying "laws and their impact on society"; they're looking

          • Also important: by making fake accounts, they make Airbnb and Uber less reliable, since they obviously cancel any orders that do go through, which makes the users of the services trust them less. This is a material harm to the companies in question.

            • since they obviously cancel any orders that do go through, which makes the users of the services trust them less.

              Not only that, but during the time that the property is rented, it is not available to someone who is actually seeking a place to stay. Further, if there are credit card charges for deposits that need to be refunded, the company loses any transaction fees. These are even more direct material harms to the companies.

              If the company tracks interest in a property and bases its rental rates on that interest, then fake rentals may show up as real demand, with an associated increase in the rental rate -- whic
