Businesses AI Technology

ACM, Ethics, and Corporate Behavior

theodp writes: In the just-published March 2022 issue of Communications of the ACM, former CACM Editor-in-Chief Moshe Y. Vardi takes tech companies -- and their officers and technical leaders -- to task over the societal risk posed by surveillance capitalism in "ACM, Ethics, and Corporate Behavior." Vardi writes: "Surveillance capitalism is perfectly legal, and enormously profitable, but it is unethical, many people believe, including me. After all, the ACM Code of Professional Ethics starts with 'Computing professionals' actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good.' It would be extremely difficult to argue that surveillance capitalism supports the public good."

"The biggest problem that computing faces today is not that AI technology is unethical -- though machine bias is a serious issue -- but that AI technology is used by large and powerful corporations to support a business model that is, arguably, unethical. Yet, with the exception of FAccT, I have seen practically no serious discussion in the ACM community of its relationship with surveillance-capitalism corporations. For example, the ACM Turing Award, ACM's highest award, is now accompanied by a prize of $1 million, supported by Google."

"Furthermore, the issue is not just ACM's relationship with tech companies. We must also consider how we view officers and technical leaders in these companies. Seriously holding members of our community accountable for the decisions of the institutions they lead raises important questions. How do we apply the standard of 'have not committed any action that violates the ACM Code of Ethics and ACM's Core Values' to such people? It is time for us to have difficult and nuanced conversations on responsible computing, ethics, corporate behavior, and professional responsibility."
  • by mspohr ( 589790 ) on Thursday February 24, 2022 @09:19PM (#62301329)

    One of these does not belong.

  • Start with what information is being kept by cell phone companies [www.zeit.de]. Then find an advocate in the legislature, let them request that data from the phone company to show how that advocate's movement can be tracked, and ask the rest of the legislature if that's a good idea.
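
    A minimal sketch of what such retained records make possible. The CSV schema here is invented for illustration; real carrier data (calls, texts, data sessions) is far richer:

      # Hypothetical sketch: turning retained cell-tower records into a
      # movement profile. Column names are assumptions, not a real format.
      import csv
      from datetime import datetime

      def movement_timeline(path):
          """Return (timestamp, tower_lat, tower_lon) rows sorted by time."""
          with open(path, newline="") as f:
              rows = [
                  (datetime.fromisoformat(r["timestamp"]),
                   float(r["tower_lat"]), float(r["tower_lon"]))
                  for r in csv.DictReader(f)
              ]
          return sorted(rows)

      # Each record pins the subscriber near one tower at one moment;
      # months of them trace home, workplace, and daily routine.
      for ts, lat, lon in movement_timeline("retention_dump.csv"):
          print(f"{ts:%Y-%m-%d %H:%M}  near ({lat:.4f}, {lon:.4f})")
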
  • >machine bias is a serious issue

    Unless you're talking about class A amplifiers, I call bullshit. If an output correlates with an input in the real world, it's not "bias" for a machine to reflect the same, no matter how much it offends the SJW in you.
    • Indeed. The problem is not "machine bias" but that the machines are not biased, even when the result is politically incorrect.

      Machine learning was used to predict the probability of defendants jumping bail and not showing up for trial. The system was twice as likely to predict that a black man would not appear for trial as a white man. But that is not biased because black men in the training data really were twice as likely to be no-shows.

      That is not "bias" but rather an unbiased and accurate prediction.

      • I don't know if they were so much "no-shows", as that the automated systems used to collect the training data just couldn't see them [youtu.be].
      • by ranton ( 36917 )

        Machine learning was used to predict the probability of defendants jumping bail and not showing up for trial. The system was twice as likely to predict that a black man would not appear for trial as a white man. But that is not biased because black men in the training data really were twice as likely to be no-shows. That is not "bias" but rather an unbiased and accurate prediction.

        Your own bias is leading you to massively distort the problem with the bias found in these attempts at using AI in justice reform. The system wasn't twice as likely to predict a black man was high risk; it was twice as likely to inaccurately identify a black man as high risk. They looked at the false positives and found the system was much more likely to flag a black man incorrectly. This could lead to the algorithm flagging black men at 3-4 times the rate of white men, when it should have only been flaggi
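        The distinction ranton draws - "flagged" versus "incorrectly flagged" - is easy to see with a toy confusion matrix. The counts below are invented purely to illustrate the arithmetic, not taken from any real study:

          # Toy sketch: equal-sized groups, very different false positive rates.
          def rates(tp, fp, tn, fn):
              return {
                  "flagged_high_risk": (tp + fp) / (tp + fp + tn + fn),
                  "false_positive_rate": fp / (fp + tn),  # no-risk, yet flagged
                  "false_negative_rate": fn / (fn + tp),
              }

          group_a = rates(tp=300, fp=200, tn=400, fn=100)  # hypothetical group A
          group_b = rates(tp=150, fp=50, tn=700, fn=100)   # hypothetical group B

          for name, g in [("A", group_a), ("B", group_b)]:
              print(name, {k: round(v, 2) for k, v in g.items()})

          # Group A's false positive rate (200/600 = 0.33) is five times
          # group B's (50/750 = 0.07): harm concentrates on group A even if
          # each individual score is "accurate" on average.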

    • It's like image searches that can't distinguish between black people and gorillas. It's not that machine learning can't distinguish between those, it's that they didn't have enough black people (or gorillas) in their data set.

      The bias is the bias of the people who created the data set. It doesn't match reality.
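      A sketch of the dataset audit this points at - counting how each group is represented before trusting the model. The "group"/"label" fields are hypothetical:

        # Count per-group representation in a training set; a model trained
        # on this skew has barely seen one group, so errors concentrate there
        # regardless of the learning algorithm.
        from collections import Counter

        def representation_report(examples):
            counts = Counter((ex["group"], ex["label"]) for ex in examples)
            total = sum(counts.values())
            for (group, label), n in sorted(counts.items()):
                print(f"{group:>10} / {label:<10} {n:>6}  ({n / total:.1%})")

        representation_report(
            [{"group": "group_x", "label": "person"}] * 9000
            + [{"group": "group_y", "label": "person"}] * 200
        )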

  • ... tech professionals are "moderate" conservatives (in the capitalist, not socialist, sense), safely smug in their middle-class incomes.

    The average member of the public has no idea what a computer is or how a computer works, which is how MMOs got off the ground in the mid-'90s. When the first really successful back-ended game (Ultima Online) succeeded in '97, it changed the entire course of PC gaming and computer history, as the war on software ownership went into overdrive once Microsoft, Valve,

  • Incentives (Score:4, Insightful)

    by dskoll ( 99328 ) on Thursday February 24, 2022 @10:09PM (#62301439) Homepage

    The AI algorithms are poorly understood, and they are given a goal: Maximize revenue. Nobody foresaw that they'd do really shitty things to maximize revenue.

    The only ways around the problem are either to ban surveillance capitalism (unlikely) or to put regulations in place that change the goals of the algorithms so they don't do shitty things. That is, the goal must contain other targets than "maximize revenue".

    We need a regulatory framework that can meaningfully control the algorithms, and that has flexibility to update the regulations in the face of unexpected or emergent behavior from the algorithms.

    This, too, is unlikely.

    So I think surveillance capitalism and social media are going to destroy our societies. Democracy is already in deep trouble, and social cohesion is under immense strain. Of all the ways machines could have ended up doing humanity in, I don't think anyone predicted this.
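    In code terms, the fix dskoll describes amounts to adding regulated harms as penalty terms in the objective. A toy sketch, with metrics and weights invented for illustration:

      # Toy objective: revenue minus weighted, regulator-defined harms.
      def objective(revenue, outrage_amplification, privacy_violations,
                    w_outrage=5.0, w_privacy=10.0):
          return revenue - w_outrage * outrage_amplification \
                         - w_privacy * privacy_violations

      # Pure revenue maximization (weights zeroed) prefers the harmful policy:
      print(objective(100, 8, 4, w_outrage=0, w_privacy=0))  # 100
      # The regulated objective prefers the tamer one:
      print(objective(100, 8, 4))  # 100 - 40 - 40 = 20.0
      print(objective(70, 1, 0))   # 70 - 5 - 0 = 65.0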

    • Maximizing revenue, maximizing paperclips [wikipedia.org], either one's good.
    • Over the thousands of years of organized human society, the constant pattern is hierarchy, wherein a small sector of society uses its devices to control and direct the masses. Any organization, human or otherwise, succeeds or fails by its understanding of what it must manipulate, how, and why. The latest technologies are far better at this than anything in previous human history. In general, business goals are designed to maximize profits and must be as totalitarian as possible to achieve this goal. A democrat
      • by raymorris ( 2726007 ) on Friday February 25, 2022 @12:03AM (#62301633) Journal

        I'm not quite sure what you mean by this:

        > A democratic government is ideally goal centered on maximizing benefits of all members of society ...
        > Theoretically, a democratic government balances the goals of its component organizations for general welfare of all the members of its nation

        Ideally, sure. Ideally, any government of any type would try to maximize benefits for all members of society. None do, of course. They mostly maximize benefits for themselves, for politicians. That's because mammals have a strong instinct to take care of themselves. The personality types pompous, arrogant, and power-hungry enough to run for office, of course, tend to be even more self-centered.

        I'm not sure what you mean by "theoretically a democratic government ...". Is there some theory that suggests that would somehow happen?

        Democracy, of course, simply means that policy decisions - decisions about macroeconomics, foreign policy, etc. - are nominally made by the average Joe. That is, policy decisions about macroeconomics, foreign policy, etc. are made by the majority of people - who know nothing about economics, foreign policy, etc.

        *Pure* democracy is 51% of people making rules about how you need to segment your vlans or price your bananas. Without knowing what a vlan is or where bananas come from. PURE democracy is simply Idiocracy - control by the uninformed. There's no reason to think that would ever result in decisions that are good for anyone.

        So we have *representative democracy*. Which works on a small scale. In representative democracy, aka a republic, the majority was supposed to choose someone who does know something about economics or law or foreign policy. Then those who were chosen based on their qualifications and trustworthiness got together and made decisions. That did work for a while, and can still work in a town. At national levels, however, people don't pick representatives based on qualifications. They pick based on marketing, on advertising. If they were choosing based on qualifications, we wouldn't have a barista making the laws, would we? Representative democracy at the national scale is rule by marketing - whoever gets the most likes rules. And fuck the 49% of people who didn't pick that politiball team.

        At the local level it's a bit different. I've been on the board of an organization with a few hundred members. They chose me to represent them because they know me, not based on marketing or advertising. In my city, the local council member lives two blocks from me. I had a 10-15 minute conversation with each candidate, one on one. I can drop by his shop and talk to him whenever needed. So I can make a decision based on something more than the character he plays in a 30-second TV commercial.

        • I agree with much of what you say. By theoretical democracy I mean a democracy where every qualified individual can contribute an opinion that carries at least a particle of weight in making a government policy decision. It is theoretical because, in the past, it was not possible to implement. Current digital technology has made it possible, but the political system now provides merely a battle between political contenders for office whose behavior is oriented toward suppressing any voters who
          • > intelligent democracy demands that the population be well and thoroughly educated and informed, since one cannot make reasonable decisions without proper information. Current information systems are extensive but almost totally corrupt and, if nothing else, that is an immense barrier to true democracy.

            Agreed. There's also the fact that most people have no *interest* in studying economics, or geography, or pharmacology, or most of the other relevant topics. They don't WANT to spend their time being well

            • I am afraid I cannot discuss this intelligently since I am a rather isolated individual. I am in my middle 90's, a former New Yorker now living in Helsinki, only roughly capable in Finnish and fully aware of my incompetence in the depths of the local politics. I have lived in five different countries, highly limited in linguistic competence, but, in general, discovered wonderful and talented people in all my experience everywhere. Even as a child I have been puzzled by the acceptances of many different people
    • Mod up. I take exception to "perfectly legal." When they are caught, they plead ignorance, and then ask you to supply biometrics!!
      1) Full disclosure - and this means disclosing the buyers of that information.
      2) Right to correct that information.
      3) Fines for not being licensed where licences are necessary - think EU.
      4) No more 'any lawful purpose' exceptions for Govt. Full auditability, and mandatory exclusion from evidence/trials where that information was unlawfully acquired or where tainted by same. In t
    • by AmiMoJo ( 196126 )

      GDPR covers this very well, but the origins of those rules can be traced all the way back to the early 1980s when it was recognized that computers were a game-changer for data collection and processing.
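      For concreteness: the two GDPR rights most relevant here are access (Art. 15) and erasure (Art. 17). A minimal sketch of a store honoring them - the storage layer is a plain dict, and a real deployment would also need consent records, retention limits, and audit logging:

        # Minimal sketch of GDPR data-subject rights over stored personal data.
        class PersonalDataStore:
            def __init__(self):
                self._records = {}  # subject_id -> dict of personal data

            def collect(self, subject_id, field, value):
                self._records.setdefault(subject_id, {})[field] = value

            def access_request(self, subject_id):
                """Right of access: return everything held on the subject."""
                return dict(self._records.get(subject_id, {}))

            def erasure_request(self, subject_id):
                """Right to erasure: delete all data held on the subject."""
                return self._records.pop(subject_id, None) is not None

        store = PersonalDataStore()
        store.collect("u42", "email", "user@example.org")
        print(store.access_request("u42"))   # {'email': 'user@example.org'}
        print(store.erasure_request("u42"))  # True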

    • The current Russian invasion of Ukraine is an unfortunate consequence of the rigidity of the cultures involved: the sustained dominance of the USA in finance and military power since WWII, and the growing power of China, along with the remaining power of Russia, to resist that domination. Fossil fuel in the USA is still heavily subsidized and is the main industry in Russia, and the competition for the market is one of the main driving forces that encourages violence in the matter. Freedom and democracy is
  • by Anonymous Coward

    The biggest problem is when it's dead accurate in defiance of political whim. All of the illegal, legislatively protected factors do affect productivity, predictable long-term employment, and likelihood of criminal activity. Age, sex, religion, weight, medical disabilities, marital status, and skin color all correlate strongly with productivity in the workplace, with loan repayment, and with likelihood of violent crime and drug addiction. But it's illegal to ask about them. AI is providing a backdoor mea
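    One standard test of this "backdoor": check whether the permitted features can predict the protected attribute well above chance - if they can, any model trained on them can use it implicitly. A sketch on synthetic data; in practice X would be the real application features and the protected attribute would be withheld from the model:

      # Leakage test: can the features alone recover the protected attribute?
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 2000
      protected = rng.integers(0, 2, n)          # the attribute you may not ask about
      proxy = protected + rng.normal(0, 0.5, n)  # e.g. a correlated zip-code feature
      X = np.column_stack([proxy, rng.normal(size=(n, 3))])

      leakage = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
      print(f"protected attribute recoverable with {leakage:.0%} accuracy")
      # Anything far above 50% means "not asking" did not remove the signal.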

  • by BobC ( 101861 ) on Thursday February 24, 2022 @11:08PM (#62301535)

    I once worked at a maker of video surveillance equipment. While we had a "generic" budget line we sold primarily to businesses and commercial security companies, our best camera and analysis systems were sold exclusively to governments. We provided training on those systems only to those within those customer organizations, never to independents or third parties. We instructed them not only in operating the equipment, but also in basic information security practices and proper digital evidence preparation, chain of custody, and preservation.

    Then the day came when one of our senior engineers, who sat at the desk next to me, was called as an expert witness in a trial where a government TLA (Three Letter Agency) had flagrantly misused our most advanced equipment. Our lawyer got the transcript and generated a report to all employees.

    Within the week, senior management decided to sever business with that customer. Within 3 months, they decided to pivot the company away from video surveillance equipment. Essentially, we realized that "trusting our customers" was not a sustainable business model. We sold our inventory and support services to a competitor, but did not sell the equipment designs themselves.

    This was despite the truly wonderful uses made of some of our high-end products. For example, in Afghanistan and Iraq our systems encircled forward firebases, greatly reducing both the number of watchstanders needed and the number of successful attacks.

    We knew the tech, and we knew its limitations. Those high-end products served a VERY profitable market, but we refused to sell our souls to get that money. Principles mattered more.

    The company pivoted from making surveillance cameras and video analysis systems to making secure digital communication systems with data aggregation/disaggregation support. Our first product gave small(-ish) surveillance drones the same communication bandwidth as Predator drones. Why that particular market? First, we estimated our system would deliver 10x the bandwidth of existing systems for under 2x the cost. Second, small drones don't carry weapons.

    In the process of executing the pivot, the company headcount shrank by 75%. We had a talented crew, and fortunately everyone who was let go was snapped up immediately.

    New product development took over a year from the start until we hosted several successful field tests and system demonstrations, and soon had our first customer orders in-hand. Just as we started to staff up for product manufacturing, the company folded due to financial pressures: Our pivot had been funded primarily by loans, and the 2008 credit crunch made the well run dry. Not even our healthy order backlog could get us loans no matter the rates: There simply was no money in the market for us.

    We died soon after Borders did, along with many other good companies.

  • People seem to be getting the wrong idea. This isn't about AI, its possible bias, or its use in government surveillance. This is just some really late-to-the-party TDS bullshit about Cambridge Analytica.

    ACM is fine with weapon development and AI being used in widespread spying by government.

  • How do we apply the standard of 'have not committed any action that violates the ACM Code of Ethics and ACM's Core Values' to such people? It is time for us to have difficult and nuanced conversations on responsible computing, ethics, corporate behavior, and professional responsibility.

    Impressive balls. I am guessing a lot of members complained.

  • This is the first time I've heard the acronym ACM, and I had to dig around after following the link to find out what it stands for. Is it strange that I've never heard of them, or is the group not that well known?
  • Then I'll listen to their moral positions on others. But it seems to me that ACM as an organization, and particularly their paid staff, are addicted to the revenue that expensive publication sales bring in.

    ACM can and should move to "make information free". There are legitimate costs in technical publications, so ACM needs to look at alternatives to meeting those costs.

    I also note ACM, at least when I was a member, had a long-standing objection to professional liability for software developers.

  • ...this is too convoluted & complicated for the electorate to understand & who can trust these "experts" anyway? Just keep using Facebook & Google & the telcos (They're the biggest data-miners of all!) & read their press releases & PR & marketing departments breathlessly espouse consumer-utopias with AI doing all our thinking for us, all with no ethical concerns whatsoever & having grand public ceremonies where they give themselves awards for being swell people who absolutely
  • Thanks to globalization, private wealth has the power to suck the capital out of an entire economy if a country passes legislation it doesn't like - it's effectively a veto over democracy. Absolutely nothing will be done without addressing this issue first. That this is never discussed is part of a deeper problem - we're actively discouraged from thinking and reasoning about the economy in any meaningful way.

    I humbly offer you: needs-based systems analysis, which you can perform using only first principles,
  • Years ago -- before I retired as a software test engineer -- I belonged to the ACM. I had a brief paper published in the Communications of the ACM (CACM) and several reviews published in Computing Reviews.

    One day, I happened to browse through the "help wanted" ads in the CACM. I saw an ad containing an explicit statement of discrimination. Thereafter, I browsed the "help wanted" ads in subsequent issues and found several instances of explicit discrimination in almost every issue. For example, some ads i
