ACM, Ethics, and Corporate Behavior
theodp writes: In the just-published March 2022 issue of Communications of the ACM, former CACM Editor-in-Chief Moshe Y. Vardi takes tech companies -- and their officers and technical leaders -- to task over the societal risk posed by surveillance capitalism in "ACM, Ethics, and Corporate Behavior." Vardi writes: "Surveillance capitalism is perfectly legal, and enormously profitable, but it is unethical, many people believe, including me. After all, the ACM Code of Professional Ethics starts with 'Computing professionals' actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good.' It would be extremely difficult to argue that surveillance capitalism supports the public good."
"The biggest problem that computing faces today is not that AI technology is unethical -- though machine bias is a serious issue -- but that AI technology is used by large and powerful corporations to support a business model that is, arguably, unethical. Yet, with the exception of FAccT, I have seen practically no serious discussion in the ACM community of its relationship with surveillance-capitalism corporations. For example, the ACM Turing Award, ACM's highest award, is now accompanied by a prize of $1 million, supported by Google."
"Furthermore, the issue is not just ACM's relationship with tech companies. We must also consider how we view officers and technical leaders in these companies. Seriously holding members of our community accountable for the decisions of the institutions they lead raises important questions. How do we apply the standard of 'have not committed any action that violates the ACM Code of Ethics and ACM's Core Values' to such people? It is time for us to have difficult and nuanced conversations on responsible computing, ethics, corporate behavior, and professional responsibility."
"The biggest problem that computing faces today is not that AI technology is unethical -- though machine bias is a serious issue -- but that AI technology is used by large and powerful corporations to support a business model that is, arguably, unethical. Yet, with the exception of FAccT, I have seen practically no serious discussion in the ACM community of its relationship with surveillance-capitalism corporations. For example, the ACM Turing Award, ACM's highest award, is now accompanied by a prize of $1 million, supported by Google."
"Furthermore, the issue is not just ACM's relationship with tech companies. We must also consider how we view officers and technical leaders in these companies. Seriously holding members of our community accountable for the decisions of the institutions they lead raises important questions. How do we apply the standard of 'have not committed any action that violates the ACM Code of Ethics and ACM's Core Values' to such people? It is time for us to have difficult and nuanced conversations on responsible computing, ethics, corporate behavior, and professional responsibility."
Tech, Capitalism, Ethics? (Score:3)
One of these does not belong.
"perfectly legal" (Score:2)
"Perfectly legal"?
Laws can change.
In Europe they already are.
Just publish this data (Score:2)
And you were doing so well... (Score:1)
Unless you're talking about class A amplifiers, I call bullshit. If an output correlates with an input in the real world, it's not "bias" for a machine to reflect the same, no matter how much it offends the SJW in you.
Re: (Score:3)
Indeed. The problem is not "machine bias" but that the machines are not biased, even when the result is politically incorrect.
Machine learning was used to predict the probability of defendants jumping bail and not showing up for trial. The system was twice as likely to predict that a black man would not appear for trial as a white man. But that is not biased because black men in the training data really were twice as likely to be no-shows.
That is not "bias" but rather an unbiased and accurate prediction.
Re: (Score:2)
Machine learning was used to predict the probability of defendants jumping bail and not showing up for trial. The system was twice as likely to predict that a black man would not appear for trial as a white man. But that is not biased because black men in the training data really were twice as likely to be no-shows. That is not "bias" but rather an unbiased and accurate prediction.
Your own bias is leading you to massively distort the problem with the bias found in these attempts at using AI in justice reform. The system wasn't twice as likely to predict a black man was high risk, it was twice as likely to inaccurately identify a black man as high risk. They looked at the false positives, and found the system was much more likely to flag a black man incorrectly. This could lead to the algorithm flagging black men at 3-4 times the rate of white men, when it should have only been flagging...
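To put numbers on that distinction (a toy sketch with made-up data, not the actual audit figures): a risk score can be perfectly calibrated and still hit one group's innocent members with far more false positives, simply because the groups' base rates differ. Everything below -- the group names, base rates, and threshold -- is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_group(n, a, b):
    p = rng.beta(a, b, n)      # each person's true no-show probability
    y = rng.random(n) < p      # actual outcome (True = no-show)
    return y, p                # the "score" is p itself: perfectly calibrated

def false_positive_rate(y, flagged):
    neg = ~y                   # people who actually showed up
    return (flagged & neg).sum() / neg.sum()

threshold = 0.5
for name, a, b in [("group A", 2, 8), ("group B", 4, 6)]:  # base rates ~20% / ~40%
    y, score = simulate_group(100_000, a, b)
    flagged = score > threshold
    print(f"{name}: flagged {flagged.mean():.0%}, FPR {false_positive_rate(y, flagged):.0%}")

# Same score, same threshold, but the higher-base-rate group's
# "showed up anyway" members get flagged far more often.
```

That's why "the predictions match the base rates" and "the false-positive rates are unequal" can both be true at once; the argument is really about which property should count as "unbiased".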
Re: (Score:3)
It's like image searches that can't distinguish between black people and gorillas. It's not that machine learning can't distinguish between those, it's that they didn't have enough black people (or gorillas) in their data set.
The bias is the bias of the people who created the data set. It doesn't match reality.
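A minimal sketch of that failure mode, with hypothetical labels and counts: the first diagnostic is just counting classes, and a common first fix is inverse-frequency loss weights so errors on the rare class actually cost something.

```python
from collections import Counter

# Hypothetical, badly skewed training labels.
labels = ["cat"] * 5000 + ["dog"] * 4800 + ["gorilla"] * 12

counts = Counter(labels)
total = sum(counts.values())

# Balanced (inverse-frequency) class weights: rare classes get
# proportionally larger weight in the training loss.
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}

print(counts)   # Counter({'cat': 5000, 'dog': 4800, 'gorilla': 12})
print(weights)  # 'gorilla' weighs roughly 400x 'cat'
```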
The reality is... (Score:2)
... tech professionals are "moderate" conservative capitalist types (in the capitalist, not the socialist, sense), safely smug in their middle-class incomes.
The average member of the public has no idea what a computer is or how a computer works, which is how MMOs got off the ground in the mid-'90s, when the first really successful back-ended game, Ultima Online, succeeded in '97. That changed the entire course of PC gaming and computer history, as the war on software ownership went into overdrive once Microsoft, Valve,
Incentives (Score:4, Insightful)
The AI algorithms are poorly-understood, and they are given a goal: Maximize revenue. Nobody really foresaw that they'd do really shitty things to maximize revenue.
The only ways around the problem are either to ban surveillance capitalism (unlikely) or to put regulations in place that change the goals of the algorithms so they don't do shitty things. That is, the goal must contain other targets than "maximize revenue".
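As a rough sketch of what folding "other targets" into the objective could look like -- every name, proxy, and weight here is hypothetical, and in practice defining the proxies is the hard part:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    revenue: float       # expected ad revenue from showing this item
    outrage: float       # proxy for rage-bait engagement, 0..1
    misinfo_risk: float  # proxy for misinformation risk, 0..1

def score(o, lam_outrage=5.0, lam_misinfo=8.0):
    # Pure "maximize revenue" is lam_* = 0. Regulation sets the
    # penalty weights, changing which candidate looks "best".
    return o.revenue - lam_outrage * o.outrage - lam_misinfo * o.misinfo_risk

candidates = [
    Outcome(revenue=3.0, outrage=0.9, misinfo_risk=0.7),  # lucrative rage-bait
    Outcome(revenue=1.2, outrage=0.1, misinfo_risk=0.0),  # boring but benign
]

print(max(candidates, key=score))  # with nonzero penalties, the benign item wins
```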
We need a regulatory framework that can meaningfully control the algorithms, and that has flexibility to update the regulations in the face of unexpected or emergent behavior from the algorithms.
This too, is unlikely.
So I think surveillance capitalism and social media are going to destroy our societies. Democracy is already in deep trouble, and social cohesion is under immense strain. Of all the ways machines could have ended up doing humanity in, I don't think anyone predicted this.
In the sense of wishing? (Score:4, Interesting)
I'm not quite sure what you mean by this:
> A democratic government is ideally goal centered on maximizing benefits of all members of society ...
> Theoretically, a democratic government balances the goals of its component organizations for general welfare of all the members of its nation
Ideally, sure. Ideally, any government of any type would try to maximize benefits for all members of society. None do, of course. They mostly maximize benefits for themselves, for the politicians. That's because mammals have a strong instinct to take care of themselves. The personality types pompous, arrogant, and power-hungry enough to run for office of course tend to be even more self-centered.
I'm not sure what you mean by "theoretically a democratic government ...". Is there some theory that suggests that would somehow happen?
Democracy, of course, simply means that policy decisions - decisions about macroeconomics, foreign policy, etc are nominally made by the average Joe. That is, policy decisions about macroeconomics, foreign policy, etc are made by the majority of people - who know nothing about economics, foreign policy, etc.
*Pure* democracy is 51% of people making rules about how you need to segment your vlans or price your bananas. Without knowing what a vlan is or where bananas come from. PURE democracy is simply Idiocracy - control by the uninformed. There's no reason to think that would ever result in decisions that are good for anyone.
So we have *representative democracy*. Which works on a small scale. In representative democracy, aka a republic, the majority was supposed to choose someone who does know something about economics or law or foreign policy. Then those who were chosen based on their qualifications and being trustworthy got together and made decisions. That did work for a while, and can still work in a town. At national levels, however, people don't pick representatives based on qualifications. They pick based on marketing, on advertising. If they were choosing based on qualifications, we wouldn't have a barista making the laws, would we. Representative democracy at the national scale is rule by marketing - whoever gets the most likes rules. And fuck the 49% of people who didn't pick that politiball team.
At the local level it's a bit different. I've been on the board of an organization with a few hundred members. They chose me to represent them because they know me, not based on marketing or advertising. In my city, the local council member lives two blocks from me. I had a 10-15 minute conversation with each candidate, one on one. I can drop by his shop and talk to him whenever needed. So I can make a decision based on something more than the character he plays in a 30-second TV commercial.
Re: (Score:2)
> intelligent democracy demands that the population be well and thoroughly educated and informed, since one cannot make reasonable decisions without proper information. Current information systems are extensive but almost totally corrupt and, if nothing else, that is an immense barrier to true democracy.
Agreed. There's also the fact that most people have no *interest* in studying economics, or geography, or pharmacology, or most of the other relevant topics. They don't WANT to spend their time being well informed...
Re: (Score:2)
Thanks for taking the time to speak with me and share your thoughts.
Re: (Score:2)
GDPR covers this very well, but the origins of those rules can be traced all the way back to the early 1980s when it was recognized that computers were a game-changer for data collection and processing.
The biggest problem... (Score:1)
The biggest problem is when it's dead accurate in defiance of political whim. All of the illegal, legislatively protected factors do affect productivity, predictable long-term employment, and likelihood of criminal activity. Age, sex, religion, weight, medical disabilities, marital status, and skin color all correlate strongly with productivity in the workplace, with loan repayment, and with likelihood of violent crime and drug addiction. But it's illegal to ask about them. AI is providing a backdoor means...
Experience in the Surveillance Equipment Business (Score:5, Interesting)
I once worked at a maker of video surveillance equipment. While we had a "generic" budget line that we sold primarily to businesses and commercial security companies, our best camera and analysis systems were sold exclusively to governments. We provided training on those systems only to people within the customer organizations, never to independents or third parties. We instructed them not only in operating the equipment, but also in basic information security practices and proper digital evidence preparation, chain of custody, and preservation.
Then the day came when one of our senior engineers, who sat at the desk next to me, was called as an expert witness in a trial where a government TLA (Three Letter Agency) had flagrantly misused our most advanced equipment. Our lawyer got the transcript and generated a report to all employees.
Within the week, senior management decided to sever business with that customer. Within 3 months they decided to pivot the company away from video surveillance equipment. Essentially, we realized that "trusting our customers" was not a sustainable business model. We sold our inventory and support services to a competitor, but did not sell the equipment designs themselves.
This was despite the truly wonderful uses made of some of our high-end products. For example, in Afghanistan and Iraq our systems encircled forward firebases, greatly reducing both the number of watchstanders needed and the number of successful attacks.
We knew the tech, and we knew its limitations. Those high-end products served a VERY profitable market, but we refused to sell our souls to get that money. Principles mattered more.
The company pivoted from making surveillance cameras and video analysis systems to making secure digital communication systems with data aggregation/disaggregation support. Our first product gave small(-ish) surveillance drones the same communication bandwidth as Predator drones. Why that particular market? First, we estimated our system would deliver 10x the bandwidth of existing systems for under 2x the cost. Second, small drones don't carry weapons.
In the process of executing the pivot, the company headcount shrank by 75%. We had a talented crew, and fortunately everyone who was let go was snapped up immediately.
New product development took over a year from the start until we hosted several successful field tests and system demonstrations, and soon had our first customer orders in-hand. Just as we started to staff up for product manufacturing, the company folded due to financial pressures: Our pivot had been funded primarily by loans, and the 2008 credit crunch made the well run dry. Not even our healthy order backlog could get us loans no matter the rates: There simply was no money in the market for us.
We died soon after Borders did, along with many other good companies.
Oh noes, targeted advertising (Score:1)
People seem to be getting the wrong idea. This isn't about AI, its possible bias, or its use in government surveillance. This is just some really late-to-the-party TDS bullshit about Cambridge Analytica.
ACM is fine with weapon development and AI being used in widespread spying by government.
Ballsy at the end (Score:2)
Impressive balls. I am guessing a lot of members complained.
Who? (Score:2)
Re:Who? (Score:4)
ACM is very well-known in computer science circles. They're almost as famous and influential as the IEEE.
When ACM gets out of the for-profit pubs business (Score:2)
Then I'll listen to their moral positions on others. But it seems to me that ACM as an organization, and particularly their paid staff, are addicted to the revenue that expensive publication sales bring in.
ACM can and should move to "make information free". There are legitimate costs in technical publications, so ACM needs to look at alternatives to meeting those costs.
I also note ACM, at least when I was a member, had a long-standing objection to professional liability for software developers.
Move along, nothing to see here... (Score:2)
Re: (Score:1)
It's mentioned ten times but nowhere is the acronym described.
Is this one of these "Tell me you are not a computer geek without telling me you are not a computer geek"-moments? :-)
ACM = Association for Computing Machinery, an organization that has been around since 1947, hence the arcane name. They publish journals and books, have lots of SIGs (Special Interest Groups), hold conferences, hand out awards, and offer educational resources. If you have read any computer-related technical articles in the last few decades or so, you are more or less bound to have run into them...
This only scratches the surface of the problem. (Score:2)
I humbly offer you: needs-based systems analysis, which you can perform using only first principles...
Pot Calling the Kettle Black? (Score:2)
Years ago -- before I retired as a software test engineer -- I belonged to the ACM. I had a brief paper published in the Communications of the ACM (CACM) and several reviews published in Computing Reviews.
One day, I happened to browse through the "help wanted" ads in the CACM. I saw an ad containing an explicit statement of discrimination. Thereafter, I browsed the "help wanted" ads in subsequent issues and found several instances of explicit discrimination in almost every issue. For example, some ads...