
Submission + - Tesla shareholder group opposes Musk's $46B pay, slams board "dysfunction" (arstechnica.com)

theweatherelectric writes: Jon Brodkin for Ars Technica writes, "A Tesla shareholder group yesterday urged other shareholders to vote against Elon Musk's $46 billion pay package, saying the Tesla board is dysfunctional and 'overly beholden to CEO Musk.' The group's letter also urged shareholders to vote against the reelection of board members Kimbal Musk and James Murdoch. 'Tesla is suffering from a material governance failure which requires our urgent attention and action,' and its board 'is stacked with directors that have close personal ties to CEO Elon Musk,' the letter said. 'There are multiple indications that these ties, coupled with excessive director compensation, prevent the level of critical and independent thinking required for effective governance.'"

Submission + - $25 million stolen using deepfake scam (cnn.com)

quonset writes: Arup, the British multinational company behind the design of the Sydney Opera House, has admitted it was the victim of a $25 million scam involving deepfakes.

Hong Kong police said in February that during the elaborate scam the employee, a finance worker, was duped into attending a video call with people he believed were the chief financial officer and other members of staff, but all of whom turned out to be deepfake re-creations. The authorities did not name the company or parties involved at the time.

According to police, the worker had initially suspected he had received a phishing email from the company’s UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.

He subsequently agreed to send a total of 200 million Hong Kong dollars — about $25.6 million. The amount was sent across 15 transactions, Hong Kong public broadcaster RTHK reported, citing police.

Submission + - In a world of Deep Fakes, USAA is asking customers to authenticate with voice (usaa.com)

st33ld13hl writes: USAA, a banking and insurance company, has begun notifying its customers that they can "enhance your account security" by enabling USAA Voice ID.
From letters sent to customers:

Imagine a world with fewer passwords and verification codes, where your voice becomes the key to effortless access.

Embrace the peace of mind that comes with knowing your account is protected with state-of-the-art security measures. Voice ID uses advanced algorithms to analyze your unique vocal patterns, making it secure and convenient. Feel confident in the knowledge that your voiceprint and your account is in safe hands — yours.

Your bank account balance can now be in the hands of anyone who has 15 seconds of your voice.
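For readers curious what a voiceprint check involves under the hood, the matching step in a typical speaker-verification system boils down to comparing fixed-length embedding vectors extracted from audio. The sketch below is illustrative only: the vectors, the 0.8 threshold, and the `verify` helper are invented for the example, not anything USAA has disclosed. It also shows why a sufficiently good deepfake defeats the scheme, since any audio that lands close enough in embedding space passes.

```python
import numpy as np

# Illustrative sketch of the matching step in a speaker-verification
# system: a stored "voiceprint" (an embedding vector) is compared to an
# embedding extracted from a new utterance. Real systems derive these
# vectors from a neural network; here they are just example arrays.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled: np.ndarray, candidate: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Accept the caller if the embeddings are similar enough.

    The threshold sets the security/convenience trade-off; any audio
    (including a deepfake) whose embedding lands within it will pass.
    """
    return cosine_similarity(enrolled, candidate) >= threshold

enrolled = np.array([0.9, 0.1, 0.4])          # stored voiceprint
same_speaker = np.array([0.85, 0.15, 0.38])   # close in embedding space
impostor = np.array([0.1, 0.9, 0.2])          # far away

print(verify(enrolled, same_speaker))  # True
print(verify(enrolled, impostor))      # False
```

The weak point is exactly what the submitter notes: the system cannot distinguish a live speaker from synthetic audio that reproduces the same vocal patterns.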

Submission + - Robot dogs armed with AI-aimed rifles undergo US Marines Special Ops eval (arstechnica.com)

SonicSpike writes: The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic "dogs" developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics' quadrupedal unmanned ground vehicles (called "Q-UGVs" for short) for various applications, including reconnaissance and surveillance, it's the possibility of arming them with weapons for remote engagement that may draw the most attention. But it's not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC currently has two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff. Their gun systems are based on Onyx's SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system maintains human-in-the-loop control for fire decisions; it cannot decide to fire autonomously.

On LinkedIn, Onyx Industries shared a video of a similar system in action.

Submission + - Openwashing (nytimes.com)

An anonymous reader writes: There’s a big debate in the tech world over whether artificial intelligence models should be “open source.” Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they’re more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There’s no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of “openwashing” — using the “open source” term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.)

In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, “As the rules get written, one challenge is building sufficient guardrails against corporations’ attempts at ‘openwashing.’” Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that “this ‘openwashing’ trend threatens to undermine the very premise of openness — the free sharing of knowledge to enable inspection, replication and collective advancement.” Organizations that apply the label to their models may be taking very different approaches to openness. [...]

The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That’s why some experts say labeling any A.I. as “open source” is at best misleading and at worst a marketing tool. “Even maximally open A.I. systems do not allow open access to the resources necessary to ‘democratize’ access to A.I., or enable full scrutiny,” said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the “open source” label by A.I. companies.

Submission + - Palantir's First-Ever AI Warfare Conference (theguardian.com)

An anonymous reader writes: On May 7th and 8th in Washington, D.C., the city’s biggest convention hall welcomed America’s military-industrial complex, its top technology companies and its most outspoken justifiers of war crimes. Of course, that’s not how they would describe it. It was the inaugural “AI Expo for National Competitiveness,” hosted by the Special Competitive Studies Project – better known as the “techno-economic” thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference’s lead sponsor was Palantir, a software company co-founded by Peter Thiel that’s best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump’s family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces.

The conference hall was also filled with booths representing the US military and dozens of its contractors, ranging from Booz Allen Hamilton to a random company that was described to me as Uber for airplane software. At industry conferences like these, powerful people tend to be more unfiltered – they assume they’re in a safe space, among friends and peers. I was curious, what would they say about the AI-powered violence in Gaza, or what they think is the future of war?

Attendees were told the conference highlight would be a series of panels in a large room toward the back of the hall. In reality, that room hosted just one of note. Featuring Schmidt and the Palantir CEO, Alex Karp, the fire-breathing panel would set the tone for the rest of the conference. More specifically, it divided attendees into two groups: those who see war as a matter of money and strategy, and those who see it as a matter of death. The vast majority of people there fell into group one. I’ve written about relationships between tech companies and the military before, so I shouldn’t have been surprised by anything I saw or heard at this conference. But when it ended, and I departed DC for home, it felt like my life force had been completely sucked out of my body.

Submission + - Robert Dennard, Inventor of DRAM, Dies at 91

necro81 writes: Robert Dennard was working at IBM in the 1960s when he invented a way to store one bit using a single transistor and capacitor. The technology became dynamic random access memory (DRAM), which, when implemented using the emerging technology of silicon integrated circuits, helped catapult computing forward by leaps and bounds. The first commercial DRAM chips, around 1970, held just 1,024 bits; today's DDR5 modules hold hundreds of billions.

Dr. Robert H. Dennard passed away last month at age 91. (alternate link)

In the 1970s he helped guide technology roadmaps for the ever-shrinking feature size of lithography, enabling the early years of Moore's Law. He wrote a seminal paper in 1974 relating feature size and power consumption that is now referred to as Dennard scaling. His technological contributions earned him numerous awards and accolades from the National Academy of Engineering, IEEE, and the National Inventors Hall of Fame.

Submission + - Another Billionaire Pushes A Bid For TikTok, But To Decentralize It (techdirt.com)

An anonymous reader writes: If you’re a fan of chaos, well, the TikTok ban situation is providing plenty of chaos to follow. Ever since the US government made it clear it was seriously going to move forward with the obviously unconstitutional and counterproductive plan to force ByteDance to divest from TikTok or have the app effectively banned from the U.S., various rich people have been stepping up with promises to buy the app. There was former Trump Treasury Secretary Steven Mnuchin with plans to buy it. Then there was “mean TV investor, who wants you to forget his sketchy history” Kevin O’Leary with his own TikTok buyout plans. I’m sure there have been other rich dudes as well, though strikingly few stories of actual companies interested in purchasing TikTok.

But now there’s another billionaire to add to the pile: real estate/property mogul Frank McCourt (who has had some scandals in his own history) has had an interesting second act over the last few years as a big believer in decentralized social media. He created and funded Project Liberty, which has become deeply involved in a number of efforts to create infrastructure for decentralized social media, including its own Decentralized Social Networking Protocol (DSNP).

Over the past few years, I’ve had a few conversations with people involved in Project Liberty and related projects. Their hearts are in the right place in wanting to rethink the internet in a manner that empowers users over big companies, even if I don’t always agree with their approach (he also frequently seems to surround himself with all sorts of tech haters, who have somewhat unrealistic visions of the world). Either way, McCourt and Project Liberty have now announced a plan to bid on TikTok. They plan to merge it into his decentralization plans.

Submission + - AI's 'Her' Era Has Arrived

theodp writes: In AI's 'Her' Era Has Arrived (alt. source), the NY Times' Kevin Roose begins, "A lifelike artificial intelligence with a smooth, alluring voice enchants and impresses its human users — flirting, telling jokes, fulfilling their desires and eventually winning them over. I’m summarizing the plot of the 2013 movie 'Her,' in which a lonely introvert named Theodore, played by Joaquin Phoenix, is seduced by a virtual assistant named Samantha, voiced by Scarlett Johansson. But I might as well be describing the scene on Monday when OpenAI, the creator of ChatGPT, showed off an updated version of its A.I. voice assistant at an event in San Francisco."

"The company’s new model, called GPT-4o (the o stands for 'omni'), will let ChatGPT talk to users in a much more lifelike way — detecting emotions in their voices, analyzing their facial expressions and changing its own tone and cadence depending on what a user wants. If you ask for a bedtime story, it can lower its voice to a whisper. If you need advice from a sassy friend, it can speak in a playful, sarcastic tone. It can even sing on command. The new voice feature, which ChatGPT users will be able to start using for free in the coming weeks, immediately drew comparisons to Samantha from 'Her.' (Sam Altman, OpenAI’s chief executive, who has praised the movie, posted its title on X after Monday’s announcement, making the connection all but official.)"

Submission + - Feds Probe Waymo Driverless Cars Hitting Parked Cars, Drifting Into Traffic (arstechnica.com)

An anonymous reader writes: Crashing into parked cars, drifting over into oncoming traffic, intruding into construction zones—all this "unexpected behavior" from Waymo's self-driving vehicles may be violating traffic laws, the US National Highway Traffic Safety Administration (NHTSA) said (PDF) Monday. To better understand Waymo's potential safety risks, NHTSA's Office of Defects Investigation (ODI) is now looking into 22 incident reports involving cars equipped with Waymo’s fifth-generation automated driving system. Seventeen incidents involved collisions, but none involved injuries.

Some of the reports came directly from Waymo, while others "were identified based on publicly available reports," NHTSA said. The reports document single-party crashes into "stationary and semi-stationary objects such as gates and chains" as well as instances in which Waymo cars "appeared to disobey traffic safety control devices." The ODI plans to compare notes between incidents to decide if Waymo cars pose a safety risk or require updates to prevent malfunctioning. There is already evidence from the ODI's initial evaluation showing that Waymo's automated driving systems (ADS) were either "engaged throughout the incident" or abruptly "disengaged in the moments just before an incident occurred," NHTSA said. The probe is the first step before NHTSA can issue a potential recall, Reuters reported.

Submission + - Ordered Back to the Office, Top Tech Talent Left Instead, Study Finds (washingtonpost.com)

An anonymous reader writes: Return-to-office mandates at some of the most powerful tech companies — Apple, Microsoft and SpaceX — were followed by a spike in departures among the most senior, tough-to-replace talent, according to a case study published last week by researchers at the University of Chicago and the University of Michigan. Researchers drew on resume data from People Data Labs to understand the impact that forced returns to offices had on employee tenure and the movement of workers between companies. What they found was a strong correlation between the departures of senior-level employees and the implementation of a mandate, suggesting that these policies “had a negative effect on the tenure and seniority of their respective workforce.” High-ranking employees stayed several months less than they might have without the mandate, the research suggests — and in many cases, they went to work for direct competitors.

At Microsoft, the share of senior employees as a portion of the company’s overall workforce declined more than five percentage points after the return-to-office mandate took effect, the researchers found. At Apple, the decline was four percentage points, while at SpaceX — the only company of the three to require workers to be fully in-person — the share of senior employees dropped 15 percentage points. “We find experienced employees impacted by these policies at major tech companies seek work elsewhere, taking some of the most valuable human capital investments and tools of productivity with them,” said Austin Wright, an assistant professor of public policy at the University of Chicago and one of the study’s authors. “Business leaders should weigh carefully employee preferences and market opportunities when deciding when, or if, they mandate a return to office.”

Submission + - Big Tech sees neurotechnology as its next AI frontier (yahoo.com)

ZipNada writes: In one study conducted by Meta's Fundamental AI Research (FAIR) group, researchers flashed an image in front of participants for 1.5 seconds. Users seated in a neuroimaging machine thought of the image they saw, and AI was able to use that brain activity data to recreate the image.

"At the moment, this is not a mind-reading technology," Jean-Rémi King, the lead neuroscientist working on the project, told Yahoo Finance. "What we can try to do is reconstruct the image that they see at the given moment, so we really decode perception."

The results weren’t perfect, but they were close enough that the research team initially thought the test was flawed.

Submission + - Biden Lauds New Microsoft AI Center on Site of Trump's Failed Foxconn Project

theodp writes: The AP reports on President Biden's appearance at a Wisconsin event promoting Microsoft's new $3.3B AI Data Center: "President Joe Biden on Wednesday laced into Donald Trump over a failed project in the previous administration that was supposed to bring thousands of new jobs into southeastern Wisconsin and trumpeted new economic investments under his watch that are coming to the same spot."

"That location in the battleground state will now be the site of a new data center from Microsoft, whose president [Brad Smith, a big Biden fundraiser and personal supporter] credited the Biden administration’s economic policies for paving the way for the new investments. For Biden, it offered another point of contrast between him and Trump, who had promised a $10 billion investment by the Taiwan-based electronics giant Foxconn that never came."

“'In fact, he came here with your senator, Ron Johnson, literally holding a golden shovel, promising to build the eighth wonder of the world. You kidding me?' Biden told the crowd of about 300 people, who clapped and cheered loudly as he spoke. 'Look what happened. They dug a hole with those golden shovels, and then they fell into it.' Noting that 100 homes were destroyed to make way for the project, which wasted hundreds of millions of dollars, Biden added a jab: 'Foxconn turned out to be just that — a con. Go figure.'"

Submission + - Microsoft Said AI Shouldn't be Used to Weaponize Elections. How About AI Events?

theodp writes: “As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Microsoft President Brad Smith in the press release for the AI Election Accords signed in February by Microsoft and 20+ other well-known names like Alphabet, OpenAI, and Meta.

So, it was somewhat surprising to see a Microsoft press release this week touting a joint appearance by U.S. President Joe Biden and Microsoft President Brad Smith at a Wisconsin AI event where Microsoft announced plans to invest $3.3B in an AI-related initiative in Wisconsin, giving a boost to Biden's job-creation efforts in the key election battleground state. "Time and again," Politico reported, "Biden took aim at former President Donald Trump, casting him as someone who talked but didn’t deliver. Even the setting of the speech [transcript] itself was meant to deliver the point: Biden was highlighting a new Microsoft data center that would be built on grounds where then-President Trump announced that Foxconn would build a $10 billion factory for making LCD panels. That plant was never built."

Microsoft's Smith was among the 'bundlers' who helped Biden raise at least $100K to unseat Trump in 2020. Smith also wrote a $100K check of his own to the Biden Inaugural Committee. Interestingly, Smith's promotion of Biden as the 'AI President' comes after he and Microsoft promoted both Barack Obama (in 2016) and Donald Trump (in 2017, after Obama's proposed $4B K-12 CS for All initiative failed to receive funding) as 'CS Presidents.'

Submission + - Apple Will Revamp Siri to Catch Up to Its Chatbot Competitors (nytimes.com)

An anonymous reader writes: Apple’s top software executives decided early last year that Siri, the company’s virtual assistant, needed a brain transplant. The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing OpenAI’s new chatbot, ChatGPT. The product’s use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated, said two people familiar with the company’s work, who didn’t have permission to speak publicly. Introduced in 2011 as the original virtual assistant in every iPhone, Siri had been limited for years to individual requests and had never been able to follow a conversation. It often misunderstood questions. ChatGPT, on the other hand, knew that if someone asked for the weather in San Francisco and then said, “What about New York?” that user wanted another forecast.

The realization that new technology had leapfrogged Siri set in motion the tech giant’s most significant reorganization in more than a decade. Determined to catch up in the tech industry’s A.I. race, Apple has made generative A.I. a tent pole project — the company’s special, internal label that it uses to organize employees around once-in-a-decade initiatives. Apple is expected to show off its A.I. work at its annual developers conference on June 10 when it releases an improved Siri that is more conversational and versatile, according to three people familiar with the company’s work, who didn’t have permission to speak publicly. Siri’s underlying technology will include a new generative A.I. system that will allow it to chat rather than respond to questions one at a time. The update to Siri is at the forefront of a broader effort to embrace generative A.I. across Apple’s business. The company is also increasing the memory in this year’s iPhones to support its new Siri capabilities. And it has discussed licensing complementary A.I. models that power chatbots from several companies, including Google, Cohere and OpenAI.

Submission + - Maryland Passes Two Internet Privacy Bills (theverge.com)

An anonymous reader writes: The Maryland legislature passed two bills over the weekend limiting tech platforms’ ability to collect and use consumers’ data. Maryland Governor Wes Moore is expected to sign one of those bills, the Maryland Kids Code, on Thursday, MoCo360 reports. If signed into law, the other bill, the Maryland Online Privacy Act, will go into effect in October 2025. The legislation would limit platforms’ ability to collect user data and let users opt out of having their data used for targeted advertising and other purposes. Together, the bills would significantly limit social media and other platforms’ ability to track their users — but tech companies, including Amazon, Google, and Meta, have opposed similar legislation. Lawmakers say the goal is to protect children, but tech companies say the bills are a threat to free speech.

Part of the Maryland Kids Code — the Maryland Age-Appropriate Design Code Act — will go into effect much sooner, on October 1st. It bans platforms from using “system design features to increase, sustain, or extend the use of the online product,” including autoplaying media, rewarding users for spending more time on the platform, and spamming users with notifications. Another part of the legislation prohibits certain video game, social media, and other platforms from tracking users who are younger than 18.

Submission + - FCC Explicitly Prohibits Fast Lanes, Closing Possible Net Neutrality Loophole (arstechnica.com)

An anonymous reader writes: The Federal Communications Commission clarified its net neutrality rules to prohibit more kinds of fast lanes. While the FCC voted to restore net neutrality rules on April 25, it didn't release the final text of the order until yesterday. The final text (PDF) has some changes compared to the draft version released a few weeks before the vote.

Both the draft and final rules ban paid prioritization, or fast lanes that application providers have to pay Internet service providers for. But some net neutrality proponents raised concerns about the draft text because it would have let ISPs speed up certain types of applications as long as the application providers don't have to pay for special treatment. The advocates wanted the FCC to clarify its no-throttling rule to explicitly prohibit ISPs from speeding up applications instead of only forbidding the slowing of applications down. Without such a provision, they argued that ISPs could charge consumers more for plans that speed up specific types of content. [...]

"We clarify that a BIAS [Broadband Internet Access Service] provider's decision to speed up 'on the basis of Internet content, applications, or services' would 'impair or degrade' other content, applications, or services which are not given the same treatment," the FCC's final order said. The "impair or degrade" clarification means that speeding up is banned because the no-throttling rule says that ISPs "shall not impair or degrade lawful Internet traffic on the basis of Internet content, application, or service."

Submission + - Google will exit prominent San Francisco waterfront office tower (sfchronicle.com)

An anonymous reader writes: A downtown “trophy” office complex with sweeping views of San Francisco Bay is poised to lose a major tenant — its second in the span of a year.

Tech behemoth Google will be exiting the 300,000-square-foot office that it has occupied at One Market Plaza since 2018, once the company’s lease for the space expires next April, a spokesperson for Google confirmed Tuesday. The move follows Visa’s exit from the three-building complex, as the company relocated its employees to Mission Rock.

The company is consolidating its San Francisco offices into much less glamorous digs at 345 Spear St. in the SoMa neighborhood.

Submission + - Tech giants push for green card changes amid layoffs, raising job concerns (techtarget.com)

dcblogs writes: As the tech industry reels from a wave of layoffs, with over 80,000 jobs slashed in 2024 alone, giants like Microsoft and Google are lobbying for a controversial change to the green card sponsorship process. The proposed revision would allow companies to bypass labor market tests designed to protect U.S. workers, potentially sparking outrage among those already affected by job cuts.

Under the current Program Electronic Review Management (PERM) system, employers must advertise job openings and demonstrate a lack of qualified American candidates before sponsoring foreign workers for employment-based green cards. However, Microsoft and Google are pressuring the Biden administration to add STEM occupations, such as software engineering and AI development, to the Department of Labor's Schedule A Shortage Occupation List. This would exempt those high-demand roles, along with any job related to security and AI, from PERM requirements, fast-tracking the green card process by up to 20 months.

Critics argue that the tech industry has a history of circumventing PERM safeguards, prioritizing the retention of H-1B visa holders over U.S. workers. The recent layoffs have only heightened these concerns, with Amazon suspending its green card sponsorship program due to legal complexities and the obligation to prioritize laid-off American employees. Permanent residency is less controversial than the H-1B visa program, but critics say the PERM process was added for a reason and that was to prioritize U.S. workers, including existing permanent residents.

The public has until May 13 to weigh in on the proposal, which has the potential to reshape the tech industry's hiring practices.

Submission + - Minor car crashes mean high tech repairs (cnn.com)

smooth wombat writes: With all the improvements in car safety over the decades, the recent addition of a plethora of high-tech sensors and warnings comes with increased costs. And not just the cost of having them on your car. Any time you get into an accident, even a minor one, it will most likely require a detailed examination of any sensors which may have been affected, along with their subsequent realignment, replacement, and calibration.

Some vehicles require “dynamic calibration,” which means, once the sensors and cameras are back in place, a driver needs to take the vehicle out on real roads for testing. With proper equipment attached the car can, essentially, recalibrate itself as it watches lane lines and other markers. It requires the car to be driven for a set distance at a certain speed but weather and traffic can create problems.

“If you’re in Chicago or L.A., good luck getting to that speed,” said Ebrahimi, “or if you’re in Seattle or Chicago or New York, with snow, good luck picking up all the road markings.”

More commonly, vehicles need “static calibration,” which can be done using machinery inside a closed workshop with a flat, level floor. Special targets are set up around the vehicle at set distances according to instructions from the vehicle manufacturer.

“The car [views] those targets at those specific distances to recalibrate the world into the car’s computer,” Ebrahimi said.

These kinds of repairs also demand buildings with open space that meet requirements including specific colors and lighting. And it requires special training for employees to perform these sorts of recalibrations, he said.
