IT

'Gaming Chromebooks' With Nvidia GPUs Apparently Killed With Little Fanfare (arstechnica.com) 34

An anonymous reader shares a report: Google and some of its Chromebook partners decided to try making "gaming Chromebooks" a thing late last year. These machines included some gaming laptop features like configurable RGB keyboards and high refresh rate screens, but because they still used integrated GPUs, they were meant mostly for use with streaming services like Nvidia's GeForce Now and Microsoft's Xbox Cloud Gaming. But there were also apparently plans for some gaming Chromebooks with the power to play more games locally. Earlier this year, 9to5Google spotted developer comments pointing to a Chromebook board (codenamed Hades) that would have included a dedicated GeForce RTX 4050 GPU like the one found in some Windows gaming notebooks. This board would have served as a foundation that multiple PC makers could have used to build Chromebooks. But these models apparently won't be seeing the light of day anytime soon. Developer comments spotted by About Chromebooks this week indicate that the Hades board (plus a couple of other Nvidia-equipped boards, Agah and Herobrine) has been canceled, which means that any laptops based on those boards won't be happening.
AI

Microsoft AI Suggests Food Bank As a 'Cannot Miss' Tourist Spot In Canada 50

An anonymous reader quotes a report from Ars Technica: Late last week, MSN.com's Microsoft Travel section posted an AI-generated article about the "cannot miss" attractions of Ottawa that includes the Ottawa Food Bank, a real charitable organization that feeds struggling families. In its recommendation text, Microsoft's AI model wrote, "Consider going into it on an empty stomach." Titled, "Headed to Ottawa? Here's what you shouldn't miss!," (archive here) the article extols the virtues of the Canadian city and recommends attending the Winterlude festival (which only takes place in February), visiting an Ottawa Senators game, and skating in "The World's Largest Naturallyfrozen Ice Rink" (sic).

As the No. 3 destination on the list, Microsoft Travel suggests visiting the Ottawa Food Bank, likely drawn from a summary found online but capped with an unfortunate turn of phrase: "The organization has been collecting, purchasing, producing, and delivering food to needy people and families in the Ottawa area since 1984. We observe how hunger impacts men, women, and children on a daily basis, and how it may be a barrier to achievement. People who come to us have jobs and families to support, as well as expenses to pay. Life is already difficult enough. Consider going into it on an empty stomach."

That last line is an example of the kind of empty platitude (or embarrassing mistaken summary) one can easily find in AI-generated writing, inserted thoughtlessly because the AI model behind the article cannot understand the context of what it is doing. The article is credited to "Microsoft Travel," and it is likely the product of a large language model (LLM), a type of AI model trained on a vast scrape of text found on the Internet.
Advertising

YouTube Ads May Have Led To Online Tracking of Children, Research Says 8

An anonymous reader quotes a report from the New York Times: This year, BMO, a Canadian bank, was looking for Canadian adults to apply for a credit card. So the bank's advertising agency ran a YouTube campaign using an ad-targeting system from Google that employs artificial intelligence to pinpoint ideal customers. But Google, which owns YouTube, also showed the ad to a viewer in the United States on a Barbie-themed children's video on the "Kids Diana Show," a YouTube channel for preschoolers whose videos have been watched more than 94 billion times. When that viewer clicked on the ad, it led to BMO's website, which tagged the user's browser with tracking software from Google, Meta, Microsoft and other companies, according to new research from Adalytics, which analyzes ad campaigns for brands. As a result, leading tech companies could have tracked children across the internet, raising concerns about whether they were undercutting a federal privacy law, the report said. The Children's Online Privacy Protection Act, or COPPA, requires children's online services to obtain parental consent before collecting personal data from users under age 13 for purposes like ad targeting.

Adalytics identified more than 300 brands' ads for adult products, like cars, on nearly 100 YouTube videos designated as "made for kids" that were shown to a user who was not signed in, and that linked to advertisers' websites. It also found several YouTube ads with violent content, including explosions, sniper rifles and car accidents, on children's channels. An analysis by The Times this month found that when a viewer who was not signed into YouTube clicked the ads on some of the children's channels on the site, they were taken to brand websites that placed trackers -- bits of code used for purposes like security, ad tracking or user profiling -- from Amazon, Meta's Facebook, Google, Microsoft and others -- on users' browsers. As with children's television, it is legal, and commonplace, to run ads, including for adult consumer products like cars or credit cards, on children's videos. There is no evidence that Google and YouTube violated their 2019 agreement with the F.T.C.

The report's findings raise new concerns about YouTube's advertising on children's content. In 2019, YouTube and Google agreed to pay a record $170 million fine to settle accusations from the Federal Trade Commission and the State of New York that the company had illegally collected personal information from children watching kids' channels. Regulators said the company had profited from using children's data to target them with ads. YouTube then said it would limit the collection of viewers' data and stop serving personalized ads on children's videos. On Thursday, two United States senators sent a letter to the F.T.C., urging it to investigate whether Google and YouTube had violated COPPA, citing Adalytics and reporting by The New York Times. Senator Edward J. Markey, Democrat of Massachusetts, and Senator Marsha Blackburn, Republican of Tennessee, said they were concerned that the company may have tracked children and served them targeted ads without parental consent, facilitating "the vast collection and distribution" of children's data. "This behavior by YouTube and Google is estimated to have impacted hundreds of thousands, to potentially millions, of children across the United States," the senators wrote.
Google spokesman Michael Aciman called the report's findings "deeply flawed and misleading."

Google has stated that running ads for adults on children's videos is useful because parents watching could become customers. However, it acknowledges that violent ads on children's videos violate its policies and says it has taken steps to prevent such ads from running in the future. Google claims it does not use personalized ads on children's videos, ensuring compliance with COPPA.

Google notes that it does not inform advertisers if a viewer has watched a children's video, only that they clicked on the ad. Google also says it cannot control data collection on a brand's website after a YouTube viewer clicks an ad -- a process that could occur on any website.
Microsoft

Microsoft CEO Says AI Is a Tidal Wave as Big as the Internet (bloomberg.com) 111

An anonymous reader shares a report: In 1995, Microsoft co-founder Bill Gates sent a memo calling the internet a "tidal wave" that would be crucial to every part of the company's business. Nearly three decades later, Microsoft's current leader, Satya Nadella, said he believes the impact of artificial intelligence will be just as profound. "The Bill memo in 1995, it does feel like that to me," Nadella said on this week's episode of The Circuit With Emily Chang. "I think it's as big." Central to the latest attempt to transform Microsoft is OpenAI, a startup whose generative AI technology has created so much buzz that it snagged a $13 billion commitment from the software giant.

"We have a great relationship," OpenAI Chief Executive Officer Sam Altman said on The Circuit. "These big, major partnerships between tech companies usually don't work. This is an example of it working really well. We're super grateful for it." The alliance has plenty of critics. The loudest is Elon Musk, who co-founded OpenAI with Altman and then split from the company, citing disagreements over its direction and the addition of a for-profit arm. He has said OpenAI is now "effectively controlled by Microsoft." In response to a question about Musk's critiques and the prospect that Microsoft could acquire OpenAI, Altman said, "Company is not for sale. I don't know how to be more clear than that."

XBox (Games)

Xbox 360 Digital Store Will Close Next July (eurogamer.net) 14

Microsoft will close its Xbox 360 digital store next July, though anything purchased will still be accessible. From a report: On 29th July 2024, Xbox 360 users will no longer be able to purchase new games, DLC, or other entertainment content from either the console store or the web-based marketplace. In addition, the Microsoft Movies & TV app on the Xbox 360 will no longer function. Of course, the store will continue as normal until that date next July. After that time, any games purchased will still remain playable and deleted purchases can still be re-downloaded. Online multiplayer will also remain accessible for games already purchased (digitally or physically), as long as the publisher supports the servers. Further, users will still be able to play Xbox 360 games on Xbox One and Xbox Series X/S consoles via backward compatibility, and hundreds of games will remain available to purchase on those consoles.
Microsoft

Microsoft Struggles to Gain on Google Despite Its Head Start in AI Search (wsj.com) 27

The new Bing with AI chatbot is "cute, but not a game changer," the data thus far suggests. From a report: When Microsoft unveiled an AI-powered version of Bing in February, the company said it could add $2 billion of revenue if the revamped search engine could pry away even a single point of market share from Google. Six months later, it looks as if even 1 percentage point could be a tough target, with some new data showing Bing's place in search has barely budged -- partly because of how Microsoft handled its high-profile rollout.

In July, Bing had 3% market share worldwide, according to analytics firm StatCounter. That is the same share it had in January, the month before the launch of the new Bing. Another report, from analytics firm Similarweb, shows Bing had around 1% of Google's monthly visitors in July, around the same as it had in January. Microsoft is calling the new Bing a success. It disputed outside data, saying third-party data companies aren't measuring all the people who are going directly to Bing's chat page.

Microsoft

Adobe and Microsoft Break Some Old Files By Removing PostScript Font Support (arstechnica.com) 97

Recent developments, such as Adobe ending support for Type 1 fonts in 2023 and Microsoft discontinuing Type 1 font support in Office apps, may impact users who manage their own fonts, potentially leading to compatibility and layout issues in older files. Ars Technica's Andrew Cunningham writes: If you want to know about the history of desktop publishing, you need to know about Adobe's PostScript fonts. PostScript fonts used vector graphics so that they could look crisp and clear no matter what size they were, and Apple licensed PostScript fonts for the original LaserWriter printer; together with publishing software like Aldus PageMaker, they made it possible to create a file that would look exactly the same on your computer screen as it did when you printed it. The most important PostScript fonts were so-called "Type 1" fonts, which Adobe initially didn't publish a specification for. From the 1980s up until roughly the early 2000s or so, if you were working in desktop publishing professionally, you were probably using Type 1 fonts.

Other companies didn't want Adobe to have a monopoly on vector-based fonts or desktop publishing, of course; Apple created the TrueType format in the early 90s and licensed it to Microsoft, which used it in Windows 3.1 and later versions. Adobe and Microsoft later collaborated on a new font format called OpenType that could replace both TrueType and PostScript Type 1, and by the mid-2000s, it had been released as an open standard and had become the predominant font format used across most operating systems and software. For a while after that, apps that had supported PostScript Type 1 fonts continued to support them, with some exceptions (Microsoft Office for Windows dropped support for Type 1 fonts in 2013). But now we're reaching an inflection point; Adobe ended support for PostScript Type 1 fonts in January 2023, a couple of years after announcing the change. Yesterday, a Microsoft Office for Mac update deprecated Type 1 font support for the continuously updated Microsoft 365 versions of Word, Excel, PowerPoint, OneNote, and Outlook for Mac (plus the standalone versions of those apps in Office 2019 and 2021). The LibreOffice suite, otherwise a good way to open ancient Word documents, stopped supporting Type 1 fonts in the 5.3 release in mid-2022.

If you began using Adobe and Microsoft's productivity apps at some point in the last 10 or 15 years and you've stuck mostly with the default fonts -- either the ones included with the software or the ones from Adobe's extensive font library -- it's not too likely that you've been using a Type 1 font unintentionally. For these kinds of users, this change will be effectively invisible. But if you install and manage your own fonts and you've been using the same ones for a while, it's possible that you created a document in 2022 that you simply won't be able to open in 2023. The change will also cause problems if you open and work with decades-old files with any kind of regularity; files that use Type 1 fonts will begin generating lots of "missing font" messages, and the substitution OpenType fonts that apps might try to use instead can introduce layout issues. You'll also either need to convert any specialized PostScript Type 1 font that you may have paid for in the past or pay for an equivalent OpenType alternative.
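For users who do manage their own font libraries, a quick inventory can show whether this change will bite. Below is a minimal sketch, assuming a Python environment and the common .pfb/.pfa/.pfm Type 1 file extensions; the folder paths are placeholders to adjust for your system:

```python
# Minimal sketch: list files whose extension suggests a PostScript Type 1 font
# (.pfb/.pfa outlines, .pfm metrics). The folder paths below are placeholders;
# point them at wherever you keep your fonts.
from pathlib import Path

TYPE1_EXTENSIONS = {".pfb", ".pfa", ".pfm"}

def find_type1_fonts(font_dir: str) -> list[Path]:
    """Return files under font_dir that look like Type 1 fonts by extension."""
    root = Path(font_dir).expanduser()
    if not root.is_dir():
        return []
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and p.suffix.lower() in TYPE1_EXTENSIONS
    )

if __name__ == "__main__":
    # Adjust for your platform, e.g. C:\Windows\Fonts or /Library/Fonts.
    for candidate in ("~/Library/Fonts", "~/.fonts"):
        for font in find_type1_fonts(candidate):
            print(f"Possible Type 1 font: {font}")
```

Anything such a scan flags is worth converting to OpenType, or replacing, before the applications that still open your old documents drop support entirely.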

Microsoft

Microsoft May Store Your Conversations With Bing If You're Not an Enterprise User (theregister.com) 13

An anonymous reader quotes a report from The Register: Microsoft prohibits users from reverse engineering or harvesting data from its AI software to train or improve other models, and will store inputs passed into its products as well as any output generated. The details emerged as companies face fresh challenges with the rise of generative AI. People want to know what corporations are doing with information provided by users. And users are likewise curious about what they can do with the content generated by AI. Microsoft addresses these issues in a new clause titled 'AI Services' in its terms of service.

The five new policies, which were introduced on July 30 and will come into effect on September 30, state that:

Reverse Engineering. You may not use the AI services to discover any underlying components of the models, algorithms, and systems. For example, you may not try to determine and remove the weights of models.
Extracting Data. Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services.
Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.
Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.
Third party claims. You are solely responsible for responding to any third-party claims regarding Your use of the AI services in compliance with applicable laws (including, but not limited to, copyright infringement or other claims relating to content output during Your use of the AI services).
A spokesperson from Microsoft declined to comment on how long the company plans to store user inputs into its software. "We regularly update our terms of service to better reflect our products and services. Our most recent update to the Microsoft Services Agreement includes the addition of language to reflect artificial intelligence in our services and its appropriate use by customers," the representative told us in a statement.

Microsoft has previously said, however, that it doesn't save conversations or use that data to train its AI models for its Bing Enterprise Chat mode. The policies are a little murkier for its Microsoft 365 Copilot: although it doesn't appear to use customer data or prompts for training, it does store information. "[Copilot] can generate responses anchored in the customer's business content, such as user documents, emails, calendar, chats, meetings, contacts, and other business data. Copilot combines this content with the user's working context, such as the meeting a user is in now, the email exchanges the user has had on a topic, or the chat conversations the user had last week. Copilot uses this combination of content and context to help deliver accurate, relevant, contextual responses," it said.
Security

Major US Energy Organization Targeted In QR Code Phishing Attack 13

A phishing campaign has targeted a notable energy company in the U.S., bypassing email security filters to slip malicious QR codes into inboxes. BleepingComputer reports: Roughly one-third (29%) of the 1,000 emails attributed to this campaign targeted a large US energy company, while the remaining attempts were made against firms in manufacturing (15%), insurance (9%), technology (7%), and financial services (6%). According to Cofense, who spotted this campaign, this is the first time that QR codes have been used at this scale, indicating that more phishing actors may be testing their effectiveness as an attack vector. Cofense did not name the energy company targeted in this campaign but categorized them as a "major" US-based company.

Cofense says the attack begins with a phishing email that claims the recipient must take action to update their Microsoft 365 account settings. The emails carry PNG or PDF attachments featuring a QR code the recipient is prompted to scan to verify their account. The emails also state that the target must complete this step in 2-3 days to add a sense of urgency. The threat actors use QR codes embedded in images to bypass email security tools that scan a message for known malicious links, allowing the phishing messages to reach the target's inbox.

To evade security, the QR codes in this campaign also use redirects in Bing, Salesforce, and Cloudflare's Web3 services to redirect the targets to a Microsoft 365 phishing page. Hiding the redirection URL in the QR code, abusing legitimate services, and using base64 encoding for the phishing link all help evade detection and get through email protection filters.
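As a rough illustration of the defensive side, the sketch below decodes any QR codes in an image attachment and flags payloads that match the patterns described above: a redirect through a legitimate host, or a long base64 blob hiding the real URL. It assumes Python with the third-party Pillow and pyzbar packages, and the heuristics are illustrative guesses, not Cofense's actual detection logic:

```python
# Minimal triage sketch, not a production scanner: decode QR codes found in an
# image attachment and flag URLs resembling the redirects described above.
# Assumes the third-party Pillow and pyzbar packages; the heuristics are
# illustrative guesses, not Cofense's detection logic.
import base64
import re
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

# Example of a legitimate host abused for redirection (per the report); extend as needed.
REDIRECT_HOSTS = {"bing.com", "www.bing.com"}

def hides_base64_url(url: str) -> bool:
    """Heuristic: a long base64-looking segment that decodes to another URL."""
    for token in re.findall(r"[A-Za-z0-9+/=]{40,}", url):
        try:
            if b"http" in base64.b64decode(token, validate=True).lower():
                return True
        except ValueError:
            continue
    return False

def triage_qr_image(path: str) -> None:
    """Print each QR payload in the image along with any red flags."""
    for result in decode(Image.open(path)):
        url = result.data.decode("utf-8", errors="replace")
        flags = []
        if urlparse(url).netloc.lower() in REDIRECT_HOSTS:
            flags.append("redirect via legitimate host")
        if hides_base64_url(url):
            flags.append("embedded base64 URL")
        print(f"{url} -> {', '.join(flags) or 'no flags'}")

# Example usage: triage_qr_image("suspicious_attachment.png")
```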
Windows

Windows Feature That Resets System Clock Based On Random Data Is Wreaking Havoc (arstechnica.com) 119

An anonymous reader quotes a report from Ars Technica: A few months ago, an engineer in a data center in Norway encountered some perplexing errors that caused a Windows server to suddenly reset its system clock to 55 days in the future. The engineer relied on the server to maintain a routing table that tracked cell phone numbers in real time as they were being moved from one carrier to the other. A jump of eight weeks had dire consequences because it caused numbers that had yet to be transferred to be listed as having already been moved and numbers that had already been transferred to be reported as pending. "With these updated routing tables, a lot of people were unable to make calls, as we didn't have a correct state!" the engineer, who asked to be identified only by his first name, Simen, wrote in an email. "We would route incoming and outgoing calls to the wrong operators! This meant, e.g., children could not reach their parents and vice versa."

Simen had experienced a similar error last August when a machine running Windows Server 2019 reset its clock to January 2023 and then changed it back a short time later. Troubleshooting the cause of that mysterious reset was hampered because the engineers didn't discover it until after event logs had been purged. The newer jump of 55 days, on a machine running Windows Server 2016, prompted him to once again search for a cause, and this time, he found it. The culprit was a little-known feature in Windows known as Secure Time Seeding. Microsoft introduced the time-keeping feature in 2016 as a way to ensure that system clocks were accurate. Windows systems with clocks set to the wrong time can cause disastrous errors when they can't properly parse time stamps in digital certificates or they execute jobs too early, too late, or out of the prescribed order. Secure Time Seeding, Microsoft said, was a hedge against failures in the battery-powered on-board devices designed to keep accurate time even when the machine is powered down.

"You may ask -- why doesn't the device ask the nearest time server for the current time over the network?" Microsoft engineers wrote. "Since the device is not in a state to communicate securely over the network, it cannot obtain time securely over the network as well, unless you choose to ignore network security or at least punch some holes into it by making exceptions." To avoid making security exceptions, Secure Time Seeding sets the time based on data inside an SSL handshake the machine makes with remote servers. These handshakes occur whenever two devices connect using the Secure Sockets Layer protocol, the mechanism that provides encrypted HTTPS sessions (it is also known as Transport Layer Security). Because Secure Time Seeding (abbreviated as STS for the rest of this article) used SSL certificates Windows already stored locally, it could ensure that the machine was securely connected to the remote server. The mechanism, Microsoft engineers wrote, "helped us to break the cyclical dependency between client system time and security keys, including SSL certificates."

United Kingdom

UK To Host AI Safety Summit at Start of November (ft.com) 7

The UK government will host a summit on the safety of artificial intelligence at the start of November, with "like-minded" countries invited to the event in Bletchley Park to address global threats to democracy, including the use of AI in warfare and cyber security. From a report: Leading academics and executives from AI companies, including Google's DeepMind, Microsoft, OpenAI and Anthropic, will be asked to the AI Safety Summit at the Buckinghamshire site where British codebreakers were based during the second world war. "The UK will host the first major global summit on AI safety this autumn," a spokesperson for the government said on Wednesday, adding that Downing Street would set out further details in due course. Prime minister Rishi Sunak initially announced in June the UK would be organising a summit on AI regulation after a meeting in Washington with President Joe Biden.
Google

Google Tests an AI Assistant That Offers Life Advice 56

Google is evaluating tools that would use AI to perform tasks that some of its researchers have said should be avoided. From a report: Earlier this year, Google, locked in an accelerating competition with rivals like Microsoft and OpenAI to develop A.I. technology, was looking for ways to put a charge into its artificial intelligence research. So in April, Google merged DeepMind, a research lab it had acquired in London, with Brain, an artificial intelligence team it started in Silicon Valley. Four months later, the combined groups are testing ambitious new tools that could turn generative A.I. -- the technology behind chatbots like OpenAI's ChatGPT and Google's own Bard -- into a personal life coach.

Google DeepMind has been working with generative A.I. to perform at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions and tutoring tips, according to documents and other materials reviewed by The New York Times. The project was indicative of the urgency of Google's effort to propel itself to the front of the A.I. pack and signaled its increasing willingness to trust A.I. systems with sensitive tasks. The capabilities also marked a shift from Google's earlier caution on generative A.I. In a slide deck presented to executives in December, the company's A.I. safety experts had warned of the dangers of people becoming too emotionally attached to chatbots.
Security

Congressman Bacon Says His Emails Were Hacked in Campaign Linked To China (bloomberg.com) 22

US Representative Don Bacon said he is among those whose emails were hacked in an espionage campaign that Microsoft has attributed to China. From a report: Bacon, a Republican from Nebraska and a strong advocate for US military support to Taiwan, posted on social media that the FBI had notified him that the Chinese Communist Party hacked into his personal and campaign emails over the course of a month, from May 15 to June 16. "The CCP hackers utilized a vulnerability in the Microsoft software, and this was not due to 'user error,'" he wrote on X, the social media platform formerly known as Twitter.

Bacon, a member of the House Armed Services Committee, received an email from Microsoft indicating he may have been hacked and advising him to change his password on June 16, according to Maggie Sayers, Bacon's press secretary. She said that following subsequent notification from the FBI that he had been hacked, Bacon determined emails relating to political strategy, fundraising and personal banking information may have been breached. As a former US Air Force intelligence officer, he is careful to avoid writing sensitive emails relating to China and Taiwan, she said.

Google

How Google is Planning To Beat OpenAI (theinformation.com) 21

In April, Alphabet CEO Sundar Pichai took an unusual step: merging two large artificial intelligence teams -- with distinct cultures and code -- to catch up to and surpass OpenAI and other rivals. Now the test of that effort is coming, with hundreds of people scrambling to release a group of large machine-learning models -- one of the highest-stakes products the company has ever built -- this fall. The Information: The models, collectively known as Gemini, are expected to give Google the ability to build products its competitors can't, according to a person involved with Gemini's development. OpenAI's GPT-4 large-language model can understand and produce conversational text. Gemini will go beyond that, combining the text capabilities of LLMs like GPT-4 with the ability to create AI images based on a text description, similar to AI-image generators Midjourney and Stable Diffusion, this person said. Gemini's image capabilities haven't been previously reported.

Google employees have also discussed using Gemini to offer features like analyzing charts or creating graphics with text descriptions and controlling software using text or voice commands. Google is betting on Gemini to power services ranging from its Bard chatbot, which competes with OpenAI's ChatGPT, to enterprise apps like Google Docs and Slides. Google also wants to charge app developers for access to Gemini through its Google Cloud server-rental unit. Google Cloud currently sells access to more primitive Google-made AI models through a product called Vertex AI. Those new features could help Google catch up with Microsoft, which has raced ahead with new AI features for its Office 365 apps and has also been selling access to OpenAI's models to its app customers.

News

A Brief History of the Corporate Presentation (technologyreview.com) 26

PowerPoint dominates presentations, utilized everywhere from sermons to weddings. In 2010, Microsoft revealed it was on over a billion computers. Before PowerPoint, 35-millimeter film slides reigned for impactful CEO presentations. These "multi-image" shows needed producers, photographers, and a production team to execute. MIT Technology Review has a rundown of the corporate presentation history.
Desktops (Apple)

An Apple Malware-Flagging Tool Is 'Trivially' Easy To Bypass (wired.com) 9

One of the Mac's built-in malware detection tools may not be working quite as well as you think. From a report: At the Defcon hacker conference in Las Vegas, longtime Mac security researcher Patrick Wardle presented findings today about vulnerabilities in Apple's macOS Background Task Management mechanism, which could be exploited to bypass and, therefore, defeat the company's recently added monitoring tool. There's no foolproof method for catching malware on computers with perfect accuracy because, at their core, malicious programs are just software, like your web browser or chat app. It can be difficult to tell the legitimate programs from the transgressors. So operating system makers like Microsoft and Apple, as well as third-party security companies, are always working to develop new detection mechanisms and tools that can spot potentially malicious software behavior in new ways.

Apple's Background Task Management tool focuses on watching for software "persistence." Malware can be designed to be ephemeral and operate only briefly on a device or until the computer restarts. But it can also be built to establish itself more deeply and "persist" on a target even when the computer is shut down and rebooted. Lots of legitimate software needs persistence so all of your apps and data and preferences will show up as you left them every time you turn on your device. But if software establishes persistence unexpectedly or out of the blue, it could be a sign of something malicious. With this in mind, Apple added Background Task Manager in macOS Ventura, which launched in October 2022, to send notifications both directly to users and to any third-party security tools running on a system if a "persistence event" occurs. This way, if you know you just downloaded and installed a new application, you can disregard the message. But if you didn't, you can investigate the possibility that you've been compromised.
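For context on what a "persistence event" typically looks like, one of the most common macOS persistence mechanisms is a per-user LaunchAgent: a property list dropped into ~/Library/LaunchAgents that tells launchd to run a program at every login. The benign, hypothetical sketch below (the label and command are placeholders, and this is not Apple's or Wardle's code) writes exactly the kind of file Background Task Manager is designed to surface:

```python
# A benign, hypothetical sketch of one common macOS persistence mechanism: a
# per-user LaunchAgent that launchd runs at every login. The label and command
# are placeholders (/usr/bin/true does nothing); writing this plist is the kind
# of "persistence event" Background Task Manager is meant to surface.
import plistlib
from pathlib import Path

AGENT_LABEL = "com.example.demo-agent"   # hypothetical identifier
PROGRAM_ARGS = ["/usr/bin/true"]         # harmless stand-in command

launch_agent = {
    "Label": AGENT_LABEL,
    "ProgramArguments": PROGRAM_ARGS,
    "RunAtLoad": True,   # run each time the user logs in, i.e. "persist"
}

target = Path.home() / "Library" / "LaunchAgents" / f"{AGENT_LABEL}.plist"
target.parent.mkdir(parents=True, exist_ok=True)
with open(target, "wb") as fh:
    plistlib.dump(launch_agent, fh)
print(f"Wrote LaunchAgent plist to {target}")
```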

United Kingdom

Why US Tech Giants Are Threatening to Leave the UK (bbc.com) 181

"It was difficult to maintain a poker face when the leader of a big US tech firm I was chatting to said there was a definite tipping point at which the firm would exit the UK," writes a BBC technology editor: Many of these companies are increasingly fed up. Their "tipping point" is UK regulation — and it's coming at them thick and fast. The Online Safety Bill is due to pass in the autumn. Aimed at protecting children, it lays down strict rules around policing social media content, with high financial penalties and prison time for individual tech execs if the firms fail to comply. One clause that has proved particularly controversial is a proposal that encrypted messages, which includes those sent on WhatsApp, can be read and handed over to law enforcement by the platforms they are sent on, if there is deemed to be a national security or child protection risk...

Currently messaging apps like WhatsApp, Proton and Signal, which offer this encryption, cannot see the content of these messages themselves. WhatsApp and Signal have both threatened to quit the UK market over this demand.

The Digital Markets Bill is also making its way through Parliament. It proposes that the UK's competition watchdog selects large companies like Amazon and Microsoft, gives them rules to comply with and sets punishments if they don't. Several firms have told me they feel this gives an unprecedented amount of power to a single body. Microsoft reacted furiously when the Competition and Markets Authority (CMA) chose to block its acquisition of the video game giant Activision Blizzard. "There's a clear message here — the European Union is a more attractive place to start a business than the United Kingdom," raged chief executive Brad Smith. The CMA has since re-opened negotiations with Microsoft. This is especially damning because the EU is also introducing strict rules in the same vein — but it is collectively a much larger and therefore more valuable market.

In the UK, proposed amendments to the Investigatory Powers Act, which included tech firms getting Home Office approval for new security features before worldwide release, incensed Apple so much that it threatened to remove Facetime and iMessage from the UK if they go through. Clearly the UK cannot, and should not, be held to ransom by US tech giants. But the services they provide are widely used by millions of people. And rightly or wrongly, there is no UK-based alternative to those services.

The article concludes that "It's a difficult line to tread. Big Tech hasn't exactly covered itself in glory with past behaviours — and lots of people feel regulation and accountability is long overdue."
Cloud

In Generative AI Market, Amazon Chases Microsoft and Google with Custom AWS Chips (cnbc.com) 25

An anonymous reader shared this report from CNBC: In an unmarked office building in Austin, Texas, two small rooms contain a handful of Amazon employees designing two types of microchips for training and accelerating generative AI. These custom chips, Inferentia and Trainium, offer AWS customers an alternative to training their large language models on Nvidia GPUs, which have been getting difficult and expensive to procure. "The entire world would like more chips for doing generative AI, whether that's GPUs or whether that's Amazon's own chips that we're designing," Amazon Web Services CEO Adam Selipsky told CNBC in an interview in June. "I think that we're in a better position than anybody else on Earth to supply the capacity that our customers collectively are going to want...."

In the long run, said Chirag Dekate, VP analyst at Gartner, Amazon's custom silicon could give it an edge in generative AI...

With millions of customers, Amazon's AWS cloud service "still accounted for 70% of Amazon's overall $7.7 billion operating profit in the second quarter," CNBC notes. But does that give them a competitive advantage?

A technology VP for the service tells them "It's a question of velocity. How quickly can these companies move to develop these generative AI applications is driven by starting first on the data they have in AWS and using compute and machine learning tools that we provide." In June, AWS announced a $100 million generative AI innovation "center."

"We have so many customers who are saying, 'I want to do generative AI,' but they don't necessarily know what that means for them in the context of their own businesses. And so we're going to bring in solutions architects and engineers and strategists and data scientists to work with them one on one," AWS CEO Selipsky said... For now, Amazon is only accelerating its push into generative AI, telling CNBC that "over 100,000" customers are using machine learning on AWS today. Although that's a small percentage of AWS's millions of customers, analysts say that could change.

"What we are not seeing is enterprises saying, 'Oh, wait a minute, Microsoft is so ahead in generative AI, let's just go out and let's switch our infrastructure strategies, migrate everything to Microsoft.' Dekate said. "If you're already an Amazon customer, chances are you're likely going to explore Amazon ecosystems quite extensively."

Power

Microsoft Spotted 15 High-Severity Vulnerabilities in Industrial SDK Used by Power Plants (arstechnica.com) 23

Ars Technica reports that Microsoft "disclosed 15 high-severity vulnerabilities in a widely used collection of tools used to program operational devices inside industrial facilities" (like plants for power generation, factory automation, energy automation, and process automation).

On Friday Microsoft "warned that while exploiting the code-execution and denial-of-service vulnerabilities was difficult, it enabled threat actors to 'inflict great damage on targets.'" The vulnerabilities affect the CODESYS V3 software development kit. Developers inside companies such as Schneider Electric and WAGO use the platform-independent tools to develop programmable logic controllers, the toaster-sized devices that open and close valves, turn rotors, and control various other physical devices in industrial facilities worldwide... "A denial-of-service attack against a device using a vulnerable version of CODESYS could enable threat actors to shut down a power plant, while remote code execution could create a backdoor for devices and let attackers tamper with operations, cause a PLC to run in an unusual way, or steal critical information," Microsoft researchers wrote.

Friday's advisory went on to say: "[...] While exploiting the discovered vulnerabilities requires deep knowledge of the proprietary protocol of CODESYS V3 as well as user authentication (and additional permissions are required for an account to have control of the PLC), a successful attack has the potential to inflict great damage on targets. Threat actors could launch a denial-of-service attack against a device using a vulnerable version of CODESYS to shut down industrial operations or exploit the remote code execution vulnerabilities to deploy a backdoor to steal sensitive data, tamper with operations, or force a PLC to operate in a dangerous way."

Microsoft privately notified Codesys of the vulnerabilities in September, and the company has since released patches that fix the vulnerabilities. It's likely that by now, many vendors using the SDK have installed updates. Any who haven't should make it a priority.

"With the likelihood that the 15 vulnerabilities are patched in most previously vulnerable production environments, the dire consequences Microsoft is warning of appear unlikely," the article notes.

A malware/senior vulnerability analyst at industrial control security firm Dragos also pointed out that CODESYS "isn't widely used in power generation so much as discrete manufacturing and other types of process control. So that in itself should allay some concern when it comes to the potential to 'shut down a power plant'." (And in addition, "industrial systems are extremely complex, and being able to access one part doesn't necessarily mean the whole thing will come crashing down.")
Windows

Microsoft Shuts Down Cortana App On Windows 11 (theverge.com) 16

Microsoft is rolling out a new update for Windows 11 that disables the digital assistant Cortana. The Verge reports: If you attempt to launch Cortana on Windows 11, you'll now be met with a notice about how the app is deprecated and a link to a support article on the change. Microsoft is now planning to end support for Cortana in Teams mobile, Microsoft Teams Display, and Microsoft Teams Rooms "in the fall of 2023." Surprisingly, Cortana inside Outlook mobile "will continue to be available," according to Microsoft.

Microsoft is now working on Windows Copilot, a new sidebar for Windows 11 that is powered by Bing Chat and can control Windows settings, answer questions, and lots more. Windows Copilot is expected to be available this fall as part of a Windows 11 update that will also include native RAR and 7-Zip support.
