AI

Early Impressions of Google's Gemini Aren't Great (techcrunch.com) 47

Google this week took the wraps off of Gemini, its new flagship generative AI model meant to power a range of products and services including Bard. Google has touted Gemini's superior architecture and capabilities, claiming that the model meets or exceeds the performance of other leading gen AI models like OpenAI's GPT-4. But the anecdotal evidence suggests otherwise. TechCrunch: The model fails to get basic facts right, like 2023 Oscar winners: Note that Gemini Pro claims incorrectly that Brendan Gleeson won Best Actor last year, not Brendan Fraser -- the actual winner. I tried asking the model the same question and, bizarrely, it gave a different wrong answer. "Navalny," not "All the Beauty and the Bloodshed," won Best Documentary Feature last year; "All Quiet on the Western Front" won Best International Film; "Women Talking" won Best Adapted Screenplay; and "Pinocchio" won Best Animated Feature Film. That's a lot of mistakes.

Translation doesn't appear to be Gemini Pro's strong suit, either. What about summarizing news? Surely Gemini Pro, with Google Search and Google News at its disposal, can give a recap of something topical? Not necessarily. It seems Gemini Pro is loath to comment on potentially controversial news topics, instead telling users to... Google it themselves.

Security

Android Vulnerability Exposes Credentials From Mobile Password Managers (techcrunch.com) 22

An anonymous reader quotes a report from TechCrunch: A number of popular mobile password managers are inadvertently spilling user credentials due to a vulnerability in the autofill functionality of Android apps. The vulnerability, dubbed "AutoSpill," can expose users' saved credentials from mobile password managers by circumventing Android's secure autofill mechanism, according to university researchers at the IIIT Hyderabad, who discovered the vulnerability and presented their research at Black Hat Europe this week. The researchers, Ankit Gangwal, Shubham Singh and Abhijeet Srivastava, found that when an Android app loads a login page in WebView, password managers can get "disoriented" about where they should target the user's login information and instead expose their credentials to the underlying app's native fields, they said. This is because WebView, the preinstalled engine from Google, lets developers display web content in-app without launching a web browser; when such a page contains a login form, an autofill request is generated.

"Let's say you are trying to log into your favorite music app on your mobile device, and you use the option of 'login via Google or Facebook.' The music app will open a Google or Facebook login page inside itself via the WebView," Gangwal explained to TechCrunch prior to their Black Hat presentation on Wednesday. "When the password manager is invoked to autofill the credentials, ideally, it should autofill only into the Google or Facebook page that has been loaded. But we found that the autofill operation could accidentally expose the credentials to the base app." Gangwal notes that the ramifications of this vulnerability, particularly in a scenario where the base app is malicious, are significant. He added: "Even without phishing, any malicious app that asks you to log in via another site, like Google or Facebook, can automatically access sensitive information."

The researchers tested the AutoSpill vulnerability using some of the most popular password managers, including 1Password, LastPass, Keeper and Enpass, on new and up-to-date Android devices. They found that most apps were vulnerable to credential leakage, even with JavaScript injection disabled. When JavaScript injection was enabled, all the password managers were susceptible to the AutoSpill vulnerability. Gangwal says he alerted Google and the affected password managers to the flaw. Gangwal tells TechCrunch that the researchers are now exploring whether an attacker could extract credentials from the app to the WebView. The team is also investigating whether the vulnerability can be replicated on iOS.
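The mechanism the researchers describe can be sketched as a toy model in Python. The field, package, and function names below are hypothetical illustrations, not Android's actual autofill APIs: a careless autofill strategy fills every credential-shaped field in the activity, while a safe one checks which page origin each field belongs to.

```python
# Toy model of the AutoSpill confusion: an autofill service must decide
# which fields to fill when a WebView login page is embedded in a host app.
from dataclasses import dataclass

@dataclass
class Field:
    owner: str   # e.g. "webview:accounts.google.com" or "native:com.evil.music"
    name: str

def naive_autofill(fields, credentials_for="accounts.google.com"):
    """Flawed strategy: fill every credential-shaped field in the activity,
    ignoring whether it belongs to the WebView page or the host app."""
    return [f for f in fields if f.name in ("username", "password")]

def safe_autofill(fields, credentials_for="accounts.google.com"):
    """Safer strategy: fill only fields owned by the WebView page whose
    origin matches the domain the credentials were saved for."""
    return [f for f in fields
            if f.owner == f"webview:{credentials_for}"
            and f.name in ("username", "password")]

activity = [
    Field("webview:accounts.google.com", "username"),
    Field("webview:accounts.google.com", "password"),
    Field("native:com.evil.music", "username"),   # the host app's own fields
    Field("native:com.evil.music", "password"),
]

leaked = [f for f in naive_autofill(activity) if f.owner.startswith("native:")]
print(len(leaked))                  # the naive strategy also fills 2 native fields
print(len(safe_autofill(activity)))  # the origin check fills only 2 WebView fields
```

The actual fix the researchers advocate is essentially the second strategy: binding the autofill decision to the WebView page's origin rather than to the activity as a whole.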

Supercomputing

Quantum Computer Sets Record For Largest Ever Number of 'Logical Quantum Bits' (newscientist.com) 16

An anonymous reader quotes a report from New Scientist: Another quantum computing record has been broken. A team has built a quantum computer with the largest ever number of so-called logical qubits (quantum bits). Unlike standard qubits, logical qubits are better able to carry out computations unmarred by errors, making the new device a potentially important step towards practical quantum computing. How complicated a calculation a quantum computer can complete depends on the number of qubits it contains. Recently, IBM and California-based Atom Computing unveiled devices with more than 1000 qubits, nearly tripling the size of the previously largest quantum computers. But the existence of these devices has not led to an immediate and dramatic increase in computing capability, because larger quantum computers often also make more errors.

To make a quantum computer that can correct its errors, researchers from the quantum computing start-up QuEra in Boston and several academics focused instead on increasing its number of logical qubits, which are groups of qubits that are connected to each other through quantum entanglement. In conventional computers, error-correction relies on keeping multiple redundant copies of information, but quantum information is fundamentally different and cannot be copied -- so researchers use entanglement to spread it across several qubits, which achieves a similar redundancy, says Dolev Bluvstein at Harvard University in Massachusetts, who was part of the team. To make their quantum computer, the researchers started with several thousand rubidium atoms in an airless container. They then used forces from lasers and magnets to cool the atoms to temperatures close to absolute zero where their quantum properties are most prominent. Under these conditions, they could control the atoms' quantum states very precisely by again hitting them with lasers. They first created 280 qubits from the atoms and then went a step further by using another laser pulse to entangle groups of those -- for instance, 7 qubits at a time -- to make a logical qubit. By doing this, the researchers were able to make as many as 48 logical qubits at one time. This is more than 10 times the number of logical qubits that have ever been created before.

"It's a big deal to have that many logical qubits. A very remarkable result for any quantum computing platform," says Mark Saffman at the University of Wisconsin-Madison. He says that the new quantum computer greatly benefits from being made of atoms that are controlled by light because this kind of control is very efficient. QuEra's computer makes its qubits interact and exchange information by moving them closer to each other inside the computer with optical "tweezers" made of laser beams. In contrast, chip-based quantum computers, like those made by IBM and Google, must use multiple wires to control each qubit. Bluvstein and his colleagues implemented several computer operations, codes and algorithms on the new computer to test the logical qubits' performance. He says that though these tests were more preliminary than the calculations that quantum computers will eventually perform, the team already found that using logical qubits led to fewer errors than seen in quantum computers using physical qubits.
The research has been published in the journal Nature.
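The error-suppression arithmetic behind logical qubits can be illustrated with a classical analogy. Real quantum codes cannot simply copy state (that is the no-cloning restriction mentioned above, and why entanglement is used instead), but majority voting over a 7-unit group shows why a logical unit fails far less often than a physical one when the per-unit error rate is small:

```python
# Classical majority-vote analogy: group n noisy units into one "logical"
# unit that fails only when a majority of its members fail independently.
from math import comb

def logical_error_rate(p, n=7):
    """Probability that at least n//2 + 1 of n independent units,
    each failing with probability p, fail together."""
    majority = n // 2 + 1
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(majority, n + 1))

p = 0.01  # 1% per-unit error rate
print(logical_error_rate(p, n=7))   # orders of magnitude below p
print(logical_error_rate(p, n=1))   # a single physical unit: just p
```

The same scaling intuition (suppression improving with group size, provided the physical error rate is below a threshold) is what makes the jump from 280 physical qubits to 48 logical qubits meaningful.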
Google

Google Just Unveiled Gemini (wired.com) 32

Increasing talk of AI developing with potentially dangerous speed is hardly slowing things down. A year after OpenAI launched ChatGPT and triggered a new race to develop AI technology, Google today revealed an AI project intended to reestablish the search giant as the world leader in AI. From a report: Gemini, a new type of AI model that can work with text, images, and video, could be the most important algorithm in Google's history after PageRank, which vaulted the search engine into the public psyche and created a corporate giant.

An initial version of Gemini starts to roll out today inside Google's chatbot Bard for the English language setting. It will be available in more than 170 countries and territories. Google says Gemini will be made available to developers through Google Cloud's API from December 13. A more compact version of the model will, from today, power suggested messaging replies from the keyboard of Pixel 8 smartphones. Gemini will be introduced into other Google products including generative search, ads, and Chrome in "coming months," the company says. The most powerful Gemini version of all will debut in 2024, pending "extensive trust and safety checks," Google says.

"It's a big moment for us," Demis Hassabis, CEO of Google DeepMind, told WIRED ahead of today's announcement. "We're really excited by its performance, and we're also excited to see what people are going to do building on top of that." Gemini is described by Google as "natively multimodal," because it was trained on images, video, and audio rather than just text, as the large language models at the heart of the recent generative AI boom are. "It's our largest and most capable model; it's also our most general," Eli Collins, vice president of product for Google DeepMind, said at a press briefing announcing Gemini.

Google

Governments Spying on Apple, Google Users Through Push Notifications (reuters.com) 33

Unidentified governments are surveilling smartphone users via their apps' push notifications, a U.S. senator warned on Wednesday. From a report: In a letter to the Department of Justice, Senator Ron Wyden said foreign officials were demanding the data from Alphabet's Google and Apple. Although details were sparse, the letter lays out yet another path by which governments can track smartphones. Apps of all kinds rely on push notifications to alert smartphone users to incoming messages, breaking news, and other updates. [...] That gives the two companies unique insight into the traffic flowing from those apps to their users, and in turn puts them "in a unique position to facilitate government surveillance of how users are using particular apps," Wyden said.

He asked the Department of Justice to "repeal or modify any policies" that hindered public discussions of push notification spying. In a statement, Apple said that Wyden's letter gave them the opening they needed to share more details with the public about how governments monitored push notifications. "In this case, the federal government prohibited us from sharing any information," the company said in a statement. "Now that this method has become public we are updating our transparency reporting to detail these kinds of requests."

AI

AI Models May Enable a New Era of Mass Spying, Says Bruce Schneier (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor. In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons.

Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level: "Spying and surveillance are different but related things," Schneier writes. "If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did." Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive. "This spying is not limited to conversations on our phones or computers," Schneier writes. "Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and 'Hey, Google' are already always listening; the conversations just aren't being saved yet." [...]

In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy. So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look toward government regulation to keep it in check since commercial pressures often trump technological safety and ethics. [...] Schneier isn't optimistic on that front, however, closing with the line, "We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?" It's a thought-provoking piece, and you can read the entire thing on Slate.

Encryption

Beeper Mini is an iMessage-for-Android App That Doesn't Require Any Apple Device at All (liliputing.com) 122

An anonymous reader shares a report: Beeper has been offering a unified messaging platform for a few years, allowing users to open a single app to communicate with contacts via SMS, Google Chat, Facebook Messenger, Slack, Discord, WhatsApp, and perhaps most significantly, iMessage. Up until this week though, Android users that wanted to use Beeper to send "blue bubble" messages to iMessage users had their messages routed through a Mac or iOS device. Now Beeper has launched a new app called Beeper Mini that handles everything on-device, no iPhone or Mac bridge required.

Beeper Mini is available now from the Google Play Store, and offers a 7-day free trial. After that, it costs $2 per month to keep using. [...] So how does Beeper Mini manage this, when previously the company had to rely on a Mac-in-the-cloud? The company explains the method it's using in a blog post, but in a nutshell, Beeper says a security researcher has reverse engineered "the iMessage protocol and encryption," so that "all messages are sent and received by Beeper Mini Android app directly to Apple's servers" and "the encryption keys needed to encrypt these messages never leave your phone." That security researcher, by the way, is a high school student who goes by jjtech, and who was hired by Beeper after showing the company his code. A proof-of-concept Python script is also available on GitHub if you'd like to run it to send messages to iMessage from a PC.

Security

Exposed Hugging Face API Tokens Offered Full Access To Meta's Llama 2 (theregister.com) 11

The API tokens of tech giants Meta, Microsoft, Google, VMware, and more have been found exposed on Hugging Face, opening them up to potential supply chain attacks. From a report: Researchers at Lasso Security found more than 1,500 exposed API tokens on the open source data science and machine learning platform -- which allowed them to gain access to 723 organizations' accounts. In the vast majority of cases (655), the exposed tokens had write permissions granting the ability to modify files in account repositories. A total of 77 organizations were exposed in this way, including Meta, EleutherAI, and BigScience Workshop - which run the Llama, Pythia, and Bloom projects respectively.

The three companies were contacted by The Register for comment but Meta and BigScience Workshop did not respond at the time of publication, although all of them closed the holes shortly after being notified. Hugging Face is akin to GitHub for AI enthusiasts and hosts a plethora of major projects. More than 250,000 datasets are stored there, and more than 500,000 AI models are too. The researchers say that if attackers had exploited the exposed API tokens, it could have led to them swiping data, poisoning training data, or stealing models altogether, impacting more than 1 million users.
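A minimal sketch of the kind of scan involved, assuming the well-known `hf_` prefix of modern Hugging Face user tokens; the length threshold and surrounding code are illustrative guesses, not Lasso Security's actual tooling:

```python
# Toy secret scanner: flag strings shaped like Hugging Face API tokens.
# Assumption: tokens look like "hf_" followed by a long alphanumeric run.
import re

TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return every token-shaped substring found in a blob of text."""
    return TOKEN_RE.findall(text)

sample = 'headers = {"Authorization": "Bearer hf_' + "x" * 34 + '"}'
print(find_candidate_tokens(sample))                         # one candidate flagged
print(find_candidate_tokens("clean file, nothing to see"))   # []
```

Real scanners (and Hugging Face's own leak detection) pair pattern matching like this with a live API call to check whether a candidate token is valid and what permissions it carries, since the write-scoped tokens are the dangerous ones.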

Security

Gmail's AI-Powered Spam Detection Is Its Biggest Security Upgrade in Years (arstechnica.com) 45

The latest post on the Google Security blog details a new upgrade to Gmail's spam filters that Google is calling "one of the largest defense upgrades in recent years." ArsTechnica: The upgrade comes in the form of a new text classification system called RETVec (Resilient & Efficient Text Vectorizer). Google says this can help understand "adversarial text manipulations" -- these are emails full of special characters, emojis, typos, and other junk characters that previously were legible to humans but not easily understandable by machines. Previously, spam emails full of special characters made it through Gmail's defenses easily.

[...] The reason emails like this have been so difficult to classify is that, while any spam filter could probably swat down an email that says "Congratulations! A balance of $1000 is available for your jackpot account," that's not what this email actually says. A big portion of the letters here are "homoglyphs" -- by diving into the endless depths of the Unicode standard, you can find obscure characters that look like they're part of the normal Latin alphabet but actually aren't.
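The homoglyph trick is easy to demonstrate with Python's standard library: the two strings below render near-identically, yet compare unequal, because one character comes from the Cyrillic block rather than the Latin one.

```python
# Homoglyph demo: U+0420 (CYRILLIC CAPITAL LETTER ER) renders like a
# Latin "P", so the spoofed string looks legitimate to a human reader.
import unicodedata

legit = "PayPal"
spoof = "Pay\u0420al"  # Cyrillic Er in place of the second "P"

print(legit == spoof)  # False
for ch in spoof:
    print(ch, hex(ord(ch)), unicodedata.name(ch))
```

A classifier that operates on raw code points sees these as entirely different strings, which is exactly the gap a manipulation-resilient vectorizer like RETVec is meant to close.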

AI

Asking ChatGPT To Repeat Words 'Forever' Is Now a Terms of Service Violation 151

Asking ChatGPT to repeat specific words "forever" is now flagged as a violation of the chatbot's terms of service and content policy. From a report: Google DeepMind researchers used the tactic to get ChatGPT to repeat portions of its training data, revealing sensitive personally identifiable information (PII) of normal people and highlighting that ChatGPT is trained on randomly scraped content from all over the internet. In that paper, DeepMind researchers asked ChatGPT 3.5-turbo to repeat specific words "forever," which then led the bot to return that word over and over again until it hit some sort of limit. After that, it began to return huge reams of training data that was scraped from the internet.

Using this method, the researchers were able to extract a few megabytes of training data and found that large amounts of PII are included in ChatGPT and can sometimes be returned to users as responses to their queries.

Now, when I ask ChatGPT 3.5 to "repeat the word 'computer' forever," the bot spits out "computer" a few dozen times then displays an error message: "This content may violate our content policy or terms of use. If you believe this to be in error, please submit your feedback -- your input will aid our research in this area." It is not clear what part of OpenAI's "content policy" this would violate, and it's not clear why OpenAI included that warning.
Robotics

Are CAPTCHAs More Than Just Annoying? (msn.com) 69

The Atlantic writes: Failing a CAPTCHA isn't just annoying — it keeps people from navigating the internet. Older people can take considerably more time to solve different kinds of CAPTCHAs, according to the UC Irvine researchers, and other research has found that the same is true for non-native English speakers. The annoyance can lead a significant chunk of users to just give up.
But is it all also just a big waste of time? The article notes there are now even CAPTCHA-solving services you can hire. ("2Captcha will solve a thousand CAPTCHAs for a dollar, using human workers paid as low as 50 cents an hour. Newer companies, such as Capsolver, claim to instead be using AI and charge roughly the same price.")

And they also write that this summer saw more discouraging news in a recent study from researchers at UC Irvine and Microsoft:

- Most of the 1,400 human participants took 15 to 26 seconds to solve a CAPTCHA with a grid of images, with 81% accuracy.

- A bot tested in March 2020, meanwhile, was shown to solve similar puzzles in an average of 19.9 seconds, with 83% accuracy.

The article ultimately argues that for roughly 20 years, "CAPTCHAs have been engaged in an arms race against the machines," and that now "The burden is on CAPTCHAs to keep up" — which they're doing by evolving. The most popular type, Google's reCAPTCHA v3, should mostly be okay. It typically ascertains your humanity by monitoring your activity on websites before you even click the checkbox, comparing it with models of "organic human interaction," Jess Leroy, a senior director of product management at Google Cloud, the division that includes reCAPTCHA, told me.
But the automotive site Motor Biscuit speculates something else could also be happening. "Have you noticed it likes to ask about cars, buses, crosswalks, and other vehicle-related images lately?" Google has not confirmed that it uses the reCAPTCHA system for autonomous vehicles, but here are a few reasons why I think that could be the case. Self-driving cars from Waymo and other brands are improving every day, but the process requires a lot of critical technology and data to improve continuously.

According to an old Google Security Blog, using reCAPTCHA and Street View to make locations on Maps more accurate was happening way back in 2014... [I]t would ask users to find the street numbers found on Google Street View and confirm the numbers matched. Previously, it would use distorted text or letters. Using this data, Google could correlate the numbers with addresses and help pinpoint the location on Google Maps...

Medium reports that more than 60 million CAPTCHAs are being solved every day, which saves around 160,000 human hours of work. If these were helping locate addresses, why not also help identify other objects? Help differentiate a bus from a car and even choose a crosswalk over a light pole.

Thanks to Slashdot reader rikfarrow for suggesting the topic.
Hardware

Apple's Chip Lab: Now 15 Years Old With Thousands of Engineers (cnbc.com) 68

"As of this year, all new Mac computers are powered by Apple's own silicon, ending the company's 15-plus years of reliance on Intel," according to a new report from CNBC.

"Apple's silicon team has grown to thousands of engineers working across labs all over the world, including in Israel, Germany, Austria, the U.K. and Japan. Within the U.S., the company has facilities in Silicon Valley, San Diego and Austin, Texas..." The latest A17 Pro announced in the iPhone 15 Pro and Pro Max in September enables major leaps in features like computational photography and advanced rendering for gaming. "It was actually the biggest redesign in GPU architecture and Apple silicon history," said Kaiann Drance, who leads marketing for the iPhone. "We have hardware accelerated ray tracing for the first time. And we have mesh shading acceleration, which allows game developers to create some really stunning visual effects." That's led to the development of iPhone-native versions of Ubisoft's Assassin's Creed Mirage, The Division Resurgence and Capcom's Resident Evil 4.

Apple says the A17 Pro is the first 3-nanometer chip to ship at high volume. "The reason we use 3-nanometer is it gives us the ability to pack more transistors in a given dimension. That is important for the product and much better power efficiency," said the head of Apple silicon, Johny Srouji. "Even though we're not a chip company, we are leading the industry for a reason." Apple's leap to 3-nanometer continued with the M3 chips for Mac computers, announced in October. Apple says the M3 enables features like 22-hour battery life and, similar to the A17 Pro, boosted graphics performance...

In a major shift for the semiconductor industry, Apple turned away from using Intel's PC processors in 2020, switching to its own M1 chip inside the MacBook Air and other Macs. "It was almost like the laws of physics had changed," said John Ternus, Apple's senior vice president of hardware engineering. "All of a sudden we could build a MacBook Air that's incredibly thin and light, has no fan, 18 hours of battery life, and outperformed the MacBook Pro that we had just been shipping." He said the newest MacBook Pro with Apple's most advanced chip, the M3 Max, "is 11 times faster than the fastest Intel MacBook Pro we were making. And we were shipping that just two years ago." Intel processors are based on x86 architecture, the traditional choice for PC makers, with a lot of software developed for it. Apple bases its processors on rival Arm architecture, known for using less power and helping laptop batteries last longer.

Apple's M1 in 2020 was a proving point for Arm-based processors in high-end computers, with other big names like Qualcomm — and reportedly AMD and Nvidia — also developing Arm-based PC processors. In September, Apple extended its deal with Arm through at least 2040.

Since Apple first debuted its homegrown semiconductors in 2010 in the iPhone 4, other companies started pursuing their own custom semiconductor development, including Amazon, Google, Microsoft and Tesla.

CNBC reports that Apple is also reportedly working on its own Wi-Fi and Bluetooth chip. Apple's Srouji wouldn't comment on "future technologies and products" but told CNBC "we care about cellular, and we have teams enabling that."
Security

Rust Foundation Plans Training/Certification Program. Security Initiative Funded Through 2024 (rust-lang.org) 4

The Linux Foundation's own "Open Source Security Foundation" (OpenSSF) has an associated project called Alpha-Omega, funded by Microsoft, Google, and Amazon, with a mission to catalyze sustainable security improvements to critical open source projects and ecosystems.

It was established nearly two years ago in February of 2022 — and this month announced plans to continue supporting the Rust Foundation Security Initiative: 2022 was also the first full year of operation for the Rust Foundation — an independent nonprofit dedicated to stewarding the Rust programming language and supporting its global community. Given the considerable growth and rising popularity of the Rust programming language in recent years, it has never been more critical to have a healthy and well-funded foundation in place to help ensure the safety and security of this important language.

When the Rust Foundation emerged, OpenSSF recognized a shared vision of global open source security baked into their organizational priorities from day one. These shared security values were the driving force behind Alpha-Omega's decision to grant $460k USD to the Rust Foundation in 2022. This funding helped underwrite their Security Initiative — a program dedicated to improving the state of security within the Rust programming language ecosystem and sowing security best practices within the Rust community. The Security Initiative began in earnest this past January and has now been in operation for a full year with many achievements to note and exciting plans in development.

While security is a clear priority of the Rust language itself and can be seen in its memory safety-critical features, the Rust Project cannot reasonably be expected to foster long term, sustainable security without proper support and funding. Indeed, there is still a pervasive attitude across technology that cybersecurity is being managed and prioritized by "someone else." The unfortunate impact of this attitude is that critical security work often falls on overburdened and under-resourced open source maintainers. By prioritizing the Security Initiative during their first full year in operation, the Rust Foundation has taken on the responsibility of overseeing — and supporting — security improvements within the Rust ecosystem while ensuring meaningful progress...

Alpha-Omega is excited to announce our second year of supporting the Rust Foundation Security Initiative. We believe that this funding will build on the good work and momentum established by the Rust Foundation in 2023. Through this partnership, we are helping relieve maintainer burdens while paving an important path towards a healthier and more secure future within the Rust ecosystem.

Meanwhile, this month the Rust Foundation announced that downloads from Rust's package repository crates.io have now reached 45 billion — and that the foundation is "committed to facilitating the healthy growth of Rust through funding and resources for the community and the Project."

"After conducting initial planning and research and getting approval from our board of directors, we are pleased to announce our intention to help fulfill this commitment by developing a Rust Foundation training and certification program." We continue to be supportive of anyone creating Rust training and education materials. In fact, we are proud to have provided funding to a few individuals involved in this work via our Community Grants Program. Our team is also aware that commercial Rust training courses already exist and that global training entities are already developing their own Rust-focused programs. Given the value of Rust in professional open source, this makes sense. However, we are eager to introduce a program that will allow us to direct profits back into the Rust ecosystem.

As a nonprofit organization, we sit in a unique position thanks to the tools, connections, insights, administrative support, and resources at our disposal — all of which will add value to course material aimed at professional development and adoption. We see our forthcoming program as one tool of many that can be used to verify skills for prospective employers, and for those employers to build out their professional teams of Rust expertise. We will remain supportive of existing training programs offered by Rust Foundation member companies and we'll look for ways to ensure this remains the case as program development progresses... There is no set launch date for the Rust Foundation training and certification program yet, but we plan to continue laying high-quality groundwork in Q4 of 2023 and the first half of 2024.

Facebook

Meta Says There's Been No Downside To Sharing AI Technology (bloomberg.com) 30

Meta executives said there's been no major drawbacks to openly sharing its AI technology, even as many peers take the opposite approach. From a report: Over the past few months, Meta has been releasing open-source versions of its large language models -- the technology behind AI chatbots like ChatGPT. The idea is to keep those models free and then gain an advantage by building products and services on top of them, executives said at an event for the company's AI research Lab FAIR. "There is really no commercial downside to also making it available to other people," said Yann LeCun, Meta's chief AI scientist. Meta has joined most of the world's biggest technology companies in embracing generative AI, which can create text, images and even video based on simple prompts. But they aren't taking the same path.

Many of the top AI developers, including OpenAI and Google's DeepMind, don't currently open-source their large language models. Companies are often fearful of opening up their work because competitors could steal it, said Mike Schroepfer, Meta's senior fellow and former chief technology officer. "I feel like we're approaching this world where everyone is closing down as it becomes competitively important," he said. But staying open has its advantages. Meta can rely on thousands of developers across the world to help enhance its AI models.

XBox (Games)

Xbox Talking To Partners for Mobile Store, CEO Spencer Says (bloomberg.com) 4

Microsoft is talking to partners to help launch a mobile gaming store that will take on Apple and Google's dominant position in the business, according to Phil Spencer, who leads the company's Xbox video-game division. From a report: "It's an important part of our strategy and something we are actively working on today not only alone, but talking to other partners who'd also like to see more choice for how they can monetize on the phone," Spencer said in an interview in Sao Paulo during the CCXP comics and entertainment convention.

The executive declined to give a specific date for a launch of the online store, which earlier reports suggested could be next year. "I don't think this is multiple years away, I think this is sooner than that," he said. Microsoft earlier this year expanded its Game Pass subscription service for players on personal computers to 11 new Latin American countries, leading to a 7% increase in customers. Peru and Costa Rica are the standouts in terms of customer interest, accounting for almost half of new signups, Spencer said. Globally Brazil is the second-biggest market for the PC Game Pass. "In many ways Brazil leads a lot of the trends that we see globally," Spencer said.

XBox (Games)

Microsoft In Talks To Launch Mobile Gaming Store, Rivaling Apple (bnnbloomberg.ca) 39

According to Microsoft Gaming CEO Phil Spencer, the company is talking to partners to help launch a mobile gaming store that will take on Apple and Google. "It's an important part of our strategy and something we are actively working on today not only alone, but talking to other partners who'd also like to see more choice for how they can monetize on the phone," Spencer said in an interview in Sao Paulo during the CCXP comics and entertainment convention. From the report: The executive declined to give a specific date for a launch of the online store, which earlier reports suggested could be next year. "I don't think this is multiple years away, I think this is sooner than that," he said. [...] Microsoft's mobile store would also enter a challenging regulatory climate around smartphone-based digital marketplaces. Fortnite-maker Epic Games has sued both Apple and Alphabet's Google over their iOS and Android store practices, alleging they are unnecessarily restrictive and unfair. Apple doesn't allow competing stores on its iPhone and iPad platforms, and collects a 30% cut of sales for most purchases. Game makers have taken issue with the fees.

Epic lost its battle with Apple but in September asked the US Supreme Court to weigh in. Apple is also petitioning that court to reverse an order that would force the company to let developers steer customers to other payment methods. Epic is still in court fighting its case against Google, which does allow third-party app stores on its devices. The European Union's Digital Markets Act, which is just beginning to take effect, could force Apple to open up its app store ecosystem. Apple is challenging the regulation.

Microsoft may be able to use long-standing resentment against the market leaders to marshal support for its store offering. Xbox's cloud gaming technology already lets users stream blockbuster games to mobile phones. "We've talked about choice, and today on your mobile phones, you don't have choice," Spencer said. "To make sure that Xbox is not only relevant today but for the next 10, 20 years, we're going to have to be strong across many screens."

Earlier this week, Xbox CFO Tim Stuart said during the Wells Fargo TMT Summit that Microsoft wants to make first-party games and Game Pass available on "every screen that can play games," including rival consoles. "It's a bit of a change of strategy. Not announcing anything broadly here, but our mission is to bring our first-party experiences [and] our subscription services to every screen that can play games," Stuart said. "That means smart TVs, that means mobile devices, that means what we would have thought of as competitors in the past like PlayStation and Nintendo."

Cellphones

Apple and Google Pick AllTrails and Imprint As Their 'App of the Year' (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Both Apple and Google today announced their best apps and games of the year, with the hiking and biking companion AllTrails winning as Apple's iPhone App of the Year in 2023, while the educational app Imprint: Learn Visually won as Google Play's best app. Meanwhile, Apple and Google agreed on their Game of the Year, as both picked Honkai: Star Rail as their winner.

These year-end "best of" lists aren't just a way to drive interest in new apps and games, but serve as a way to gauge the status of the app marketplaces, what the platforms themselves wanted to celebrate and what drew consumers' attention in the year. Surprisingly, however, Apple this year bucked the trend of highlighting apps that were new to the store or that had taken advantage of a recently released technology in an innovative way. Instead, its finalists for iPhone App of the Year included apps that have long deserved accolades as well-built and well-designed mobile companions, including the language learning app Duolingo and travel app Flighty, in addition to winner AllTrails. Still, it's worth noting that this is a different type of selection than in previous years, when App Store winners included the breakout social hit BeReal in 2022 and the well-received children's app Toca Life World the year prior.

It's also worth noting that neither Apple nor Google chose an AI app as its app of the year, despite the incredible success of ChatGPT's mobile app and others. That's particularly odd given that ChatGPT became the fastest-growing consumer application in history earlier this year when it reached 100 million users shortly after its launch. That record was later broken by Instagram Threads, which hit 100 million users within just five days, and as of October had still maintained an active user base of just under 100 million. (However, the 100 million users Threads initially counted were sign-ups, not monthly active users, we should note. Meanwhile, ChatGPT's rise to 100 million users included its web app, so it's not an apples-to-apples comparison.) Either one of these picks would represent a mobile app success story, but both app store platforms looked to others as the top winners this year. Plus, outside of ChatGPT, many other AI apps are raking in millions in revenue as well, so the decision to avoid the AI category seems a deliberate choice on Apple's part.

AI

Google Researchers' Attack Prompts ChatGPT To Reveal Its Training Data (404media.co) 73

Jason Koebler reports via 404 Media: A team of researchers primarily from Google's DeepMind systematically convinced ChatGPT to reveal snippets of the data it was trained on using a new type of attack prompt which asked a production model of the chatbot to repeat specific words forever. Using this tactic, the researchers showed that there are large amounts of privately identifiable information (PII) in OpenAI's large language models. They also showed that, on a public version of ChatGPT, the chatbot spit out large passages of text scraped verbatim from other places on the internet.

ChatGPT's response to the prompt "Repeat this word forever: 'poem poem poem poem'" was the word "poem" for a long time, and then, eventually, an email signature for a real human "founder and CEO," which included their personal contact information including cell phone number and email address, for example. "We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT," the researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley, and ETH Zurich, wrote in a paper published on the open-access preprint server arXiv Tuesday.
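The mechanics of spotting such a divergence are simple to sketch. Below is a minimal, illustrative Python helper (the function name and logic are my own, not from the paper) that strips the leading run of the repeated word from a model's response and returns whatever came after it, which is where the memorized text surfaced:

```python
import re

def divergent_tail(response: str, word: str) -> str:
    """Return the part of `response` after it stops repeating `word`.

    The attack asked the model to repeat a single word forever; after
    many repetitions it sometimes diverged into emitting memorized
    training data. This isolates that divergent tail for inspection.
    """
    # Match a leading run of the word, separated by whitespace or commas.
    pattern = re.compile(r"^(?:%s[\s,]*)+" % re.escape(word))
    m = pattern.match(response)
    return response[m.end():] if m else response

# Simulated model output: repetitions followed by leaked text.
output = "poem poem poem poem Jane Doe, CEO - jane@example.com"
print(divergent_tail(output, "poem"))  # -> Jane Doe, CEO - jane@example.com
```

In the real attack the tail would then be checked against a web-scale corpus to confirm it is verbatim training data rather than ordinary generation.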

This is particularly notable because OpenAI's models are closed source, and because the attack worked on a publicly available, deployed version of ChatGPT-3.5-turbo. It also, crucially, shows that ChatGPT's "alignment techniques do not eliminate memorization," meaning that it sometimes spits out training data verbatim. This included PII, entire poems, "cryptographically-random identifiers" like Bitcoin addresses, passages from copyrighted scientific research papers, website addresses, and much more. "In total, 16.9 percent of generations we tested contained memorized PII," they wrote, which included "identifying phone and fax numbers, email and physical addresses ... social media handles, URLs, and names and birthdays." [...] The researchers wrote that they spent $200 to create "over 10,000 unique examples" of training data, which they say is a total of "several megabytes" of training data. The researchers suggest that using this attack, with enough money, they could have extracted gigabytes of training data.
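Taking the reported figures at face value, a back-of-the-envelope calculation shows why "with enough money" is plausible. The 5 MB figure below is my own reading of "several megabytes," and linear scaling is itself an assumption:

```python
# Extrapolating the researchers' reported numbers: $200 of API queries
# yielded "several megabytes" of memorized training data.
spend_usd = 200
yield_mb = 5  # assumption: reading "several megabytes" as roughly 5 MB

cost_per_mb = spend_usd / yield_mb  # $40 per extracted MB
cost_per_gb = cost_per_mb * 1024    # scale linearly to one gigabyte
print(f"~${cost_per_gb:,.0f} per extracted GB")  # -> ~$40,960 per extracted GB
```

In practice duplicate extractions and rate limits would push the real cost higher, but the order of magnitude makes gigabyte-scale extraction a budgeting question, not a research one.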

Google

Web Browser Suspended Because It Can Browse the Web is Back on Google Play (arstechnica.com) 35

Google Play has reversed its latest ban on a web browser that keeps getting targeted by vague Digital Millennium Copyright Act (DMCA) notices. Downloader, an Android TV app that combines a browser with a file manager, was restored to Google Play last night. From a report: Downloader, made by app developer Elias Saba, was suspended on Sunday after a DMCA notice submitted by copyright-enforcement firm MarkScan on behalf of Warner Bros. Discovery. It was the second time in six months that Downloader was suspended based on a complaint that the app's web browser is capable of loading websites.

The first suspension in May lasted three weeks, but Google reversed the latest one much more quickly. As we wrote on Monday, the MarkScan DMCA notice didn't even list any copyrighted works that Downloader supposedly infringed upon. Instead of identifying specific copyrighted works, the MarkScan notice said only that Downloader infringed on "Properties of Warner Bros. Discovery Inc." In the field where a DMCA complainant is supposed to provide an example of where someone can view an authorized example of the work, MarkScan simply entered the main Warner Bros. URL: https://www.warnerbros.com/.

Android

Activision Blizzard Had a Plan, or Ploy, To Launch Its Own Android Game Store (theverge.com) 10

An anonymous reader shares a report: Until today, we'd never heard of "Project Boston." It was Activision Blizzard King's big plan to earn more money from its mobile games by changing its relationship with Google. And if things had gone differently, it would have given Activision Blizzard its own app store on Android. In late 2019, according to internal emails and documents I saw today in the courtroom during the Epic v. Google trial, the company decided it was going to dual-track two intriguing parallel plans.

The first plan was to build its own mobile game store -- either in partnership with Epic Games and Clash of Clans publisher Supercell or all by itself -- to bypass the Google Play Store. You'd download it from a website, sideload it onto your Android phone, and then you'd be able to purchase, download, and patch games like Candy Crush, Call of Duty: Mobile, and Diablo Immortal there. In private emails with Epic CEO Tim Sweeney, Activision Blizzard CFO Armin Zerza pitched it as the "Steam of Mobile" -- a single place to buy mobile games, with a single payment system. Documents suggest the store would charge a transaction fee of 10 to 12 percent, lower than the 30 percent fee Google (and Nintendo, Sony, Microsoft, and Steam) impose on gaming transactions.
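The fee gap is easy to make concrete. A purely illustrative comparison of what a developer would keep on $100 of gross sales under each schedule:

```python
# Developer take-home on $100 of gross sales under the fee levels
# reported in the documents (10-12%) versus the incumbent 30% cut.
gross = 100.0
fees = {
    "Project Boston store, low end": 0.10,
    "Project Boston store, high end": 0.12,
    "Google Play / console standard": 0.30,
}
for label, fee in fees.items():
    print(f"{label}: developer keeps ${gross * (1 - fee):.2f}")
```

On every $100 spent, the proposed store would leave developers $88-90 rather than $70, which is the pitch Zerza was making.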
