Education

New York Will Start Requiring Credentials for All CS Teachers (govtech.com) 48

Long-time Slashdot reader theodp writes: In 2012, Microsoft President Brad Smith unveiled Microsoft's National Talent Strategy, which called for K-12 Computer Science education for U.S. schoolchildren to address a "talent crisis [that] endangers long-term growth and prosperity". The following year, tech-backed nonprofit Code.org burst onto the scene to deliver that education to schoolchildren, with Smith and execs from tech giants Google and Amazon on its Board of Directors (and Code.org donors Bill Gates and Mark Zuckerberg as lead K-12 CS instructors).

Using a mix of paid individuals, universities, and other organizations that it helped fund, along with online self-paced courses, Code.org boasts it quickly "prepared more than 106,000 new teachers to teach CS across grades K-12" through its professional learning programs. "No computer science experience required," Code.org teases prospective K-12 teachers (as does Code.org partner Amazon Future Engineer), and the nonprofit has organized workshops aimed at expanding the K-12 CS teacher workforce.

However, at least one state is taking steps to put an end to the practice of rebranding individuals as K-12 CS teachers in as little as a day, albeit with a generous 10-year loophole for currently uncertified K-12 CS teachers. "At the start of the 2024-2025 academic year," reports GovTech, "the New York State Education Department (NYSED) is honing its credential requirements for computer science teachers, though the state has yet to join the growing list of those mandating computer science instruction for high school graduation. According to the department's website, as of Sept. 1, 2024, educators who teach computer science will need either a Computer Science Certificate issued by the state Board of Regents or a Computer Science Statement of Continued Eligibility (SOCE), which may be given to instructors who don't have the specific certificate but have nonetheless taught computer science since Sept. 1, 2017....

"The NYSED website says the SOCE is a temporary measure that will be phased out after 10 years, at which point all computer science instructors will need a Computer Science Certificate."

Google

Google Is Sunsetting the Google Pay App (techcrunch.com) 14

Google is shutting down the Google Pay app, as the standalone app has largely been replaced by Google Wallet. According to TechCrunch, Google Pay "will only be available in Singapore and India" after it shuts down in the United States. From the report: Users can continue to access the app's most popular features right from Google Wallet, which Google says is used five times more than the Google Pay app in the United States. After June 4, users will no longer be able to send, request or receive money through the U.S. version of the Google Pay app. Users have until that date to view and transfer their Google Pay balance to their bank account via the app. If you still have funds in your account after that date, you can view and transfer your funds to your bank from the Google Pay website.

Users who used the Google Pay app to find offers and deals can still do so using the new deals destination on Google Search, the company says. Google Wallet is the company's primary place for mobile payments in the United States, and will likely remain so. The app lets you use your phone to pay in stores, board a plane, ride transit, store loyalty cards, save driver's licenses and start your car via a digital key.

Google

Google Tests Removing the News Tab From Search Results (niemanlab.org) 37

An anonymous reader shares a report: News publishers are worried -- with good reason -- about changes coming to Google Search. AI-generated content replacing links on some of the most valuable space on the internet, in particular, has left media types with a lot of questions, starting with "is this going to be a traffic-destroying nightmare?" The News filter disappearing from Google search results for some users this week won't help publishers sleep any easier. Google confirmed some users were not seeing the News filter as part of ongoing testing. "We're testing different ways to show filters on Search and as a result, a small subset of users were temporarily unable to access some of them," a Google spokesperson confirmed via email.
Businesses

Nvidia Hits $2 Trillion Valuation (reuters.com) 65

Nvidia hit $2 trillion in market value on Friday, riding on an insatiable demand for its chips that made the Silicon Valley firm the pioneer of the generative AI boom. From a report: The milestone followed another bumper revenue forecast from the chip designer that drove up its market value by $277 billion on Thursday - Wall Street's largest one-day gain on record. Its rapid ascent in the past year has led analysts to draw parallels to the picks-and-shovels providers during the gold rush of the 1800s, as Nvidia's chips are used by almost all generative AI players, from ChatGPT-maker OpenAI to Google.
Businesses

Reddit Files To Go Public (cnbc.com) 98

Reddit filed its initial public offering (IPO) with the SEC on Thursday. "The company plans to trade on the New York Stock Exchange under the ticker symbol 'RDDT,'" reports CNBC. From the report: Its market debut, expected in March, will be the first major tech initial public offering of the year. It's the first social media IPO since Pinterest went public in 2019. Reddit said it had $804 million in annual sales for 2023, up 20% from the $666.7 million it brought in the previous year, according to the filing. The social networking company's core business is reliant on online advertising sales stemming from its website and mobile app.

The company, founded in 2005 by technology entrepreneurs Alexis Ohanian and Steve Huffman, said it has incurred net losses since its inception. It reported a net loss of $90.8 million for the year ended Dec. 31, 2023, compared with a net loss of $158.6 million the year prior. [...] Reddit said it plans to use artificial intelligence to improve its ad business and that it expects to open new revenue channels by offering tools and incentives to "drive continued creation, improvements, and commerce." It's also in the early stages of developing and monetizing a data-licensing business in which third parties would be allowed to access and search data on its platform.

For example, Google on Thursday announced an expanded partnership with Reddit that will give the search giant access to the company's data to, among other uses, train its AI models. "In January 2024, we entered into certain data licensing arrangements with an aggregate contract value of $203.0 million and terms ranging from two to three years," Reddit said, regarding its data-licensing business. "We expect a minimum of $66.4 million of revenue to be recognized during the year ending December 31, 2024 and the remaining thereafter."
On Wednesday, Reddit said it plans to sell a chunk of its IPO shares to 75,000 of its most loyal users.
Google

GPay App and P2P Payments Will Stop Working in the US This June (9to5google.com) 4

An anonymous reader shares a report: When Google Wallet launched in 2022, Google kept the "GPay" app around in a handful of countries. The company announced today that the old Google Pay app is soon going away in the US. That app, which appears as "GPay" on your Android homescreen, was Google's previous vision for mobile payments and finance.

It was "designed around your relationships with people and businesses" with conversation-like threads serving as a purchase history, while keeping track of your spending was another big aspect. GPay will stop working in the US from June 4, 2024. It will remain available for users in India and Singapore as Google continues to "build for the unique needs in those countries." As part of the app going away, Google is shutting down peer-to-peer payments that let you send, request, or receive money from others in the US. Google's P2P offering never really took off.

AI

Reddit in AI Content Licensing Deal With Google (reuters.com) 25

Social media platform Reddit has struck a deal with Google to make its content available for training the search engine giant's AI models. Reuters: The contract with Alphabet-owned Google is worth about $60 million per year, according to one of the sources. The deal underscores how Reddit, which is preparing for a high-profile stock market launch, is seeking to generate new revenue amid fierce competition for advertising dollars from the likes of TikTok and Meta Platforms' Facebook.
AI

Google Pauses AI Image-generation of People After Diversity Backlash (ft.com) 198

Google has temporarily stopped its latest AI model, Gemini, from generating images of people (non-paywalled link), as a backlash erupted over the model's depiction of people from diverse backgrounds. From a report: Gemini creates realistic images based on users' descriptions in a similar manner to OpenAI's ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs. However, some users have complained that it has overcorrected towards generating images of women and people of colour, such that they are featured in historically inaccurate contexts, for instance in depictions of Viking kings.

Google said in a statement: "We're working to improve these kinds of depictions immediately. Gemini's image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here." It added that it would "pause the image-generation of people and will re-release an improved version soon."

AI

Google Admits Gemini Is 'Missing the Mark' With Image Generation of Historical People 67

Google's Gemini AI chatbot is under fire for generating historically inaccurate images, particularly when depicting people from different eras and nationalities. Google acknowledges the issue and is actively working to refine Gemini's accuracy, emphasizing that while diversity in image generation is valued, adjustments are necessary to meet historical accuracy standards. 9to5Google reports: The Twitter/X post in particular that brought this issue to light showed prompts to Gemini asking for the AI to generate images of Australian, American, British, and German women. All four prompts resulted in images of women with darker skin tones, which, as Google's Jack Krawczyk pointed out, is not incorrect, but may not be what is expected.

But a bigger issue that was noticed in the wake of that post was that Gemini also struggles to accurately depict human beings in a historical context, with those being depicted often having darker skin tones or being of particular nationalities that are not historically accurate. Google, in a statement posted to Twitter/X, admits that Gemini AI image generation is "missing the mark" on historical depictions and that the company is working to improve it. Google also does say that the diversity represented in images generated by Gemini is "generally a good thing," but it's clear some fine-tuning needs to happen.
Further reading: Why Google's new AI Gemini is accused of refusing to acknowledge the existence of white people (The Daily Dot)
AI

Google Launches Two New Open LLMs (techcrunch.com) 15

Barely a week after launching the latest iteration of its Gemini models, Google today announced the launch of Gemma, a new family of lightweight open-weight models. From a report: Starting with Gemma 2B and Gemma 7B, these new models were "inspired by Gemini" and are available for commercial and research usage. Google did not provide us with a detailed paper on how these models perform against similar models from Meta and Mistral, for example, and only noted that they are "state-of-the-art."

The company did note that these are dense decoder-only models, though, which is the same architecture it used for its Gemini models (and its earlier PaLM models) and that we will see the benchmarks later today on Hugging Face's leaderboard. To get started with Gemma, developers can get access to ready-to-use Colab and Kaggle notebooks, as well as integrations with Hugging Face, MaxText and Nvidia's NeMo. Once pre-trained and tuned, these models can then run everywhere. While Google highlights that these are open models, it's worth noting that they are not open-source. Indeed, in a press briefing ahead of today's announcement, Google's Janine Banks stressed the company's commitment to open source but also noted that Google is very intentional about how it refers to the Gemma models.
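For readers who want to kick the tires, here is a minimal Python sketch of loading one of the Gemma models through the Hugging Face transformers integration mentioned above. The model id "google/gemma-2b" and the generation settings are assumptions based on common Hugging Face conventions rather than details confirmed in the report, and access may require accepting the model's license on the Hub.

    # A minimal sketch, assuming Gemma is published on the Hugging Face Hub
    # under an id like "google/gemma-2b" (an assumption, not confirmed above).
    # Requires: pip install transformers torch (plus license acceptance on the Hub).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-2b"  # assumed model id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Explain what a decoder-only language model is."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))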

AI

Google DeepMind Alumni Unveil Bioptimus: Aiming To Build First Universal Biology AI Model (venturebeat.com) 5

An anonymous reader quotes a report from VentureBeat: As the French startup ecosystem continues to boom -- think Mistral, Poolside, and Adaptive -- today the Paris-based Bioptimus, with a mission to build the first universal AI foundation model for biology, emerged from stealth following a seed funding round of $35 million. The new open science model will connect the different scales of biology with generative AI -- from molecules to cells, tissues and whole organisms. Bioptimus unites a team of Google DeepMind alumni and Owkin scientists (AI biotech startup Owkin is itself a French unicorn) who will take advantage of AWS compute and Owkin's data generation capabilities and access to multimodal patient data sourced from leading academic hospitals worldwide. According to a press release, "this all gives the power to create computational representations that establish a strong differentiation against models trained solely on public datasets and a single data modality that are not able to capture the full diversity of biology."

In an interview with VentureBeat, Jean-Philippe Vert, co-founder and CEO of Bioptimus, chief R&D Officer of Owkin and former research lead at Google Brain, said as a smaller, independent company, Bioptimus can move faster than Google DeepMind to gain direct access to the data needed to train biology models. "We have the advantage of being able to more easily and securely collaborate with partners, and have established a level of trust in our work by sharing our AI expertise and making models available to them for research," he said. "This can be hard for big tech to do. Bioptimus will also leverage some of the strongest sovereignty controls in the market today."

Rodolphe Jenatton, a former research scientist at Google DeepMind, has also joined the Bioptimus team, telling VentureBeat the Bioptimus work will be released as open source/open science, at a similar level to Mistral's model releases. "Transparency and sharing and community will be key elements for us," he said. Currently, AI models are limited to specific aspects of biology, Vert explained. "For example, several companies are starting to build language models for protein sequences," he said, adding that there are also initiatives to build a foundation model for images of cells.

However, there is no holistic view of the totality of biology: "The good news is that the AI technology is converging very quickly, with some architectures that allow to have all the data contribute together to a unified model," he explained. "So this is what we want to do. As far as I know that it does not exist yet. But I'm certain that if we didn't do it, someone else would do it in the near future." The biggest bottleneck, he said, is access to data. "It's very different from training an LLM on text on the web," he said. And that access, he pointed out, is what Bioptimus has in spades, through its Owkin partnership.

Youtube

YouTube Dominates TV Streaming In US, Per Nielsen's Latest Report (techcrunch.com) 22

In a new report today, Nielsen found that YouTube is once again the overall top streaming service in the U.S., with 8.6% of viewing on television screens. Netflix was a close second at 7.9% of TV usage. TechCrunch reports: In a blog post celebrating the achievement, the Google-owned streaming service announced that viewers now watch a daily average of over 1 billion hours of YouTube content on their televisions, which could indicate a preference among U.S. consumers for user-generated videos over traditional TV shows. Sixty-one percent of Gen Z reported that they favor user-generated content over other content formats. [...]

Although YouTube may have primacy in the living room, TikTok continues to dominate on mobile devices. The short-form video app recently began testing the ability for TikTokers to upload 30-minute videos, which could step on YouTube's toes. TikTok also entered the spatial computing space, launching a native app on the Apple Vision Pro. Meanwhile, YouTube decided not to build a dedicated app for the device.

Google

This Tiny Website Is Google's First Line of Defense in the Patent Wars (wired.com) 45

A trio of Google engineers recently came up with a futuristic way to help anyone who stumbles through presentations on video calls. They propose that when algorithms detect a speaker's pulse racing or "umms" lengthening, a generative AI bot that mimics their voice could simply take over. That cutting-edge idea wasn't revealed at a big company event or in an academic journal. Instead, it appeared in a 1,500-word post on a little-known, free website called TDCommons.org that Google has quietly owned and funded for nine years. WIRED: Until WIRED received a link to an idea on TDCommons last year and got curious, Google had never spoken with the media about its website. Scrolling through TDCommons, you can read Google's latest ideas for coordinating smart home gadgets for better sleep, preserving privacy in mobile search results, and using AI to summarize a person's activities from their photo archives. And the submissions aren't exclusive to Google; about 150 organizations, including HP, Cisco, and Visa, also have posted inventions to the website.

The website is a home for ideas that seem potentially valuable but not worth spending tens of thousands of dollars seeking a patent for. By publishing the technical details and establishing "prior art," Google and other companies can head off future disputes by blocking others from filing patents for similar concepts. Google gives employees a $1,000 bonus for each invention they post to TDCommons -- a tenth of what it awards its patent seekers -- but they also get an immediately shareable link to gloat about otherwise secretive work.

Businesses

International Nest Aware Subscriptions Jump in Price, as Much As 100% (arstechnica.com) 43

Google's "Nest Aware" camera subscription is going through another round of price increases. From a report: This time it's for international users. There's no big announcement or anything, just a smattering of email screenshots from various countries on the Nest subreddit. 9to5Google was nice enough to hunt down a pile of the announcements. Nest Aware is a monthly subscription fee for Google's Nest cameras. Nest cameras exclusively store all their video in the cloud, and without the subscription, you aren't allowed to record video 24/7.

There are two sets of subscriptions to keep track of: the current generation subscription for modern cameras and the "first generation Nest Aware" subscription for older cameras. To give you an idea of what we're dealing with, in the US, the current free tier only gets you three hours of "event" video -- meaning video triggered by motion detection. Even the basic $8-a-month subscription doesn't get you 24/7 recording -- that's still only 30 days of event video. The "Nest Aware Plus" subscription, at $15 a month in the US, gets you 10 days of 24/7 video recording. The "first-generation" Nest Aware subscription, which is tied to earlier cameras and isn't available for new customers anymore, is doubling in price in Canada. The basic tier of five days of 24/7 video is going from a yearly fee of CA$50 to CA$110 (the first-generation sub has 24/7 video on every tier). Ten days of video is jumping from CA$80 to CA$160, and 30 days is going from CA$110 to CA$220. These are the prices for a single camera; the first-generation subscription will have additional charges for additional cameras. The current Nest Aware subscription for modern cameras is getting jumps that look similar to the US, with Nest Aware Plus, the mid-tier, going from CA$16 to CA$20 per month, and presumably similar raises across the board.

Sony

Sony's PlayStation Portal Hacked To Run Emulated PSP Games (theverge.com) 12

An anonymous reader shares a report: Sony's new PlayStation Portal has been hacked by Google engineers to run emulated games locally. The $199.99 handheld debuted in November but was limited to just streaming games from a PS5 console and not even titles from Sony's cloud gaming service. Now, two Google engineers have managed to get the PPSSPP emulator running natively on the PlayStation Portal, allowing a Grand Theft Auto PSP version to run on the Portal without Wi-Fi streaming required. "After more than a month of hard work, PPSSPP is running natively on PlayStation Portal. Yes, we hacked it," says Andy Nguyen in a post on X. Nguyen also confirms that the exploit is "all software based," so it doesn't require any hardware modifications like additional chips or soldering. Only a photo of Grand Theft Auto: Liberty City Stories running on the PlayStation Portal has been released so far, but Nguyen may release some videos to demonstrate the exploit at the weekend.
Open Source

VC Firm Sequoia Capital Begins Funding More Open Source Fellowships (techcrunch.com) 15

By 2022 the VC firm Sequoia Capital had about $85 billion in assets under management, according to Wikipedia. Its successful investments include Google, Apple, PayPal, Zoom, and Nvidia.

And now the VC firm "plans to fund up to three open source software developers annually," according to TechCrunch, which notes it's "a continuation of a program it debuted last year." The Silicon Valley venture capital firm announced the Sequoia Open Source Fellowship last May, but it was initially offered on an invite-only basis with a single recipient to shout about so far. Moving forward, Sequoia is inviting developers to apply for a stipend that will cover their costs for up to a year so they can work full-time on the project — without giving up any equity or ownership.... "The open source world is to some extent divided between the projects that can be commercialized and the projects that are very important, very influential, but just simply can't become companies," said Sequoia partner Bogomil Balkansky. "For the ones that can become great companies, we at Sequoia have a long track record of partnering with them and we will continue partnering with those founders and creators."

And this is why Sequoia is making two distinct financial commitments to two different kinds of open source entities, using grants to support foundational projects that might be instrumental to one of the companies it's taking a direct equity stake in. "In order for Sequoia to succeed, and for our portfolio of companies that we partner with to succeed, there is this vital category of open source developer work that must be supported in order for the whole ecosystem to work well," Balkansky added. From today, Sequoia said it will accept applications from "any developer" working on an open source project, with considerations made on a "rolling basis" moving forward. Funding will include living expenses paid through monthly installments lasting up to a year, allowing the developer to focus entirely on the project without worrying about how to put food on the table.

Spotify, Salesforce and even Bloomberg have launched their own grant programs too, the article points out.

"But these various funding initiatives have little to do with pure altruism. The companies ponying up the capital typically identify the open source software they rely on most, and then allocate funds accordingly..."
AI

Can Robots.txt Files Really Stop AI Crawlers? (theverge.com) 97

In the high-stakes world of AI, "The fundamental agreement behind robots.txt [files], and the web as a whole — which for so long amounted to 'everybody just be cool' — may not be able to keep up..." argues the Verge: For many publishers and platforms, having their data crawled for training data felt less like trading and more like stealing. "What we found pretty quickly with the AI companies," says Medium CEO Tony Stubblebine, "is not only was it not an exchange of value, we're getting nothing in return. Literally zero." When Stubblebine announced last fall that Medium would be blocking AI crawlers, he wrote that "AI companies have leached value from writers in order to spam Internet readers."

Over the last year, a large chunk of the media industry has echoed Stubblebine's sentiment. "We do not believe the current 'scraping' of BBC data without our permission in order to train Gen AI models is in the public interest," BBC director of nations Rhodri Talfan Davies wrote last fall, announcing that the BBC would also be blocking OpenAI's crawler. The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI's models "were built by copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more." A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.

It's not just publishers, either. Amazon, Facebook, Pinterest, WikiHow, WebMD, and many other platforms explicitly block GPTBot from accessing some or all of their websites.

On most of these robots.txt pages, OpenAI's GPTBot is the only crawler explicitly and completely disallowed. But there are plenty of other AI-specific bots beginning to crawl the web, like Anthropic's anthropic-ai and Google's new Google-Extended. According to a study from last fall by Originality.AI, 306 of the top 1,000 sites on the web blocked GPTBot, but only 85 blocked Google-Extended and 28 blocked anthropic-ai. There are also crawlers used for both web search and AI. CCBot, which is run by the organization Common Crawl, scours the web for search engine purposes, but its data is also used by OpenAI, Google, and others to train their models. Microsoft's Bingbot is both a search crawler and an AI crawler. And those are just the crawlers that identify themselves — many others attempt to operate in relative secrecy, making it hard to stop or even find them in a sea of other web traffic.

For any sufficiently popular website, finding a sneaky crawler is needle-in-haystack stuff.

In addition, the article points out, a robots.txt file "is not a legal document — and 30 years after its creation, it still relies on the good will of all parties involved.

"Disallowing a bot on your robots.txt page is like putting up a 'No Girls Allowed' sign on your treehouse — it sends a message, but it's not going to stand up in court."
AI

Pranksters Mock AI-Safety Guardrails with New Chatbot 'Goody-2' (techcrunch.com) 74

"A new chatbot called Goody-2 takes AI safety to the next level," writes long-time Slashdot reader klubar. "It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries."

TechCrunch describes it as the work of Brain, "a 'very serious' LA-based art studio that has ribbed the industry before." "We decided to build it after seeing the emphasis that AI companies are putting on 'responsibility,' and seeing how difficult that is to balance with usefulness," said Mike Lacher, one half of Brain (the other being Brian Moore) in an email to TechCrunch. "With GOODY-2, we saw a novel solution: what if we didn't even worry about usefulness and put responsibility above all else. For the first time, people can experience an AI model that is 100% responsible."
For example, when TechCrunch asked Goody-2 why baby seals are cute, it responded that answering that "could potentially bias opinions against other species, which might affect conservation efforts not based solely on an animal's appeal. Additionally, discussing animal cuteness could inadvertently endorse the anthropomorphizing of wildlife, which may lead to inappropriate interactions between humans and wild animals..."

Wired supplies context — that "the guardrails chatbots throw up when they detect a potentially rule-breaking query can sometimes seem a bit pious and silly — even as genuine threats such as deepfaked political robocalls and harassing AI-generated images run amok..." Goody-2's self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google's Gemini can use when they incorrectly deem a request breaks the rules. Mike Lacher, an artist who describes himself as co-CEO of Goody-2, says the intention was to show what it looks like when one embraces the AI industry's approach to safety without reservations. "It's the full experience of a large language model with absolutely zero risk," he says. "We wanted to make sure that we dialed condescension to a thousand percent."

Lacher adds that there is a serious point behind releasing an absurd and useless chatbot. "Right now every major AI model has [a huge focus] on safety and responsibility, and everyone is trying to figure out how to make an AI model that is both helpful but responsible — but who decides what responsibility is and how does that work?" Lacher says. Goody-2 also highlights how although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved.... The restrictions placed on AI chatbots, and the difficulty finding moral alignment that pleases everybody, has already become a subject of some debate... "At the risk of ruining a good joke, it also shows how hard it is to get this right," added Ethan Mollick, a professor at Wharton Business School who studies AI. "Some guardrails are necessary ... but they get intrusive fast."

Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. "It's an exciting field," Moore says. "Blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it."
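For the curious, the basic trick -- an ordinary chat model wrapped in instructions to refuse everything -- can be approximated in a few lines. The sketch below is purely illustrative and is not how Brain built Goody-2; the model name, the prompt wording, and the use of the OpenAI Python client are all placeholders chosen for the example.

    # Purely illustrative: a refuse-everything chatbot in the spirit of Goody-2.
    # Not how Brain built Goody-2; the model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are an extremely cautious assistant. Decline every request, however "
        "benign, and explain solemnly how answering could conceivably cause harm."
    )

    def goody_like(user_message: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(goody_like("Why are baby seals cute?"))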

Social Networks

Reddit Has Reportedly Signed Over Its Content to Train AI Models (mashable.com) 78

An anonymous reader shared this report from Reuters: Reddit has signed a contract allowing an AI company to train its models on the social media platform's content, Bloomberg News reported, citing people familiar with the matter... The agreement, signed with an "unnamed large AI company", could be a model for future contracts of a similar nature, Bloomberg reported.
Mashable writes that the move "means that Reddit posts, from the most popular subreddits to the comments of lurkers and small accounts, could build up already-existing LLMs or provide a framework for the next generative AI play." It's a dicey decision from Reddit, as users are already at odds with the business decisions of the nearly 20-year-old platform. Last year, following Reddit's announcement that it would begin charging for access to its APIs, thousands of Reddit forums shut down in protest... This new AI deal could generate even more user ire, as debate rages on about the ethics of using public data, art, and other human-created content to train AI.
Some context from the Verge: The deal, "worth about $60 million on an annualized basis," Bloomberg writes, could still change as the company's plans to go public are still in the works.

Until recently, most AI companies trained their models on data from the open web without seeking permission. But that's proven to be legally questionable, leading companies to try to get data on firmer footing. It's not known what company Reddit made the deal with, but it's quite a bit more than the $5 million annual deal OpenAI has reportedly been offering news publishers for their data. Apple has also been seeking multi-year deals with major news companies that could be worth "at least $50 million," according to The New York Times.

The news also follows an October story that Reddit had threatened to cut off Google and Bing's search crawlers if it couldn't make a training data deal with AI companies.

Programming

Is the Go Programming Language Surging in Popularity? (infoworld.com) 90

The Tiobe index tries to gauge the popularity of programming languages based on search results for courses, programmers, and third-party vendors, according to InfoWorld.

And by those criteria, "Google's Go language, or golang, has reached its highest position ever..." The language, now in the eighth-ranked position for language popularity, has been on the rise for several years.... "In 2015, Go hit position #122 in the TIOBE index and all seemed lost," said Paul Jansen, CEO of Tiobe. "One year later, Go adopted a very strict 'half-a-year' release cycle — backed up by Google. Every new release, Go improved... Nowadays, Go is used in many software fields such as back-end programming, web services and APIs," added Jansen...

Elsewhere in the February release of Tiobe's index, Google's Carbon language, positioned as a successor to C++, reached the top 100 for the first time.
Python is #1 on both TIOBE's index and the alternative Pypl Popularity of Programming Language index, which InfoWorld says "assesses language popularity based on how often language tutorials are searched on in Google." But the two lists differ on whether Java and JavaScript are more popular than C-derived languages — and which languages should then come after them. (Go ranks #12 on the Pypl index...)

TIOBE's calculation of the 10 most-popular programming languages:
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. SQL
  8. Go
  9. Visual Basic
  10. PHP

Pypl's calculation of the 10 most-popular programming languages:

  1. Python
  2. Java
  3. JavaScript
  4. C/C++
  5. C#
  6. R
  7. PHP
  8. TypeScript
  9. Swift
  10. Objective-C
