AI

Humans Still Cheaper Than AI in Vast Majority of Jobs, MIT Finds (bloomberg.com) 47

AI can't replace the majority of jobs right now in cost-effective ways, the Massachusetts Institute of Technology found in a study that sought to address fears about AI replacing humans in a swath of industries. From a report: In one of the first in-depth probes of the viability of AI displacing labor, researchers modeled the cost attractiveness of automating various tasks in the US, concentrating on jobs where computer vision was employed -- for instance, teachers and property appraisers. They found only 23% of workers, measured in terms of dollar wages, could be effectively supplanted. In other cases, because AI-assisted visual recognition is expensive to install and operate, humans did the job more economically. [...] The cost-benefit ratio of computer vision is most favorable in segments like retail, transportation and warehousing, all areas where Walmart and Amazon are prominent. It's also feasible in the health-care context, MIT's paper said. A more aggressive AI rollout, especially via AI-as-a-service subscription offerings, could scale up other uses and make them more viable, the authors said.
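The study's core question is a cost comparison: does the annualized cost of installing and running a vision system undercut the wage share of the task it would automate? Below is a minimal, hedged sketch of that comparison in TypeScript; the function name and every figure are invented for illustration and are not taken from the MIT paper.

```typescript
// Illustrative only: automate a visual task when the annualized cost of a
// computer-vision system undercuts the share of wages it would replace.
// All names and numbers here are hypothetical, not from the study.
function visionAutomationPaysOff(
  annualWage: number,   // worker's yearly pay in dollars
  taskShare: number,    // fraction of the job the vision task represents
  systemCost: number,   // up-front cost to install the vision system
  amortYears: number,   // years over which that cost is spread
  annualOpex: number    // yearly cost to operate and maintain it
): boolean {
  const humanCost = annualWage * taskShare;
  const aiCost = systemCost / amortYears + annualOpex;
  return aiCost < humanCost;
}

// e.g. a $42,000 job where vision covers 20% of the work vs. a $50,000 system:
console.log(visionAutomationPaysOff(42_000, 0.2, 50_000, 5, 4_000)); // false
```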
Social Networks

'You Are Not An Embassy' (substack.com) 108

Jamie Bartlett, a technology columnist, argues that social media platforms constantly pressure users to share opinions on events they may not fully understand, contributing to an atmosphere of performative outrage and conformity rather than thoughtful discussion. However, he also acknowledges the counterpoint that silence in the face of injustice can enable harm. From the column: One of the trickier aspects of digital life is the constant pressure to opine. To have a strong opinion on a subject, and to share it with the world. It's literally baked into the design of the most popular platforms. [...] If I am honest, I know very little about most bad things going on in the world. Certainly not enough that sharing my view will inform or educate or enlighten. Yet whenever I see a news report, an urgent need rises up: what shall I say about this? I have a feeling about it -- which must be shared! (And ideally in emotionally charged language, since that will receive more interactions).

What's wrong with calling out the bad stuff going on? Nothing per se. And certainly not on an individual level. The problem is when people feel a soft and gentle pressure to denounce, to praise, to comment on things they don't feel they fully understand. Things they don't feel comfortable speaking about. Things that are contentious and difficult to discuss on heartless, unforgiving platforms where the wrong phrase or tone might land you in hot water. What social media has done is to make silence an active -- rather than the default -- choice. To speak publicly is now so easy that not doing it kind-of-implies you don't know or don't care about what's going on in the world. Who wants to look ignorant or indifferent? And besides, who doesn't want to appear kind or wise, or morally upstanding in front of others?

But the result is an undirected anger from all sides: frenetic, purposeless, habitual and above all moralising. There's nothing wrong with occasionally saying what you think and sometimes it's very important.

Google

Predatory Loan Apps Are Thriving in Google Play Store, Despite Ban (restofworld.org) 29

Tens of thousands of people have fallen victim to predatory loan apps, which extort users using sensitive information from their phones. Google has changed its policy to prevent the loan apps from being listed on the Play store, but enforcement is unreliable. Rest of World: According to Mexico City's Citizen Council for Safety and Justice, a consumer watchdog group, 135 reports to local authorities have been filed against JoyCredito for fraud and extortion. But despite the government attention, the app is still available to download from the Google Play store. For years, apps like JoyCredito have been exploiting borrowers from Mexico to India. They lend small amounts of money with few requirements and very high interest rates to financially vulnerable people -- and then extort them when the loan is due. After years of mounting pressure from watchdog groups, Google explicitly banned the apps from the Play store in October. But stories like those of Macias Gonzalez show how widespread the apps still are -- and how ineffective Google has been at enforcing its own policy.

Rest of World presented Google with 15 instances of exploitative loan apps based in Mexico that explicitly violate the terms of the Play store. All of them were still available in the store as of press time. Of the 15 apps, 12 explicitly asked for access to either the camera roll or contacts in the Google Play store's terms of service. Two others specified full access only in external documents. One other gave no data access information. Rest of World also found 10 apps in Peru that have been flagged as exploitative by SBS, a national body that oversees banking, insurance, and private pensions. All the apps are still available for download on the Google Play store.

Unix

Should New Jersey's Old Bell Labs Become a 'Museum of the Internet'? (medium.com) 54

"Bell Labs, the historic headwaters of so many inventions that now define our digital age, is closing in Murray Hill," writes journalism professor Jeff Jarvis (in an op-ed for New Jersey's Star-Ledger newspaper).

"The Labs should be preserved as a historic site and more." I propose that Bell Labs be opened to the public as a museum and school of the internet.

The internet would not be possible without the technologies forged at Bell Labs: the transistor, the laser, information theory, Unix, communications satellites, fiber optics, advances in chip design, cellular phones, compression, microphones, talkies, the first digital art, and artificial intelligence — not to mention, of course, many advances in networks and the telephone, including the precursor to the device we all carry and communicate with today: the Picturephone, displayed as a futuristic fantasy at the 1964 World's Fair.

There is no museum of the internet. Silicon Valley has its Computer History Museum. New York has museums for television and the moving image. Massachusetts boasts a charming Museum of Printing. Search Google for a museum of the internet and you'll find amusing digital artifacts, but nowhere to immerse oneself in and study this immensely impactful institution in society.

Where better to house a museum devoted to the internet than New Jersey, home not only of Bell Labs but also at one time the headquarters of the communications empire, AT&T, our Ma Bell...? The old Bell Labs could be more than a museum, preserving and explaining the advances that led to the internet. It could be a school... Imagine if Bell Labs were a place where scholars and students in many disciplines — technologies, yes, but also anthropology, sociology, psychology, history, ethics, economics, community studies, design — could gather to teach and learn, discuss and research.

The text of Jarvis's piece is behind subscription walls, but has apparently been re-published on X by innovation theorist John Nosta.

In one of the most interesting passages, Jarvis remembers visiting Bell Labs in 1995. "The halls were haunted with genius: lab after lab with benches and blackboards and history within. We must not lose that history."
Power

What's the Solution to Gridlocked EV Chargers? (sacbee.com) 426

"Some of the most convenient fast-charging stations — mostly those located off major highways — have become gridlocked, especially on busy weekends," complains the opinion editor for California's Tribune newspaper in San Luis, Obispo. Drivers are reporting waits of half an hour or more — sometimes much more. One driver who posted on Reddit waited three hours to charge in Kettleman City on Thanksgiving weekend, turning a five-and-a-half-hour trip into a 10-and-a-half-hour ordeal... Look, it's one thing to spend 30 or 40 minutes charging a battery, which is a given when you take an EV on a road trip. But having to wait in a long line just to get to an open charging bay? What's happening now is "potentially a nightmare for drivers as more EVs hit the road," described GreenBiz transportation writer Vartan Badalian [after a March visit to New York State]...

Badalian, the transportation writer, has an idea on how to deal with gridlock. "As you approach a full charging location, your EV (of any make) connects to the charging location and enters itself into a virtual queue, with entry to the queue dependent upon close geographical proximity. Drivers then park in an available normal parking spot, and only when prompted, proceed to plug in and charge. If a driver attempted to charge before their turn, the chargers would simply not communicate with the vehicle..."
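As a rough illustration of how such a queue could behave, here is a short TypeScript sketch; the class, the one-kilometer join radius, and the bay count are hypothetical choices made for the example, not part of Badalian's proposal.

```typescript
// Hypothetical sketch of a proximity-gated virtual charging queue.
interface Vehicle {
  id: string;
  distanceKm: number; // current distance from the charging site
}

class VirtualChargingQueue {
  private waiting: string[] = [];

  constructor(private bays: number, private joinRadiusKm = 1.0) {}

  // A car may enter the queue only once it is geographically close to the site.
  join(v: Vehicle): boolean {
    if (v.distanceKm > this.joinRadiusKm || this.waiting.includes(v.id)) {
      return false;
    }
    this.waiting.push(v.id);
    return true;
  }

  // Chargers only start a session for vehicles whose turn has come up,
  // so plugging in early simply does nothing.
  mayCharge(vehicleId: string): boolean {
    const position = this.waiting.indexOf(vehicleId);
    return position >= 0 && position < this.bays;
  }

  // Unplugging frees a bay for the next vehicle in line.
  finish(vehicleId: string): void {
    this.waiting = this.waiting.filter(id => id !== vehicleId);
  }
}
```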

If only that would work. Unfortunately, plug-in chargers have a tough enough time fulfilling their basic task of delivering electricity. Here's how bad it is: A survey of non-Tesla chargers conducted in the Bay Area in 2022 found that 27% of chargers were not working. This would be a good time to point out that Tesla superchargers have a much better performance record than other types of chargers, and that Tesla is opening "select" supercharger stations to other types of vehicles. Also, efforts are being made to increase the reliability of public chargers; the U.S. Department of Transportation just awarded $149 million in grants for the repair and replacement of broken chargers. The biggest share, $64 million, is going to California. In other words, hope is on the horizon. For now, though, we seem to be relying on a haphazard honor system.

How hard would it be to use some orange cones to designate a "waiting lane"? That way drivers pulling in could get an immediate read on how long they might have to wait... Also, limit drivers to an 80% charge, and require them to drive away within, say, five minutes after the charger has stopped. That might be hard to enforce, but peer pressure can be a powerful incentive. The point is, somebody has to step up and make charging stations more driver-friendly, and the obvious choice is whoever is in charge of the chargers.

Cellphones

Could Apostrophy OS Be the Future of Cellphone Privacy? (stuff.co.za) 100

"Would you pay $15 a month so Android doesn't track you and send all of that data back to Google?" asks Stuff South Africa: A new Swiss-based privacy company thinks $15 is a fair fee for that peace of mind. "A person's data is the original digital currency," argues Apostrophy, which has created its own operating system, called Apostrophy OS.

It's based on Android — don't panic — but on a version that has already been stripped of Google's intrusiveness by another privacy project called GrapheneOS, which used to be known as CopperheadOS. Launched in 2014, it was briefly known as the Android Hardening project before being rebranded as GrapheneOS in 2019. Apostrophy OS is "focused on empowering our users, not leveraging them," it says, and is "purposely Swiss-based, so we can be champions of data sovereignty".

What it does, they say, is separate the apps from the underlying architecture of the operating system and therefore prevent apps from accessing miscellaneous personal data, especially the all-important location data so beloved of surveillance capitalism... Apostrophy OS has its own app store, but also cleverly allows users to access the Google Play Store. If you think that is defeating the point, Apostrophy argues that those apps can't get to the vitals of your digital life. Apostrophy OS has "partitioned segments prioritising application integrity and personal data privacy".

The service is free for one year with the purchase of the new MC02 phone from Swiss manufacturer Punkt, according to PC Magazine. "The phone costs $749 and is available for preorder now. It will ship at the end of January." Additional features include a built-in VPN called Digital Nomad, based on the open-source WireGuard framework, to secure your activity against outside snooping; the base subscription includes "exit addresses" in the US, Germany, and Japan.
Power

Do Electric Vehicles Fail at a Lower Rate Than Gas Cars In Extreme Cold? (electrek.co) 216

In a country experiencing extreme cold — and where almost 1 in 4 cars are electric — a roadside assistance company says it's still gas-powered cars that are experiencing the vast majority of problems starting.

Electrek argues that while extreme cold may affect chargers, "it mainly gets attention because it's a new technology and it fails for different reasons than gasoline vehicles in the cold." Viking, a roadside assistance company (think AAA), says that it responded to 34,000 assistance requests in the first 9 days of the year. Viking says that only 13% of the cases came from electric vehicles (via TV2 — translated from Norwegian) ["13 percent of the cases with starting difficulties are electric cars, while the remaining 87 percent are fossil cars..."]

To be fair, this data doesn't adjust for the age of the vehicles. Older gas-powered cars fail at a higher rate than the new ones and electric vehicles are obviously much more recent on average.

Thanks to long-time Slashdot reader Geoffrey.landis for sharing the article.
Power

'For Truckers Driving EVs, There's No Going Back' (yahoo.com) 153

The Washington Post looks at "a small but growing group of commercial medium-to-heavy-duty truck drivers who use electric trucks."

"These drivers — many of whom operate local or regional routes that don't require hundreds of miles on the road in a day — generally welcome the transition to electric, praising their new trucks' handling, acceleration, smoothness and quiet operation. "Everyone who has had an EV has no aspirations to go back to diesel at this point," said Khari Burton, who drives an electric Volvo VNR in the Los Angeles area for transport company IMC. "We talk about it and it's all positivity. I really enjoy the smoothness ... and just the quietness as well." Mike Roeth, the executive director of the North American Council for Freight Efficiency, said many drivers have reported that the new vehicles are easier on their bodies — thanks to both less rocking off the cab, assisted steering and the quiet motor. "Part of my hypothesis is that it will help truck driver retention," he said. "We're seeing people who would retire driving a diesel truck now working more years with an electric truck."

Most of the electric trucks on the road today are doing local or regional routes, which are easier to manage with a truck that gets only up to 250 miles of range... Trucking advocates say electric has a long way to go before it can take on longer routes. "If you're running very local, very short mileage, there may be a vehicle that can do that type of route," said Mike Tunnell, the executive director of environmental affairs for the American Trucking Associations. "But for the average haul of 400 miles, there's just nothing that's really practical today."

There are other concerns, according to the article. "[S]ome companies and trucking associations worry this shift, spurred in part by a California law mandating a switch to electric or emissions-free trucks by 2042, is happening too fast. While electric trucks might work well in some cases, they argue, the upfront costs of the vehicles and their charging infrastructure are often too heavy a lift."

But this is probably the key sentence in the article: For the United States to meet its climate goals, virtually all trucks must be zero-emissions by 2050. While trucks are only 4 percent of the vehicles on the road, they make up almost a quarter of the country's transportation emissions.
The article cites estimates that right now there are 12.2 million trucks on America's highways — and barely 0.1% (about 13,000) are electric. "Around 10,000 of those trucks were just put on the road in 2023, up from 2,000 the year before." (And they add that Amazon alone has thousands of Rivian's electric delivery vans, operating in 1,800 cities.)

But the article's overall message seems to be that when it comes to the trucks, "the drivers operating them say they love driving electric." And it includes comments from actual truckers:
  • 49-year-old Frito-Lay trucker Gary LaBush: "I was like, 'What's going on?' There was no noise — and no fumes... it's just night and day."
  • 66-year-old Marty Boots: "Diesel was like a college wrestler. And the electric is like a ballet dancer... You get back into diesel and it's like, 'What's wrong with this thing?' Why is it making so much noise? Why is it so hard to steer?"

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale-from-Production starts with Clyso assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which was quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...

For over a week, we looked at everything from BIOS settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950GiB/s — then tried some more performance optimizations...


Hardware

Researchers Claim First Functioning Graphene-Based Chip (ieee.org) 4

An anonymous reader quotes a report from IEEE Spectrum: Researchers at Georgia Tech, in Atlanta, have developed what they are calling the world's first functioning graphene-based semiconductor. This breakthrough holds the promise to revolutionize the landscape of electronics, enabling faster traditional computers and offering a new material for future quantum computers. The research, published on January 3 in Nature and led by Walt de Heer, a professor of physics at Georgia Tech, focuses on leveraging epitaxial graphene, a crystal structure of carbon chemically bonded to silicon carbide (SiC). This novel semiconducting material, dubbed semiconducting epitaxial graphene (SEC) -- or alternatively, epigraphene -- boasts enhanced electron mobility compared with that of traditional silicon, allowing electrons to traverse with significantly less resistance. The outcome is transistors capable of operating at terahertz frequencies, offering speeds 10 times as fast as that of the silicon-based transistors used in current chips.

De Heer describes the method used as a modified version of an extremely simple technique that has been known for over 50 years. "When silicon carbide is heated to well over 1,000C, silicon evaporates from the surface, leaving a carbon-rich surface which then forms into graphene," says de Heer. This heating step is done with an argon quartz tube in which a stack of two SiC chips are placed in a graphite crucible, according to de Heer. Then a high-frequency current is run through a copper coil around the quartz tube, which heats the graphite crucible through induction. The process takes about an hour. De Heer added that the SEC produced this way is essentially charge neutral, and when exposed to air, it will spontaneously be doped by oxygen. This oxygen doping is easily removed by heating it at about 200C in vacuum. "The chips we use cost about [US] $10, the crucible about $1, and the quartz tube about $10," said de Heer. [...]

De Heer and his research team concede, however, that further exploration is needed to determine whether graphene-based semiconductors can surpass the current superconducting technology used in advanced quantum computers. The Georgia Tech team do not envision incorporating graphene-based semiconductors with standard silicon or compound semiconductor lines. Instead, they are aiming for a paradigm shift beyond silicon, utilizing silicon carbide. They are developing methods, such as coating SEC with boron nitride, to protect and enhance its compatibility with conventional semiconductor lines. Comparing their work with commercially available graphene field-effect transistors (GFETs), de Heer explains that there is a crucial difference: "Conventional GFETs do not use semiconducting graphene, making them unsuitable for digital electronics requiring a complete transistor shutdown." He says that the SEC developed by his team allows for a complete shutdown, meeting the stringent requirements of digital electronics. De Heer says that it will take time to develop this technology. "I compare this work to the Wright brothers' first 100-meter flight. It will mainly depend on how much work is done to develop it."

Operating Systems

Huawei Makes a Break From Android With Next Version of Harmony OS 27

China's Huawei will not support Android apps on the latest iteration of its in-house Harmony operating system, domestic financial media Caixin reported, as the company looks to bolster its own software ecosystem. From a report: The company plans to roll out a developer version of its HarmonyOS Next platform in the second quarter of this year followed by a full commercial version in the fourth quarter, it said in a company statement highlighting the launch event for the platform in its home city of Shenzhen on Thursday.

Huawei first unveiled its proprietary Harmony system in 2019 and prepared to launch it on some smartphones a year later after U.S. restrictions cut its access to Google's technical support for its Android mobile OS. However, earlier versions of Harmony allowed apps built for Android to be used on the system, which will no longer be possible, according to Caixin.
Communications

Viasat Tries To Stop Citizen Effort To Revive FCC Funding for Starlink (pcmag.com) 78

A resident in Virginia has urged the Federal Communications Commission to reconsider canceling $886 million in federal funding for SpaceX's Starlink system. But rival satellite company Viasat has gone out of its way to oppose the citizen-led petition. PCMag: On Jan. 1, the FCC received a petition from the Virginia resident Greg Weisiger asking the commission to reconsider denying the $886 million to SpaceX. "Petitioner is at an absolute loss to understand the Commission's logic with these denials," wrote Weisiger, who lives in Midlothian, Virginia. "It is abundantly clear that Starlink has a robust, reliable, affordable service for rural and insular locations in all states and territories."

The petition arrived a few weeks after the FCC denied SpaceX's appeal to receive $886 million from the commission's Rural Digital Opportunity Fund, which is designed to subsidize 100Mbps to gigabit broadband across the US. SpaceX wanted to use the funds to expand Starlink access in rural areas. But the FCC ruled that "Starlink is not reasonably capable of offering the required high-speed, low latency service throughout the areas where it won auction support." Weisiger disagrees. In his petition, he writes that the FCC's decision will deprive him of federal support to bring high-speed internet to his home. "Thousands of other Virginia locations were similarly denied support," he added.

Windows

Microsoft Bringing Teams Meeting Reminders To Windows 11 Start Menu (theverge.com) 47

Microsoft is getting ready to place Teams meeting reminders on the Start menu in Windows 11. From a report: The software giant has started testing a new build of Windows 11 with Dev Channel testers that includes a Teams meeting reminder in the recommended section of the Start menu. Microsoft is also testing an improved way to instantly access new photos and screenshots from Android devices. [...] The Teams meeting reminders will be displayed alongside the regular recently used and recommended file list on the Start menu, and they won't be displayed for non-business users of Windows 11.
Math

How Much of the World Is It Possible to Model? 45

Dan Rockmore, the director of the Neukom Institute for Computational Sciences at Dartmouth College, writing for The New Yorker: Recently, statistical modelling has taken on a new kind of importance as the engine of artificial intelligence -- specifically in the form of the deep neural networks that power, among other things, large language models, such as OpenAI's G.P.T.s. These systems sift vast corpora of text to create a statistical model of written expression, realized as the likelihood of given words occurring in particular contexts. Rather than trying to encode a principled theory of how we produce writing, they are a vertiginous form of curve fitting; the largest models find the best ways to connect hundreds of thousands of simple mathematical neurons, using trillions of parameters. They create a vast data structure akin to a tangle of Christmas lights whose on-off patterns attempt to capture a chunk of historical word usage. The neurons derive from mathematical models of biological neurons originally formulated by Warren S. McCulloch and Walter Pitts, in a landmark 1943 paper, titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." McCulloch and Pitts argued that brain activity could be reduced to a model of simple, interconnected processing units, receiving and sending zeros and ones among themselves based on relatively simple rules of activation and deactivation.
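For readers who want to see how simple the 1943 unit really is, here is a minimal TypeScript sketch of a McCulloch-Pitts neuron: binary inputs, a weighted sum, and a hard threshold. The particular weights and thresholds are illustrative, not drawn from the paper.

```typescript
// A McCulloch-Pitts unit "fires" (outputs 1) when the weighted sum of its
// binary inputs reaches a threshold, and outputs 0 otherwise.
function mpNeuron(inputs: number[], weights: number[], threshold: number): 0 | 1 {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], 0);
  return sum >= threshold ? 1 : 0;
}

// With suitable weights and thresholds, such units implement simple logic gates:
const AND = (a: number, b: number) => mpNeuron([a, b], [1, 1], 2);
const OR = (a: number, b: number) => mpNeuron([a, b], [1, 1], 1);

console.log(AND(1, 1), AND(1, 0)); // 1 0
console.log(OR(0, 1), OR(0, 0));   // 1 0
```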

The McCulloch-Pitts model was intended as a foundational step in a larger project, spearheaded by McCulloch, to uncover a biological foundation of psychiatry. McCulloch and Pitts never imagined that their cartoon neurons could be trained, using data, so that their on-off states linked to certain properties in that data. But others saw this possibility, and early machine-learning researchers experimented with small networks of mathematical neurons, effectively creating mathematical models of the neural architecture of simple brains, not to do psychiatry but to categorize data. The results were a good deal less than astonishing. It wasn't until vast amounts of good data -- like text -- became readily available that computer scientists discovered how powerful their models could be when implemented on vast scales. The predictive and generative abilities of these models in many contexts are beyond remarkable. Unfortunately, it comes at the expense of understanding just how they do what they do. A new field, called interpretability (or X-A.I., for "explainable" A.I.), is effectively the neuroscience of artificial neural networks.

This is an instructive origin story for a field of research. The field begins with a focus on a basic and well-defined underlying mechanism -- the activity of a single neuron. Then, as the technology scales, it grows in opacity; as the scope of the field's success widens, so does the ambition of its claims. The contrast with climate modelling is telling. Climate models have expanded in scale and reach, but at each step the models must hew to a ground truth of historical, measurable fact. Even models of covid or elections need to be measured against external data. The success of deep learning is different. Trillions of parameters are fine-tuned on larger and larger corpora that uncover more and more correlations across a range of phenomena. The success of this data-driven approach isn't without danger. We run the risk of conflating success on well-defined tasks with an understanding of the underlying phenomenon -- thought -- that motivated the models in the first place.

Part of the problem is that, in many cases, we actually want to use models as replacements for thinking. That's the raison d'être of modelling -- substitution. It's useful to recall the story of Icarus. If only he had just done his flying well below the sun. The fact that his wings worked near sea level didn't mean they were a good design for the upper atmosphere. If we don't understand how a model works, then we aren't in a good position to know its limitations until something goes wrong. By then it might be too late. Eugene Wigner, the physicist who noted the "unreasonable effectiveness of mathematics," restricted his awe and wonder to its ability to describe the inanimate world. Mathematics proceeds according to its own internal logic, and so it's striking that its conclusions apply to the physical universe; at the same time, how they play out varies more the further that we stray from physics. Math can help us shine a light on dark worlds, but we should look critically, always asking why the math is so effective, recognizing where it isn't, and pushing on the places in between.
The Internet

'Where Have All the Websites Gone?' (fromjason.xyz) 171

An anonymous reader shares an essay: No one clicks a webpage hoping to learn which cat can haz cheeseburger. Weirdos, maybe. Sickos. No, we get our content from a For You Page now -- algorithmically selected videos and images made by our favorite creators, produced explicitly for our preferred platform. Which platform doesn't matter much. So long as it's one of the big five. Creators churn out content for all of them. It's a technical marvel, that internet. Something so mindblowingly impressive that if you showed it to someone even thirty years ago, their face would melt the fuck off. So why does it feel like something's missing? Why are we all so collectively unhappy with the state of the web?

A tweet went viral this Thanksgiving when a Twitter user posed a question to their followers. (The tweet said: "It feels like there are no websites anymore. There used to be so many websites you could go on. Where did all the websites go?") A peek at the comments, and I could only assume the tweet struck a nerve. Everyone had their own answer. Some comments blamed the app-ification of the web. "Everything is an app now!," one user replied. Others point to the death of Adobe Flash and how so many sites and games died along with it. Everyone agrees that websites have indeed vanished, and we all miss the days we were free to visit them.

The Internet

Bing Gained Less Than 1% Market Share Since Adding Bing Chat, Report Finds (seroundtable.com) 31

According to StatCounter, Bing's market share grew by less than 1 percentage point since launching Bing Chat (now known as Copilot) roughly a year ago. From a report: Bloomberg reported (paywalled) on the StatCounter data, saying, "But Microsoft's search engine ended 2023 with just 3.4% of the global search market, according to data analytics firm StatCounter, up less than 1 percentage point since the ChatGPT announcement." Google still dominates the global search market with a 91.6% market share, followed by Bing's 3.4%, Yandex's 1.6% and Yahoo's 1.1%. "Other" search engines accounted for a total of just 2.2% of the global search market.

You can view the raw chart and data from StatCounter here.
Google

Google To Invest $1 Billion In UK Data Center (reuters.com) 6

Google announced today that it will invest $1 billion building a data center near London. Reuters reports: The data centre, on a 33-acre (13-hectare) site bought by Google in 2020, will be located in the town of Waltham Cross, about 15 miles north of central London, the Alphabet-owned company said in a statement. The British government, which is pushing for investment by businesses to help fund new infrastructure, particularly in growth industries like technology and artificial intelligence, described Google's investment as a "huge vote of confidence" in the UK.

"Google's $1 billion investment is testament to the fact that the UK is a centre of excellence in technology and has huge potential for growth," Prime Minister Rishi Sunak said in the Google statement. The investment follows Google's $1 billion purchase of a central London office building in 2022, close to Covent Garden, and another site in nearby King's Cross, where it is building a new office and where its AI company DeepMind is also based.
In November, Microsoft announced plans to pump $3.2 billion into Britain over the next three years.
Android

Google Is Rolling Out WebGPU For Next-Gen Gaming On Android 14

In a blog post today, Google announced that WebGPU is "now enabled by default in Chrome 121 on devices running Android 12 and greater powered by Qualcomm and ARM GPUs," with support for more Android devices rolling out gradually. Previously, the API was only available on Windows PCs that support Direct3D 12, macOS, and ChromeOS devices that support Vulkan.

Google says WebGPU "offers significant benefits such as greatly reduced JavaScript workload for the same graphics and more than three times improvements in machine learning model inferences." With lower-level access to a device's GPU, developers are able to enable richer and more complex visual content in web applications. This will be especially apparent with games, as you can see in this demo.
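For developers curious what adopting the API looks like, here is a minimal, hedged TypeScript sketch of the usual feature check and device request (the GPU types come from the @webgpu/types package; the fallback behavior is just an illustration, not Google's guidance).

```typescript
// Minimal WebGPU bootstrap: check for the API, request an adapter, then a device.
async function initWebGPU(): Promise<GPUDevice | null> {
  // navigator.gpu exists only where WebGPU is enabled (e.g. Chrome 121+ on
  // supported Android 12+ devices); otherwise fall back to WebGL or CPU paths.
  if (!('gpu' in navigator)) {
    console.warn('WebGPU unavailable; falling back.');
    return null;
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.warn('No suitable GPU adapter found.');
    return null;
  }
  // The returned device is what creates buffers, shader modules, and
  // compute/render pipelines, e.g. device.createShaderModule({ code: wgsl }).
  return adapter.requestDevice();
}
```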

Next up: WebGPU for Chrome on Linux.
AI

Coursera Saw Signups For AI Courses Every Minute in 2023 (reuters.com) 13

U.S. ed-tech platform Coursera added a new user every minute on average for its AI courses in 2023, CEO Jeff Maggioncalda said on Thursday, in a clear sign of people upskilling to tap a potential boom in generative AI. Reuters: The technology behind OpenAI's ChatGPT has taken the world by storm and sparked a race among companies to roll out their own versions of the viral chatbot. "I'd say the real hotspot is generative AI because it affects so many people," he told Reuters in an interview at the World Economic Forum in Davos.

Coursera is looking to offer AI courses along with companies that are the frontrunners in the AI race, including OpenAI and Google's DeepMind, Maggioncalda said. Investors had earlier feared that apps based on generative AI might replace ed-tech firms, but on the contrary the technology has encouraged more people to upskill, benefiting companies such as Coursera. The company has more than 800 AI courses and saw more than 7.4 million enrollments last year. Every student on the platform gets access to a ChatGPT-like AI assistant called "Coach" that provides personalized tutoring.

AI

Mark Zuckerberg's New Goal is Creating AGI (theverge.com) 94

OpenAI's stated mission is to create artificial general intelligence, or AGI. Demis Hassabis, the leader of Google's AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race. From a report: While he doesn't have a timeline for when AGI will be reached, or even an exact definition for it, he wants to build it. At the same time, he's shaking things up by moving Meta's AI research group, FAIR, to the same part of the company as the team building generative AI products across Meta's apps. The goal is for Meta's AI breakthroughs to more directly reach its billions of users. "We've come to this view that, in order to build the products that we want to build, we need to build for general intelligence," Zuckerberg tells me in an exclusive interview. "I think that's important to convey because a lot of the best researchers want to work on the more ambitious problems."

[...] No one working on AI, including Zuckerberg, seems to have a clear definition for AGI or an idea of when it will arrive. "I don't have a one-sentence, pithy definition," he tells me. "You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence. But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition." He sees its eventual arrival as being a gradual process, rather than a single moment. "I'm not actually that sure that some specific threshold will feel that profound." As Zuckerberg explains it, Meta's new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn't think that its ability to generate code made sense for how people would use an LLM in Meta's apps. But it's still an important skill to develop for building smarter AI, so Meta built it anyway.
External research has pegged Meta's H100 shipments for 2023 at 150,000, a number that is tied only with Microsoft's shipments and at least three times larger than everyone else's. When its Nvidia A100s and other AI chips are accounted for, Meta will have a stockpile of almost 600,000 GPUs by the end of 2024, according to Zuckerberg.
