Power

Fiat 500e EVs Will Begin Battery Swap Testing In 2024 (theverge.com) 90

An anonymous reader quotes a report from The Verge: Stellantis struck a deal with California-based EV battery swapping company Ample to power a fleet of shared Fiat 500e vehicles in Spain. But the company says the deal could eventually expand to include personally owned EVs in Europe and the US as well. By becoming one of the first Western automakers to embrace battery swapping technology, Stellantis is betting that EV charging infrastructure in Europe and the US will remain a barrier to adoption in the near future, necessitating other solutions. Battery swapping could theoretically help EV owners power up and get moving without having to wait for long stretches at a charging station.

Stellantis will work with Ample to launch a battery swapping system for a fleet of Fiat 500e vehicles as part of a car-sharing service through its Free2move subsidiary. The service will first appear in Madrid in 2024, where the Fiat 500e is already available. (The tiny EV won't come to North America until next year.) Ample has four stations already in operation in the city and plans to build an additional nine stations in the months to come. Stellantis will need to install modular batteries in the Fiat 500e in order to be compatible with Ample's swapping system. The process works by driving the vehicle into a station, where it gets raised slightly. Ample's robot arms remove the spent battery from underneath the vehicle, replace it with a fully charged one, and then lower the vehicle. The company says the whole process can take as little as five minutes. "Our system knows how many batteries are in the Fiat 500e, knows how to extract each one of those modules, and put them back in the same arrangement," Khaled Hassounah, CEO of Ample, said in a briefing with reporters.
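As described, the swap is a fixed sequence: raise the car, pull each spent module, insert charged modules in the same arrangement, lower the car. A minimal sketch of that sequence follows; the function, step names, and module count are illustrative assumptions, not Ample's actual control software:

```python
def swap_sequence(num_modules: int) -> list[str]:
    """Return the ordered steps of one battery swap.

    Illustrative only: the station knows the per-vehicle module count
    and arrangement, and replaces modules in the same positions.
    """
    steps = ["raise_vehicle"]
    steps += [f"remove_module_{i}" for i in range(num_modules)]   # robot arms pull spent modules
    steps += [f"insert_module_{i}" for i in range(num_modules)]   # charged modules, same arrangement
    steps.append("lower_vehicle")                                 # whole cycle: ~5 minutes
    return steps
```

The point of the fixed ordering is that the robot never improvises: every vehicle model maps to a known module layout, which is why Stellantis must first fit the Fiat 500e with modular batteries.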

Starting with a small fleet of shared vehicles in one city will help Stellantis see how well Ample's system works and whether it can be scaled to new markets and to include privately owned vehicles. If the company does decide to expand its partnership with Ample, the Fiat 500e will likely be the first vehicle to support the technology, said Ricardo Stamatti, senior VP for charging and energy at Stellantis. Customers who buy cars that are compatible with Ample's swapping system would then just subscribe to a battery, opening up a possible new line of revenue for Stellantis. "We believe that this is actually an infrastructure play that can and will scale," Stamatti added.

Robotics

Are CAPTCHAs More Than Just Annoying? (msn.com) 69

The Atlantic writes: Failing a CAPTCHA isn't just annoying — it keeps people from navigating the internet. Older people can take considerably more time to solve different kinds of CAPTCHAs, according to the UC Irvine researchers, and other research has found that the same is true for non-native English speakers. The annoyance can lead a significant chunk of users to just give up.
But is it all also just a big waste of time? The article notes there are now even CAPTCHA-solving services you can hire. ("2Captcha will solve a thousand CAPTCHAs for a dollar, using human workers paid as low as 50 cents an hour. Newer companies, such as Capsolver, claim to instead be using AI and charge roughly the same price.")

And they also write that this summer saw more discouraging news, from a recent study by researchers at UC Irvine and Microsoft:

- Most of the 1,400 human participants took 15 to 26 seconds to solve a CAPTCHA with a grid of images, with 81% accuracy.

- A bot tested in March 2020, meanwhile, was shown to solve similar puzzles in an average of 19.9 seconds, with 83% accuracy.

The article ultimately argues that for roughly 20 years, "CAPTCHAs have been engaged in an arms race against the machines," and that now "The burden is on CAPTCHAs to keep up" — which they're doing by evolving. The most popular type, Google's reCAPTCHA v3, should mostly be okay. It typically ascertains your humanity by monitoring your activity on websites before you even click the checkbox, comparing it with models of "organic human interaction," Jess Leroy, a senior director of product management at Google Cloud, the division that includes reCAPTCHA, told me.
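On the integrator's side, reCAPTCHA v3's behavioral model surfaces as a score from 0.0 (likely a bot) to 1.0 (likely human) in the server-side verification response, which each site checks against a threshold of its own choosing. A minimal sketch of that check; the success/score/action fields follow Google's documented v3 response shape, while the helper function and the 0.5 threshold are assumptions:

```python
def assess_recaptcha(verification: dict, expected_action: str,
                     threshold: float = 0.5) -> bool:
    """Decide whether to treat a request as human, given the JSON
    returned by reCAPTCHA v3's server-side verification endpoint.

    v3 never shows the user a puzzle; it returns a 0.0-1.0 score,
    and the site owner picks the cutoff (0.5 is a common starting
    point, not a Google-mandated value).
    """
    return (verification.get("success", False)
            and verification.get("action") == expected_action
            and verification.get("score", 0.0) >= threshold)
```

So a login request scoring 0.9 passes silently, while a low-scoring one can be routed to a fallback such as an email challenge, without the user ever clicking on crosswalks.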
But the automotive site Motor Biscuit speculates something else could also be happening: "Have you noticed it likes to ask about cars, buses, crosswalks, and other vehicle-related images lately?" Google has not confirmed that it uses the reCAPTCHA system for autonomous vehicles, but here are a few reasons why I think that could be the case. Self-driving cars from Waymo and other brands are improving every day, but the process requires a lot of critical technology and data to improve continuously.

According to an old Google Security Blog, using reCAPTCHA and Street View to make locations on Maps more accurate was happening way back in 2014... [I]t would ask users to find the street numbers found on Google Street View and confirm the numbers matched. Previously, it would use distorted text or letters. Using this data, Google could correlate the numbers with addresses and help pinpoint the location on Google Maps...

Medium reports that more than 60 million CAPTCHAs are being solved every day, which adds up to around 160,000 human hours of work. If these were helping locate addresses, why not also help identify other objects? Why not help differentiate a bus from a car, and even choose a crosswalk over a light pole?
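Taken together, the two quoted figures imply an average solve time of just under ten seconds per CAPTCHA; a quick sanity check of that arithmetic:

```python
captchas_per_day = 60_000_000
human_hours_per_day = 160_000

# hours -> seconds, spread across the day's CAPTCHAs
seconds_per_captcha = human_hours_per_day * 3600 / captchas_per_day
print(seconds_per_captcha)  # 9.6 seconds each, on average
```

That average sits below the 15 to 26 seconds measured for image-grid CAPTCHAs in the UC Irvine study, plausibly because many of the 60 million daily solves are quicker checkbox or invisible-scoring variants rather than image grids.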

Thanks to Slashdot reader rikfarrow for suggesting the topic.
Robotics

America's Bowling Pins Face a Revolutionary New Technology: Strings (msn.com) 98

There's yet another technological revolution happening, reports the Los Angeles Times. Bowling alleys across America "are ditching traditional pinsetters — the machines that sweep away and reset pins — in favor of contraptions that employ string.

"Think of the pins as marionettes with nylon cords attached to their heads. Those that fall are lifted out of the way, as if by levitation, then lowered back into place after each frame... European bowling alleys have used string pinsetters for decades because they require less energy and maintenance.

"All you need is someone at the front counter to run back when the strings tangle." String pinsetters mean big savings, maybe salvation, for an industry losing customers to video games and other newfangled entertainment. That is why the U.S. Bowling Congress recently certified them for tournaments and league play. But there is delicate science at play here. Radius of gyration, coefficient of restitution and other obscure forces cause tethered pins to fly around differently than their free-fall counterparts. They don't even make the same noise. Faced with growing pushback, the bowling congress published new research this month claiming the disparity isn't nearly as great as people think.
Using a giant mechanical arm, powered by hydraulics and air pressure, they rolled "thousands of test balls from every angle, with various speeds and spins, on string-equipped lanes," according to the article: They found a configuration that resulted in 7.1% fewer strikes and about 10 pins fewer per game as compared to bowling with traditional pinsetters... Officials subsequently enlisted 500 human bowlers for more testing and, this time, reported finding "no statistically significant difference." But hundreds of test participants commented that bowling on strings felt "off." The pins seemed less active, they said. There were occasional spares whereby one pin toppled another without making contact, simply by crossing strings.

Nothing could be done about the muted sound. It's like hearing a drum roll — the ball charging down the lane — with no crashing cymbal at the end.

Still, one Northern California bowling alley spent $1 million to install the technology, and believes it will save them money — partly by cutting their electric bill in half. "We had a full-time mechanic and were spending up to $3,000 a month on parts."

The article also remembers that once upon a time, bowling alleys reset their pins using pinboys, "actual humans — mostly teenagers... scrambling around behind the lanes, gathering and resetting by hand," before they were replaced by machines after World War II.
Robotics

NYC Will Soon Be Home To 15 Robot-Run Vegetarian Restaurants From Chipotle's Founder (eater.com) 60

The founder of Chipotle is opening a new endeavor called Kernel, a vegetarian fast-casual restaurant that will be operated mostly by robots. Steve Ells is opening at least 15 locations of Kernel, the first by early 2024; the remainder are on track for NYC in the next two years, a spokesperson confirms. From a report: Kernel will serve vegetarian sandwiches, salads, and sides, made in a space that's around 1,000 square feet or smaller. Each location would employ three workers, the Wall Street Journal reported, "rather than the dozen that many fast-casual eateries have working." The menu pricing will be on par with Chipotle's, and, Ells says, the company will pay its human workers more and offer better benefits than other chains.

As you'd expect from the former CEO of Chipotle -- which had at least five foodborne illness outbreaks between 2015 and 2018, costing the company $25 million, per the Justice Department -- "the new system's design helps better ensure food safety," Ells told the Journal. It has taken $10 million in his personal funds to start Kernel, along with $36 million from investors. The company suggests customers may not want much interaction with other people -- and neither do CEOs. "We've taken a lot of human interaction out of the process and left just enough," he told the Journal. Yet in a 2022 study on the future of dining out conducted by the commerce site PYMNTS, 63 percent of the 2,500 diners surveyed believe restaurants are becoming increasingly understaffed, and 39 percent said they are becoming less personal.

Robotics

Could AI and Tech Advancements Bring a New Era of Evolution? (noemamag.com) 117

A professor of religion at Columbia University writes, "I do not think human beings are the last stage in the evolutionary process." Whatever comes next will be neither simply organic nor simply machinic but will be the result of the increasingly symbiotic relationship between human beings and technology. Bound together as parasite/host, neither people nor technologies can exist apart from the other because they are constitutive prostheses of each other... So-called "artificial" intelligence is the latest extension of the emergent process through which life takes ever more diverse and complex forms.
The article lists "four trajectories that will be increasingly important for the symbiotic relationship between humans and machines."

- Writing about neuroprosthetics, the professor argues that "Increasing possibilities for symbiotic relations between computers and brains will lead to alternative forms of intelligence that are neither human nor machinic, but something in between."

- Then there are biobots. The article argues that with nanotechnology, "it will be increasingly difficult to distinguish the natural from the artificial."

But there's also an interesting discussion about synthetic biology. "Michael Levin and his colleagues at the Allen Discovery Center of Tufts University — biologists, computer scientists and engineers — have created "xenobots," which are "biological robots" that were produced from embryonic skin and muscle cells from an African clawed frog." As Levin and his colleagues wrote in 2020...

Here we show a scalable pipeline for creating functional novel lifeforms: AI methods automatically design diverse candidate lifeforms in silico to perform some desired function, and transferable designs are then created using a cell-based construction toolkit to realize living systems with predicted behavior. Although some steps in this pipeline still require manual intervention, complete automation in the future would pave the way for designing and deploying living systems for a wide range of functions.

And the article concludes with a discussion of organic-relational AI: While Levin uses computational technology to create and modify biological organisms, the German neurobiologist Peter Robin Hiesinger uses biological organisms to model computational processes by creating algorithms that evolve. This work involves nothing less than developing a new form of "artificial" intelligence... Non-anthropocentric AI would not be merely an imitation of human intelligence, but would be as different from our thinking as fungi, dog and crow cognition is from human cognition.

Machines are becoming more like people and people are becoming more like machines. Organism and machine? Organism or machine? Neither organism nor machine? Evolution is not over; something new, something different, perhaps infinitely and qualitatively different, is emerging.

Who would want the future to be the endless repetition of the past?

Robotics

Robot Crushes Man To Death After Misidentifying Him As a Box (theguardian.com) 86

A robot at a distribution center in South Korea crushed a man to death after the machine apparently failed to differentiate him from the boxes of produce it was handling. The Guardian reports: The man, a robotics company worker in his 40s, was inspecting the robot's sensor operations at a distribution centre for agricultural produce in South Gyeongsang province. The industrial robot, which was lifting boxes filled with bell peppers and placing them on a pallet, appears to have malfunctioned and identified the man as a box, Yonhap reported, citing the police. The robotic arm pushed the man's upper body down against the conveyor belt, crushing his face and chest, according to Yonhap. He was transferred to the hospital but died later, the report said. The BBC notes that the man was "checking the robot's sensor operations ahead of its test run [...] scheduled for November 8." It was originally planned for November 6th, "but was pushed back by two days due to problems with the robot's sensor," the report adds.
Cloud

Matic's Robot Vacuum Maps Spaces Without Sending Data To the Cloud (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: A relatively new venture founded by Navneet Dalal, an ex-Google research scientist, Matic, formerly known as Matician, is developing robots that can navigate homes to clean "more like a human," as Dalal puts it. Matic today revealed that it has raised $29.5 million, inclusive of a $24 million Series A led by a who's who of tech luminaries, including GitHub co-founder Nat Friedman, Stripe co-founders John and Patrick Collison, Quora CEO Adam D'Angelo and Twitter co-founder and Block CEO Jack Dorsey.

Dalal co-founded Matic in 2017 with Mehul Nariyawala, previously a lead product manager at Nest, where he oversaw Nest's security camera portfolio. [...] Early on, Matic focused on building robot vacuums -- but not because Dalal, who serves as the company's CEO, saw Matic competing with the iRobots and Ecovacs of the world. Rather, floor-cleaning robots provided a convenient means to thoroughly map indoor spaces, he and Nariyawala believed. "Robot vacuums became our initial focus due to their need to cover every inch of indoor surfaces, making them ideal for mapping," Dalal said. "Moreover, the floor-cleaning robot market was ripe for innovation." [...] "Matic was inspired by busy working parents who want to live in a tidy home, but don't want to spend their limited free time cleaning," Dalal said. "It's the first fully autonomous floor cleaning robot that continuously learns and adapts to users' cleaning preferences without ever compromising their privacy."

There are a lot of bold claims in that statement. But on the subject of privacy, Matic does indeed -- or at least claims to -- ensure data doesn't leave a customer's home. All processing happens on the robot (on hardware "equivalent to an iPhone 6," Dalal says), and mapping and telemetry data is saved locally, not in the cloud, unless users opt in to sharing. Matic doesn't even require an internet connection to get up and running -- only a smartphone paired over a local Wi-Fi network. The Matic vacuum understands an array of voice commands and gestures for fine-grained control. And -- unlike some robot vacuums in the market -- it can pick up cleaning tasks where it left off in the event that it's interrupted (say, by a wayward pet). Dalal says that Matic can also prioritize areas to clean depending on factors like the time of day and nearby rooms and furniture.
Dalal insists that all this navigational lifting can be accomplished with cameras alone. "In order to run all the necessary algorithms, from 3D depth to semantics to ... controls and navigation, on the robot, we had to vertically integrate and hyper-optimize the entire codebase," Dalal said, "from modifying the kernel to building a first-of-its-kind iOS app with live 3D mapping. This enables us to deliver an affordable robot to our customers that solves a real problem with full autonomy."

The robot won't be cheap. It starts at $1,795 but will be available for a limited time at a discounted price of $1,495.
AI

Boston Dynamics Robot Dog Talks Using OpenAI's ChatGPT (arstechnica.com) 31

Boston Dynamics has infused one of its robot dogs with OpenAI's ChatGPT, allowing it to speak in a variety of voices and accents "including a debonair British gentleman, a sarcastic and irreverent American named Josh, and a teenage girl who is so, like, over it," reports the Daily Beast. From the report: The robot was a result of a hackathon in which the Boston Dynamics engineers combined a variety of AI technologies including ChatGPT, voice recognition software, voice creation software, and image processing AI with the company's famous "Spot," the robot dog known for its ability to jump rope and reinforce the police state. The bot also had some upgrades including image recognition software combined with a "head" sensor that the engineers decorated with hats and googly eyes producing incredibly creepy results.

The team created a number of different versions of the robot including a "tour guide" personality that seemed to recognize the layout of the Boston Dynamics warehouse, and was able to provide descriptions and the history behind the various locations in the workplace. "Welcome to Boston Dynamics! I am Spot, your tour guide robot," the android said in the video. "Let's explore the building together!" In the video, the robot can be seen "speaking" and responding to different humans and a variety of prompts. For example, an engineer asked Spot for a haiku, to which it quickly responded with one. After one engineer, Klingensmith, said he was thirsty, the robot seemed to direct him to the company's snack area. "Here we are at the snack bar and coffee machine," Spot said. "This is where our human companions find their energizing elixirs."
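The hackathon pipeline described above (speech recognition in, a persona-prompted language model in the middle, synthesized speech out) mostly comes down to prepending a personality prompt to each exchange. A sketch of that message assembly; the role/content format matches OpenAI's chat API, but the helper function and persona text are illustrative assumptions, not Boston Dynamics' code:

```python
def build_messages(persona: str, history: list, user_text: str) -> list:
    """Assemble one chat-completion request: system persona first,
    then prior turns, then the new user utterance."""
    messages = [{"role": "system", "content": persona}]
    for user_turn, bot_turn in history:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": bot_turn})
    messages.append({"role": "user", "content": user_text})
    return messages

# Hypothetical persona in the spirit of the demo's tour guide.
tour_guide = ("You are Spot, a tour guide robot in the Boston Dynamics "
              "warehouse. Describe nearby locations playfully.")
msgs = build_messages(tour_guide, [], "I'm thirsty")
```

The resulting list would be sent to the chat-completions endpoint and the reply handed to a text-to-speech voice; swapping the system prompt is what yields the British gentleman, sarcastic Josh, and the other personalities.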

Robotics

College Warns To 'Avoid All Robots' After Bomb Threat Involving Food Delivery Robots (nbcnews.com) 38

Oregon State University on Tuesday urged students and staff to "avoid all robots" after a bomb threat was reported in Starship food delivery robots. NBC News reports: The warning was issued at 12:20 p.m. local time and by 12:59 p.m., the potentially dangerous bots had been isolated at safe locations, the school said. The robots were being "investigated by" a technician, OSU said in a statement posted at 1:23 p.m. "Remain vigilant for suspicious activity," the school said. Finally, at around 1:45 p.m., the school issued an "all clear" alert. "Emergency is over," the message said. "You may now resume normal activities. Robot inspection continues in a safe location."

A representative for Starship, the company that produces the robots, could not be immediately reached for comment. The company calls itself a "global leader in autonomous delivery" with agreements at a host of universities across the United States.
Developing...
IT

Matter 1.2 is a Big Move For the Smart Home Standard (theverge.com) 64

Matter -- the IoT connectivity standard with ambitions to fix the smart home and make all of our gadgets talk to each other -- has hit version 1.2, adding support for nine new types of connected devices. From a report: Robot vacuums, refrigerators, washing machines, and dishwashers are coming to Matter, as are smoke and CO alarms, air quality sensors, air purifiers, room air conditioners, and fans. It's a crucial moment for the success of the industry-backed coalition that counts 675 companies among its members. This is where it moves from the relatively small categories of door locks and light bulbs to the real moneymakers: large appliances.

The Connectivity Standards Alliance (CSA), the organization behind Matter, released the Matter 1.2 specification this week, a year after launching Matter 1.0, following through on its promise to release two updates a year. Now, appliance manufacturers can add support for Matter to their devices, and ecosystems such as Apple Home, Amazon Alexa, Google Home, and Samsung SmartThings can start supporting the new device types. Yes, this means you should finally be able to control a robot vacuum in the Apple Home app -- not to mention your wine fridge, dishwasher, and washing machine.

The initial feature set for the new device types includes basic function controls (start / stop, change mode) and notifications -- such as the temperature of your fridge, the status of your laundry, or whether smoke is detected. Robot vacuum support is robust -- remote start and progress notifications, cleaning modes (dry vacuum, wet mopping), and alerts for brush status, error reporting, and charging status. But there's no mapping, so you'll still need to use your vacuum app if you want to tell the robot where to go.

Robotics

Amazon Tests Humanoid Robot in Warehouse Automation Push (bloomberg.com) 33

Amazon says it's testing two new technologies to increase automation in its warehouses, including a trial of a humanoid robot. From a report: The humanoid robot, called Digit, is bipedal and can squat, bend and grasp items using clasps that imitate hands, the company said in a blog post Wednesday. It's built by Agility Robotics and will initially be used to help employees consolidate totes that have been emptied of items. Amazon invested in Agility Robotics last year.

[...] In addition to Digit, Amazon is testing a technology called Sequoia, which will identify and sort inventory into containers for employees, who will then pick the items customers have ordered, the company said. Remaining products are then consolidated in bins by a robotic arm called Sparrow, which the company revealed last year. The system is in use at an Amazon warehouse in Houston, the company said in a statement.

AI

Freak Accident in San Francisco Traps Pedestrian Under Robotaxi (msn.com) 104

In downtown San Francisco two vehicles were stopped at a red light on Monday night, reports the Washington Post — a regular car and a Cruise robotaxi. Both vehicles advanced when the light turned green, according to witness accounts and video recorded by the Cruise vehicle's internal cameras and reviewed by The Post. As the cars moved forward, the pedestrian entered the traffic lanes in front of them, according to the video, and was struck by the regular car. The video shows the victim rolling onto that vehicle's windshield and then being flung into the path of the driverless car, which stopped once it collided with the woman. According to Cruise spokesperson Hannah Lindow, the autonomous vehicle "braked aggressively to minimize the impact" but was unable to stop before rolling over the woman and coming to a halt. Photos published by the San Francisco Chronicle show the woman's leg sticking out from underneath the car's left rear wheel.
"According to Cruise, police had directed the company to keep the vehicle stationary, apparently with the pedestrian stuck beneath it," reports the San Francisco Chronicle.

Also from the San Francisco Chronicle: Austin Tutone, a bicycle delivery person, saw the woman trapped underneath the Cruise car and tried to reassure her as they waited for first-responders. "I told her, 'The ambulance is coming' and that she'd be okay. She was just screaming." He shared a photo of the aftermath with The Chronicle that appears to show the car tire on the woman's leg. San Francisco firefighters arrived and used the jaws of life to lift the car off the woman. She was transported to San Francisco General Hospital with "multiple traumatic injuries," said SFFD Capt. Justin Schorr. The victim was in critical condition as of late Tuesday afternoon, according to the hospital.

It appears that once the Cruise car sensed something underneath its rear axle, it came to a halt and turned on its hazard lights, Schorr said. Firefighters obstructed the sensors of the driverless car to alert the Cruise control center. He said representatives from Cruise responded to firefighters and "immediately disabled the car remotely."
More from the San Francisco Chronicle: "When it comes to someone pinned beneath a vehicle, the most effective way to unpin them is to lift the vehicle," Sgt. Kathryn Winters, a spokesperson for the department, said in an interview. Were a driver to move a vehicle with a person lying there, "you run the risk of causing more injury." Once the person is freed, the car must stay in place as police gather evidence including "the location of the vehicle and/or vehicles before, during and after the collision," said Officer Eve Laokwansathitaya, another spokesperson.
The human driver who struck the pedestrian immediately fled the scene, and has not yet been identified.
Robotics

Japan Startup Develops 'Gundam'-Like Robot With $3 Million Price Tag (reuters.com) 36

A Tokyo startup has developed a 4.5-meter-tall, four-wheeled robot modeled after the "Mobile Suit Gundam" from the Japanese animation series. It has a price tag of $3 million. Reuters reports: Called ARCHAX after the avian dinosaur archaeopteryx, the robot has cockpit monitors that receive images from cameras hooked up to the exterior so that the pilot can maneuver the arms and hands with joysticks from inside its torso. The 3.5-ton robot, which will be unveiled at the Japan Mobility Show later this month, has two modes: the upright 'robot mode' and a 'vehicle mode' in which it can travel up to 10 km (6 miles) per hour.

"Japan is very good at animation, games, robots and automobiles so I thought it would be great if I could create a product that compressed all these elements into one," said Ryo Yoshida, the 25-year-old chief executive of Tsubame Industries. "I wanted to create something that says, 'This is Japan.'" Yoshida plans to build and sell five of the machines for the well-heeled robot fan, but hopes the robot could one day be used for disaster relief or in the space industry.

Robotics

Robot 'Monster Wolves' Try to Scare Off Japan's Bears (bbc.co.uk) 44

"Bear attacks in Japan have been rising at an alarming rate, so the city of Takikawa [about 570 miles from Tokyo] installed a robot wolf as a deterrent," reports the BBC. "The robot wolf was originally designed to keep wild animals from farmlands, but is now being used by local governments and managers of highways, golf courses, and pig farms." Digital Trends describes the "Monster Wolf" as "complete with glowing red eyes and protruding fangs." [T]he solar-powered Monster Wolf emits a menacing roar if it detects a nearby bear. It also has a set of flashing LED lights on its tail, and can move its head to appear more real... The robot's design is apparently based on a real wolf that roamed part of the Asian nation more than 100 years ago before it was hunted into extinction.

Japanese news outlet NHK reported earlier this month that bear attacks in the country are at their highest level since records began in 2007. The environment ministry said 53 cases of injuries as a result of such attacks were reported between April and July this year, with at least one person dying following an attack in Hokkaido in May.

Sci-Fi

Could 'The Creator' Change Hollywood Forever? (indiewire.com) 96

At the beginning of The Creator, a narrator describes AI-powered robots that are "more human than human." From the movie site Looper: It's in reference to the novel "Do Androids Dream of Electric Sheep?" by Philip K. Dick, which was adapted into the seminal sci-fi classic, "Blade Runner." The phrase is used as the slogan for the Tyrell Corporation, which designs the androids that take on lives of their own. The saying perfectly encapsulates the themes of "Blade Runner" and, by proxy, "The Creator." If a machine of sufficient intelligence is indistinguishable from humans, then shouldn't it be considered on equal footing as humanity?
The Huffington Post calls it "the pro-AI movie we don't need right now" — but they also praise it as "one of the most astonishing sci-fi theatrical experiences this year." Variety notes the film was co-written and directed by Gareth Edwards (director of the 2014 version of Godzilla and the Star Wars prequel Rogue One), working with Oscar-winning cinematographer Greig Fraser (Dune) after the two collaborated on Rogue One. But what's unique is the way they filmed it: adding visual effects "almost improvisationally afterward.

"Achieving this meant shooting sumptuous natural landscapes in far-flung locales like Thailand or Tibet and building futuristic temples digitally in post-production..."

IndieWire gushes that "This movie looks fucking incredible. To a degree that shames most blockbusters that cost three times its budget." They call it "a sci-fi epic that should change Hollywood forever." Once audiences see how "The Creator" was shot, they'll be begging Hollywood to close the book on blockbuster cinema's ugliest and least transportive era. And once executives see how much (or how little) "The Creator" was shot for, they'll be scrambling to make good on that request as fast as they possibly can.

Say goodbye to $300 million superhero movies that have been green-screened within an inch of their lives and need to gross the GDP of Grenada just to break even, and say hello — fingers crossed — to a new age of sensibly budgeted multiplex fare that looks worlds better than most of the stuff we've been subjected to over the last 20 years while simultaneously freeing studios to spend money on the smaller features that used to keep them afloat. Can you imagine...? How ironic that such fresh hope for the future of hand-crafted multiplex entertainment should come from a film so bullish and sanguine at the thought of humanity being replaced by A.I [...]

The real reason why "The Creator" is set in Vietnam (and across large swaths of Eurasia) is so that it could be shot in Vietnam. And in Thailand. And in Cambodia, Nepal, Indonesia, and several other beautiful countries that are seldom used as backdrops for futuristic science-fiction stories like this one. This movie was born from the visual possibilities of interpolating "Star Wars"-like tech and "Blade Runner"-esque cyber-depression into primordially expressive landscapes. Greig Fraser and Oren Soffer's dusky and tactile cinematography soaks up every inch of what the Earth has to offer without any concession to motion capture suits or other CGI obstructions, which speaks to the truly revolutionary aspect of this production: Rather than edit the film around its special effects, Edwards reverse-engineered the special effects from a completed edit of his film... Instead of paying a fortune to recreate a flimsy simulacrum of our world on a computer, Edwards was able to shoot the vast majority of his movie on location at a fraction of the price, which lends "The Creator" a palpable sense of place that instantly grounds this story in an emotional truth that only its most derivative moments are able to undo... [D]etails poke holes in the porous border that runs between artifice and reality, and that has an unsurprisingly profound effect on a film so preoccupied with finding ghosts in the shell. Can a robot feel love? Do androids dream of electric sheep? At what point does programming blur into evolution...?

[T]he director has a classic eye for staging action, that he gives his movies room to breathe, and that he knows that the perfect "Kid A" needle-drop (the album, not the song) can do more for a story about the next iteration of "human" life than any of the tracks from Hans Zimmer's score... [T]here's some real cognitive dissonance to seeing a film that effectively asks us to root for a cuter version of ChatGPT. But Edwards and Weitz's script is fascinating for its take on a future in which people have programmed A.I. to maintain the compassion that our own species has lost somewhere along the way; a future in which technology might be a vessel for humanity rather than a replacement for it; a future in which computers might complement our movies rather than replace our cameras.

Privacy

Food Delivery Robots Are Feeding Camera Footage to the LAPD, Internal Emails Show (404media.co) 63

samleecole writes: A food delivery robot company that delivers for Uber Eats in Los Angeles provided video filmed by one of its robots to the Los Angeles Police Department as part of a criminal investigation, 404 Media has learned. The incident highlights the fact that delivery robots that are being deployed to sidewalks all around the country are essentially always filming, and that their footage can and has been used as evidence in criminal trials. Emails obtained by 404 Media also show that the robot food delivery company wanted to work more closely with the LAPD, which jumped at the opportunity.

Moon

Chinese Astronauts May Build a Base Inside a Lunar Lava Tube (universetoday.com) 75

According to Universe Today, China may utilize lunar caves as potential habitats for astronauts on the Moon, offering defense against hazards like radiation, meteorites, and temperature variations. From the report: Different teams of scientists from different countries and agencies have studied the idea of using lava tubes as shelter. At a recent conference in China, Zhang Chongfeng from the Shanghai Academy of Spaceflight Technology presented a study into the underground world of lava tubes. Chinese researchers did fieldwork in Chinese lava tubes to understand how to use them on the Moon. According to Zhang, there's enough similarity between lunar and Earthly lava tubes for one to be an analogue of the other. It starts with their two types of entrances, vertical and sloped. Both worlds have both types.

Most of what we've found on the Moon are vertical-opening tubes, but that may be because of our overhead view. The openings are called skylights, where the ceiling has collapsed and left a debris accumulation on the floor of the tube directly below it. Entering through these requires either flight or some type of vertical lift equipment. Sloped entrances make entry and exit much easier. It's possible that rovers could simply drive into them, though some debris would probably need to be cleared. According to Zhang, this is the preferred entrance that makes exploration easier. China is prioritizing lunar lava tubes at Mare Tranquillitatis (Sea of Tranquility) and Mare Fecunditatis (Sea of Fecundity) for exploration.

China is planning a robotic system that can explore caves like the one in Mare Tranquillitatis. The primary probe will have either wheels or feet and will be built to adapt to challenging terrain and to overcome obstacles. It'll also have a scientific payload. Auxiliary vehicles can separate from the main probe to perform more reconnaissance and help with communications and "energy support." They could be diversified so the mission can meet different challenges. They might include multi-legged crawling probes, rolling probes, and even bouncing probes. These auxiliary vehicles would also have science instruments to study the lunar dust, radiation, and the presence of water ice in the tubes. China is also planning a flight-capable robot that could find its way through lava tubes autonomously using microwave and laser radars.
"China's future plan, after successful exploration, is a crewed base," the report adds. "It would be a long-term underground research base in one of the lunar lava tubes, with a support center for energy and communication at the tube's entrance. The terrain would be landscaped, and the base would include both residential and research facilities inside the tube."

"[R]egardless of when they start, China seems committed to the idea. Ding Lieyun, a top scientist at Huazhong University of Science and Technology, told the China Science Daily that 'Eventually, building habitation beyond the Earth is essential not only for all humanity's quest for space exploration but also for China's strategic needs as a space power.'"

Robotics

Tesla Bot Can Now Sort Objects Autonomously (interestingengineering.com) 54

The official Tesla Optimus account shared an update video showing the progress its humanoid robot has made since it was announced in August 2021. In a video that looks like CGI, you can see Optimus sorting blocks and performing some yoga poses, among other things. Interesting Engineering reports: The video begins with the Tesla Bot aka the Optimus robot performing a self-calibration routine, which is essential for adapting to new environments. It then shows how TeslaBot can use its vision and joint position sensors to accurately locate its limbs in space, without relying on any external feedback. This enables TeslaBot to interact with objects and perform tasks with precision and dexterity.

One of the tasks that Optimus demonstrates is sorting blue and green blocks into matching trays. Tesla Optimus can grasp each block with ease and sort them at a human-like speed. It can also handle dynamic changes in the environment, such as when a human intervenes and moves the blocks around. TeslaBot can quickly adjust to the new situation and resume its task. It can also correct its own errors, such as when a block lands on its side and needs to be rotated.

The video also showcases Tesla Bot's balance and flexibility, as it performs some yoga poses that require standing on one leg and extending its limbs. These poses are not related to any practical workloads, but they show how TeslaBot can control its body and maintain its stability. The video ends with a call for more engineers to join the Tesla Optimus team, as the project is still in development and needs more talent. There is no information on when TeslaBot will be ready for production or commercial use, but the video suggests that it is making rapid progress and using the same software as the Tesla cars.

Robotics

New York City Deploys 420-Pound RoboCop to Patrol Subway Station (gothamist.com) 82

"New York City is now turning to robots to help patrol the Times Square subway station," quipped one local newscast.

The non-profit New York City blog Gothamist describes the robot as "almost as tall as the mayor — but at least three times as wide around the waist," with a maximum speed of 3 miles per hour but a 360-degree field of vision, equipped with four cameras to send live video (without audio) to the police. A 420-pound, 5-foot-2-inch robocop with a giant camera for a face will begin patrolling the Times Square subway station overnight, the New York Police Department announced Friday morning. At a press conference held underground in the 42nd Street subway station, New York City Mayor Eric Adams said the city is launching a two-month pilot program to test the Knightscope K5 Autonomous Security Robot. During the press conference, the K5 robot — which is shaped like a small, white rocketship — stood silently along with uniformed officers and city officials in suits. Stripes of glowing blue lights indicated it was "on."

The K5 will act as a crime deterrent and provide real-time information on how to best deploy human officers to a safety incident, the mayor said. It features multiple cameras, a button that can connect the public with a real person, and a speaker for live audio communication... During the pilot program, the K5 will patrol the Times Square subway station from midnight to 6 a.m. with a human NYPD handler who will help introduce it to the public. After two months, the mayor said the handler will no longer be necessary, and the robot will go on solo patrol...

Knightscope, which manufactures the robot, reports that it has been deployed to 30 clients in 10 states, including at malls and hospitals. The K5 has been in some sticky situations in other cities. One was toppled and slathered in barbecue sauce in San Francisco, while another was beaten by an intoxicated man in Mountain View, California, according to news reports. Another robot fell into a pool of water outside an office building in Washington, D.C.

When asked whether the robot was at risk of vandalism in New York City, the mayor strode over to it and gave it a few firm shoves. "Let's be clear, this is not a pushover. 420 pounds. This is New York tested," he said.

The city is leasing the robot for $9 an hour — and yes, local newscasts couldn't resist calling it a robocop. One shows the mayor announcing "We will continue to stay ahead of those who want to harm everyday New Yorkers."

Though the robot is equipped with facial recognition capability, it will not be activated.

Medicine

Neuralink Is Recruiting Subjects For the First Human Trial of Its Brain-Computer Interface 85

A few months after getting FDA approval for human trials, Neuralink is looking for its first test subjects. The Verge reports: The six-year initial trial, which the Elon Musk-owned company is calling "the PRIME Study," is intended to test Neuralink tech designed to help those with paralysis control devices. The company is looking for people (PDF) with quadriplegia due to cervical spinal cord injury or ALS who are over the age of 22 and have a "consistent and reliable caregiver" to be part of the study.

The PRIME Study (which apparently stands for Precise Robotically Implanted Brain-Computer Interface, even though that acronym makes no sense) is set to research three things at once. The first is the N1 implant, Neuralink's brain-computer device. The second is the R1 robot, the surgical robot that actually implants the device. The third is the N1 User App, the software that connects to the N1 and translates brain signals into computer actions. Neuralink says it's planning to test both the safety and efficacy of all three parts of the system.

Those who participate in the PRIME Study will first participate in an 18-month study that involves nine visits with researchers. After that, they'll spend at least two hours a week on brain-computer interface research sessions and then do 20 more visits over the next five years. Neuralink doesn't say how many subjects it's looking for or when it plans to begin the study but does say it only plans to compensate "for study-related costs" like travel to and from the study location. (Also not clear: where that location is. Neuralink only says it has received approval from "our first hospital site.")
