AI

Research Highlights How AI Sees and How It Knows What It's Looking At 65

Posted by samzenpus
from the what-am-I-looking-at-here dept.
anguyen8 writes: Deep neural networks (DNNs) trained with Deep Learning have recently produced mind-blowing results in a variety of pattern-recognition tasks, most notably speech recognition, language translation, and recognizing objects in images, where they now perform at near-human levels. But do they see the same way we do? Nope. Researchers recently found that it is easy to produce images that are completely unrecognizable to humans, but that DNNs classify with near-certainty as everyday objects. For example, DNNs look at TV static and declare with 99.99% confidence that it is a school bus. An evolutionary algorithm produced the synthetic images by generating pictures and selecting for those that a DNN believed to be an object (i.e. "survival of the school-bus-iest"). The resulting computer-generated images look like modern abstract art. The pictures also help reveal what DNNs learn to care about when recognizing objects (e.g. a school bus is alternating yellow and black lines, but does not need to have a windshield or wheels), shedding light on the inner workings of these DNN black boxes.
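The generate-and-select loop described above can be sketched as a toy evolutionary search. Everything here is illustrative: the stand-in `bus_confidence` scorer simply rewards alternating bright and dark rows (echoing the "yellow and black lines" observation) in place of a real trained DNN, and the image size, population size, and mutation rate are arbitrary choices, not the researchers' actual setup.

```python
import random

def bus_confidence(image):
    """Toy stand-in for a DNN's confidence score: rewards images whose
    adjacent rows have strongly contrasting total brightness (stripes)."""
    score = sum(abs(sum(row_a) - sum(row_b))
                for row_a, row_b in zip(image, image[1:]))
    max_score = (len(image) - 1) * len(image[0]) * 255
    return score / max_score  # normalized to [0, 1]

def mutate(image, rate=0.05):
    """Randomly re-roll a small fraction of pixels."""
    return [[random.randint(0, 255) if random.random() < rate else px
             for px in row] for row in image]

def evolve(width=8, height=8, generations=200, pop_size=20):
    # Start from pure noise ("TV static") and select for the "bus-iest".
    pop = [[[random.randint(0, 255) for _ in range(width)]
            for _ in range(height)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=bus_confidence, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the top half
        pop = survivors + [mutate(p) for p in survivors]
    return max(pop, key=bus_confidence)

best = evolve()
print(round(bus_confidence(best), 3))
```

With a real network, the fitness function would instead be the classifier's softmax confidence for the target class, which is how unrecognizable images can end up scored as a school bus with near-certainty.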
AI

Economists Say Newest AI Technology Destroys More Jobs Than It Creates 533

Posted by Soulskill
from the i'm-sorry-dave,-there's-a-hiring-freeze-right-now dept.
HughPickens.com writes: Claire Cain Miller notes at the NY Times that economists long argued that, just as buggy-makers gave way to car factories, technology used to create as many jobs as it destroyed. But now there is deep uncertainty about whether the pattern will continue, as two trends are interacting. First, artificial intelligence has become vastly more sophisticated in a short time, with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement. At the same time, the American work force has gained skills at a slower rate than in the past — and at a slower rate than in many other countries. Self-driving vehicles are an example of the crosscurrents. Autonomous cars could put truck and taxi drivers out of work — or they could enable drivers to be more productive during the time they used to spend driving, which could earn them more money. But for the happier outcome to happen, the drivers would need the skills to do new types of jobs.

When the University of Chicago asked a panel of leading economists about automation, 76 percent agreed that it had not historically decreased employment. But when asked about the more recent past, they were less sanguine. About 33 percent said technology was a central reason that median wages had been stagnant over the past decade, 20 percent said it was not and 29 percent were unsure. Perhaps the most worrisome development is how poorly the job market is already functioning for many workers. More than 16 percent of men between the ages of 25 and 54 are not working, up from 5 percent in the late 1960s; 30 percent of women in this age group are not working, up from 25 percent in the late 1990s. For those who are working, wage growth has been weak, while corporate profits have surged. "We're going to enter a world in which there's more wealth and less need to work," says Erik Brynjolfsson. "That should be good news. But if we just put it on autopilot, there's no guarantee this will work out."
AI

AI Expert: AI Won't Exterminate Us -- It Will Empower Us 414

Posted by Soulskill
from the but-the-tee-vee-said dept.
An anonymous reader writes: Oren Etzioni has been an artificial intelligence researcher for over 20 years, and he's currently CEO of the Allen Institute for AI. When he heard the dire warnings recently from both Elon Musk and Stephen Hawking, he decided it's time to have an intelligent discussion about AI. He says, "The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. ... To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations." Etzioni adds, "If unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity — and even save lives. Allowing fear to guide us is not intelligent."
AI

A Common Logic To Seeing Cats and the Cosmos 45

Posted by Soulskill
from the learning-to-teach-to-learn dept.
An anonymous reader sends this excerpt from Quanta Magazine: Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech, and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress in training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why these algorithms, or biological learning itself, work.

Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos. The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called "renormalization," which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, "a cat" regardless of its color, size or posture in a given video.

"They actually wrote down on paper, with exact proofs, something that people only dreamed existed," said Ilya Nemenman, a biophysicist at Emory University.
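A cartoon of what renormalization does can be put in a few lines of code: repeatedly replace blocks of microscopic variables with a single summary value, keeping the large-scale pattern while discarding fine detail, which is loosely what successive layers of a deep network also do. The majority-rule "block spin" step below is a standard textbook illustration, not the variational renormalization scheme Mehta and Schwab actually map onto restricted Boltzmann machines.

```python
def coarse_grain(spins, block=2):
    """One renormalization step on a 1-D chain of +1/-1 spins: replace
    each block with the sign of its sum (majority rule), shrinking the
    lattice while preserving large-scale structure."""
    out = []
    for i in range(0, len(spins) - len(spins) % block, block):
        s = sum(spins[i:i + block])
        out.append(1 if s >= 0 else -1)
    return out

# A chain with two large domains keeps its shape under coarse-graining:
chain = [1, 1, 1, 1, -1, -1, -1, -1]
step1 = coarse_grain(chain)   # [1, 1, -1, -1]
step2 = coarse_grain(step1)   # [1, -1]
print(step1, step2)
```

The two macroscopic domains survive each step even as the microscopic detail is thrown away — the same spirit in which a network can call an image "a cat" regardless of color, size or posture.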
AI

Hawking Warns Strong AI Could Threaten Humanity 574

Posted by timothy
from the who-is-the-journal-of-robot-overlords-going-to-believe dept.
Rambo Tribble writes: In a departure from his usual focus on theoretical physics, the estimable Stephen Hawking has posited that the development of artificial intelligence could pose a threat to the existence of the human race. In his words, "The development of full artificial intelligence could spell the end of the human race." Rollo Carpenter, creator of Cleverbot, offered a less dire assessment: "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."
Transportation

Here's What Your Car Could Look Like In 2030 144

Posted by Soulskill
from the just-make-sure-there's-room-for-a-cot dept.
Nerval's Lobster writes: If you took your cubicle, four wheels, and powerful AI, and brought them all together in unholy matrimony, their offspring might look something like the self-driving future car created by design consultancy IDEO. That's not to say that every car on the road in 2030 will look like a mobile office, but technology could take driving to a place where a car's convenience and onboard software (not to mention smaller size) matter more than, say, speed or handling, especially as urban areas become denser and people come to see "driving time" as time to get things done or relax while the car handles the majority of driving tasks. Then again, if old science-fiction movies have proven anything, it's that visions of automobile design thirty or fifty years down the road (pun intended) tend to be far, far different from the eventual reality. (Blade Runner, for example, posited that the skies above Los Angeles would swarm with flying cars by 2019.) So it's anyone's guess what you'll be driving a couple of decades from now.
AI

Alva Noe: Don't Worry About the Singularity, We Can't Even Copy an Amoeba 455

Posted by samzenpus
from the I-for-one-don't-worry-about-our-new-robotic-overlords dept.
An anonymous reader writes: Alva Noe, writer and professor of philosophy at the University of California, Berkeley, isn't worried that we will soon be under the rule of shiny metal overlords. He says that currently we can't produce "machines that exhibit the agency and awareness of an amoeba." He writes at NPR: "One reason I'm not worried about the possibility that we will soon make machines that are smarter than us is that we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson. We used 'it' the way we use clocks."
AI

Upgrading the Turing Test: Lovelace 2.0 68

Posted by Soulskill
from the just-make-sure-to-skip-version-9.0 dept.
mrspoonsi tips news of further research into updating the Turing test. As computer scientists have expanded their knowledge about the true domain of artificial intelligence, it has become clear that the Turing test is somewhat lacking. A replacement, the Lovelace test, was proposed in 2001 to draw a clearer line between true AI and an abundance of if-statements. Now, Professor Mark Riedl of Georgia Tech has updated the test further (PDF). He said, "For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Creativity is not unique to human intelligence, but it is one of the hallmarks of human intelligence."
AI

Google Announces Image Recognition Advance 29

Posted by timothy
from the what-does-a-grue-look-like? dept.
Rambo Tribble writes: Using machine learning techniques, Google claims to have produced software that can generate better natural-language descriptions of images. This has ramifications for uses such as improved image search and describing images to the blind. As the Google people put it, "A picture may be worth a thousand words, but sometimes it's the words that are the most useful ..."
United States

US Intelligence Unit Launches $50k Speech Recognition Competition 62

Posted by samzenpus
from the unseen-mechanized-ear dept.
coondoggie writes: The $50,000 challenge comes from researchers at the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence. The competition, known as Automatic Speech recognition in Reverberant Environments (ASpIRE), aims to get industry, universities, and other researchers to build automatic speech recognition technology that can handle a variety of acoustic environments and recording scenarios on natural conversational speech.
Robotics

Robots Put To Work On E-Waste 39

Posted by Soulskill
from the robots-disassembling-robots dept.
aesoteric writes: Australian researchers have programmed industrial robots to tackle the vast array of e-waste thrown out every year. The research shows robots can learn and memorize how various electronic products — such as LCD screens — are designed, enabling those products to be disassembled for recycling faster and faster. The end goal is less than five minutes to dismantle a product.
AI

Magic Tricks Created Using Artificial Intelligence For the First Time 77

Posted by samzenpus
from the pick-a-circuit-any-circuit dept.
An anonymous reader writes: Researchers working on artificial intelligence at Queen Mary University of London have taught a computer to create magic tricks. The researchers gave a computer program the outline of how a magic jigsaw puzzle and a mind-reading card trick work, as well as the results of experiments into how humans understand magic tricks, and the system created completely new variants on those tricks which can be delivered by a magician.
AI

A Worm's Mind In a Lego Body 200

Posted by timothy
from the with-very-few-exceptions-is-not-a-worm dept.
mikejuk writes: The nematode worm Caenorhabditis elegans (C. elegans) is tiny and has only 302 neurons. These have been completely mapped, and one of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented an object-oriented neuron program. The neurons communicate by sending UDP packets across the network. The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired up as the worm's nose; if anything comes within 20cm of the 'nose', UDP packets are sent to the sensory neurons in the network. The motor neurons are wired up to the left and right motors of the robot. It is claimed that the robot behaved in ways similar to those observed in C. elegans: stimulation of the nose stopped forward motion, touching the anterior and posterior touch sensors made the robot move forward and back accordingly, and stimulating the food sensor made the robot move forward. The key point is that no programming or learning was involved in creating these behaviors: the connectome of the worm was mapped and implemented as a software system, and the behaviors emerged. Is the robot a C. elegans in a different body, or is it something quite new? Is it alive? These are questions for philosophers, but it does suggest that the ghost in the machine is just the machine. The important question is: does it scale?
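The sensor-to-neuron plumbing described above is easy to picture as code. The sketch below is hypothetical: the real OpenWorm robot wires many neurons together over real sonar hardware, whereas here a loopback socket stands in for the sensory-neuron process, and the port choice and "NOSE_TOUCH" message format are invented for illustration.

```python
import socket

def nose_sensor_tick(distance_cm, sock, neuron_addr):
    """One 100 ms sampling tick: if the sonar 'nose' detects anything
    within 20 cm, fire a UDP packet at the sensory-neuron process."""
    if distance_cm < 20:
        sock.sendto(b"NOSE_TOUCH", neuron_addr)
        return True
    return False

# Demo: a local listening socket stands in for the neuron process.
neuron = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
neuron.bind(("127.0.0.1", 0))            # let the OS pick a free port
addr = neuron.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
nose_sensor_tick(35, sender, addr)       # 35 cm away: no spike sent
nose_sensor_tick(15, sender, addr)       # within 20 cm: spike sent
msg, _ = neuron.recvfrom(64)
print(msg.decode())                      # prints NOSE_TOUCH
```

Decoupling neurons behind UDP like this is what lets each simulated cell run as an independent process, with the robot's motors driven by whichever motor neurons the packets eventually reach.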
AI

Does Watson Have the Answer To Big Blue's Uncertain Future? 67

Posted by Soulskill
from the i'm-sorry-dave,-i-need-those-TPS-reports-right-now dept.
HughPickens.com writes: IBM has delivered a string of disappointing quarters, and recently announced that it would take a multibillion-dollar hit to offload its struggling chip business. But Will Knight writes at MIT Technology Review that Watson may have the answer to IBM's uncertain future. IBM's vast research department was recently reorganized to ramp up efforts related to cognitive computing. The push began with the development of the original Watson, but has expanded to include other areas of software and hardware research aimed at helping machines provide useful insights from huge quantities of often-messy data. "We're betting billions of dollars, and a third of this division now is working on it," John Kelly, director of IBM Research, says of cognitive computing, a term the company uses to refer to artificial intelligence techniques related to Watson. The hope is that the Watson Business Group, a division aimed at making its Jeopardy!-winning cognitive computing application more of a commercial success, will be able to answer more complicated questions in all sorts of industries, including health care, financial investment, and oil discovery; and that it will help IBM build a lucrative new computer-driven consulting business.

But Watson is still a work in progress. Some companies and researchers testing Watson systems have reported difficulties in adapting the technology to work with their data sets. "It's not taking off as quickly as they would like," says Robert Austin. "This is one of those areas where turning demos into real business value depends on the devils in the details. I think there's a bold new world coming, but not as fast as some people think." IBM needs software developers to embrace its vision and build services and apps that use its cognitive computing technology. In May of this year it announced that seven universities would offer computer science classes in cognitive computing, and last month IBM revealed a list of partners that have developed applications by tapping into application programming interfaces that access versions of Watson running in the cloud. Big Blue said it will invest $1 billion into the Watson division, including $100 million to fund startups developing cognitive apps. "I very much admire the end goal," says Boris Katz, adding that business pressures could encourage IBM's researchers to move more quickly than they would like. "If the management is patient, they will really go far."
Transportation

What Will It Take To Make Automated Vehicles Legal In the US? 320

Posted by samzenpus
from the johnny-cab dept.
ashshy writes: Tesla, Google, and many other companies are working on self-driving cars. When these autopilot systems become perfected and ubiquitous, the roads should be safer by orders of magnitude. So why doesn't Tesla CEO Elon Musk expect to reach that milestone until 2023 or so? Because the legal framework that supports American road rules is incredibly complex, and actually handled on a state-by-state basis. The Motley Fool explains which authorities Musk and his allies will have to convince before autopilot cars can hit the mainstream, and why the process will take another decade.
AI

Machine Learning Expert Michael Jordan On the Delusions of Big Data 145

Posted by samzenpus
from the listen-up dept.
First time accepted submitter agent elevator writes: In a wide-ranging interview at IEEE Spectrum, Michael I. Jordan skewers a bunch of sacred cows, basically saying that: the overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges; hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool's errand; and despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.
Google

Will the Google Car Turn Out To Be the Apple Newton of Automobiles? 287

Posted by samzenpus
from the flop-or-not dept.
An anonymous reader writes: The better question may be whether it will ever be ready for the road at all. The car has fewer capabilities than most people seem to be aware of. The notion that it will be widely available any time soon is a stretch. From the article: "Noting that the Google car might not be able to handle an unmapped traffic light might sound like a cynical game of 'gotcha.' But MIT roboticist John Leonard says it goes to the heart of why the Google car project is so daunting. 'While the probability of a single driver encountering a newly installed traffic light is very low, the probability of at least one driver encountering one on a given day is very high,' Leonard says. The list of these 'rare' events is practically endless, said Leonard, who does not expect a full self-driving car in his lifetime (he's 49)."
Graphics

Ubisoft Claims CPU Specs a Limiting Factor In Assassin's Creed Unity On Consoles 338

Posted by timothy
from the bottlenecks-shift dept.
MojoKid (1002251) writes: A new interview with Assassin's Creed Unity senior producer Vincent Pontbriand has some gamers seeing red and others crying "told you so," after the developer revealed that the game's 900p resolution and 30 fps target on consoles is a result of weak CPU performance rather than GPU compute. "Technically we're CPU-bound," Pontbriand said. "The GPUs are really powerful, obviously the graphics look pretty good, but it's the CPU that has to process the AI, the number of NPCs we have on screen, all these systems running in parallel. We were quickly bottlenecked by that and it was a bit frustrating, because we thought that this was going to be a tenfold improvement over everything AI-wise..." This has been read by many as a rather damning referendum on the capabilities of the AMD APU that's under the hood of Sony's and Microsoft's new consoles. To some extent, that's justified; the Jaguar CPU inside both the Sony PS4 and Xbox One is a modest chip with a relatively low clock speed. Both consoles may offer eight CPU threads on paper, but games can't access all that headroom: one thread is reserved for the OS, and a few more cores are used for processing the 3D pipeline. Between the two, Ubisoft may have had only 4-5 cores for AI and other calculations — scarcely more than last gen, and the Xbox 360 and PS3 CPUs were clocked much faster than the 1.6 / 1.73GHz frequencies of their replacements.
AI

Outsourced Tech Jobs Are Increasingly Being Automated 236

Posted by timothy
from the ban-farm-equipment dept.
Jason Koebler writes: Yahoo announced [Tuesday] it would be laying off at least 400 workers in its Indian office, and back in February, IBM cut roughly 2,000 jobs there. Meanwhile, tech companies are beginning to see that many of the jobs they have outsourced can be automated instead. Labor in India and China is still cheaper than it is in the United States, but it's not the obvious economic move that it was just a few years ago: "The labor costs are becoming significant enough in China and India that there are very real discussions about automating jobs there now," Mark Muro, an economist at Brookings, said. "Companies are seeing that automated replacements are getting to be 'good enough.'"
AI

Michigan Builds Driverless Town For Testing Autonomous Cars 86

Posted by timothy
from the stepford-michigan dept.
HughPickens.com writes: Highway driving, which is less complex than city driving, has proved easy enough for self-driving cars, but busy downtown streets—where cars and pedestrians jockey for space and behave in confusing and surprising ways—are more problematic. Now Will Knight reports that Michigan's Department of Transportation and 13 companies involved with developing automated driving technology are constructing a 30-acre, $6.5 million driverless town near Ann Arbor to test self-driving cars in an urban environment. Complex intersections, confusing lane markings, and busy construction crews will be used to gauge the aptitude of the latest automotive sensors and driving algorithms, and mechanical pedestrians will even leap into the road from between parked cars so researchers can see whether they trip up onboard safety systems. "I think it's a great idea," says John Leonard, a professor at MIT who led the development of a self-driving vehicle for a challenge run by DARPA in 2007. "It is important for us to try to collect statistically meaningful data about the performance of self-driving cars. Repeated operations—even in a small-scale environment—can yield valuable data sets for testing and evaluating new algorithms." The testing facility is part of broader work by the University of Michigan's Mobility Transformation Center that will include putting up to 20,000 vehicles on southeastern Michigan roads. By 2021, Ann Arbor could become the first American city with a shared fleet of networked, driverless vehicles. "Ann Arbor will be seen as the leader in 21st century mobility," says Peter Sweatman, director of the U-M Transportation Research Institute. "We want to demonstrate fully driverless vehicles operating within the whole infrastructure of the city within an eight-year timeline and to show that these can be safe, effective and commercially successful."
