Supercomputing

A Peek Inside D-Wave's Quantum Computing Hardware 55

JeremyHsu writes: A one-second delay can still seem like an eternity for a quantum computing machine capable of running calculations in mere millionths of a second. That delay represents just one of the challenges D-Wave Systems overcame in building its second-generation quantum computing machine known as D-Wave Two — a system that has been leased to customers such as Google, NASA and Lockheed Martin. D-Wave's rapid-scaling approach to quantum computing has plenty of critics, but the company's experience in building large-scale quantum computing hardware could provide valuable lessons for everyone, regardless of whether the D-Wave machines live up to quantum computing's potential by proving they can outperform classical computers. (D-Wave recently detailed the hardware design changes between its first- and second-generation quantum computing machines in the June 2014 issue of the journal IEEE Transactions on Applied Superconductivity.)

"We were nervous about going down this path," says Jeremy Hilton, vice president of processor development at D-Wave Systems. "This architecture requires the qubits and the quantum devices to be intermingled with all these big classical objects. The threat you worry about is noise and impact of all this stuff hanging around the qubits. Traditional experiments in quantum computing have qubits in almost perfect isolation. But if you want quantum computing to be scalable, it will have to be immersed in a sea of computing complexity.
Supercomputing

Computing a Cure For HIV 89

aarondubrow writes: The tendency of HIV to mutate and resist drugs has made it particularly difficult to eradicate. But in the last decade scientists have begun using a new weapon in the fight against HIV: supercomputers. Using some of the nation's most powerful supercomputers, teams of researchers are pushing the limits of what we know about HIV and how we can treat it. The Huffington Post describes how supercomputers are helping scientists understand and treat the disease.
Bitcoin

NSF Researcher Suspended For Mining Bitcoin 220

PvtVoid (1252388) writes "In the semiannual report to Congress by the NSF Office of Inspector General, the organization said it received reports of a researcher who was using NSF-funded supercomputers at two universities to mine Bitcoin. The computationally intensive mining took up about $150,000 worth of NSF-supported computer use at the two universities to generate bitcoins worth about $8,000 to $10,000, according to the report. It did not name the researcher or the universities."
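
A back-of-the-envelope check of the figures in the report (a rough sketch using only the dollar amounts quoted above):

    # Rough cost/benefit check of the figures quoted in the OIG report.
    compute_cost = 150_000                          # NSF-supported computer use, in USD
    btc_value_low, btc_value_high = 8_000, 10_000   # reported value of the mined bitcoins, in USD

    for value in (btc_value_low, btc_value_high):
        print(f"${value:,} mined for ${compute_cost:,} of compute time: "
              f"roughly {compute_cost / value:.0f}x more spent than earned")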
Supercomputing

Electrical Control of Nuclear Spin Qubits: Important Step For Quantum Computing 42

Taco Cowboy writes: "Using a spin cascade in a single-molecule magnet, scientists at Karlsruhe Institute of Technology and their French partners have demonstrated that control of a single nuclear spin can be achieved in a purely electric manner, rather than through the use of magnetic fields (abstract). For their experiments, the researchers used a nuclear spin-qubit transistor that consists of a single-molecule magnet connected to three electrodes (source, drain, and gate). The single-molecule magnet is a TbPc2 molecule — a single metal ion of terbium that is enclosed by organic phthalocyanine molecules of carbon, nitrogen, and hydrogen atoms. The gap between the electric field and the spin is bridged by the so-called hyperfine-Stark effect, which transforms the electric field into a local magnetic field. This quantum mechanical process can be transferred to all nuclear spin systems and, hence, opens up entirely novel perspectives for integrating quantum effects in nuclear spins into electronic circuits."
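
In rough schematic terms (a sketch of the general hyperfine-Stark idea, not the paper's own notation or numbers), the effect amounts to an electric-field dependence of the hyperfine coupling constant:

    H_{\mathrm{hf}}(E) = A(E)\,\mathbf{I}\cdot\mathbf{J}, \qquad A(E) \approx A_0 + \frac{\partial A}{\partial E}\,E

The field-dependent part of A shifts the nuclear spin levels much as a small local magnetic field would, which is what lets a gate voltage, rather than an external magnet, address the spin.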
Supercomputing

Stanford Bioengineers Develop 'Neurocore' Chips 9,000 Times Faster Than a PC 209

kelk1 sends this article from the Stanford News Service: "Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC (abstract). Kwabena Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed 'Neurocore' chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. ... But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. (...) Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies. By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore's cost 100-fold – suggesting a million-neuron board for $400 a copy."
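
The per-board arithmetic in the quote checks out; a quick sketch using only the figures given above:

    # Quick arithmetic on the Neurogrid figures quoted above.
    neurocores_per_board = 16
    neurons_per_neurocore = 65_536
    board_cost_today = 40_000   # USD for a current board
    cost_reduction = 100        # claimed factor from modern, high-volume fabrication

    neurons_per_board = neurocores_per_board * neurons_per_neurocore
    print(f"{neurons_per_board:,} neurons per board")                        # 1,048,576
    print(f"~${board_cost_today / neurons_per_board:.3f} per neuron today")  # ~$0.038
    print(f"~${board_cost_today // cost_reduction:,} per board after a 100x cut")  # $400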
Space

Using Supercomputers To Predict Signs of Black Holes Swallowing Stars 31

aarondubrow (1866212) writes "A 'tidal disruption' occurs when a star orbits too close to a black hole and gets sucked in. The phenomenon is accompanied by a bright flare with a unique signature that changes over time. Researchers at the Georgia Institute of Technology are using Stampede and other NSF-supported supercomputers to simulate tidal disruptions in order to better understand the dynamics of the process. Doing so helps astronomers find many more possible candidates of tidal disruptions in sky surveys and will reveal details of how stars and black holes interact."
IBM

Fifty Years Ago IBM 'Bet the Company' On the 360 Series Mainframe 169

Hugh Pickens DOT Com (2995471) writes "Those of us of a certain age remember well the breakthrough that the IBM 360 series mainframes represented when they were unveiled fifty years ago on 7 April 1964. Now Mark Ward reports at BBC that the first System 360 mainframe marked a break with all general purpose computers that came before because it was possible to upgrade the processors but still keep using the same code and peripherals from earlier models. "Before System 360 arrived, businesses bought a computer, wrote programs for it and then when it got too old or slow they threw it away and started again from scratch," says Barry Heptonstall. IBM bet the company when it developed the 360 series. At the time IBM had a huge array of conflicting and incompatible lines of computers, as did the computer industry in general, which was still largely a custom, small-scale design and production business. For a company of IBM's size the problem was becoming obvious: upgrading from one of its smaller series to a larger one took so much effort that customers might as well switch to a competing product from the "BUNCH" (Burroughs, Univac, NCR, CDC and Honeywell). Fred Brooks managed the development of IBM's System/360 family of computers and the OS/360 software support package and based his software classic "The Mythical Man-Month" on his observation that "adding manpower to a late software project makes it later." The S/360 was also the first computer to use microcode to implement many of its machine instructions, as opposed to having all of its machine instructions hard-wired into its circuitry. Despite their age, mainframes are still in wide use today and are behind many of the big information systems that keep the modern world humming, handling such things as airline reservations, cash machine withdrawals and credit card payments. "We don't see mainframes as legacy technology," says Charlie Ewen. "They are resilient, robust and are very cost-effective for some of the work we do.""
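
To put the microcode point above in concrete terms: instead of wiring every machine instruction directly into logic, a microcoded machine looks each opcode up in a control store and runs a short sequence of simpler micro-operations. A toy sketch of the concept in Python (a made-up three-instruction machine, not actual S/360 microcode):

    # Toy illustration of microcoding: each "machine instruction" expands into
    # a sequence of micro-operations read from a control store.
    regs = {"A": 0, "B": 0}

    # Micro-operations: the small hard-wired steps the engine actually performs.
    micro_ops = {
        "load_imm": lambda r, v: regs.__setitem__(r, v),
        "add":      lambda dst, src: regs.__setitem__(dst, regs[dst] + regs[src]),
    }

    # Control store: machine instructions defined as micro-op sequences (hypothetical ISA).
    control_store = {
        "LOAD_A_5": [("load_imm", ("A", 5))],
        "LOAD_B_7": [("load_imm", ("B", 7))],
        "ADD_A_B":  [("add", ("A", "B"))],
    }

    def execute(program):
        for instruction in program:
            for op, args in control_store[instruction]:  # microcode expansion
                micro_ops[op](*args)

    execute(["LOAD_A_5", "LOAD_B_7", "ADD_A_B"])
    print(regs)  # {'A': 12, 'B': 7}

Changing or extending instruction behavior then means editing the control store rather than rewiring logic, which is part of what let different System/360 models present the same instruction set.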
Stats

Mystery MLB Team Moves To Supercomputing For Their Moneyball Analysis 56

An anonymous reader writes "A mystery [Major League Baseball] team has made a sizable investment in Cray's latest effort at bringing graph analytics at extreme scale to bat. Nicole Hemsoth writes that what the team is looking for is a "hypothesis machine" that will allow them to integrate multiple, deep data wells and pose several questions against the same data. They are looking for platforms that allow users to look at facets of a given dataset, adding new cuts to see how certain conditions affect the reflection of a hypothesized reality."
Supercomputing

Pentago Is a First-Player Win 136

First time accepted submitter jwpeterson writes "Like chess and go, pentago is a two player, deterministic, perfect knowledge, zero sum game: there is no random or hidden state, and the goal of the two players is to make the other player lose (or at least tie). Unlike chess and go, pentago is small enough for a computer to play perfectly: with symmetries removed, there are a mere 3,009,081,623,421,558 (3e15) possible positions. Thus, with the help of several hours on 98,304 threads of Edison, a Cray supercomputer at NERSC, pentago is now strongly solved. 'Strongly' means that perfect play is efficiently computable for any position. For example, the first player wins."
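
"Strongly solved" means the exact game-theoretic value of every reachable position is known, not just the value of the opening position. Pentago needed a supercomputer, but the idea scales down; here is a minimal sketch of strong solving by memoized exhaustive search, using tic-tac-toe as a stand-in (a toy analogue, not the Edison computation):

    # Strong solving by exhaustive memoized search, shown on tic-tac-toe.
    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def solve(board, player):
        """+1 if `player` wins with perfect play from here, -1 if they lose, 0 for a tie."""
        w = winner(board)
        if w is not None:
            return 1 if w == player else -1   # game already decided
        if "." not in board:
            return 0                          # board full, no winner: tie
        opponent = "O" if player == "X" else "X"
        # Negamax: our best value is the negation of the opponent's best reply.
        return max(-solve(board[:i] + player + board[i+1:], opponent)
                   for i, cell in enumerate(board) if cell == ".")

    print(solve("." * 9, "X"))  # 0: tic-tac-toe is a tie; pentago's analogue is a first-player win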
IBM

IBM Dumping $1 Billion Into New Watson Group 182

Nerval's Lobster writes "IBM believes its Watson supercomputing platform is much more than a gameshow-winning gimmick: its executives are betting very big that the software will fundamentally change how people and industries compute. In the beginning, IBM assigned 27 core researchers to the then-nascent Watson. Working diligently, those scientists and developers built a tough 'Jeopardy!' competitor. Encouraged by that success on live television, Big Blue devoted a larger team to commercializing the technology—a group it made a point of hiding in Austin, Texas, so its members could better focus on hardcore research. After years of experimentation, IBM is now prepping Watson to go truly mainstream. As part of that upgraded effort (which includes lots of hype-generating), IBM will devote a billion dollars and thousands of researchers to a dedicated Watson Group, based in New York City at 51 Astor Place. The company plans on pouring another $100 million into an equity fund for Watson's growing app ecosystem. If everything goes according to IBM's plan, Watson will help kick off what CEO Ginni Rometty refers to as a third era in computing. The 19th century saw the rise of a "tabulating" era: the birth of machines designed to count. In the latter half of the 20th century, developers and scientists initiated the 'programmable' era—resulting in PCs, mobile devices, and the Internet. The third (potential) era is 'cognitive,' in which computers become adept at understanding and solving, in a very human way, some of society's largest problems. But no matter how well Watson can read, understand and analyze, the platform will need to earn its keep. Will IBM's clients pay lots of money for all that cognitive power? Or will Watson ultimately prove an overhyped sideshow?"
Encryption

NSA Trying To Build Quantum Computer 221

New submitter sumoinsanity writes "The Washington Post has disclosed that the NSA is trying to build a quantum computer for use in cracking modern encryption. Their work is part of a broader research program aimed at breaking the toughest encryption, which received $79.7 million in total funding. Another article makes the case that the NSA's quantum computing efforts are both disturbing and reassuring. The reassuring part is that public key infrastructure is still OK when done properly, since the NSA is still working so hard to defeat it. It's also highly unlikely that the NSA has achieved significant progress without outside awareness or help. More disturbing is that it may simply be a matter of time before it fails, and our private messages are out there for all to see."
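
The reason a large quantum computer matters here is that widely deployed public-key schemes such as RSA rest on problems, notably integer factoring, that are hard for classical machines but efficiently solvable by Shor's algorithm on a sufficiently large quantum computer. A deliberately tiny toy illustration of that dependence (textbook-sized primes chosen so a classical brute-force "attack" fits in a few lines; real RSA keys are 2048 bits or more):

    # Toy RSA with tiny primes: recovering the private key is exactly an
    # integer-factoring problem, the problem a quantum computer running
    # Shor's algorithm would solve efficiently at real key sizes.
    p, q = 61, 53
    n, e = p * q, 17                 # public key (n = 3233)
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)              # private exponent (modular inverse, Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)
    assert pow(ciphertext, d, n) == message   # decryption with the private key works

    # "Attack": factor n by trial division, then rebuild the private key.
    fp = next(i for i in range(2, n) if n % i == 0)
    fq = n // fp
    d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
    print(pow(ciphertext, d_recovered, n))    # 42: the key falls as soon as n is factored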
Supercomputing

Using Supercomputers To Find a Bacterial "Off" Switch 30

Nerval's Lobster writes "The comparatively recent addition of supercomputing to the toolbox of biomedical research may already have paid off in a big way: Researchers have used a bio-specialized supercomputer to identify a molecular 'switch' that might be used to turn off bad behavior by pathogens. They're now trying to figure out what to do with that discovery by running even bigger tests on the world's second-most-powerful supercomputer. The 'switch' is a pair of amino acids called Phe396 that helps control the ability of the E. coli bacteria to move under its own power. Phe396 sits on a chemoreceptor that extends through the cell wall, so it can pass information about changes in the local environment to proteins on the inside of the cell. Its role was discovered by a team of researchers from the University of Tennessee and the ORNL Joint Institute for Computational Sciences using a specialized supercomputer called Anton, which was built specifically to simulate biomolecular interactions among proteins and other molecules to give researchers a better way to study details of how molecules interact. 'For decades proteins have been viewed as static molecules, and almost everything we know about them comes from static images, such as those produced with X-ray crystallography,' according to Igor Zhulin, a researcher at ORNL and professor of microbiology at UT, in whose lab the discovery was made. 'But signaling is a dynamic process, which is difficult to fully understand using only snapshots.'"
Medicine

Google Supercomputers Tackle Giant Drug-Interaction Data Crunch 50

ananyo writes "By analysing the chemical structure of a drug, researchers can see if it is likely to bind to, or 'dock' with, a biological target such as a protein. Researchers have now unveiled a computational effort that used Google's supercomputers to assess billions of potential dockings on the basis of drug and protein information held in public databases. The effort will help researchers to find potentially toxic side effects and to predict how and where a compound might work in the body. 'It's the largest computational docking ever done by mankind,' says Timothy Cardozo, a pharmacologist at New York University's Langone Medical Center, who presented the project at the US National Institutes of Health's High Risk–High Reward Symposium in Bethesda, Maryland. The result, a website called Drugable, is still in testing, but it will eventually be available for free, allowing researchers to predict how and where a compound might work in the body, purely on the basis of chemical structure."
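
At heart the computation is an all-pairs sweep: every compound scored against every target, which is how the pairing count reaches into the billions. A schematic sketch only (the score_docking function and the collection sizes here are hypothetical placeholders, not Drugable's actual pipeline):

    # Schematic all-pairs docking sweep. score_docking is a placeholder for a
    # real docking scorer; the collection sizes are illustrative only.
    def score_docking(compound, target):
        return 0.0   # stand-in: a real scorer would estimate binding affinity

    compounds = [f"compound_{i}" for i in range(1_000)]  # real libraries: millions of molecules
    targets = [f"protein_{j}" for j in range(100)]       # real runs: thousands of targets

    best_binder = {}
    for target in targets:
        # Keep the strongest predicted binder for each target.
        best_binder[target] = max(compounds, key=lambda c: score_docking(c, target))

    print(f"{len(compounds) * len(targets):,} pairings scored")  # billions at full scale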
Intel

A Co-processor No More, Intel's Xeon Phi Will Be Its Own CPU As Well 53

An anonymous reader writes "The Xeon Phi co-processor requires a Xeon CPU to operate... for now. The next generation of Xeon Phi, codenamed Knights Landing and due in 2015, will be its own CPU and accelerator. This will free up a lot of space in the server but more important, it eliminates the buses between CPU memory and co-processor memory, which will translate to much faster performance even before we get to chip improvements. ITworld has a look."
Supercomputing

The Double Life of Memory Exposed With Automata Processor 32

An anonymous reader writes "As Nicole Hemsoth over at HPCwire reports 'In a nutshell, the Automata processor is a programmable silicon device that lends itself to handing high speed search and analysis across massive, complex, unstructured data. As an alternate processing engine for targeted areas, it taps into the inner parallelism inherent to memory to provide a robust and absolutely remarkable, if early benchmarks are to be believed, option for certain types of processing.'" Basically, the chip is designed solely to process Nondeterministic Finite Automata and can explore all valid paths of an NFA in parallel, hiding the whole O(n^2) complexity thing. Micron has a stash of technical documents including a paper covering the design and development of the chip. Imagine how fast you can process regexes now.
Japan

Japan Aims To Win Exascale Race 51

dcblogs writes "In the global race to build the next generation of supercomputers — exascale — there is no guarantee the U.S. will finish first. But the stakes are high for the U.S. tech industry. Today, U.S. firms — Hewlett-Packard, IBM and Intel, in particular — dominate the global high performance computing (HPC) market. On the Top 500 list, the worldwide ranking of the most powerful supercomputers, HP now has 39% of the systems, IBM, 33%, and Cray, nearly 10%. That lopsided U.S. market share does not sit well with other countries, which are busy building their own chips, interconnects, and their own high-tech industries in the push for exascale. Europe and China are deep into effort to build exascale machines, and now so is Japan. Kimihiko Hirao, director of the RIKEN Advanced Institute for Computational Science of Japan, said Japan is prepping a system for 2020. Asked whether he sees the push to exascale as a race between nations, Hirao said yes. Will Japan try to win that race? 'I hope so,' he said. 'We are rather confident,' said Hirao, arguing that Japan has the technology and the people to achieve the goal. Jack Dongarra, a professor of computer science at the University of Tennessee and one of the academic leaders of the Top 500 supercomputing list, said Japan is serious and on target to deliver a system by 2020."
Network

Researcher Shows How GPUs Make Terrific Network Monitors 67

alphadogg writes "A network researcher at the U.S. Department of Energy's Fermi National Accelerator Laboratory has found a potential new use for graphics processing units — capturing data about network traffic in real time. GPU-based network monitors could be uniquely qualified to keep pace with all the traffic flowing through networks running at 10Gbps or more, said Fermilab's Wenji Wu. Wenji presented his work as part of a poster series of new research at the SC 2013 supercomputing conference this week in Denver."
Graphics

Building a (Virtual) Roman Emperor's Villa 50

Nerval's Lobster writes "Scientists have been using everything from supercomputing clusters to 3D printers to virtually recreate dinosaur bones. Now another expert is trying to do something similar with the ancient imperial villa built for Roman emperor Hadrian, who ruled from 117 A.D. to 138 A.D. Hadrian's Villa is already one of the best-preserved Roman imperial sites, but that wasn't quite good enough for Indiana University Professor of Informatics Bernie Frischer, who trained as a classical philologist and archaeologist before being seduced by computers into what evolved into the academic discipline of digital analysis and reproduction of archaeological and historical works. The five-year effort to recreate Hadrian's Villa is based on information from academic studies of the buildings and grounds, as well as analyses of how the buildings, grounds and artifacts were used; the team behind it decided to go with gaming platform Unity 3D as a key part of the simulation."
Supercomputing

Warning At SC13 That Supercomputing Will Plateau Without a Disruptive Technology 118

dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'" Although carbon nanotube based processors are showing promise (Stanford project page; the group is at SC13 giving a talk about their MIPS CNT processor).
IBM

NVIDIA Announces Tesla K40 GPU Accelerator and IBM Partnership In Supercomputing 59

MojoKid writes "The supercomputing conference SC13 kicks off this week and Nvidia is kicking off their own event with the launch of a new GPU and a strategic partnership with IBM. Just as the GTX 780 Ti was the full consumer implementation of the GK110 GPU, the new K40 Tesla card is the supercomputing / HPC variant of the same core architecture. The K40 picks up additional clock headroom and implements the same variable clock speed threshold that has characterized Nvidia's consumer cards for the past year, for a significant overall boost in performance. The other major shift between Nvidia's previous gen K20X and the new K40 is the amount of on-board RAM. K40 packs a full 12GB and clocks it modestly higher to boot. That's important because datasets are typically limited to on-board GPU memory (at least, if you want to work with any kind of speed). Finally, IBM and Nvidia announced a partnership to combine Tesla GPUs and Power CPUs for OpenPOWER solutions. The goal is to push the new Tesla cards as workload accelerators for specific datacenter tasks. According to Nvidia's release, Tesla GPUs will ship alongside Power8 CPUs, which are currently scheduled for a mid-2014 release date. IBM's venerable architecture is expected to target a 4GHz clock speed and offer up to 12 cores with 96MB of shared L3 cache. A 12-core implementation would be capable of handling up to 96 simultaneous threads. The two should make for a potent combination."
