Japan

Lower Limit Found For Sudoku Puzzle Clues 121

ananyo writes "An Irish mathematician has used a complex algorithm and millions of hours of supercomputing time to solve an important open problem in the mathematics of Sudoku, the game popularized in Japan that involves filling in a 9×9 grid of squares with the numbers 1–9 according to certain rules. Gary McGuire of University College Dublin shows in a proof posted online [PDF] that the minimum number of clues — or starting digits — needed for a puzzle to have a unique solution is 17; no puzzle with 16 or fewer clues can have a unique solution. Most newspaper puzzles have around 25 clues, with the difficulty of the puzzle decreasing as more clues are given."
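At the heart of the result is a uniqueness test: a set of clues is a valid puzzle only if it completes to exactly one grid. McGuire's proof rests on a far more sophisticated search over candidate clue sets, but the test itself can be sketched with a plain backtracking counter that stops as soon as a second completion turns up (a hypothetical helper for illustration, not code from the paper):

```python
def count_solutions(grid, limit=2):
    """Count completions of a 9x9 grid (flat list of 81 ints, 0 = blank),
    stopping as soon as `limit` completions are found."""
    for i in range(81):
        if grid[i] == 0:
            r, c = divmod(i, 9)
            total = 0
            for v in range(1, 10):
                # v is legal if it clashes with nothing in the row, column, or box
                if all(grid[9*r + cc] != v for cc in range(9)) and \
                   all(grid[9*rr + c] != v for rr in range(9)) and \
                   all(grid[9*(3*(r//3) + dr) + 3*(c//3) + dc] != v
                       for dr in range(3) for dc in range(3)):
                    grid[i] = v
                    total += count_solutions(grid, limit - total)
                    grid[i] = 0
                    if total >= limit:
                        return total
            return total
    return 1  # no blanks left: the filled grid itself is the one completion

# A valid completed grid (standard shifted-row pattern) completes only to itself.
solved = [((3*r + r//3 + c) % 9) + 1 for r in range(9) for c in range(9)]
assert count_solutions(list(solved)) == 1

# The empty grid is wildly underdetermined: the counter stops at its limit.
assert count_solutions([0] * 81) == 2
```

On a well-posed 17-clue puzzle this returns exactly 1; McGuire's contribution was showing that no 16-clue set can ever achieve that.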
Software

Hadoop 1.0 Released 38

darthcamaro writes "There has been a ton of hype about Big Data, and specifically Hadoop, in recent years. But until today, Hadoop was not a 1.0 release product. Does it matter? Not really, but it's still a big milestone. The new release includes a web interface for the Hadoop filesystem, security features, and HBase database support. '"At this point we figured that as a community we can support this release and be compatible for the foreseeable future. That makes this release an ideal candidate to be called 1.0," Arun C. Murthy, vice president of Apache Hadoop, said.'"
Supercomputing

Russia, Europe Seek Divorce From U.S. Tech Vendors 201

dcblogs writes "The Russians are building a 10-petaflop supercomputer as a step toward an exascale system by 2018-20, on the same timeframe as the U.S. The Russians, as well as Europe and China, want to reduce their reliance on U.S. tech vendors and believe that exascale system development will lead to breakthroughs that could seed new tech industries. 'Exascale computing is a challenge, and indeed an opportunity for Europe to become a global HPC leader,' said Leonardo Flores Anover, who is the European Commission's project officer for the European Exascale Software Initiative. 'The goal is to foster the development of a European industrial capability,' he said. Think what Europe accomplished with Airbus. For Russia: 'You can expect to see Russia holding its own in the exascale race with little or no dependence on foreign manufacturers,' said Mike Bernhardt, who writes The Exascale Report. For now, Russia is relying on Intel and Nvidia."
Supercomputing

How the Tevatron Influenced Computing 66

New submitter SciComGeek writes "Few laypeople think of computing innovation in connection with the Tevatron particle accelerator, which shut down earlier this year. Mention of the Tevatron inspires images of majestic machinery, or thoughts of immense energies and groundbreaking physics research, not circuit boards, hardware, networks, and software. Yet over the course of more than three decades of planning and operation, a tremendous amount of computing innovation was necessary to keep the data flowing and physics results coming. Those innovations will continue to influence scientific computing and data analysis for years to come."
AMD

ORNL's Newest Petaflop Climate Computer To Come Online For NOAA 66

bricko writes with a description of NOAA's Gaea supercomputer, being assembled at the Oak Ridge National Laboratory. It's some big iron: 1.1 petaflops, based on 16-core Interlagos chips from AMD, and built by Cray. "The system, which is used for climate modeling and research, also includes two separate Lustre parallel file systems 'that handle data sets that rank among the world's largest,' ORNL said. 'NOAA research partners access the system remotely through speedy wide area connections. Two 10-gigabit (billion bit) lambdas, or optical waves, pass data to NOAA's national research network through peering points at Atlanta and Chicago.'"
Supercomputing

Wielding Supercomputers To Make High-Stakes Predictions 65

aarondubrow writes "The emergence of the uncertainty quantification field was initially spurred in the mid-1990s by the federal government's desire to use computer models to predict the reliability of nuclear weapons. Since then, the toll of high-stakes events that could potentially have been better anticipated if improved predictive computer models had been available — like the Columbia disaster, Hurricane Katrina and the World Trade Center collapse after the 9/11 terrorist attacks — has catapulted research on uncertainty quantification to the scientific and engineering forefronts." (Read this with your Texas propaganda filter turned to High.)
Supercomputing

The Top 10 Supercomputers, Illustrated 68

1sockchuck writes "The twice-a-year list of the Top 500 supercomputers documents the most powerful systems on the planet. Many of these supercomputers are striking not just for their processing power, but for their design and appearance as well. Here's a visual guide to the top finishers in the latest Top 500 list, which was released this week at the SC11 conference."
Intel

Intel Announces Xeon E5 and Knights Corner HPC Chip 122

MojoKid writes "At the supercomputing conference SC11 yesterday, Intel announced its new Xeon E5 processors and demoed its new Knights Corner Many Integrated Core (MIC) solution. The new Xeons won't be broadly available until the first half of 2012, but Intel has been shipping the new chips to a small number of cloud and HPC customers since September. The new E5 family is based on the same core as the Core i7-3960X Intel launched Monday. The E5, while important to Intel's overall server lineup, isn't as interesting as the public debut of Knights Corner. Recall that Intel's canceled GPU (codenamed Larrabee) found new life as the prototype device for future HPC accelerators and complementary products. According to Intel, Knights Corner packs 50 x86 processor cores into a single die built on 22nm technology. The chip is capable of delivering up to 1TFlop of sustained performance in double-precision floating point code and operates at 1–1.2GHz. NVIDIA's current high-end M2090 Tesla GPU, in contrast, is capable of just 665 DP GFlops."
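The 1TFlop claim squares with a back-of-the-envelope peak estimate. Assuming each core carries a 512-bit vector unit with fused multiply-add (8 double-precision lanes times 2 flops per cycle, which was Larrabee's published vector width, not an official Knights Corner spec), the arithmetic works out to:

```python
# Rough peak-rate estimate for Knights Corner; the 16 flops/cycle/core figure
# is an assumption based on Larrabee's 512-bit vectors, not a confirmed spec.
cores = 50
clock_hz = 1.2e9                   # top of the quoted 1-1.2GHz range
flops_per_core_per_cycle = 8 * 2   # 8 DP lanes x (multiply + add via FMA)

peak_gflops = cores * clock_hz * flops_per_core_per_cycle / 1e9
print(peak_gflops)        # 960.0, right at the claimed ~1 TFlop
print(peak_gflops / 665)  # ~1.44x the M2090's 665 DP GFlops
```

That the sustained figure sits so close to this estimated peak is itself notable; Linpack-style workloads usually land well below peak.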
IBM

Cray Replaces IBM To Build $188M Supercomputer 99

wiredmikey writes "Supercomputer maker Cray today said that the University of Illinois' National Center for Supercomputing Applications (NCSA) awarded the company a contract to build a supercomputer for the National Science Foundation's Blue Waters project. The supercomputer will be powered by new 16-core AMD Opteron 6200 Series processors (formerly code-named 'Interlagos'), a next-generation GPU from NVIDIA, called 'Kepler,' and a new integrated storage solution from Cray. IBM was originally selected to build the supercomputer in 2007, but terminated the contract in August 2011, saying the project was more complex and required significantly increased financial and technical support beyond its original expectations. Once fully deployed, the system is expected to have a sustained performance of more than one petaflop on demanding scientific applications."
Japan

Fujitsu Announces 16-core SPARC64 IXfx (and the Supercomputer It Powers) 68

First time accepted submitter A12m0v writes with a link to Fujitsu's announcement of its next generation of supercomputer, from which he pastes: "PRIMEHPC FX10 runs on the newly-developed SPARC64 IXfx processors, which offer a very significant boost in performance over the SPARC64 VIIIfx processor on which they are based and which power the K computer. Each processor has 16 cores and achieves world-class standalone performance levels of 236.5 gigaflops and performance per watt of over 2 gigaflops." Not that K is any slouch.
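For scale, the quoted figures imply roughly 15 GFlops per core, and the stated efficiency caps the chip's draw at around 118 W (a bound derived here from the announcement's numbers, not a published TDP):

```python
# Per-core and per-chip figures implied by Fujitsu's announcement.
gflops, cores = 236.5, 16
print(gflops / cores)   # ~14.8 GFlops per core
print(gflops / 2.0)     # <= ~118 W per chip at "over 2 GFlops per watt"
```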
Supercomputing

Japanese Supercomputer K Hits 10.51 Petaflops 125

coondoggie writes "The Japanese supercomputer ranked #1 on the Top 500 list of the fastest supercomputers broke its own record this week by hitting 10 quadrillion calculations per second (10.51 petaflops), according to its operators, Fujitsu and Riken. The supercomputer 'K' consists of 864 racks, comprising a total of 88,128 interconnected CPUs, and has a theoretical calculation speed of 11.28 petaflops, the companies said."
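The quoted figures are internally consistent with the SPARC64 VIIIfx's published peak of 128 GFlops per chip (8 cores at 16 GFlops each):

```python
# Check the theoretical speed and sustained efficiency from the chip count.
cpus = 88_128
gflops_per_cpu = 8 * 16          # 8 cores x 16 GFlops, the chip's published peak

peak_pflops = cpus * gflops_per_cpu / 1e6
print(peak_pflops)               # 11.280384, i.e. the quoted 11.28 PF

# Sustained / theoretical: ~93%, unusually high for a machine this size
print(10.51 / peak_pflops)
```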
China

China Builds 1-Petaflop Homegrown Supercomputer 185

MrSeb writes "Drawing yet another battle line between the incumbent oligarchs of the West and the developing hordes of the East, China has unveiled a new supercomputer that uses entirely homegrown processors — 8,704 of them, to be exact. The computer is called Sunway BlueLight MPP and it has a peak performance of just over 1 petaflop — around the 15th fastest supercomputer in the world. Sunway uses the ShenWei SW-3 1600, a 16-core, 64-bit MIPS-compatible (RISC) CPU. The process used to make the chips is not known, but it is likely 65 or 45nm, a few generations behind Intel's latest and greatest. Each of the 139,264 cores runs at 1.1GHz, the entire system has 150TB of memory and 2PB of storage, and of course it's water-cooled. The ShenWei chips are based on the Loongson/Godson architecture, which China — as in, the country itself — probably reverse engineered from a DEC Alpha CPU in 2001 and has been developing ever since. Sunway is significant for two reasons: a) It's very low-power; it consumes just one megawatt, about half the draw of its contemporaries and one-seventh that of the US's Jaguar — and b) This is China's first significant supercomputer to be built without Intel or AMD processors."
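The quoted figures hang together on a quick check; the per-core memory number below is derived here, not stated in the article:

```python
# Sanity checks on the Sunway BlueLight MPP figures.
chips = 8_704
cores = chips * 16
print(cores)                      # 139264, matching the stated core count

memory_tb = 150
print(memory_tb * 1e12 / cores)   # ~1.08e9: roughly 1 GB of RAM per core

# 1 PFlop in ~1 MW is 1 GFlop/W; Jaguar's quoted 7x draw puts it near 7 MW.
```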
Supercomputing

Jaguar Supercomputer Being Upgraded To Regain Fastest Cluster Crown 89

MrSeb writes with an article in Extreme Tech about the Titan supercomputer. From the article: "Cray, AMD, Nvidia, and the Department of Energy have announced that the Oak Ridge National Laboratory's Jaguar supercomputer will soon be upgraded to yet again become the fastest HPC installation in the world. The new, mighty-morphing computer will feature thousands of Cray XK6 blades, each one accommodating up to four 16-core AMD Opteron 6200 (Interlagos) chips and four Nvidia Tesla 20-series GPGPU coprocessors. The Jaguar name will be suitably inflated, too: the new behemoth will be called Titan. The exact specs of Titan haven't been revealed, but the Jaguar supercomputer currently sports 200 cabinets of Cray XT5 blades — and each cabinet, in theory, can be upgraded to hold 24 XK6 blades. That's a total of 4,800 servers, or 38,400 processors in total; 19,200 Opteron 6200s, and 19,200 Tesla GPUs. ... that's 307,200 CPU cores — and with 512 shaders in each Tesla chip that's 9,830,400 compute units. In other words, Titan should be capable of massive parallelism approaching ten million concurrent operations. When the upgrade is complete, towards the end of 2012, Titan will be capable of between 10 and 20 petaflops, and should recapture the crown of Fastest Supercomputer in the World from the Japanese 'K' computer."
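The article's cabinet arithmetic checks out, with the caveat it states itself: every count assumes all 200 cabinets are fully upgraded, which is theoretical at this point.

```python
# The article's arithmetic, spelled out.
cabinets = 200
blades = cabinets * 24        # up to 24 XK6 blades per cabinet
opterons = blades * 4         # four Opteron 6200 chips per blade
teslas = blades * 4           # four Tesla coprocessors per blade

print(blades)                 # 4800 blades
print(opterons + teslas)      # 38400 processors
print(opterons * 16)          # 307200 CPU cores
print(teslas * 512)           # 9830400 GPU shader "compute units"
```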
Education

Michael Nielsen's Free Video Courseware On Quantum Computing 54

New submitter quax writes "Michael Nielsen, who co-authored the book on Quantum Computing, released a set of short video lectures on his blog this summer (link to Google cache). They make a great introduction to the subject. But here's the catch: Due to other work responsibilities, he stopped short of completing the course, and will only complete it if he sees enough interest in the videos. Let's show him some numbers."
Australia

New Supercomputer Boosts Aussie SKA Telescope Bid 32

angry tapir writes "Australian academic supercomputing consortium iVEC has acquired another major supercomputer, Fornax, to be based at the University of Western Australia, to further the country's ability to conduct data-intensive research. The SGI GPU-based system, also known as iVEC@UWA, is made up of 96 nodes, each containing two 6-core Intel Xeon X5650 CPUs, an NVIDIA Tesla C2050 GPU, 48 GB of RAM and 7TB of storage. All up, the system has 1152 cores, 96 GPUs, and an additional 500TB of dedicated fabric-attached storage serving a global filesystem. The system is a boost to the Australian-NZ bid to host the Square Kilometer Array radio telescope."
IBM

Behind the Parting of IBM and Blue Waters 36

An anonymous reader writes "The News-Gazette has an article about the troubled Blue Waters supercomputer project, providing some new information about why IBM and the University of Illinois parted ways back in August. Quoting: 'More than three dozen changes, most suggested by IBM, would have delayed the Blue Waters project by a year ... The requested changes caused friction as early as December 2010, eight months before IBM pulled out, leaving the project to look for a new vendor for the supercomputer. Documents released under the Freedom of Information Act show Big Blue and the Big U asserting their rights in lengthy and increasingly testy, but always polite, language. In the documents, IBM suggested that if changes were not made, the project would become overly expensive.'"
Supercomputing

Will Quantum Computing Make It Out of the Lab? 129

alphadogg writes "Researchers have been working on quantum systems for more than a decade, in the hopes of developing super-tiny, super-powerful computers. And while there is still plenty of excitement surrounding quantum computing, significant roadblocks are causing some to question whether quantum computing will ever make it out of the lab. 'Artur Ekert, professor of quantum physics at the University of Oxford's Mathematical Institute, says physicists today can only control a handful of quantum bits, which is adequate for quantum communication and quantum cryptography, but nothing more. He notes that it will take a few more domesticated qubits to produce quantum repeaters and quantum memories, and even more to protect and correct quantum data. "Add still a few more qubits, and we should be able to run quantum simulations of some quantum phenomena and so forth. But when this process arrives to 'a practical quantum computer' is very much a question of defining what 'a practical quantum computer' really is. The best outcome of our research in this field would be to discover that we cannot build a quantum computer for some very fundamental reason, then maybe we would learn something new and something profound about the laws of nature," Ekert says.'"
Networking

Ask Slashdot: Best Use For a New Supercomputing Cluster? 387

Supp0rtLinux writes "In about two weeks' time I will be receiving everything necessary to build the largest x86_64-based supercomputer on the east coast of the U.S. (at least until someone takes the title away from us). It's spec'd to start with 1200 dual-socket six-core servers. We primarily do life-science/health/biology related tasks on our existing (fairly small) HPC. We intend to continue this usage, but to also open it up for new uses (energy comes to mind). Additionally, we'd like to lease access to recoup some of our costs. So, what's the best Linux distro for something of this size and scale? Any that include a chargeback option/module? Additionally, due to cost contracts, we have to choose either InfiniBand or 10Gb Ethernet for the backend: which would Slashdot readers go with if they had to choose? Either way, all nodes will have four 1Gbps Ethernet ports. Finally, all nodes include only a basic onboard GPU. We intend to put powerful GPUs into the PCI-e slot and open up the new HPC for GPU related crunching. Any suggestions on the most powerful Linux friendly PCI-e GPU available?"
AI

IBM's Watson To Help Diagnose, Treat Cancer 150

Lucas123 writes "IBM's Jeopardy-playing supercomputer, Watson, will be turning its data compiling engine toward helping oncologists diagnose and treat cancer. According to IBM, the computer is being assembled in the Richmond, Va. data center of WellPoint, the country's largest Blue Cross, Blue Shield-based healthcare company. Physicians will be able to input a patient's symptoms and Watson will use data from a patient's electronic health record, insurance claims data, and worldwide clinical research to come up with both a diagnosis and treatment based on evidence-based medicine. 'If you think about the power of [combining] all our information along with all that comparative research and medical knowledge... that's what really creates this game changing capability for healthcare,' said Lori Beer, executive vice president of Enterprise Business Services at WellPoint."
Data Storage

IBM Building 120PB Cluster Out of 200,000 Hard Disks 290

MrSeb writes "Smashing all known records by some margin, IBM Research Almaden, California, has developed hardware and software technologies that will allow it to strap together 200,000 hard drives to create a single storage cluster of 120 petabytes — 120 million gigabytes. The data repository, which currently has no name, is being developed for an unnamed customer, but with a capacity of 120PB, its most likely use will be as a storage device for a governmental (or Facebook) supercomputer. With IBM's GPFS (General Parallel File System), over 30,000 files can be created per second — and, thanks to the massive parallelism of the 200,000 individual drives in the array, single files can be read or written at several terabytes per second."
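The headline figures fix the drive size: 120PB over 200,000 spindles means roughly 600GB disks, a common enterprise SAS capacity at the time (an inference from the quoted numbers, not something IBM has confirmed):

```python
# Per-drive figures implied by the announced totals.
drives = 200_000
capacity_gb = 120e6              # 120 PB expressed in GB
print(capacity_gb / drives)      # 600.0 GB per drive

# "Several terabytes per second" is modest per spindle if striped evenly;
# taking ~2 TB/s aggregate purely as an illustration:
print(2e12 / drives / 1e6)       # ~10 MB/s per drive
```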