Power

Utility Targets Bitcoin Miners With Power Rate Hike (datacenterfrontier.com) 173

1sockchuck writes: A public utility in Washington state wants to raise rates for high-density power users, citing a flood of requests for electricity to power bitcoin mining operations. Chelan County has some of the cheapest power in the nation, supported by hydroelectric generation from dams along the Columbia River. That got the attention of bitcoin miners, prompting requests to provision 220 megawatts of additional power. After a one-year moratorium, the Chelan utility now wants to raise rates for high density users (more than 250kW per square foot) from 3 cents to 5 cents per kilowatt hour. Bitcoin businesses say the rate hike is discriminatory. But Chelan officials cite the transient nature of the bitcoin business as a risk to recovering their costs for provisioning new power capacity.
Classic Games (Games)

Computer Beats Go Champion 149

Koreantoast writes: Go (weiqi), the ancient Chinese board game, has long been held up as one of the more difficult, unconquered challenges facing AI scientists... until now. Google DeepMind researchers, led by David Silver and Demis Hassabis, developed a new algorithm called AlphaGo, enabling the computer to soundly defeat European Go champion Fan Hui in back-to-back games, five to zero. On a 19x19 board, Go players have more than 300 possible moves per turn to consider, creating a huge number of potential scenarios and a tremendous computational challenge. All is not lost for humanity yet: AlphaGo is scheduled to face off in March against Lee Sedol, considered one of the best Go players in recent history, in a match compared to the Kasparov-Deep Blue duels of previous decades.
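The scale of that computational challenge can be sketched with quick arithmetic. The figures below (an average branching factor of roughly 250 for Go and 35 for chess, and typical game lengths) are commonly cited rough estimates, not numbers from the summary above:

```python
import math

def tree_size_digits(branching, depth):
    """Decimal digits in branching**depth, i.e. ceil(depth * log10(branching)).
    A rough game-tree size estimate without computing the astronomical number."""
    return math.ceil(depth * math.log10(branching))

# Chess: ~35 legal moves per turn over ~80 plies.
# Go:   ~250 legal moves per turn over ~150 moves.
chess_digits = tree_size_digits(35, 80)    # a number with roughly 124 digits
go_digits = tree_size_digits(250, 150)     # a number with roughly 360 digits
```

Even under these crude assumptions, Go's game tree dwarfs chess's by hundreds of orders of magnitude, which is why brute-force search alone was never going to conquer it.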
Math

Finally Calculated: All the Legal Positions In a 19x19 Game of Go (github.io) 117

Reader John Tromp points to an explanation posted at GitHub of a computational challenge he coordinated, which makes a nice companion to the recent discovery of a 22 million-digit Mersenne prime. A distributed effort, using pooled computers from two centers at Princeton plus more contributed from the HP Helion cloud, calculated the number of legal positions in a 19x19 game of Go after "many hiccups and a few catastrophes." Simple as the Go board's layout is, the positions allowed by the rules are anything but simple to count: "For running an L19 job, a beefy server with 15TB of fast scratch diskspace, 8 to 16 cores, and 192GB of RAM, is recommended. Expect a few months of running time." More: Large numbers have a way of popping up in the game of Go. Few people believe that a tiny 2x2 Go board allows for more than a few hundred games. Yet 2x2 games number not in the hundreds, nor in the thousands, nor even in the millions. They number in the hundreds of billions! 386356909593 to be precise. Things only get crazier as you go up in board size. A lower bound of 10^{10^48} on the number of 19x19 games, as proved in our paper, was recently improved to a googolplex. (For anyone who wants to double-check his work, Tromp has posted the software used as open source.)
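The flavor of the computation can be reproduced at toy scale. The naive brute-force sketch below (not Tromp's actual transfer-matrix method, which is what makes 19x19 feasible) counts placements of stones in which every chain keeps at least one liberty, the definition of a legal position:

```python
from itertools import product

def has_liberty(board, start, neighbors):
    """Flood-fill the chain containing `start`; it is alive iff some
    point adjacent to the chain is empty (a liberty)."""
    color = board[start]
    stack, seen = [start], {start}
    while stack:
        p = stack.pop()
        for q in neighbors(*p):
            if board[q] == 0:          # empty neighbor: chain has a liberty
                return True
            if board[q] == color and q not in seen:
                seen.add(q)
                stack.append(q)
    return False

def legal_positions(n):
    """Count placements of black/white/empty on an n x n board in which
    every chain of stones has a liberty. Brute force: 3^(n*n) boards,
    so this is only usable for tiny n."""
    coords = [(r, c) for r in range(n) for c in range(n)]
    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < n and 0 <= c + dc < n:
                yield (r + dr, c + dc)
    count = 0
    for colors in product((0, 1, 2), repeat=n * n):  # 0 empty, 1 black, 2 white
        board = dict(zip(coords, colors))
        if all(has_liberty(board, p, neighbors) for p in coords if board[p]):
            count += 1
    return count
```

Tromp's published table gives 1, 57, and 12675 legal positions for the 1x1, 2x2, and 3x3 boards, which this brute-force count reproduces; the 19x19 answer required months on the heavyweight servers described above.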
Math

New Mersenne Prime Discovered, Largest Known Prime Number: 2^74,207,281 - 1 (mersenne.org) 132

Dave Knott writes: The Great Internet Mersenne Prime Search (GIMPS) has discovered a new largest known prime number, 2^74,207,281-1, which has 22,338,618 digits. The same GIMPS software recently uncovered a flaw in Intel's latest Skylake CPUs, and its global network of CPUs, peaking at 450 trillion calculations per second, remains the longest continuously-running "grassroots supercomputing" project in Internet history. The prime is almost 5 million digits larger than the previous record and belongs to a special class of extremely rare primes known as Mersenne primes. It is only the 49th Mersenne prime ever discovered; each new one is increasingly difficult to find.
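GIMPS checks candidates with the Lucas-Lehmer test, though its production code implements the squaring step with FFT-based big-number multiplication to handle exponents in the tens of millions. A minimal Python version of the underlying recurrence:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, 2^p - 1 is prime iff
    s_{p-2} == 0, where s_0 = 4 and s_k = s_{k-1}^2 - 2 (mod 2^p - 1)."""
    if p == 2:
        return True  # 2^2 - 1 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Small Mersenne exponents: 11, 23, and 29 fail (e.g. 2^11 - 1 = 23 * 89)
mersenne_exponents = [p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31)
                      if lucas_lehmer(p)]
```

The same loop, run for 74,207,279 iterations on multi-million-digit numbers, is what certifies the record prime; the arithmetic, not the logic, is where all the supercomputing goes.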
Businesses

Uber Scaling Up Its Data Center Infrastructure (datacenterfrontier.com) 33

1sockchuck writes: Connected cars generate a lot of data. That's translating into big business for data center providers, as evidenced by a major data center expansion by Uber, which needs more storage and compute power to support its global data platform. Uber drivers' mobile phones send location updates every 4 seconds, which is why the design goal for Uber's geospatial index is to handle a million writes per second. It's a reminder that as our cars become mini data centers, the data isn't staying onboard, but will also be offloaded to the data centers of automakers and software companies.
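The article doesn't describe the internals of Uber's geospatial index, but the general idea of absorbing high-rate location writes can be sketched with a simple lat/lon grid-bucketing scheme. Everything here (class, cell size, method names) is illustrative, not Uber's design:

```python
from collections import defaultdict

def cell_key(lat, lon, cell_deg=0.01):
    """Quantize coordinates to a grid cell (~1 km squares near the equator)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

class GeoIndex:
    """Grid-bucketed index of last-known driver locations. Each location
    update is an O(1) write: move the driver id between cell buckets."""
    def __init__(self):
        self.cells = defaultdict(set)   # cell key -> set of driver ids
        self.where = {}                  # driver id -> current cell key

    def update(self, driver, lat, lon):
        key = cell_key(lat, lon)
        old = self.where.get(driver)
        if old is not None and old != key:
            self.cells[old].discard(driver)
        self.cells[key].add(driver)
        self.where[driver] = key

    def drivers_in_cell(self, lat, lon):
        return self.cells[cell_key(lat, lon)]
```

With every driver phoning home every 4 seconds, the write path above is the hot path, which is why a million-writes-per-second design goal drives the whole architecture.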
Supercomputing

Seymour Cray and the Development of Supercomputers (linuxvoice.com) 54

An anonymous reader writes: Linux Voice has a nice retrospective on the development of the Cray supercomputer. Quoting: "Firstly, within the CPU, there were multiple functional units (execution units forming discrete parts of the CPU) which could operate in parallel; so it could begin the next instruction while still computing the current one, as long as the current one wasn't required by the next. It also had an instruction cache of sorts to reduce the time the CPU spent waiting for the next instruction fetch result. Secondly, the CPU itself contained 10 parallel functional units (parallel processors, or PPs), so it could operate on ten different instructions simultaneously. This was unique for the time." They also discuss modern efforts to emulate the old Crays: "...what Chris wanted was real Cray-1 software: specifically, COS. Turns out, no one has it. He managed to track down a couple of disk packs (vast 10lb ones), but then had to get something to read them. In the end he used an impressive home-brew robot solution to map the information, but that still left deciphering it. A Norwegian coder, Yngve Ådlandsvik, managed to play with the data set enough to figure out the data format and other bits and pieces, and wrote a data recovery script."
Security

Quantum Computer Security? NASA Doesn't Want To Talk About It (csoonline.com) 86

itwbennett writes: At a press event at NASA's Advanced Supercomputer Facility in Silicon Valley on Tuesday, the agency was keen to talk about the capabilities of its D-Wave 2X quantum computer. 'Engineers from NASA and Google are using it to research a whole new area of computing — one that's years from commercialization but could revolutionize the way computers solve complex problems,' writes Martyn Williams. But when questions turned to the system's security, a NASA moderator quickly shut things down, saying the topic was 'for later discussion at another time.'
Supercomputing

Google Finds D-Wave Machine To Be 10^8 Times Faster Than Simulated Annealing (blogspot.ca) 157

An anonymous reader sends this report from the Google Research blog on the effectiveness of D-Wave's 2X quantum computer: We found that for problem instances involving nearly 1000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than 10^8 times faster than simulated annealing running on a single core. We also compared the quantum hardware to another algorithm called Quantum Monte Carlo. This is a method designed to emulate the behavior of quantum systems, but it runs on conventional processors. While the scaling with size between these two methods is comparable, they are again separated by a large factor, sometimes as high as 10^8. A more detailed paper is available at the arXiv.
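For context, simulated annealing, the classical baseline in Google's comparison, can be sketched in a few lines. This toy single-spin-flip Metropolis version on a small Ising problem (the kind of binary-variable energy function the D-Wave hardware minimizes) is illustrative only, and bears no resemblance to the tuned single-core implementation that was actually benchmarked:

```python
import math
import random

def simulated_annealing(h, J, steps=20000, t0=2.0, t1=0.01, seed=1):
    """Minimize the Ising energy E(s) = sum_i h[i]*s[i] + sum_{(i,j)} J[i,j]*s[i]*s[j]
    over spins s[i] in {-1,+1} via Metropolis updates with geometric cooling.
    Returns (best energy seen, final spin configuration)."""
    rng = random.Random(seed)
    n = len(h)
    nbrs = [[] for _ in range(n)]          # couplings each spin participates in
    for (i, j), v in J.items():
        nbrs[i].append((j, v))
        nbrs[j].append((i, v))
    s = [rng.choice((-1, 1)) for _ in range(n)]
    E = (sum(h[i] * s[i] for i in range(n))
         + sum(v * s[i] * s[j] for (i, j), v in J.items()))
    best = E
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)        # cooling schedule
        i = rng.randrange(n)
        # energy change from flipping spin i: -2 * s_i * (local field at i)
        dE = -2 * s[i] * (h[i] + sum(v * s[j] for j, v in nbrs[i]))
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            s[i] = -s[i]
            E += dE
            best = min(best, E)
    return best, s
```

On a ferromagnetic 8-spin chain (all couplings -1, ground-state energy -7) this reliably finds the minimum; the point of the benchmark is how the time to do so scales as the problem grows toward ~1000 variables.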
Intel

Intel Launches 72-Core Knight's Landing Xeon Phi Supercomputer Chip (hothardware.com) 179

MojoKid writes: Intel announced a new version of their Xeon Phi line-up today, otherwise known as Knight's Landing. Whatever you want to call it, the pre-production chip is a 72-core coprocessor solution manufactured on a 14nm process with 3D Tri-Gate transistors. The family of coprocessors is built around Intel's MIC (Many Integrated Core) architecture, which itself is part of a larger PCI-E add-in card solution for supercomputing applications. Knight's Landing succeeds the current version of Xeon Phi, codenamed Knight's Corner, which has up to 61 cores. The new Knight's Landing chip ups the ante with double-precision performance exceeding 3 teraflops and over 8 teraflops of single-precision performance. It also has 16GB of on-package MCDRAM memory, which Intel says is five times more power efficient than GDDR5 and three times as dense.
Math

'Shrinking Bull's-eye' Algorithm Speeds Up Complex Modeling From Days To Hours (mit.edu) 48

rtoz sends word of a new algorithm that dramatically reduces the computation time for complex processes. Scientists from MIT say it conceptually resembles a shrinking bull's-eye, incrementally narrowing in on its target. "With this method, the researchers were able to arrive at the same answer as classic computational approaches, but 200 times faster." Their full academic paper is available at the arXiv. "The algorithm can be applied to any complex model to quickly determine the probability distribution, or the most likely values, for an unknown parameter. Like the MCMC analysis, the algorithm runs a given model with various inputs — though sparingly, as this process can be quite time-consuming. To speed the process up, the algorithm also uses relevant data to help narrow in on approximate values for unknown parameters."
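For reference, the classic MCMC analysis the new method is compared against can be sketched as a bare-bones random-walk Metropolis sampler. This is a generic textbook version, not the MIT authors' code; the toy data and step size are invented for illustration:

```python
import math
import random

def metropolis(log_post, x0, steps=5000, scale=0.5, seed=0):
    """Random-walk Metropolis: propose x' ~ Normal(x, scale), accept with
    probability min(1, post(x') / post(x)). Returns the chain of samples;
    each step requires one (possibly expensive) model evaluation."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy inference: posterior over the mean mu of unit-variance Gaussian data
data = [2.9, 3.1, 3.0, 2.8, 3.2]
chain = metropolis(lambda mu: -0.5 * sum((d - mu) ** 2 for d in data), x0=0.0)
estimate = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in samples
```

The cost that the bull's-eye algorithm attacks is visible here: every proposal needs a fresh model evaluation, which is cheap for this toy Gaussian but can take hours for a real simulation.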
Earth

NASA's Hurricane Model Resolution Increases Nearly 10-Fold Since Katrina 89

zdburke writes: Thanks to improvements in satellites and on-the-ground computing power, NASA's ability to model hurricane data has come a long way in the ten years since Katrina devastated New Orleans. Their blog notes, "Today's models have up to ten times the resolution of those used during Hurricane Katrina and allow for a more accurate look inside the hurricane. Imagine going from video game figures made of large chunky blocks to detailed human characters that visibly show beads of sweat on their forehead." Gizmodo covered the post too and added some technical details, noting that "the supercomputer has more than 45,000 processor cores and runs at 1.995 petaflops."
AI

IBM 'TrueNorth' Neuro-Synaptic Chip Promises Huge Changes -- Eventually 97

JakartaDean writes: Each of IBM's "TrueNorth" chips contains 5.4 billion transistors and runs on 70 milliwatts. The chips are designed to behave like neurons—the basic building blocks of biological brains. Dharmendra Modha, the head of IBM's cognitive computing group, says a system of 24 connected chips simulates 48 million neurons, roughly the same number as a rodent's brain.

Whereas conventional chips are wired to execute particular "instructions," the TrueNorth juggles "spikes," much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone's voice as they speak—or changes in color from pixel to pixel in a photo. "You can think of it as a one-bit message sent from one neuron to another," says one of the chip's chief designers. The chips are designed not for training neural networks, but for executing them. This has significant implications for consumer AI: big companies with lots of resources could focus on the training, while individual TrueNorth chips in people's gadgets handle the execution.
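The one-bit spike idea can be illustrated with a toy leaky integrate-and-fire neuron, a generic textbook model far simpler than TrueNorth's actual neuron circuit; the leak and threshold values here are arbitrary:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: each tick, the membrane potential
    leaks toward zero and accumulates the weighted input; crossing the
    threshold emits a one-bit spike and resets the potential."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes
```

Feeding it a steady sub-threshold input produces an occasional spike rather than a continuous value: information is carried by when the one-bit events fire, which is what makes the event-driven hardware so power-frugal.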
Math

How Weather Modeling Gets Better 43

Dr_Ish writes: Bob Henson over at Weather Underground has posted a fascinating discussion of the recent improvements made to the major weather models that are used to forecast hurricanes and the like. The post also included interesting links that explain more about the models. Quoting: "The latest version of the ECMWF model, introduced in May, has significant changes to model physics and the ways in which observations are brought into and used within the model. The overall improvements include better portrayal of clouds and precipitation, including a more accurate depiction of intense rainfall. The main effect of the model upgrade for tropical cyclones is slightly lower central pressure. During the first 3 days of a forecast, the ECMWF has tended to have a slight weak bias on tropical cyclones; the new version is closer to the mark."
Supercomputing

Obama's New Executive Order Says the US Must Build an Exascale Supercomputer 223

Jason Koebler writes: President Obama has signed an executive order authorizing a new supercomputing research initiative with the goal of creating the fastest supercomputers ever devised. The National Strategic Computing Initiative, or NSCI, will attempt to build the first ever exascale computer, 30 times faster than today's fastest supercomputer. Motherboard reports: "The initiative will primarily be a partnership between the Department of Energy, Department of Defense, and National Science Foundation, which will be designing supercomputers primarily for use by NASA, the FBI, the National Institutes of Health, the Department of Homeland Security, and NOAA. Each of those agencies will be allowed to provide input during the early stages of the development of these new computers."
Australia

Cray To Build Australia's Fastest Supercomputer 54

Bismillah writes: US supercomputer vendor Cray has scored the contract to build the Australian Bureau of Meteorology's new system, said to be capable of 1.6 petaFLOPS and with an upgrade option in three years' time to hit 5 petaFLOPS. From the iTnews story: "The increase in capacity will allow the BoM to deal with growth in the 1TB of data it collects every day, which it expects to increase by 30 percent every 18 months to two years. It will also allow the agency to collect new areas of information it previously lacked the capacity for. 'The new observation platforms that are coming online are bringing quite a lot more data,' supercomputer program director Tim Pugh told iTnews."
Supercomputing

Ask Slashdot: Best Bang-for-the-Buck HPC Solution? 150

An anonymous reader writes: We are looking into procuring a FEA/CFD machine for our small company. While I know workstations well, the multi-socket rack cluster solutions are foreign to me. On one end of the spectrum, there are companies like HP and Cray that offer impressive setups for millions of dollars (out of our league). On the other end, there are quad-socket mobos from Supermicro and Intel, for 8-18 core CPUs that cost thousands of dollars apiece.

Where do we go from here? Is it even reasonable to order $50k worth of components and put together our own high-performance, reasonably-priced blade cluster? Or is this folly, best left to experts? Who are these experts if we need them?

And what is the better choice here? 16-core Opterons at 2.6 GHz, 8-core Xeons at 3.4 GHz? Are power and thermals limiting factors here? (A full rack cupboard would consume something like 25 kW, it seems?) There seems to be precious little straightforward information about this on the net.
Supercomputing

Supercomputing Cluster Immersed In Oil Yields Extreme Efficiency 67

1sockchuck writes: A new supercomputing cluster immersed in tanks of dielectric fluid has posted extreme efficiency ratings. The Vienna Scientific Cluster 3 combines several efficiency techniques to create a system that is stingy in its use of power, cooling and water. VSC3 recorded a PUE (Power Usage Effectiveness) of 1.02, putting it in the realm of data centers run by Google and Facebook. The system avoids the use of chillers and air handlers, and doesn't require any water to cool the fluid in the cooling tanks. Limiting use of water is a growing priority for data center operators, as cooling towers can use large volumes of water resources. The VSC3 system packs 600 teraflops of computing power into 1,000 square feet of floor space.
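PUE is simply total facility power divided by the power that actually reaches the IT gear, so a PUE of 1.02 means only 2% overhead for cooling and power distribution. The kilowatt figures below are illustrative, not VSC3's actual measurements:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (every watt goes to computation)."""
    return total_facility_kw / it_load_kw

# Illustrative: 1000 kW of IT load plus 20 kW of cooling/distribution overhead
example = pue(1020.0, 1000.0)   # a PUE of 1.02
```

For comparison, a conventional data center with chillers might burn an extra 500 kW on the same IT load, for a PUE of 1.5, which is why eliminating chillers and air handlers moves the needle so much.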
AMD

AMD Outlines Plans For Zen-Based Processors, First Due In 2016 166

crookedvulture writes: AMD laid out its plans for processors based on its all-new Zen microarchitecture today, promising 40% higher performance-per-clock from the x86 CPU core. Zen will use simultaneous multithreading to execute two threads per core, and it will be built using "3D" FinFETs. The first chips are due to hit high-end desktops and servers next year. In 2017, Zen will combine with integrated graphics in smaller APUs designed for desktops and notebooks. AMD also plans to produce a high-performance server APU with a "transformational memory architecture" likely similar to the on-package DRAM being developed for the company's discrete graphics processors. This chip could give AMD a credible challenger in the HPC and supercomputing markets—and it could also make its way into laptops and desktops.
Stats

Humans Dominating Poker Super Computer 93

New submitter IoTdude writes: The Claudico supercomputer uses an algorithm designed to cope with the gargantuan number of possible decision points in Heads-Up No-Limit Texas Hold'em. Claudico also updates its strategy as it goes along, but its basic approach to the game involves getting into every hand by calling bets. And it's not working out so far. Halfway through the competition, the four human pros had a cumulative lead of 626,892 chips. Though much could change in the week remaining, a lead of around 600,000 chips is considered statistically significant.
Supercomputing

Nuclear Fusion Simulator Among Software Picked For US's Summit Supercomputer 57

An anonymous reader writes: Today, The Register has learned of 13 science projects approved by boffins at the US Department of Energy to run on the 300-petaFLOPS Summit. These software packages, selected for the Center for Accelerated Application Readiness (CAAR) program, will be ported to the massively parallel machine in hopes of making full use of the supercomputer's architecture. They range from astrophysics, biophysics, chemistry, and climate modeling to combustion engineering, materials science, nuclear physics, plasma physics and seismology.
