Supercomputing

Optical Transistor Made From Single Molecule 92

An anonymous reader writes "Researchers from ETH Zurich have recently managed to create an optical transistor from a single molecule in what is yet another important achievement on the road to quantum computing. The molecule itself is about 2 nanometers in size, much smaller than standard transistors, which means that many more could be integrated on a single chip. Dr. Hwang, lead author of the academic paper, said, 'Our single-molecule optical transistor generates almost negligible amount of heat. When a single molecule absorbs one photon, there is some probability (quantum yield) that the molecule emits a photon out. The rest of the energy absorbed turns into heat in the matrix. For the case of the specific hydrocarbon molecule that we use, the quantum yield is near 100%. So almost no heat is generated.'"
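The quantum-yield argument in the quote reduces to simple arithmetic: whatever fraction of absorbed photons is not re-emitted ends up as heat in the host matrix. A minimal sketch (the 0.99 figure below is illustrative, not from the paper, and the Stokes shift between absorbed and emitted photons is ignored):

```python
def heat_fraction(quantum_yield):
    """Fraction of absorbed photon energy dissipated as heat.

    A molecule that re-emits a photon with probability `quantum_yield`
    turns the remaining probability mass into heat in the matrix.
    """
    if not 0.0 <= quantum_yield <= 1.0:
        raise ValueError("quantum yield must lie in [0, 1]")
    return 1.0 - quantum_yield

# A near-unity quantum yield, as claimed for the hydrocarbon used,
# leaves almost nothing to dissipate:
print(heat_fraction(0.99))  # only 1% of absorbed energy becomes heat
```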
Supercomputing

DARPA Wants a 19" Super-Efficient Supercomputer 200

coondoggie writes "If you can squish all the processing power of, say, an IBM Roadrunner supercomputer inside a 19-inch box and make it run on about 60 kilowatts of electricity, the government wants to talk to you. The extreme scientists at the Defense Advanced Research Projects Agency this week issued a call for research that might develop a super-small, super-efficient super beast of a computer. Specifically, DARPA's desires for Ubiquitous High Performance Computing (UHPC) will require a new system-wide technology approach including hardware and software co-design to minimize energy dissipation per operation and maximize energy efficiency, with a 50GFLOPS per watt goal."
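The two efficiency figures in the summary can be sanity-checked against each other: a roughly 1 PFLOPS Roadrunner-class machine in a 60 kW envelope works out to about 17 GFLOPS per watt, while the stated 50 GFLOPS/W goal would mean a petaflop in about 20 kW. A back-of-envelope check (Roadrunner's ~1 petaflop sustained figure is an assumption here):

```python
# Efficiency implied by "Roadrunner in a 60 kW box"
roadrunner_flops = 1.0e15      # ~1 PFLOPS sustained (assumed)
darpa_power_w = 60_000         # 60 kW envelope

implied_gflops_per_watt = roadrunner_flops / darpa_power_w / 1e9
print(implied_gflops_per_watt)   # ~16.7 GFLOPS/W

# The stated UHPC goal is stricter still:
goal_gflops_per_watt = 50.0
petaflop_power_at_goal_w = 1.0e15 / (goal_gflops_per_watt * 1e9)
print(petaflop_power_at_goal_w)  # 20000 W, i.e. a petaflop in ~20 kW
```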
Space

Aussie Scientists Build a Cluster To Map the Sky 58

Tri writes "Scientists at the Siding Spring Observatory have built a new system to map and record over 1 billion objects in the southern hemisphere sky. They collect 700 GB of data every night, which they then crunch down using some Perl scripts and make available to other scientists through a web interface backed by PostgreSQL. 'Unsurprisingly, the Southern Sky Survey will result in a large volume of raw data — about 470 terabytes ... when complete. ... the bulk of the analysis of the SkyMapper data will be done on a brand new, next generation Sun supercomputer kitted out with 12,000 cores. Due to be fully online by December, the supercomputer will offer a tenfold increase in performance over the facility's current setup of two SGI machines, each with just under 3500 cores in total.'"
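The quoted figures imply how long the survey has to run: at 700 GB per night, accumulating 470 TB of raw data takes roughly 670 observing nights. A quick check of that arithmetic:

```python
# Data-volume arithmetic from the figures quoted above.
nightly_gb = 700        # raw data collected per night
total_tb = 470          # projected size of the completed survey

nights = total_tb * 1000 / nightly_gb   # using 1 TB = 1000 GB
print(round(nights))    # ~671 observing nights to fill the survey
```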
Supercomputing

Open Source Solution Breaks World Sorting Records 139

allenw writes "In a recent blog post, Yahoo's grid computing team announced that Apache Hadoop was used to break the current world sorting records in the annual GraySort contest. It topped the 'Gray' and 'Minute' sorts in the general purpose (Daytona) category. They sorted 1TB in 62 seconds, and 1PB in 16.25 hours. Apache Hadoop is the only open source software to ever win the competition. It also won the Terasort competition last year."
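The two record runs imply nearly identical aggregate throughput, which is the remarkable part: the cluster sustained its terabyte-scale sort rate across three orders of magnitude more data. Checking the arithmetic:

```python
# Aggregate throughput implied by the two record runs.
TB, PB = 1e12, 1e15  # bytes

tb_rate = TB / 62                # 1 TB sorted in 62 seconds
pb_rate = PB / (16.25 * 3600)    # 1 PB sorted in 16.25 hours

print(round(tb_rate / 1e9, 1))   # ~16.1 GB/s
print(round(pb_rate / 1e9, 1))   # ~17.1 GB/s -- throughput held up at scale
```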
Supercomputing

Flu Models Predict Pandemic, But Flu Chips Ready 216

An anonymous reader writes "Supercomputer software models predict that swine flu will likely go pandemic sometime next week, but flu chips capable of detecting the virus within four hours are already rolling off the assembly line. The U.S. Department of Health and Human Services (HHS), which has designated swine flu as the '2009 H1N1 flu virus,' is modeling the spread of the virus using modeling software designed by the Department of Defense back when avian flu was a perceived threat. Now those programs are being run on cluster supercomputers and predict that officials are not implementing enough social distancing--such as closing all schools--to prevent a pandemic. Companies that designed flu-detecting chips for avian flu are quickly retrofitting them to detect swine flu, with the first flu chips being delivered to labs today." Relatedly, at least one bio-surveillance firm claims it detected and warned the CDC and the WHO about the swine flu problem in Mexico over two weeks before the alert was issued.
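The agent-based simulations described above are far more detailed, but the core mechanism they exercise can be sketched with a minimal SIR compartment model, in which "social distancing" shows up as a reduced contact rate. Parameter values below are illustrative, not fitted to H1N1:

```python
def sir(s, i, r, beta, gamma, days, dt=0.1):
    """Euler-integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I, with S, I, R as population fractions."""
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Social distancing lowers the contact rate beta:
s1, i1, r1 = sir(0.999, 0.001, 0.0, beta=0.5, gamma=0.25, days=120)
s2, i2, r2 = sir(0.999, 0.001, 0.0, beta=0.3, gamma=0.25, days=120)
print(r1 > r2)  # lower contact rate -> smaller final epidemic size
```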
IBM

IBM Computer Program To Take On 'Jeopardy!' 213

longacre writes "I.B.M. plans to announce Monday that it is in the final stages of completing a computer program to compete against human 'Jeopardy!' contestants. If the program beats the humans, the field of artificial intelligence will have made a leap forward. ... The team is aiming not at a true thinking machine but at a new class of software that can 'understand' human questions and respond to them correctly. Such a program would have enormous economic implications. ... The proposed contest is an effort by I.B.M. to prove that its researchers can make significant technical progress by picking 'grand challenges' like its early chess foray. The new bid is based on three years of work by a team that has grown to 20 experts in fields like natural language processing, machine learning and information retrieval. ... Under the rules of the match that the company has negotiated with the 'Jeopardy!' producers, the computer will not have to emulate all human qualities. It will receive questions as electronic text. The human contestants will both see the text of each question and hear it spoken by the show's host, Alex Trebek. ... Mr. Friedman added that they were also thinking about whom the human contestants should be and were considering inviting Ken Jennings, the 'Jeopardy!' contestant who won 74 consecutive times and collected $2.52 million in 2004."
Supercomputing

Creating a Low-Power Cloud With Netbook Chips 93

Al writes "Researchers from Carnegie Mellon University have created a remarkably low-power server architecture using netbook processors and flash memory cards. The server design, dubbed a 'fast array of wimpy nodes,' or FAWN, is only designed to perform simple tasks, but the CMU team says it could be perfect for large Web companies that have to retrieve large amounts of data from RAM. A setup including 21 individual nodes draws a maximum of just 85 watts under real-world conditions. The researchers say that a FAWN cluster could offer a low-power replacement for sites that currently rely on Memcached to access data from RAM."
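The headline number works out to about 4 watts per node. The sketch below checks that figure and illustrates one simple way a key-value workload could be spread across such nodes; the modulo hash placement is an assumption for illustration (the actual FAWN design uses consistent hashing behind a front-end):

```python
import hashlib

# Per-node power implied by the figures above
cluster_watts, nodes = 85, 21
print(round(cluster_watts / nodes, 1))   # ~4.0 W per node

def node_for_key(key, n_nodes=21):
    """Map a key to a node by hashing it -- simple modulo placement,
    a stand-in for FAWN's real consistent-hashing scheme."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n_nodes

# Any client can compute the owning node without a directory lookup:
print(node_for_key("user:1234"))
```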
Supercomputing

Supercomputer As a Service 78

gubm writes "Nearly one and a half years after making a stunning entry into the global supercomputer list with Eka, ranked as the fourth-fastest supercomputer in the world, Computational Research Laboratories (CRL), a Tata Sons' subsidiary, has succeeded in creating a new market for supercomputers — that of offering supercomputing power on rent to enterprises in India. For now, for want of a better word, let us call it 'Supercomputer as a Service.'"
Supercomputing

Microchip Mimics a Brain With 200,000 Neurons 521

Al writes "European researchers have taken a step towards replicating the functioning of the brain in silicon, creating a new custom chip with the equivalent of 200,000 neurons linked up by 50 million synaptic connections. The aim of the Fast Analog Computing with Emergent Transient States (FACETS) project is to better understand how to construct massively parallel computer systems modeled on a biological brain. Unlike IBM's Blue Brain project, which involves modeling a brain in software, this approach makes it much easier to create a truly parallel computing system. The set-up also features a distributed algorithm that introduces an element of plasticity, allowing the circuit to learn and adapt. The researchers plan to connect thousands of chips to create a circuit with a billion neurons and 10^13 synapses (about a tenth of the complexity of the human brain)."
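The basic unit such neuromorphic hardware implements is a spiking neuron along the lines of the leaky integrate-and-fire model: input accumulates on a leaky membrane potential, and crossing a threshold fires a spike and resets the cell. A software sketch of that dynamic (constants are illustrative, not taken from the FACETS chip):

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: integrate input with decay, emit a
    spike and reset when the membrane potential crosses threshold."""
    v, spikes = 0.0, []
    for t, x in enumerate(inputs):
        v = v * leak + x          # leaky integration of input
        if v >= threshold:
            spikes.append(t)      # fire...
            v = 0.0               # ...and reset the membrane
    return spikes

# Constant sub-threshold drive produces a regular spike train:
print(lif_spikes([0.4] * 10))  # -> [2, 5, 8]
```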
Supercomputing

Collaborative Map-Reduce In the Browser 188

igrigorik writes "The generality and simplicity of Google's Map-Reduce is what makes it such a powerful tool. However, what if, instead of using proprietary protocols, we could crowd-source the CPU power of millions of users online every day? JavaScript is the most widely deployed language — every browser can run it — and we could use it to push the job to the client. Then, all we would need is a browser and an HTTP server to power our self-assembling supercomputer (proof of concept + code). Imagine if all it took to join a compute job was to open a URL."
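The scheme is easiest to see in miniature: a coordinator splits the input into chunks, each client maps its chunk, and the results are grouped by key and reduced. The toy below simulates that flow in a single process (the real proof-of-concept ships the job as JavaScript over HTTP; all names here are hypothetical):

```python
from collections import defaultdict

def map_reduce(inputs, mapper, reducer, n_workers=4):
    """Split inputs into chunks (the 'pages' a browser client would
    fetch), map each chunk, then group by key and reduce."""
    chunks = [inputs[i::n_workers] for i in range(n_workers)]
    grouped = defaultdict(list)
    for chunk in chunks:              # each chunk = one client's work
        for item in chunk:
            for key, value in mapper(item):
                grouped[key].append(value)
    return {k: reducer(k, vs) for k, vs in grouped.items()}

# The classic word-count example:
docs = ["a rose is a rose", "is a rose"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(w, 1) for w in doc.split()],
    reducer=lambda k, vs: sum(vs),
)
print(counts["rose"])  # 3
```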
Supercomputing

Best Solution For HA and Network Load Balancing? 298

supaneko writes "I am working with a non-profit that will eventually host a massive online self-help archive and community (using FTP and HTTP services). We are expecting 1,000+ unique visitors / day. I know that having only one server to serve this number of people is not a great idea, so I began to look into clusters. After a bit of reading I determined that I am looking for high availability, in case of hardware fault, and network load balancing, which will allow the load to be shared among the two to six servers that we hope to purchase. What I have not been able to determine is the 'perfect' solution that would offer efficiency, ease-of-use, simple maintenance, enjoyable performance, and a notably better experience when compared to other setups. Reading about Windows 2003 Clustering makes the whole process sound easy, while Linux and FreeBSD just seem overly complicated. But is this truly the case? What have you all done for clustering solutions that worked out well? What key features should I be aware of for a successful cluster setup (hubs, wiring, hardware, software, same servers across the board, etc.)?"
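Whatever the OS, the asker's two requirements reduce to the same core loop: rotate requests across backends (load balancing) and skip backends that fail a health check (high availability). A minimal sketch of that logic, with hypothetical server names:

```python
from itertools import cycle

class Balancer:
    """Round-robin load balancer that skips unhealthy backends."""

    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)   # assume all up initially
        self._ring = cycle(servers)

    def mark_down(self, server):
        """A failed health check removes the server from rotation."""
        self.healthy.discard(server)

    def pick(self):
        """Return the next healthy backend in round-robin order."""
        for _ in range(len(self.servers)):
            s = next(self._ring)
            if s in self.healthy:
                return s
        raise RuntimeError("no healthy backends")

lb = Balancer(["web1", "web2", "web3"])
lb.mark_down("web2")                       # simulate a hardware fault
print([lb.pick() for _ in range(4)])       # web2 is never chosen
```

In practice this is what tools like HAProxy or Linux Virtual Server do for you, with real health checks and connection handling on top.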
Portables

Testing Lenovo's ThinkPad W700ds Dual-Screen Notebook 197

MojoKid writes "Lenovo's ThinkPad W700 is a unique product, targeted squarely at mobile professionals who require the power, features, and performance of a workstation-class product in a notebook. The machine has a few stand-out integrated features, like a Wacom Digitizer Tablet and X-Rite Color Calibrator. In addition, the ThinkPad W700ds version adds a secondary, slide-out 10.6" WXGA+ display, which increases monitor real-estate by 39%, spanning across its two panels. HotHardware's video demonstrates the machine's arsenal of toys for the graphics pro, in a somewhat portable desktop replacement notebook."
Hardware Hacking

DIY 1980s "Non-Von" Supercomputer 135

Brietech writes "Ever wanted to own your own supercomputer? This guy recreated a 31-processor SIMD supercomputer from the early 1980s called the 'Non-Von 1' in an FPGA. It uses a 'Non-Von Neumann' architecture, and was intended for extremely fast database searches and artificial intelligence applications. Full-scale models were intended to have more than a million processors. It's a cool project for those interested in 'alternative' computer architectures, and yes, full source code (Verilog) is available, along with a Python library to program it with." Hope the WIPO patent has expired.
Supercomputing

IBM Building 20 Petaflop Computer For the US Gov't 248

eldavojohn writes "When it's built, 'Sequoia' will outshine every supercomputer on the top 500 list today. The specs on this 96 rack beast are a bit hard to comprehend as it consists of 1.6 million processors and some 1.6PB of memory. That's 1.6 million processors — not cores. Its purpose? Primarily to maintain the nuclear stockpile & simulate explosions of nuclear munitions, but also for research into astronomy, energy, the human genome, and climate change. Hopefully the government uses this magnificent tool wisely when it gets it in 2012."
Supercomputing

Roland Piquepaille Dies 288

overheardinpdx writes "I'm sad to report that longtime HPC technology pundit Roland Piquepaille (rpiquepa) died this past Tuesday. Many of you may know of him through his blog, his submissions to Slashdot, and his many years of software visualization work at SGI and Cray Research. I worked with Roland 20 years ago at Cray, where we both wrote tech stories for the company newsletter. With his focus on how new technologies modify our way of life, Roland was really doing Slashdot-type reporting before there was a World Wide Web. Rest in peace, Roland. You will be missed." The notice of Roland's passing was posted on the Cray Research alumni group on LinkedIn by Matthias Fouquet-Lapar. There will be a ceremony on Monday Jan. 12, at 10:30 am Paris time, at Père Lachaise.
Supercomputing

How To Build a Homebrew PS3 Cluster Supercomputer 211

eldavojohn writes "UMass Dartmouth Physics Professor Gaurav Khanna and UMass Dartmouth Principal Investigator Chris Poulin have created a step-by-step guide designed to show you how to build your own supercomputer for about $4,000. They are also hoping that by publishing this guide they will bring about a new kind of software development targeting this architecture & grid (I know a few failed NLP projects of my own that could use some new hardware). If this catches on at research institutions it may increase Sony's hardware sales, but without a corresponding spike in game sales (where Sony makes most of its profit)."
Supercomputing

Inside Tsubame, Japan's GPU-Based Supercomputer 75

Startled Hippo writes "Japan's Tsubame supercomputer was ranked 29th-fastest in the world in the latest Top 500 ranking with a speed of 77.48 TFLOPS (trillion floating-point operations per second) on the industry-standard Linpack benchmark. Why is it so special? It uses NVIDIA GPUs. Tsubame includes hundreds of graphics processors of the same type used in consumer PCs, working alongside CPUs in a mixed environment that some say is a model for future supercomputers serving disciplines like material chemistry." Unlike the GPU-based Tesla, Tsubame definitely won't be mistaken for a personal computer.
Supercomputing

IEEE Says Multicore is Bad News For Supercomputers 251

Richard Kelleher writes "It seems the current design of multi-core processors is ill-suited to supercomputers. According to IEEE: 'Engineers at Sandia National Laboratories, in New Mexico, have simulated future high-performance computers containing the 8-core, 16-core, and 32-core microprocessors that chip makers say are the future of the industry. The results are distressing. Because of limited memory bandwidth and memory-management schemes that are poorly suited to supercomputers, the performance of these machines would level off or even decline with more cores.'"
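The leveling-off the Sandia simulations report is the classic memory-wall effect, often summarized by the roofline model: attainable performance is the lesser of the cores' compute peak and what the memory bus can feed them. A sketch with illustrative numbers (not Sandia's):

```python
def attainable_gflops(cores, gflops_per_core=10.0,
                      bandwidth_gbs=25.0, flops_per_byte=4.0):
    """Roofline estimate: performance is capped by whichever is lower,
    compute peak or bandwidth times arithmetic intensity."""
    compute_peak = cores * gflops_per_core
    memory_roof = bandwidth_gbs * flops_per_byte
    return min(compute_peak, memory_roof)

# Doubling cores stops helping once the memory roof is hit:
for cores in (4, 8, 16, 32):
    print(cores, attainable_gflops(cores))
# 4 cores: 40, 8 cores: 80, then flat at 100 GFLOPS for 16 and 32
```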
Supercomputing

NVIDIA's $10K Tesla GPU-Based Personal Supercomputer 236

gupg writes "NVIDIA announced a new category of supercomputers — the Tesla Personal Supercomputer — a 4 TeraFLOPS desktop for under $10,000. This desktop machine has 4 of the Tesla C1060 computing processors. These GPUs have no graphics out and are used only for computing. Each Tesla GPU has 240 cores and delivers about 1 TeraFLOPS single precision and about 80 GigaFLOPS double-precision floating point performance. The CPU and GPU are programmed in C with a few added keywords, using a parallel programming model called CUDA. The CUDA C compiler/development toolchain is free to download. There are tons of applications ported to CUDA including Mathematica, LabVIEW, ANSYS Mechanical, and many scientific codes from molecular dynamics, quantum chemistry, and electromagnetics; they're listed on CUDA Zone."
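The headline "4 TeraFLOPS" is simply the four GPUs' single-precision peaks added up, and the same arithmetic makes the machine's double-precision gap visible (a factor of ~12.5 at these quoted figures):

```python
# Aggregate peaks implied by the per-GPU figures quoted above.
gpus = 4
cores_per_gpu = 240
sp_per_gpu_tflops = 1.0       # single precision, per GPU
dp_per_gpu_gflops = 80.0      # double precision, per GPU

print(gpus * cores_per_gpu)        # 960 cores in the box
print(gpus * sp_per_gpu_tflops)    # 4.0 TFLOPS single precision
print(gpus * dp_per_gpu_gflops)    # 320.0 GFLOPS double precision
```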
IBM

DARPA's IBM-Led Neural Network Project Seeks To Imitate Brain 170

An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"
