An anonymous reader quotes a report from Ars Technica: The Raspberry Pi Compute Module is getting a big upgrade, with the same processor used in the recently released Raspberry Pi 3. The Compute Module, which is intended for industrial applications, was first released in April 2014 with the same CPU as the first-generation Raspberry Pi. The upgrade announced today has 1GB of RAM and a Broadcom BCM2837 processor that can run at up to 1.2GHz. "This means it provides twice the RAM and roughly ten times the CPU performance of the original Compute Module," the Raspberry Pi Foundation announcement said. This is the second major version of the Compute Module, but it's being called the "Compute Module 3" to match the last flagship Pi's version number. The new Compute Module has more flexible storage options than the original. "One issue with the [Compute Module 1] was the fixed 4GB of eMMC flash storage," the announcement said. But some users wanted to add their own flash storage. "To solve this, two versions of the [Compute Module 3] are being released: one with 4GB eMMC on-board and a 'Lite' model which requires the user to add their own SD card socket or eMMC flash." The core module is tiny so that it can fit into other hardware, but for development purposes there is a separate I/O board with GPIO, USB and MicroUSB, CSI and DSI ports for camera and display boards, HDMI, and MicroSD. The Compute Module 3 and the lite version cost $30 and $25, respectively.
An anonymous reader quotes a report from Apple Insider: Apple on Tuesday was granted a patent detailing technology that allows for ear speakers, cameras and even a heads-up display to hide behind an edge-to-edge screen, a design rumored to debut in a next-generation iPhone later this year. Awarded by the U.S. Patent and Trademark Office, Apple's U.S. Patent No. 9,543,364 for "Electronic devices having displays with openings" describes a method by which various components can be mounted behind perforations in a device screen that are so small as to be imperceptible to the human eye. This arrangement would allow engineers to design a smartphone or tablet with a true edge-to-edge, or "full face," display. With smartphones becoming increasingly compact, there has been a push to move essential components behind the active -- or light-emitting -- area of incorporated displays. Apple in its patent suggests mounting sensors and other equipment behind a series of openings, or through-holes, in the active portion of an OLED or similar panel. These openings might be left empty or, if desired, filled with glass, polymers, radio-transparent ceramic or other suitable material. Positioning sensor inputs directly in line with said openings facilitates the gathering of light, radio waves and acoustic signals. Microphones, cameras, antennas, light sensors and other equipment would therefore have unimpeded access beyond the display layer. The design also accommodates larger structures like iPhone's home button. According to the document, openings are formed between pixels, suggesting a self-illuminating display technology like OLED is preferred over traditional LCD structures that require backlight and filter layers. Hole groupings can be arranged in various shapes depending on the application, and might be larger or smaller than the underlying component. If implemented into a future iPhone, the window-based HUD could be Apple's first foray into augmented reality.
Apple leaves the mechanics unmentioned, but the system could theoretically go beyond AR and into mixed reality applications.
MojoKid writes: AMD has a lot riding on Ryzen, its new generation CPU architecture that is supposed to return the chip designer to a competitive position versus Intel in the high-end desktop X86 processor market. Late last week at CES 2017, AMD lined up over a dozen high-performance AM4 motherboards from five hardware partners, including ASRock, ASUS, Biostar, Gigabyte, and MSI. All AM4 motherboards are built around one of two desktop chipsets for Ryzen, the AMD X370 or X300. Motherboards based on the X370 chipset are intended for power users and gamers. These boards bring more robust overclocking controls and support for dual graphics cards, along with more I/O connectivity and dual-channel DDR4 memory support. The X300 is AMD's chipset for mini-ITX motherboards for small form factor (SFF) system platforms. The X300 also supports dual-channel DDR4 memory, PCIe 3.0, M.2 SATA devices, NVMe, and USB 3.1 Gen 1 and Gen 2. Finally, AMD representatives on hand at CES also reported that all Ryzen processors will be multiplier unlocked, hopefully allowing for some rather flexible overclocking options. There will also be several processors in the family, with varying core counts depending on SKU, at launch.
MojoKid writes: Over the past couple of years, Dell has been driving a redesign effort of its consumer and commercial product lines and has systematically been updating both design signatures and the technology platforms within them. Dell's premium consumer XPS product line, perhaps more so than any other, has seen the most significant design reinvention with the likes of its XPS 13 and XPS 15 notebook line. At CES 2017, Dell announced the XPS 27 7760 all-in-one PC, which has a radically new look that draws at least one design cue from its XPS notebook siblings, specifically with respect to the display bezel, or the lack thereof. Though Dell isn't officially branding the touch-enabled version of the XPS 27 with an "InfinityEdge" display, the side and top bezels are cut to a minimum, accentuating a beautiful 4K IPS panel. However, the machine's display might not be the most standout feature of the 2017 Dell XPS 27. Under that display, Dell actually expanded things mechanically to make room not only for a Windows Hello capable camera but also for a 10-speaker sound system, designed in conjunction with Grammy Award-winning music producer and audio engineer JJ Puig, that takes the system's audio reproduction and output capabilities to a whole new level. The sound system is very accurate, with dual 50 watt amplifiers at less than 1% THD (Total Harmonic Distortion) and a 70Hz to 20KHz frequency response. Though the system is currently built on Intel's Skylake platform, Kaby Lake versions are imminent, and with discrete AMD Radeon R9 M470X graphics it has decent gaming and multimedia chops as well.
MojoKid writes: AMD lifted the veil on its next generation GPU architecture, codenamed Vega, this morning. One of the underlying forces behind Vega's design is that conventional GPU architectures have not been scaling well for diverse data types. Gaming and graphics workloads have shown steady progress, but today's GPUs are used for much more than just graphics. In addition, while the compute capability of GPUs has been increasing at a good pace, memory capacity has not kept up. Vega aims to improve both compute performance and addressable memory capacity through new technologies not available on any previous-gen architecture. First, Vega has the most scalable GPU memory architecture built to date, with 512TB of address space. It also has a new geometry pipeline tuned for more performance and better efficiency with over 2X peak throughput per clock, a new Compute Unit design, and a revamped pixel engine. The pixel engine features a new draw stream binning rasterizer (DSBR), which reportedly improves performance and saves power. All told, Vega should offer significant improvements in terms of performance and efficiency when products based on the architecture begin shipping in a few months.
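The 512TB address-space figure works out to a 49-bit virtual address; a quick sanity check on the arithmetic:

```python
import math

# Vega's claimed 512TB of GPU address space, expressed in bytes
address_space_bytes = 512 * 2 ** 40

# Number of address bits needed to span that range
address_bits = int(math.log2(address_space_bytes))
print(address_bits)  # 49
```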
When a technology company like Apple releases a new product, chances are it's going to be thinner than its predecessor -- even if it may be slightly worse off for it. HP is taking a different approach with its new 15.6-inch Spectre x360 laptop, which was recently announced at CES. The machine is slightly thicker than its predecessor, and HP claims it features three hours of additional battery life. The Verge reports: The difference between the new x360 and the old x360, in terms of thickness, is minimal, from 15.9mm to 17.8mm. (For reference, the 2015 MacBook Pro was 18mm thick.) It's an increase of 1.9mm for the Spectre, but HP says it's now including a battery that's 23 percent larger in exchange. At the same time, the laptop is also getting narrower, with its body shrinking from 14.8 inches wide to 14 inches wide. Unfortunately, the claimed three hours of additional battery life aren't meant to make this laptop into some long-lasting wonder -- they're really just meant to normalize its battery life. HP will only be selling the 15.6-inch x360 with a 4K display this year, and that requires a lot more power. By increasing the laptop's battery capacity, HP is able to push the machine's battery life from the 9.5 hours it estimated for the 4K version of its 2016 model to about 12 hours and 45 minutes for this model. So it is adding three hours of battery life, but in doing so, it's merely matching the battery life of last year's 1080p model. The x360 is also being updated to include Intel's Kaby Lake processors. It includes options that max out at an i7 processor, 16GB of RAM, a 1TB SSD, and Nvidia GeForce 940MX graphics. It's supposed to be released February 26th, with pricing starting at $1,278 for an entry-level model.
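As a rough sanity check on those numbers: the 23 percent larger battery alone doesn't cover the jump from 9.5 hours to about 12 hours and 45 minutes, so the remainder presumably comes from Kaby Lake's efficiency gains. The split below is an inference from the article's figures, not an HP claim:

```python
# Figures from the article; the implied efficiency gain is inferred, not HP's claim.
old_hours = 9.5        # HP's estimate for the 2016 4K model
new_hours = 12.75      # ~12 hours 45 minutes claimed for the 2017 4K model
battery_growth = 1.23  # battery capacity is 23 percent larger

runtime_growth = new_hours / old_hours  # ~1.34x
implied_efficiency_gain = runtime_growth / battery_growth
print(f"runtime up {runtime_growth - 1:.0%}, "
      f"implying ~{implied_efficiency_gain - 1:.0%} better efficiency per watt-hour")
```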
Qualcomm has detailed the Snapdragon 835 processor, which will power most of the leading Android smartphones this year. It's designed to grab information from the air at gigabit speeds and turn it into rich virtual and augmented reality experiences, according to several executives at a pre-CES briefing. Qualcomm SVP Keith Kressin said, "The 835 is going to be one of the key devices that propels the VR use case." PC Magazine reports: The hardest thing to understand about the Snapdragon 835, especially if you're thinking from a desktop CPU space, is how much Qualcomm has been prioritizing elements of the system-on-chip other than the CPU. This has been coming for years, and it can be tricky because it relies on firmware and the Android OS to properly distribute work to non-CPU components of the chip. During the briefing, it was striking how little Qualcomm talked about its Kryo 280 CPU, as compared to other components. Qualcomm tries to counter that by pointing out that this is the first 10nm mobile processor, which will improve efficiency, and also by saying the CPU is "tightly integrated" with other components using the new Symphony system manager, which operates automatically yet can be customized by application developers. This distributes work across the CPU, GPU, DSP, and more exotic components, letting the Snapdragon 835 work better than it would with CPU alone. How that will combine with Qualcomm's recent announcement that it will support Windows 10 on mobile PCs, including legacy Win32 apps, is yet to be seen. The Snapdragon 835 consumes 25 percent less power than the 820, according to Qualcomm. That means seven hours of 4K streaming video and two hours of VR gaming on a typical device, the company said. These new uses are really power hungry. Since Qualcomm can only do so much on power efficiency, it's also introducing Quick Charge 4, which supposedly charges a phone to five hours of use in five minutes and is USB-C power delivery compliant. 
The new Adreno 540 graphics chip improves 3D performance by 25 percent over the previous generation, Qualcomm said. But it also enables features like HDR10, which improves colors; foveated rendering, which most clearly renders what you're looking at rather than elements in the periphery of a scene; and low latency, which allows you to move your head smoothly around VR scenes. With one 32MP or two 16MP cameras running at the same time, the Snapdragon 835 supports various dual-camera functions. The Snapdragon 835 will feature the X16 modem, which Qualcomm announced earlier this year and will be able to boost LTE to gigabit speeds. The keys to gigabit LTE are triple 20MHz carrier aggregation with 256-QAM encoding and 4x4 MIMO antennas, said Qualcomm's senior director of marketing, Peter Carson. That's going to be first introduced with a Netgear hotspot in Australia this January, but Sprint and T-Mobile have said they're trying to assemble this set of technologies.
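A back-of-envelope sketch of how those ingredients reach "gigabit": assuming roughly 100 Mbps per spatial stream on a 20MHz carrier with 256-QAM, and a 4x4 + 4x4 + 2x2 stream split across the three aggregated carriers. Both assumptions are ours, not figures from the article:

```python
# Back-of-envelope gigabit LTE arithmetic (a sketch; the per-stream rate and
# the stream split across carriers are assumptions, not from the article).
mbps_per_stream = 100            # ~20MHz carrier, 256-QAM, one spatial stream
streams_per_carrier = [4, 4, 2]  # 4x4 MIMO on two carriers, 2x2 on the third

total_streams = sum(streams_per_carrier)  # 10 streams in flight
peak_mbps = total_streams * mbps_per_stream
print(peak_mbps)  # 1000 -- "gigabit class"
```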
Reader joshtops writes: Ars Technica has reviewed the much-anticipated Intel Core i7-7700K Kaby Lake, the recently launched desktop processor from the giant chipmaker. And it's anything but a good sign for enthusiasts who were hoping to see significant improvements in performance. From the review, "The Intel Core i7-7700K is what happens when a chip company stops trying. The i7-7700K is the first desktop Intel chip in brave new post-"tick-tock" world -- which means that instead of major improvements to architecture, process, and instructions per clock (IPC), we get slightly higher clock speeds and a way to decode DRM-laden 4K streaming video. [...] If you're still rocking an older Ivy Bridge or Haswell processor and weren't convinced to upgrade to Skylake, there's little reason to upgrade to Kaby Lake. Even Sandy Bridge users may want to consider other upgrades first, such as a new SSD or graphics card. The first Sandy Bridge parts were released six years ago, in January 2011. [...] As it stands, what we have with Kaby Lake desktop is effectively Sandy Bridge polished to within an inch of its life, a once-groundbreaking CPU architecture hacked, and tweaked, and mangled into ever smaller manufacturing processes and power envelopes. Where the next major leap in desktop computing power comes from is still up for debate -- but if Kaby Lake is any indication, it won't be coming from Intel. While Ars Technica has complained about the minimal upgrades, AnandTech looks at the positive side: The Core i7-7700K sits at the top of the stack, and performs like it. A number of enthusiasts complained when they launched the Skylake Core i7-6700K with a 4.0/4.2 GHz rating, as this was below the 4.0/4.4 GHz rating of the older Core i7-4790K. At this level, 200-400 MHz has been roughly the difference of a generational IPC upgrade, so users ended up with similar performing chips and the difference was more in the overclocking. 
However, given the Core i7-7700K comes out of the box with a 4.2/4.5 GHz arrangement, and support for Speed Shift v2, it handily mops the floor with the Devil's Canyon part, consigning it to history.
AMD announced Tuesday it is introducing Radeon FreeSync 2, a new display technology that will enable monitors to show exactly the image pixels that a game or other application intends them to display. The result will be better image quality for gamers, according to AMD. From a report on VentureBeat: With the FreeSync 2 specification, monitor makers will be able to create higher-quality monitors that build on the two-year-old FreeSync technology. Sunnyvale, Calif.-based AMD is on a quest for "pixel perfection," said David Glen, senior fellow at AMD, in a press briefing. With FreeSync 2, you won't have to mess with your monitor's settings to get the perfect setting for your game, Glen said. It will be plug-and-play, deliver brilliant pixels with twice the color gamut and brightness of other monitors, and have low-latency performance for high-speed games. AMD's FreeSync technology and Nvidia's rival G-Sync allow a graphics card to adjust the monitor's refresh rate on the fly, matching it to the computer's frame rate. This synchronization prevents the screen-tearing effect -- with visibly mismatched graphics on different parts of the screen -- which happens when the refresh rate of the display is out of sync with the computer.
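A toy simulation of the problem adaptive sync solves: with a fixed 60Hz refresh and a GPU finishing frames at an uneven 45fps, buffer swaps land mid-scanout and the screen shows parts of two frames; with adaptive refresh the display waits for each frame, so every swap lands in the blanking window. All numbers here are illustrative, not from AMD:

```python
# Toy model of screen tearing, with illustrative numbers only.
REFRESH_US = 16_667  # one 60Hz refresh interval, in microseconds
VBLANK_US = 500      # tear-free blanking window at the start of each refresh

def count_tears(frame_done_us, adaptive):
    """Count buffer swaps that land mid-scanout (outside the blanking window)."""
    tears = 0
    for t in frame_done_us:
        # With adaptive sync the display starts a refresh when the frame is
        # ready, so the swap phase is effectively always zero.
        phase = 0 if adaptive else t % REFRESH_US
        if phase >= VBLANK_US:
            tears += 1
    return tears

frames = [i * 22_222 for i in range(1, 91)]  # ~2 seconds of 45fps frame times
print(count_tears(frames, adaptive=False))   # 90 -- every swap tears here
print(count_tears(frames, adaptive=True))    # 0
```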
Qualcomm's upcoming Snapdragon 835's specs have leaked ahead of its CES reveal. An anonymous reader writes: According to the leaked press release, Qualcomm's Snapdragon 835 sports the Qualcomm Kryo 280 CPU (quad-core), Qualcomm Adreno 540 GPU, and Qualcomm Hexagon DSP to manage the different workloads. All of this combined will result in a 27% increase in performance when compared to the previous generation. Qualcomm is also making significant improvements with the Snapdragon 835 when it comes to power consumption. To be precise, the Snapdragon 835 consumes 40% less power than the older generation, which is supposed to offer the following: "1+ day of talk time, 5+ days of music playback, and 7+ hours of 4K video streaming. Should your phone need more power, Qualcomm Quick Charge 4 provides five hours of battery life for five minutes of charging." Qualcomm stated in the press release that the Snapdragon also comes with substantial improvements to graphics rendering and virtual reality. According to the company, the Snapdragon 835 includes "game-changing" enhancements to improve audio, intuitive interactions, and vibrant visuals. The processor also offers 25 percent faster 3D graphics rendering and produces 60X the display colors of the Snapdragon 820.
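One plausible reading of the "60X display colors" figure, offered here purely as an assumption rather than anything Qualcomm has stated, is a move from 8-bit to 10-bit color per channel:

```python
# 8-bit vs 10-bit per-channel color depth (an assumed reading of "60X").
colors_8bit = (2 ** 8) ** 3    # 16,777,216 colors
colors_10bit = (2 ** 10) ** 3  # 1,073,741,824 colors
print(colors_10bit // colors_8bit)  # 64 -- close to the quoted "60X"
```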
Ahead of CES 2017, which officially kicks off Tuesday, Dell has announced a convertible version of its popular XPS 13 laptop. The machine is powered by a seventh-generation Kaby Lake Intel Core i chip (i5 and i7 options are available), Intel HD Graphics 615 integrated GPU, 4 to 16GB LPDDR3 RAM, a 128GB-1TB solid-state drive (SSD), a 720p webcam on the bottom of the display with support for Windows Hello, a fingerprint scanner, a 46 watt-hour battery, and a 13.3-inch touchscreen, available in QHD+ or FHD configurations. From a report on VentureBeat: The bezel is very narrow, in keeping with the XPS style. The fanless PC offers an SD card slot and two USB-C ports, and a USB-A to USB-C adapter comes in the box. The laptop is 0.32-0.54 inch thick, which is thinner than Dell's 2016 XPS. But the keyboard hasn't been squished down -- the keys have 1.3mm travel, or just a tad (0.1mm) more than you get on the XPS laptop -- which is impressive. The laptop weighs 2.7 pounds. The question is whether people will want the convertible option when the laptop is fine as is. The convertible XPS 13 starts at $1000, which is $200 more than the XPS 13 laptop.
Foxconn was in the news recently for plans to "automate entire factories" in China, but the electronics manufacturing company has also announced plans with Sharp to build an $8.8 billion (61 billion yuan) factory in China to produce liquid-crystal displays (LCDs). Reuters reports: Sakai Display Products Corp's plant will be a so-called Gen-10.5 facility specializing in large-screen LCDs and will be operational by 2019, the company said at a signing event with local officials in Guangzhou on Friday. It said the plant will have capacity equating to 92 billion yuan a year. The heavy investment is aimed at increasing production to meet expected rising demand for large-screen televisions and monitors in Asia. Sakai Display Products Corp's plans for the Guangzhou plant come as Hon Hai seeks to turn the joint venture into a subsidiary, investing a total of 15.1 billion yuan in the company. The venture will also sell 436,000 shares for 17.1 billion yuan to an investment firm co-owned by Hon Hai Chairman Terry Gou, giving Hon Hai a 53 percent interest in the business and lowering Sharp's stake from 40 percent to 26 percent.
dryriver writes: A few years ago I bought a multiplayer war game called Soldner: Secret Wars that I had never heard of before. (The game is entirely community maintained now and free to download and play at www.soldnersecretwars.de.) The professional reviews completely and utterly destroyed Soldner -- buggy, bad gameplay, no single-player mode, disappointing graphics, server problems and so on. For me and many other players who did give it a chance beyond the first 30 minutes, Soldner turned out to be the most fun, addictive, varied, satisfying and multi-featured multiplayer war game ever. It had innovative features that AAA titles like Battlefield and COD did not have at all at the time -- fully destructible terrain, walls and buildings, cool physics on everything from jeeps flying off mountaintops to Apache helicopters crashing into Hercules transport aircraft, to dozens of trees being blown down by explosions and then blocking an incoming tank's way. Soldner took a patch or three to become fully stable, but then was just fun, fun, fun to play. So much freedom, so much cool stuff you can do in-game, so many options and gadgets you can play with. By contrast, the far, far simpler -- but better looking -- Battlefield, COD, Medal Of Honor, CounterStrike war games got all the critical praise, made the tens of millions in profit per release, became longstanding franchises and are, to this day, not half the fun to play that Soldner is. How does this happen? How does a title like Soldner, that tried to do more new stuff than the other war games combined, get trashed by every reviewer, and then far less innovative and fun to play war games like BF, COD, CS sell tens of millions of copies per release and get rave reviews all around?
dryriver writes: I got together with old computer nerd friends the other day. All of us have been at it since the 8-bit/1980s days of Amstrad, Atari, Commodore 64-type home computers. Everybody at the meeting agreed on one thing -- computing is just not as cool and as much fun as it once was. One person lamented that computer games nowadays are tied to internet DRM like Steam, that some crucial DCC software is available to rent only now (e.g. Photoshop) and that many "basic freedoms" of the old-school computer nerd are increasingly disappearing. Another said that Windows 10's spyware aspects made him give up on his beloved PC platform and that he will use Linux and Android devices only from now on, using consoles to game on instead of a PC because of this. A third complained about zero privacy online, internet advertising, viruses, ransomware, hacking, crapware. I lamented that the hardware industry still hasn't given us anything resembling photorealistic realtime 3D graphics, and that the current VR trend arrived a full decade later than it should have. A point of general agreement was that big tech companies in particular don't treat computer users with enough respect anymore. What do Slashdotters think? Is computing still as cool and fun as it once was, or has something "become irreversibly lost" as computing evolved into a multi-billion dollar global business?
Reader MojoKid writes: NVIDIA's Pascal architecture has been wildly successful in the consumer space. The various GPUs that power the GeForce GTX 10 series are all highly competitive at their respective price points, and the higher-end variants are currently unmatched by any single competing GPU. NVIDIA has since retooled Pascal for the professional workstation market as well, with products that make even the GeForce GTX 1080 and TITAN X look quaint in comparison. NVIDIA's beastly Quadro P6000 and Quadro P5000 are Pascal powered behemoths, packing up to 24GB of GDDR5X memory and GPUs that are more capable than their consumer-targeted counterparts. Though it is built around the same GP102 GPU, the Quadro P6000 is particularly interesting, because it is outfitted with a fully-functional Pascal GPU with all of its SMs enabled, which results in 3,840 active cores, versus 3,584 on the TITAN X. The P5000 has the same GP104 GPU as the GTX 1080, but packs in twice the amount of memory -- 16GB vs 8GB. In the benchmarks, with cryptographic workloads and pro-workstation targeted graphics tests, the Quadro P6000 and Quadro P5000 are dominant across the board. The P6000 significantly outpaced the previous-generation Maxwell-based Quadro M6000 throughout testing, and the P5000 managed to outpace the M6000 on a few occasions as well. Of particular note is that the Quadro P6000 and P5000, while offering better performance than NVIDIA's previous-gen, high-end professional graphics cards, do it in much lower power envelopes, and they're quieter too. In a couple of quick gaming benchmarks, the P6000 may give us a hint at what NVIDIA has in store for the rumored GeForce GTX 1080 Ti, with all CUDA cores enabled in its GP102 GPU and performance over 10% faster than a Titan X.
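The core counts follow from Pascal's 128 CUDA cores per SM (the standard figure for consumer Pascal parts, assumed here): a fully enabled GP102 has 30 SMs, while the TITAN X ships with two disabled:

```python
CORES_PER_SM = 128  # CUDA cores per SM on consumer Pascal GPUs (assumed)

p6000_sms = 30    # fully enabled GP102 in the Quadro P6000
titan_x_sms = 28  # TITAN X ships with two SMs disabled

print(p6000_sms * CORES_PER_SM)    # 3840 active cores
print(titan_x_sms * CORES_PER_SM)  # 3584 active cores
```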
According to a new SuperData Research report, the worldwide gaming market was worth a whopping $91 billion this year, with mobile gaming leading the way with a total estimated market value of $41 billion. The PC gaming market did very well too, as it pulled in nearly $36 billion over the year. PC Gamer reports: The mobile game segment was the largest at $41 billion (up 18 percent), followed by $26 billion for retail games and $19 billion for free-to-play online games. New categories such as virtual reality, esports, and gaming video content were small in size, but they are growing fast and holding promise for 2017, SuperData said. Mobile gaming was driven by blockbuster hits like Pokemon Go and Clash Royale. The mobile games market has started to mature and now more closely resembles traditional games publishing, requiring ever higher production values and marketing spend. Monster Strike was the No. 1 mobile game, with $1.3 billion in revenue. VR grew to $2.7 billion in 2016. Gaming video reached $4.4 billion, up 34 percent. Consumers increasingly download games directly to their consoles, spending $6.6 billion on digital downloads in 2016. PC gaming continues to do well, earning $34 billion (up 6.7 percent) and driven largely by free-to-play online titles and downloadable games. Incumbents like League of Legends together with newcomers like Overwatch are driving the growth in PC games. PC gamers also saw a big improvement with the release of a new generation of graphics cards, offering a 40 percent increase in graphics power and a 20 percent reduction of power consumption.
An anonymous reader shares a report: Data from the builds on PCPartPicker show an interesting trend among the buyers of AMD CPUs. Of the 25,780 builds on PCPartPicker from the last 31 months with a price point between $450 and $5,000, 19% included an AMD CPU. This is in-line with the Steam Hardware Surveys, but things have changed recently. Builds with AMD CPUs tend to be much less expensive than those with Intel CPUs. The builds with an AMD CPU were $967 on average versus the Intel CPU builds, which were on average $1,570. In the last 31 months, brand loyalty to AMD seemed to push AMD CPU builders to choose AMD graphics cards at a much higher rate than Intel CPU builders. 55% of machines with an AMD CPU also had an AMD GPU; whereas, only 19% of builds with an Intel CPU included an AMD GPU. In the last six months, AMD has started to lose even more ground to Intel and to Nvidia. On the CPU builds, only 10% of gamers building on PCPartPicker were opting to buy an AMD CPU. Among these, the percentage that decided to pair their AMD CPU with an AMD GPU dropped to 51%. The challenges that AMD is seeing in the overall GPU market are being felt even amongst their loyal supporters.
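Turning those percentages back into rough build counts (simple arithmetic on the article's own figures):

```python
total_builds = 25_780   # builds on PCPartPicker over the last 31 months
amd_cpu_share = 0.19    # share that included an AMD CPU
amd_gpu_pairing = 0.55  # share of AMD CPU builds that also had an AMD GPU

amd_cpu_builds = round(total_builds * amd_cpu_share)
amd_cpu_and_gpu = round(amd_cpu_builds * amd_gpu_pairing)
print(amd_cpu_builds, amd_cpu_and_gpu)  # ~4898 AMD CPU builds, ~2694 all-AMD
```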
The Nintendo Switch -- the hybrid portable games console/tablet due for release in March 2017 -- will be powered by Nvidia's older Tegra X1 SoC and not its upcoming Tegra X2 "Parker" SoC as initially rumored. From a report on ArsTechnica: The use of Tegra X1, which also powers the Nvidia Shield Android TV, means the graphics hardware inside the Switch is based on Nvidia's older second-generation Maxwell architecture, rather than the latest Pascal architecture. While the two architectures share a very similar design, the Switch will miss out on some of the smaller performance improvements made in Pascal. When docked, the Switch's GPU runs at 768MHz, already lower than the 1GHz of the Shield Android TV. When used as a portable, the Switch downclocks the GPU to 307.2MHz -- just 40 percent of the clock speed when docked. Given the Switch is highly likely to use a 720p screen rather than 1080p -- this is currently assumed to be a 6.2-inch IPS LCD with 10-point multi-touch support -- there is some overhead to run games at 1080p when docked. However, it's questionable how many developers will go to the effort of creating games that make use of the extra horsepower when docked, rather than simply opting to program for the slower overall GPU clock speed. While GPU performance is variable, the rest of the Switch's specs remain static. Its four ARM A57 CPU cores are purported to run at 1020MHz regardless of whether the console is docked or undocked, while the memory controller can run at either 1600MHz or 1331MHz in either mode.
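The clock and resolution arithmetic behind those claims checks out:

```python
docked_mhz = 768.0
portable_mhz = 307.2

portable_fraction = portable_mhz / docked_mhz  # 0.4 -- the quoted 40 percent

# Why there is only "some overhead" for 1080p when docked: the docked clock is
# 2.5x the portable clock, while 1080p has 2.25x the pixels of a 720p panel.
clock_ratio = docked_mhz / portable_mhz     # 2.5
pixel_ratio = (1920 * 1080) / (1280 * 720)  # 2.25
print(portable_fraction, clock_ratio, pixel_ratio)
```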
It's the open source web version of Freeciv, the classic Linux strategy game, and now Slashdot reader Andreas(R) -- one of its developers -- has an announcement: the team is working on bringing the game to the modern era with 3D WebGL graphics, and a beta of the 3D WebGL version of Freeciv has been released today. The game will work on any device with a browser supporting HTML5 and WebGL, and three gigabytes of RAM... It's a volunteer community development project and anyone is welcome to contribute. Have fun and remember to sleep!
The developers of Freeciv-web are now also working on a VR version using Google Cardboard, according to the site, while the original Freeciv itself has still been maintained for over 20 years -- and apparently even has its own dedicated port number.
Reader MojoKid writes: AMD is announcing a new series of Radeon-branded products today, targeted at machine intelligence and deep learning enterprise applications, called Radeon Instinct. As its name suggests, the new Radeon Instinct line comprises GPU-based solutions for deep learning, inference and training. The new GPUs are also complemented by a free, open-source library and framework for GPU accelerators, dubbed MIOpen. MIOpen is architected for high-performance machine intelligence applications and is optimized for the deep learning frameworks in AMD's ROCm software suite. The first products in the lineup consist of the Radeon Instinct MI6, the MI8, and the MI25. The 150W Radeon Instinct MI6 accelerator is powered by a Polaris-based GPU, packs 16GB of memory (224GB/s peak bandwidth), and will offer up to 5.7 TFLOPS of peak FP16 performance. Next up in the stack is the Fiji-based Radeon Instinct MI8. Like the Radeon R9 Nano, the Radeon Instinct MI8 features 4GB of High-Bandwidth Memory (HBM) with peak bandwidth of 512GB/s. The MI8 will offer up to 8.2 TFLOPS of peak FP16 compute performance, with a board power that typically falls below 175W. The Radeon Instinct MI25 accelerator will leverage AMD's next-generation Vega GPU architecture and has a board power of approximately 300W. All of the Radeon Instinct accelerators are passively cooled, but when installed into a server chassis you can bet there will be plenty of air flow. Like the recently released Radeon Pro WX series of professional graphics cards for workstations, Radeon Instinct accelerators will be built by AMD. All of the Radeon Instinct cards will also support AMD MultiGPU (MxGPU) hardware virtualization technology.
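A sketch of where the MI8's 8.2 TFLOPS figure plausibly comes from, assuming a fully enabled Fiji GPU (4096 stream processors, as in the R9 Nano-class card the article compares it to) at roughly 1GHz, with FP16 issued at the same rate as FP32. These are our assumptions, not official MI8 specs:

```python
# Peak-FLOPS arithmetic for the Fiji-based MI8 (assumed configuration).
stream_processors = 4096  # fully enabled Fiji, as in the R9 Nano (assumption)
clock_ghz = 1.0           # ~1GHz engine clock (assumption)
flops_per_clock = 2       # one fused multiply-add counts as 2 floating-point ops

peak_tflops = stream_processors * clock_ghz * flops_per_clock / 1000
print(peak_tflops)  # 8.192 -- matching the quoted "up to 8.2 TFLOPS"
```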