Power

Plunging Battery Prices Expected To Spur Renewable Energy Adoption 119

Lucas123 writes: Lithium-ion (Li-ion) and flow battery prices are expected to drop by as much as 60% by 2020, making them far more affordable for storing power from distributed renewable energy systems, such as wind and solar, according to a recent report by the Australian Renewable Energy Agency (ARENA). The 130-page report (PDF) shows that Li-ion batteries will drop from $550 per kilowatt hour (kWh) in 2014 to $200 per kWh by 2020, and flow battery prices will drop from $680 per kWh to $350 per kWh over the same period. Flow batteries and Li-ion batteries work well with intermittent energy sources such as solar panels and wind turbines because they can sit idle for long periods without losing charge. Both battery technologies offer unique advantages: they can easily be scaled to suit many applications and have high cycle efficiency, the ARENA report noted. Li-ion batteries more easily suit the consumer market. Flow batteries, which are typically too large for consumer use, scale more easily because all that's needed to grow storage capacity is more electrolyte liquid; the hardware remains the same.
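The headline percentage can be checked directly from the per-kWh figures quoted above; a quick sketch (note the Li-ion decline actually works out to a bit over 60%):

```python
# Percentage price declines implied by the ARENA projections
# (per-kWh prices in USD, taken from the summary above).
def pct_drop(start, end):
    """Return the percentage decline from start to end, to one decimal."""
    return round((start - end) / start * 100, 1)

li_ion_drop = pct_drop(550, 200)  # Li-ion: $550/kWh (2014) -> $200/kWh (2020)
flow_drop = pct_drop(680, 350)    # flow:   $680/kWh (2014) -> $350/kWh (2020)

print(li_ion_drop)  # 63.6
print(flow_drop)    # 48.5
```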
Networking

OnHub Router -- Google's Smart Home Trojan Horse? 120

An anonymous reader writes: A couple weeks ago, Google surprised everybody by announcing a new piece of hardware: the OnHub Wi-Fi router. It packs a ton of processing power and a bunch of wireless radios into a glowy cylinder, and they're going to sell it for $200, which is on the high end for home networking equipment. Google sent out a number of units for testing, and the reviews are starting to come out. The device is truly Wi-Fi-centric, with only a single port for an Ethernet cable. It runs on a Qualcomm IPQ8064 dual-core 1.4GHz SoC with 1GB of RAM and 4GB of storage. You can only access the router's admin settings through the associated app on a mobile device.

OnHub's data transfer speeds couldn't compete with a similarly priced Asus router, but it had no problem blanketing the area with a strong signal. Ron Amadeo puts his conclusion simply: "To us, this looks like Google's smart home Trojan horse." The smartphone app that accompanies OnHub has branding for something called "Google On," which they speculate is Google's new hub for smart home products. "There are tons of competing smart home protocols out there, all of which are incompatible with one another—imagine HD-DVD versus Blu-Ray, but with about five different players. ... Other than Bluetooth and Wi-Fi, everything in OnHub is a Google/Nest/Alphabet protocol. And remember, the 'Built for Google On' stamp on the bottom of the OnHub sure sounds like a third-party certification program."
Businesses

Ask Slashdot: Advice On Enterprise Architect Position 196

dave562 writes: I could use some advice from the community. I have almost 20 years of IT experience, 5 of them with the company I am currently working for. In my current position, the infrastructure and applications that I am responsible for account for nearly 80% of the entire IT infrastructure of the company. In broad strokes, our footprint is roughly 60 physical hosts that run close to 1500 VMs and a SAN that hosts almost 4PB of data. The organization is a moderate-sized (~3000 employees), publicly traded company with a nearly $1 billion market value (recent fluctuations notwithstanding).

I have been involved in a constant struggle with the core IT group over how to best run operations. They are a traditional, internal-facing IT shop. They have stumbled through a private cloud initiative that is only about 30% realized. I have had to drag them kicking and screaming into the world of automated provisioning, IaaS, application performance monitoring, and all of the other IT "must-haves" that a reasonable person would expect from a company of our size. All the while, I have never had full access to the infrastructure. I do not have access to the storage. I do not have access to the virtualization layer. I do not have Domain Admin rights. I cannot see the network.

The entire organization has been hamstrung by an "enterprise architect" who relies on consultants to get the job done, but does not have the capability to properly scope the projects. This has resulted in failure after failure and a broken trail of partially implemented projects. (VMware without SRM enabled. EMC storage hardware without automated tiering enabled. Numerous proof-of-concept systems that never make it into production because they were not scoped properly.)

After 5 years of succeeding in the face of all of these challenges, the organization has offered me the Enterprise Architect position. However, they do not think that the position should have full access to the environment. They explained it to me as an "architecture" position, not a "sysadmin" position. That seems insane. It is like asking someone to draw a map without being able to actually visit the place that needs to be mapped.

For those of you in the community who have similar positions, what is your experience? Do you have unfettered access to the environment? Are purely architectural / advisory roles the norm at this level?
Data Storage

Oakland Changes License Plate Reader Policy After Filling 80GB Hard Drive 275

An anonymous reader writes: License plate scanners are a contentious subject, generating lots of debate over what information the government should have, how long they should have it, and what they should do with it. However, it seems policy changes are driven more by practical matters than privacy concerns. Earlier this year, Ars Technica reported that the Oakland Police Department retained millions of records going back to 2010. Now, the department has implemented a six-month retention window, with older data being thrown out. Why the change? They filled up the 80GB hard drive on the Windows XP desktop that hosted the data, and it kept crashing.
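A back-of-envelope sketch shows how quickly an 80GB drive fills at that scale. The per-record size here is an assumption (plate text, timestamp, GPS coordinates, plus a small plate image), not a figure from the Ars report:

```python
# Rough capacity estimate for a license-plate-reader database on an
# 80GB drive. BYTES_PER_RECORD is a guess, not a reported figure.
BYTES_PER_RECORD = 20_000     # assumed ~20KB per read, small image included
DRIVE_BYTES = 80 * 10**9      # the 80GB drive in the story

capacity_records = DRIVE_BYTES // BYTES_PER_RECORD
print(capacity_records)       # 4000000 -- a few million reads and it's full
```

With multiple cameras per cruiser logging around the clock, "millions of records since 2010" exhausting the disk is entirely plausible.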

Why not just buy a cheap drive with an order of magnitude more storage space? Sgt. Dave Burke said, "We don't just buy stuff from Amazon as you suggested. You have to go to a source, i.e., HP or any reputable source where the city has a contract. And there's a purchase order that has to be submitted, and there has to be money in the budget. Whatever we put on the system, has to be certified. You don't just put anything. I think in the beginning of the program, a desktop was appropriate, but now you start increasing the volume of the camera and vehicles, you have to change, otherwise you're going to drown in the amount of data that's being stored."
Data Storage

MIT's New File System Won't Lose Data During Crashes 168

jan_jes sends news that MIT researchers will soon present a file system they say is mathematically guaranteed not to lose data during a crash. While building it, they wrote and rewrote the file system over and over, finding that the majority of their development time was spent defining the system components and the relationships between them. "With all these logics and proofs, there are so many ways to write them down, and each one of them has subtle implications down the line that we didn’t really understand." The file system is slow compared to other modern examples, but the researchers say their formal verification can also work with faster designs. Associate professor Nickolai Zeldovich said, "Making sure that the file system can recover from a crash at any point is tricky because there are so many different places that you could crash. You literally have to consider every instruction or every disk operation and think, ‘Well, what if I crash now? What now? What now?’ And so empirically, people have found lots of bugs in file systems that have to do with crash recovery, and they keep finding them, even in very well tested file systems, because it’s just so hard to do.”
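The "what if I crash now?" discipline Zeldovich describes is the same reasoning behind the classic atomic-replace pattern that applications use on ordinary file systems. A minimal Python sketch of that pattern (this illustrates the style of reasoning, it is not the MIT system):

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Replace the contents of `path` so that a crash at any point leaves
    either the complete old contents or the complete new contents on disk,
    never a mix of the two."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # step 1: write a temp file
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())             # step 2: make the data durable
        os.replace(tmp, path)                # step 3: atomic rename
    except BaseException:
        os.unlink(tmp)                       # a crash here just leaves the old file
        raise
    # A fully durable version would also fsync the containing directory;
    # formal verification is about proving such steps are never forgotten.
```

Crashing between any two steps leaves a consistent file; the hard part, as the researchers note, is proving that property for every instruction of a whole file system rather than one helper function.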
Android

Samsung May Release an 18" Tablet 177

A report at PC Magazine says that Samsung may soon field a tablet to satisfy people not content with the 7", 9", 12", or even slightly larger tablets that are today's normal stock in trade. Instead, the company is reported to be working on an 18.4" tablet aimed at "living rooms, offices, and schools." The report is heavily hedged, but it sounds like an interesting idea: the device is said to run Android 5.1 Lollipop and be powered by an octa-core 64-bit 1.6GHz Exynos 7580 processor. Other rumored specs include 2GB of RAM, 32GB of internal storage, a microSD card slot with support for cards up to 128GB, and a large 5,700 mAh battery. The device also has an 8-megapixel main camera (and you thought people looked silly taking photos with their iPads) and a 2.1-megapixel "secondary camera."
Data Storage

Object Storage and POSIX Should Merge 66

storagedude writes: Object storage's low cost and ease of use have made it all the rage, but a few additional features would make it a worthier competitor to POSIX-based file systems, writes Jeff Layton at Enterprise Storage Forum. Byte-level access, easier application portability and a few commands like open, close, read, write and lseek could make object storage a force to be reckoned with.

'Having an object storage system that allows byte-range access is very appealing,' writes Layton. 'It means that rewriting applications to access object storage is now an infinitely easier task. It can also mean that the amount of data touched when reading just a few bytes of a file is greatly reduced (by several orders of magnitude). Conceptually, the idea has great appeal. Because I'm not a file system developer I can't work out the details, but the end result could be something amazing.'
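The open/read/lseek interface Layton wishes for maps onto a thin wrapper. A toy sketch against an in-memory stand-in for an object store (a real implementation would translate `read` into a ranged request, e.g. an HTTP "Range: bytes=m-n" header, rather than slicing a dict):

```python
# Toy POSIX-style byte-range view over an "object store" (here just a dict).
# The point: read(n) transfers only n bytes, not the whole object.
class RangedObject:
    def __init__(self, store: dict, key: str):
        self._data = store[key]   # stands in for the remote object
        self._pos = 0

    def lseek(self, offset: int) -> int:
        """Move the read cursor, POSIX-style."""
        self._pos = offset
        return self._pos

    def read(self, nbytes: int) -> bytes:
        """Fetch only the requested byte range of the object."""
        chunk = self._data[self._pos:self._pos + nbytes]
        self._pos += len(chunk)
        return chunk

store = {"bucket/log.bin": bytes(range(256))}
obj = RangedObject(store, "bucket/log.bin")
obj.lseek(100)
print(obj.read(4))   # b'defg' -- only bytes 100..103 are touched
```

This is the "several orders of magnitude" savings Layton describes: touching 4 bytes of a multi-gigabyte object no longer means downloading the whole thing.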
Data Storage

Meet Linux's Newest File-System: Bcachefs 132

An anonymous reader writes: Bcachefs is a new open-source file-system derived from the bcache Linux kernel block layer cache. Bcachefs was announced by Kent Overstreet, the lead Bcache author. Bcachefs hopes to provide performance like XFS/EXT4 while having features similar to Btrfs and ZFS. The bcachefs on-disk format hasn't yet been finalized and the code isn't yet ready for the Linux kernel. That said, initial performance results are okay and "It probably won't eat your data — but no promises." Features so far for Bcachefs are support for multiple devices, built-in caching/tiering, CRC32C checksumming, and Zlib transparent compression. Support for snapshots is to be worked on.
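The checksumming feature is straightforward in principle: store a CRC with every block when writing, recompute and compare on every read. A sketch of the mechanism (bcachefs uses CRC32C; Python's stdlib only ships plain CRC-32 via zlib, which is close enough to illustrate):

```python
import zlib

BLOCK_SIZE = 4096

def checksum_blocks(data: bytes):
    """Split data into blocks and pair each with a CRC to store alongside it."""
    return [(data[i:i + BLOCK_SIZE], zlib.crc32(data[i:i + BLOCK_SIZE]))
            for i in range(0, len(data), BLOCK_SIZE)]

def verify(block: bytes, stored_crc: int) -> bool:
    """On read, recompute the checksum to detect silent corruption."""
    return zlib.crc32(block) == stored_crc

blocks = checksum_blocks(b"x" * 10000)
block, crc = blocks[0]
print(verify(block, crc))                       # True -- block is intact

corrupt = bytes([block[0] ^ 1]) + block[1:]     # flip a single bit
print(verify(corrupt, crc))                     # False -- CRC-32 catches any 1-bit error
```

A checksum failure on read is what lets a file system return an error (or fetch a good replica from another device) instead of silently handing corrupted data to the application.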
Google

Lightning Wipes Storage Disks At Google Data Center 141

An anonymous reader writes: Lightning struck a Google data center in Belgium four times in rapid succession last week, permanently erasing a small amount of users' data from the cloud. The affected disks were part of Google Compute Engine (GCE), a service that lets people run virtual machines on Google's servers. Despite the uncontrollable nature of the incident, Google has accepted full responsibility for the data loss and promises to upgrade its data center storage hardware, increasing its resilience against power outages.
Bug

Air Traffic Snafu: FAA System Runs Out of Memory 234

minstrelmike writes: Over the weekend, hundreds of flights were delayed or canceled in the Washington, D.C. area after air traffic systems malfunctioned. Now, the FAA says the problem was related to a recent software upgrade at a local radar facility. The software had been upgraded to display customized windows of reference data that were supposed to disappear once deleted. Unfortunately, the systems ended up running out of memory. The FAA's report is vague about whether it was operator error or software error: "... as controllers adjusted their unique settings, those changes remained in memory until the storage limit was filled." Wonder what programming language they used?
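The FAA's description matches a classic unbounded-growth bug: per-window settings accumulate in memory and are never reclaimed, even after the windows they describe are deleted. A toy sketch of the pattern next to the usual fix, a hard cap with eviction (class and field names are invented, not from the FAA system):

```python
from collections import OrderedDict

class LeakyStore:
    """Settings survive even after the window they describe is deleted."""
    def __init__(self):
        self.settings = {}

    def save(self, window_id, prefs):
        self.settings[window_id] = prefs

    def delete_window(self, window_id):
        pass  # bug: the saved settings are never reclaimed

class BoundedStore:
    """Same store, but old entries are evicted once a limit is reached."""
    def __init__(self, limit):
        self.limit = limit
        self.settings = OrderedDict()

    def save(self, window_id, prefs):
        self.settings[window_id] = prefs
        self.settings.move_to_end(window_id)       # mark as most recent
        while len(self.settings) > self.limit:
            self.settings.popitem(last=False)      # evict the oldest entry

leaky, bounded = LeakyStore(), BoundedStore(limit=100)
for i in range(10_000):
    leaky.save(i, {"refdata": []})
    bounded.save(i, {"refdata": []})

print(len(leaky.settings))    # 10000 -- grows until memory runs out
print(len(bounded.settings))  # 100 -- stays within the cap
```

Whether one calls the FAA incident operator error or software error, a storage limit without an eviction (or cleanup-on-delete) policy fails eventually no matter what the operators do.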
Data Storage

Intel Promises 'Optane' SSDs Based On Technology Faster Than Flash In 2016 80

holy_calamity writes: Intel today announced that it will introduce SSDs based on a new non-volatile memory that is significantly faster than flash in 2016. A prototype was shown operating around seven times faster than a high-end SSD available today. Intel's new 3D XPoint memory technology was developed in collaboration with Micron and is said to be capable of operating as much as 1,000 times faster than flash. Scant details have been released, but the technology has similarities with the RRAM and memristor technologies being pursued by other companies.
Books

Jason Scott of Textfiles.com Is Trying To Save a Huge Storage Room of Manuals 48

martiniturbide writes: Remember Jason Scott of Textfiles.com, who wanted your AOL & Shovelware CDs earlier this year? Right now -- at this moment! -- he is trying to save the manuals in a huge storage room that was going to be dumped. It is a big storage room, and some of the manuals date back to the 1930s. On Monday a team of volunteers helped him pack some of the manuals. Today he needs more volunteers at "2002 Bethel Road, Finksburg, MD, USA" to try to save them all. He is also accepting PayPal donations for packing material, transportation, and storage costs. You can also check his progress on his Twitter account.
Businesses

Wuala Encrypted Cloud-Storage Service Shuts Down 128

New submitter craigtp writes: Wuala, one of the more trusted cloud-storage services that employed encryption for your files, is shutting down. Users of the service will have until 15th November 2015 to move all of their files off the service before all of their data is deleted. From the announcement: "Customers who have an active prepaid annual subscription will be eligible to receive a refund for any unused subscription fees. Your refund will be calculated based on a termination date effective from today’s date, even though the full service will remain active until 30 September 2015 and your data will be available until 15 November 2015. Refunds will be automatically processed and issued to eligible customers in coming weeks. Some exceptions apply. Please visit www.wuala.com for more information."
Security

One Petabyte of Data Exposed Via Insecure Big Data Systems 50

chicksdaddy writes: Behind every big data deployment is a range of supporting technologies like databases and memory caching systems that are used to store and analyze massive data sets at lightning speeds. A new report from security research firm Binaryedge suggests that many of the organizations using these powerful data storage and analysis tools are not taking adequate steps to secure them. The result is that more than a petabyte of stored data is accessible to anyone online with the knowledge of where and how to look for it.

In a blog post on Thursday, the firm reported the results of research that found close to 200,000 such systems that were publicly addressable. Binaryedge said it found 39,000 MongoDB servers that were publicly addressable and that "didn't have any type of authentication." In all, the exposed MongoDB systems contained more than 600 terabytes of data stored in databases with names like "local," "admin," and "db." Other platforms that were found to be publicly addressable and unsecured included the open source Redis key-value cache and store technology (35,000 publicly addressable instances holding 13TB of data) and 9,000 instances of ElasticSearch, a commonly used search engine based on Lucene, that exposed another 531 terabytes of data.
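For MongoDB specifically, the exposure BinaryEdge describes comes down to two settings: the server accepting connections from the public internet, and authorization being disabled. A minimal `mongod.conf` fragment closing both holes (check your distribution's defaults and MongoDB's own hardening checklist before relying on this alone):

```yaml
# /etc/mongod.conf -- require credentials and listen only on localhost
net:
  bindIp: 127.0.0.1         # don't expose the port to the public internet
security:
  authorization: enabled    # reject unauthenticated commands
```

The same two questions — who can reach the port, and what can they do without credentials — apply equally to the Redis and Elasticsearch instances in the report.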
Cellphones

The Realities of a $50 Smartphone 141

An anonymous reader writes: Google recently reiterated their commitment to the goal of a $50 smartphone in India, and a new article breaks down exactly what that means for the phone's hardware. A budget display will eat up about $8 of that budget — it's actually somewhat amazing that so little money can still buy a 4-4.5" panel running at 854x480. For another $10, you can get a cheap SoC — something in the range of 1.3GHz and quad-core, complete with Wi-Fi, Bluetooth, and GPS radios. A gigabyte of RAM and 4 gigabytes of storage can be had for another $10 or so. Throw in a $2.10, 1,600 mAh battery and a $5 camera unit, and you've got most of a phone. That leaves about $9 to play with for basic stuff like a casing, and then packaging/marketing costs (some of which could be given freely, like the design work). Profit margins will be nonexistent, but that's less of an issue for Google, who simply wants to spread the reach of Android.
Data Storage

Samsung Unveils V-NAND High Performance SSDs, Fast NVMe Card At 5.5GB Per Second 61

MojoKid writes: Sometimes it's the enterprise sector that gets dibs on the coolest technology, and so it goes with a trio of TCO-optimized, high-performance solid state drives from Samsung that were just announced, all three of which are based on three-dimensional (3D) Vertical NAND (V-NAND) flash memory technology. The fastest of the bunch can read data at up to 5,500 megabytes per second. That's the rated sequential read speed of Samsung's PM1725, a half-height, half-length (HHHL) PCIe card-type NVMe SSD. Other rated specs include a random read speed of up to 1,000,000 IOPS, random write performance of up to 120,000 IOPS, and sequential writes topping out at 1,800MB/s. The PM1725 comes in just two beastly storage capacities, 3.2TB and 6.4TB, the latter of which is rated to handle five drive writes per day (32TB) for five years. Samsung also introduced two other 3D V-NAND products, the PM1633 and PM953. The PM1633 is a 2.5-inch 12Gb/s SAS SSD that will be offered in 480GB, 960GB, 1.92TB, and 3.84TB capacities. As for the PM953, it's an update to the SM951 and is available in M.2 and 2.5-inch form factors at capacities up to 1.92TB.
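The endurance rating is worth unpacking: five drive writes per day (DWPD) on the 6.4TB model is where the 32TB/day figure comes from, and it compounds to a remarkable lifetime total:

```python
# Endurance implied by the PM1725's rating: 5 full drive writes per day
# on the 6.4TB model, over the five-year rating period.
capacity_tb = 6.4
dwpd = 5

daily_tb = capacity_tb * dwpd            # TB written per day
lifetime_pb = daily_tb * 365 * 5 / 1000  # total over five years, in PB

print(daily_tb)                # 32.0 -- matches the figure in the summary
print(round(lifetime_pb, 1))   # 58.4 PB written over the rating period
```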
Hardware

Ask Slashdot: Capacity Planning and Performance Management? 64

An anonymous reader writes: When shops mostly ran on mainframes, it was relatively easy to do capacity planning because systems and programs were mostly monolithic. But today is very different; we use a plethora of technologies and systems are more distributed. Many applications are decentralized, running on multiple servers either for redundancy or because of multi-tiering architecture. Some companies run legacy systems alongside bleeding-edge technologies. We're also seeing many innovations in storage, like compression, deduplication, clones, snapshots, etc.

Today, with many projects, the complexity makes it pretty difficult to foresee resource usage. This makes it hard to budget for hardware that can fulfill capacity and performance requirements in the long term. It's even tougher when the project is still in the planning stages. My question: how do you do capacity planning and performance management for such decentralized systems with diverse technologies? Who is responsible for capacity planning in your company? Are you mostly reactive in adding resources (CPU, memory, IO, storage, etc.) or are you able to plan it out well beforehand?
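One common starting point, whatever the stack: fit a trend to measured usage and project it to the budget horizon. A least-squares sketch on made-up monthly storage figures (stdlib only; in practice the inputs would come from a monitoring system, and real growth is rarely this linear):

```python
# Simple least-squares trend fit for capacity forecasting.
# The usage numbers below are hypothetical.
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

months = [0, 1, 2, 3, 4, 5]
used_tb = [100, 112, 121, 135, 144, 158]   # measured storage use, in TB

slope, intercept = fit_line(months, used_tb)
forecast_12mo = slope * 12 + intercept      # projected usage a year out

print(round(slope, 2))          # 11.43 TB/month growth
print(round(forecast_12mo, 1))  # 236.9 TB -- the number to budget against
```

The fit itself is trivial; the hard organizational questions in the post — who owns the forecast, and whether anyone acts on it before the hardware runs out — are the part no library solves.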
Hardware

Military Data Center In a Suitcase To Get Commercial Release 90

judgecorp writes: The Mobyl Data Center, designed for the US Department of Defense, puts a data center in a rugged suitcase-sized box, and it will shortly be available commercially. The box includes up to 88 Xeon cores, a maximum of 176 GB of RAM, and 2.8 TB of SSD storage, with 12TB of hard disk as an option. The system uses credit-card-sized MobylPC server units, sealed in epoxy and rated to survive 300g of shock, but apparently proprietary to the vendor, Arnouse Digital Devices Corp.
Data Storage

Toshiba, SanDisk Piloting 3D NAND That Doubles Previous Capacity 61

Lucas123 writes: Under a joint development agreement, Toshiba and SanDisk have begun pilot production of a new 48-layer 256Gb NAND flash chip in a brand new fab in Mie prefecture, Japan. The new X3 chips, which double per-die capacity from 16GB to 32GB over the previous product, are made with triple-level cell (TLC) flash, compared with Toshiba's last multi-level cell (MLC) chip, which stored two bits per cell. The chips are expected to begin shipping in products next year. The companies plan to use the new memory in a wide number of products, including consumer SSDs, smartphones, tablets, memory cards, and enterprise SSDs for data centers, the companies said.
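The gigabit-to-gigabyte conversion and the TLC-vs-MLC density gain in plain arithmetic:

```python
# Chip capacities are quoted in gigabits (Gb); dividing by 8 gives the
# gigabyte (GB) figures in the summary above.
def gbit_to_gbyte(gbits):
    return gbits // 8

print(gbit_to_gbyte(256))  # 32 GB per die -- the new 48-layer TLC part
print(gbit_to_gbyte(128))  # 16 GB per die -- the previous generation

# TLC stores 3 bits per cell versus MLC's 2, a 1.5x density gain per cell;
# the rest of the doubling comes from process and layer-count changes.
print(3 / 2)               # 1.5
```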
Bug

Samsung Finds, Fixes Bug In Linux Trim Code 184

New submitter Mokki writes: After many complaints that Samsung SSDs corrupted data when used with Linux, Samsung found that the bug was actually in the Linux kernel and submitted a patch to fix it. It turns out that kernels without the fix can corrupt data if the system uses Linux md RAID at level raid0 or raid10 and issues trim/discard commands (either via fstrim or by the filesystem itself). The vendor of the drive does not matter, and the earlier blacklisting of Samsung drives for broken queued trim support can most likely be lifted after further testing. According to this post, the bug has been around for a long time.
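On Linux, the RAID level of each md array is visible in /proc/mdstat. A small stdlib sketch that flags arrays at the raid0/raid10 levels named above (the sample text is illustrative; on a real system you would read the file itself and also confirm whether discard is actually in use):

```python
import re

def affected_arrays(mdstat_text):
    """Return md device names running at raid0 or raid10, the levels
    implicated in the trim corruption bug described above."""
    hits = []
    for line in mdstat_text.splitlines():
        m = re.match(r"(md\d+)\s*:\s*active\s+(raid\d+)", line)
        if m and m.group(2) in ("raid0", "raid10"):
            hits.append(m.group(1))
    return hits

# Sample /proc/mdstat content -- only md1 matches an affected level.
sample = """\
md0 : active raid1 sda1[0] sdb1[1]
md1 : active raid10 sdc1[0] sdd1[1] sde1[2] sdf1[3]
"""
print(affected_arrays(sample))   # ['md1']
```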