Posted In: Hardware Archives - AppFerret


Happy 40th Anniversary to the Original Intel 8086 and the x86 Architecture

2018-06-08

Forty years ago today, Intel launched the original 8086 microprocessor — the grandfather of every x86 CPU ever built, including the ones we use now. This, it must be noted, is more or less the opposite outcome of what everyone expected at the time, including Intel.

According to Stephen P. Morse, who led the 8086 development effort, the new CPU “was intended to be short-lived and not have any successors.” Intel’s original goal with the 8086 was to improve overall performance relative to previous products while retaining source compatibility with earlier products (meaning assembly language for the 8008, 8080, or 8085 could be run on the 8086 after being reassembled). It offered faster overall performance than the 8080 or 8085 and could address up to 1MB of RAM (the 8085 topped out at 64KB). It contained eight 16-bit registers and was originally offered at a clock speed of 5MHz (later versions were clocked as high as 10MHz); the “x86” name itself comes from the part numbers of the 8086 and its successors (80186, 80286, 80386, and so on).

Morse had experience in software as well as hardware and, as this historical retrospective makes clear, made decisions intended to make it easy to maintain backwards compatibility with earlier Intel products. He even notes that had he known he was inventing an architecture that would power computing for the next 40 years, he would’ve done some things differently, including using a symmetric register structure and avoiding segmented addressing. Initially, the 8086 was intended to be a stopgap product while Intel worked feverishly to finish its real next-generation microprocessor — the iAPX 432, Intel’s first 32-bit microprocessor. When sales of the 8086 began to slip in 1979, Intel made the decision to launch a massive marketing operation around the chip, dubbed Operation Crush. The goal? Drive adoption of the 8086 over and above competing products made by Motorola and Zilog (the latter founded by former Intel employees, including Federico Faggin, lead architect on the first microprocessor, Intel’s 4004). Operation Crush was quite successful and is credited with spurring IBM to adopt the 8088 (a cut-down 8086 with an 8-bit bus) for the first IBM PC.

One might expect, given the x86 architecture’s historic domination of the computing industry, that the chip that launched the revolution would have been a towering achievement or quantum leap above the competition. The truth is more prosaic. The 8086 was a solid CPU core built by intelligent architects backed up by a strong marketing campaign. The computer revolution it helped to launch, on the other hand, transformed the world.

All that said, there’s one other point I want to touch on.

It’s Been 40 Years. Why Are We Still Using x86 CPUs?

This is a simple question with a rather complex answer. First, in a number of very real senses, we aren’t really using x86 CPUs anymore. The original 8086 was a chip with 29,000 transistors. Modern chips have transistor counts in the billions. The modern CPU manufacturing process bears little resemblance to the nMOS manufacturing process used to implement the original design in 1978. The materials used to construct the CPU are themselves very different and the advent of EUV (Extreme Ultraviolet Lithography) will transform this process even more.

Modern x86 chips translate x86 instructions into internal micro-ops for more efficient execution. They implement features like out-of-order execution and speculative execution to improve performance, and they limit the impact of slow memory buses (relative to CPU clocks) with multiple layers of cache and capabilities like branch prediction. People often ask “Why are we still using x86 CPUs?” as if this were analogous to “Why are we still using the 8086?” The honest answer is: We aren’t. An 8086 from 1978 and a Core i7-8700K are both CPUs, just as a Model T and a 2018 Toyota are both cars — but they don’t exactly share much beyond that most basic classification.

Furthermore, Intel tried to replace or supplant the x86 architecture multiple times. The iAPX 432, Intel i960, Intel i860, and Intel Itanium were all intended to supplant x86. Far from refusing to consider alternatives, Intel literally spent billions of dollars over multiple decades to bring those alternative visions to life. The x86 architecture won these fights — but it didn’t just win them because it offered backwards compatibility. We spoke to Intel Fellow Ronak Singhal for this article, who pointed out a facet of the issue I honestly hadn’t considered before. In each case, x86 continued to win out against the architectures Intel intended to replace it because the engineers working on those x86 processors found ways to extend and improve the performance of Intel’s existing microarchitectures, often beyond what even Intel engineers had thought possible years earlier.

Is there a penalty for continuing to support the original x86 ISA? There is — but today, it’s a tiny one. The original Pentium may have devoted up to 30 percent of its transistors to backwards compatibility, and the Pentium Pro’s bet on out-of-order execution and internal micro-ops chewed up a huge amount of die space and power, but these bets paid off. Today, the capabilities that consumed huge resources on older chips are a single-digit percent or less of the power or die area budget of a modern microprocessor. Comparisons between a variety of ISAs have demonstrated that architectural design decisions have a much larger impact on performance efficiency and power consumption than ISA does, at least above the microcontroller level.

Will we still be using x86 chips 40 years from now? I have no idea. I doubt any of the Intel CPU designers who built the 8086 back in 1978 thought their core would go on to power most of the personal computing revolution of the 1980s and 1990s. But Intel’s recent moves into fields like AI, machine learning, and cloud data centers are proof that the x86 family of CPUs isn’t done evolving. No matter what happens in the future, 40 years of success are a tremendous legacy for one small chip — especially one which, as Stephen Morse says, “was intended to be short-lived and not have any successors.”

Now read: A Brief History of Intel CPUs, Part 1: The 4004 to the Pentium Pro

Original article here.

 



Spectre, Meltdown: Critical CPU Security Flaws Explained

2018-01-04

Over the past few days we’ve covered major new security risks that struck at a number of modern microprocessors from Intel and, to a much lesser extent, ARM and AMD. Information on the attacks and their workarounds initially leaked out slowly, but Google has pushed up its timeline for disclosing the problems and some vendors, like AMD, have issued their own statements. The two flaws in question are known as Spectre and Meltdown, and they both relate to one of the core capabilities of modern CPUs, known as speculative execution.

Speculative execution is a performance-enhancing technique virtually all modern CPUs include to one degree or another. One way to increase CPU performance is to allow the core to perform calculations it may need in the future. The difference between speculative execution and ordinary execution is that the CPU performs these calculations before it knows whether it’ll actually be able to use the results.

Here’s how Google’s Project Zero summarizes the problem: “We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.”

Meltdown is Variant 3 in ARM, AMD, and Google parlance. Spectre accounts for Variant 1 and Variant 2.

Meltdown

“On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer.”

Intel is badly hit by Meltdown because its speculative execution methods are fairly aggressive. Specifically, Intel CPUs are allowed to access kernel memory when performing speculative execution, even when the application in question is running in user memory space. The CPU does check to see if an invalid memory access occurs, but it performs the check after speculative execution, not before. Architecturally, these invalid branches never execute — they’re blocked — but it’s possible to read data from affected cache blocks even so.
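
To make that concrete, here is a minimal, simplified C sketch of the access pattern Meltdown abuses. It is illustrative only: names like probe and kernel_addr are hypothetical, and a real proof of concept also needs a way to survive the fault (a signal handler or a TSX transaction) plus a cache-timing step such as Flush+Reload to read the leaked byte back out, none of which is shown here.

    #include <stdint.h>

    /* One probe slot per possible byte value, spaced a page apart so each
     * value maps to its own cache line/page. */
    static uint8_t probe[256 * 4096];

    void meltdown_gadget(const volatile uint8_t *kernel_addr) {
        /* Architecturally this load faults and never retires, but on
         * affected CPUs it can execute transiently anyway. */
        uint8_t secret = *kernel_addr;
        /* The dependent load leaves a cache footprint indexed by the secret
         * byte; the attacker later recovers it by timing accesses to probe[]. */
        volatile uint8_t sink = probe[secret * 4096];
        (void)sink;
    }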

The various OS-level fixes going into macOS, Windows, and Linux all concern Meltdown. The formal PDF on Meltdown notes that the software patches Google, Apple, and Microsoft are working on are a good start, but that the problem can’t be completely fixed in software. AMD and ARM appear largely immune to Meltdown, though ARM’s upcoming Cortex-A75 is apparently impacted.

Spectre

Meltdown is bad, but Meltdown can at least be ameliorated in software (with updates), even if there’s an associated performance penalty. Spectre is the name given to a set of attacks that “involve inducing a victim to speculatively perform operations that would not occur during correct program execution, and which leak the victim’s confidential information via a side channel to the adversary.”

Unlike Meltdown, which impacts mostly Intel CPUs, Spectre’s proof of concept works against everyone, including ARM and AMD. Its attacks are pulled off differently — one variant targets branch prediction — and it’s not clear there are hardware solutions to this class of problems, for anyone.
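
For comparison, the widely published Spectre Variant 1 (bounds-check bypass) pattern looks roughly like the sketch below. Again, this is an illustrative fragment with hypothetical names, not a working exploit: the attacker first calls victim_function with in-bounds values of x to train the branch predictor, then passes an out-of-bounds x so the mispredicted path reads past array1 and encodes the stolen byte in which line of array2 ends up cached.

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 512];
    volatile size_t array1_size = 16;

    void victim_function(size_t x) {
        if (x < array1_size) {                 /* branch the attacker trains to predict "taken"  */
            uint8_t value = array1[x];         /* transient out-of-bounds read on misprediction  */
            volatile uint8_t sink = array2[value * 512]; /* cache line index encodes the byte    */
            (void)sink;
        }
    }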

What Happens Next

Intel, AMD, and ARM aren’t going to stop using speculative execution in their processors; it’s been key to some of the largest performance improvements we’ve seen in semiconductor history. But as Google’s extensive documentation makes clear, these proof-of-concept attacks are serious. Neither Spectre nor Meltdown relies on any kind of software bug to work. Meltdown can be solved through hardware design and software rearchitecting; Spectre may not.

When reached for comment on the matter, Linux creator Linus Torvalds responded with the tact that’s made him legendary. “I think somebody inside of Intel needs to really take a long hard look at their CPU’s, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed,” Torvalds writes. “And that really means that all these mitigation patches should be written with ‘not all CPU’s are crap’ in mind. Or is Intel basically saying ‘We are committed to selling you shit forever and ever, and never fixing anything’? Because if that’s the case, maybe we should start looking towards the ARM64 people more.”

It does appear, as of this writing, that Intel is disproportionately exposed on these security flaws. While Spectre-style attacks can affect all CPUs, Meltdown is pretty Intel-specific. Thus far, user applications and games don’t seem much impacted, but web servers and potentially other workloads that access kernel memory frequently could run markedly slower once patched.

 

Original article here.

 



Western Digital plans 40TB drives, but it’s still not enough

2017-10-24

Data continues to grow faster than disk capacity.

Hard disk makers are using capacity as their chief bulwark against the rise of solid-state drives (SSDs), since they certainly can’t argue on performance, and Western Digital — the king of the hard drive vendors — has shown off a new technology that could lead to 40TB drives.

Western Digital already has the largest-capacity drive on the market. It recently introduced a 14TB drive, filled with helium to reduce drag on the spinning platters. But thanks to a new technology called microwave-assisted magnetic recording (MAMR), the company hopes to reach 40TB by 2025. The company promised engineering samples of the drive by mid-2018.

MAMR technology is a new method of cramming more data onto the disk. Western Digital’s chief rival, Seagate, is working on a competitive product called HAMR, or heat-assisted magnetic recording. I’ll leave it to propeller heads like AnandTech to explain the electrical engineering of it all. What matters to the end user is that it should ship sometime in 2019, and that’s after 13 years of research and development.

That’s right, MAMR was first developed by a Carnegie Mellon professor in 2006 and work has gone on ever since.

The physics of hard drives

Just like semiconductors, hard drives are running into a brick wall called the laws of physics. Every year it gets harder and harder to shrink these devices while cramming more in them at the same time.

Western Digital believes MAMR should enable a 15 percent decline in cost per terabyte, another argument hard disks hold over SSDs. A hard disk will always be cheaper per terabyte than an SSD because cramming more data into the same space is easier, relatively speaking, for hard drives than for flash memory chips. MAMR and HAMR are expected to enable drive makers to pack as much as 4 terabits per square inch on a platter, well beyond the 1.1 terabits per square inch in today’s drives.
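
As a rough sanity check on those numbers, and under the simplifying assumption that drive capacity scales linearly with areal density (same platter count and format overhead), the jump from 1.1 to 4 terabits per square inch comfortably covers a 40TB drive:

    #include <stdio.h>

    int main(void) {
        double today_capacity_tb = 14.0;  /* WD's current helium-filled drive          */
        double today_density     = 1.1;   /* terabits per square inch, per the article */
        double mamr_density      = 4.0;   /* projected MAMR/HAMR areal density ceiling */
        /* Linear scaling puts the ceiling at roughly 50TB, above the 40TB target. */
        printf("~%.0f TB\n", today_capacity_tb * mamr_density / today_density);
        return 0;
    }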

Data growing faster than hard disk capacity

The thing is, data is growing faster than hard disk capacity. According to research from IDC (sponsored by Seagate, it should be noted), by 2025 the global datasphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes). That’s 10 times the 16.1ZB of data generated in 2016. Much of it will come from Big Data and analytics, especially the Internet of Things (IoT), where sensors will be generating gigabytes of data per second.

And those data sets are so massive, many companies don’t use them all. They dump their accumulated data into what are called data lakes to be processed later, if ever. I’ve seen estimates that as much as 90 percent of collected data goes unused. But it has to sit somewhere, and that’s on a hard disk.

Mind you, that’s just Big Data. Individuals are generating massive quantities of data as well. Online backup and storage vendor BackBlaze, which has seen its profile rise after it began reporting on hard drive failures, uses hundreds of thousands of drives in its data centers. It just placed an order for 100 petabytes worth of disk storage, and it plans to deploy all of it in the fourth quarter of this year. And it has plans for another massive order for Q1. And that’s just one online storage vendor among dozens.

All of that is great news for Western Digital, Seagate and Toshiba — and the sales reps who work on commission.

Original article here.



IBM Sets New Tape Capacity Record of 330TB In A Single Cartridge

2017-08-07

IBM Research announced, with the help of Sony Storage Media Solutions, that it has achieved a capacity breakthrough in tape storage. IBM was able to reach an areal density of 201 Gb/in^2 (gigabits per square inch) on a prototype sputtered magnetic tape. This marks the fifth capacity record IBM has hit since 2006.

The current buzz in storage typically goes to faster media, like those that leverage the NVMe interface. StorageReview is guilty of focusing on these new emerging technologies without spending much time on tape, namely because tape is a fairly well-known and not terribly exciting storage medium. However, tape remains the most secure, energy-efficient, and cost-effective solution for storing enormous amounts of backup and archival data. And the deluge of unstructured data that is now being seen everywhere will need to go on something that has the capacity to store it.

This newly announced record for tape capacity would be 20 times the areal density of state of the art commercial tape drives such as the IBM TS1155 enterprise tape drive. The technology allows for 330TB of uncompressed data to be stored on a single tape cartridge. According to IBM this is the equivalent of having the texts of 330 million books in the palm of one’s hand.

Technologies used to hit this new density include:

  • Innovative signal-processing algorithms for the data channel, based on noise-predictive detection principles, which enable reliable operation at a linear density of 818,000 bits per inch with an ultra-narrow 48nm wide tunneling magneto-resistive (TMR) reader.
  • A set of advanced servo control technologies that, when combined, enable head positioning with an accuracy of better than 7 nanometers. This, combined with a 48nm wide (TMR) hard disk drive read head, enables a track density of 246,200 tracks per inch, a 13-fold increase over a state-of-the-art TS1155 drive (see the quick arithmetic check after this list).
  • A novel low friction tape head technology that permits the use of very smooth tape media
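
As a quick arithmetic check, the announced areal density follows directly from the linear density and track density quoted in the bullets above:

    #include <stdio.h>

    int main(void) {
        double bits_per_inch   = 818000.0;   /* linear density from the first bullet  */
        double tracks_per_inch = 246200.0;   /* track density from the second bullet  */
        double gbits_per_sq_in = bits_per_inch * tracks_per_inch / 1e9;
        printf("%.1f Gb/in^2\n", gbits_per_sq_in);  /* prints 201.4 Gb/in^2 */
        return 0;
    }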

This new technology adds to a long list of tape storage innovations from IBM stretching back 60 years; the capacity today is 165 million times that of the company’s first tape product.

Original article here.

 



Amazon opens up Alexa’s microphone and voice processing technology to hardware makers

2017-04-14

Amazon’s digital assistant Alexa might show up in a lot of new devices soon.

That’s because the online retail giant has decided to open up what amounts to Alexa’s ears, her 7-Mic Voice Processing Technology, to third party hardware makers who want to build the digital brain into their devices. The new development kit also includes access to Amazon’s proprietary software for wake word recognition, beamforming, noise reduction, and echo cancellation as well as reference client software for local device control and communication with the Alexa Voice Service.

The move will make it easier and less expensive for hardware makers to build Alexa into their products.

“Since the introduction of Amazon Echo and Echo Dot, device makers have been asking us to provide the technology and tools to enable a far-field Alexa experience for their products,” Priya Abani, director of Amazon Alexa, said in a statement. “With this new reference solution, developers can design products with the same unique 7-mic circular array, beamforming technology, and voice processing software that have made Amazon Echo so popular with customers. It’s never been easier for device makers to integrate Alexa and offer their customers world-class voice experiences.”

Amazon said the new development kit will be invitation only. Device makers can sign up here for an invite and to learn more about the technology.

A similar decision in 2015 to give developers the opportunity to build new capabilities for Alexa through the Alexa Skills Kit helped push Amazon into the early lead in the competitive voice assistant market. Developers who want to add to Alexa’s abilities can write code that works with Alexa in the cloud, letting the smart assistant do the heavy lifting of understanding and deciphering spoken commands.

Alexa reached a milestone of 10,000 skills back in February, and it has surely added many more since. That’s up from 7,000 in January; from 5,400 in December; from 3,000 in September; and from 1,000 in June. That’s a 10X increase since June.

Amazon is battling with Microsoft, Apple, Google, and soon Samsung, in the voice assistant market. Apple and Google opened up their digital assistant platforms to third party developers last year.

Original article here.



Intel targets IoT machine vision firm Movidius

2016-09-06

Intel has moved to buy machine vision technology developer Movidius.

The attraction for Intel is Movidius’ capability to add low-power vision processing to IoT-enabled devices and autonomous machines.

Movidius has European design centres in Dublin and Romania.

Josh Walden, general manager of Intel’s New Technology Group, writes:

“The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond.”

Its Myriad 2 family of Vision Processing Units (VPUs) is based on a sub-1W processing architecture, backed by a memory subsystem capable of feeding the processor array as well as hardware acceleration to support large-scale operations.

Remi El-Ouazzane, CEO of Movidius, writes:

“As part of Intel, we’ll remain focused on this mission, but with the technology and resources to innovate faster and execute at scale. We will continue to operate with the same eagerness to invent and the same customer-focus attitude that we’re known for, and we will retain Movidius talent and the start-up mentality that we have demonstrated over the years.”

Its customers include DJI, FLIR, Google, and Lenovo, which use its IP and devices in drones, security cameras, and AR/VR headsets.

“When computers can see, they can become autonomous and that’s just the beginning. We’re on the cusp of big breakthroughs in artificial intelligence. In the years ahead, we’ll see new types of autonomous machines with more advanced capabilities as we make progress on one of the most difficult challenges of AI: getting our devices not just to see, but also to think,” said El-Ouazzane.

The terms of the deal have not been published.

Original article here.



Fujitsu now making DRAM killer with 1,000x performance boost

2016-09-04

But Nano-RAM faces limitations from the DDR4 interface, even as it promises limitless longevity

Fujitsu Semiconductor Ltd. has become the first manufacturer to announce it is mass producing a new RAM that boasts 1,000 times the performance of DRAM but stores data like NAND flash memory.

The new non-volatile memory known as Nano-RAM (NRAM) was first announced last year and is based on carbon nanotube technology.

Fujitsu Semiconductor plans to develop a custom embedded storage-class memory module using the DDR4 interface by the end of 2018, with the goal of expanding the product line-up into a stand-alone NRAM product family from Fujitsu’s foundry, Mie Fujitsu Semiconductor Ltd.; the stand-alone memory module will be sold through resellers, who’ll rebrand it.

According to Nantero, the company that invented NRAM, seven fabrication plants in various parts of the world experimented with the new memory last year. And other as-yet unannounced chipmakers are already ramping up production behind the scenes.

Fujitsu plans to initially manufacture the NRAM using a 55-nanometer (nm) process, which refers to the size of the transistors used to store bits of data. At that size, the initial memory modules will only be able to store megabytes of data. However, the company also plans a next-generation 40nm-process NRAM version, according to Greg Schmergel, CEO of Nantero.

Initially, NRAM products will likely be aimed at the data center and servers. But over time they could find their way into the consumer market — and even into mobile devices. Because it uses power in femtojoules (10^-15 of a joule) and requires no data clean-up operations in the background, as NAND flash does, NRAM could extend the battery life of a mobile device in standby mode for months, Schmergel said.

Fujitsu has not specified whether its initial NRAM product will be produced as a DIMM (dual in-line memory module), but Schmergel said one of the other fabrication partners “is definitely doing just that…for a DDR4 compatible chip in product design.”

“There are several others [fabricators] we are still working with, and one, for example, is focused on a 28nm process and that’s a multi-gigabyte stand-alone memory product,” Schmergel said, referring to the DIMM manufacturer.

Currently, NRAM is being produced as a planar memory product, meaning memory cells are laid out horizontally across a two-dimensional plane. However, just as the NAND flash industry has, Nantero is developing a three dimensional (3D) multilayer architecture that will greatly increase the memory’s density.

“We were forced to go into 3D multilayer technology maybe sooner than we realized because customers want those higher densities,” Schmergel said. “We expect densities will vary from fab to fab. Most of them will produce four to eight layers. We can do more than that. Nanotube technology is not the limiting factor.”

Currently, NRAM can be produced for about half the cost of DRAM, Schmergel said, adding that with greater densities, production costs will also shrink — just as they have for the NAND flash industry.

“My understanding is that Nantero plans to bring NRAM to the market as an embedded memory in MCUs and ASICs for the time being,” Jim Handy, principal analyst with semiconductor research firm Objective Analysis, said in an email reply to Computerworld. “This is a good strategy, since flash processes are having trouble keeping pace with the logic processes that are used to make MCUs and ASICs.

“An alternative technology like NRAM then stands a chance of getting into high volumes on the back of the MCU and ASIC markets,” Handy added. “After that, it could challenge DRAM, but it will have some trouble getting to cost parity with DRAM until its unit volume rises to a number close to that of DRAM.”

Should DRAM stop scaling, though, NRAM will encounter a big opportunity since it promises to scale at lower prices than DRAM will be able to reach, according to Handy.

Because of its potential to store increasingly more data as its density increases, NRAM could also someday replace NAND flash as the price to produce it drops along with  economies of scale, Schmergel said.

“We’re really focused in the next few years on competing with DRAM where costs don’t need to be as low as NAND flash,” Schmergel said.

One big advantage NRAM has over traditional flash memory is its endurance. Flash memory can only sustain a finite number of program/erase (P/E) cycles — typically around 5,000 to 8,000 per flash cell before the memory begins to fail. The best NAND flash, with error correction code and wear-leveling software, can withstand about 100,000 P/E cycles.

Carbon nanotubes are strong — very strong. In fact, they’re 50 times stronger than steel, and they’re only 1/50,000th the width of a human hair. Because of carbon nanotubes’ strength, NRAM has far greater write endurance compared to NAND flash; the program/erase (P/E) cycles it can endure are practically infinite, according to Schmergel.

NRAM has been tested by Nantero to withstand 10^12 P/E cycles and 10^15 read cycles, Schmergel said.

In 2014, a team of researchers at Chuo University in Tokyo tested Nantero’s NRAM up to 10^11 P/E (program/erase) cycles, which represents more than 100 billion write cycles.

“We expect it to have unlimited endurance,” Schmergel said.

Another advantage is that NRAM is being built using the DDR4 specification interface, so it could sport up to 3.2 billion data transfers per second or 2,400 Mbps — more than twice as fast as NAND flash. Natively, however, the NRAM’s read/write capability is thousands of times faster than NAND flash, Schmergel said; the bottleneck is the computer’s bus interface.

“Nanotube switch [states] in picoseconds — going off to on and on to off,” Schmergel said. A picosecond is one trillionth of a second.

Because the company designed the memory using the DDR4 interface, speeds will be limited by the bus interface; thus, it only has the potential to be 1,000 times faster than DRAM on a technical specification sheet.

Another advantage is that NRAM is resistant to extreme heat. It can withstand up to 300 degrees Celsius. Nantero claims its memory can last thousands of years at 85 degrees Celsius and has been tested at 300 degrees Celsius for 10 years. Not one bit of data was lost, the company claims.

How NRAM works

Carbon nanotubes are grown from catalyst particles, most commonly iron.

NRAM is made up of an interlocking fabric matrix of carbon nanotubes that can either be touching or slightly separated. Each NRAM “cell,” or transistor, is made up of a network of carbon nanotubes that exist between two metal electrodes. The memory acts the same way as other resistive non-volatile RAM technologies.

Carbon nanotubes that are not in contact with each other are in the high-resistance state that represents the “off” or “0” state. When the carbon nanotubes contact each other, they take on the low-resistance state of “on” or “1.”

In terms of new memories, NRAM is up against a crowded field of emerging technologies that are expected to challenge NAND flash in speed, endurance, and capacity, according to Handy.

For example, Ferroelectric RAM (FRAM) has shipped in high volume; IBM has developed Racetrack Memory; Intel, IBM and Numonyx have all produced Phase-Change Memory (PCM); Magnetoresistive Random-Access Memory (MRAM) has been under development since the 1990s; Hewlett-Packard and Hynix have been developing ReRAM, also called Memristor; and Infineon Technologies has been developing Conductive-Bridging RAM (CBRAM).

Another potential NRAM competitor, however, could be 3D XPoint memory, which will be released this year by development partners Intel and Micron.

Micron, which will market it under the name QuantX (and Intel under the name Optane), is targeting NAND flash because the technology is primarily a mass storage-class memory that, while slower, is cheaper to produce than DRAM and vastly faster than NAND.

“We’re at DRAM speed. We have far greater endurance,” Schmergel said.

“It should be superior to 3D XPoint, which wears and has a slower write than read cycle,” Handy said. “If this is true, and if its costs can be brought to something similar to DRAM’s costs, then it is positioned to replace DRAM. Cost is the big issue here though, since it takes very high unit volumes for prices to get close to those of DRAM.

“It’s a chicken-and-egg problem: Costs will drop once volumes get high enough, and volume will get high if the cost is competitive with DRAM costs.”

Original article here.



ARM Wrestles Its Way Into Supercomputing

2016-08-25

The designer of the chips that run most of the world’s mobile devices has announced its first dedicated processor for use in supercomputers.

The British company ARM Holdings, which was recently acquired by the Japanese telecom and Internet company SoftBank, has announced a new kind of chip architecture dedicated to high-performance computing. The new designs use what’s known as vector processing to work with large quantities of data simultaneously, making them well suited to applications such as financial and scientific computing.
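
To illustrate what vector processing means in practice, consider the kind of dense, data-parallel loop below (a generic DAXPY kernel, used here purely as an assumed example rather than code from ARM’s announcement). A scalar core handles one element per iteration, while a vector unit can process many elements per instruction, either through compiler auto-vectorization or hand-written intrinsics.

    #include <stddef.h>

    /* y[i] += a * x[i] -- the sort of dense, data-parallel loop that
     * vector extensions are designed to accelerate. */
    void daxpy(double *restrict y, const double *restrict x, double a, size_t n) {
        for (size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }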

This isn’t ARM’s first association with supercomputers. Earlier this year, Fujitsu announced that it plans to build a successor to the Project K supercomputer, which is housed at the Riken Advanced Institute for Computational Science, using ARM chips. In fact, it was announced today that the new Post-K machine will be the first to license the newly announced ARM architecture.

ARM has built a reputation for processors known for their energy efficiency. That’s why they’ve proven so popular for mobile devices—they extend battery life in smartphones and tablets. Among the companies that license ARM’s designs are Apple, Qualcomm, and Nvidia. The same energy-efficient designs also create less heat and use less power, both desirable attributes in large-scale processing applications such as supercomputers.

Intel will be worried by the purchase. The once-dominant chipmaker missed the boat on chips for mobile devices, allowing ARM to dominate the sector. But until recently it’s always been a leading player in the supercomputer arena. Now the world’s fastest supercomputer is built using Chinese-made chips, and clearly ARM plans to give it a run for its money, too.

It remains to be seen how successful ARM-powered supercomputers will be, though. The first big test will come when Fujitsu’s Post-K machine is turned on, which is expected in 2020. Intel will be watching carefully the whole way.

Original article here.



The world’s largest SSD is now shipping for $10,000

2016-08-02

Even with today’s latest SSDs, sometimes you just need a little more space.

That’s where the Samsung PM1633a SSD comes in, clocking in at a massive 15.36 terabytes (or 15,360,000,000,000 bytes) of storage. Such capacity comes at a price, however, with preorders for the 15.36TB behemoth coming in at around $10,000. The PM1633a is not only the largest SSD ever made, but the highest-capacity single drive of any kind, finally breaking the 10TB barrier that spinning disk drives seem to be capped at.

The drive’s small size and huge storage capacity mean that you’d be able to outfit a standard 42U server storage rack with 1,008 PM1633a drives for a cool 15,482.88TB (over 15 petabytes) of storage, to presumably store a spare copy of the known universe. (Assuming you can afford the $10,080,000 price tag for such a setup.)
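
Spelling out that rack arithmetic (assuming 24 of these 2.5-inch drives per 1U enclosure, which is how 1,008 of them fit into 42U):

    #include <stdio.h>

    int main(void) {
        double drive_tb  = 15.36;
        int    price_usd = 10000;
        int    drives    = 42 * 24;   /* 1,008 drives in a 42U rack */
        printf("%d drives, %.2f TB, $%d\n",
               drives, drives * drive_tb, drives * price_usd); /* 15482.88 TB, $10,080,000 */
        return 0;
    }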

But while the PM1633a is primarily designed for enterprise customers for use in data centers or large server systems, there’s theoretically nothing stopping you (aside from price, anyway) from just buying one for your home desktop. Good luck filling all that space!

Original article here.

