If you want to cram as much RAM into a server as practical, you'll currently be using 32GB LRDIMMs, which cost ~$600 each ($20/GB). Some dual-processor motherboards will take 24 of them (3 DIMMs per channel * 4 channels * 2 processors) for a 768GB total: http://www.supermicro.com/products/motherboard/xeon/c600/x9d...
(The 1.5TB figure in the board's description presumes the soon-to-be-released 64GB LRDIMMs.)
This would be $15,000 for just the 768GB of RAM. Let's guess another $5K for the rest, so $20K for the machine. At 20GB of dedicated RAM per VPS, you can fit ~40 of them on the machine. $20K / 40 = $500. Since this ignores power and connectivity, I think you'll be waiting a while for a $5/month plan. Although perhaps you could overprovision and swap out a lot?
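For concreteness, here's that back-of-the-envelope arithmetic as a tiny script (the $5K for the rest of the hardware is, as above, just a guess):

    # Rough numbers from the comment above; nothing here is a quote.
    RAM_GB = 768               # 24 x 32GB LRDIMMs
    PRICE_PER_GB = 20          # ~$600 per 32GB DIMM
    OTHER_HARDWARE = 5_000     # guess: chassis, CPUs, storage, NICs
    RAM_PER_VPS_GB = 20

    machine_cost = RAM_GB * PRICE_PER_GB + OTHER_HARDWARE  # $20,360
    vps_per_machine = RAM_GB // RAM_PER_VPS_GB             # 38, call it ~40
    # ~$536 in hardware per VPS, before power and connectivity
    print(machine_cost / vps_per_machine)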
Viewed alternatively, the sweet spot for selling RAM does not involve putting as much of it as you can in a single machine. You'd cut the cost of your RAM per GB a lot (down to about $10/GB) if you stuck with older, lower-density RAM and put a bunch of low-price blade servers in a single box.
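As a sketch with purely hypothetical numbers (the $1,500 blade price and 128GB capacity are assumptions, not sourced):

    # Hypothetical cheaper build: commodity blades stuffed with $10/GB RAM.
    BLADE_RAM_GB = 128        # assumed RAM per blade
    BLADE_BASE_COST = 1_500   # assumed blade cost excluding RAM
    blade_cost = BLADE_RAM_GB * 10 + BLADE_BASE_COST   # $2,780 per blade
    print(blade_cost / (BLADE_RAM_GB // 20))           # ~$463 per 20GB VPS

Halving the $/GB is what moves the per-VPS number, since the RAM still dominates the bill.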
Right, the trouble with RAM is that if someone wants to keep a pile of data in memory, it has to stay loaded. I would hazard a guess that memory consumption on servers tends to be much less spiky than CPU usage, and that's part of why a "DigitalOcean of RAM" is difficult to do.
I did not. I'd be interested to know what the wholesale prices are for 32GB LRDIMMs. My instinct is that there wouldn't be much of a discount until you are buying entire production runs, but this isn't an area I know about.
It could be very interesting for VMs with transient storage that need fast I/O but don't need to persist it on the machine itself. I'm thinking stream consolidation or analysis.
A session database in memory, mirrored to another host for redundancy, with only events that need to be stored written to "slow" SSDs when I/O bandwidth permits.
I could bring my startup from 10 to 2 hosts and have better performance.
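A minimal sketch of that setup, assuming Redis as the in-memory session store (mirroring to the second host is Redis replication, configured server-side with `replicaof` and not shown; the hostname and log path are made up):

    import json
    import redis  # pip install redis

    sessions = redis.Redis(host="sessions-primary.internal")  # hypothetical host
    EVENT_LOG = "/mnt/slow-ssd/events.log"                    # hypothetical path

    def save_session(session_id, data):
        # Hot path: session state lives only in RAM, mirrored by the replica.
        sessions.set("session:" + session_id, json.dumps(data), ex=3600)

    def record_event(event):
        # Only durable events touch the SSD; appends are sequential and cheap,
        # so they can be throttled to whatever I/O bandwidth permits.
        with open(EVENT_LOG, "a") as f:
            f.write(json.dumps(event) + "\n")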
I suppose that's true, thanks for putting that in perspective.
The real problem is that gobs of RAM aren't _cheap_. I wish that DDR4 would make DDR3 less expensive, but as manufacturers switch their production to DDR4, I expect DDR3 prices to rise instead.
edit: Which is to say, I'd be happy if it became possible in the near future to get laptops with 48GB of DDR3 and junky Dell rackmounts with 256GB of RAM for today's prices of 16GB/64GB respectively.
There are also diminishing returns for the average user. My mother would notice an SSD, but probably not another 4 gigabytes of RAM. Her computer almost never hits the swap partition.
DDR3 absolutely is twice as fast as DDR2; there's nothing theoretical about it. A person just needs to define their terms correctly and understand what is being measured.
DDR4 is still bleeding edge, and it's about twice as fast as DDR3, so 384GB of DDR4 in a system that could go on sale in the next few months is very impressive in terms of the performance gains that can be expected.
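To put numbers on "define your terms": peak theoretical bandwidth is transfers per second times the 64-bit bus width, and comparing a common top JEDEC speed grade from each generation shows the rough doubling:

    BUS_BYTES = 8  # 64-bit data bus per DIMM
    for name, mts in [("DDR2-800", 800), ("DDR3-1600", 1600), ("DDR4-3200", 3200)]:
        print(name, mts * BUS_BYTES / 1000, "GB/s peak")
    # DDR2-800   6.4 GB/s
    # DDR3-1600 12.8 GB/s
    # DDR4-3200 25.6 GB/s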
I think a single character typo of '2' for '3' is a more plausible explanation. It also fits better with the message "Why is this special? It's the same thing I've had for years."
That it's DDR4 might be exciting, but there are plenty of servers around that can take more - several of the servers we have at work can take 512GB or 768GB (though we've not put anywhere near that much in any of them).
Slightly OT question: what is meant by a chipset supporting Thunderbolt, as the one in this demo is rumored to? Does it mean that there are pins on the chipset that can be connected directly to a Thunderbolt connector, or is it just a meaningless statement that the chipset provides PCIe lanes that could be routed to a Thunderbolt bridge chip, or something else?
"We speculate this is Grantley-EP and Wellsburg PCH. If so, the server supports Thunderbolt and features DDR4 speeds of 2400 and 3200MHz. However, these specs are pure conjecture."
They're obviously under the impression that a new generation of CPU+chipset provides some degree of Thunderbolt support that is lacking in server components currently on the market: either that Intel's requiring Thunderbolt controllers to be included on the motherboard, or that the CPU or chipset provides a Thunderbolt controller.
At high speeds you need a dedicated hardware peripheral to manage key protocol elements like fragmentation, bus arbitration, and forward error correction. By putting that hardware in the chipset, you don't need a separate controller chip, so costs are lower for the OEM and the buyer.