First, let's think like economists and apply ceteris paribus (all else held equal).
I've been researching data-center power consumption.
For the benefit of this example, let's say that all US-based data centers together consumed 100B kWh in 2015.
Now, POET's solution sits at the interconnect stage, and I've been struggling to find specific information about the power usage of interconnects.
But this morning, I've found this:
Quote:
Serial links move data between microprocessors and other electronic devices. As vital as they are, serial links are idle 50-70% of the time. Idle, however, does not mean off.
Current technology demands that serial links stay powered up, consuming microprocessor power to the tune of 20%. And, when everything is added up, that could amount to 7% of a data-center's power budget.
Source: http://www.techrepublic.com/article/save-millions-on-data-centers-thanks-to-a-breakthrough-in-serial-link-power-usage/
Now the "easy and plain simple" calculation would be:
7% of 100B kWh = 7B kWh
A 10x power reduction at the interconnect level leaves 0.7B kWh, i.e. a saving of 6.3B kWh per year.
At $0.08 per kWh, power savings at the interconnect level alone COULD amount to about $504M/year for US-based data centers.
That's roughly half a billion a year in power savings.
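For anyone who wants to play with the numbers, here is the back-of-the-envelope math as a short Python sketch. Every input is one of the assumptions above (the 100B kWh total, the 7% interconnect share, the 10x reduction, the $0.08/kWh price), not measured data:

```python
# Back-of-the-envelope savings estimate. All inputs are assumptions
# from the post, not measured figures.
total_us_dc_kwh = 100e9     # assumed US data-center consumption, kWh/year (2015)
interconnect_share = 0.07   # serial links' share of the power budget (TechRepublic quote)
reduction_factor = 10       # claimed 10x power reduction at the interconnect level
price_per_kwh = 0.08        # assumed electricity price, $/kWh

interconnect_kwh = total_us_dc_kwh * interconnect_share      # 7B kWh
saved_kwh = interconnect_kwh * (1 - 1 / reduction_factor)    # 6.3B kWh
savings_usd = saved_kwh * price_per_kwh

print(f"Saved energy: {saved_kwh / 1e9:.1f}B kWh/year")
print(f"Dollar savings: ${savings_usd / 1e6:.0f}M/year")
```

Swapping in your own electricity price or a different interconnect share is a one-line change, which is the whole point of keeping it ceteris paribus.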
Which company would benefit the most?
Some facts and stats to end this post:
https://storageservers.wordpress.com/2013/07/17/facts-and-stats-of-worlds-largest-data-centers/
Regards,