http://www.networkcomputing.com/data-centers/100-gbps-headed-for-the-data-center/a/d-id/1317513
Abstract:
There are market segments, such as high-performance computing and financial trading, that require the fastest possible interconnect and therefore adopt new interconnect speeds as soon as they are introduced. Traditional data centers, on the other hand, have been hesitant to move to new speeds.
For example, 10 Gbps Ethernet was first demonstrated in 2002, but it was not adopted in earnest until 2009, and volumes only really began to ramp in 2011. This lag stems primarily from the higher cost of equipment at the higher speed (even though, on a per-gigabit-per-second basis, it can be more cost effective), the possible need to modify the data center infrastructure, and the higher costs of power and maintenance. These arguments remain valid for traditional data centers.
Recently, though, there has been a shift toward cloud and Web 2.0 infrastructures. Traditional data centers are being converted into large-scale private clouds, and Web 2.0 applications and public clouds have become an integral part of our lives. These new compute and storage data centers need to move data faster than ever before -- to store, retrieve, analyze, and store again -- so that the data is always accessible in real time.
As a result, adoption of 10 Gbps, 40 Gbps, and higher speeds has accelerated. Companies have already started to build 100 Gbps links by combining 10 lanes of 10 Gbps as an interim solution, ahead of the availability of the more efficient aggregation of four lanes of 25 Gbps.
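To make the lane arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the function name and structure are my own, and only the lane counts and per-lane rates come from the text above) of the two paths to 100 Gbps:

    # Illustrative sketch: aggregate link speed from parallel lanes.
    # The two configurations below are the ones described above; nothing
    # here is tied to a specific product, standard document, or API.

    def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
        """Total link speed when traffic is striped across parallel lanes."""
        return lanes * gbps_per_lane

    # Interim approach: ten lanes of 10 Gbps each.
    interim = aggregate_gbps(lanes=10, gbps_per_lane=10)

    # Follow-on approach: four lanes of 25 Gbps each -- the same total
    # speed, but fewer lanes means fewer fibers and pins per port.
    follow_on = aggregate_gbps(lanes=4, gbps_per_lane=25)

    print(f"10 x 10 Gbps = {interim:g} Gbps")    # 100 Gbps
    print(f" 4 x 25 Gbps = {follow_on:g} Gbps")  # 100 Gbps

Both configurations deliver the same 100 Gbps aggregate; the appeal of the four-lane variant is simply that fewer, faster lanes reduce the cabling, connector, and port-density overhead per link.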