Dimitra Simeonidou, Reza Nejabati, Bill St. Arnaud, Micah Beck, Peter Clarke, Doan B. Hoang, David Hutchison, Gigi Karmous-Edwards, Tal Lavian, Jason Leigh, Joe Mambretti, Volker Sander, John Strand, Franco Travostino, Global Grid Forum (GGF) GHPN Standard GFD-I.036, August 2004.

In recent years it has become evident to the technical community that computational resources cannot keep up with the demands generated by some applications. For example, particle physics experiments produce more data than can realistically be processed and stored in a single location (on the order of several Petabytes per year). In such situations, where intensive computational analysis of shared, large-scale data is needed, one option is to use accessible computing resources distributed across different locations (a combined data and computing Grid).

Distributed computing and the concept of a computational Grid are not new paradigms, but until a few years ago networks were too slow to allow efficient use of remote resources. As the bandwidth and speed of networks have increased significantly, interest in distributed computing has risen to a new level. Recent advances in optical networking have created a radical mismatch between the optical transmission world and the electrical forwarding/routing world. Currently, a single strand of optical fiber can transmit more bandwidth than the entire Internet core. What is more, only 10% of the potential wavelengths on 10% of the available fiber pairs are actually lit, meaning that only 1-2% of the potential bandwidth of the fiber system is actually in use. This imbalance between supply and demand has led to severe price erosion of bandwidth products: annual STM-1 (155 Mbit/s) prices on major European routes fell by 85-90% between 1990 and 2002. It has therefore become technically and economically viable to treat a set of computing, storage, or combined computing/storage nodes coupled through a high-speed network as one large computational and storage device.
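The lit-capacity figure quoted above follows from simple arithmetic. A minimal sketch, where the 10% figures are the estimates given in the text rather than measured values:

```python
# Back-of-envelope check of the lit-capacity claim.
# The two 10% figures are the estimates quoted in the text,
# not measured values.
lit_wavelength_fraction = 0.10   # share of potential wavelengths actually lit
lit_fiber_pair_fraction = 0.10   # share of available fiber pairs carrying them

# Fraction of the fiber system's potential bandwidth in use
lit_capacity_fraction = lit_wavelength_fraction * lit_fiber_pair_fraction
print(f"Fraction of potential bandwidth in use: {lit_capacity_fraction:.0%}")
# prints "Fraction of potential bandwidth in use: 1%"
```

This matches the lower end of the 1-2% range cited in the text; the higher end presumably reflects variation in the underlying utilization estimates.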

The use of the available fiber and DWDM infrastructure for the global Grid network is an attractive proposition, ensuring global reach and huge amounts of cheap bandwidth. Fiber and DWDM networks have been great enablers of the World Wide Web, fulfilling the capacity demand generated by Internet traffic and providing global connectivity. In a similar way, optical technologies are expected to play an important role in creating an efficient infrastructure for supporting Grid applications.

The need for high-throughput networks is evident in e-Science applications, as both the US National Science Foundation (NSF) and the European Commission have acknowledged. These applications need very high bandwidth between a limited number of destinations. As prices for raw bandwidth fall, a substantial part of the cost shifts to the router infrastructure in which the circuits are terminated. “The current L3-based architectures can’t effectively transmit Petabytes or even hundreds of Terabytes, and they impede service provided to high-end data-intensive applications. Current HEP projects at CERN and SLAC already generate Petabytes of data. This will reach Exabytes (10^18) by 2012, while the Internet-2 cannot effectively meet today’s transfer needs.”
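To make the scale of these transfer needs concrete, a back-of-envelope sustained-rate estimate can help. This is an illustration only: the one-petabyte volume and one-day transfer window are assumptions chosen to match the orders of magnitude cited above, not figures from any specific experiment.

```python
# Rough sustained-throughput estimate for petabyte-scale transfers.
# Assumed scenario (illustrative): move 1 decimal Petabyte in one day.
PETABYTE_BITS = 1e15 * 8          # 1 PB expressed in bits
SECONDS_PER_DAY = 24 * 3600       # 86,400 s

# Average rate needed to sustain the transfer
rate_bps = PETABYTE_BITS / SECONDS_PER_DAY
print(f"~{rate_bps / 1e9:.0f} Gbit/s sustained to move 1 PB per day")
# prints "~93 Gbit/s sustained to move 1 PB per day"
```

At roughly 93 Gbit/s sustained for a single petabyte per day, such flows would saturate hundreds of STM-1 (155 Mbit/s) circuits, which illustrates why dedicated optical capacity rather than shared L3 routed infrastructure is attractive for this traffic class.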

The present document discusses solutions towards an efficient and intelligent network infrastructure for the Grid, taking advantage of recent developments in optical networking technologies.