The prospect of inter-data-center optical networks


Google’s mission is to organize the world’s information and make it universally accessible and useful. To achieve this goal, Google processes more than 3 billion search queries every day, of which 15 percent are new. Google has found over 30 trillion unique URLs on the web, spread across more than 230 million web domains. To ensure the results returned to users’ queries are as current as possible, Google has to crawl over 20 billion websites every day to refresh its index. All these computationally intensive tasks are done in warehouse-scale computers (WSCs), commonly known as mega data centers.

Google offers services in 55 countries across the world in 146 languages, driving the need for globally distributed computation resources and a global network (Fig. 1). In addition to global reach, service availability is another important consideration. To ensure that the user experience is maintained to the extent possible during unplanned failures or planned maintenance events, the backends of many Google services maintain redundancy by keeping copies in multiple data centers. This combination of global reach, large scale, and inherent redundancy sets the fundamental requirements for Google’s inter-data-center optical network.

Capacity scaling on existing fiber plants over the next 5–10 years is one of the main issues to address. Deployment of new fiber along long-haul and ultra-long-haul routes is time-consuming and capital-intensive, so it is important to maximize the capacity of deployed fiber plants by exploiting emerging techniques. Today, coherent 100-Gb/s polarization-multiplexed quadrature phase shift keying (PM-QPSK) technology, combined with an increased number of dense wavelength-division multiplexing (DWDM) channels (through channel bandwidth reduction and guard-band removal), can likely provide 12 Tb/s per fiber pair. As Internet traffic continues to grow at 50–60 percent per year [1], solutions for capacity scaling beyond 12 Tb/s are needed.
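The 12 Tb/s figure, and how quickly sustained traffic growth overtakes it, can be checked with back-of-the-envelope arithmetic. The sketch below assumes roughly 4.8 THz of usable C-band spectrum and 40 GHz channel spacing after guard-band removal; those two numbers are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope check of the per-fiber capacity quoted above.
# ASSUMPTIONS (not from the article): ~4.8 THz of usable C-band
# spectrum and 40 GHz DWDM channel spacing after guard-band removal.

C_BAND_HZ = 4.8e12             # usable C-band spectrum (assumed)
CHANNEL_SPACING_HZ = 40e9      # reduced DWDM grid (assumed)
RATE_PER_CHANNEL_BPS = 100e9   # 100-Gb/s coherent PM-QPSK

channels = int(C_BAND_HZ // CHANNEL_SPACING_HZ)
fiber_capacity_bps = channels * RATE_PER_CHANNEL_BPS
print(f"{channels} channels -> {fiber_capacity_bps / 1e12:.0f} Tb/s per fiber pair")
# -> 120 channels -> 12 Tb/s per fiber pair

# At 50 percent annual growth, traffic doubles roughly every 1.7 years,
# so even a large fixed ceiling is exhausted within a few years.
demand_tbps, years = 1.0, 0
while demand_tbps <= 12.0:
    demand_tbps *= 1.5
    years += 1
print(f"1 Tb/s of demand exceeds 12 Tb/s after {years} years at 50% growth")
# -> 1 Tb/s of demand exceeds 12 Tb/s after 7 years at 50% growth
```

Under these assumptions a fiber pair carries 120 channels at 100 Gb/s each, matching the 12 Tb/s cited in the text, and a single terabit of demand grows past that ceiling in about seven years.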
However, fiber capacity scaling is ultimately bounded by the nonlinear Shannon limit [1]. The conventional paths explored in the past (e.g., increasing the data rate per wavelength channel) cannot easily be exploited going forward, as we are getting close to that limit. Beyond the critical task of capacity scaling, designing, deploying, and operating a global optical network at tens-of-terabits scale has its own challenges: network flexibility, agility, and automation are necessary to ensure holistic network scaling. The Holy Grail for network operators is a closed-loop network control and management system in which monitoring, provisioning, commissioning, and configuration of the network are all performed in an automated fashion. As shown in Fig. 2, this automated network control and management is required for new capacity additions as well as for online optimization activities such as optical-layer routing and spectrum allocation based on real-time monitored telemetry. We show later in this article that a logically centralized network operating system (OS) and a consolidated packet and optical data layer are desired to enable this software-defined networking (SDN) paradigm.
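The closed loop described above — monitor telemetry, decide, then (re)provision — can be sketched as follows. All names here (`ChannelReading`, `control_loop`, the OSNR threshold) are hypothetical illustrations; they do not correspond to any real Google system or API.

```python
# Illustrative sketch of a monitor -> decide -> provision control loop.
# All identifiers and the threshold value are HYPOTHETICAL examples,
# not an actual network OS interface.
from dataclasses import dataclass

@dataclass
class ChannelReading:
    link: str
    osnr_db: float  # monitored optical signal-to-noise ratio

OSNR_THRESHOLD_DB = 15.0  # illustrative health threshold

def control_loop(readings):
    """One iteration: inspect telemetry, emit provisioning actions."""
    actions = []
    for r in readings:
        if r.osnr_db < OSNR_THRESHOLD_DB:
            # A centralized network OS would reroute the wavelength or
            # reallocate spectrum onto a healthier path automatically.
            actions.append(f"reroute {r.link}")
        else:
            actions.append(f"keep {r.link}")
    return actions

readings = [ChannelReading("A-B", 18.2), ChannelReading("B-C", 13.7)]
print(control_loop(readings))  # -> ['keep A-B', 'reroute B-C']
```

The point of the sketch is the structure, not the policy: because decisions are made in one logically centralized place from real-time telemetry, the same loop can drive capacity additions, routing, and spectrum allocation without manual intervention.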

DOI: 10.1109/MCOM.2013.6588647



@article{Zhao2013ThePO,
  title   = {The prospect of inter-data-center optical networks},
  author  = {Xiaoxue Zhao and Vijay Vusirikala and Bikash Koley and Valey Kamalov and R. Theodore Hofmeister},
  journal = {IEEE Communications Magazine},
  year    = {2013},
  volume  = {51}
}