From OTN to OTN Cluster, Maximizing Node Values Through Resource Pooling

Network simplification has become the development trend for meeting the higher bandwidth and lower latency requirements of new services such as 4K video, 5G, and cloud computing, and for reducing network construction costs and improving operation efficiency. Network architecture and node optimization are entering a new stage, and the core nodes of bearer networks, where traffic converges, are entering the resource pooling era.

Large Traffic and Full Mesh Interconnection, Driving OTN Switching from a Single Subrack to Multiple Subracks

  • Traffic at core nodes is currently growing at about 40% per year, and the switching capacity of some core nodes already exceeds 100 Tbit/s. Because the capacity, number of slots, and heat dissipation of a single OTN subrack are limited, multiple subracks are required to carry these services. This growth in capacity requirements drives the evolution from a single subrack to multiple subracks (see the short calculation after this list).
  • The network is evolving toward a full-mesh architecture, and the number of switching directions at core nodes keeps increasing. The construction mode of one subrack per direction likewise drives the evolution from a single subrack to multiple subracks.
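To make the growth figure concrete, the following minimal sketch projects how quickly a node growing at 40% per year outgrows a single subrack. It is an illustrative calculation only: the starting demand of 20 Tbit/s and the 32 Tbit/s single-subrack ceiling are assumptions, not figures from this article.

```python
# Illustrative projection: how soon does 40% annual traffic growth outgrow
# a single subrack? All figures below are assumptions for illustration.

GROWTH_RATE = 0.40          # 40% annual traffic growth at core nodes
SINGLE_SUBRACK_TBPS = 32.0  # assumed switching capacity of one OTN subrack
START_TBPS = 20.0           # assumed current switching demand at a core node

def years_until_exceeded(start_tbps: float, limit_tbps: float, growth: float) -> int:
    """Return the number of whole years until demand exceeds the limit."""
    demand, years = start_tbps, 0
    while demand <= limit_tbps:
        demand *= 1.0 + growth
        years += 1
    return years

if __name__ == "__main__":
    years = years_until_exceeded(START_TBPS, SINGLE_SUBRACK_TBPS, GROWTH_RATE)
    print(f"At {GROWTH_RATE:.0%} growth, {START_TBPS} Tbit/s of demand "
          f"exceeds a {SINGLE_SUBRACK_TBPS} Tbit/s subrack in ~{years} years.")
```

Under these assumed numbers the ceiling is reached within roughly two years, which is why capacity growth alone pushes core nodes from a single subrack to multiple subracks.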

Three Factors, Driving the Rapid Growth of Inter-Subrack Grooming Services

  • Network flattening enables services to be transmitted in one hop, so OTN grooming across the backbone and metro layers is increasingly common. For services between the aggregation layer and the backbone layer in the same equipment room, seamless cross-subrack OTN grooming is increasingly required.
  • Growing DCI and private line services call for cross-subrack OTN grooming. According to analysts' reports, GE/10G/100G large-granularity private lines are growing by more than 25% per year. These services require cross-network and cross-plane transmission: different private line services share large pipes at access and aggregation nodes, and at the core layer they must be regrouped and sent out in different directions through OTN grooming to improve channel utilization (a simplified sketch of this regrouping follows the list).
  • Enhanced protection also requires cross-subrack OTN grooming: mesh networks provide more protection routes, and multi-direction ASON protection needs to groom traffic across subracks.
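As a rough illustration of the regrouping mentioned above, the sketch below groups incoming private-line services by their outgoing direction at a core node, which is the kind of reassembly cross-subrack OTN grooming performs. The service records, rates, and direction names are hypothetical and exist only to show the idea.

```python
# Illustrative only: regroup private-line services arriving on shared pipes
# by their outgoing direction at a core node. Records are hypothetical.
from collections import defaultdict
from typing import NamedTuple

class Service(NamedTuple):
    name: str
    rate_gbps: int
    out_direction: str  # direction the service leaves the core node

def groom_by_direction(services: list[Service]) -> dict[str, list[Service]]:
    """Group services so each outgoing direction can fill its channel efficiently."""
    groups: dict[str, list[Service]] = defaultdict(list)
    for svc in services:
        groups[svc.out_direction].append(svc)
    return groups

if __name__ == "__main__":
    incoming = [
        Service("DCI-A", 100, "east"),
        Service("Leased-line-B", 10, "north"),
        Service("DCI-C", 100, "east"),
        Service("Leased-line-D", 1, "north"),
    ]
    for direction, svcs in groom_by_direction(incoming).items():
        total = sum(s.rate_gbps for s in svcs)
        print(f"{direction}: {total} Gbit/s across {len(svcs)} services")
```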

OTN Cluster Resource Pooling Maximizes the Value of Nodes and Networks

In multi-subrack OTN switching, interconnection between subracks is critical. In the traditional mode, various types of service boards are needed to interconnect subracks, so whenever the application scenario changes, the hardware must be changed and onsite operations must be performed manually.

The OTN cluster solution uses a centralized switching subrack to pool wavelength and ODU resources. Electrical subracks can then be expanded smoothly, the switching capacity can exceed 100 Tbit/s, and services can be provisioned remotely, greatly improving end-to-end (E2E) efficiency. Its advantages are as follows:

  • Zero hardware waste: With resource pooling, the inter-subrack interconnection hardware does not need to be replaced when services change, so the investment is not wasted. In addition, more service slots become available, improving the system access capability by more than 20%.
  • Maximized channel utilization: Cross-subrack sharing of wavelength and ODU resources allows protection bandwidth and ASON channels to be fully shared, maximizing resource utilization (a simplified model of such pooling follows this list).
  • Simplified planning: Grooming requirements often arise unexpectedly and are difficult to plan for. The pooled capacity of an OTN cluster greatly simplifies planning and design.
  • Intelligent operation: Remote configuration of the OTN cluster replaces site visits, reducing service scheduling time from several weeks to several minutes and greatly improving efficiency.
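To make the pooling idea concrete, here is a minimal sketch of a centralized switching subrack modeled as one shared pool of ODU capacity that any pair of electrical subracks can draw from. It is an illustrative model only: the class names, capacities, and allocation policy are assumptions, not vendor software.

```python
# Illustrative model of ODU resource pooling via a centralized switching
# subrack. All capacities and names are assumptions for illustration.

class CentralSwitch:
    """Centralized switching subrack: one shared pool of ODU capacity."""

    def __init__(self, capacity_odu0: int) -> None:
        self.capacity = capacity_odu0   # total pooled capacity, in ODU0 units
        self.used = 0
        self.connections: list[tuple[str, str, int]] = []

    def cross_connect(self, src_subrack: str, dst_subrack: str, odu0: int) -> bool:
        """Set up a cross-subrack connection if pooled capacity remains."""
        if self.used + odu0 > self.capacity:
            return False                 # pool exhausted
        self.used += odu0
        self.connections.append((src_subrack, dst_subrack, odu0))
        return True

    def release(self, src_subrack: str, dst_subrack: str, odu0: int) -> None:
        """Tear down a connection and return its capacity to the shared pool."""
        self.connections.remove((src_subrack, dst_subrack, odu0))
        self.used -= odu0


if __name__ == "__main__":
    # One pool shared by all subracks: any pair can use whatever is free,
    # so no direction-specific interconnection boards sit idle.
    pool = CentralSwitch(capacity_odu0=80)
    print(pool.cross_connect("subrack-1", "subrack-2", odu0=32))  # True
    print(pool.cross_connect("subrack-1", "subrack-3", odu0=32))  # True
    print(pool.cross_connect("subrack-2", "subrack-3", odu0=32))  # False: pool full
    pool.release("subrack-1", "subrack-2", 32)                    # traffic pattern changes
    print(pool.cross_connect("subrack-2", "subrack-3", odu0=32))  # True: reuses freed capacity
```

In a traditional per-pair design, the capacity freed between subrack-1 and subrack-2 would remain stranded on that dedicated interconnection; with a shared pool, any subrack pair can reuse it, which is what "zero hardware waste" and "maximized channel utilization" refer to above.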

Combining OTN clusters with single-node capacity upgrades forms a large-capacity scheduling system whose capacity is several times that of a single node, supporting the long-term, smooth evolution of large-capacity DC nodes and traditional CT networks. In 2018, OTN clusters began trials and entered commercial use with some carriers. In the near future, they will be deployed at a large number of core nodes.