US 7535895 Selectively switching data between link interfaces and processing engines in a network switch

ABSTRACT – Technology is disclosed for directing data through a network switch. One version of a network switch employs a mid-plane architecture that allows data to be directed between any link interface and any processing engine. Each time slot of data from an ingress link interface can be separately directed to any ingress processing engine. Each time slot of data from an egress processing engine can be separately directed to any egress link interface that supports the lower-level protocol for the data. In one version of the switch, each processing engine in the network switch has the ability to service all of the protocols from the layers of the OSI model that are supported by the switch and not handled on the link interfaces. This allows the switch to allocate processing engine resources regardless of the protocols employed in the data passing through the switch.

FIELD OF THE INVENTION

The present invention is directed to network switching technology.

BACKGROUND OF THE INVENTION

Network switches process data from an incoming port and direct it to an outgoing port. Network switches offer a variety of services, including support for virtual private networks (“VPNs”). The growing popularity of the Internet and other network-related technologies has increased the performance demands placed on network switches. Network switches need to manage their resources efficiently while supporting multiple physical signaling standards and higher-level protocols.

A typical network switch includes link interfaces for exchanging data with physical signaling mediums. The mediums carry data according to multiple physical signaling protocols. Each link interface supports a physical signaling standard from the physical layer (Layer 1) of the Open Systems Interconnection (“OSI”) model. In one example, a link interface supports a network connection for Synchronous Optical Network (“SONET”) at Optical Carrier level 48 (“OC-48”). In another example, a link interface supports the physical layer of Gigabit Ethernet. Some link interfaces also support portions of the data-link layer (Layer 2) in the OSI model, such as the Media Access Control (“MAC”) of Gigabit Ethernet.
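
As a rough illustration, the split between what a link interface handles and what it leaves to the rest of the switch can be modeled as below. This is a minimal sketch; the `LinkInterface` class and its field names are hypothetical, not components recited in the patent.

```python
from dataclasses import dataclass, field

# Hypothetical model of a link interface and the OSI layers it handles.
# Names and fields are illustrative, not taken from the patent.
@dataclass
class LinkInterface:
    port: int
    layer1_standard: str                                  # e.g. "SONET OC-48" or "1000BASE-X"
    layer2_features: list = field(default_factory=list)   # partial Layer 2 work, if any

# A SONET interface handles only the physical layer.
sonet_if = LinkInterface(port=0, layer1_standard="SONET OC-48")

# A Gigabit Ethernet interface may also perform MAC framing (part of Layer 2).
gige_if = LinkInterface(port=1, layer1_standard="1000BASE-X",
                        layer2_features=["MAC framing"])
```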

Incoming data also conforms to one or more higher-level protocols, such as High-level Data Link Control (“HDLC”), Point-to-Point Protocol (“PPP”), Frame Relay, Asynchronous Transfer Mode (“ATM”), and other protocols in higher layers of the OSI model. Each link interface forwards incoming data to a processing engine in the network switch that supports a higher-level protocol for the data—a network switch may have one or more processing engines. The processing engine interprets the data and performs any desired processing, such as packet processing according to one or more layers in the OSI model. A variety of different processing operations can be performed, based on the services supported in the network switch.

The processing engine identifies an egress link interface for incoming data and arranges for the data to be forwarded to the egress link interface. The egress link interface delivers the data to a desired network medium connection. In network switches with multiple processing engines, an ingress processing engine directs the data through an egress processing engine for forwarding to the egress link interface.
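
The ingress-to-egress path just described can be sketched with a simple forwarding table keyed by a flow identifier. The `ProcessingEngine` class and `forward` function below are illustrative stand-ins, assuming the egress port is found by a table lookup; they are not the patent's components.

```python
# A minimal sketch of the ingress-to-egress path described above.
# The engine and interface objects are simple stand-ins.

class ProcessingEngine:
    def __init__(self, forwarding_table):
        self.forwarding_table = forwarding_table   # maps a flow id to an egress port

    def lookup_egress(self, flow_id):
        return self.forwarding_table[flow_id]

def forward(flow_id, payload, ingress_engine, egress_interfaces):
    """Interpret data on the ingress engine and hand it to the egress interface."""
    egress_port = ingress_engine.lookup_egress(flow_id)
    egress_interfaces[egress_port].append(payload)   # stand-in for transmission

# Example: flow 7 leaves the switch on port 2.
engine = ProcessingEngine({7: 2})
interfaces = {2: []}
forward(7, b"packet bytes", engine, interfaces)
assert interfaces[2] == [b"packet bytes"]
```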

Network switches can be implemented as either single card or multiple card systems. A single card system implements all of the switch’s link interfaces and processing engines on a single printed circuit board (“PCB”). These types of systems are typically simple switches with the link interfaces directly connected to a single processing engine. If multiple processing engines are employed, each processing engine is typically connected to a fixed set of link interfaces. The processing engines are also coupled to a fabric or interconnect mesh for exchanging data with each other. The connections between a set of link interfaces and a processing engine cannot be altered, regardless of the level of traffic through the link interfaces. This can result in an inefficient use of system resources—one processing engine can be overutilized while another is underutilized. The fixed relationship between link interfaces and processing engines does not permit incoming link interface traffic to be redirected to a processing engine with excess resources.

Multiple card network switches implement link interface functionality and processing engine functionality on multiple PCBs—allowing the network switch to support more link interfaces and processing engines than a single card switch. Two types of multiple card systems are the backplane architecture and the mid-plane architecture.

In the backplane architecture, multiple line cards are coupled together over a backplane. A line card includes a processing engine coupled to one or more link interfaces for interfacing to physical mediums. Multiple line cards are coupled to the backplane. Incoming data is switched from an ingress line card to an egress line card for transmission onto a desired network medium. One type of backplane architecture includes a fabric card coupled to the backplane for facilitating the transfer of data between line cards.

The backplane architecture also fails to allocate system resources efficiently. The relationship between link interfaces and processing engines is fixed: each processing engine resides on a line card with its associated link interfaces and can only communicate with the link interfaces on that card. When incoming traffic through the link interfaces on one card is heavy, none of that traffic can be directed to a processing engine on another line card for ingress processing. This is wasteful when processing engines on other cards have excess resources available.

In the mid-plane architecture, multiple cards are coupled to a mid-plane that facilitates communication between the cards. The switch includes processing engine cards and link interface cards. Each processing engine card includes a processing engine, and each link interface card has one or more link interfaces. One type of link interface typically performs processing on data at Layer 1 of the OSI model, while each processing engine card processes data at Layer 2 of the OSI model and higher. A processing engine card provides only one type of Layer 2 processing—requiring the switch to contain at least one processing engine card for each type of Layer 2 protocol supported in the switch. In some instances, link interfaces perform a portion of the Layer 2 processing, such as Gigabit Ethernet MAC framing. Another type of link interface performs only a portion of the Layer 1 processing—operating as a mere transceiver. In this implementation, the processing engine card also performs the remaining portion of the Layer 1 processing. Each such processing engine card supports only one type of Layer 1 protocol—requiring the switch to contain at least one processing engine card for each type of Layer 1 protocol supported in the switch.

In operation, an ingress link interface card forwards incoming data to an ingress processing engine card. The ingress processing engine card is dedicated to processing data according to the higher-level protocol employed in the incoming data. The ingress processing engine passes the processed incoming data to an egress processing engine card for further processing and forwarding to an egress link interface card. In one implementation, the switch includes a fabric card for switching data from one processing engine card to another processing engine card.

One type of mid-plane switch passes data between link interface cards and processing engine cards over fixed traces in the mid-plane. This creates the same inefficient use of system resources explained above for the single card architecture and backplane architecture.

Another type of mid-plane switch employs a programmable connection card to direct data between link interface cards and processing engine cards. Each link interface card has one or more serial outputs for ingress data coupled to the connection card and one or more serial inputs for egress data coupled to the connection card. Each processing engine card has one or more serial outputs for egress data coupled to the connection card and one or more serial inputs for ingress data coupled to the connection card. For each link interface card output, the connection card forms a connection with one processing engine card input for transferring data. For each processing engine card output, the connection card forms a connection with one link interface card input for transferring data. Data from one card output cannot be directed to multiple card inputs through the connection card.
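
The connection card's one-to-one constraint can be pictured as a mapping keyed by card output: each output carries exactly one input, so reprogramming an output replaces its connection rather than fanning it out. The card names and dictionary layout below are illustrative assumptions.

```python
# A sketch of the connection card's constraint: each card output maps to
# exactly one card input, so one output can never feed multiple inputs.

ingress_connections = {
    # (link interface card, serial output) -> (processing engine card, serial input)
    ("LIC-0", 0): ("PE-0", 0),
    ("LIC-0", 1): ("PE-1", 0),
    ("LIC-1", 0): ("PE-1", 1),
}

def connect(connections, output, new_input):
    """Reprogram one output to a different input; the one-to-one rule holds
    because each output key carries a single input value."""
    connections[output] = new_input

# Redirecting LIC-0's first output to PE-1 replaces, rather than adds to,
# its previous connection.
connect(ingress_connections, ("LIC-0", 0), ("PE-1", 2))
assert ingress_connections[("LIC-0", 0)] == ("PE-1", 2)
```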

Even with the connection card, the mid-plane architecture wastes resources. Protocol-specific processing engine cards can be wasteful. If the network switch receives a disproportionately large percentage of data according to one protocol, the processing engine supporting that protocol is likely to be overutilized, while processing engine cards for other protocols remain underutilized with idle processing bandwidth. The resource inefficiency of existing mid-plane systems is compounded when a switch includes redundant processing engine cards: the network switch requires at least one redundant processing engine card for each protocol supported in the network switch, regardless of that protocol’s utilization level.

SUMMARY OF THE INVENTION

The present invention, roughly described, pertains to technology for efficiently utilizing resources within a network switch. One implementation of a network switch employs a mid-plane architecture that allows data to be directed between any link interface and any processing engine. In one implementation, each link interface can have a single data stream or a channelized data stream. Each channel of data from a link interface can be separately directed to any processing engine. Similarly, each channel of data from a processing engine can be separately directed to any link interface. In one embodiment, each processing engine in the network switch has the ability to service all of the protocols from the layers of the OSI model that are supported by the switch and not handled on the link interfaces. This allows the switch to allocate processing engine resources regardless of the protocols employed in the data passing through the switch.

One embodiment of the network switch includes link interfaces, processing engines, a switched fabric between the processing engines, and a switch between the link interfaces and processing engines. In one implementation, the switch between the link interfaces and processing engines is a time slot interchange (“TSI”) switch. An ingress link interface receives incoming data from a physical signaling medium. The ingress link interface forwards incoming data to the TSI switch. The TSI switch directs the data to one or more ingress processing engines for processing, such as forwarding at the Layer 2 or Layer 3 level of the OSI model. In one implementation, the TSI switch performs Time Division Multiplexing (“TDM”) switching on data received from each link interface—separately directing each time slot of incoming data to the proper ingress processing engine. In an alternate embodiment, the TSI switch is replaced by a packet switch. The information exchanged between link interfaces and processing engines is packetized and switched through the packet switch.
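
A minimal sketch of the time slot interchange step follows, assuming each frame is simply a list of time slots and that the map below stands in for the TSI switch's configuration; the interface and engine names are hypothetical.

```python
# A sketch of TDM time slot interchange: each time slot from an ingress
# link interface can be routed independently to any processing engine.

# (ingress link interface, time slot) -> (ingress processing engine, time slot)
tsi_map = {
    ("LI-0", 0): ("PE-0", 0),
    ("LI-0", 1): ("PE-1", 0),   # slots from one interface may go to different engines
    ("LI-1", 0): ("PE-0", 1),
}

def tsi_switch(frames, slot_map, num_engine_slots):
    """Redistribute time slots from ingress frames into per-engine frames."""
    engine_frames = {}
    for (interface, in_slot), (engine, out_slot) in slot_map.items():
        frame = engine_frames.setdefault(engine, [None] * num_engine_slots)
        frame[out_slot] = frames[interface][in_slot]
    return engine_frames

frames = {"LI-0": ["a", "b"], "LI-1": ["c"]}
out = tsi_switch(frames, tsi_map, num_engine_slots=2)
assert out == {"PE-0": ["a", "c"], "PE-1": ["b", None]}
```

The same mapping idea applies in the egress direction, with time slots from processing engine streams redistributed across egress link interfaces.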

The ingress processing engine sends data to the packet switch fabric, which directs packets from the ingress processing engine to one or more egress processing engines for further processing and forwarding to the TSI switch. The TSI switch directs the data to one or more egress link interfaces for transmission onto a physical medium. One implementation of the TSI switch performs TDM switching on data streams received from each processing engine—separately directing each time slot of incoming data to the proper egress link interface. In an alternate embodiment, the TSI switch is replaced by a packet switch that performs packet switching.

The switch between the link interfaces and processing engines can be any multiplexing switch—a switch that multiplexes data from multiple input interfaces onto a single output interface and demultiplexes data from a single input interface to multiple output interfaces. The above-described TSI switch and packet switch are examples of a multiplexing switch.
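
Abstractly, such a switch exposes two operations. The hypothetical interface below sketches them; a TSI switch would implement both in terms of time slots, while a packet switch would implement them in terms of packet headers.

```python
from abc import ABC, abstractmethod

# An illustrative abstraction of a multiplexing switch, not the patent's design.
class MultiplexingSwitch(ABC):
    @abstractmethod
    def multiplex(self, inputs):
        """Merge data from several input interfaces onto one output interface."""

    @abstractmethod
    def demultiplex(self, stream):
        """Split data from one input interface across several output interfaces."""
```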

In one example, the TSI switch receives data from link interfaces and processing engines in the form of SONET STS-48 frames. The TSI switch has the ability to switch time slots in the SONET frame down to the granularity of a single Synchronous Transport Signal-1 (“STS-1”) channel. In alternate embodiments, the TSI switch can switch data at a higher or lower granularity. Further implementations of the TSI switch perform virtual concatenation—switching time slots for multiple STS-1 channels that operate together as a higher throughput virtual channel, such as an STS-3 channel.
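
Virtual concatenation amounts to keeping a group of member time slots pointed at the same destination so they behave as one higher-rate channel. The group structure and helper below are illustrative assumptions, not the patent's mechanism.

```python
# Three STS-1 slots (e.g. slots 3, 17, and 40 of an STS-48 frame) form one
# virtual STS-3 channel; the TSI switch must move all three as a unit.
virtual_group = {"name": "vc-STS-3", "member_slots": [3, 17, 40]}

def redirect_group(slot_map, group, engine):
    """Point every member slot of a virtually concatenated channel at the
    same processing engine so the channel stays intact."""
    for i, slot in enumerate(group["member_slots"]):
        slot_map[slot] = (engine, i)

slot_map = {}
redirect_group(slot_map, virtual_group, "PE-2")
assert slot_map == {3: ("PE-2", 0), 17: ("PE-2", 1), 40: ("PE-2", 2)}
```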

The operation of the TSI switch and the protocol independence of the processing engines together facilitate bandwidth pooling within the network switch. When a processing engine becomes overutilized, a channel currently supported by that engine can be diverted to any processing engine that is not operating at full capacity. This redirection of network traffic can be performed at the STS-1 channel level or higher. Similar adjustments can be made when a processing engine or link interface is underutilized. Bandwidth pooling adjustments can be made both when the network switch is initialized and during the switch’s operation. The network switch also provides efficient redundancy—a single processing engine can provide redundancy for many other processing engines, regardless of the protocols embodied in the underlying data. Any processing engine can be connected to any channel on any link interface, allowing any processing engine in the network switch to back up any other processing engine in the switch. This facilitates the implementation of 1:1 or 1:N processing engine redundancy. In one implementation, the efficient distribution of resources allows for a 2:1 ratio of link interfaces to processing engines, so that each link interface has redundancy and no processing engine is required to sit idle.
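
Bandwidth pooling can be illustrated with a simple rebalancing rule: when an engine's channel count exceeds its capacity, divert one channel to the least loaded engine with headroom. The rule below is only a sketch under those assumptions, not the patent's algorithm.

```python
# A sketch of bandwidth pooling, assuming per-engine load is tracked as a
# count of STS-1 channels against a fixed capacity.

def rebalance(assignments, load, capacity):
    """Move one channel from an overutilized engine to one with headroom."""
    for channel, engine in assignments.items():
        if load[engine] > capacity:
            # Find the least loaded engine; divert the channel if it has room.
            spare = min(load, key=load.get)
            if load[spare] < capacity:
                assignments[channel] = spare
                load[engine] -= 1
                load[spare] += 1
            return

# PE-0 is over capacity; one of its channels is diverted to PE-1.
assignments = {"ch-0": "PE-0", "ch-1": "PE-0", "ch-2": "PE-0"}
load = {"PE-0": 3, "PE-1": 0}
rebalance(assignments, load, capacity=2)
assert load == {"PE-0": 2, "PE-1": 1}
```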

The present invention can be accomplished using hardware, software, or a combination of both hardware and software. The software used for the present invention is stored on one or more processor-readable storage media, including hard disk drives, CD-ROMs, DVDs, optical disks, floppy disks, tape drives, RAM, ROM, or other suitable storage devices. In alternative embodiments, some or all of the software can be replaced by dedicated hardware, including custom integrated circuits, gate arrays, FPGAs, PLDs, and special purpose computers. In one embodiment, software implementing the present invention is used to program one or more processors, including microcontrollers and other programmable logic. The processors can be in communication with one or more storage devices, peripherals, and/or communication interfaces.

These and other objects and advantages of the present invention will appear more clearly from the following description in which the preferred embodiment of the invention has been set forth in conjunction with the drawings.