MPLS HQoS (Hierarchical QoS)

Introduction


Definition

As an application of QoS in VPN services, MPLS HQoS includes hierarchical QoS (HQoS) at the public network side based on the peer PE/VPN instance, and interface-based QoS at the access side of users.
MPLS HQoS offers practical QoS solutions to various services in the MPLS VPN.

Purpose

With the development of Virtual Private Network (VPN) services, telecommunication operators and users urgently demand the implementation of QoS (such as QoS for the leased line) in VPN networks.
When providing L2VPN/L3VPN access services for a VPN user, an operator signs a service level agreement (SLA) with the user. The SLA includes the following:
  • Total bandwidth used by the user to access the MPLS VPN.
  • Priority of the user service in the MPLS network.
The preceding two points determine the volume of user traffic that can access the ISP network. After a user gets connected to the operator's network, the operator needs to decide which type of QoS is to be delivered to the user's service traffic.
  • The operator should guarantee bandwidth for the user traffic to a specified peer PE.
  • Types of services bound for a specific peer PE may include voice, video, important data, and common network services. These services have different requirements in terms of bandwidth and delay, and the operator should guarantee that such requirements are met.
MPLS HQoS provides a relatively complete MPLS L2VPN/L3VPN QoS solution. By using various QoS techniques, MPLS HQoS meets the diversified and refined QoS demands of VPN users.

Benefits

Benefits Brought to Operators
  • Ability to provide diversified QoS-based services, resulting in increased competitiveness
  • Ability to guarantee the quality of service for key users
Benefits Brought to Users
Through the MPLS VPN network, users can enjoy quality of service similar to that of a traditional physical leased line. This helps reduce their investment.

Implementation Principle


Implementing HQoS (L2VPN/L3VPN) at the Public Network Side Based on VPN Instance + Peer PE

In the MPLS VPN, a bandwidth agreement may need to be reached between an operator's PE devices, so that traffic between two PEs is restricted or guaranteed according to that agreement. To this end, HQoS based on VPN instance + peer PE can be adopted at the public network side.
As shown in Figure 1, bandwidth and class of service are specified for traffic between PEs at the MPLS VPN side. For example, in VPNA, the specified bandwidth for traffic between PE1 and PE2 is 30 Mbit/s, and higher priority services are given bandwidth out of the 30 Mbit/s bandwidth ahead of lower priority services.
 NOTE:
The LDP LSP and the static LSP do not reserve bandwidth resources along their forwarding paths. If you need to provide QoS guarantee for services along the entire PE-to-PE path, adopt a TE tunnel, which provides bandwidth guarantee by reserving bandwidth resources along its forwarding path. When an LDP LSP tunnel or a static LSP tunnel is adopted, end-to-end QoS cannot be guaranteed; in this case, priority scheduling and bandwidth restriction can be implemented only on the ingress PE.
If, however, you need to implement bandwidth restriction rather than bandwidth guarantee at the network side, you can simply set the CIR to 0 and the PIR to the desired bandwidth.
Figure 1 Implementing HQoS at the public network side based on VPN instance + peer PE

The preceding traffic model is described as follows:
  1. In this scenario, the LDP tunnel or the TE tunnel that is not allocated bandwidth is adopted.
    QoS is configured based on VPN instance + peer PE to implement bandwidth restriction at the network side.
    As shown in Figure 2, at the ingress PE, traffic is mapped to eight flow queues in the service queue (SQ) according to the priority information carried by the traffic. Flow queues belonging to the same VPN instance + peer PE are mapped to the same SQ.
    Based on the user's requirements, multiple VPN + peer PE pairs can be configured into one user group and undergo group queue (GQ) scheduling (see the sketch after this list).
    On the outbound interface, traffic first undergoes port queue scheduling, and is then forwarded.
    Figure 2 Model of hierarchical traffic scheduling
    In this scenario, priority scheduling of flow queues is supported, bandwidth restriction and bandwidth guarantee of SQs are supported, whereas only traffic shaping is supported for GQs.


  2. In this scenario, the TE tunnel that is allocated bandwidth is adopted.
    Scenario 2 differs from scenario 1 in the following aspects.
    • After traffic undergoes the SQ scheduling, by default, traffic undergoes GQ scheduling that is based on the TE tunnel. That is, by default, all traffic in one TE tunnel is regarded as being in one GQ. GQ scheduling, however, can also be configured to be based on VPN instance + peer PE according to the user's requirements. In this case, the default GQ scheduling that is based on the TE tunnel is no longer effective.
    • PE-to-PE bandwidth guarantee for traffic is supported. This is because in this scenario, bandwidth resources are reserved for the TE tunnel.
      Figure 3 Model of hierarchical traffic scheduling
    If traffic is load-balanced among TE tunnels on peer PEs, all traffic that is load-balanced undergoes priority scheduling and bandwidth restriction according to the traffic scheduling procedure as shown in Figure 3.
     NOTE:
    DS-TE forwards only traffic that conforms to its CT. All other traffic is discarded.
    In this scenario, it is recommended that the TE tunnel that is configured with bandwidth resources be adopted to achieve PE-to-PE bandwidth guarantee for traffic.
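To make the queue hierarchy concrete, the following Python sketch models how traffic is classified into FQs and SQs keyed by VPN instance + peer PE, and how several pairs can share one GQ. It is a conceptual illustration only (names such as VPNA and group1 are invented), not router code:

    from collections import defaultdict, deque

    PRIORITIES = ('BE', 'AF1', 'AF2', 'AF3', 'AF4', 'EF', 'CS6', 'CS7')

    # One SQ per (VPN instance, peer PE); each SQ holds eight flow queues.
    sqs = defaultdict(lambda: {p: deque() for p in PRIORITIES})

    # Optional GQ: several (VPN instance, peer PE) pairs form one user group.
    gq_of = {('VPNA', 'PE2'): 'group1', ('VPNB', 'PE2'): 'group1'}

    def enqueue(packet, vpn, peer_pe, priority):
        """Traffic of the same VPN instance + peer PE enters one SQ; within
        the SQ it enters one of the eight FQs by the carried priority."""
        sqs[(vpn, peer_pe)][priority].append(packet)

    enqueue('voice-pkt', 'VPNA', 'PE2', 'EF')
    enqueue('mail-pkt', 'VPNA', 'PE2', 'BE')
    print(list(sqs[('VPNA', 'PE2')]['EF']))   # ['voice-pkt']
    print(gq_of[('VPNA', 'PE2')])             # group1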

Implementing HQoS (L2VPN/L3VPN/MVPN) at the Public Network Side Based on VPN Instance

In the MPLS VPN, when a VPN instance is a user (a corporate user in most cases), the operator wishes to implement traffic restriction/guarantee for the user on the PE device.
Generally, there are multiple interfaces on the user access side, and they may be located on different boards. Deploying QoS on these interfaces separately is therefore not only complex, but also involves coordinating QoS enforcement across different boards. The problem can be solved by deploying HQoS based on the VPN instance at the public network side.
As shown in Figure 4, bandwidth and class of service are specified on the ingress PE for traffic of VPN instances (for example, the specified bandwidth for VPN-A is 30 Mbit/s, and higher priority services are given bandwidth out of the 30 Mbit/s bandwidth ahead of lower priority services). VPN-specific queue scheduling of traffic is performed at the PE side to implement traffic restriction/guarantee for the entire VPN.
 NOTE:
If you need to implement bandwidth restriction rather than bandwidth guarantee at the network side, you can simply set the CIR to 0 and the PIR to the desired bandwidth.
Figure 4 VPN-based QoS on the network side in an L2VPN/L3VPN
The preceding traffic model is described as follows:
  1. In this scenario, the LDP tunnel or the TE tunnel that is not allocated bandwidth is adopted.
    QoS is configured based on VPN instance to implement bandwidth restriction at the network side.
    As shown in Figure 5, at the ingress PE, traffic is mapped to eight flow queues in the SQ according to the priority information carried by the traffic. Flow queues belonging to the same VPN instance are mapped to the same SQ.
    Based on the user's requirements, multiple VPNs can be configured to be in one user group, and undergo GQ scheduling.
    On the outbound interface, traffic first undergoes port queue scheduling, and is then forwarded.
    Figure 5 Model of hierarchical traffic scheduling
    In this scenario, priority scheduling of flow queues is supported, bandwidth restriction and bandwidth guarantee of SQs are supported, whereas only traffic shaping is supported for GQs.
  2. In this scenario, the TE tunnel that is allocated bandwidth is adopted. This scenario is shown in the following figure.
    Figure 6 Model of hierarchical traffic scheduling
    This scenario differs from the scenario in which the LDP tunnel or a TE tunnel without allocated bandwidth is adopted in the following aspects.
    • Traffic is allocated to an SQ based on VPN + TE tunnel (in scenario 1, traffic is allocated based on VPN only). This means that if a VPN uses N such TE tunnels, the total bandwidth of the VPN is N times the bandwidth allocated to each such SQ.
    • After traffic undergoes the SQ scheduling, by default, traffic undergoes GQ scheduling that is based on the TE tunnel. That is, by default, all traffic in one TE tunnel is regarded as being in one GQ. Multiple VPN instances can be configured to be in the same group for GQ scheduling according to the user's requirements. In this case, the default GQ scheduling that is based on the TE tunnel is no longer effective.
    • PE-to-PE bandwidth guarantee for traffic is supported. This is because in this scenario, bandwidth resources are reserved for the TE tunnel.
     NOTE:
    In this scenario, the TE tunnel that is configured with bandwidth resources is not recommended. This is because VPN-based traffic control is not concerned with PE-to-PE traffic handling. Instead, VPN-based traffic control mainly focuses on the bandwidth restriction and guarantee on a single PE.
    When QoS is configured at the public network side for a VPN instance, if the VPN instance also runs multicast VPN (MVPN) services, the QoS configuration that is applicable to unicast services of the VPN instance is also applicable to these MVPN services.

Implementing HQoS (L2VPN/L3VPN) on Interfaces at the CE Side of a VPN Instance

If an interface is bound to a VPN, interface-based QoS configurations are also applicable to the interface (for details, refer to the description in the QoS part). This means that traffic received by the interface from the CE or sent by the interface to the CE can undergo interface-based QoS processing.

Implementing VPN-based Traffic Statistics

In a VPLS network, the PE device supports statistics on both incoming and outgoing traffic at the AC side and at the PW side (at the AC side, traffic statistics are based on the interface; at the PW side, they are based on the PW table). After MPLS HQoS is configured, priority-based statistics can also be collected for the packets sent at the PW side.
In an L3VPN, the PE device supports statistics on both the incoming and outgoing traffic of a VPN user (based on interfaces at the VPN side). After MPLS HQoS is configured, priority-based statistics can also be collected for the packets sent at the public network side of the ingress PE.
In a VLL, the PE device supports statistics on both the incoming and outgoing traffic of a VLL user (based on interfaces at the VPN side). After MPLS HQoS is configured, priority-based statistics can also be collected for the packets sent at the public network side of the ingress PE.

Hierarchical Quality of Service (HQoS)

Definition

Hierarchical Quality of Service (HQoS) is a technology used to guarantee the bandwidth of multiple services of many users in the Differentiated Service (DiffServ) model through a queue scheduling mechanism.
DiffServ is a class-based QoS technology that provides different services for different service flows. In the DiffServ scheme, service flows must therefore first be classified based on service requirements. Traffic classification is the prerequisite and basis of HQoS.
Family HQoS classifies services with specified characteristics into the same family; these services enter one subscriber queue (SQ). Currently, methods of identifying family members include NONE, C-VLAN, P-VLAN, P+C VLAN, and Option 82. Subscribers going online from different sub-interfaces of the same interface can also be classified as belonging to the same family.
A leased line user is an enterprise that leases an entire interface (or several interfaces) from the operator. All subscribers of this enterprise are charged, scheduled, and managed in a unified manner.

Purpose

Along with the emergence of new applications on IP networks, new requirements are imposed on the QoS of IP networks. For example, real-time services such as Voice over IP (VoIP) demand a short delay; a long delay for packet transmission is unacceptable. Email and File Transfer Protocol (FTP) services are comparatively insensitive to delay. To support services with different requirements, such as voice, video, and data services, the network must distinguish between the services before providing the corresponding quality of service for each. Therefore, the QoS technology was introduced. With the rapid development of network equipment, the capacity of a single interface increases along with the number of users accessing it, and traditional QoS encounters new problems:
  • Traditional traffic management schedules traffic based on the bandwidth of interfaces. As a result, it is sensitive to the class of service rather than to users, and is suitable for traffic at the network core rather than at the service access side.
  • Traditional traffic management has great difficulties in simultaneously controlling multiple services of many users.
To solve these problems and provide better QoS, a QoS technology is needed that can carry out queue scheduling based on service priorities while also controlling the traffic of individual users. Combined with the DiffServ scheme, HQoS adopts five levels of scheduling. HQoS enables the equipment to acquire policies for controlling internal resources with the existing hardware. It can both provide quality assurance for advanced users and reduce the total cost of network construction.
Family HQoS implements unified traffic scheduling of all services belonging to the same family.
Leased line HQoS implements unified traffic scheduling of all services belonging to the same enterprise.

Principles of QoS Queues

A queue is a storage format for packets during the forwarding process. When the traffic rate exceeds the bandwidth of an interface or the bandwidth set for the traffic, packets are buffered in a queue. Scheduling policies determine the time and sequence in which packets leave their queues and how packets in different queues are scheduled against each other. The QoS queue structure is classified into an upstream queue structure and a downstream queue structure, both of which include flow queues, common queues, and port queues.

Figure 1 Upstream QoS queue scheduling structure

Figure 2 Downstream QoS queue scheduling structure


Queue Scheduling Technology

The queue scheduling mechanism is a very important QoS technology. When congestion occurs, a proper queue scheduling mechanism can provide packets of a certain type with proper QoS characteristics such as bandwidth, delay, and jitter. The queue scheduling mechanism takes effect only when congestion occurs. Commonly used queue scheduling technologies include Weighted Fair Queuing (WFQ), Weighted Round Robin (WRR), and Priority Queuing (PQ).

RR

RR, short for Round Robin, is a simple scheduling method. Through RR, multiple queues can be scheduled.
Figure 3  Schematic diagram of RR

RR schedules multiple queues in ring mode. If the queue on which RR is performed is not empty, the scheduler takes packets of a certain length away from the queue. If the queue is empty, the queue is skipped and the scheduler does not wait.
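The following minimal Python sketch illustrates this skip-when-empty ring behaviour. It is illustrative only; for simplicity it dequeues one packet per visit rather than "packets of a certain length":

    from collections import deque

    def round_robin(queues):
        """Serve non-empty queues in ring order; empty queues are skipped
        without waiting, as described above. `queues` is a list of deques."""
        while any(queues):
            for q in queues:
                if q:                      # skip empty queues immediately
                    yield q.popleft()      # take one packet per visit

    # Example: three queues with different backlogs.
    queues = [deque(['a1', 'a2']), deque(), deque(['c1', 'c2', 'c3'])]
    print(list(round_robin(queues)))       # ['a1', 'c1', 'a2', 'c2', 'c3']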

WRR

Compared with RR, WRR can set the weights of queues. During the WRR scheduling, the service time obtained by a queue is in direct proportion to the weight of the queue.
WRR allocates different queues different periods of service time based on the weight. Each time a queue is scheduled, packets are sent from the queue for the allocated period of time, as shown in Figure 4.
Figure 4 Schematic diagram of WRR

During WRR scheduling, empty queues are skipped directly. Therefore, when a queue carries only a small volume of traffic, the bandwidth it leaves unused is shared by the other queues in proportion to their weights.
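A simple packet-count WRR sketch in Python follows. Note that it counts packets, not bytes, which is exactly why WRR fairness depends on packet lengths, as noted in the comparison table later in this section:

    from collections import deque

    def weighted_round_robin(queues, weights):
        """Each scheduling round, queue i may send up to weights[i] packets;
        empty queues are skipped so their share is reused by the others."""
        while any(queues):
            for q, w in zip(queues, weights):
                for _ in range(w):
                    if not q:              # empty queue: skip immediately
                        break
                    yield q.popleft()

    # Two queues with weights 3:1 -- queue 0 gets ~75% of the service time.
    q0 = deque(f'p0-{i}' for i in range(6))
    q1 = deque(f'p1-{i}' for i in range(6))
    print(list(weighted_round_robin([q0, q1], [3, 1])))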

WFQ+SP

SP scheduling is based on absolute priority: a higher-priority queue is always served before lower-priority queues.
WFQ assigns bandwidth to the queues taking part in the scheduling according to their weights; unused bandwidth is redistributed among the backlogged queues, so WFQ guarantees each queue a minimum bandwidth proportional to its weight. Because the two mechanisms are combined, the technology is called WFQ+SP.
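The following simplified Python sketch combines the two: SP queues are always drained first, and the remaining queues are served by a basic WFQ that picks the smallest virtual finish time (finish = start + size/weight). Real WFQ implementations track a global virtual clock; this is only a sketch with invented queue names:

    from collections import deque

    def sp_wfq(sp_queues, wfq_queues, weights):
        """SP queues are served first (absolute priority). The remaining
        queues share bandwidth by weight: each head packet is stamped with
        a virtual finish time (start + size/weight) and the smallest finish
        time is served next. Simplified: no global virtual clock."""
        finish = [0.0] * len(wfq_queues)
        while any(sp_queues) or any(wfq_queues):
            sp = next((q for q in sp_queues if q), None)
            if sp is not None:             # absolute priority: drain SP first
                yield sp.popleft()
                continue
            backlogged = [i for i, q in enumerate(wfq_queues) if q]
            i = min(backlogged,
                    key=lambda i: finish[i] + wfq_queues[i][0][1] / weights[i])
            finish[i] += wfq_queues[i][0][1] / weights[i]
            yield wfq_queues[i].popleft()[0]

    # EF preempts everything; AF1 and AF2 then share bandwidth 2:1 by bytes.
    ef  = deque(['ef-1', 'ef-2'])
    af1 = deque([('af1-%d' % n, 500) for n in (1, 2)])  # (packet, size in bytes)
    af2 = deque([('af2-%d' % n, 500) for n in (1, 2)])
    print(list(sp_wfq([ef], [af1, af2], [2, 1])))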

Comparison Between the Scheduling Technologies

Scheduling Algorithm | Complexity | Delay/Jitter | Fairness
RR | Easy to implement. | In low-speed scheduling, delay and jitter intensify. | Scheduling is per packet, so fairness depends on packet lengths; queues carrying longer packets obtain more bandwidth.
WRR | Easy to implement. | In low-speed scheduling, delay and jitter intensify. | Scheduling is per packet, so fairness depends on packet lengths; queues carrying longer packets obtain more bandwidth.
WFQ | Complex to implement. | Delay is properly controlled and jitter is low. | Queues are scheduled fairly at the granularity of bytes.


QoS Queue Scheduling

HQoS ensures the bandwidth of multiple services of many users through the QoS queue scheduling of five levels. HQoS scheduling is classified into upstream HQoS scheduling and downstream HQoS scheduling.
  • The upstream QoS scheduling is classified into five levels: FQ -> SQ -> GQ -> Target Blade (TB) -> CQ.
  • The downstream QoS scheduling is classified into five levels: FQ -> SQ -> GQ -> CQ -> Port Queue (PQ).
QoS queue scheduling includes FQ scheduling, common queue scheduling, and port queue scheduling.

FQ Scheduling

Flow queue scheduling includes GQ traffic shaping, SQ scheduling, FQ scheduling, and FQ traffic shaping.
GQ does not support a scheduling algorithm for guaranteeing queue bandwidth. Instead, a GQ only limits bandwidth through traffic shaping, by setting the rate of the traffic shaper. Traffic shaping uses a single token bucket, and tokens are placed into the bucket at the rate of the traffic shaper. After SQ scheduling is complete, GQ scheduling is performed if the SQ belongs to a GQ; otherwise, GQ scheduling is not required.
An SQ corresponds to a user. The SQ scheduling ensures the CIR and PIR bandwidth of the user.
FQ scheduling adopts SP, WFQ, or SPL scheduling. Each SQ corresponds to eight FQs, and the eight FQs can share the bandwidth of the SQ. Each FQ can define its own PIR. Figure 5 shows the hierarchical structure of the FQ scheduling.
Figure 5 Hierarchical structure of the FQ scheduling

Traffic shaping is implemented on each FQ to limit the traffic rate. That is, the PIR is configured for each FQ. Traffic shaping is implemented through a single token bucket.
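A single token bucket of the kind described above can be sketched in Python as follows (the rate and burst size are example values, not device defaults):

    class TokenBucket:
        """Single token bucket used for FQ traffic shaping: tokens are added
        at the PIR (bytes/second); a packet may be sent only if enough tokens
        are available, otherwise it waits in the queue. Illustrative sketch."""
        def __init__(self, pir_bytes_per_sec, burst_bytes):
            self.rate = pir_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = 0.0

        def conforms(self, packet_bytes, now):
            # Refill tokens for the elapsed time, capped at the bucket size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True        # send now
            return False           # hold the packet in the flow queue

    # Shape an FQ to PIR = 1 Mbyte/s with a 1500-byte burst allowance.
    bucket = TokenBucket(pir_bytes_per_sec=1_000_000, burst_bytes=1500)
    print(bucket.conforms(1500, now=0.000))   # True  (burst credit)
    print(bucket.conforms(1500, now=0.0005))  # False (only ~500 bytes refilled)
    print(bucket.conforms(1500, now=0.0015))  # True  (another ~1000 bytes refilled)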

Common Queue Scheduling

After the WFQ scheduling, the packets of common queues enter a CQ.

Port Queue Scheduling

To ensure the allocation of available bandwidth, the upstream port queue scheduling is classified into the following levels:
  • RR scheduling for port queues with the same CoS
  • WFQ scheduling for multicast traffic of the same CoS and unicast traffic of the same CoS
  • SP+WFQ scheduling for traffic of different CoSs
The downstream port queue scheduling is classified into the following levels:
  • WFQ scheduling for traffic of different CoSs on an interface
  • WRR scheduling for traffic between interfaces
  • CQ scheduling and port queue scheduling

Principles of Queue Mapping

Queue mapping is used to map the eight priority queues (BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7) of an FQ at the first level to the eight priority queues (BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7) of a CQ at the fifth level. Through the queue mapping from an FQ to a CQ, the traffic of an FQ can flexibly enter a queue of a CQ.
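Conceptually, the mapping is just a configurable table from FQ priorities to CQ priorities, as in this Python sketch (the remapping of AF4 to EF is an invented example, not a device default):

    # Illustrative FQ-to-CQ mapping (assumed values, not a device default):
    # each first-level FQ priority can be remapped to any fifth-level CQ
    # priority, so e.g. AF4 traffic of one user can ride in the EF class queue.
    PRIORITIES = ('BE', 'AF1', 'AF2', 'AF3', 'AF4', 'EF', 'CS6', 'CS7')

    fq_to_cq = {p: p for p in PRIORITIES}   # default: identity mapping
    fq_to_cq['AF4'] = 'EF'                  # remap one flow queue as an example

    def cq_for(fq_priority):
        return fq_to_cq[fq_priority]

    print(cq_for('AF4'))    # EF -- the AF4 flow queue now enters the EF class queue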

Next, we will look at Inter-AS VPN and how MPLS providers extend VPN services across AS boundaries.

Inter-AS VPN

Generally, an MPLS VPN architecture runs within an AS in which the VPN routing information is flooded on demand. The VPN routing information within the AS cannot be flooded to the AS of other SPs. To realize the exchange of VPN route information between different ASs, the inter-AS MPLS VPN model is introduced. The inter-AS MPLS VPN model is an extension of the existing protocol and MPLS VPN framework. Through this model, the route prefix and label information can be advertised over the links between different carrier networks.
RFC 2547bis presents three Inter-AS VPN solutions.
  • Inter-Provider Backbones Option A: ASBRs manage VPN routes, through dedicated interfaces for the VPNs that traverse different ASs. This solution is also called VRF-to-VRF.
  • Inter-Provider Backbones Option B: ASBRs advertise labeled VPN-IPv4 routes to each other through MP-EBGP. This solution is also called EBGP redistribution of labeled VPN-IPv4 routes.
  • Inter-Provider Backbones Option C: PEs advertise labeled VPN-IPv4 routes to each other through Multi-hop MP-EBGP. This solution is also called Multi-hop EBGP redistribution of labeled VPN-IPv4 routes.

Inter-Provider Backbones Option A

As a basic BGP/MPLS IP VPN application in the inter-AS scenario, Option A does not need special configurations and MPLS need not run between ASBRs. In this mode, ASBRs of the two ASs are directly connected, and they act as the PEs in the ASs. Either of the ASBR PEs takes the peer ASBR as its CE and advertises IPv4 routes to the peer ASBR through EBGP.
Figure 1 Networking diagram of inter-provider backbones Option A 

In Figure 1, for ASBR1 in AS 100, ASBR2 is a CE. Similarly, for ASBR2, ASBR1 is a CE.

Inter-Provider Backbones Option B

In Option B, two ASBRs use MP-EBGP to exchange the labeled VPN-IPv4 routes that they receive from the PEs in their respective ASs.
Figure 2 Networking diagram of inter-provider backbones Option B 

In inter-AS VPN Option B, ASBRs receive all inter-AS VPN-IPv4 routes, both from within the local AS and from outside ASs, and then advertise these VPN-IPv4 routes. In the basic MPLS VPN implementation, a PE stores only the VPN routes that match the VPN targets of its local VPN instances. Thus, the VPN instances whose routes need to be advertised by the ASBR can be configured on the ASBR, with no interface bound to them. If the ASBR is not configured with the related VPN instances, the following methods can be adopted:
  • The ASBR processes the labeled VPN-IPv4 routes specially and stores all the received VPN routes regardless of whether the local VPN instance that matches the routes exists.
    When using this method, note the following:
    • ASBRs do not filter the VPN-IPv4 routes received from each other based on VPN targets. Therefore, the SPs in different ASs that exchange VPN-IPv4 routes must reach a trust agreement on route exchange.
    • The VPN-IPv4 routes are exchanged only between VPN peers of private networks. A VPN cannot exchange VPN-IPv4 routes with public networks or MP-EBGP peers with whom there is no trust agreement.
    All the traffic is forwarded by the ASBR; thus, the traffic is easy to control, but the load on the ASBR increases.
  • Use BGP routing policies such as the policy filtering routes based on RTs to control the transmission of VPN-IPv4 routes.

Inter-Provider Backbones Option C

The preceding two modes can satisfy networking requirements of inter-AS VPN. ASBRs, however, need to maintain and distribute VPN-IPv4 routes. When each AS needs to exchange a large number of VPN routes, ASBRs may hinder network extension.
The solution to the problem is that PEs directly exchange VPN-IPv4 routes with each other and ASBRs do not maintain or advertise VPN-IPv4 routes.
  • ASBRs advertise labeled IPv4 routes to PEs in their respective ASs through MP-IBGP, and advertise labeled IPv4 routes received from PEs in the local AS to the ASBR peers in other ASs. ASBRs in the transit AS also advertise labeled IPv4 routes. Therefore, a BGP LSP can be established between the ingress PE and the egress PE (see the label-stack sketch after this list).
  • The PEs in different ASs establish multi-hop EBGP connections with each other and exchange VPN-IPv4 routes.
  • The ASBRs do not store VPN-IPv4 routes or advertise VPN-IPv4 routes to each other.
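Conceptually, the ingress PE in Option C pushes a three-label stack, sketched below in Python with made-up label values: the inner VPN label learned from the remote PE over multihop MP-EBGP, the BGP LSP label for the labeled IPv4 route to the remote PE, and the outer LDP/TE label of the tunnel to the local ASBR.

    # Conceptual Option C label stack at the ingress PE (values invented).
    vpn_label = 4096   # from the remote PE, via multihop MP-EBGP
    bgp_label = 2048   # from the ASBR, for the labeled IPv4 route to the remote PE
    ldp_label = 1024   # LDP/TE tunnel label toward the local ASBR

    label_stack = [ldp_label, bgp_label, vpn_label]   # outermost first
    print(label_stack)   # [1024, 2048, 4096]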
Figure 3 Networking diagram of inter-provider backbones Option C 

To improve scalability, you can specify a Route Reflector (RR) in each AS. The RR stores all VPN-IPv4 routes and exchanges VPN-IPv4 routes with the PEs in its AS. The RRs in two ASs establish an MP-EBGP connection with each other and advertise VPN-IPv4 routes.
Figure 4 Networking diagram of inter-provider backbones Option C with RRs


Comparison Between Three Options

Option A
  • Advantage: easy to implement, because MPLS is not required between ASBRs and no special configuration is needed.
  • Disadvantage: poor scalability. ASBRs must manage all VPN routes and create a VPN instance for each VPN, which may result in too many VPN-IPv4 routes on the ASBR PEs. In addition, because common IP forwarding is performed between the ASBRs, each inter-AS VPN requires a separate interface (a sub-interface, a physical interface, or a bound logical interface), which places high demands on the PEs. If a VPN spans multiple ASs, the intermediate ASs must also support VPN services, which requires complex configuration and greatly affects their operation. Option A is therefore suitable only when the number of inter-AS VPNs is small.
Option B
  • Advantage: unlike Option A, Option B is not limited by the number of links between ASBRs.
  • Disadvantage: VPN routing information is stored on and forwarded by the ASBRs. When a great number of VPN routes exist, the overloaded ASBRs may become bottlenecks. Therefore, in the MP-EBGP solution, the ASBRs that maintain VPN routing information should not also perform public-network IP forwarding.
Option C
  • Advantage: VPN routes are exchanged directly between the ingress PE and the egress PE and need not be stored or forwarded by intermediate devices. The exchange of VPN routing information involves only PEs; Ps and ASBRs are responsible for packet forwarding only and need to support only MPLS forwarding rather than MPLS VPN services, so ASBRs are unlikely to become bottlenecks. Option C is therefore suitable for VPNs that span multiple ASs. In addition, MPLS VPN load balancing is easy to carry out in Option C.
  • Disadvantage: the high cost of managing an end-to-end connection between PEs.




Configuring Martini VLL by Using MPLS TE Tunnels



The public network tunnel of a Martini VLL can be an MPLS TE tunnel.

Networking Requirements

As shown in Figure 1, CE1 and CE2 belong to the same VPN. They access the MPLS backbone network through PE1 and PE2 respectively. OSPF is used as the IGP on the MPLS backbone network.
In this scenario, you need to configure a Martini VLL. An MPLS TE tunnel must be established between PE1 and PE2 through the dynamic signaling protocol RSVP-TE to forward the VLL traffic. The bandwidth of the tunnel is 20 Mbit/s. The maximum reservable bandwidth of the links that the TE tunnel passes through is 100 Mbit/s, and the BC0 bandwidth is 50 Mbit/s.
Figure 1 Networking diagram of configuring Martini L2VPN using MPLS TE tunnels 

Configuration Roadmap

The configuration roadmap is as follows:
  1. Configure a routing protocol and enable MPLS on related devices (PEs and P) in the backbone network to implement interworking.
  2. Establish the MPLS TE tunnel and configure the tunnel policy. For the details of establishing an MPLS TE tunnel, refer to the chapter "MPLS TE Configuration" in the HUAWEI CX600 Metro Services Platform Configuration Guide - MPLS.
  3. Enable VLL on the PEs and establish VCs.

Data Preparation

To complete the configuration, you need the following data:
  • OSPF area enabled with TE
  • Name of the tunnel policy
  • Number of tunnels involved in load balancing (if load balancing is not performed, the number of tunnels is 1)

Procedure

  1. Configure an IP address for each interface and configure OSPF in the backbone network.
    The configuration details are not mentioned here.
  2. Enable MPLS, MPLS TE, MPLS RSVP-TE, and MPLS TE CSPF.
    Enable MPLS, MPLS TE, and MPLS RSVP-TE in the system view and the interface view of each node along the tunnel, and enable MPLS TE CSPF in the MPLS view of the ingress of the tunnel.
    # Configure PE1.
    [PE1] mpls lsr-id 1.1.1.9
    [PE1] mpls
    [PE1-mpls] mpls te
    [PE1-mpls] mpls rsvp-te
    [PE1-mpls] mpls te cspf
    [PE1-mpls] quit
    [PE1] interface pos1/0/0
    [PE1-Pos1/0/0] mpls
    [PE1-Pos1/0/0] mpls te
    [PE1-Pos1/0/0] mpls rsvp-te
    [PE1-Pos1/0/0] quit
    # Configure P.
    [P] mpls lsr-id 2.2.2.9
    [P] mpls
    [P-mpls] mpls te
    [P-mpls] mpls rsvp-te
    [P-mpls] quit
    [P] interface pos1/0/0
    [P-Pos1/0/0] mpls
    [P-Pos1/0/0] mpls te
    [P-Pos1/0/0] mpls rsvp-te
    [P-Pos1/0/0] quit
    [P] interface pos2/0/0
    [P-Pos2/0/0] mpls
    [P-Pos2/0/0] mpls te
    [P-Pos2/0/0] mpls rsvp-te
    [P-Pos2/0/0] quit
    # Configure PE2.
    [PE2] mpls lsr-id 3.3.3.9
    [PE2] mpls
    [PE2-mpls] mpls te
    [PE2-mpls] mpls rsvp-te
    [PE2-mpls] mpls te cspf
    [PE2-mpls] quit
    [PE2] interface pos1/0/0
    [PE2-Pos1/0/0] mpls
    [PE2-Pos1/0/0] mpls te
    [PE2-Pos1/0/0] mpls rsvp-te
    [PE2-Pos1/0/0] quit
  3. Configure OSPF TE on the backbone network.
    # Configure PE1.
    [PE1] ospf
    [PE1-ospf-1] opaque-capability enable
    [PE1-ospf-1] area 0.0.0.0
    [PE1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
    [PE1-ospf-1-area-0.0.0.0] network 100.1.1.0 0.0.0.255
    [PE1-ospf-1-area-0.0.0.0] mpls-te enable
    # Configure P.
    [P] ospf
    [P-ospf-1] opaque-capability enable
    [P-ospf-1] area 0.0.0.0
    [P-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
    [P-ospf-1-area-0.0.0.0] network 100.1.1.0 0.0.0.255
    [P-ospf-1-area-0.0.0.0] network 100.2.1.0 0.0.0.255
    [P-ospf-1-area-0.0.0.0] mpls-te enable
    # Configure PE2.
    [PE2] ospf
    [PE2-ospf-1] opaque-capability enable
    [PE2-ospf-1] area 0.0.0.0
    [PE2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
    [PE2-ospf-1-area-0.0.0.0] network 100.2.1.0 0.0.0.255
    [PE2-ospf-1-area-0.0.0.0] mpls-te enable
  4. Configure the MPLS TE attributes of the links.
    Configure the maximum bandwidth and the maximum reservable bandwidth of the link on the interface of each CX device along the tunnel.
    # Configure PE1.
    [PE1] interface pos 1/0/0
    [PE1-Pos1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [PE1-Pos1/0/0] mpls te bandwidth bc0 50000
    [PE1-Pos1/0/0] quit
    # Configure P.
    [P] interface pos 1/0/0
    [P-Pos1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [P-Pos1/0/0] mpls te bandwidth bc0 50000
    [P-Pos1/0/0] quit
    [P] interface pos 2/0/0
    [P-Pos2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [P-Pos2/0/0] mpls te bandwidth bc0 50000
    [P-Pos2/0/0] quit
    # Configure PE2.
    [PE2] interface pos 1/0/0
    [PE2-Pos1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
    [PE2-Pos1/0/0] mpls te bandwidth bc0 50000
    [PE2-Pos1/0/0] quit
  5. Configure a tunnel interface.
    # Create the tunnel interface on the PEs, specify the tunnel protocol as MPLS TE and the signaling protocol as RSVP-TE, and specify the bandwidth.
    # Configure PE1.
    <PE1> system-view
    [PE1] interface tunnel 1/0/0
    [PE1-Tunnel1/0/0] ip address unnumbered interface loopback1
    [PE1-Tunnel1/0/0] tunnel-protocol mpls te
    [PE1-Tunnel1/0/0] mpls te signal-protocol rsvp-te
    [PE1-Tunnel1/0/0] destination 3.3.3.9
    [PE1-Tunnel1/0/0] mpls te tunnel-id 100
    [PE1-Tunnel1/0/0] mpls te bandwidth ct0 20000
    [PE1-Tunnel1/0/0] mpls te commit
    # Configure PE2.
    <PE2> system-view
    [PE2] interface tunnel 1/0/0
    [PE2-Tunnel1/0/0] ip address unnumbered interface loopback1
    [PE2-Tunnel1/0/0] tunnel-protocol mpls te
    [PE2-Tunnel1/0/0] mpls te signal-protocol rsvp-te
    [PE2-Tunnel1/0/0] destination 1.1.1.9
    [PE2-Tunnel1/0/0] mpls te tunnel-id 101
    [PE2-Tunnel1/0/0] mpls te bandwidth ct0 20000
    [PE2-Tunnel1/0/0] mpls te commit
    After the configuration, run the display this interface command in the tunnel interface view. You can view that the MPLS TE tunnel is established successfully. That is, Line protocol current state displays UP.
    [PE1-Tunnel1/0/0] display this interface
    Tunnel1/0/0 current state : UP
    Line protocol current state : UP
    Last line protocol up time : 2009-02-23 14:46:57
    Description: Tunnel1/0/0 Interface
    Route Port,The Maximum Transmit Unit is 1500
    Internet Address is unnumbered, using address of LoopBack1(1.1.1.9/32)
    Encapsulation is TUNNEL, loopback not set
    Tunnel destination 3.3.3.9
    Tunnel up/down statistics 1
    Tunnel protocol/transport MPLS/MPLS, ILM is available,
    primary tunnel id is 0x1002002, secondary tunnel id is 0x0
        300 seconds output rate 0 bytes/sec, 0 packets/sec
        68 seconds output rate 0 bits/sec, 0 packets/sec
        22894187 packets output,  2958834536 bytes
        0 packets output error
        ct0:0 packets output,  0 bytes
        0 output error 
  6. Establish LDP sessions.
    Establish a remote peer session between PE1 and PE2.
    # Configure PE1.
    [PE1] mpls ldp
    [PE1-mpls-ldp] quit
    [PE1] mpls ldp remote-peer 3.3.3.9
    [PE1-mpls-ldp-remote-3.3.3.9] remote-ip 3.3.3.9
    [PE1-mpls-ldp-remote-3.3.3.9] quit
    # Configure PE2.
    [PE2] mpls ldp
    [PE2-mpls-ldp] quit
    [PE2] mpls ldp remote-peer 1.1.1.9
    [PE2-mpls-ldp-remote-1.1.1.9] remote-ip 1.1.1.9
    [PE2-mpls-ldp-remote-1.1.1.9] quit
    After the configuration, LDP sessions are established between the PEs.
    Take the display on PE1 as an example:
    [PE1] display mpls ldp session
    
    
     LDP Session(s) in Public Network
     Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
     A '*' before a session means the session is being deleted.
     ------------------------------------------------------------------------------
     PeerID             Status      LAM  SsnRole  SsnAge      KASent/Rcv
     ------------------------------------------------------------------------------
     3.3.3.9:0          Operational DU   Passive  000:00:07   31/31
     ------------------------------------------------------------------------------
     TOTAL: 1 session(s) Found.
  7. Configure a tunnel policy and establish VC connections.
    # Configure PE1.
    [PE1] tunnel-policy policy1
    [PE1-tunnel-policy-policy1] tunnel select-seq cr-lsp load-balance-number 1
    [PE1-tunnel-policy-policy1] quit
    [PE1] mpls l2vpn
    [PE1-l2vpn] mpls l2vpn default martini
    [PE1-l2vpn] quit
    [PE1] interface pos2/0/0
    [PE1-Pos2/0/0] mpls l2vc 3.3.3.9 100 tunnel-policy policy1
    [PE1-Pos2/0/0] undo shutdown
    [PE1-Pos2/0/0] quit
    # Configure PE2.
    [PE2] tunnel-policy policy1
    [PE2-tunnel-policy-policy1] tunnel select-seq cr-lsp load-balance-number 1
    [PE2-tunnel-policy-policy1] quit
    [PE2] mpls l2vpn
    [PE2-l2vpn] mpls l2vpn default martini
    [PE2-l2vpn] quit
    [PE2] interface pos2/0/0
    [PE2-Pos2/0/0] mpls l2vc 1.1.1.9 100 tunnel-policy policy1
    [PE2-Pos2/0/0] undo shutdown
    [PE2-Pos2/0/0] quit
    # Configure CE1.
    <CE1> system-view
    [CE1] interface pos1/0/0
    [CE1-Pos1/0/0] ip address 10.1.1.1 24
    [CE1-Pos1/0/0] undo shutdown
    [CE1-Pos1/0/0] quit
    # Configure CE2.
    <CE2> system-view
    [CE2] interface pos1/0/0
    [CE2-Pos1/0/0] ip address 10.1.1.2 24
    [CE2-Pos1/0/0] undo shutdown
    [CE2-Pos1/0/0] quit

  8. Verify the configuration.
    Run the display mpls lsp verbose command on PE1. You can view that an MPLS RSVP-TE tunnel is established between 1.1.1.9 and 3.3.3.9. The LSP index of this tunnel is the same as the value in the MPLS forwarding table, which indicates that the local packets to 3.3.3.9 are forwarded through the MPLS TE tunnel.
    <PE1> display mpls lsp verbose
    ----------------------------------------------------------------------
                     LSP Information: RSVP LSP
    ----------------------------------------------------------------------
      No                  :  1
      SessionID           :  100
      IngressLsrID        :  1.1.1.9
      LocalLspID          :  1
      Tunnel-Interface    :  Tunnel1/0/0
      Fec                 :  3.3.3.9/32
      TunnelTableIndex    :  0x0     
      Nexthop             :  100.1.1.2
      In-Label            :  NULL
      Out-Label           :  13312
      In-Interface        :  ----------
      Out-Interface       :  Pos1/0/0
      LspIndex            :  4096
      Token               :  0x1002002
      LsrType             :  Ingress
      Mpls-Mtu            :  1500
      TimeStamp           :  396sec
      Bfd-State           :  ---
    Run the display mpls te tunnel-interface command on the PEs. You can view detailed information about the tunnel. Take the display on PE1 as an example:
    <PE1> display mpls te tunnel-interface
        Tunnel Name       :  Tunnel1/0/0
        Tunnel State Desc :  CR-LSP is Up
        Tunnel Attributes   :
        Session ID          :  100
        Ingress LSR ID      :  1.1.1.9               Egress LSR ID:  3.3.3.9
        Admin State         :  UP                    Oper State   :  UP
        Signaling Protocol  :  RSVP
        Tie-Breaking Policy :  None                  Metric Type  :  None
        Car Policy          :  Disabled              Bfd Cap      :  None
        BypassBW Flag       :  Not Supported
        BypassBW Type       :  -                     Bypass BW    :  -
        Retry Limit         :  5                     Retry Int    :  2 sec
        Reopt               :  Disabled              Reopt Freq   :  -
        Auto BW             :  Disabled
        Current Collected BW:  -                     Auto BW Freq :  -
        Min BW              :  -                     Max BW       :  -
        Tunnel  Group       :  Primary
        Interfaces Protected:  -
        Excluded IP Address :  -
        Is On Radix-Tree    :  Yes                   Referred LSP Count:  0
        Primary Tunnel      :  -                     Pri Tunn Sum :  -
        Backup Tunnel       :  -
        Group Status        :  Up                    Oam Status   :  Up
        IPTN InLabel        :  -
        BackUp Type         :  None                  BestEffort   :  Disabled
        Secondary HopLimit  :  -
        BestEffort HopLimit :  -
        Secondary Explicit Path Name:  -
        Secondary Affinity Prop/Mask:  0x0/0x0
        BestEffort Affinity Prop/Mask:  0x0/0x0
    
        Primary LSP ID      :  1.1.1.9:1
        Setup Priority      :  7                     Hold Priority:  7
        Affinity Prop/Mask  :  0x0/0x0               Resv Style   :  SE
        CT0 Bandwidth(Kbit/sec) :  20000             CT1 Bandwidth(Kbit/sec) :  0
        CT2 Bandwidth(Kbit/sec) :  0                 CT3 Bandwidth(Kbit/sec) :  0
        CT4 Bandwidth(Kbit/sec) :  0                 CT5 Bandwidth(Kbit/sec) :  0
        CT6 Bandwidth(Kbit/sec) :  0                 CT7 Bandwidth(Kbit/sec) :  0
        Actual Bandwidth(kbps):  -
        Explicit Path Name  :  -                     Hop Limit    :  -
        Record Route        :  Disabled              Record Label :  Disabled
        Route Pinning       :  Disabled
        FRR Flag            :  Disabled
        IdleTime Remain     :  -
    
    CE1 and CE2 can ping each other.
    <CE1> ping 10.1.1.2
      PING 10.1.1.2: 56  data bytes, press CTRL_C to break
        Reply from 10.1.1.2: bytes=56 Sequence=1 ttl=255 time=125 ms
        Reply from 10.1.1.2: bytes=56 Sequence=2 ttl=255 time=125 ms
        Reply from 10.1.1.2: bytes=56 Sequence=3 ttl=255 time=94 ms
        Reply from 10.1.1.2: bytes=56 Sequence=4 ttl=255 time=125 ms
        Reply from 10.1.1.2: bytes=56 Sequence=5 ttl=255 time=125 ms
    
    
      --- 10.1.1.2 ping statistics ---
        5 packet(s) transmitted
        5 packet(s) received
        0.00% packet loss
        round-trip min/avg/max = 94/118/125 ms

Configuration Files

  • Configuration file of CE1
    #
     sysname CE1
    #
    interface Pos1/0/0
     link-protocol ppp
     undo shutdown
     ip address 10.1.1.1 255.255.255.0
    #
    return
  • Configuration file of PE1
    #
     sysname PE1
    #
     mpls lsr-id 1.1.1.9
     mpls
      mpls te
      mpls rsvp-te
      mpls te cspf
    #
     mpls l2vpn
      mpls l2vpn default martini
    #
    mpls ldp
    #
     mpls ldp remote-peer 3.3.3.9
     remote-ip 3.3.3.9
    #
    interface Pos1/0/0
     link-protocol ppp
     undo shutdown
     ip address 100.1.1.1 255.255.255.0
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 50000
     mpls rsvp-te
    #
    interface Pos2/0/0
     link-protocol ppp
     undo shutdown
     mpls l2vc 3.3.3.9 100 tunnel-policy policy1
    #
    interface LoopBack1
     ip address 1.1.1.9 255.255.255.255
    #
    interface Tunnel1/0/0
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 3.3.3.9
     mpls te tunnel-id 100
     mpls te bandwidth ct0 20000
     mpls te commit
    #
    ospf 1
     opaque-capability enable
     area 0.0.0.0
      network 1.1.1.9 0.0.0.0
      network 100.1.1.0 0.0.0.255
      mpls-te enable
    #
    tunnel-policy  policy1
     tunnel select-seq  cr-lsp load-balance-number 1
    #
    return
  • Configuration file of P
    #
     sysname P
    #
    mpls lsr-id 2.2.2.9
     mpls
      mpls te
      mpls rsvp-te
    #
    interface Pos1/0/0
     link-protocol ppp
     undo shutdown
     ip address 100.1.1.2 255.255.255.0
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 50000
     mpls rsvp-te
    #
    interface Pos2/0/0
     link-protocol ppp
     undo shutdown
     ip address 100.2.1.1 255.255.255.0
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 50000
     mpls rsvp-te
    #
    interface LoopBack1
     ip address 2.2.2.9 255.255.255.255
    #
    ospf 1
     opaque-capability enable
     area 0.0.0.0
      network 2.2.2.9 0.0.0.0
      network 100.1.1.0 0.0.0.255
      network 100.2.1.0 0.0.0.255
      mpls-te enable
    #
    return
  • Configuration file of PE2
    #
     sysname PE2
    #
     mpls lsr-id 3.3.3.9
     mpls
      mpls te
      mpls rsvp-te
      mpls te cspf
    #
     mpls l2vpn
      mpls l2vpn default martini
    #
    mpls ldp
    #
     mpls ldp remote-peer 1.1.1.9
     remote-ip 1.1.1.9
    #
    interface Pos1/0/0
     link-protocol ppp
     undo shutdown
     ip address 100.2.1.2 255.255.255.0
     mpls
     mpls te
     mpls te bandwidth max-reservable-bandwidth 100000
     mpls te bandwidth bc0 50000
     mpls rsvp-te
    #
    interface Pos2/0/0
     link-protocol ppp
     undo shutdown
     mpls l2vc 1.1.1.9 100 tunnel-policy policy1
    #
    interface LoopBack1
     ip address 3.3.3.9 255.255.255.255
    #
    interface Tunnel1/0/0
     ip address unnumbered interface LoopBack1
     tunnel-protocol mpls te
     destination 1.1.1.9
     mpls te tunnel-id 101
     mpls te bandwidth ct0 20000
     mpls te commit
    #
    ospf 1
     opaque-capability enable
     area 0.0.0.0
      network 3.3.3.9 0.0.0.0
      network 100.2.1.0 0.0.0.255
      mpls-te enable
    #
    tunnel-policy  policy1
     tunnel select-seq  cr-lsp load-balance-number 1
    #
    return
  • Configuration file of CE2
    #
     sysname CE2
    #
    interface Pos1/0/0
     link-protocol ppp
     undo shutdown
     ip address 10.1.1.2 255.255.255.0
    #
    return