Hierarchical Quality of Service (HQoS)

Definition

Hierarchical Quality of Service (HQoS) is a technology that guarantees bandwidth for the multiple services of many users in the Differentiated Services (DiffServ) model through a queue scheduling mechanism.
DiffServ is a class-based QoS technology that provides different services for different service flows. In the DiffServ scheme, service flows therefore need to be classified according to service requirements first. Traffic classification is the prerequisite and basis of HQoS.
Family HQoS classifies services with specified characteristics into the same family. These services enter one subscriber queue (SQ). Currently, family members can be identified by NONE, C-VLAN, P-VLAN, P+C VLAN, and Option 82. Subscribers going online from different sub-interfaces of the same interface can also be classified into the same family.
A leased line user is an enterprise that leases an entire interface (or several interfaces) from the operator. All subscribers from this enterprise are charged, scheduled, and managed in a unified manner.

Purpose

Along with the emergence of new applications on IP networks, new requirements are placed on the QoS of IP networks. For example, real-time services such as Voice over IP (VoIP) demand a short delay; a long packet transmission delay is unacceptable. Email and File Transfer Protocol (FTP) services, by contrast, are comparatively insensitive to delay. To support services with different requirements, such as voice, video, and data, the network must distinguish between these services before providing the corresponding quality of service for each of them. QoS technology was therefore introduced. With the rapid development of network equipment, the capacity of a single interface has increased, along with the number of users accessing it. HQoS is needed because traditional QoS runs into new problems in these scenarios:
  • Traditional traffic management schedules traffic based on the bandwidth of interfaces. As a result, it is sensitive to the class of service rather than to individual users, and it fits traffic at the network core but not traffic at the service access side.
  • Traditional traffic management has great difficulty controlling multiple services of many users simultaneously.
To solve these problems and provide better QoS, a technology is needed that can schedule queues based on service priorities and also control the traffic of individual users. Combined with the DiffServ scheme, HQoS adopts five levels of scheduling. HQoS enables the equipment to apply policies for controlling internal resources using its existing hardware. It can both provide quality assurance for premium users and reduce the total cost of network construction.
Family HQoS implements unified traffic scheduling of all services belonging to the same family.
Leased line HQoS implements unified traffic scheduling of all services belonging to the same enterprise.

Principles of QoS Queues

A queue is the form in which packets are stored during the forwarding process. When the traffic rate exceeds the bandwidth of an interface or the bandwidth set for the traffic, packets are buffered in queues. Scheduling policies determine when and in what order packets leave their queues, and how packets in different queues are scheduled relative to one another. The QoS queue structure is divided into an upstream queue structure and a downstream queue structure, both of which include flow queues, common queues, and port queues.

Figure 1 Upstream QoS queue scheduling structure

Figure 2 Downstream QoS queue scheduling structure


Queue Scheduling Technology

The queue scheduling mechanism is a very important technology in QoS. When congestion occurs, a proper queue scheduling mechanism can provide packets of a certain type with appropriate QoS characteristics such as bandwidth, delay, and jitter. The queue scheduling mechanism takes effect only when congestion occurs. Commonly used queue scheduling technologies include Weighted Fair Queuing (WFQ), Weighted Round Robin (WRR), and Priority Queuing (PQ).

RR

RR, short for Round Robin, is a simple scheduling method through which multiple queues can be scheduled in turn.
Figure 3  Schematic diagram of RR

RR schedules multiple queues in ring mode. If the queue currently being served is not empty, the scheduler takes a certain amount of packets from it. If the queue is empty, the scheduler skips it without waiting.
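
The following is a minimal Python sketch of one RR round (not vendor code; the queue names, packet sizes, and the per-queue byte budget are illustrative assumptions): each non-empty queue is visited in ring order and roughly a fixed number of bytes is dequeued from it, while empty queues are skipped without waiting.

```python
from collections import deque

def rr_schedule(queues, quantum_bytes=1500):
    """One round-robin round: visit each queue in ring order and dequeue
    roughly quantum_bytes of packets from it; skip empty queues."""
    sent = []
    for name, q in queues.items():
        if not q:                      # empty queue: skip, do not wait
            continue
        budget = quantum_bytes
        while q and budget > 0:
            pkt_name, pkt_size = q.popleft()
            budget -= pkt_size
            sent.append((name, pkt_name))
    return sent

# Illustrative queues holding (packet, size-in-bytes) tuples.
queues = {
    "q1": deque([("p1", 800), ("p2", 700)]),
    "q2": deque(),                     # empty, so it is skipped
    "q3": deque([("p3", 1200)]),
}
print(rr_schedule(queues))             # [('q1', 'p1'), ('q1', 'p2'), ('q3', 'p3')]
```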

WRR

Compared with RR, WRR allows a weight to be set for each queue. During WRR scheduling, the service time that a queue obtains is in direct proportion to its weight.
WRR allocates each queue a different period of service time based on its weight. Each time a queue is scheduled, packets are sent from the queue for the allocated period of time, as shown in Figure 4.
Figure 4 Schematic diagram of WRR

During WRR scheduling, empty queues are skipped. Therefore, when a queue carries only a small volume of traffic, its remaining bandwidth is shared among the other queues in proportion to their weights.
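
As a rough illustration (a simplified Python sketch, not device code; the queue names, weights, and the base quantum are assumptions), one WRR round can be modeled by giving each queue a byte budget proportional to its weight:

```python
from collections import deque

def wrr_schedule(queues, weights, base_quantum=500):
    """One WRR round: each queue receives a service share (a byte budget)
    proportional to its weight; empty queues are skipped."""
    sent = []
    for name, q in queues.items():
        budget = weights[name] * base_quantum   # service share is proportional to weight
        while q and budget > 0:
            pkt_name, pkt_size = q.popleft()
            budget -= pkt_size
            sent.append((name, pkt_name))
    return sent

# Illustrative queues and weights: "voice" gets three times the share of "data".
queues = {
    "voice": deque([("v1", 200), ("v2", 200)]),
    "data":  deque([("d1", 1500), ("d2", 1500)]),
}
print(wrr_schedule(queues, {"voice": 3, "data": 1}))
```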

WFQ+SP

SP (Strict Priority) scheduling is based on absolute priority: a lower-priority queue is served only when all higher-priority queues are empty.
WFQ assigns bandwidth to the queues taking part in the scheduling according to their weights, which is why the combined technology is called WFQ+SP. In WFQ, unused bandwidth is reassigned to the other queues, so WFQ can guarantee each queue a minimum bandwidth according to its weight.
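
Below is a minimal Python sketch of the combination (a simplification under assumed queue names, weights, and a per-round byte budget, not the actual hardware algorithm): the strict-priority queues are drained first, and the budget that remains is split among the WFQ queues in proportion to their weights.

```python
from collections import deque

def sp_wfq_schedule(sp_queues, wfq_queues, wfq_weights, link_budget=3000):
    """Serve strict-priority queues first; the remaining byte budget is
    shared among the WFQ queues in proportion to their weights."""
    sent = []
    # Stage 1: strict priority -- drain the SP queues before anything else.
    for name, q in sp_queues.items():
        while q and link_budget > 0:
            pkt_name, pkt_size = q.popleft()
            link_budget -= pkt_size
            sent.append((name, pkt_name))
    # Stage 2: WFQ -- split whatever budget is left by weight.
    total_weight = sum(wfq_weights.values())
    for name, q in wfq_queues.items():
        budget = link_budget * wfq_weights[name] // total_weight
        while q and budget > 0:
            pkt_name, pkt_size = q.popleft()
            budget -= pkt_size
            sent.append((name, pkt_name))
    return sent

sp_queues  = {"ef": deque([("voice1", 200)])}
wfq_queues = {"af1": deque([("web1", 1500)]), "be": deque([("bulk1", 1500)])}
print(sp_wfq_schedule(sp_queues, wfq_queues, {"af1": 3, "be": 1}))
```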

Comparison Between the Scheduling Technologies

Scheduling Algorithm | Complexity | Delay/Jitter | Fairness
RR | Easy to implement | In low-speed scheduling, delay and jitter intensify | Packets in the queues are scheduled based on packet lengths
WRR | Easy to implement | In low-speed scheduling, delay and jitter intensify | Packets in the queues are scheduled based on packet lengths
WFQ | Complex to implement | Delay is controlled properly and jitter is low | Queues are scheduled fairly at the granularity of bytes


QoS Queue Scheduling

HQoS ensures the bandwidth of multiple services of many users through five levels of QoS queue scheduling. HQoS scheduling is classified into upstream HQoS scheduling and downstream HQoS scheduling.
  • The upstream QoS scheduling is classified into five levels: FQ -> SQ -> GQ -> Target Blade (TB) -> CQ.
  • The downstream QoS scheduling is classified into five levels: FQ -> SQ -> GQ -> CQ -> Port Queue (PQ).
QoS queue scheduling includes FQ scheduling, common queue scheduling, and port queue scheduling.
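
To make the hierarchy easier to picture, here is a toy Python sketch (purely illustrative: the names such as "GQ-familyA" and "SQ-user1", and the nested-dictionary containment, are assumptions rather than the device's data model). It lists the five scheduling levels in order as given above, and shows the lower-level relationships described later in this section: FQs roll up into an SQ, and an SQ may belong to a GQ.

```python
# The five upstream and downstream scheduling levels, in order (taken from
# the text above), plus a toy containment picture for the lower levels.
UPSTREAM_LEVELS = ["FQ", "SQ", "GQ", "TB", "CQ"]
DOWNSTREAM_LEVELS = ["FQ", "SQ", "GQ", "CQ", "PQ"]

hierarchy = {
    "GQ-familyA": {                      # a group of subscribers (e.g. one family)
        "SQ-user1": ["FQ-BE", "FQ-AF1", "FQ-EF"],
        "SQ-user2": ["FQ-BE", "FQ-EF"],
    },
}

for gq, sqs in hierarchy.items():
    for sq, fqs in sqs.items():
        print(f"{sq} ({len(fqs)} FQs shown) belongs to {gq}; "
              f"upstream path: {' -> '.join(UPSTREAM_LEVELS)}")
```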

FQ Scheduling

Flow queue scheduling includes GQ traffic shaping, SQ scheduling, FQ scheduling, and FQ traffic shaping.
GQ does not support a scheduling algorithm for guaranteeing queue bandwidth. Instead, GQ only adopts traffic shaping to limit bandwidth by setting the rate of the traffic shaper. Traffic shaping uses a single token bucket and places tokens into the bucket at the rate of the traffic shaper. When SQ scheduling is complete, GQ scheduling is performed if the SQ belongs to a GQ; otherwise, GQ scheduling is not required.
An SQ corresponds to a user. SQ scheduling ensures the user's CIR (committed information rate) and PIR (peak information rate) bandwidth.
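
As a conceptual illustration only (a hedged sketch, not the device's actual SQ mechanism; the thresholds, labels, and example figures are assumptions), the CIR/PIR semantics can be expressed as a simple rate check: traffic up to the CIR is guaranteed, traffic between the CIR and PIR may be sent when spare bandwidth exists, and traffic beyond the PIR is held back.

```python
def classify_sq_rate(rate_bps, cir_bps, pir_bps):
    """Classify a subscriber queue's current sending rate against its
    CIR (guaranteed bandwidth) and PIR (maximum bandwidth)."""
    if rate_bps <= cir_bps:
        return "guaranteed"            # within the committed rate
    if rate_bps <= pir_bps:
        return "excess, best effort"   # may use spare bandwidth up to the peak rate
    return "above PIR, shaped"         # must be buffered or delayed

# Illustrative figures: a user with CIR 2 Mbit/s and PIR 10 Mbit/s.
print(classify_sq_rate(1_500_000, 2_000_000, 10_000_000))   # guaranteed
print(classify_sq_rate(6_000_000, 2_000_000, 10_000_000))   # excess, best effort
```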
FQ scheduling adopts the SP, WFQ, and SPL algorithms. Each SQ corresponds to eight FQs, and the eight FQs share the bandwidth of the SQ. Each FQ can define its own PIR. Figure 5 shows the hierarchical structure of FQ scheduling.
Figure 5 Hierarchical structure of the FQ scheduling

Traffic shaping is implemented on each FQ to limit the traffic rate. That is, the PIR is configured for each FQ. Traffic shaping is implemented through a single token bucket.
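
Since the text above describes FQ (and GQ) shaping as a single token bucket filled at the shaper rate, here is a minimal Python sketch of that idea (the class name, bucket depth, and rates are illustrative assumptions, not the actual implementation):

```python
import time

class SingleTokenBucket:
    """A minimal single-token-bucket shaper: tokens accumulate at the shaper
    rate (here, the PIR) up to the bucket depth; a packet conforms only if
    enough tokens are available, otherwise it must be buffered or delayed."""

    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0          # token fill rate, in bytes per second
        self.capacity = bucket_bytes        # bucket depth, in bytes
        self.tokens = bucket_bytes
        self.last = time.monotonic()

    def conform(self, packet_bytes):
        """Return True if the packet conforms to the configured shaping rate."""
        now = time.monotonic()
        # Add tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False                        # non-conforming: hold the packet back

# Illustrative: shape one FQ to a PIR of 1 Mbit/s with a 10 KB bucket.
fq_shaper = SingleTokenBucket(rate_bps=1_000_000, bucket_bytes=10_000)
print(fq_shaper.conform(1500))              # True while tokens remain
```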

Common Queue Scheduling

After the WFQ scheduling, the packets of common queues enter a CQ.

Port Queue Scheduling

To ensure the allocation of available bandwidth, the upstream port queue scheduling is classified into the following levels:
  • RR scheduling for port queues with the same CoS
  • WFQ scheduling for multicast traffic of the same CoS and unicast traffic of the same CoS
  • SP+WFQ scheduling for traffic of different CoSs
The downstream port queue scheduling is classified into the following levels:
  • WFQ scheduling for traffic of different CoSs on an interface
  • WRR scheduling for traffic between interfaces
  • CQ scheduling and port queue scheduling

Principles of Queue Mapping

Queue mapping is used to map the eight priority queues (BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7) of an FQ at the first level to the eight priority queues (BE, AF1, AF2, AF3, AF4, EF, CS6, and CS7) of a CQ at the fifth level. Through the queue mapping from an FQ to a CQ, the traffic of an FQ can flexibly enter a queue of a CQ.
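
A simple way to picture this mapping is a lookup table from FQ priority to CQ priority. The Python sketch below is purely illustrative (the default one-to-one mapping and the example remapping are assumptions, not a device configuration):

```python
# The eight priorities used by both FQs and CQs, as listed above.
PRIORITIES = ["BE", "AF1", "AF2", "AF3", "AF4", "EF", "CS6", "CS7"]

# Start from a one-to-one mapping, then remap one priority as an example,
# so that AF4 traffic from the FQ enters the EF queue of the CQ.
fq_to_cq_map = {p: p for p in PRIORITIES}
fq_to_cq_map["AF4"] = "EF"

def map_fq_to_cq(fq_priority):
    """Return the CQ queue that traffic from the given FQ priority enters."""
    return fq_to_cq_map[fq_priority]

print(map_fq_to_cq("AF4"))   # -> EF
print(map_fq_to_cq("BE"))    # -> BE
```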

Next, we will talk about MPLS HQoS and how MPLS providers implement HQoS.
