A queue is a collection of data packets waiting to be transmitted by a network device according to a pre-defined scheduling method. Queuing works much like the lines at banks or supermarkets, where customers are served in the order of their arrival.
Queues are used to:
- limit data rate for certain IP addresses, subnets, protocols, ports, etc.;
- limit peer-to-peer traffic;
- packet prioritization;
- configure traffic bursts for traffic acceleration;
- apply different time-based limits;
- share available traffic among users equally, or depending on the load of the channel.
Queue implementation in MikroTik RouterOS is based on Hierarchical Token Bucket (HTB). HTB allows you to create a hierarchical queue structure and determine relations between queues. These hierarchical structures can be attached at two different places; the Packet Flow diagram illustrates both the input and postrouting chains.
There are two ways to configure queues in RouterOS:
- /queue simple menu - designed to ease configuration of simple, everyday queuing tasks (such as single-client upload/download limitation, p2p traffic limitation, etc.);
- /queue tree menu - for implementing advanced queuing tasks (such as a global prioritization policy or user group limitations). Requires marked packet flows from the /ip firewall mangle facility.
Rate limitation principles
Rate limiting is used to control the rate of traffic sent or received on a network interface. Traffic whose rate is less than or equal to the specified rate is sent, whereas traffic that exceeds the rate is dropped or delayed.
Rate limiting can be performed in two ways:
- discard all packets that exceed the rate limit – rate limiting (dropper or shaper) (100% rate limiting when queue-size=0)
- delay packets that exceed the specified rate limit in a queue and transmit them when possible – rate equalizing (scheduler) (100% rate equalizing when queue-size=unlimited)
The next figure explains the difference between rate limiting and rate equalizing:
As you can see, in the first case all traffic that exceeds the specified rate is dropped. In the second case, traffic that exceeds the specified rate is delayed in the queue and transmitted later when possible; note, however, that a packet can be delayed only while the queue has free space. If there is no more space in the queue buffer, packets are dropped.
For each queue we can define two rate limits:
- CIR (Committed Information Rate) – (limit-at in RouterOS) the worst-case scenario: the flow will get this amount of traffic regardless of other traffic flows. At any given time, the bandwidth should not fall below this committed rate.
- MIR (Maximum Information Rate) – (max-limit in RouterOS) the best-case scenario: the maximum data rate available to the flow if any part of the bandwidth is free.
A simple queue is a plain way to limit traffic for a particular target. You can also use simple queues to build advanced QoS applications. They have useful integrated features:
- peer-to-peer traffic queuing;
- applying queue rules on chosen time intervals;
- using multiple packet marks from /ip firewall mangle;
- traffic shaping (scheduling) of bidirectional traffic (one limit for the total of upload + download).
Simple queues have a strict order - each packet must go through every queue until it reaches a queue whose conditions match the packet's parameters, or until the end of the queue list is reached. For example, in the case of 1000 queues, a packet matching the last queue must be checked against the preceding 999 queues before it reaches its destination.
In the following example we have one SOHO device with two connected devices, a PC and a server.
In this case we have a 15 Mbps connection available from the ISP. We want to be sure the server receives enough traffic, so we will configure a simple queue with the limit-at parameter to guarantee the server 5 Mbps:
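A minimal sketch of such a queue; the server address 192.168.88.2 and the queue name are assumptions, adjust them to your addressing:

```
# Assumed: the server is reachable at 192.168.88.2 on the LAN
/queue simple
add name=server-guarantee target=192.168.88.2/32 limit-at=5M/5M max-limit=15M/15M
```

Here limit-at=5M/5M is the committed rate (CIR) for upload/download, and max-limit=15M/15M caps the flow at the full link speed (MIR).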
That is all. The server will get 5 Mbps of traffic regardless of other traffic flows. If you are using the default configuration, be sure the FastTrack rule is disabled for this particular traffic; otherwise it will bypass Simple Queues and they will not work.
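One way to verify this on the default firewall configuration (an illustrative sketch, not the only approach) is to disable the default fasttrack rule entirely:

```
# Disables every fasttrack rule in the filter table (default config has one)
/ip firewall filter disable [find where action=fasttrack-connection]
```

Note that disabling fasttracking for all traffic increases CPU load; a more targeted approach is to adjust the fasttrack rule so it simply does not match the traffic you want queued.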
Queue tree creates only a one-directional queue in one of the HTBs. It is also the only way to add a queue on a separate interface. This can simplify mangle configuration - you don't need separate marks for download and upload, since only upload will reach the public interface and only download will reach the private interface. The main difference from Simple Queues is that Queue Tree is not ordered - all traffic passes through it together.
In the following example we will mark all packets coming from the preconfigured in-interface-list=LAN and will limit the traffic with a queue tree based on these packet marks.
Let's create a firewall address-list:
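A sketch of such a list; the list name local and the 192.168.88.0/24 subnet are assumptions to be matched to your LAN:

```
# Assumed LAN subnet; add one entry per local network
/ip firewall address-list
add address=192.168.88.0/24 list=local
```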
Mark packets with firewall mangle facility:
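A possible pair of mangle rules, assuming an address list named local that covers the LAN subnet; the packet-mark names are illustrative:

```
# Upload: traffic sourced from the LAN, marked before routing
# Download: traffic destined to the LAN, marked after routing
/ip firewall mangle
add chain=prerouting src-address-list=local action=mark-packet new-packet-mark=client-upload passthrough=no
add chain=postrouting dst-address-list=local action=mark-packet new-packet-mark=client-download passthrough=no
```

With passthrough=no a packet stops traversing the mangle chain once it is marked, which saves some processing.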
Configure the queue tree based on previously marked packets:
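A sketch of a matching queue tree; the parent interfaces (ether1 toward the ISP, bridge toward the LAN), the packet-mark names, and the rate limits are assumptions to be matched to your mangle rules:

```
# Download is shaped on the LAN-facing interface, upload on the WAN-facing one
/queue tree
add name=download parent=bridge packet-mark=client-download max-limit=10M
add name=upload parent=ether1 packet-mark=client-upload max-limit=5M
```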
Check Queue tree stats to be sure traffic is matched:
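One way to check, shown as a sketch:

```
# Growing bytes/packets counters mean the packet marks are matching traffic
/queue tree print stats
```

If the counters stay at zero while traffic is flowing, the packet marks are not being applied; re-check the mangle rules first.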
This sub-menu lists the queue types created by default and allows adding new user-specific ones.
By default, RouterOS creates the following pre-defined queue types:
All RouterBOARDs have the default queue type "only-hardware-queue" with "kind=none". "only-hardware-queue" leaves the interface with only the hardware transmit descriptor ring buffer, which acts as a queue in itself. Usually, at least 100 packets can be queued for transmission in the transmit descriptor ring buffer. The ring buffer size and the number of packets that can be queued in it vary for different types of ethernet MACs. Having no software queue is especially beneficial on SMP systems because it removes the requirement to synchronize access to the queue from different CPUs/cores, which is resource-intensive. The "only-hardware-queue" setting requires support in the ethernet driver, so it is available only for some ethernet interfaces, mostly those found on RouterBOARDs.
"multi-queue-ethernet-default" can be beneficial on SMP systems with ethernet interfaces that support multiple transmit queues and have Linux driver support for them. By having one software queue for each hardware queue, less time may be spent on synchronizing access to them.
The improvement from only-hardware-queue and multi-queue-ethernet-default is present only when there is no "/queue tree" entry with the particular interface as a parent.
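A sketch of assigning such a queue type to an interface; ether1 here is an assumption:

```
# Switch ether1 to one software queue per hardware transmit queue
/queue interface set ether1 queue=multi-queue-ethernet-default
```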
Queue kinds are packet processing algorithms. The kind determines which packet will be transmitted next. RouterOS supports the following queue kinds:
- FIFO (BFIFO, PFIFO, MQ PFIFO)
These kinds are based on the FIFO (First-In-First-Out) algorithm. The difference between PFIFO and BFIFO is that one is measured in packets and the other in bytes. These queues use the pfifo-limit and bfifo-limit parameters.
Every packet that cannot be enqueued (because the queue is full) is dropped. Large queue sizes can increase latency but utilize the channel better.
MQ-PFIFO is PFIFO with support for multiple transmit queues. This queue is beneficial on SMP systems with ethernet interfaces that support multiple transmit queues and have Linux driver support for them. This kind uses the mq-pfifo-limit parameter.
Random Early Drop (RED) is a queuing mechanism that tries to avoid network congestion by controlling the average queue size. The average queue size (avgq) is compared to two thresholds: a minimum threshold (minth) and a maximum threshold (maxth). If the average queue size is below the minimum threshold, no packets are dropped. When the average queue size exceeds the maximum threshold, all incoming packets are dropped. If the average queue size is between the two thresholds, packets are dropped randomly with probability Pd, which is a function of the average queue size: Pd = Pmax * (avgq - minth) / (maxth - minth). As the average queue grows, the probability of dropping incoming packets grows too. Pmax is a ratio that adjusts how abruptly the discard probability rises (in the simplest case, Pmax can equal 1). Diagram 8.2 shows the packet drop probability in the RED algorithm.
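As a worked example with assumed values minth = 10 packets, maxth = 30 packets, and Pmax = 0.2: at an average queue size avgq = 20 the drop probability is Pd = 0.2 * (20 - 10) / (30 - 10) = 0.1, so on average one incoming packet in ten is dropped. At avgq = 30 the probability reaches Pmax = 0.2, and beyond that threshold every incoming packet is dropped.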
Stochastic Fairness Queuing (SFQ) is ensured by hashing and round-robin algorithms. SFQ is called "stochastic" because it does not really allocate a queue for each flow; instead, an algorithm divides traffic over a limited number of queues (1024) using hashing.
A traffic flow may be uniquely identified by four parameters (src-address, dst-address, src-port, and dst-port), so these parameters are used by the SFQ hashing algorithm to classify packets into one of 1024 possible sub-streams. A round-robin algorithm then distributes the available bandwidth among all sub-streams, giving sfq-allot bytes of traffic on each round. The whole SFQ queue can contain 128 packets, and there are 1024 sub-streams available. Diagram 8.3 shows the SFQ operation:
The PCQ algorithm is very simple: it first uses the selected classifiers to distinguish one sub-stream from another, then applies an individual FIFO queue size and limitation to every sub-stream, and finally groups all sub-streams together and applies a global queue size and limitation.
- pcq-classifier (dst-address | dst-port | src-address | src-port; default: "") : selection of sub-stream identifiers
- pcq-rate (number) : maximal available data rate of each sub-stream
- pcq-limit (number) : queue size of single sub-stream (in KiB)
- pcq-total-limit (number) : maximum amount of queued data in all sub-streams (in KiB)
It is possible to assign a speed limitation to sub-streams with the pcq-rate option. If pcq-rate=0, sub-streams divide the available traffic equally.
For example, instead of having 100 queues with a 1000kbps download limitation each, we can have one PCQ queue with 100 sub-streams.
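A sketch of such a setup; the queue-type name, the target subnet, and the limits are assumptions (pcq-upload-default is one of the pre-defined queue types):

```
# One PCQ type limits every destination address to 1M download
/queue type
add name=pcq-down-1M kind=pcq pcq-rate=1M pcq-classifier=dst-address
# A single simple queue applies it to the whole subnet
/queue simple
add name=clients target=192.168.88.0/24 queue=pcq-upload-default/pcq-down-1M max-limit=100M/100M
```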
PCQ has burst implementation identical to Simple Queues and Queue Tree:
- pcq-burst-rate (number) : maximal upload/download data rate which can be reached while burst is allowed for a sub-stream
- pcq-burst-threshold (number) : the value of the burst on/off switch
- pcq-burst-time (time) : the period of time (in seconds) over which the average data rate is calculated (this is NOT the time of the actual burst)
PCQ also allows using IPv4 and IPv6 networks of different sizes as sub-stream identifiers; previously it was locked to a single IP address. This was done mainly for IPv6, since from an ISP's point of view a customer is represented by a /64 network, while individual devices in the customer's network are /128s. PCQ can be used for both of these scenarios and more. PCQ parameters:
- pcq-dst-address-mask (number) : the size of IPv4 network that will be used as a dst-address sub-stream identifier
- pcq-src-address-mask (number) : the size of IPv4 network that will be used as an src-address sub-stream identifier
- pcq-dst-address6-mask (number) : the size of the IPv6 network that will be used as a dst-address sub-stream identifier
- pcq-src-address6-mask (number) : the size of the IPv6 network that will be used as an src-address sub-stream identifier
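A sketch of per-customer IPv6 shaping using these parameters; the type name and rate are illustrative:

```
# Group download traffic by destination /64, i.e. one sub-stream per customer
/queue type
add name=pcq-down-per-customer kind=pcq pcq-classifier=dst-address pcq-dst-address6-mask=64 pcq-rate=10M
```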
Before data is sent over an interface, it is processed by the interface queue. This sub-menu lists all available interfaces in RouterOS and allows changing the queue type for a particular interface. The list is generated automatically.
[admin@MikroTik] > queue interface print
Columns: INTERFACE, QUEUE, ACTIVE-QUEUE
 #  INTERFACE     QUEUE                ACTIVE-QUEUE
 0  ether1        only-hardware-queue  only-hardware-queue
 1  ether2        only-hardware-queue  only-hardware-queue
 2  ether3        only-hardware-queue  only-hardware-queue
 3  ether4        only-hardware-queue  only-hardware-queue
 4  ether5        only-hardware-queue  only-hardware-queue
 5  ether6        only-hardware-queue  only-hardware-queue
 6  ether7        only-hardware-queue  only-hardware-queue
 7  ether8        only-hardware-queue  only-hardware-queue
 8  ether9        only-hardware-queue  only-hardware-queue
 9  ether10       only-hardware-queue  only-hardware-queue
10  sfp-sfpplus1  only-hardware-queue  only-hardware-queue
11  wlan1         wireless-default     wireless-default
12  wlan2         wireless-default     wireless-default