Layer 3 Hardware Offloading (L3HW, also known as IP switching or HW routing) allows offloading some router features onto the switch chip. This makes it possible to route packets at wire speed, which simply would not be possible with the CPU.
To enable Layer 3 Hardware Offloading, set l3-hw-offloading=yes for the switch:
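A minimal sketch (the switch chip is typically named switch1 on CRS3xx devices; the name may differ on your device):

```
# enable L3HW on the switch chip
/interface/ethernet/switch set switch1 l3-hw-offloading=yes
```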
Switch Port Configuration
Layer 3 Hardware Offloading can be configured for each physical switch port. For example:
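A minimal sketch, assuming a physical switch port named ether1:

```
# enable hardware routing on a particular switch port
/interface/ethernet/switch port set ether1 l3-hw-offloading=yes
```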
Note that the l3hw settings for the switch and for its ports mean different things:
- Setting l3-hw-offloading=no for the switch completely disables offloading - all packets will be routed by the CPU.
- Setting l3-hw-offloading=no for a switch port only disables hardware routing from/to this particular port. Moreover, the port can still participate in Fasttrack connection offloading.
To enable full hardware routing, enable l3hw on all switch ports:
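For example (affects every entry under the switch port menu):

```
# enable hardware routing on every switch port
/interface/ethernet/switch port set [find] l3-hw-offloading=yes
```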
To make all packets go through the CPU first, and offload only the Fasttrack connections, disable l3hw on all ports but keep it enabled on the switch chip itself:
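A sketch of that configuration (switch name switch1 is an assumption):

```
# route everything through the CPU, offloading only Fasttrack connections
/interface/ethernet/switch port set [find] l3-hw-offloading=no
/interface/ethernet/switch set switch1 l3-hw-offloading=yes
```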
Packets get routed by the hardware only if both the source and destination ports have l3-hw-offloading=yes. If at least one of them has l3-hw-offloading=no, packets will go through the CPU/Firewall, and only the Fasttrack connections get offloaded.
The next example enables hardware routing on all ports except the upstream port (sfp-sfpplus16). Packets going to/from sfp-sfpplus16 will enter the CPU and, therefore, be subject to Firewall/NAT processing.
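The described setup can be sketched as:

```
# hardware routing everywhere except the upstream port
/interface/ethernet/switch port set [find] l3-hw-offloading=yes
/interface/ethernet/switch port set sfp-sfpplus16 l3-hw-offloading=no
```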
The existing connections may be unaffected by an l3-hw-offloading setting change.
The L3HW Settings menu has been introduced in RouterOS version 7.6.
/interface ethernet switch l3hw-settings
|autorestart (yes | no; Default: no)||Automatically restarts the l3hw driver in case of an error. Otherwise, if an error occurs, |
|fasttrack-hw (yes | no; Default: yes (if supported))||Enables or disables FastTrack HW Offloading. Keep it enabled unless HW TCAM memory reservation is required, e.g., for dynamic switch ACL rules creation. Not all switch chips support FastTrack HW Offloading (see hw-supports-fasttrack).|
|ipv6-hw (yes | no; Default: no)||Enables or disables IPv6 Hardware Offloading. Since IPv6 routes occupy a lot of HW memory, enable it only if IPv6 traffic speed is significant enough to benefit from hardware routing.|
|icmp-reply-on-error (yes | no; Default: yes)||Since the hardware cannot send ICMP messages, the packet must be redirected to the CPU to send an ICMP reply in case of an error (e.g., "Time Exceeded", "Fragmentation required", etc.). Enabling icmp-reply-on-error helps with network diagnostics but may open potential vulnerabilities for DDoS attacks. Disabling icmp-reply-on-error silently drops the packets on the hardware level in case of an error.|
|hw-supports-fasttrack (yes | no)||Indicates if the hardware (switch chip) supports FastTrack HW Offloading.|
This menu allows tweaking l3hw settings for specific use cases.
It is NOT recommended to change the advanced L3HW settings unless instructed by MikroTik Support or MikroTik Certified Routing Engineer. Applying incorrect settings may break the L3HW operation.
/interface ethernet switch l3hw-settings advanced
|route-queue-limit-high (number; Default: 256)|
The switch driver stops route indexing when route-queue-size (see #monitor) exceeds this value. Lowering this value leads to faster route processing but increases the lag between a route's appearance in RouterOS and hardware memory.
Setting route-queue-limit-high=0 disables route indexing when there are any routes in the processing queue - the most efficient CPU usage but the longest delay before hardware offloading. Useful when there are static routes only. Not recommended together with routing protocols (such as BGP or OSPF) when there are frequent routing table changes.
|route-queue-limit-low (number; Default: 0)|
Re-enable route indexing when route-queue-size drops down to this value. Must not exceed the high limit.
Setting route-queue-limit-low=0 tells the switch driver to process all pending routes before the next hw-offloading attempt. While this is the desired behavior, it may completely block the hw-offloading under a constant BGP feed.
|shwp-reset-counter (number; Default: 128)|
Reset the Shortest HW Prefix (see ipv4-shortest-hw-prefix / ipv6-shortest-hw-prefix in #monitor) and try the full route table offloading after this amount of changes in the routing table. At a partial offload, when the entire routing table does not fit into the hardware memory and shorter prefixes are redirected to the CPU, there is no need to try offloading route prefixes shorter than SHWP since those will get redirected to the CPU anyway, theoretically. However, significant changes to the routing table may lead to a different index layout and, therefore, a different amount of routes that can be hw-offloaded. That's why it is recommended to do the full table re-indexing occasionally.
Lowering this value may allow more routes to be hw-offloaded but increases CPU usage and vice-versa. Setting shwp-reset-counter=0 always does full re-indexing after each routing table change.
This setting is used only during Partial Offloading and has no effect when ipv4-shortest-hw-prefix=0 (and ipv6, respectively).
|partial-offload-chunk (number; Default: 1024, min: 16)|
The minimum number of routes for incremental adding in Partial Offloading. Depending on the switch chip model, routes are offloaded either as-is (each routing entry in RouterOS corresponds to an entry in the hardware memory) or get indexed, with the index entries being the ones written into the hardware memory. This setting is used only for the latter during Partial Offloading.
Depending on index fragmentation, a single IPv4 route addition can occupy from -3 to +6 LPM blocks of HW memory (some route addition may lower the amount of required HW memory thanks to index defragmentation). Hence, it is impossible to predict the exact number of routes that may fit in the hardware memory. The switch driver uses a binary split algorithm to find the maximum number of routes that fit in the hardware.
Let's imagine 128k routes, all of them not fitting into the hardware memory. The algorithm halves the number and tries offloading 64k routes. Let's say offloading succeeded. In the next iteration, the algorithm picks 96k, let's say it fails; then 80k - fails again, 72k - succeeds, 76k, etc. until the difference between succeeded and failed numbers drops below the partial-offload-chunk value.
Lowering the partial-offload-chunk value increases the number of hw-offloaded routes but also raises CPU usage and vice-versa.
|route-index-delay-min (time; Default: 1s)|
The minimum delay between route processing and its offloading. The delay allows processing more routes together and offloading them at once, saving CPU usage. It also makes offloading the entire routing table faster by reducing the per-route processing work. On the other hand, it slows down offloading of an individual route.
If an additional route is received during the delay, the latter resets to the route-index-delay-min value. Adding more and more routes within the delay keeps resetting the timer until the route-index-delay-max is reached.
|route-index-delay-max (time; Default: 10s)|
The maximum delay between route processing and its offloading. When the maximum delay is reached, the processed routes get offloaded despite more routes pending. However, route-queue-limit-high has higher priority than this, meaning that the indexing/offloading gets paused anyway when a certain queue size is reached.
|neigh-keepalive-interval (time; Default: 15s, min: 5s)|
Neighbor (host) keepalive interval. When a host (IP neighbor) gets hw-offloaded, all traffic from/to it is routed by the switch chip, and RouterOS may think the neighbor is inactive and delete it. To prevent that, the switch driver must keep the offloaded neighbors alive by sending periodical refreshes to RouterOS.
|neigh-discovery-interval (time; Default: 1m37s, min: 30s)|
Unfortunately, switch chips do not provide per-neighbor stats. Hence, the only way to check if the offloaded host is still active is by sending occasional ARP (IPv4) / Neighbor Discovery (IPv6) requests to the connected network. Increasing the value lowers the broadcast traffic but may leave inactive hosts in hardware memory for longer.
Neighbor discovery is triggered within the neighbor keepalive work. Hence, the discovery time is rounded up to the next keepalive session. Choose a value for neigh-discovery-interval that is not divisible by neigh-keepalive-interval to spread ARP/ND requests across different sessions, preventing broadcast bursts.
|neigh-discovery-burst-limit (number; Default: 64)|
The maximum number of ARP/ND requests that can be sent at once.
|neigh-discovery-burst-delay (time; Default: 300ms, min: 10ms)|
The delay between ARP/ND subsequent bursts if the number of requests exceeds neigh-discovery-burst-limit.
Some settings only apply to certain switch models.
The L3HW Monitor feature has been introduced in RouterOS version 7.10. It allows monitoring of switch chip and driver stats related to L3HW.
|ipv4-routes-total||The total number of IPv4 routes handled by the switch driver.|
|ipv4-routes-hw||The number of hardware-offloaded IPv4 routes (a.k.a. hardware routes)|
|ipv4-routes-cpu||The number of IPv4 routes redirected to the CPU (a.k.a. software routes)|
|ipv4-shortest-hw-prefix||Shortest Hardware Prefix (SHWP) for IPv4. If the entire IPv4 routing table does not fit into the hardware memory, partial offloading is applied, where the longest prefixes are hw-offloaded while the shorter ones are redirected to the CPU. This field shows the shortest route prefix (/x) that is offloaded to the hardware memory. All prefixes shorter than this are processed by the CPU.|
|ipv4-hosts||The number of hardware-offloaded IPv4 hosts (/32 routes)|
|ipv6-routes-total 1||The total number of IPv6 routes handled by the switch driver.|
|ipv6-routes-hw 1||The number of hardware-offloaded IPv6 routes (a.k.a. hardware routes)|
|ipv6-routes-cpu 1||The number of IPv6 routes redirected to the CPU (a.k.a. software routes)|
|ipv6-shortest-hw-prefix 1||Shortest Hardware Prefix (SHWP) for IPv6. If the entire IPv6 routing table does not fit into the hardware memory, partial offloading is applied, where the longest prefixes are hw-offloaded while the shorter ones are redirected to the CPU. This field shows the shortest route prefix (/x) that is offloaded to the hardware memory. All prefixes shorter than this are processed by the CPU.|
|ipv6-hosts 1||The number of hardware-offloaded IPv6 hosts (/128 routes)|
|route-queue-size||The number of routes in the queue for processing by the switch chip driver. Under normal working conditions, this field is 0, meaning that all routes are processed by the driver.|
|fasttrack-ipv4-conns 2||The number of hardware-offloaded FastTrack connections.|
|fasttrack-hw-min-speed 2||When the hardware memory for storing FastTrack is full, this field shows the minimum speed (in bytes per second) of a hw-offloaded FastTrack connection. Slower connections are routed by the CPU.|
1 IPv6 stats appear only when IPv6 hardware routing is enabled (ipv6-hw=yes).
2 FastTrack stats appear only when hardware offloading of FastTrack connections is enabled (fasttrack-hw=yes).
An enhanced version of Monitor with extra telemetry data for advanced users. Advanced Monitor contains all data from the basic monitor plus the fields listed below.
|route-queue-rate||The rate at which routes are added to the queue for the switch driver processing. In other words, the growth rate of route-queue-size (routes per second)|
|route-process-rate||The rate at which previously queued routes are processed by the switch driver. In other words, the shrink rate of route-queue-size (routes per second)|
|fasttrack-queue-size||The number of FastTrack connections in the queue for processing by the switch chip driver.|
|fasttrack-queue-rate||The rate at which FastTrack connections are added to the queue for the switch driver processing. In other words, the growth rate of fasttrack-queue-size (connections per second)|
|fasttrack-process-rate||The rate at which previously queued FastTrack connections are processed by the switch driver. In other words, the shrink rate of fasttrack-queue-size (connections per second)|
|fasttrack-hw-offloaded||The number of FastTrack connections offloaded to the hardware. The counter resets every second (or every monitor interval).|
|fasttrack-hw-unloaded||The number of FastTrack connections unloaded from the hardware (redirected to software routing). The counter resets every second (or every monitor interval).|
|lpm-cap||The size of the LPM hardware table (LPM = Longest Prefix Match). LPM stores route indexes for hardware routing. Not every switch chip model uses LPM. Others use TCAM.|
|lpm-usage||The number of used LPM blocks. lpm-usage / lpm-cap = usage percentage.|
|lpm-bank-cap||LPM memory is organized in banks - special memory units. The bank size depends on the switch chip model. This value shows the size of a single bank (in LPM blocks). lpm-cap / lpm-bank-cap = the number of banks (usually, 20).|
|lpm-bank-usage||Per-bank LPM usage (in LPM blocks)|
|pbr-cap||The size of the Policy-Based Routing (PBR) hardware table. PBR is used for NAT offloading of FastTrack connections.|
|pbr-usage||The number of used PBR entries. pbr-usage / pbr-cap = usage percentage.|
|pbr-lpm-bank||PBR shares LPM memory banks with routing tables. This value shows the LPM bank index shared with PBR (0 = the first bank).|
|nat-usage||The number of used NAT hardware entries (for FastTrack connections).|
It is impossible to use interface lists directly to control l3-hw-offloading because an interface list may contain virtual interfaces (such as VLAN) while the l3-hw-offloading setting must be applied to physical switch ports only. For example, if there are two VLAN interfaces (vlan20 and vlan30) running on the same switch port (trunk port), it is impossible to enable hardware routing on vlan20 but keep it disabled on vlan30.
However, an interface list may be used as a port selector. The following example demonstrates how to enable hardware routing on LAN ports (ports that belong to the "LAN" interface list) and disable it on WAN ports:
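A sketch of such a selector script (the "LAN"/"WAN" list names come from the surrounding text; the exact script and switch port names are assumptions):

```
# enable l3hw on switch ports that are members of the "LAN" list
:foreach m in=[/interface/list/member find where list="LAN"] do={
    :local ifname [/interface/list/member get $m interface]
    /interface/ethernet/switch port set [find where name=$ifname] l3-hw-offloading=yes
}
# disable l3hw on switch ports that are members of the "WAN" list
:foreach m in=[/interface/list/member find where list="WAN"] do={
    :local ifname [/interface/list/member get $m interface]
    /interface/ethernet/switch port set [find where name=$ifname] l3-hw-offloading=no
}
```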
Please take into account that since interface lists are not used directly in the hardware routing control, modifying an interface list does not automatically translate into l3hw changes. For instance, adding a switch port to the "LAN" interface list does not automatically enable l3-hw-offloading on that port. The user has to rerun the above script to apply the changes.
The hardware supports up to 8 MTU profiles, meaning that the user can set up to 8 different MTU values for interfaces: the default 1500 + seven custom ones.
Disable l3-hw-offloading while changing the MTU/L2MTU values on the interfaces.
Layer 2 Dependency
Layer 3 hardware processing lies on top of Layer 2 hardware processing. Therefore, L3HW offloading requires L2HW offloading on the underlying interfaces. The latter is enabled by default, but there are some exceptions. For example, CRS3xx devices support only one hardware bridge. If there are multiple bridges, others are processed by the CPU and are not subject to L3HW.
Another example is ACL rules. If a rule redirects traffic to the CPU for software processing, then hardware routing (L3HW) is not triggered:
To make sure that Layer 3 is in sync with Layer 2 on both the software and hardware sides, we recommend disabling L3HW while configuring Layer 2 features. The recommendation applies to the following configuration:
- adding/removing/enabling/disabling bridge;
- adding/removing switch ports to/from the bridge;
- bonding switch ports / removing bond;
- changing VLAN settings;
- changing MTU/L2MTU on switch ports;
- changing ethernet (MAC) addresses.
In short, disable l3-hw-offloading while making the Layer 2 configuration changes listed above.
MAC telnet and RoMON
There is a limitation for MAC telnet and RoMON when L3HW offloading is enabled on 98DX8xxx, 98DX4xxx or 98DX325x switch chips. Packets from these protocols are dropped and do not reach the CPU, thus access to the device will fail.
If MAC telnet or RoMON are desired in combination with L3HW, certain ACL rules can be created to force these packets to the CPU.
For example, if MAC telnet access on sfp-sfpplus1 and sfp-sfpplus2 is needed, you will need to add this ACL rule. It is possible to select even more interfaces with the ports property.
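A sketch of such a rule (the switch name switch1 is an assumption; MAC telnet runs over UDP port 20561):

```
# force MAC telnet packets arriving on these ports to the CPU
/interface/ethernet/switch rule add switch=switch1 ports=sfp-sfpplus1,sfp-sfpplus2 \
    mac-protocol=ip protocol=udp dst-port=20561 redirect-to-cpu=yes
```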
For example, if RoMON access on sfp-sfpplus2 is needed, you will need to add this ACL rule.
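A sketch of such a rule, assuming the switch name switch1 and that RoMON frames carry the MikroTik EtherType 0x88bf:

```
# force RoMON frames arriving on sfp-sfpplus2 to the CPU
/interface/ethernet/switch rule add switch=switch1 ports=sfp-sfpplus2 \
    mac-protocol=0x88bf redirect-to-cpu=yes
```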
Since L3HW depends on L2HW, and L2HW is the one that does VLAN processing, Inter-VLAN hardware routing requires a hardware bridge underneath. Even if a particular VLAN has only one tagged port member, the latter must be a bridge member. Do not assign a VLAN interface directly on a switch port! Otherwise, L3HW offloading fails and the traffic will get processed by the CPU:
/interface/vlan add interface=ether2 name=vlan20 vlan-id=20
Assign VLAN interface to the bridge instead. This way, VLAN configuration gets offloaded to the hardware, and, with L3HW enabled, the traffic is subject to inter-VLAN hardware routing.
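A minimal sketch of the correct placement (the bridge name bridge1 is an assumption):

```
# VLAN interface on the bridge - eligible for inter-VLAN hardware routing
/interface/vlan add interface=bridge1 name=vlan20 vlan-id=20
```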
L3HW MAC Address Range Limitation (DX2000/DX3000 series only)
Marvell Prestera DX2000 and DX3000 switch chips have a hardware limitation that allows configuring only the last (least significant) octet of the MAC address for each interface. The other five (most significant) octets are configured globally and, therefore, must be equal for all interfaces (switch ports, bridge, VLANs). In other words, the MAC addresses must be in the format "XX:XX:XX:XX:XX:??", where:
- "XX:XX:XX:XX:XX" part is common for all interfaces.
- "??" is a variable part.
This requirement applies only to Layer 3 (routing). Layer 2 (bridging) does not use the switch's ethernet addresses. Moreover, it does not apply to bridge ports because they use the bridge's MAC address.
The requirement for common five octets applies to:
- Standalone switch ports (not bridge members) with hardware routing enabled (l3-hw-offloading=yes).
- Bridge itself.
- VLAN interfaces (those are using bridge's MAC address by default).
Suppressing HW Offload
By default, all routes are candidates for hardware offloading. To further fine-tune which traffic to offload, each route has an option to enable/disable the suppress-hw-offload setting.
For example, if we know that the majority of traffic flows to the network where the servers are located, we can enable offloading only for that specific destination:
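A sketch of that approach (the 192.168.3.0/24 server network comes from the surrounding text):

```
# suppress hardware offloading for all routes...
/ip/route set [find] suppress-hw-offload=yes
# ...except the route to the server network
/ip/route set [find where dst-address=192.168.3.0/24] suppress-hw-offload=no
```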
Now only the route to 192.168.3.0/24 has the H flag, indicating that it will be the only one eligible to be selected for HW offloading:
The H flag does not indicate that a route is actually HW-offloaded; it only indicates that the route can be selected to be HW-offloaded.
For dynamic routing protocols like OSPF and BGP, it is possible to suppress HW offloading using routing filters. For example, to suppress HW offloading on all OSPF instance routes, use the "suppress-hw-offload yes" filter property:
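A sketch of such a filter (the chain name ospf-in and the instance selector are assumptions; adjust to your routing setup):

```
# mark all incoming OSPF routes as not eligible for HW offloading
/routing/filter/rule add chain=ospf-in rule="set suppress-hw-offload yes; accept"
# attach the filter chain to the OSPF instance
/routing/ospf/instance set [find] in-filter-chain=ospf-in
```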
Offloading Fasttrack Connections
Firewall filter rules have the hw-offload option for Fasttrack, allowing fine-tuning of connection offloading. Since the hardware memory for Fasttrack connections is very limited, we can choose what type of connections to offload and, therefore, benefit from near-the-wire-speed traffic. The next example offloads only TCP connections, while UDP packets are routed via the CPU and do not occupy HW memory:
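A sketch of such a rule set (chain layout is an assumption; adapt to your existing firewall):

```
# offload only established/related TCP connections to the hardware
/ip/firewall/filter add chain=forward action=fasttrack-connection \
    connection-state=established,related protocol=tcp hw-offload=yes
# fasttrack UDP in software only - do not occupy HW memory
/ip/firewall/filter add chain=forward action=fasttrack-connection \
    connection-state=established,related protocol=udp hw-offload=no
/ip/firewall/filter add chain=forward action=accept connection-state=established,related
```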
Stateless Hardware Firewall
While connection tracking and stateful firewalling can be performed only by the CPU, the hardware can perform stateless firewalling via switch rules (ACL). The next example prevents (on a hardware level) accessing a MySQL server from the ether1, and redirects to the CPU/Firewall packets from ether2 and ether3:
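A sketch of those switch rules (switch name switch1 is an assumption; an empty new-dst-ports drops the packet in hardware, and MySQL listens on TCP port 3306):

```
# drop MySQL traffic from ether1 at the hardware level
/interface/ethernet/switch rule add switch=switch1 ports=ether1 \
    mac-protocol=ip protocol=tcp dst-port=3306 new-dst-ports=""
# send packets from ether2 and ether3 to the CPU for firewall processing
/interface/ethernet/switch rule add switch=switch1 ports=ether2,ether3 redirect-to-cpu=yes
```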
Switch Rules (ACL) vs. Fasttrack HW Offloading
Some firewall rules may be implemented both via switch rules (ACL) and CPU Firewall Filter + Fasttrack HW Offloading. Both options grant near-the-wire-speed performance. So the question is which one to use?
First, not all devices support Fasttrack HW Offloading. And without HW offloading, Firewall Filter uses only software routing, which is dramatically slower than its hardware counterpart. Second, even if Fasttrack HW Offloading is an option, a rule of thumb is:
Always use Switch Rules (ACL), if possible.
Switch rules share the hardware memory with Fasttrack connections. However, hardware resources are allocated for each Fasttrack connection, while a single ACL rule can match multiple connections. For example, if you have a guest WiFi network connected to sfp-sfpplus1 VLAN 10 and you don't want it to access your internal network, simply create an ACL rule:
The matched packets will be dropped on the hardware level. It is much better than letting all guest packets reach the CPU for Firewall filtering.
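A sketch of such a rule (switch name switch1 and the 192.168.0.0/16 internal range are assumptions; an empty new-dst-ports drops the packet):

```
# drop guest VLAN 10 traffic destined for the internal network, in hardware
/interface/ethernet/switch rule add switch=switch1 ports=sfp-sfpplus1 vlan-id=10 \
    dst-address=192.168.0.0/16 new-dst-ports=""
```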
Of course, ACL rules cannot match everything. For instance, ACL rules cannot filter connection states: accept established, drop others. That is where Fasttrack HW Offloading comes into action - redirect the packets to the CPU by default for firewall filtering, then offload the established Fasttrack connections. However, disabling l3-hw-offloading for the entire switch port is not the only option.
Define ACL rules with redirect-to-cpu=yes instead of setting l3-hw-offloading=no on the switch port to narrow down the traffic that goes to the CPU.
Inter-VLAN Routing with Upstream Port Behind Firewall/NAT
This example demonstrates how to benefit from near-to-wire-speed inter-VLAN routing while keeping Firewall and NAT running on the upstream port. Moreover, Fasttrack connections to the upstream port get offloaded to hardware as well, boosting the traffic speed close to wire-level. Inter-VLAN traffic is fully routed by the hardware, not entering the CPU/Firewall, and, therefore, not occupying the hardware memory of Fasttrack connections.
We use the CRS317-1G-16S+ model with the following setup:
- sfp1-sfp4 - bridged ports, VLAN ID 20, untagged
- sfp5-sfp8 - bridged ports, VLAN ID 30, untagged
- sfp16 - the upstream port
- ether1 - management port
Setup interface lists for easy access:
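A sketch of the interface lists, assuming the VLAN interfaces are named vlan20/vlan30 and the bridge/VLAN setup for sfp1-sfp8 is already in place:

```
/interface/list add name=LAN
/interface/list add name=WAN
/interface/list/member add interface=vlan20 list=LAN
/interface/list/member add interface=vlan30 list=LAN
/interface/list/member add interface=sfp16 list=WAN
```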
Routing requires dedicated VLAN interfaces. For standard L2 VLAN bridging (without inter-VLAN routing), the next step can be omitted.
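A sketch of the VLAN interfaces and gateway addresses (the bridge name bridge1 and the addressing plan are assumptions):

```
# VLAN interfaces must sit on the bridge, not on individual switch ports
/interface/vlan add interface=bridge1 name=vlan20 vlan-id=20
/interface/vlan add interface=bridge1 name=vlan30 vlan-id=30
/ip/address add address=192.168.20.1/24 interface=vlan20
/ip/address add address=192.168.30.1/24 interface=vlan30
```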
Configure management and upstream ports, a basic firewall, NAT, and enable hardware offloading of Fasttrack connections:
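A condensed sketch of this step (addressing, DHCP on the upstream port, and the exact firewall policy are assumptions, not the full recommended firewall):

```
# management address on ether1
/ip/address add address=192.168.88.1/24 interface=ether1
# upstream port gets its address via DHCP
/ip/dhcp-client add interface=sfp16 disabled=no
# fasttrack established/related connections and allow them to be HW-offloaded
/ip/firewall/filter add chain=forward action=fasttrack-connection \
    connection-state=established,related hw-offload=yes
/ip/firewall/filter add chain=forward action=accept connection-state=established,related
/ip/firewall/filter add chain=forward action=drop in-interface-list=WAN
# masquerade traffic leaving via the upstream port
/ip/firewall/nat add chain=srcnat action=masquerade out-interface-list=WAN
```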
At this moment, all routing is still performed by the CPU. Enable hardware routing on the switch chip:
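A sketch of this step (switch name switch1 is an assumption; the upstream port stays on the CPU path so Firewall/NAT keep working):

```
/interface/ethernet/switch port set [find] l3-hw-offloading=yes
/interface/ethernet/switch port set sfp16 l3-hw-offloading=no
/interface/ethernet/switch set switch1 l3-hw-offloading=yes
```

With this in place: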
- Within the same VLAN (e.g., sfp1-sfp4), traffic is forwarded by the hardware on Layer 2 (L2HW).
- Inter-VLAN traffic (e.g. sfp1-sfp5) is routed by the hardware on Layer 3 (L3HW).
- Traffic from/to WAN port gets processed by the CPU/Firewall first. Then Fasttrack connections get offloaded to the hardware (Hardware-Accelerated L4 Stateful Firewall). NAT applies both on CPU- and HW-processed packets.
- Traffic to the management port is protected by the Firewall.
Below are typical user errors of configuring Layer 3 Hardware Offloading.
VLAN interface on a switch port or bond
VLAN interface must be set on the bridge due to Layer 2 Dependency. Otherwise, L3HW will not work. The correct configuration is:
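A minimal sketch of the fix (bridge name bridge1 is an assumption):

```
# correct: the VLAN interface sits on the bridge, not on a switch port
/interface/vlan add interface=bridge1 name=vlan20 vlan-id=20
```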
Not adding the bridge interface to /in/br/vlan
For Inter-VLAN routing, the bridge interface itself needs to be added to the tagged members of the given VLANs. In the next example, Inter-VLAN routing works between VLAN 10 and 11, but packets are NOT routed to VLAN 20.
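A sketch of such a configuration (bridge and port names are assumptions): VLANs 10 and 11 include bridge1 as a tagged member, while VLAN 20 does not, so VLAN 20 stays L2-only:

```
/interface/bridge/vlan
add bridge=bridge1 tagged=bridge1 untagged=ether2 vlan-ids=10
add bridge=bridge1 tagged=bridge1 untagged=ether3 vlan-ids=11
add bridge=bridge1 untagged=ether4 vlan-ids=20
```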
The above example does not always mean an error. Sometimes, you may want the device to act as a simple L2 switch in some/all VLANs. Just make sure you set such behavior on purpose, not due to a mistake.
Creating multiple bridges
The devices support only one hardware bridge. If there are multiple bridges created, only one gets hardware offloading. While for L2 that means software forwarding for other bridges, in the case of L3HW, multiple bridges may lead to undefined behavior.
Instead of creating multiple bridges, create one and segregate L2 networks with VLAN filtering.
Using ports that do not belong to the switch
Some devices have two switch chips or the management port directly connected to the CPU. For example, CRS312-4C+8XG has an ether9 port connected to a separate switch chip. Trying to add this port to a bridge or involve it in the L3HW setup leads to unexpected results. Leave the management port for management!
Relying on Fasttrack HW Offloading too much
Since Fasttrack HW Offloading offers near-the-wire-speed performance at zero configuration overhead, users are tempted to use it as the default solution. However, the number of HW Fasttrack connections is very limited, leaving the rest of the traffic to the CPU. Use hardware routing as much as possible, reduce CPU traffic to a minimum via switch ACL rules, and then fine-tune which Fasttrack connections to offload with firewall filter rules.
Trying to offload slow-path connections
Using certain configuration (e.g. enabling bridge "use-ip-firewall" setting, creating bridge nat/filter rules) or running specific features like sniffer or torch can disable RouterOS FastPath, which will affect the ability to properly FastTrack and HW offload connections. If HW offloaded Fasttrack is required, make sure that there are no settings that disable the FastPath and verify that connections are getting the "H" flag or use the L3HW monitor command to see the amount of HW offloaded connections.
L3HW Feature Support
- HW - the feature is supported and offloaded to the hardware.
- CPU - the feature is supported but performed by software (CPU)
- N/A - the feature is not available together with L3HW. Layer 3 hardware offloading must be completely disabled (switch l3-hw-offloading=no) to make this feature work.
- FW - the feature requires l3-hw-offloading=no for a given switch port. On the switch level, l3-hw-offloading stays enabled, so Fasttrack connections can still be offloaded.
|IPv4 Unicast Routing||HW||7.1|
|IPv6 Unicast Routing||HW|
|IPv4 Multicast Routing||CPU|
|IPv6 Multicast Routing||CPU|
/ip/route add dst-address=10.0.99.0/24 blackhole
/ip/route add dst-address=10.0.0.0/24 gateway=ether1
This works only for directly connected networks. Since HW does not know how to send ARP requests, next-hop resolution is left to the CPU.
|BRIDGE||HW||IP Routing from/to hardware-offloaded bridge interface.||7.1|
|VLAN||HW||Routing between VLAN interfaces that are created on hardware-offloaded bridge interface with vlan-filtering.||7.1|
|IPv4 Firewall||FW||Users must choose either HW-accelerated routing or firewall. Firewall rules get processed by the CPU. Fasttrack connections get offloaded to HW.|
|IPv4 NAT||FW||NAT rules applied to the offloaded Fasttrack connections get processed by HW too.||7.1|
|VRF||N/A||Only the main routing table gets offloaded. If VRF is used together with L3HW and packets arrive on a switch port with |
|Controller Bridge and Port Extender||N/A|
|MTU||HW||The hardware supports up to 8 MTU profiles.||7.1|
|QinQ and tag-stacking||CPU||Stacked VLAN interfaces will lose HW offloading, while other VLANs created directly on the bridge interface can still use HW offloading.|
L3HW Device Support
Only the devices listed in the table below support L3 HW Offloading.
CRS3xx: Switch DX3000 and DX2000 Series
The devices below are based on Marvell 98DX224S, 98DX226S, or 98DX3236 switch chip models. These devices do not support Fasttrack or NAT connection offloading.
The 98DX3255 and 98DX3257 models are exceptions, which have a feature set of the DX8000 rather than the DX3000 series.
|Model||Switch Chip||Release||IPv4 Route Prefixes1||IPv6 Route Prefixes2||Nexthops||ECMP paths per prefix3|
1 Since the total amount of routes that can be offloaded is limited, prefixes with higher netmask are preferred to be forwarded by hardware (e.g., /32, /30, /29, etc.), any other prefixes that do not fit in the HW table will be processed by the CPU. Directly connected hosts are offloaded as /32 (IPv4) or /128 (IPv6) route prefixes. The number of hosts is also limited by max-neighbor-entries in IP Settings / IPv6 Settings.
2 IPv4 and IPv6 routing tables share the same hardware memory.
3 If a route has more paths than the hardware ECMP limit (X), only the first X paths get offloaded.
CRS3xx, CRS5xx: Switch DX8000 and DX4000 Series
The devices below are based on Marvell 98DX8xxx, 98DX4xxx switch chips, or 98DX325x model.
|Model||Switch Chip||Release||IPv4 Routes 1||IPv4 Hosts 7||IPv6 Routes8||IPv6 Hosts7||Nexthops||Fasttrack connections 2,3,4||NAT entries 2,5|
|CRS317-1G-16S+||98DX8216||7.1||120K - 240K||64K||30K - 40K||32K||8K||4.5K||4K|
|CRS309-1G-8S+||98DX8208||7.1||16K - 36K||16K||4K - 6K||8K||8K||4.5K||3.9K|
|CRS312-4C+8XG||98DX8212||7.1||16K - 36K||16K||4K - 6K||8K||8K||2.25K||2.25K|
|CRS326-24S+2Q+||98DX8332||7.1||16K - 36K||16K||4K - 6K||8K||8K||2.25K||2.25K|
|CRS354-48G-4S+2Q+, CRS354-48P-4S+2Q+||98DX3257 6||7.1||16K - 36K||16K||4K - 6K||8K||8K||2.25K||2.25K|
|CRS504-4XQ||98DX4310||7.1||60K - 120K||64K||15K - 20K||32K||8K||4.5K||4K|
|CRS510-8XS-2XQ||98DX4310||7.3||60K - 120K||64K||15K - 20K||32K||8K||4.5K||4K|
|CRS518-16XS-2XQ||98DX8525||7.3||60K - 120K||64K||15K - 20K||32K||8K||4.5K||4K|
1 Depends on the complexity of the routing table. Whole-byte IP prefixes (/8, /16, /24, etc.) occupy less HW space than others (e.g., /22). Starting with RouterOS v7.3, when the Routing HW table gets full, only routes with longer subnet prefixes are offloaded (/30, /29, /28, etc.) while the CPU processes the shorter prefixes. In RouterOS v7.2 and before, Routing HW memory overflow led to undefined behavior. Users can fine-tune what routes to offload via routing filters (for dynamic routes) or suppressing hardware offload of static routes. IPv4 and IPv6 routing tables share the same hardware memory.
2 When the HW limit of Fasttrack or NAT entries is reached, other connections will fall back to the CPU. MikroTik's smart connection offload algorithm ensures that the connections with the most traffic are offloaded to the hardware.
3 Fasttrack connections share the same HW memory with ACL rules. Depending on the complexity, one ACL rule may occupy the memory of 3-6 Fasttrack connections.
4 MPLS shares the HW memory with Fasttrack connections. Moreover, enabling MPLS requires the allocation of the entire memory region, which could otherwise store up to 768 (0.75K) Fasttrack connections. The same applies to Bridge Port Extender. However, MPLS and BPE may use the same memory region, so enabling them both doesn't double the limitation of Fasttrack connections.
5 If a Fasttrack connection requires Network Address Translation, a hardware NAT entry is created. The hardware supports both SRCNAT and DSTNAT.
6 The switch chip has a feature set of the DX8000 series.
7 DX4000/DX8000 switch chips store directly connected hosts, IPv4 /32, and IPv6 /128 route entries in the FDB table rather than the routing table. The HW memory is shared between regular FDB L2 entries (MAC), IPv4, and IPv6 addresses. The number of hosts is also limited by max-neighbor-entries in IP Settings / IPv6 Settings.
8 IPv4 and IPv6 routing tables share the same hardware memory.
|Model||Switch Chip||Release||IPv4 Routes||IPv4 Hosts||IPv6 Routes||IPv6 Hosts||Nexthops||Fasttrack connections||NAT entries|
|CCR2116-12G-4S+||98DX3255 1||7.1||16K - 36K||16K||4K - 6K||8K||8K||2.25K||2.25K|
|CCR2216-1G-12XS-2XQ||98DX8525||7.1||60K - 120K||64K||15K - 20K||32K||8k||4.5K||4K|
1 The switch chip has a feature set of the DX8000 series.