Summary
Sub-menu: /container
Packages required: container
A container is MikroTik's implementation of Linux containers, allowing users to run containerized environments within RouterOS. The container feature is available in RouterOS v7.x. Containers are compatible with images from Docker Hub, GCR, Quay, or other registries, as well as images built on other devices, as long as they use the same formats supported by these providers. While RouterOS uses different syntax than Docker, it achieves similar functionality.
Disclaimer
- you need physical access to your RouterOS device to enable support for the container feature; it is disabled by default;
- once the container feature is enabled, containers can be added/configured/started/stopped/removed remotely!
- if your RouterOS device is compromised, containers can be used to easily install malicious software on your RouterOS device and across your network;
- your RouterOS device is only as secure as anything you run in a container;
- if you run a container, there is no security guarantee of any kind;
- running a 3rd party container image on your RouterOS device could open a security hole/attack vector/attack surface;
- an expert with knowledge of how to build exploits may be able to jailbreak the device and elevate privileges to root;
Security risks:
- When a security expert publishes their exploit research, anyone can apply such an exploit;
- Someone can build a container image that can use the exploit AND provide a Linux root shell;
- By using a root shell someone may leave a permanent backdoor/vulnerability in your RouterOS system even after the container image is removed and the container feature disabled;
- If a vulnerability is injected into the primary or secondary RouterBOOT (or vendor pre-loader), then even Netinstall may not be able to fix it;
Requirements
The container package is compatible with the arm, arm64, and x86 architectures. Using the remote-image functionality (similar to docker pull) requires a large amount of free main memory; boards with 16MB SPI flash can use pre-built images on USB or other disk media.
An external disk is highly recommended.
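To see which storage devices RouterOS has detected before deciding where to keep container images (a quick check, not a required step), you can list them:
/disk/print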
/container
Properties
Property | Description |
---|---|
cmd (string; Default: ) | The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well. |
comment (string; Default: ) | Short description |
dns (string; Default: ) | If container needs different DNS, it can be configured here |
domain-name (string; Default: ) | |
entrypoint (string; Default: ) | An ENTRYPOINT allows you to specify the executable to run when starting the container. Example: /bin/sh |
envlist (string; Default: ) | list of environment variables (configured under /container/envs) to be used with the container |
file (string; Default: ) | container *.tar.gz tarball, if the container is imported from a file |
hostname (string; Default: ) | Assigning a hostname to a container helps in identifying and managing the container more easily |
interface (string; Default: ) | veth interface to be used with the container |
logging (string; Default: ) | if set to yes, all container-generated output will be shown in the RouterOS log |
mounts (string; Default: ) | mounts from /container/mounts/ sub-menu to be used with this container |
remote-image (string; Default: ) | the container image name to be installed if an external registry is used (configured under /container/config set registry-url=...) |
root-dir (string; Default: ) | used to save the container store outside of main memory, for example on an attached disk |
stop-signal (string; Default: ) | |
workdir (string; Default: ) | the working directory for the cmd/entrypoint |
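As an illustration of how these properties fit together (a sketch only; veth1, disk1, and ENV_PIHOLE are placeholder names taken from the Pi-hole example later in this article):
/container/add remote-image=pihole/pihole interface=veth1 root-dir=disk1/images/pihole envlist=ENV_PIHOLE hostname=pihole logging=yes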
Container configuration
/container/config/
Property | Description |
---|---|
registry-url | external registry url from where the container will be downloaded |
tmpdir | container extraction directory |
ram-high | RAM usage limit in bytes (0 for unlimited) |
username | Specifies the username for authentication (starting from ROS 7.8) |
password | Specifies the password for authentication (starting from ROS 7.8) |
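For example, to point RouterOS at Docker Hub with an extraction directory on an attached disk and a soft RAM limit (the values here are illustrative; username and password are only needed for private registries):
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp ram-high=200M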
Examples
Running Pi-hole
Prerequisites
- RouterOS device with RouterOS v7.4beta or later and installed Container package - How to install packages
- Physical access to a device to enable container mode - explained below
- Attached HDD, SSD or USB drive for storage - formatted with a filesystem supported by RouterOS - How to format/manage disks
Steps to run Pi-hole
- Enable Container mode and follow the instructions the command gives you (read more about Device-mode). You will need to confirm the device-mode change with a press of the reset button, or a cold reboot (if using Containers on x86):
/system/device-mode/update container=yes
Device-mode limits container use by default. Before granting container mode access, make sure your device is fully secured.
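After the reset/reboot confirmation, you can verify that the feature is enabled by printing the current device-mode settings (a quick check, not part of the original procedure):
/system/device-mode/print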
- Create a new veth interface and assign an IP address in a range that is unique in your network:
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
The following configuration is equivalent to "bridge" networking mode in other Container engines such as Docker. It is possible to create a "host" equivalent configuration as well.
One veth interface can be used for many Containers. You can create multiple veth interfaces to create isolated networks for different Containers.
- Create a new bridge that is going to be used for your Containers and assign the same IP address that was used for the veth interface's gateway:
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 interface=containers
- Add the veth interface to your newly created bridge:
/interface/bridge/port add bridge=containers interface=veth1
- Create a NAT for outgoing traffic:
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
- Create environment variables for the Container:
/container/envs/add name=ENV_PIHOLE key=TZ value="Europe/Riga"
/container/envs/add name=ENV_PIHOLE key=WEBPASSWORD value="mysecurepassword"
/container/envs/add name=ENV_PIHOLE key=DNSMASQ_USER value="root"
- Create mounted volumes for the Container:
/container/mounts/add name=MOUNT_PIHOLE_PIHOLE src=disk1/volumes/pihole/pihole dst=/etc/pihole
/container/mounts/add name=MOUNT_PIHOLE_DNSMASQD src=disk1/volumes/pihole/dnsmasq.d dst=/etc/dnsmasq.d
src= points to a RouterOS location (this could also be src=disk1/etc_pihole if, for example, you decide to put configuration files on external USB media), and dst= points to the location defined by the container (consult the container's manual/wiki/GitHub for information on where to point). If the src directory does not exist on first use, it will be populated with whatever the container has in the dst location.
It is highly recommended to place any Container volume on a disk attached to your RouterOS device. Avoid placing Container volumes on the built-in storage.
- Configure RouterOS to use a specific Container registry, for example, docker.io:
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
- Add a Container:
/container/add remote-image=pihole/pihole interface=veth1 root-dir=disk1/images/pihole mounts=MOUNT_PIHOLE_PIHOLE,MOUNT_PIHOLE_DNSMASQD envlist=ENV_PIHOLE name=pihole
If you wish to see container output in /log print, add logging=yes when creating the Container. root-dir should point to an external drive; it is not recommended to use internal storage for Containers.
There are multiple ways to get a Container image; check the Adding a Container image section if you need an alternative way of adding one.
Adding a Container will start downloading or extracting it. The Container itself will not be started after it has been added; you need to start it manually for the first time after it has been downloaded/extracted.
- Check the status of your Container and wait until downloading/extracting has finished and status=stopped:
/container/print
- Start the Container:
/container/start [find where name=pihole]
- Create a port forwarding for your Container:
/ip firewall nat add action=dst-nat chain=dstnat dst-address=192.168.88.1 dst-port=80 protocol=tcp to-addresses=172.17.0.2 to-ports=80
- You should be able to access the Pi-hole web panel by navigating to http://192.168.88.1/admin/ in your web browser.
- To start using Pi-hole on your devices, change their DNS configuration to use 192.168.88.1 as the DNS server.
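If your RouterOS device is also the DHCP server for your LAN, one way to roll this out (a sketch that assumes the default dhcp-server network entry) is to hand out the Pi-hole address via DHCP:
/ip/dhcp-server/network/set [find] dns-server=192.168.88.1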
Adding a Container image
There are multiple ways you can get a Container image running on your RouterOS device. Check the examples below.
Option A: Get an image from an external library
Set registry-url (for downloading containers from Docker registry) and set extract directory (tmpdir) to attached USB media:
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
pull image:
/container/add remote-image=pihole/pihole interface=veth1 root-dir=disk1/images/pihole mounts=MOUNT_PIHOLE_PIHOLE,MOUNT_PIHOLE_DNSMASQD envlist=ENV_PIHOLE name=pihole
The image will be automatically pulled and extracted to root-dir; the status can be checked by using:
/container/print
Option B: Import image from PC
You can use your PC running either Docker or Podman to download the required container image and save it to an archive. We recommend Podman since it makes it easier to pull and build containers for specific architectures.
- Download your required image based on the architecture of your RouterOS device:
#For ARM64
podman pull --arch=arm64 docker.io/pihole/pihole
#For ARM
podman pull --arch=arm docker.io/pihole/pihole
#For AMD64
podman pull --arch=amd64 docker.io/pihole/pihole
- Save the container image to an archive:
podman save pihole > pihole.tar
- Upload the archive to your RouterOS device, for example:
rsync -av pihole.tar admin@192.168.88.1:/data/disk1/
You can also use Winbox to upload files!
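Besides rsync and Winbox, a plain SCP upload also works if the SSH service is enabled on the router (a sketch; adjust the destination path to match your disk name):
scp pihole.tar admin@192.168.88.1:disk1/pihole.tar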
- Create a Container on your RouterOS device using the uploaded container image archive file:
/container/add file=disk1/pihole.tar interface=veth1 root-dir=disk1/pihole mounts=MOUNT_PIHOLE_PIHOLE,MOUNT_PIHOLE_DNSMASQD envlist=ENV_PIHOLE name=pihole
Option C: Build an image on PC
You can build your own Containers and use them on your RouterOS device. While you can build Containers using Docker, we recommend Podman since it makes it easier to build Containers for a specific architecture.
- Get source files for your required Container image, for example by using git:
git clone https://github.com/pi-hole/docker-pi-hole.git
cd docker-pi-hole
- Build the Container image by specifying the Dockerfile or Containerfile and the target architecture:
#For ARM64
podman build --platform linux/arm64 --tag pihole -f ./src/Dockerfile
#For ARM
podman build --platform linux/arm --tag pihole -f ./src/Dockerfile
#For AMD64
podman build --platform linux/amd64 --tag pihole -f ./src/Dockerfile
- Save the container image to an archive:
podman save pihole > pihole.tar
- Upload the archive to your RouterOS device, for example:
rsync -av pihole.tar admin@192.168.88.1:/data/disk1/
You can also use Winbox to upload files!
- Create a Container on your RouterOS device using the uploaded container image archive file:
/container/add file=disk1/pihole.tar interface=veth1 root-dir=disk1/pihole mounts=MOUNT_PIHOLE_PIHOLE,MOUNT_PIHOLE_DNSMASQD envlist=ENV_PIHOLE name=pihole
Alternative: Using Docker to build Container images
To use a Dockerfile and build your own Docker image, Docker needs to be installed, as well as buildx or another builder toolkit.
The easiest way is to download and install Docker Engine:
https://docs.docker.com/engine/install/
After installation, check whether extra architectures are available:
docker buildx ls
should return:
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
default * docker
  default default running linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
If not, install the extra architectures:
docker run --privileged --rm tonistiigi/binfmt --install all
Pull or create your project with a Dockerfile included, then build and export the image (adjust --platform if needed):
git clone https://github.com/pi-hole/docker-pi-hole.git
cd docker-pi-hole
docker buildx build --no-cache --platform arm64 --output=type=docker -t pihole .
docker save pihole > pihole.tar
Upload pihole.tar to your RouterOS device.
Images and build objects left on the Linux system can be pruned afterwards.
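For example, an optional cleanup on the build machine (this removes stopped containers, dangling images, unused networks, and build cache, so use it with care):
docker system prune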
Create a container from the tar image
/container/add file=pihole.tar interface=veth1 mounts=MOUNT_PIHOLE_PIHOLE,MOUNT_PIHOLE_DNSMASQD envlist=ENV_PIHOLE name=pihole
Networking examples
Bridge with NAT
In this networking setup, all Containers use the same veth interface and communicate with each other without any Firewall restrictions, but you need to forward ports in order to allow access to a Container's port.
For example, a database Container needs to communicate with a web application Container; the web application needs port 80 to be exposed to the world, but the database Container does not need any ports to be exposed to the world.
- The network configuration:
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 interface=containers
/interface/bridge/port add bridge=containers interface=veth1
/ip firewall nat
add chain=srcnat action=masquerade src-address=172.17.0.0/24
add action=dst-nat chain=dstnat dst-address=192.168.88.1 dst-port=80 protocol=tcp to-addresses=172.17.0.2 to-ports=80
- The database Container configuration:
/container/envs/add name=ENV_POSTGRES key=POSTGRES_DB value="webapp"
/container/envs/add name=ENV_POSTGRES key=POSTGRES_PASSWORD value="<changeme>"
/container/envs/add name=ENV_POSTGRES key=POSTGRES_USER value="webapp"
/container/envs/add name=ENV_POSTGRES key=PGDATA value="/var/lib/postgresql/data/pgdata"
/container/envs/add name=ENV_POSTGRES key=POSTGRES_INITDB_ARGS value="--encoding='UTF8' --lc-collate='C' --lc-ctype='C'"
/container/mounts/add name=MOUNT_POSTGRES src=disk1/volumes/postgres/data dst=/var/lib/postgresql/data
/container/add remote-image=postgres:15 interface=veth1 root-dir=disk1/images/postgres mounts=MOUNT_POSTGRES envlist=ENV_POSTGRES name=postgres start-on-boot=yes logging=yes
- The webapp Container configuration:
/container/add remote-image=dpage/pgadmin4 interface=veth1 root-dir=disk1/images/pgadmin name=pgadmin start-on-boot=yes logging=yes
In this example, the pgadmin port 80 is accessible to everyone, but the postgres port 5432 is not; it can only be accessed either through pgadmin as 127.0.0.1 or through the RouterOS device running the Container as 172.17.0.2.
Isolated Containers
In this networking setup, you have multiple Containers and you want to make sure that some of them can communicate without Firewall restrictions, but some need to be isolated from other Containers. For example, you might want to create two database Containers and isolate them.
- The network configuration:
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/interface/veth/add name=veth2 address=172.18.0.2/24 gateway=172.18.0.1
/interface/bridge/add name=containers1
/interface/bridge/add name=containers2
/ip/address/add address=172.17.0.1/24 interface=containers1
/ip/address/add address=172.18.0.1/24 interface=containers2
/interface/bridge/port add bridge=containers1 interface=veth1
/interface/bridge/port add bridge=containers2 interface=veth2
/ip firewall nat
add chain=srcnat action=masquerade src-address=172.17.0.0/24
add chain=srcnat action=masquerade src-address=172.18.0.0/24
add action=dst-nat chain=dstnat dst-address=192.168.88.1 dst-port=81 protocol=tcp to-addresses=172.17.0.2 to-ports=80
add action=dst-nat chain=dstnat dst-address=192.168.88.1 dst-port=82 protocol=tcp to-addresses=172.18.0.2 to-ports=80
- The first and second database Container configuration:
/container/envs/add name=ENV_POSTGRES1 key=POSTGRES_DB value="webapp1"
/container/envs/add name=ENV_POSTGRES1 key=POSTGRES_PASSWORD value="<changeme>"
/container/envs/add name=ENV_POSTGRES1 key=POSTGRES_USER value="webapp1"
/container/envs/add name=ENV_POSTGRES1 key=PGDATA value="/var/lib/postgresql/data/pgdata"
/container/envs/add name=ENV_POSTGRES1 key=POSTGRES_INITDB_ARGS value="--encoding='UTF8' --lc-collate='C' --lc-ctype='C'"
/container/mounts/add name=MOUNT_POSTGRES1 src=disk1/volumes/postgres1/data dst=/var/lib/postgresql/data
/container/add remote-image=postgres:15 interface=veth1 root-dir=disk1/images/postgres1 mounts=MOUNT_POSTGRES1 envlist=ENV_POSTGRES1 name=postgres1 start-on-boot=yes logging=yes
/container/envs/add name=ENV_POSTGRES2 key=POSTGRES_DB value="webapp2"
/container/envs/add name=ENV_POSTGRES2 key=POSTGRES_PASSWORD value="<changeme>"
/container/envs/add name=ENV_POSTGRES2 key=POSTGRES_USER value="webapp2"
/container/envs/add name=ENV_POSTGRES2 key=PGDATA value="/var/lib/postgresql/data/pgdata"
/container/envs/add name=ENV_POSTGRES2 key=POSTGRES_INITDB_ARGS value="--encoding='UTF8' --lc-collate='C' --lc-ctype='C'"
/container/mounts/add name=MOUNT_POSTGRES2 src=disk1/volumes/postgres2/data dst=/var/lib/postgresql/data
/container/add remote-image=postgres:15 interface=veth2 root-dir=disk1/images/postgres2 mounts=MOUNT_POSTGRES2 envlist=ENV_POSTGRES2 name=postgres2 start-on-boot=yes logging=yes
- The first and second webapp Container configuration:
/container/add remote-image=dpage/pgadmin4 interface=veth1 root-dir=disk1/images/pgadmin1 name=pgadmin1 start-on-boot=yes logging=yes
/container/add remote-image=dpage/pgadmin4 interface=veth2 root-dir=disk1/images/pgadmin2 name=pgadmin2 start-on-boot=yes logging=yes
In this example, pgadmin1 is able to reach postgres1, but is not able to reach postgres2. Similarly, pgadmin2 is able to reach postgres2, but is not able to reach postgres1.
Container in Layer2 network
In this networking setup, your Container is directly attached to a Layer2 network with other physical network devices. This networking setup is equivalent to "host" networking mode on other Container engines such as Docker.
In this networking setup, all the ports on your Container are exposed. This is considered insecure, but does slightly improve the Container's networking performance.
- The networking configuration:
/interface/veth/add name=veth1 address=192.168.88.2/24 gateway=192.168.88.1
/interface/bridge/port add bridge=bridge interface=veth1
- In case your RouterOS device has services running on the same port, you need to disable them:
/ip service/disable [find where name=www]
- The webapp configuration:
/container/add remote-image=dpage/pgadmin4 interface=veth1 root-dir=disk1/images/pgadmin name=pgadmin start-on-boot=yes logging=yes
In this example, the pgadmin Container does not need port forwarding, but all other ports that the Container is using are now accessible to others on the same Layer2 network. This type of setup should only be used when your application requires the Container to have an IP address in the same Layer2 network, such as applications that use broadcast traffic for service discovery (in most cases such requirements can still be worked around by using NAT).
Tips and tricks
- Containers use up a lot of disk space, so USB-, SATA-, or NVMe-attached media is highly recommended. For devices with USB ports, USB-to-SATA adapters can be used with 2.5" drives for extra storage and faster file operations.
- RAM usage can be limited by using:
/container/config/set ram-high=200M
this will soft-limit RAM usage - if RAM usage goes over the high boundary, the processes of the cgroup are throttled and put under heavy reclaim pressure.
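To pick a sensible value, you can first check the router's total and free memory (a quick check, not part of the original tip):
/system/resource/print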
To start containers after a router reboot, use the start-on-boot option (starting from 7.6beta6):
/container/print
0 name="2e679415-2edd-4300-8fab-a779ec267058" tag="test_arm64:latest" os="linux" arch="arm" interface=veth2 root-dir=disk1/alpine mounts="" dns="" logging=yes start-on-boot=yes status=running
/container/set 0 start-on-boot=yes
It is possible to get to running container shell:
/container/shell 0
Enable logging to get output from container:
/container/set 0 logging=yes
- Some containers will require additional privileges in order to run properly:
/container/set 0 user=0:0
- Starting from version 7.11beta5, multiple addresses and IPv6 addresses can be added:
interface/veth add address=172.17.0.3/16,fd8d:5ad2:24:2::2/64 gateway=172.17.0.1 gateway6=fd8d:5ad2:24:2::1
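For the IPv6 gateway to be reachable, the router side of the bridge also needs an address from the same prefix (a sketch that reuses the prefix above and assumes the bridge is named containers, as in the Bridge with NAT example):
/ipv6/address/add address=fd8d:5ad2:24:2::1/64 interface=containers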