My LXC containers usually work with a masqueraded bridge, on a private network.
This time I would like to put the containers on the host’s LAN, but I can’t get it to work.
I use LXC 2.0.7-2+deb9u2 on Debian, following this documentation: LXC/SimpleBridge.
cfrbr0 is the bridge on the host; its IP is 192.168.0.12/24, it contains the physical interface (up, with no IP), and the lxc-net service is down.
[config]
lxc.network.type = veth
lxc.network.name = eth0
lxc.network.flags = up
lxc.network.ipv4.gateway = auto
lxc.network.link = cfrbr0
lxc.network.ipv4 = 192.168.0.13/24
[lxc-usernet]
test veth cfrbr0 100
$ sudo service lxc-net stop
$ lxc-start -n test-ct
$ lxc-attach -n test-ct -- sudo -i
# ip a
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9e:82:4f:5a:6c:74 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.13/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
# ip r
default via 192.168.0.12 dev eth0
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.13
# ping 192.168.0.12
PING 192.168.0.12 (192.168.0.12) 56(84) bytes of data.
64 bytes from 192.168.0.12: icmp_seq=1 ttl=64 time=0.081 ms
But:
# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
From 192.168.0.12: icmp_seq=2 Redirect Host(New nexthop: 192.168.0.254)
The host itself can ping 1.1.1.1, and the container’s veth is added to the bridge. IP forwarding is set to 1 on the host.
FYI, the host is a VirtualBox VM on macOS (same issue with Debian Stretch in VirtualBox).
I think I’m misconfiguring the host-shared bridge, because I don’t have this problem with a masqueraded bridge on an LXC private network. As a workaround, is there a way to put the containers on the local network with a masqueraded bridge?
Thank you for your suggestions !
3 Answers
What about sharing the host’s network namespace with the container? To do this, use the --share-net option of the lxc-start command, passing it the PID of a process running on the host side (e.g. PID 1). The container will then be in the same network namespace as the host, and so will see all of the host’s network interfaces.
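For example, a minimal sketch of such an invocation (the container name test-ct is taken from the question; PID 1, the host’s init process, is used as the reference process):

```
# Start the container inside the network namespace of PID 1 (the host's init),
# so it shares all of the host's interfaces, addresses, and ports
lxc-start -n test-ct --share-net=1
```

Note this gives up network isolation entirely; the container sees and uses the host’s interfaces directly.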
After experiencing this exact problem, the key for me was to manually set the src in the container’s default route. On the host, /etc/network/interfaces:
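The original file was lost from this answer; a minimal sketch of what it can look like, assuming the bridge is named br0, eth0 is the enslaved physical NIC, and xx.xx.xx.1 stands for the upstream gateway (xx.xx.xx.zz is the host’s public IP as above):

```
auto lo
iface lo inet loopback

# No standalone stanza for eth0: it is enslaved by the bridge below
auto br0
iface br0 inet static
    bridge_ports eth0
    address xx.xx.xx.zz
    netmask 255.255.255.0
    gateway xx.xx.xx.1
```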
(Note there is no entry for eth0, and xx.xx.xx.zz is the primary/main public IP of the host.)
On the host, the network part of /var/lib/lxc/[container]/config:
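A sketch of that network section, using the LXC 2.x lxc.network.* keys from the question (the bridge name br0 is an assumption):

```
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
```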
On the container, /etc/network/interfaces:
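A sketch of the container’s file, assuming xx.xx.xx.yy is the container’s public IP, xx.xx.xx.1 the upstream gateway, and a post-up hook to pin the source address (netmask and hook choice are assumptions):

```
auto eth0
iface eth0 inet static
    address xx.xx.xx.yy
    netmask 255.255.255.0
    # Replace the default route so outgoing traffic uses the container's own IP
    post-up ip route replace default via xx.xx.xx.1 dev eth0 src xx.xx.xx.yy
```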
Note the src xx.xx.xx.yy, where the IP address is the external IP being assigned to the container. With this in place, the container’s default route carries the correct source address, which you can verify with ip route in the container and on the host.
When using VirtualBox on a Mac (over WiFi?) this requires a different approach because of how WiFi bridging works (https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/network_bridged.html).
The key in this setup is not to bridge eth0 and not to use lxc-net. On the host, /etc/network/interfaces is standard.
A bridge is not needed (no lxc-net), but set the container config to create a virtual interface thusly:
Some notes on this config: (1) there is no lxc.net.0.link, since we don’t want a bridge; (2) the lxc.net.0.ipv4.gateway address is the host’s IP address; (3) note the netmask is /32; (4) the scripts are explained below.
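A sketch of the container config being described, reusing the question’s addresses (192.168.0.12 for the host, 192.168.0.13 for the container); the veth name, container name, and script paths are assumptions:

```
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.name = eth0
lxc.net.0.veth.pair = veth0
# /32 netmask: the container's address, not a whole subnet
lxc.net.0.ipv4.address = 192.168.0.13/32
# Gateway is the host's own IP
lxc.net.0.ipv4.gateway = 192.168.0.12
lxc.net.0.script.up = /var/lib/lxc/test-ct/netup.sh
lxc.net.0.script.down = /var/lib/lxc/test-ct/netdown.sh
```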
The netup.sh script routes incoming IP traffic to the container and creates an ARP entry so that eth0 will accept traffic for it:
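A sketch of such a netup.sh, with the question’s addresses and assumed veth0/eth0 naming (arp is the net-tools utility):

```
#!/bin/sh
# Route traffic for the container's /32 down its veth ...
ip route add 192.168.0.13/32 dev veth0
# ... and publish a proxy-ARP entry so eth0 answers ARP requests for it
arp -i eth0 -Ds 192.168.0.13 eth0 pub
```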
The netdown.sh script simply removes the ARP entry (the IP route will go away automatically when veth0 is destroyed).
On the guest, /etc/network/interfaces can be empty, since in this case the setup was done in the container config file.
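A matching netdown.sh sketch (same assumed names and address):

```
#!/bin/sh
# Remove the published proxy-ARP entry; the /32 route vanishes with veth0
arp -i eth0 -d 192.168.0.13 pub
```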
The end result on the host is a /32 route to the container’s IP plus the published ARP entry; in the container, eth0 carries the address with a default route via the host.
I know this was stated in the question, but for anyone starting from scratch, make sure forwarding is enabled:
echo 1 > /proc/sys/net/ipv4/ip_forward ; echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
My other answer works for KVM and might be useful for others, so I won’t edit it, but this one is more specific to VirtualBox and WiFi.