
Container cheat sheet


Create a container:

lxc-create -t download -n container-name -- --no-validate

It then prints a list of images & you pick one by answering 3 prompts:

Distribution: ubuntu
Release: xenial
Architecture: amd64
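
The prompts can also be skipped by passing the answers straight to the download template:

lxc-create -t download -n container-name -- -d ubuntu -r xenial -a amd64 --no-validate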

Rename a container on systems without lxc-rename or lxc-move:

lxc-copy -R -n old-name -N new-name


To set up networking for containers, enable bridging in the kernel:


Networking support → Networking options → 802.1d Ethernet Bridging

modprobe bridge

The containers require a bridge to start, or they'll give the error "failed to attach 'vethEFO7AL' to the bridge 'lxcbr0': Operation not permitted":

brctl addbr lxcbr0
ip link set lxcbr0 up
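
brctl comes from the bridge-utils package; plain iproute2 can do the same thing:

ip link add name lxcbr0 type bridge
ip link set lxcbr0 up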

Start a container:

lxc-start -n name

Create a root console on the container:

lxc-attach -n name

For the container to access the network, unset the host's address:

ifconfig enp6s0 0.0.0.0

Attach the host interface to the bridge:

brctl addif lxcbr0 enp6s0

Assign the host's address to the bridge:

ifconfig lxcbr0 10.0.10.25 netmask 255.255.255.0

Restore the default route, which was lost when the interface address was cleared:

route add default gw xena
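
Unsetting the host's address kills any remote login mid-sequence, so over ssh it's safer to run the whole dance as 1 script from the console. A sketch, assuming the 10.0.10.1 gateway used below:

#!/bin/sh
# move the host's address & default route onto the bridge in 1 shot
ifconfig enp6s0 0.0.0.0
brctl addif lxcbr0 enp6s0
ifconfig lxcbr0 10.0.10.25 netmask 255.255.255.0
route add default gw 10.0.10.1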

Set up the container by editing the container's /etc/network/interfaces. Alas, this only applies to Ubuntu 16.

auto eth0
#iface eth0 inet dhcp
iface eth0 inet manual

In the container's configuration file /var/lib/lxc/container-name/config, add lines for the container's address & gateway:

lxc.network.ipv4 = 10.0.10.26/24
lxc.network.ipv4.gateway = 10.0.10.1
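
LXC 3.0 renamed these keys, so on newer hosts the equivalent lines are:

lxc.net.0.ipv4.address = 10.0.10.26/24
lxc.net.0.ipv4.gateway = 10.0.10.1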

Add a nameserver line to the container's /etc/resolvconf/resolv.conf.d/head. This too is for Ubuntu 16.
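
For example, assuming the 10.0.10.1 gateway also answers DNS:

nameserver 10.0.10.1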

On Ubuntu 20, you can disable systemd networking:

mv /lib/systemd/systemd-networkd /lib/systemd/systemd-networkd.bak
mv /lib/systemd/systemd-resolved /lib/systemd/systemd-resolved.bak
mv /usr/bin/networkd-dispatcher /usr/bin/networkd-dispatcher.bak

& run all the network commands in /etc/rc.local instead, as sketched below.
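
A sketch of the container's /etc/rc.local using the addresses above (the file has to be executable & the rc-local service enabled):

#!/bin/sh
# static network setup, standing in for the disabled systemd networking
ip addr add 10.0.10.26/24 dev eth0
ip link set eth0 up
ip route add default via 10.0.10.1
echo "nameserver 10.0.10.1" > /etc/resolv.conf
exit 0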

Restart the container:

lxc-stop -n name
lxc-start -n name

Mount a host directory in a container by adding a line to the config file:

lxc.mount.entry = /grid /var/lib/lxc/grid20/rootfs/grid none bind 0 0

/grid is the host directory

/var/lib/lxc/grid20/rootfs/grid is the container mountpoint

Create the mount point with a mkdir in /var/lib/lxc/grid20/rootfs/ first:
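
mkdir /var/lib/lxc/grid20/rootfs/grid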

  • Torrents in a VPN

    lion mclionhead • 10/26/2023 at 23:18

    Lions contend that most containers are only used for downloading torrents in a VPN.  Most ISPs these days disable your account for nefarious downloading.

    The mane VPN program is openvpn.  The mane command-line torrent program these days is transmission-cli:

    apt install openvpn transmission-daemon transmission-cli

    Getting it going in a VPN container is a long & hard process.

    The daemon is a systemd service & immediately answers "unauthorized" to every command.  To get around this, systemctl stop transmission-daemon, edit /etc/transmission-daemon/settings.json, set rpc-authentication-required to false, & systemctl start transmission-daemon again.

    Another new dance: the download-queue-enabled & queue-stalled-enabled options have to be set to false, or it'll mark every new torrent as queued while waiting forever on unseeded torrents.
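
    A minimal fragment of settings.json with all 3 fixes (only edit it while the daemon is stopped, since the daemon rewrites the file on exit):

    "rpc-authentication-required": false,
    "download-queue-enabled": false,
    "queue-stalled-enabled": false,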

    Generally, there's an openvpn command which configures the VPN.  It runs one command after the VPN starts & another before the VPN dies, to ensure no data intended for the VPN goes out the insecure network:

    openvpn --script-security 2 --config [.ovpn file] --auth-user-pass [userpass file] --comp-lzo --up-delay --up [startup script] --down-pre --down [shutdown script]

    For some reason, the --comp-lzo option may have to be taken out if nothing is reachable from inside the VPN.

    The VPN nameserver is contained in a foreign_option_1 environment variable passed to the startup script.

    At minimum, the startup & shutdown scripts have to manage the torrent daemon & set resolv.conf.

    startup script:

    #!/bin/sh
    # bring up the torrent daemon & point DNS at the VPN's nameserver

    systemctl start transmission-daemon

    cat > /etc/resolv.conf << EOF
    nameserver THE_VPN_NAMESERVER
    EOF
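
    THE_VPN_NAMESERVER doesn't have to be hardcoded; assuming foreign_option_1 arrives as "dhcp-option DNS a.b.c.d", the startup script can parse it:

    vpn_dns=$(echo "$foreign_option_1" | awk '{print $3}')
    echo "nameserver $vpn_dns" > /etc/resolv.conf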

    shutdown script:

    #!/bin/sh
    # kill the torrent daemon & restore the ISP's nameserver

    systemctl stop transmission-daemon

    cat > /etc/resolv.conf << EOF
    nameserver THE_ISP_NAMESERVER
    EOF

    All the transmission downloads go in /var/lib/transmission-daemon/downloads.  The lion kingdom made this a mount point for a host directory in the lxc config:

    lxc.mount.entry = /home/mov/sin /var/lib/lxc/sin/rootfs/var/lib/transmission-daemon/downloads none bind 0 0

    The permissions have to be 777 since transmission-daemon runs as a normal user.

    The torrents are all in /var/lib/transmission-daemon/.config/transmission-daemon/torrents & /var/lib/transmission-daemon/.config/transmission-daemon/resume.

    New containers have to be routinely created as VPNs migrate to new Ubuntu releases.  To transfer all the torrents between containers, copy the 2 torrent directories, make sure the permissions are 777, & make sure the user exists, as sketched below.
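
    A sketch of the transfer between 2 hypothetical containers sin19 & sin20:

    OLD=/var/lib/lxc/sin19/rootfs/var/lib/transmission-daemon/.config/transmission-daemon
    NEW=/var/lib/lxc/sin20/rootfs/var/lib/transmission-daemon/.config/transmission-daemon
    cp -a $OLD/torrents $OLD/resume $NEW/
    chmod -R 777 $NEW/torrents $NEW/resume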

    -------------------------------------------------------------------------------------------------------------------------

    Key commands:

    Start downloading a torrent:

    transmission-remote -a "magnet link"

    List status & IDs of all torrents:

    transmission-remote -l

    Stop a torrent by ID:

    transmission-remote -t [ID] -S

    Resume a torrent by ID:

    transmission-remote -t [ID] -s

    Remove a torrent by ID:

    transmission-remote -t [ID] -r

    There is no easy way to select individual files for downloading.  The general idea is to list the torrent's contents by torrent ID:

    transmission-remote -t [ID] -f

    Once it downloads the file list, stop all the files from downloading:

    transmission-remote -t [ID] -G all

    Then resume 1 file at a time by passing its ID:

    transmission-remote -t [ID] -g [file ID]
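
    Putting it together for torrent 1, fetching only its first file (file IDs from -f start at 0):

    transmission-remote -t 1 -f
    transmission-remote -t 1 -G all
    transmission-remote -t 1 -g 0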

  • Dangling virtual eth devices

    lion mclionhead • 05/24/2023 at 23:36

    Noted every time you lxc-stop & lxc-start a certain container, it creates a new virtual eth device.  Eventually ifconfig & brctl show fill with dead virtual eth devices.  These dead devices cause networking in the container to fail with TLS errors & dropped connections, while DNS & ping continue to work.

    The leading theory is these containers start processes which aren't killed by lxc-stop.  Using kill -9 instead of lxc-stop doesn't release the devices either.

    Diffing the process table in meld shows an interesting evolution over the lifecycle of the container.

    The container creates a bunch of nfsd & kworker processes which aren't visible inside the container.  These are left behind after lxc-stop or kill -9.

    killall -9 nfsd gets rid of all the dangling eth devices & processes.

    The nfsds are all created outside of the container by lxc-start & bound to the virtual eth.

    For some reason, the offending container has a bunch of rpc programs running inside it, & these cause nfsd to run outside the container.  There's no reason to use nfsd in a container since the filesystem is a subdirectory on the host.  A quick disabling of RPC solves the dangling network interfaces:


    mv /usr/sbin/rpc.idmapd /usr/sbin/rpc.idmapd.bak
    mv /usr/sbin/rpc.mountd /usr/sbin/rpc.mountd.bak
    mv /usr/sbin/rpcbind /usr/sbin/rpcbind.bak


    That fixed the connection errors.  As in all things, containers introduce many new problems in exchange for solving the old problems of virtual machines.  The hope is the new problems are less debilitating than the old ones, but there can never be a perfect solution.

  • X clients in a container

    lion mclionhead • 05/23/2023 at 21:14

    The goog spits out many hits for running an X11 program in a container by creating a new container from scratch.  Buried in the list was 1 hit for reconfiguring an existing container:

    https://blog.simos.info/running-x11-software-in-lxd-containers/

    The general idea is to add 1 line to your /var/lib/lxc/*/config file:

    lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir,ro

    Then restart the container.

    This bind mounts the host's X server sockets into the container, so display :0 inside the container reaches the host's X server.
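
    Then X clients inside the container just need DISPLAY pointed at the shared socket. The host may also have to allow the connection with xhost; a sketch:

    # on the host
    xhost +local:
    # inside the container
    DISPLAY=:0 xterm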
