osctl - command line interface for container management.
osctl [global options] command [command options] [arguments...]
osctl is a command line interface for osctld. osctld is a daemon from
vpsAdminOS that is used to manage unprivileged Linux containers, including
storage pools, user namespaces and cgroups for resource management.
osctld must be running before osctl can be used. osctl is available only
to root.
osctld uses ZFS for persistent storage and it is the only supported file
system. ZFS pools are created and imported by the administrator or the OS,
then they have to be installed into osctld, see commands pool install
and pool import. One osctl pool corresponds to one ZFS pool, osctld
requires at least one pool to operate.
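For example, a ZFS pool created by the administrator can be handed over to osctld like this (the pool name tank is illustrative):

```shell
# Install the pool into osctld; it is marked for automatic import
# and imported immediately.
osctl pool install tank

# Verify that the pool is imported
osctl pool ls
```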
Pools are independent entities carrying their own configuration and data, such as users, groups, containers, images, log files and other configuration files. Pools can be imported and exported at runtime, taking all associated entities with them.
When managing entities such as groups or containers with multiple pools, you may
need to specify the pool name when there are name conflicts, e.g. two groups or
containers from different pools with the same name. Two users with the same name
are not allowed, because of system user/group conflict. osctld by default
selects the entity from the first pool that has it. If you wish to manage such
entities from other pools, you can use global option --pool pool or specify
the group/container name/id as pool:ctid|user|group, i.e. the pool name
and the group/container name/id separated by a colon.
ID ranges are used to track user/group ID allocations into user namespace maps. There is one default ID range on each pool, with the possibility of creating custom ID ranges. User namespace maps allocated from one ID range are guaranteed to be unique, i.e. no two containers can share the same user/group IDs, making them isolated.
See the id-range command family.
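As an illustration, a custom ID range might be created and inspected as follows (the range name and values are examples; the options are described under id-range new below):

```shell
# Create an ID range; each block of 65536 IDs can back one
# user namespace map.
osctl id-range new --start-id 1000000 --block-size 65536 --block-count 256 myrange

# List ranges and inspect the allocation table
osctl id-range ls
osctl id-range table ls myrange
```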
osctld makes it possible to run every container with a different user
namespace mapping to increase isolation. For each mapping, osctld manages
an unprivileged system user and takes care of all important system files, such
as /etc/passwd, /etc/group, /etc/subuid, /etc/subgid or
/etc/lxc/lxc-usernet.
See the user command family.
Groups represent the cgroup hierarchy and are used for system resource
accounting and control. Each pool has two groups by default: / and /default.
/ is the parent of all managed groups and /default is the group that new
containers are placed in, unless configured otherwise. Every container belongs
to exactly one group.
See the group command family.
Every container uses user namespace mapping, resource control groups
and resides in its own ZFS dataset. Containers are usually created from
images, see IMAGES.
Under the hood, osctld utilizes LXC to set up and run containers.
An image is a tar archive generated e.g. by ct export or built using the
osctl-image utility. It contains container configuration and filesystems.
Containers can be created from local images using ct import. Images can also
be automatically downloaded from remote repositories over HTTP using ct new.
vpsAdminOS comes with one such repository preconfigured; it can be browsed
at https://images.vpsadminos.org or using command
repository images ls default.
See the repository command family, ct export, ct import and ct new.
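A typical workflow might look like this (the container id and distribution are examples):

```shell
# Create a container from a remote image repository
osctl ct new --distribution alpine myct01

# Or import a previously exported local image
osctl ct import /tmp/myct01.tar
```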
Commands for container manipulation:
ct new - create a new container
ct reinstall - remove root file system contents and import an image
ct cp - copy container to the same or a different pool
ct mv - move container to a different pool or change its id
ct chown - change container user
ct chgrp - change container group
ct set, ct unset - configure container properties
ct start, ct stop, ct restart - control containers
ct attach - enter a container and open an interactive shell
ct console - attach a container's console
ct exec - execute an arbitrary command within a container
ct runscript - execute a script from the host within a container
ct passwd - set password for a user within a container

By default, created containers have to be started manually. It is possible to
mark containers that should be automatically started when their pool is imported
using command ct set autostart.
When a pool is imported, its containers marked for start are sorted in a queue
based on their priority. Containers are then started in order, usually several
containers at once in parallel. The start queue can be accessed using command
pool autostart queue, cancelled by pool autostart cancel and manually
triggered using pool autostart trigger.
The number of containers started at once in parallel can be set by
pool set parallel-start. There is also pool set parallel-stop which
controls how many containers at once are being stopped when the pool is being
exported from osctld.
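For example (pool and container names are illustrative):

```shell
# Start the container automatically, with a high priority
osctl ct set autostart --priority 10 myct01

# Start/stop up to 4 containers in parallel on pool import/export
osctl pool set parallel-start tank 4
osctl pool set parallel-stop tank 4

# Inspect the autostart queue
osctl pool autostart queue tank
```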
osctld supports the veth device in two configurations: bridged and routed.
Bridged interfaces are simpler to configure, but do not provide strong isolation
of the network layer. The interfaces can be configured either statically or
using DHCP. See command ct netif new bridge for more information.
Routed interfaces rely on routing protocols such as OSPF or BGP. osctld
adds configured routes to the container's network interfaces and it is up to
the routing protocol to propagate them wherever needed. Routed interfaces
are harder to configure, but provide proper isolation of the network layer.
See command ct netif new routed for more information.
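As a sketch, the two interface types might be created like this (the interface and bridge names, and the --link option, are assumptions; consult ct netif new bridge --help and ct netif new routed --help for the exact options):

```shell
# Bridged interface: the veth is plugged into an existing bridge
# (--link and the bridge name lxcbr0 are assumed here)
osctl ct netif new bridge --link lxcbr0 myct01 eth0

# Routed interface: addresses are routed to the host
osctl ct netif new routed myct01 eth1
```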
osctld generates config files inside the container, which are then read
and evaluated by its init system on boot. This is used primarily for hostname
and network configuration. Supported distributions are:
Other distributions have to be configured manually from the inside.
cgroup limits can be set either on groups, where they apply to all containers
in a group and also to all child groups, or directly on containers. cgroup
parameters can be managed by commands group cgparams and ct cgparams.
To make frequently used limits simpler to configure, there are several commands
built on top of group|ct cgparams:
group|ct set memory-limit to configure memory and swap limits
group|ct set cpu-limit to limit CPU usage using CPU quotas

Access to devices is managed using the devices cgroup controller. Groups
and containers can be given permission to read, write or mknod configured
block and character devices. If a container wants to access a device, access
to the device has to be allowed in its group and all its parent groups up to the
root group. This is why managing devices using group|ct cgparams commands
would be impractical and special commands group|ct devices exist.
The root group by default allows access to fundamental devices such as
/dev/null, /dev/urandom, TTYs, etc. These devices are marked as inheritable
and all child groups automatically inherit them and pass them to their
containers. Additional devices can be added in two ways:
See the group|ct devices command family.
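As a sketch, access to an extra device might be granted like this (the argument order and the -p option are assumptions; consult group devices add --help for the exact syntax):

```shell
# Allow /dev/fuse (char device 10:229) in group /default,
# adding it to parent groups as needed (-p is assumed)
osctl group devices add -p /default char 10 229 rwm /dev/fuse

# List devices available to a container
osctl ct devices ls myct01
```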
Every container resides in its own ZFS dataset. It is also possible to create
additional subdatasets and mount them within the container. See the
ct dataset command family for more information.
Arbitrary directories from the host can be mounted inside containers. Mounted
directories should use the same user namespace mapping as the container,
otherwise their contents will appear to be owned by nobody:nogroup and access
permissions will not work as they should.
See the ct mounts family of commands.
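A bind mount might be configured as follows (the option names are assumptions; consult ct mounts new --help for the exact syntax):

```shell
# Bind-mount a host directory into the container
# (--fs, --mountpoint, --type and --opts are assumed option names)
osctl ct mounts new --fs /tank/shared --mountpoint /mnt/shared \
  --type bind --opts bind,rw myct01

# List configured mounts
osctl ct mounts ls myct01
```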
Existing containers can be exported to a tar archive and later imported to the same or a different vpsAdminOS instance. The tar archive contains the container's root file system including all subdatasets and osctl configuration.
See commands ct export and ct import for more information.
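For example (paths and ids are illustrative):

```shell
# Export a container to a tar archive on the source node
osctl ct export myct01 /tmp/myct01.tar

# Import it on the same or another vpsAdminOS instance
osctl ct import /tmp/myct01.tar
```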
osctld supports transferring containers between different vpsAdminOS
instances with SSH used as a transport channel. Each vpsAdminOS node has
a system user called osctl-ct-receive. The source node connects to the
osctl-ct-receive user on the destination node. Authentication is based
on public/private keys.
On the source node, a public/private key pair is needed. It can be generated by
send key gen, or the keys can be manually installed to paths given by
send key path public and send key path private. Through another
communication channel, picked at your discretion, the public key of the source
node must be transferred to the destination node and authorized to send
containers to that node. Once transferred, the key can be authorized using
receive authorized-keys add or receive authorized-keys set.
The container transfer consists of several steps:
ct send config is used to prepare the environment on the destination node
and copy configuration
ct send rootfs sends over the container's rootfs
ct send sync optionally syncs rootfs changes, can be called multiple times
ct send state stops the container on the source node, performs
another rootfs sync and finally starts the container on the destination node
ct send cleanup is used to remove the container from the source node

Up until ct send state, the send can be cancelled using
ct send cancel.
ct send will perform all necessary send steps in succession.
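A complete transfer might therefore look like this (host names and the container id are illustrative):

```shell
# One-time setup: generate a key pair on the source node
osctl send key gen

# On the destination node, authorize the source node's public key
osctl receive authorized-keys add

# Back on the source node, send the container in one go;
# the individual steps can also be invoked separately, see above
osctl ct send myct01 dst.example.com
```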
Useful commands:
ct top - interactive container monitor
ct ps - list container processes
ct pid - identify containers by PID
ct log cat, ct log path - view container log file
ct su - switch to the container user
ct assets - list the container's assets and their state
healthcheck -a - check the state of assets on all pools
ct reconfigure can be used to regenerate LXC configuration
ct recover kill can be used to kill unresponsive container processes
ct recover cleanup can be used to clean up after a container crashed
ct recover state can be used to re-check container status

To keep track of all the datasets, directories and files osctld is managing,
each entity has command assets. It prints a list of all managed resources,
their purpose and state. Command healthcheck then checks the state
of all assets of selected pools and reports errors.
All entities support custom user attributes that can be used to store
additional data, i.e. a simple key-value store. Attribute names and values
are stored as strings. The intended attribute naming is vendor:key, where
vendor is a reversed domain name and key an arbitrary string, e.g.
org.vpsadminos.osctl:declarative.
Attributes can be set with command set attr, unset with unset attr and
read by ls or show commands.
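For example (the vendor prefix com.example is illustrative):

```shell
# Store a custom attribute on a container (vendor:key naming)
osctl ct set attr myct01 com.example:environment staging

# Read it back via the -o, --output option
osctl ct ls -o id,com.example:environment

# Remove it
osctl ct unset attr myct01 com.example:environment
```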
osctld does not destroy ZFS datasets right away, mainly because they can be busy.
Datasets to destroy are instead placed in the trash bin, which is a dedicated
dataset at <pool>/trash. The trash bin is emptied periodically every six hours.
See trash-bin command family.
--help
-j, --json
-p, --parsable
--[no-]color
-q, --quiet
--pool pool
--version

osctl provides shortcuts for selected commands. For example, the shortcut
for osctl ct ls is ct ls. When using a shortcut, it is not possible to pass
global options.
The following shortcuts are supported:
ct
group
healthcheck
id-range
pool
repo
user

pool install [options] pool
Install the pool into osctld.
User property org.vpsadminos.osctl:active is set to yes. osctld will
automatically import such marked pools on start. The pool is also immediately
imported, see pool import.

 --dataset dataset
Scope osctld to dataset on zpool pool. All osctld's data will be stored
in dataset. This option can be useful when the pool is used with other
applications or data.
pool uninstall pool
Uninstall the pool, reverting pool install.
No data is deleted from the pool, it will simply not be automatically imported
when osctld starts.

pool import -a,--all|pool
Import the pool into osctld. osctld will load all users, groups and
containers from the pool.

 -a, --all
Import all installed pools. This is what osctld does on start.
-s, --[no-]autostart
Start containers that are configured to be started automatically. Enabled
by default.
pool export [options] pool
Export the pool from osctld. No data is deleted, the pool and all its
content is merely removed from osctld. pool export aborts if any container
from the exported pool is running, unless option -f, --force is given.

 -f, --force
Export the pool even if there are containers running or an autostart plan
is still in progress. Running containers are stopped if -s,
--stop-containers is set, otherwise they're left alone.
-s, --[no-]stop-containers
Stop all containers from pool pool. Enabled by default.
-u, --[no-]unregister-users
Unregister users from pool pool from the system, i.e. remove entries
from /etc/passwd and /etc/group. Enabled by default.
-m, --message message
Message sent to logged-in users of containers that are stopped.
--if-imported
Export the pool if it is imported, exit successfully if it is not imported.
--abort
Abort an already running export of pool. The pool can be left in a partially
exported state, i.e. it can be disabled and some or all containers can be
stopped. To recover the pool after an aborted export, use
pool export --force --no-stop-containers followed by pool import.
pool ls [names...]

 -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
pool show pool

 -L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-H, --hide-header
Do not show header, useful for scripts.
pool assets [options] pool

 -v, --verbose
Show detected errors.
pool autostart queue [options] pool

 -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
pool autostart trigger pool
Manually trigger the pool's autostart queue.

pool autostart cancel pool
Cancel the pool's autostart queue.

pool set parallel-start pool n
Set how many containers from the pool can be started in parallel when the pool
is imported. Can usually be higher than parallel-stop, as the storage won't
be a bottleneck.

pool unset parallel-start pool
Reset parallel-start to the default value.

pool set parallel-stop pool n
Set how many containers from the pool can be stopped in parallel when the pool
is exported, e.g. during osctl shutdown. Defaults to 4.

pool unset parallel-stop pool
Reset parallel-stop to the default value.

pool set attr pool vendor:key value
Set custom user attribute. Attributes can be read with pool ls or pool show using the -o, --output
option. Attribute names should be in the vendor:key format, e.g.
org.vpsadminos.osctl:declarative.

pool unset attr pool vendor:key
Unset custom user attribute.

id-range new options id-range

 --start-id start-id
The first user/group ID. Required.
--block-size block-size
Number of user/group IDs that make up the minimum allocation unit.
Should be set to 65536 or more.
--block-count count
How many blocks from start-id should the range include. Defines the
maximum number of user namespace maps that can be allocated from this
range. Required.
id-range del id-range

id-range ls [id-ranges...]

 -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
id-range show id-range

 -L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-H, --hide-header
Do not show header, useful for scripts.
id-range table ls [options] id-range [all|allocated|free]

 -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
id-range table show [options] id-range block-index

 -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
id-range allocate id-range

 --block-count n
How many blocks to allocate. Defaults to 1 block.
--block-index n
Optional index of the first allocated block in the allocation table.
--owner string
Optional owner of the allocated blocks.
id-range free id-range

 --block-index n
Index of the first allocated block to free.
--owner owner
Free allocations belonging to owner.
id-range assets [options] id-range

 -v, --verbose
Show detected errors.
id-range set attr id-range vendor:key value
Set custom user attribute. Attributes can be read with id-range ls or id-range show using the -o,
--output option. Attribute names should be in the vendor:key format, e.g.
org.vpsadminos.osctl:declarative.

id-range unset attr id-range vendor:key
Unset custom user attribute.

user new options user
Create a new user. When option --id-range is used, or no mapping is set using options
--map, --map-uid and --map-gid, a new UID/GID range is allocated from
id-range and a default mapping is created.
To use a specific block, allocate it first with id-range allocate and then use option
--id-range-block-index together with --map, --map-uid or --map-gid.

 --pool pool
Pool name.
--id-range id-range
Name of an ID range to allocate UID/GID from. Defaults to ID range
called default.
--id-range-block-index n
Use an existing UID/GID allocation from id-range, or allocate a new
block at index n. The owner of the allocated block is not changed, so
existing blocks will not get automatically freed when the user is deleted.
--map id:lowerid:count
Provide both UID and GID mapping for user namespace. id is the beginning
of the range inside the user namespace, lowerid is the range beginning
on the host and count is the number of mapped IDs both inside and
outside the user namespace. This option can be used multiple times.
--map-uid uid:loweruid:count
Provide UID mapping for user namespace. uid is the beginning of
the range inside the user namespace, loweruid is the range beginning
on the host and count is the number of mapped UIDs both inside and
outside the user namespace. This option can be used multiple times.
--map-gid gid:lowergid:count
Provide GID mapping for user namespace. gid is the beginning of
the range inside the user namespace, lowergid is the range beginning
on the host and count is the number of mapped GIDs both inside and
outside the user namespace. This option can be used multiple times.
--[no-]standalone
Make the user standalone. Standalone users are not deleted together with
their containers, but are left behind. Enabled by default.
user del useruser ls [options] [names...] -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
--pool names
Filter by pool name, comma separated.
--registered
List only registered users.
--unregistered
List only unregistered users.
user show [options] user -L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-H, --hide-header
Do not show header, useful for scripts.
user reg all|user
Register the user, or all users, into the system, i.e. add entries
to /etc/passwd and /etc/group.

user unreg all|user
Unregister the user, or all users, from the system, i.e. remove entries
from /etc/passwd and /etc/group.

user subugids
Regenerate /etc/subuid and /etc/subgid.

user assets [options] user

 -v, --verbose
Show detected errors.
user map user [uid | gid | both]

 -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
user set standalone user
Mark the user as standalone, see option --standalone of user new.

user unset standalone user
Remove the standalone mark.

user set attr user vendor:key value
Set custom user attribute. Attributes can be read with user ls or user show using the -o, --output
option. Attribute names should be in the vendor:key format, e.g.
org.vpsadminos.osctl:declarative.

user unset attr user vendor:key
Unset custom user attribute.

ct new [options] ctid
Create a new container. To use an image from a remote repository, use option --distribution
and optionally also --version, --arch, --vendor or --variant.
All configured repositories are searched by default.
Use ct import to create containers from local files.
A custom dataset can be used with option --dataset. If the dataset already
contains a rootfs and you do not wish to use any image, signal this with
option --skip-image. Otherwise, the image to be used can be selected
using any of the methods above.

 --pool pool
Pool name. Defaults to the first available pool.
--user user
User name. If not provided, a new user is created.
--group group
Group name, defaults to group default from selected pool.
--dataset dataset
Use a custom dataset for the container's rootfs. The dataset and all its
parents are created if they don't already exist. If used with
--skip-image, the dataset is expected to already contain the rootfs
and --distribution and --version have to be provided.
--zfs-property property=value
A ZFS property passed to ZFS when creating container datasets.
Can be used multiple times.
--map-mode native|zfs
Specify UID/GID mapping mode. Defaults to native.
--skip-image
Do not import any image, leave the container's root filesystem empty.
Useful when you wish to set up the container manually.
--distribution distribution
Distribution name in lower case, e.g. alpine, centos, debian, ubuntu.
--version version
Distribution version. The format can differ among distributions, e.g.
alpine 3.6, centos 7.0, debian 9.0 or ubuntu 16.04.
--arch arch
Container architecture, e.g. x86_64 or x86. Defaults to the host system
architecture.
--vendor vendor
Vendor to be selected from the remote image repository.
--variant variant
Vendor variant to be selected from the remote image repository.
--repository repository
Instead of searching all configured repositories from appropriate pool,
use only repository name. The selected repository can be disabled.
ct del ctid

 -f, --force
Delete the container even if it is running. By default, running containers
cannot be deleted.
--prune
Prune the trash-bin after the container is deleted, see trash-bin prune.
ct reinstall [options] ctid
Remove the container's root file system contents and import an image again.
By default, osctld will attempt
to find the appropriate image for the container's distribution version in
remote repositories. This may not work if the container was created
from a local file, stream or if the distribution is too old and no longer
supported.
ct reinstall will
abort if there are snapshots present. You can use option -r,
--remove-snapshots to remove them.

 --from-file file
Create the container from a container image.
--distribution distribution
Distribution name in lower case, e.g. alpine, centos, debian, ubuntu.
--version version
Distribution version. The format can differ among distributions, e.g.
alpine 3.6, centos 7.0, debian 9.0 or ubuntu 16.04.
--arch arch
Container architecture, e.g. x86_64 or x86. Defaults to the host system
architecture.
--vendor vendor
Vendor to be selected from the remote image repository.
--variant variant
Vendor variant to be selected from the remote image repository.
--repository repository
Instead of searching all configured repositories from appropriate pool,
use only repository name. The selected repository can be disabled.
-r, --remove-snapshots
Remove all snapshots of the container's root dataset. ct reinstall
cannot proceed if there are snapshots present.
ct ls [options] [ctids...]

 -H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
--pool pools
Filter by pool, comma separated.
-u, --user users
Filter by user name, comma separated.
-g, --group groups
Filter by group name, comma separated.
-S, --state states
Filter by state, comma separated. Available states:
stopped, starting, running, stopping, aborting, freezing,
frozen, thawed.
-e, --ephemeral
Filter ephemeral containers.
-p, --persistent
Filter persistent (non-ephemeral) containers.
-d, --distribution distributions
Filter by distribution, comma separated.
-v, --version versions
Filter by distribution version, comma separated.
ct tree pool

ct show [options] ctid

 -L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-H, --hide-header
Do not show header, useful for scripts.
ct mount ctid

ct start [options] ctid

 -w, --wait seconds|infinity
How many seconds to wait for the container to enter state running.
Defaults to 120 seconds. Set to 0 to return immediately.
-F, --[no-]foreground
Open container console (can be later detached), see ct console.
-q, --queue
Enqueue the start operation using the pool's autostart facility. The pool
is configured to start a certain number of containers in parallel. Use
this option to add the container to the queue. This is useful when you're
manually starting a large number of containers.
-p, --priority n
Priority for the autostart queue. This option can be used together with
-q, --queue. See ct set autostart for more information.
-D, --[no-]debug
Configure LXC to write debug messages to the container's log file, see
ct log commands.
-a, --attach
Attach the container using ct attach after it starts. Conflicts with
-F, --foreground.
-u, --user-shell
When -a, --attach is used, load the shell that's configured
in the container's /etc/passwd for root and read personal
configuration files, such as .bashrc.
ct stop [options] ctid
Stop the container. By default, osctld will send a signal to the container's
init process to cleanly shutdown and wait until it finishes or timeout
seconds pass. If it times out, the container is killed. This behaviour can
be changed with options --timeout, --kill and --dont-kill.

 -F, --[no-]foreground
Open container console (can be later detached), see ct console.
-m, --message message
Message sent to logged-in container users.
-k, --kill
Do not request a clean shutdown, kill the container immediately.
--dont-kill
If the clean shutdown does not finish in timeout seconds, exit with
error, do not kill the container.
-t, --timeout timeout
How many seconds to wait for the container to cleanly shutdown before
killing it or failing, depending on whether option --dont-kill is set.
The default timeout is 300 seconds.
ct restart [options] ctid
By default, ct restart calls ct stop and ct start
in succession. Like with ct stop, if the container does not cleanly shutdown
in timeout seconds, it is killed. This behaviour can be changed with options
--timeout, --kill and --dont-kill.
When option --reboot is used, the container's init process is signaled to
reboot the system. osctld has no way of knowing whether the init process
responds and the reboot actually takes place.

 -w, --wait seconds|infinity
How many seconds to wait for the container to enter state running.
Applicable only for full restarts, i.e. when --reboot is not set.
Defaults to 120 seconds. Set to 0 to return immediately.
-F, --[no-]foreground
Open container console (can be later detached), see ct console.
-m, --message message
Message sent to logged-in container users.
-r, --reboot
Request a reboot of the container by signaling its init process.
If the init process does not respond to the configured signal, nothing
happens.
-k, --kill
Do not request a clean shutdown, kill the container immediately.
--dont-kill
If the clean shutdown does not finish in timeout seconds, exit with
error, do not kill the container.
-t, --timeout timeout
How many seconds to wait for the container to cleanly shutdown before
killing it or failing, depending on whether option --dont-kill is set.
The default timeout is 300 seconds.
-a, --attach
Attach the container using ct attach after it starts. Conflicts with
-F, --foreground.
-u, --user-shell
When -a, --attach is used, load the shell that's configured
in the container's /etc/passwd for root and read personal
configuration files, such as .bashrc.
ct attach [options] ctid, ct enter [options] ctid
Enter the container and open an interactive shell. osctld tries to open bash,
busybox and falls back to /bin/sh. The shell does not read any personal
configuration files from within the container in order to provide a unified
shell interface across all containers. Use option --user-shell to change
this behaviour.

 -u, --user-shell
Load the shell that's configured in the container's /etc/passwd for
root and read personal configuration files, such as .bashrc.
ct console [options] ctid
Attach the container's console. When global option -j, --json is set, the console will not manipulate the
TTY, but instead will accept JSON commands on standard input. Output from the
console will be written to standard output as-is. To detach the console, send
SIGTERM to osctl. To learn more about the JSON commands, see
CONSOLE INTERFACE.

 -t, --tty n
Select which TTY to attach, defaults to 0.
ct exec [options] ctid cmd...

 -r, --run-container
If the container isn't already running, start it, but run cmd instead
of the container's init system. lxc-init is run as PID 1 to reap child
processes and to run cmd. The container is stopped when cmd finishes.
-n, --network
If the container is started using the -r, --run-container option,
configure the network before running cmd. Normally the network is
brought up by the container's init system, for which osctld generates
configuration files. Since ct exec does not use the container's init
system when starting the container, the network is by default not
configured.
Note that only static IP address and route configuration can be set up in this way. A DHCP client is not run.
ct runscript [options] ctid script|- [arguments...]
Execute a script from the host within the container. When script is -, the script to execute is read from the standard input.
In this case, the script cannot read from the standard input itself.

 -r, --run-container
If the container isn't already running, start it, but run script instead
of the container's init system. lxc-init is run as PID 1 to reap child
processes and to run script. The container is stopped when script
finishes.
-n, --network
If the container is started using the -r, --run-container option,
configure the network before running script. Normally the network is
brought up by the container's init system, for which osctld generates
configuration files. Since ct runscript does not use the container's init
system when starting the container, the network is by default not
configured.
Note that only static IP address and route configuration can be set up in this way. A DHCP client is not run.
ct cat ctid file...

ct wall [options] [ctid...]

 -m, --message msg
The message to send to the users. The message is read from the standard
input if this option is not provided.
-n, --hide-banner
Suppress the banner.
ct set autostart [options] ctid
Start the container automatically when osctld starts or when its pool is
imported.

 -p, --priority n
Priority determines container start order. 0 is the highest priority,
higher number means lower priority. Containers with the same priority
are ordered by their ids. The default priority is 1000.
-d, --delay n
Time in seconds for which osctld waits until the next container is
started if the system load average over the last minute is equal to
or greater than the number of processors. The default is 5 seconds.
ct unset autostart ctid
Disable automatic starting.

ct set ephemeral ctid
Mark the container as ephemeral. An ephemeral container is destroyed when
stopped, whether by ct stop, halt from within the container, or another
osctl operation, such as ct export.

ct unset ephemeral ctid
Remove the ephemeral mark.

ct set distribution ctid distribution version [arch [vendor [variant]]]
Change the information about the container's distribution.

ct set image-config ctid
Reapply configuration from the container image. The image can be selected
using the options below or --from-file. Configuration values from the container image will replace
current configuration.

 --from-file file
Use container image stored in file.
--distribution distribution
Distribution name in lower case, e.g. alpine, centos, debian, ubuntu.
--version version
Distribution version. The format can differ among distributions, e.g.
alpine 3.6, centos 7.0, debian 9.0 or ubuntu 16.04.
--arch arch
Container architecture, e.g. x86_64 or x86. Defaults to the host system
architecture.
--vendor vendor
Vendor to be selected from the remote image repository.
--variant variant
Vendor variant to be selected from the remote image repository.
--repository repository
Instead of searching all configured repositories from appropriate pool,
use only repository name. The selected repository can be disabled.
ct set hostname ctid hostname
Set the container's hostname, including an entry in /etc/hosts. The hostname
is configured on every container start.

ct unset hostname ctid
Unset the container's hostname. osctld will not touch the container's hostname
anymore.

ct set dns-resolver ctid address...
Configure DNS resolvers, which are written to the container's /etc/resolv.conf
on every start. Note that a DHCP client within the container may overwrite
/etc/resolv.conf with DNS servers from DHCP server.

ct unset dns-resolver ctid
Unset DNS resolvers. osctld will no longer manipulate the
container's /etc/resolv.conf.

ct set nesting ctid
Enable container nesting. The container's AppArmor profile is changed
to osctl-ct-nesting.

ct unset nesting ctid
Disable container nesting. If AppArmor profile osctl-ct-nesting is set, it
is changed to lxc-container-default-cgns.

ct set cpu-package ctid cpu-package|auto|none
Configure the container's CPU package. auto is the default value,
the CPU scheduler will assign a package on its own when the container is starting.
none will disable the CPU scheduler on this container. Setting a custom
cpu-package will statically pin the container to a specific CPU package.
See osctl cpu-scheduler package ls for a list of possible CPU packages
on this system.osctl cpu-scheduler status. The container needs to be restarted for this
change to have an effect.ct unset cpu-package ctidauto: the CPU package
is assigned dynamically by the CPU scheduler if the scheduler itself is enabled,
see osctl cpu-scheduler status. The container needs to be restarted for this
change to have an effect.ct set seccomp ctid profilect unset seccomp ctid/run/osctl/configs/lxc/common.seccomp.ct set init-cmd ctid binary [arguments...]/sbin/init.ct unset init-cmd ctid/sbin/init.ct set start-menu [options] ctid -t, --timeout n
Timeout in seconds after which the default init system is started
automatically.
ct unset start-menu ctid
Disable the start menu.
ct set impermanence [options] ctid
Enable impermanence: the container's root filesystem is destroyed and
recreated on every start. Persistent data can be kept in subdatasets,
e.g. /nix.
--zfs-property property=value
A ZFS property passed to the impermanent ZFS dataset used
as a root filesystem for the container. Can be used multiple times.
ct unset impermanence ctid
Disable impermanence.
ct set map-mode ctid native|zfs
Select the UID/GID mapping mode. native uses ID-mapped bind-mounts and is the
recommended setting for new containers. zfs maps UID/GID using vpsAdminOS-specific
ZFS properties uidmap/gidmap that predate ID-mapped bind-mounts. The container
has to be stopped for map-mode to be changed.
ct set raw lxc ctid
Set raw LXC configuration read from the standard input. It is appended
to the container's LXC config.
ct unset raw lxc ctid
Remove raw LXC configuration from the container.
ct set attr ctid vendor:key value
Set custom user attribute vendor:key for the container. Attributes can be
read by ct ls or ct show using the -o, --output
option. Attribute names in namespace org.vpsadminos.osctl are reserved for
internal use, e.g. org.vpsadminos.osctl:declarative. Attributes are included
in ct export and transferred to other nodes by
ct send.
ct unset attr ctid vendor:key
Unset custom user attribute vendor:key.
ct set cpu-limit ctid limit
Configure CPU limit. limit is given in percent, where 100 means the container can
fully utilize one CPU core.
This command is a shortcut to ct cgparams set, two parameters are
configured: cpu.cfs_period_us and cpu.cfs_quota_us. The quota is calculated
as: limit / 100 * period.
-p, --period period
Length of measured period in microseconds, defaults to 100000,
i.e. 100 ms.
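For example, the quota for a hypothetical limit of 150 % with the default
period can be computed with plain shell arithmetic (the numbers are only
illustrative):

```shell
limit=150        # percent; 100 equals one fully utilized core
period=100000    # default cpu.cfs_period_us, in microseconds

# quota = limit / 100 * period -> value written to cpu.cfs_quota_us
quota=$(( limit * period / 100 ))
echo "$quota"    # prints 150000
```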
ct unset cpu-limit ctid
Remove the CPU limit. This command is a shortcut to ct cgparams unset.
ct set memory-limit ctid memory [swap]
Configure memory and swap limits. This command is a shortcut to
ct cgparams set. Memory limit is set
with cgroup parameter memory.limit_in_bytes. If swap limit is given as well,
parameter memory.memsw.limit_in_bytes is set to memory + swap. The limits
can be given in bytes, or with an appropriate suffix, i.e. k, m, g, or t.
ct unset memory-limit ctid
Remove memory limits. This command is a shortcut to ct cgparams unset.
ct cp ctid new-id
Copy (clone) the container, creating container new-id.
--[no-]consistent
When cloning a running container, it has to be stopped if the copy is to
be consistent. An inconsistent copy will not contain data that the running
container holds in memory and that has not yet been saved to disk by its
applications. Enabled by default.
--pool pool
Name of the target pool. By default, container new-id is created on
the same pool as container ctid.
--user user
Name of the target user. By default, the user of container ctid is used.
When copying to a different pool, the target user has to exist before
ct cp is run.
--group group
Name of the target group. By default, the group of container ctid is used.
When copying to a different pool, the target group has to exist before
ct cp is run.
--dataset name
Custom name of a dataset from the target pool, where the new container's
root filesystem will be stored.
--no-network-interfaces
Remove network interfaces from the new container config. This is useful
for cloning containers without duplicating network configuration.
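For instance, a container could be cloned to a new ID on the same pool while
dropping its network configuration (the container names are hypothetical):

```shell
# Clone container myct01 to myct02; strip network interfaces so the
# addresses are not duplicated on the new container.
osctl ct cp myct01 myct02 --no-network-interfaces
```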
ct mv ctid new-id
Move the container to a different pool and/or rename it to new-id.
--pool pool
Name of the target pool. By default, container new-id is created on
the same pool as container ctid.
--user user
Name of the target user. By default, the user of container ctid is used.
When moving to a different pool, the target user has to exist before
ct mv is run.
--group group
Name of the target group. By default, the group of container ctid is used.
When moving to a different pool, the target group has to exist before
ct mv is run.
--dataset name
Custom name of a dataset from the target pool, where the new container's
root filesystem will be stored.
ct chown ctid user
Change the container's user, i.e. move it to a different user namespace map.
The container has to be stopped first.
ct chgrp [options] ctid group
Move the container to a different group. The container has to be stopped
first.
--missing-devices check|provide|remove
The container may require access to devices that are not available in the
target group. This option determines how osctld should treat those
missing devices. check means that if a missing device is found, an error
is returned and the operation is aborted. provide will add missing
devices to the target group and all its parent groups, it will also ensure
sufficient access mode. remove will remove all unavailable devices from
the container. The default mode is check.
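A hypothetical move to another group, letting osctld add any missing devices
to the target group chain instead of aborting:

```shell
# Move myct01 to group /limited; missing devices are added to the target
# group and its parents with sufficient access mode.
osctl ct chgrp --missing-devices provide myct01 /limited
```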
ct boot [options] ctid
Start the container from an arbitrary container image, leaving its root
dataset untouched. When the container is restarted, all changes made by
ct boot are forgotten
and the container's root dataset will be used again.
ct boot can start the container from an image from a repository (use options
--distribution, --version, etc.) or from a local file (use option
--from-file). By default, ct boot will try to use the container's
distribution info to find the appropriate container image and start it.
--force
If the container is running, stop it and boot the new image.
--from-file file
Create the container from a container image.
--distribution distribution
Distribution name in lower case, e.g. alpine, centos, debian, ubuntu.
--version version
Distribution version. The format can differ among distributions, e.g.
alpine 3.6, centos 7.0, debian 9.0 or ubuntu 16.04.
--arch arch
Container architecture, e.g. x86_64 or x86. Defaults to the host system
architecture.
--vendor vendor
Vendor to be selected from the remote image repository.
--variant variant
Vendor variant to be selected from the remote image repository.
--repository repository
Instead of searching all configured repositories from appropriate pool,
use only repository name. The selected repository can be disabled.
--mount-root-dataset dir
Mount the container's root dataset to dir inside the container.
--zfs-property property=value
A ZFS property passed to the newly created dataset used as a temporary
root filesystem for the container. Can be used multiple times.
-w, --wait seconds|infinity
How many seconds to wait for the container to enter state running.
Defaults to 120 seconds. Set to 0 to return immediately.
-F, --[no-]foreground
Open container console (can be later detached), see ct console.
-q, --queue
Enqueue the start operation using the pool's autostart facility. The pool
is configured to start a certain number of containers in parallel. Use
this option to add the container to the queue. This is useful when you're
manually starting a large number of containers.
-p, --priority n
Priority for the autostart queue. This option can be used together with
-q, --queue. See ct set autostart for more information.
-D, --[no-]debug
Configure LXC to write debug messages to the container's log file, see
ct log commands.
-a, --attach
Attach the container using ct attach after it starts. Conflicts with
-F, --foreground.
-u, --user-shell
When -a, --attach is used, load the shell that's configured
in the container's /etc/passwd for root and read personal
configuration files, such as .bashrc.
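As an illustration, a container could be booted from a clean repository image
for rescue work, with its real root dataset mounted inside (the container ID
and version values are examples):

```shell
# Boot myct01 from a repository image and expose the original root
# dataset at /mnt/ct inside the booted system.
osctl ct boot --distribution alpine --mount-root-dataset /mnt/ct myct01
```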
ct config reload ctid
Reload the container's configuration file from disk.
ct config replace ctid
Replace the container's configuration file by data read from the standard
input. The configuration has to be in the format of the running osctld version and has
to contain required options, otherwise errors may occur. This is considered
a low level interface, since a lot of runtime checks are bypassed. The
configuration is reloaded after ct config replace is called.
ct passwd ctid user [password]
Change password of user in the container. The container has to be running,
the password is changed using passwd or chpasswd from the container's system.
ct su ctid
Open a shell as the container's root. Because the shell is not started
through lxc-start,
ct console for tty0 will not be functional.
ct log cat ctid
Write the contents of the container's log to the standard output.
ct log path ctid
Write the path to the container's log file to the standard output.
ct reconfigure ctid
Regenerate the container's LXC configuration.
ct freeze ctid
Freeze the container and all its processes.
ct unfreeze ctid
ct thaw ctid
Unfreeze (thaw) the container and all its processes.
ct bisect [ctid...]
Search for a container causing problems on the system by disabling
containers one by one.
-a, --action freeze|stop
How to disable containers, defaults to freeze.
-x, --exclude ctids
Comma-separated list of containers ids to exclude from the bisect.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
--pool pools
Filter by pool, comma separated.
-u, --user users
Filter by user name, comma separated.
-g, --group groups
Filter by group name, comma separated.
-e, --ephemeral
Filter ephemeral containers.
-p, --persistent
Filter persistent (non-ephemeral) containers.
-d, --distribution distributions
Filter by distribution, comma separated.
-v, --version versions
Filter by distribution version, comma separated.
ct export [options] ctid file --[no-]consistent
Enable/disable consistent export. When consistently exporting a running
container, the container is stopped, so that applications can gracefully
exit and save their state to disk. Once the export is finished,
the container is restarted.
--compression auto | off | gzip
Enable/disable compression of the dumped ZFS data streams. The default is
auto, which uses compressed stream, if the dataset has ZFS compression
enabled. If the compression is not enabled on the dataset, the stream
will be compressed using gzip. off disables compression, but if
ZFS compression is enabled, the data is dumped as-is. gzip enforces
compression, even if ZFS compression is enabled.
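A sketch of exporting a running container consistently into a file (the
target path is hypothetical):

```shell
# Stop the container during the export so application state is flushed
# to disk, restart it afterwards; enforce gzip compression of the stream.
osctl ct export --consistent --compression gzip myct01 /tank/backups/myct01.tar
```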
ct import [options] file
Import a container from file, previously created by ct export.
--as-id ctid
Import the container and change its id to ctid. Using this option, it is
possible to import the same file multiple times, essentially cloning
the containers.
--as-user name
Import the container as an existing user name. User configuration from
file is not used.
--as-group name
Import the container into an existing group name. Group configuration
from file is not used.
--dataset dataset
Use a custom dataset for the container's rootfs. The dataset and all its
parents are created, if they do not already exist.
--zfs-property property=value
A ZFS property passed to ZFS when creating container datasets.
Can be used multiple times.
--map-mode native|zfs
Specify UID/GID mapping mode. Defaults to native.
--missing-devices check|provide|remove
The imported container may require access to devices that are not configured
on this system. This option determines how should osctld treat those missing
devices. check means that if a missing device is found, an error is returned
and the import is aborted. provide will add missing devices to all parent
groups and ensure sufficient access mode. remove will remove all unconfigured
devices from the container. The default mode is check.
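An exported archive can be imported under a new ID, effectively cloning the
container (IDs, datasets and paths are illustrative):

```shell
# Import the archive as container myct02 on a custom dataset; devices
# unavailable on this system are dropped from the config.
osctl ct import --as-id myct02 --dataset tank/custom/myct02 \
  --missing-devices remove /tank/backups/myct01.tar
```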
ct send [options] ctid destination
Send the container to another vpsAdminOS node. This is a shortcut for
running ct send config, ct send rootfs, ct send state and
ct send cleanup in succession.
-p, --port port
SSH port, defaults to 22.
--passphrase passphrase
Provide passphrase if the destination node requires it for authentication.
--as-id ctid
Send the container with a different ID.
--as-user user
Send the container with a different user name. The user configuration
remains the same, it is only the name that is changed.
--as-group group
Send the container with a different group name. The group configuration
remains the same, it is only the name that is changed.
--to-pool pool
Select pool on the target node to send the container to. If not set, the
target node uses its first available pool.
--clone
Do not move the container to destination, but clone it.
--no-consistent
When --clone is used, the container is by default stopped to store all
state on disk. After a snapshot with all state is taken, the container
is started again. --no-consistent can be used to clone the container
while it is running.
--no-restart
Do not restart the container on this node after it is cloned to the target
node.
--no-start
Do not start the container on the target node, keep it stopped.
--no-network-interfaces
Remove network interfaces from the container config sent to destination.
This is useful for cloning containers without duplicating network
configuration.
--no-snapshots
Do not send existing snapshots to destination. Only temporary snapshots
created for the send process are sent.
--from-snapshot snapshot
Start the transfer from snapshot. snapshot must be in the short form,
without dataset name. The at sign is optional, e.g. @my-snapshot or
my-snapshot. This snapshot must exist on all container datasets.
--preexisting-datasets
Assume that a common snapshot is on the local node and also already
on the destination node. Use option --from-snapshot to specify
the snapshot name. The common snapshot is then used as a base for
incremental streams.
Note that the common snapshot must exist for all container datasets.
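A full migration to another node might then be invoked as follows (the
hostname and pool name are examples):

```shell
# Move myct01 to node2 over SSH, placing it on pool tank there;
# adding --clone instead would leave the source container in place.
osctl ct send --to-pool tank myct01 node2.example.com
```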
ct send config [options] ctid destination
Send the container's configuration to the destination node. This is the
first step of the send process.
-p, --port port
SSH port, defaults to 22.
--passphrase passphrase
Provide passphrase if the destination node requires it for authentication.
--as-id ctid
Send the container with a different ID.
--as-user user
Send the container with a different user name. The user configuration
remains the same, it is only the name that is changed.
--as-group group
Send the container with a different group name. The group configuration
remains the same, it is only the name that is changed.
--to-pool pool
Select pool on the target node to send the container to. If not set, the
target node uses its first available pool.
--no-network-interfaces
Remove network interfaces from the container config sent to destination.
This is useful for cloning containers without duplicating network
configuration.
--no-snapshots
Do not send existing snapshots to destination. Only temporary snapshots
created for the send process are sent.
--from-snapshot snapshot
Start the transfer from snapshot. snapshot must be in the short form,
without dataset name. The at sign is optional, e.g. @my-snapshot or
my-snapshot. This snapshot must exist on all container datasets.
--preexisting-datasets
Assume that a common snapshot is on the local node and also already
on the destination node. Use option --from-snapshot to specify
the snapshot name. The common snapshot is then used as a base for
incremental streams.
Note that the common snapshot must exist for all container datasets.
ct send rootfs ctid
ct send rootfs takes snapshots of the container's datasets
and sends them to the destination.
ct send sync ctid
Send incremental changes made since ct send rootfs or the previous ct send sync.
This send step is optional and can be used to keep the source and destination
containers close until ct send state is called.
ct send state [options] ctid
Stop the container, send the final incremental changes to the destination
and start the container there.
--clone
Do not move the container to destination, but clone it.
--no-consistent
When --clone is used, the container is by default stopped to store all
state on disk. --no-consistent can be used to clone the container while
it is running.
--no-restart
Do not restart the container on this node after it is cloned to the target
node.
--no-start
Do not start the container on the target node, keep it stopped.
ct send cleanup [options] ctid
Perform a cleanup after the send process, removing the send state and,
unless --clone was used, the source container.
ct send cancel [options] ctid
Cancel a send in progress. The send can be cancelled only between the
individual steps, up until ct send state, it cannot stop the send if one of the
steps is still in progress. ct send cancel --force can be used instead of
ct send cleanup to keep the source container and drop the send state.
-f, --force
Cancel the send state on the local node, even if the remote node
refuses to cancel. This is helpful when the send state between the
two nodes gets out of sync. The remote node may remain in an inconsistent
state, but from there, the container can be deleted using osctl ct del
if needed.
-l, --local
Cancel the send state only on the local node, do not even attempt to
contact the target node. The remote node may remain in an inconsistent
state, but from there, the container can be deleted using osctl ct del
if needed.
ct monitor ctid
Monitor the container's state changes and print them on the standard
output. If option -j, --json is used, the state changes are reported
in JSON.
ct wait ctid state...
Block until the container enters one of the given states.
ct top [options]
Interactive browser of container resource usage. ct top can function in two
modes: realtime and cumulative. realtime mode shows CPU usage in percent
and other resources as usage per second, except memory and the number of
processes. cumulative mode shows all resource usage accumulated from the
time ct top was started.
| Keys | Action |
|---|---|
| q | Quit |
| <, >, left, right | Change sort column |
| r, R | Reverse sort order |
| up, down | Select containers |
| space | Highlight selected container |
| enter, t | Open top and filter container processes |
| h | Open htop and filter container processes |
| PageDown | Scroll down |
| PageUp | Scroll up |
| Home | Scroll to the top |
| End | Scroll to the bottom |
| m | Toggle between realtime and cumulative mode |
| p | Pause/unpause resource tracking |
| / | Filter containers by ID. Confirm search by enter, cancel by escape |
| ? | Show help message |
When option -j, --json is used, the TUI is replaced by JSON
periodically printed on the standard output, with every line describing resource
usage at the time of writing. ct top with JSON output can be manually
refreshed by sending it SIGUSR1.
-r, --rate n
Refresh rate in seconds, defaults to 1 second.
--no-processes
Disable tracking of process states, which needs to walk through all
processes in /proc. Relevant only for TUI, JSON output does not contain
this information.
--no-iostat
Do not track the host's io stats using zpool iostat.
ct pid [pid...] | --
Find out which containers the given process IDs belong to. If the only
argument is --, the PIDs are read from
the standard input, one PID per line.
-H, --hide-header
Do not show header, useful for scripting.
ct uid [uid...] | --
Find out which containers the given user IDs belong to. If the only
argument is --, the user IDs
are read from the standard input, one UID per line.
-H, --hide-header
Do not show header, useful for scripting.
ct gid [gid...] | --
Find out which containers the given group IDs belong to. If the only
argument is --, the group IDs
are read from the standard input, one GID per line.
-H, --hide-header
Do not show header, useful for scripting.
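These lookup commands compose with standard tools; for example, PIDs can be
piped in via -- (illustrative usage, assumes containers running nginx):

```shell
# Identify the containers owning all currently running processes
# named "nginx"; PIDs are read from standard input, one per line.
pgrep nginx | osctl ct pid --
```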
ct ps [ctid...] | --
List processes belonging to containers. If no ctid is given, processes of
all containers are listed. -- will filter processes from the host.
Available parameters:
pool - pool name
ctid - container id
pid - process ID as seen on the host
ctpid - process ID as seen inside the container
ruid - real UID as seen on the host
rgid - real GID as seen on the host
euid - effective UID as seen on the host
egid - effective GID as seen on the host
ctruid - real user ID as seen inside the container
ctrgid - real group ID as seen inside the container
cteuid - effective user ID as seen inside the container
ctegid - effective group ID as seen inside the container
vmsize - virtual memory size in bytes
rss - resident set size in bytes
state - current process state, see proc(5)
start - process start time
time - time spent using CPU
command - full command string with arguments
name - command name (only executable)
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
-p, --parameter parameter∙value
Filter processes by parameter value, where ∙ can be one of:
= checks equality
!= checks inequality
=~ matches regular expression, works only on string parameters
!~ must not match regular expression, works only on string parameters
>, <, >=, <= make comparisons between numeric parameters
Memory values can be given in bytes, or with an appropriate suffix, i.e.
k, m, g, or t.
ct assets [options] ctid
List the container's assets (datasets, files, directories) and their state.
-v, --verbose
Show detected errors.
ct cgparams ls [options] ctid [parameters...]
List cgroup parameters configured for the container.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-v, --version 1|2|all
Select parameters by cgroup version. Defaults to all.
-S, --subsystem subsystem
Filter by cgroup subsystem, comma separated.
-a, --all
Include parameters from parent groups up to root group.
ct cgparams set ctid parameter value...
Set cgroup parameter for the container. osctld will
make sure this parameter is always set when the container is started. The
parameter can be for example cpu.shares or memory.limit_in_bytes. cgroup
subsystem is derived from the parameter name, you do not need to supply it.
It is possible to set multiple values for a parameter. This is used for
example by the devices cgroup subsystem, where you may need to write to
devices.deny and devices.allow multiple times.
-a, --append
Append new values, do not overwrite previously configured values for
parameter.
-v, --version 1|2
Specify cgroup version. Defaults to the version the system currently uses.
ct cgparams unset ctid parameter
Unset cgroup parameter from the container, removing it from the osctld
config. Selected parameters are reset to their default values:
cpu.cfs_quota_us
memory.limit_in_bytes
memory.memsw.limit_in_bytes
-v, --version 1|2
Specify cgroup version.
ct cgparams apply ctid
Apply all cgroup parameters defined for the container, its group and all
its parent groups.
ct cgparams replace ctid
Replace all configured cgroup parameters by data read from the standard
input. The data has to be in JSON, in the following format:
{
"parameters": [
{
"version": <cgroup version>,
"subsystem": <cgroup subsystem>,
"parameter": <parameter name>,
"value": [ values ]
}
...
]
}
ct devices ls [options] ctid
List devices available to the container.
-H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
ct devices add [options] ctid block|char major minor mode [device]
Allow the container to use a block/char device identified by the major
and minor numbers, see mknod(1) for more information. mode determines
what the container is allowed to do: r to read, w to write, m to call
mknod. For now, unprivileged containers cannot call mknod, so allowing it
here doesn't do anything. If device is given, osctld will prepare the device
node within the container's /dev during every container start.
The device is added to the container as promoted, see ct devices promote.
-p, --[no-]parents
The device that is being added has to be provided by all parent groups,
up to the root group. When this switch is enabled, osctld will add
the device to all parent groups that do not already have it.
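For example, read/write access to the conventional fuse character device
(10:229) could be granted along with its device node:

```shell
# Allow char device 10:229 (fuse) with read/write access and create
# /dev/fuse in the container; -p adds it to parent groups if missing.
osctl ct devices add -p myct01 char 10 229 rw /dev/fuse
```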
ct devices del ctid block|char major minor
Forbid the container to use the given device and remove its device node,
if it exists.
ct devices chmod [options] ctid block|char major minor mode
Change the access mode of the given device. The mode can be changed only
if the container's group provides the device with all required access
modes, or if -p, --parents is used.
-p, --parents
Ensure that all parent groups provide the device with the required
access mode. Parents that do not provide correct access modes are updated.
ct devices promote ctid block|char major minor
Promote an inherited device. A promoted device is configured directly on
the container and cannot be removed by group devices del -r. Promoted
devices are included in ct export and ct import, and are sent to other
nodes by ct send config.
ct devices inherit ctid block|char major minor
Inherit the given device from the container's group, the reverse of
ct devices promote. The access mode,
if different from the group, will revert to the access mode defined by the
parent group.
ct devices replace ctid
Replace the configured devices by data read from the standard input. The
data has to be in JSON, in the format below. Note that this low level
interface does not check whether parent groups provide the devices, use
individual ct devices commands if you wish for osctld
to enforce this rule.
{
"devices": [
{
"dev_name": <optional device node>,
"type": block|char,
"major": <major number or asterisk>,
"minor": <minor number or asterisk>,
"mode": <combinations of r,w,m>,
"inherit": true|false
}
...
]
}
ct prlimits ls ctid [limits...]
List the container's process resource limits. If limits are given, only
the selected limits are listed.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
ct prlimits set ctid limit soft_and_hard
ct prlimits set ctid limit soft hard
Set the soft and hard values of process resource limit limit. The limit
name is given in lower case and without the RLIMIT_ prefix, e.g. RLIMIT_NOFILE should
be specified as nofile.
ct prlimits unset ctid limit
Remove the configured resource limit.
ct netif ls [options] [ctid]
List the container's network interfaces, or all network interfaces if
ctid is not given.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-s, --sort parameters
Sort output by parameters, comma separated.
-l, --link bridge
Filter by linked bridge.
-t, --type type
Filter by interface type (bridge or routed)
ct netif new bridge [options] --link bridge ctid ifname
Create a new bridged veth interface, linked with a bridge on the host.
The bridge is not managed by osctld, it must be provided
by the system administrator in advance.
--link bridge
What bridge should the interface be linked with, required.
--enable
Enable the interface. This is the default.
--disable
Disable the interface. Disabled interface is kept down on the host,
it remains up inside the container.
--[no-]dhcp
If enabled, the container's interface will be setup by DHCP. This option
controls DHCP client within the container for supported distributions.
DHCP server must be provided by the host, e.g. using Nix option
networking.dhcpd.
When DHCP is disabled, you can assign IP addresses manually using
ct netif ip commands.
Enabled by default.
--gateway-v4 auto|none|address
IPv4 gateway to use when DHCP is disabled. If set to auto, the primary
address of the linked bridge is used as a gateway.
--gateway-v6 auto|none|address
IPv6 gateway to use when DHCP is disabled. If set to auto, the primary
address of the linked bridge is used as a gateway.
--hwaddr addr
Set a custom MAC address. Every x in the address is replaced by
a random value. By default, the address is dynamically allocated.
--tx-queues n
Set the number of transmit queues.
--rx-queues n
Set the number of receive queues.
--max-tx rate|unlimited
Set a shaper on the interface to limit outgoing data. rate is given
in bits per second, or with an appropriate suffix, i.e. k, m, g,
or t. When set to 0 or unlimited, the shaper is disabled.
--max-rx rate|unlimited
Set a shaper on the interface to limit incoming data. rate is given
in bits per second, or with an appropriate suffix, i.e. k, m, g,
or t. When set to 0 or unlimited, the shaper is disabled.
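A hypothetical bridged interface with an egress shaper might be created like
this (the bridge name is an example and must already exist on the host):

```shell
# Create eth0 linked to host bridge lxcbr0 and limit outgoing
# traffic to 100 Mbps.
osctl ct netif new bridge --link lxcbr0 --max-tx 100m myct01 eth0
```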
ct netif new routed [options] ctid ifname
Create a new routed veth interface. IP addresses are routed to the
container's veth. osctld will automatically setup appropriate routes on the host
veth interface and generate configuration files for the container's network
system. The interface will appear as ifname within the container.
--enable
Enable the interface. This is the default.
--disable
Disable the interface. Disabled interface is kept down on the host,
it remains up inside the container.
--hwaddr addr
Set a custom MAC address. Every x in the address is replaced by
a random value. By default, the address is dynamically allocated.
--tx-queues n
Set the number of transmit queues.
--rx-queues n
Set the number of receive queues.
--max-tx rate|unlimited
Set a shaper on the interface to limit outgoing data. rate is given
in bits per second, or with an appropriate suffix, i.e. k, m, g,
or t. When set to 0 or unlimited, the shaper is disabled.
--max-rx rate|unlimited
Set a shaper on the interface to limit incoming data. rate is given
in bits per second, or with an appropriate suffix, i.e. k, m, g,
or t. When set to 0 or unlimited, the shaper is disabled.
ct netif del ctid ifname
Remove interface ifname from the container.
ct netif rename ctid ifname new-ifname
Rename the network interface. The container has to be stopped for the
rename to take place.
ct netif set ctid ifname
Change properties of the network interface.
--enable
Enable the interface.
--disable
Disable the interface. Disabled interface is kept down on the host,
it remains up inside the container.
--link bridge
What bridge should the interface be linked with. Applicable only for
bridged interfaces.
--enable-dhcp
Enables DHCP client within the container for supported distributions.
DHCP server must be provided by the host, e.g. using Nix option
networking.dhcpd. Applicable only for bridged interfaces.
--disable-dhcp
Disables DHCP client within the container. When disabled, IP addresses
can be assigned manually using ct netif ip commands. Applicable only
for bridged interfaces.
--gateway-v4 auto|none|address
IPv4 gateway to use when DHCP is disabled. If set to auto, the primary
address of the linked bridge is used as a gateway. Applicable only for
bridged interfaces.
--gateway-v6 auto|none|address
IPv6 gateway to use when DHCP is disabled. If set to auto, the primary
address of the linked bridge is used as a gateway. Applicable only for
bridged interfaces.
--hwaddr addr
Change MAC address. Every x in the address is replaced by
a random value. Use - to assign the MAC address dynamically.
--tx-queues n
Set the number of transmit queues.
--rx-queues n
Set the number of receive queues.
--max-tx rate|unlimited
Set a shaper on the interface to limit outgoing data. rate is given
in bits per second, or with an appropriate suffix, i.e. k, m, g,
or t. When set to 0 or unlimited, the shaper is disabled.
--max-rx rate|unlimited
Set a shaper on the interface to limit incoming data. rate is given
in bits per second, or with an appropriate suffix, i.e. k, m, g,
or t. When set to 0 or unlimited, the shaper is disabled.
ct netif ip add [options] ctid ifname addr
Add IP address addr to interface ifname. osctld will
setup routing in case of routed interface and add the IP address to the
container's interface.
--no-route
For routed interfaces, a new route is created automatically, unless
there is already a route that includes addr. This option prevents
the route from being created. You will have to configure routing
on your own using ct netif route commands.
--route-as network
Instead of routing addr, setup a route for network instead. This
is useful when you're adding an IP address from a larger network
and wish the entire network to be routed to the container.
Applicable only for routed interfaces.
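For instance, a single address from a larger network can be added while
routing the whole network to the container (all addresses are examples):

```shell
# Add 10.0.0.10 to eth1 and route the entire 10.0.0.0/24 network
# to the container instead of just the single address.
osctl ct netif ip add --route-as 10.0.0.0/24 myct01 eth1 10.0.0.10/32
```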
ct netif ip del [options] ctid ifname addr|all --[no-]keep-route
If there is a route that exactly matches the removed IP address, then this
option determines whether the route is removed or not. Routes are removed
by default. Applicable only for routed interfaces.
-v, --version n
If addr is all, these options can specify which IP versions should
be removed. If no option is given, both IPv4 and IPv6 addresses are
removed.
ct netif ip ls [ctid [ifname]]
List IP addresses assigned to the container's network interfaces.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-s, --sort parameters
Sort output by parameters, comma separated.
-v, --version version
Filter by IP version.
ct netif route add [options] ctid ifname addr
Route address addr to the container. addr can be routed via one of the
container's IP addresses (added by ct netif ip add),
or you can route addr via another hostaddr that is already on ifname
using option --via. Applicable only for routed interfaces.
--via hostaddr
Route addr via hostaddr. hostaddr must be a host IP address on
ifname that has already been added using ct netif ip add.
ct netif route del [options] ctid ifname addr|all
Remove the route of addr, or all routes.
-v, --version n
If addr is all, these options can specify which IP versions should
be removed. If no option is given, both IPv4 and IPv6 routes are
removed.
ct netif route ls [ctid [ifname]]
List configured routes.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-s, --sort parameters
Sort output by parameters, comma separated.
-v, --version version
Filter by IP version.
ct dataset ls [options] ctid [properties...]
List the container's datasets. properties is a space separated list of
ZFS properties to read. Dataset names are relative to the container's
root dataset, which is listed as /.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
ct dataset new [options] ctid dataset [mountpoint]
Create a new subdataset of the container's root dataset. For example,
osctl ct dataset new <id> var will create ZFS dataset <pool>/ct/<id>/var
and mount it to directory /var within the container. The target mountpoint
can be optionally provided as an argument, mounting can be disabled with
option --[no-]mount.
Datasets can also be created manually with zfs directly. Required
properties uidmap and gidmap are inherited by default.
Datasets are mounted using ct mounts dataset, mounts created with
ct mounts new might not survive container export/import on different
configurations.
--[no-]mount
Mount created datasets to the container, under the mountpoint of its
parents or /. Created datasets are mounted to the container by default.
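Following the example in the text, a /var subdataset with an explicit
mountpoint could be created like this (the container ID is hypothetical):

```shell
# Create subdataset <pool>/ct/myct01/var and mount it at /var
# inside the container.
osctl ct dataset new myct01 var /var
```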
ct dataset del [options] ctid dataset
Delete the container's subdataset.
-r, --recursive
Recursively delete all children as well. Disabled by default.
-u, --[no-]umount, --[no-]unmount
Unmount selected dataset and all its children when in recursive mode
before the deletion. By default, mounted datasets will abort the deletion.
ct mounts ls ctid
List configured mounts.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
ct mounts new options ctid
Create a new mount for the container.
--fs fs
File system or device to mount, required.
--mountpoint mountpoint
Mountpoint within the container, required.
--type type
File system type, required.
--opts opts
Options, required. Standard mount options depending on the filesystem
type, with two extra options from LXC: create=file and create=dir.
--[no-]automount
Activate this mount when the container starts. Enabled by default.
--[no-]map-ids
Map UID/GID into the container's namespace. It has an effect only
when native map mode is used by the container. Enabled by default.
ct mounts dataset options ctid dataset mountpoint
Mount a subdataset of the container's root dataset to mountpoint inside
the container. This command is preferred over ct mounts new for datasets,
because mounts created with ct mounts new have a fixed fs path, which would change
on a host with a zpool named differently and the container would refuse to
start.
Note that ct mounts dataset does not mount the top-level directory, but
rather a subdirectory called private. This prevents the container from
accessing the .zfs special directory, which could be used to create or destroy
snapshots from within the container.
--ro, --read-only
Mount the dataset in read-only mode.
--rw, --read-write
Mount the dataset in read-write mode. This is the default.
--[no-]automount
Activate this mount when the container starts. Enabled by default.
ct mounts register [options] ctid mountpoint
Register a mount created manually, e.g. from
pre-mount or post-mount script hooks (see SCRIPT HOOKS) or any other
mount within the container that you wish to control. All options are optional,
but unless you provide fs and type, you won't be able to use command
ct mounts activate.
ct mounts register works only on a starting or running container. All mounts
registered using this command will be forgotten once the container is stopped.
--fs fs
File system or device to mount.
--type type
File system type.
--opts opts
Mount options. Standard mount options depending on the filesystem
type, with two extra options from LXC: create=file and create=dir.
--[no-]map-ids
Map UID/GID into the container's namespace. It has an effect only
when native map mode is used by the container. Enabled by default.
--on-ct-start
Use this option if you're calling ct mounts register from script hooks,
see SCRIPT HOOKS. Without this option, calling this command from hook
scripts will cause a deadlock -- the container won't start and osctld
will be tainted as well.
ct mounts activate ctid mountpoint
Mount the mountpoint inside the container. The container has to be running.
ct mounts deactivate ctid mountpoint
Unmount the mountpoint from the container. The container has to be running.
ct mounts del ctid mountpoint
Remove the mount from the container.
ct mounts clear ctid
Remove all mounts from the container.
ct recover kill ctid [signal]
Kill all the container's processes with signal, defaulting to SIGKILL.
After a container is killed in this way, it can be necessary to recover its
state using ct recover state and cleanup its state using
ct recover cleanup.
ct recover state ctid
Instruct osctld to check status of container ctid. osctld checks container
status only on startup and then watches for events from lxc-monitor. If the
container dies in a way that the monitor does not report anything, osctld
will not notice the change on its own and this command can be used to
recover from such a state.
--no-lock
Do not obtain the container's lock. This can be used e.g. to free
osctl ct start from waiting on a dead container.
ct recover cleanup [-f] ctid
Cleanup the container's leftover resources. The container has to be
stopped, unless --force is used.
-f, --force
Force cleanup of an unstopped container.
--cgroups
Cleanup only leftover cgroups.
--network-interfaces
Cleanup only network interfaces.
group new options group
Create a new group for resource management.
--pool pool
Pool name, optional.
-p, --parents
Create all missing parent groups.
--cgparam parameter=value
Set cgroup parameter, may be used more than once. See group cgparams set
for what the parameter is.
group del group
Delete the group.
group ls [options] [groups...]
List available groups.
-H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
-s, --sort parameters
Sort output by parameters, comma separated.
--pool names
Filter by pool, comma separated.
group tree pool
Print the group hierarchy on pool in a tree.
group show group
Show group info.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-H, --hide-header
Do not show header, useful for scripts.
group set attr group vendor:key value
Set custom user attribute vendor:key on the group. Configured attributes
can be read with group ls or group show using the -o, --output option.
Attribute names in the osctld namespace are reserved, e.g.
org.vpsadminos.osctl:declarative.
group unset attr group vendor:key
Unset custom user attribute vendor:key.
group cgparams ls [options] group [parameters...]
List cgroup parameters configured on the group.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-v, --version 1|2|all
Select parameters by cgroup version. Defaults to all.
-S, --subsystem subsystem
Filter by cgroup subsystem, comma separated.
-a, --all
Include parameters from parent groups up to root group.
group cgparams set group parameter value...
Set cgroup parameter parameter to value. osctld will make sure this
parameter is always set when the container is started. The parameter can be
for example cpu.shares or memory.limit_in_bytes. The cgroup subsystem is
derived from the parameter name, you do not need to supply it.
It is possible to set multiple values for one parameter. This is needed
e.g. for the devices cgroup subsystem, where you may need to write to
devices.deny and devices.allow multiple times.
-a, --append
Append new values, do not overwrite previously configured values for
parameter.
-v, --version 1|2
Specify cgroup version. Defaults to the version the system currently uses.
group cgparams unset group parameter
Unset cgroup parameter parameter. The parameter is removed from the osctld
config. Some parameters are also reset on the system, e.g.
cpu.cfs_quota_us, memory.limit_in_bytes and memory.memsw.limit_in_bytes.
-v, --version 1|2
Specify cgroup version.
group cgparams apply group
Apply all cgroup parameters defined on the group and its parent groups.
group cgparams replace group
Replace all configured cgroup parameters by data read from standard input.
The data has to be in JSON:
{
"parameters": [
{
"version": <cgroup version>,
"subsystem": <cgroup subsystem>,
"parameter": <parameter name>,
"value": [ values ]
}
...
]
}
group devices ls [options] group
List devices the group is allowed to use.
-H, --hide-header
Do not show header, useful for scripts.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output, see OUTPUT PARAMETERS for more information.
group devices add [options] group block|char major minor mode [device]
Allow the group to use a block/char device identified by the major and
minor numbers, see mknod(1) for more information. mode determines what the
container is allowed to do: r to read, w to write, m to call mknod.
If the optional device node path device is given, osctld will prepare the
device node within the container's /dev during every container start.
-i, --[no-]inherit
Determines whether child groups and containers should inherit the device,
i.e. be allowed to use it with the same access mode.
-p, --[no-]parents
The device that is being added has to be provided by all parent groups,
up to the root group. When this switch is enabled, osctld will add
the device to all parent groups that do not already have it.
group devices del group block|char major minor
Forbid the group to use the device. The device must not be used by any
child group or container, unless you use the -r, --recursive switch.
-r, --recursive
Delete the device from all child groups and containers.
group devices chmod [options] group block|char major minor mode
Change the access mode of the device to mode. The mode can be changed only
when it does not conflict with the modes of parent or child groups and
containers; use -p, --parents or -r, --recursive to override
parent or child groups and containers.
-p, --parents
Ensure that all parent groups provide the device with the required
access mode. Parent groups that do not provide correct access modes
are updated and the missing access modes are set.
-r, --recursive
Change the access mode of all child groups and containers.
group devices promote group block|char major minor
Promote the device, i.e. declare that the group itself requires it.
A promoted device is not removed together with the parent group's device;
it can be removed using group devices del -r.
group devices inherit group block|char major minor
Inherit the device from the parent group. This removes the promotion, see
ct devices promote. The access mode, if different from the parent, will
revert to the access mode defined by the parent group.
This command cannot be used on the root group, as it has no parent to
inherit from.
group devices set inherit group block|char major minor
Set the inheritance flag, so that child groups and containers inherit the
device.
group devices unset inherit group block|char major minor
Remove the inheritance flag, so that child groups and containers no longer
inherit the device.
group devices replace group
Replace the configured devices by data read from standard input. The
devices are replaced as given, without checks against parent or child
groups; use the standard group devices commands if you wish for osctld
to enforce these rules. The data has to be in JSON:
{
"devices": [
{
"dev_name": <optional device node>,
"type": block|char,
"major": <major number or asterisk>,
"minor": <minor number or asterisk>,
"mode": <combinations of r,w,m>,
"inherit": true|false
}
...
]
}
group set cpu-limit group limit
Configure a CPU limit, where limit is the percentage of a single CPU core
the group can utilize, e.g. 100 means the container can fully utilize one
CPU core. This is a shortcut for group cgparams set, two parameters are
configured: cpu.cfs_period_us and cpu.cfs_quota_us. The quota is calculated
as: limit / 100 * period.
-p, --period period
Length of the measured period in microseconds, defaults to 100000,
i.e. 100 ms.
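As an illustration of the formula above, a limit of 150 (i.e. 1.5 CPU
cores) with the default period yields the following values (the numbers are
examples, not osctl output):

```shell
# Quota computation per the formula above: quota = limit / 100 * period.
limit=150       # the group may use up to 1.5 CPU cores
period=100000   # default period in microseconds (100 ms)
quota=$(( limit * period / 100 ))
echo "cpu.cfs_period_us=$period"
echo "cpu.cfs_quota_us=$quota"
```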
group unset cpu-limit group
Remove the CPU limit, a shortcut for group cgparams unset.
group set memory-limit group memory [swap]
Configure memory and optionally swap limits, a shortcut for
group cgparams set. The memory limit is set with cgroup parameter
memory.limit_in_bytes. If the swap limit is given as well, parameter
memory.memsw.limit_in_bytes is set to memory + swap.
The limits can be given in bytes, or with a unit suffix k, m, g, or t.
group unset memory-limit group
Remove the memory limits.
group assets [options] group
List the group's assets (datasets, files, directories) and their state.
-v, --verbose
Show detected errors.
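The arithmetic behind group set memory-limit can be illustrated with
hypothetical values of 512m memory and 1g swap, which result in:

```shell
# memory.limit_in_bytes is set to memory,
# memory.memsw.limit_in_bytes to memory + swap.
mem=$(( 512 * 1024 * 1024 ))    # 512m
swap=$(( 1024 * 1024 * 1024 ))  # 1g
echo "memory.limit_in_bytes=$mem"
echo "memory.memsw.limit_in_bytes=$(( mem + swap ))"
```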
send key gen [options]
Generate a public/private key pair that is used when sending containers to
other nodes.
-t, --type rsa | ecdsa | ed25519
Key type, defaults to rsa.
-b, --bits bits
Specifies the number of bits in the key to create. Defaults to 4096 for
rsa and 521 for ecdsa.
-f, --force
Overwrite the keys if they already exist.
send key path [public | private]
Print the path to the public or private key. Defaults to the
public key.
receive authorized-keys ls
List public keys authorized to send containers to this node.
receive authorized-keys add [options] name
Authorize a public key named name to send containers to this node. The key
is read from standard input.
--ctid pattern
Accept only containers whose ID matches the pattern.
--from pattern-list
Allow connections only from selected hosts. Multiple patterns can be
separated by comma. Patterns are matched against source host address
and reverse record.
--single-use
Remove the key after it is used to migrate a container onto this node.
--passphrase passphrase
In addition to the public key authentication, the sender must provide
this passphrase. The passphrase can be used to differentiate identical
keys with different names, the key with matching passphrase is used.
receive authorized-keys del name
Remove the authorized key identified by name.
repository ls [options] [repository...]
List configured repositories.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
repository show repository
Show information about the repository.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
repository add [options] repository url
Add a new image repository with URL url to the pool. The pool can be
selected using global option --pool.
--[no-]prune-enable
Enable periodic pruning of locally cached images. Enabled by default.
--prune-interval seconds
How often the periodic prune is run. Defaults to 24 hours.
--prune-older-than-days n
Delete only images that are n or more days old. Defaults to 90.
repository del repository
Remove the repository from the pool.
repository enable repository
Enable a previously disabled repository.
repository disable repository
Disable the repository. Images from disabled repositories cannot be used.
repository set url repository url
Change the repository URL.
repository set prune [--interval | --older-than-days] repository
Configure periodic pruning of locally cached images.
--interval seconds
How often the periodic prune is run.
--older-than-days n
Delete only images that are n or more days old.
repository unset prune repository
Reset the prune configuration to defaults.
repository set attr repository vendor:key value
Set custom user attribute vendor:key on the repository. Configured
attributes can be read with repository ls or repository show using
the -o, --output option. Attribute names in the osctld namespace are
reserved, e.g. org.vpsadminos.osctl:declarative.
repository unset attr repository vendor:key
Unset custom user attribute vendor:key.
repository assets repository
List the repository's assets (datasets, files, directories) and their
state.
repository images ls [options] repository
List images available in the repository.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-s, --sort parameters
Sort output by parameters, comma separated.
--vendor vendor
Filter by vendor.
--variant variant
Filter by variant.
--arch arch
Filter by architecture.
--distribution distribution
Filter by distribution.
--version version
Filter by distribution version.
--tag tag
Filter by version tag.
--cached
Show only locally cached images.
--uncached
Show only locally uncached images.
repository images prune [options] [repository...]
Delete locally cached images.
--older-than-days n
Prune only images that were cached n or more days ago.
cpu-scheduler status
Show the CPU scheduler status. Use ct ls -o
id,cpu_package_inuse,cpu_package_set
to see containers and their CPU package.
cpu-scheduler enable
Enable the CPU scheduler at runtime. To enable it permanently, use config
option osctld.settings.cpu_scheduler.enable.
cpu-scheduler disable
Disable the CPU scheduler at runtime, see also config option
osctld.settings.cpu_scheduler.enable.
cpu-scheduler upkeep
Run the CPU scheduler's upkeep process now.
cpu-scheduler package ls
List CPU packages and their usage.
-H, --hide-header
Do not show header, useful for scripting.
-L, --list
List available parameters and exit.
-o, --output parameters
Select parameters to output.
-s, --sort parameters
Sort output by parameters, comma separated. Sorted by usage score by default.
cpu-scheduler package enable package
Enable CPU package package for scheduling.
cpu-scheduler package disable package
Disable CPU package package, so that no new containers are scheduled on it.
trash-bin dataset add dataset
Add a ZFS dataset to be destroyed under osctld's control. The dataset is
placed in the trash bin and later destroyed.
trash-bin prune [pool...]
Prune the trash bin now, destroying its contents.
garbage-collector prune [pool...]
Run the garbage collector on selected pools now.
monitor
Print all events reported by osctld to standard output. If global option
-j, --json is used, the events are printed in JSON.
event broadcast
Broadcast custom events via osctld, all subscribers will receive them.
The events are read from standard input and must be encoded in JSON on a
single line. Multiple lines can be written. Example input:
{
"events": [
{ "type": "mytype", "opts": { "myoption": 123 } }
]
}
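Such a message can be composed from the shell and piped into osctl; the
event type and options below are placeholders:

```shell
# Compose the event document on a single line. The output would be piped
# into `osctl event broadcast` on a vpsAdminOS host (not done here).
printf '%s\n' '{"events": [{"type": "mytype", "opts": {"myoption": 123}}]}'
```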
history [pool...]
Print events from the pool history. If global option -j, --json is used,
the events are printed in JSON.
assets [options]
List osctld assets (datasets, files, directories) and their state.
-v, --verbose
Show detected errors.
healthcheck [options] [pool...]
Verify osctld assets and optionally also assets of selected pools, which
include all user, group and container assets stored on selected pools.
-a, --all
Verify all pools.
ping [wait]
Ping osctld to check if it is running. Without wait,
osctl ping either succeeds or fails immediately. If wait is 0, osctl
will block until osctld becomes responsive. If wait is a positive number,
osctl will wait for osctld for up to wait seconds.
Exit codes:
0  Success
1  Unspecified error
2  Unable to connect to osctld
3  Connected, but received unexpected response
script file [args...]
Execute a Ruby script with osctl and libosctl modules loaded.
activate [options]
Regenerate system files managed by osctld.
--[no-]system
NixOS overwrites files it thinks it manages, such as
/etc/sub{u,g}id and /etc/lxc/lxc-usernet. If this option is enabled,
the required files are regenerated. Enabled by default.
shutdown [-f|--force] [--abort]
Stop all containers and export all pools in preparation for a system
shutdown. osctld itself will remain functional.
File /run/osctl/shutdown is created. As long as this file exists, osctld
will not auto-start containers and all pools will be disabled on import.
When osctld prepares the system for shutdown, it sets the executable by
owner permission on /run/osctl/shutdown.
Unless -f, --force is set, osctl shutdown will ask for confirmation on
standard input to prevent accidents.
-f, --force
Do not ask for confirmation on standard input, initiate shutdown
immediately.
-w, --[no-]wall
Send a message to logged-in users of containers that are being stopped.
The message
can be customized using option -m, --message. Enabled by default.
-m, --message message
Message sent to logged-in users of containers that are being stopped.
--abort
Abort an already running shutdown and export of all pools. Some pools can
already be exported and others can be left in a partially exported state,
i.e. the pools can be disabled and some or all containers can be stopped.
To recover such pools after an aborted shutdown, use
pool export --force --no-stop-containers on remaining pools followed by
pool import -a.
help [command...]
Show help for a command.
OUTPUT PARAMETERS
Option -o, --output parameters found on most read commands can be used to
select which parameters should be shown. The list of available parameters can
usually be found using option -L, --list.
parameters is a comma separated list. If it is an empty string, no output is
shown. -o +parameters can be used to extend the parameters shown by default.
-o -parameters will show default parameters except the ones listed.
-o all will show all parameters.
SCRIPT HOOKS
osctld can execute user-defined scripts on certain events. User scripts can
be placed into directory /<pool>/hook. The exact location and script name
depend on the event, e.g.:
/<pool>/hook/pool/<hook name> for pool script hooks
/<pool>/hook/ct/<ctid>/<hook name> for container script hooks
Use osctl pool|ct assets to get the exact paths. The user script can be
a single executable file, or it can be a directory <hook name>.d.
If it is a directory, all executable files within it are called in order
by their name. In case the script hook's exit status is evaluated,
a non-zero exit status will stop the execution of other script hooks from
the directory.
All script hooks are run as root on the host, but the mount namespace may
differ, see below.
Note that many osctl commands called from script hooks may not work. Some hooks
are run when the pool or the container is locked within osctld, so another
osctl command on the same pool/container may be rejected.
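As a sketch, a container pre-start hook (installed as an executable file at
/<pool>/hook/ct/<ctid>/pre-start) might look like this; the maintenance
flag file is a made-up convention, and the ${VAR:-fallback} defaults only
exist so the script runs outside osctld:

```shell
#!/bin/sh
# Example pre-start hook: refuse to start the container while a maintenance
# flag file exists. A non-zero exit status aborts the container's start.
flag="${MAINTENANCE_FLAG:-/run/maintenance}"   # hypothetical flag file
if [ -e "$flag" ]; then
  echo "refusing to start ${OSCTL_CT_ID:-myct01}: maintenance in progress" >&2
  exit 1
fi
echo "pre-start ok for ${OSCTL_CT_ID:-myct01} on pool ${OSCTL_POOL_NAME:-tank}"
```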
pre-import
pre-import is called before the pool is imported into osctld, e.g. when
pool import is run or osctld is restarted. If pre-import exits
with a non-zero status, the pool is not imported.
pre-autostart
pre-autostart is run after the pool is imported, but before the container
auto-start facility. If pre-autostart exits with a non-zero status,
containers are not auto-started.
post-import
post-import is run after the pool was imported into osctld. Its exit status
is not evaluated.
pre-export
pre-export is run before the pool is exported from osctld, e.g. when
pool export is called. It is not run when osctld is restarted, as that
doesn't export the pool. If pre-export exits with a non-zero status,
the pool is not exported.
post-export
post-export is run after the pool has been exported from osctld. Its exit
status is not evaluated.
All pool script hooks have the following environment variables set:
OSCTL_HOOK_NAME
OSCTL_POOL_NAME
OSCTL_POOL_DATASET
OSCTL_POOL_STATE
pre-start
pre-start hook is run in the host's namespace before the container is
mounted.
The container's cgroups have already been configured and distribution-support
code has been run. If pre-start exits with a non-zero status, the container's
start is aborted.
veth-up
veth-up hook is run in the host's namespace when the veth pair is created.
Names of created veth interfaces are available in environment variables
OSCTL_HOST_VETH and OSCTL_CT_VETH. Variable OSCTL_VETH_ENABLE is set to 1
when the interface is enabled, 0 otherwise. Disabled interfaces are not up
on the host-side. If veth-up exits with a non-zero status, the container's
start is aborted.
pre-mount
pre-mount is run in the container's mount namespace, before its rootfs is
mounted. The path to the container's runtime rootfs is in environment variable
OSCTL_CT_ROOTFS_MOUNT. OSCTL_CT_NS_PID contains the PID of a process with
the container's user namespace. If pre-mount exits with a non-zero status, the
container's start is aborted.
post-mount
post-mount is run in the container's mount namespace, after its rootfs
and all LXC mount entries are mounted. The path to the container's runtime
rootfs is in environment variable OSCTL_CT_ROOTFS_MOUNT. OSCTL_CT_NS_PID
contains the PID of a process with the container's user namespace. If post-mount
exits with a non-zero status, the container's start is aborted.
on-start
on-start is run in the host's namespace, after the container has been
mounted and right before its init process is executed. If on-start exits
with a non-zero status, the container's start is aborted.
post-start
post-start is run in the host's namespace after the container entered state
running. The container's init PID is passed in environment variable
OSCTL_CT_INIT_PID. The script hook's exit status is not evaluated.
pre-stop
pre-stop hook is run in the host's namespace when the container is being
stopped using ct stop. If pre-stop exits with a non-zero exit status,
the container will not be stopped. This hook is not called when the container
is shut down from the inside.
on-stop
on-stop is run in the host's namespace when the container enters state
stopping. The hook's exit status is not evaluated.
veth-down
veth-down hook is run in the host's namespace when the veth pair is removed.
Names of the removed veth interfaces are available in environment variables
OSCTL_HOST_VETH and OSCTL_CT_VETH. The hook's exit status is not
evaluated.
post-stop
post-stop is run in the host's namespace when the container enters state
stopped. The hook's exit status is not evaluated.
All container script hooks have the following environment variables set:
OSCTL_HOOK_NAME
OSCTL_POOL_NAME
OSCTL_CT_ID
OSCTL_CT_USER
OSCTL_CT_GROUP
OSCTL_CT_DATASET
OSCTL_CT_ROOTFS
OSCTL_CT_MAP_MODE
OSCTL_CT_LXC_PATH
OSCTL_CT_LXC_DIR
OSCTL_CT_CGROUP_PATH
OSCTL_CT_DISTRIBUTION
OSCTL_CT_VERSION
OSCTL_CT_HOSTNAME
OSCTL_CT_LOG_FILE
osctl --json ct console accepts JSON commands on standard input. Commands
are separated by line breaks (\n). Each JSON command can contain the following
values:
{
"keys": base64 encoded data,
"rows": number of terminal rows,
"cols": number of terminal columns
}
keys is the data to be written to the console. rows and cols control
terminal size. Example commands:
{"keys": "Cg=="}\n
{"keys": "Cg==", "rows": 25, "cols": 80}\n
{"rows": 50, "cols": 120}\n
Where Cg== is \n (enter/return key) encoded in Base64. All values
are optional, but rows and cols have to be given together. An empty
command does nothing.
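The Base64 payload can be produced with standard tools. For example, to
send the command uptime followed by Enter (the container name myct01 in
the comment is illustrative):

```shell
# Encode keystrokes and wrap them in a console command. The resulting line
# could be piped to: osctl --json ct console myct01  (not executed here).
keys=$(printf 'uptime\n' | base64)
printf '{"keys": "%s"}\n' "$keys"
```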
Exception backtraces in osctl can be enabled by setting environment variable
GLI_DEBUG=true, e.g. GLI_DEBUG=true osctl ct ls. This will not make osctl
more verbose, only print exceptions when it crashes.
osctld logs either to syslog or to /var/log/osctld, depending on your
system configuration. osctl provides several commands you can use for
debugging purposes. These commands are not shown in osctl help message.
debug threads ls
List threads within osctld and their backtraces. This can be useful
to check if some operation hangs.
debug locks ls [-v]
List locks held within osctld.
-v, --verbose
Include also backtraces of lock holding threads.
debug locks show id
Show information about lock id.
EXAMPLES
Install zpool tank into osctld:
osctl pool install tank
Create a container:
osctl ct new --distribution alpine myct01
Add bridged veth interface:
osctl ct netif new bridge --link lxcbr0 myct01 eth0
Start the container:
osctl ct start myct01
Report bugs to https://github.com/vpsfreecz/vpsadminos/issues.
osctl is a part of vpsAdminOS.