LXCF Concept and Features
1. Concept of LXCF
LXC is becoming a good way to create multiple confined environments for running middleware (MW) and applications. However, the existing LXC tools fall short of what enterprise customers need in several respects.
Creating long-lived containers easily requires a new tool set. We built that tool set and named it LXCF.
Although LXCF containers offer host-like environments, each container has its own area. In other words, each LXCF container has its own logs, setting files, and services, controlled by systemd inside that container. Moreover, users can change each container's CPU, memory, IO, and network resources dynamically.
LXCF is a set of templates for how to use LXC in the way enterprise systems require. Currently, LXCF offers two models.
2. Advantages of LXCF resource control
Problem

- MW and applications scramble for resources.
- To avoid this scramble, users can run only one application or MW per server.
- An application's or MW's load may grow without limit when too many requests arrive via the network.
A Solution using LXCF

- Each MW and application runs in a container with independent resources, and users can limit the resources of each one.
- Users can run multiple applications and MWs on one server.
- Even if excessive processing requests arrive via the network, other containers are not affected.
- Users can log in to each container via ssh. Each container has its own network, file systems, systemd, syslog, and ssh console.
Dynamic Resource Control by LXCF

- Dynamic resource control is available.
- Users can build a capacity-on-demand system whose resources are changed between day and night.
- The overhead of LXC is lower than that of virtual machines (KVM, VMware, etc.).
- Resources such as CPU, memory, and network bandwidth can be limited for each container.
- The amount of each resource can be changed dynamically while applications are running.
- LXC allows users to consolidate services that would otherwise scramble for resources onto the same OS.
3. Features
- MWs and applications in different containers do not affect each other.
- There are two models in LXCF. The first is the SEPARATE model, in which each container gets an independent file tree and users install applications (APLs) and MWs in each container individually.
- The second is the JOINT model, in which the file tree is shared among containers. Users do not need to install MWs and applications into each container: software can be installed or updated from the host into all LXCF containers at once, and the software's binaries and libraries are shared between the host and all containers.
- Users can control resources (number of CPUs, CPU utilization, amount of memory, NUMA allocation, IO speed for read and write, IOPS, and network speed) dynamically, as the sketch below illustrates.
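This page does not show the syntax of the LXCF set command itself. As a rough sketch of the mechanism underneath, the same CPU and memory controls can be driven through LXC's standard lxc-cgroup command; the container name web-srv and the values here are placeholders:
# lxc-cgroup -n web-srv cpuset.cpus 0-1
# lxc-cgroup -n web-srv memory.limit_in_bytes 1G
# lxc-cgroup -n web-srv memory.limit_in_bytes
1073741824
Values written this way take effect immediately, without restarting the container, which is what makes day/night capacity-on-demand adjustment possible.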
4. Details of the two models
Common Features of both models
- The IP address, APLs, and MWs can be changed individually in each container.
- Logging (syslog) works in each container.
- Services can be started and stopped in each container with systemctl (systemd); see the example after this list.
- Containers can be cloned.
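For example, because each container runs its own systemd, a service can be managed from the host over ssh just as it would be locally. A minimal sketch, assuming a container named a-srv that runs sshd:
# ssh root@a-srv systemctl restart sshd
# ssh root@a-srv systemctl is-active sshd
active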
Characteristics of the SEPARATE model
- When a container is created, the LXCF tool copies application programs from the host to the container.
- Users have to install MWs and APLs into a created container if they were not copied.
- Users can install APLs and MWs in each container individually.
- Users have to update each container one by one when updating software.
Characteristics of the JOINT model
- Installed APLs (binaries and libraries) are shared among the host and all containers.
- Users can install and update software for all containers from the host at once because the files are shared.
5. Password
- The host's password is copied to the container when an LXCF container is created and becomes the default password for the new container. This password can be changed. After the container is made, a password change on the host and a password change on the guest container do not influence each other.
- When a container is generated, root's ssh public key is automatically delivered to the guest container, so ssh login as root from the host to the container is password-free.
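A minimal check of the password-free login, assuming a container named euro-srv whose hostname matches its container name:
# ssh root@euro-srv hostname
euro-srv
No password prompt appears because the host root's public key was installed in the container at creation time.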
6. Automatic setting of resources when a container starts
- When a container is started, its resources are set from the LXCF definition file (stored on the host as /etc/lxcf/rsc/CONTAINER-NAME).
- This setting file also takes effect when autostart is enabled for the container via virt-manager/libvirt: the resources are set to the defined values at system boot time.
7. Network
- LXCF automatically creates a network interface lxcfnet1 (virbr1), distinct from the default one (virbr0), and connects containers to it.
- lxcfnet1 is a NAT-connected network. By default it is 192.168.125.0/24 and the gateway is 192.168.125.1.
- DHCP is not used; when an LXCF container is made, a static IP address is allocated automatically. Addresses are made to correspond to container names, and each allocated IP address is registered in the host's /etc/hosts.
- lxcfnet1 is for managing LXCF containers. Users who want other networks for applications/MWs have to create other network interfaces.
- New network interfaces are allocated with libvirt or virt-manager.
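A quick way to confirm the management network and an allocated address, shown as a sketch; euro-srv and 192.168.125.2 are illustrative, and the actual address comes from the configured range:
# virsh net-list
 Name                 State      Autostart
-----------------------------------------
 default              active     yes
 lxcfnet1             active     yes
# grep euro-srv /etc/hosts
192.168.125.2   euro-srv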
8. Update
- In the JOINT model, installed software is updated from the host for all containers at once, as described in section 3.
9. Setting file
The LXCF setting file defines the model and the lxcfnet1 address range:
# model = joint or separate
#
[Model]
model=separate
# lxcfnet1 address range
#
[ipaddr_range]
ipaddr_start=192.168.125.2
ipaddr_end =192.168.125.254
- Either “separate” or “joint” has to be specified in the “model=” parameter.
- “ipaddr_range” is the address range for lxcfnet1. IP addresses between ipaddr_start and ipaddr_end are used.
10. ipcs resources
The following kernel parameters can be set with sysctl to values in a container that are independent of the host:
- kernel.msgmax
- kernel.msgmnb
- kernel.msgmni
- kernel.shmall
- kernel.shmmax
- kernel.shmmni
- kernel.sem
The ipcs resources in an LXC container are limited by the kernel parameter values above.
To prevent a large value set in one container from affecting other containers and the host, the amount of memory a container can use is capped with LXCF's memlimit.
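For example, a larger System V shared-memory limit can be set inside one container without changing the host or the other containers; the container name a-srv and the value are illustrative:
# ssh root@a-srv sysctl -w kernel.shmmax=8589934592
kernel.shmmax = 8589934592
The new value applies only inside a-srv; the host's kernel.shmmax is unchanged.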
11. Batch queue system
- The following commands can be submitted to the batch queue and executed sequentially and automatically:
sysgen
sysgen-n
clone
clone-n
set
set-n
erase
erase-n
update
deploy
- These commands take a long time because they generate or delete containers. Therefore, when two or more of them are run, each must wait for the previous command to finish.
- When one of these commands is submitted to the batch queue, it is queued as a job, and the jobs are executed sequentially.
- "queue" subcommand can use "q" as an alias.
- Execution results are logged to the following file:
/var/log/lxcf/lxcf-messages
- The batch queue system has three queues.
H-QUEUE:
- High-priority queue, for jobs that must be executed urgently.
- Jobs in H-QUEUE are executed ahead of any unexecuted jobs in Q-QUEUE and L-QUEUE.
- However, a job already running from Q-QUEUE or L-QUEUE is not stopped; the H-QUEUE job is executed after the running job ends.
Q-QUEUE:
- Normal-priority queue. Used by default.
L-QUEUE:
- Low-priority queue, for jobs with low priority.
- Jobs in this queue are not executed while any job remains in H-QUEUE or Q-QUEUE.
- Jobs are submitted with the submit command. The command to execute, such as sysgen, is specified as the argument, and the job enters Q-QUEUE.
# lxcf submit sysgen euro-srv
submited : 950201ce-c11f-11e3-9a01-00d068148bd6
- The UUID allocated to the job is displayed at submission. Each job gets a different UUID, which is used to identify the job.
- To put a job into H-QUEUE or L-QUEUE, specify the "-h" or "-l" option to the submit command.
# lxcf submit -h sysgen Emergsrv
# lxcf submit -l sysgen LowPrisrv
- The queue command is used to operate the queues and check their state. Execution of these commands is logged to /var/log/lxcf/lxcf-messages.
- Displaying the list of queues (list):
# lxcf queue list
<<< Containers >>>
94b6a94a-e15e-11e3-8f14-00d068148bd6 a-srv running
<<< JOB under execution >>>
950201ce-c11f-11e3-9a01-00d068148bd6 sysgen asia-srv
007945ac-c120-11e3-b1f9-00d068148bd6 sysgen Emergsrv
*** Q-QUEUE ***
db1cb406-c11f-11e3-aa27-00d068148bd6 sysgen euro-srv
*** L-QUEUE ***
fb2e7c84-c11f-11e3-b4b8-00d068148bd6 sysgen LowPrisrv
- Cancelling a job (cancel): the job with the specified UUID is cancelled.
# lxcf queue cancel db1cb406-c11f-11e3-aa27-00d068148bd6
Canceled : db1cb406-c11f-11e3-aa27-00d068148bd6 sysgen euro-srv
- Clearing all jobs on a queue (clear): the queue to clear can be selected with the "-h", "-q", or "-l" option. Without an option, the jobs in all queues are cleared.
# lxcf queue clear
canceled : ALL QUEUE
007945ac-c120-11e3-b1f9-00d068148bd6 sysgen Emergsrv
950201ce-c11f-11e3-9a01-00d068148bd6 sysgen asia-srv
fb2e7c84-c11f-11e3-b4b8-00d068148bd6 sysgen LowPrisrv
- Moving a job to another queue (move): the destination queue is specified with "-h", "-q", or "-l".
# lxcf queue move -h 950201ce-c11f-11e3-9a01-00d068148bd6
moved to H-QUEUE : 950201ce-c11f-11e3-9a01-00d068148bd6 sysgen asia-srv
12. Limitations and notes
- The SELinux setting is common to the host and every container. Users cannot apply a different SELinux setting to each container.
- Parameters, configurations, and services like the following need to be set or adjusted:
- Since NTP (chrony) cannot start inside a container, time synchronization works only on the host. Each container uses the host's time information.
- The time zone can be set independently in each container.
- Users have to first estimate the maximum number of threads and processes for the host and all containers, and then adjust the host's kernel.threads-max parameter with the sysctl command, as sketched below.
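A minimal sketch, assuming the estimate comes to 200,000 threads for the host plus all containers (the figure is illustrative):
# sysctl -w kernel.threads-max=200000
kernel.threads-max = 200000
To keep the value across reboots, put the same setting (kernel.threads-max = 200000) in /etc/sysctl.conf or a file under /etc/sysctl.d/.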