SR Linux is a suite of modular, lightweight applications that run like any other applications in a Linux environment. Each SR Linux application supports a different protocol or function, such as BGP, LLDP, or AAA. These applications use gRPC and APIs to communicate with each other and with external systems over TCP.
One SR Linux application, the application manager (app_mgr), is responsible for monitoring the health of the processes running each SR Linux application and restarting them if they fail. The application manager reads in application-specific YAML configuration and YANG models, and starts each application (or leaves an application stopped if no configuration exists for it). One instance of the app_mgr handles applications running on the CPM, and an instance of the app_mgr on each IMM handles applications running on the line card.
In addition to the Nokia-provided SR Linux applications, the SR Linux supports installation of user-defined applications, which are managed and configured in the same way as the default SR Linux applications.
This chapter presents examples of installing an application in SR Linux, managing installed SR Linux applications, and configuring settings for an SR Linux application by modifying its YAML configuration.
To install an application, copy the application files into the appropriate SR Linux directories, then reload the application manager and start the application.
The example in this section installs an application called fib_agent. The application consists of files named fib_agent.yml, fib_agent.sh, fib_agent.py, and fib_agent.yang. The fib_agent.yml file is installed in the /etc/opt/srlinux/appmgr/ directory. The .yml file for a user-defined application must reside in this directory in order for the app_mgr to read its YAML configuration.
The .yml file defines the locations of the other application files. The other application files can reside anywhere in the system except the /opt/srlinux/ directory or any tmpfs file system.
In this example, the fib_agent.sh and fib_agent.py files are installed in the /user_agents/ directory, and the fib_agent.yang file is installed in the /yang/ directory. The locations for these files are defined in the fib_agent.yml file.
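As a sketch, the fib_agent.yml file might point at those locations roughly as follows. The exact key names depend on the SR Linux release, so treat this structure as illustrative rather than definitive:

```yaml
fib_agent:
    path: /user_agents               # directory holding fib_agent.sh and fib_agent.py
    launch-command: ./fib_agent.sh   # command the app_mgr runs to start the application
    search-command: ./fib_agent.py   # process the app_mgr monitors for health
    yang-modules:
        names:
            - fib_agent              # module defined in fib_agent.yang
        source-directories:
            - /yang                  # directory holding fib_agent.yang
```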
To start an SR Linux application instance, use the start option in the tools system app-management command. To terminate a running application instance and restart it, use the restart option.
Examples:
To start an SR Linux application instance:
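For example, using the fib_agent application installed earlier:

```
# tools system app-management application fib_agent start
```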
To restart an SR Linux application instance:
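Again using fib_agent as the example application:

```
# tools system app-management application fib_agent restart
```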
You can use the stop, quit, or kill options in the tools system app-management command to terminate an SR Linux application.
Examples:
To terminate an application gracefully:
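For example, to stop the fib_agent application gracefully:

```
# tools system app-management application fib_agent stop
```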
To terminate an application and generate a core dump:
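Using the quit option, which terminates the application and generates a core dump:

```
# tools system app-management application fib_agent quit
```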
To terminate an application immediately:
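Using the kill option for immediate termination:

```
# tools system app-management application fib_agent kill
```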
Reloading an application causes the app_mgr to reread the application’s YAML configuration and restart the application using settings in its YAML file.
Example:
To reload the configuration of the app_mgr application:
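For example:

```
# tools system app-management application app_mgr reload
```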
You can display statistics collected for an application with the info from state command. To reset the statistics counters for the application, use the statistics clear option in the tools system app-management command.
Example:
To reset the statistics counters for an application:
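For example, assuming the fib_agent application (the exact leaves under the state branch vary by release):

```
# info from state system app-management application fib_agent
# tools system app-management application fib_agent statistics clear
```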
An application may have one or more operations that are restricted by default. For example, the linux_mgr application has stop, quit, and kill as restricted operations, meaning that these options are not available when entering the tools system app-management command for the linux_mgr application.
Table 21 lists the restricted operations for each SR Linux application.
| Application | Restricted operations |
|---|---|
| aaa_mgr | reload |
| acl_mgr | reload |
| app_mgr | start, stop, restart, quit, kill |
| arp_nd_mgr | reload |
| bfd_mgr | reload |
| bgp_mgr | reload |
| chassis_mgr | stop, quit, kill, reload |
| device_mgr | reload |
| dhcp_client_mgr | stop, reload |
| fib_mgr | reload |
| gnmi_server | reload |
| idb_server | start, stop, restart, quit, kill, reload |
| json_rpc_config | reload |
| linux_mgr | stop, quit, kill |
| lldp_mgr | reload |
| log_mgr | reload |
| mgmt_server | start, stop, quit, kill, reload |
| mpls_mgr | reload |
| net_inst_mgr | start, stop, quit, kill, reload |
| oam_mgr | reload |
| plcy_mgr | reload |
| qos_mgr | reload |
| sdk_mgr | reload |
| static_route_mgr | reload |
| supportd | reload |
| xdp_cpm | stop, quit, kill, reload |
| xdp_lc | reload |
Restricted operations are specified in the restricted-operations setting in the YAML file for the application.
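For reference, a restricted-operations entry in an application's YAML file might look like the following sketch (the surrounding structure of the file is omitted and varies by application and release):

```yaml
linux_mgr:
    restricted-operations:
        - stop
        - quit
        - kill
```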
To configure an SR Linux application, edit settings in the application’s YAML file, then reload the application manager to activate the changes.
The example in this section shows how to configure an application to specify the action the SR Linux device takes when the application fails. If an SR Linux application fails a specified number of times over a specified time period, the SR Linux device can reboot the system or attempt to restart the application after waiting a specified number of seconds.
For example, if the aaa_mgr application crashes 5 times within a 500-second window, the SR Linux device can be configured to wait 100 seconds, then restart the aaa_mgr application.
The following actions can be taken if an SR Linux application fails:
If you stop or restart an application using the tools system app-management command in the SR Linux CLI, it is not considered an application failure, so the failure action for the application, if one is configured, does not occur. However, if a failed application is waiting a specified period of time (or forever) to be restarted, or has been moved into the error state, you can restart the application manually with the tools system app-management application restart CLI command.
To configure the failure action for an application:
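A minimal sketch of the relevant settings in the application's YAML file, matching the aaa_mgr scenario above. The key names failure-threshold, failure-window, and failure-action follow the pattern used in SR Linux application YAML files, but verify them against your release:

```yaml
aaa_mgr:
    failure-threshold: 5        # number of failures tolerated within the window
    failure-window: 500         # length of the failure window, in seconds
    failure-action: "wait=100"  # wait 100 seconds, then restart; "reboot" reboots the system
```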
The SR Linux protects system processes through the use of control groups (cgroups), which impose resource consumption limits on resource-intensive customer applications.
Cgroup profiles define how usage limits are applied. On the SR Linux, cgroup profiles are supported for CPU and memory and are defined in cgroup_profile.json configuration files.
SR Linux provides a default cgroup profile; customers can configure additional cgroup profiles.
The SR Linux-provided default cgroup profile is located at /opt/srlinux/appmgr/cgroup_profile.json.
Note: Editing the default cgroup profile is not recommended.
If the app_mgr cannot read or parse the default cgroup profile, SR Linux does not start.
The default cgroup_profile.json file definition is shown below:
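The exact contents can vary by release; the following sketch reflects the parameter structure described in Table 22, with a single Nokia-provided workload profile:

```json
{
    "profiles": [
        {
            "name": "workload",
            "path": "workload.slice",
            "controller": {
                "memory": { "max": 0.8, "low": 0.8 },
                "cpu": { "weight": 100, "period": 100000, "quota": 0 },
                "cpuset": { "cpus": "", "mems": "" }
            }
        }
    ]
}
```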
Table 22 describes the default cgroup profile parameters.
| Parameter | Description |
|---|---|
| name | The cgroup profile name. Type: string |
| path | The cgroup directory path, relative to a unified root path. A typical unified root path is /sys/fs/cgroup or /mnt/cgroup/unified. Type: string |
| controller | The controller configuration, containing the memory, cpu, and cpuset settings described below. |
| memory | The memory controller configuration. max: a fraction of total memory; the actual memory value is calculated as (max × total_memory) and is set in the memory.max interface file of the cgroup. If the value is 0, this configuration is ignored. The range is 0 to 1; the default is 0.8. low: a fraction of total memory; the actual memory value is calculated as (low × total_memory) and is set in the memory.low interface file of the cgroup. The range is 0 to 1; the default is 0.8. |
| cpu | The CPU controller configuration. weight: this value is set in the cpu.weight interface file of the cgroup. The range is 1 to 10 000; the default is 100. period: the period of time, in microseconds, at which a cgroup's access to CPU resources is reallocated. This value is set in the cpu.max interface file of the cgroup. The range is 1000 to 1 000 000; the default is 100 000. quota: the total length of time, in microseconds, for which all tasks in a cgroup can run during one period (as defined in the period parameter). If quota is set to 0, it translates to "max" in the cpu.max interface file. The range is 1000 to 1 000 000; the default is max. |
| cpuset | CPU usage information for the cgroup. cpus: the CPUs used by the cgroup. The value "" (the default) means use all CPUs except the isolated CPUs; the value all means also include the isolated CPUs; the value x, y-z (where x, y, and z are CPU numbers) means use a specific CPU or a range of CPUs. mems: used for scheduling multiple NUMA (non-uniform memory access) aware applications in the cgroup. |
Customers can configure cgroup profiles in the /etc/opt/srlinux/appmgr/cgroup_profile.json file. The app_mgr creates this directory at boot up if it does not exist.
If a customer-defined cgroup profile fails to load, the system continues to function, and applications assigned to the customer-defined cgroup are loaded using the Nokia defaults.
The admin user is treated like any other user in the system; its processes fall into the user.slice/default cgroup.
Customers can configure up to three cgroups in the /etc/opt/srlinux/appmgr/cgroup_profile.json file, and customer applications are assigned to these groups. Any cgroups configured beyond the first three are ignored. The depth of cgroups is limited to two levels; for example, workload is one level, and workload/primary or workload/secondary is two levels. Any levels beyond this are also ignored.
If a cgroup with the same name is used in multiple customer-defined profiles, the system ignores it and uses the cgroup defined in the default profile.
The following example shows the configuration of two customer-defined cgroups: one for a lightweight database that needs priority access to resources, and one for storing the users of the database.
The steps and the outputs are described below.
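A sketch of a cgroup_profile.json consistent with that scenario; the profile names, slice paths, and low settings are illustrative:

```json
{
    "profiles": [
        {
            "name": "database",
            "path": "database.slice",
            "controller": {
                "memory": { "max": 0.5, "low": 0 },
                "cpu": { "weight": 10000, "period": 100000, "quota": 0 }
            }
        },
        {
            "name": "frontend",
            "path": "frontend.slice",
            "controller": {
                "memory": { "max": 0.2, "low": 0 },
                "cpu": { "weight": 5000, "period": 100000, "quota": 0 }
            }
        }
    ]
}
```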
This configuration creates two cgroup profiles: one for the database slice and one for the frontend slice. The profile for the database slice limits the database to 50% of system memory; the profile for the frontend slice limits the web front end to 20% of system memory.
In addition, both cgroup profiles limit CPU resources for their respective cgroup. The database server CPU is weighted at 10000 (the maximum CPU weight) and the frontend server CPU is weighted at 5000 (half the CPU weight of the database). The weights are added together, and each cgroup receives CPU time in proportion to its share of the sum. The periods are kept the same, and no guaranteed CPU is granted.
The kernel low-memory killer driver monitors the memory state of a running system. It reacts to high memory pressure by killing the least essential processes in order to keep the system performing at acceptable levels.
When the system is low on memory and cannot find free memory space, the out_of_memory function is called; it makes memory available by killing one or more processes.
When an out-of-memory (OOM) failure occurs, the out_of_memory function is called and it obtains a score from the select_bad_process function. The process with the highest score is the one that is killed. Some of the criteria used to identify a bad process include the following:
In addition to this list, the OOM killer checks the out-of-memory (OOM) score. The OOM killer sets the OOM score for each process and then multiplies that value by memory usage. The processes with higher values have a high probability of being terminated by the OOM killer.
The kernel calculates the oom_score using the formula 10 × percentage of memory used by the process. The maximum score is 10 × 100% = 1000. The oom_score of a process can be found in the /proc directory (/proc/$pid/oom_score). An oom_score of 1000 means the process is using all the memory, an oom_score of 500 means it is using half the memory, and an oom_score of 0 means it is using no memory.
The OOM killer checks the /proc/$pid/oom_score_adj file to adjust its final calculated score. The default value is 0.
The oom_score_adj value can range from -1000 to 1000. A score of -1000 allows a process to use 100% of memory without being terminated by the OOM killer, while a score of 1000 causes the Linux kernel to terminate the process even when it uses minimal memory. A score of -100 allows a process to use 10% of memory before it is considered for termination, because its adjusted score remains 0 until its unadjusted score reaches 100.
The oom_score_adj value for each process is defined in its corresponding YAML definition file. The system groups the processes based on their score, which the SR Linux OOM killer uses as the hierarchy for terminating a rogue process, as follows:
Table 23 lists the OOM adjust score for each process.
| Process name | OOM adjust score |
|---|---|
| aaa_mgr | 0 |
| acl_mgr | 0 |
| app_mgr | -998 |
| arp_nd_mgr | -200 |
| bfd_mgr | -200 |
| bgp_mgr | 0 |
| chassis_mgr | -200 |
| dev_mgr | -200 |
| dhcp_client_mgr | 0 |
| dhcp_relay_mgr | 0 |
| eth_switch_mgr | -200 |
| evpn_mgr | 0 |
| fib_mgr | 0 |
| gnmi_server | 500 |
| idb_server | -998 |
| isis_mgr | 0 |
| json_rpc | 500 |
| l2_mac_learn_mgr | 0 |
| l2_mac_mgr | 0 |
| l2_static_mac_mgr | 0 |
| lag_mgr | 0 |
| linux_mgr | 0 |
| lldp_mgr | 0 |
| log_mgr | 0 |
| mcid_mgr | 0 |
| mgmt_server | -200 |
| mpls_mgr | 0 |
| net_inst_mgr | -200 |
| oam_mgr | 500 |
| ospf_mgr | 0 |
| plcy_mgr | 0 |
| qos_mgr | 0 |
| sdk_mgr | 500 |
| sflow_sample_mgr | 500 |
| sshd-mgmt | 0 |
| static_route_mgr | 0 |
| supportd | 0 |
| timesrv | 0 |
| vrrp_mgr | 0 |
| vxlan_mgr | 0 |
| xdp_cpm | -200 |
The cgroup feature provides two additional parameters in the app_mgr YAML file: the cgroup parameter and the oom-score-adj parameter.
The app_mgr uses the cgroup parameter to launch an application within a specific cgroup. A valid value for the cgroup parameter is the path of a cgroup as specified in the cgroup profile (equivalent to /profiles/name[]/path), from the cgroupv2 root. If this cgroup does not exist, the app_mgr launches the user application from the default cgroup profile path workload.slice/secondary.
The app_mgr uses the oom-score-adj parameter to set the out-of-memory adjust score for a process. This score is fed into the SR Linux OOM killer. Valid oom-score-adj values are in the range of -1000 to 1000: a process with a score of -1000 is least likely to be killed, while a process with a score of 1000 is most likely to be killed. At -1000, a process can use 100% of memory and still avoid being terminated by the OOM killer; at the opposite end of the range, SR Linux kills the process more frequently.
The cgroup parameter and the oom-score-adj parameter are shown in the output below.
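For example, a user application's YAML file might carry these parameters as follows; fib_agent and its paths are the placeholders from the installation example earlier, and the other keys are illustrative:

```yaml
fib_agent:
    path: /user_agents
    launch-command: ./fib_agent.sh
    cgroup: workload.slice/secondary   # cgroup path from the cgroup profile (default if unset)
    oom-score-adj: 0                   # OOM adjust score, in the range -1000 to 1000
```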
Cgroup debugging capability is available through:
The SR Linux provides CLI commands to perform the following:
The output below is an example of checking existing cgroup usage.
The output below is an example of showing information about the current OOM adjust scores for all applications that are managed by the app_mgr.
The output below is an example of showing information about cgroups that are associated with applications managed by the app_mgr.
Use the tools system cgroup pgrep cgroup cgroupname command to list all the applications associated with a specified cgroup; the output below shows an example.
The following Linux-provided CLI commands are available for debugging cgroups:
The systemd-cgls command dumps the cgroup hierarchy. The output below shows an example of the systemd-cgls command.
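An abbreviated, illustrative run on a generic Linux host; the hierarchy on an actual SR Linux system differs:

```
$ systemd-cgls
Control group /:
-.slice
├─user.slice
│ └─user-1000.slice
│   └─session-1.scope
│     └─2031 sshd: admin [priv]
├─init.scope
│ └─1 /sbin/init
└─workload.slice
  └─secondary
    └─2345 python3 ./fib_agent.py
```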
The systemd-cgtop command dumps the current usage of each cgroup. The output below shows an example of the systemd-cgtop command.
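An illustrative example; the cgroup names and resource figures shown are placeholders:

```
$ systemd-cgtop
Control Group                 Tasks   %CPU   Memory  Input/s Output/s
/                               214   12.3     3.1G        -        -
workload.slice                   87    9.8     1.9G        -        -
user.slice                       15    0.4   101.2M        -        -
```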