On Linux computers, system resources are shared among users. Try to use more than your fair share and you will hit an upper limit. You may also bottleneck other users or processes.
Shared System Resources
Among its zillion other jobs, the kernel of a Linux computer is always busy watching who uses how many of the system’s finite resources, like RAM and CPU cycles. A multi-tenant system requires constant attention to ensure that people and processes are not using more system resources than is appropriate.
It’s not fair, for example, for someone to hog so much CPU time that the computer feels slow to everyone else. Even if you’re the only person using your Linux computer, there are limits on the resources your processes can use. After all, you are still just another user.
Some system resources are well known and obvious, like RAM, CPU cycles, and hard drive space. But there are many, many more resources that are monitored, and for which each user, or each user-owned process, has an upper limit set. One of them is the number of files a process can have open at the same time.
If you’ve ever seen the “Too many open files” error message in a terminal window or found it in your system logs, it means that the upper limit has been reached and the process is not allowed to open any more files.
It’s not just the files you’ve opened
There is a system-wide limit to the number of open files that Linux can handle. It is a very large number, as we will see, but there is still a limit. Each user process has an allocation of file handles that it can use, a small portion of the system total allotted to it.
What actually gets allocated is a number of file handles. Each file that is opened requires a handle. Even with fairly generous allocations, system-wide file handles can run out faster than you might imagine.
Linux abstracts almost everything so that it looks like a file. Sometimes they will be just that, plain old files. But other actions, like opening a directory, also use a file handle. Linux uses block special files as a kind of driver for hardware devices. Character special files are very similar, but they are more often used with devices that have a concept of throughput, such as pipes and serial ports.
Block special files handle blocks of data at a time and character special files handle each character separately. These special files can only be accessed by using file handles. Libraries used by a program use a file handle, streams use file handles, and network connections use file handles.
Abstracting all these different requirements so that they appear as files makes it simpler to interface with them and allows things like pipes and streams to work.
You can see that behind the scenes Linux is opening files and using file handles just to run, regardless of your user processes. The open file count is not just the number of files you have opened. Almost everything in the operating system uses file handles.
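To make that concrete, here is a minimal C sketch that opens a regular file, a pipe, and a network socket, and prints the file descriptor each one consumes. The path "/etc/hostname" is just an illustrative choice; any readable file will do.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    /* A plain file, a pipe, and a TCP socket all consume file descriptors. */
    int file_fd = open("/etc/hostname", O_RDONLY);   /* any readable file will do */
    int pipe_fds[2];
    pipe(pipe_fds);                                   /* a pipe uses two descriptors */
    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);    /* a network socket uses one too */

    /* The numbers start at 3 because 0, 1, and 2 are already taken
       by the STDIN, STDOUT, and STDERR streams. */
    printf("file: %d, pipe: %d and %d, socket: %d\n",
           file_fd, pipe_fds[0], pipe_fds[1], sock_fd);
    return 0;
}

On a typical run it prints descriptor numbers 3 through 6, one for each resource it opened.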
File handle limits
The maximum number of file handles in the entire system can be viewed with this command.
cat /proc/sys/fs/file-max
This returns an absurdly large number: roughly 9.2 quintillion. That is the theoretical maximum for the system. It is the largest possible value that can be held in a 64-bit signed integer. Whether your poor computer could really cope with that many files open at once is another question entirely.
At the user level, there is no explicit value for the maximum number of open files you can have. But we can work it out, more or less. To find the maximum number of files that one of your processes can open, we can use the ulimit command with the -n (open files) option.
ulimit -n
And to find the maximum number of processes a user can have, we'll use ulimit with the -u (user processes) option.
ulimit -u
Multiplying those two figures, 1024 open files per process and 7640 processes on our test machine, gives 7,823,360. Of course, many of those processes will already be used by your desktop environment and other background processes, so that is another theoretical maximum, and one you will never realistically reach.
The important figure is the number of files a process can open. By default, this is 1024. It’s worth noting that opening the same file 1024 times at the same time is the same as opening 1024 different files at the same time. Once you have used all the file handles, you are done.
It is possible to adjust the number of files that a process can open. There are actually two values to consider when adjusting this number. One is the value that it is currently set to, or that you are trying to set it to. This is called the soft limit. There is a hard limit too, and this is the highest value the soft limit can be increased to.
The way to think of this is that the soft limit is really the "current value" and the hard limit is the highest value the current value can be raised to. A regular, non-root user can raise their soft limit to any value up to their hard limit. The root user can increase the hard limit.
To see the current soft and hard limits, use ulimit with the -S (soft) or -H (hard) option, together with the -n (open files) option.
ulimit -Sn
ulimit -Hn
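Inside a program, the same two numbers can be read with the getrlimit() system call. As a rough sketch in C, printing the equivalent of ulimit -Sn and ulimit -Hn looks something like this:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_NOFILE is the per-process limit on open file descriptors. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    printf("soft limit (current): %llu\n", (unsigned long long)rl.rlim_cur);
    printf("hard limit (ceiling): %llu\n", (unsigned long long)rl.rlim_max);
    return 0;
}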
To create a situation where we can see the soft limit being applied, we created a program that repeatedly opens files until it fails. It then waits for a keystroke before giving up all the file handles it used. The program is called open-files.
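A minimal C sketch of such a program might look like this. The exact implementation will vary, and the path "/etc/hostname" is just a stand-in for any readable file:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int count = 0;

    /* Keep opening the same file until open() fails with
       EMFILE ("Too many open files"). */
    for (;;) {
        if (open("/etc/hostname", O_RDONLY) < 0) {
            printf("Failed opening file number %d: %s\n", count + 1, strerror(errno));
            break;
        }
        count++;
    }

    printf("Process ID: %d\n", (int)getpid());
    printf("Opened %d files. Press Enter to release them and exit.\n", count);
    getchar();   /* the descriptors are released when the process exits */
    return 0;
}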
./open-files
It opens 1021 files and fails when it tries to open file number 1022.
1024 minus 1021 is three. What happened to the other three file handles? They were used for the STDIN, STDOUT, and STDERR streams that are created automatically for each process. These always have file descriptor values of 0, 1, and 2.
RELATED: How to use the Linux lsof command
We can see them by using the lsof command with the -p (process) option and the process ID of the open-files program. Conveniently, it prints its process ID in the terminal window.
lsof -p 11038
Of course, in a real-world situation, you may not know which process just gobbled up all the file handles. To start your investigation, you can use this piped script. It will tell you the fifteen most prolific users of file handles on your computer.
lsof | awk '{ print $1 " " $2; }' | sort -rn | uniq -c | sort -rn | head -15
Each line of the output shows a count of open file handles, followed by the command name and process ID they belong to, so the heaviest consumers appear at the top. To see more or fewer entries, adjust the -15 parameter of the head command. Once you have identified the process, you need to figure out whether it has gone rogue and is opening too many files because it is out of control, or whether it genuinely needs those files. If it needs them, you should increase its file handle limit.
Increasing the soft limit
If we increase the soft limit and run our program again, we should see it open more files. We will use the ulimit command with the -n (open files) option and a numeric value of 2048. This will be the new soft limit.
ulimit -n 2048
This time we successfully opened 2045 files. As expected, this is three fewer than 2048, because of the file handles used by STDIN, STDOUT, and STDERR.
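A process can also raise its own soft limit programmatically rather than relying on the shell. As a rough sketch, the C equivalent of ulimit -n 2048 uses setrlimit(), with 2048 being the same illustrative value as above:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    getrlimit(RLIMIT_NOFILE, &rl);
    rl.rlim_cur = 2048;     /* new soft limit; must not exceed rl.rlim_max */

    /* A non-root process may only raise the soft limit up to the hard limit. */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    printf("Soft limit is now %llu\n", (unsigned long long)rl.rlim_cur);
    return 0;
}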
Making the changes permanent
Increasing the soft limit only affects the current shell. Open a new terminal window and check the soft limit; you will see that it is back at the old default value. But there is a way to globally set a new default value for the maximum number of open files a process can have, one that is persistent and survives reboots.
Outdated advice often recommends editing files like “/etc/sysctl.conf” and “/etc/security/limits.conf”. However, on systemd-based distributions, these edits don’t work consistently, especially for graphical login sessions.
The technique shown here works on systemd-based distributions. There are two files we need to work with. The first is the "/etc/systemd/system.conf" file, and we will need to use sudo to edit it.
sudo gedit /etc/systemd/system.conf
Find the line that contains the string “DefaultLimitNOFILE”. Remove the “#” hash from the beginning of the line and edit the first number to whatever you want your new soft limit for processes to be. We choose 4096. The second number on that line is the hard limit. We do not adjust this.
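As an illustration, with a soft limit of 4096 the edited line might end up looking something like this (the hard limit value after the colon varies between distributions, so treat it as a placeholder):

DefaultLimitNOFILE=4096:524288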
Save the file and close the editor.
We need to repeat that operation in the “/etc/systemd/user.conf” file.
sudo gedit /etc/systemd/user.conf
Make the same changes to the line containing the string "DefaultLimitNOFILE".
Save the file and close the editor. You must restart your computer or use the systemctl command with the daemon-reexec option so that systemd is restarted and ingests the new configuration.
sudo systemctl daemon-reexec
Opening a terminal window and checking the new limit should show the new value you set. In our case it was 4096.
ulimit -n
We can test that this is a live operational value by re-running our file greedy program.
./open-files
The program cannot open file number 4094, which means 4093 files were opened. That’s our expected value, 3 less than 4096.
Everything is a file
This is why Linux relies so heavily on file handles. Now, if you start to run out of them, you know how to increase your quota.
RELATED: What are stdin, stdout and stderr in Linux?