Hmm. Should be enough, unless you’re opening other file-like objects in your program. But you could try raising/lowering ulimit -n to see if it makes a difference.
Thanks Brian.
I am opening 1024+ file-like objects.
That’s the answer exactly.
One more question.
When I try to change the ulimit value, I get an error:

dev@seoul$ ulimit -n 2048
-bash: ulimit: open files: cannot modify limit: Operation not permitted
dev@seoul$ sudo ulimit -Sn 2048
sudo: ulimit: command not found
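Both errors have a common cause: `ulimit` is a shell builtin, not an external program, so `sudo ulimit` finds nothing to run, and an unprivileged shell can only raise its soft limit up to the hard limit (raising the hard limit itself needs root, e.g. via /etc/security/limits.conf, or a pattern like `sudo sh -c 'ulimit -n 4096 && exec ...'`). You can also do the soft-limit adjustment from inside Ruby itself; a minimal sketch:

```ruby
# A process may raise its own *soft* open-files limit up to the *hard*
# limit without any special privileges. Ruby exposes this through
# Process.getrlimit / Process.setrlimit.
soft, hard = Process.getrlimit(:NOFILE)
puts "soft=#{soft} hard=#{hard}"

# Move the soft limit toward 2048, but never past the hard limit.
target = [2048, hard].min
Process.setrlimit(:NOFILE, target, hard)
puts "soft is now #{Process.getrlimit(:NOFILE).first}"
```

Calling this early in the program avoids depending on the invoking shell's configuration at all.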
server = DRb.start_service(server_uri, TestServer.new)
puts sum

A DRb::DRbConnError exception is raised somewhere in there.
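For context, a minimal self-contained sketch of the kind of setup being described; `TestServer`, the `add` method, and the URI are illustrative assumptions, not the poster's actual code:

```ruby
require 'drb/drb'

# Hypothetical stand-in for the poster's TestServer.
class TestServer
  def add(a, b)
    a + b
  end
end

server_uri = 'druby://localhost:8787'
DRb.start_service(server_uri, TestServer.new)

# Each remote call goes over a TCP connection under the hood, so a
# client juggling many remote objects can exhaust the process's
# file-descriptor limit, which surfaces as DRb::DRbConnError.
remote = DRbObject.new_with_uri(server_uri)
puts remote.add(2, 3)  # => 5
DRb.stop_service
```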
It’s been said that it’s some 250 connections. For one thing, the application might act as some sort of proxy, which would double the number of sockets.

Apparently it either uses some other files or uses about 4 handles per connection. That looks like quite a lot and could probably be lowered, but it does not change the fact that 1000 is a safe default for non-server applications yet can easily be reached by servers.
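One way to check the handles-per-connection guess empirically is to count the process's open descriptors before and after opening a batch of files or sockets. A sketch, assuming a Linux-style /proc/self/fd (falling back to /dev/fd on BSD/macOS):

```ruby
# Count this process's currently open file descriptors by listing the
# per-process fd directory.
def open_fd_count
  dir = File.directory?('/proc/self/fd') ? '/proc/self/fd' : '/dev/fd'
  Dir.children(dir).size
end

before = open_fd_count
files = 10.times.map { File.open(File::NULL) }
puts open_fd_count - before  # roughly 10: one descriptor per open file
files.each(&:close)
```

Running the same measurement around a single DRb call would show how many descriptors one connection really costs.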
I wonder why such limits are imposed. It is probably some workaround
for a flaw in UNIX design that introduces possible DoS by exhausting
kernel memory/structures. Maybe it was fixed in some kernels (if
that’s even possible) but nobody cared to fix the limit as well.
How would you propose to avoid letting users exhaust system resources
without limits of some sort? Sadly the real world implementations of
turing machines don’t generally include unlimited tape…
Modest default limits are a good thing, since they help reduce the
impact of runaways, leaks and, indeed, DoS attempts. This is as true in
*ix as it is in any other system.
I wonder why such limits are imposed. It is probably some workaround
for a flaw in UNIX design that introduces possible DoS by exhausting
kernel memory/structures. Maybe it was fixed in some kernels (if
that’s even possible) but nobody cared to fix the limit as well.
Such limits have been around since long before inhibiting DoS attacks became an important design goal. Every process has a “descriptor table” which defaults to a certain size but can sometimes be increased. However, the per-process descriptor table is nothing but an array of pointers to data structures maintained by the kernel. (That’s why in Unix a file descriptor is always a low-valued integer: it’s just an offset into the pointer array.) What this means is that the system resources consumed by the file descriptor (which is owned by a process) must be considered in distinction to the kernel resources consumed by an actual open file or network connection, which are managed separately and obey very different constraints. It’s normal in Unix for the same kernel object representing an open file to appear in the descriptor tables of several different processes.

Just having the ability to represent 50,000 different file descriptors in a single process, however, doesn’t automatically mean you have that much more I/O bandwidth available to your programs. Think about IBM mainframes, which are designed for extremely high I/O loads. You can have 500 or more actual open files on an IBM mainframe, all performing real live I/O. Intel-based servers can’t come anywhere near that kind of capacity. If your per-process tables are large enough for thousands of open file descriptors, that says something about the size of your I/O data structures (which are constrained primarily by memory, a medium-inexpensive resource), but nothing at all about the real throughput you’ll get.
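The “low-valued integer offset” point is easy to see from Ruby, where every IO object exposes its slot in the descriptor table via `#fileno`:

```ruby
require 'tempfile'

# The classic first three slots in the per-process descriptor table.
puts $stdin.fileno   # 0
puts $stdout.fileno  # 1
puts $stderr.fileno  # 2

# A newly opened file takes the next free low-numbered slot, not some
# opaque kernel handle.
f = Tempfile.new('fd-demo')
puts f.fileno
f.close!
```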
Of course, the FDs are only there to organize your IO. You can use TCP and the OS-provided sockets, or you can use a single UDP socket and maintain the connection state yourself.
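A minimal sketch of that single-UDP-socket idea: one descriptor, with per-peer state kept in an ordinary Hash instead of one FD per connection (the field names are illustrative):

```ruby
require 'socket'

# One UDP socket serves every peer; connection state lives in a Hash
# keyed by [address, port] rather than in the kernel's fd table.
sock = UDPSocket.new
sock.bind('127.0.0.1', 0)
peers = Hash.new { |h, k| h[k] = { packets: 0 } }

# Simulated client sending a datagram to our socket's ephemeral port.
client = UDPSocket.new
client.send('hello', 0, '127.0.0.1', sock.addr[1])

data, addr = sock.recvfrom(1024)
peers[[addr[3], addr[1]]][:packets] += 1
puts "#{data} from #{addr[3]}:#{addr[1]} (#{peers.size} peer tracked)"
```

The trade-off, as noted below, is that you now reimplement everything TCP would otherwise do for you: ordering, retransmission, and flow control.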
Similarly the FDs to open files give you organized access to space on
a disk drive, and you can always open the partition device and manage
the storage yourself.
The throughput of the structured IO would usually be lower because the OS already does some processing on the data to organize it neatly for you.
Limiting the number of FDs per process does not do much to protect kernel memory. Maybe a single process cannot exhaust it, but forking more processes is easy. It is only a workaround for the poor design, after all.
Modest default limits are a good thing, since they help reduce the
impact of runaways, leaks and, indeed, DoS attempts. This is as true in
*ix as it is in any other system.
Sure, it is avoidable. In a system where memory is allocated to users (not somehow vaguely pooled) and the networking service is able to back its socket data structures with user memory, you only care about memory, not what the user uses it for. The user can then store files, sockets, or anything else he wishes.

Of course, this probably would not happen on a POSIX system.
On Mon, May 14, 2007 at 11:13:34PM +0900, Michal S. wrote:
Of course, the FDs are only there to organize your IO. You can use TCP and the OS-provided sockets, or you can use a single UDP socket and maintain the connection state yourself.
If you want to run your own TCP stack you’ll need to open a raw socket, and you can’t do that unless you’re root, because of the security implications (e.g. ordinary users could masquerade as privileged services such as SMTP on port 25).
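You can see this restriction directly from Ruby; as an ordinary user, the attempt fails with `Errno::EPERM` (a sketch, behavior depends on whether the process has root/CAP_NET_RAW):

```ruby
require 'socket'

# Opening a raw socket requires root (or CAP_NET_RAW on Linux).
# Unprivileged processes get EPERM, which is the kernel enforcing the
# restriction discussed above.
begin
  s = Socket.new(:INET, :RAW, Socket::IPPROTO_TCP)
  puts 'raw socket opened (running with elevated privileges?)'
  s.close
rescue Errno::EPERM
  puts 'Operation not permitted'
end
```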
Similarly the FDs to open files give you organized access to space on
a disk drive, and you can always open the partition device and manage
the storage yourself.
Similarly, only root can have direct access to the raw device. Otherwise, any user would be able to read any other user’s files, modify any file at whim, totally corrupt the filesystem, etc.
Limiting the number of FDs per process does not do much to protect the
kernel memory. Maybe a single process cannot exhaust it but forking
more processes is easy. It is only a workaround for the poor design
after all.
I really should let this go because it has nothing much to do with Ruby, but I don’t agree that the Unix IO-access design is a poor one. User programs should never be accessing raw devices for any reason. It’s absolutely not the case that direct access to raw devices gives you better “performance,” especially considering how much work is being done by well-optimized device drivers, and also balanced against the damage you can do by accessing them yourself. And the design has stood the test of time, having proved its ability to easily accommodate a wide range of real devices and pseudo-devices over the years. And Windows even copied the design of the system calls (even though the underlying implementation appears to be quite different, except of course for Windows’ TCP, which was stolen from BSD).