On 17.03.2009 00:03, Michael M. wrote:
> [...] the issue I described is caused by the receiving side not
> reading all the data. Though I think I have narrowed the problem very
> slightly. The thread/process pair that blocks indefinitely blocks on
> the read_end.read() call. Which is weird and annoying.
And are you sure you close stdout in the client, or that the client terminates?
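Remember that a read on a pipe only returns EOF once every write end -
in every process that holds one - is closed. Untested sketch of the
failure mode, using your read_end/write_end pair:

read_end, write_end = IO.pipe

pid = fork do
  read_end.close          # the child only writes
  write_end.puts "hello"
  write_end.close         # child done writing -> one write end gone
end

# If the parent keeps its write end open, read_end.read below never
# sees EOF and blocks forever, even after the child has exited.
write_end.close

puts read_end.read        # returns once ALL write ends are closed
Process.wait(pid)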
> pw = IO::pipe
> pr = IO::pipe
> pe = IO::pipe
> read_end, write_end = IO.pipe
Why do you open four pipes when three are sufficient?
> pe[0].close
> STDERR.reopen(pe[1])
> pe[1].close
>
> That's the code I used to make sure STDOUT, STDIN and STDERR weren't
> causing full buffer hangs.
This code does nothing to prevent full-buffer hangs; it merely makes
sure that stdin, stdout and stderr are redirected to pipes.
Btw, do you also close the unused pipe ends in the parent?
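For illustration, here is roughly (untested, the command name is made
up) how I would wire up three pipes so that no unused ends stay open on
either side:

child_in  = IO.pipe   # parent writes, child reads
child_out = IO.pipe   # child writes, parent reads
child_err = IO.pipe

pid = fork do
  STDIN.reopen(child_in[0])
  STDOUT.reopen(child_out[1])
  STDERR.reopen(child_err[1])
  # After the reopens the original handles are not needed any more.
  (child_in + child_out + child_err).each { |io| io.close }
  exec "some_command"   # made-up command
end

# The parent must drop the child's ends, too, otherwise the reads
# below never see EOF.
child_in[0].close
child_out[1].close
child_err[1].close

child_in[1].puts "input for the child"
child_in[1].close

puts child_out[0].read  # (for chatty commands, drain out and err concurrently)
puts child_err[0].read
Process.wait(pid)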
> I borrowed it from a process detach method elsewhere, so it might not
> be exactly what I want. However, it had no effect. The problem still
> exists.
I do not know whether you plan to exec your forked process, but if so
you can make your life much easier by using Open3. From memory, you can
then do something like

require 'open3'

Open3.popen3("cmd args") do |stdin, stdout, stderr|
  stdin.puts "foo"
end
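popen3 does all the pipe plumbing and closing for you. And if you are
worried about full buffers, you can drain stderr in a separate thread
while you read stdout. Untested example, using sort as a stand-in for
your command:

require 'open3'

Open3.popen3("sort") do |stdin, stdout, stderr|
  # Drain stderr concurrently so a chatty child cannot fill
  # that pipe and block.
  err_reader = Thread.new { stderr.read }

  stdin.puts "banana"
  stdin.puts "apple"
  stdin.close             # signal EOF, or sort never finishes

  puts stdout.read        # => "apple\nbanana"
  err_reader.value        # join the drain thread
end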
> Righto, here's the deal: Either the fork call doesn't work (but
> doesn't fail either -> a valid pid is returned) [...] if I do a puts
> either side of the fork() (with and without the STDOUT redirection)
> then on the process that blocks, the puts just inside the fork()
> doesn't run. This indicates that the fork() call fails, however it
> doesn't return nil like the docs say.
> I am not sure I can follow your logic. Failure of fork can be
> recognized by an exception (in the unlikely case that your OS cannot
> manage more processes or won't allow you to have another one).
I have to agree that your reasoning is not fully clear to me. Using
output operations to determine failure or success of a fork call is
using the wrong tool for the job: when stdout is redirected to a pipe
that nobody reads, the puts itself can block or its output can simply
go unseen, so missing output proves nothing about fork.
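The return value is the reliable indicator: fork returns the child's
pid in the parent, nil in the child, and raises on failure instead of
returning anything. Roughly:

begin
  if pid = fork
    # parent branch - fork returned the child's pid
    Process.wait(pid)
  else
    # child branch - fork returned nil
    exit! 0
  end
rescue Errno::EAGAIN, Errno::ENOMEM => e
  # failure is an exception, never a nil in the parent
  warn "fork failed: #{e.message}"
end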
> An interesting effect I noticed is that if I remove my call to
> Process.wait(pid) then it displays almost identical behaviour! It runs
> perfectly sometimes, and occasionally just hangs. However, if I'm not
> collecting child processes, then I would expect to hit my ulimit at
> the same place every time and for the fork() call to fail noisily. The
> docs don't mention anything on this. Is it possible that my child
> processes aren't being collected correctly, thus the rlimit is being
> hit, but Ruby believes the process is collected, so its internal
> rlimit is not reached, so it calls fork() regardless?
Could just mean that you create new processes faster than old ones
terminate and are reaped.
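You can check that by reaping finished children without blocking and
watching how many actually come back, e.g.:

begin
  # Collect every child that has exited so far, without blocking.
  while pid = Process.waitpid(-1, Process::WNOHANG)
    puts "reaped #{pid}, status #{$?.exitstatus.inspect}"
  end
rescue Errno::ECHILD
  # no children left to wait for
end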
> Is that at all sane? Though presumably, the external fork() call would
> fail, causing the internal call to fail. And yes, I'm speaking as if
> the two are different, but they might not be. I suppose it doesn't
> really make sense to duplicate it. In fact, the call directly after
> the block passed to fork() doesn't execute. Is it possible that my
> Thread is hanging on the fork() call? What would that indicate? How
> could I fix it?
I'd rather ask: what is the problem you are trying to solve? I still
lack the big picture - we have been talking about details all the time,
and it's difficult to stay on track when the context is missing.
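One caveat though, since you combine threads with fork: in the child
only the thread that called fork survives, so whatever other threads
were doing (holding a lock, draining a pipe) simply stops there. You
can see it like this:

Thread.new { loop { sleep 1 } }   # some background thread

pid = fork do
  # only the forking thread exists in the child
  puts Thread.list.size           # => 1
end
Process.wait(pid)
puts Thread.list.size             # => 2 in the parent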
Cheers
robert