I have a long-lived process that I am trying to get Ruby to interact
with over time.
Essentially, I'm working in an environment that requires a compiler
written in Java (and naturally quite slow). There is an additional
Java-based shell application that sits in front of the compiler and
allows for dramatically faster, incremental compilation.
I am building a set of project creation and build tools that hide
these aspects of development in order to make it much easier to get up
and running with a new project and to grow existing projects.
Everything - and I mean everything - is working adequately, except my
front end to this compiler front end.
Here is what I'm doing. I'm using what Stephen Wong posted
(http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/118672)
as a foundation:

1. Initiate an external process from Ruby using fork (I have also
tried exec, IO.popen, Open3.popen3, system and backticks, none of
which seemed to work as easily as what Stephen provided).
2. Redirect $stdin, $stdout and $stderr of this forked process to
three new IO streams.
3. Write to one stream and read from the others so that the user can
see what's going on as Ruby works with the forked process.
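The steps above can be sketched roughly as follows. This is a minimal, untested reconstruction of the fork/reopen approach, not Stephen's actual code; `cat` stands in for the Java shell application here just so the sketch is runnable end to end.

```ruby
# Two pipes: one carries commands to the child, one carries its output back.
child_in, write_stream = IO.pipe   # parent writes commands to write_stream
read_stream, child_out = IO.pipe   # parent reads compiler output from read_stream

pid = fork do
  # In the child, point the standard streams at the pipe ends...
  $stdin.reopen(child_in)
  $stdout.reopen(child_out)
  # ...close the descriptors the child no longer needs...
  [write_stream, read_stream, child_in, child_out].each { |io| io.close }
  # ...and replace this process with the long-lived shell.
  exec 'cat'   # stand-in for something like: exec 'java -jar compiler-shell.jar'
end

# The parent must close the child's ends, or reads will never see EOF.
child_in.close
child_out.close
write_stream.sync = true           # don't let Ruby buffer the commands

write_stream.puts 'compile project'
echoed = read_stream.gets          # 'cat' echoes the command line back

write_stream.close                 # child sees EOF on stdin and exits
Process.wait(pid)
```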
I am able to successfully call write_stream.write('msg'), and the
process receives this message and responds as expected - if and only
if I do not call read_stream.read at any time.
The only way I have been able to get the read_stream to not break
the write operation is to call write_stream.close after writing. In
that case everything works perfectly, except I cannot perform any
additional write operations - thereby rendering the whole long-lived
approach pointless.
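I suspect the blocking read is the culprit, so one thing I've been experimenting with is polling the read side with IO.select before touching it, so a read can never hang while the write stream stays open. A rough sketch (the method name `drain` and the timeout value are mine, not from Stephen's code):

```ruby
# Read whatever output is currently available on read_stream without
# blocking. IO.select with a timeout tells us whether data is waiting;
# read_nonblock then pulls it off without ever hanging the caller.
def drain(read_stream, timeout = 0.5)
  output = +''
  while IO.select([read_stream], nil, nil, timeout)
    begin
      output << read_stream.read_nonblock(4096)
    rescue EOFError
      break                        # writer closed its end; nothing more to read
    end
  end
  output
end

# Usage against a plain pipe:
r, w = IO.pipe
w.write('compilation complete')
puts drain(r, 0.1)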
Does write_stream.close actually kill the entire forked process? I
can’t really tell what’s going on in there. It looks like I should be
able to simply reopen the write_stream against the existing process,
but I can’t figure out how to do that.
Is there some way to 'reopen' an IO stream that has been closed,
when the stream was originally attached inside of a fork block?
Please see the reopen calls in Stephen's code: I can't seem to store
references to those $std[in/out/err] streams outside of the fork
closure's scope. It seems like reopening the write stream after
closing it would work - if the process is still around.
If the forked process is a Java shell application, could it expect
or transmit some non-standard end-of-line character for read/write
operations?
Is it typical for a forked process to block all previous or
subsequent write operations when read is called?
What tools would you use to figure out what is going on here? Since
I'm messing with my shell's $std[in/out/err] streams, and working
with forks and threads, I'm having trouble debugging - is there some
tool that will show me what these processes are actually doing?
Any help is greatly appreciated!