okay, after wrestling with rails, jruby and hadoop for a few days, i’m
finally able to use my rails app in the mapper class. the key sticking
point turned out to be the fact that rails searches for its
configuration by walking up the directory tree by using File.dirname.
but if running from a jar, jruby’s RubyFile implementation refuses to
treat the “root” directory as a valid path entry. the easy solution was
to modify rubydoop’s package.rb to jar up my rails app code under a
/classes directory instead of at the top level. i then needed to add
/classes and /classes/lib to the LOAD_PATH. (it has to be /classes due to
hadoop’s classpath when the job jar is run by the workers.)
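for illustration, the load-path tweak described above amounts to something like the sketch below. the /classes and /classes/lib layout is my own convention for the jar, not anything rubydoop mandates; when hadoop runs the job jar, entries inside it are visible to jruby as plain paths rooted at the classpath.

```ruby
# minimal sketch: prepend the jar-internal directories to jruby's load
# path at mapper startup so requires resolve against the jarred rails app.
%w[/classes /classes/lib].each do |dir|
  $LOAD_PATH.unshift(dir) unless $LOAD_PATH.include?(dir)
end
```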
i’m attaching my patched-up version of package.rb, which is the only
change to rubydoop i needed to make. if it looks interesting, let me know
and i can send you a pull request on github.
thanks for all your help.
Ilya K. wrote in post #1089906:
thanks, that helped me make progress. the thing i’m struggling with now
is that EMR does not seem to unjar my jar before running it, unlike my
local hadoop, which uncompresses everything into a /tmp/… directory and
loads from there. in EMR, i
get stacktraces like:
org.jruby.exceptions.RaiseException: (ENOENT) No such file or directory
from which it looks like EMR hadoop is loading directly from that jar
file, which makes any attempts to call Pathname.new().realpath (for
example) fail. some gems try to do that (to load their configs, etc).
i’m trying to figure out how to get EMR to stage my jar similarly to
what my local hadoop installation does.
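the realpath failure described above can be sketched as follows. realpath has to resolve the path through the real filesystem, so a path that only exists inside a jar (never unpacked to disk) raises Errno::ENOENT; `safe_realpath` here is a hypothetical helper showing one tolerant fallback, not part of rubydoop or any gem.

```ruby
require 'pathname'

# sketch: realpath raises ENOENT for paths not present on disk (e.g.
# files loaded straight out of a jar), so fall back to a best-effort
# lexical expansion instead of letting the lookup crash.
def safe_realpath(path)
  Pathname.new(path).realpath
rescue Errno::ENOENT
  Pathname.new(File.expand_path(path))
end
```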