Segmentation fault on long-running script (Linux)

I have a long-running script which takes a lot of NetCDF files and
generates SQL which gets piped to another process for database import.
The script failed about halfway through (after about 5 days) with a
segmentation fault.

Is there a way to debug a segfaulting script (i.e. is there a way to
generate a core file) so I can find out more?

My understanding is that a segmentation fault is usually caused by
addressing memory which does not belong to the process (a bad pointer
and such).

— using
ruby 1.8.2 (2005-04-11) [i386-linux]

vlad

On 2/26/07, Vladimir K. [email protected] wrote:

My understanding is that a segmentation fault is usually caused by
addressing memory which does not belong to the process (a bad pointer
and such).

I suppose that Linux makes core dumps unless it is told not to (using
ulimit). You can inspect the core file with gdb. It may be helpful to
have a symbol table available (i.e. a ruby binary that isn't stripped).

Now, don’t take this as too accurate… Last time I debugged a core
file was in 1999…
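
For what it's worth, inspecting one would look roughly like this,
assuming the interpreter is /usr/bin/ruby and the dump ends up as a file
named core in the script's working directory (both paths are guesses,
not taken from your setup):

$ gdb /usr/bin/ruby core
(gdb) bt
(gdb) info registers

bt prints the C-level backtrace at the point of the crash, and info
registers shows the register state; the backtrace is only readable if
the ruby binary still has its symbols.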

On Tue, Feb 27, 2007 at 12:21:08AM +0900, Jan S. wrote:

I suppose that Linux makes core dumps unless it is told not to
(using ulimit).

Or the process is setuid, unless you enable dumping of setuid processes
with sysctl. By default:

kernel.suid_dumpable = 0
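
That only matters if the process runs setuid, which a plain user-owned
script normally doesn't, but for completeness, turning it on would be
something like (as root):

# sysctl -w kernel.suid_dumpable=1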

Note that many Linux systems have the core size ulimit set to 0 by
default. I get

$ ulimit -a | grep core
core file size (blocks, -c) 0

on both Ubuntu 6.06 and CentOS 4.4.
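
So to get a dump at all, the limit has to be raised in the shell that
launches the script, before running it; a rough sketch (the script and
import command names here are made up, substitute your own):

$ ulimit -c unlimited
$ ruby generate_sql.rb | psql weatherdb

When it next segfaults, a core file should appear in the process's
current directory (modulo the suid_dumpable caveat above).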

Regards,

Brian.

$ ulimit -a | grep core
core file size (blocks, -c) 0

This setting is the “culprit” I think (this is on Debian Sarge).

Thank you all very much for the pointers.

vlad