Building ruby for speed: wise or otherwise?

My ActiveRecord-based script is taking longer than I’d like.
While I wait for approval to get a faster machine :-) I’m wondering
about rebuilding ruby 1.8.2 (which I have now) and changing the
CFLAGS from the default
CFLAGS=-g -O2
to
CFLAGS=-O3
or something of the sort. I’m presently using gcc-3.4.3 on
Solaris9. Has anyone done this and if so is there anything I should
watch out for? ISTR reported problems when building other packages
with high -O values in the past.
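One cheap check before rebuilding: ask the running interpreter what it was built with, so any rebuild can be compared against a known baseline. A minimal sketch using only the standard library (on 1.8 the constant is spelled Config::CONFIG; RbConfig::CONFIG is the newer spelling):

```ruby
require 'rbconfig'

# Print the compiler and flags the running ruby was actually built with.
# (On ruby 1.8 this constant is Config::CONFIG; RbConfig::CONFIG is the
# newer spelling.)
cflags = RbConfig::CONFIG['CFLAGS']
cc     = RbConfig::CONFIG['CC']
puts "built by #{cc} with CFLAGS=#{cflags}"
```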

Would the answer be different for gcc-4.0.2?

    Thank you,
    Hugh

Hugh S. wrote:

with high -O values in the past.

Would the answer be different for gcc-4.0.2?

If gcc-4.x is an option, try it. Anecdotally, it’s substantially faster
than 3.x. In fact, it’s one less reason for me to use msvc on windows:
code compiled with gcc-4.0 (with -O2) turns out to be faster than msvc
for some numerically intensive simulation code running as a ruby
extension, whereas msvc was faster than gcc-3.x output code. YMMV.

Joel VanderWerf wrote:

Solaris9. Has anyone done this and if so is there anything I should
extension, whereas msvc was faster than gcc-3.x output code. YMMV.

Just out of curiosity, what options did you pass to cl when using MSVC?

  • Dan

On Tue, 29 Nov 2005, Joel VanderWerf wrote:

watch out for? ISTR reported problems when building other packages
with high -O values in the past.

Would the answer be different for gcc-4.0.2?

If gcc-4.x is an option, try it. Anecdotally, it’s substantially faster

Yes, I’m unpacking the tarball now, as x.0.2 is sufficiently tried
and tested to be worth looking at, for my situation.

than 3.x. In fact, it’s one less reason for me to use msvc on windows:
code compiled with gcc-4.0 (with -O2) turns out to be faster than msvc
for some numerically intensive simulation code running as a ruby
extension, whereas msvc was faster than gcc-3.x output code. YMMV.

thanks for that. Still curious about dropping -g and bumping up to
-O3 though… :-)

    Hugh

Joel VanderWerf wrote:

watch out for? ISTR reported problems when building other packages
with high -O values in the past.

Would the answer be different for gcc-4.0.2?

If gcc-4.x is an option, try it. Anecdotally, it’s substantially
faster than 3.x. In fact, it’s one less reason for me to use msvc on
windows: code compiled with gcc-4.0 (with -O2) turns out to be faster
than msvc for some numerically intensive simulation code running as a
ruby extension, whereas msvc was faster than gcc-3.x output code.
YMMV.

If Hugh is using ActiveRecord intensively with a database then it’s most
likely that he’ll see no positive performance effect from compiling it
with more aggressive optimization.

In fact it’s likely that careful optimization on the database side will
yield better results. This can be as easy as creating some indexes - but
might be much more complicated - depending on the bottleneck. (Often it’s
IO and this might have several reasons, from suboptimal execution plans
to slow disks / controllers.)

Kind regards

robert

On Tue, 29 Nov 2005, Robert K. wrote:

If Hugh is using ActiveRecord intensively with a database then it’s most
likely that he’ll see no positive performance effect from compiling it with
more aggressive optimization.

In fact it’s likely that careful optimization on the database side will
yield better results. This can be as easy as creating some indexes - but
might be much more complicated - depending on the bottleneck. (Often it’s
IO and this might have several reasons, from suboptimal execution plans
to slow disks / controllers.)

At the moment my script to populate the tables is taking about an
hour. Anyway it’s mostly ruby I think, because it spends most of
the time setting up the arrays before it populates the db with them.

Besides that, I’m fairly new to database work, so I’m trying to
optimize what I know about before I start fiddling with the db.

Slow disks/controllers (+ lots of users) could be a factor, the
machine is 5.5 years old.

But those are good points. Thank you.

    Hugh

Daniel B. wrote:

Joel VanderWerf wrote:

If gcc-4.x is an option, try it. Anecdotally, it’s substantially faster
than 3.x. In fact, it’s one less reason for me to use msvc on windows:
code compiled with gcc-4.0 (with -O2) turns out to be faster than msvc
for some numerically intensive simulation code running as a ruby
extension, whereas msvc was faster than gcc-3.x output code. YMMV.

Just out of curiosity, what options did you pass to cl when using MSVC?

The default flags generated by mkmf.rb:

CC = cl -nologo
CFLAGS = -MD -Zi -O2b2xg- -G6
CPPFLAGS = -I. -I$(topdir) -I$(hdrdir) -I$(srcdir) -I. -I./.. -I./../missing

.c.obj:
$(CC) $(CFLAGS) $(CPPFLAGS) -c -Tc$(<:\=/)

I’ve never played around with the optimization flags for msvc (partly
because msvc always seemed so much faster than gcc).

Hi Joel,

code compiled with gcc-4.0 (with -O2) turns out to be faster than msvc
for some numerically intensive simulation code running as a ruby
extension, whereas msvc was faster than gcc-3.x output code. YMMV.

Was this with MSVC 7.1 or 8.0?

Thanks,

Wayne V.
No Bugs Software
“Ruby and C++ Agile Contract Programming in Silicon Valley”

“H” == Hugh S. [email protected] writes:

H> thanks for that. Still curious about dropping -g and bumping up to
H> -O3 though… :-)

You can try it, but don’t forget this

moulon% CC="gcc -fomit-frame-pointer" ./configure > /dev/null 2>&1
moulon% make > /dev/null
re.c: In function ‘rb_memsearch’:
re.c:121: warning: pointer targets in passing argument 1 of
‘rb_memcicmp’ differ in signedness
re.c:121: warning: pointer targets in passing argument 2 of
‘rb_memcicmp’ differ in signedness
re.c:129: warning: pointer targets in passing argument 1 of
‘rb_memcicmp’ differ in signedness
re.c:129: warning: pointer targets in passing argument 2 of
‘rb_memcicmp’ differ in signedness
regex.c: In function ‘calculate_must_string’:
regex.c:1014: warning: pointer targets in initialization differ in
signedness
regex.c:1015: warning: pointer targets in initialization differ in
signedness
regex.c:1029: warning: pointer targets in assignment differ in
signedness
regex.c: In function ‘ruby_re_search’:
regex.c:3222: warning: pointer targets in passing argument 1 of
‘slow_search’ differ in signedness
regex.c:3222: warning: pointer targets in passing argument 3 of
‘slow_search’ differ in signedness
regex.c:3222: warning: pointer targets in passing argument 5 of
‘slow_search’ differ in signedness
regex.c:2689: warning: pointer targets in passing argument 5 of
‘slow_match’ differ in signedness
regex.c:3227: warning: pointer targets in passing argument 1 of
‘bm_search’ differ in signedness
regex.c:3227: warning: pointer targets in passing argument 3 of
‘bm_search’ differ in signedness
string.c: In function ‘rb_str_index_m’:
string.c:1133: warning: pointer targets in initialization differ in
signedness
string.c: In function ‘rb_str_rindex_m’:
string.c:1255: warning: pointer targets in initialization differ in
signedness
string.c:1256: warning: pointer targets in initialization differ in
signedness
./lib/fileutils.rb:1257: [BUG] Segmentation fault
ruby 1.8.4 (2005-10-29) [i686-linux]

make: *** [.rbconfig.time] Aborted
moulon%

Guy Decoux

“H” == Hugh S. [email protected] writes:

H> [ half a kilo of warnings :-) ]

gcc 4.0.2, the warnings were for matz :-)

H> Ah, not a good idea. But dropping -g ought to speed things up a
H> little, I’d hope.

If you drop -g, it can add -fomit-frame-pointer with -O2 :-)

Guy Decoux

On Tue, 29 Nov 2005, ts wrote:

re.c:121: warning: pointer targets in passing argument 1 of ‘rb_memcicmp’ differ in signedness
[ half a kilo of warnings :-) ]
string.c:1256: warning: pointer targets in initialization differ in signedness
./lib/fileutils.rb:1257: [BUG] Segmentation fault
ruby 1.8.4 (2005-10-29) [i686-linux]

make: *** [.rbconfig.time] Aborted
moulon%

Ah, not a good idea. But dropping -g ought to speed things up a
little, I’d hope.


    Thank you,
    Hugh

On Tue, 29 Nov 2005, ts wrote:

“H” == Hugh S. [email protected] writes:

H> [ half a kilo of warnings :-) ]

gcc 4.0.2, the warnings were for matz :-)

H> Ah, not a good idea. But dropping -g ought to speed things up a
H> little, I’d hope.

If you drop -g, it can add -fomit-frame-pointer with -O2 :-)

Good job I asked! :-) Thank you. I’ll carry on with my build of new
[binutils, bison, gcc] then.


    Hugh

Hugh S. [email protected] writes:

to slow disks / controllers.)

At the moment my script to populate the tables is taking about an
hour. Anyway it’s mostly ruby I think, because it spends most of
the time setting up the arrays before it populates the db with them.

Do you use transactions correctly?

Hugh S. [email protected] writes:

learn about them. So I suppose that answer is likely to be “no”
:-)

If you need to INSERT bigger chunks of data, put it in a transaction
so it will write the data to disk only once. If you need to insert
even bigger amounts of data, using COPY (usage depends on your
database) can speed things up a lot too. It may be helpful to add
indexes after importing the data.
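A minimal sketch of the batching idea. The Record class below is an invented stand-in, not ActiveRecord, so the snippet runs on its own; with a real model only the each_slice/transaction lines apply, and the real transaction call wraps the block in BEGIN/COMMIT:

```ruby
require 'enumerator'  # Array#each_slice lives here on ruby 1.8

# Sketch: commit inserts in batches so the database writes to disk once
# per batch rather than once per row. "Record" is a stand-in stub; a real
# ActiveRecord model already provides transaction and create.
class Record
  @@rows = []
  def self.transaction
    yield                          # real AR issues BEGIN ... COMMIT here
  end
  def self.create(attrs)
    @@rows << attrs
  end
  def self.count
    @@rows.size
  end
end

rows = (1..1200).map { |i| { :id => i } }
rows.each_slice(500) do |batch|    # batch size is a tuning knob, not a rule
  Record.transaction do
    batch.each { |attrs| Record.create(attrs) }
  end
end
```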

On Tue, 29 Nov 2005, Christian N. wrote:

Hugh S. [email protected] writes:

On Tue, 29 Nov 2005, Robert K. wrote:

    [...]

In fact it’s likely that careful optimization on the database side will
yield better results. This can be as easy as creating some indexes - but
[…]
At the moment my script to populate the tables is taking about an
hour. Anyway it’s mostly ruby I think, because it spends most of
the time setting up the arrays before it populates the db with them.

Do you use transactions correctly?

I’ve only got one process accessing the db at the moment. If you’ve
got pointers on common errors, misconceptions, etc I’d be glad to
learn about them. So I suppose that answer is likely to be “no”
:-)

Besides that, I’m fairly new to database work, so I’m trying to
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
optimize what I know about before I start fiddling with the db.

    Thank you,
    Hugh

On Tue, 29 Nov 2005, Christian N. wrote:

I’ve only got one process accessing the db at the moment. If you’ve
got pointers on common errors, misconceptions, etc I’d be glad to
learn about them. So I suppose that answer is likely to be “no”
:-)

If you need to INSERT bigger chunks of data, put it in a transaction
so it will write the data to disk only once. If you need to insert
even bigger amounts of data, using COPY (usage depends on your
database) can speed things up a lot too. It may be helpful to add
indexes after importing the data.

OK, thanks.

    Hugh

hgs wrote:

On Tue, 29 Nov 2005, Christian N. wrote:

I’ve only got one process accessing the db at the moment. If you’ve
got pointers on common errors, misconceptions, etc I’d be glad to
learn about them. So I suppose that answer is likely to be “no”
:-)

If you need to INSERT bigger chunks of data, put it in a transaction
so it will write the data to disk only once. If you need to insert
even bigger amounts of data, using COPY (usage depends on your
database) can speed things up a lot too. It may be helpful to add
indexes after importing the data.

OK, thanks.

    Hugh

Temporarily disabling constraints (UNIQUE and FOREIGN KEY) will improve
performance too, if you are sure the data is safe.

Wayne V. wrote:

Hi Joel,

code compiled with gcc-4.0 (with -O2) turns out to be faster than msvc
for some numerically intensive simulation code running as a ruby
extension, whereas msvc was faster than gcc-3.x output code. YMMV.

Was this with MSVC 7.1 or 8.0?

I’m embarrassed to say it was 6.0. I have 8.0 (express) but can’t get
past the “MSVCR80.DLL missing” problem, at least with the
mkmf.rb-generated Makefile. (For regular projects in MSVC 8.0, you can
get around this problem by deleting the foobar.exe.embed.manifest.res
file from the Debug dir of project foobar.) Anyone have any ideas?

I’m using the single-click installer ruby, which IIRC is compiled with
7.1. Maybe it’s not a fair comparison with gcc-built ruby, since that
will take advantage of i686 vs. i386. So, not a very scientific
comparison at all; it would be best to use the latest MS compiler, build
ruby from scratch, and make sure to use the same arch settings as for
gcc.

I’m just glad to see that gcc is so much better than it was.

Hugh S. wrote:

from sub optimal execution plans to slow disks / controllers.)

At the moment my script to populate the tables is taking about an
hour. Anyway it’s mostly ruby I think, because it spends most of
the time setting up the arrays before it populates the db with them.

How did you measure that?

Besides that, I’m fairly new to database work, so I’m trying to
optimize what I know about before I start fiddling with the db.

Um, although I can understand your wariness with regard to the unknown -
you may completely waste your time. IMHO you should first determine the
cause of the slowness and then find a solution. If you optimize something
that just takes 10% of the whole running time you’ll never see an
improvement of more than 10%…
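A cheap way to find the dominant phase is to time each one with the standard Benchmark library; the work inside the blocks below is an invented placeholder for the real script's phases:

```ruby
require 'benchmark'

# Time each phase separately to find out which one dominates before
# optimizing anything. The work in the blocks is a made-up placeholder.
data = nil
setup_time = Benchmark.realtime do
  data = (1..100_000).map { |i| [i, i * 2] }   # stands in for array setup
end
insert_time = Benchmark.realtime do
  data.each { |pair| pair.last }               # stands in for the db inserts
end
puts "array setup: #{setup_time}s"
puts "db insert:   #{insert_time}s"
```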

Another option to get masses of data into a database is to use some form
of bulk insert / bulk load. Depending on your database there are probably
several options.

Slow disks/controllers (+ lots of users) could be a factor, the
machine is 5.5 years old.

Could be. If possible, at least give it more memory.

Kind regards

robert

Hugh S. wrote:

will yield better results. This can be as easy as creating some
By eye! :-) The code doesn’t access the database at all until the
last part, and it doesn’t get there till about 45 mins. But to be
honest, this is so slow it isn’t worth benchmarking to get the
milliseconds.

Wow! In that case it certainly seems to make sense to optimize that. Did
you keep an eye on memory consumption and disk IO? Could well be that the
sheer amount of data (and thus memory) slows your script down.

 555    1676   17179 /home/hgs/csestore_meta/populate_tables2.rb

I could post the script if you like. I’ve not profiled it to find
out where the slow bits are because it would take about 5 hours
going by previous slowdowns when profiling.

Unfortunately we’re close to release and I don’t really have much time to
look into this deeper. If anyone else volunteers…

MySQL. Part of the problem is that this script is also for
updating, based on new data. If the db is empty it just inserts,
else it updates. Easy enough in ActiveRecord.

Ok, bad for bulk loading.
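For what it’s worth, the insert-or-update pattern can be sketched like this. The Part class and its attributes are invented stand-ins so the snippet runs alone (a real ActiveRecord model supplies find_by_name, new and save); the one-lookup-plus-one-write per row is exactly what makes this bad for bulk loading:

```ruby
# Sketch of insert-or-update: look the row up, create it if missing,
# otherwise change it and save. "Part" is a stand-in stub, not a real
# ActiveRecord model.
class Part
  @@table = {}
  attr_accessor :name, :qty
  def self.find_by_name(n)
    @@table[n]
  end
  def initialize(n)
    @name = n
  end
  def save
    @@table[@name] = self
  end
end

def upsert(name, qty)
  part = Part.find_by_name(name) || Part.new(name)
  part.qty = qty          # one SELECT plus one INSERT/UPDATE per row
  part.save
end

upsert("widget", 3)       # inserts
upsert("widget", 5)       # updates the same row
```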

Kind regards

robert