Forum: Ruby MySql limitations??

Harish TM (Guest)
on 2006-05-30 01:40
(Received via mailing list)
hi...
       I need to store something like a couple of million rows in a MySQL
table. Is that OK, or do I have to split them up? I intend to index each
of the columns that I will need to access, so as to speed up access.
Insertion will be done only when there is very little or no load on the
server, and time for this is not really a factor. I also do not have any
constraints on disk space.

     Please let me know if I can just use MySQL as it is, or if I need to
make some changes.

harish
Harish TM (Guest)
on 2006-05-30 01:40
(Received via mailing list)
Just in case you are wondering why this is here: it's because I need to
access all this data through Ruby modules. So maybe the question should
have been:

       Is the existing Ruby module for this fast enough and reliable
enough, or are there some additional modules that I can use?

harish
Matthew S. (Guest)
on 2006-05-30 03:05
(Received via mailing list)
On May 29, 2006, at 22:37, Harish TM wrote:

> Just in case you are wondering why this is here: it's because I need
> to access all this data through Ruby modules. So maybe the question
> should have been:
>
>       Is the existing Ruby module for this fast enough and reliable
> enough, or are there some additional modules that I can use?
>
> harish

Well, the details might depend on the answers to the following questions:

1. Which existing Ruby module do you mean?
2. How fast do you need it to be?

At a guess, though, I'd say that ActiveRecord could likely deal with
what you need to do, at least well enough to start with and then find
out for real via testing/benchmarking.

matthew smillie.
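
To make the benchmarking suggestion concrete, here is a minimal sketch
of such a test, assuming a hypothetical `records` table with an indexed
`name` column; the connection settings and model name are placeholders,
and the old-style find API matches the ActiveRecord of this era:

  require 'rubygems'
  require 'active_record'
  require 'benchmark'

  # Placeholder connection settings; adjust for your server.
  ActiveRecord::Base.establish_connection(
    :adapter  => 'mysql',
    :host     => 'localhost',
    :username => 'user',
    :password => 'secret',
    :database => 'mydb'
  )

  # Assumes a hypothetical `records` table with an indexed `name` column.
  class Record < ActiveRecord::Base
  end

  # Time 1000 single-row lookups and report the average per lookup.
  n = 1000
  elapsed = Benchmark.realtime do
    n.times do |i|
      Record.find(:first, :conditions => ['name = ?', "name-#{i}"])
    end
  end
  puts "average lookup: #{(elapsed / n) * 1000} ms"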
zdennis (Guest)
on 2006-05-30 05:25
(Received via mailing list)
Harish TM wrote:
> hi...
>       I need to store something like a couple of million rows in a MySQL
> table. Is that OK, or do I have to split them up? I intend to index each of
> the columns that I will need to access, so as to speed up access. Insertion
> will be done only when there is very little or no load on the server, and
> time for this is not really a factor. I also do not have any constraints on
> disk space.
>
>     Please let me know if I can just use MySQL as it is, or if I need to
> make some changes.

MySQL will be able to handle millions of rows in a single table. It can
probably handle way more. =)

Zach
Dido S. (Guest)
on 2006-05-30 05:50
(Received via mailing list)
On 5/30/06, Harish TM <removed_email_address@domain.invalid> wrote:
> hi...
>       I need to store something like a couple of million rows in a MySQL
> table. Is that OK, or do I have to split them up? I intend to index each of
> the columns that I will need to access, so as to speed up access. Insertion
> will be done only when there is very little or no load on the server, and
> time for this is not really a factor. I also do not have any constraints on
> disk space.
>
>     Please let me know if I can just use MySQL as it is, or if I need to
> make some changes.

MySQL should hold up just fine. I've got a Ruby app backed by a MySQL
database containing a table now with close to 2 million rows, and
constantly growing. The performance of the application right now seems
to be bound more by the fact that I'm running on a dinky machine with
slow disk drives and not a lot of memory; as of now, Ruby-DBI and
ActiveRecord seem to have reasonably acceptable performance. Profiling
the database access code shows that the application's slowdown is not
in Ruby but in MySQL, and MySQL itself appears to be limited by the
hardware we're running it on. As long as you've got a reasonable
machine, you should be fine.
Harish TM (Guest)
on 2006-05-30 11:58
(Received via mailing list)
Hey, thanks a lot. Guess MySQL is fine, then.

About the Ruby part: I am using mysql-ruby 1.4.4a. Is that OK, or are
there better ways of doing this?

I need an access time of about 0.05 sec (record retrieval time).

System config:
Processor: dual Intel Pentium 4, 2.53 GHz
RAM: 4 GB
Hard drive capacity: 250 GB
Operating system: SUSE Linux Enterprise Server 9



harish
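
As a rough check against that 0.05 sec budget, a single indexed lookup
through mysql-ruby can be timed directly. A minimal sketch, with
placeholder credentials and the same hypothetical `records` table:

  require 'mysql'
  require 'benchmark'

  # Placeholder credentials; adjust for your server.
  my = Mysql.real_connect('localhost', 'user', 'secret', 'mydb')

  # Time one lookup against an indexed column, consuming the result set.
  elapsed = Benchmark.realtime do
    res = my.query("SELECT * FROM records WHERE name = 'name-42'")
    res.each_hash { |row| }  # iterate so the fetch is included in the timing
  end
  puts "retrieval took #{elapsed} s (budget: 0.05 s)"
  my.close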
Logan C. (Guest)
on 2006-05-31 09:03
(Received via mailing list)
On May 30, 2006, at 3:55 AM, Harish TM wrote:

> I need an access time of about 0.05 sec (record retrieval time)

If the requirements are that hard and fast, it may be worth the effort
to try out a few different databases and see which gives you performance
closest to your needs.
Nicolai R. (Guest)
on 2006-05-31 10:16
(Received via mailing list)
Remove the indexes, insert your data, and re-apply the indexes.
This should speed up the bulk load considerably; see the sketch below.
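
In code, that advice looks roughly like the following. A sketch against
the mysql binding, where the table, columns, and index name are all
hypothetical:

  require 'mysql'
  require 'enumerator'  # for each_slice on older Rubies

  my = Mysql.real_connect('localhost', 'user', 'secret', 'mydb')

  # Drop the secondary index so each INSERT skips index maintenance.
  my.query('ALTER TABLE records DROP INDEX idx_records_name')

  # Bulk-load with multi-row INSERTs, which beats row-at-a-time inserts.
  (1..1_000_000).each_slice(10_000) do |batch|
    values = batch.map { |i| "('name-#{i}', #{i})" }.join(',')
    my.query("INSERT INTO records (name, value) VALUES #{values}")
  end

  # Rebuild the index once, after all the data is in.
  my.query('ALTER TABLE records ADD INDEX idx_records_name (name)')
  my.close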
Srinivas J. (Guest)
on 2006-05-31 14:42
(Received via mailing list)
Logan C. wrote:
> On May 30, 2006, at 3:55 AM, Harish TM wrote:
>
>> I need an access time of about 0.05 sec (record retrieval time)
>
> If the requirements are that hard and fast, it may be worth the effort
> to try out a few different databases and see which gives you
> performance closest to your needs.

I missed the earlier mails, so I could be repeating something here --
please pardon me!

1. Are your queries very dynamic? If not, you could try Berkeley DB.
It provides very high throughput.

2. Is the target of multiple queries mostly the same data? If yes, I
suggest that you take a look at memcached. It helps you save on
database round-trips; see the sketch after this post.

Best regards,

JS
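
A minimal read-through cache along the lines of suggestion 2, assuming
the memcache-client gem, a memcached daemon on localhost, and the same
hypothetical `records` table; the key scheme and expiry are arbitrary:

  require 'rubygems'
  require 'memcache'
  require 'mysql'

  CACHE = MemCache.new('localhost:11211')
  DB    = Mysql.real_connect('localhost', 'user', 'secret', 'mydb')

  # Look in memcached first; on a miss, hit MySQL and cache the row.
  # NOTE: `name` is assumed pre-sanitized; real code must escape it.
  def fetch_record(name)
    key = "record:#{name}"
    row = CACHE.get(key)
    return row if row
    res = DB.query("SELECT * FROM records WHERE name = '#{name}' LIMIT 1")
    row = res.fetch_hash
    CACHE.set(key, row, 300) if row  # cache for five minutes
    row
  end

  p fetch_record('name-42')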
Robert K. (Guest)
on 2006-05-31 19:28
(Received via mailing list)
2006/5/30, Harish TM <removed_email_address@domain.invalid>:
> Hey, thanks a lot. Guess MySQL is fine, then.
>
> About the Ruby part: I am using mysql-ruby 1.4.4a. Is that OK, or are
> there better ways of doing this?
>
> I need an access time of about 0.05 sec (record retrieval time)

Usually the DB is the limiting factor, not the app. Whether 0.05 s is
tight or not depends on a number of factors, including but not limited
to the DB vendor, the IO subsystem, the volume of data, indexing, etc.

> System config:
> Processor: dual Intel Pentium 4, 2.53 GHz
> RAM: 4 GB
> Hard drive capacity: 250 GB
> Operating system: SUSE Linux Enterprise Server 9

I suggest you run some performance tests with typical data; a sketch of
one such test follows below.

Kind regards

robert
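
One way to start on that: time a batch of lookups against randomly
chosen keys from already-loaded typical data, and compare the average
and worst case with the 0.05 sec target. A sketch, reusing the
hypothetical `records` table and placeholder credentials from above:

  require 'mysql'
  require 'benchmark'

  my = Mysql.real_connect('localhost', 'user', 'secret', 'mydb')

  # Time 100 lookups against random keys to approximate a real workload.
  times = (1..100).map do
    key = "name-#{rand(1_000_000) + 1}"
    Benchmark.realtime do
      my.query("SELECT * FROM records WHERE name = '#{key}'").each_hash { |r| }
    end
  end

  avg = times.inject(0.0) { |sum, t| sum + t } / times.size
  puts "average: #{avg} s, worst: #{times.max} s (target: 0.05 s)"
  my.close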
Harish TM (Guest)
on 2006-06-01 11:17
(Received via mailing list)
>If the requirements are that hard and fast, it may be worth the effort
>to try out a few different databases and see which gives you performance
>closest to your needs.

>Remove the indexes, insert your data, and re-apply the indexes.
>This should speed up the bulk load considerably.


Guess that's a really good idea... thanks a lot.


>1. Are your queries very dynamic? If not, you could try Berkeley DB.
>It provides very high throughput.

>2. Is the target of multiple queries mostly the same data? If yes, I
>suggest that you take a look at memcached. It helps you save on
>database round-trips.

The queries will be highly dynamic, and the target of multiple queries
is hardly ever the same data.



harish