MySQL limitations?

hi…
I need to store something like a couple of million rows in a MySQL
table. Is that OK, or do I have to split them up? I intend to index each of
the columns that I will need to access, so as to speed up access. Insertion
will be done only when there is very little or no load on the server, and
time for this is not really a factor. I also do not have any constraints on
disk space.

Please let me know if I can just use MySQL as it is, or if I need to
make some changes.

harish

Just in case you are wondering why this is here: it's because I need to
access all this data through Ruby modules. So maybe the question should
have been:

  Is the existing Ruby module for this fast enough and reliable enough,
  or are there additional modules that I can use?

harish

Harish TM wrote:

hi…
I need to store something like a couple of million rows in a MySQL
table. Is that OK, or do I have to split them up? I intend to index each of
the columns that I will need to access, so as to speed up access. Insertion
will be done only when there is very little or no load on the server, and
time for this is not really a factor. I also do not have any constraints on
disk space.

Please let me know if I can just use MySQL as it is, or if I need to
make some changes.

MySQL will be able to handle millions of rows in a single table; it can
probably handle way more. =)

Zach

On May 29, 2006, at 22:37, Harish TM wrote:

Just in case you are wondering why this is here: it's because I need to
access all this data through Ruby modules. So maybe the question should
have been:

  Is the existing Ruby module for this fast enough and reliable enough,
  or are there additional modules that I can use?

harish

Well, the details might depend on the answer to the following questions:

  1. Which existing Ruby module do you mean?
  2. How fast do you need it to be?

At a guess, though, I'd say that ActiveRecord could likely deal with
what you need to do. At least well enough to start with, and to find
out for real via testing/benchmarking.
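
For what it's worth, a minimal sketch of the ActiveRecord route might
look like this, assuming a recent ActiveRecord with the mysql2 adapter;
the connection settings and the 'records' table with its 'name' column
are hypothetical placeholders, not anything from this thread:

  require "active_record"

  # Hypothetical connection settings -- substitute your own host,
  # database, and credentials.
  ActiveRecord::Base.establish_connection(
    adapter:  "mysql2",
    host:     "localhost",
    database: "mydb",
    username: "user",
    password: "secret"
  )

  # Maps by convention to a 'records' table (name assumed here).
  class Record < ActiveRecord::Base
  end

  # A single lookup on an indexed column -- the kind of query worth
  # benchmarking against a hard retrieval-time target.
  row = Record.find_by(name: "foo")
  puts row.inspect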

matthew smillie.

hey, thanks a lot.
Guess MySQL is fine.

About the Ruby part: I am using mysql-ruby 1.4.4a. Is that OK, or are
there better ways of doing this?

I need an access time of about 0.05 sec. (record retrieval time)

System config:
Processor: dual Intel Pentium 4, 2.53 GHz
RAM: 4 GB
Hard drive capacity: 250 GB
Operating system: SUSE Linux Enterprise Server 9

harish

On May 30, 2006, at 3:55 AM, Harish TM wrote:

I need an access time of about 0.05 sec. (record retrieval time)

If the requirements are that hard and fast, it may be worth the effort
to try out a few different DBs and see which gives you performance
closest to your needs.

On 5/30/06, Harish TM [email protected] wrote:

hi…
I need to store something like a couple of million rows in a MySQL
table. Is that OK, or do I have to split them up? I intend to index each of
the columns that I will need to access, so as to speed up access. Insertion
will be done only when there is very little or no load on the server, and
time for this is not really a factor. I also do not have any constraints on
disk space.

Please let me know if I can just use MySQL as it is, or if I need to
make some changes.

MySQL should hold up just fine. I've got a Ruby app backed by a MySQL
database with a table that is now close to 2 million rows and constantly
growing. Right now the application's performance seems to be bounded
more by the fact that I'm running on a dinky machine with slow disk
drives and not a lot of memory, but so far Ruby-DBI and ActiveRecord
have had reasonably acceptable performance.
Profiling the database access code shows that the application’s
slowdown is not in Ruby but in MySQL, and MySQL itself appears to be
limited by the hardware we’re running it on. As long as you’ve got a
reasonable machine, you should be fine.
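
As a point of reference, the Ruby-DBI access path described above looks
roughly like this; the DSN, credentials, and 'records' table here are
hypothetical:

  require "dbi"

  # Hypothetical DSN and credentials: "DBI:Mysql:<database>:<host>".
  dbh = DBI.connect("DBI:Mysql:mydb:localhost", "user", "secret")

  # A single indexed lookup, the kind of hot path profiled above;
  # the ? placeholder lets DBI quote the bind value safely.
  row = dbh.select_one("SELECT * FROM records WHERE id = ?", 42)
  p row

  dbh.disconnect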

Logan C. wrote:

On May 30, 2006, at 3:55 AM, Harish TM wrote:

I need an access time of about 0.05 sec. (record retrieval time)

If the requirements are that hard and fast, it may be worth the effort to
try out a few different DBs and see which gives you performance closest
to your needs.

I missed the earlier mails, so I could be repeating something here –
please pardon me!

  1. Are your queries very dynamic? If not, you could try Berkeley DB.
    It provides very high throughput.

  2. Is the target of multiple queries mostly the same data? If yes, I
    suggest that you take a look at ‘memcached’. It helps you save on
    database round-trips; a sketch of the idea follows this list.
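
A minimal sketch of that cache-aside idea, assuming the Dalli memcached
client gem; 'fetch_from_db' is a hypothetical stand-in for whatever
MySQL query the app already runs:

  require "dalli"

  # Hypothetical memcached server address.
  CACHE = Dalli::Client.new("localhost:11211")

  # Check memcached first and fall back to MySQL only on a miss.
  def lookup(key)
    row = CACHE.get(key)
    return row if row
    row = fetch_from_db(key)   # your existing MySQL query goes here
    CACHE.set(key, row, 300)   # keep it cached for five minutes
    row
  end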

Best regards,

JS

2006/5/30, Harish TM [email protected]:

hey, thanks a lot.
Guess MySQL is fine.

About the Ruby part: I am using mysql-ruby 1.4.4a. Is that OK, or are
there better ways of doing this?

I need an access time of about 0.05 sec. (record retrieval time)

Usually the DB is the limiting factor, not the app. Whether 0.05 s is
tight or not depends on a number of factors, including but not limited
to the DB vendor, the IO subsystem, the volume of data, indexing, etc.

System config:
Processor: dual Intel Pentium 4, 2.53 GHz
RAM: 4 GB
Hard drive capacity: 250 GB
Operating system: SUSE Linux Enterprise Server 9

I suggest you make some performance tests with typical data.
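
One minimal way to run such a test, using the mysql-ruby binding
mentioned earlier and Ruby's Benchmark module; the connection details
and 'records' table are made up for illustration:

  require "mysql"        # mysql-ruby, the binding mentioned earlier
  require "benchmark"

  # Hypothetical connection details.
  con = Mysql.real_connect("localhost", "user", "secret", "mydb")

  # Time a representative indexed lookup against the 0.05 s target.
  elapsed = Benchmark.realtime do
    con.query("SELECT * FROM records WHERE name = 'foo' LIMIT 1")
  end
  puts format("retrieval took %.4f s", elapsed)

  con.close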

Kind regards

robert

Remove the index, insert your data, and re-apply the index.
This should speed things up.
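
In mysql-ruby terms that could look something like the following; the
table and index names are made up for illustration:

  require "mysql"

  # Hypothetical connection details, table, and index names.
  con = Mysql.real_connect("localhost", "user", "secret", "mydb")

  # Drop the index, bulk-load, then rebuild the index once at the end.
  con.query("ALTER TABLE records DROP INDEX idx_name")
  # ... run the bulk INSERTs here, against the unindexed table ...
  con.query("ALTER TABLE records ADD INDEX idx_name (name)")

  con.close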

If the requirements are that hard and fast, it may be worth the effort
to try out a few different DBs and see which gives you performance
closest to your needs.

Remove the index, insert your data, and re-apply the index.
This should speed things up.

Guess that's a really good idea… thanks a lot.

  1. Are your queries very dynamic? If not, you could try Berkeley DB.
    It provides very high throughput.
  2. Is the target of multiple queries mostly the same data? If yes, I
    suggest that you take a look at ‘memcached’. It helps you save on
    database round-trips.

The queries will be highly dynamic, and the target of multiple queries
is rarely the same data.

harish