I have a requirement for a system that does 10M+ reads/writes a day. I
haven't done work with this volume before. It translates to 100+ reads/
writes per second. Can anyone recommend a Rails DB back end for a
system of this volume? Can I get performance like this with MySQL
clusters? What else should I be looking at?
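For reference, the arithmetic behind that "100+ per second" figure works out like this (a quick sketch in Ruby; note it is a daily average, so real peaks will be higher — the 5x peak factor below is just an assumption, not something from your numbers):

```ruby
# Average operations per second implied by 10M ops/day.
ops_per_day     = 10_000_000
seconds_per_day = 24 * 60 * 60          # 86,400

average_ops_per_sec = ops_per_day / seconds_per_day.to_f
puts average_ops_per_sec.round(1)       # 115.7 ops/sec on average

# Traffic is rarely uniform across the day; sizing for a multiple of
# the average is common. The 5x here is an illustrative assumption.
peak_estimate = average_ops_per_sec * 5
puts peak_estimate.round                # 579 ops/sec at an assumed peak
```

So the number you actually need to provision for depends heavily on how bursty the traffic is, not just the daily total.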
Before you go any further: is that 99 reads and 1 write per second, or
1 read and 99 writes? Writes are expensive, while reads can be cached.
Some idea of the read-to-write ratio would be useful if anyone is
going to give you any advice.
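To make the caching point concrete: in a read-heavy workload, most reads never need to hit the database at all. A minimal read-through cache sketch in Ruby (the Hash stands in for something like memcached or Rails.cache, and `slow_db_lookup` is a hypothetical expensive query, not anything from the original post):

```ruby
# Read-through cache: only cache misses reach the database.
class CachedStore
  attr_reader :db_reads

  def initialize
    @cache    = {}   # stand-in for memcached/Redis in a real deployment
    @db_reads = 0    # counts how many reads actually hit the "database"
  end

  # Hypothetical expensive query against the backing database.
  def slow_db_lookup(key)
    @db_reads += 1
    "value-for-#{key}"
  end

  # On a miss, fetch from the database and remember the result.
  def read(key)
    @cache[key] ||= slow_db_lookup(key)
  end

  # Writes must update (or invalidate) the cached copy to stay consistent.
  def write(key, value)
    @cache[key] = value
  end
end

store = CachedStore.new
100.times { store.read(:home_page) }
puts store.db_reads   # 1 -- the other 99 reads were served from cache
```

Which is exactly why the ratio matters: 99 reads + 1 write per second with a cache in front is a very different problem from 99 writes per second, which the cache does nothing to help.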
I agree with Peter's suggestion. I would also suggest googling "how
well does [databasename] scale?" So, you could check on MySQL,
PostgreSQL (my personal fave), Sybase, Oracle, object DBs like
CouchDB, MongoDB, InterSystems (Caché), etc. You might also want
to go really crazy and look at multivalue databases like
UniVerse/UniData, and objecache, etc.
Your requirement is a bit fuzzy. After all, even SQLite could handle
this easily, depending on the specific scenario at hand.