Avoiding duplicate inserts from multiple Ruby processes

How do I use ActiveRecord to safely add-or-update the same unique row
in a table from multiple Ruby processes?

Suppose I have a table called People with a column called ‘name’. In
the DB (which is MySQL), the name column has a unique index on it.

Now, two Ruby processes are chugging along adding rows to the People
table. These processes may (will) come across both new and existing
rows in the People table. I need to make sure that I don’t trigger a
“Mysql::Error: #23000 Duplicate entry” error on the name column.

bataras wrote:

> Suppose I have a table called People with a column called ‘name’. In
> the DB (which is MySQL), the name column has a unique index on it.
>
> Now, two Ruby processes are chugging along adding rows to the People
> table. These processes may (will) come across both new and existing
> rows in the People table. I need to make sure that I don’t trigger a
> “Mysql::Error: #23000 Duplicate entry” error on the name column.

If the updates do not depend on the current contents of the rows,
use REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html .
Otherwise use INSERT … ON DUPLICATE KEY UPDATE:
http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html .
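
From ActiveRecord you can send either statement straight through the
connection adapter. This is only a rough sketch, assuming a people
table whose name column carries the unique index (the table and column
names are just placeholders for your own schema):

  name = 'Alice'
  conn = ActiveRecord::Base.connection

  # REPLACE: deletes any existing row with this name, then inserts a
  # fresh one. Only suitable if you don't care about the old contents.
  conn.execute("REPLACE INTO people (name) VALUES (#{conn.quote(name)})")

  # INSERT ... ON DUPLICATE KEY UPDATE: inserts if the name is new,
  # otherwise runs the UPDATE clause. Here the update is a deliberate
  # no-op, which simply swallows the duplicate-key case.
  conn.execute(
    "INSERT INTO people (name) VALUES (#{conn.quote(name)}) " \
    "ON DUPLICATE KEY UPDATE name = name"
  )

Either way MySQL resolves the race on its side, so neither process
ever sees the #23000 duplicate-entry error.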


We develop, watch us RoR, in numbers too big to ignore.

Thanks, and thanks for the links too. What I actually want to do from
each process is insert the row if it doesn’t exist; if it does exist,
don’t change it, but select it (or otherwise keep it locked in the
current transaction) while I do some other processing related to it.
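
Something like this is what I’m picturing. It’s only a rough sketch,
assuming a Person model and the :lock => true finder option (which
makes ActiveRecord issue SELECT ... FOR UPDATE), so it still needs to
be checked against a real race:

  Person.transaction do
    # Try to lock an existing row first; nil means nobody has inserted
    # this name yet, as far as we can see.
    person = Person.find(:first, :conditions => { :name => name },
                         :lock => true)

    if person.nil?
      begin
        person = Person.create!(:name => name)
      rescue ActiveRecord::StatementInvalid
        # Another process inserted the same name between our SELECT and
        # INSERT; lock the row it created instead of failing.
        person = Person.find(:first, :conditions => { :name => name },
                             :lock => true)
      end
    end

    # ... other processing related to the row goes here; the lock is
    # held until the transaction commits ...
  end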