Xavier N. wrote:
Problem is the validation is done outside the database, I don’t think
it is “fixable”. Also in the database you can’t express :scope for
example… I believe there’s a trade-off on the programmer’s side
here. The docs explain the race condition, and the validation may still
have some valid use cases that accept the risk.
The problem is that on the altar of “agnosticism” the ability to trap
and report DBMS errors has been “sacrificed”. Rails is a general-purpose
web application development tool. To be as attractive as possible to the
widest audience, its authors have elected, quite sensibly in my opinion,
to provide 80% of the value for 20% of the effort. So SQL support is
reduced to the very lowest common denominator (SQLite).
What it comes down to is that SQLSTATE, the SQL-92/99/03 standard for
what a DBMS must report for certain error conditions, is not uniformly
implemented. In SQLite3, for instance, a duplicate index value produces
a nondescript generic error message similar to the following:
“SQL logic error or missing database”
This is really not much help outside the context of the INSERT call, so
determining the exact nature of the error is perforce left to the
application; in other words: “validates_uniqueness_of”.
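For anyone following along, the validation in question looks like this
in a model; the class and column names here are purely illustrative:

    class User < ActiveRecord::Base
      # Issues a SELECT for an existing row with the same login before
      # every save; that check-then-insert gap is where the race lives.
      validates_uniqueness_of :login
    end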
I digress for a moment, but it seems to me that the proper way to handle
this in Rails would have been to INSERT first, trap any error, and THEN
do a find on the index to see if the error was caused by a duplicate.
If a duplicate is found and it differs from the row just inserted (based
on the hash contents of the inserted row less the id attribute versus
the hash of the returned row less the id attribute?), then report a
validation error; if not, then raise the original DBMS error. That way
the race condition would never occur and there would, on average, be
((1 / p) - 1) fewer SQL reads for each duplicate actually encountered,
where p is the proportion of inserts that hit a duplicate.
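To make the digression concrete, here is roughly what I have in mind.
This is a sketch only, not a patch: the User model, the :login column
and finder, and rescuing ActiveRecord::StatementInvalid (the generic
“statement failed” error, since the precise exception depends on the
adapter) are all assumptions for illustration.

    def create_with_post_insert_check(attrs)
      user = User.new(attrs)
      user.save!                       # INSERT first, with no prior SELECT
      user
    rescue ActiveRecord::StatementInvalid => original_error
      # Only after the INSERT fails do we go looking for a duplicate.
      existing = User.find_by_login(user.login)
      if existing &&
         existing.attributes.except('id') != user.attributes.except('id')
        user.errors.add(:login, 'has already been taken')
        user                           # report it as a validation error
      else
        raise original_error           # not a duplicate: surface the DBMS error
      end
    end

The hash comparison mirrors the “row less id attribute” test above; in
practice timestamp columns would need excluding as well.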
Returning to the main stream of thought, PostgreSQL 8 returns this:
“ERROR: Cannot insert a duplicate key into unique index”.
While MySQL 5 can return two different messages, depending on whether
one is using NDB or not:
NDB:     ERROR 1169 (23000): Can’t write, because of unique constraint, to table
not NDB: ERROR 1062 (23000): Duplicate entry for key
Note: The common element in these latter two messages, (23000), simply
tells us that the SQLSTATE falls under “Integrity constraint violation”.
The exact error code that should be returned in this case is (23505),
which is exactly what PostgreSQL [sqlca.sqlstate], DB2 [sqlca.sqlstate],
and MySQL [mysql_sqlstate()] do return; they just choose not to display
it in the human-readable message. This sort of error trapping really
should live in the DB adapters, but that poses serious coding problems
for Rails when accessing a DBMS that does not support the full SQLSTATE
implementation; SQLite3, for instance, seemingly returns SQLSTATE=HY000
for everything that goes wrong.
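To show the shape of the adapter-level trapping I mean, here is a rough
sketch written against the pg driver; the module and method names are
mine, not anything that exists in Rails, and every other adapter would
need its own way of digging out the SQLSTATE (which is precisely where
SQLite3’s blanket HY000 lets us down).

    require 'pg'

    module DuplicateKeyDetection
      UNIQUE_VIOLATION = '23505'  # SQLSTATE class 23: unique constraint violated

      # True when the underlying driver reports a unique-key violation.
      # Only the PostgreSQL branch is shown; SQLite3 offers nothing better
      # than HY000, so its adapter would be reduced to matching message text.
      def duplicate_key_error?(error)
        result = error.respond_to?(:result) ? error.result : nil
        return false unless result
        result.error_field(PG::PG_DIAG_SQLSTATE) == UNIQUE_VIOLATION
      end
    end

With something like that in each adapter, the framework could fall back
to the post-INSERT check sketched above instead of the pre-emptive
SELECT, at least on databases that report SQLSTATE properly.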
My suggestion is that you seriously consider which DBMS best supports the
production requirements of your application and use it for development
and testing. Your design is in any case going to reflect the specific
implementation details of the DBMS you select, so pick the DBMS first
and, once you have decided, do not worry overmuch about the other
possibilities.
When convenient, use SQLite3 to sketch out design ideas and to code
proofs of concept, but do not break your heart trying to get a
single-user DBMS to act like industrial-strength software. Similarly,
while MySQL has many adherents who advocate its suitability for
production use, I am not one of them.
Remember that in the vast majority of business applications, it is the
data that is valuable, not the current method of presenting it. The
choice of where the data goes and how it is handled, verified, and
secured against corruption, tampering, and loss trumps the latest
whiz-bang programming paradigm every day of the week. Remember too that
any significant persistent data store is eventually going to be accessed
by more than one application, and in more than one programming language,
over the course of its existence.