That’s right, the data size is small (200,000 records). But another
application is running concurrently (UPDATEing records), so I
have many transactions (about 10,000 per day). Do you think
PostgreSQL is the better solution for me with these transactions? Can I
automate the VACUUM, because I don’t have time to start it manually
every day?
Yeah, later versions of Postgres ship with autovacuum built in and
enabled by default, which works well for most applications. Postgres uses an
MVCC architecture, which basically means that an UPDATE is actually an
insert plus a mark-as-deleted, and a vacuum is needed periodically to clean
out all the “dead” rows. You will want to tune the autovacuum
settings to make sure that you are cleaning up often enough and that
the number of “dead” rows doesn’t get too large.
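As a rough sketch, the main knobs live in postgresql.conf (exact defaults vary by version, and the values below are illustrative, not a recommendation for your workload):

```ini
# Turn the autovacuum daemon on (the default in recent versions)
autovacuum = on

# How often the autovacuum launcher wakes up to check tables
autovacuum_naptime = 1min

# A table is vacuumed once its dead rows exceed:
#   autovacuum_vacuum_threshold
#     + autovacuum_vacuum_scale_factor * (number of rows in the table)
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.1
```

With roughly 10,000 updates a day on a 200,000-row table, the default scale factor of 0.1 would let about 20,000 dead rows pile up before a vacuum kicks in, so lowering it for that table may make sense. You can watch the dead-row count in `pg_stat_user_tables` (the `n_dead_tup` column) to see whether the settings are keeping up.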
In general (not wanting to start a flame war), MySQL tends to be
faster for simple queries with few connections, while Postgres
tends to do better with complicated joins and many connections.
FWIW, I love Postgres and have had nothing but ease building apps on top
of it.