One million records really isn’t that big. Assuming 1 KB/record, that’s only 1 GB. Heck, that might even fit completely in RAM. That said, here are some things to consider:
Some older OSs have file system limits (e.g., a 4 GB file-size cap on 32-bit file systems) that preclude super-large tables, but that is largely a thing of the past unless you are working on a legacy system.
You have to use indexes on large tables, but not too many of them, because every extra index slows down inserts.
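For example, a couple of targeted indexes might look like this (the table and column names are hypothetical):

```sql
-- Index only the columns your WHERE clauses actually filter on.
CREATE INDEX idx_bookmarks_user ON bookmarks (user_id);
CREATE INDEX idx_bookmarks_url  ON bookmarks (url);

-- Every additional index must be updated on each INSERT, so resist
-- the urge to index everything "just in case".
```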
You might consider moving seldom-used fields into a separate table, if those fields are allowed to be NULL, to keep the main bookmarks table as ‘slim’ as possible.
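A sketch of that kind of vertical split, assuming a hypothetical bookmark_id key and some rarely-read NULLable columns:

```sql
-- Hot columns stay in the main bookmarks table; cold, NULLable
-- columns move to a 1:1 side table joined on the same key.
CREATE TABLE bookmark_details (
    bookmark_id INT PRIMARY KEY,
    description TEXT NULL,   -- seldom displayed, often NULL
    notes       TEXT NULL,
    FOREIGN KEY (bookmark_id) REFERENCES bookmarks (bookmark_id)
);
```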
You might consider changing all VARCHAR fields to CHAR fields, since searches tend to go faster when the record size is fixed, at the cost of some wasted space in each row.
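In MySQL-style syntax that change would look something like this (the column name and width are assumptions):

```sql
-- Fixed-width rows let the engine compute record offsets directly.
ALTER TABLE bookmarks MODIFY title CHAR(100) NOT NULL;
-- Trade-off: CHAR(100) always occupies the full width, so short
-- titles waste space that VARCHAR would have saved.
```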
You might consider moving all records that haven’t been accessed in X months to an ‘archive’ table that is only SELECTed when a lookup misses the main bookmarks table. You could also add an ‘inactive’ flag to each record and set it once a record has been idle for Y months, so queries can skip those records easily.
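One way to sketch both ideas, assuming MySQL-style date functions and a hypothetical last_accessed column (with X = 12 and Y = 6):

```sql
-- Move rows untouched for 12 months into the archive table...
INSERT INTO bookmarks_archive
    SELECT * FROM bookmarks
    WHERE last_accessed < DATE_SUB(NOW(), INTERVAL 12 MONTH);
DELETE FROM bookmarks
    WHERE last_accessed < DATE_SUB(NOW(), INTERVAL 12 MONTH);

-- ...and flag rows idle for 6 months so queries can skip them.
UPDATE bookmarks SET inactive = 1
    WHERE last_accessed < DATE_SUB(NOW(), INTERVAL 6 MONTH);
```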
Some databases (notably Oracle) will let you spread a single table over several spindles, which speeds things up a lot.
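In Oracle that is typically done with partitioning, roughly like this (the tablespace names are made up, one per spindle):

```sql
-- Hash-partition the table across four tablespaces so I/O is
-- spread over four disks.
CREATE TABLE bookmarks (
    bookmark_id NUMBER PRIMARY KEY,
    url         VARCHAR2(255),
    title       VARCHAR2(100)
)
PARTITION BY HASH (bookmark_id)
PARTITIONS 4
STORE IN (bm_ts1, bm_ts2, bm_ts3, bm_ts4);
```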
If you do a lot of sorts on the records, it will probably be easier on the database to pull the records off unsorted and do the sorting on a client machine.
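In other words, skip the ORDER BY and let the client do that work (the column names and filter here are illustrative):

```sql
-- Fetch the rows unsorted; the database does a simple scan...
SELECT bookmark_id, url, title FROM bookmarks WHERE user_id = 42;
-- ...and the application sorts the result set in memory, rather than
-- adding ORDER BY title and pushing the sort onto the server.
```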
Regardless, I wouldn’t make many of these efficiency decisions right now; wait until it becomes obvious that a single table isn’t going to give you response times as fast as you require.