Will memcache bring any benefits with a small DB set?

Hi,

I have a DB that stores strings of XML. There could be hundreds of
those, thousands at most.

Based on client requests, a binary file is generated; most of the
content in the binary file is XML taken from the DB. So there is plenty
of parsing to do and several searches to collect the right pieces of XML
that I need.

I need persistence, so I still need the DB and ActiveRecord. But I
thought that if I could copy the DB into memory, it would speed things
up.

This sounds like I could start playing with memcache.
But somewhere I read that memcache will only be faster when dealing with
millions of records. I won’t have millions of records, so will this
really bring me any benefits? The load from clients is expected to be
heavy though.
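
For reference, the kind of lookup I have in mind is a simple
read-through cache, roughly like this sketch (assuming a hypothetical
Record model with an xml column, and Rails configured with a memcache
store):

  # Rough sketch: hit the DB through ActiveRecord only on a cache miss.
  def xml_for(record_id)
    Rails.cache.fetch("record-xml-#{record_id}") do
      Record.find(record_id).xml
    end
  end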

I’d appreciate your opinions.
I will be playing with memcache in the meanwhile as well.

Cheers.


It’s hard to talk in generalities about this sort of stuff as there
are so many factors that come into play. Your best bet is to benchmark
some stuff, also taking into account the pattern of traffic you expect
(i.e. what proportion of cache hits do you expect?). Even in the case
where memcache doesn’t make things faster (or even makes things
slower), it may still help at high volume if it takes some load off
your database.
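
Something like the following would give you a first rough comparison (a
sketch only; Record, the id, and the cache key are placeholders, and it
assumes the key has been written beforehand):

  require 'benchmark'

  some_id = 1  # placeholder id
  Benchmark.bm(7) do |x|
    # straight from the database via ActiveRecord
    x.report("db:")    { 1_000.times { Record.find(some_id).xml } }
    # back out of memcache via the Rails cache store
    x.report("cache:") { 1_000.times { Rails.cache.read("record-xml-#{some_id}") } }
  end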

Fred

Have you had a look at your logs to see how often a particular page gets
called? This will give you an idea of how many entries you might end up
with in the cache; if they are not being called repeatedly, then the
improvement will be slight. And if the data changes often, the cache
will become out of date too often.

Also, how big are these chunks of data? Again, this is the sort of thing
you will need to know when sizing the cache. If the items are really
large, then the cache may evict them before you get to reuse them.
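
If staleness is the worry, you can at least bound it by writing entries
with an explicit expiry (a sketch with placeholder names):

  # Write with an expiry so stale entries age out instead of lingering
  # until memcached's LRU eviction gets to them.
  Rails.cache.write("record-xml-#{record.id}", record.xml,
                    :expires_in => 10.minutes)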

That aside, parsing and combining XML documents is not the fastest thing
on earth, so if you can cache the final format of the data and reuse it,
you should get a good speedup.
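
As a rough illustration of that idea (a sketch only; generate_binary
and the key scheme are hypothetical placeholders):

  # Cache the finished binary, keyed by the request parameters, so a
  # repeat request skips the XML parsing and assembly entirely.
  # (Real code would sanitize the key: memcached keys can't contain
  # spaces and are limited to 250 characters.)
  def binary_for(params)
    key = "binary-" + params.sort.map { |k, v| "#{k}=#{v}" }.join("&")
    Rails.cache.fetch(key) do
      generate_binary(params)  # the expensive parse-and-assemble path
    end
  end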

You need to measure things.

Hi,

Yeah, I’m on my way to start playing with memcache after checking some
sites. Then I’ll run some tests and see how it affects performance.

This app won’t be serving pages but providing binary data based on the
stored records; no HTML, JS, or any of the typical web app stuff. I have
no data on the terminals’ request patterns yet, so I can only guess at
this point (of course I know the types of requests that clients will
make). On the other hand, there are other areas where I need to improve
performance; for example, I’m thinking of writing some real C code for
the binary file generation, which I currently do with the bindata gem.
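
To give an idea, the generation code is roughly this shape with bindata
(a simplified sketch; the real record layout is more involved):

  require 'bindata'

  # Simplified example: a length-prefixed XML payload.
  class XmlChunk < BinData::Record
    endian :big
    uint32 :xml_length, :value => lambda { xml.length }
    string :xml,        :read_length => :xml_length
  end

  binary = XmlChunk.new(:xml => '<doc/>').to_binary_s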

The nice thing about caching is that it seems so easy to use; I can add
it wherever it looks beneficial, then revert the changes or move the
feature around to experiment.

I’ve also slowly started playing with load testing using ApacheBench so
I can stress the server a bit.

Thanks for your comments!

Regards.