I ran a quick test and was surprised to find that accessing DRb seems to be much slower than accessing memcached.
If that is true, should we favor using memcached directly over DRb-based tools (like BackgrounDRb) when scaling?
Before the test, I couldn't decide whether to do such things (get/set operations on global data, for example) through DRb/BackgrounDRb or to just access memcached from the action controllers.
I had assumed that doing save (or some read) operations asynchronously through DRb/BackgrounDRb would improve performance significantly.
I'd appreciate your thoughts, thank you.
water
The test:
[summary]
Just loop 100 times calling a simple operation over DRb and over memcached, and see how long each takes.
[background]
ruby 1.8.4 (2006-04-14) [i386-mswin32]
Rails 1.1.6
memcache-client-1.0.3
Distributed Ruby: dRuby version 2.0.4  # from ruby/lib/ruby/1.8/drb/drb.rb
[begin]
In RAILS_ROOT/config/environments/development.rb:

# Create the DRb client object here so it is only created once, at startup.
require 'drb'
DRb.start_service
DRB_OBJ = DRbObject.new(nil, 'druby://localhost:9000')
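For reference, the DRb endpoint on druby://localhost:9000 is not shown above; a minimal sketch of what the test assumes is a stand-alone server exposing the no-op doNothingInSide method (the file and class names here are hypothetical, only the port and method name come from the test):

# drb_server.rb (hypothetical) - run separately before hitting the controller
require 'drb'

class TestService
  # No-op method, so the timing measures only the DRb round trip.
  def doNothingInSide
    nil
  end
end

DRb.start_service('druby://localhost:9000', TestService.new)
DRb.thread.join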
In a simple action controller:

class TestController < ApplicationController
  def test
    start_time = Time.now.to_f
    100.times do
      DRB_OBJ.doNothingInSide  # just invoke a simple DRb method that does nothing inside
    end
    end_time = Time.now.to_f
    # log the duration to show in the view
    flash['tm2'] = sprintf("DRB access take (%0.9f)", end_time - start_time)

    start_time = Time.now.to_f
    100.times do
      # just set a key in memcached through memcache-client's put method
      Cache.put('test', {'test' => 'just test'})
    end
    end_time = Time.now.to_f
    # log the duration to show in the view
    flash['tm1'] = sprintf("Memcache access take (%0.9f)", end_time - start_time)
  end
end
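Cache.put here is memcache-client's memcache_util wrapper, which (as far as I understand it) delegates to a MemCache instance stored in a CACHE constant. A rough sketch of the setup the controller code assumes, with the memcached address being my assumption:

# config/environment.rb (sketch) - wire up memcache-client's Cache wrapper
require 'memcache'
require 'memcache_util'

# The memcached host/port here is an assumption; adjust to your setup.
CACHE = MemCache.new('localhost:11211')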
In the simple view:
tm1=#<%=flash['tm1']%>#
tm2=#<%=flash['tm2']%>#
Run it and see the results:
tm1=#Memcache access take (0.546000004)
tm2=#DRB access take (3.141000032)
The numbers differ a bit on every run, but they consistently show memcached access being much faster than DRb access.
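For what it's worth, the same comparison could also be timed with Ruby's standard Benchmark library instead of hand-rolled Time.now arithmetic; a minimal sketch, reusing the same DRB_OBJ and Cache setup as above:

require 'benchmark'

Benchmark.bm(12) do |b|
  b.report('drb:')       { 100.times { DRB_OBJ.doNothingInSide } }
  b.report('memcached:') { 100.times { Cache.put('test', {'test' => 'just test'}) } }
end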