# Forum: Ruby Algorithmic complexity

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately no longer have the time to support and maintain the forum. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
on 2013-03-19 10:39
```Hi,
Can anyone tell me the algorithmic complexity of this operation:

v.attributes.map { |k, val| { k => info.instance_values[k] } }
  .reduce(:merge)
  .each_pair { |key, value| v.send((key + '=').to_sym, value) }```
on 2013-03-19 11:18
```Rotar D. wrote in post #1102245:
> Hi,
> Can anyone tell algorithmic complexity for this operation:
>
> v.attributes.map{ |k, val| {k => info.instance_values[k]}
> }.reduce(:merge).each_pair{ |key, value|
> v.send( (key + '=').to_sym, value) }

At a glance, probably O(n)

v.attributes.                                  # maybe O(1), probably O(n)
  map { |k, val| { k => info.instance_values[k] } }.  # ??
  reduce(:merge).                              # O(n)
  each_pair { |key, value| v.send((key + '=').to_sym, value) }  # probably O(n)

Each of #map, #reduce, and #each_pair is a straightforward linear
iteration, so each should run in O(n).

The part I'm least sure about is `info.instance_values`.  I'm having to
look up the Rails docs to see what this is all about, but I assume it's
essentially an #each loop.  Then, assuming the hash lookup is somewhere
between O(1) and O(n), I guess the ?? line runs in roughly
map * (each + hash) = O(n*m), which is just a bigger O(n).

Can anyone do a better analysis?```
on 2013-03-19 11:27
```Matthew Kerwin wrote in post #1102250:
> between O(1) and O(n), I guess the ?? line runs in about map*(each+hash)
> = O(n*m), which is just a bigger O(n).

Er, actually, I think that's O(n²).  Or something.  Argh, undergrad CS
was so long ago!```
on 2013-03-19 14:01
```On Tue, Mar 19, 2013 at 10:39 AM, Rotar D. <lists@ruby-forum.com> wrote:

> Can anyone tell algorithmic complexity for this operation:
>
> v.attributes.map{ |k, val| {k => info.instance_values[k]}
> }.reduce(:merge).each_pair{ |key, value|
> v.send( (key + '=').to_sym, value) }

Can you tell us what these variables reference?
v
info

Btw, algorithmic complexity might not be the critical factor.  For
example: you seem to create a lot of single-entry Hashes and then
merge them.  It would certainly be more efficient to just create a
single Hash and update that.  Or wait, I don't think you need a Hash
at all.  If the types are what I think they are, you can probably get
away with

v.attributes.each do |k,|
  v.send "#{k}=", info.instance_values[k]
end

which is much simpler and should be more efficient, too.

Kind regards

robert```
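[Editor's note: the two approaches from the post above can be sanity-checked with plain-Ruby stand-ins. This is a sketch under assumptions: `Source#instance_values` and `Target#attributes` below are hand-rolled approximations of ActiveSupport's `Object#instance_values` and an ActiveRecord-style attribute Hash, so it runs without Rails.]

```ruby
# Plain-Ruby stand-ins (assumptions) for the Rails methods in the thread.
class Source
  def initialize(name, size)
    @name = name
    @size = size
  end

  # Approximates ActiveSupport's Object#instance_values:
  # a Hash of instance variable names (without "@") to their values.
  def instance_values
    instance_variables.each_with_object({}) do |ivar, h|
      h[ivar.to_s.delete("@")] = instance_variable_get(ivar)
    end
  end
end

class Target
  attr_accessor :name, :size

  # Approximates an ActiveRecord-style attribute Hash.
  def attributes
    { "name" => name, "size" => size }
  end
end

info = Source.new("widget", 42)

# The original map/reduce/merge chain:
v1 = Target.new
v1.attributes.map { |k, _| { k => info.instance_values[k] } }
  .reduce(:merge)
  .each_pair { |key, value| v1.send((key + '=').to_sym, value) }

# The single-pass version, with no intermediate Hashes:
v2 = Target.new
v2.attributes.each { |k,| v2.send("#{k}=", info.instance_values[k]) }

p v1.attributes  # both print the same attribute Hash
p v2.attributes
```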
on 2013-03-19 20:54
```I do not know what v is ...
I do not know Ruby (my languages are Haskell, C++ and Java), but a
colleague showed me this line of code to demonstrate "how to change the
attributes of one object, based on the attributes of another object, in
Ruby, in one line".
I asked him about its complexity, and he said O(n^3)... I found that
questionable.```
on 2013-03-20 09:02
```On Tue, Mar 19, 2013 at 8:54 PM, Rotar D. <lists@ruby-forum.com> wrote:
> I do not know what is v ...
> I do not know Ruby (my languages are Haskell, C++ and Java), but my
> colleague showed this line of code for demonstrate "how to change the
> attributes of one object, based on the attributes of another one object
> on Ruby in one line"

# my version on one line:
v.attributes.each {|k,| v.send "#{k}=", info.instance_values[k]}

I still like it better.

> I asked him about complexity, and he said O(n^3)... So, I found it
> questionable.

Me, too.  I think it's essentially linear in the number of attributes,
with one quadratic wrinkle.  Let's do an analysis based on what the
methods seem to do:

v.attributes
either O(1) if it just hands out a reference, or O(n) if a Hash is
constructed for all attributes

.map {|k, val| {k => info.instance_values[k]}}
O(n): for every key-value pair a new Hash with a single entry is created

.reduce(:merge)
here's the wrinkle: Hash#merge is non-destructive, so every step copies
the accumulated Hash into a fresh one.  The total copying is
1 + 2 + ... + n, i.e. O(n^2), even though Hash lookup itself is O(1).
(A destructive merge! into one accumulator would stay O(n).)

each_pair{ |key, value| v.send( (key + '=').to_sym, value) }
O(n), unless v.send does something awfully complex

So the chain is O(n^2) at worst, and certainly not O(n^3).  With the
small number of values we likely see here, the performance is probably
dominated by all the unnecessary object constructions anyway.  I assume
that my suggestion (see above) is faster, but ultimately only
measurement will show.

Does that help?

Kind regards

robert```
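[Editor's note: the "only measurement will show" point can be sketched with a rough Benchmark comparison. This is illustrative only; `attrs` is a made-up stand-in for the attribute Hash, and the copy targets stand in for the real `v.send` calls.]

```ruby
require "benchmark"

# Made-up stand-in attribute Hash (assumption; real code has v.attributes).
attrs = (1..500).to_h { |i| ["k#{i}", i] }

chain  = nil
direct = nil

Benchmark.bm(12) do |x|
  # Original style: n single-entry Hashes, then n non-destructive merges.
  x.report("map+merge:") do
    chain = attrs.map { |k, _| { k => attrs[k] } }.reduce(:merge)
  end
  # Single pass updating one Hash in place.
  x.report("single each:") do
    direct = {}
    attrs.each { |k, _| direct[k] = attrs[k] }
  end
end

# Both produce the same Hash; only the allocation behaviour differs.
p chain == direct  # => true
```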
on 2013-03-21 21:52
`Yes, Thx!`