# Removing array duplicates where a subset is unique

I need to remove duplicates from an array of arrays. I can’t use
Array#uniq because some fields are different and not part of the
“key.” Here’s an example where the first 3 elements of each sub array
are the “key” and determine uniqueness. I want to keep only the first
one I get.

a = [[1, 2, 3, 4, 5], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]
=> [[1, 2, 3, 4, 5], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]

The return value of deduplicating this array should be: [[1, 2, 3, 4,
5]]

Here is my first attempt at solving the problem:

def dedup(ary)
  ary.map do |line|
    dupes = ary.select { |row| row[0..2] == line[0..2] }
    dupes.first
  end.uniq
end

dedup a
=> [[1, 2, 3, 4, 5]]

This works. However, it is super slow when operating on my dataset.
My arrays contain hundreds of thousands of sub arrays. The unique key
for each sub array is the first 12 (of 18) elements. It is taking many
seconds to produce each intermediate array (“dupes” in the example
above), so deduping the entire thing would likely take days.

Anyone have a superior and faster solution?

cr

Chuck R. wrote:

I need to remove duplicates from an array of arrays. I can’t use
Array#uniq because some fields are different and not part of the “key.”
Here’s an example where the first 3 elements of each sub array are the
“key” and determine uniqueness. I want to keep only the first one I get.

a = [[1, 2, 3, 4, 5], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]
=> [[1, 2, 3, 4, 5], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]

The return value of deduplicating this array should be: [[1, 2, 3, 4, 5]]

Might be faster if your intermediate is a hash, so instead of N**2 time
it’s N.

a = [[1, 2, 3, 4, 5], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]

h = {}
a.each do |row|
  h[row[0..2]] ||= row  # record the first match
end

p h.values # ==> [[1, 2, 3, 4, 5]]

Note that the output may come out in a different order. Does that
matter?
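To put rough numbers on the N**2-versus-N claim, a quick benchmark along these lines should show the gap (the row count and key pool here are made up for illustration):

```ruby
require 'benchmark'

# Synthetic data: 5_000 rows whose first 3 elements (the "key") come
# from a small pool, so there are plenty of duplicates.
rows = Array.new(5_000) { [rand(8), rand(8), rand(8), rand(1000), rand(1000)] }

quadratic = hash_based = nil

# N**2-ish: re-scan the whole array once per row
t1 = Benchmark.realtime do
  quadratic = rows.map { |line|
    rows.find { |row| row[0..2] == line[0..2] }
  }.uniq
end

# N: one pass, first row per key wins
t2 = Benchmark.realtime do
  h = {}
  rows.each { |row| h[row[0..2]] ||= row }
  hash_based = h.values
end

puts "scan-per-row: #{'%.3f' % t1}s   hash: #{'%.3f' % t2}s"
```

Both produce the same rows in first-seen order (on Ruby 1.9+, where hashes keep insertion order).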

Hi –

On Sat, 18 Jul 2009, Chuck R. wrote:

Here is my first attempt at solving the problem:
dedup a
=> [[1, 2, 3, 4, 5]]

This works. However, it is super slow when operating on my dataset. My
arrays contain hundreds of thousands of sub arrays. The unique key for each
sub array is the first 12 (of 18) elements. It is taking many seconds to
produce each intermediate array (“dupes” in the example above), so deduping
the entire thing would likely take days.

Anyone have a superior and faster solution?

See if this speeds it up meaningfully (and make sure I’ve got the
logic right):

def dedup(ary)
  uniq = {}
  ary.each do |line|
    uniq[line[0..2]] ||= line
  end
  uniq.values
end

David

David A. Black wrote:

def dedup(ary)
  uniq = {}
  ary.each do |line|
    uniq[line[0..2]] ||= line
  end
  uniq.values
end

Sweet! (I love Ruby; thanks Matz.)

Regards,

On Fri, Jul 17, 2009 at 7:51 PM, Chuck R.[email protected]
wrote:

(And Joel, I have presorted the array prior to removing the
dupes so I have already taken care of the ordering issue.)

I think what Joel was referring to was that in Ruby 1.8 a Hash doesn’t
maintain insertion order when traversed (Ruby 1.9 does maintain
insertion order):

ruby 1.8.2 (2004-12-25) [powerpc-darwin8.0]:
irb(main):001:0> h = {}
=> {}
irb(main):002:0> 5.times{|n| h[n] = n}
=> 5
irb(main):003:0> h
=> {0=>0, 1=>1, 2=>2, 3=>3, 4=>4}
irb(main):004:0> h["sadf"] = 3
=> 3
irb(main):005:0> h
=> {0=>0, 1=>1, "sadf"=>3, 2=>2, 3=>3, 4=>4}

On Jul 17, 2009, at 5:39 PM, David A. Black wrote:


David and Joel,

you both provided the same solution. I will test this to see what kind
of performance I get. It will be hell on memory, but I assumed any
solution likely would be. (And Joel, I have presorted the array prior
to removing the dupes so I have already taken care of the ordering
issue.)

cr

Hi –

On Sat, 18 Jul 2009, [email protected] wrote:

=> {}
irb(main):002:0> 5.times{|n| h[n] = n}
=> 5
irb(main):003:0> h
=> {0=>0, 1=>1, 2=>2, 3=>3, 4=>4}
irb(main):004:0> h["sadf"] = 3
=> 3
irb(main):005:0> h
=> {0=>0, 1=>1, "sadf"=>3, 2=>2, 3=>3, 4=>4}

If you wanted to maintain order you could (for some but probably not
much performance penalty) do something like:

def dedup(ary)
  uniq = {}
  res = []
  ary.each do |line|
    key = line[0..2]
    next if uniq[key]
    res << (uniq[key] = line)
  end
  res
end
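For example, with the sample array from the top of the thread plus an extra row (my addition, so the ordering is actually visible), that version keeps rows in first-seen order:

```ruby
# Same dedup as above, repeated so this snippet runs on its own.
def dedup(ary)
  uniq = {}
  res = []
  ary.each do |line|
    key = line[0..2]
    next if uniq[key]
    res << (uniq[key] = line)
  end
  res
end

a = [[1, 2, 3, 4, 5], [9, 9, 9, 1, 1], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]
p dedup(a)  # => [[1, 2, 3, 4, 5], [9, 9, 9, 1, 1]]
```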

David

Hi –

On Sat, 18 Jul 2009, Chuck R. wrote:

I believe the version you had originally, where you do a mapping of
the whole array, will typically use much more memory than the hash
version. Let’s say your original array has 1000 inner arrays, with 10
that are considered unique. The mapping will be a new array, also of
1000 elements. The hash will have 10 key/value pairs – thus much
smaller.
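Sketching that 1000-rows/10-keys scenario concretely (the data is invented to match the numbers above):

```ruby
# 1000 rows, keyed on the first 3 elements, with only 10 distinct keys.
rows = Array.new(1000) { |i| k = i % 10; [k, k, k, i] }

# The map-based intermediate has one element per input row.
mapped = rows.map { |line| rows.find { |row| row[0..2] == line[0..2] } }

# The hash-based intermediate has one entry per distinct key.
h = {}
rows.each { |row| h[row[0..2]] ||= row }

p mapped.size  # => 1000
p h.size       # => 10
```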

David

Hi –

On Jul 17, 2009, at 6:14 PM, David A. Black wrote:

def dedup(ary)
  uniq = {}
  res = []
  ary.each do |line|
    key = line[0..2]
    next if uniq[key]
    res << (uniq[key] = line)
  end
  res
end

David

I missed the beginning of this thread, but here is an implementation
I’ve used successfully:

def uniq_by(subject, &block)
  h = {}
  a = []
  subject.each do |s|
    comparator = yield(s)
    unless h[comparator]
      a.push(s)
      h[comparator] = s
    end
  end
  a
end

Usage:

u = uniq_by(ary) { |item| item.element }

Basically, what this allows you to do is specify what exactly about an
array item must be unique. It also preserves the original array order,
with a “first entry wins” approach to duplicate elimination.

Hope this is useful.
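Against the sample array from the top of the thread it would look like this (the helper is repeated so the snippet runs standalone):

```ruby
def uniq_by(subject)
  h = {}
  a = []
  subject.each do |s|
    comparator = yield(s)
    unless h[comparator]
      a.push(s)
      h[comparator] = s
    end
  end
  a
end

ary = [[1, 2, 3, 4, 5], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]
p uniq_by(ary) { |item| item[0..2] }  # => [[1, 2, 3, 4, 5]]
```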

Chuck R. wrote:

(And Joel, I have presorted the array prior
to removing the dupes so I have already taken care of the ordering
issue.)

That’s well and good, but in the process of using a hash to remove the
duplicates, the result will be out of order. See example below.

I was trying to figure out how to use a hash but did not
make the leap to the ||= construction on my own.

A simple if statement will achieve the same result:

a = [
  [1, 2, 3, 10],
  [1, 2, 3, 20],
  [1, 2, 3, 30],
  [2, 2, 3, 40]
]

h = {}

a.each do |suba|
  key = suba.slice(0, 3)

  if h[key]
    next
  else
    h[key] = suba
  end
end

p h.values

--output:--
[[2, 2, 3, 40], [1, 2, 3, 10]]

Hi –

On Mon, 20 Jul 2009, 7stud – wrote:


I’m not sure what that buys you, though. The ||= idiom should work
fine, unless you need to do something extra during the if statement.

David

On Jul 17, 2009, at 7:55 PM, David A. Black wrote:

I believe the version you had originally, where you do a mapping of
the whole array, will typically use much more memory than the hash
version. Let’s say your original array has 1000 inner arrays, with 10
that are considered unique. The mapping will be a new array, also of
1000 elements. The hash will have 10 key/value pairs – thus much
smaller.

Oh yes, my version had terrible execution performance and memory
performance. I was trying to figure out how to use a hash but did not
make the leap to the ||= construction on my own. I knew I was missing
something obvious… all of your rapid responses proved it.

FYI, the dedup code you provided performs quite admirably. I’ll take a
look at its memory footprint when I get in the office Monday and
report back.

cr

David A. Black wrote:

Hi –

On Mon, 20 Jul 2009, 7stud – wrote:


I’m not sure what that buys you, though. The ||= idiom should work
fine, unless you need to do something extra during the if statement.

David

Facets has Enumerable#uniq_by, implemented like this:

def uniq_by #:yield:
  h = {}; inject([]) { |a, x| h[yield(x)] ||= a << x }
end

So you can do:

require 'facets'
a = [[1, 2, 3, 4, 5], [1, 2, 3, 9, 4], [1, 2, 3, 4, 4]]
res = a.uniq_by { |el| el[0..2] }

hth,

Siep

Hi –

On Mon, 20 Jul 2009, 7stud – wrote:

David A. Black wrote:

I’m not sure what that buys you, though.

Simplicity of comprehension over brevity.

I guess I like the ||= idiom because it has both. But the if statement
will definitely work, of course.

David

Siep K. wrote:

Just remember this:

inject() = shite

David A. Black wrote:

I’m not sure what that buys you, though.

Simplicity of comprehension over brevity.

The ||= idiom should work
fine,

Of course.

David A. Black wrote:

Hi –

On Mon, 20 Jul 2009, 7stud – wrote:

David A. Black wrote:

I’m not sure what that buys you, though.

Simplicity of comprehension over brevity.

I guess I like the ||= idiom because it has both.

For you, yes. But apparently not for the OP:

I was trying to figure out how to use a hash but did not
make the leap to the ||= construction on my own.

The ||= idiom should work
fine

…until it doesn’t (I think you know what I’m referring to).

Hi –

On Mon, 20 Jul 2009, 7stud – wrote:

I guess I like the ||= idiom because it has both.

For you, yes. But apparently not for the op:

I was trying to figure out how to use a hash but did not
make the leap to the ||= construction on my own.

I wouldn’t recommend freezing one’s knowledge or leap abilities at a
particular point, though I’m certainly in sympathy with being
suspicious of punctuation-heavy stuff, but in the case of ||= it’s
such a common idiom, and so easy to learn, that it seems like a bit of
an artificial hardship not to learn it. Still, the if statement will work too.

The ||= idiom should work
fine

…until it doesn’t (I think you know what I’m referring to).

The hash default thing? I don’t think that comes into play here, does
it?

David

David A. Black wrote:

The ||= idiom should work
fine

…until it doesn’t (I think you know what I’m referring to).

The hash default thing? I don’t think that comes into play here, does
it?

No, but whenever I see ||= now, I get scared.

Hi –

On Mon, 20 Jul 2009, Joel VanderWerf wrote:

7stud – wrote:

No, but whenever I see ||= now, I get scared.

I do too! It looks like a very angry gnome.

I’m still waiting to see a use-case for &&=. I’ve come close to
thinking I had one once or twice, but it always turns out I didn’t.

David
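The closest thing to a use-case I can construct is normalizing a value only when it’s present (the example data here is mine):

```ruby
h = { name: "  Chuck  ", email: nil }

# x &&= expr is shorthand for x = x && expr: it reassigns only when the
# current value is truthy, and short-circuits past nil/false.
h[:name]  &&= h[:name].strip
h[:email] &&= h[:email].strip  # nil stays nil -- no NoMethodError

p h[:name]   # => "Chuck"
p h[:email]  # => nil
```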