Is there any way for me to remove only one of these? I can only seem
to remove all of them.
This is exactly why I use join models and ignore
has_and_belongs_to_many altogether.
The issue here is not the presence or absence of a join model – after
all, for a simple habtm, there’s no reason at all to introduce the extra
class. Rather, this is a reflection of the fact that the OP is trying
to do something weird. With or without a join model, there should be no
reason to add b to a.bs 3 times.
To the OP: can you explain more about why you think you need this? I’m
guessing your DB setup may be a bit pathological.
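For reference, a minimal sketch of the "simple habtm" being discussed, using the thread's hypothetical A and B models and the conventional as_bs join table (names are illustrative, not from the OP's schema):

class A < ActiveRecord::Base
  has_and_belongs_to_many :bs
end

class B < ActiveRecord::Base
  has_and_belongs_to_many :as
end

# a.bs << b inserts a row into as_bs; a.bs.delete(b) removes every row
# in as_bs that links that a to that b.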
b = B.first # Find the b whose association you want to remove.
a.bs.delete(b)
From has_and_belongs_to_many doc:
collection.delete(object, …)
Removes one or more objects from the collection by removing their
associations from the join table.
thanks for the reply, unfortunately this deletes ALL of them from the
association table. any other ideas?
thanks again,
dino
If it does then you’re doing something wrong. I just tried this in a
test app and it works as advertised.
a.bs << b
a.bs << b
a.bs << b
This is somewhat unclear, but it appears from your code that this is adding
the same b three times. So yes, if you did exactly as shown, all three
associations to that same b would be deleted.
a.bs << b1
a.bs << b2
a.bs << b3
a.bs.delete(b2) would properly delete only b2, leaving b1 and b3 intact.
This is expected behavior. The only thing I don’t like about how Rails
handles this is that << will happily create duplicate copies of the same
association. So I typically add a unique index across the two foreign keys
of the join table to prevent duplicate associations. That being said, I also use
has_many :through in all cases (as Greg mentioned). I basically pretend
HABTM does not exist.
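A sketch of that unique-index approach, assuming the hypothetical as_bs join table used earlier in the thread (Rails 2.x-style migration):

class AddUniqueIndexToAsBs < ActiveRecord::Migration
  def self.up
    # Reject a second identical (a_id, b_id) pair at the database level.
    add_index :as_bs, [:a_id, :b_id], :unique => true
  end

  def self.down
    remove_index :as_bs, :column => [:a_id, :b_id]
  end
end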
On Thu, Sep 17, 2009 at 2:36 PM, Marnen Laibow-Koser [email protected] wrote:
The issue here is not the presence or absence of a join model – after
all, for a simple habtm, there’s no reason at all to introduce the extra
class. Rather, this is a reflection of the fact that the OP is trying
to do something weird. With or without a join model, there should be no
reason to add b to a.bs 3 times.
Umm yeah, well… I work in genetics research and I can tell you I have
this exact scenario all the time. Genetic pathways repeat
throughout any genome, human or other. Your assumption that
everyone’s data is like your own is illogical.
And like I was saying, to avoid the problem I use a join model.
Storing an extra integer column and having an extra class in play
costs very little when compared to the headaches of having to look up
join table records using multiple keys and include limit statements.
It’s so much easier to just say Foo.delete( 18176236 ).
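A rough sketch of that join-model setup, with a hypothetical AB join model (class and column names are illustrative, not from the thread):

class AB < ActiveRecord::Base
  belongs_to :a
  belongs_to :b
end

class A < ActiveRecord::Base
  has_many :abs
  has_many :bs, :through => :abs
end

# Each join row has its own primary key, so a single association can be
# removed directly by id, e.g. AB.delete(18176236), with no multi-key
# lookup or LIMIT clause.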
This is somewhat unclear, but it appears from your code that this is adding
the same b three times. So yes, if you did exactly as shown, all three
associations to that same b would be deleted.
This appears to be what Dino was asking.
a.bs << b1
a.bs << b2
a.bs << b3
a.bs.delete(b2) would properly delete only b2, leaving b1 and b3 intact.
This is expected behavior. The only thing I don’t like about how Rails
handles this is that << will happily create duplicate copies of the same
association. So I typically add a unique index across the two foreign keys
of the join table to prevent duplicate associations.
Yeah, that makes a lot of sense. I tended to do that on join tables
before I ever heard of Rails.
That being said, I also use
has_many :through in all cases (as Greg mentioned). I basically pretend
HABTM does not exist.
This, on the other hand, makes no sense at all to me. For a simple
habtm, nothing at all is gained by using has_many :through instead. If
you outgrow the simple habtm, refactoring to has_many at that time is
quick and easy. I agree that habtm is very limited, but it’s a nice
shortcut where it works. There’s no reason to avoid it entirely.
Umm yeah, well… I work in genetics research and I can tell you I have
this exact scenario all the time.
And you use this sort of data model for it?
Genetic pathways repeat
throughout any genome, human or other. Your assumption that
everyone’s data is like your own is illogical.
I’m not making any assumptions about other people’s data. I am merely
talking about what can and can’t be modeled effectively with a
particular data structure.
And like I was saying, to avoid the problem I use a join model.
Storing an extra integer column and having an extra class in play
costs very little when compared to the headaches of having to look up
join table records using multiple keys and include limit statements.
It’s so much easier to just say Foo.delete( 18176236 ).
For your use case, this probably makes sense. For a simple habtm where
these features are not needed, I believe it is overkill.
thanks for all the replies. just to clarify my intent: i am
associating keywords with a set of documents, and the more times a
keyword is associated with a particular document, the more heavily
that keyword is weighted as relevant to that document. but it must be
the exact same keyword, that’s why i do this:
a.bs << b
a.bs << b
a.bs << b
so I was NOT asking for this:
a.bs << b1
a.bs << b2
a.bs << b3
and when an association disappears, i want to remove one of the
associations and one only, b/c the cardinality of duplicate
associations is important for my app. I explored the idea of an
increment counter, but then I have to add extra logic to the
controller or model.
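For illustration, a sketch of the counter idea mentioned above, assuming a hypothetical DocumentKeyword join model with a weight column; keeping that counter in sync is the extra model/controller logic being avoided:

dk = DocumentKeyword.find_or_create_by_document_id_and_keyword_id(doc.id, kw.id)
dk.increment!(:weight)  # bump the weight instead of inserting a duplicate row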
So ultimately, I refactored to has_many :through, and now I can just
remove one of the join records. I realize now, too, that this makes
more sense philosophically, as one of the intents of has_many :through
is to give the join equal status as a bona fide model, whereas
habtm is a cardinality-less association by design (that’s my
conclusion anyway).
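A rough sketch of that refactor, with hypothetical Document/Keyword/DocumentKeyword names; because each duplicate association is now its own record, exactly one can be removed:

class Document < ActiveRecord::Base
  has_many :document_keywords
  has_many :keywords, :through => :document_keywords
end

class Keyword < ActiveRecord::Base
  has_many :document_keywords
  has_many :documents, :through => :document_keywords
end

class DocumentKeyword < ActiveRecord::Base
  belongs_to :document
  belongs_to :keyword
end

# Remove one, and only one, of the duplicate associations:
doc.document_keywords.find_by_keyword_id(kw.id).destroy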
thanks for all your help!
dino