Forum: RSpec [cucumber] Cucumber and CI

Yi Wen (hayafirst)
on 2009-02-22 21:20
(Received via mailing list)
The rhythm for working with Cucumber advertised by http://cukes.info/ is to
write tests that fail first, then code that fixes them. Now my question is:
what are the implications of combining this with Continuous Integration?

We all know that when we do TDD/BDD at the unit level, one test can be fixed
fairly quickly, in a couple of minutes, and we can check in and kick off a
build. It is an ideal scene for doing CI: frequent check-ins and fast
feedback on build results.

Cucumber, as far as my understanding goes, works at the feature level. It
could take people days to finish a Cucumber feature. In the meantime, the
Cucumber test remains broken. What do we do then? We cannot check in any code
because that'll break the build. So we can only check in code after several
days? It doesn't sound right to me. Any takes on this issue? Thanks in
advance.

Yi
Zach Dennis (zdennis)
on 2009-02-22 22:41
(Received via mailing list)
On Sun, Feb 22, 2009 at 2:47 PM, Yi Wen <hayafirst@gmail.com> wrote:
> take people days to finish a cucumber feature. In the meantime, the cucumber
> test remains broken. What do we do then? We cannot check in any code because
> that'll break the build. So we can only checkin code after several days? It
> doesn't sound right to me. Any takes on this issue? Thanks in advance.
>

I use git, create a new branch for a feature, and work in that branch
while I'm implementing the feature. As I reach stable points in the
feature (even though it may not be done) I will merge into master and
push the changes. Usually at this point I've reached the end of a step
definition so the next step isn't failing, it's just pending.

If you're unable to reach a stable point to merge back into master, one
option is to call "pending" inside the step definition you were last working
on, so CI doesn't treat that as a sign that someone broke the build. This
lets you merge back into master and push.
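
For anyone who hasn't used that trick, here's a minimal sketch of what it
might look like (the step text and message are made up for illustration):

  # features/step_definitions/signup_steps.rb (hypothetical)
  When /^I submit the signup form$/ do
    # pending makes Cucumber report this step (and the rest of the scenario)
    # as pending rather than failing, so the CI build stays green.
    pending "still wiring up the signup controller"
  end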

Most of the time I work in the feature branch, continually updating
and rebasing as others push changes so I'm not out of sync, and then I
merge back into master and push when the feature is either stable or
done.

I'm sure there are lots of ways to go about this though,

--
Zach Dennis
http://www.continuousthinking.com
http://www.mutuallyhuman.com
Mark Wilden (Guest)
on 2009-02-22 23:05
(Received via mailing list)
On Sun, Feb 22, 2009 at 11:47 AM, Yi Wen <hayafirst@gmail.com> wrote:

> Cucumber, as far as my understanding goes, works on feature level. It could
> take people days to finish a cucumber feature. In the meantime, the cucumber
> test remains broken. What do we do then? We cannot check in any code because
> that'll break the build. So we can only checkin code after several days? It
> doesn't sound right to me. Any takes on this issue? Thanks in advance.

What I do is write the feature, and check that in. Then I work on each
step definition TDD-wise, checking in each as it runs without error
and without failing the expectation.

So I check in yellow (with pending steps) and when there are no more
pending steps, I mark the feature as finished.

///ark
Aslak Hellesøy (aslakhellesoy)
on 2009-02-22 23:42
(Received via mailing list)
On Sun, Feb 22, 2009 at 8:47 PM, Yi Wen <hayafirst@gmail.com> wrote:
> The rhythm for wrking with cucumber advertised by http://cukes.info/ is to
> write tests that fails first, then code that fixes it. Now my question is,
> what is the implication when combine this with Continuous Integration?
>

* Nobody checks in code with failing tests (cucumber features, rspec
tests, anything else).
* If someone accidentally does, CI will run all tests and tell the team.

> We all know when we do TDD/BDD in unit level, one test can be fixed fairly
> quick in a coupe minutes and we can check in and kick off a build. It is a
> ideal scene for doing CI: frequent checkin and fast feedback on build
> results.
>
> Cucumber, as far as my understanding goes, works on feature level. It could
> take people days to finish a cucumber feature. In the meantime, the cucumber
> test remains broken. What do we do then? We cannot check in any code because

A feature typically consists of several scenarios. You don't have to
implement all scenarios before you commit. You don't have to write all
scenarios when you start working on a feature. I recommend you never
have more than one yellow scenario at a time.

The same goes for scenarios, which consist of several steps.

I recommend you commit every time you have made a step go from yellow
to green (via red).
This way, many successive commits will gradually build the whole
feature.
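
As an illustration (the step and model names here are invented), a step
typically starts out yellow as a bare pending stub and ends up green once the
code behind it satisfies the expectation. These are before-and-after
snapshots of the same step definition, not two live definitions in one file:

  # Yellow: the step is declared but still pending
  Given /^I have an empty cart$/ do
    pending
  end

  # Green: the same step after implementing it (Cart is a made-up model)
  Given /^I have an empty cart$/ do
    @cart = Cart.new
    @cart.items.should be_empty
  end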

In my experience, getting a step to go from yellow to green rarely
takes more than an hour (usually less).
Is there anything preventing you from working this way?

Aslak
Mark Wilden (Guest)
on 2009-02-23 00:32
(Received via mailing list)
On Sun, Feb 22, 2009 at 1:36 PM, aslak hellesoy
<aslak.hellesoy@gmail.com> wrote:
> You don't have to write all
> scenarios when you start working on a feature. I recommend you never
> have more than one yellow scenario at a time.

Whereas I use scenarios as a "to-do" list. I'll keep adding them as I
think of them or as they come up in discussion.

///ark
Yi Wen (hayafirst)
on 2009-02-23 01:16
(Received via mailing list)
I totally agree with you on this. I have a feeling a lot of people kind of
use Cucumber as a sexy way of doing waterfall.
David Chelimsky (Guest)
on 2009-02-23 01:27
(Received via mailing list)
On Sun, Feb 22, 2009 at 6:14 PM, Yi Wen <hayafirst@gmail.com> wrote:
> I totally agree with you on this. I have a feeling a lot of people kind of
> use cucumber as a sexy way for doing waterfall.

That may be so, but one view of agile is that each iteration is a
mini-waterfall. BDD suggests that we *should* define all of the
scenarios in the iteration planning meeting because we use them as a
planning tool (how can we estimate a feature at all before we've
talked about the acceptance criteria?).

FWIW,
David
Yi Wen (hayafirst)
on 2009-02-23 02:11
(Received via mailing list)
Waiting for an hour before I can check in something is still too long for
me. I'd like to check in every couple of minutes most of the time.

But I think making each step just pending first, and then making it green
when I finish the implementation for that step, makes sense. I probably will
still use passing unit tests as check-in points.

Thanks

Yi
Phlip (Guest)
on 2009-02-23 03:59
(Received via mailing list)
Yi Wen wrote:

> I totally agree with you on this. I have a feeling a lot of people kind
> of use cucumber as a sexy way for doing waterfall.

"Storytests" are very well represented in the Agile development
community in
general. Cucumber is a (slam-dunk) reinterpretation of Ward Cunningham's
FIT
concept.

(Naturally, born of Java, FIT had no direct translation to Ruby, and
that's
probably a good thing!)

It's only waterfall if your product-owner writes or commissions
_thousands_ of
story tests before doing _any_ of them.

I heavily suspect that the author of a cucumber "feature" can hardly
wait to see
it pass, and I suspect they will refrain from diverting energy to
writing
another one. That is the heart of Agile - the feedback loop.

So what's the maximum number of cucumber features that anyone has ever
seen
on-deck but not yet passing? That's a bad metric, exactly like excess
inventory
in a warehouse.

--
   Phlip
Yi Wen (hayafirst)
on 2009-02-23 04:18
(Received via mailing list)
yeah, you guys are probably right on this. I was just overstating. :)

Yi
Phlip (Guest)
on 2009-02-23 05:03
(Received via mailing list)
Yi Wen wrote:

> yeah, you guys are probably right on this. I was just over stating. :)

Ah, but Waterfall is indeed like the Emissaries of the Shadow. If you
defeat one
of them, in one form, another one will always appear, take shape, and
grow...

I myself have heard well-meaning product managers say "When we go for
the
rewrite, we are going to make sure we specify each feature, first,
before coding
them!" They said that because the last effort - code-and-fix - had grown
until
it died under the weight of its cruft. They perceived their inability to
safely
request new features as evidence that they did not specify enough,
up-front.

These managers knew better than to use classic Waterfall, but they still
didn't
understand that Big Requirements Up Front is essentially Waterfall's
worst
aspect. And so they re-invented Waterfall, yet again, in yet another
form.

So keep flying that flag of vigilance, there!

--
   Phlip
Aslak Hellesøy (aslakhellesoy)
on 2009-02-23 08:32
(Received via mailing list)
On Mon, Feb 23, 2009 at 3:36 AM, Phlip <phlip2005@gmail.com> wrote:
> (Naturally, born of Java, FIT had no direct translation to Ruby, and
> that's probably a good thing!)
>

It actually did. It's just that very few have ever used it:
http://fit.rubyforge.org/

> It's only waterfall if your product-owner writes or commissions _thousands_
> of story tests before doing _any_ of them.
>
> I heavily suspect that the author of a cucumber "feature" can hardly wait to
> see it pass, and I suspect they will refrain from diverting energy to
> writing another one. That is the heart of Agile - the feedback loop.
>

Well said. That's the feeling I get when I work with Cucumber.

> So what's the maximum number of cucumber features that anyone has ever seen
> on-deck but not yet passing? That's a bad metric, exactly like excess

Do you mean "on filesystem"? I have used Cucumber on 5-6 projects now,
and I never exceed 1. If there is a bigger backlog "somewhere else"
(pile of cards, word documents...) then I keep them there for as long
as possible.
Pat Maddox (Guest)
on 2009-02-23 11:05
(Received via mailing list)
On Sun, Feb 22, 2009 at 6:36 PM, Phlip <phlip2005@gmail.com> wrote:
> So what's the maximum number of cucumber features that anyone has ever seen
> on-deck but not yet passing? That's a bad metric, exactly like excess
> inventory in a warehouse.

I don't know that it's bad.  At the beginning of an iteration, I have
most of the features & scenarios that I'll be working on.  So I start
off with a big pile of yellows, and as the iteration moves on it
gradually turns green.  I'd say we average 8-10 pending features at
the beginning of each iteration maybe.

Pat
Phlip (Guest)
on 2009-02-23 14:19
(Received via mailing list)
aslak hellesoy wrote:

>> So what's the maximum number of cucumber features that anyone has ever seen
>> on-deck but not yet passing? That's a bad metric, exactly like excess
>> inventory in a warehouse.
>
> Do you mean "on filesystem"? I have used Cucumber on 5-6 projects now,
> and I never exceed 1. If there is a bigger backlog "somewhere else"
> (pile of cards, word documents...) then I keep them there for as long
> as possible.

Among the Agile consultants, the metric there is:

   Time between fully specifying a feature and profiting from its use.

You use each card as a tickler for one last conversation with an onsite
customer, before cutting the test and code, right?

BTW, another CI metric for cucumberists to answer is:

   After passing a cucumber test, it latches, and gates integration.

I think cucumber builds that latch effect in with the 'pending' keyword,
right?
Pass the cuke, take it out, integrate, and then it runs in your
integration
batch, right?

--
   Phlip
Andrew Premdas (Guest)
on 2009-02-23 16:01
(Received via mailing list)
I'd question the wisdom of checking into an integration server every couple
of minutes. I'm not sure if you meant that, but if you did then I think these
sorts of check-ins have to be in bigger chunks. The reason is that each
check-in to an integration server is asking my colleagues to check out my
code and integrate it into their current work. So everything I check in to
the integration server should be fit for them to use, and ideally it should
have been reviewed (self review, or even better a bit of peer review - easy
enough if pairing). You just can't do that in 2 minutes. IMO a complete
scenario is about the smallest size chunk to integrate with, and a complete
feature about the largest.

Of course if you're using Git (or any distributed VCS) you can just branch,
commit locally and rebase from the master. If you want to push to get a
backup as well, you can always have a backup target in addition to your
integration target. If you're not using Git (or something similar) locally
I'd highly recommend that.

I think it's reasonable to make failing steps pending for an integration
commit, but it's not something I would like to do regularly; I'd much prefer
to wait a bit longer before integrating and make them green.

I really don't like working with more than 1 failing step, but find that
occasionally I end up doing that (normally because a new step prompts a
refactoring of an older step, and that then breaks as well).


HTH

Andrew


2009/2/23 Yi Wen <hayafirst@gmail.com>
Mark Wilden (Guest)
on 2009-02-23 16:39
(Received via mailing list)
On Mon, Feb 23, 2009 at 6:56 AM, Andrew Premdas <apremdas@gmail.com>
wrote:

> I'd question the wisdom of checking into an integration server every couple
> of minutes.

Our mantra is ABC: Always Be Committing. So we commit anytime we feel
like it, as long as it doesn't break the build. This makes life a lot
easier when there is merging to do.

> I'm not sure if you meant that but if you did then I think these
> sort of checkins have to be in bigger chunks. The reason is that each
> checkin to an intergration server is asking my colleagues to checkout my
> code and integrate it into their current work.

Just because I push doesn't mean my coworkers have to pull.

> IMO a complete
> scenario is about the smallest size chunk to integrate with, and a complete
> feature about the largest

A refactoring, a new method (and its tests), a new test, a fixed typo
- these are all appropriate chunks of code to check in.

I think this is far superior to making massive checkins at the end of
each iteration. We usually fall somewhere in between.

///ark
Phlip (Guest)
on 2009-02-23 17:04
(Received via mailing list)
Mark Wilden wrote:

>> I'd question the wisdom of checking into an integration server every couple
>> of minutes.
>
> Our mantra is ABC: Always Be Committing. So we commit anytime we feel
> like it, as long as it doesn't break the build. This makes life a lot
> easier when there is merging to do.

In a post-Agile world, we often need to remind the juniors about the Best
Practices that started the movement. Integrate every time you could use a
roll-back. Use incremental testing, and a test server. You can't integrate if
your changed tests fail. The first step of integrating is pulling in everyone
else's changes.

And work in one room, so if you know another pair is in the same module,
you
just holler to them to integrate as soon as possible, each time you do
it.

--
   Phlip
James Byrne (byrnejb)
on 2009-02-23 17:44
Just on a side note, how many features / stories have people seen on their
projects, and how much of their project was covered by features/stories? I
refrain from the terms average and typical because there ain't no such thing.
But I would be interested in getting an idea of how many features and
scenarios people have used to complete a project, along with a few brief
comments giving the scale (number of total/concurrent users) and nature
(order entry / inventory control / social networking / financial services /
government regulatory) of the project concerned. Something like:

F=31, S=165, PC=100%, TU=30, CU=21, financial services (insurance
claims)
Stephen Eley (Guest)
on 2009-02-23 18:22
(Received via mailing list)
On Mon, Feb 23, 2009 at 9:56 AM, Andrew Premdas <apremdas@gmail.com>
wrote:
> I'd question the wisdom of checking into an integration server every couple
> of minutes. I'm not sure if you meant that but if you did then I think these
> sort of checkins have to be in bigger chunks.

To me the answer is just what Zach said: commit as often as you want,
every couple of minutes or whatever, but do it in a separate branch.
A branch per actively developed feature isn't unreasonable.  You get
to decide whether that branch is shared remotely or just lives on your
machine.  Across a team, if the project is big and structured enough,
you could even have an 'integration' branch that you merge into for
CI, then a 'release' branch for rolling into production, and leave
'master' for point releases or dispense with it entirely.  There's
nothing magic about the 'master' branch, it's just the default name
when others aren't specified.

If you do end up doing all your work on the master branch, for
whatever reason, it still doesn't hurt to commit all the time.  Just
don't *push* it until everything works.  Or if you do, don't push it
to your integration server.  Git gives you a lot of control over this
stuff.



> Of course if your using Git (or any distributed vcs) you can just branch,
> commit locally and rebase from the master. If you want to push to get  a
> backup as well, you can always have a backup target in addition to your
> integration target. If your not using Git (or something similar) locally I'd
> highly reccomend that.

To me the single biggest advantage of Git over Subversion and other
prior ilk is the ease of branching.  You can branch in Subversion, but
it's a pain in the ass, requiring some manual repository configuration
and a lot of annoying drudgework on merging.  It discourages
developers from doing it casually.  In Git, branching is utterly
trivial: creating a branch takes seconds, and merging back is
automatic about 90% of the time.  There is no reason not to branch as
often as convenient, and leave the main branch for stuff that's known
to work.

That's the win.  Offline committing and networks of distributed
repositories are just sort of a bonus for most people.  (Particularly
now that Github has helped to reestablish a 'centralized repository'
culture for the majority of shared Ruby projects.)



--
Have Fun,
   Steve Eley (sfeley@gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Stephen Eley (Guest)
on 2009-02-23 18:31
(Received via mailing list)
On Mon, Feb 23, 2009 at 10:22 AM, Mark Wilden <mark@mwilden.com> wrote:
>
> Our mantra is ABC: Always Be Committing. So we commit anytime we feel
> like it, as long as it doesn't break the build. This makes life a lot
> easier when there is merging to do.

I think your "doesn't break the build" condition is a lot bigger than
you make it sound.  >8->  What's the definition of "the build" in your
work culture?  Do you run all tests every time before committing?  Or
just before pushing?



--
Have Fun,
   Steve Eley (sfeley@gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Stephen Eley (Guest)
on 2009-02-23 20:18
(Received via mailing list)
On Mon, Feb 23, 2009 at 11:44 AM, James Byrne <lists@ruby-forum.com>
wrote:
>
> F=31, S=165, PC=100%, TU=30, CU=21, financial services (insurance
> claims)

Whathuh?  Wow, you really *do* work for government, don't you?  >8->
That line hurts me to look at and I'm not going to try to put my
answer in that format.

Since I've started using Cucumber I've mostly been doing smaller
projects, and the only ones I've completed or brought near completion
have been personal ones.  My last project was a spike that had to be
done inside a week and I abandoned most testing (and it shows), but
apart from that it seems like the projects I consider 'smallish' tend
to have about 10-30 features, with usually 5-10 scenarios per feature.

If I had to think about how feature counts scale, I'd say that in the
most general case they scale by significant models.  A "significant
model" is an entity that really matters to the business process (as
in, you can't describe the process without it) and that has a
non-trivial interface.  If I'm building a membership registration
system, "Member" is certainly a significant model.  "Payment" is
significant.  "Phone number" probably isn't, even if I break it out
into a separate table and Rails model for relational reasons.

A significant model will likely have somewhere from 2-5 features
simply by virtue of CRUD operations.  You need to be able to view the
information.  That's a feature.  You need to be able to edit it.
That's a feature too.  Whether create/update/delete are separate
features or all one feature varies by how complex or different the
interface needs to be in each case.  If it's all one form and that
form has a "Delete" button with a simple yes/no confirmation, you can
probably cover it with one feature.  Sometimes you can't.  ...And
sometimes there's a need for imports or printed reports or whatever,
and those are all features.

So that's how I gauge this stuff.  The driving metric is interface
complexity and the number of different major interactions an actor
could have with the application.  I don't know how to gauge "percent
completion" off of that, or why the nature of the industry would be
important.

And...  "Total or concurrent users?"  That you're asking for that
information totally baffles me.  How could that possibly make a
difference to the number of features?  That's a scaling issue.  An
implementation detail, unrelated to app complexity or business value.
If I wrote an executive information system that only had 15 users, but
those users used it to save millions of dollars, how does the number
"15" help you determine how many features you should write?


--
Have Fun,
   Steve Eley (sfeley@gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Mark Wilden (Guest)
on 2009-02-23 21:32
(Received via mailing list)
On Mon, Feb 23, 2009 at 9:09 AM, Stephen Eley <sfeley@gmail.com> wrote:
> On Mon, Feb 23, 2009 at 10:22 AM, Mark Wilden <mark@mwilden.com> wrote:
>>
>> Our mantra is ABC: Always Be Committing. So we commit anytime we feel
>> like it, as long as it doesn't break the build. This makes life a lot
>> easier when there is merging to do.
>
> I think your "doesn't break the build" condition is a lot bigger than
> you make it sound.  >8->  What's the definition of "the build" in your
> work culture?  Do you run all tests every time before committing?  Or
> just before pushing?

If you're working in a local branch, you can do whatever you want. :)
Personally, I run all specs and features before I commit to my local
topic branch.

When I want to push to the remote repository, I pull into my local
master branch, rebase the topic branch from master, merge the topic
into master, run all specs and features, then push.

///ark
Phlip (Guest)
on 2009-02-23 21:39
(Received via mailing list)
James Byrne wrote:

> Just on a side note,  how many features / stories have people seen on
> their projects and how much of their project was covered by
> features/stories?

My current day job is old-school Rails. Some of the tests - written
before I got
here - used a most despicable pattern. Someone would write a test on
some
transactions which pushed the DB into state X. Then, when they needed a
database
in state X to TDD the next transaction, they would call the old test.

This habit - across dozens of business rules - led to tower-of-jello test
cases, where any disturbance risks us commenting out the tests, because we
can't figure out how to pass them - even by reverting and trying again!

So, in this degenerate case, we have saturation testing for all the
low-level
code methods, but we are missing the high-level view that Cuke ought to
be
providing...

--
   Phlip
Stephen Eley (Guest)
on 2009-02-23 22:34
(Received via mailing list)
On Mon, Feb 23, 2009 at 3:22 PM, Mark Wilden <mark@mwilden.com> wrote:
> On Mon, Feb 23, 2009 at 9:09 AM, Stephen Eley <sfeley@gmail.com> wrote:
>>
>> I think your "doesn't break the build" condition is a lot bigger than
>> you make it sound.  >8->  What's the definition of "the build" in your
>> work culture?  Do you run all tests every time before committing?  Or
>> just before pushing?
>
> If you're working in a local branch, you can do whatever you want. :)

Ah, okay, so we *are* talking about pretty much the same thing.  >8->
The way you said "committing" I thought everyone was going into the
same integration branch every time.  Mea culpa for assuming.

--
Have Fun,
   Steve Eley (sfeley@gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
James Byrne (byrnejb)
on 2009-02-24 00:57
Stephen Eley wrote:
> On Mon, Feb 23, 2009 at 11:44 AM, James Byrne <lists@ruby-forum.com>
> wrote:
>>
>> F=31, S=165, PC=100%, TU=30, CU=21, financial services (insurance
>> claims)
>
> Whathuh?  Wow, you really *do* work for government, don't you?  >8->
> That line hurts me to look at and I'm not going to try to put my
> answer in that format.
>

Only in the sense that every taxpayer does. It seemed to me a shorthand way
of describing the information requested without being particularly obscure.

Total users vs. concurrent users gives a very good idea of the resources
behind a project, or at least the potential resources, together with an idea
of how important to a business a project might be. Certainly, it will not
catch the corner cases of, say, high-value, low-volume processing, and it may
give unwarranted weight to a minor project in a large corporation. On the
whole though, I think it will give a good feel for what size feature/story
driven projects presently are. Perhaps it might be better to ask for the
number of active developers / client representatives per project instead.
Andrew Premdas (Guest)
on 2009-02-24 04:29
(Received via mailing list)
Comment inline

2009/2/23 Mark Wilden <mark@mwilden.com>

>
There can be a difference between committing and integrating now that we have
distributed VCSs.


>
> > I'm not sure if you meant that but if you did then I think these
> > sort of checkins have to be in bigger chunks. The reason is that each
> > checkin to an intergration server is asking my colleagues to checkout my
> > code and integrate it into their current work.
>
> Just because I push doesn't mean my coworkers have to pull.


Yes it does, surely you can't be saying you can commit to an integration
server without pulling the code from it first. Git wouldn't let you push
to
the integration server if you were behind.

>
>
> > IMO a complete
> > scenario is about the smallest size chunk to integrate with, and a
> complete
> > feature about the largest
>
> A refactoring, a new method (and its tests), a new test, a fixed typo
> - these are all appropriate chunks of code to check in.
>

Agreed, but the context was a BDD workflow.

>
> I think this is far superior to making massive checkins at the end of
> each iteration. We usually fall somewhere in between.


I did say a complete feature was the largest chunk to go to integration, and
for this to be acceptable to me it should be a very small feature. So neither
of these check-ins is massive; I'd envisage 30 minutes to 2 hours of work,
maybe.
Mark Wilden (Guest)
on 2009-02-24 06:19
(Received via mailing list)
On Mon, Feb 23, 2009 at 5:45 PM, Andrew Premdas <apremdas@gmail.com>
wrote:
>>
>> Just because I push doesn't mean my coworkers have to pull.
>
> Yes it does, surely you can't be saying you can commit to an integration
> server without pulling the code from it first

I can't and I didn't. :)

///ark
Stephen Eley (Guest)
on 2009-02-24 07:38
(Received via mailing list)
On Mon, Feb 23, 2009 at 6:57 PM, James Byrne <lists@ruby-forum.com>
wrote:
>
> Total users vs. concurrent users gives a very good idea of the resources
> behind a project, or at least the potential resources, together with an
> idea of how important to a business that a project might be.

If you say so.  Personally I don't grok that relationship at all.  My
driving "metaproject" is my organization's Web site, which has tens of
thousands of total users across different roles (members, academic
institutions, publishers who want to rent our mailing list, employers
and job seekers, etc.) but a *concurrent* user count of...probably two
digits on an ordinary day.  I've never bothered to measure it for
certain, but I could do some math with Google Analytics and tell you
that it couldn't be higher.

What can you learn from that?  Taken in isolation, without knowing
anything more, could you compute the value of the Web site to the
organization?  Could you tell me what the feature count is likely to
be?



--
Have Fun,
   Steve Eley (sfeley@gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Matt Wynne (mattwynne)
on 2009-02-24 10:28
(Received via mailing list)
On 23 Feb 2009, at 00:09, Yi Wen wrote:

> Wait for an hour before I can checkin something is still too long
> for me. I'd like to checkin every couple minutes most of time.
>
> But I think to make each step just pending first and then make it
> green when I finish implementation for the step makes sense. I
> probably will still use unit tests passing as checkin points.

Other people have mentioned this in passing, but here's a great write-
up of using git branches to manage local work. I've been doing it for
a couple of months now and it's a fantastic way to work:

http://blog.hasmanythrough.com/2008/12/18/agile-gi...

See particularly the notes about squashing commits - this allows you
to commit really often in your local branch, then merge these commits
together before you push them into the main source control repository.

You can even use git commit --amend to commit on red (e.g. at the end
of the day) and then change that commit later.


Matt Wynne
http://blog.mattwynne.net
http://www.songkick.com
Rob Holland (Guest)
on 2009-02-24 11:17
(Received via mailing list)
On Tue, Feb 24, 2009 at 9:17 AM, Matt Wynne <matt@mattwynne.net> wrote:

> You can even use git commit --amend to commit on red (e.g at the end of the
> day) and then change that commit later.

While I think commit --amend is very useful, I'm not sure why you'd
bother to commit at the end of the day, knowing full well you were
going to amend it first thing tomorrow morning.

What have you gained by committing a known-bad change set? It does no
harm, sure, but I don't understand the gain.
Matt Wynne (mattwynne)
on 2009-02-24 11:29
(Received via mailing list)
On 24 Feb 2009, at 09:30, Rob Holland wrote:

>
> What have you gained by commiting a known-bad change set? It does no
> harm sure, but I don't understand the gain.

Good question. It's not something I do routinely, but when I do, I
have a couple of motives:

I take my laptop home on the bus through central London, so I like to push
my local branch up to the server (using the wonderful git_remote_branch[1])
just in case some scallywag takes a shine to it and decides they want it
more than me.

The maturity of the team I'm on means that we have quite a variety of
development environments, so I might again use the remote branch as a
'shelf' to pass the failing code from one machine to another if we
decided to move to working on someone else's machine.

I also do this when I need to urgently switch out of the story branch to
work on something else. In that case I drop everything, commit it as is, and
check out the branch I need to start working on.

It's also worth mentioning Kent Beck's advice in the original TDD book
where he advocates leaving one test failing when you go home. Not that
you have to commit it to source control, but it's a nice idea -
leaving a thread hanging so you know where to start the next day.


[1]http://github.com/webmat/git_remote_branch/tree/master

Matt Wynne
http://blog.mattwynne.net
http://www.songkick.com
Raimond Garcia (voodoorai2000)
on 2009-02-24 13:56
>David Chelimsky
>That may be so, but one view of agile is that each iteration is a
>mini-waterfall. BDD suggests that we *should* define all of the
>scenarios in the iteration planning meeting because we use them as a
>planning tool (how can we estimate a feature at all before we've
>talked about the acceptance criteria?).

>Pat Maddox
>I don't know that it's bad.  At the beginning of an iteration, I have
>most of the features & scenarios that I'll be working on.  So I start
>off with a big pile of yellows, and as the iteration moves on it
>gradually turns green.  I'd say we average 8-10 pending features at
>the beginning of each iteration maybe.

I really like this approach of mini-waterfalls, treating them as short 1-2
week iterations, writing only the stories for the upcoming iteration,
estimating them and organizing them by priorities. Pivotal does a much better
job than I do of telling us what we can do depending on our previous
velocity.

Regarding commits, the approach I've enjoyed the most was whilst pair
programming. One of us would make the first step go from pending to passing,
and make the next step fail. At that point we committed to a branch. The
other programmer then pulled from that branch and followed the same process:
implementing the code to make the failing step pass, writing the definition
of the next step so that it failed, and committing to the branch.

We had to work remotely, so every time we committed to the branch we also
disconnected from VNC and connected to the other person's machine. That way,
when it was your turn to type you would feel more comfortable on your own
machine, whilst the other person is just observing and discussing whilst
connected to your machine.

Usually once we got a single scenario working we would rebase with
master
and push to the integration server.  All other scenarios in that feature
were left pending,
to minimize the chances of breaking the build.

I guess this same process can be applied individually, even though it's hard
to keep the discipline. I usually just keep going without committing until
the whole scenario is passing, and it's never as much fun as working with
someone.

Cheers,

Rai
Phlip (Guest)
on 2009-02-24 14:04
(Received via mailing list)
Matt Wynne wrote:

> I take my laptop home on the bus through central London

Got WiFi?

--
   Phlip
Aslak Hellesøy (aslakhellesoy)
on 2009-02-24 14:17
(Received via mailing list)
On Tue, Feb 24, 2009 at 10:30 AM, Rob Holland <rob.holland@gmail.com>
wrote:
> On Tue, Feb 24, 2009 at 9:17 AM, Matt Wynne <matt@mattwynne.net> wrote:
>
>> You can even use git commit --amend to commit on red (e.g at the end of the
>> day) and then change that commit later.
>
> While I think commit --amend is very useful, I'm not sure why you'd
> bother to commit at the end of the day, knowing full well you were
> going to amend it first thing tomorrow morning.
>

Because the longer you wait, the more your code will diverge from your
teammates'. If you don't commit often you rob them of the opportunity
to reduce merge hell.

Aslak
James Byrne (byrnejb)
on 2009-02-24 15:35
Stephen Eley wrote:

> What can you learn from that?  Taken in isolation, without knowing
> anything more, could you compute the value of the Web site to the
> organization?  Could you tell me what the feature count is likely to
> be?

I do not know what the actual feature count and scenario count is for any
type of project of any scale at present. Nor is my question intended to
answer what the feature/scenario count of any project is likely to be.
However, I do think that getting some initial data will, in itself, lead to
refinements in the approach to the question. I believe that this concept is
the essence of agile development, is it not?

In addition to the counts, I asked for the nature of the project as well. A
social network project with 40 features, 200 scenarios and 100sK/100 total /
concurrent users can be considered in a different light than one with 400
features, 8000 scenarios, 50 total and 15 concurrent users, if the second one
happens to be a manufacturing process control system.

The question of value to the organization is not one that I raise and is in
any case irrelevant. It may be assumed that any project that is funded
represents some value to someone in the sponsoring enterprise. Whether it
actually would provide any material benefit to the organization as a whole is
a question which is frequently left unasked and unanswered, in my experience.
Which, before you ask, involves computer systems design and development at
several very large multinational corporations. (Which is the main reason that
I now do what I do where I am.)

To get back to the initial question, I am only looking for a few primitive
metrics regarding scope and scale to get a sense of how feature/scenario
counts relate to specific projects. Doubtless, there are better questions to
ask. Perhaps the number of models, the expected number of rows, and the
anticipated number of transactions per day would all provide better insight.
But, other than counting the number of models, this information requires a
good deal more effort than counting the number of features and scenarios one
has, estimating how much of one's code base is covered by them, guessing how
many concurrent users you are expected to support, and outlining the basic
nature of the project. Or so I believe.
Rob Holland (Guest)
on 2009-02-24 16:02
(Received via mailing list)
> Because the longer you wait, the more your code will diverge from your
> teammates'. If you don't commit often you rob them of the opportunity
> to reduce merge hell.

Please note I did say commit, and not push, and I inferred from Matt
he meant commit and not push (although he has explained otherwise
since).

I find pushing last thing at night even more bizarre to be honest :/
If you're going home, it seems reasonable that other people might be, ergo
there won't be many more changes made (an assumption, granted). Also, if they
are going to continue to work and make changes, why force them to merge a
broken/half-done/possibly-to-be-completely-redone-later commit? Makes no
sense to me :/
Aslak Hellesøy (aslakhellesoy)
on 2009-02-24 16:31
(Received via mailing list)
On Tue, Feb 24, 2009 at 3:47 PM, Rob Holland <rob.holland@gmail.com>
wrote:
>> Because the longer you wait, the more your code will diverge from your
>> teammates'. If you don't commit often you rob them of the opportunity
>> to reduce merge hell.
>
> Please note I did say commit, and not push, and I inferred from Matt
> he meant commit and not push (although he has explained otherwise
> since).
>

Ok, let's all be more specific when we talk about SCM operations. Not
everyone is using git all the time. (I wish I did, but I often work in
the "enterprise", so it will take a while for them.)

Say "git commit" or "svn commit" or "git push" instead of just
"commit" or "push".

Aslak

> I find pushing last thing at night even more bizarre to be honest :/

Completely agree. Ending the day with a git push / svn commit is
verboten where I work. Dave Laribee describes why in biblical form:
http://codebetter.com/blogs/david_laribee/archive/...
Matt Wynne (mattwynne)
on 2009-02-24 16:44
(Received via mailing list)
On 24 Feb 2009, at 15:23, aslak hellesoy wrote:

>> I find pushing last thing at night even more bizarre to be honest :/
>
> Completely agree. Ending the day with a git push / svn commit is
> verboten where I work. Dave Laribee describes why in biblical form:
> 
http://codebetter.com/blogs/david_laribee/archive/...

I was just looking for that post, and you beat me to it :)


Matt Wynne
http://blog.mattwynne.net
http://www.songkick.com
Stephen Eley (Guest)
on 2009-02-24 17:17
(Received via mailing list)
On Tue, Feb 24, 2009 at 9:47 AM, Rob Holland <rob.holland@gmail.com>
wrote:
>
> I find pushing last thing at night even more bizarre to be honest :/
> If you're are going home, it seems reasonable that other people might
> be, ergo there won't be many more changes made (an assumption
> granted). Also, if they are going to continue to work and make
> changes, why force them to merge a
> broken/half-done/possibly-to-be-completely-redone later commit. Makes
> no sense to me :/

I think that's because you're assuming that there's just one active
thread of code.  If you're not pushing to the *integration* branch,
you're not forcing anyone to do anything.  You can push your own
in-progress development branch to the server (in SVN, in Git, in
anything that supports branches at all) just to have it someplace
other than your own machine, and that imposes no cost on anyone else.

I do it all the time just to be paranoid.  "My laptop might get
stolen" is a perfectly sensible reason to take three seconds before
closing the lid.  Or "My place might burn down," or "I might get hit
by that bus I was waiting for," or "I might have an epiphany and quit
my job tomorrow morning to become a chess grandmaster," or even just
"I wonder if my manager would like to look at my functional and
elegant code."  (In some places it might even be "I'd better prove to
my manager that I did something today.")

In any case: pushing to the team's main VCS repository may be a
necessary step for integration, but it doesn't mean every push has to
trigger an integration.  Not if you've created a consistent and
well-understood culture of branching.

--
Have Fun,
   Steve Eley (sfeley@gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Andrew Premdas (Guest)
on 2009-02-24 17:40
(Received via mailing list)
I was trying to use integration and commit as specific SCM terms in this
sort of discussion, so rephrasing (and hopefully improving on) Dave
Laribee's law, we get:

"If it's late in the day, save your next integration for the morning"

Now people can go home on time and not fret about a build breaking at the
end of the day and having to stay late and rush to fix it.

Andrew

2009/2/24 aslak hellesoy <aslak.hellesoy@gmail.com>
Mark Wilden (Guest)
on 2009-02-24 19:37
(Received via mailing list)
On Tue, Feb 24, 2009 at 1:17 AM, Matt Wynne <matt@mattwynne.net> wrote:
>
> See particularly the notes about squashing commits - this allows you to
> commit really often in your local branch, then merge these commits together
> before you push them into the main source control repository.
>
> You can even use git commit --amend to commit on red (e.g at the end of the
> day) and then change that commit later.

Another workflow (which I don't personally use) is to continually git
commit --amend, changing the commit message each time. This avoids the
rebase step before pushing, but of course you don't have the
checkpoints.

///ark
Stephen Eley (Guest)
on 2009-02-24 20:26
(Received via mailing list)
On Tue, Feb 24, 2009 at 1:16 PM, Mark Wilden <mark@mwilden.com> wrote:
>
> Another workflow (which I don't personally use) is to continually git
> commit --amend, changing the commit message each time. This avoids the
> rebase step before pushing, but of course you don't have the
> checkpoints.

Egad.  Why am I reminded of "Eternal Sunshine of the Spotless Mind?"

--
Have Fun,
   Steve Eley (sfeley@gmail.com)
   ESCAPE POD - The Science Fiction Podcast Magazine
   http://www.escapepod.org
Pat Maddox (Guest)
on 2009-02-24 20:46
(Received via mailing list)
On Tue, Feb 24, 2009 at 10:16 AM, Mark Wilden <mark@mwilden.com> wrote:
> commit --amend, changing the commit message each time. This avoids the
> rebase step before pushing, but of course you don't have the
> checkpoints.

Much better to rebase -i to clean up a bunch of little commits, imo.
Gives you all the flexibility in the world when developing, then when
you're ready to share you can assemble nice independent, meaningful
changes.

Pat
Mark Wilden (Guest)
on 2009-02-24 22:29
(Received via mailing list)
On Tue, Feb 24, 2009 at 10:56 AM, Pat Maddox <pat.maddox@gmail.com>
wrote:
>
> Much better to rebase -i to clean up a bunch of little commits, imo.
> Gives you all the flexibility in the world when developing, then when
> you're ready to share you can assemble nice independent, meaningful
> changes.

Yeah, I use rebase -i almost all of the time, myself. The "continuous
rollup" method is described at
http://gweezlebur.com/2009/01/19/my-git-workflow.html . I thought it
was worth mentioning, as another way to skin a cat.

///ark
Kero van Gelder (Guest)
on 2009-02-24 23:30
(Received via mailing list)
> inability to safely request new features as evidence that they did not
> specify enough, up-front.

If you redo a product, you have learned what it should look like.
Thus, you can specify much more in advance. Nothing wrong with that,
it came out of the feedback loops.

I'm curious, do you really expect them to go for a rewrite?

> These managers knew better than to use classic Waterfall, but they still
> didn't understand that Big Requirements Up Front is essentially
> Waterfall's worst aspect. And so they re-invented Waterfall, yet again,
> in yet another form.

That's because they still have not learned that in order to manage
uncertainty, you have to acknowledge that there is uncertainty, first.

Which is not to say I'm doing a good job at explaining that to my
customer, yet :(

Bye,
Kero.
___
How can I change the world if I can't even change myself?
  -- Faithless, Salva Mea
Phlip (Guest)
on 2009-02-24 23:58
(Received via mailing list)
Kero van Gelder wrote:

> If you redo a product, you have learned what it should look like.
> Thus, you can specify much more in advance. Nothing wrong with that,
> it came out of the feedback loops.

Here's pure Waterfall (phase 2, not 3 of his project):

"7 reasons I switched back to PHP after 2 years on Rails"
http://www.oreillynet.com/ruby/blog/2007/09/7_reas...

His rewrite bombed because he accidentally did it Waterfall-style, so of
course he blamed Rails, and caused a tempest in a teapot in his comments
section. His "2 years on Rails" did not include, say, frequently deploying
it...

My rebuttal:

"Big Requirements Up Front"
http://www.oreillynet.com/onlamp/blog/2007/09/big_...

> I'm curious, do you really expect them to go for a rewrite?

"They" were a dot-com in 1999, so we will never know if it could have
worked... (-:

--
   Phlip
Kero van Gelder (Guest)
on 2009-02-25 00:01
(Received via mailing list)
> you're not forcing anyone to do anything.  You can push your own
> elegant code."  (In some places it might even be "I'd better prove to
> my manager that I did something today.")

If you don't show up for the next 6 weeks, or not at all, that last hour of
your work is not going to matter to anyone. Really.

If you *do* show up, you're likely worrying too much about the lost hardware,
or your lost house.

I'd say the only person I'd commit unfinished code for is myself. Which means
I don't do it at the end of the day, but that should not prevent you from
doing it.

> In any case: pushing to the team's main VCS repository may be a
> necessary step for integration, but it doesn't mean every push has to
> trigger an integration.  Not if you've created a consistent and
> well-understood culture of branching.

I'm trying to get that culture going :) The understanding is tough...
With the main problem being that most of the co-devs are not
software engineers. So I'll do the merging of their branches with
the master (they do hg branch and hg push, I do hg pull, hg merge and hg
push).

But I got them to use Cucumber! I have to help a lot, of course, but that's
a price I'm willing to pay.

___
How can I change the world if I can't even change myself?
  -- Faithless, Salva Mea
Zach Dennis (zdennis)
on 2009-02-25 02:38
(Received via mailing list)
On Tue, Feb 24, 2009 at 5:58 PM, Kero van Gelder <kero@chello.nl> wrote:
>
> But I got them to use cucumber! I have to help
> a lot, of course, but that's a price i'm willing to pay.
>

It's all about little wins. Great work on getting them to use Cucumber!

--
Zach Dennis
http://www.continuousthinking.com
http://www.mutuallyhuman.com
Dan North (Guest)
on 2009-03-04 12:43
(Received via mailing list)
2009/2/24 aslak hellesoy <aslak.hellesoy@gmail.com>

> > going to amend it first thing tomorrow morning.
> >
>
> Because the longer you wait, the more your code will diverge from your
> teammates'. If you don't commit often you rob them of the opportunity
> to reduce merge hell.


This is the money line for me.

There's a lovely CI pattern I've seen in the centralised SCM world (with
Java, but that's less important) that I'm surprised hasn't been
mentioned.
Before I describe it I'd like to take this back to first principles.

The point of *continuous* integration is to keep each individual
integration
small and avoid less frequent *big* integrations, because that's where
the
pain happens. Syncing up once per story or feature, which could easily be
several days' work, strikes me as a retrograde step. The fact that DSCMs
like
git or hg allow you to do this doesn't make it a good thing. There are
many
fantastic reasons to use DSCM - modelling IBM Rational ClearCase "best
practice" usage patterns shouldn't be one of them.

Anyhoo, it seems to me the problem we are discussing is the coupling
between
checking in an unfinished scenario and failing the build. The solution
I've
seen - scaling to projects with tens of developers and thousands of
scenarios - is to separate in-progress features from finished ones, and
build everything.

If an in-progress scenario fails then the build carries on. If a completed
scenario fails it causes the build to fail. There is a nice corollary to
this whereby you fail the build if an in-progress scenario accidentally
*passes*. This is because you usually want a human to find out why. In
cuke-land you would do this at a feature level rather than a scenario level
since the convention is to have one feature (with multiple scenarios) per
file.

Marking a feature as done can be as simple as moving it between two
directories (called in-progress and done), renaming the feature (from
openid_login.in-progress to openid_login.feature) or having an :in_progress
tag on a feature until it's done.

In Java-land I prefer the first model because I can point the same junit
task at either the in-progress or done directories and just change the
failOnError flag.
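
In Cucumber/Ruby terms, a rough sketch of that two-bucket build might look
like this (the directory names and task layout are just one way to wire it
up, and the "fail the build if an in-progress scenario passes" check would
need some extra output inspection on top):

  # Rakefile (sketch)
  require 'cucumber/rake/task'

  # Finished features: any failure here should break the CI build.
  Cucumber::Rake::Task.new(:features_done) do |t|
    t.cucumber_opts = "features/done"
  end

  # In-progress features: run for information, but don't fail the build.
  Cucumber::Rake::Task.new(:features_in_progress) do |t|
    t.cucumber_opts = "features/in_progress"
  end

  task :ci do
    Rake::Task["features_done"].invoke
    begin
      Rake::Task["features_in_progress"].invoke
    rescue
      puts "in-progress features are still failing (expected)"
    end
  end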

In any case, I would strongly encourage changing the build so you can
integrate continuously - i.e. git push as frequently as you normally
would -
knowing the build will remain clean as long as you mark your unfinished
work
as in-progress. Lightweight, cheap branches are great for local spikes,
exploration of unfamiliar code and any number of other incidental
activities, but I'm deeply sceptical that they should form part of your
core
workflow.

No doubt once this becomes the norm and Rational are laughing up their
sleeves I'll live to regret saying that :)




Cheers,
Dan
nicholas a. evans (Guest)
on 2009-03-04 18:19
(Received via mailing list)
On Wed, Mar 4, 2009 at 6:25 AM, Dan North <tastapod@gmail.com> wrote:
> Marking a feature as done can be as simple as moving it between two
> directories (called in-progress and done), renaming the feature (from
> openid_login.in-progress to openid_login.feature) or having an :in_progress
> tag on a feature until it's done.

I started out using an in-progress directory, but now I prefer to use a
pending step:

Given ...
And ...
And the rest of this scenario is pending: see ticket number #1234, in
progress (2009.03.04)
When I ...
And ...
Then ...

That way, the build won't break even if some of the steps have already been
implemented (e.g. for other scenarios), and I can organize my scenarios
according to their final destination (dir and feature file), without needing
to worry as much about current status. But I do need to make sure I insert
the appropriate "pending" steps prior to "svn commit" / "bzr push".
Matt Wynne (mattwynne)
on 2009-03-04 18:25
(Received via mailing list)
On 4 Mar 2009, at 11:25, Dan North wrote:

> In Java-land I prefer the first model because I can point the same
> junit task at either the in-progress or done directories and just
> change the failOnError flag.

Thanks for your thoughts, Dan. Cucumber 0.2 tags are ideal for this
sort of filtering. I'm going to look at adding a 'two tier' feature
run to our build.
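
For anyone wanting to try the tag-based variant, a sketch using a
hypothetical @in_progress tag might look like this (double-check the
tag-filtering syntax against the Cucumber version you're running):

  # Rakefile (sketch)
  require 'cucumber/rake/task'

  # Tier 1: everything NOT tagged @in_progress must pass.
  Cucumber::Rake::Task.new(:features_done) do |t|
    t.cucumber_opts = "--tags ~@in_progress"
  end

  # Tier 2: @in_progress features run too, but are allowed to fail.
  Cucumber::Rake::Task.new(:features_in_progress) do |t|
    t.cucumber_opts = "--tags @in_progress"
  end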


Matt Wynne
http://blog.mattwynne.net
http://www.songkick.com
Yi Wen (hayafirst)
on 2009-03-04 18:55
(Received via mailing list)
Great writing. Thanks