Simon Palmer’s blog

April 19, 2012

What’s agile really for?

Filed under: recruiting — simonpalmer @ 5:49 pm

I’ve been asked by quite a number of people, in a variety of forums, what the real end goal of agile is. The context is normally prior to embarking on a transition to developing software along agile lines, and when trying to get a handle on the organisational cost/benefit trade-off.

My stock answer is that it is about quality and building the right product. It’s definitely not about productivity. It’s also about visibility into product development and putting trade-offs around prioritisation of features and fixes in the right hands in the business. It’s definitely not about productivity.

I have a longer answer where I describe some of the details of XP and the real benefits we have accrued since switching. I can also speak at length about the organisational cost, in terms of lost resources, transition time and the inevitable path along the Satir change curve for the people involved.

The question of productivity always comes up, especially in the context of pair programming (surely it’s half as productive to have two people do a job as it is for one person to do it?).  Setting aside the full answer to that, which is about getting the work right, avoiding knowledge silos and so on, I did secretly wonder whether I could find any measurable evidence to back up my intuitive feeling that by adopting agile fully we *would* see productivity gains.

So in spite of it not being a primary rationale for the switch, my belief – and experience – was that improved productivity is a beneficial secondary effect of agile.  I had a perfect chance in an upcoming project where I was implementing XP over the top of an existing team and asking them to go through the transition.  As we went we would have our velocity to use as a measure of our productivity, so we started to track it.

Here’s what happened…

The first thing to note is that our headcount dropped, and quite dramatically.  We lost people who didn’t like the new way of working, the new working environment, and their new role in the team.  I expected some of this, but I was surprised at how many.  There were other problems in the team which I think were just as much to blame, and ultimately a person’s decision about their employment is complex and personal, but nonetheless we definitely descended into a little chaos as people left when agile arrived.  The more surprising result was that, in spite of the seeming turmoil, the depletion of resources, the learning of new working practices and so on, we saw the team velocity steadily increase.

It’s fair to say that velocity is a function of your ability to plan, and when you start planning in agile you learn about the meaning and value of a story point and it takes a little while to settle into a repeatable pattern.  However, the errors in planning estimates that give rise to velocity tend to be normally distributed, so it settles, and quicker than you might expect.
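That settling behaviour is easy to convince yourself of with a toy model.  The sketch below is purely illustrative – the iteration counts, story sizes and error figures are assumptions, not our real numbers – but it shows why roughly normal, shrinking estimation errors make iteration velocity stabilise around a steady value quite quickly:

```python
import random

def simulate_velocity(iterations=12, stories_per_iteration=10,
                      planned_points=5, noise_sd=2.0, seed=42):
    """Toy model of velocity settling during an agile transition.

    Each story's delivered size is its planned size plus a roughly
    normal error.  The error shrinks each iteration as the team
    calibrates its shared notion of a story point.
    """
    rng = random.Random(seed)
    velocities = []
    for i in range(iterations):
        # Estimation error narrows as the team learns to plan.
        sd = noise_sd / (1 + i * 0.5)
        delivered = sum(planned_points + rng.gauss(0, sd)
                        for _ in range(stories_per_iteration))
        velocities.append(delivered)
    return velocities

velocities = simulate_velocity()
```

Because the per-story errors roughly cancel, every iteration’s velocity sits close to the planned total (here 50 points), and the later iterations wander less than the early ones.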

The end point of this transition is that there are fewer people on the team but there’s more work going on.  Obviously during transition we saw it flip around a little, but it has definitely settled into a place where the team is doing more with fewer resources. If I ignore velocity for a moment and just look at the real measure of activity, which is working product, then by that measure we are doing much better than we were with the previous team and practices.

So I’m going to change my stock answer to include the fact that, done right, agile can quite quickly result in productivity gains.

March 16, 2011

Why I will never score more than 9 on the Joel Test

Filed under: recruiting — simonpalmer @ 1:54 pm

I love the Joel Test.  Really, I do.  I wish it were stamped on the insides of the eyelids of any person with technical talent looking for a job.

BUT – and of course a BUT was coming – I will never encourage my teams to score more than 9.  And deliberately so.  For the record, and to save you having to switch between pages, here are Joel’s 12 questions:

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

Here are my answers:

1-5 Yes.  Of course.

6. DEPENDS.  If you mean an enormous Gantt chart written on wallpaper which demands a team of people whose sole purpose in life is to swim against the tide of inaccuracy, incompleteness and pointlessness, then the answer is a resounding NO.  If you will accept an iteration board and a rough release plan stretching out a couple of months, all of it written on sticky notes, pinned to the wall and completely subject to change, then YES.  I think I score 0.5 here.

7. I was going to say “depends” again, but my real answer is NO.  We do not have specs in any way that an ordinary person might interpret the word.  A “spec” (short for specification) is a detailed document running to many pages which completely defines how a feature looks and behaves.  It is a contract between the person defining the requirement and the person delivering it.  We do not have these contracts.  For a start it implies that all the work of understanding and design will be complete prior to work starting.  It also presumes that we really know what the feature should be in advance and that the fungible developer will be able to interpret the words in the spec perfectly and implement both the letter and the spirit of the intent.  This never happens and, with the benefit of 20-something years of software development, it feels naive.  Not to mention it is in direct contradiction of the Agile Manifesto.

8. NO.  Programmers sit together round tables and are encouraged to talk, debate, call across to other people to help, gather in groups round whiteboards, converse, interact, stand up, sit down, cuddle, sing uproariously and generally be social humans engaged in a common task.  If they are silent and concentrating on work on their own, then they are not pairing and our Agile processes are breaking down and the code will suffer.  I say NO to silence.  On the rare occasions where someone has a solo task to perform (research spikes are an example) which will require dedicated concentration for a period of time I will encourage (and facilitate) them to go and find a space which suits that.

9. Yes

10. It’s another “depends”.  There are testers, and there are testers.  If you are doing TDD, then isn’t everyone a “tester”?  If you are aiming for 100% functional and integration coverage using an automated tool, then don’t you need Quality Engineers rather than testers?  Isn’t there every bit as much discipline in writing the tests as there is in writing the code?  We think there is and we religiously chant the TDD mantra, so in the same way that you won’t find “specs” in our world, you won’t find “testers” either.  Yes, we have people whose sole job is quality and we take it very seriously, but I don’t think they would call themselves testers.  I think I score a 0.5 here too.

11. Yes.  Of course.  In fact we have taken it one step further and give people coding challenges to complete between interviews and we review the code together collaboratively when they are done.  For more on this see my blog and RedCanary.

12. Yes.
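On the TDD point in answer 10: the test-first discipline is what makes every developer a “tester”.  A minimal sketch, assuming a trivial made-up function (we’re a .NET shop, but the shape is the same in any stack; the names here are illustrative only):

```python
import unittest

def apply_discount(price, percent):
    """Production code; under TDD this exists only because the
    failing tests below demanded it."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # In TDD these tests are written first, fail (red), and the
    # function above is written to make them pass (green) -- the
    # developer is doing the tester's job as part of writing code.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_out_of_range_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

The quality work here is in the tests every bit as much as in the code, which is why we talk about Quality Engineers rather than testers.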

So, I still love the Joel Test, but the worst, most prescriptive waterfall places to work as a developer could score 12, and great creative software shops who do interesting, valuable work in a collaborative and genuinely Agile way will rarely get full marks.

We’re the latter, not the former, so I stand proudly by my 9.

February 10, 2011

Recruiting. Challenges.

Filed under: code, recruiting — simonpalmer @ 2:45 pm

We’ve been very actively recruiting for some time and I previously posted about the framework we’re using to assess people.  At the bottom of that post I alluded to the “programming challenges” that we developed so we could get an idea of people’s technical chops.  It’s turned into much more and I wanted to write about it because I absolutely rely on it and it is the best recruiting I have ever done.  I should give an appropriate credit at this point: I say “we developed”, but they are really the work of Andrew Datars, my VP of Architecture.  I can only claim to have planted the seed; Andrew did all the really hard work of bringing them into being.

Aside from the back-patting, the other reason I wanted to write about them was because their positioning is as important as their content.  For posterity, and because I think it’ll help with the explanation, here they are:
  • Empathica new developer challenge
  • Empathica NET developer challenge
  • Empathica NET quality engineer challenge

I should also state up-front that we use TDD in our development work and are an Agile XP shop.  These things are important to the story because we are hiring into two roles, Developer and Quality Engineer.  We also hold true to the belief that we recruit against innate skills rather than learnt ones, and therefore place a much higher value on a person’s capacity than we do on their precise experience – although experience and knowledge are obviously highly valuable once acquired.

Given our hiring philosophy we realised we needed a way to objectively assess a person from a technical perspective and within a technical context.  We also wanted to give some flavour of the work that we do to people who may have programming skills from completely outside our domain, or little programming experience at all – and it turns out that is a pretty decent-sized pool of people.  We are a .NET shop and therefore wrote one challenge for that, a second that repeats it in a technology-agnostic form, and a third for quality engineers which is based on a publicly available code base from Microsoft.

The process goes like this…
1) have the person in for a face to face interview
2) send them away for 10 days or so with one of the challenges
3) bring them back in and review what they have done

There’s a lot we learn through this process which has little to do with the technology:

  • Do they accept the challenge and with what sort of attitude?
  • How long do they take to come back with an answer?
  • What does their code look like (style, separation of concerns, factoring etc.)?
  • How do they respond to criticism of their code?
  • How do they interact with us as developers?
  • How well did they understand the requirements?
  • How do they think through issues and debug the code?

On top of which there is the code itself and the finished application.  We position the challenges not so much as a technical test but as a topic around which we can jointly work when they come in for a technical assessment.  The objective is only partly about assessing their technical skills, and almost not at all about their knowledge and experience.  Instead we want to simulate working with the person on a concrete problem in a technology we use and a context which is close to our reality.

We find this gives us an exceptionally good read on the person.  We allow enough time in the “interview” for them to overcome their nerves, which is an important consideration, especially if the person has only limited exposure to the technology.

We further request that they bring along a code base to which they have contributed significantly, ideally a hobby project to avoid NDA issues, and we spend the second half of the interview talking through and understanding their code.

This last part is important.  We found that we could be left wondering whether problems we saw in their approach or code in the challenge were to do with a fundamental lack of understanding of coding, or just unfamiliarity with the technology of the challenge we set.  Having them talk to us on their turf was a good way of finding that out.  It also gives other valuable insights such as how they are at expressing concepts to people with no domain knowledge, how motivated they are to code in their spare time, how curious they are about a problem, what sort of business sense they have, and the picture they have of where technical competence sits in the commercial world.

When it comes right down to it we end up not really caring too much about the technical aspects of the challenge, the human factors being much more relevant and harder to extract through a normal interview process.  We try and position it as being less about the technology and more about the opportunity to work together on some code, but as a candidate it is probably hard to see past it as a technical test – which of course it is.

We have hired 3 people so far through this method and have a further 10 or so in our pipeline.  The results are stunning and we are in a hiring groove which is transforming our technical organisation.

If you are reading this and want to talk to me about a job please feel free to contact me by email at and make sure you mention this article and the challenges.

December 22, 2010

A useful and objective hiring framework

Filed under: recruiting — simonpalmer @ 5:08 pm

I have the lovely, if difficult, job of hiring a bunch of new people into our organisation. It’s the thing I like most about my role (I’m CTO at Empathica) and the single most valuable legacy I can leave the business, so I take it very seriously. I also have a penchant for systems, not necessarily technical ones, but I like to have a framework in which I can place decisions.

Up to now I haven’t had one for choosing people and have had to fall back on my subjective, and deeply questionable, intuition. It’s impossible to get every hire right, but it is much easier to get them badly wrong, and a bad hire is not just bad for us as a business, it’s bad for the person too. I’ve been recruiting all my working life and in the final analysis I would like to think that the people who work here are all identical to me in only one respect, albeit a very important one: work should be fulfilling, rewarding and, as far as possible, pleasurable. I don’t think it is too lofty a goal for software developers to aim to be up towards the top end of Maslow’s hierarchy.

I was triggered into some thoughts by an excellent and challenging recruitment partner with whom I have just recently started working (The Laudi Group, RedCanary). It’s rare to find someone who thinks similarly about recruiting and then holds your feet to the fire while you go through the process.  Laudi certainly do that and I appreciate it.  They are delivering us great candidates and we are hiring them.

Early on Mario, the eponymous Laudi, sent me a message with a link to a Fast Company article by Dee Hock which was an example of exactly the right information at the right moment. What I really liked in that was the cascade of values which had the learned skills at the bottom, not the top. It struck me that this was exactly how I felt about recruitment – give me raw innate qualities and I can supply the rest. If I had to trade those against knowledge I would always err on the side of the innate.

I started using this as a sounding board for myself and it has evolved into a tool for us in the recruitment team at Empathica. I have added two things to it which speak to our particular needs, namely curiosity and fit – apologies to Dee Hock who clearly has forgotten much more about this than I will ever know, and expresses himself more concisely and eloquently than I ever will. In any case, my cascade of characteristics is as follows:

  • Integrity
  • Motivation
  • Fit
  • Capacity
  • Curiosity
  • Understanding
  • Knowledge
  • Experience

Being a measurement company, and a bunch of slightly over-analytical nerds, we decided to rate people on a 5-point scale on these 8 things.  Once we had done it a few times we realised there was a bar of acceptability given our current recruitment needs, and this provided us with a very good way of formalising, and sometimes justifying, our recruiting decisions.  There is the intriguing possibility too of taking it one step further, namely to look at our current people.  I’m not quite ready to do that because it is a very sharp knife, but it is tempting.

So our process is to meet with someone once, generally a couple of people at our end, then rate them on our scale.  From the initial rating we get two things, first an obvious “No” if one exists, and second some direction on what we would want to do with them in a subsequent interview. We draw it on my whiteboard and collaborate over the scores, which are sometimes a bit gray, and always somewhat subjective – but guess what, all hiring is.

Here’s what it looks like:

[Chart: candidates plotted on a 5-point scale across the eight characteristics]

Over time it was clear that we would not compromise on the innate elements of integrity, motivation, fit, curiosity and capacity and we would make quite significant compromises on the other three which we consider learned.  So the solid black line is where we set our bar.  The green line is a candidate we recently saw and although we liked the person we realised that they scored well on the learned end of the scale, and were only OK on the innate end.  We did not hire them.  The red line is a person who we lost because we weren’t quick enough with an offer – a tale for another time.
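The bar logic above can be sketched in a few lines.  This is purely illustrative – the threshold numbers are assumptions for the example, not our actual bar – but it captures the rule that the innate characteristics are non-negotiable while the learned ones allow significant compromise:

```python
# Hypothetical sketch of the hiring bar; characteristic names come
# from the cascade above, threshold values are invented for illustration.
INNATE = ("integrity", "motivation", "fit", "capacity", "curiosity")
LEARNED = ("understanding", "knowledge", "experience")

INNATE_BAR = 4    # no compromise on the innate end (assumed value)
LEARNED_BAR = 2   # significant compromise allowed (assumed value)

def clears_bar(scores):
    """scores: dict mapping each characteristic to a 1..5 rating."""
    return (all(scores[c] >= INNATE_BAR for c in INNATE) and
            all(scores[c] >= LEARNED_BAR for c in LEARNED))
```

Under this rule a candidate like the green line – strong on the learned end, only OK on the innate end – fails the bar, exactly as described above.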

In addition to this, and to complement it, we have developed some coding challenges for technical hires, which they complete and we then review with them live on their machines.  This draws out everything you’ll ever need to know about a coder; we only give it to people who have got above our notional bar, and it is the best technical interview tool I have ever used.  Kudos goes to Andrew Datars, my VP of Architecture, for dreaming up a great set of coding and QA tasks.

We’ll continue to use this framework: it has already added a great deal of value and allowed us to have exactly the right conversations between ourselves, with the candidate and with our recruiter.
