Gareth is the web architect within the University of St Andrews digital communications team. A graduate of St Andrews (BD Hons, 1993), Gareth joined the web team in 2006 and worked mainly on information architecture and front-end development (HTML, CSS, and JavaScript). He currently spends most of his time doing DSDM agile project management and business analysis.

MoSCoW planning poker

Backlog refinement

One agile practice that we’ve adopted from Scrum is that of product backlog refinement. In short, it involves representatives from both the project level team (project manager, business visionary, technical coordinator, etc.) and solution development team getting together periodically to review the prioritised requirements list (the backlog) to decide whether the upcoming features still have the same priorities, given what we’ve learned throughout the project so far.

This isn’t an entirely alien concept to DSDM, which recommends that “at the end of a project increment, all requirements that have not been met are re-prioritised in the light of the needs of the next increment” (DSDM 2014 onwards, paragraph 10.4.4). Scrum simply gives the practice a memorable name.

As we approached the final two sprints of the digital pattern library (DPL) project (DC1001/2) we found ourselves with quite a few new requirements that had emerged from the work we had completed so far on the DPL. Many requirements were little more than ideas jotted onto a Trello card, or non-critical bug reports. Most had not been estimated. It felt like the right time to take a couple of hours out of the sprint to begin to get things into order for the next one.

First, we read through the cards to understand what they were about. Then we prioritised them. And last, we estimated them.

For estimation we use planning poker cards, which I wrote about in this article: Planning poker—why and how we estimate. It’s a simple, democratic technique that quickly helps build consensus. So, building on that success we adopted a similar approach for determining priorities.

Vote on priorities

With all our features in Trello, we switched on the voting power-up and each member of the team voted for the features we thought were the highest priorities.

There were six of us in the team, so we restricted ourselves to looking at only cards that received four or more votes—no point wasting time on features we didn’t consider urgent. (The Ultimello Chrome extension was useful for reordering the columns by number of votes.)

It was at this point we wanted to understand which features the team regarded as must haves (features that are important and vital), should haves (important but not vital), could haves (nice to have, but not important or vital), or won’t have this time.

MoSCoW planning poker

That was when we realised we could do something similar to how we run estimation planning poker, but for getting consensus on prioritisation.

We each took four packs of Post-it® notes and wrote on them: MUST, SHOULD, COULD and WON’T. That’s where MoSCoW prioritisation gets its name. We chose green for ‘must’, yellow for ‘should’, orange for ‘could’, and pink for ‘won’t’.

Four Post-it note pads with Must, Should, Could and Won't written on them.
A handy use for pads of Post-it® notes when planning

Then the MoSCoW planning poker game ran in very much the same way as it did later for estimation: select the next card, everyone chooses their preferred priority, once everyone is ready reveal your sticky note of choice, then discuss until consensus is reached.
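The reveal-and-discuss loop above can be sketched in a few lines of code. This is purely illustrative—the function name and the simple “everyone showed the same note” consensus rule are my assumptions, not a formal part of the game:

```python
# A minimal sketch of one reveal round of MoSCoW planning poker.
# The "all votes identical" consensus rule is an illustrative assumption.

PRIORITIES = ["MUST", "SHOULD", "COULD", "WONT"]

def reveal_round(votes):
    """Return the agreed priority if everyone voted the same,
    otherwise None (signalling the team should discuss and re-vote)."""
    assert all(v in PRIORITIES for v in votes)
    return votes[0] if len(set(votes)) == 1 else None

# First round: no consensus, so the outliers explain their reasoning...
assert reveal_round(["MUST", "SHOULD", "MUST", "MUST"]) is None
# ...and after discussion a second round converges.
assert reveal_round(["MUST", "MUST", "MUST", "MUST"]) == "MUST"
```

In practice the discussion between rounds is the valuable part; the rule only tells you when to stop.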

It was satisfyingly effective.

As with estimation planning poker it flattens the playing field: it puts everyone on the same level, gives everyone the same right to an opinion, and enables them to express it without feeling intimidated by more experienced team members.

I can certainly see us using this approach again in the future.

New URL shortening service

Something we’ve talked about for years is creating our own URL shortening service.

A couple of months ago Duncan and I sat down one Friday afternoon and created a very basic one using only household objects.

What URL shortening is

As the name suggests, a URL shortening service takes a long web address—something that is either difficult to remember, or too long to include neatly in a print publication—and creates a much shorter address that forwards users to the original, long one.

For instance, this URL:

http://tinyurl.com/hv2bqn9 (26 characters long)

redirects users to this blog:

https://digitalcommunications.wp.st-andrews.ac.uk (49 characters long).

There are many free URL shortening services around, such as Bitly, TinyURL, and Google URL Shortener, but we wanted something that would include a St Andrews-specific domain.

While Bitly offers an enterprise edition it costs approximately US $995 per month, which is somewhat expensive for our requirements. There are also self-hosted options available such as php-url-shortener and Yourls, but we challenged ourselves to create something using only the tools that we had, essentially as a cheap feasibility exercise to find out if we actually used the tool and what features we needed.

We settled on using the HTML meta refresh technique (the meta element’s http-equiv="refresh" attribute) and our content management system, TerminalFour Site Manager (T4).
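A redirect page built this way needs very little markup. The snippet below is an illustrative sketch rather than our actual T4 template: the number before the semicolon in the content attribute is the delay in seconds (zero means redirect immediately), and the plain link is a fallback in case the refresh doesn’t fire:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <!-- Redirect immediately (0 seconds) to the destination URL -->
    <meta http-equiv="refresh"
          content="0; url=https://digitalcommunications.wp.st-andrews.ac.uk">
    <title>Digital communications team blog</title>
    <!-- Google Analytics tracking code would go here -->
  </head>
  <body>
    <!-- Fallback link in case the refresh does not fire -->
    <p><a href="https://digitalcommunications.wp.st-andrews.ac.uk">
      Continue to the digital communications team blog</a></p>
  </body>
</html>
```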

Short domain name

The first thing was the domain name.

As it happens, we already had that: the University’s original domain name is actually still lurking in the background—and it still works: st-and.ac.uk.

The short domain is not permitted for use in any publications or in email signatures, but it’s perfect for this, and already we’ve saved eight characters on any URL.

Go!

In T4 we created a new section called “Short URLs (st-and.ac.uk/go/)”.

We knew that we’d need to keep the shortened URLs together in a section—we couldn’t just let them roam wild on the root of the domain—so we gave the section a URL of /go/ which was both short and gave it a sense of urgency and purpose.

The short URL sections in T4

Each shortened URL is a separate section, and each has a specific naming convention: short-url/ – meaningful description.

Redirect information

Within these sections we created a new content type in T4. Each shortened URL requires the following information:

  • Name—a meaningful name, used internally by T4.
  • Page title—what Google Analytics will call the page (we generally use the same name as Name, above).
  • Destination URL—this is the long URL.
  • Requester name—this is the key contact who asked for the redirect to be created, someone who is responsible for the redirect.
  • Requester email—contact details for the above contact.
  • Description—a justification for why the redirect has been created.

As well as these fields, we may also choose to fill in the publish and expiry dates for the content item. This way we can ensure that a redirect, for instance, is live for only 6 months and expires automatically.
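The publish/expiry behaviour is handled by T4 itself, but the logic amounts to a simple date-window check. A rough sketch, with illustrative field names and dates:

```python
# A rough sketch of the publish/expiry logic the CMS applies for us.
# Field names and dates are illustrative; T4 handles this internally.
from datetime import date

def is_live(today, publish=None, expiry=None):
    """A redirect is live once its publish date arrives and
    until its expiry date passes (either bound may be absent)."""
    if publish and today < publish:
        return False
    if expiry and today > expiry:
        return False
    return True

# A redirect set to run for six months from 1 February:
publish, expiry = date(2016, 2, 1), date(2016, 8, 1)
assert is_live(date(2016, 5, 1), publish, expiry)      # mid-campaign: live
assert not is_live(date(2016, 9, 1), publish, expiry)  # expired automatically
```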

We slapped a Google Analytics code on the redirect page and now we can also track which URLs are being used.

The final result

Wait an hour for the publish cycle to complete and, lo and behold, we have our own redirect service.

We now have a St Andrews-specific short URL for this blog:

http://st-and.ac.uk/go/dct (26 characters)

This is the same length as the Bitly version.

Get access to this service

If you are a staff member with moderator access to T4 and would like access to this new service, simply contact the digital communications team at itservicedesk@st-andrews.ac.uk.

The difference between acceptance criteria and definition of done

Back in August I wrote a post called “How do you know when you’re done?” in which I explored the agile concept of the “definition of done” or “done done”. However, in conversations with developers over the last few weeks I’ve observed a confusion between acceptance criteria and definition of done. So, let’s use this post to tease out the differences.

tl;dr

In a nutshell, the differences are subtle but simple. While both are used to prove that a feature is complete, the scope of each is different.

The definition of done is, generally speaking, a list of requirements that must be satisfied for all user stories. These are set at the start of the project and rarely change.

Acceptance criteria (or user story acceptance criteria), however, are particular to each feature being developed. They are set at the start of a sprint and may change as development progresses and more is discovered about that feature.

Building a house

White and brown house
Who would live in a house like this?

To explain this, let me move away from software and website development, and consider houses.

Let’s imagine for a moment that you are a building contractor and have been commissioned to build 10 new homes.

You have a plan for each house. They will be small, single storey houses each with two bedrooms, a living room, a kitchen, bathroom, and a walk-in cupboard for storage. Each house will have four outer walls, four windows, and a sloping roof. They will be powered by electricity (no gas), and plumbed in for water and waste.

Definition of done

A definition of done for building these houses, then, may initially look something like this:

  • Must have four outer walls of brick.
  • Must have a sloped, tiled roof.
  • Must have a secure front door (high security level lock as a minimum).
  • Must have four windows.
  • Must be wind- and watertight.
  • Must be wired for electricity.
  • Must be plumbed in (water and waste in kitchen and bathroom).
  • Must pass building control inspections and receive a special certificate.

As you can see, this list is fairly generic. It could apply to any of the 10 houses on the street. If the house was built with wooden walls, or a flat roof, then it wouldn’t pass. But regardless of what else was done to the house, if it passed those eight criteria then it could be regarded as done and the house could be put on the market.

Acceptance criteria

Now, let’s imagine that we have a customer, Nigel, who wants to buy house number four. Nigel is particular about the kind of house he lives in, so he takes a look at the brochure and picks out a few options that he likes:

  • The front door must be mahogany, and be painted blue.
  • Front door lock must be a Yale platinum 3 star (maximum security level) lock.
  • The window frames must also be mahogany.
  • One window must be a Velux® window fitted in the roof.
  • One window must be round and situated above the front door.
  • The electrical sockets must also be fitted with CAT-5 computer network ports.
  • The kitchen must have a double oven and oak worktops.
  • The bathroom must have a bath and shower.

This list is very specific and applies only to house number four. These details don’t need to be gathered until just before the house is built, and not every house needs to be built to these exact specifications. These are the acceptance criteria for this particular house alone. If these criteria are not met then Nigel isn’t going to buy the house, regardless of whether the definition of done criteria are met or not.

Now, let’s imagine that there is a national shortage of mahogany doors and window frames. As the building contractor, you contact Nigel and explain the situation. Nigel isn’t particularly happy but accepts that this is outwith your control, so he takes another look at the brochure and selects a nice oak door and matching windows. Here, one acceptance criterion has changed due to something that was discovered during development.

Development continued until all the acceptance criteria and the definition of done criteria were met, after which the house was sold to Nigel, who was delighted with it and lived there happily for many years with his cat, Haggis.

Our definitions

For the digital pattern library project we have the following criteria defining done:

  • Must adhere to code standards and style guidance.
  • Must be accessible, including using WAI-ARIA landmark roles.
  • Must include print CSS rules.
  • Code must be well commented, explaining why it has been done in a particular way rather than what it does (the what can be gleaned from the code itself): prioritise the why over the what.
  • Documentation needs to comply with house style and writing for the web guidelines.
  • Drop the ‘related patterns’ section of each pattern as the new interface for categorising patterns will make this redundant.
  • Need to convey how a pattern relates to another e.g. breadcrumb pattern must only be used after a navigation pattern.
  • Code must be version controlled, with each feature in its own branch, and a pull request created to merge into master.
  • The change log must be updated.
  • Merge to master may only take place after a peer code review, and deployment to live environment for testing.
  • Must meet acceptance criteria for the feature.

These are generic guidelines that may apply to every pattern we create or edit in the code base. They ensure a consistency of approach and quality.

We often copy this list into each feature card in Trello as a reminder about what we need to check before we move the card into testing, or again into done.

The acceptance criteria are then defined within the card for each feature, and are different for each pattern. For example, the accordion must work equally well with one item or with six; the header pattern must also include a condensed header for web applications; the footer pattern must use social media icons from the Font Awesome set rather than Glyphicons; and so on.

Conclusion

As you can see, both the definition of done and acceptance criteria are used to ascertain whether a particular feature is complete or not but they are defined at different times, and have different scopes.

Definition of done is defined up front before development begins and applies to all user stories within a sprint, whereas acceptance criteria are specific to one particular feature and can be decided much later: just before, or even iteratively during, development.

How we schedule work requests from other projects

Over the last couple of months we’ve had a number of requests from people and projects, somewhat out of the blue, asking for pieces of work to be completed with a deadline within only a couple of days of asking.

We often say no to such requests, which can take people a little by surprise. But it’s usually not an outright no; it’s more of a ‘not yet’. This post goes some way to explaining why.

Fixed time, fixed resource

The reason behind often saying “yes… just not yet” is to do with the time budgets we have allocated for working on different kinds of work each sprint. (We stack up our work in blocks of two weeks, which we call sprints.)

In agile-speak this is the principle of fixed time and fixed resource. In our team that means:

  • a fixed resource of 10 team members, and
  • a fixed time of 725 hours per fortnight (assuming a work day of 7.25 hours).

That’s why I like the analogy with financial budgets. Like our monthly pay packets, we can’t just go out and blow the lot on whatever we want. There will be fixed payments that we need to make, and it would also be wise to fix budgets for certain categories (food, clothing, fuel, entertainment, etc).

Our time budget

That’s what our universe of work is all about. It is essentially our big-picture budget sheet.

Universe of work
The magic universe of work document that works out our time budgets

The universe of work lays out for us the immovable meetings and commitments that we have and the regular bits and pieces that we must do to keep things working. It defines the framework for our sprint, it beats out the rhythm for the fortnight:

  • Daily stand-up meetings every morning, with a period of 30 minutes beforehand to prepare.
  • A start-of-sprint kick off and planning meeting on the first Monday.
  • Fix-it Fridays, and 5% time for personal development.
  • A strategy meeting each Wednesday, and a meeting with the business visionary to settle on goals for the following sprint.
  • A meeting with software developers to discuss the road map for our digital pattern library.
  • A demo on the final Thursday.
  • And an end of sprint retrospective.

Once we have factored these into our calendars we have only 326 hours left to split between project (229 hours) and business as usual work (97 hours).
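The budget arithmetic above is worth making explicit. The sketch below uses the figures from the post; note that the 399 hours of fixed commitments is implied by subtraction rather than stated directly:

```python
# The sprint time budget, using the figures from the post.
team_size = 10
hours_per_day = 7.25
working_days = 10  # a two-week sprint

total = team_size * hours_per_day * working_days
assert total == 725.0

# What's left after fixed meetings and commitments:
project_hours = 229
bau_hours = 97  # business as usual
remaining = project_hours + bau_hours
assert remaining == 326

# So the immovable commitments consume the difference:
overheads = total - remaining
assert overheads == 399.0
```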

Into this steady heartbeat we insert our work: business as usual (work to run the business) and project work (work to change the business). We check emails and we blog, we write the digital communications team newsletter, we meet with the digital advisory board (the DAB), and our portfolio board (DCPB); we meet and discuss, we plan, and we write, we develop and we edit. And we consult.

Consultancy

We define consultancy as any piece of work for a project or programme, outwith our own portfolio, that we are not managing. Sometimes people are just looking for advice or guidance on a solution, sometimes they want us to quality-assure or evaluate a possible solution, and other times they want us to act as solution developers and help create the solution itself.

For consultancy work we currently set aside a maximum of 20 hours each sprint – that is, two hours per team member per sprint. Obviously, should our team size change, the amount of time we have for consultancy would scale accordingly.

Like any good budget, we track how much consultancy we use up each sprint. And if we reach 20 hours then we cap it there and any further requests get bumped to the next sprint. We usually can’t borrow time from other budgets, and certainly not project budgets, because the time for those should be fixed.
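The capping rule is simple enough to sketch. The function name and the request format below are illustrative assumptions; the point is that overflow is bumped to the next sprint, not refused:

```python
# A sketch of the consultancy budget rule: requests are granted
# until the 20-hour cap is reached, then bumped to the next sprint.

CAP_HOURS = 20  # 2 hours x 10 team members

def schedule(requests, cap=CAP_HOURS):
    """Split (name, hours) requests into this sprint and the next."""
    this_sprint, next_sprint, used = [], [], 0
    for name, hours in requests:
        if used + hours <= cap:
            this_sprint.append(name)
            used += hours
        else:
            next_sprint.append(name)  # bumped, not refused
    return this_sprint, next_sprint

now, later = schedule([("review A", 8), ("advice B", 10), ("evaluate C", 6)])
assert now == ["review A", "advice B"]  # 18 of the 20 hours used
assert later == ["evaluate C"]          # 18 + 6 > 20, so it waits a sprint
```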

At the beginning of many sprints we already have an idea of who wants our help, and our consultancy budget has already been allocated. That’s why the answer to ad hoc requests for work to be completed within a couple of days is often “No, well… yes… just not yet”.

Conclusion

The key to securing our time is advance notice. Let us know as soon as you do that you may require our time and expertise; if you can also give us a rough idea of when and for how long, then even better.

Because we work in two-week sprints, we prefer if you can give us at least a fortnight’s notice. That gives us time to include it in the next sprint.

Seven of my favourite books on Agile

I first encountered Agile at the Scotland on Rails conference in Edinburgh in early 2008. While much of the conference (about Ruby on Rails, a server-side web application framework written in Ruby) went sailing over my head, the keynote speaker Jim Weirich spoke passionately and accessibly about Agile development. What he said about self-organising teams, and methods of working quickly and iteratively, but with discipline and ensuring quality, struck a chord. I was intrigued to find out more. I bought a book (The Art of Agile Development, below), which I quickly read from cover-to-cover, and took my own first steps down the Agile path.

Of course, Agile is not just a single thing. It’s a collection of various methodologies, frameworks and practices that encompasses DSDM, Scrum, XP (extreme programming), Scaled Agile Framework (SAFe), kanban, lean and a whole host of other goodies. But often what is said about one particular flavour of Agile can apply to another.

Here are seven of my favourite books and resources about Agile.

1. The Agile manifesto

The Agile methodology manifesto

The manifesto for Agile software development – often simply called the Agile manifesto – is where a lot of this really took off.

In February 2001, 17 developers, representing various disciplines such as Scrum, XP and DSDM, met to discuss where the similarities were in their methodologies.

There they wrote their manifesto, which declared that they valued:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

and their supporting principles behind the Agile manifesto.

The manifesto and principles are still valuable resources. They are short and simple and can still be referred to when considering your own Agile practices. Just last week, in response to a change, someone on our team quoted the second principle, that we “welcome changing requirements, even late in development.”

Interestingly, there is currently a debate going on in some areas of the Agile community asking: does the Agile manifesto need an update? But that’s perhaps a topic for another blog post.

2. The art of Agile development

by James Shore and Shane Warden (O’Reilly, 2008)
ISBN-13 978-0-596-52767-9

The Art of Agile Development by James Shore and Shane Warden

Read some chapters online, or buy on O’Reilly, or Amazon UK.

This is the book that started it all off for me, and in my opinion, it’s still one of the best books on Agile out there. A few months ago we had an Agile coach in for a quick Agile health-check, and when he spotted this book on my bookshelf he said that this was one of his favourite books too.

The book focuses very much on XP methodologies, but there is little presented that cannot be used by a team using Scrum or DSDM.

The book is arranged into three parts. Part one looks at why you might wish to use Agile; it introduces the reader to Agile ways of thinking and practical ways to adopt Agile, including a list of prerequisites (management support, team agreement, a colocated team, onsite customers, the right team size, and a commitment to using all the practices).

Part two explores various practices used by XP teams, organised into categories: thinking, collaborating, releasing, planning, and developing. These cover all sorts of things from pair programming and informative workspaces to “done done”, version control, estimating, risk management, and retrospectives. This, for me, is one of the most practical parts of the book. The sections are easily digestible and practical, covering the hows and whys of each practice, answering questions, and looking at the results of using that practice, as well as what to expect if you don’t use it. Each section includes a brief look at alternatives and indicates which other XP methods can be used to support this one.

Part three begins to look at ways to master Agility, looking at Agile values and principles, the importance of relationships and quality, and examining ways to improve the process, deliver value and eliminate waste.

This is a book that I go back to again and again. So much so that I eventually bought a second copy in PDF so that I could access it wherever I was, even from my smartphone.

If I had to recommend only one book on Agile then hands down it would be this one.

3. Agile project management in easy steps

by John Carroll (In Easy Steps, 2012)

Agile Project Management in easy steps
ISBN: 978-1-84078-447-3

Buy on Amazon UK

If you are looking for a broad introduction to Agile project management then you can’t go far wrong with Agile Project Management in easy steps by John Carroll.

The book first contrasts how Agile is different from traditional project management methodologies and frameworks before introducing the reader to four of the main Agile approaches: DSDM, Scrum, XP (extreme programming), and lean.

The remainder of the book walks the reader through the five main phases in any project which the author calls getting started, foundations, development, deployment and post project.

Most topics are covered in only one or two pages, but the author manages to distil the key concepts into a very readable, very rich book.

If you are looking for a very clear, usable book to get started then I can’t recommend this one highly enough. I even used it when revising for my DSDM practitioner exam: that’s how good it is.

4. DSDM handbook

by DSDM Consortium (edited by Andrew Craddock, Barry Fazackerley, Steve Messenger, Barbara Roberts, and Jennifer Stapleton)

DSDM handbook

Read 2008 edition online; or read 2014 edition online; or buy at DSDM Consortium.

This is the official handbook for DSDM (formerly DSDM Atern), the Agile project management framework that we use in the digital communications team. These are the handbooks we used to pass both our foundation and practitioner exams.

If you don’t fancy spending £37.00 on it the DSDM Consortium have very kindly made the whole text available online, in both the 2008 edition and the updated 2014 edition.

This is a book that I have gone back to again and again; it is rarely off my desk. I learned using the 2008 edition; some of my colleagues learned from the 2014 edition. While I appreciate many of the updates in the 2014 edition, there are still a few really useful resources that never made the jump between editions and that I hope will return in future editions, such as appendix C, which details every DSDM product (think: document): what it is, who is involved in its creation, its quality criteria, and in which phase it is created.

If you do use DSDM, then I recommend that you also get a copy of, or at least bookmark, both editions. That way you get a wider perspective of the framework.

5. Getting value out of Agile retrospectives: a toolbox of retrospective exercises

by Luis Gonçalves and Ben Linders (Leanpub, 2015)
ISBN: 978-1304789624

Getting value out of Agile retrospectives: a toolbox of retrospective exercises

Buy on Leanpub

Agile teams are invited to continuously reflect on their practices and behaviours, to tweak and improve their effectiveness. Often this is done during sprint and iteration retrospectives, where the team looks back over the previous timebox and critically evaluates what went well and what didn’t.

Luis Gonçalves’ and Ben Linders’ book Getting Value out of Agile Retrospectives is a really useful and practical book for running retrospective meetings effectively.

After explaining what a retrospective is and the benefits achieved from running them, the authors document 13 retrospective exercises that teams can use to approach the task of reflecting on their practices and habits from different angles.

There is a wealth of knowledge contained in this short book (60 pages) that has helped our team immensely. Ben Linders’ website is also a very useful resource; his blog posts are often thought-provoking and challenging.

6. Agile planning with a multi-customer, multi-project, multi-discipline team

by Karl Scotland

Agile planning with a multi-customer, multi-project, multi-discipline team

Download PDF (257 KB)

Although this is only a short paper, it’s one that I still find challenging and inspiring.

I blogged about this paper last November in a post entitled Agile release planning with multiple projects, but it’s worth adding it to this list too.

As I said in that post, most Agile literature assumes one cross-functional team working on a single project for a single customer. They have a backlog of tasks which any team member can dip into and pull work towards themselves: everyone has the skills required to work on any of the tasks.

Unfortunately, over here in the real world, not everything works like that and Karl Scotland’s article was the first article I read that addressed how working with multiple teams on multiple projects for multiple customers might be managed within an Agile context.

Two years ago, Karl wrote a post, The BBC Seeds of Kanban Thinking, that reflects on this article; it’s also worth reading.

7. The people’s scrum: Agile ideas for revolutionary transformation

by Tobias Mayer (Dymaxicon, 2013)
ISBN 978-1-937965-15-0

The People's Scrum: Agile ideas for revolutionary transformation

Buy on Amazon UK

Mayer’s book is a collection of short essays and favourite blog posts from two of his early blogs: Agile Thinking and Agile Anarchy.

A lot of books on Agile focus on the mechanics of how it all fits together, who needs to be where with whom in order for the machine to work effectively.

This book is different. It focuses not on the how, but challenges the why. It is open to critically questioning every aspect of Agile with the intention of uncovering the core drivers behind Agile practices.

I love Mayer’s boldness and passion for Agile. He is unrelenting in his belief that Agile cannot be pinned down: by its nature it has to be fluid and adaptive. At the heart of Agile are people who collaborate, who gather around a workflow board, who self-organise, and who regularly and critically evaluate their own practices and adapt. Sounds pretty close to the Agile manifesto to me.

More than any other book I’ve read on Agile this is the one that got me thinking most deeply about why we do certain things. Mayer doesn’t always offer the answer, because – in good post-modern tradition – my answer may be different to your answer, but he does make you think. Like all good books I come away from this one feeling like I have changed, and seeing the world a little differently. I thoroughly recommend it.

How do you know when you’re done?

Icons for done from The Noun Project.
Icons that suggest ‘done’, from The Noun Project.

How do you know when you’ve completed something?

Not just nearly finished it, not a simple shrug of the shoulders and a mutter of “I guess that’ll do”, but absolutely certain that what you’ve created is (to the best of your knowledge, skills, and ability) fit for purpose, has been adequately tested, and is ready to go into production without any more work needed on it. That, note, is different from wanting to add extra features to it in the future. This post looks at the Agile concept of the ‘definition of done’ and the repetitious ‘done done’.

Done done

Agile has this interesting concept of ‘DONE’ or ‘done done’. Not just done, but ‘DONE’… ‘done done’. The term suggests a more complete version of complete. Like belt and braces.


MoSCoW prioritisation is on effort

St Basil's cathedral and the Kremlin in Moscow, Russia
No! Not that Moscow. (Photo taken by me on a school trip in 1988)

One misunderstanding that I’ve encountered a lot over the last couple of years in relation to DSDM agile project management is in the area of prioritisation, and in particular how MoSCoW prioritisation works. In this post I hope to make things a little clearer.

What is fixed?

In all projects you have, broadly speaking, four variables:

  • Features
  • Quality
  • Time
  • Cost (resource)
Project variables—traditional and DSDM (Source: DSDM Consortium)

In traditional projects, as it is assumed that all requirements will be delivered, features are fixed; time, cost, and to an extent quality are therefore variable. This makes sense: if things are more complex than at first anticipated, then in order to deliver all the features you will need to give the project more time and/or money.

DSDM takes a different approach. It argues that not all requirements can possibly be of equal importance, so it fixes time, resource, and quality instead, and has developed a method for managing a variable set of features: MoSCoW.

MoSCoW prioritisation

MoSCoW is a handy acronym to remember the four categories of prioritisation: must, should, could and won’t (this time).

  • Must — Without these the product will not work, will not be legal, or will be unsafe. ‘Must’ is often given the ‘backronym’ Minimum Usable SubseT.
  • Should — Important but not critical to the project; it may be painful to leave them out, a workaround may be required, but the product will still be viable.
  • Could — Nice to have, but leaving one out creates less of an impact than omitting a should.
  • Won’t — These are the requirements which it has been agreed will be omitted entirely, either from the whole project or at least from this increment or iteration.

In timeboxes (sprints) it is requirements marked as ‘could’ that create the main source of contingency. If something happens that puts the deadline at risk, it is from the pool of coulds that requirements get dropped first.

And if things continue to go badly then once the coulds have been depleted you can then start dropping shoulds.

This way, you guarantee that all the must-have requirements will be delivered.
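This drop-order can be sketched in a few lines of code. The following Python snippet uses hypothetical requirement data (not from the DSDM handbook) to show coulds being dropped before shoulds when a timebox is over capacity, with musts never touched:

```python
# Order in which contingency is used when a timebox is at risk:
# coulds are dropped first, then shoulds; musts are never dropped.
DROP_ORDER = ["Could", "Should"]

def drop_to_fit(requirements, capacity):
    """Drop lowest-priority requirements until the remaining effort fits."""
    remaining = list(requirements)
    for priority in DROP_ORDER:
        while sum(r["estimate"] for r in remaining) > capacity:
            candidates = [r for r in remaining if r["priority"] == priority]
            if not candidates:
                break  # nothing left at this priority; move up to the next
            # drop the largest item of this priority first
            remaining.remove(max(candidates, key=lambda r: r["estimate"]))
    return remaining

# Hypothetical timebox that has slipped: capacity is now 25 points, not 30
timebox = [
    {"id": 7, "priority": "Must",   "estimate": 20},
    {"id": 9, "priority": "Should", "estimate": 5},
    {"id": 6, "priority": "Could",  "estimate": 5},
]
kept = drop_to_fit(timebox, 25)
# The could-have (id 6) goes first, leaving the must and should intact.
```

In a real timebox the team would of course discuss which could to drop, rather than automatically dropping the largest; the point is only that coulds are always the first pool of contingency.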

60/20/20

DSDM is also quite opinionated on how to organise your timeboxes, so that there is a realistic balance of musts, shoulds, and coulds.

It would make no sense to work only on must-have requirements during a sprint — there would be no contingency, unless you can guarantee that your estimates are 100% correct. So DSDM recommends a balance of:

  • 60% must-have effort
  • 20% should-have effort
  • 20% could-have effort

But this is where I have encountered the most confusion when dealing with MoSCoW. I have been in planning sessions with Agile project managers and business analysts who have tried to make sure that 60% of the requirements are categorised as musts, 20% as shoulds, and a further 20% as coulds.

In other words, let’s say we have a project with 100 requirements — they have tried to ensure we have 60 musts, 20 shoulds and 20 coulds.

While this could potentially be a useful exercise while gathering requirements, to reinforce to stakeholders that not everything is a must-have, when it comes to timebox planning this isn’t what the DSDM guidelines recommend.

This is what the DSDM Handbook says:

On a typical project, DSDM recommends no more than 60% effort for Must Have requirements on a project, and a sensible pool of Could Haves, usually around 20% effort.

The thing to notice here is the word ‘effort’.

What is effort?

Effort, of course, means the amount of work required to complete a task.

In Agile projects we often estimate in either ideal time or story points. Ideal time (measured usually in hours or days) is an estimate of how long a task will take to complete, assuming it’s all you work on, you have no interruptions, and everything you need is available. Story points are an arbitrary measurement used to estimate the size of tasks relative to one another; they often use an adjusted Fibonacci sequence (0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100, plus ∞).
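As an illustration, the adjusted Fibonacci scale can be written out in a couple of lines of Python. The `nearest_card` helper is a hypothetical convenience of my own for snapping a raw estimate onto the nearest card, not part of any planning poker standard:

```python
# Adjusted Fibonacci scale commonly found on planning poker cards
# (the infinity card is omitted here as it represents "too big to estimate")
STORY_POINT_SCALE = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(raw_estimate):
    """Snap a raw estimate to the nearest card on the scale."""
    return min(STORY_POINT_SCALE, key=lambda card: abs(card - raw_estimate))
```

The widening gaps in the scale reflect the fact that the bigger a task is, the less precisely we can estimate it.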

Example

So, let’s take as an example a very small, very simple project that has only 10 requirements. They are written in Latin, not because we’re St Andrews but so you don’t get distracted reading them.

1. Gather requirements and priorities

Here is our list of requirements:

ID Description Priority
1 Lorem ipsum dolor Must
2 Sit amet Must
3 Consectetur adipisicing elit Should
4 Fuga sapiente, nulla facere Could
5 Eaque molestias similique Could
6 Cupiditate error voluptas! Could
7 Fugit, quasi aliquid Must
8 A quas ea rerum Must
9 Quis ipsam illo Should
10 Dolorem fuga Should

As you can see, when we gathered these requirements from our business stakeholders, we also asked their opinion on prioritisation.

In their opinion what would be the minimum usable subset of features required to create a successful product? And which requirements might be regarded as nice-to-haves that have painful workarounds (shoulds) and easy workarounds (coulds)?

2. Estimate requirements

At this point, we have no indication of the effort required to deliver these requirements.

So we now meet with the solution development team to gather their estimates of how long each feature will take to develop. Here we are using story points.

ID Description Priority Estimate
1 Lorem ipsum dolor Must 3
2 Sit amet Must 8
7 Fugit, quasi aliquid Must 20
8 A quas ea rerum Must 13
3 Consectetur adipisicing elit Should 8
9 Quis ipsam illo Should 5
10 Dolorem fuga Should 13
4 Fuga sapiente, nulla facere Could 8
5 Eaque molestias similique Could 8
6 Cupiditate error voluptas! Could 5
Total 91

We can see that the total effort to deliver the entire product is equal to 91 story points.

3. Team velocity

We are almost there, but before we can begin to plan our timeboxes, to determine what gets developed when, we first need to have an idea of our team’s velocity.

Velocity is the term used to measure how many story points a team can comfortably complete in one iteration (one sprint, if you are using Scrum terminology).

Let’s assume that our team can comfortably complete 32 story points each iteration.

We now know that, if all goes well, we should be able to complete all these requirements within three iterations (sprints):

91 story points / 32 story points per iteration = 2.8 iterations.
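That calculation is simply total effort divided by velocity, rounded up to whole iterations; as a quick sketch:

```python
import math

total_effort = 91  # story points across all requirements
velocity = 32      # story points the team completes per iteration

# 91 / 32 = 2.84..., which rounds up to 3 whole iterations
iterations_needed = math.ceil(total_effort / velocity)
```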

4. Use MoSCoW to plan iterations

We can now begin to organise the requirements into timeboxes, trying to keep as close to these limits of 60% must-haves, 20% should-haves and 20% could-haves as we can, so that we build into each iteration some contingency.

Remember, we’re working this out on effort. So, if we can complete 32 story points each iteration we can work out that:

  • 60% of 32 = 19.2 story points
  • 20% of 32 = 6.4 story points

This now gives us a useful guide: aim for about 20 story points for must-have requirements, and around 6 story points for both should-have and could-have requirements.
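Those budget figures are straightforward to derive from the team's velocity; a minimal sketch:

```python
velocity = 32  # story points per iteration

# DSDM's recommended effort split, applied to this team's velocity
budgets = {
    "Must":   0.60 * velocity,  # 19.2 story points
    "Should": 0.20 * velocity,  #  6.4 story points
    "Could":  0.20 * velocity,  #  6.4 story points
}
```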

The task of actually working out what goes into each iteration is often more of an art than a science. It is not always easy or straightforward. You may have to take into account things like how often you plan to deploy, project dependencies, resource availability, and so on. I often use Post-it notes or spreadsheets to work out iteration plans.

So, here’s how we might organise these requirements:

Iteration 1
ID Description Priority Estimate Percentage
7 Fugit, quasi aliquid Must 20 62%
9 Quis ipsam illo Should 5 16%
6 Cupiditate error voluptas! Could 5 16%
Total 30 94%
Iteration 2
ID Description Priority Estimate Percentage
8 A quas ea rerum Must 13 40%
3 Consectetur adipisicing elit Should 8 25%
5 Eaque molestias similique Could 8 25%
Total 29 90%
Iteration 3
ID Description Priority Estimate Percentage
1 Lorem ipsum dolor Must 3 9%
2 Sit amet Must 8 25%
10 Dolorem fuga Should 13 41%
4 Fuga sapiente, nulla facere Could 8 25%
Total 32 100%

As you can see, we have not been able to stick exactly to 60% musts, 20% shoulds, and 20% coulds. But in each iteration we have built in enough contingency to allow us to drop features if required without compromising the success of the whole project.

You can also see that, apart from in the final iteration, we have also built in some ‘slack’ (6% in the first iteration, and 10% in the second). Slack in Agile is basically unassigned time that allows some breathing space for tasks that may take a little longer than estimated. This is an additional type of contingency that we’ve built into the iteration plan.
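Expressed in story points rather than percentages, the slack in each iteration is just the team's velocity minus the committed effort; a quick sketch using the iteration totals above:

```python
velocity = 32  # story points per iteration

# Committed story points in each of the three iterations
committed = {1: 30, 2: 29, 3: 32}

# Slack is whatever part of the velocity is left unassigned
slack_points = {n: velocity - c for n, c in committed.items()}
# Iteration 1 has 2 points of slack, iteration 2 has 3, iteration 3 none.
```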

Conclusion

MoSCoW prioritisation is a very useful tool for keeping quality, time, and resources fixed while ensuring that the right product is developed on time.

Be aware, however, that if you use MoSCoW prioritisation, the balance of 60% must-haves, 20% should-haves, and 20% could-haves is based on the estimated effort (time) of the requirements, and not simply on the total number of requirements.

Beginning to think about risk management in Agile projects

Three books on Agile risk management
Of course, there is always a risk that I don’t find the time to read all of these

I’ve been thinking a lot about risk in Agile projects recently. It is something that I’ve known for a while we need to manage better. Here’s some of what I’ve discovered so far.

Continue reading “Beginning to think about risk management in Agile projects”

When retrospective objectives stack up

In my last post on retrospectives I said that at the end of each retrospective the team should settle on an action that they believe will help improve the next sprint: the retrospective objective.

James Shore and Shane Warden’s advice on choosing a retrospective objective in The Art of Agile Development (2008) is two-fold:

  1. During the retrospective, don’t worry about the detail: a general direction is good enough at this stage.
  2. Choose just one objective: it helps the team focus, and improve incrementally without feeling overwhelmed.

We take on too much

As a team, we’ve not been terribly good at doing either of these, to be honest. I think we’ve been guilty of trying to bite off more than we can chew; taking on too many changes at once.

In a recent conversation with one team member mid-sprint, they told me that they felt quite overwhelmed by how often our processes change, and that they didn’t feel able to keep up. That backed up my feeling that we were agreeing to too many objectives each retrospective.

As did this tiny piece of empirical evidence: we had a list of 22 outstanding retrospective objectives going back three months.

Continue reading “When retrospective objectives stack up”

Using Trello for team retrospectives

We recently moved our retrospectives from a physical board using Post-it notes to Trello

Retrospectives are an important tool for Agile teams like ours. They allow the team to reflect frequently (usually at the end of an iteration) on work habits and processes, and agree how to improve them. We hold retrospectives every second Friday at the end of our sprints.

Until recently, we’ve been running all our retrospectives in the office using a magnetic white board and a small forest of Post-it® notes. Each retrospective we’ve lamented how much paper we waste, having used the sticky notes for only about an hour before they end up in the recycling bin.

So a couple of months ago, as one of our team members was working from home on the final day of our sprint, we used Trello (and Apple FaceTime) to allow him to fully participate.

We’ve used Trello now for the last four retrospectives, even when the whole team has been together in the same room. I want to use this post to reflect a little on what we’ve learned from the process. A retrospective on retrospectives, if you like.

Continue reading “Using Trello for team retrospectives”