Gareth is the web architect within the University of St Andrews digital communications team. A graduate of St Andrews (BD Hons, 1993), Gareth joined the web team in 2006 and worked mainly on information architecture and front-end development (HTML, CSS, and JavaScript). He currently spends most of his time doing DSDM agile project management and business analysis.

Meet our sprint planning document

Let’s say you have a team of eleven knowledge workers with a wide portfolio of responsibilities, and you need to get a bunch of tasks done, across multiple projects, within a two-week window. How would you keep focused on everything that needs to be done?

Here’s one of the tools we’ve developed to help us do it; we call it the sprint planning document.

Sprint planning document for sprint 62 Elmo, where it usually sits, in plain view on my desk

But first, a little background.

As we have no doubt lamented in multiple blog posts, one of the main challenges we face in the digital communications team is the breadth of responsibilities we have.

We are responsible not only for building beautiful websites as part of the external website programme, but also for many other business as usual commitments, such as:

  • support and maintenance calls
  • social media campaigns
  • a monthly editorial calendar to ensure the external website is up to date
  • updating the digital prospectuses each year
  • consultation on other digital-focused projects

There is such a variety here that it’s easy to see how on some days it feels like we’re flitting, like a butterfly, from one thing to another. The question remains: how do we keep focus?


About a year ago (at the end of sprint 14—we are currently galloping through sprint 62) we trialled our first sprint planning document.

This first edition was little more than three pages of A4 with four tables. The tables listed all our projects (project code, name, current stage, project manager, sprint goals and deadlines), portfolio board meeting dates, business as usual goals, and retrospective objectives.

It ended with a short biography and photo of the person after whom the sprint was named; it was the American cyclist and Olympian Bobby Julich, thanks for asking.

The first page of our prototype sprint planning document, a table listing six projects

It was simple, but it worked. It wasn’t much more than a glorified list but it helped keep our focus. So we iterated on it and began to add more information that we found ourselves repeatedly looking for throughout the sprint.


In sprint 18 we added a calendar to capture significant events during the sprint. In sprint 20, we indicated when team members would be out of the office. By sprint 40 it was in colour, and there were further small tweaks here and there.

From sprint 18 we added a calendar to highlight key events during the fortnight; from sprint 20, team member absences

Even now, we’re not done tweaking it—it is still very much a ‘living’ document that adapts to our ever-changing requirements. Over the past year we have continued to iterate and improve it.

Quite often you’ll hear someone in the team say, “Oh, you know what would be really useful to include on the sprint planning document?” And usually they are right.

The document now stretches to nine pages. But it doesn’t feel overly long because it contains the right information.

What it contains and why

Team capacity and calendar

The first page tells us who is available this sprint and gives us a rough shape of the team. Do we have an even balance between developers and content editors? Are we low on project management support this sprint? This is useful for planning.

The calendar keeps us focused on the key events throughout the sprint, as well as planned team absences.

Programme and projects

Next, we have a list of the current open projects in our portfolio. This gives us an at-a-glance summary of how much work we have on at the moment.

This includes key information about each project, such as its project manager, its stage in the project lifecycle, its size and complexity, and its goals for this sprint.

These goals are what I find most useful. They drive the team planning sessions, give us something to track at daily stand-up meetings and weekly strategy meetings, and help keep me focused throughout the sprint.

Business as usual

The business as usual section tracks pretty much everything that cannot be regarded as project work, such as social media, consultancy, training, editorial calendar work, support and maintenance, and conferences. We make sure it’s clear who is responsible for each task.

We also have a ‘horizon gazing’ section that saves us from having to search through a shared Outlook calendar for upcoming events. This can include key University dates (start or end of semester, graduation, Raisin weekend, etc), conferences, team arrivals or departures, and our monthly digital advisory board.

This is also the section in which we keep track of our various subscriptions, such as web hosting accounts, team chat, video and audio hosting, and content management system licences. It’s just a way to keep these things visible and transparent, and to ensure that bought-in services don’t accidentally expire.

Professional development

A new addition to the document is a section that tracks when everyone has had their professional development review meetings. There is a transparency here that I like. It doesn’t list any personal details such as development goals, but it does help ensure that personal development is not forgotten and is regularly followed up.

Content management system

Another recent addition is a table that tracks how many content items are being published from our two primary content management systems, T4 version 7 and T4 version 8. This is for licensing and publishing purposes.

Retrospective objectives

Each fortnight we reflect on our processes and agree on actions to improve them. We create Trello cards to track their progress, but I also add the retrospective objectives to this document as a reminder.

Project burn down chart

A burn down chart is a graphical representation of the total amount of work committed to, how much has been completed, and therefore how much is left to do.

The burn down chart for our last project, showing that we met our goals

While we have a burn down chart on a whiteboard in the office, it’s really useful to also have a portable, paper copy for reference—particularly in meetings.
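The arithmetic behind a burn down chart is simple enough to sketch. This is purely illustrative Python, not our actual tooling (ours is a whiteboard pen and a sheet of paper): start with the hours committed and subtract what gets completed each day.

```python
def burn_down(committed_hours, completed_per_day):
    """Return the remaining-work figure for the start of the sprint
    and the end of each day."""
    remaining = [committed_hours]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

# Example: 100 hours committed, tracked over five days.
print(burn_down(100, [20, 15, 25, 20, 20]))  # [100, 80, 65, 40, 20, 0]
```

Plot those figures against the days of the sprint and you have your chart; if the line reaches zero by the final day, you met your goals.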

Programme summary

Another fairly recent addition is the external website programme summary. This is a big picture ‘map’ of the entire programme, broken down by project and phase. It’s colour-coded too, which helps. It allows us to see an at-a-glance status of the whole programme: what is complete, what is in progress, and estimated dates for the remaining phases.

About the sprint

And right at the end of the sprint planning document we have a short biography and photograph of the person after whom the sprint has been named.

Our current theme for sprint names is fictional characters. Last sprint was Desdemona from Othello, this sprint is Elmo from Sesame Street. It’s usually educational and informative. Did you know, for instance, that Elmo was originally called “Baby Monster” and that his birthday is 3 February?

We usually mine Wikipedia for facts about the character and choose a photo via Google Image Search, crediting the source for each.


Since its introduction, the sprint planning document has been a very useful tool, both to keep us focused and as an historical record to look back on what we have achieved. It’s like a cross between a paper-based dashboard that pulls together the key information and a map that enables us to navigate our way through each sprint.

Why CSA Outbound will not be in the new style

In October we launched the inbound students hub as the first step towards creating a new Collaborations and Study Abroad (CSA) website.

This week we are planning to launch the rest of the Collaborations and Study Abroad website. Here is a sneak peek at what the new CSA central hub will look like.

A sneak peek at the new Collaborations and Study Abroad website. The hero banner reads ‘Outbound study abroad’.
A sneak peek at the new Collaborations and Study Abroad website on mobile.

The new site, which has been built in T4v8 (rather suitably, in collaboration with the Collaborations and Study Abroad team), employs the new digital pattern-driven look and feel. It looks equally great on mobile and desktop devices, and integrates seamlessly with the new external website.

Well, most of it does.

Over the last few months we’ve built new hubs for inbound students, academic collaborations, and overseas partners, as well as a few pages to explain what CSA is all about and contact information. Each of these sections is primarily aimed at an external audience—that is, an audience outside the current St Andrews community.

The outbound students section is different. It is aimed very firmly at current students—students currently studying at St Andrews who wish to study abroad as part of their degree. As such, this section sits outside the scope of the external website programme and therefore of the current project.

The outbound students section of the new Collaborations and Study Abroad website uses the old web design style, as it is still a tool for internal audiences.

It would be lovely to give this whole section a makeover too, and to update the designs of the pages for internal audiences (current students, current postgraduates, and current staff) as well, but that would be a lot of work and would require a whole programme in itself. Don’t worry, though, a project proposal has been submitted for this.

In the meantime, though, that is why, when the new site launches on Thursday 14 December, you will find that the outbound students information sits within the current students section of the University website, while the rest of the CSA site looks shiny and new.

You can read more about the CSA website project as a whole and the inbound hub in particular.

How and why we QA

At a conference recently one of our team was speaking about how we organise our work using Trello and the Kanban-style workflow we have settled on. A number of people told her afterwards that they were impressed with both the simplicity and effectiveness of our rules around the QA (quality assurance) or testing phase.

As Wikipedia explains, “Kanban is a method for visualizing the flow of work, in order to balance demand with available capacity and spot bottlenecks.”

At its simplest level a Kanban board needs only three columns or lists to track your tasks:

  1. To do
  2. In progress
  3. Done

We have added a fourth column, ‘QA’, between ‘In progress’ and ‘Done’.

This post explains why.


The first column we have is our ‘To do’ column; we name this ‘Backlog’ because the team has a Scrum background.

First, we stack up work for the forthcoming, fortnightly sprint in the backlog. We have a team rule that nothing gets worked on unless there is a card in Trello to represent it. This keeps us accountable and transparent about the work we are doing.

On each card we include the minimum information required for the person working on it, and the person checking it, to know what is needed and what to test it against. Broadly speaking, this means:

  • Meaningful title
  • Estimate (in hours, which includes time for QA work)
  • Short description of the task
  • Any supporting documents or files
  • User stories (optional)
  • Acceptance criteria
  • Contact details for those involved
  • Who is responsible for the task
  • Who will test it

When a card has this information we mark it with a ‘READY’ label. It is now ready to be started.
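If you were to encode the ‘READY’ checklist in software, it might look something like this minimal Python sketch. The field names are illustrative, not Trello’s API, and the sample card is made up.

```python
# The fields a card must have before it earns the 'READY' label.
# These names are illustrative, not Trello's actual data model.
REQUIRED_FIELDS = [
    "title", "estimate_hours", "description",
    "acceptance_criteria", "contacts", "owner", "tester",
]

def is_ready(card: dict) -> bool:
    """A card may be started only when every required field is filled in."""
    return all(card.get(field) for field in REQUIRED_FIELDS)

card = {
    "title": "Add condensed header pattern",
    "estimate_hours": 6,  # includes time for the QA work
    "description": "Condensed header for web applications",
    "acceptance_criteria": ["Works at mobile and desktop widths"],
    "contacts": ["requester@example.ac.uk"],
    "owner": "Alice",
    "tester": "Bob",
}
print(is_ready(card))            # True: mark it READY
print(is_ready({"title": "x"}))  # False: not enough to start work on
```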

In progress

When we start working on a piece of work we move the Trello card representing it into the ‘In progress’ column.

The card should now be fairly self-contained. From the description, user stories and acceptance criteria it should be clear what is required and the task can now be iterated on by the solutions developers and business ambassadors and advisers.

Once the task has been completed we move it into the ‘QA’ column.


We didn’t always have a QA column or testing phase. When we first started using a Kanban-style approach, work that we considered finished was moved straight into the ‘Done’ column and we cracked on with the next task on the backlog.

But we noticed that quite often we would have to fish cards back out of ‘Done’ into ‘In progress’ when errors were later spotted.

Releasing code or content that contains errors is not agile. Agile strives for a ‘zero bugs’ approach (The Art of Agile Development (O’Reilly, 2008), p.160). In other words, build in quality from the start. Or as the fourth principle of Agile DSDM project management states: “never compromise quality”.

We addressed this shortfall in our process in a retrospective at the end of a particularly error-prone sprint. We decided to introduce a ‘QA’ (quality assurance) column between ‘In progress’ and ‘Done’. This has proved very successful.

So now, once a piece of work has been completed it is moved into the QA column and the team member responsible for the task finds someone else to check it for them.

This is an important part of the workflow: you must not check your own work, you must find someone else to check it for you.

And of equal importance, the person who checks the work must have the right skill set to be able to verify its accuracy.
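These two rules can be sketched in a few lines. Again, this is an illustration rather than anything we actually run; the names and skills below are invented.

```python
def can_qa(task_owner, task_skills, checker, checker_skills):
    """Return True if `checker` is allowed to QA this piece of work."""
    if checker == task_owner:
        return False  # rule 1: never check your own work
    # rule 2: the checker must have every skill the task requires
    return set(task_skills) <= set(checker_skills)

print(can_qa("Alice", {"css"}, "Alice", {"css"}))          # False: own work
print(can_qa("Alice", {"css"}, "Bob", {"html"}))           # False: wrong skills
print(can_qa("Alice", {"css"}, "Carol", {"css", "html"}))  # True
```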

Why you can’t QA your own work

It’s a well-known fact in writing circles that it is very difficult to edit your own writing. The same thing can be said for writing code, or producing video or audio, or the many other steps required to create websites.

When you try to edit or QA your own work you are too emotionally close to it. As soon as you have put effort into creating something you feel some degree of attachment to it; you lose objectivity and find it hard to criticise it. Because you know what you intended to write, you often read what you think you have written rather than what you actually wrote, and so typos get missed.

(Of course, there will be situations where it is necessary to edit your own work. Lifehacker offers some tips: How to edit your own writing. But wherever possible, we try to ensure that you don’t.)

What if it fails QA?

If errors are found then we have two options.

If it’s a fix that can be done by the person doing the QA then we encourage the tester to fix it, and update the Trello card explaining what has been done. We believe in what Shore and Warden in The Art of Agile Development (O’Reilly, 2008) call collective code ownership: “everyone shares responsibility for the quality of the code. No single person claims ownership over any part of the system, and anyone can make any necessary changes anywhere.” (ibid, p.191). Obviously, this isn’t restricted to code, it applies to other types of content, too.

If the fix requires input from the original creator then we update the Trello card to explain what needs to be done, and move the card back into ‘In progress’. Then we let the person responsible for the task know that there is something to fix.


Once we are certain that the piece of work is to the highest quality required, that it meets its acceptance criteria and has been checked by someone else, we move the card to ‘Done’.

And start again on another task.

What we have learned

As the conference goers noted, adding a QA column and the rule ‘do not QA your own work’ has been a simple but effective addition to our workflow.

The most immediate consequence, thankfully, is that fewer errors now make their way to live websites. But there are a few hidden benefits too.

All work now feels more like a collaborative, team effort. And because more than one person has worked on the feature (or more than two in the case of pair-programming or pair-writing) then it benefits from a wider input and wider experiences.

Work is now less likely to be done in silos. More team members get to review it before it goes live, which helps with both familiarity and future maintenance. And spreading the maintenance burden further empowers team members to own and fix bugs as they find them.

It is a practice that we fully endorse. Sure, be proud of the work you do, but once it’s done, let it go, allow others to review it and improve it, and in the end through the wisdom of crowds you will end up with better code, better content, and fewer mistakes.

Example Trello board

Example Trello board

To give you an idea of what our project boards look like in Trello, I have created an example project board.

You will see that next to the backlog column we also have a ‘Sprint 1’ column. This gives us more focus, allowing us to pull from the backlog the cards that are scheduled for this sprint.

We also create a new ‘Done’ column for each sprint, named with the sprint number, name, and dates, where appropriate.

Feel free to use this board as an inspiration for your own project boards.

MoSCoW planning poker

Backlog refinement

One agile practice that we’ve adopted from Scrum is that of product backlog refinement. In short, it involves representatives from both the project level team (project manager, business visionary, technical coordinator, etc.) and solution development team getting together periodically to review the prioritised requirements list (the backlog) to decide whether the upcoming features still have the same priorities, given what we’ve learned throughout the project so far.

This isn’t an entirely alien concept to DSDM, which recommends that “at the end of a project increment, all requirements that have not been met are re-prioritised in the light of the needs of the next increment” (DSDM 2014 onwards, paragraph 10.4.4). It simply gives it a memorable name.

As we approached the final two sprints of the digital pattern library (DPL) project (DC1001/2) we found ourselves with quite a few new requirements that had emerged from the work we had completed so far on the DPL. Many requirements were little more than ideas jotted onto a Trello card, or non-critical bug reports. Most had not been estimated. It felt like the right time to take a couple of hours out of the sprint to begin to get things into order for the next one.

First, we read through the cards to understand what they were about. Then we prioritised them. And last, we estimated them.

For estimation we use planning poker cards, which I wrote about in this article: Planning poker—why and how we estimate. It’s a simple, democratic technique that quickly helps build consensus. So, building on that success we adopted a similar approach for determining priorities.

Vote on priorities

With all our features in Trello, we switched on the voting power-up and each member of the team voted for the features we thought were the highest priorities.

There were six in the team, so we restricted ourselves to looking only at cards that received four or more votes—no point wasting time on features we didn’t consider urgent. (The Ultimello Chrome extension was useful for reordering the cards by number of votes.)
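The vote threshold itself is easy to express in code. This is an illustrative Python version of what the voting power-up did for us in Trello; the card names are made up.

```python
def shortlist(cards, threshold=4):
    """Return the cards at or above the vote threshold, most votes first."""
    picked = [c for c in cards if c["votes"] >= threshold]
    return sorted(picked, key=lambda c: c["votes"], reverse=True)

cards = [
    {"name": "Accordion bug", "votes": 5},
    {"name": "New icon idea", "votes": 2},
    {"name": "Condensed header", "votes": 4},
]
print([c["name"] for c in shortlist(cards)])
# ['Accordion bug', 'Condensed header']
```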

It was at this point we wanted to understand which features the team regarded as must haves (vital), should haves (important but not vital), could haves (nice to have, but neither important nor vital), or won’t haves this time.

MoSCoW planning poker

That was when we realised we could do something similar to how we run estimation planning poker, but for getting consensus on prioritisation.

We each took four packs of Post-it® notes and wrote on them: MUST, SHOULD, COULD and WON’T. That’s where MoSCoW prioritisation gets its name. We chose green for ‘must’, yellow for ‘should’, orange for ‘could’, and pink for ‘won’t’.

Four Post-it note pads with Must, Should, Could and Won't written on them.
A handy use for pads of Post-it® notes when planning

Then the MoSCoW planning poker game ran in very much the same way as it does for estimation: select the next card, everyone chooses their preferred priority, once everyone is ready reveal your sticky note of choice, then discuss until consensus is reached.

It was satisfyingly effective.

As with estimation planning poker it levels the playing field: it puts everyone on an equal footing, gives everyone the same right to an opinion, and enables them to express it without feeling intimidated by more experienced team members.

I can certainly see us using this approach again in the future.

New URL shortening service

Something we’ve talked about for years is creating our own URL shortening service.

A couple of months ago Duncan and I sat down one Friday afternoon and created a very basic one using only household objects.

What URL shortening is

As the name suggests, a URL shortening service takes a long web address—something that is either difficult to remember, or too long to include neatly in a print publication—and creates a much shorter address that forwards users to the original, long one.

For instance, this URL: (26 characters long)

redirects users to this blog: (49 characters long).

There are many free URL shortening services around, such as Bitly, TinyURL, and Google URL Shortener, but we wanted something that would include a St Andrews-specific domain.

While Bitly offers an enterprise edition it costs approximately US $995 per month, which is somewhat expensive for our requirements. There are also self-hosted options available such as php-url-shortener and Yourls, but we challenged ourselves to create something using only the tools that we had, essentially as a cheap feasibility exercise to find out if we actually used the tool and what features we needed.

We settled on using the HTML meta http-equiv="refresh" attribute and our content management system, TerminalFour Site Manager (T4).
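To illustrate the technique, here is a minimal sketch of the kind of page that gets published. The real page is generated by a T4 content type and layout, not by Python, and the destination URL below is just an example.

```python
def redirect_page(destination_url):
    """Build a minimal HTML page that forwards the visitor immediately,
    using the meta http-equiv refresh technique."""
    return (
        "<!DOCTYPE html>\n"
        "<html lang=\"en\">\n"
        "<head>\n"
        # '0' means redirect after zero seconds, i.e. straight away
        f"  <meta http-equiv=\"refresh\" content=\"0; url={destination_url}\">\n"
        "  <title>Redirecting</title>\n"
        "</head>\n"
        # a fallback link for browsers with meta refresh disabled
        f"<body><p>Redirecting to <a href=\"{destination_url}\">"
        f"{destination_url}</a></p></body>\n"
        "</html>\n"
    )

print(redirect_page("https://www.example.ac.uk/a-very-long-address/"))
```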

Short domain name

The first thing was the domain name.

As it happens, we already had that: the University’s original domain name is actually still lurking in the background—and it still works:

The short URL is not permitted for use in any publications or in email signatures, but it’s perfect for this, and already we’ve saved eight characters on any URL.


In T4 we created a new section called “Short URLs (”.

We knew that we’d need to keep the shortened URLs together in a section—we couldn’t just let them roam wild on the root of the domain—so we gave the section a URL of /go/ which was both short and gave it a sense of urgency and purpose.

The short URL sections in T4

Each shortened URL is a separate section, and each has a specific naming convention: short-url/ – meaningful description.

Redirect information

Within these sections we created a new content type in T4. Each shortened URL requires the following information:

  • Name—a meaningful name, used internally by T4.
  • Page title—what Google Analytics will call the page (we generally use the same name as Name, above).
  • Destination URL—this is the long URL.
  • Requester name—this is the key contact who asked for the redirect to be created, someone who is responsible for the redirect.
  • Requester email—contact details for the above contact.
  • Description—a justification for why the redirect has been created.

As well as these fields, we may also choose to fill in the publish and expiry dates for the content item. This way we can ensure that a redirect, for instance, is live for only 6 months and expires automatically.
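The record for each redirect, with its optional publish and expiry window, could be modelled like this. This is a hypothetical sketch only: T4 handles the publish and expiry dates for us, and the names, address, and dates below are invented.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Redirect:
    """One shortened URL, mirroring the fields listed above."""
    name: str
    destination_url: str
    requester_email: str
    publish_date: Optional[date] = None  # optional: live from this date
    expiry_date: Optional[date] = None   # optional: expires after this date

    def is_live(self, today: date) -> bool:
        """A redirect is live only within its publish/expiry window."""
        if self.publish_date and today < self.publish_date:
            return False
        if self.expiry_date and today > self.expiry_date:
            return False
        return True

r = Redirect("Open day", "https://www.example.ac.uk/open-day/",
             "requester@example.ac.uk",
             publish_date=date(2018, 1, 1), expiry_date=date(2018, 6, 30))
print(r.is_live(date(2018, 3, 1)))  # True: within the six-month window
print(r.is_live(date(2018, 7, 1)))  # False: expired automatically
```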

We slapped a Google Analytics code on the redirect page and now we can also track which URLs are being used.

The final result

Wait an hour for the publish cycle to complete and, lo and behold, we have our own redirect service.

We now have a St Andrews-specific short URL for this blog: (26 characters)

This is the same length as the Bitly version.

Get access to this service

If you are a staff member with moderator access to T4 and would like access to this new service please simply contact the digital communications team at

The difference between acceptance criteria and definition of done

Back in August I wrote a post called “How do you know when you’re done?” in which I explored the agile concept of the “definition of done” or “done done”. However, in conversations with developers over the last few weeks I’ve observed a confusion between acceptance criteria and definition of done. So, let’s use this post to tease out the differences.


In a nutshell, the differences are subtle but simple. While both are used to prove that a feature is complete, the scope of each is different.

The definition of done is, generally speaking, a list of requirements that must be satisfied for all user stories. These are set at the start of the project and rarely change.

Acceptance criteria (or user story acceptance criteria), however, are particular to each feature being developed. They are set at the start of a sprint and may change as development progresses and more is discovered about that feature.

Building a house

White and brown house
Who would live in a house like this?

To explain this, let me move away from software and website development, and consider houses.

Let’s imagine for a moment that you are a building contractor and have been commissioned to build 10 new homes.

You have a plan for each house. They will be small, single storey houses each with two bedrooms, a living room, a kitchen, bathroom, and a walk-in cupboard for storage. Each house will have four outer walls, four windows, and a sloping roof. They will be powered by electricity (no gas), and plumbed in for water and waste.

Definition of done

A definition of done for building these houses, then, might initially look something like this:

  • Must have four outer walls of brick.
  • Must have a sloped, tiled roof.
  • Must have a secure front door (high security level lock as minimum).
  • Must have four windows.
  • Must be wind- and watertight.
  • Must be wired for electricity.
  • Must be plumbed in (water and waste in kitchen and bathroom).
  • Must pass building control inspections and receive a special certificate.

As you can see, this list is fairly generic. It could apply to any of the 10 houses on the street. If the house was built with wooden walls, or a flat roof, then it wouldn’t pass. But regardless of what else was done to the house, if it passed those eight criteria then it could be regarded as done and the house could be put on the market.

Acceptance criteria

Now, let’s imagine that we have a customer, Nigel, who wants to buy house number four. Nigel is particular about the kind of house he lives in, so he takes a look at the brochure and picks out a few options that he likes:

  • The front door must be mahogany, and be painted blue.
  • Front door lock must be a Yale platinum 3 star (maximum security level) lock.
  • The window frames must also be mahogany.
  • One window must be a Velux® window fitted in the roof.
  • One window must be round and situated above the front door.
  • The electrical sockets must also be fitted with CAT-5 computer network ports.
  • The kitchen must have a double oven and oak worktops.
  • The bathroom must have a bath and shower.

This list is very specific and applies only to house number four. These details don’t need to be gathered until just before the house is built. Not every house needs to be built to these exact specifications. These are the acceptance criteria for this particular house only. If they are not met then Nigel isn’t going to buy this house, regardless of whether the definition of done criteria are met or not.

Now, let’s imagine that there is a national shortage of mahogany doors and window frames. As the building contractor, you contact Nigel and explain the situation. Nigel isn’t particularly happy but accepts that this is outwith your control, so he takes another look at the brochure and selects a nice oak door and matching windows. Here, one acceptance criterion has changed due to something that was discovered during development.

Development continued until all the acceptance criteria and the definition of done were met, after which the house was sold to Nigel, who was delighted with it and lived there happily for many years with his cat, Haggis.

Our definitions

For the digital pattern library project we have the following criteria defining done:

  • Must adhere to code standards and style guidance.
  • Must be accessible, including using WAI-ARIA landmark roles.
  • Must include print CSS rules.
  • Code must be well commented, explaining why it has been done in a particular way rather than what it does (that can be gleaned from the code itself): prioritise the why over the what.
  • Documentation needs to comply with house style and writing for the web guidelines.
  • Drop the ‘related patterns’ section of each pattern as the new interface for categorising patterns will make this redundant.
  • Need to convey how a pattern relates to another e.g. breadcrumb pattern must only be used after a navigation pattern.
  • Code must be version controlled, with each feature in its own branch, and a pull request created to merge into master.
  • The change log must be updated.
  • Merge to master may take place only after a peer code review and deployment to the live environment for testing.
  • Must meet acceptance criteria for the feature.

These are generic guidelines that may apply to every pattern we create or edit in the code base. They ensure a consistency of approach and quality.

We often copy this list into each feature card in Trello as a reminder about what we need to check before we move the card into testing, or again into done.

The acceptance criteria are then defined within the card for each feature, and are different for each pattern. For example, the accordion must work equally with one accordion or six; the header pattern must also include a condensed header for web applications; the footer pattern must use social media icons from the Font Awesome set rather than Glyphicon, etc.


As you can see, both the definition of done and acceptance criteria are used to ascertain whether a particular feature is complete or not but they are defined at different times, and have different scopes.

Definition of done is defined up front, before development begins, and applies to all user stories, whereas acceptance criteria are specific to one particular feature and can be decided much later, just before or even iteratively during development.
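The difference in scope can be summed up in a short sketch: one shared definition of done checklist, plus acceptance criteria that travel with each feature, and the feature is ‘done done’ only when both lists pass. The checklist items here are abbreviated from the lists above, and the code itself is illustrative rather than anything we actually run.

```python
# One project-wide checklist, fixed before development begins.
DEFINITION_OF_DONE = [
    "meets code standards",
    "accessible",
    "print CSS included",
    "change log updated",
]

def done_done(passed_checks, acceptance_criteria, passed_criteria):
    """True only when every DoD item and every per-feature criterion passed."""
    return (set(DEFINITION_OF_DONE) <= set(passed_checks)
            and set(acceptance_criteria) <= set(passed_criteria))

# The accordion feature carries its own criteria on top of the shared DoD.
accordion_ac = ["works with one accordion", "works with six accordions"]
print(done_done(DEFINITION_OF_DONE, accordion_ac, accordion_ac))      # True
print(done_done(DEFINITION_OF_DONE, accordion_ac, accordion_ac[:1]))  # False
```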

How we schedule work requests from other projects

Over the last couple of months we’ve had a number of requests from people and projects, somewhat out of the blue, asking for pieces of work to be completed with a deadline within only a couple of days of asking.

We often say no to such requests, which can take people a little by surprise. But it’s usually not a downright no, it’s more of a ‘not yet’. This post goes some way to explaining why this is.

Fixed time, fixed resource

The reason behind often saying “yes… just not yet” is to do with the time budgets we have allocated for working on different kinds of work each sprint. (We stack up our work in blocks of two weeks, which we call sprints.)

In agile-speak this is the principle of fixed time and fixed resource. In our team that means:

  • a fixed resource of 10 team members, and
  • a fixed time of 725 hours per fortnight (assuming a work day of 7.25 hours).

That’s why I like the analogy with financial budgets. Like our monthly pay packets, we can’t just go out and blow the lot on whatever we want. There will be fixed payments that we need to make, and it would also be wise to fix budgets for certain categories (food, clothing, fuel, entertainment, etc).

Our time budget

That’s what our universe of work is all about. It is essentially our big-picture budget sheet.

Universe of work
The magic universe of work document that works out our time budgets

The universe of work lays out for us the immovable meetings and commitments that we have, and the regular bits and pieces that we must do to keep things working. It defines the framework for our sprint and beats out the rhythm for the fortnight:

  • Daily stand-up meetings every morning, with a period of 30 minutes beforehand to prepare.
  • A start-of-sprint kick off and planning meeting on the first Monday.
  • Fix-it Fridays, and 5% time for personal development.
  • A strategy meeting each Wednesday, and a meeting with the business visionary to settle on goals for the following sprint.
  • A meeting with software developers to discuss the road map for our digital pattern library.
  • A demo on the final Thursday.
  • And an end of sprint retrospective.

Once we have factored these into our calendars we have only 326 hours left to split between project (229 hours) and business as usual work (97 hours).

Into this steady heartbeat we insert our work: business as usual (work to run the business) and project work (work to change the business). We check emails and we blog, we write the digital communications team newsletter, we meet with the digital advisory board (the DAB), and our portfolio board (DCPB); we meet and discuss, we plan, and we write, we develop and we edit. And we consult.


We define consultancy as any piece of work for a project or programme, outwith our own portfolio, that we are not managing. Sometimes people are just looking for advice or guidance on a solution, sometimes they want us to quality-assure or evaluate a possible solution, and other times they want us, as solutions developers, to help create the solution itself.

For consultancy work we currently set aside a maximum of 20 hours each sprint – that is, two hours per team member per sprint. Obviously, should our team size change then the amount of time we have for consultancy would adjust proportionally.

Like any good budget, we track how much consultancy we use up each sprint. And if we reach 20 hours then we cap it there and any further requests get bumped to the next sprint. We usually can’t borrow time from other budgets, and certainly not project budgets, because the time for those should be fixed.
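The capping behaviour described above can be sketched in a few lines (the 20-hour cap is the team’s figure; the greedy scheduling logic, function name and sample requests are my own gloss, not the team’s actual process):

```python
# Illustrative sketch of the consultancy "budget cap" described above.
CONSULTANCY_CAP = 20  # hours per sprint (2 hours x 10 team members)

def schedule_consultancy(requests, cap=CONSULTANCY_CAP):
    """Fit (name, hours) requests into this sprint; bump the rest."""
    this_sprint, next_sprint, used = [], [], 0
    for name, hours in requests:
        if used + hours <= cap:
            this_sprint.append(name)
            used += hours
        else:
            next_sprint.append(name)  # "yes... just not yet"
    return this_sprint, next_sprint

now, later = schedule_consultancy([("audit", 8), ("advice", 6), ("review", 10)])
print(now, later)  # ['audit', 'advice'] ['review']
```

The third request here would push the total to 24 hours, so it gets bumped to the next sprint rather than refused outright.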

At the beginning of many sprints we already have an idea of who wants our help, and our consultancy budget has already been allocated. That’s why the answer to ad hoc requests for work to be completed within a couple of days is often “No, well… yes… just not yet”.


The key to securing our time is advance notice. Let us know as soon as you can that you may require our time and expertise; if you can also give us a rough idea of when and for how long, then even better.

Because we work in two-week sprints, we prefer it if you can give us at least a fortnight’s notice. That gives us time to include the work in the next sprint.

Seven of my favourite books on Agile

I first encountered Agile at the Scotland on Rails conference in Edinburgh in early 2008. While much of the conference (about Ruby on Rails, a server-side web application framework written in Ruby) went sailing over my head, the keynote speaker Jim Weirich spoke passionately and accessibly about Agile development. What he said about self-organising teams, and methods of working quickly and iteratively, but with discipline and ensuring quality, struck a chord. I was intrigued to find out more. I bought a book (The Art of Agile Development, below), which I quickly read from cover-to-cover, and took my own first steps down the Agile path.

Of course, Agile is not just a single thing. It’s a collection of various methodologies, frameworks and practices that encompasses DSDM, Scrum, XP (extreme programming), Scaled Agile Framework (SAFe), kanban, lean and a whole host of other goodies. But often what is said about one particular flavour of Agile can apply to another.

Here are seven of my favourite books and resources about Agile.

1. The Agile manifesto

The Agile methodology manifesto

The manifesto for Agile software development – often simply called the Agile manifesto – is where a lot of this really took off.

In February 2001, 17 developers, representing various disciplines such as Scrum, XP and DSDM, met to discuss where the similarities were in their methodologies.

There they wrote their manifesto, which declared that they valued:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

and they set out the twelve supporting principles behind the Agile manifesto.

The manifesto and principles are still valuable resources. They are short and simple and can still be referred to when considering your own Agile practices. Just last week, in response to a change, someone on our team quoted the second principle, that we “welcome changing requirements, even late in development.”

Interestingly, there is currently a debate going on in some areas of the Agile community asking: does the Agile manifesto need an update? But that’s perhaps a topic for another blog post.

2. The art of Agile development

by James Shore and Shane Warden (O’Reilly, 2008)
ISBN-13 978-0-596-52767-9

The Art of Agile Development by James Shore and Shane Warden

Read some chapters online, or buy on O’Reilly, or Amazon UK.

This is the book that started it all off for me, and in my opinion, it’s still one of the best books on Agile out there. A few months ago we had an Agile coach in for a quick Agile health-check, and when he spotted this book on my bookshelf he said that this was one of his favourite books too.

The book focuses very much on XP methodologies, but there is little presented that cannot be used by a team using Scrum or DSDM.

The book is arranged into three parts. Part one looks at why you might wish to use Agile, introducing the reader to Agile ways of thinking and practical ways to adopt Agile, including a list of prerequisites (management support, team agreement, a colocated team, onsite customers, the right team size and a commitment to using all the practices).

Part two explores various practices used by XP teams, organised into categories: thinking, collaborating, releasing, planning and developing. These cover all sorts of things, from pair programming and informative workspaces to “done done”, version control, estimating, risk management and retrospectives. This, for me, is one of the most practical parts of the book. The sections are easily digestible and practical, covering the hows and whys of each practice, answering questions, and looking at the results of using that method as well as what to expect if you don’t. Each section includes a brief look at alternatives and indicates which other XP methods can be used to support this one.

Part three begins to look at ways to master Agility, looking at Agile values and principles, the importance of relationships and quality, and examining ways to improve the process, deliver value and eliminate waste.

This is a book that I go back to again and again. So much so that I eventually bought a second copy in PDF so that I could access it wherever I was, even from my smartphone.

If I had to recommend only one book on Agile then hands down it would be this one.

3. Agile project management in easy steps

by John Carroll (In Easy Steps, 2012)

Agile Project Management in easy steps
ISBN: 978-1-84078-447-3

Buy on Amazon UK

If you are looking for a broad introduction to Agile project management then you can’t go far wrong with Agile Project Management in easy steps by John Carroll.

The book first contrasts Agile with traditional project management methodologies and frameworks before introducing the reader to four of the main Agile approaches: DSDM, Scrum, XP (extreme programming) and lean.

The remainder of the book walks the reader through the five main phases in any project, which the author calls getting started, foundations, development, deployment and post project.

Most topics are covered in only one or two pages, but the author manages to pack the key concepts into a very readable, very rich book.

If you are looking for a very clear, usable book to get started then I can’t recommend this one highly enough. I even used it when revising for my DSDM practitioner exam; that’s how good it is.

4. DSDM handbook

by DSDM Consortium (edited by Andrew Craddock, Barry Fazackerley, Steve Messenger, Barbara Roberts, and Jennifer Stapleton)

DSDM handbook

Read 2008 edition online; or read 2014 edition online; or buy at DSDM Consortium.

This is the official handbook for DSDM (formerly DSDM Atern), the Agile project management framework that we use in the digital communications team. These are the handbooks we used to pass both our foundation and practitioner exams.

If you don’t fancy spending £37.00 on it, the DSDM Consortium have very kindly made the whole text available online, in both the 2008 edition and the updated 2014 edition.

This is a book that I have gone back to again and again; it is rarely off my desk. I learned using the 2008 edition; some of my colleagues learned from the 2014 edition. While I appreciate many of the updates in the 2014 edition, a few really useful resources never made the jump between editions, and I hope they return in future revisions. Appendix C, for example, details every DSDM product (think: document): what it is, who is involved in its creation, its quality criteria, and in which phase it is created.

If you do use DSDM, then I recommend that you also get a copy of, or at least bookmark, both editions. That way you get a wider perspective of the framework.

5. Getting value out of Agile retrospectives: a toolbox of retrospective exercises

by Luis Gonçalves and Ben Linders (Leanpub, 2015)
ISBN: 978-1304789624

Getting value out of Agile retrospectives: a toolbox of retrospective exercises

Buy on Leanpub

Agile teams are invited to continuously reflect on their practices and behaviours, to tweak and improve their effectiveness. Often this is done during sprint and iteration retrospectives, where the team looks back over the previous timebox and critically evaluates what went well and what didn’t.

Luis Gonçalves’ and Ben Linders’ book Getting Value out of Agile Retrospectives is a really useful and practical book for running retrospective meetings effectively.

After explaining what a retrospective is and the benefits achieved from running them, the authors document 13 retrospective exercises that teams can use to approach the task of reflecting on their practices and habits from different angles.

There is a wealth of knowledge contained in this short book (60 pages) that has helped our team immensely. Ben Linders’ website is also a very useful resource; his blog posts are often thought-provoking and challenging.

6. Agile planning with a multi-customer, multi-project, multi-discipline team

by Karl Scotland

Agile planning with a multi-customer, multi-project, multi-discipline team

Download PDF (257 KB)

Although this is only a short paper, it’s one that I still find challenging and inspiring.

I blogged about this paper last November in a post entitled Agile release planning with multiple projects, but it’s worth adding it to this list too.

As I said in that post, most Agile literature assumes one cross-functional team working on a single project for a single customer. They have a backlog of tasks which any team member can dip into and pull work towards themselves: everyone has the skills required to work on any of the tasks.

Unfortunately, over here in the real world, not everything works like that, and Karl Scotland’s article was the first I read that addressed how working with multiple teams on multiple projects for multiple customers might be managed within an Agile context.

Two years ago, Karl wrote a post, The BBC Seeds of Kanban Thinking, that reflects on this article; it’s also worth reading.

7. The people’s scrum: Agile ideas for revolutionary transformation

by Tobias Mayer (Dymaxicon, 2013)
ISBN 978-1-937965-15-0

The People's Scrum: Agile ideas for revolutionary transformation

Buy on Amazon UK

Mayer’s book is a collection of short essays and favourite blog posts from two of his early blogs: Agile Thinking and Agile Anarchy.

A lot of books on Agile focus on the mechanics of how it all fits together, who needs to be where with whom in order for the machine to work effectively.

This book is different. It focuses not on the how, but challenges the why. It is open to critically questioning every aspect of Agile with the intention of uncovering the core drivers behind Agile practices.

I love Mayer’s boldness and passion for Agile. He is unrelenting in his belief that Agile cannot be pinned down: by its nature it has to be fluid and adaptive. At the heart of Agile are people who collaborate, who gather around a workflow board, who self-organise, and who regularly and critically evaluate their own practices and adapt. Sounds pretty close to the Agile manifesto to me.

More than any other book I’ve read on Agile this is the one that got me thinking most deeply about why we do certain things. Mayer doesn’t always offer the answer, because – in good post-modern tradition – my answer may be different to your answer, but he does make you think. Like all good books I come away from this one feeling like I have changed, and seeing the world a little differently. I thoroughly recommend it.

How do you know when you’re done?

Icons for done from The Noun Project.
Icons that suggest ‘done’, from The Noun Project.

How do you know when you’ve completed something?

Not just nearly finished it – not a simple shrug of the shoulders and a mutter of “I guess that’ll do”, but absolutely certain that what you’ve created is (to the best of your knowledge and skills and ability) fit for purpose, has been adequately tested, and is ready to go into production without any more work needed on it. Which, note, is different to wanting to add extra features to it in the future. This post looks at the Agile concept of the ‘definition of done’ and the repetitious ‘done done’.

Done done

Agile has this interesting concept of ‘DONE’ or ‘done done’. Not just done, but ‘DONE’… ‘done done’. The term suggests a more complete version of complete. Like belt and braces.

Continue reading “How do you know when you’re done?”

MoSCoW prioritisation is on effort

St Basil's cathedral and the Kremlin in Moscow, Russia
No! Not that Moscow. (Photo taken by me on a school trip in 1988)

One misunderstanding that I’ve encountered a lot over the last couple of years in relation to DSDM agile project management is in the area of prioritisation, and in particular how MoSCoW prioritisation works. In this post I hope to make things a little clearer.

What is fixed?

In all projects you have, broadly speaking, four variables:

  • Features
  • Quality
  • Time
  • Cost (resource)

Project variables—traditional and DSDM (Source: DSDM Consortium)

In traditional projects, as it is assumed that all requirements will be delivered, features are fixed; time, cost and, to an extent, quality are therefore variable. This makes sense: if things are more complex than first anticipated, then in order to deliver all the features you will need to give the project more time and/or money.

DSDM takes a different approach. It argues that surely not all requirements can possibly be of equal importance, so it fixes time, resource and quality instead, and it has developed a method for managing a variable set of features: MoSCoW.

MoSCoW prioritisation

MoSCoW is a handy acronym to remember the four categories of prioritisation: must, should, could and won’t (this time).

  • Must — Without these the product will not function, will not be legal, or will be unsafe. The must-haves are often given the ‘backronym’ Minimum Usable SubseT.
  • Should — Important but not critical to the project; it may be painful to leave them out, and a workaround may be required, but the product will still be viable.
  • Could — Nice to have, but if left out these will create less of an impact than an omitted should.
  • Won’t — The requirements which it has been agreed will be omitted entirely, either from the whole project or at least from this increment or iteration.

In timeboxes (sprints) it is requirements marked as ‘could’ that create the main source of contingency. If something happens that puts the deadline at risk, it is from the pool of coulds that requirements get dropped first.

And if things continue to go badly then once the coulds have been depleted you can then start dropping shoulds.

This way, you guarantee that all the must-have requirements will be delivered.


DSDM is also quite opinionated on how to organise your timeboxes, so that there is a realistic balance of musts, shoulds and coulds.

It would make no sense to work only on must-have requirements during a sprint — there would be no contingency, unless you can guarantee that your estimates are 100% correct. So DSDM recommends a balance of:

  • 60% must-have effort
  • 20% should-have effort
  • 20% could-have effort

But this is where I have encountered the most confusion when dealing with MoSCoW. I have been in planning sessions with Agile project managers and business analysts who have tried to make sure that 60% of the requirements are categorised as musts, 20% as shoulds, and a further 20% as coulds.

In other words, let’s say we have a project with 100 requirements — they have tried to ensure we have 60 musts, 20 shoulds and 20 coulds.

While this could potentially be a useful exercise while gathering requirements, to reinforce to stakeholders that not everything is a must-have, when it comes to timebox planning this isn’t what the DSDM guidelines recommend.

This is what the DSDM Handbook says:

On a typical project, DSDM recommends no more than 60% effort for Must Have requirements on a project, and a sensible pool of Could Haves, usually around 20% effort.

The thing to notice here is the word ‘effort’.

What is effort?

Effort, of course, means the amount of work required to complete a task.

In Agile projects we often estimate in either ideal time or story points. Ideal time (usually measured in hours or days) is an estimate of how long a task will take to complete, assuming it’s all you work on, you have no interruptions, and everything you need is available. Story points are an arbitrary measurement used to estimate the size of tasks relative to one another; they often use an adjusted Fibonacci sequence (0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100 plus ∞).


So, let’s take as an example a very small, very simple project that has only 10 requirements, which are written in Latin, not because we’re St Andrews but so you don’t get distracted reading them.

1. Gather requirements and priorities

Here is our list of requirements:

ID Description Priority
1 Lorem ipsum dolor Must
2 Sit amet Must
3 Consectetur adipisicing elit Should
4 Fuga sapiente, nulla facere Could
5 Eaque molestias similique Could
6 Cupiditate error voluptas! Could
7 Fugit, quasi aliquid Must
8 A quas ea rerum Must
9 Quis ipsam illo Should
10 Dolorem fuga Should


As you can see, when we gathered these requirements from our business stakeholders, we also asked their opinion on prioritisation.

In their opinion what would be the minimum usable subset of features required to create a successful product? And which requirements might be regarded as nice-to-haves that have painful workarounds (shoulds) and easy workarounds (coulds)?

2. Estimate requirements

At this point, we have no indication of the effort required to deliver these requirements.

So we now meet with the solution development team to gather their estimates of how long each feature will take to develop. Here we are using story points.

ID Description Priority Estimate
1 Lorem ipsum dolor Must 3
2 Sit amet Must 8
7 Fugit, quasi aliquid Must 20
8 A quas ea rerum Must 13
3 Consectetur adipisicing elit Should 8
9 Quis ipsam illo Should 5
10 Dolorem fuga Should 13
4 Fuga sapiente, nulla facere Could 8
5 Eaque molestias similique Could 8
6 Cupiditate error voluptas! Could 5
Total 91


We can see that the total effort to deliver the entire product is equal to 91 story points.
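Summing the estimates by priority shows how the effort is distributed (the data comes from the table above; the summary code is my own sketch):

```python
# Estimates from the table above, grouped by MoSCoW priority.
estimates = {
    "Must":   [3, 8, 20, 13],
    "Should": [8, 5, 13],
    "Could":  [8, 8, 5],
}

total = sum(sum(points) for points in estimates.values())
print(total)  # 91 story points in all

for priority, points in estimates.items():
    print(f"{priority}: {sum(points)} points ({sum(points) / total:.0%})")
```

Note that by effort the backlog is roughly 48% must-have even though only four of the ten requirements are musts. That gap is exactly why MoSCoW balancing is done on effort rather than on counts.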

3. Team velocity

We are almost there, but before we can begin to plan our timeboxes, to determine what gets developed when, we first need to have an idea of our team’s velocity.

Velocity is the term used to measure how many story points a team can comfortably complete in one iteration (one sprint, if you are using Scrum terminology).

Let’s assume that our team can comfortably complete 32 story points each iteration.

We now know that, if all goes well, we should be able to complete all these requirements within three iterations (sprints):

91 story points / 32 story points per iteration = 2.8 iterations.

4. Use MoSCoW to plan iterations

We can now begin to organise the requirements into timeboxes, trying to keep as close to these limits of 60% must-haves, 20% should-haves and 20% could-haves as we can, so that we build into each iteration some contingency.

Remember, we’re working this out on effort. So, if we can complete 32 story points each iteration we can work out that:

  • 60% of 32 = 19.2 story points
  • 20% of 32 = 6.4 story points

This now gives us a useful guide: aim for about 20 story points for must-have requirements, and around 6 story points for both should-have and could-have requirements.
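The same per-iteration budgets, worked out in code (the 60/20/20 split and the velocity of 32 are from the post; the variable names are mine):

```python
VELOCITY = 32  # story points per iteration

must_budget = 0.6 * VELOCITY    # must-have effort ceiling
should_budget = 0.2 * VELOCITY  # should-have pool
could_budget = 0.2 * VELOCITY   # could-have contingency pool

print(must_budget, should_budget, could_budget)  # 19.2 6.4 6.4
```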

The task of actually working out what goes into each iteration is often more of an art than a science. It is not always easy or straightforward. You may have to take into account things like how often you plan to deploy, project dependencies, resource availability, etc. I often use post-it notes or spreadsheets to work out iteration plans.

So, here’s how we might organise these requirements:

Iteration 1
ID Description Priority Estimate Percentage
7 Fugit, quasi aliquid Must 20 62%
9 Quis ipsam illo Should 5 16%
6 Cupiditate error voluptas! Could 5 16%
Total 30 94%
Iteration 2
ID Description Priority Estimate Percentage
8 A quas ea rerum Must 13 40%
3 Consectetur adipisicing elit Should 8 25%
5 Eaque molestias similique Could 8 25%
Total 29 90%
Iteration 3
ID Description Priority Estimate Percentage
1 Lorem ipsum dolor Must 3 9%
2 Sit amet Must 8 25%
10 Dolorem fuga Should 13 41%
4 Fuga sapiente, nulla facere Could 8 25%
Total 32 100%


As you can see, we have not been able to stick exactly to 60% musts, 20% shoulds, and 20% coulds. But in each iteration we have built in enough contingency to allow us to drop features if required without compromising the success of the whole project.

You can also see that, apart from in the final iteration, we have also built in some ‘slack’ (6% in the first iteration, and 10% in the second). Slack in Agile is basically unassigned time that allows some breathing space for tasks that may take a little longer than estimated. This is an additional type of contingency that we’ve built into the iteration plan.
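As a cross-check, the iteration totals and slack can be recomputed from the tables above (the point values are from the post; the loop is my own sketch):

```python
# Cross-checking the three iteration plans above.
VELOCITY = 32

plans = {
    1: {"Must": 20, "Should": 5, "Could": 5},
    2: {"Must": 13, "Should": 8, "Could": 8},
    3: {"Must": 11, "Should": 13, "Could": 8},  # musts here are items 1 and 2
}

for n, plan in plans.items():
    used = sum(plan.values())
    slack = VELOCITY - used
    print(f"Iteration {n}: {used}/{VELOCITY} points, {slack} points of slack")
```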


MoSCoW prioritisation is a very useful tool for keeping quality, time and resources fixed while ensuring that the right product is developed on time.

Be aware, however, that if you use MoSCoW prioritisation, the balance of 60% must-haves, 20% should-haves and 20% could-haves is based on the estimated effort (time) of the requirements, not simply on their total number.