This is part of my series about writing cards for software development. See the introduction here and the post on bugs here.

Agile Cards - Stories

Story1 cards are the most common agile card you will work with. They define a planned piece of work to be delivered by a single person. Following a common structure allows you to write cards faster, and reduces the overhead of reading or picking up a card.

A story should take around one to three days, and make sense as a single atomic piece of work. A well written story should provide clarity to the person doing the work, visibility to those who have an interest in the work, and be able to stand on its own without needing additional discussion to implement. It is absolutely not a mandatory requirement that a story is a complete feature (though it’s a bonus if it is). If a story card is going to take more than three days, it probably needs to be broken up.

Structure

Story cards follow the same general structure. You might be tempted to turn these headings into proper “fields” enforced by the agile task software. Don’t. By enforcing a structure, you break the ability to add or omit fields that make sense on a story-by-story basis, and remove the ability for team members to experiment with improved structures.

Cheat Sheet

Title (six words or less)

---

Summary goes here. Expand on the title.

# Context
Why this needs doing. What larger goal is being worked towards here.

# Deliverables
1. Do this thing
2. And do this thing

# Acceptance Criteria
1. When complete, software must do this thing.
2. And a user must be able to do that thing.

# Out of Scope
Don't do that piece of work in this card.

# Testing
1. Implement tests for foo accessing bar.
2. At least 95% unit test coverage.

# Links
Links to documentation, relevant cards, etc.
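
To make the structure concrete, here is a sketch of how a completed card might look, using the hypothetical Tiltforms login example that appears in the sections below. The specific deliverables, criteria, and links are illustrative only.

Create Tiltforms login UI

---

Create the Tiltforms login UI with Google & Office 365 login support for the Dendofy v3 UI.

# Context
Two of our clients in the sales pipeline require SSO through Google or Office 365 to complete the sale. The Dendofy v3 UI currently has no login screen of its own.

# Deliverables
1. Build the login page in the Dendofy v3 UI with Google and Office 365 sign-in buttons.
2. Wire both buttons into the existing authentication framework.

# Acceptance Criteria
1. A user can log in to the Dendofy v3 UI with a Google or Office 365 account.
2. A failed login shows an error message rather than a blank page.

# Out of Scope
Okta support. That work is covered by a follow-up card.

# Testing
1. Functional tests covering success and failure for both login providers.
2. 90% unit test coverage on all new code.

# Links
Authentication framework documentation, UI mockups, and the follow-up Okta card.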

Title

The title (summary in Jira speak) should be short and snappy. This is something you can use in conversation as shorthand to separate the work from the various other pieces of work in the sprint. It is not a description of the work to be done. Aim for ten words or less, preferably six.

Examples of good titles

  • Create Tiltforms login UI
  • Add TLS support to Email service

Examples of bad titles

  • “UI”
    • Do what to the which UI? Delete it? Design it?
  • “Add login form to General Access Repository so that external vendors can access documentation.”
    • Too long, too much detail. Move the why to the context section below.

Main field

Jira and the various other sprint tools have a big empty text field. Divide this up into the following headings (except for the extended title, which goes at the top without a heading).

Extended title or Summary

The extended title is almost always a single sentence. Here you can be more specific about what is being done, where, and how it’s being implemented. It’s not a substitute for any of the other sections, but provides a longer explanation for someone familiar with the project who needs to refresh their memory. For example, people who worked on the project but are looking at a story six months after it was originally written, or an engineer from an associated project.

NOTE: This doesn’t actually need a heading.

Examples of good extended titles

  • “Create Tiltforms login UI for Google & Office 365 login support to the Dendofy v3 UI.”
  • “Update Cloudformation deployment for the KT-exclusion detection microservice template to include UAT environments.”

Examples of bad extended titles

  • “Add TLS support to Email service.”
    • This is just repeating the title. It adds noise, but no new information.
  • “We’ve engaged CloudNine consulting to build the UI for our 3rd gen Fintech competitor.”
    • These details are important, but don’t provide any information on the work to be done. This detail is part of the context/background section.

Context or Background

This is one of the most important, but most often forgotten, pieces of a good story card. This section explains why this particular piece of work is required.

By providing context to a given piece of work, team members are empowered to question and provide feedback on a story, such as when a piece of work is no longer required, or the client has changed their requirements. Providing a short background of a few sentences here massively decreases the chances of waste from having incorrect deliverables.

This section should be a few sentences that describe why a particular task exists. If there are deadlines, a particular client involved, or a future feature being supported, this is the section to mention it.

In a what/why/when/where breakdown, this is the “why”.

Examples of good context

  • “Two of our clients in the sales pipeline have indicated they use Okta for SSO, and require Okta support to complete the sale. Implementing Okta support means fixing several outstanding pieces of tech debt in our authentication framework.”
  • “The current UI for the Dendrites product is implemented in Angular 1. In line with all our other projects, the re-design for Dendrites UI v2 will be in React.”

Examples of bad context

  • “Refactor the Authentication framework with OAuth1 support.”
    • This is an instruction on what to do, rather than the background.
  • “We need to add SSO to our application.”
    • This is appropriate for the context section, but on its own does not provide enough detail. Who is “we”? Which application? What business goal does adding SSO fulfil?

Deliverables

Deliverables provide the “what” of a story. They define what needs to be done, to what, and how. This almost always means a numbered list of action items. Be as specific as possible here about relevant modules, systems, notes, documentation, and other stories. If documentation or other non-code items need to be done as part of the story, include them here. Even if your team does kick-offs, a team member should be able to pick up a story and get started on it in some way, even if the rest of the team is unavailable.

Examples of good deliverable items

  • Add two new GST tax classes for Singapore & Australia implementing the LineItemTaxInterface. See the notes section for calculation rules.
  • Register the above two tax classes in the NationalTaxFactory.
  • Update the REST endpoint for attaching supplementary documentation to customs orders to use the new authorisation system built in card JRT-4411.
  • Add documentation of the new payment system data flow from epic RTT-555 as a new page in Confluence at System Documentation > Architecture > Payments > Dataflows.
  • Create UI mockups for a landing page on the StrongRow website.

Examples of bad deliverables

  • “Update documentation.”
    • Which documentation? What details are required?
  • “Fred has asked us to implement structured logging on the SuspectionCheck system, talk to him about how.”
    • Talks about irrelevant history, doesn’t have the required level of detail, and mentions a person by their first name only.
  • “Create a login page.”
    • Not enough detail. Which application? What features need to be supported? Is there an existing login page?

Acceptance Criteria

This section is usually (but not always) written from a business perspective, detailing the expected behaviour of the system once the deliverables are complete.

If all the deliverables are done but the acceptance criteria are not met, the story is not complete, either due to a poor implementation or incomplete scoping of the deliverables. In the latter case, it is usually up to the person implementing the story to pick this up and alert the original author of the story to the missing deliverables.

Examples of good Acceptance Criteria

  • “Checking ‘Remember me’ when logging in will keep the user logged in for 28 days after their last page view.”
  • “Invoices show the company logo in the top right-hand corner when printed to PDF and in the top left-hand corner when viewed on a screen.”

Examples of bad Acceptance Criteria

  • “Create a login page.”
    • This is a (poorly written) deliverable.
  • “Customers can see an invoice”
    • Vague. This needs to be more specific about when and where customers can see an invoice.

Out of Scope

The Out of Scope section is used to limit the work done in this story. This is common for prototype stories, for separating design from implementation, or where a piece of work needs to be completed in order to open up other parallel work. It’s also useful for enthusiastic developers who want to release to production before business sign-off and communications are complete.

Where another piece of work implements the out-of-scope item, including the card reference means the implementer is empowered to cross-check that the follow-up work makes sense in light of the work they have done.

Examples of good Out of Scope

  • “Deployment of this feature to production. See JST-4441”
  • “Implementation of this design in the UI. See FE-539”
  • “Sending reports out to clients. Forward the reports to the #sales channel on slack.”

Examples of bad Out of Scope

  • “Testing”
    • Is all testing out of scope, or just a particular type? Is there a follow up story for testing?
  • “Don’t add any of the extra items from the meeting”
    • It’s impossible to know which items from which meeting are referenced here.
  • “Only do exactly what is in the deliverables.”
    • This and similar phrases indicate that one or more engineers are implementing only some of the deliverables, and/or implementing deliverables from other cards, thus causing planning issues.2 Good Out of Scope items help define scope for everyone; they are not targeted at a particular staff member.

Testing

This section covers everything from unit tests through integration testing, manual testing, and post-production deployment testing. Where possible, link to existing test plans and standard definitions of done, and be as specific as possible about expectations. Since testing is usually one of an engineer’s least favourite things to do, it’s best to make the expectations as clear as possible.

Ways to define testing may include coverage requirements, functional test cases to cover, maximum run times, specified test frameworks, and so on.

Examples of good Testing

  • “90% unit test coverage on all new code”
  • “Functional tests on all acceptance criteria, covering both success and failure scenarios”
  • “Tests as per the standard Definition of Done from System Documentation > Development Standards > Definition of Done.”

Examples of bad Testing

  • “Normal DoD”
    • This is a common way of referencing the standard Definition of Done. However, there’s no indication of where to find it, and the acronym could be confusing for a new starter.
  • “Prove new features in this card work”
    • This statement is vague, and doesn’t specify how to test the features, or to what level.

Notes

The notes section is a free-form area for various pieces of information that don’t fit into the standard structure. This could include technical documentation, links to relevant external websites, and links within the agile software to dependencies and related cards.

Wrapping up

Writing a good agile card takes practice. Following the formula laid out above will seem intimidating and difficult the first few times. After a few iterations, you will find that the sections each have their distinctly useful spots, and filling them out becomes a matter of just filling in the blanks.

Next time

Next time I’ll tackle bug cards, a space which often suffers from either a dearth or a glut of information.


  1. Some teams will also define separate “task” cards, for doing a one-off task such as setting up a server or repository. Structurally, these are the same as story cards, so the same rules apply. ↩︎

  2. While this is a problem, it should be addressed directly by the engineer’s manager, rather than by resorting to passive-aggressive Out of Scope points on an agile card. Whole books have been written on how to handle said employees, and that process is out of scope for this article. ↩︎