Defining success

Prep time: Variable
Run time: Variable
People: 1+
Contributed by: Digital Service Design Office, Queensland Department of Transport and Main Roads
Stages: Discovery, Alpha

The Defining success play helps your team measure the real impact of your work with metrics, get value and impact from your projects, and stay aligned on what success looks like.

During the play, your project team will work together to define what success looks like for your project and identify the best metrics to measure your intended outcomes. You will consider your problem, plan how to measure progress and take benchmark measurements to track your progress against.

By the end of the activity, your team will have clear success criteria to guide your project and ensure that you are on track to a successful outcome.

Running Defining success in the discovery stage ensures that your project team has a clear understanding of what success means and how to measure it. This can help prevent misunderstandings, conflicts, and wasted effort. Running the play early on can also help your team identify potential problems and limitations that might impact the project, and adjust your approach accordingly.

In the alpha, beta, and live stages, your team should use the outputs of this play to run the Measuring success play and track your progress throughout the project.

Outcomes

  • Clear success criteria that align with project goals and outcomes
  • A well-defined set of metrics you can use to track and measure real impacts
  • A team that is aligned on what success looks like

What you need

Remote
  • Microsoft Teams
  • Miro or Microsoft Whiteboard

In-person
  • Meeting space
  • Whiteboard
  • Markers
  • Sticky notes
  • Laptop or a screen for data review and analysis

Instructions

Before the play

1. Make sure you have a clearly defined problem

Make sure your problem statement is more than just a fact or a judgement. If you haven’t identified the true problem, you won’t be able to solve it. Remember to identify who or what is affected by the problem.

For example:

  • ‘Support centre call times are too long’ is a judgement – too long for whom?
  • ‘Customers are unhappy about support call wait times’ is a problem for customers.
  • ‘Support centre call times have increased’ is a fact, which may or may not indicate a problem.
  • ‘Support centre costs are too high for our budget’ is a problem for the organisation.

If you don't have your problem clearly defined yet, you can run the Problem definition, Desktop research, or User research planning plays.

2. Make sure you have a clearly defined outcome

What will it look like if your project is a success?
You’ll refine your intended outcome as part of the Defining success play, but it’s important to have an idea of it before you start.

If you're unsure about the outcome, take some time to work through the Impact Mapping play before you begin this play.

3. Identify the types of data you have access to

Consider the following:

  • Which analytics tools does your organisation already use?
  • Which metrics are currently being tracked?
  • What research has already been conducted?

Make a list of all the data sources you have, and note whether you need to request access to any additional ones.

4. Invite your participants

You might run this play as a single session, or you might complete the steps over time with breaks in between to collect data.

It’s important to give enough notice to ensure participants are able to attend the session. Aim to send invitations around 2 weeks in advance. When you send an invitation, make sure that you clearly explain the goal of the session, how long it will take, and why people’s participation will be beneficial.

Calendar meeting request for defining success


Subject:
Join us for a defining success workshop for [Project name]

Meeting description:
Hi [team name if sending a group invitation, or participant name if sending individual invitations],

We're at a stage in the project where we need to establish what success looks like. In this session we will review our goals and identify the best metrics to measure the intended project outcomes.

The aim of this session is to set standards for what measurements constitute a successful project outcome.

The session will take about [x] minutes and there is no preparation required.

Having [you/each of you] attend will help us articulate the desired outcomes and select appropriate metrics to measure the impact of our project.
You can read more about the defining success play in the Digital service design playbook.

Kind regards,
[Facilitator name]

During the play

1. Consider your problem

A problem is an issue that is negatively affecting people, groups, or organisations.
You should already have a well-defined problem. Begin the play by briefly summarising the problem, including what caused it and who it affects.

2. Craft a short statement that defines your intended outcome

The outcome is the impact of your project.
You should already have an idea of your intended outcome. Put your outcome into a short statement that answers the question “What does success look like for this project?”, and keep the following in mind as you craft your statement:

The outcome is not an output or a metric of your project.

  • An output is a product created by the project – for example, a workshop artefact, a tool, or a report.
  • A metric is a measurement that indicates whether the impact of the project is being achieved.


Example

Problem:

Customers are unhappy about how long it takes to get their problem resolved through the call centre.

Outcome:

❌ A chatbot that customers can talk to instead of calling – This is an output. It may or may not lead to success.
❌ Reduced call centre volume – This is a metric. It could indicate success, but it could also be a result of other factors, including coincidence.
✅ Shorter customer problem resolution times and increased customer satisfaction when contact is made via the call centre – This is an intended outcome. When this statement is true, the project will have succeeded.

The outcome may need several elements.

If your project touches multiple services, groups of stakeholders, or groups of users, you will need to consider the impact on each one.

For example, if your customers are unhappy about support call wait times you could increase the number of call centre staff. This could be a success from the perspective of customer satisfaction, but would it meet the business’s financial needs?

The outcome must be clear enough that you can tie it to metrics.

Without this, it’s likely that ‘success’ won’t be clearly defined, and it will be difficult to measure the project’s effectiveness.


Example


Problem: Support centre costs are too high for our budget.
Outcome:
❌ Improve call centre efficiency – ‘Efficiency’ is too vague to measure.
✅ Reduce call centre costs – Costs can be measured using financial data.

3. Plan how to measure progress towards the outcome over time

Metrics are measurements that are used to track progress toward your intended outcome.

Checking your metrics regularly throughout your project helps you track your progress towards solving your problem and keep your project on track. For examples of metrics you might use, see the resources section below. The types of metrics you choose will depend on your project.

If you are re-designing a service, your measurement plan may build on what is already measured and reported.

Whatever measurements you consider, identify ones that will help you understand the outcomes of the service, not just its outputs. For example, rather than relying solely on the count of how many people use the service annually, consider measuring how interacting with a service impacted someone’s situation, business, or family. Context for the numbers will help you understand the true value or impact of a service.

Decide what to measure based on your problem and your intended outcomes.

  • How do you know your problem is real? What data and user insights do you have that indicate a problem?
  • What kind of data would give you a clearer idea of the problem?
  • What data can be tied to the root causes of your problem?
  • What kind of data would be useful to track changes over time?

Make a list of what you are going to measure.

Consider the resources you already have, and what kind you might need to acquire:

  • What sources of data do you have access to? For example: digital (web) analytics, user feedback, site performance, call centre data, financial information
  • What analytics tools does your organisation already have?
    The Queensland Government uses Google Analytics to monitor digital services.
  • Are any relevant metrics currently being tracked?
  • What kind of additional data might you need that doesn’t currently exist or isn’t easily accessible? How could you access or generate it?


Tip

Never use a single metric as a goal.
Relying on metrics as goals can lead to a narrow focus on short-term outcomes, and may not capture the full picture of what is happening.


Metrics can be misleading because they can be influenced by a range of factors. Focusing too much on a metric can lead to a distorted view of performance and may not reflect the reality of the situation.

Metrics can lead to a narrow view of performance. For example, a project that focuses solely on decreasing cost to serve may neglect other important factors such as customer satisfaction.

Metrics can create incentives for unwanted behaviour. For example, a call centre employee who is focused on hitting a calls-per-day target might transfer calls unnecessarily, leading to worse experiences for customers.

Metrics can stifle creativity. Focusing on hitting specific metrics may cause a team to neglect opportunities for innovation and improvement.

Add these details to your list.
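
If it helps, the measurement plan can be captured in a simple structured format so it is easy to share and update. The sketch below is one illustrative way to do this in Python; the metric names, sources, owners, and frequencies are placeholders, not recommendations from the play.

# A minimal sketch of a measurement plan as structured data.
# All metric names, sources, and owners below are illustrative placeholders.
measurement_plan = [
    {
        "metric": "Average call wait time (minutes)",
        "source": "Call centre reporting",
        "indicator_type": "lagging",
        "owner": "Contact centre team",
        "frequency": "monthly",
    },
    {
        "metric": "Page views for the online help content",
        "source": "Google Analytics",
        "indicator_type": "leading",
        "owner": "Digital channels team",
        "frequency": "weekly",
    },
]

for item in measurement_plan:
    print(f"{item['metric']} ({item['indicator_type']}): {item['source']}, {item['frequency']}")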

4. Take a benchmark measurement

A benchmark is an initial set of measurements to be compared against later.
Now that you’ve planned how to measure your problem and progress towards the desired outcome, plan who will take a measurement (or set of measurements) to use as a benchmark and when that will be done.

  • Which other teams will you need to collaborate with to collect the required measurements?
  • When will you take the initial benchmark (this may be different for different parts of the service)?

Create a document to record your measurements and track them over time. You may need to end the session here and reconvene once the benchmark data has been collected.
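
One lightweight way to record the benchmark, and every later measurement against it, is a plain spreadsheet or CSV file. The sketch below shows the idea in Python; the metric names and values are invented for illustration.

import csv
from datetime import date

# Append a dated set of measurements to a CSV file so the benchmark and
# later values can be compared over time. Metrics and values are examples only.
measurements = [
    ("Average call wait time (minutes)", 12.4),
    ("Customer satisfaction (CSAT, %)", 71.0),
]

with open("measurements.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for metric, value in measurements:
        writer.writerow([date.today().isoformat(), metric, value, "benchmark"])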

5. Define your success criteria

Success criteria are measurements that indicate that the problem has been solved. Establishing clear goals and success criteria creates alignment and understanding across the team.

Read your intended outcome statement, then look at your benchmark data. If the problem was resolved, how would that be reflected in your metrics?

Estimate the difference you would expect to see, then document it.
If you defined multiple outcomes, establish clear success criteria for each one.
Remember, the success metrics are not the goal, they’re indicators that you’ve reached the goal.
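
Once the criteria are documented, later measurements can be checked against them in a consistent way. A minimal sketch, using invented benchmark and target figures rather than anything prescribed by the play:

# Success criteria expressed as targets relative to an invented benchmark.
# The metrics, benchmarks, and targets are examples only, not recommendations.
criteria = {
    "Average call wait time (minutes)": {"benchmark": 12.4, "target": 9.0, "direction": "decrease"},
    "Customer satisfaction (CSAT, %)": {"benchmark": 71.0, "target": 80.0, "direction": "increase"},
}

def criterion_met(current: float, target: float, direction: str) -> bool:
    """Return True if the current measurement meets the success criterion."""
    return current <= target if direction == "decrease" else current >= target

# Later measurements (also invented) checked against the criteria.
latest = {"Average call wait time (minutes)": 9.8, "Customer satisfaction (CSAT, %)": 82.5}

for metric, rule in criteria.items():
    met = criterion_met(latest[metric], rule["target"], rule["direction"])
    print(f"{metric}: {'met' if met else 'not yet met'}")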

6. Set up a measurement schedule

Decide on the frequency with which you will capture data to measure your project's progress. This could be weekly, fortnightly, or monthly, depending on the project's timeline and the team's preference.
You do not need to review the data each time you collect it.

7. Allow for iteration

As you move through the project and uncover more information, you might need to update your desired outcomes. If that happens, you will also need to update your success criteria or even choose different metrics.

After the play

Collect data for your measurements

Based on the measurement schedule you defined earlier, collect the data you need.

Run the Measuring success play

Run the Measuring success play regularly throughout the alpha, beta, and live stages. You will use it to evaluate your progress against the metrics and success criteria you defined in the Defining success play, and iterate based on your results.

Resources

See below for a collection of templates and other pages which will help you run this play. These resources are also linked in the play instructions.

Calendar meeting request for defining success (see the invitation template in step 4 of the instructions above).

Quantitative data

Quantitative data is numbers-based, countable, or measurable. Quantitative data can tell us how many, how much, or how often, but it can’t tell us why something is happening.

Qualitative data

Qualitative data is interpretation-based, descriptive, and relating to language. Qualitative data can help us to understand why, how, or what happened.


Qualitative data can’t be used as a metric for benchmarking or measuring success, but it can complement quantitative data by providing insight into why things are happening and what needs improvement.

Example:  

Using qualitative data to complement quantitative benchmarking metrics

In the Mobile Phone and Seatbelt Technology project, the Single Ease Questionnaire (SEQ) was used as a quantitative metric to benchmark the current state against two subsequent rounds of iteration while working to improve users’ experience of completing tasks through the digital service.

Usability testing was undertaken at each stage, where users were given three key tasks to complete and then asked to indicate their perceived ease on a scale of 1 to 7. If the answer was less than 5, the user was asked why they gave it that score. This qualitative data gave richer insight into what made the transaction difficult and helped to focus the following round of iteration on how the service could be improved.
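
As an illustration of how SEQ responses like these might be summarised, the sketch below (with invented scores and task names) calculates an average ease score per task and counts the responses below 5 that would prompt a follow-up question, mirroring the approach described above.

from statistics import mean

# Invented SEQ responses (1 = very difficult, 7 = very easy) for three tasks.
seq_responses = {
    "Task A": [6, 7, 5, 6],
    "Task B": [4, 3, 5, 4],
    "Task C": [7, 6, 6, 7],
}

for task, scores in seq_responses.items():
    follow_ups = [s for s in scores if s < 5]  # responses that warrant a 'why?' follow-up
    print(f"{task}: average SEQ {mean(scores):.1f}, follow-ups needed: {len(follow_ups)}")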

Leading indicators

Leading indicators are measurable factors that change before the system or process starts to follow a certain trend, and are used to predict changes or trends. They provide advance warning so you can prepare and act proactively, and they can help predict the future performance of a particular aspect of your service.

By tracking and analysing these leading indicators, you can anticipate changes in demand, user behaviour or technology trends and adjust your digital services accordingly to meet user needs and expectations.

Here are some examples of leading indicators that you could use as metrics for your project:

  • Digital engagement metrics: These are indicators such as page views, unique visitors, bounce rates, or click-through rates on government websites. For example, a sudden increase in page views or unique visitors to a particular service page might indicate increased interest or demand for that service.
  • CSAT scores: Customer Satisfaction (CSAT) surveys can be useful leading indicators. Regularly measured, they can provide early signs of changes in public sentiment towards the digital services provided by the government.
  • Search trends: Using tools like Google Trends can provide insights into what services the public is searching for. An increase in searches related to a particular service may be a leading indicator of increased demand.
  • Online help requests or calls to customer contact centres: The number of requests for assistance or queries received through digital channels could indicate issues or difficulties users are experiencing, serving as a leading indicator of potential service design flaws.
  • Social media mentions: Increases in social media mentions about a particular digital service could signal changes in public interest or sentiment, which could subsequently affect the usage of that service.
  • Service uptake rates: The rate at which users are adopting or subscribing to a new service can be a leading indicator of the success of that service.
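
As a simple illustration, a team might watch week-over-week movement in one of these indicators and treat a large jump as an early signal worth investigating. The sketch below uses invented page-view counts and an arbitrary threshold:

# Invented weekly page views for a service page; flag a week-over-week jump
# above a chosen threshold as a possible early signal of rising demand.
weekly_page_views = [4_100, 4_250, 4_180, 5_900]
threshold = 0.20  # 20% increase week over week

for previous, current in zip(weekly_page_views, weekly_page_views[1:]):
    change = (current - previous) / previous
    if change > threshold:
        print(f"Possible demand signal: page views up {change:.0%} week over week")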


Lagging indicators

Lagging indicators are a set of measurable factors that change after the operating environment or the system itself has begun to follow a certain trend. Unlike leading indicators that predict changes and trends, lagging indicators confirm long-term trends or changes once they occur. These indicators are useful for providing feedback on the effectiveness of past decisions or actions.

Here are some examples of lagging indicators that you could use as metrics for your project:

  • Usability measures: Validated measures such as the System Usability Scale (SUS) or the Single Ease Questionnaire (SEQ) can measure the usability of your service. These measures can be used to assess the usability of prototypes or live services by adding a survey to the end of task completion for users or research participants.
  • Service completion rate and time: This is the percentage of users who can complete their intended tasks using a digital service and how long it takes them. It's a lagging indicator because it reflects the cumulative effect of the design, user interface, and functionality of the service.
  • Support tickets, online help requests or calls to customer contact centres: Changes in the number of support tickets, help requests, or contact centre calls received after the launch of a new service can be a lagging indicator of user difficulties, system bugs, or service improvements.
  • Cost per transaction: The cost to the government for each successful transaction completed through the digital service. This metric can help in evaluating the cost-effectiveness and efficiency of digital vs. traditional service delivery methods.
  • System downtime or performance: The amount of time a digital service is unavailable due to system failures or maintenance. A high downtime percentage or performance issues such as slow response or load times could indicate potential issues with the service infrastructure or design.
  • Error rates: The number or percentage of transactions or tasks that result in errors. This could reflect issues with the service interface, user instructions, system bugs, or other factors impacting the users' ability to successfully use the service.
  • Service reviews: Evaluations or reviews conducted on prototypes of planned service changes or after a new digital service has been implemented can provide insights into what worked well and what didn't. They can measure the success or failure of a service and are therefore considered a lagging indicator.

These lagging indicators can provide valuable feedback about the success of your digital service delivery efforts, the effectiveness of your user experience designs, and the impacts of your decisions. They can also be used to identify areas for improvement and measure the progress of improvement efforts over time.
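
To make a couple of these concrete, the sketch below calculates a service completion rate and an error rate from a small set of invented transaction records; in practice the figures would come from your own analytics or transaction logs.

# Invented transaction records: (completed, raised_error) for each attempt.
transactions = [
    (True, False), (True, False), (False, True),
    (True, False), (False, False), (True, True),
]

total = len(transactions)
completion_rate = sum(1 for completed, _ in transactions if completed) / total
error_rate = sum(1 for _, raised_error in transactions if raised_error) / total

print(f"Completion rate: {completion_rate:.0%}, error rate: {error_rate:.0%}")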

Example:
Using metrics to measure project outcomes

In the Mobile Phone and Seatbelt Technology project, time to complete tasks and the Single Ease Questionnaire (SEQ) were metrics used to measure whether prototypes were moving towards the desired outcome of making the digital service easier to use. These were lagging indicators of changes to the user experience of the digital service, even though they were collected from prototypes.

After the design changes were released in production there was a 6% reduction in calls to the contact centre in the first three weeks, equating to almost 10,000 calls per year. This is a leading indicator that the broader desired outcome of channel migration and reduced cost per transaction will be achieved, but more data needs to be collected to confirm.
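
As a rough illustration of the annualisation arithmetic in this example, the sketch below scales an observed weekly reduction up to a yearly figure. The weekly call volumes are assumptions chosen to match the reported ~6% reduction; they are not figures from the project.

# Illustrative annualisation of an observed reduction in contact centre calls.
# Weekly volumes are invented; only the ~6% reduction mirrors the example above.
expected_calls_per_week = 3_200   # assumed pre-release weekly volume
observed_calls_per_week = 3_008   # about 6% lower after the change

reduction_rate = 1 - observed_calls_per_week / expected_calls_per_week              # ≈ 0.06
calls_avoided_per_year = (expected_calls_per_week - observed_calls_per_week) * 52   # ≈ 10,000

print(f"Reduction: {reduction_rate:.0%}, annualised calls avoided: {calls_avoided_per_year:,}")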