Pace Layer Mapping

This is a recording of a talk given at UCD Gathering, Thursday, 18 November 2021.

Delivery teams and user research practitioners often have problems working together well. They work with very different artefacts, often operate on different timescales, and both groups can experience conflict over the value each discipline brings to the end product.

We’ll tell the story of several different attempts at helping teams integrate research work effectively. We’ll talk about what worked, what didn't, and how those attempts eventually evolved into the practice of Pace Layer Mapping.

Pace Layer Mapping builds a common artefact that shows (a) the confidence we have in research insights; (b) what we need to do to increase that confidence; and (c) how stable that research is over time. It allows the whole team — product managers, researchers, and engineers — to come together around research insights and methods effectively.

(An earlier version of this talk was given at Outcome 2021 and #mtpcon Digital 2020.)

(A standalone PDF of the slides is also available.)

Slides & Transcript

The talk title 'Pace Layer Mapping' on top of a photo of sunlit sandstone rock layers.

Hi, I'm Adrian Howard. And I spend my time working with companies who are having problems where the work of product management, user research and agile delivery overlap. One of the pain points I often encounter is where teams fail to integrate user research work effectively. Helping teams do that better eventually led to what we're going to talk about today: Pace Layer Mapping.

And yes, it's yet another canvas!

A screen shot of a google image search for 'product canvas' with lots of results.

Product Management and User Research are full of canvases. You're probably familiar with a bunch already. Google will get you a bunch more. So you know what to expect.

First, I will briefly explain the structure of the canvas. Then I'll fill in some of the details with a nice simple example. We'll sprinkle it with a bunch of post-it notes and show how it solves that simple problem for you. And we're done!

A picture of a grid titled 'pace layer mapping' with a description of 'yet another canvas' — four horizontal squares by three vertical. A few yellow post-it notes are sprinkled across the squares.

I'm not going to do that.

Text heading: '…the Long Messy Journey'

Instead, I'm going to tell you about the long, messy, complicated journey that eventually led to the Pace Layer Mapping canvas and practice.

It didn't spring fully formed out of nothing — just like the products you work with. We started solving one problem, saw some value, experienced other problems, ran experiments, learned from our experiences, and evolved the practice over time to help solve new problems. So that's the journey we're going to go on today.

A text slide showing a list: Scale Of Truthiness, Lean Persona, Incremental Persona, Iterative Persona, Flair, and Pace Layer Mapping. All but the final 'Pace Layer Mapping' list item are crossed out.

We started with an alignment problem that we solved with a practice called Scale Of Truthiness. After that alignment problem was solved, the practice evolved into something called Lean Persona when it became more of a persistent artefact, rather than a one-off activity. Which then evolved into something we called Incremental Persona, which then evolved into something we called Iterative Persona, which then evolved to add another structure we called Flair. Which eventually broadened out and changed a bit more into what we're now calling Pace Layer Mapping.

All of those things were useful at different times for different reasons. They solved some problems and raised new ones. And for those who want to switch off now, that's the lesson I want you to take away today. That the artefacts, practices, and tools that you use for your work should be… well… useful! And when they're not — change them until they are! Don't just take something off the shelf — customise it and build it for your context.

Text heading: Scale of Truthiness

Let's return to the beginning of this story. With the Scale Of Truthiness. I started using this with a startup that was still figuring out problem-solution fit. They'd done some early customer development work, built some early prototypes, delivered some MVPs, hired a bunch of people — but were beginning to have problems identifying and solidifying their target market and their target customers.

The details of what they were building don't really matter. So I'm going to simplify things, and say they were building Twitter for Cats.

Photo of a cat in front of a laptop with the twitter logo on the back.

They hired me to help them get some clarity on those problems with their product strategy and vision, and who their customers were. After talking to a bunch of people at the organisation, it rapidly became clear that there was a serious lack of alignment on who their customers actually were. When you talked to the product managers, they thought their customers looked like this…

A photo of a ginger house cat.

While the developers thought they looked a little bit different, maybe something more like this…

A photo of a ginger house cat is now joined by a photo of a tiger.

The designers thought they were building for something that looked like this…

The photos of a ginger house cat and a tiger are joined by a photo of somebody dressed up as Hello Kitty.

And some of the investors were thinking about a kind of customer that was so far away from everybody else it was actively confusing…

The photos of a ginger house cat, a tiger, and somebody dressed up as Hello Kitty are joined by a photo of the album 'The Best of Cat Stevens' which includes a headshot of the singer.

Unsurprisingly, this lack of alignment on who they were building for — and whose problems they were trying to solve — led to a lot of conflict within the organisation on what direction the product should go in.

A photo of a mock advert that includes two cowboys herding cats.

I'm sure the process of herding different stakeholders with different needs is familiar to you. It's what product managers and user researchers spend a lot of their time doing. So I did what I expect many of you would have done in the same situation.

A photo of a group of people in front of a wall with lots of post-it notes on. The people have been blurred out.

We got all the stakeholders in the same room with a lot of paper, sharpies, and post-it notes. We even got a few of the end-users of the product in the room too. We split folk into groups, and got them to start articulating who they thought the product's customers were — using a few different methods from a few different people.

We got them to use simple four-by-fours to describe the typical customers. Something I first learned from the writing of the excellent Janice Fraser.

A 4x4 grid alongside a photo of Janice Fraser. Top-right is the name of the person (Mary) and a rough sketch of a dark haired woman with glasses. Top-left is a list of behaviours (Has a house cleaner, Buys takeaways three nights a week, Frequently feels overwhelmed when she forgets something). Bottom-left is a list of demographics (Working mum, 34 years old, Lives in Reading, Works in London, Married with 2 kids, Household income 125k/y). Bottom-right is a list of needs and goals (Help! Running errands, Managing Kids, Keeping things running, Time for her girlfriends, To feel like she has it sorted, To clone herself).

We got them to use empathy maps to describe those customers' experiences — what they saw, what they heard, what their pain points were. A practice I learned from Dave Gray's work.

A picture of an empathy map alongside a photo of David Gray. An empathy map is a canvas built around a stylised face. It has areas for: Who are we empathising with?, What do they need to do?, What do they hear?, What do they say?, What do they see?, What do they do?, and What do they think and feel?

We also got people to come up with different dimensions and continua that they could put customers on. Something I learned from Cindy Alvarez's Lean Customer Development book — an excellent read.

A list of several different dimensions alongside a photo of Cindy Alvarez. The dimensions are: values cash to values time, prefers predictability to tries new things, makes decisions to follows orders, health conscious to taste conscious, and makes decisions for self to concerned what others will think.

Once the individual groups were happy, we pulled that information off those sheets and artefacts, and threw all the evidence and assumptions and wild-ass-guesses onto post-it notes — and stuck them up on the wall.

At this point, we hadn't got alignment! We in fact had a lot of disagreement, as people walked the wall and saw how different people saw the customer. People had different conclusions about what their pain points and gain points were, what their behaviours were, what kind of people they were, what assets they had.

Which is where the Scale Of Truthiness comes in — and it's really simple.

Text heading: Apply Scale of Truthiness…

We just drew a line right across the length of the wall. At one end of the scale are things we've basically made up — guesses. And at the other end of the scale are things we're really confident of. Things we have a lot of evidence for.

A photo of a group of people in front of a wall with lots of post-it notes on. The people have been blurred out. Above the post-it notes there is a scale that goes from 'Wild Guess' on the left to 'True' on the right.

We then got folks to justify where their things sat on that scale. Which led to lots of interesting conversations about what counts as evidence, the different experiences people have had with customers, and so on.

We saw situations where someone from marketing was saying customers would like "this", and someone from the product group saying customers would like "that". Then the customer in the room saying "Sometimes it's 'this', sometimes it's 'that'".

There was lots of discovery there — even though there wasn't lots of research. At that point, it was just getting everybody on the same page about how everybody else saw the customers.

Ideas that were mutually contradictory obviously couldn't both be true. Which led to much more productive conversations about how we could figure out what the real state of the world was. People started annotating the board with research activities and experiments to help figure those things out.

When things settled down a bit, it became clear that we needed to move that scale of truth further out. Because nearly everything that we had discovered needed more research and experimentation to either invalidate or validate it.

A photo of a group of people in front of a wall with lots of post-it notes on. The people have been blurred out. Above the post-it notes there is a scale that goes from 'Wild Guess' on the left to 'True' on the right. The right end of the scale has been extended further right — so no post-it notes are underneath. This blank area of wall is highlighted.

And as the facilitator for this session, it helped me make sure that a few of the "facts" that the team seemed 100% certain of actually got checked as well!

While we weren't yet totally aligned on our customer — there was still a lot of disagreement — we could begin to see that some options were much more likely than other options.

We return to the photos of the house cat, the tiger, the person in the Hello Kitty costume, and Cat Stevens. Everything but the photo of the house cat is faded out into the background.

We had some solid next steps for research and experimentation to clarify those areas of uncertainty — and hopefully get everyone talking about the same customers in the organisation.

A meme style photo of Keanu Reeves from Bill and Ted. It's subtitled 'HUH WAIT WUT?'

Now, some folks will be saying at this point that these people should have built proper persona upfront, and that might well have helped. Persona — for those who have not used them — are research-based archetypes of our customers that are usually summarised with some kind of persona description. They're an artefact that comes out of user research work.

Picture of a persona description next to a photo of Todd Zaki Warfel.

Like this example from Todd Zaki Warfel. It can be a really useful and powerful way to quickly communicate the results of user research. Persona can also be misused, and lead teams to make dumb mistakes. I've certainly experienced both scenarios in my work. I'm not going to get into the details of the whole personas-good vs. personas-evil argument. Ask Twitter and lots of people will shout at you.

The quote “Personas are to Persona Descriptions as Vacations are to Souvenir Picture Albums” next to a photo of Jared Spool.

However, I do love this quote from Jared Spool

“Personas are to Persona Descriptions as Vacations are to Souvenir Picture Albums”

They’re a great reminder — but they don’t actually replace the experience. A picture album isn’t the same as going on holiday. A Persona Description isn’t the same as experiencing that research process.

A repeat of the Scale of Truthiness illustration: A photo of a group of people in front of a wall with lots of post-it notes on. The people have been blurred out. Above the post-it notes there is a scale that goes from 'Wild Guess' on the left to 'True' on the right.

So what we had after the Scale Of Truthiness exercise wasn't a persona description. It was more like a snapshot of a bunch of research questions around the company's customers. It was something that, with some more work, might get distilled down to a persona description. But instead, something a little bit more interesting happened.

That Scale Of Truthiness artefact got left on the wall. People started referring to it during product and development work when questions about customers came up. It stopped being the output of a one-off alignment activity, and started being an artefact people evolved over time and actively updated.

Rather than a continuum, people added a little bit of structure — and created broad categories for the research on the wall.

A picture of a wall of post-it notes divided into four roughly equal categories. Running from left-to-right the categories are Guess, Weak, Strong, and True. There are no post-it notes in the 'True' category.

For example, "guesses" were things that they'd heard from one customer, or just something a person on the team thought might be relevant.

Once they had these broad categories, and a way of describing what belonged in them, people started formalising rules for when and how things move between the categories.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). A blue arrow shows a transition from Guess to Weak.

For example, a "guess" could move to the "weak" category, when we heard the problem that it talked about from five other customers.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows show transitions from Guess to Weak, and from Weak to Strong.

Something might move from "weak" to "strong" if we could deliver some kind of MVP experiment to validate that customer need.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows running left-to-right show transitions from Guess to Weak, from Weak to Strong, and from Strong to True.

Things might move from "strong" to "true" if we shipped a feature based on that MVP, and saw real customers use the feature to solve the problem that research insight talked about.

We also started seeing things move in the other direction.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows running left-to-right show transitions from Guess to Weak, from Weak to Strong, and from Strong to True. A blue arrow running right-to-left illustrates a transition back from True to Strong.

So if we shipped a feature, and nobody ended up using it, we might move that insight back a step, because there was obviously something that we misunderstood about how it actually worked.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows running left-to-right show transitions from Guess to Weak, from Weak to Strong, and from Strong to True. Blue arrows running right-to-left illustrate transitions back from True to Strong, and from Strong to Weak.

If we had a bit of evidence, but it wasn't actually used to make any product decisions over a period of time, like a month, then we might move it back a step. Because while it might be "true", it wasn't actually being used or useful for our context.
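As a rough way to picture those rules — this is purely an illustrative sketch, not something from the talk or the teams involved — you could model the wall and its promotion and demotion rules in code. The category names and example thresholds (five customers, an MVP, a shipped feature, a month of disuse) come from the examples above; the class and field names are invented for illustration.

```python
from dataclasses import dataclass

CATEGORIES = ["guess", "weak", "strong", "true"]  # left to right on the wall


@dataclass
class Insight:
    summary: str
    category: str = "guess"
    customer_reports: int = 1          # how many customers we've heard this from
    validated_by_mvp: bool = False     # has an MVP experiment backed it up?
    used_in_shipped_feature: bool = False
    months_unused: int = 0             # time since it last informed a product decision

    def promote(self) -> None:
        """Move one step to the right (towards 'true')."""
        i = CATEGORIES.index(self.category)
        if i < len(CATEGORIES) - 1:
            self.category = CATEGORIES[i + 1]

    def demote(self) -> None:
        """Move one step back to the left (towards 'guess')."""
        i = CATEGORIES.index(self.category)
        if i > 0:
            self.category = CATEGORIES[i - 1]


def review(insight: Insight) -> None:
    """Apply the example transition rules described above to one insight."""
    if insight.category == "guess" and insight.customer_reports >= 5:
        insight.promote()              # guess -> weak: heard it from five or more customers
    elif insight.category == "weak" and insight.validated_by_mvp:
        insight.promote()              # weak -> strong: an MVP experiment validated the need
    elif insight.category == "strong" and insight.used_in_shipped_feature:
        insight.promote()              # strong -> true: a shipped feature solved the problem
    elif insight.months_unused >= 1:
        insight.demote()               # unused for a month: drift back a step to the left
```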

This helped people start building a playbook of activities and practices for working with the user research, and making sure it was relevant to the product work that was currently in progress. Because we had this shared resource — that product people, user research people, and developers could all come together around — we started having much more productive conversations about priorities and research versus build decisions.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). The Guess category is highlighted in red.

Things over on the left hand side needed more research before we could make sensible product decisions. It's probably not even worth doing a little experiment — it's definitely not worth building anything.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). The Weak and Strong categories are highlighted in yellow.

Things in the middle, we have some weak evidence for. Maybe it's time to make some small-scale, cheap bets on the kind of product direction that will come out of those insights — experiments and MVPs.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). The true category is highlighted in green.

Once we've got stuff over to true, we have some really solid evidence that those things were used, and useful, and applied to our customers. So we could start making larger investments, bigger bets, longer term work around the things that appeared there.

This was all useful enough that we wanted to start talking about this set of practices together — and communicating it to other teams so they could get the value that this team was getting out of it.

Text title: Lean Persona (underneath the crossed-out title 'Scale of Truthiness')

So we packaged it all together and started talking about it with other people in the organisation. We labelled it Lean Persona.

A meme style photo of Keanu Reeves from Bill and Ted. It's subtitled 'HUH WAIT WUT?'

This turned out to be a huge mistake. A really bad label.

Because when you Google Lean Persona, what you see are those four-by-fours that we used in the initial exercise, and another of those four-by-fours, and another of those four-by-fours, and another of those four-by-fours and another of those four-by-fours.

A screen shot of a google image search for 'lean persona' — which returns a lot of four-by-fours.

It seems to have become, at some point over the last few years, the generic label for those kinds of artefact. So when people were talking about it — and they hadn't had the experience of the Scale of Truthiness, and all that work before — they had the wrong idea in their heads and started doing different things.

Text title: Incremental Persona (underneath a list of old crossed-out titles: 'Scale of Truthiness' and 'Lean Persona')

So we changed the name. Since we were building up the persona over time, we thought that Incremental Persona might be a better label. Did that work better? Let's try it. Let's take that out to different teams and see what they think of it, and what they expect that practice to be.

A meme style photo of Keanu Reeves from Bill and Ted. It's subtitled 'HUH WAIT WUT?'

Turned out, it was another mistake.

Because for some folk, Incremental Persona communicated the idea that we were delivering bits of the completed persona one after the other, rather than refining our ideas over time.

To the right is a photo of Jeff Patton. To the left is an illustration from his book showing the difference between incrementing (somebody imagines the Mona Lisa and constructs it one piece at a time like a jigsaw) and iterating (somebody imagines a woman in a pastoral setting and constructs the Mona Lisa using a series of sketches of increasing fidelity).

This illustration from Jeff Patton's excellent book on user story mapping makes this point really well. Incremental is delivering chunks: it has a start, and it has a finish. Calling it incremental set the wrong expectations with some of our listeners.

Iteration, on the other hand, is a process. We're refining something over time — gradually making that sketch and low fidelity insights into higher fidelity insights that we're more confident about.

Text title: Iterative Persona (underneath a list of old crossed-out titles: 'Scale of Truthiness', 'Lean Persona', and 'Incremental Persona')

So we started calling them Iterative Persona. This one seemed to stick and be useful. And as more people used this method, we spotted some other interesting behaviours.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows running right-to-left illustrate transitions back from True to Strong, and from Strong to Weak.

Remember those backward-pushing rules we talked about before — where insights that weren't applied dropped back to the less confident categories? We regularly saw some of the stuff that has traditionally sucked about personas for some people, like gender and age, disappear off the board and move into the less relevant categories.

Which is good, because those kind of demographic things often drive bad assumptions about customers.

A photo of Indi Young alongside a quotation from her article on persona: 'Please remove age, gender, ethnicity, location from your personas. None of these things cause behavior/thinking. But they cause assumptions.'

There's a great article by Indi Young on this that I recommend you read. But despite these problems, people seemed really attached to those photos we used and the little bits of basic demographic information, and kept bringing them up again when they did disappear off the board — especially if they hadn't been present for the work that pushed them off. Even though that information didn't end up being useful for driving the product direction.

So we just put them in a corner of the board. Acknowledging that they existed — but divorced from the actual decision-driving process that was happening in the main sections.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows running left-to-right and right-to-left illustrate transitions between the sections. Underneath is another boxed off category labelled 'Flair' containing some photos.

So we can point to them and talk about them, but also demonstrate the fact that while these might be "true", they weren't useful to help drive our product direction. The team started calling this "flair", which is a joke from the film Office Space — extraneous information that was supposed to reflect something important about a person, but actually didn't in real life.

Text title: Flair (underneath a list of old crossed-out titles: 'Scale of Truthiness', 'Lean Persona', 'Incremental Persona', and 'Iterative Persona')

Now we've added flair. What happened next?

After we started using this model, with a few different teams and a few different clients, we found some interesting issues around longer term stability and focus. People sometimes got distracted by the new shiny information, the new observation that just dropped onto the board — and failed to pay attention and keep an eye on the bigger problems and longer term insights.

Title text: Realigning problems…

Our initial attempt at fixing this was to do a periodic realignment exercise. We'd take a step back every few weeks or months, depending on the bit of work, and take a look at the Iterative Persona models that we'd built up.

Sometimes we ended up throwing them away, because we discovered the entire model really wasn't that relevant anymore to the work that was happening.

In the top-left an illustration of somebody throwing something into a bin.

Sometimes we discovered that there was a new model — a whole chunk of research about a different kind of customer that we weren't looking at until this point. It didn't fit any of our existing models, so we needed to kick off a whole new artefact.

Two illustrations. To the left somebody throwing something into a bin. To the right somebody looking through a telescope.

Sometimes we had two different models and we merged them together. We saw that an observation that we thought might be an important difference between two different classes of customer turned out not to be important at all. And everything could sit together in the same board.

Three illustrations. Top-left somebody throwing something into a bin. Top-right somebody looking through a telescope. Bottom a sign for merging lanes.

Sometimes we did the opposite. Sometimes we split things. We had one model, and were seeing some new behaviours that differed between groups. Having it mixed in with everything else was making those new insights hard to focus on. So we split it out into two different artefacts.

Four illustrations. Top-left somebody throwing something into a bin. Top-right somebody looking through a telescope. Bottom-left a sign for merging lanes. Bottom-right a sign for splitting lanes

That helped. But after we started doing that splitting out and merging back together, we noticed something else that was really quite interesting. We noticed that different things changed at different paces. Some stuff changed rapidly. And some stuff changed much less frequently.

For example, a new iOS release came out with new notifications and widgets. Suddenly our customers are all talking about that feature — "the app should really do that" and "why isn't it doing it?" A new thing that didn't exist a month ago that everyone's now talking about. People want to pay attention to it and do things about it right now.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Many short blue arrows illustrate rapid transitions between the sections.

Other research insights change much less often. There's an overall need for live notifications, that hangs around. But it doesn't change particularly often.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). A few medium length blue arrows illustrate occasional transitions between the sections.

Other insights change really rarely — an underlying customer need about wanting to feel confident they're not missing important facts about the project. Those insights are well validated, solid needs, and they don't change at all, or really rarely.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). One long blue arrow illustrates a rare transition between the sections.

These different cadences reminded me of Stewart Brand's "shearing layers" in his book How Buildings Learn, which in turn was based on some work from the 1970s by an architect called Frank Duffy, who talked about "layers of longevity".

To the left is a black and white picture of Stewart Brand lifting his glasses to his forehead. To the right is a stylised illustration of various layers of a house with 'stuff' at the centre, surrounded by 'space plan', 'services', 'skin', 'structure', and finally 'site'.

If you look at any building, an office block or a house, there's the stuff in the building — the furniture, the decorations — and that changes a lot. People get new chairs, change the colour of the walls, etc.

And then there's the physical space that the building has — the rooms — and you'll sometimes add a window or knock down a wall, especially if it's an office space with easily movable things. But that happens less often than changing the "stuff".

Then there's services like plumbing and electricity. They change less often than the space of the building. Then there's the skin of the building. Then there's the underlying structure of the building, and finally there's the site the building sits on — the physical ground. All those things are interrelated, but they change at very different paces — which reminded me of the different rates of change in the research.

To the left is a black and white picture of Stewart Brand lifting his glasses to his forehead. To the right is a stylised illustration of pace layers: 'fashion' at the top, supported by 'commerce', 'infrastructure', 'governance', 'culture' and finally 'nature'.

Brand later talked about Pace Layers in the context of civilizations. Fast-paced "fashion" relied on slower-paced "commerce", which relied on even slower-paced "infrastructure" changes — and so on, until he got right down to the natural world: the ground, the earth, the trees, etc.

To the left is a black and white picture of Stewart Brand lifting his glasses to his forehead. To the right is his quotation “Each layer is functionally different from the others and operates somewhat independently, but each layer influences and responds to the layers closest to it in a way that makes the whole system resilient.”

As Brand says:

“Each layer is functionally different from the others and operates somewhat independently, but each layer influences and responds to the layers closest to it in a way that makes the whole system resilient.”

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). At the top, many short blue arrows illustrate rapid transitions between the sections. In the middle, a few medium length blue arrows illustrate occasional transitions. At the bottom, a single long blue arrow illustrates a rare transition.

These kinds of layers of different-paced work were something that we'd seen with our Iterative Persona. So we moved things around! We separated the insights that we'd seen into layers. Insights that changed a lot were pushed up the diagram, insights that changed rarely were pushed down the diagram.

We started calling it Pace Layer Mapping.

Text title: Pace Layer Mapping (underneath a list of old crossed-out titles: 'Scale of Truthiness', 'Lean Persona', 'Incremental Persona', 'Iterative Persona', and 'Flair')

We started adding things on there that weren't directly customer related. They were about product insights, or other research insights that we found out when we were doing experiments and delivering the product.

A meme style photo of the character Inigo Montoya from The Princess Bride saying 'No, there is too much. Let me sum up.'

So let's review. We started off with a one-off alignment activity. Helping people get confidence in which research insights were true and which weren't. And understanding where everybody sat along those dimensions.

A photo of a group of people in front of a wall with lots of post-it notes on. The people have been blurred out. Above the post-it notes there is a scale that goes from 'Wild Guess' on the left to 'True' on the right.

We started adding structure and rules to that initial one-off artefact as people started referring to it more often during product development work.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows running left-to-right show transitions from Guess to Weak, from Weak to Strong, and from Strong to True. Blue arrows running right-to-left illustrate transitions back from True to Strong, and from Strong to Weak.

When we saw that irrelevant information was coming back, again, and again, and again — despite the fact it wasn't useful for driving the product direction — we pulled it out into a separate area.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). Blue arrows running left-to-right and right-to-left illustrate transitions between the sections. Underneath is another boxed off category labelled 'Flair' containing some photos.

Then, when we saw different bits of research moving at different paces, we started separating things into layers depending on how often things changed.

A picture of a wall of post-it notes divided into four categories running left to right (Guess, Weak, Strong, and True). At the top, many short blue arrows illustrate rapid transitions between the sections. In the middle, a few medium length blue arrows illustrate occasional transitions. At the bottom, a single long blue arrow illustrates a rare transition.

Which brings us back to that nice, neat canvas.

A picture of a grid titled 'pace layer mapping' with a description of 'yet another canvas' — four horizontal squares by three vertical. A few yellow post-it notes are sprinkled across the squares.

We have confidence in the utility of research insights mapped horizontally.

A pace layer mapping canvas — with the confidence dimension illustrated left-to-right.

We have how often those insights change mapped vertically.

A pace layer mapping canvas — with the pace-of-change dimension illustrated top-to-bottom.

We define transition rules for how we decide to move the insights around the map.

A pace layer mapping canvas — with boxes containing the transition rules highlighted at the bottom.

All those things together let us do some really interesting things with the team as a whole.

A pace layer map with the left quarter highlighted.

We know the stuff over on the left has weak evidence and needs more research. That means developers can ignore it. Sometimes a bunch of the product people can ignore it as well.

A pace layer map with the centre half highlighted.

We know we have stronger evidence for insights in the middle. It's time to do more experiments, or build MVPs, or make small investments around the product direction and strategy.

A pace layer map with the right quarter highlighted.

But when we shift over to the right we know we have much stronger evidence. We can use this information to drive decisions about building features that we can be fairly confident are going to fulfil a customer need. We can be fairly sure that the bets we make here are going to pay off.

A pace layer map with the top third highlighted.

We know that the insights at the top are subject to change a lot. So if we are going to make an investment there, we know that it should probably be a short-term rather than a long-term thing — until we see how they play out over a longer period of time. Maybe we should just wait and not do the thing until it settles down. Those are the decisions that layer helps drive.

A pace layer map with the middle third highlighted.

As we move down the map, we have more and more confidence that these things are less likely to change on short notice.

A pace layer map with the bottom third highlighted.

The bottom is a place where we can make larger and larger investments with more confidence. We know the bottom layer isn't going to change very often. We know the history. We know that it's not going to be something that is going to be moving around every other week or month.

A meme style image of a character from the film Office Space saying 'We need to talk about your flair'. He's a middle-aged man with a moustache & glasses wearing a green waistcoat with many, many, many badges.

What about the "flair"? Those bits of information that aren't actually useful for decision making. We had that separate little category before and it's gone now. "Flair" naturally ends up in that bottom-left chunk of the map. It doesn't change much and it's not useful for product decisions, so it gets pushed back to the left and pushed down to the bottom.

A pace layer map with the bottom-left quadrant covered by a meme style image of a character from the film Office Space saying 'We need to talk about your flair'.

Another way to view a Pace Layer Map is by treating it as quadrants.

A pace layer map with the bottom-left quadrant highlighted and labelled 'ignore'.

Insights that migrate to the bottom-left are things that tend not to change, are not useful, or turn out not to be true — and can generally be ignored.

A pace layer map with the top-left quadrant highlighted and labelled 'research'.

Whereas things in the top-left are likely to change, so they probably need more research. They need a bit more attention to see what's happening with them, and whether they're going to be useful.

A pace layer map with the top-right quadrant highlighted and labelled 'short-term'.

In the top-right, we have some confidence that those are actual customer problems. But things might change quickly there — they might be a transient thing. So short-term investments, short-term product work, smaller bets are the things that we do in that top-right quadrant.

A pace layer map with the bottom-right quadrant highlighted and labelled 'long-term'.

The insights on the bottom-right are stable. We have higher confidence in their long-term utility, so they're much more suitable for longer-term investments — we can go build features on this right now.

Text title: Why mapping?

Why are we saying "map" and "mapping" when we talk about using these diagrams?

Some of you might be familiar with Simon Wardley's work. He has quite a strict definition of a map. He says

A photo of Simon Wardley saying 'I often visit new companies and they tell me that they use maps. I get excited and ask to look. Unfortunately what they show me are usually box and wire diagrams which lack basic elements of mapping. From a strategic point of view they are next to useless. They then often try to show me their strategy based upon their maps which invariably is the usually endless round of meme copying, consultant blah blah and wasted effort.'

"I often visit new companies and they tell me that they use maps. I get excited and ask to look. Unfortunately what they show me are usually box and wire diagrams which lack basic elements of mapping. From a strategic point of view they are next to useless. They then often try to show me their strategy based upon their maps which invariably is the usually endless round of meme copying, consultant blah blah and wasted effort."

He doesn't like people calling things maps that aren't maps!

Sketch map of the battle of the pass of Thermopylae; showing how the Persian army was forced along the coastal road into the narrow pass of Thermopylae. In the bottom left is a key indicating that the map is visual, shows context, shows position, has an anchor, shows movement, and has components.

This is Simon's definition of a map (this is a picture from one of his posts). The key elements for Simon are that it's:

  1. Visual.
  2. It's context specific, it's about a particular situation or environment.
  3. There are separate movable components on the map. So in a military map, there are armies.
  4. Those components have position relative to other things on the map — there's some kind of anchor. A North Star, a compass, a grid, etc.
  5. The components move around. So you can shift things and see how things move as that context changes.

Pace Layer Maps have all of those properties.

They're visual. They're about a particular context — a particular bit of research, a particular product context. The components, insights, and other things on the map have positions relative to an anchor around confidence and pace — there is an architecture to it. The components move around, and we have rules for how those components move around.

Once we have this mapping, we can start seeing the connections between things more easily.

An example of a pace layer map drawing a connection between a stable insight at the bottom-right and a less stable insight in the centre-right, and then further connections between that less stable insight and two even less stable insights in the top-right.

For example, a long-term, established need to stay current in that bottom-right quadrant might drive a more medium-term insight around staying up-to-date, which might in turn drive short-term new customer needs — building a new notification widget for a new operating system, or having new push notifications in the web app, or Alexa notifications, or whatever.

So people can start seeing the connections between different aspects of research and product work all in one place, which is just super useful.

Text title: Pace Layer Mapping (underneath a list of old crossed-out titles: 'Scale of Truthiness', 'Lean Persona', 'Incremental Persona', 'Iterative Persona', and 'Flair')

That's where Pace Layer Mapping is now — but as I said at the start, if you're going to take one thing away, take away this…

Remember the messy journey we took from that initial one-off alignment activity, through renaming, adding new structure, adding rules, adding flair, discovering the new structure around different pace layers, and how to interpret that map once we had drawn it. That was a long, messy journey, with lots of different clients, over a period of years.

And that's the lesson I want you to take away: The artefacts and practices that you use should be useful! When they're not, please change them until they are.

Further Reading

Keep in touch

If you liked this you'll probably like our newsletter (No fluff — just useful, applicable information delivered to your inbox every other week. Check out the archive if you don't believe us.)