
Customizing JPD Fields

So You've Outgrown the Training Wheels

So I want to talk about something that happened to me (well, happened to a team I was working with, because I've learned that "happened to me" in blog posts is a social contract that obligates me to be embarrassed, and I've met my quota this month).

They'd been using Jira Product Discovery for about six months. They were pretty happy with it. Ideas were coming in, the board didn't look like a disaster zone, and they'd even figured out how to share a roadmap view with the exec team without triggering a thirty-minute meeting about what "In Progress" means. Good stuff.

Then one day a product manager (let's call her Sarah, since that's not her name) said: "How do I know which of these ideas anyone has actually looked at?"

And nobody had a good answer. Because the answer was buried somewhere in a Status column that was trying to do three jobs simultaneously and doing none of them particularly well. The ideas were all nominally "Backlog," which in practice meant anything from "we know about this and it's interesting" to "this was submitted eighteen months ago and has been silently ignored by four different PMs." All equally Backlog. All equally mysterious.

This is the moment a lot of teams hit, and it's actually a good sign when they do. It means the defaults have done their job.

A Brief Defense of Defaults (Before We Evolve Past Them)

Here's the thing about JPD's out-of-the-box fields (Impact, Effort, Status, the standard prioritization stuff): they're actually pretty thoughtful. Atlassian didn't just throw darts at a whiteboard. Those defaults let you stand up a working discovery process in an afternoon. They're genuinely good for onboarding a team that's never used dedicated product discovery tooling before, or for an org that's still figuring out what their prioritization philosophy even is.

The defaults are a starting point, and a decent one. Think of them the way you'd think of a furnished apartment: it's not your taste, exactly, but it's livable while you're getting oriented, and there's a certain wisdom in not making every decision on day one when you don't yet know how you live.

At some point, though, you know how you live. You've been in the space long enough to know that you need a real desk in that corner and that the couch belongs under the window, not against the wall. And that's when you start customizing. Not because the defaults were wrong, but because your requirements got specific.

That's the whole post, really. But let me tell you how to actually do it.

The Thing Nobody Tells You About Field Types

Before you create a single custom field, you have to internalize something that the documentation handles with less enthusiasm than it deserves: field type is a decision with long-term consequences.

It's not just a format choice. It determines how your data sorts. How it participates in formulas. What a stakeholder sees when they look at a roadmap six months from now. Getting this wrong isn't catastrophic (worst case, you create a replacement field and migrate the values over), but you'll feel the friction every time you're in a view wondering why something isn't sorting the way you expect.

The three field types you'll be choosing between for most custom operational fields are Select, Rating, and Number, and they are not interchangeable, even though they can all kind of represent the same underlying concept if you squint.

Select is for things that belong in named buckets. Categories. Labels. "Strategic Theme" is a Select field (when you're not using Atlassian Goals, which we'll get to). The value is meaningful as a name, not as a quantity. You're not saying one option is bigger than another, you're saying it's different. The tradeoff (and there's always a tradeoff) is that Select fields don't do math. You can filter by them, group by them, sort them, but you can't stick them in a formula and get a number out. JPD does let you assign weights to Select options for prioritization purposes, which is a clever little escape hatch, though the weights are invisible to whoever's filling the field in, which creates its own interesting team dynamics.
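That weighted-Select escape hatch is worth making concrete. Here's a small Python sketch of the mechanic (the option names and weights are made up for illustration; in JPD you'd configure the weights in the field's settings, not in code):

```python
# Hypothetical illustration of Select-option weights: the person filling in
# the field sees only the label; the prioritization math sees a hidden number.
OPTION_WEIGHTS = {          # assumed options, configured in field settings
    "Core platform": 3,
    "Growth bet": 2,
    "Quality of life": 1,
}

ideas = [
    ("Audit log export", "Core platform"),
    ("Confetti on signup", "Quality of life"),
    ("Referral program", "Growth bet"),
]

# Sort by the weight the PM never sees when picking the option:
ranked = sorted(ideas, key=lambda idea: OPTION_WEIGHTS[idea[1]], reverse=True)

for summary, option in ranked:
    print(f"{summary}  ({option}, weight {OPTION_WEIGHTS[option]})")
```

The "interesting team dynamics" mentioned above fall out of exactly this asymmetry: the ranking changes based on numbers that are invisible at data-entry time.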

Rating is for things with a natural order but where the specific distance between values isn't really the point. A 1-5 gut-check on strategic fit. A confidence level that goes from "I made this up" to "I have actual data." Rating fields are visually satisfying (JPD renders them as stars or a clean 1-5 scale), they sort well, and they feed into formula fields. Their limitation is the ceiling: five. That's it. If you want a Confidence Score you can express as 80% or 35%, Rating is going to frustrate you, because it'll round your nuance down to the nearest star. There's also an implicit calibration problem that teams don't usually notice until it's caused a disagreement: a "3" means something slightly different to every person on your team, and without labels on the intermediate values, you have no way to surface that until someone goes "wait, I thought 3 meant moderate confidence" and someone else thought it meant "I checked one Slack message."

Number is for when arithmetic matters. Percentages, dollar amounts, calculated scores: anything where the difference between 40 and 80 is meaningful and you might someday want to multiply it by something else. Number fields support conditional coloring (useful for making low-confidence items visually pop), they're sortable, and they're the building block for Custom Formula fields. The catch: they require documentation and team discipline in a way that Rating doesn't. A Number field without a clearly agreed-upon scale is just an invitation for entropy. Someone enters 8 thinking it's out of 10. Someone else enters 0.8 thinking it's a decimal. Now your RICE scores are doing interpretive performance art.

The short version: Select for categories, Rating for quick ordinal gut-checks, Number for anything that needs to survive contact with a formula or a finance team.
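The entropy problem from the Number field paragraph is concrete enough to sketch. This is a hypothetical normalizer, not anything JPD runs; it just shows how an agreed 0-100 convention can absorb the most common entry mistakes:

```python
# Hypothetical helper for a team convention of "confidence is 0-100".
# The heuristics assume the only likely mistakes are decimal fractions
# and out-of-10 scores; adjust to your team's actual failure modes.

def normalize_confidence(raw: float) -> float:
    """Coerce common entry mistakes onto the agreed 0-100 scale.

    0.8 (decimal fraction), 8 (out of 10), and 80 (correct)
    all come back as 80.
    """
    if raw < 0:
        raise ValueError("confidence cannot be negative")
    if raw <= 1:        # someone entered a decimal fraction
        return raw * 100
    if raw <= 10:       # someone entered an out-of-10 score
        return raw * 10
    if raw <= 100:      # already on the agreed scale
        return float(raw)
    raise ValueError("confidence above 100 makes no sense on this scale")
```

Note the inherent ambiguity at the boundaries (is 1 a fraction or a tiny percentage?), which is exactly why documenting the scale up front beats cleaning the data afterward.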

Triage: Add It to Your Workflow Status, Not a Separate Field

Back to Sarah's question. The temptation is to create a separate "Triage Status" field to run alongside the built-in Status. Resist it. It creates two sources of truth for essentially the same question, and nothing enforces that ideas actually pass through triage before landing in Discovery.

The simpler fix is to add Triage as the first state in your existing Status workflow. Go to your JPD project settings, navigate to Statuses, and add "Triage" at the top of the list, before Parking Lot. Your workflow then reads:

Triage → Parking Lot → Discovery → Ready for Delivery → Delivery → Impact → Archived

Now the sequence is enforced by the workflow itself. A new idea lands in Triage, a PM reviews it and either moves it to Parking Lot (interesting but not now), Discovery (let's actively explore this), or Archived (not for us). Nothing reaches Discovery without a conscious human decision. No automation required, no separate field to keep in sync, no gap where someone accidentally skips the step.

Set up a simple automation to assign every incoming idea to Triage by default, and your List view filtered to "Status = Triage" becomes your working inbox. That's the whole system.
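If you ever want that same inbox outside the JPD UI (a Slack digest, a weekly count), JPD ideas are ordinary Jira issues, so the Jira Cloud REST API can run the equivalent JQL. A sketch, with placeholder site, project key, and credentials — and note the search endpoint has shifted over time, so check Atlassian's current API docs before relying on the exact path:

```python
# Sketch: pull the Triage inbox via the Jira Cloud REST API.
# "your-site", "IDEAS", and the env var names are placeholders.
import base64
import json
import os
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def triage_jql(project_key: str) -> str:
    """The JQL equivalent of a List view filtered to Status = Triage."""
    return f'project = "{project_key}" AND status = "Triage" ORDER BY created ASC'


def fetch_triage_inbox(site: str, project_key: str) -> list[str]:
    """Return the summaries of ideas waiting in Triage, oldest first."""
    query = urlencode({"jql": triage_jql(project_key), "fields": "summary"})
    token = base64.b64encode(
        f'{os.environ["JIRA_EMAIL"]}:{os.environ["JIRA_API_TOKEN"]}'.encode()
    ).decode()
    req = Request(
        f"https://{site}.atlassian.net/rest/api/3/search?{query}",
        headers={"Authorization": f"Basic {token}"},
    )
    with urlopen(req) as resp:
        payload = json.load(resp)
    return [issue["fields"]["summary"] for issue in payload["issues"]]


if __name__ == "__main__":
    print(fetch_triage_inbox("your-site", "IDEAS"))
```

The point isn't the script; it's that making Triage a real workflow status means every downstream tool, from JQL to automations, can see it.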

For a good mental model of how views and fields work together, and how to structure your board so the List view actually functions as the triage queue it should be, Released has a very practical guide to JPD views that covers exactly this.

Strategic Theme: The Field That Exposes the Gap Between Strategy and Backlog

Here is a pattern I've seen more times than I should admit: company has a stated strategy. Product team has a backlog. Nobody has ever formally connected the two. Leadership says "we're doubling down on enterprise" and the roadmap is full of features that are useful for teams of five people. Not because the product team is ignoring strategy (they're not, usually), but because there's no field making the relationship visible and therefore no friction when someone adds a well-intentioned SMB feature without thinking about the current strategic focus.

This is fixable in about ten minutes. But before you reach for a custom field, there's a question worth asking: is your org already using Atlassian Goals?

If the answer is yes, skip the custom field entirely and use JPD's built-in Goals field instead. Goals are a first-class object in the Atlassian platform (they used to live in Atlas, which has since been retired and absorbed into the core platform, accessible via Atlassian Home). They're defined centrally and shared across your Jira projects. When you link an idea to a Goal, you're connecting it to something your strategy team actually owns and updates. When a goal gets renamed, hits its target, or gets retired, that change propagates automatically. You're not maintaining a dropdown that slowly drifts out of sync with reality, which is what every manually maintained strategy field eventually becomes.

The other thing Goals gives you that a custom field can't: proper roll-up. You can see, at the goal level, how many ideas and how much delivery work is attached to each strategic objective. That's a genuinely useful view to bring to a planning cycle.

If your org isn't using Atlassian Goals, or if your "strategic themes" are looser than formal OKRs (more like buckets than tracked objectives), then a custom Multi-select field works well. Create one called 🏢 Strategic Theme and populate it with options that mirror how your leadership actually talks about priorities:

  • 🏢 Enterprise Expansion

  • 🔁 Retention & Engagement

  • 💰 Revenue Growth

  • 🔧 Platform Scalability

  • 🌍 International Markets

Either way, once your ideas are tagged, go to the Board view and group by that field. What you're looking at is a reasonably honest picture of where your team's actual attention is going, as opposed to where people say it's going in planning meetings. These two things are often usefully different, and the gap between them is usually worth a conversation.
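If you'd rather see that attention distribution as numbers than as board columns, the tally is trivial to script. The idea data here is invented for illustration; the one real nuance it demonstrates is that a Multi-select field counts an idea toward every theme it carries, just as the Board view shows it in multiple columns:

```python
# Sketch of the "where is attention actually going" tally. Sample data is
# made up; with real data you'd pull ideas via the Jira REST API instead.
from collections import Counter

ideas = [
    {"summary": "SSO for enterprise accounts", "themes": ["🏢 Enterprise Expansion"]},
    {"summary": "Weekly digest email", "themes": ["🔁 Retention & Engagement"]},
    {"summary": "Usage-based pricing tier", "themes": ["💰 Revenue Growth", "🏢 Enterprise Expansion"]},
    {"summary": "Fix onboarding drop-off", "themes": ["🔁 Retention & Engagement"]},
]

# Multi-select: one idea can count toward several themes.
attention = Counter(theme for idea in ideas for theme in idea["themes"])

for theme, count in attention.most_common():
    print(f"{theme}: {count} idea(s)")
```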

Released has a good piece on organizing and hierarchically structuring ideas in JPD, covering how Themes, Opportunities, and Features can nest together, which gives you the fuller picture of where strategic alignment fits into a layered discovery taxonomy.

Insights: Attaching Evidence to Ideas (and How to Do It Without Manual Work)

Linking ideas to Goals tells you why something matters strategically. The Insights field tells you what evidence you actually have for it. These are different questions, and JPD keeps them separate for a reason.

Insights are attached pieces of evidence: customer feedback quotes, interview notes, support ticket patterns, sales call snippets. When an idea has five Insights attached to it from real customer conversations, that changes how a prioritization discussion goes. When it has zero, you're debating gut feelings, and everyone's gut is equally authoritative, which is a polite way of saying nobody wins except the loudest person in the room.

The field exists by default in JPD. The question is how you populate it. Manual entry works for small teams with high-touch discovery practices, but it doesn't scale and it tends to get skipped when people are busy, which is always.

The more sustainable approach is to close the feedback loop automatically. Released's portal integration with JPD does exactly this: when customers or stakeholders submit feedback through a Released portal, vote on a feature request, or leave a comment, that input gets pushed directly into JPD as an Insight on the relevant idea. The evidence accumulates without anyone having to remember to copy-paste it.

What you end up with is a backlog where the ideas people care most about are visibly well-evidenced, and the ones that are mostly internal opinion have a conspicuously thin Insights column. That visibility is useful in ways that are hard to manufacture artificially. It doesn't tell you what to build, but it does make the conversation more honest.

Confidence Score: Honest About What You Know (and What You're Guessing)

Confidence Score is the field I've seen teams resist the most, and the field I've seen make the biggest difference once they stop resisting it.

The resistance usually sounds like: "We don't have great data on most of our ideas, so filling this in will just expose that we're mostly guessing." Yes. Exactly. That's the point. That's information.

In the RICE prioritization framework, Confidence specifically tracks how much evidence supports your Reach and Impact estimates. High confidence means A/B tests, verified analytics, actual user research. Low confidence means you talked to two customers and had a hunch. Both are fine. The danger is when you can't see the difference. When a 100% RICE score derived from hard data sits in the same list as a 100% RICE score someone reverse-engineered from optimism.

Use a Number field for this. Call it 🎯 Confidence Score. Set up conditional formatting: values below 50 go orange, values below 25 go red. Don't set hard validation on the range; trust your team and documentation for that. A conventional scale that works well: 100 for high confidence (you have real data), 80 for medium (solid qualitative signal), 50 for low (some evidence, significant assumptions), 20 for minimal (educated guess territory).
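The banding above is simple enough to state as a function. JPD applies these thresholds in the UI via conditional formatting; this just makes the convention explicit, scale labels and all:

```python
# The conditional-formatting thresholds from the text, made explicit.
# Assumes the team's 0-100 convention: 100 high, 80 medium, 50 low, 20 minimal.

def confidence_band(score: float) -> str:
    """Map a 0-100 confidence score to its visual band."""
    if score < 25:
        return "red"      # minimal evidence: educated-guess territory
    if score < 50:
        return "orange"   # low: some evidence, significant assumptions
    return "default"      # medium/high: solid signal or real data
```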

What you get is a view where the evidence quality of every idea is immediately visible without anyone having to read the description. Ideas cluster naturally into "things we're pretty sure about" and "things that are still hypotheses," and that distinction should drive different conversations and different levels of investment.

If you want to take this further and compute full RICE scores automatically, Released has done the step-by-step work in their RICE scoring guide for JPD. Your Confidence Score field plugs directly into a Custom Formula field to produce a live-calculated RICE total. It's pretty satisfying when it works.
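For reference, the arithmetic a Custom Formula field would run is the standard RICE calculation: Reach × Impact × Confidence, divided by Effort. Sketched in Python, with the usual scales assumed (Reach in users per quarter, Impact on Intercom's 0.25-3 scale, Confidence as the percentage from the 🎯 field, Effort in person-months):

```python
# RICE = (Reach × Impact × Confidence) / Effort, with Confidence as a percent.

def rice(reach: float, impact: float, confidence_pct: float, effort: float) -> float:
    """Compute a RICE score from the four inputs."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return reach * impact * (confidence_pct / 100) / effort

# 500 users/quarter, high impact (2), 80% confidence, 2 person-months:
# 500 * 2 * 0.8 / 2 → 400.0
```

Dividing by Effort is what keeps a well-evidenced but expensive idea from automatically outranking a cheap, confident one, which is most of what makes the framework useful.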

A Word About the Emojis (Because Someone Will Ask)

I've gotten exactly one piece of feedback every time I share this kind of setup: "do the emojis have to be there?" And the answer is no, they don't have to be. You can skip them. Your fields will work exactly the same.

But here's the thing: in a crowded JPD List view with twenty columns, 🎯 Confidence Score is easier to scan than Confidence Score. The emoji acts as a visual anchor. Your PMs are making split-second decisions about where to look, and a small pre-attentive signal (color, shape, icon) genuinely reduces cognitive load. Atlassian supports emoji in field names precisely because this matters at scale.

There's a second thing, which is harder to articulate: emoji in field names signal that someone thought about the field from the perspective of the person using it. It's a small ergonomic detail. Teams notice those details, even when they can't quite say why a setup feels well-considered versus haphazard.

Actually Putting This Together

Once these changes are in place, your JPD setup gains meaningfully new capabilities. You can pull up a filtered List view showing only ideas in Discovery or beyond, with 🎯 Confidence Score above 50, tagged to a specific 🏢 Strategic Theme. That view is a real prioritization conversation, not a backlog triage session.

You can publish a roadmap externally via Released that shows stakeholders what's in Discovery and beyond, without surfacing the Triage pile or the Archived ideas with snarky comments in them. How to publish JPD roadmaps covers that workflow cleanly.

The broader thing I'd leave you with is this: JPD is a platform that rewards investment. The defaults get you started (and they're genuinely good for that), but the teams that get compounding value from the tool are the ones who revisit their field setup every few months and ask "does this still reflect how we actually work?" The setup I've described here isn't universal. Your workflow states might be named differently. Your Strategic Themes will definitely be different. The point isn't the specific fields; it's the habit of building a taxonomy that's yours.

For more depth on how all the JPD pieces fit together, Released maintains a solid JPD knowledge base that's worth bookmarking.

Now go update your field settings. Sarah's waiting.


The defaults are a starting point, and a decent one. Think of them the way you'd think of a furnished apartment: it's not your taste, exactly, but it's livable while you're getting oriented, and there's a certain wisdom in not making every decision on day one when you don't yet know how you live.

At some point, though, you know how you live. You've been in the space long enough to know that you need a real desk in that corner and that the couch belongs under the window, not against the wall. And that's when you start customizing. Not because the defaults were wrong, but because your requirements got specific.

That's the whole post, really. But let me tell you how to actually do it.

The Thing Nobody Tells You About Field Types

Before you create a single custom field, you have to internalize something that the documentation handles with less enthusiasm than it deserves: field type is a decision with long-term consequences.

It's not just a format choice. It determines how your data sorts, how it participates in formulas, and what a stakeholder sees when they look at a roadmap six months from now. Getting this wrong isn't catastrophic (worst case, you delete the field and recreate it with the right type, at the cost of re-entering its values), but you'll feel the friction every time you're in a view wondering why something isn't sorting the way you expect.

The three field types you'll be choosing between for most custom operational fields are Select, Rating, and Number, and they are not interchangeable, even though they can all kind of represent the same underlying concept if you squint.

Select is for things that belong in named buckets. Categories. Labels. "Strategic Theme" is a Select field (when you're not using Atlassian Goals, which we'll get to). The value is meaningful as a name, not as a quantity. You're not saying one option is bigger than another, you're saying it's different. The tradeoff (and there's always a tradeoff) is that Select fields don't do math. You can filter by them, group by them, sort them, but you can't stick them in a formula and get a number out. JPD does let you assign weights to Select options for prioritization purposes, which is a clever little escape hatch, though the weights are invisible to whoever's filling the field in, which creates its own interesting team dynamics.
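To make that escape hatch concrete, here's a rough sketch of how hidden option weights behave, using made-up weights for a hypothetical Strategic Theme field. This is the idea, not JPD's exact algorithm: the person filling in the field only ever sees option names, while the weights participate silently in scoring.

```python
# Illustrative only: these weights and the scoring rule are assumptions,
# not JPD internals. The PM picking options never sees the numbers.
THEME_WEIGHTS = {
    "Enterprise Expansion": 3,    # current strategic focus, weighted up
    "Retention & Engagement": 2,
    "Revenue Growth": 2,
    "Platform Scalability": 1,
}

def weighted_theme_score(selected_themes):
    """Sum the hidden weights of the selected options (multi-select)."""
    return sum(THEME_WEIGHTS.get(theme, 0) for theme in selected_themes)

# An idea tagged to two themes quietly picks up both weights:
score = weighted_theme_score(["Enterprise Expansion", "Retention & Engagement"])
print(score)  # 5
```

That invisibility is exactly the team dynamic mentioned above: the scoring happens, but nobody entering data can see it happening.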

Rating is for things with a natural order but where the specific distance between values isn't really the point. A 1-5 gut-check on strategic fit. A confidence level that goes from "I made this up" to "I have actual data." Rating fields are visually satisfying (JPD renders them as stars or a clean 1-5 scale), they sort well, and they feed into formula fields. Their limitation is the ceiling: five. That's it. If you want a Confidence Score you can express as 80% or 35%, Rating is going to frustrate you, because it'll round your nuance down to the nearest star. There's also an implicit calibration problem that teams don't usually notice until it's caused a disagreement: a "3" means something slightly different to every person on your team, and without labels on the intermediate values, you have no way to surface that until someone goes "wait, I thought 3 meant moderate confidence" and someone else thought it meant "I checked one Slack message."

Number is for when arithmetic matters. Percentages, dollar amounts, calculated scores: anything where the difference between 40 and 80 is meaningful and you might someday want to multiply it by something else. Number fields support conditional coloring (useful for making low-confidence items visually pop), they're sortable, and they're the building block for Custom Formula fields. The catch: they require documentation and team discipline in a way that Rating doesn't. A Number field without a clearly agreed-upon scale is just an invitation for entropy. Someone enters 8 thinking it's out of 10. Someone else enters 0.8 thinking it's a decimal. Now your RICE scores are doing interpretive performance art.

The short version: Select for categories, Rating for quick ordinal gut-checks, Number for anything that needs to survive contact with a formula or a finance team.

Triage: Add It to Your Workflow Status, Not a Separate Field

Back to Sarah's question. The temptation is to create a separate "Triage Status" field to run alongside the built-in Status. Resist it. It creates two sources of truth for essentially the same question, and nothing enforces that ideas actually pass through triage before landing in Discovery.

The simpler fix is to add Triage as the first state in your existing Status workflow. Go to your JPD project settings, navigate to Statuses, and add "Triage" at the top of the list, before Parking Lot. Your workflow then reads:

Triage → Parking Lot → Discovery → Ready for Delivery → Delivery → Impact → Archived

Now the sequence is enforced by the workflow itself. A new idea lands in Triage, a PM reviews it and either moves it to Parking Lot (interesting but not now), Discovery (let's actively explore this), or Archived (not for us). Nothing reaches Discovery without a conscious human decision. No automation required, no separate field to keep in sync, no gap where someone accidentally skips the step.

Set up a simple automation to assign every incoming idea to Triage by default, and your List view filtered to "Status = Triage" becomes your working inbox. That's the whole system.
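The whole system fits in a few lines. This toy model (the `Idea` record and field names are illustrative, not JPD's data model) shows the two moving parts: the automation's default assignment to Triage, and the inbox as a plain status filter.

```python
# A toy model of the triage inbox. The default status mimics what the
# automation rule does in JPD; the inbox is just a filtered view.
from dataclasses import dataclass

@dataclass
class Idea:
    summary: str
    status: str = "Triage"   # the automation's default assignment

def triage_inbox(ideas):
    """The List view filtered to Status = Triage."""
    return [i.summary for i in ideas if i.status == "Triage"]

ideas = [
    Idea("Dark mode"),
    Idea("SSO for enterprise", status="Discovery"),
    Idea("CSV export"),
]
print(triage_inbox(ideas))  # ['Dark mode', 'CSV export']
```

Nothing reaches Discovery here without its status having been deliberately changed, which is the whole point of putting Triage in the workflow rather than in a parallel field.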

For a good mental model of how views and fields work together, and how to structure your board so the List view actually functions as the triage queue it should be, Released has a very practical guide to JPD views that covers exactly this.

Strategic Theme: The Field That Exposes the Gap Between Strategy and Backlog

Here is a pattern I've seen more times than I should admit: company has a stated strategy. Product team has a backlog. Nobody has ever formally connected the two. Leadership says "we're doubling down on enterprise" and the roadmap is full of features that are useful for teams of five people. Not because the product team is ignoring strategy (they're not, usually), but because there's no field making the relationship visible and therefore no friction when someone adds a well-intentioned SMB feature without thinking about the current strategic focus.

This is fixable in about ten minutes. But before you reach for a custom field, there's a question worth asking: is your org already using Atlassian Goals?

If the answer is yes, skip the custom field entirely and use JPD's built-in Goals field instead. Goals are a first-class object in the Atlassian platform (they used to live in Atlas, which has since been retired and absorbed into the core platform, accessible via Atlassian Home). They're defined centrally and shared across your Jira projects. When you link an idea to a Goal, you're connecting it to something your strategy team actually owns and updates. When a goal gets renamed, hits its target, or gets retired, that change propagates automatically. You're not maintaining a dropdown that slowly drifts out of sync with reality, which is what every manually maintained strategy field eventually becomes.

The other thing Goals gives you that a custom field can't: proper roll-up. You can see, at the goal level, how many ideas and how much delivery work is attached to each strategic objective. That's a genuinely useful view to bring to a planning cycle.

If your org isn't using Atlassian Goals, or if your "strategic themes" are looser than formal OKRs (more like buckets than tracked objectives), then a custom Multi-select field works well. Create one called 🏢 Strategic Theme and populate it with options that mirror how your leadership actually talks about priorities:

  • 🏢 Enterprise Expansion

  • 🔁 Retention & Engagement

  • 💰 Revenue Growth

  • 🔧 Platform Scalability

  • 🌍 International Markets

Either way, once your ideas are tagged, go to the Board view and group by that field. What you're looking at is a reasonably honest picture of where your team's actual attention is going, as opposed to where people say it's going in planning meetings. These two things are often usefully different, and the gap between them is usually worth a conversation.
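What the grouped board shows you is essentially a tally. A quick sketch of that tally (the ideas and theme tags here are invented for illustration) makes the one subtlety visible: with a Multi-select field, a single idea can count toward more than one theme.

```python
# Counting ideas per theme, the way a board grouped by a multi-select
# field effectively does. Example data is made up.
from collections import Counter

ideas = [
    {"summary": "SAML SSO", "themes": ["Enterprise Expansion"]},
    {"summary": "Onboarding checklist", "themes": ["Retention & Engagement"]},
    {"summary": "Usage-based billing",
     "themes": ["Revenue Growth", "Enterprise Expansion"]},
]

# One idea with two themes contributes to two buckets:
by_theme = Counter(t for idea in ideas for t in idea["themes"])
print(by_theme.most_common())
```

If the "Enterprise Expansion" column is tall because leadership said it should be, great; if it's tall because one idea got tagged with everything, that's a different (also useful) finding.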

Released has a good piece on organizing and hierarchically structuring ideas in JPD, covering how Themes, Opportunities, and Features can nest together, which gives you the fuller picture of where strategic alignment fits into a layered discovery taxonomy.

Insights: Attaching Evidence to Ideas (and How to Do It Without Manual Work)

Linking ideas to Goals tells you why something matters strategically. The Insights field tells you what evidence you actually have for it. These are different questions, and JPD keeps them separate for a reason.

Insights are attached pieces of evidence: customer feedback quotes, interview notes, support ticket patterns, sales call snippets. When an idea has five Insights attached to it from real customer conversations, that changes how a prioritization discussion goes. When it has zero, you're debating gut feelings, and everyone's gut is equally authoritative, which is a polite way of saying nobody wins except the loudest person in the room.

The field exists by default in JPD. The question is how you populate it. Manual entry works for small teams with high-touch discovery practices, but it doesn't scale and it tends to get skipped when people are busy, which is always.

The more sustainable approach is to close the feedback loop automatically. Released's portal integration with JPD does exactly this: when customers or stakeholders submit feedback through a Released portal, vote on a feature request, or leave a comment, that input gets pushed directly into JPD as an Insight on the relevant idea. The evidence accumulates without anyone having to remember to copy-paste it.

What you end up with is a backlog where the ideas people care most about are visibly well-evidenced, and the ones that are mostly internal opinion have a conspicuously thin Insights column. That visibility is useful in ways that are hard to manufacture artificially. It doesn't tell you what to build, but it does make the conversation more honest.

Confidence Score: Honest About What You Know (and What You're Guessing)

Confidence Score is the field I've seen teams resist the most, and the field I've seen make the biggest difference once they stop resisting it.

The resistance usually sounds like: "We don't have great data on most of our ideas, so filling this in will just expose that we're mostly guessing." Yes. Exactly. That's the point. That's information.

In the RICE prioritization framework, Confidence specifically tracks how much evidence supports your Reach and Impact estimates. High confidence means A/B tests, verified analytics, actual user research. Low confidence means you talked to two customers and had a hunch. Both are fine. The danger is when you can't see the difference. When a 100% RICE score derived from hard data sits in the same list as a 100% RICE score someone reverse-engineered from optimism.

Use a Number field for this. Call it 🎯 Confidence Score. Set up conditional formatting: values below 50 go orange, values below 25 go red. Don't set hard validation on the range; trust your team and documentation for that. A conventional scale that works well: 100 for high confidence (you have real data), 80 for medium (solid qualitative signal), 50 for low (some evidence, significant assumptions), 20 for minimal (educated guess territory).
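Since you're skipping hard validation, the documentation is the scale. One way to keep it honest is to write the convention down as data, alongside the same thresholds the conditional formatting uses. A minimal sketch, assuming the scale above:

```python
# The article's confidence convention as data. JPD won't enforce this;
# a soft check like this is the documentation, not validation.
CONFIDENCE_SCALE = {
    100: "high - real data (A/B tests, verified analytics)",
    80: "medium - solid qualitative signal",
    50: "low - some evidence, significant assumptions",
    20: "minimal - educated guess territory",
}

def confidence_color(score):
    """Mirror the conditional formatting: red below 25, orange below 50."""
    if score < 25:
        return "red"
    if score < 50:
        return "orange"
    return "default"

print(confidence_color(20))   # red
print(confidence_color(40))   # orange
print(confidence_color(80))   # default
```

A score of 65 won't break anything; it'll just invite the question "is that a confident 50 or a nervous 80?", which is a conversation worth having anyway.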

What you get is a view where the evidence quality of every idea is immediately visible without anyone having to read the description. Ideas cluster naturally into "things we're pretty sure about" and "things that are still hypotheses," and that distinction should drive different conversations and different levels of investment.

If you want to take this further and compute full RICE scores automatically, Released has done the step-by-step work in their RICE scoring guide for JPD. Your Confidence Score field plugs directly into a Custom Formula field to produce a live-calculated RICE total. It's pretty satisfying when it works.
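The arithmetic the Custom Formula field performs is the standard RICE formula, with one wrinkle worth noting: Confidence entered on a 0-100 scale has to be normalized to a fraction before the multiplication. A sketch, with illustrative numbers:

```python
# RICE = (Reach x Impact x Confidence) / Effort.
# Field names and example values are illustrative.
def rice(reach, impact, confidence_pct, effort):
    """Compute a RICE score; confidence comes in as a 0-100 percentage."""
    if effort <= 0:
        raise ValueError("effort must be positive (person-months, say)")
    return reach * impact * (confidence_pct / 100) / effort

# 2000 users/quarter, impact 2 (high), 80% confidence, 4 person-months:
score = rice(reach=2000, impact=2, confidence_pct=80, effort=4)
print(score)  # 800.0
```

Forget the division by 100 in the formula field and every score inflates a hundredfold, which tends to get noticed quickly, but not before someone screenshots it.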

A Word About the Emojis (Because Someone Will Ask)

I've gotten exactly one piece of feedback every time I share this kind of setup: "do the emojis have to be there?" And the answer is no, they don't have to be. You can skip them. Your fields will work exactly the same.

But here's the thing: in a crowded JPD List view with twenty columns, 🎯 Confidence Score is easier to scan than Confidence Score. The emoji acts as a visual anchor. Your PMs are making split-second decisions about where to look, and a small pre-attentive signal (color, shape, icon) genuinely reduces cognitive load. JPD happily accepts emoji in field names, so you're not fighting the tool to get that signal.

There's a second thing, which is harder to articulate: emoji in field names signal that someone thought about the field from the perspective of the person using it. It's a small ergonomic detail. Teams notice those details, even when they can't quite say why a setup feels well-considered versus haphazard.

Actually Putting This Together

Once these changes are in place, your JPD setup gets meaningfully different capabilities. You can pull up a filtered List view showing only ideas in Discovery or beyond, with 🎯 Confidence Score above 50, tagged to a specific 🏢 Strategic Theme. That view is a real prioritization conversation, not a backlog triage session.

You can publish a roadmap externally via Released that shows stakeholders what's in Discovery and beyond, without surfacing the Triage pile or the Archived ideas with snarky comments in them. Released's guide on how to publish JPD roadmaps covers that workflow cleanly.

The broader thing I'd leave you with is this: JPD is a platform that rewards investment. The defaults get you started (and they're genuinely good for that), but the teams that get compounding value from the tool are the ones who revisit their field setup every few months and ask "does this still reflect how we actually work?" The setup I've described here isn't universal. Your workflow states might be named differently. Your Strategic Themes will definitely be different. The point isn't the specific fields; it's the habit of building a taxonomy that's yours.

For more depth on how all the JPD pieces fit together, Released maintains a solid JPD knowledge base that's worth bookmarking.

Now go update your field settings. Sarah's waiting.

Build what matters

With customer feedback in Jira
