Pitfalls of Prioritization
Are you a PM? Then I'm sure prioritization is part of your process. But have you ever stopped to consider the pitfalls of those frameworks as much as the value they add?
How it all might have started
Decades ago, well before effective prioritization and its best practices existed, product decisions may have come out of discussions between various department heads. Whoever led the discussion would push the list of products or features he thought would be nice to have, based on reasoning that grossly represented his own perspective rather than the market's or users' voice, or what was needed to stay ahead on the organizational objectives and goals defined by the leadership.
After perhaps a few failures, a lack of adoption from users, and tons of feedback about how a feature or product failed to address the central problem, let alone solve it even partially, we today have a standard: a benchmark of popularly followed and adopted practices in the form of prioritization methods.
Quick Walkthrough
These prioritization methods could in turn depend on a few variables / parameters like:
User Behavior
User Validation
Quantitative data
Qualitative data
Of the many prioritization methods used widely, here are some popular ones with a short description of what they involve.
NOTE: While each of these methods could fill a separate chapter in a book on prioritization frameworks, and a full treatment is out of the scope of this article for obvious reasons, it is important to have at least a working knowledge of all of them.
1.Kano
Defines and splits features into Must-haves, Performance benefits and Delighters, in an attempt to gauge user satisfaction.
Type: Qualitative
User Validation Required: Moderate
2.MoSCoW
Tries to split all the listed features over 4 different tranches, that is:
must have
should have
could have &
won’t have
Type: Qualitative
User Validation Required: Moderate
3.Weighted Shortest Job First (WSJF) aka Cost of Delay
Puts a cost on the impact a delay in shipping the said feature would have on the market, then weighs it against an estimate of the duration (quantified by the effort to build that feature); see the sketch below.
Type: Quantitative
User Validation Required: Low
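To make the arithmetic concrete, here is a minimal sketch of WSJF scoring. The feature names and numbers are illustrative assumptions, not prescribed values; the method itself only fixes that the score divides the cost of delay by the job duration.

```python
# Minimal WSJF sketch: score = cost of delay / job duration.
# Feature names and numbers are illustrative assumptions only.
features = {
    # name: (cost_of_delay, duration_in_weeks)
    "checkout revamp": (8, 4),
    "dark mode": (3, 1),
    "SSO support": (13, 8),
}

wsjf = {name: cod / duration for name, (cod, duration) in features.items()}

# The job with the highest WSJF score is taken up first.
for name, score in sorted(wsjf.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```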
4.Cost vs Benefit
Lists out the drivers and scores each feature against them on a scale of 1-5 (Low to High); the highest total across the row is the winner (see the sketch below).
Type: Quantitative
User Validation Required: Low
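A minimal sketch of that tallying, assuming a hypothetical set of drivers and made-up 1-5 scores:

```python
# Cost vs Benefit sketch: each feature is scored 1-5 against a set of drivers
# and the highest row total wins. Drivers, features and scores are made up.
drivers = ["revenue impact", "user value", "strategic fit", "ease of build"]

scores = {
    "feature A": [4, 5, 3, 2],
    "feature B": [3, 4, 4, 5],
}

totals = {feature: sum(row) for feature, row in scores.items()}
winner = max(totals, key=totals.get)
print(totals, "->", winner)
```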
5.RICE
Starts with weighing each of Reach, Impact, Confidence & Effort numerically and then calculates a score with the formula: Score = (Reach × Impact × Confidence) ÷ Effort.
The feature with the highest score is the one that gets prioritized (see the sketch below).
Type: Quantitative
User Validation Required: Low
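A minimal sketch of the RICE calculation; the backlog items and their numbers are assumed for illustration, and dropping Reach from the same arithmetic gives ICE:

```python
# RICE sketch: score = (reach * impact * confidence) / effort.
# Reach per quarter, impact on a 0.25-3 scale, confidence as a fraction,
# effort in person-months -- every value below is an illustrative assumption.
backlog = [
    {"name": "onboarding flow", "reach": 2000, "impact": 2.0, "confidence": 0.8, "effort": 4},
    {"name": "export to CSV", "reach": 500, "impact": 1.0, "confidence": 1.0, "effort": 1},
]

for item in backlog:
    item["rice"] = item["reach"] * item["impact"] * item["confidence"] / item["effort"]

# The feature with the highest score is prioritized first.
for item in sorted(backlog, key=lambda i: i["rice"], reverse=True):
    print(item["name"], round(item["rice"], 1))
```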
6.ICE
(same as the above with the parameter ‘R = REACH’ dropped)
Type: Quantitative
User Validation Required: Low
7.Buy-a-Feature
Starts with listing out features that’d make sense to have alongside their prices.
These are all thrown open to the users, and the features picked most often across all users are the ones carried forward (see the sketch below).
Type: Qualitative & Quantitative
User Validation Required: High
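As a rough sketch of the tallying step, assuming each user "buys" features within a fixed notional budget; the users, prices and purchases are invented for illustration:

```python
# Buy-a-Feature sketch: count how often each feature is bought across users.
# All names, prices and purchases below are illustrative assumptions.
from collections import Counter

prices = {"offline mode": 40, "API access": 60, "custom themes": 20}
budget = 100  # notional budget each user can spend

purchases = {
    "user 1": ["offline mode", "API access"],
    "user 2": ["offline mode", "custom themes"],
    "user 3": ["API access", "custom themes"],
}

# Sanity check: each user's picks must fit within the budget.
assert all(sum(prices[f] for f in picks) <= budget for picks in purchases.values())

# Features bought by the most users are carried forward.
tally = Counter(f for picks in purchases.values() for f in picks)
print(tally.most_common())
```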
8.User Story Mapping
Stories (which ought to eventually be converted into features) are laid out on a board from left to right and sorted top to bottom by the value each is expected to add to the end users and the overall business.
Type: Qualitative
User Validation Required: Moderate
9.Affinity Mapping
Team members come together, collaborate and brainstorm, placing each of the listed features on sticky notes and arranging them hierarchically by how important they deem each to be.
Type: Qualitative
User Validation Required: Low
10.Opportunity Scoring
Rates each required feature on its importance, while gauging whether the market's needs are, so to speak, overserved or underserved, and estimating whether building it would lead to satisfaction or dissatisfaction (see the sketch below).
Type: Quantitative
User Validation Required: Low
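A minimal sketch, assuming the commonly used formulation of the opportunity score (often attributed to Anthony Ulwick's outcome-driven innovation), where opportunity = importance + max(importance − satisfaction, 0); the features and ratings below are made up for illustration:

```python
# Opportunity scoring sketch using the common formulation
# opportunity = importance + max(importance - satisfaction, 0),
# with importance and satisfaction rated on a 1-10 scale.
# Features and ratings are illustrative assumptions.
ratings = {
    "bulk upload": {"importance": 8, "satisfaction": 3},   # underserved need
    "search": {"importance": 9, "satisfaction": 8},        # well served
    "integrations": {"importance": 6, "satisfaction": 7},  # overserved
}

for name, r in ratings.items():
    opportunity = r["importance"] + max(r["importance"] - r["satisfaction"], 0)
    print(name, opportunity)
```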
As is apparent from the list above, out of the 10 frameworks listed here only a couple may require high user validation, since their features are thrown open to the users directly and their choices are tracked.
A common strategy across any of the above frameworks seems heavily reliant on "majority wins". But the question to ponder is: when one refers to the majority, how much of it is hypothesized and how much of it stands validated? And, given that understanding, is this the right way to go about it?
Let's find out in the next section.
Pitfalls of Prioritization
Going by the golden rule of mathematical theorems and postulates, as we have been warned ever since we first learnt these things in school:
“In theory each & every rule / theorem / postulate / converse has a criterion, a context & a boundary of application beyond which it ceases to hold good”.
In a similar vein, there are many situations where the prioritization frameworks described above cannot be applied; in most cases they require some kind of extrapolation based on a few assumptions that may not always land in favor of the entire exercise, and thus end up inducing a whole new bias. Those assumptions are bound to create pitfalls with many layers to them, and in the real world there is no substitute for being forewarned.
1. Decision-making Bias
Necessity & Needs
PMs and related teams carry the onus of creating value by scoping the problems being faced today, and so tend to consider the variables and parameters already known to them, treating the rest as assumptions / extrapolations.
But in an ever-changing world where nothing is constant, there is a threat of the solution itself becoming out of place very soon in some cases.
External Factors
There has to be a fine balance between the knowledge gained through discovery & research and the external factors that influence users' buying decisions, including:
price
adoption patterns
user attitude and behavior towards the product
legalities
compliance with regulatory guidelines
Competition
Most PMs need to have, and usually do have, a good understanding of the domain / space they operate in. That, too, can induce a common-knowledge bias in what they think is right for the product and the direction to take, basing it on decisions the competitors have taken.
Not a One-size-fits-all
Time stands testimony to the fact that no single framework fits all situations, teams and organizations. Yet some teams go by the book in following a method touted as the best, ignoring the fact that ten others were considered in a trial and that this one was chosen as the best only for the situation that particular team was in.
Over Complicating Things
Sometimes PMs tend to factor in a whole lot more than necessary, looking at the problem generically without analyzing the nuances of the situation, let alone the justifications & validations needed to counter that complication.
No Standard Scoring
The fact that there is no prescribed standard for these scores, and that teams can introduce their own benchmarks, can make things worse at times. It then turns into a case of aiming too low, compromising the quality of features and delivery, which reflects in poorer adoption than presumed.
Recency
Blame it on human error / thinking: there is a bias induced by the most recent thing that worked. Whatever worked well in a previous organization / team / silo / situation is carried forward as a template and used as the base to build on. In reality, this rarely works unless the products / features involved are all unidimensional, which is again a super-rarity.
Ignoring Constraints
Every organization / product / target market / user base / problem / solution has constraints. The tendency is sometimes to jump quickly into solutioning, estimating / over-estimating a problem without doubling down on the constraints, which leads to terrible decisions: the features picked may end up being grossly wrong for the market & users.
Management Intervention
Feature prioritization is sometimes dropped altogether, and requests coming from sources high up the management ladder / a very powerful external stakeholder are prioritized directly.
Underestimating Impact
While it is important to think tactically at the feature level, it is equally important to figure out how that feature fits into the users' ecosystem and alongside the features already shipped. Planning and impact assessment are crucial; without them one can easily miss the big picture.
Verbatim
Talking to users directly is a good practice, but asking them what to build next isn't that great an idea. With frameworks like Buy-a-Feature, what users say they think they want may have to be taken with a pinch of salt.
2. Risks, Latency, Delays
Not Iterating Enough
Prioritization, if anything, is an exercise that has to be iterated on again and again: collaborating and brainstorming across teams, considering and vetting every perspective on the aspects, variables, stakeholders et al. If it is done and dusted in a single sitting, you do not give it enough room to evolve and course-correct.
Wasting Time Focusing on Trivial Matters
Little distractions over matters that are trivial compared to the macros, such as outcomes and value additions, lead to a terrible lack of focus; the resulting overruns from rework and reshuffling prove costly in both time and money.
Ignoring Complexity & Effort
Some frameworks fail to take complexity into consideration and thus overlook the effort required. Even after prioritization, having no information on complexity & effort can most certainly lead to unwanted delays.
Not Adapting to New Findings
Timing is indeed everything for the market, users, problems and products. When a finding is significant enough to force a complete change of course / direction, it becomes crucial to own up to it and incorporate it into the solutioning process; otherwise one can end up wasting a lot of time building something unwanted, leading to a terrible lack of interest.
Alignment
With team members freely participating in brainstorming sessions aimed at picking one feature out of a given lot, alignment issues arise more often than not and may require the intervention / supervision of someone experienced to get sorted.
3. Maintenance Issues
Complexity
Listing out all features (100+) can get very complicated unless affinities and interdependencies are identified, charted out, sorted & bucketed against clearly defined criteria. And the level of complexity shoots up further if you already have a full backlog of bug fixes and new feature releases across a product suite to cope with.
Subjectivity
In some methods, MoSCoW for instance, once the features are listed and the team tries to map / order them by importance, the problem is the tirades teams can get into over who perceives which feature to be a Must / Should / Could / Won't have. Every team member's opinion can be very subjective, and exploring all of them over multiple brainstorming / collaboration sessions may be literally impossible with all the dangling deadlines.
4. Mitigation Methods
So, if there are so many issues with using these frameworks to prioritize features for a given product, why are they still being used, and how is it that there hasn't been any innovation in factoring in the constraints and bettering these models?
Not every methodology prescribed and used worldwide is a perfect fit for every situation. But each has one chief purpose it was designed for, and as long as you stick to that purpose and understand the limitations, you can eliminate the pitfalls & mitigate the risks.
Without pointing to any particular framework / methodology discussed above, broadly the methods that could underpin the mitigation strategy are:
1. Be aware of the PROs & CONs of each methodology
2. Collaborate effectively with your teams
3. Focus on the Users / Market’s voice
4. Double-down into the problems / pain points of the market
5. Factor in Constraints like Complexity, Effort, Timelines, Budgets et. al.
6. Validate findings before finalizing anything
7. Ensure you are always building a shared understanding
8. Understand your team's comforts / strengths and play to them
9. Revisit features over ideation session(s) esp. post user-interfacing
10. Standardize Quantitative Data (i.e., scores, costs, delays, estimates)
11. Push for continuous improvement in those set standards