The ICE Scoring Model is a prioritization framework, popular in agile product development, that scores features according to three components: impact, confidence, and ease of implementation. The model was created by author and growth expert Sean Ellis to help companies grow. Today, it is broadly used to prioritize projects, features, initiatives, and rollouts. It is ideally suited to early-stage product development, where there is a continuous flow of ideas and momentum must be maintained.
The three measurements of the ICE Scoring Model
Prioritization is achieved by considering the three parameters that make up the ICE acronym. Each parameter is rated on a scale of 1 to 10, as explained in more detail below.
1 – Impact
Impact is defined as the potential of a project feature to support core business or user objectives.
Impact is rated as follows:
- 1 – very low impact.
- 2-5 – minimal impact.
- 6-8 – measurable impact.
- 9-10 – significant impact.
2 – Confidence
This describes the degree to which the project team is confident the impact will be realized. Confidence can be rooted in gut instinct, but it is better backed by hard data, such as experiment results or metrics that quantify known risks.
Score confidence like this:
- 1 – very low confidence.
- 2-5 – minimal confidence.
- 6-8 – measurable confidence.
- 9-10 – significant confidence.
3 – Ease of implementation
Simply put, how easily can the project feature be built or tested, and how much time and effort will it take to complete? Ease of implementation is ultimately determined by the capabilities of the team and the resources available to them.
Each business will score ease of implementation differently, but as a general rule:
- 1-2 – long time frame (3-6 months)
- 3-5 – significant time frame (2 months)
- 6-7 – minimal time frame (1 month)
- 8-10 – short time frame (2 weeks)
Calculating and interpreting ICE scores
To arrive at the ICE score, the team must rate each of the three parameters and then multiply them together. For example, a feature that scores 7 for impact, 5 for confidence, and 4 for ease of implementation receives a score of 140.
Alternatively, the team may choose to simply add the scores for each parameter to arrive at a final score. With either method, high-scoring features should receive priority, while the lowest-scoring features should be incorporated later or, in some cases, not at all.
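The multiplicative method above can be sketched in a few lines of code. This is a minimal illustration, not part of the original model; the feature names and ratings are hypothetical examples.

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """Multiply the three 1-10 ratings to produce the ICE score."""
    for rating in (impact, confidence, ease):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return impact * confidence * ease

# Hypothetical backlog: (impact, confidence, ease) per feature.
features = {
    "One-click signup": (7, 5, 4),
    "Dark mode": (4, 8, 9),
    "Referral program": (9, 3, 2),
}

# Rank features from highest to lowest ICE score.
ranked = sorted(features.items(), key=lambda kv: ice_score(*kv[1]), reverse=True)
for name, (i, c, e) in ranked:
    print(f"{name}: {ice_score(i, c, e)}")
```

Note that the example from the text, a feature rated 7 for impact, 5 for confidence, and 4 for ease, yields 7 × 5 × 4 = 140. Swapping the multiplication for a sum in `ice_score` gives the additive variant mentioned above.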
Strengths and weaknesses of the ICE Scoring Model
- Speed and simplicity. With just three parameters to consider, the ICE Scoring Model allows teams to rapidly prioritize tasks and move forward with momentum and purpose.
- Avoids analysis paralysis. Sean Ellis intended for the ICE Model values to represent a “good enough” estimation. While it is perhaps less rigorous than some other models, it does allow project teams to avoid becoming preoccupied with details.
- Prone to subjectivity. Parameter scoring is highly subjective. For example, how might a project team justify a confidence value of 8 rather than 7? In the worst cases, the model may also be prone to bias: a worthwhile feature requiring a lot of work may intentionally receive a lower score so that the team can avoid pursuing it.
- Requires broad expertise. Few people within an organization will have the expertise to score each parameter accurately. Ease of implementation is a technical consideration, while impact and confidence are business considerations.
Key takeaways
- The ICE Scoring Model prioritizes features or initiatives by scoring three key parameters: impact, confidence, and ease of implementation.
- The ICE Scoring Model is suited to early-stage product development where there is a flow of ideas and sustaining momentum is important.
- The ICE Scoring Model is a simple and reasonably accurate prioritization method. However, scores can be prone to bias as a result of subjectivity and a lack of requisite knowledge.