Sunday 19 August 2018

When everything is 4-star, how do you differentiate between them?

Ranking on a scale of 1 to 5 stars has emerged as a standard, whether it's reviews on Amazon, Google, or for an employer on a job board. The coloured stars seem to show the average of all the rankings. It certainly makes it easy for a reviewer to give a ranking from 1 to 5, essentially a choice among five values. I like that. Imagine if one had to rank things from 0 to 100. I'd be thinking too much. Yuck!

The problem I find is that ratings concentrate around 4 stars. I propose keeping the coloured stars as-is but augmenting them with a percentage (%) score.


Anything less than half a star is hard to make out, which gives us 8 bins of width 0.5 each between 1 and 5. Assuming we round averages up to the nearest half-star, an average anywhere from 3.6 to 4 renders a full 4 coloured stars, but the percentage score will range from 72% to 80%. We get even finer resolution if we add decimal places to the % score.
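A minimal sketch of that display rule (the function name and the round-up-to-nearest-half convention are my assumptions, following the example in the text where 3.6 through 4 all render as 4 stars):

```python
import math

def star_display(average: float) -> float:
    """Round an average rating up to the nearest half-star for display.

    Assumes the round-up convention described above: any average in
    (3.5, 4.0] renders as 4 full coloured stars.
    """
    return math.ceil(average * 2) / 2

print(star_display(3.6))  # 4.0 — same stars as a perfect 4.0 average
print(star_display(4.0))  # 4.0
print(star_display(3.5))  # 3.5 — just below the bin boundary
```

This makes the problem concrete: two businesses with averages of 3.6 and 4.0 look identical in stars, even though one is meaningfully better rated.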

Still not convinced? Let's continue with our running example of the two average boundaries that both render 4 full stars, namely 3.6 and 4.

percent score with 3.6 average = (review_count x 3.6) / (review_count x max_score, i.e. 5) x 100 = 72%. Note that review_count cancels out, so it's simply average / max_score x 100. Similarly, it is 80% with an average boundary of 4.
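The calculation above can be sketched as a one-liner (function and parameter names are my own, not from any particular rating platform):

```python
def percent_score(average: float, max_score: int = 5) -> float:
    """Convert an average star rating to a percentage.

    review_count cancels in the original formula:
    (review_count * average) / (review_count * max_score) == average / max_score
    """
    return average / max_score * 100

print(percent_score(3.6))  # 72.0
print(percent_score(4.0))  # 80.0
```

So the two boundary averages that render identically as 4 stars come out 8 percentage points apart, which is exactly the differentiation we were after.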

Simple data science solving a real problem? :-)
