RANKING
How the Top Rated list is sorted, and why a 5.0 with 2 reviews doesn't always beat a 4.8 with 10.
The problem
A simple average rewards luck. Three perfect 5★ reviews give a 5.0, but that's a tiny sample — one bad review would tank it. A game with a 4.8 average over 40 reviews is a stronger signal of quality, even though the raw number is lower.
Sorting by raw average puts low-volume games on top by accident. Sorting by review count alone ignores quality. We want both.
Bayesian average
Top Rated uses a Bayesian average — every game is treated as if it had a few extra phantom votes at a neutral rating. Games with many real reviews barely feel the prior; games with very few are pulled toward the neutral score.
With k = 2 phantom votes at a neutral rating m = 4.0, the score is (k·m + n·avg) / (k + n), where n is the real review count and avg the real average. The prior is light: it only matters when review counts are small. Once a game has ~10+ reviews, the score is essentially the real average.
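As a sketch, the standard Bayesian-average formula with these parameters reproduces every score in the table below (the function name and signature are illustrative, not the site's actual code):

```python
def bayesian_score(avg: float, n: int, k: int = 2, m: float = 4.0) -> float:
    """Blend the real average with k phantom votes at the neutral rating m."""
    return (k * m + n * avg) / (k + n)

# A 5.0 average over 2 reviews lands at 4.5 -- below a 4.7 over 6 reviews.
print(bayesian_score(5.0, 2))  # 4.5
```

With n = 2 the prior carries half the weight; with n = 22 it carries under a tenth, which is why the score converges to the real average as reviews accumulate.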
Live example
The current top 5 reviewed games on the site, scored under the formula above:
| # | Game | Avg | Reviews | Score |
|---|---|---|---|---|
| 1 | Floppy Brawler | 4.7 | 6 | 4.53 |
| 2 | EmojiBeats | 4.6 | 12 | 4.51 |
| 3 | Almost Surgery | 5.0 | 2 | 4.50 |
| 4 | Joe Must Drive | 5.0 | 2 | 4.50 |
| 5 | ECO Guardian | 4.5 | 22 | 4.46 |
The displayed star average stays the real number — only the internal ranking score is weighted.
One author, one vote
Anyone can post as many reviews as they want — but for the rating that matters (the average shown on the game page), all reviews from the same author are collapsed into a single vote using their own internal average. So six 5★ reviews from one person count the same as one 5★ review.
This means the rating is resistant to spam: posting the same review ten times doesn't move the needle. It also means a single determined troll can't tank a game by spamming 1★ — they get one 1★ vote, not ten.
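The collapse step can be sketched like this, assuming reviews arrive as (author_id, stars) pairs (the data shape and function name are illustrative):

```python
from collections import defaultdict
from statistics import mean

def game_average(reviews: list[tuple[str, int]]) -> float:
    """Collapse all reviews by the same author into one vote, then average."""
    by_author = defaultdict(list)
    for author_id, stars in reviews:
        by_author[author_id].append(stars)
    # One vote per author: that author's own internal average.
    votes = [mean(stars) for stars in by_author.values()]
    return mean(votes)

# Ten 1-star reviews from one troll count as a single 1-star vote.
print(game_average([("troll", 1)] * 10 + [("fan", 5)]))  # 3.0
```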
On the game page, multi-review authors show up as a single card with the average of their own ratings up top, the most recent review below, and a "Show N previous reviews" expander that opens a tree view of the rest.
Identifying authors
Anonymity is a product requirement — there's no login. To identify the same person across reviews without an account, three signals are combined, in order of strength:
- Device fingerprint — a SHA-256 hash of canvas rendering, WebGL renderer/vendor, OfflineAudioContext output, installed font probes, screen, timezone and user-agent. Stable across cookie clears, browser restarts, and most VPN switches.
- Anonymous client cookie — a random UUID stored in an httpOnly cookie (vr_uid) for one year. Survives IP changes; cleared when the user wipes cookies.
- IP hash — SHA-256 of the request IP plus a server secret. Raw IPs are never stored. Last-resort signal — shared networks (campus Wi-Fi, mobile carriers) collapse onto one hash.
For grouping, the strongest available signal wins: fingerprint → cookie → ip. Each review carries an opaque, truncated fingerprint id (e.g. #a1b2c3d4) shown on the card so repeat authors are visible at a glance.
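The precedence rule and the truncated display id can be sketched as follows (function names, prefixes, and the secret are illustrative assumptions, not the site's actual code):

```python
import hashlib

def author_key(fingerprint=None, cookie=None, ip=None,
               ip_secret="server-secret"):  # ip_secret: illustrative placeholder
    """Pick the strongest available signal: fingerprint -> cookie -> hashed IP."""
    if fingerprint:
        return "fp:" + fingerprint
    if cookie:
        return "ck:" + cookie
    # Raw IPs are never stored; only the salted hash is kept.
    return "ip:" + hashlib.sha256((ip + ip_secret).encode()).hexdigest()

def display_id(key: str) -> str:
    """Opaque, truncated id shown on the review card, e.g. '#a1b2c3d4'."""
    return "#" + hashlib.sha256(key.encode()).hexdigest()[:8]
```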
Tiebreakers
- Higher Bayesian score wins.
- If scores tie, more reviews wins.
- If review counts also tie, higher raw average wins.
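The whole ordering fits in one descending sort key; a minimal sketch, with illustrative field names:

```python
def rank_key(game: dict) -> tuple:
    """Descending sort key: Bayesian score, then review count, then raw average."""
    return (-game["score"], -game["reviews"], -game["avg"])

# Two entries from the live table above.
games = [
    {"name": "Almost Surgery", "score": 4.50, "reviews": 2, "avg": 5.0},
    {"name": "Floppy Brawler", "score": 4.53, "reviews": 6, "avg": 4.7},
]
top_rated = sorted(games, key=rank_key)
```

Negating each component lets a plain ascending sort produce the descending order, and tuple comparison applies the tiebreakers left to right for free.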
Other sorts
- Newest — sorts by the game's submission timestamp. Reviews don't affect this.
- Most Reviewed — pure review count, with average as the tiebreaker.
Star display
Stars use fractional fills — a 4.5 average shows four-and-a-half stars filled, not five. The numeric score (e.g. 4.8 (4)) is shown next to the stars so close averages are easy to tell apart.
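Computing the fill fraction per star is a one-liner; a sketch, with an illustrative function name:

```python
def star_fills(avg: float, stars: int = 5) -> list[float]:
    """Fill fraction (0.0 to 1.0) for each star, e.g. 4.5 -> four full + one half."""
    return [min(1.0, max(0.0, avg - i)) for i in range(stars)]

print(star_fills(4.5))  # [1.0, 1.0, 1.0, 1.0, 0.5]
```

Each fraction can then drive a CSS clip or gradient width on the star glyph.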