RANKING

How the Top Rated list is sorted, and why a 5.0 with 2 reviews doesn't always beat a 4.8 with 10.

The problem

A simple average rewards luck. Three perfect 5★ reviews give a 5.0, but that's a tiny sample — one bad review would tank it. A game with a 4.8 average over 40 reviews is a stronger signal of quality, even though the raw number is lower.

Sorting by raw average puts low-volume games on top by accident. Sorting by review count alone ignores quality. We want both.

Bayesian average

Top Rated uses a Bayesian average — every game is treated as if it had a few extra phantom votes at a neutral rating. Games with many real reviews barely feel the prior; games with very few are pulled toward the neutral score.

score = (n · avg + k · m) / (n + k)

  n   = number of reviews
  avg = actual average rating
  k   = prior weight (2 votes)
  m   = prior mean (4.0)

With k = 2 and m = 4.0, the prior is light. It only matters when review counts are small. Once a game has ~10+ reviews, the score is essentially the real average.
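For concreteness, here is a minimal sketch of the scoring function in TypeScript, using the k = 2 and m = 4.0 constants above; the function and constant names are illustrative, not the site's actual code.

```typescript
// Bayesian ranking score: blend the real average with k phantom votes at rating m.
const PRIOR_WEIGHT = 2;   // k: phantom votes
const PRIOR_MEAN = 4.0;   // m: neutral rating carried by the phantom votes

function bayesianScore(avg: number, n: number): number {
  if (n === 0) return PRIOR_MEAN;  // no reviews: score is the pure prior
  return (n * avg + PRIOR_WEIGHT * PRIOR_MEAN) / (n + PRIOR_WEIGHT);
}

// With few reviews the prior pulls the score toward 4.0 ...
bayesianScore(5.0, 2);   // 4.50
// ... and with many reviews it is essentially the real average.
bayesianScore(4.8, 40);  // ≈ 4.76
```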

Live example

The current top 5 reviewed games on the site, scored under the formula above:

#   Game             Avg   Reviews   Score
1   Floppy Brawler   4.7         6    4.53
2   EmojiBeats       4.6        12    4.51
3   Almost Surgery   5.0         2    4.50
4   Joe Must Drive   5.0         2    4.50
5   ECO Guardian     4.5        22    4.46
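Worked through for the top entry: Floppy Brawler's score is (6 · 4.7 + 2 · 4.0) / (6 + 2) = 36.2 / 8 ≈ 4.53, which matches the Score column.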

The displayed star average stays the real number — only the internal ranking score is weighted.

One author, one vote

Anyone can post as many reviews as they want — but for the rating that matters (the average shown on the game page), all reviews from the same author are collapsed into a single vote using their own internal average. So six 5★ reviews from one person count the same as one 5★ review.

This means the rating is resistant to spam: posting the same review ten times doesn't move the needle. It also means a single determined troll can't tank a game by spamming 1★ — they get one 1★ vote, not ten.

per_game_average = average over distinct authors of (each author's own average)
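As a sketch, the collapse is a small grouping step; the Review shape and authorKey field below are assumptions for illustration, not the real schema.

```typescript
// Collapse each author's reviews into one vote, then average the votes.
interface Review {
  authorKey: string;  // strongest identity signal for the reviewer
  rating: number;     // 1–5 stars
}

function perGameAverage(reviews: Review[]): number | null {
  const byAuthor = new Map<string, number[]>();
  for (const r of reviews) {
    const ratings = byAuthor.get(r.authorKey) ?? [];
    ratings.push(r.rating);
    byAuthor.set(r.authorKey, ratings);
  }
  if (byAuthor.size === 0) return null;

  // One vote per author: that author's own average rating.
  let sum = 0;
  for (const ratings of byAuthor.values()) {
    sum += ratings.reduce((a, b) => a + b, 0) / ratings.length;
  }
  return sum / byAuthor.size;
}

// Six 5★ reviews from one person count the same as a single 5★ review:
perGameAverage([
  { authorKey: "a1", rating: 5 }, { authorKey: "a1", rating: 5 },
  { authorKey: "a1", rating: 5 }, { authorKey: "b2", rating: 3 },
]); // (5 + 3) / 2 = 4.0, not (5 + 5 + 5 + 3) / 4 = 4.5
```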

On the game page, multi-review authors show up as a single card with the average of their own ratings up top, the most recent review below, and a "Show N previous reviews" expander that opens a tree view of the rest.

Identifying authors

Anonymity is a product requirement — there's no login. To identify the same person across reviews without an account, three signals are combined, in order of strength:

  1. Device fingerprint — a SHA-256 hash of canvas rendering, WebGL renderer/vendor, OfflineAudioContext output, installed font probes, screen, timezone and user-agent. Stable across cookie clears, browser restarts, and most VPN switches.
  2. Anonymous client cookie — a random UUID stored in an httpOnly cookie (vr_uid) for one year. Survives IP changes; cleared when the user wipes cookies.
  3. IP hash — SHA-256 of the request IP plus a server secret. Raw IPs are never stored. Last-resort signal — shared networks (campus Wi-Fi, mobile carriers) collapse onto one hash.

For grouping, the strongest available signal wins: fingerprint → cookie → ip. Each review carries an opaque, truncated fingerprint id (e.g. #a1b2c3d4) shown on the card so repeat authors are visible at a glance.
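A rough sketch of that precedence rule, assuming a Node-style backend; the signal names, the IP-hash helper, and the secret handling are illustrative only.

```typescript
import { createHash } from "node:crypto";

// The three identity signals, strongest first. Field names are illustrative.
interface IdentitySignals {
  fingerprint?: string;  // SHA-256 device fingerprint hash
  cookieId?: string;     // random UUID from the vr_uid cookie
  ipHash?: string;       // SHA-256 of request IP + server secret
}

// Last-resort signal: hash the IP with a server-side secret so raw IPs
// are never stored. The secret parameter is a placeholder.
function hashIp(ip: string, secret: string): string {
  return createHash("sha256").update(ip + secret).digest("hex");
}

// Strongest available signal wins: fingerprint → cookie → ip.
function authorKey(s: IdentitySignals): string | null {
  return s.fingerprint ?? s.cookieId ?? s.ipHash ?? null;
}

// Opaque, truncated id shown on the review card (e.g. #a1b2c3d4).
function displayId(key: string): string {
  return "#" + key.slice(0, 8);
}
```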

Tiebreakers

  1. Higher Bayesian score wins.
  2. If scores tie, more reviews wins.
  3. If review counts also tie, higher raw average wins.
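Expressed as a single comparator (a sketch; the GameStats shape is assumed):

```typescript
interface GameStats {
  score: number;        // Bayesian ranking score
  reviewCount: number;  // number of reviews
  avg: number;          // raw average rating
}

function compareTopRated(a: GameStats, b: GameStats): number {
  if (a.score !== b.score) return b.score - a.score;  // 1. higher Bayesian score
  if (a.reviewCount !== b.reviewCount)
    return b.reviewCount - a.reviewCount;             // 2. more reviews
  return b.avg - a.avg;                               // 3. higher raw average
}

// games.sort(compareTopRated) puts the highest-ranked game first.
```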

Other sorts

  • Newest — sorts by the game's submission timestamp. Reviews don't affect this.
  • Most Reviewed — pure review count, with average as the tiebreaker.

Star display

Stars use fractional fills — a 4.5 average shows four-and-a-half stars filled, not five. The numeric score (e.g. 4.8 (4)) is shown next to the stars so close averages are easy to tell apart.
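A small sketch of how fractional fills can be derived from the average; the helper name is illustrative.

```typescript
// Each of the five stars gets a fill between 0 and 1, so a 4.5 average
// renders as four full stars and one half-filled star.
function starFills(avg: number, starCount = 5): number[] {
  return Array.from({ length: starCount }, (_, i) =>
    Math.max(0, Math.min(1, avg - i))
  );
}

starFills(4.5); // [1, 1, 1, 1, 0.5]
starFills(4.8); // [1, 1, 1, 1, 0.8]
```

Each fraction could then drive the width of a star's filled overlay in CSS.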