DEV Community

Cover image for Why every CS2-to-CSGO rank converter gives you a different answer (and how I picked one)
graysonwerner100-commits
graysonwerner100-commits

Posted on

Why every CS2-to-CSGO rank converter gives you a different answer (and how I picked one)

I built a small tool called cs2rankconverter.com — punch in your CS2 Premier rating, get the equivalent CSGO rank. It exists because Valve replaced the familiar Silver-to-Global rank system with a 1,000–35,000 number scale and never bothered to publish a mapping. Players who quit during CSGO and came back during CS2 have no clue what their new number means.

Should be simple right? Take Premier's rating, look up the matching CSGO rank, return it. Two days of work, ship it, done.

Except every existing converter I checked gave a different answer.

Dexerto says 18,000 is "DMG-LEM range." Esportstales says 18,000 is mid-LEM. Scope.gg's 2.5M-player analysis maps it to high LE. Leetify's data implies upper-LEM. None of them are wrong. They're all using legitimate methods that produce different results from the same underlying reality.

This is a fun little case study in why "convert X to Y" problems get harder the moment you actually look at the data. If you're building anything that maps between two scales — game ranks, school grades to GPAs, Likert survey scales, anything where two systems measure roughly the same thing differently — the same trap applies.

The two methods, and why they disagree

There are exactly two principled ways to map CS2 Premier ratings to CSGO ranks. They produce different answers. Most converters quietly pick one without telling you.

Method 1: Bracket-matching

Premier has 7 color brackets (Gray, Light Blue, Dark Blue, Purple, Pink, Red, Yellow). CSGO had 18 ranks. Divide them up proportionally. Gray covers Silver I through Silver Elite Master (6 ranks). Light Blue covers all four Gold Novas. And so on.

const COLOR_TO_RANKS = {
  gray:      ['S1', 'S2', 'S3', 'S4', 'SE', 'SEM'],   // 6 ranks
  lightBlue: ['GN1', 'GN2', 'GN3', 'GNM'],            // 4 ranks
  darkBlue:  ['MG1', 'MG2', 'MGE'],                   // 3 ranks
  purple:    ['DMG', 'LE', 'LEM'],                    // 3 ranks
  pink:      ['SMFC'],                                // 1 rank
  red:       ['SMFC', 'GE'],                          // overlap
  yellow:    ['GE'],                                  // 1 rank
};

function bracketMatch(rating) {
  const color = colorFromRating(rating);
  const ranks = COLOR_TO_RANKS[color];
  const colorMin = COLOR_RANGES[color].min;
  const colorMax = COLOR_RANGES[color].max;
  const positionInColor = (rating - colorMin) / (colorMax - colorMin);
  const rankIndex = Math.floor(positionInColor * ranks.length);
  return ranks[Math.min(rankIndex, ranks.length - 1)];
}
Enter fullscreen mode Exit fullscreen mode

Clean. Easy to explain. Easy to remember.

The problem: it assumes the two scales' brackets correspond to equivalent population slices. They don't. CSGO's Silver tier (6 ranks) historically held about 32% of the playerbase. CS2's Gray bracket holds about 10% in 2026. By bracket-matching, a player at rating 3,000 (which is "mid-Gray, top half of Silver tier" in bracket terms) is actually in the bottom ~5% of CS2 — way worse than the average Silver in CSGO ever was.

Method 2: Percentile-matching

Take the player's percentile in Premier. Find the CSGO rank that historically held that same percentile. Return that rank.

// Pre-computed from CSGO 2017-2019 distribution data
const CSGO_RANK_PERCENTILES = {
  S1:   { lower: 0.000, upper: 0.045 },
  S2:   { lower: 0.045, upper: 0.087 },
  S3:   { lower: 0.087, upper: 0.146 },
  S4:   { lower: 0.146, upper: 0.222 },
  SE:   { lower: 0.222, upper: 0.305 },
  SEM:  { lower: 0.305, upper: 0.391 },
  GN1:  { lower: 0.391, upper: 0.481 },
  GN2:  { lower: 0.481, upper: 0.560 },
  GN3:  { lower: 0.560, upper: 0.637 },
  GNM:  { lower: 0.637, upper: 0.710 },
  MG1:  { lower: 0.710, upper: 0.781 },
  MG2:  { lower: 0.781, upper: 0.844 },
  MGE:  { lower: 0.844, upper: 0.893 },
  DMG:  { lower: 0.893, upper: 0.931 },
  LE:   { lower: 0.931, upper: 0.961 },
  LEM:  { lower: 0.961, upper: 0.982 },
  SMFC: { lower: 0.982, upper: 0.995 },
  GE:   { lower: 0.995, upper: 1.000 },
};

function percentileMatch(rating, distribution) {
  const percentile = percentileForRating(rating, distribution);
  for (const [rank, range] of Object.entries(CSGO_RANK_PERCENTILES)) {
    if (percentile >= range.lower && percentile < range.upper) {
      return rank;
    }
  }
  return 'GE';
}
Enter fullscreen mode Exit fullscreen mode

This is more honest about how good a player actually is relative to peers. If you're at the 95th percentile in Premier, you map to the 95th percentile in CSGO, which was around LE/LEM.

The problem: it produces narrower bands that "feel weird." A rating of 18,500 maps to LEM under percentile-matching, but to mid-LEM-to-Supreme under bracket-matching. Players coming back from a 4-year break expect the bracket-matching answer because that's how their brain still organizes ranks visually.

Why the answers differ in practice

Run both methods on a few sample ratings and you get this:

Rating Bracket method Percentile method Difference
3,000 Silver III Silver I 2 ranks
6,000 Gold Nova II Silver Elite Master 1 rank
9,000 Gold Nova Master Gold Nova III 1 rank
12,000 Master Guardian Elite Master Guardian I 2 ranks
15,000 Distinguished MG Master Guardian Elite 1 rank
18,000 Legendary Eagle Master Legendary Eagle 1 rank
22,000 Supreme MFC Legendary Eagle Master 2 ranks

The two methods produce systematically different results because Premier's distribution is more compressed than CSGO's was. CS2's rating system is closer to FACEIT's Elo scale, which spreads players across a numeric continuum. CSGO's 18 ranks were skewed toward the middle (most players were Gold Nova or low MG).

Neither method is "wrong." They answer slightly different questions:

  • Bracket-matching answers: "Which CSGO rank icon would Premier's color tier correspond to?"
  • Percentile-matching answers: "Which CSGO rank held the same fraction of players as my current Premier rating?"

For most players the second question is the more useful one — "am I above average, top 10%, top 1%?" — but the first question matches their visual mental model.

What I actually shipped

A 60/40 weighted blend of percentile-matching and bracket-matching, because:

  1. Pure percentile-matching produces results that contradict every existing community converter, which makes users distrust the tool ("but Dexerto said I was DMG?")
  2. Pure bracket-matching produces results that don't reflect actual current skill distributions
  3. The blended approach lands within 1 rank of every major community source for ~85% of ratings
function convertRating(rating) {
  const bracketRank = bracketMatch(rating);
  const percentileRank = percentileMatch(rating, CURRENT_PREMIER_DISTRIBUTION);

  // If they agree, return that
  if (bracketRank === percentileRank) return bracketRank;

  // Otherwise weight 60% percentile (more honest), 40% bracket (more familiar)
  const bracketIdx = RANK_ORDER.indexOf(bracketRank);
  const percentileIdx = RANK_ORDER.indexOf(percentileRank);
  const blendedIdx = Math.round(percentileIdx * 0.6 + bracketIdx * 0.4);
  return RANK_ORDER[blendedIdx];
}
Enter fullscreen mode Exit fullscreen mode

I document this honestly in the tool's About page. There's no "official" mapping; Valve never published one. Anyone claiming their converter is canonical is selling something.

Distributions drift, so the answer drifts

Here's the part that actually keeps me up: the underlying CS2 distribution isn't static. Sample data from Leetify shows this:

  • January 2024 average rating: ~8,000
  • September 2024 average: ~9,000
  • January 2026 average: ~11,000

The bar moved 3,000 points in two years. A rating of 11,000 in 2024 was solidly above-average; in 2026 it's literally the average. Any converter that hard-codes percentile cutoffs based on 2024 data will tell users they're better than they are by 2026.

The fix is to refresh distribution data periodically:

// Re-aggregate distribution from a 30-day rolling window
async function refreshDistribution() {
  const samples = await fetchRecentLeaderboardSamples({ 
    minDate: thirtyDaysAgo() 
  });
  const histogram = bucketize(samples, { bucketSize: 1000 });
  const cdf = computeCDF(histogram);
  await db.set('premier_distribution', cdf);
}
Enter fullscreen mode Exit fullscreen mode

I update mine quarterly. That's enough to track Elo inflation without constantly shifting answers under returning users. If you build any converter where one side of the mapping has time-varying statistics, you need a refresh strategy or your tool gets quietly wrong.

Why "Valve should just publish the mapping" doesn't work

You'd think the cleanest solution is for Valve to release official numbers. They won't, and probably shouldn't. Here's why:

  1. The systems measure different things. CSGO's per-map ranks are now per-map in CS2 Competitive. Premier is a unified global rating. There isn't a single mapping; the right answer depends on which CSGO rank context you're asking about (Inferno-only? Average across all maps? Highest map?).

  2. Publishing it would lock in inflation. If Valve says "20,000 = Global Elite," players above 20k start expecting GE-tier behavior from teammates. Premier ratings continuing to drift would break that contract within months.

  3. Marketing. The whole reason CS2 abandoned named ranks was to escape the Silver-to-Global meme economy. An official mapping resurrects it.

So community-derived approximations are the only game in town. They'll always be approximations. That's fine — most users want a ballpark, not exactness.

Lessons that generalize

If you're building any cross-scale converter, the lessons here apply:

  1. Identify which method you're using and document it. Bracket vs. percentile is the same dichotomy as nominal vs. ordinal scales in stats. Most "converters" are bracket-matchers without realizing it.

  2. Distributions drift. Anything pegged to "what the average user looks like" needs a refresh schedule. Hardcoded percentile cutoffs go stale.

  3. Users expect bracket-matching, even when percentile-matching is more correct. Their mental model is the dominant scale's named buckets, not the underlying skill distribution. A pure-correctness answer will feel "wrong" to them.

  4. Consensus across converters tells you when you're broken. I sanity-check my output against three other community sources. If I disagree with all of them by 2+ ranks, something's wrong on my end (usually stale distribution data).

  5. Be honest in the UI about uncertainty. I show a confidence band, not a single rank, when the rating is near a bracket boundary. Eg. rating 4,950 returns "Silver Elite Master / Gold Nova I" — because honestly, both are defensible.

If you want to play with the live tool, cs2rankconverter.com does this conversion in one click. The math behind it is roughly what's described above; the data refreshes quarterly. Feel free to plug in numbers and tell me where it disagrees with your gut — those edges are where the next iteration usually comes from.

If you've shipped a similar two-scale converter and ran into different gotchas, I'd love to hear about it in the comments. Especially curious about converters between systems where one side has a long-tail distribution and the other has a normal-ish one — that's a research question I haven't fully solved.

Top comments (0)