MYHockey News

Understanding the Rankings

By Scott Lowe -

We get a decent number of emails and social media direct messages from people who say they just don’t understand how we come up with our MYHockey Rankings. Trust me, I’m the journalist here, not the math or IT guy, and I’m pretty sure that my Math 110 class in college didn’t cover the type of complicated algorithm required to rank more than 20,000 North American hockey teams as accurately as possible.

So I’m right there with you.

And somehow that makes me oddly qualified to explain to you in the simplest terms possible how the rankings work.

By simplest terms I mean that I will not be getting into any specifics about the actual algorithm and how it functions or any formulas or equations. Any attempt on my part to dive that deeply into how the rankings are formulated would lead to confusion and frustration on my part and likely an indecipherable and less-than-exciting manuscript that would have readers wanting to poke their eyes out.

Stated simply, “The rankings are determined by computing an average team rating for each team using their reported game scores for the current season,” said Neil Lodin, the mastermind behind the rankings since 2003. 

So, instead of getting all mathematical with folks, this article will discuss why the method used by MYHockey Rankings is really the only way to rank so many teams with any sort of accuracy and hopefully will help readers decipher the numbers attached to the rankings and understand why a team might move up or down during a given week. And it might help readers guess with more success where a team might be ranked when the rankings are released every Wednesday between now and April.

But first, here’s a quick history lesson. 

Neil began playing with the idea of ranking teams in 2003 to help his son’s Indiana mite team find opponents to play. The team was pretty strong, and finding competition at a similar level was difficult. The following year, his son’s squirt coach asked Neil to help the scheduler find better teams to play. That’s when he realized he could use statistics to rank teams and possibly find more evenly matched opponents.

The algorithm was born and originally used just to rank teams in the Midwest, but Neil started ranking squirt AAA teams nationally on a site called MYHockey when his son progressed to that level. Then he added peewees at the request of some parents and coaches. People around the country caught wind of the rankings, and by 2005 MYHockey Rankings had volunteers ranking teams from regions all over the country. Within five years there were 200 volunteers helping him rank more than 20,000 North American teams.

“The rankings are a tool being used to improve the sport of hockey,” Neil said. “It started selfishly as a tool to identify teams of similar capability to my son’s team 17 years ago, but it evolved into a tool to help just about everyone who is involved in youth hockey. Because it computes an objective and accurate average performance rating for every team, people have found it useful to do everything from choosing at-large bids to Nationals to helping ‘B’ level travel teams be able to play similarly capable teams in local tournaments.”

That brings us to the first principle of the rankings.

They aren’t intended to determine a national champion – USA Hockey runs tournaments to do that – but they were established as a tool to help teams schedule more competitively, help tournament organizers create more competitive events and avoid those mismatched blowout games that aren’t good for anybody. The hope was that the rankings would promote better player development as teams were able to find more challenging opponents and began trying to develop their players in an effort to move up in the rankings.

Neil points to the player development angle instead of explaining the intricacies of creating a schedule that can maximize a team’s chances of being ranked higher.

“I believe the best way to move up in the rankings is to develop your players,” he said. “Your team is only as strong as your weakest line or backup goalie. Develop them. Play them early in the season. Make sure they can play successfully against the same competition as your top line. The irony is that teams worry too much about the rankings early in the season, shorten the bench and pay for it in the long run while teams that focus on the development of all their players will improve in the rankings as the season stretches on.”

The teams that follow the development model also will be better prepared if presented an opportunity to play in USA Hockey Nationals.

Neil has admitted that his fear of the rankings being used the wrong way and being the sole focus of teams and organizations rather than developing their players and finding appropriate levels of competition made him reluctant to change and grow at times, but positive feedback from so many folks around the country convinced him that he had created an effective and valuable tool for youth hockey organizations all over North America.

“I had no clue how MYHockey would evolve over time,” Neil said. “Ironically, I often resisted change along the way until fans of the site made it clear to me that progress is a good thing. I’ve been fortunate to get a lot of great input from hockey people over the years.”

The feedback isn’t always positive.

One of the most frequent comments received is that some sort of human input or analysis should be involved for any type of rankings to be accurate. A more subjective ranking certainly can create discussion and debate – and it can allow for some intangible factors to be considered that an algorithm won’t factor in – but it also can be swayed by inherent biases and outside pressures.

Subjective rankings by a panel of experts with different backgrounds and biases can work for smaller groups of teams, but there aren’t enough youth hockey experts watching enough games for humans to rank 22,000 teams from around the continent accurately. In fact, one of the longstanding issues with college sports rankings is that not every voter gets to see all the teams on a regular basis.

For as long as there have been college athletics polls, there have been debates and discussion about East Coast and West Coast biases in the voting among other concerns. Similar concerns prompted college football to use multiple computer rankings as part of the chosen formula to select teams for its postseason playoff.

There is absolutely no human input or bias involved with MYHockey Rankings.

“When the objective is to rate or rank every competitive youth team in North America, you have to remove the human subjective element from the equation,” Neil said. “No subjective ranking system can efficiently scale to handle the 22,000 teams MYHockey ranks on an annual basis. The rankings get extremely accurate as the season progresses because interplay by teams from all areas of the continent gets pretty thorough and redundant.”

That brings us back to the original purpose of the article, to help readers understand how the rankings work. For the lay person, Neil’s quote simply means that as more games are played and more teams from different regions play each other, the rankings become more accurate and “early season aberrations disappear.” 

Okay, so the rankings are 100-percent based on math with no subjective human intervention (the only human involvement comes from the volunteers who enter scores, and yes, when hundreds of volunteers are entering thousands of games, there can be mistakes). The rankings become more accurate as more games are played. Scores of every game are entered. Then what?

That’s when all the complicated math stuff that I couldn’t possibly explain happens.

But in a nutshell, every team’s results are entered into the system thanks to our amazing network of volunteers. Those results produce two basic numbers, a team’s strength of schedule (listed as SCHED on the rankings page) and its average goal differential in each of those games (AGD). Those two numbers are added together to determine a team’s rating.

The strength of schedule is a calculation of the average strength of a team’s opponents. How a team does against its opponents, along with how its opponents do against their opponents and how its opponents’ opponents fare against their schedules and so on down the line, is all part of a constantly changing dynamic that determines a team’s strength of schedule.

AGD is calculated by accumulating the goal differential of each game, to a maximum of seven goals per game, and dividing that total by the number of games played. Adding SCHED to a team’s AGD produces the team’s overall rating.
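For readers who think in code, the AGD and rating calculation described above can be sketched in a few lines of Python. This is only an illustration of the two published numbers, not the actual MYHockey algorithm (the real strength-of-schedule computation is far more involved); the function names are made up, and clamping negative differentials symmetrically at -7 is an assumption.

```python
# Sketch of the AGD (average goal differential) calculation described above.
# Each result is a (goals_for, goals_against) pair; per the article, the
# differential counted for any one game is capped at 7 goals. Capping the
# negative side at -7 as well is an assumption, not stated in the article.
CAP = 7

def average_goal_differential(results):
    """Average of per-game goal differentials, each clamped to [-CAP, +CAP]."""
    diffs = [max(-CAP, min(CAP, gf - ga)) for gf, ga in results]
    return sum(diffs) / len(diffs)

def team_rating(sched, agd):
    """Overall rating = strength of schedule (SCHED) + average goal differential (AGD)."""
    return sched + agd

# Hypothetical season: an 8-1 win (capped at +7), a 3-5 loss, a 4-4 tie.
results = [(8, 1), (3, 5), (4, 4)]
agd = average_goal_differential(results)  # (7 - 2 + 0) / 3 = 1.67
rating = team_rating(90.0, agd)           # assumes a SCHED of 90.0
```

In other words, a team with a strength of schedule of 90.0 that outscores its opponents by an average of 1.67 capped goals per game would carry a rating of 91.67.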

The highest rating possible in an age group is 99.99, and the numbers filter down from there. Typically, the top team in a particular age group is a major birth year AAA team because those teams play stronger schedules than the minor and Tier 2 teams. Top-ranked minor birth-year teams tend to top out three or four points behind the highest-rated major teams, with the Tier 2 teams usually falling below that.

So, now if you go to the rankings page for a particular team’s age group, looking at the columns left to right you have “Rank,” “Team” and “Record” – all self-explanatory. Next comes “Rating.” That’s the team’s rating, which again is the sum of the strength of schedule (SCHED) and average goal differential (AGD). Those two columns come after “Rating,” so adding them together produces the rating number.

If you want to dive a little deeper, click on a particular team to see its individual page, which includes its schedule and results. Across the top and above the game results you will see from left to right, “Games,” “Rating Math,” “Last 10 Math,” etc. By clicking on “Rating Math” you can get an idea of how a team performed during a given week based on how it should have performed according to its ranking.

For example, looking at one game it is possible to tell if the team won or lost, the score, the goal differential, the opponent’s rating and then points received for that game. “Points” allows a team to figure out how its actual performance compared to what the rankings showed as the game’s expected outcome.

The “Points” total simply is the opponent’s rating plus (or minus) the goal differential.

Let’s say Team A, with a rating of 93.0, beats Team B, with a rating of 90.0, by a score of 8-1. To get the expected goal differential for the game, simply subtract the lower-ranked team’s rating (90) from the higher-ranked team’s rating (93).

So, based on our ranking system, Team A was expected to win by three goals. The “Points” column is the goal differential from the game plus the opponent’s rating. For this game, Team A would have recorded 97 points, which is four above the expected total of 93 Team A would have received had it won by the three goals the rankings predicted. The “+/-“ column on the far right of the “Rating Math” page would show a +4.00, indicating that Team A had out-performed expectations by four goals for that particular game.
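The worked example above can be expressed as a short Python sketch. The points formula, the ±7 cap and the Team A/Team B numbers come straight from the article; the function names are hypothetical, and capping a negative differential at -7 is an assumption.

```python
# Sketch of the "Points" and "+/-" columns from the Rating Math page.
CAP = 7  # maximum goal differential counted for any single game

def game_points(opponent_rating, goals_for, goals_against):
    """Points for one game: the opponent's rating plus the capped goal differential."""
    diff = max(-CAP, min(CAP, goals_for - goals_against))
    return opponent_rating + diff

def plus_minus(team_rating, opponent_rating, goals_for, goals_against):
    """How far actual performance exceeded the rankings' expectation.

    The expected goal differential is the gap between the two ratings, so a
    team that performs exactly as expected earns points equal to its own
    rating; anything above or below that shows up here.
    """
    return game_points(opponent_rating, goals_for, goals_against) - team_rating

# Team A (93.0) beats Team B (90.0) by 8-1:
pts = game_points(90.0, 8, 1)         # 90 + 7 = 97 points
pm = plus_minus(93.0, 90.0, 8, 1)     # 97 - 93 = +4.00 in the "+/-" column
```

Note that if Team A had won 12-1 instead, the differential would still be capped at seven, so the points total would remain 97.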

Keep in mind that the maximum goal differential recorded for any one game is seven (7), as we don’t want to promote teams running up the score in mismatched contests. A game in which a team performs better than expected will help its rating, especially if it’s against a higher-rated team. Likewise, a team doing worse than expected in a given game would negatively impact its rating, and the impact would be magnified against a lower-ranked team.

By following how a team performs during a given week compared to the expectations based on our rankings, you should get a pretty good indication as to whether the team will move up or down when the next set of rankings is released.

As mentioned previously, mistakes can happen. If any numbers seem off, go back and check the team’s results. Make sure scores were entered correctly and that the proper opponent is listed. Sometimes a team other than the one you played from the same club might be listed as an opponent by mistake.

If you have any questions or concerns about the rankings, check out our Frequently Asked Questions page to see if the answer might be there:


For a little more in-depth explanation of the rankings, some other helpful links can be found below.

Initial Strength of Schedule -

The Math Further Explained -

More articles like this...