MYHockey News
Understanding the How & Why of Our Youth Hockey Rankings
By Scott Lowe - MYHockeyRankings.com
Most people love our rankings and look forward to their release each week, but there are many others who may not like them or understand why they exist.
Surely some folks enjoy the rankings because the teams they are associated with – as players or parents – are highly ranked and win a ton of games. There’s no doubt it can be an ego thing or at the very least a great source of pride for players, parents, coaches and club administrators. Of course, there are some people who are frustrated by the rankings because they feel like their team is pretty good and has beaten some other good teams but for some reason just can’t move up. That frustration results mainly from a lack of understanding of how teams are ranked.
Some people think it’s really cool that in the past we have ranked more than 20,000 teams from all over North America at the youth, high school, junior and college levels. Others think that ranking 10-year-old hockey teams is ridiculous and promotes winning and being highly ranked over player development and fun.
We do get angry messages from people complaining that whoever is watching these games and ranking these teams knows nothing about hockey, and then there are those who are kind of amazed that we are able to rank so many teams and that – for the most part – the rankings turn out to be pretty accurate. We also get sarcastic or snarky comments questioning the need to rank teams competing in the youngest age groups.
All of it comes with the territory. We have heard from all sides on the matter, and we understand each different point of view – and even agree with the haters and naysayers from time to time.
No matter how you feel about the rankings, though, there’s no denying that they are extremely popular among the North American hockey community, and they are always a hot topic of discussion on social media and in rinks across the continent. That is evidenced by the interaction we see weekly on our social-media accounts and the tens of thousands of pageviews our website generates each month during the hockey season.
For those of you who eagerly anticipate the release of the new rankings every Wednesday and use them as a measuring stick for your team or program – as well as those who don’t agree with the idea of ranking youth athletic teams – we think it would be helpful to explain how MYHockey Rankings came about and how the rankings are determined.
But before we dive into the history and the process, here are a few important points to keep in mind:
- The rankings aren’t perfect; no rankings are. That’s why we play the games. Over time, though, the results of games played in tournaments and national-championship events have proven the rankings to be a remarkably accurate representation of where teams stand compared to other programs at the same age level.
- MHR rankings are completely based on mathematics with absolutely no subjective, human involvement in the process. No, we don’t have people watching 20,000 teams play and sending us their votes, although we are very thankful for the countless volunteers who make sure that game scores are entered properly and on time. Without them, MHR would not exist.
- The rankings weren't designed to determine which team is the best in the country or to crown national champions; there are tournaments in place so that can be decided on the ice. And there actually are cases in which, because of the math – a team's strength of schedule, the teams it has beaten or lost to and other factors – a club wins a national, state or district tournament and still isn't ranked higher than a team it beat to earn that title. We don't manually insert those champions into the top slots in the rankings; our rankings are based on an algorithm that takes many factors into account before determining where teams are ranked.
We get a good number of emails and social media direct messages from people who say they just don’t understand how we come up with our MYHockey rankings. Trust me, I’m the journalist here, not the math or IT guy, and I’m pretty sure that my Math 110 class in college didn’t cover the type of complicated algorithm required to rank more than 20,000 North American hockey teams as accurately as possible.
So as far as that goes, I’m right there with you.
And somehow that makes me oddly qualified to explain to you in the simplest terms possible how the rankings work.
By simplest terms I mean that I will not be getting into any specifics about the actual algorithm and how it functions or any formulas or equations. Any attempt on my part to dive that deeply into how the rankings are formulated would lead to confusion and frustration on my part and likely an indecipherable and less-than-exciting manuscript that would have readers wanting to poke their eyes out.
Stated simply, “The rankings are determined by computing an average team rating for each team using their reported game scores for the current season,” said Neil Lodin, the mastermind behind the rankings since 2003.
So, instead of getting too technical, this article will discuss why the method used by MYHockey Rankings is really the only way to rank so many teams with any sort of accuracy and hopefully will help readers decipher the numbers attached to the rankings and understand why a team might move up or down during a given week. And it might help readers guess with more success where a team might be ranked when the rankings are released every Wednesday between now and April.
But first, here’s a quick history lesson.
Lodin began playing with the idea of ranking teams in 2002 to help his son’s Indiana mite team find opponents to play. The team was pretty strong, which made finding competition at a similar level challenging. The following year, his son’s squirt coach asked Neil to help the scheduler find better teams to play. That’s when he realized he could use statistics to rank teams and possibly find more evenly matched opponents.
The algorithm was born and originally used just to rank teams in the Midwest, but Lodin started ranking squirt AAA teams nationally on a site called MYHockey when his son progressed to that level. Then he added peewees at the request of some parents and coaches. People around the country caught wind of the rankings, and by 2005 MYHockey Rankings had volunteers entering scores for teams from regions all over the country. Within five years there were 200 volunteers helping enter scores for more than 10,000 North American teams.
“The rankings are a tool being used to improve the sport of hockey,” Lodin said. “It started selfishly as a tool to identify teams of similar capability to my son’s team many years ago, but it evolved into a tool to help just about everyone who is involved in youth hockey. Because it computes an objective and accurate average performance rating for every team, people have found it useful to do everything from choosing at-large bids to Nationals to helping ‘B’ level travel teams be able to play similarly capable teams in local tournaments.”
That brings us to the first principle of the rankings. They were established as a tool to help teams schedule more competitively, help tournament organizers create more competitive events and avoid the mismatched blowout games that aren’t good for anybody. The hope was that the rankings would promote better player development as teams were able to find more challenging opponents and began trying to develop their players in an effort to move up in the rankings.
Lodin pointed to the player-development angle instead of explaining the intricacies of creating a schedule that can maximize a team’s chances of being ranked higher.
“I believe the best way to move up in the rankings is to develop your players,” he said. “Your team is only as strong as your weakest line or backup goalie. Develop them. Play them early in the season. Make sure they can play successfully against the same competition as your top line. The irony is that teams worry too much about the rankings early in the season, shorten the bench and pay for it in the long run while teams that focus on the development of all their players will improve in the rankings as the season stretches on.”
Lodin admitted that his fear of the rankings being misused – becoming the sole focus of teams and organizations rather than a means of developing players and finding appropriate levels of competition – made him reluctant to change and grow at times. But positive feedback from so many folks around the country convinced him that he had created an effective and valuable tool for youth hockey organizations all over North America.
“I had no clue how MYHockey would evolve over time,” he said. “Ironically, I often resisted change along the way until fans of the site made it clear to me that progress is a good thing. I’ve been fortunate to get a lot of great input from hockey people over the years.”
The feedback isn’t always positive.
One of the most frequent comments received is that some sort of human input or analysis should be involved for any type of rankings to be accurate. A more subjective ranking certainly can create discussion and debate – and it can allow for some intangible factors to be considered that an algorithm won’t factor in – but it also can be swayed by inherent biases and outside pressures.
Subjective rankings by a panel of experts with different backgrounds and biases can work for smaller groups of teams, but there aren’t enough youth hockey experts watching enough games for humans to rank 22,000 teams from around the continent accurately. In fact, one of the long-standing issues with college sports rankings is that not every voter gets to see all the teams on a regular basis.
There is absolutely no human input or bias involved with MYHockey Rankings.
“When the objective is to rate or rank every competitive youth team in North America, you have to remove the human subjective element from the equation,” Lodin said. “No subjective ranking system can efficiently scale to handle the 20,000-plus teams MYHockey ranks on an annual basis. The rankings get extremely accurate as the season progresses because interplay by teams from all areas of the continent gets pretty thorough and redundant.”
That brings us back to the original purpose of the article: to help readers understand how the rankings work. For the layperson, Lodin's quote simply means that as more games are played and more teams from different regions play each other, the rankings become more accurate.
Okay, so the rankings are 100-percent based on math with no subjective human intervention, and they become more accurate as more games are played and the scores of every game are entered.
Then what?
That’s when all the math stuff happens.
In a nutshell, every team’s results are entered into the system thanks to our amazing network of volunteers. Those results produce two basic numbers: a team’s strength of schedule (listed as SCHED on the rankings page) and its average goal differential across those games (AGD). Those two numbers are added together to determine a team’s rating.
The strength of schedule is a calculation of the average strength of a team’s opponents. How a team does against its opponents, along with how its opponents do against their opponents and how its opponents’ opponents fare against their schedules and so on down the line is all part of a constantly changing dynamic that determines a team’s strength of schedule.
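To see how that circular definition can still settle on stable numbers, here is a minimal Python sketch of one standard way to resolve a mutually dependent calculation: seed every team with the same rating and recompute repeatedly until the values stop moving. This is an illustration of the general technique, not MHR’s actual code; the function name, the 90.0 seed rating and the fixed number of passes are all assumptions made for the example, and the goal differentials are assumed to be pre-capped (the cap is covered in the AGD sketch below).

```python
# Illustrative sketch only -- not MHR's actual algorithm.
# Each team's rating depends on its opponents' ratings, which depend on
# their opponents' ratings, and so on. Fixed-point iteration resolves
# that circularity: start everyone at a common seed and repeat.

def iterate_ratings(results, passes=50, seed=90.0):
    """results maps each team to its games, one (opponent, capped goal
    differential) pair per game, from that team's own perspective."""
    ratings = {team: seed for team in results}
    for _ in range(passes):
        ratings = {
            team: sum(ratings[opp] for opp, _ in games) / len(games)  # SCHED
                  + sum(gd for _, gd in games) / len(games)           # AGD
            for team, games in results.items()
        }
    return ratings

# Three teams that have each played the other two once:
results = {
    "A": [("B", 3), ("C", 1)],
    "B": [("A", -3), ("C", 2)],
    "C": [("A", -1), ("B", -2)],
}
print(iterate_ratings(results))
# settles near {'A': 91.33, 'B': 89.67, 'C': 89.0}
```

Notice that each team’s final number is exactly the average of its opponents’ ratings (SCHED) plus its own average goal differential (AGD), which is the relationship the next paragraph describes.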
AGD is currently calculated by accumulating the goal differential of each game, to a maximum of seven goals per game, and dividing that total by the number of games played. The strength of schedule is then added to the team’s AGD to determine the team’s overall rating.
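For readers who find code easier to follow than prose, here is a minimal Python sketch of the AGD arithmetic just described. The function names are made up for illustration, and I’ve assumed the seven-goal cap applies to losses as well as wins.

```python
def capped_goal_diff(goals_for, goals_against, cap=7):
    """Per-game goal differential, capped at +/-7 per game (the cap is
    assumed to apply in both directions)."""
    return max(-cap, min(cap, goals_for - goals_against))

def average_goal_diff(scores):
    """AGD: capped per-game differentials averaged over all games played."""
    diffs = [capped_goal_diff(gf, ga) for gf, ga in scores]
    return sum(diffs) / len(diffs)

# An 8-1 win counts as +7, a 2-4 loss as -2, a 5-5 tie as 0:
season = [(8, 1), (2, 4), (5, 5)]
agd = average_goal_diff(season)   # (7 - 2 + 0) / 3 = 1.67
rating = 90.0 + agd               # rating = SCHED + AGD (SCHED assumed 90.0)
```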
The highest rating possible in an age group is 99.99, and the numbers filter down from there. Typically, the top team in a particular age group is a major birth-year AAA team, because those teams play stronger schedules than the minor and Tier 2 teams. Top-ranked minor birth-year teams tend to top out three or four points behind the highest-rated major teams, with the Tier 2 teams usually falling below that.
So now, if you go to the rankings page for a particular team’s age group and look at the columns from left to right, you have “Rank,” “Team” and “Record” – all self-explanatory. Next comes “Rating.” That’s the team’s rating, which again is the sum of the strength of schedule (SCHED) and average goal differential (AGD). Those two columns come after “Rating,” so adding them together is how the rating number is determined.
If you want to dive a little deeper, click on a particular team to see its individual page, which includes its schedule and results. Across the top and above the game results you will see from left to right, “Games,” “Rating Math,” “Last 10 Math,” etc. By clicking on “Rating Math” you can get an idea of how a team performed during a given week based on how it should have performed according to its ranking.
For example, for any one game you can see whether the team won or lost, the score, the goal differential, the opponent’s rating and the points received for that game. “Points” allows a team to figure out how its actual performance compared to what the rankings showed as the game’s expected outcome.
The “Points” total simply is the opponent’s rating plus (or minus) the goal differential: the differential is added for a win and subtracted for a loss.
Let’s say Team A, with a rating of 93.0, beats Team B, with a rating of 90.0, by a score of 8-1. To get the expected goal differential for the game, simply subtract the lower-ranked team’s rating (90) from the higher-ranked team’s rating (93).
Thus, based on our ranking system, Team A was expected to win by three goals. The “Points” column is the goal differential from the game plus the opponent’s rating. For this game, Team A would have recorded 97 points, which is four above the expected total of 93 that Team A would have received had it won by the three goals the rankings predicted. The “+/-” column on the far right of the “Rating Math” page would show a +4.00, indicating that Team A had out-performed expectations by four goals in that particular game.
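The same arithmetic, written as a short Python snippet for anyone who wants to check a game themselves. The helper name is made up for illustration; the numbers come from the Team A/Team B example above.

```python
def game_points(opponent_rating, goals_for, goals_against, cap=7):
    """Points for one game: the opponent's rating plus the capped
    goal differential (negative differential for a loss)."""
    diff = max(-cap, min(cap, goals_for - goals_against))
    return opponent_rating + diff

team_a, team_b = 93.0, 90.0
expected_diff = team_a - team_b        # rankings predicted a 3-goal win
points = game_points(team_b, 8, 1)     # 90 + 7 (8-1 win, capped at 7) = 97.0
plus_minus = points - team_a           # 97 - 93 = +4.00, as described above
```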
Keep in mind that the maximum goal differential recorded for any one game is seven, as we don’t want to promote teams running up the score in mismatched contests. A game in which a team performs better than expected will help its rating, especially if it’s against a higher-rated team. Likewise, doing worse than expected in a given game negatively impacts a team’s rating, and the impact is magnified against a lower-ranked team.
Examining how a team performs during a given week compared to the expectations based on our rankings should provide a pretty good indication as to whether the team will move up or down when the next set of rankings is released.
As mentioned previously, mistakes can happen. If any numbers seem off, go back and check the team’s results. Make sure scores were entered correctly and that the proper opponent is listed. Sometimes a team other than the one you played from the same club might be listed as an opponent by mistake.
If you have any questions or concerns about the rankings, check out our Frequently Asked Questions page to see if the answer might be there:
https://myhockeyrankings.com/faq.php
For a little more in-depth explanation of the rankings, some other helpful links can be found below.
Initial Strength of Schedule - http://myhockeyrankings.com/news.php?b=297
The Math Further Explained - http://myhockeyrankings.com/news.php?b=305