Posted: Mon Oct 17, 2005 8:46 pm
Welcome back to the CFB forum, Sam.
![Image](http://www.gabbf.com/images/archives/wagon.jpg)
Who said the SEC is in a down year?
4 Georgia
5 Alabama
6 LSU
DrDetroit wrote: Who the fuck is Greenfield?
Look, dumbass. I'll try and 'splain this to a nerdy politico type, ok?
4-3 Michigan @ #17, and 5-2 Minnesota, which beat Michigan, @ #18?
Fuck this yahoo.
RadioFan wrote: Where is the 95 team? What a pussy list.
The Seer wrote: Babbling babs. Greenfield is also the one that can prove the world is flat...
Seriously, WHO doesn't believe that Fresno State deserves a top 10 bid?
Greenfield's own page wrote: Unlike many other systems, there's no easy way to get a good ranking, other than to play well! Destroying weak teams will not boost a team's rankings, but neither will losing consistently to strong teams. One of the ideas behind these rankings is that a team should be able to be highly or lowly ranked regardless of its schedule. This is in strict contradiction to other systems (especially the RPI), which heavily penalize teams for destroying a weak opponent. In this system, destroying a weak opponent will have negligible effect in either direction.
The points and wins ratings are variations on this theme: points rates teams based only on scoring, wins based only on whether they win or lose. However, the points rating system does place a far larger distance between, say, a one-point win and a one-point loss than between a 32-point win and a 34-point win.
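He never publishes the actual function, but here's a toy sketch of a margin transform with exactly that property; the tanh and the 8.0 scale are my stand-ins, not Greenfield's math:

```python
import math

def margin_score(margin: float) -> float:
    # tanh compresses big margins: a one-point win and a one-point loss
    # land about 0.25 apart, while a 32-point win and a 34-point win
    # land about 0.0003 apart. The 8.0 scale is an arbitrary choice.
    return math.tanh(margin / 8.0)

for m in (1, -1, 32, 34):
    print(f"{m:+d}: {margin_score(m):+.4f}")
```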
If you're looking for a predictive model, the points ratings are definitely your best bet. These are the closest to Jeff Sagarin's ratings, which, I believe, do an adequate job as a predictor, but a lousy job ranking teams based on past performance. My points ratings, however, discount blowouts far more than Sagarin's do. As a result, I believe them to be a much better indicator of how a team will perform in close games against most other teams.
Traditional "Strength of Schedule" measures only average opponents' rankings, which is an absurd way to do things. Over two games, a team may have the choice of playing one great team and one terrible team, or two average teams. A good team would likely take the latter, which would probably result in two wins, as opposed to the former, which would likely result in one. A mediocre team, however, would prefer the first choice, in which they'd likely split, to the second, where they would probably get swept. This is the general idea behind my schedule strength listings, which seek to define a team's schedule difficulty relative to its ranking.

Thus a poor team which has played only average and above teams (but not great teams) will be seen to have had a very tough schedule, while a great team which played the same schedule will be seen to have played only an average one. For this reason, when comparing schedule strength, it's best to only look at teams of comparable ranking. There's an inevitable bias toward the top teams having a seemingly "weak" schedule, and the bottom teams having a "strong" schedule. However, this bias is not in any way included in the rankings - the strength of schedule measures are computed only after the rankings are computed.
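His two-game example checks out numerically. Here's a quick sketch using a made-up logistic win-probability model (the ratings and the scale are invented for illustration, not his):

```python
def win_prob(rating: float, opp: float) -> float:
    # Logistic model: a 10-point rating edge means roughly 10:1 odds.
    return 1.0 / (1.0 + 10 ** ((opp - rating) / 10.0))

def expected_wins(rating, opponents):
    return sum(win_prob(rating, o) for o in opponents)

split = [15.0, -15.0]   # one great opponent, one terrible one
average = [0.0, 0.0]    # two average opponents

for r, label in ((10.0, "good team"), (-5.0, "mediocre team")):
    print(f"{label}: split schedule {expected_wins(r, split):.2f} "
          f"expected wins, average schedule {expected_wins(r, average):.2f}")
```

The good team expects about 1.8 wins from the two average opponents but only 1.2 from the split schedule; the mediocre team flips, about 0.9 versus 0.5.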
Upward Stability and Downward Stability provide a measure of how "sure" the rankings are, in both the positive and negative directions. That is, a team with high Upward Stability is probably ranked pretty accurately, and should not be ranked too much higher. A team with low Upward Stability, on the other hand, is not very well entrenched in its place, and could be considerably better than the rankings indicate. This is generally the case for teams that haven't played many games, or teams that have mainly played against teams of vastly different levels.
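He doesn't say how the stability numbers are computed, so purely to illustrate the concept, here's one generic way to get such a measure by bootstrapping a toy mean-margin rating; none of this is Greenfield's actual machinery:

```python
import random

def rating(margins):
    # Stand-in rating: a team's mean margin of victory.
    return sum(margins) / len(margins)

def stability(margins, trials=1000, seed=0):
    # Resample the team's games with replacement and see how far the
    # rating plausibly swings in each direction. A small upward swing
    # would correspond to high Upward Stability.
    rng = random.Random(seed)
    base = rating(margins)
    resampled = sorted(
        rating([rng.choice(margins) for _ in margins])
        for _ in range(trials)
    )
    lo, hi = resampled[trials // 20], resampled[-(trials // 20)]
    return base - lo, hi - base  # (downward swing, upward swing)

print(stability([7, -3, 21]))                     # few games: wide
print(stability([7, -3, 21, 10, -6, 3, 14, -1]))  # more games: tighter
```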
RPI Ratings are (approximately) those used by the NCAA to determine teams and seedings for the NCAA Tournament. Anyone who has studied statistics knows that these rankings are extremely flawed, but for whatever reason, the NCAA uses them. I don't condone them as a ranking system, I just compute them.
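For reference, the classic RPI formula fits in a few lines (this sketch omits the home/road weighting the NCAA added to the winning-percentage term for 2004-05):

```python
def rpi(wp: float, owp: float, oowp: float) -> float:
    # 25% own winning pct, 50% opponents' winning pct,
    # 25% opponents' opponents' winning pct.
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Schedule terms are 75% of the number, so a .800 team with a soft
# schedule can trail a .600 team with a tough one:
print(rpi(0.800, 0.450, 0.500))  # 0.5500
print(rpi(0.600, 0.600, 0.550))  # 0.5875
```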
These ratings were originally designed for College Basketball. While I've made some adjustments for other sports, the rankings and predictions are undoubtedly more accurate for College Hoops than for anything else.
If I've somehow missed a game, gotten the score wrong, gotten the location wrong, or done something else incorrectly, please email me.
A special thanks to Ken Pomeroy, for obtaining the most accurate college basketball scores and schedules I can find.
About the rankings designer
Mike Greenfield is a statistical modeler at PayPal in Palo Alto, California. He holds a BS from Stanford University in Mathematical and Computational Science. He developed this system in 1997, and has been refining, improving, and expanding it ever since.