So, yesterday I mentioned I sent Martin Manley of Upon Further Review (with KC Star) all of the info regarding my method of determining a "true" conference champion, or who should be playing in the MNC game against LSU (henceforth referred to as "Jeremy's Method").
He DID respond to me late last night, and I'd like to share with you the conversation we have had so far.
Everything is after the jump.

Here is the original email I sent Martin:
I read UFR fairly regularly. I haven't read it as much lately because there seems to have been a bit of a downturn on Big 12 football coverage this season compared to other years (maybe I am misremembering things though).
When the talking heads continued to insist that Bama's "overall body of work" was more impressive than OSU's, and when Bob Stoops said there should not be a co-conference champion if OU, OSU, and KSU ended up with 7-2 conference records because "OU beat OSU and KSU head to head", I decided to try to figure out a simple way to numerically evaluate a team's overall body of work.
I have already taken the time to explain this methodology on the KSU blog www.bringonthecats.com, so I will not waste my time rehashing it here. You can find the original post here: http://www.bringonthecats.com/2011/12/1/2603389/a-possible-way-to-determine-a-true-champion
Essentially, this gives teams more credit for beating "better" teams and penalizes teams for losing to "bad" teams. This method could be used to determine which of the one-loss teams is the most deserving to play in the NCG, or to determine which team from a division should get the nod for a conference championship game in the event of a tie (such as last year in the Big 12 South), or as THE tie breaker (none of this head-to-head, or record against common opponents, or final ranking in the BCS standings) in a situation such as the 7-2 Big 12 logjam that could have happened this year.
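For the code-inclined, here's a rough sketch of the scoring in Python. One assumption to flag: wins use the "reverse Sagarin" value mentioned later in this post (247 minus the opponent's rank, with 246 ranked teams), and since the exact loss penalty isn't spelled out here, I'm assuming a loss subtracts the opponent's raw rank, so losing to a badly ranked team costs more. Treat this as an illustration, not the official spec.

```python
N_TEAMS = 246  # teams in the Sagarin list referenced in this post

def jeremys_score(results):
    """Score one team's season under (a sketch of) Jeremy's Method.

    results: list of (opponent_sagarin_rank, won) tuples.
    A win over the team ranked r adds (N_TEAMS + 1 - r) points, so
    beating #4 OU is worth 247 - 4 = 243. The loss penalty is an
    assumption: a loss to the team ranked r subtracts r points, so
    losing to a "bad" (high-numbered) team hurts more.
    """
    total = 0
    for rank, won in results:
        if won:
            total += (N_TEAMS + 1) - rank  # reverse Sagarin value
        else:
            total -= rank  # assumed penalty: raw Sagarin rank
    return total

# A win over #4 OU, by itself:
print(jeremys_score([(4, True)]))  # 243
```

Dividing the total by games played gives the "avg" column in the table below.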
Anyway, if you read the post I linked above to get the basic understanding, this is how some of the teams shake out based on the Sagarin rankings that were released yesterday morning:
LSU: 2525 or 194.2 avg
Bama: 2059 or 171.6 avg
OSU: 2305 or 192.1 avg
Ore: 2003 or 154.1 avg
Stan: 2030 or 169.2 avg
VT: 1845 or 141.9 avg
Clem: 1908 or 146.8 avg
OU: 1832 or 152.7 avg
Bay: 1741 or 145.1 avg
KSU: 1922 or 160.2 avg
BSU: 1794 or 149.5 avg
TCU: 1454 or 121.2 avg
Mich St: 1559 or 119.9 avg
Mich: 1787 or 148.9 avg
Wisc: 1815 or 139.6 avg
I know everyone is sick of talking about this, and it really won't do any good, since the BCS isn't going to change what they do because of what one guy like me thinks, but I wanted to get this view out to a larger audience than just BOTC, and thought you would be a good person to share this with, since you like looking at things in a numerical way.
I would be happy to know what you think of my idea, but I know you are a busy guy.
Jeremy Sharp
KSU '03
And his response:
Nice idea. It’s intuitive and in its own way, fairly simple. But, consider this.
Suppose two teams played the exact same schedule (winning all games) except that Team A also beat a 10-win weak conference team while Team B beat a 9-win strong conference team. By treating all wins equally, Team A would get more credit than Team B even though it probably should not.
If this were basketball, I might have a little bit different thought on it because teams play so many games. But, when you only play 12 or 13, anything that doesn't factor in margin of victory seems to me to be flawed. Oklahoma State gets the same credit whether they beat Oklahoma 11-10 or 41-10. That restriction is placed on the six BCS computer rankings for no valid reason as far as I'm concerned. If they don't want teams to run up the score for the computers, how do they prevent them from running up the score for the 115 Harris voters or the 59 USA Today voters? Thus, why is the restriction even there?
If it were 100% objective, all computer systems would agree, and they don't. So, there is going to be subjectivity involved no matter what. However, I'm always nervous when ranking teams solely based upon a Win or a Loss. In a game where you only play 12-13, with an oval ball and human officials, it just seems like it's taking too much of a risk to put all the emphasis on a W or an L.
But, I do like the idea. I’ve never seen it before.
And my response this morning:
Martin,

Thanks for the response. But I don't think I AM treating all wins equally. Well, I take that back. A win over OU is worth 243 points (there are 246 teams in the Sagarin rankings, 1-246, and since OU's final Sagarin ranking is #4, 247 - 4 = 243). I guess what you are getting at is that Baylor gets the same amount of credit for beating OU by 7 points that OSU gets for beating OU by 34. I DO see how that could present a problem.

You bring up an interesting point, though, and this is part of the reason why the system is flawed. The computers are not allowed to factor in margin of victory, but how can the pollsters NOT factor it in? That is, IMO, one of the things that really distinguishes how "on" a team may or may not be. Of course, this would probably negatively impact my Wildcats this season, since they have won 8 of 10 games by 7 points or less (an amazing feat if you ask me).

Maybe you could have scoring ranges that correspond to multipliers applied to each of the Sagarin values. Winning by 1-10 points earns the base Sagarin value, winning by 11-20 points is a multiplier of 1.05, winning by 21-30 points is a multiplier of 1.10, and winning by 31 or more points is a multiplier of 1.15. So, for the OU example above, Baylor would only get to add the 243 positive points for beating OU by 7, but OSU would add 279.5 points. My gut tells me that my multipliers need tweaking, because 36.5 extra points seems like a lot to add for OSU's win. However, it WAS rather impressive, and just as my system already gives more credit for beating a "better" team (OU vs. Akron), a team would (and should) get a bigger bonus for beating OU by 34 than for beating Akron by 34. I suppose you could also do the same thing for margin of loss, where you apply a multiplier to the score you are subtracting from a team's total: the bigger the loss, the more detrimental to your overall score.
This multiplier makes the process a LOT more involved, and would probably take me about 10 times as long to compute for the 15 teams I calculated. It is, however, closer to a true evaluation of a team's "complete body of work". I would be interested to see how Bama's score compared to OSU's with these multipliers applied (though OSU had large margins of victory in most of their games, so my gut says OSU would still come out on top). Again, thanks for taking the time to look at my idea, and especially for getting back to me with your comments and criticisms.
Additional commentary: As you can see from my response back to Martin, I DO see some validity in figuring out a way to incorporate margin of victory into the equation. I'm just not sure exactly how to do it. I like the idea of the multiplier (of course I would, I suggested it!), but I think the values of the multipliers might need some tweaking. And as I mentioned in my response, this makes the process take considerably longer, because I would have to look at the score from each game to determine the multiplier, and apply that to the Sagarin (or reverse Sagarin) ranking, as opposed to just looking at the schedule and who won each game.
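The multiplier tiers I floated in the email can be sketched like this (win side only; I haven't pinned down what the loss-side multiplier should look like, so it's left out, and the tier values themselves are admittedly rough):

```python
def margin_multiplier(margin):
    """Multiplier tiers suggested in the email (values are rough)."""
    if margin <= 10:
        return 1.00
    elif margin <= 20:
        return 1.05
    elif margin <= 30:
        return 1.10
    else:
        return 1.15

def win_points(opponent_rank, margin, n_teams=246):
    """Reverse-Sagarin value of a win, scaled by margin of victory."""
    return ((n_teams + 1) - opponent_rank) * margin_multiplier(margin)

# The OU example from the email (rounded for display):
print(round(win_points(4, 7), 2))   # Baylor beat OU by 7: base 243 points
print(round(win_points(4, 34), 2))  # OSU beat OU by 34: 243 x 1.15, about 279.45
```

Note the email rounds 243 × 1.15 up to 279.5; the exact product is 279.45.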
As I mentioned in the original post, I chose the Sagarin rankings because they are easy to obtain, and they include ALL schools from FBS and FCS, which is crucial when trying to determine how good a win over Wofford is or how bad a loss to North Dakota State is. If this approach were actually adopted to help determine anything by the people who determine things, I would have no problem with a completely different ranking system being used, as long as it was something the majority would agree is fair. The most important thing for Jeremy's Method (IMO) to work is that whatever ranking system is used is an actual "ranking", and not some nebulous number (like the 1.000 or 0.985 in the BCS that DETERMINES the ranking). I would be OK with using the BCS rankings as the basis for my method, but the BCS rankings do not include ALL FBS and FCS teams, so I don't think that would work.
I never expected to spend this much time on Jeremy's Method when I got started on this, but this whole Alabama-over-OSU thing really irritates me, and since nobody I have presented Jeremy's Method to has come up with a strong argument for why it doesn't make any sense, I can't seem to let it go. Am I saying it is perfect? Of course not. But then, none of the methods that are currently in place are perfect either. And personally, I think this would be a much better way to settle a tie within the Big 12 Conference (think of last year's OSU/OU/Texas tie for the South division) than using a series of 6 steps, one of which is "whichever team is ranked highest in the final BCS standings." (Jeremy's Method can be run on conference games only, while the BCS rankings take the entire season, non-con games included, into account.) I haven't thought about it much, but Jeremy's Method could also be used in basketball to determine exactly the same kinds of things: ties, and which bubble teams actually DO have a more impressive overall resume, without relying on the "eye test".
Sorry for rambling on and on about this. I also never intended this to become the 1,789-word rant that it is.