#I documented my process and adjustments while baking it - this recipe is based on the refined end-notes
threewaysdivided · 7 months
Chocolate Sapote Pie
Hey 3WD, why are you making a vegan, gluten-free, no-processed-sugar dessert? Well, you see, I have friends with many, many allergies, and I felt like a challenge.
This recipe is an experiment with Black Sapote (also called Chocolate Pudding Fruit) – a seasonal persimmon relative that’s high in Vitamin C and has a pudding-like flesh with mild chocolate and caramel notes.  While limited availability of Sapotes means it’s not as accessible as something like Chocolate Avocado Pie, it’s a fun thing to play with if you like poking around with the weird fruit section of produce markets (as a bonus, Black Sapotes were selling for $1-2 per fruit both times I visited).
Let's get cooking:
Kitchen equipment
Blender/food processor
Beaters (hand-crank or electric)
20cm (8 inch) springform cake tin
Baking/greaseproof paper
Fridge & Oven
Fine mesh sieve (or tea strainer)
Mixing bowls, spatula, knives, spoons etc.
Prep-time
Bench work: approx. 40 minutes
Oven time: approx. 30 minutes
Cooling time: approx. 2-3 hours (minimum)
Coconut cream chill time:  12 hours (minimum)*
*Coconut cream needs to be refrigerated for 12 hours in order to separate properly before whipping.  Put your cream in the fridge the night before.
Ingredients
Pie crust
300g (10.5 Oz) pecans
4 large medjool dates (seeds removed, roughly chopped)
1 tsp salt
4 tbsp (approx. 60g/ 2 Oz) margarine, melted
Filling
2 large, ripe Black Sapotes*
1/2 cup (125mL) cocoa powder
4 large medjool dates (seeds removed, roughly chopped)
1/2 tsp salt
1 tsp espresso powder
1 tsp vanilla extract
*Black Sapotes are ripe when the skin has darkened to a dull green-brown and the fruit is very soft to the touch (it will look and feel over-ripe).
Topping
One 400mL (13.5 Fl Oz) can high-quality full-fat coconut cream*
1/2 tsp salt
1 tsp vanilla extract
Approx. 80g (3 Oz) pecans, toasted** and crumbled
Cocoa powder for dusting
*Use a premium quality coconut cream – the difference in price is very small and cheaper creams may not separate properly, which will stop them from whipping up. **Toast pecans by baking in a preheated 140°C/248°F (fan forced) oven for 15-30 minutes until done to your liking. 
Instructions
Pie crust (Inspiration recipe)
Preheat oven to 180°C/350°F (fan forced).
Line the base and sides of the spring form pan with baking paper.  (Try to minimise wrinkles/folds in the paper lining – this will make it easier to remove the crust later.)
Place chopped dates, pecans and salt into a blender.  Blend until the mixture has the texture of breadcrumbs/ almond meal.
Stream in melted margarine and blend until completely combined (crumbs should stick together when pressed)
Press mixture into a firm, even layer across the base and up the sides of the spring form tin, using clean hands or the back of a spoon to smooth down.
Bake for 10-12 minutes or until the crust begins to brown.  (Watch closely after the 8-minute mark to prevent burning).
Remove from oven. If the crust has puffed up, press back down using the back of a spoon. (Tip: use a toothpick or fork to gently pop any large air-pockets in the base.)
Cool on a wire rack or heat-safe cutting board until the crust reaches room temperature.
Filling (Inspiration recipe)
Preheat oven to 180°C/350°F (fan forced).
Cut open the Black Sapotes.  Remove the seeds and scoop/scrape the flesh out with a spoon. (Note: because Black Sapote skin is very thin and soft when ripe, it cannot be traditionally peeled.)
Transfer one quarter of the fruit flesh to a blender, along with the chopped dates.  Process until completely smooth.
Transfer the sweetened sapote mixture into a bowl alongside the remaining sapote flesh.  Mix until completely combined (use your spoon or spatula to break down any lumps or fibers in the fruit flesh).
Fold in the cocoa powder, espresso powder, vanilla and salt until completely combined.  Taste for flavouring and adjust with more cocoa/vanilla/espresso/salt as desired.  (Note: adding cocoa will increase the bitterness of the filling alongside the chocolate-y flavour.  This can be hard to counteract without adding sugar or blending in more dates, so be careful how much you add at once.)
Transfer filling into your pre-prepared pecan crust.  Smooth the top with a spoon or spatula.
Bake for 20 minutes or until the filling is lightly firm to the touch.
Cool completely on a wire rack or heat-safe cutting board, then cover and transfer to the fridge until chilled through.
Cream Topping (Inspiration recipe)
Chill coconut cream for at least 12 hours.
Remove coconut cream from the fridge. (Note: do NOT shake the can.)
Use a spoon to scoop out the thick cream that has risen to the top, leaving the watery part behind. (The more water in your cream, the less firmly it will whip.) Transfer the thick cream to a bowl. (Tip: have an extra tin of chilled cream on hand in case the yield from the first tin is low.)
Add the vanilla and salt to the cream. Stir through, then taste test and adjust the vanilla/salt as needed. (Note: Like with plain dairy cream, this should taste more neutral than sweet).
Beat your cream, starting on a slow speed.  Gradually increase speed until whipped to thick, semi-stiff peaks. (Tip: if your cream is very soft to start with, try covering and chilling in the refrigerator to help it firm up.)
Remove the chilled sapote tart from the spring form pan. (Optional:  If the sides of your pecan crust are much taller than your filling, you can use a pair of clean, sharp kitchen scissors to trim it down.  Use a clean hand to shield the pie filling from any falling crust-crumbs).
Spoon the whipped cream onto the sapote filling, using the back of the spoon to make decorative swirls.  (Optional: use a piping bag with a decorative nozzle for a fancier top).
Sprinkle the cream with crumbled pecans.  Dust with cocoa powder using a fine sieve or tea-strainer.  (Note: Add the nuts first - otherwise the cocoa powder will prevent them from sticking to the cream, causing them to roll everywhere when the pie is cut).
Return to the fridge until the coconut cream is completely chilled.
Serve.
mgjansen81 · 6 years
Whole Grain Spelt Sourdough Ciabatta
New Post has been published on https://makesomethingtasty.com/whole-grain-spelt-sourdough-ciabatta/
Whole Grain Spelt Sourdough Ciabatta
Whole grain spelt sourdough ciabatta [ciabatta integrale] packs all the nutrition of whole spelt wheat into the delicious and fun to use style of ciabatta bread. After developing a white flour sourdough ciabatta recipe for Breadtopia, I wanted to try my hand at an ancient or heirloom grain sourdough ciabatta. Spelt seemed like a good choice because in my past experiments with it, the gluten structure was fairly good. And of course, I enjoy the mellow nutty flavor of spelt wheat. For more about the history of ciabatta in general, read here.
I worked this recipe two ways and suggest you try whichever appeals to you the most, or try both. One version is 100% whole grain spelt because the sourdough starter is made of whole grain spelt flour. The other version has about 70% whole grain spelt flour because the sourdough starter is made of all purpose flour. Here is the math for anyone who is interested.
275g whole grain spelt flour
approx. 125g all purpose flour in the 275g of sourdough starter
400g total flour
275g spelt flour / 400g total flour = .687 or rounded 70% whole grain spelt
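The same arithmetic as a quick sanity check, using the figures from the list above:

```python
# Flour breakdown for the ~70% whole grain version of the dough.
spelt_flour = 275    # g whole grain spelt flour added to the dough
ap_in_starter = 125  # g all-purpose flour carried in by the 275g starter (approx.)

total_flour = spelt_flour + ap_in_starter
whole_grain_fraction = spelt_flour / total_flour

print(total_flour)           # 400
print(whole_grain_fraction)  # 0.6875 — roughly 70% whole grain
```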
Both versions of this recipe call for the same technique, which is different from the white flour sourdough ciabatta recipe I created. To develop the gluten and flavors in the whole grain spelt, I autolyse the dough (water and flour only) while the starter is building, and this step makes mixing short and easy.
Here is a video of mixing to the windowpane test in under two minutes.
The spelt doughs also ferment faster than white flour dough, and I suggest gentle stretching and folding earlier in the bulk fermentation. Shaping is also dictated by the spreading nature of the spelt dough. I fold several times for pre-shaping, and again after the bench rest, before proofing. Lots of dusting with flour and a no-flip, parchment paper lift onto the hot baking stone round out this delicate process.
This whole grain spelt sourdough ciabatta project was exciting, and a few delicious flatbreads were consumed before I shortened the fermenting and proofing times, and started folding the dough more to shape it. I hope you’ll give it a try. Please comment about your experience and modifications, as well as post photos if you’re so inclined.
Whole Grain Spelt Sourdough Ciabatta
Whole grain spelt ciabatta offers the complex flavors and powerful nutrition of the ancient spelt wheat. With high hydration and careful handling of the dough, this bread has the rough, chewy crust and open tender crumb of a white flour ciabatta.
Ingredients
Starter
275g all-purpose flour starter, ~120% hydration, mature and floating
OR
275g whole grain spelt starter, ~120% hydration, mature and floating
Additional Ingredients
12g olive oil (1 Tbsp)
7g salt (1 rounded tsp)
Instructions
Note: This recipe is for one large slipper. The photo gallery below shows two versions of the recipe being made at once, hence two slippers. If you want two (or three smaller) slippers, double the recipe ingredients.
Sourdough Starter
Build a 100% or higher hydration sourdough starter over a couple of days until it is more than 275g in weight, about 2 1/4 cups volume, active and floating.
There are numerous, equally effective ways to get this amount of active starter. I fed my starter and refrigerated it at about 1 cup in volume. On baking day, I put 110g of starter in a large bowl, fed it 75g of all-purpose flour and 90g of water. In volume, this is approximately 1/2 cup starter, 1/2 cup flour and 3/8 cup of water. Several hours later, it was over 2 cups in volume and it floated.
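For anyone who wants to check the arithmetic, the feeding described above lands exactly on the 275g of starter the recipe calls for. A small sketch, assuming the carried-over starter is itself already at 120% hydration:

```python
# Starter build math: hydration = water / flour, so a 120% hydration
# starter is 1 part flour to 1.2 parts water by weight.
carryover = 110                  # g mature starter from the fridge
c_flour = carryover / (1 + 1.2)  # ~50g flour in the carryover
c_water = carryover - c_flour    # ~60g water

flour = c_flour + 75             # feed: 75g all-purpose flour
water = c_water + 90             # feed: 90g water

print(round(flour + water, 1))   # 275.0 g — the amount the recipe needs
print(round(water / flour, 2))   # 1.2 — hydration stays at 120%
```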
Autolyse
At about the same time you do the last feeding of your starter, combine the whole grain spelt flour and water in a large bowl and cover. This will help build the gluten structure and enhance the flavor. If the dough seems too dry to incorporate all the flour, pause for a few minutes, then mix again. Add 10g more water if there is still any dry flour.
Mixing
When the starter is ready (for me this is about 4 hours after feeding), add it, the olive oil, and the salt to the autolysed dough. Mix by hand until incorporated and continue to develop the gluten for about 2 minutes or until the dough passes the windowpane test. Use one hand to turn the bowl and the other hand to scrape, scoop, and pull the dough around the bowl. Because of the autolyse, this mixing takes less time than what I recommend in the white flour ciabatta recipe.
Here is a video of mixing the dough with the all purpose starter from beginning to end in under 2 minutes, and here is a video of the last 30 seconds of mixing the dough with the whole grain spelt starter. Both videos show the windowpane test.
After mixing, mark the time and room temperature if you like to track fermentation this way.
Bulk Fermentation / Stretching and Folding
Cover and let the dough rest for 30 minutes.
Do three gentle stretch and folds, spaced 25-30 minutes apart. Use damp fingertips to pull the dough from the sides of the bowl and fold it over (four sides, twice around). Cover after each stretch and fold.
Let ferment until puffy and roughly doubled. This was about 3 hours from mixing at 74F for me.
Shaping and Proofing
Heavily flour your countertop, then scrape or pull the dough out of the bowl onto it.
Flour your hands repeatedly and gently slide your fingers under the dough from all sides to pull it outward into a square shape about 1/2 an inch thick. Both versions of this dough are more delicate than white flour ciabatta dough, so work slowly until you get a feel for it.
Using a bench scraper, fold the dough in half. Brush off the extra flour on the top, then fold it in thirds (see gallery below). Let the dough rest about 20 minutes.
Place a piece of parchment paper on a floured couche or tea towel, set on a small upside-down cookie sheet. The couche/towel should be floured in case it touches the top of the slipper once covered. The cookie sheet under the towel provides a flat surface if you want to proof in the refrigerator.
Fold the preshaped dough in half as it will have spread while resting. Consider a second fold for the all whole grain dough which spreads more.
Using a bench scraper, transfer the slipper to the parchment paper.
Dust the top of the slipper with more flour, and loosely cover it with the tea towel or couche.
Let it proof for 30 minutes to 1 hour at room temperature, or 1+ hour in the refrigerator.
Oven Prep and Baking
If you have a baking stone, put it in the oven. You can also cook the slippers on a metal cookie sheet or upside down cast iron pan. (Do not use a dark non-stick surface because of the high oven temperature.)
Prepare to create steam early in the baking process. You can put a cast iron pan on the shelf under the baking stone. Or you can create a drip system by putting an aluminum pan with a pinhole in the oven under the stone, and a metal cookie sheet on the base of the oven. When making the pinhole, test over your sink that water drips through it about 1-2 drops a second. This system was devised by Breadtopia community member @peevee. My heating element is exposed on the bottom of my oven, so the aluminum pan drips onto the bottom of the oven rather than a cookie sheet, which works fine too. (See gallery for photo of setup.)
Preheat your oven and stone to 500 F for 30 minutes before the dough finishes proofing. If you are baking on a cookie sheet, a 15 minute preheat should be enough.
When proofing is complete, remove the tea towel from under the parchment paper and slipper, and slide the parchment paper and slipper onto your preheated stone. You can use a pizza peel or the same upside down cookie sheet to transfer the paper and slipper.
Immediately pour a cup of water onto the aluminum tray or cast iron pan that is under your stone, and quickly close the oven door to trap the steam.
Bake at 500 F for 10 minutes. Then remove the parchment paper and rotate the slippers 180 degrees if your oven seems to brown unevenly.
Bake an additional 8-10 minutes at 450 F.
Important: After baking, turn off the oven, crack open the door a couple of inches, and leave the slippers in the oven for another 5 minutes. This will make the crust crunchier.
If you make rolls or larger slippers, adjust your cooking time accordingly. Internal temperature should be about 205 F.
Let cool on a rack for at least 1 hour before slicing.
For most purposes (olive oil and tomatoes, butter and blue cheese, BBQ pulled meat), I like to slice a slipper horizontally and then into roll-size pieces 3-4 inches wide.
https://breadtopia.com/whole-grain-spelt-sourdough-ciabatta/
About the author
Melissa Johnson: Devoted bread baker, inspired by the science and art of sourdough.
junker-town · 6 years
College football strength of schedule rankings and how to have a useful conversation about them
Included at the bottom is a new SOS-style ranking system that will be updated here each week.
SB Nation college football editor Jason Kirk asked me to write about strength of schedule, and I’ll be honest: it pissed me off. Nothing personal, of course. We just all have our triggers.
I hate strength of schedule arguments more than I hate red zone fade routes, five-hour football games, and that this dumbass sport I love continues to get away with a bunch of nonsense because of a cynical, decades-old definition of “student athlete.”
I hate these arguments partly because of the contradiction. Let me explain.
In my work with advanced stats, the S&P+ ratings, and so forth, I field certain categories of insult pretty frequently. Here are two of the most common:
“Watch the games, nerd!” The insinuation: if you’d get your nose out of the spreadsheets, dork, you’d see that Team A is clearly better than Team B.
“Team A ain’t played nobody!” The insinuation: How could Team A possibly be any good when they haven’t played a team that clears someone’s arbitrary bar of goodness?
“You saw what happened in the games I watched, right?” the first one says. The second says to watch one particular game and ignore the others.
Football’s going to have a sample size issue no matter what, so we should milk whatever meaning we can out of every game we’ve got.
Arguing about strength of schedule the wrong way means limiting the sample even further and acting like we can’t get meaning out of every play of every game. We can. Honest.
Strength of schedule is at the heart of virtually every college football argument between October and January each year. Hell, it’s a point of debate for every college sport. The schedules are too varied and not comprehensive enough. That the power conferences are moving toward three-game non-conference schedules in football makes this national connectivity even worse.
At the pro level, there are strong and weak divisions, sure. But the schedules are still infinitely more balanced. In the MLB, everybody plays everybody in their league at least a few times, with some cross-league games. In the NBA and NHL, everybody plays everybody at least once. There is connectivity.
With 130 FBS teams and 12 games, that simply isn’t an option for college football. So we play who we play, and we yell about who our rivals ain’t played.
How strength of schedule determines the national champion, sort of
Arguments are one thing, but college football’s national title is more directly affected by strength of schedule than that of any other major sport. It’s baked right into the College Football Playoff selection process.
When circumstances at the margins indicate that teams are comparable, then the following criteria must be considered:
* Championships won
* Strength of schedule
* Head-to-head competition (if it occurred)
* Comparative outcomes of common opponents (without incenting margin of victory)
We believe that a committee of experts properly instructed (based on beliefs that the regular season is unique and must be preserved; and that championships won on the field and strength of schedule are important values that must be incorporated into the selection process) has very strong support throughout the college football community.
(A digression, but they should clarify that they’re looking at conference championships. A fan could read that and assume Alabama’s going to get permanent preferential treatment for its 114 claimed national titles.)
It is decreed that the committee takes strength of schedule into account, but it intentionally doesn’t say how. It frowns on advanced analytics — from the same rules document: “Nuanced mathematical formulas ignore some teams who ‘deserve’ to be selected;” God forbid — and gives no alternative guidance. So the committee ends up going with things like “wins over top-25 teams” or “assuring there’s no way in hell a team from a Group of 5 conference will ever get in.”
By now, though, some are figuring out pretty clearly how strength of schedule is taken into account.
ESPN’s stats team has created both forward- and backward-looking measures to assess the difference between the “best” teams and those typically determined most deserving of a title shot. As it turns out, the backwards view is pretty effective at mirroring committee action. The Strength of Record measure has proven quite adept at figuring out how the committee will rank teams.
Everything that goes into FPI’s SOS rankings — opponent strength, game site, distance traveled and rest — is used to capture the difficulty of a team’s schedule. Thus, amassing a number of “good wins,” no matter how the game was won, will boost a team’s Strength of Record.
Despite the committee’s mantra of selecting the “four best teams in the country,” it appears that in the first two years of playoff selection, the committee favored team accomplishment over team strength. So if you are trying to predict what the committee will do, take a look at strength of record, because seven of eight teams to make the playoff ranked in the top four of that metric before playoff selection. Then FPI can be used to predict which teams will ultimately come out on top.
The committee insists it is looking for the “best” team. It is not. Kirby Hocutt, chairman of the CFP committee as of 2016, has conflated “best” and “most deserving” on a number of occasions. An example:
Q: Are you looking for the four best teams or the four most-deserving teams? Is there a difference?
A: You have to take into account the entire season. The season doesn't start in October. Everybody has 12 regular-season opportunities, and the committee is watching. At the end of the year, we want to make sure we have the four very best teams over the course of the entire season.
They do not. And that’s fine, I guess.
Two major problems, however: a one-point win is not a 30-point win, and you don’t have to wait till someone plays a good team to start learning.
Take these two pieces as an example.
Here I am in Oct. 2013, writing about how Florida State had been dominant long before the Noles dominated Clemson.
"Are they tested?" is just a box you check. While you can certainly find examples of teams that look great against awful teams, then stumble when punched in the mouth for the first time (Louisville), you can usually glean just as much from how a team dominates bad competition as from how it plays against really good teams. Picking Team A simply because it is more tested than Team B is usually a recipe for making bad picks.
The early-season stats suggested that, despite not playing a top team yet, Florida State was pretty incredible. The Seminoles went out and left no doubt on Saturday night in Clemson.
Here I am in Oct. 2014, writing about how Florida State was probably doomed as a title contender despite some wins over good teams.
The best teams, the most likely championship teams, are the ones that handle their business early and put games out of reach before luck, chance, fumbles, and vital offensive pass interference calls can impact the outcome.
According to the F/+ rankings, the Seminoles have been just barely good enough to survive No. 9 Clemson at home (without Winston), No. 15 Louisville on the road, and No. 19 Notre Dame at home. They survived No. 44 Oklahoma State on a neutral field, and they pulled away from No. 53 NC State in the fourth quarter. They pummeled No. 76 Syracuse and eventually got around to doing the same to No. 89 Wake Forest.
They have, in other words, solidified that they should be ranked around seventh to 12th in these ratings. They were 11th heading into this week. [...] Without sustained improvement, and without the ability to play a full 60 minutes at a high level, they will in no way be a favorite to beat two top-four teams in the College Football Playoff and win the national title.
The 2013 team that hadn’t played nobody, but that was destroying its opponents, went on to win the national title.
The 2014 team that was winning, but not impressing the numbers, eked out a Playoff bid with seven one-possession wins (five against teams that went 8-5 or worse) but got embarrassed.
To its credit, the CFP committee did dock FSU a bit for its lackluster performance. The unbeaten Noles were third in the rankings behind two one-loss teams. But these two FSU teams are ultimate examples for this simple truism:
You can learn something from every game, if you try.
That’s the point of using advanced stats in the first place, be it S&P+ or any other flavor. You set the baseline depending on the opponent(s) at hand, and you compare the actual output to that adjusted expectation. It fills in what your eyes are missing. (And with 800-plus college football games in a season, your eyes are always missing something.)
Your record does matter. Even as an advanced stats loyalist, I’m not exactly going to call for a three-loss team to get a CFP spot, even if said team was unlucky in every loss and ranks first in S&P+. Wins and losses aren’t particularly predictive in and of themselves, but they still have to mean something. Even the best team shouldn’t get in, if it’s not high on the most-deserving list.
So what if we actually tried to combine the two worlds? What if we used a “best” measure to begin approximating what “most deserving” truly is?
What if we took the Strength of Record idea and added an extra level of team quality to it?
Introducing Résumé S&P+.
I’m going to introduce my takes on two pretty familiar concepts.
There are countless ways to measure one’s strength of schedule, but I’m going to choose one most directly tied to the national title race. Makes sense, since “strength of schedule” is right there in the mission statement.
Below are each FBS team’s rankings in three categories:
S&P+, an overall team efficiency rating system you can read more about here. It can be used to predict wins and losses going forward.
Strength of Schedule (SOS), which amounts to how well the average top-five team (according to S&P+) would fare, in terms of win percentage, against your schedule. The lower the number, the harder the schedule.
Résumé S&P+, which looks at a team’s season scoring margin and compares it to what the average top-five team’s scoring margin would likely be against the schedule at hand. If the number is positive (and for most, it won’t be), that means said team is faring better than the typical top-five team. Instead of any advanced stats or win probability info, I’m adhering strictly to actual margins.
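As a toy illustration of how the two schedule numbers fit together (this is only the arithmetic described above with invented per-game figures, not the actual S&P+ model): SOS averages the hypothetical top-five team's win probability across the slate, while the résumé rating averages the gap between actual and expected scoring margins.

```python
# Each game: (top-five team's win probability vs. this opponent,
#             top-five team's expected margin, this team's actual margin)
games = [
    (0.95, 28, 35),
    (0.80, 14, 10),
    (0.70,  7, 21),
    (0.60,  3, -3),
]

# SOS: how often the average top-five team would win against this schedule.
# Lower = harder schedule.
sos = sum(p for p, _, _ in games) / len(games)

# Résumé rating: average of (actual margin - top-five expected margin).
# Positive = outperforming the average top-five team.
resume = sum(actual - expected for _, expected, actual in games) / len(games)

print(f"SOS: {sos:.2f}")
print(f"Resume rating: {resume:+.1f} points per game")
```

A team on a soft slate (a high SOS win percentage) needs larger margins to keep its résumé rating positive, which is the tension behind the UCF comparison discussed below.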
To use the current top Résumé S&P+ team as an example, let’s look at Alabama.
The Crimson Tide are currently second in overall S&P+. Their schedule strength ranks 34th; if the average top-five team played Alabama’s previous opponents a countless number of times, they would win about 79 percent of those games. The Tide have won 92 percent of their games, with an average scoring margin about 12.5 points per game higher than said average top-five team.
Compare that to the schedule of someone like Auburn. The average top-five team would have a win percentage of only about 73 percent against the Tigers’ slate. Those tiny differences can add up; Auburn’s been a legitimate top-10 team with a top-10 résumé but has still suffered three losses.
Then there’s UCF. The Knights’ strength of schedule ranks just 103rd overall, and they are being punished severely for it by the CFP committee. But their average scoring margin is about 3.8 points ahead of how the average top-five team would be expected to perform. That’s a Résumé S&P+ rating within 0.6 points of Oklahoma.
(This is a wide table, because it has to be; if it’s not showing well on your phone, consider taking a look on another device later.)
Eleven teams have positive Résumé S&P+ ratings. That’s more than one would expect if you’re comparing performances to that of an average top-five team. But here’s where I remind you that the average top-five team at the moment is not nearly as good as it has been in previous years. It only takes perhaps one truly awesome performance to produce an average above the top-five expectation.
By the way, to those who want to moan about incentivizing running up the score in this measure (since we’re using scoring margins), suck it up.
Margin of victory is infinitely more informative than “did they win?” It just is. Besides, it's odd to suddenly care about hurt feelings when I'm pretty sure telling half of FBS they don't have a shot at the national title no matter how well they play is more hurtful to those feelings than winning 59-0 instead of 49-0.
Best versus most deserving. How you’ve played versus who you’ve played. Maybe there’s a way to tie together these worlds after all.
I’m still mad at Jason, though.
0 notes
junker-town · 6 years
Text
College football strength of schedule rankings and how to have a useful conversation about them
Included at the bottom is a new SOS-style ranking system that will be updated here each week.
SB Nation college football editor Jason Kirk asked me to write about strength of schedule, and I’ll be honest: it pissed me off. Nothing personal, of course. We just all have our triggers.
I hate strength of schedule arguments more than I hate red zone fade routes, five-hour football games, and that this dumbass sport I love continues to get away with a bunch of nonsense because of a cynical, decades-old definition of “student athlete.”
I hate these arguments partly because of the contradiction. Let me explain.
In my work with advanced stats, the S&P+ ratings, and so forth, I field certain categories of insult pretty frequently. Here are two of the most common:
“Watch the games, nerd!” The insinuation: if you’d get your nose out of the spreadsheets, dork, you’d see that Team A is clearly better than Team B.
“Team A ain’t played nobody!” The insinuation: How could Team A possibly be any good when they haven’t played a team that clears someone’s arbitrary bar of goodness?
“You saw what happened in the games I watched, right?” the first one says. The second says to watch one particular game and ignore the others.
Football’s going to have a sample size issue no matter what, so we should milk whatever meaning we can out of every game we’ve got.
Arguing about strength of schedule the wrong way means limiting the sample even further and acting like we can’t get meaning out of every play of every game. We can. Honest.
Strength of schedule is at the heart of virtually every college football argument between October and January each year. Hell, it’s a point of debate for every college sport. The schedules are too varied and not comprehensive enough. That the power conferences are moving toward three-game non-conference schedules in football makes this national connectivity even worse.
At the pro level, there are strong and weak divisions, sure. But the schedules are still infinitely more balanced. In the MLB, everybody plays everybody in their league at least a few times, with some cross-league games. In the NBA and NHL, everybody plays everybody at least once. There is connectivity.
With 130 FBS teams and 12 games, that simply isn’t an option for college football. So we play who we play, and we yell about who our rivals ain’t played.
How strength of schedule determines the national champion, sort of
Arguments are one thing, but college football’s national title is more directly affected by strength of schedule than that of any other major sport. It’s baked right into the College Football Playoff selection process.
When circumstances at the margins indicate that teams are comparable, then the following criteria must be considered:
* Championships won
* Strength of schedule
* Head-to-head competition (if it occurred)
* Comparative outcomes of common opponents (without incenting margin of victory)
We believe that a committee of experts properly instructed (based on beliefs that the regular season is unique and must be preserved; and that championships won on the field and strength of schedule are important values that must be incorporated into the selection process) has very strong support throughout the college football community.
(A digression, but they should clarify that they’re looking at conference championships. A fan could read that and assume Alabama’s going to get permanent preferential treatment for its 114 claimed national titles.)
It is decreed that the committee takes strength of schedule into account, but it intentionally doesn’t say how. It frowns on advanced analytics — from the same rules document: “Nuanced mathematical formulas ignore some teams who ‘deserve’ to be selected;” God forbid — and gives no alternative guidance. So the committee ends up going with things like “wins over top-25 teams” or “assuring there’s no way in hell a team from a Group of 5 conference will ever get in.”
By now, though, some are figuring out pretty clearly how strength of schedule is taken into account.
ESPN’s stats team has created both forward- and backward-looking measures to assess the difference between the “best” teams and those typically determined most deserving of a title shot. As it turns out, the backwards view is pretty effective at mirroring committee action. The Strength of Record measure has proven quite adept at figuring out how the committee will rank teams.
Everything that goes into FPI’s SOS rankings — opponent strength, game site, distance traveled and rest — is used to capture the difficulty of a team’s schedule. Thus, amassing a number of “good wins,” no matter how the game was won, will boost a team’s Strength of Record.
Despite the committee’s mantra of selecting the “four best teams in the country,” it appears that in the first two years of playoff selection, the committee favored team accomplishment over team strength. So if you are trying to predict what the committee will do, take a look at strength of record, because seven of eight teams to make the playoff ranked in the top four of that metric before playoff selection. Then FPI can be used to predict which teams will ultimately come out on top.
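ESPN hasn’t published Strength of Record’s exact formula, but the core idea — how likely would a generic good team be to post this record against this schedule? — is easy to sketch. Everything below (the reference win probabilities, the function name) is invented for illustration, not ESPN’s implementation:

```python
# Toy "Strength of Record"-style calculation (NOT ESPN's actual formula).
# Given per-game win probabilities for a generic reference team against
# this schedule, how likely is a record this good or better?

def record_or_better_prob(win_probs, wins_achieved):
    """P(reference team wins >= wins_achieved of these games).

    Treats games as independent coin flips with the given probabilities
    (a Poisson-binomial distribution, computed by dynamic programming).
    """
    dist = [1.0]  # dist[k] = probability of exactly k wins so far
    for p in win_probs:
        nxt = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            nxt[k] += prob * (1 - p)   # lose this game
            nxt[k + 1] += prob * p     # win this game
        dist = nxt
    return sum(dist[wins_achieved:])

# Hypothetical: a reference top-25 team's win probability in each of 12 games.
ref_probs = [0.95, 0.90, 0.55, 0.85, 0.70, 0.92, 0.60, 0.80, 0.97, 0.50, 0.75, 0.88]

# Going 11-1 against this slate is far rarer than going 9-3.
print(record_or_better_prob(ref_probs, 11))
print(record_or_better_prob(ref_probs, 9))
```

The rarer the record against that slate, the stronger the résumé — and note that, like the committee, this measure cares only that each game was won, not by how much.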
The committee insists it is looking for the “best” team. It is not. Kirby Hocutt, chairman of the CFP committee as of 2016, has conflated “best” and “most deserving” on a number of occasions. An example:
Q: Are you looking for the four best teams or the four most-deserving teams? Is there a difference?
A: You have to take into account the entire season. The season doesn't start in October. Everybody has 12 regular-season opportunities, and the committee is watching. At the end of the year, we want to make sure we have the four very best teams over the course of the entire season.
They do not. And that’s fine, I guess.
Two major problems, however: a one-point win is not a 30-point win, and you don’t have to wait till someone plays a good team to start learning.
Take these two pieces as an example.
Here I am in Oct. 2013, writing about how Florida State had been dominant long before the Noles dominated Clemson.
"Are they tested?" is just a box you check. While you can certainly find examples of teams that look great against awful teams, then stumble when punched in the mouth for the first time (Louisville), you can usually glean just as much from how a team dominates bad competition as from how it plays against really good teams. Picking Team A simply because it is more tested than Team B is usually a recipe for making bad picks.
The early-season stats suggested that, despite not playing a top team yet, Florida State was pretty incredible. The Seminoles went out and left no doubt on Saturday night in Clemson.
Here I am in Oct. 2014, writing about how Florida State was probably doomed as a title contender despite some wins over good teams.
The best teams, the most likely championship teams, are the ones that handle their business early and put games out of reach before luck, chance, fumbles, and vital offensive pass interference calls can impact the outcome.
According to the F/+ rankings, the Seminoles have been just barely good enough to survive No. 9 Clemson at home (without Winston), No. 15 Louisville on the road, and No. 19 Notre Dame at home. They survived No. 44 Oklahoma State on a neutral field, and they pulled away from No. 53 NC State in the fourth quarter. They pummeled No. 76 Syracuse and eventually got around to doing the same to No. 89 Wake Forest.
They have, in other words, solidified that they should be ranked around seventh to 12th in these ratings. They were 11th heading into this week. [...] Without sustained improvement, and without the ability to play a full 60 minutes at a high level, they will in no way be a favorite to beat two top-four teams in the College Football Playoff and win the national title.
The 2013 team that hadn’t played nobody, but that was destroying its opponents, went on to win the national title.
The 2014 team that was winning, but not impressing the numbers, eked out a Playoff bid with seven one-possession wins (five against teams that went 8-5 or worse) but got embarrassed.
To its credit, the CFP committee did dock FSU a bit for its lackluster performance. The unbeaten Noles were third in the rankings behind two one-loss teams. But these two FSU teams are ultimate examples for this simple truism:
You can learn something from every game, if you try.
That’s the point of using advanced stats in the first place, be it S&P+ or any other flavor. You set the baseline depending on the opponent(s) at hand, and you compare the actual output to that adjusted expectation. It fills in what your eyes are missing. (And with 800-plus college football games in a season, your eyes are always missing something.)
Your record does matter. Even as an advanced stats loyalist, I’m not exactly going to call for a three-loss team to get a CFP spot, even if said team was unlucky in every loss and ranks first in S&P+. Wins and losses aren’t particularly predictive in and of themselves, but they still have to mean something. Even the best team shouldn’t get in, if it’s not high on the most-deserving list.
So what if we actually tried to combine the two worlds? What if we used a “best” measure to begin approximating what “most deserving” truly is?
What if we took the Strength of Record idea and added an extra level of team quality to it?
Introducing Résumé S&P+.
I’m going to introduce my takes on two pretty familiar concepts.
There are countless ways to measure one’s strength of schedule, but I’m going to choose one most directly tied to the national title race. Makes sense, since “strength of schedule” is right there in the mission statement.
Below are each FBS team’s rankings in three categories:
S&P+, an overall team efficiency rating system you can read more about here. It can be used to predict wins and losses going forward.
Strength of Schedule (SOS), which amounts to how well the average top-five team (according to S&P+) would fare, in terms of win percentage, against your schedule. The lower the number, the harder the schedule.
Résumé S&P+, which looks at a team’s season scoring margin and compares it to what the average top-five team’s scoring margin would likely be against the schedule at hand. If the number is positive (and for most, it won’t be), that means said team is faring better than the typical top-five team. Instead of any advanced stats or win probability info, I’m adhering strictly to actual margins.
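Stripped of the ratings machinery, both of the new measures are simple averages over per-game expectations. A minimal sketch — the “average top-five team” numbers would come from S&P+; here every number is invented for illustration:

```python
# Minimal sketch of the SOS and Résumé S&P+ ideas described above.
# All per-game expectations here are made-up numbers.

def sos(top5_win_probs):
    """Expected win percentage for an average top-five team against
    this schedule. Lower = harder schedule."""
    return sum(top5_win_probs) / len(top5_win_probs)

def resume_rating(actual_margins, top5_expected_margins):
    """Team's average scoring margin minus what an average top-five team
    would be expected to average against the same schedule.
    Positive = outperforming a typical top-five team."""
    n = len(actual_margins)
    return sum(actual_margins) / n - sum(top5_expected_margins) / n

# Hypothetical 12-game schedule:
top5_wp = [0.97, 0.93, 0.60, 0.88, 0.75, 0.95, 0.70, 0.85, 0.99, 0.65, 0.80, 0.90]
top5_margin = [28, 24, 3, 17, 9, 26, 7, 15, 35, 5, 12, 20]
actual_margin = [35, 31, 10, 24, 3, 38, 14, 21, 42, -2, 20, 27]

print(round(sos(top5_wp), 3))                               # 0.831: a top-five team wins ~83% of these
print(round(resume_rating(actual_margin, top5_margin), 2))  # 5.17: points per game above that baseline
```

A blowout loss (the -2 above) drags the rating down only as far as its margin does, which is the point: every game, and every point of margin, counts.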
To use the current top Résumé S&P+ team as an example, let’s look at Alabama.
The Crimson Tide are currently second in overall S&P+. Their schedule strength ranks 31st after their Week 13 loss to Auburn; if the average top-five team played Alabama’s previous opponents countless times, they would win about 79 percent of those games. The Tide have won 92 percent of their games, with an average scoring margin about 12.5 points per game higher than said average top-five team.
Compare that to the schedule of someone like Maryland. The average top-five team would have a win percentage of only about 72 percent against the Terrapins’ slate. Those tiny differences can add up, and they certainly have for the 4-8 Terps.
Then there’s UCF. The Knights’ strength of schedule ranks just 115th overall, and they are being punished severely for it by the CFP committee. But their average scoring margin is about 4.4 points ahead of how the average top-five team would be expected to perform. That’s a Résumé S&P+ rating within 1.3 points of Wisconsin and Clemson.
(This is a wide table, because it has to be; if it’s not showing well on your phone, consider taking a look on another device later.)
Teams won’t always be in the order we’d expect. Miami has done okay vs. the expected output of a typical top-five team, but the Hurricanes would still be projected to beat, say, a two-loss Notre Dame team that has faced a much harder schedule and currently features a ‘better’ Résumé S&P+ rating.
As of now, 12 teams have positive Résumé S&P+ ratings. That’s more than one would expect if you’re comparing performances to that of an average top-five team. But here’s where I remind you that the average top-five team at the moment is not nearly as good as it has been in previous years. It only takes perhaps one truly awesome performance to produce an average above the top-five expectation.
By the way, to those who want to moan about incentivizing running up the score in this measure (since we’re using scoring margins), suck it up.
Margin of victory is infinitely more informative than “did they win?” It just is. Besides, it's odd to suddenly care about hurt feelings when I'm pretty sure telling half of FBS they don't have a shot at the national title no matter how well they play is more hurtful to those feelings than winning 59-0 instead of 49-0.
Best versus most deserving. How you’ve played versus who you’ve played. Maybe there’s a way to tie together these worlds after all.
I’m still mad at Jason, though.
junker-town · 6 years
Text
College football strength of schedule rankings and how to have a useful conversation about them
Included at the bottom is a new SOS-style ranking system that will be updated here each week.
SB Nation college football editor Jason Kirk asked me to write about strength of schedule, and I’ll be honest: it pissed me off. Nothing personal, of course. We just all have our triggers.
I hate strength of schedule arguments more than I hate red zone fade routes, five-hour football games, and that this dumbass sport I love continues to get away with a bunch of nonsense because of a cynical, decades-old definition of “student athlete.”
I hate these arguments partly because of the contradiction. Let me explain.
In my work with advanced stats, the S&P+ ratings, and so forth, I field certain categories of insult pretty frequently. Here are two of the most common:
“Watch the games, nerd!” The insinuation: if you’d get your nose out of the spreadsheets, dork, you’d see that Team A is clearly better than Team B.
“Team A ain’t played nobody!” The insinuation: How could Team A possibly be any good when they haven’t played a team that clears someone’s arbitrary bar of goodness?
“You saw what happened in the games I watched, right?” the first one says. The second says to watch one particular game and ignore the others.
Football’s going to have a sample size issue no matter what, so we should milk whatever meaning we can out of every game we’ve got.
Arguing about strength of schedule the wrong way means limiting the sample even further and acting like we can’t get meaning out of every play of every game. We can. Honest.
Strength of schedule is at the heart of virtually every college football argument between October and January each year. Hell, it’s a point of debate for every college sport. The schedules are too varied and not comprehensive enough. That the power conferences are moving toward three-game non-conference schedules in football makes national connectivity even worse.
At the pro level, there are strong and weak divisions, sure. But the schedules are still infinitely more balanced. In the MLB, everybody plays everybody in their league at least a few times, with some cross-league games. In the NBA and NHL, everybody plays everybody at least once. There is connectivity.
With 130 FBS teams and 12 games, that simply isn’t an option for college football. So we play who we play, and we yell about who our rivals ain’t played.
How strength of schedule determines the national champion, sort of
Arguments are one thing, but college football’s national title is more directly affected by strength of schedule than that of any other major sport. It’s baked right into the College Football Playoff selection process.
When circumstances at the margins indicate that teams are comparable, then the following criteria must be considered:
* Championships won
* Strength of schedule
* Head-to-head competition (if it occurred)
* Comparative outcomes of common opponents (without incenting margin of victory)
We believe that a committee of experts properly instructed (based on beliefs that the regular season is unique and must be preserved; and that championships won on the field and strength of schedule are important values that must be incorporated into the selection process) has very strong support throughout the college football community.
(A digression, but they should clarify that they’re looking at conference championships. A fan could read that and assume Alabama’s going to get permanent preferential treatment for its 114 claimed national titles.)
It is decreed that the committee takes strength of schedule into account, but it intentionally doesn’t say how. It frowns on advanced analytics — from the same rules document: “Nuanced mathematical formulas ignore some teams who ‘deserve’ to be selected;” God forbid — and gives no alternative guidance. So the committee ends up going with things like “wins over top-25 teams” or “assuring there’s no way in hell a team from a Group of 5 conference will ever get in.”
By now, though, some are figuring out pretty clearly how strength of schedule is taken into account.
ESPN’s stats team has created both forward- and backward-looking measures to assess the difference between the “best” teams and those typically determined most deserving of a title shot. As it turns out, the backwards view is pretty effective at mirroring committee action. The Strength of Record measure has proven quite adept at figuring out how the committee will rank teams.
Everything that goes into FPI’s SOS rankings — opponent strength, game site, distance traveled and rest — is used to capture the difficulty of a team’s schedule. Thus, amassing a number of “good wins,” no matter how the game was won, will boost a team’s Strength of Record.
Despite the committee’s mantra of selecting the “four best teams in the country,” it appears that in the first two years of playoff selection, the committee favored team accomplishment over team strength. So if you are trying to predict what the committee will do, take a look at strength of record, because seven of eight teams to make the playoff ranked in the top four of that metric before playoff selection. Then FPI can be used to predict which teams will ultimately come out on top.
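The SOR concept itself is straightforward to sketch even though ESPN’s exact formula isn’t public: estimate how likely a generic good team would be to post this record against this schedule. All numbers and names below are invented for illustration:

```python
# Toy "Strength of Record"-style calculation (NOT ESPN's actual formula):
# the probability that a generic reference team would match or beat a
# given win total against the same schedule.

def record_or_better_prob(win_probs, wins_achieved):
    """P(reference team wins >= wins_achieved of these games), treating
    games as independent coin flips (Poisson-binomial, via DP)."""
    dist = [1.0]  # dist[k] = probability of exactly k wins so far
    for p in win_probs:
        nxt = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            nxt[k] += prob * (1 - p)   # lose this game
            nxt[k + 1] += prob * p     # win this game
        dist = nxt
    return sum(dist[wins_achieved:])

# Hypothetical reference-team win probabilities for a 12-game slate.
ref_probs = [0.96, 0.88, 0.50, 0.82, 0.74, 0.91, 0.64, 0.79, 0.98, 0.58, 0.72, 0.86]

# An 11-1 record against this slate is far less likely than 9-3,
# so it carries more weight in a record-based measure.
print(record_or_better_prob(ref_probs, 11))
print(record_or_better_prob(ref_probs, 9))
```

Because only wins and losses enter the calculation, a one-point escape counts exactly as much as a 30-point rout — which is the behavior the committee has rewarded.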
The committee insists it is looking for the “best” team. It is not. Kirby Hocutt, chairman of the CFP committee as of 2016, has conflated “best” and “most deserving” on a number of occasions. An example:
Q: Are you looking for the four best teams or the four most-deserving teams? Is there a difference?
A: You have to take into account the entire season. The season doesn't start in October. Everybody has 12 regular-season opportunities, and the committee is watching. At the end of the year, we want to make sure we have the four very best teams over the course of the entire season.
They do not. And that’s fine, I guess.
Two major problems, however: a one-point win is not a 30-point win, and you don’t have to wait till someone plays a good team to start learning.
Take these two pieces as an example.
Here I am in Oct. 2013, writing about how Florida State had been dominant long before the Noles dominated Clemson.
"Are they tested?" is just a box you check. While you can certainly find examples of teams that look great against awful teams, then stumble when punched in the mouth for the first time (Louisville), you can usually glean just as much from how a team dominates bad competition as from how it plays against really good teams. Picking Team A simply because it is more tested than Team B is usually a recipe for making bad picks.
The early-season stats suggested that, despite not playing a top team yet, Florida State was pretty incredible. The Seminoles went out and left no doubt on Saturday night in Clemson.
Here I am in Oct. 2014, writing about how Florida State was probably doomed as a title contender despite some wins over good teams.
The best teams, the most likely championship teams, are the ones that handle their business early and put games out of reach before luck, chance, fumbles, and vital offensive pass interference calls can impact the outcome.
According to the F/+ rankings, the Seminoles have been just barely good enough to survive No. 9 Clemson at home (without Winston), No. 15 Louisville on the road, and No. 19 Notre Dame at home. They survived No. 44 Oklahoma State on a neutral field, and they pulled away from No. 53 NC State in the fourth quarter. They pummeled No. 76 Syracuse and eventually got around to doing the same to No. 89 Wake Forest.
They have, in other words, solidified that they should be ranked around seventh to 12th in these ratings. They were 11th heading into this week. [...] Without sustained improvement, and without the ability to play a full 60 minutes at a high level, they will in no way be a favorite to beat two top-four teams in the College Football Playoff and win the national title.
The 2013 team that hadn’t played nobody, but that was destroying its opponents, went on to win the national title.
The 2014 team that was winning, but not impressing the numbers, eked out a Playoff bid with seven one-possession wins (five against teams that went 8-5 or worse) but got embarrassed.
To its credit, the CFP committee did dock FSU a bit for its lackluster performance. The unbeaten Noles were third in the rankings behind two one-loss teams. But these two FSU teams are ultimate examples for this simple truism:
You can learn something from every game, if you try.
That’s the point of using advanced stats in the first place, be it S&P+ or any other flavor. You set the baseline depending on the opponent(s) at hand, and you compare the actual output to that adjusted expectation. It fills in what your eyes are missing. (And with 800-plus college football games in a season, your eyes are always missing something.)
Your record does matter. Even as an advanced stats loyalist, I’m not exactly going to call for a three-loss team to get a CFP spot, even if said team was unlucky in every loss and ranks first in S&P+. Wins and losses aren’t particularly predictive in and of themselves, but they still have to mean something. Even the best team shouldn’t get in, if it’s not high on the most-deserving list.
So what if we actually tried to combine the two worlds? What if we used a “best” measure to begin approximating what “most deserving” truly is?
What if we took the Strength of Record idea and added an extra level of team quality to it?
Introducing Résumé S&P+.
I’m going to introduce my takes on two pretty familiar concepts.
There are countless ways to measure one’s strength of schedule, but I’m going to choose one most directly tied to the national title race. Makes sense, since “strength of schedule” is right there in the mission statement.
Below are each FBS team’s rankings in three categories:
S&P+, an overall team efficiency rating system you can read more about here. It can be used to predict wins and losses going forward.
Strength of Schedule (SOS), which amounts to how well the average top-five team (according to S&P+) would fare, in terms of win percentage, against your schedule. The lower the number, the harder the schedule.
Résumé S&P+, which looks at a team’s season scoring margin and compares it to what the average top-five team’s scoring margin would likely be against the schedule at hand. If the number is positive (and for most, it won’t be), that means said team is faring better than the typical top-five team. Instead of any advanced stats or win probability info, I’m adhering strictly to actual margins.
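In code, both measures reduce to averages over per-game expectations for an average top-five team. A minimal sketch, with every per-game number invented for illustration (the real expectations would come from the S&P+ ratings):

```python
# Minimal sketch of the SOS and Résumé S&P+ ideas described above,
# using made-up per-game expectations.

def sos(top5_win_probs):
    """Expected win percentage for an average top-five team against
    this schedule. Lower = harder schedule."""
    return sum(top5_win_probs) / len(top5_win_probs)

def resume_rating(actual_margins, top5_expected_margins):
    """Team's average scoring margin minus what an average top-five team
    would be expected to average against the same schedule."""
    n = len(actual_margins)
    return sum(actual_margins) / n - sum(top5_expected_margins) / n

# Hypothetical 12-game schedule (chosen so the top-five win rate is ~82%):
top5_wp = [0.97, 0.93, 0.62, 0.88, 0.75, 0.95, 0.70, 0.85, 0.99, 0.60, 0.78, 0.82]
top5_margin = [30, 25, 4, 18, 10, 27, 7, 14, 36, 4, 11, 18]
actual_margin = [38, 30, 12, 25, 6, 35, 15, 20, 45, 1, 19, 26]

print(round(sos(top5_wp), 2))                               # 0.82
print(round(resume_rating(actual_margin, top5_margin), 2))  # 5.67 points per game above the baseline
```

A positive rating means the team is beating its schedule harder than a typical top-five team would, regardless of how flashy the opponent list looks.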
To use the current top Résumé S&P+ team as an example, let’s look at Alabama.
The Crimson Tide are currently second in overall S&P+. Their schedule strength ranks just 64th after their Week 12 win over Mercer; if the average top-five team played Alabama’s previous opponents countless times, they would win about 82 percent of those games.
Compare that to the schedule of someone like Iowa. The average top-five team would have a win percentage of only about 73 percent against the Hawkeyes’ slate. Those tiny differences can add up, obviously.
Résumé S&P+, however, shows us that when it comes to actual performance — actual points scored and allowed — the Crimson Tide have still laid waste to this schedule at a clip far greater than the average top team would have.
It also notes, by the way, that Wisconsin (plus-5.1 points per game above expectation) and UCF (plus-4.5 points per game) have done more than Miami has done with its own schedule. That the CFP committee ranks the Knights so far behind the Badgers and Hurricanes remains outrageous.
(This is a wide table, because it has to be; if it’s not showing well on your phone, consider taking a look on another device later.)
Teams won’t always be in the order we’d expect. Miami has done okay vs. the expected output of a typical top-five team, but the Hurricanes would still be projected to beat, say, a two-loss Notre Dame team that has faced a much harder schedule and currently features a ‘better’ Résumé S&P+ rating.
As of now, 12 teams have positive Résumé S&P+ ratings. That’s more than one would expect if you’re comparing performances to that of an average top-five team. But here’s where I remind you that the average top-five team at the moment is not nearly as good as it has been in previous years. It only takes perhaps one truly awesome performance to produce an average above the top-five expectation.
By the way, to those who want to moan about incentivizing running up the score in this measure (since we’re using scoring margins), suck it up.
Margin of victory is infinitely more informative than “did they win?” It just is. Besides, it's odd to suddenly care about hurt feelings when I'm pretty sure telling half of FBS they don't have a shot at the national title no matter how well they play is more hurtful to those feelings than winning 59-0 instead of 49-0.
Best versus most deserving. How you’ve played versus who you’ve played. Maybe there’s a way to tie together these worlds after all.
I’m still mad at Jason, though.
To use the current top Résumé S&P+ team as an example, let’s look at Alabama.
The Crimson Tide are currently second in overall S&P+. Their schedule strength now ranks just 50th, up from a couple of weeks ago; if the average top-five team played Alabama’s previous opponents a countless number of times, they would win about 81 percent of those games.
Compare that to the schedule of someone like Florida State. The average top-five team would have a win percentage of only about 72 percent against that slate. Those tiny differences can add up, obviously.
Résumé S&P+, however, shows us that when it comes to actual performance — actual points scored and allowed — the Crimson Tide have still laid waste to this schedule at a clip far greater than the average top team would have. It also notes, by the way, that UCF has done more against its schedule than fellow unbeaten teams Wisconsin and Miami have done with theirs.
(This is a wide table, because it has to be; if it’s not showing well on your phone, consider taking a look on another device later.)
Teams won’t always be in the order we’d expect. Miami has done okay vs. the expected output of a typical top-five team, but the Hurricanes would still be projected to beat, say, a Notre Dame team that has faced a harder schedule and currently features a ‘better’ Resume S&P+ rating.
As of now, 13 teams have positive Résumé S&P+ ratings. That’s more than one would expect if you’re comparing performances to that of an average top-five team. But here’s where I remind you that the average top-five team at the moment is not nearly as good as it has been in previous years.
If the sport’s top tier begins distancing itself from the field, a few positive averages will turn negative. Of late, the opposite has occurred.
By the way, to those who want to moan about incentivizing running up the score in this measure (since we’re using scoring margins), suck it up.
Margin of victory is infinitely more informative than “did they win?” It just is. Besides, it's odd to suddenly care about hurt feelings when I'm pretty sure telling half of FBS they don't have a shot at the national title no matter how well they play is more hurtful to those feelings than winning 59-0 instead of 49-0.
Best versus most deserving. How you’ve played versus who you’ve played. Maybe there’s a way to tie together these worlds after all.
I’m still mad at Jason, though.
0 notes
junker-town · 6 years
Text
College football strength of schedule rankings and how to have a useful conversation about them
Included at the bottom is a new SOS-style ranking system that will be updated here each week.
SB Nation college football editor Jason Kirk asked me to write about strength of schedule, and I’ll be honest: it pissed me off. Nothing personal, of course. We just all have our triggers.
I hate strength of schedule arguments more than I hate red zone fade routes, five-hour football games, and that this dumbass sport I love continues to get away with a bunch of nonsense because of a cynical, decades-old definition of “student athlete.”
I hate these arguments partly because of the contradiction. Let me explain.
In my work with advanced stats, the S&P+ ratings, and so forth, I field certain categories of insult pretty frequently. Here are two of the most common:
“Watch the games, nerd!” The insinuation: if you’d get your nose out of the spreadsheets, dork, you’d see that Team A is clearly better than Team B.
“Team A ain’t played nobody!” The insinuation: How could Team A possibly be any good when they haven’t played a team that clears someone’s arbitrary bar of goodness?
“You saw what happened in the games I watched, right?” the first one says. The second says to watch one particular game and ignore the others.
Football’s going to have a sample size issue no matter what, so we should milk whatever meaning we can out of every game we’ve got.
Arguing about strength of schedule the wrong way means limiting the sample even further and acting like we can’t get meaning out of every play of every game. We can. Honest.
Strength of schedule is at the heart of virtually every college football argument between October and January each year. Hell, it’s a point of debate for every college sport. The schedules are too varied and not comprehensive enough. That the power conferences are moving toward three-game non-conference schedules in football makes this national connectivity even worse.
At the pro level, there are strong and weak divisions, sure. But the schedules are still infinitely more balanced. In the MLB, everybody plays everybody in their league at least a few times, with some cross-league games. In the NBA and NHL, everybody plays everybody at least once. There is connectivity.
With 130 FBS teams and 12 games, that simply isn’t an option for college football. So we play who we play, and we yell about who our rivals ain’t played.
How strength of schedule determines the national champion, sort of
Arguments are one thing, but college football’s national title is more directly affected by strength of schedule than that of any other major sport. It’s baked right into the College Football Playoff selection process.
When circumstances at the margins indicate that teams are comparable, then the following criteria must be considered:
* Championships won
* Strength of schedule
* Head-to-head competition (if it occurred)
* Comparative outcomes of common opponents (without incenting margin of victory)
We believe that a committee of experts properly instructed (based on beliefs that the regular season is unique and must be preserved; and that championships won on the field and strength of schedule are important values that must be incorporated into the selection process) has very strong support throughout the college football community.
(A digression, but they should clarify that they’re looking at conference championships. A fan could read that and assume Alabama’s going to get permanent preferential treatment for its 114 claimed national titles.)
It is decreed that the committee takes strength of schedule into account, but it intentionally doesn’t say how. It frowns on advanced analytics — from the same rules document: “Nuanced mathematical formulas ignore some teams who ‘deserve’ to be selected;” God forbid — and gives no alternative guidance. So the committee ends up going with things like “wins over top-25 teams” or “assuring there’s no way in hell a team from a Group of 5 conference will ever get in.”
By now, though, some are figuring out pretty clearly how strength of schedule is taken into account.
ESPN’s stats team has created both forward- and backward-looking measures to assess the difference between the “best” teams and those typically determined most deserving of a title shot. As it turns out, the backwards view is pretty effective at mirroring committee action. The Strength of Record measure has proven quite adept at figuring out how the committee will rank teams.
Everything that goes into FPI’s SOS rankings — opponent strength, game site, distance traveled and rest — is used to capture the difficulty of a team’s schedule. Thus, amassing a number of “good wins,” no matter how the game was won, will boost a team’s Strength of Record.
Despite the committee’s mantra of selecting the “four best teams in the country,” it appears that in the first two years of playoff selection, the committee favored team accomplishment over team strength. So if you are trying to predict what the committee will do, take a look at strength of record, because seven of eight teams to make the playoff ranked in the top four of that metric before playoff selection. Then FPI can be used to predict which teams will ultimately come out on top.
The committee insists it is looking for the “best” team. It is not. Kirby Hocutt, chairman of the CFP committee as of 2016, has conflated “best” and “most deserving” on a number of occasions. An example:
Q: Are you looking for the four best teams or the four most-deserving teams? Is there a difference?
A: You have to take into account the entire season. The season doesn't start in October. Everybody has 12 regular-season opportunities, and the committee is watching. At the end of the year, we want to make sure we have the four very best teams over the course of the entire season.
They do not. And that’s fine, I guess.
Two major problems, however: a one-point win is not a 30-point win, and you don’t have to wait till someone plays a good team to start learning.
Take these two pieces as an example.
Here I am in Oct. 2013, writing about how Florida State had been dominant long before the Noles dominated Clemson.
"Are they tested?" is just a box you check. While you can certainly find examples of teams that look great against awful teams, then stumble when punched in the mouth for the first time (Louisville), you can usually glean just as much from how a team dominates bad competition as from how it plays against really good teams. Picking Team A simply because it is more tested than Team B is usually a recipe for making bad picks.
The early-season stats suggested that, despite not playing a top team yet, Florida State was pretty incredible. The Seminoles went out and left no doubt on Saturday night in Clemson.
Here I am in Oct. 2014, writing about how Florida State was probably doomed as a title contender despite some wins over good teams.
The best teams, the most likely championship teams, are the ones that handle their business early and put games out of reach before luck, chance, fumbles, and vital offensive pass interference calls can impact the outcome.
According to the F/+ rankings, the Seminoles have been just barely good enough to survive No. 9 Clemson at home (without Winston), No. 15 Louisville on the road, and No. 19 Notre Dame at home. They survived No. 44 Oklahoma State on a neutral field, and they pulled away from No. 53 NC State in the fourth quarter. They pummeled No. 76 Syracuse and eventually got around to doing the same to No. 89 Wake Forest.
They have, in other words, solidified that they should be ranked around seventh to 12th in these ratings. They were 11th heading into this week. [...] Without sustained improvement, and without the ability to play a full 60 minutes at a high level, they will in no way be a favorite to beat two top-four teams in the College Football Playoff and win the national title.
The 2013 team that hadn’t played nobody, but that was destroying its opponents, went on to win the national title.
The 2014 team that was winning, but not impressing the numbers, eked out a Playoff bid with seven one-possession wins (five against teams that went 8-5 or worse) but got embarrassed.
To its credit, the CFP committee did dock FSU a bit for its lackluster performance. The unbeaten Noles were third in the rankings behind two one-loss teams. But these two FSU teams are ultimate examples for this simple truism:
You can learn something from every game, if you try.
That’s the point of using advanced stats in the first place, be it S&P+ or any other flavor. You set the baseline depending on the opponent(s) at hand, and you compare the actual output to that adjusted expectation. It fills in what your eyes are missing. (And with 800-plus college football games in a season, your eyes are always missing something.)
Your record does matter. Even as an advanced stats loyalist, I’m not exactly going to call for a three-loss team to get a CFP spot, even if said team was unlucky in every loss and ranks first in S&P+. Wins and losses aren’t particularly predictive in and of themselves, but they still have to mean something. Even the best team shouldn’t get in, if it’s not high on the most-deserving list.
So what if we actually tried to combine the two worlds? What if we used a “best” measure to begin approximating what “most deserving” truly is?
What if we took the Strength of Record idea and added an extra level of team quality to it?
Introducing Résumé S&P+.
I’m going to introduce my takes on two pretty familiar concepts.
There are countless ways to measure one’s strength of schedule, but I’m going to choose one most directly tied to the national title race. Makes sense, since “strength of schedule” is right there in the mission statement.
Below are each FBS team’s rankings in three categories:
S&P+, an overall team efficiency rating system you can read more about here. It can be used to predict wins and losses going forward.
Strength of Schedule (SOS), which amounts to how well the average top-five team (according to S&P+) would fare, in terms of win percentage, against your schedule. The lower the number, the harder the schedule.
Résumé S&P+, which looks at a team’s season scoring margin and compares it to what the average top-five team’s scoring margin would likely be against the schedule at hand. If the number is positive (and for most, it won’t be), that means said team is faring better than the typical top-five team. Instead of any advanced stats or win probability info, I’m adhering strictly to actual margins.
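If you want to see the mechanics of those two definitions, here’s a toy sketch. To be clear, this is not the actual S&P+ math — the ratings, margins, and the normal-distribution mapping from expected margin to win probability are all made-up assumptions for illustration:

```python
from statistics import NormalDist

# Toy numbers, not real S&P+ values:
TOP_FIVE_RATING = 25.0  # assumed expected margin of an "average top-five team" vs. an average FBS team
MARGIN_STDEV = 17.0     # assumed game-to-game spread of scoring margins

def win_prob(expected_margin):
    """Map an expected scoring margin to a win probability via a normal CDF."""
    return NormalDist(0, MARGIN_STDEV).cdf(expected_margin)

def sos(opponent_ratings):
    """SOS: the average top-five team's expected win percentage vs. this slate.
    Lower = harder schedule."""
    probs = [win_prob(TOP_FIVE_RATING - r) for r in opponent_ratings]
    return sum(probs) / len(probs)

def resume_rating(opponent_ratings, actual_margins):
    """Résumé rating: actual average scoring margin minus the margin the
    average top-five team would be expected to post vs. the same schedule.
    Positive = outplaying a typical top-five team."""
    expected = [TOP_FIVE_RATING - r for r in opponent_ratings]
    actual_avg = sum(actual_margins) / len(actual_margins)
    expected_avg = sum(expected) / len(expected)
    return actual_avg - expected_avg

# A made-up six-game schedule (opponent ratings) and actual margins of victory:
opponents = [5.0, -8.0, 12.0, 0.0, -15.0, 18.0]
margins = [24, 38, 10, 31, 45, 3]
print(round(sos(opponents), 3))
print(round(resume_rating(opponents, margins), 1))
```

Note that only actual margins go into the résumé number, matching the third bullet: the model sets the expectation, and real points scored and allowed do the rest.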
To use the current top Résumé S&P+ team as an example, let’s look at Alabama.
The Crimson Tide have now moved to first in overall S&P+ again. Their schedule strength ranks just 67th; if the average top-five team played Alabama’s previous opponents a countless number of times, they would win about 83 percent of those games.
Compare that to the schedule of someone like Florida State. The average top-five team would have a win percentage of only about 75 percent against that slate. Those tiny differences can add up, obviously.
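A crude way to see how those differences add up: assume (unrealistically) that the per-game win probability is uniform across a 12-game slate and compound it. The 0.83 and 0.75 figures are the two schedule strengths above.

```python
# Odds that a typical top-five team runs the table on each schedule,
# assuming a uniform per-game win probability across 12 games:
easy_slate = 0.83 ** 12   # Alabama-like slate
hard_slate = 0.75 ** 12   # Florida State-like slate
print(f"{easy_slate:.3f}")   # ≈ 0.107
print(f"{hard_slate:.3f}")   # ≈ 0.032
```

An eight-point gap in per-game win percentage becomes more than a threefold gap in the odds of an unbeaten season.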
Résumé S&P+, however, shows us that when it comes to actual performance — actual points scored and allowed — the Crimson Tide have still laid waste to this schedule at a clip far greater than the average top team would have. That earns them a place atop this list for now. (This is a wide table, because it has to be; if it’s not showing well on your phone, consider taking a look on another device later.)
Teams won’t always be in the order we’d expect. Notre Dame has played well against a harder overall schedule than Georgia has, and that’s reflected in the SOS and Résumé S&P+ rankings. However, S&P+ would pick UGA to beat ND if they were to play again right now, as reflected in the S&P+ rankings.
As of now, 13 teams have positive Résumé S&P+ ratings. That’s more than one would expect if you’re comparing performances to that of an average top-five team. But here’s where I remind you that the average top-five team at the moment is not nearly as good as it has been in previous years.
If the sport’s top tier begins distancing itself from the field, a few positive averages will turn negative. Of course, in Week 10, the opposite happened: everybody got even more bunched together.
By the way, to those who want to moan about incentivizing running up the score in this measure (since we’re using scoring margins), suck it up.
Margin of victory is infinitely more informative than “did they win?” It just is. Besides, it's odd to suddenly care about hurt feelings when I'm pretty sure telling half of FBS they don't have a shot at the national title no matter how well they play is more hurtful to those feelings than winning 59-0 instead of 49-0.
Best versus most deserving. How you’ve played versus who you’ve played. Maybe there’s a way to tie together these worlds after all.
I’m still mad at Jason, though.