predicting models are pretty darn close. I know the people who do S&P+ use it to predict against the spread for gambling, and it has hit at about 53% on the season.
Only five bowl games have more likely winners.
It's pretty hard to consistently get above a 53% win rate against the spread; at standard -110 juice you need about 52.4% just to break even. Vegas is very sharp on that stuff.
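That 52.4% break-even figure falls out of the standard -110 payout structure (risk $110 to win $100). A quick sketch of the arithmetic, with the risk/win amounts as parameters:

```python
# Break-even win rate for flat bets against the spread.
# At standard -110 juice you risk $110 to win $100, so break-even
# is where expected profit p*win - (1-p)*risk equals zero.
def breakeven_win_rate(risk=110, win=100):
    return risk / (risk + win)

print(round(breakeven_win_rate(), 4))      # 0.5238 at -110
print(breakeven_win_rate(100, 100))        # 0.5 at even money
```

So a model hitting 53% against the spread is only barely clearing the vig.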
The link below contains S&P+ results (against the spread, and vs. the over/under), by week, for the 2018 season.
The S&P+ model is pretty damned advanced, and it was built by people who know an awful lot about football. And, still, its predictions are correct just over half of the time.
In its best week, the model went 34-20 (a 63% winning percentage). If you'd bet $100 on all 54 games at even money (ignoring the vig), you'd have made a profit of about 26% ($1,400).
In its worst week, the model went 27-33-1 (a 45% winning percentage). If you'd bet $100 on all 61 games, you'd have lost $600 (-10% for the week).
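The arithmetic behind those two weeks (flat $100 bets at even money, pushes returned, vig ignored) can be sketched as:

```python
# Flat-bet profit for a week of against-the-spread results,
# assuming even-money $100 bets and ignoring the bookmaker's vig.
# Pushes tie up a stake but return it, so they only affect the
# return-on-stake denominator.
def flat_bet_profit(wins, losses, pushes=0, stake=100):
    profit = (wins - losses) * stake
    total_staked = (wins + losses + pushes) * stake
    return profit, profit / total_staked

print(flat_bet_profit(34, 20))     # best week:  (1400, ~0.26)
print(flat_bet_profit(27, 33, 1))  # worst week: (-600, ~-0.10)
```

With the vig included, the best week's profit would shrink noticeably and the worst week's loss would deepen.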
I'm surprised there aren't more dramatic peaks & valleys from week to week. Regardless, my top takeaway is that trying to predict college football results is a fool's errand.
Backdoor cover with a late TD?
But I think his "recent" model predicts a closer game. ND's strength in the computer ratings is being held down partly by the closeness of the scores against Vanderbilt and Ball State.
Normally I would say there's no reason to exclude games like that. But in this case they happened before a quarterback change that made quite a difference for the offense.
Our best win, against Michigan, was with Wimbush as the starting QB too.
Syracuse or Northwestern might be considered better wins than Michigan.
Because they overvalue home/away splits and overweight the point spread.
If anything, people could (probably reasonably) argue that we would’ve beaten Michigan by more with Book and Dex.
I think the "can't have it both ways" criticism applies more to people who point to Clemson's close A&M/Syracuse results while not factoring in that Lawrence wasn't the QB for most of those games, yet still dismiss the Vandy/Ball State results as flukes because Book wasn't playing.
Sagarin's model is pretty complicated, with four different predictors. I went with the "RATING" factor because it's, in his words, a "synthesis of the three different SCORE-BASED methods".
A more accurate statement would have been, "Sagarin's model has Clemson by 4.24-12.41 points".