Thanks for explaining the methodology behind your 30% guess, tschu. The problem I have with that prediction is my contention that polls are an inaccurate, sometimes biased portrayal of a team's true talent or chance to win.
Sure, human polls are open to all kinds of bias. But computer rankings have none. Each team is a data point, nothing more.
Generally, the computer models fall into one of two categories (or a combination of the two). The first type gives each team a rating (arbitrary at first) and, based on that rating, produces a predicted "spread" for each game. Then the actual results of each game are fed in, and the error (the difference between the predicted and actual spread, probably squared) is calculated. The computer then solves for the set of ratings that produces the lowest possible sum of that error across all games. Sagarin's formula is a well-guarded secret, but I think this is roughly what he does. You can add modifiers that weight blowouts differently (the difference between beating a team by 40 or by 50 is usually meaningless, but the difference between beating a team by 1 or by 10 is usually meaningful).
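The first type can be sketched in a few lines. This is just an illustration with made-up teams and margins (and a guessed-at blowout cap), not Sagarin's actual method:

```python
import numpy as np

def solve_ratings(games, teams, cap=28):
    """games: list of (team_a, team_b, margin) where margin = score_a - score_b."""
    idx = {t: i for i, t in enumerate(teams)}
    X = np.zeros((len(games), len(teams)))
    y = np.zeros(len(games))
    for g, (a, b, margin) in enumerate(games):
        X[g, idx[a]] = 1.0   # predicted spread = rating_a - rating_b
        X[g, idx[b]] = -1.0
        # weight blowouts less: winning by 50 counts the same as winning by `cap`
        y[g] = max(-cap, min(cap, margin))
    # least squares minimizes the sum of squared (predicted - actual) spreads;
    # ratings are only defined up to an additive constant, so lstsq's
    # minimum-norm solution effectively centers them around zero
    ratings, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(teams, ratings))

teams = ["A", "B", "C"]
games = [("A", "B", 10), ("B", "C", 3), ("A", "C", 14)]
r = solve_ratings(games, teams)
# r["A"] - r["C"] is then the model's predicted spread for A over C
```

Real systems layer on home-field advantage, recency weighting, and so on, but the core is just this optimization.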
The second type is based on offensive and defensive efficiency. Usually yards per play (YPP) gained and allowed is the backbone, but plenty of adjustments need to be made: for pace and the number of plays each team runs per game, and for the strength of the opponent. Gaining 7 yards per play against a horrible defense is an average performance, but 7 YPP against a top-5 defense is really good, that sort of thing. I used to follow a separate ranking site that did predictive adjusted-YPP stuff, but he isn't updating it this year.
But the Football Outsiders F+ is based on a collection of different efficiency ratings.
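Here's a toy version of that opponent adjustment. Real systems iterate it (the defensive averages themselves need the same correction), and all the numbers here are invented:

```python
def adjusted_ypp(offense_games, defense_avg_allowed, league_avg):
    """offense_games: list of (opponent, raw YPP gained that game).
    defense_avg_allowed: average YPP each defense gives up on the season."""
    adjusted = []
    for opp, ypp in offense_games:
        # a defense that allows less than league average is tough, so the same
        # raw YPP against it gets credited upward, and vice versa
        adjusted.append(ypp + (league_avg - defense_avg_allowed[opp]))
    return sum(adjusted) / len(adjusted)

league_avg = 5.5
defenses = {"BadD": 7.0, "EliteD": 4.5}   # avg YPP each defense allows
games = [("BadD", 7.0), ("EliteD", 7.0)]  # same raw 7.0 YPP both weeks
result = adjusted_ypp(games, defenses, league_avg)
# 7.0 vs BadD adjusts down to 5.5 (merely average); 7.0 vs EliteD up to 8.0
```

Same raw stat line, very different adjusted credit, which is exactly the point.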
Each approach has its pros and cons. The first method, being based on scores, has fewer data points per season (just the final score of each game) and is therefore inaccurate until several games have been played. But the score is ultimately the output you're looking for from a model, so at least here you make no assumptions in deriving a score from other data. With the second type, you have to figure out a formula that turns your efficiency data into score data, which introduces another variable. Over time I'm sure these models have looked at more and more historical data and refined how they do that, but it's still another step. The plus side of these models is that they have a ton of data: each play is a data point, rather than each game.
Betting markets, etc - yes, human bias absolutely comes into play in a big way here. I tend to believe in the wisdom of crowds and the efficient-market hypothesis, but all sorts of things distort the line: sportsbooks playing the edge themselves, sharps faking one side and then hammering the other when the line moves, the public favoring established name-brand teams, and so on. But overall it's still a good source to look at, because it's ultimately a collection of opinions where each person has put money behind their belief. Being right or wrong carries a reward or a punishment.
Edit - I need to add that this is purely about predictive power rankings. Some rankings take your accomplishments themselves into account. For example, you could play every team in the top 10, lose to each by a single point, and be 0-10. An accomplishment-based ranking would not rank that team very high at all, but a predictive model would probably have them in the top 15.