walksalone
Heisman Trophy Winner
All the games with below average rush attempts, we either lost or were very close to losing...
I note the problems with the analysis. Several times, in fact. I think this line came up a few times: "the argument that can be made that we had to pass more because we were behind and you need to preserve clock if you are to come back. The running game doesn't preserve clock, so you must pass." The problem is you're taking all of these statistics out of context. What do the numbers say when the game is within one score either way? What about interaction effects between score difference and time remaining? I bet you'd get more than just a main effect. What about the proportion of runs to passes compared to the number of plays run?
Teams throw more when they're behind. Teams run more when they're ahead. You try to account for this anecdotally, but who are you to say when time management was important or not? You also fail to account for how you determined significance.
I appreciate what you're trying to do, but you're creating a strawman argument.
You using SPSS? If so, I'd be happy to play a bit with it as well, but couldn't do it as much until January.
I mention that it's time-consuming to do so. But I've got a break coming up, so maybe I can give it a more in-depth look. Perhaps there's an interaction between the two groups and, say, the score difference in the game. If so, the main effects are obviously misleading and the stuff above means jack sh*t.
Thanks!
I am using SPSS.
Wow. That's quite the undertaking. Thanks for posting what you've done.
Really, it's just taking a look at this season. Although, if we wanted to, we could take a look at Tim Beck's time as the signal-caller here. Has he been here two or three years? If it's two, and you wanted to do last season as well, that would be wonderful.
Here's what I'm planning on doing:
ESPN has play by play information for each game. I'm separating by game situation, such as tied, up 1 score, up 2 scores, up 3+ scores, down 1 score, down 2 scores, and down 3+ scores. 1 score is anywhere from 1-8, 2 scores is anywhere from 9-16, and 3 scores is anywhere from 17 up. I'm then keeping track of:
Rush attempts
Rush yards
Yards per carry
Rush attempt to touchdown % (i.e. if we ran the ball 10 times and had two rushing TDs, this number would be 20%)
Pass attempts
Pass completions
Completion %
Pass yards
Yards per completion
Pass completion to touchdown % (same as the rushing)
Turnovers
Penalty yards
for both NU and the opponent. All opponent stats are placed with respect to where Nebraska stands in the game. So if Nebraska is down 7, all plays that happen while Nebraska is down 7 go under the down 1 score category.
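Since the bucketing rule above is mechanical, here's a quick sketch (in Python rather than SPSS; the function name is my own) of how the seven score-situation categories could be coded before tallying the stats:

```python
def margin_category(margin):
    """Bucket Nebraska's scoring margin into the seven game situations.

    margin = NU points minus opponent points at the time of the play.
    Per the scheme above: 1 score = 1-8 points, 2 scores = 9-16,
    3+ scores = 17 or more; opponent plays use NU's margin too.
    """
    if margin == 0:
        return "tied"
    side = "up" if margin > 0 else "down"
    m = abs(margin)
    if m <= 8:
        return f"{side} 1 score"
    if m <= 16:
        return f"{side} 2 scores"
    return f"{side} 3+ scores"
```

So a play run while Nebraska is down 7 lands in `margin_category(-7)`, i.e. the "down 1 score" bucket, exactly as in the example above.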
Thanks for the extensive analysis! I'm not sure what conclusions can really be drawn from the stats here, but I'm still looking through them.
When you say the results were "statistically significant", what do you mean? Is that just your opinion or is there a specific stat test you're using?
The test is an F-test, which measures effect/error. The significance test is a p-value, which tests whether that F value is significantly greater than 0. I'm using .05 as the cutoff. So if p is less than .05, then the F is significantly greater than 0 and there is an effect. There's less than a 5% chance of committing a Type I error, or false alarm: saying there's an effect when there really isn't one.
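To make the "effect/error" wording concrete, here's a minimal numpy sketch (not the SPSS output, and the numbers are made up) of a one-way F statistic, which is just mean-square between groups divided by mean-square within groups:

```python
import numpy as np

def one_way_f(groups):
    """F = MS_between (the "effect") / MS_within (the "error")."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)     # effect
    ms_within = ss_within / (n - k)       # error
    return ms_between / ms_within

# toy data: rush attempts per game in wins vs. losses (invented numbers)
wins = np.array([45.0, 50.0, 48.0, 52.0])
losses = np.array([30.0, 28.0, 35.0, 33.0])
F = one_way_f([wins, losses])
```

The p-value is then the tail probability of that F under the F distribution with (k-1, n-k) degrees of freedom; SPSS reports it alongside F in the ANOVA table.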
Don't F-tests test the likelihood of data being drawn from some distribution? Are you comparing the run/pass plays from each game against a normal distribution with the mean and variance based on the plays for the whole season?
But never mind that above post. Huskerzoo pointed out that I should have considered how far Nebraska was ahead or behind. Which is true: teams tend to run more when they're farther ahead, so the difference in the score is a confounding variable. What I'm doing now is controlling for it by using it as a second independent variable and taking a look at the interaction between the outcome of the game and how we called plays when we were in various situations.
I may have 2013's season done by later tonight.
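The interaction being described can be pictured as a difference of differences across the cell means. A small numpy sketch (again with invented numbers, not the real play-by-play data) of what a two-way layout like outcome × score situation looks like:

```python
import numpy as np

# made-up mean rush attempts per game:
# rows = game outcome (win, loss), cols = score situation (ahead, behind)
cell_means = np.array([[48.0, 30.0],   # wins:   ahead, behind
                       [38.0, 32.0]])  # losses: ahead, behind

# main effects only look at the marginal means
outcome_means = cell_means.mean(axis=1)     # wins vs. losses overall
situation_means = cell_means.mean(axis=0)   # ahead vs. behind overall

# interaction: does the ahead-vs-behind gap differ between wins and losses?
gap_in_wins = cell_means[0, 0] - cell_means[0, 1]     # 18.0
gap_in_losses = cell_means[1, 0] - cell_means[1, 1]   # 6.0
interaction = gap_in_wins - gap_in_losses             # nonzero -> interaction
```

When that difference of differences is nonzero (and the two-way ANOVA flags it as significant), the main effects on their own are misleading, which is exactly the concern raised earlier in the thread.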
Exactly, and that'll be another thing I'll look at. But for now I'm looking at scoring margin.
I'll take a look at the new results when you post them. Also, if you're looking at confounding variables, then down and distance might be just as significant as score margin.
Which is also why I'm baffled as to why there weren't more run plays called. I know the line got beat up as the year went on, but we had 3 backs that could have been rotated to try to keep them as "fresh" as possible. And conceivably, wouldn't running the ball eat a little more clock, giving the D a chance to catch a breath? (I'm also curious what the longest streak of run plays called was this season, in a situation where the game wasn't yet decided.)
Yeah, that's the "Full Beck" people think he goes to. But I'd venture that he was pretty consistent this year. Didn't seem to have a "Wisconsin before half" this year.