Learning From Last Year (Part II)

As promised, this is the second post in my analysis of last year’s elections. Here, I turn to models that purport not just to predict vote totals, but to predict the actual outcomes of the election. I will use both a linear probability model and the more sophisticated probit model. Before I get into the results, let me say that while the linear probability model may be the bête noire of statistics (it’s unbounded, meaning that it sometimes predicts probabilities below 0 or above 1), its predictions were as good as those of the probit, although interestingly each misclassified different candidates. For what we most want to observe (the borderline candidates), linear models really aren’t so bad. With that brief digression out of the way, I’ll get to it (as before, I’ve cut things out, so email me if you want the data and/or details).

First, I’ll show the linear probability model:

![](http://img28.imageshack.us/img28/9985/screenshot20100310at415.png)
This is the linear prediction model for ASSU Senate elections.
The linear probability model essentially makes each one-unit increase in a given variable worth its coefficient in additional percentage-point chance of election. Note the constant term, though: candidates start not from 0, but from -226.26 percent. This is almost entirely an artifact of the minimum 100 petitions needed to be on the ballot, since 100*.02084 - 2.2626 = -.1786, a much smaller deficit. According to this model, each additional petition was worth about a 2 percentage-point additional chance at election, and an SBS endorsement was worth an astonishing *50* percentage-point boost. None of the other coefficients of interest were significant. If you’re wondering about the lower sample size, I honestly am not sure what is going on, but I will tell you that when I didn’t use a GLS correction, the coefficients didn’t change much, so the results shouldn’t be too skewed (SBS and Petitions were still the only significant variables, although their effects were slightly smaller).
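The arithmetic above can be sketched directly from the reported coefficients. A caveat: the intercept and petition coefficient come from the table, but treating the SBS endorsement as a flat 50-point bump is my rounding for illustration, and the function name is my own.

```python
# Sketch of the linear probability model's prediction rule, using the
# coefficients reported above. The SBS value (~0.50) is an approximation
# of the table's estimate, used here purely for illustration.
def lpm_probability(petitions, sbs_endorsed=False):
    const = -2.2626          # intercept, in probability units (-226.26%)
    b_petitions = 0.02084    # each petition adds ~2 percentage points
    b_sbs = 0.50             # SBS endorsement adds ~50 percentage points
    p = const + b_petitions * petitions + (b_sbs if sbs_endorsed else 0.0)
    return p  # note: unbounded -- can fall below 0 or exceed 1

# A candidate at the 100-petition ballot minimum starts at roughly -17.9%:
print(round(lpm_probability(100), 4))  # -0.1786
```

This also makes the unboundedness complaint concrete: nothing in the formula stops the prediction from landing below 0 (as it does here) or above 1.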

Now, the more common model, the probit result (as I linked above, the probit model passes the linear prediction through a cumulative distribution function, producing a bounded curve that never goes below 0 and never goes above 1):

![](http://img10.imageshack.us/img10/9200/screenshot20100310at437.png)
This is the probit model of the ASSU Senate Elections.
The first interesting thing to note here is that the only factor that remains significant is Petitions. Because of the enormous standard errors on SOCC and SBS, they are not at all significant (this is likely due to the tiny size of the data set; a larger data set would be far better). As a whole, however, the model retains predictive value (a pseudo-R² of .7783 and a Prob > chi² of .0000).

The coefficients of a probit model by themselves tell us very little, since the marginal effect of a change in any one variable depends on the values of the other variables as well. With that in mind, I computed the effect of changes in certain variables for a freshman who is not endorsed by either SOCC or SBS:

![](http://img94.imageshack.us/img94/3144/screenshot20100310at448.png)
These are the marginal effects for changes in certain variables for an unendorsed freshman.
What does this say? Even though SOCC and SBS were not significant in the probit model overall, for a freshman candidate looking to win, an endorsement from SOCC was worth an additional 64.255 percentage points of winning probability, while an SBS endorsement was worth an incredible 94.27575 additional percentage points. These variables were likely insignificant overall because the endorsements did not carry the same power for non-freshmen. So, if these results tell us anything, it’s that freshmen should find themselves a group!
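The point that the same endorsement coefficient produces different marginal effects for different candidates can be sketched directly. The coefficients below are made up for illustration (the actual probit estimates live in the screenshot above, not the text): the discrete effect of an endorsement dummy is the change in Φ(Xβ) from flipping it, holding the rest of the profile fixed.

```python
# Sketch of a discrete marginal effect in a probit model: the change in
# Phi(X*beta) from flipping one endorsement dummy at a fixed covariate
# profile. All coefficient values here are hypothetical.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def endorsement_effect(base_index, b_endorsement):
    """P(win | endorsed) - P(win | not endorsed) at a fixed profile."""
    return norm_cdf(base_index + b_endorsement) - norm_cdf(base_index)

# Hypothetical profiles: a long-shot freshman vs. an already-strong candidate.
effect_longshot = endorsement_effect(-1.5, 2.0)  # big boost out in the tail
effect_frontrunner = endorsement_effect(1.5, 2.0)  # small boost near the ceiling
assert effect_longshot > effect_frontrunner  # same coefficient, different effect
```

This is exactly why a freshman-specific profile had to be plugged in above: a candidate already near-certain to win has little probability left to gain, so the endorsement's marginal effect shrinks even though its coefficient is unchanged.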

I’ll finish this series with the predictions of the model next time.
