Thinking out loud: Why are presidential elections so close to 50–50?

Toph Tucker
8 min read · Nov 18, 2017

OK so like what do I know but I feel like U.S. presidential elections are always weirdly close to 50–50. Maybe this is partially a bias due to coming of age amidst Bush–Gore recounts but still. I have two wildly speculative things to say about possible mechanisms but first I want to look at numbers for a veneer of rigor. Look:

https://commons.wikimedia.org/wiki/File:Presidents_-_PartyVotes_(2016_election_update).png

Right?? (Also wow Ross Perot got, uh, 18.91%?) Here are the ratios.

Credit: Me in Excel in Calibri at 1am on a Saturday. (Popular vote winner first, so they’re all >1.)

Huh. To be honest I kind of expected it’d have gotten closer but there’s no trend there at all. Well. It still feels like a low ratio, right? Bear with me, I know it feels obvious, of course it’s close, but whyyy? This isn’t a straightforward law of large numbers thing; that just says that if you poll a ton of people it should converge on the “true” preference.¹ But for most choices of two candidates, one would expect a significant majority to prefer one. So it’s more like a statement about the efficiency of our primaries. How is it that we pick presidential candidates that people choose at a rate so close to a coin flip? (Even, incredibly, that last dot!) That doesn’t happen with lots of things. Like, Super Bowls aren’t that close:

Ibid.

That’s a bad comparison, cuz there is a law-of-large-numbers problem there (all the scores are too low!). NBA Finals would be better, and I’m sure they’d be much closer, maybe closer than our elections, but I can’t easily find that data right now. What about another big election? How about the world’s biggest democracy:

Ibid.

That’s not a great comparison either, as far as I understand it; India has a parliamentary system, the President is mostly ceremonial, and those are electors (India has ~5,000), not popular votes. OK so their general election should be closer.

Ibid.

Great!

Folks talk about the metaphor of a pendulum. This election swung back the other way, etc. I don’t love the connotation of regular periodicity. If it’s pendulum-like, the pivot is drifting, too, so you get a kind of hunting oscillation around some equilibrium. But I guess nobody’s really trying to imply strict periodicity and I’m just explaining the joke back to itself. I do think it’s interesting how you lose your reference frame, just following the margins, if the equilibrium is also shifting. As red and blue or left and right both shift. Can’t see the horizon, no fixed stars. Airplane pilots probably have a name for it. Ah, it’s “spatial disorientation”. RIP JFK Jr. Equivalence principle. Non-inertial moral frames.

One view, which I’m initially inclined to call the optimistic view, is that the precursory campaigns before giant democratic elections are incredibly efficient at finding two people who will basically split the electorate. There are tons of ways to do that but I imagine it’s something like a singular value decomposition, splitting the electorate along the, uh, first singular vector or whatever. You could probably study why that’s strategic or easy or efficient, as opposed to other partitions. The electorate is some undulating high-dimensional blob of beliefs and parties kinda line up along major axes. Maybe? Point being, they split the electorate. Like a well-balanced binary tree. (Funny that the U.S., anyway, has settled on a base of 2.) Which means the election is maximally informative. For a specific reductive sense of “informative”.
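If I were to fake that SVD hand-waving up in numpy, it’d be something like this — every number here is invented, and the real electorate is obviously not a Gaussian blob, but the point survives: split a blob by the sign of its projection onto the first singular vector and you get something close to 50–50.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up electorate: 10,000 voters, each a point in 6-dimensional
# "belief space" (the dimensions are whatever issues you like).
# Stretch the blob so it has a long axis and some minor ones.
voters = rng.normal(size=(10_000, 6)) @ np.diag([3.0, 1.5, 1.0, 0.7, 0.5, 0.3])

# SVD of the centered belief matrix; the first right-singular vector
# is the long axis of the blob.
centered = voters - voters.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]

# Put "candidates" at opposite ends of that axis and give each voter
# to whichever side of the hyperplane they fall on.
side = centered @ axis > 0
print(f"split: {side.mean():.1%} vs {(~side).mean():.1%}")
```

(Why near 50–50? Because the data is centered, the projections onto any axis have mean zero, so a roughly symmetric blob lands about half on each side.)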

Another view, which I’m inclined to call the pessimistic view, is that you could pick any two people at all, subject them (and the country) to a campaign, and they’d end up splitting the vote. Maybe because the people end up being empty vessels who just pick up whatever beliefs get votes — but I think the darker view is that it’s because the electorate picks up the candidates’ views. That we’ve constructed an equivalence machine that, given any two positions, will just very efficiently take positions on opposite sides. Not a new idea!!!! But… umm… how does that work…? They’re called “representatives”, not… prescriptives! Lol. What’s the opposite of representation? It’s like backflow from the map to the territory. Like when you get monopolistic pricing power, or like spoofers moving prices just by quoting in the direction they want them to move. I should read Baudrillard, ha-ha. Eyeroll.

You could tell a rational Bayesian story about electorates and candidates updating each other’s beliefs.² I’m not convinced! I mean, maybe the updating is good and the priors are just a crapshoot. I don’t know. It’s certainly a major force. Here’s a paper I could read if it weren’t 4:23 a.m. (Oh, now I let that stop me?)
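The laziest version of that story I can actually write down isn’t even properly Bayesian — it’s DeGroot-style averaging, two agents pulling each other’s positions together, all numbers invented:

```python
# Two agents repeatedly moving toward each other's position
# (DeGroot-style averaging, a crude stand-in for real Bayesian updating).
candidate, electorate = 0.9, 0.2  # made-up positions on some 0-to-1 issue
w = 0.3  # how far each side moves toward the other per round

for _ in range(20):
    candidate, electorate = (
        candidate + w * (electorate - candidate),
        electorate + w * (candidate - electorate),
    )

print(f"after 20 rounds: candidate={candidate:.3f}, electorate={electorate:.3f}")
```

They meet in the middle (0.55 here, since simultaneous averaging preserves the sum) — which doesn’t tell you whether the candidate dragged the electorate or vice versa. That’s exactly the ambiguity between the optimistic and pessimistic views above.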

Sometimes on Twitter people are like, “I’m mad that people are saying this,” and then someone replies “Nobody’s saying that.” They may both agree with P, but one of them sees people saying not-P and the other doesn’t, so now there’s a second-order disagreement about the rate of people saying not-P. (I guess this is like the “Kierkegaard’s lunatic” parable that was going around.) And if you don’t have a clear horizon, that can be disorienting. I think that sort of thing can drive the equilibrium-hunting. The ground of the debate shifts from P to rate-of-belief-in-P and so on till it finds some balanced argument. Which, under normal circumstances, may take the conversation on a maximally-informative path. Maybe this is like the law of maximum entropy production. Is that crackpot-ish? Daisyworld, then. Maybe this is all like political Daisyworld.

I guess the troubling thing to me is the argumentative theory of reasoning plus social network virality. Like, put Hugo Mercier and Dan Sperber and Zeynep Tufekci in a room together.

Argumentative theory of reasoning: conversation, paper, article.

  • “Reasoning was not designed to pursue the truth. Reasoning was designed by evolution to help us win arguments.”
  • “Science works very well as a social process, when we can come together and find flaws in each other’s reasoning. We can’t find the problems in our own reasoning very well. But, that’s what other people are for, is to criticize us. And together, we hope the truth comes out.”
  • “There are no adequate instructions for reasoning effectively about most real life problems.”

I haven’t read the book. Yes Alexi I know I should read Haidt.

Anyway, extend the filter bubble thing. People see different worlds. So the worlds they see disagree with each other’s. You and I agree, but I see a ton of people saying something abhorrent, and you don’t, because we follow different people, and then I go after that, and then you disagree with me, etc. Or people amplify the most abhorrent things in order to critique them, which reflexively creates the counterarguments, and counter-counterarguments, which manage to successively “rationalize” any arbitrary original grain, and so the new terrain is defined, and the slippage between our maps shears the territory, or, or, I don’t know. In a classical argumentative context, a killer takedown of some terrible idea didn’t massively amplify the terrible idea, it just won some local argument, it couldn’t metastasize. Maybe. Who knows, I wasn’t there. But I guess there are books and histories. I should read more. I guess Socrates amplifies lots of bad ideas just to take them down. Everything I say is bullshit.

Will Wright in 2001, such a classic interview (via Bret Victor, natch):

In Go, both players have a model of what’s happening on the board, and over time those models get closer and closer and closer together until the final score. At that point you have a total shared model of, you know, “you beat me.” (Laughter.) Up until that point, though, there’s quite a large divergence in the mental models that players have. Especially if you ask them what the score is, or “How are you doing?” They’ll frequently say, “I’m doing pretty well, here,” or “He’s whipping me.” Or that backwards thing, “Oh, he’s whipping me,” when really you’re the one winning. And it really comes down to how each person is mentally overlaying their territories onto this board. In each player’s mind, there’s this idea that “Oh, I control this and they control that, and we’re fighting over this.” They each have a map in their head of what’s going on, and those maps are in disagreement. And it’s those areas of maximum disagreement where the battles are all fought. You play a piece there, and I think “Oh, that’s in my territory, I’m going to attack it cause you’re in my territory.” Whereas you’re thinking, “Oh, that’s my territory, you’re invading me.” And finally, the battle resolves that in our heads, and then it’s pretty clear that, “Okay, that’s your territory and that’s mine.” So the game is in fact this process of us bringing our different mental models into agreement. Through battle.

If “common sense” is literally the sensory stimulus that we share in common, then of course putting dynamic screens between everyone and their world will drive common sense to some vanishing point.

(No, no Toph, come on. It just reorients it. You can share much more in common with other people, it just breaks contiguity, some smoothness of the mapping from common sense to space.)

¹ Although it’s kind of hard to pin down what an election is supposed to be an estimator of. Is it even supposed to be an estimator at all, or is it more like a mechanism to construct a framework of legitimacy?

² In the early U.S. presidential elections, you didn’t know who’d won for like a week or two, right? (And voting was spaced out?) And then in 2012, everyone knew the results a couple weeks before the election, as if the latency had gone negative! The thiotimoline election. And then in 2016, everyone knew a couple weeks before, but it was wrong. If the rationalist Bayesian story isn’t true (and uh turnout is low), one can imagine how election forecasting is self-negating… like, gee, I don’t want to be predictable. All those time travel paradoxes. Ha no. But I can imagine forecasting getting worse. Or I guess you have some Grossman-Stiglitz paradox. Shut up Toph, there’s actual poli-sci literature about all of this, just go to sleep.

If I were to write this article for real it’d be a good explorable explanation. I think toy models that reproduce these phenomena are very doable. Except that “explanation” sounds too apodictic.
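E.g., the hoariest toy model of the 50–50 thing is Hotelling/median-voter convergence, which you can fake up in a few lines (all numbers invented, and real politics is not one-dimensional):

```python
import random

random.seed(1)

# Toy Hotelling/median-voter model: voters on a 1-D spectrum, and two
# candidates who keep any small step that wins them more votes.
voters = [random.gauss(0, 1) for _ in range(2_001)]
a, b = -2.0, 1.03  # arbitrary starting positions

def share_a(a, b):
    """Fraction of voters strictly closer to candidate a."""
    return sum(abs(v - a) < abs(v - b) for v in voters) / len(voters)

for _ in range(120):
    for step in (0.05, -0.05):
        if share_a(a + step, b) > share_a(a, b):
            a += step
        if share_a(a, b + step) < share_a(a, b):  # b wants to shrink a's share
            b += step

print(f"positions {a:.2f} and {b:.2f}; split {share_a(a, b):.1%}")
```

Both candidates crawl toward the median voter and the split lands near 50–50 — not because anyone polled anyone, just pure hill-climbing. An explorable version would let you drag the voter distribution around and watch the candidates chase it.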

If I were to try to argue for some thesis here, I guess it might be:

Argumentative reasoning fed through social mass media produces a false political horizon and graveyard spiral.

In short,
