The Lichtman Perception Paradox

The Allan Lichtman hit piece you've been waiting for.


Since the 2024 election, Allan Lichtman has been under fire for his incorrect prediction of a Harris victory. I’d like to take a look at his model, in-depth, and show what I think it gets right, and why I think this is not the first time Lichtman and his keys got it wrong.

I am going to include a lot of quotes from his book, the 2024 edition of Predicting the Next President: The Keys to the White House, as many Lichtman fans seem blissfully unaware of what he has actually said and, in particular, of the contradictions that have accumulated over the years. If you have faith in Allan Lichtman, I think you should read this article by The Post Rider, as it’s a much better dissection of Lichtman’s errors than I could ever provide.

However, my primary focus will not be as much on whether or not his 13 Keys are a valid predictive model. Rather, I want to focus on the underlying assumptions and axioms that his model is based on, and see if those are reasonable.

As a mathematician with a strong interest in American presidential elections, I was very intrigued by Lichtman’s model. And I think that I can provide an alternate perspective on his keys: one that is more charitable than most critics, but that he might not like very much.

The Lichtman Axioms

I’d like to start with the basics of his key model and the assumptions he makes. While he does not articulate them in quite this way, I am going to give my own steelman argument for his model, as best I can, as a mathematician who has studied both his work and presidential elections extensively.

There is another axiom that I will add later, but it deserves a full introduction. First, I would like to support these axioms with direct quotes from his book.

“In addition, the keys do not presume that voters are driven by economic concerns alone. Voters are less narrow-minded and more sophisticated than that; they decide presidential elections on a wide-ranging assessment of the performance of incumbent parties, all of which are reflected in one or more keys.” (Predicting the Next President, 2024 Edition, p. X)

And,

“The lesson? Despite the hundreds of millions of dollars and months of media attention lavished on them, general-election campaigns don’t count. The critics have it backwards when they complain that candidates and their handlers deceive the public with sound bites, negative ads, dirty tricks, and stage-managed events. Rather, the candidates deceive themselves into thinking that such machinations can get them elected. Robert Strauss, Jimmy Carter’s campaign manager in 1980, understood—in retrospect at least—the futility of battling adverse conditions with the polls’ usual weaponry: “With all the politics in the world … I think I could have stayed in bed the last ninety days and would have got exactly the same number of votes he had, give or take two. … The real world is all around us.” What losing candidate has ever run a brilliant campaign? Such judgments always follow perceptions of how a candidate is doing or the election result itself.” (Predicting the Next President, 2024 Edition, p. 5)

The Electoral College is a whole thing, and it complicates the relationship between the popular vote and the outcome of the presidential election. To make sure we’re on the same page, I’d like to briefly explain how American presidential elections work.

Notably, the candidate who gets the most votes nationwide (wins the popular vote) does not necessarily win the presidency. This has happened five times in American history, most recently in 2016. However, the times that we care about are the ones that fall within the span of Lichtman’s record, from 1860 onward.

Now, I first want to lay out Allan Lichtman’s record: what he has actually predicted for the elections from 1984 to 2024, as well as the retrospective fits he claims for every election back to 1860. In particular, since a prediction is binary (one candidate wins or the other), the distinction between the candidate who wins the popular vote and the candidate who wins the electoral college is critical.

To be quite fair, it’s much easier to list the elections which are suspect than to list all of those which match his predictions. In particular, for every election from 1860 to 1980, excluding 1876 and 1888, his keys match the actual winner of both the popular vote and the electoral college. From 1984 to 2020, his keys match the actual winner of both the popular vote and the electoral college in every election except 2000 and 2016. In 2024, however, his keys predicted the winner of neither the popular vote nor the electoral college.

So, let us discuss the four suspect elections: 1876, 1888, 2000, and 2016, because who he predicted in each will give us insight into what his model is actually capturing.

One of these is not like the others. Can you spot which one?

Now, Allan Lichtman has made a video on this, clarifying his record. He basically claims

Now, while there are many problems with these claims, I would like to turn to his book to see what he actually says about these elections.

“Retrospectively, the keys account for the results of every presidential election from 1860 through 1980, much longer than any other prediction system. Prospectively, the keys predicted well ahead of time the popular-vote winners of every presidential election from 1984 through 2008.” (Predicting the Next President, 2024 Edition, p. X)

As you can see, Lichtman does not carefully update his book, because 2012 is missing from this list. But, he gets more explicit:

“When five or fewer keys are false, the incumbent party wins the popular vote; when six or more are false, the challenging party prevails. The keys to the White House diagnose the national political environment. They correlate with the popular balloting, not with the votes of individual states. Only three times since 1860, however, has the electoral college not ratified the popular vote: the “stolen” election of 1876, when Democrat Samuel J. Tilden outpolled Republican Rutherford B. Hayes 51 to 48 percent but lost a disputed contest for the electoral vote; the election of 1888, when electoral college votes overrode President Grover Cleveland’s narrow popular-vote margin over Benjamin Harrison; and the 2000 election described above.” (Predicting the Next President, 2024 Edition, p. 2)

Well, four times, if you count 2016. Side note: It’s kind of nice that Lichtman doesn’t update his book very carefully, because it gives this sense that we’re reading words written across decades. It’s not great for him because you get some glaring contradictions and errors where it’s clear as day he changed his mind later, but it’s kind of charming in a way.

So, case closed, right? Lichtman is clearly predicting the popular vote, and not the electoral college. Thus, his model is not invalidated by 2000. Even if Gore lost the electoral college, which is highly debatable, he still won the popular vote comfortably. So, the unpredictable chaos in Florida does not affect Lichtman’s model. Supposedly.

What about 2016, then? Lichtman clearly predicted Trump, who lost the popular vote. How does Lichtman explain this? Well, we will get to that later. But, I’d like to go back to the axioms.

The Missing Axiom

I am going to add our final axiom now, and also be more specific about the previous ones.

I want to take a moment to briefly talk about why I believe at least most of these axioms are quite reasonable.

Economic Performance and Elections

There are indeed multiple elections where the ruling party changed while the economy was doing well, or vice versa.

One could also include 2000, where the economy was doing well. However, as we have said, Lichtman is only predicting the popular vote, which Gore won (though very narrowly). Still, the popular vote margin swung 8 points against the incumbent party from 1996 to 2000, which is quite substantial.

I’d also like to highlight 1940, when FDR won a third term despite the economy still struggling in the aftermath of the Great Depression. In all of these cases, there were other important factors at play, such as war, social unrest, and the desire for change.

Lichtman is quite adamant on this point: trying to predict the president based purely on economic indicators is not sufficient. Not that the economy is unimportant, as economic conditions are indeed part of his keys. But purely economic models often fail spectacularly on some of the most obvious elections.

Now, this is where Lichtman and I disagree, but only slightly. Do I believe that campaigning can change tens of millions of votes, such that George McGovern could have won in 1972, overcoming the 23-point margin by which Nixon beat him, if only he had campaigned better? No, I do not.

What of 2020 (approximately 4.5%)? Biden won by over 7 million votes. Could Trump have campaigned better and won those votes? I think that is honestly ludicrous.

How about an election like 2004 (approximately 3%)? Bush won by 3 million votes. Do I believe that a better campaign by Kerry could have won him 3 million votes so that he would have won the popular vote? Frankly, I do not.

What about 1976 (approximately 2%)? Carter won by a bit over 1.6 million votes. Could Ford have tried a little harder and won the popular vote? Again, I do not believe so. 2016 had a similar margin of 2.1% and 2.8 million votes, and I do not believe Trump could have realistically won the popular vote with a better campaign. The same goes for 2024, with its margin of 1.6% and 2 million votes: I don’t think any Democrat could have realistically won the popular vote that year, no matter how well they campaigned.

But what about 2000? Gore won by only half a percentage point, or about 540,000 votes. Could a better campaign have won him those votes? I think it’s unlikely, but it’s certainly within the realm of possibility. Around 500,000 votes is, I think, the upper limit of what campaigning could realistically change in a close election. More than that, and I think it’s implausible.

If we instead focus on the electoral college, then you might be surprised how few votes you need to change the outcome. In 2000, 538 more votes for Gore in Florida would have flipped the outcome. I think that is absolutely within the realm of possibility for campaigning to change. A few targeted ads, more door-to-door canvassing, and a better ground game could easily have swung Florida for Gore.

But even some of those elections which were not particularly close in the popular vote could have been swung by campaigning in specific states.

Here’s a little list for you! These are the number of votes one would need to flip from the winner to the loser in order to change the outcome of the electoral college. I’m going to give the minimum number of votes needed, which may mean tying the electoral college rather than winning outright.

If you would like to explore more of these flip scenarios, you can check out my website Margin Matters, which allows you to explore presidential elections in depth and see how many votes you would need to flip to change the outcome.
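For the curious, here is a minimal sketch of how such a flip count can be computed. This is not the Margin Matters implementation, and the state margins below are made up; the point is only that finding the minimum number of voters to flip is a small knapsack-style optimization over the states the winner carried.

```python
# A minimal sketch (hypothetical data, not the Margin Matters code) of
# computing the fewest voter flips needed to deny the winner an
# Electoral College majority.

def min_flips_to_deny(winner_states, ev_to_strip):
    """winner_states: (margin_in_votes, electoral_votes) for each state the
    national winner carried. ev_to_strip: electoral votes that must be taken
    away (e.g. winner_EV - 269 to force at least a 269-269 tie)."""
    INF = float("inf")
    # best[k] = fewest voter flips that strip at least k electoral votes
    best = [0] + [INF] * ev_to_strip
    for margin, ev in winner_states:
        flips = margin // 2 + 1  # flipping this many voters flips the state
        for k in range(ev_to_strip, 0, -1):  # 0/1 knapsack: iterate downward
            prev = max(k - ev, 0)
            if best[prev] + flips < best[k]:
                best[k] = best[prev] + flips
    return best[ev_to_strip]

# Toy example: a winner with 290 electoral votes must lose 21 to be tied.
toy_states = [(10_000, 10), (40_000, 16), (80_000, 20), (500_000, 29)]
print(min_flips_to_deny(toy_states, 290 - 269))  # -> 25002
```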

Apart from 2000 and 1960, none of these elections is usually thought of as particularly close. Yet we see that far fewer than 100,000 voters would need to have their minds changed to completely upend the electoral college outcome.

In essence, the electoral college is incredibly chaotic. This, in my opinion, is why Lichtman was absolutely right to focus on predicting the popular vote rather than the electoral college. Trying to predict individual states is simply too difficult.

How many popular vote victories in the last 50 years were not clear in retrospect? I believe for most presidential elections, you can generally look back and identify clear factors which led to the outcome. I will go more in depth about some of these elections later, in particular the more contentious ones.

However, there are other elections where the popular vote was absolutely unclear. While almost all presidential elections tend to have popular vote margins of at least half a percent (which is still quite close), there are two notable exceptions: 1880 and 1960.

In 1880, Garfield won the popular vote by 0.10%, or about 9,457 votes. In 1960, Kennedy won the popular vote by 0.16%, or about 112,827 votes. I do not believe that one can reasonably claim that the winner of these elections really has a particularly strong claim to being a clear choice by the electorate. In fact, we cannot even prove that JFK really won the popular vote, given the way votes were counted in Alabama. I think campaigning could have absolutely changed who won the popular vote in these elections.

However, these elections are outliers. To be as charitable as possible to Lichtman, we will suppose that the popular vote is indeed predictable based on the quality of governance of the incumbent party, and that campaigning has “little” effect on the popular vote.

A Steelman for the Keys

I will now attempt to steelman Lichtman’s keys, based on what I have read in his book and seen in his public statements. We will take the axioms we have established as assumptions, and I will try to give a case for the keys.

To be perfectly fair, his track record is not bad. While I am not quite convinced it is because the keys are actually a particularly robust model, I do absolutely think Lichtman is picking up on something real, subtle, but substantial about presidential elections.

Let us suppose we are coming up on a presidential election, and let’s assume we know that the incumbent party is going to lose the popular vote. What might we expect is true about the last four years of governance?

We might expect that:

  • the economy performed poorly, or was widely perceived to have;
  • there was significant social unrest;
  • the administration was marred by scandal;
  • there were foreign policy or military failures;
  • the incumbent party was divided, facing a serious primary challenge or a strong third-party run.

These are all concepts captured by one or more of Lichtman’s keys. And, historically, when the incumbent party has lost the popular vote, we consistently see many of these things being true. The keys are an attempt to formalize this checklist into a predictive model.

Further, you might be able to make an argument that if a sufficient number of these things are true, then the incumbent party is likely to lose the popular vote. And, based on 160 years of presidential elections, indeed, if a large enough number of these things are true, then the incumbent party has always lost the popular vote.

And, this is intuitively at least mildly reasonable. The checklist I provided touches on the very things many people use to evaluate governance in the first place. If enough of these things are going wrong, then it is quite reasonable to conclude that the incumbent party is likely to lose the popular vote.

There is a concept called “the wisdom of the crowd,” where the aggregate or average opinion of a large group of people can be surprisingly accurate, even if the individual opinions are not. I think the idea of the keys captures something similar: at least, the idea that governance indicates the likely outcome of the popular vote. If the crowd feels that governance has been poor, then it is quite likely that the aggregate vote will go against the incumbent party.
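To make the structure of this argument concrete, here is a minimal sketch of the decision rule as Lichtman states it (six or more false keys and the incumbent party loses the popular vote). The key names are my own paraphrases and the truth values are invented for illustration; this is not Lichtman’s actual tooling.

```python
# A minimal sketch of the six-false-keys decision rule. Key names are
# paraphrased and the example values are invented for illustration.
from typing import Mapping

def keys_forecast(keys: Mapping[str, bool], threshold: int = 6) -> str:
    """Each key is True if the statement holds for the incumbent party.
    Six or more False keys predict an incumbent-party loss."""
    false_keys = sum(1 for holds in keys.values() if not holds)
    if false_keys >= threshold:
        return "challenging party wins the popular vote"
    return "incumbent party wins the popular vote"

example = {  # hypothetical election, not a real forecast
    "party mandate": False, "no serious primary contest": True,
    "incumbent is running": False, "no strong third party": True,
    "strong short-term economy": False, "strong long-term economy": True,
    "major policy change": True, "no social unrest": True,
    "no major scandal": True, "no foreign/military failure": False,
    "major foreign/military success": False, "charismatic incumbent": False,
    "uncharismatic challenger": True,
}
print(keys_forecast(example))  # six False keys -> challenger predicted
```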

Retrospective Clarity

One of the most interesting aspects of presidential elections is that, in retrospect, they almost always seem quite clear. There are some contentious exceptions that I will actually argue in favor of, but for the most part, presidential elections seem to have clear reasons for why the incumbent party won or lost the popular vote.

It’s easier to list the elections where the outcome is arguably more unclear in retrospect. I will identify

However, 2000 does, I think, deserve special attention. While Gore did unquestionably win the popular vote, the election showed a massive 8-point swing against the incumbent party from 1996 to 2000 (from Clinton’s D+8.5% margin to Gore’s D+0.5% margin). This is quite substantial, and it indicates that while the incumbent party did win the popular vote, there were significant issues at play. A number of states swung particularly hard against Gore, including Wyoming, West Virginia, Arkansas, Montana, Idaho, the Dakotas, and so on. Some of these states swung by over 20, almost 30, points against Gore compared to Clinton. We also saw this, slightly less dramatically, in 2016. So, I think there is certainly more to the story than simply “Gore won the popular vote.” Both 2000 and 2016 were very “red” elections in many ways, and in that sense it’s also fitting that this is reflected in who won the electoral college.

I’d like to also discuss two more contentious elections, for which I agree with Lichtman’s assessment of the popular vote winner.

In 1968, Nixon won the popular vote by just 0.7%, but the country was deeply divided over the Vietnam War and civil rights issues. The presence of George Wallace as a third-party candidate further complicated the electoral landscape, but data has shown that Wallace voters would likely have broken mostly for Nixon. Thus, I do believe that the real winner of the popular vote was indeed Nixon; I strongly believe Nixon was the Condorcet winner (the candidate who would have beaten each rival head-to-head) in 1968.
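To illustrate what that Condorcet claim means, here is a toy head-to-head tally. The three-way vote shares are the actual 1968 national percentages, but the 65/35 split of Wallace voters is an assumption for illustration, not a measured figure.

```python
# Toy head-to-head check of the 1968 Condorcet claim. The three-way shares
# are the actual national percentages; the Wallace-voter split is assumed.
nixon, humphrey, wallace = 43.4, 42.7, 13.5  # 1968 popular-vote shares (%)
wallace_to_nixon = 0.65                      # assumed second-choice split

nixon_h2h = nixon + wallace * wallace_to_nixon
humphrey_h2h = humphrey + wallace * (1 - wallace_to_nixon)
print(nixon_h2h, humphrey_h2h)  # ~52.2 vs ~47.4 under this assumption
```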

In 1976, Carter won the popular vote by just 2.1%, but I think this is also fitting given the circumstances. From 1960 to 1976, we saw consistent 20-point swings in the popular vote margin:

  • 1960: Kennedy (D) by about 0.2%
  • 1964: Johnson (D) by about 22.6%
  • 1968: Nixon (R) by about 0.7%
  • 1972: Nixon (R) by about 23.2%
  • 1976: Carter (D) by about 2.1%

Quite the rollercoaster. Thus, given Watergate, the relative strength of Ford’s administration (all things considered), and the desire for an honest outsider, I think Carter’s victory, and its narrow margin, is extremely reasonable in retrospect.

In particular, I think 1976 is a very key-like election, and shows some strengths of the ideas underlying Lichtman’s model. It wasn’t purely about the economy, as the economy was recovering from the 1973-75 recession. There was something else that the electorate wanted that Carter seemed to embody.

However, at the time, many of these elections were not clear at all. While after the fact it’s easy to identify reasons for why the incumbent party won or lost the popular vote, being able to identify those factors before the polls close is a much more difficult task.

In that respect, I think Lichtman’s keys are an attempt to formalize this retrospective clarity into a predictive model. His system, in my opinion, is attempting to streamline and simplify the process of finding these deciding factors by identifying them with “archetypes” of governance failures. Foreign policy failures, economic downturns, social unrest, scandals, and so on. So, to be perfectly fair to Lichtman, I think this is a relatively novel and interesting approach to predicting presidential elections.

The Role of Perception

There is, however, one aspect that I have not quite discussed yet, which I think is quite important. This is the role of perception in evaluating governance.

Despite his claims that the keys are based on objective facts, he did, in fact, yield to the popular perception of the economy in 1992. Although the economy had emerged from the recession by then, a large proportion of the electorate still believed they were in a recession, and thus he counted the economy key against the incumbent party.

“even though economic-growth numbers appeared to indicate the end of the recession that had begun in 1991, the National Bureau of Economic Research had not yet declared the recession over. More important, surveys consistently indicated that an extraordinary 75 to 80 percent of the public believed the economy was still in recession.” (Predicting the Next President, 2024 Edition, p. 15)

This leads to a natural question: are the keys really about objective governance failures, or are they about perceived governance failures?

Consider the nature of our axioms thus far. The basic idea is that governance decides elections, via the evaluation by the electorate of the incumbent party’s governance. However, is it not more reasonable to conclude that the electorate votes based on their perception of the incumbent party’s governance?

Suppose that by all objective measures, governance by the incumbent party has been good, if not arguably the best that could be hoped for under the circumstances. If the electorate perceives governance to have been poor, then will the incumbent party not lose the popular vote? If the average voter is truly unsatisfied, then no amount of objective governance metrics will change their mind.

If this sounds familiar, that’s because it is what I believe happened in 2024. By the measures that Allan Lichtman’s keys are based on, governance by the Biden administration was arguably as good as it could have been post-COVID. But it’s hard to deny that a significant portion of the electorate perceived governance to have been poor. Inflation was high, polls showed a belief that immigration and crime were at all-time highs (despite data showing otherwise), and so on. And thus, the Democratic Party lost the popular vote.

Allan Lichtman blamed Elon Musk and Twitter for this perception, which many people ridiculed. However, on reflection, I think there is actually something to this (which, ironically, bodes poorly for the keys).

I would like to propose an alternate explanation for why the keys seemed to work before, and why they appear to have failed just now.

Allan Lichtman attempted to predict the popular vote based on objective governance metrics. This worked reasonably well for a long time, because for most of American history, the electorate’s perception of governance was relatively well-aligned with objective governance metrics. However, in recent years, the rise of social media and misinformation has led to a significant divergence between perception and reality. The two parties are living in different social media bubbles, leading to vastly different perceptions of the governance of the incumbent party.

But it is not just social media. For example, while the economy as a whole may have been doing well in 2024, it was not exactly hunky-dory for most of those in the middle and lower classes. Economic inequality has weakened the connection between objective economic metrics and the lived experience of many voters. Therefore, the economy keys themselves are weakened.

I claim, therefore, that Allan Lichtman was never really predicting the popular vote based on objective governance metrics. Rather, he was predicting the popular vote based on the electorate’s perception of governance, which for most of American history was relatively well-aligned with objective governance metrics. As the dissonance between perception and reality grows, the keys will likely become less and less predictive of the popular vote.

The Steelman Argument

In summary, my best argument for the concept of the keys is as follows:

Now, I cannot possibly justify that Lichtman’s particular 13 keys are the correct set of governance failures to use. Nor can I justify that a hard threshold of six keys is a threshold that means anything. But, I do think that the general concept of using perceived governance failures to predict the popular vote is at least somewhat reasonable.

However, I do have some serious issues with Lichtman’s particular implementation of this idea.

Issues with the Methodology

We have discussed the general concept and the axioms underlying Lichtman’s keys, and I think there is certainly something to them. Now I want to lay out my more serious issues with Lichtman’s particular implementation of this idea, and with his methodology.

Now, I could be really mean and hammer Lichtman for “overfitting” his model (which, to be fair, I think he absolutely has done). But nobody wants to wait 12-15 years to validate their model with just three more data points. People want their prediction now, so tuning the model as you go is at least understandable, if very unscientific. Instead, I’m going to focus on more fundamental issues with his methodology. This, by the way, is one reason I don’t think you can build a genuinely scientific predictive model for presidential elections: they’re so infrequent!

Unweighted Variables

This is where I think the flimsy nature of Lichtman’s keys really shows through. The first thing that stuck out to me when I started reading his book was that he uses this six-key threshold, but the keys are all unweighted. He claims this makes the system more robust than other models, which try to tune weights for different variables.

“Although there is a rough correlation between the number of keys turned against the party in power and its percentage of the popular vote, the final verdict depends only on the simple, unweighted total of negative keys (the use of weighted keys does not improve the ability of the system to distinguish between incumbent and challenging-party victories)…This approach avoids the modelers’ questionable assumption that each variable has the same influence in every election” (Predicting the Next President, 2024 Edition, p. 13)

Now, on one hand, like I said, one of the biggest difficulties in predicting presidential elections is that they only occur every four years and depend on a huge number of variables. Trying to tune weights for different variables is therefore likely to add even more overfitting than you already have, and thus hurt predictive performance. So, in that respect, unweighted variables are a reasonable choice. The issue is that he then also adopts a hard threshold of six keys.

As a mathematician, this broke my brain. You cannot just apply a hard threshold to unweighted variables and expect it to work; that is mathematically irresponsible. He himself acknowledges that different variables have different influence in different elections, so why would you expect a hard threshold on unweighted variables to work?

Take an illustrative example for why this is completely indefensible. I’d like to present Taylor’s keys to whether or not you will enjoy a restaurant.

  1. The restaurant has good reviews online.
  2. The restaurant has clean restrooms.
  3. The food is not poisonous.
  4. The restaurant is not too expensive.
  5. The restaurant validates parking.

Now, as we discussed before, if you did not enjoy a restaurant, you might be able to predict that at least some of these were false. But I’m going to say that you will enjoy it if and only if at least three of these are true.

But this is clearly ridiculous. A restaurant could have 5 star reviews, the most pristine restrooms, be dirt cheap, and validate parking, but if the food is poisonous, you will not enjoy it.

This is indeed a hyperbolic example, but it illustrates the point quite well. One of these criteria is way more important than the others. So, why would you expect a hard threshold on unweighted variables to work in predicting presidential elections?
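Here is the same point in code, using the restaurant keys above with made-up values: an unweighted three-of-five threshold cheerfully predicts that you will enjoy the meal even when the one criterion that actually matters fails.

```python
# Toy illustration of the restaurant keys: an unweighted threshold
# ignores the fact that one criterion dominates all the others.

def unweighted_verdict(criteria, threshold=3):
    """Predict enjoyment iff at least `threshold` criteria are true."""
    return sum(criteria.values()) >= threshold

restaurant = {
    "good reviews online": True,
    "clean restrooms": True,
    "food is not poisonous": False,  # the only criterion that really matters
    "not too expensive": True,
    "validates parking": True,
}
print(unweighted_verdict(restaurant))  # True -> the rule says you'll enjoy it
```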

Lichtman does discuss which keys are more or less predictive over time, but he never actually adjusts his model to account for this.

But, all things considered, six false keys does seem to be a reasonably sensible threshold. My primary objection is taking that threshold and claiming that it deterministically decides the popular vote, rather than merely being correlated with it.

I think we could fix a lot of the issues I have with the keys if it was instead just framed as a predictor of the popular vote, rather than this pseudoscientific claim that the popular vote flips if and only if six keys are false.

I think it’s entirely possible there is some abstract function based on tens, hundreds, or thousands of variables which could predict the popular vote with high accuracy if the right weights are used. I don’t believe we can ever know what that function is; given how infrequent presidential elections are, it would be impossible to validate properly. But, as a mathematician, I like to think that such a function exists. I certainly do not, however, believe that Allan Lichtman’s particular 13 unweighted keys can do this. I can say that they’re probably a pretty good guess for this particular era of American history, but I see no theoretical justification, scientific rigor, or statistical validation for why they should work.

Subjectivity of the Keys

Lichtman also claims that his keys are simple yes/no questions. However, in practice, this is not really true. Many of his keys are quite subjective, and require a lot of interpretation to determine whether they are true or false. Nate Silver has made a compelling case for this.

For example, how much social unrest is enough to count the No Social Unrest key as false? Who decides when a candidate is charismatic enough to turn a key? How “major” does a scandal have to be to count against the incumbent party? How “serious” does a primary challenge have to be to count against the incumbent party?

One very strange example was that Lichtman gave Obama the Charisma key in 2008 but not in 2012. Similarly, William Jennings Bryan lost his Charisma key in 1908, despite being the same candidate as in 1896 and 1900. While I personally do not have a strict issue with this, as charisma is indeed in the eyes of the voters, it does highlight the subjective nature of some of these keys. The idea that a candidate can be objectively charismatic or not is quite dubious. Lichtman has also refused to give Donald Trump the Charisma key in any of his elections, despite many people arguing that Trump’s charisma is a key factor in his electoral success.

While I personally do not have an issue with Lichtman’s handling of the charisma key, I think that it’s really worth scrutinizing all aspects of his methodology if we want to take the keys seriously as a “scientific” model.

In particular, if we are to believe that the keys act as a foolproof function from \(2^{13}\to \{\text{Incumbent Party Wins Popular Vote}, \text{Challenger Party Wins Popular Vote}\}\), then we need to be able to determine the truth value of each key in a consistent and objective manner. Otherwise, the model is not really well-defined.

The 2016 Switch

When I first became a “Lichtman scholar”, I was very impressed by his track record. He had predicted every outcome from 1984 to 2020 correctly, except for 2000, which he could argue was stolen. I was particularly impressed by his prediction of 2016, as he was one of the few people who predicted Trump would win.

Once I took a closer look and began to read his (poorly updated) book, I began to see some cracks in the foundation. In particular, I was very confused by how he explained 2016. If you want an incredible in-depth look at this concept, I could never write something better than this amazing article by The Post Rider. Lars Emerson and Michael Lovito make a better case than I ever could.

I will give a shortened version on certain aspects of their argument, based on my reading of his book.

The prologue of his book reads like someone trying very hard to explain why his prediction of Gore in 2000 was actually correct. It has some dubious claims about “the crucial swing state of Ohio” (Predicting the Next President, 2024 Edition, p. vii) using punch-card ballots (which it hasn’t done since 2006; I had to double-check that this was the 2024 edition, and it was). But ultimately it boils down to: “Gore was the actual winner, the election was stolen from him, but even if he WASN’T, my model still predicts just the popular vote, which he won, so it doesn’t matter anyway.”

However, as shown in The Post Rider article, he had been clear that the keys predict the popular vote since long before 2000, so my impression that this framing was invented after the fact was wrong. Lichtman was very clear and consistent on this right up until 2016.

And, as I have said before: fair enough. I really do agree that if one wanted to predict a winner of Gore or Bush, then by almost all metrics besides who was inaugurated, Gore was the clear winner. The problem is that all of the reasons Lichtman gives for why Gore was the correct prediction cast direct doubt on his prediction of Trump in 2016.

For, indeed, Trump did not win the popular vote. Clinton did, by almost 3 million votes. And it wasn’t just a half-point margin like in 2000. Clinton won by 2.1 percentage points, which is incredibly comfortable. There was no reasonable way for Trump to have won the popular vote in 2016, no matter how you slice it.

Every reason that Lichtman gave for why Gore was the correct prediction in 2000 applies equally to Clinton in 2016. Trump won the presidency on a combined margin of fewer than 80 thousand votes (77,744) across three swing states (Pennsylvania, Michigan, and Wisconsin). And if “no system could have predicted the 537 vote margin for George W. Bush in Florida that decided the 2000 election” (Predicting the Next President, 2024 Edition, p. X), how can you claim that your system predicted the roughly 80,000 votes across three states that decided the 2016 election? Is 77,744 votes just a more reasonable number to predict than 537? Is three states more predictable than one?

No, in fact, Lichtman gives an alternate explanation for why his model predicted Trump and not Clinton.

“In 2016, I made the first modification of the keys system since its inception in 1981. I did not change the keys themselves or the decision rule that any six or more negative keys predict the White House party’s defeat. Instead, in my final forecast for 2016, I predicted the winner of the presidency, e.g., the Electoral College, rather than the popular vote winner.” (Predicting the Next President, 2024 Edition, p. 191)

It goes on to say

“The switch from predicting the popular vote winner to the Electoral College winner resulted from a significant divergence in recent years between the two vote tallies. Virtually any Democratic candidate can now count on an extra 5 to 6 million net popular votes from the states of California and New York alone. These votes count for nothing in the Electoral College. There are no comparable red states where Republicans accumulate nearly as many wasted votes. Not a single state in 2016 gave Trump even a margin of one million votes, and Trump lost the popular vote to Clinton by nearly 3 million votes. When I first used the keys for prediction in 1982, the popular and Electoral College votes had not diverged since 1888—the only other time since 1860 when the two votes diverged was in the disputed election of 1876… In any close election, Democrats will win the popular vote but not necessarily the Electoral College. That’s why predicting the popular vote rather than just the winner of the presidency is no longer useful. Republican presidential candidates have only won the popular vote once since 1992. The dissonance between the popular and Electoral College vote is far more than a challenge for our democracy than for the keys.” (Predicting the Next President, 2024 Edition, p. 192)

Now, I’m honestly impressed by just how much foresight Allan Lichtman must have had to reach this conclusion before election night in 2016. I mean, at that point Bush had won the popular vote in 2004, and Obama had won it fair and square in 2008 and 2012. So, obviously, that meant Republicans would never win the popular vote again unless it wasn’t a close election (2024 was not close, right?). And, as The Post Rider notes, in Lichtman’s published paper from October of 2016 predicting the outcome of the election, he writes that “the Keys predict the popular vote, not the state-by-state tally of Electoral College votes.” So he must have made this deduction in late October or early November of 2016.

Okay, sarcasm aside, this is either an incredible deduction made just before election night, for which there is no evidence, or it’s complete and utter nonsense. Is it not far more likely that he saw the election results, saw that Trump won the electoral college but lost the popular vote, and then retroactively decided to change his model to fit the results? You may have heard the term “moving the goalpost” before, and that is exactly what this would be.

And, I have to take a second to mention the most ridiculous justification Lichtman gives: that he said Trump would be impeached, which would make no sense if he weren’t predicting who would govern. Yeah, no dip, Lichtman. You can’t strawman us 2016 deniers by claiming we think you predicted Trump would win the popular vote and lose the electoral college. Obviously you thought he would win both. You can’t dance around this glaring discrepancy by tearing down a strawman.

More than anything else, this completely breaks the credibility of Lichtman’s keys. He changed what his model measured to fit the results after the fact without changing any aspect of his methodology. This is incredibly disingenuous, if not outright fraudulent. Not to mention, completely unscientific.

It’s quite funny to watch Lichtman in interviews try to explain this. In one interview with Kamal Ahmed he shouted at the interviewer to “STOP”, claiming that he was spreading “misinformation”, and threatening to end the interview right then and there.

Now, The Post Rider article goes into much more depth about this, and I highly recommend reading it if you want to understand the full extent of the issues with Lichtman’s explanation of 2016. But, ultimately, none of this changes my overall view: there are aspects of Lichtman’s model that I think are quite reasonable, but his particular implementation of those ideas is deeply flawed.

Conclusion

I have done my absolute best to give a fair and balanced assessment of Allan Lichtman’s keys. I think there is something to them, at least in the general concept of using governance failures to predict the popular vote. However, I have absolutely no faith or confidence in Lichtman’s particular implementation of these ideas. I mentioned in passing earlier that presidential elections are too infrequent and too complex for anyone to build a rigorous, scientifically valid predictive model. Many models have tried, but none can boast a perfect track record. And, frankly, I’m not going to put much trust in any of them. Politics is too volatile, and while there can certainly be trends, patterns, and correlations, I do not believe that we can ever create a truly predictive model for presidential elections. And certainly not a perfectly deterministic one based on 13 unweighted, partially subjective variables.

2024 was a Key-like Election

Now, I’m going against popular opinion here, but I actually think that 2024 was one of the most “key-like” elections in recent memory, purely in the sense that the people clearly and decisively came out against the perceived poor governance of the incumbent party. I’m not going to get political about whether Biden was a good president, or how fair an assessment either way would be, but it would be hard to deny that the overall perception of how well the Democratic Party governed from 2020 to 2024 was not particularly positive.

A large proportion of the electorate really did believe the economy was not doing well. Whether or not Allan Lichtman is correct in saying that the economy was actually doing well is irrelevant; we have seen that he himself yielded to public perception of the economy in 1992.

And, in 2024, the popular perception of the economy was quite poor. He might have predicted 2024 correctly if he had counted the economy key against the incumbent party. We saw other perceptions of poor governance as well: crime was perceived to be high, as was immigration.

However, my main point is not to nitpick individual keys, given that I don’t even believe a 6-out-of-13 threshold of unweighted variables makes any sense. Rather, I think that what happened in 2024 seems to bolster the axioms outlined earlier.

Democrats thought they had it in the bag. January 6th created some uncertainty about whether Trump could even run for office, or whether he belonged, as an insurrectionist, in the same category as a former member of the Confederacy. Gaffe after gaffe from Trump made it seem like he was unelectable. Democrats were at a loss as to how anyone could possibly vote for Trump again. But they did, and in large numbers.

I’d like to focus on one aspect of the election: the swing states. In particular, I don’t know how many people expected Trump to win all seven swing states: Arizona, Georgia, Michigan, Nevada, North Carolina, Pennsylvania, and Wisconsin. But he did. The closest state was Wisconsin, which he won by 0.9%, nearly a full point. Compared to other elections, where you often have several states decided within a point, this is an outlier. The rest of the swing states were won by one to several percentage points.

The 2020 versus 2024 dichotomy is really interesting. In 2020, Biden won the popular vote by 7 million votes (4.5%), but could have lost the electoral college if just 21,461 votes (about 0.0135% of votes cast) had swung in Arizona, Georgia, and Wisconsin. In 2024, Trump won the popular vote by only 2 million votes (1.6%), but the number of votes needed to flip 2024 is much larger: 114,885 (about 0.074% of votes cast) across Wisconsin, Pennsylvania, and Michigan. That’s a third the popular-vote margin of victory, but more than five times the number of votes needed to flip the electoral college.
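As a quick sanity check of those ratios, using the approximate totals quoted above:

```python
# Arithmetic check of the 2020 vs. 2024 comparison, using the
# approximate figures quoted in the text.
pv_margin_2020, flips_2020 = 7_000_000, 21_461    # Biden's PV margin; AZ+GA+WI flips
pv_margin_2024, flips_2024 = 2_000_000, 114_885   # Trump's PV margin; WI+PA+MI flips

print(round(pv_margin_2024 / pv_margin_2020, 2))  # 0.29 -> roughly a third of 2020's margin
print(round(flips_2024 / flips_2020, 2))          # 5.35 -> over five times the flips needed
```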

I would call 2020 a “tall” but “brittle” victory for Biden, while 2024 was a “short” but “wide” victory for Trump. It was the weakest popular vote margin we’ve seen since 2000; in fact, since 1960, only three elections have had a popular vote margin of less than 2%: 1968 (0.7%), 2000 (0.5%), and 2024 (1.6%). But Trump’s victory was spread very widely across the swing states, with only Wisconsin being decided by less than 1%.

In a sense, we saw a clear national swing against the incumbent party in 2024. And most of the swing states were within about 2 points of the national popular vote margin. This is something we might expect if the axioms underlying Lichtman’s keys are true. If the electorate is voting based on their perception of governance, and that perception is relatively uniform across the country, then we would expect to see a national swing against the incumbent party, which is reflected in both the popular vote and the swing states. However, one election does not a trend make. It’s possible that 2024 was an outlier, and in 2028 we’ll be back to large popular vote margins but very narrow electoral college victories.

It’s a little funny. I am one of Allan Lichtman’s biggest critics, and yet I genuinely conjecture that future elections may continue to show patterns that align with the axioms underlying his keys. Whether or not his particular implementation of those ideas is valid is a separate question entirely (they’re not). We’ll see if I’m right or not in 2028, I suppose.

