Part 2 of a 2-Part Series. Read Part 1 Here.
As with most things in 2020, the presidential election feels like it's been going on for decades. Even as states slowly count and recount ballots (and the presumptive president-elect, Joe Biden, takes active steps toward a presidential transition), litigation and accusations reign supreme as we enter the second week of November.
In the first part of this series, we talked about a couple of the big missteps that major news outlets made in contributing to this general feeling of confusion. Small sample sizes, incomplete data, and failure to segment were just a few of these. The biggest ones, failure to communicate uncertainty and failure to account for the impact of predictions on outcomes, are the ones that blindsided most Americans.
Especially in the month leading up to Election Day, many polling experts were giving big odds in Biden's favor. FiveThirtyEight had Biden as the winner 89 times out of 100 the week before Election Day: pretty favorable odds, with none of the nail-biting uncertainty that was to come.
In retrospect, they were right. But more than a handful of Americans were surprised to find that the race ended up much closer than these models seemed to predict. The problem is that the predictions were often presented as gospel, with the uncertainty behind them left out of the conversation entirely.
In association terms, this would be like classifying members into groups such as "Persuadables," "Lost Causes," "Sure Things," and "Do Not Disturb" rather than just reporting a raw likelihood that each member will not renew. It's important to communicate the output of the model in a way that makes the new knowledge actionable and useful for your organization.
This is an important element of developing models, not just for presidential elections, but for associations that want a clear picture of how to build a more sustainable and financially sound organization.
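To make that concrete, here is a minimal sketch of what that kind of segmentation might look like. It assumes a model that scores each member's renewal probability under two scenarios (contacted vs. left alone); the 0.5 cutoff, the function name, and the example numbers are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch: turning raw renewal probabilities into actionable segments.
# Assumes a model that scores each member under two scenarios:
#   p_if_contacted  - estimated renewal probability if we reach out
#   p_if_left_alone - estimated renewal probability if we do nothing
# The 0.5 cutoff and the segment names are illustrative, not prescriptive.

def classify_member(p_if_contacted: float, p_if_left_alone: float,
                    cutoff: float = 0.5) -> str:
    likely_with_outreach = p_if_contacted >= cutoff
    likely_without_outreach = p_if_left_alone >= cutoff

    if likely_with_outreach and likely_without_outreach:
        return "Sure Thing"      # renews either way; outreach adds little
    if likely_with_outreach and not likely_without_outreach:
        return "Persuadable"     # outreach changes the outcome; focus here
    if not likely_with_outreach and likely_without_outreach:
        return "Do Not Disturb"  # outreach could backfire; leave alone
    return "Lost Cause"          # unlikely to renew either way

print(classify_member(0.8, 0.3))  # -> "Persuadable"
```

Each segment maps directly to an action: focus your outreach budget on the Persuadables and leave the Do Not Disturbs alone, which is exactly the kind of decision a bare probability doesn't hand you.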
Another takeaway is that sometimes the model can actually change the outcome.
For example, there's research showing that when people are confident in an outcome, they are less likely to participate in the process that creates that outcome. In a recent study from Dartmouth and the University of Pennsylvania, people were given a chance to "vote" on an outcome for a nominal fee. Before each voting session, they were shown randomized polling statistics that put different "nominees" ahead or behind.
What they found is that the more likely a participant's desired outcome appeared, the less likely they were to keep participating in the voting game.
There are cases where modeling is a dynamic process, meaning that the output of the model itself changes the impact of the inputs to the model. Predictive models should be robust to this kind of process and have mechanisms for updating the impact of those inputs.
For example, in election modeling, you gather data on voters' behavior and preferences and build a model that says "Candidate X is 99% likely to win the election." Then those same voters see that prediction and change their behavior and preferences because of it. If the model is not built to anticipate that feedback and update, it quickly becomes wrong.
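Here is a toy simulation of that feedback loop, just to show the mechanics. Every number in it (the 55% support share, the 60% baseline turnout, the 15% complacency effect) is made up for illustration; none of it comes from real election data.

```python
# A toy simulation of a model feedback loop (all numbers are illustrative).
# A confident forecast is published, complacent supporters turn out at lower
# rates, and the race ends up closer than the pre-publication model implied.

def leader_vote_share(support_share: float, leader_turnout: float,
                      rival_turnout: float = 0.60) -> float:
    """Share of votes actually cast for the leading candidate."""
    votes_for = support_share * leader_turnout
    votes_against = (1 - support_share) * rival_turnout
    return votes_for / (votes_for + votes_against)

support = 0.55            # 55% of voters prefer the leading candidate
baseline_turnout = 0.60   # turnout before any forecast is published

before = leader_vote_share(support, baseline_turnout)

# Publish a confident forecast; assume complacency shaves 15% off the
# leader's supporters' turnout (a made-up effect size for illustration).
complacent_turnout = baseline_turnout * 0.85
after = leader_vote_share(support, complacent_turnout)

print(f"Vote share implied before the forecast: {before:.1%}")  # 55.0%
print(f"Vote share after complacency sets in:   {after:.1%}")   # 51.0%
```

A model that never re-scored after publication would keep reporting the pre-forecast number even as the race tightened underneath it.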
In the case of associations, let's say you build a model that predicts the likelihood a person will buy a given product if you send them an email with a discount for it. Let's then say that your model identifies 30 different products a person is likely to buy under those conditions. It might be tempting to send that person 30 different emails in close succession. But that quickly gets spammy, and the person will most likely stop opening the emails altogether.
But if you build the number of days since your last email into the model, it will tell you to send just one email and then wait before sending the next.
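A minimal sketch of that kind of frequency cap might look like the following. The seven-day cooldown, the field names, and the example products are all assumptions for illustration, not a recommended policy.

```python
# A minimal sketch of frequency capping on model-driven emails.
# Assumes a scored list of (product, purchase probability) pairs for one
# member; the 7-day cooldown and all names here are illustrative assumptions.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Recommendation:
    product: str
    purchase_probability: float

def next_email(recs: list[Recommendation],
               days_since_last_email: int,
               cooldown_days: int = 7) -> Recommendation | None:
    """Send at most one email per cooldown window, for the best product."""
    if days_since_last_email < cooldown_days:
        return None  # still inside the cooldown window; send nothing
    if not recs:
        return None
    # Send for the single highest-probability product instead of all 30.
    return max(recs, key=lambda r: r.purchase_probability)

recs = [Recommendation("Annual Conference", 0.62),
        Recommendation("Webinar Bundle", 0.48),
        Recommendation("Certification Course", 0.31)]

print(next_email(recs, days_since_last_email=2))   # None: too soon
print(next_email(recs, days_since_last_email=10))  # the conference email
```

The point is that the cap lives inside the decision logic itself, so the model's 30 recommendations get throttled automatically instead of relying on a marketer to remember not to send 30 emails.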
When creating models that interact with the outside world, it's important to understand the effect of that interaction because it could change the effectiveness of the model itself.
The 2020 presidential election has been a rollercoaster, both emotionally and from a predictive analytics standpoint. As it winds down, now is a great time to reflect on the lessons from how the prediction data was handled, and to make better use of your own data so your organization's growth never comes down to a razor-thin, election-sized margin.
To learn more about how we help associations with predictive analytics and data management for better member retention and overall strategy, download our free ebook.
Willow