How can activists understand if their tactics are working?
Why activists might ignore evidence, and better ways to understand the impact of disruptive tactics
Being an activist is hard in many ways – you often fight for things that take a long time to win, and you usually sacrifice free time, money and a stress-free life along the way. However, one challenge that activists face, which few people talk about, is the difficulty of forming accurate beliefs about the issue you’re fighting for and how best to make progress.
In this essay, I want to unpack one key mechanism that I think makes it challenging for activist groups to understand and act on evidence about the effectiveness of their tactics (I hope to explore a few other biases in subsequent pieces). In this context, I’m using the term “activist” to mean anyone who spends a substantial amount of their time and energy (e.g. via a job or significant volunteering) campaigning on various issues. Concretely, I’m drawing on my anecdotal experience of being deeply involved with groups like Extinction Rebellion and Animal Rebellion, as well as doing research on groups like Just Stop Oil, Insulate Britain, Last Generation and more (and having many friends involved in these or similar groups).
Before I get into it, if it wasn’t obvious from my previous work and writing, I think activism has done many important things to improve the world, and I expect it’ll keep doing so in the future. I have deep gratitude for anyone who dedicates themselves to improving the world and sacrifices things along the way. However, that doesn’t mean we can’t hold ourselves to higher standards and keep improving.
The challenge: Knowing when to ignore evidence and other people’s opinions, and when to take them seriously
A key challenge many activists face, particularly those using disruptive or otherwise unconventional tactics, is that almost everyone will tell them their tactics aren’t productive. If anything, activists using disruptive tactics are told that they are making things worse and that it would be better if they stopped. Yet it is very easy to find cases where the people saying this were clearly wrong:
A Gallup poll from 1966 showed that 63% (!) of Americans had an unfavourable view of Martin Luther King Jr, despite his actions leading to several historic wins for civil rights; many of us today consider him a hero.
The Suffragettes were often met with verbal and physical violence when campaigning for women’s right to vote, despite many now crediting them with significantly advancing the women’s movement towards equality.
Extinction Rebellion UK was told that its tactics would alienate the public, despite empirical evidence showing an improvement in public concern about climate change around the April 2019 Rebellion (as well as other positive impacts covered in a case study I did of XR some years ago).
I think this is often due to the public focusing on whether they approve of a specific tactic, and conflating that with the effectiveness of the tactic (e.g. when polls like this show that most people don’t approve of major disruption, surprise surprise). Thankfully, most activist groups are savvy enough to understand that support for a particular group isn’t the same as progress on the issue you care about, and that the latter is what really matters.
That said, it does not make sense for activist groups to use these cases to conclude: “Oh, the public always tells people that disruptive activism is counterproductive, therefore I should ignore (approximately) all evidence and all people who tell me this”. This may seem obvious, but sadly, it is a surprisingly common viewpoint within the parts of the UK climate movement with which I’m familiar.
For example, some disruptive tactics have actually been counterproductive to the goals of the activists employing them – such as violent tactics in the Spanish anti-austerity movement leading to a reduction in support for the movement, or UK climate activists pouring faeces over the grave of a war veteran and national hero (I have no empirical evidence that this was negative for the climate movement, but I really struggle to think of credible reasons it was positive). The latter example is particularly important because, in many ways, it has the same key features as other disruptive tactics that most climate activists would defend: it was nonviolent, it received media attention and the messenger spoke about the urgency of climate change – just like climate groups who throw soup at paintings, disrupt sporting events or interrupt theatre shows. However, I bet that even the most disruptive-tactic-loving groups wouldn’t defend this as effective, indicating that some nuance is required here.
In essence, whether activists using disruptive or norm-breaking tactics are having a positive or a negative impact, they will hear the same response from the broader public and media: “What you’re doing isn’t helping and you need to stop”. So, when activist groups innovate with new tactics, such as when Just Stop Oil or similar direct action groups begin targeting museums, sporting events or art galleries, they hear the same old “These tactics aren’t working, it’s time to try something different”. If anything, this emboldens groups, and they invoke the 1960s US Civil Rights Movement and the Freedom Riders, saying this is the same resistance that past movements faced.
So the challenge for activists becomes: How do I know what evidence, and which messengers, to trust?
Something I’ll come back to in this essay is a key question, stolen from Julia Galef’s excellent The Scout Mindset: “What evidence would I need to see to change my mind?”. When answering this question, it becomes clear that tracking the public or elite response to your protest/group is insufficient for understanding the effectiveness of your tactics, as both show the same outcome (people hate being disrupted, whether it’s net good for the issue or not). The harder challenge is finding a form of evidence that can reliably distinguish between tactics that are helping a given issue and tactics that are harming it. You can see what this might look like in the table below, to help make things a bit more concrete.
Sadly, I think most grassroots groups aren’t pursuing the bottom category of “actually useful evidence”. Instead, many groups focus on tracking things like how much media coverage they receive, which isn’t a reliable indicator of impact either: it simply means you did something story-worthy in the eyes of the media, which can include pouring faeces over the grave of a national hero or stealing the remains of a grandmother from her grave. Sure, raising issue salience is important, and I believe that, but at what cost? If you raise the salience of your issue but put off key stakeholders or parts of the public you need to win over, you may have damaged progress on your issue overall, despite your media coverage metric telling you that you’re winning. (For some issues, some salience is better than no salience, so disruptive tactics may be extremely helpful. However, I have some questions about whether salience is still a key bottleneck for the climate movement.)
I’ll be the first to say that I don’t have a great solution for finding meaningful indicators of progress for activist groups. Measuring the effectiveness of social movement tactics and strategies is complicated – this is our sole focus at Social Change Lab and it’s not easy. There are many reasons why understanding the success of various tactics is challenging:
Social movements have many different goals, whether that’s influencing policymakers directly, influencing public opinion, shifting media discourse, changing voting behaviour, building the movement further, or more. Tactics may have differential impacts on these outcomes, whereby they may (for example) positively influence public opinion but negatively affect media discourse. It’s not clear how we should trade these off against one another.
The impact of activist tactics on the indicators/outcomes above is not always easy to measure. For example, it is relatively easy to understand the impact on public opinion but very challenging to understand the impact on political representatives, even though this is a very important outcome for activists who want to change policy.
Some tactics may have different short-term and long-term consequences. For example, recent research we did at Social Change Lab found that disruptive animal rights protests had negative impacts on public opinion in the short term but were neutral over a 6-month timeframe.
By now, you probably get the point. Understanding the impact of activist tactics isn’t easy, and especially so when you’re a mainly volunteer-run group operating on a shoestring budget.
So, what can activist groups, especially those pursuing disruptive or innovative tactics, do about this? One important, but challenging, mindset activist groups can adopt is humility: taking seriously the idea that their tactics may not be working as intended. This is easier said than done, especially when you’ve been arrested 10+ times for a tactic that some new data says may not be helping. When you’ve been so deeply committed to something, to the point of risking your liberty, it’s hard to step back and evaluate things rationally.
However, I do believe there are some other tangible things that activist groups can do to have a more realistic and grounded-in-reality understanding of their strategies and tactics:
1. Think hard about which measures would show you a positive outcome for successful tactics and a negative outcome for harmful tactics
As stated above, I don’t think it’s particularly useful to track media coverage as your main indicator of success, yet this is currently a central part of how most direct action groups evaluate themselves (and sometimes how their funders evaluate them).
So, what are some more reliable indicators people might use? Obviously, it depends on the organisation’s theory of change, and ultimately what they want to achieve. As an example, let’s say you’re an activist group that wants to achieve policy change on home insulation through building issue salience and direct impact on policymakers (à la Insulate Britain). What should your indicators of success be then, if not only media coverage? Some ideas include:
Mentions of home insulation in the Houses of Parliament (via the Hansard records), to monitor the salience of this issue amongst elected politicians
Whether key political figures outside of the government, such as leaders of other parties or members of the House of Lords, have made commitments or comments on insulation
How many people in the UK consider home insulation a top 3 issue
Feedback from discussions that insider policy advocacy groups (e.g. IPPR or Friends of the Earth) have had with policymakers who work on insulation
The formation of any sub-committees or political working groups around insulation (or any other form of political/legislative movement)
I only spent five minutes coming up with the above indicators, all of which you can obtain for free (except the polling, although you could cross-reference similar information from existing YouGov polls), so I’m sure you can do much better with some dedicated focus if you’re actually running a campaign. My point is that it is often possible to monitor more meaningful metrics if you put a bit of effort (and sometimes money) into it. The tricky thing is that some of these metrics don’t provide the instantaneous feedback that media coverage does, which is why activists may default to counting media hits. However, if you have a political strategy you want to meaningfully test, feedback should be considered on the order of months or longer, rather than days.
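To make the first indicator concrete, here is a minimal sketch of how a group might track parliamentary mentions of their issue over time. It assumes a JSON search endpoint for the Hansard records; the URL, parameter names and response field below are illustrative assumptions, so check the current Hansard API documentation before relying on them:

```python
# Sketch: tracking monthly mentions of a phrase in parliamentary debates.
# NOTE: the endpoint, parameter names and response field are assumptions
# for illustration -- check the current Hansard API docs before use.
import requests

SEARCH_URL = "https://hansard-api.parliament.uk/search/contributions/Commons.json"  # hypothetical endpoint

def monthly_mentions(term: str, year: int, month: int) -> int:
    """Count Commons contributions mentioning `term` in a given month."""
    start = f"{year}-{month:02d}-01"
    end = f"{year}-{month:02d}-28"  # crude month end; fine for trend-tracking
    params = {
        "queryParameters.searchTerm": term,   # hypothetical parameter name
        "queryParameters.startDate": start,   # hypothetical parameter name
        "queryParameters.endDate": end,       # hypothetical parameter name
    }
    resp = requests.get(SEARCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("TotalResultCount", 0)  # hypothetical response field

if __name__ == "__main__":
    # Compare issue salience in the months before and during a campaign.
    for year, month in [(2021, 8), (2021, 9), (2021, 10), (2021, 11)]:
        print(year, month, monthly_mentions("home insulation", year, month))
```

Even a crude monthly count like this gives a trend line you can compare against your campaign timeline, which is far more informative than a tally of media hits.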
This relates to another point that social movement organisations sometimes seem to forget – social change takes time, and you often won’t win in 6-12 months. A prime example comes from Insulate Britain, who declared failure no more than 6 months after their campaign started. Did people seriously expect the government to roll out a national-scale plan to insulate over 20 million leaky homes in less than 6 months? Much to my surprise, it seems like this is exactly what they expected.
2. Pre-register your hypotheses before you start a campaign, and stick to what you said
Another pet peeve of mine is when groups say “We’re doing this campaign because we believe our tactics will lead to X” (where X might be an increase in public support or a policy commitment), and then, when the campaign finishes and X doesn’t happen, they suddenly find a great reason why X was never meant to happen but the campaign was very effective regardless. This is a classic example of motivated reasoning, whereby “individuals tend to favour evidence that coincides with their current beliefs and reject new information that contradicts them, despite contrary evidence”.
To make this concrete, I’ll provide two brief examples. The first comes from the Extinction Rebellion UK protests between April and October 2019. The dominant thinking at the time, at least within XR, was that we simply needed (approximately) 1,000-1,500 people to get arrested during a week-long campaign, which would “overwhelm” the police, leading to a crisis where there were not enough police station cells for all the arrested climate activists. Setting aside that this hypothesis was pretty ridiculous (in reality, people just got taken to police stations further and further from London – not exactly overwhelming), XR did indeed get over 1,000 people arrested in both April 2019 and October 2019. So, what happened next, given the hypothesis was disproven? Hopefully some reasonable updating towards a new strategy that didn’t focus on mass arrest to apply instrumental pressure to the police? Wrong. Instead, (a small number of influential) people touted that “We actually need 3,000 arrests to overwhelm the police, so that will be our new goal!”.
This is also a prime example of an unfalsifiable theory of change (and of shifting the goalposts). People can always say “My hypothesis that 3,000 arrests will overwhelm the police and lead to policy change is correct, we just haven’t tested it yet, so we need to keep trying”. Well, this is great and all, but isn’t it relevant that there was a similar prediction for 1,000 people, and it turned out to be wrong? What’s stopping you from saying “Oh, it’s actually 5,000 people, I realised” after 3,000 fails too? These unfalsifiable predictions are particularly worrying because, by definition, you will not know whether you are pursuing the wrong strategy. This is, in my opinion, a very dangerous place to be, as it relies (mostly) on blind faith, without specifying any evidence that would change your mind.
The second example is from some recent research we’ve done at Social Change Lab with grassroots groups. In short, we collaborated with two activist groups to do public opinion polling for their campaigns. At the outset, we always ask the groups what they want to measure and what their goals actually are, so we can measure the relevant thing. For example, a group might say that they expect (and want) public support for banning private jets to increase. Great – this is exactly the kind of useful, concrete thing we can measure. Fast forward a few months to after the campaign, and the results are in: public support for banning private jets has stayed the same (and on some broader measures of climate policy, things have gotten worse). We take this result to the group, and what do they say? “Ah yes, we never actually expected public opinion to improve. Our theory of change actually revolves around recruiting more people to join the movement and applying direct pressure to policymakers”. Urm, great, but then why did you pay thousands of pounds for public opinion polling and tell me you expected (and wanted) this to change? Sadly, I think this is another case of motivated reasoning – the evidence doesn’t match someone’s pre-existing beliefs about how social change works, so they slyly adjust their hypothesis rather than admit their previous theory was wrong.
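For groups that do pre-register a polling hypothesis, checking it afterwards doesn’t require much machinery. Below is a minimal sketch of a two-proportion z-test comparing two polling waves; all the poll numbers are made up for illustration, and a real evaluation would also need comparable sampling and weighting across waves:

```python
# Sketch: testing a pre-registered polling hypothesis with a two-proportion
# z-test. All poll numbers below are made up for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(support_a: int, n_a: int, support_b: int, n_b: int):
    """Two-sided z-test for a change in support between two polling waves."""
    p_a, p_b = support_a / n_a, support_b / n_b
    pooled = (support_a + support_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value
    return p_b - p_a, z, p_value

# Pre-registered hypothesis: support for banning private jets will rise.
# Hypothetical waves: 920/2000 (46%) before the campaign, 960/2000 (48%) after.
change, z, p = two_proportion_z(920, 2000, 960, 2000)
print(f"change = {change:+.1%}, z = {z:.2f}, p = {p:.3f}")
# Here p is about 0.2: the data do not show the pre-registered rise in support.
```

The point of writing the prediction down first is exactly this moment: when the test comes back null, the honest move is to update your theory, not to quietly swap in a new hypothesis.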
3. Consult other people in the movement (and take their opinions seriously)
Another potentially useful way for activists to get an external perspective on their strategy is to solicit feedback from other people in the same movement – people who care about the issue but are pursuing a different theory of change. In the case of groups like Extinction Rebellion or Just Stop Oil, this might mean slightly more moderate groups like Friends of the Earth, Greenpeace or Uplift.
There are some benefits to this approach:
Ultimately, these other groups in the movement have the same goals, and they have often (in the background) been supportive of groups like Extinction Rebellion, so they are likely to have the issue’s best interests at heart when giving you feedback on a potentially controversial strategy.
It is relatively easy to gather this feedback, and the side benefit is that you build better relationships with folks working on different parts of the movement ecology, which is almost always a positive for movement collaboration and trust.
It might be especially useful to get feedback from groups doing insider political advocacy, as they will often hear things from policymakers that you won’t. For example, a group doing insider advocacy on ending oil and gas exploration might hear from politicians that JSO’s campaigning has made this issue untouchable for now, due to fear of being seen to “give in” to JSO’s demands. On the other hand, policymakers working on insulation might say that, thanks to Insulate Britain, there is enough salience around home insulation to progress policy quickly.
An obvious drawback is that people in other organisations are not prophets who can predict the outcomes of innovative social change tactics, and they are often more moderate by virtue of the groups they work for, so their recommendations and feedback may be too conservative.
Overall, I think this is a hard problem for activist groups and not one where there are neat or obvious solutions. However, my view is that the most successful organisations, whether it’s a business, nonprofit or social movement organisation, need to be able to grapple with challenging evidence and change strategy if required.