Does science have anything to say about cancel culture?

A team of psychologists recently studied concept creep: when we remove negative events from our environment, we begin to subconsciously identify neutral and positive events as negative.
It’s hard to turn on American news these days without seeing a headline describing the latest thing to be “canceled.” Whether it’s cartoons, advertising strategies, rappers, or even podcasts, in the last few years Americans have been speaking out and stepping up to combat perceived unethical and immoral behavior, and a growing polarization is developing between those for and against this new cultural shift.
With so-called “cancel culture” behaviors increasingly becoming the subject of criticism in the United States, some psychologists have taken a scientific approach to a related question: how does a change in the frequency of a “perceived object” (such as an unethical view or immoral behavior) affect our ability to accurately identify those objects in context? In layman’s terms: if we were to reduce how frequently a negative event, such as an immoral behavior, occurs in society, would we accurately perceive that it now occurs less often, or would we begin to perceive previously acceptable behaviors as immoral? The answer that David Levari and his colleagues published might surprise you.
To investigate this question, the researchers created three similar experiments using three different “negative conditions.” In the first, they asked a randomized group of volunteers to identify the blue dots in a group of 1,000 dots that ranged along a gradient from very purple to very blue. After 200 trials, the researchers decreased the number of blue dots by replacing them with purple ones and repeated the experiment. In one condition they instructed participants to be consistent and truthful when identifying blue dots, and in another they even offered monetary incentives for the “most right” answers.
In the second experiment, the researchers recruited a new randomized group of volunteers and ran a similar procedure. Instead of blue and purple dots, the volunteers viewed 800 pictures of faces ranging from neutral to threatening expressions, as rated by a standardized rating system, and were asked to distinguish threatening faces from neutral ones. After 200 trials, some of the threatening pictures were replaced with neutral ones and the experiment continued.
In the last experiment, the researchers again randomized volunteers and used a similar design, this time exposing them to research proposals rated from very ethical to very unethical according to a standardized rating system. Participants rated 48 proposals, drawn from a pool of 240, in each of two trials: in the first, unethical proposals appeared at the same rate as in the full pool; in the second, some unethical proposals were replaced with ethical ones to decrease the frequency of unethical proposals.
In all three experiments, regardless of changes in the experimental condition (instructions, monetary incentives, warnings, colors, pictures, etc.), the researchers found that participants misidentified “negative events” (blue dots, threatening faces, and unethical proposals) more often after the researchers decreased their occurrence. The team attributed this finding to something called “concept creep” or “judgment creep.”
This phenomenon can be described in the following way: when the frequency of the behavior in question decreases, humans respond by expanding their definition of the behavior. Expanding the definition leads to the detection of “false positives.”
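The dot experiment makes this mechanism easy to sketch in a toy simulation. This is not the authors’ model; it simply assumes a hypothetical observer whose threshold for “blue” is set relative to the stimuli just seen (here, their median), which is one simple way prevalence-dependent judgment could arise:

```python
import random

random.seed(0)

def make_block(n, blue_fraction):
    """Stimuli on a 0..1 purple-to-blue continuum; 'truly blue' means > 0.5."""
    n_blue = int(n * blue_fraction)
    blues = [random.uniform(0.5, 1.0) for _ in range(n_blue)]
    purples = [random.uniform(0.0, 0.5) for _ in range(n - n_blue)]
    return blues + purples

def relative_judge(stimuli, quantile=0.5):
    """Call a stimulus 'blue' if it is bluer than a threshold set
    relative to the distribution just seen (here, its median)."""
    threshold = sorted(stimuli)[int(len(stimuli) * quantile)]
    return [s > threshold for s in stimuli]

def false_positives(stimuli, judgments):
    """Count purple stimuli (<= 0.5) that were nonetheless judged blue."""
    return sum(1 for s, j in zip(stimuli, judgments) if j and s <= 0.5)

early = make_block(1000, 0.5)  # blue dots common
late = make_block(1000, 0.1)   # blue dots rare

fp_early = false_positives(early, relative_judge(early))
fp_late = false_positives(late, relative_judge(late))
print(fp_early, fp_late)  # false positives jump once blue dots become rare
```

With blue dots at 50% prevalence, the relative threshold sits near the true purple/blue boundary and almost no purple dots are misjudged; when blue dots drop to 10%, the threshold drifts into purple territory and hundreds of purple dots get called blue, even though each individual dot is unchanged.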
Human definitions of problems expand, increasing the number of identified problems, even when the problems in question genuinely become less prevalent. This tendency may help explain the growing trend of pessimism in the United States. The researchers state in their paper that concept creep may mean that even the most well-meaning people subconsciously flag as problems behaviors that previously were not considered problematic.
However, this type of science requires much more investigation. One may argue that the more time we spend solving a single problem, the more new problems we discover; in other words, the expanding definition of a problem may be appropriate as we learn more. There also needs to be more investigation into whether concept creep is a bad thing at all. After all, science itself functions under concept creep: scientists use knowledge to expand our understanding and definitions until we have a well-rounded picture of a problem and its answer. While this study gives us food for thought, in the end, the biggest question still remains: Will there ever be a point where we are satisfied, or will we continue to find problems because we are psychologically predisposed to find them?