a. Suicide/Self-Injury Prediction Meta-Analyses. All of our meta-analyses on this topic have produced the same two findings. First, no factor or small set of factors (e.g., depression, prior SITBs, risk questionnaires) predicts future self-injurious thoughts and behaviors much better than random guessing. Second, there are hundreds - and potentially thousands - of things that predict future self-injurious thoughts and behaviors slightly better than random guessing. Many factors can tell us a very small part of the story about suicide risk, but none alone can tell us much more than that.
What does this clear pattern tell us about the nature of suicidality? In our view, it means that there is no simple recipe for self-injurious thoughts and behaviors. In other words, there is no magic factor or combination of 3-4 (or even 23-24) factors that will account for all or even a sizable portion of these thoughts and behaviors. Accurately predicting (and explaining) suicidality will likely require considering a large number of factors. We estimate that this will require a diverse set of 50-500 predictors for consistent 95% accuracy about eventual suicidal behavior (i.e., the who question), and ~10,000 for consistent 95% accuracy about the specific date of suicidal behavior (i.e., the when question). The pattern also suggests that there will be no determinate recipe for self-injurious thoughts and behaviors -- there is likely no one magic formula (even if the formula includes 800 things). Instead, suicidality prediction/cause is likely indeterminate, meaning that there may be a near-infinite set of factor combinations that could accurately account for suicidality (similar to how there are near-infinite solutions [and non-solutions] to the equation X + Y = 1). In short, the nature of suicidality (and its causes and predictors) appears to be complex and indeterminate (vs. simple and determinate). This conclusion is further supported by recent machine learning evidence from our group and several others. Simple algorithms (even those that consider hundreds of factors) are poor predictors of suicidality, but complex algorithms using these same factors are accurate. Consistent with indeterminacy, there is no one magical algorithm -- very different algorithms can produce similarly good predictive accuracy on the same data set.
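This point can be illustrated with a toy sketch (purely hypothetical data, not a clinical model): when an outcome depends jointly on factors, no single factor predicts better than chance, yet two structurally different multi-factor models fit the same data equally well.

```python
# Toy illustration (hypothetical data, not a clinical model): with an
# XOR-style outcome, no single factor beats chance, yet two very
# different multi-factor models are both perfectly accurate.

# Synthetic cases: outcome y depends jointly on factors a and b (XOR).
cases = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)] * 25  # 100 cases

def accuracy(predict):
    return sum(predict(a, b) == y for a, b, y in cases) / len(cases)

# "Factor-based" predictors: each uses one factor alone.
acc_factor_a = accuracy(lambda a, b: a)          # chance level
acc_factor_b = accuracy(lambda a, b: b)          # chance level

# Two structurally different multi-factor models, same data:
acc_rule  = accuracy(lambda a, b: int(a != b))   # logical rule
acc_arith = accuracy(lambda a, b: (a + b) % 2)   # arithmetic rule

print(acc_factor_a, acc_factor_b, acc_rule, acc_arith)  # 0.5 0.5 1.0 1.0
```

The two accurate models are built from entirely different operations yet make identical predictions, which is the sense in which very different algorithms can perform equally well on the same data.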
What does this clear pattern tell us about risk assessment protocols, risk factor/warning sign guidelines, and clinician judgment? Unfortunately, these meta-analyses represent conclusive evidence that traditional risk assessment protocols, risk factor/warning sign guidelines, and clinician judgment are too inaccurate to be clinically helpful. These methods produce a tremendous number of false negatives and false positives. At present, there is no better alternative available for clinical use. Our group, along with a few others, is attempting to remedy this situation by figuring out how to integrate accurate machine learning risk algorithms into clinical practice.
What does this clear pattern tell us about how suicidality research should proceed? Most fundamentally, this pattern makes clear the need to shift from a "factor-based approach" to understanding and predicting suicidality to an "algorithmic approach." To this point, suicide research has taken a highly intuitive approach based on identifying THE factor or small set of factors that truly accounts for suicidality (i.e., the simple and determinate approach). This is how humans naturally approach most things -- we typically assume that there is a specific and relatively simple recipe for everything. Unfortunately, most things in nature - including suicidality - are complex and indeterminate. This is why the search for 'THE factor or small set of factors' has produced only weak and inconsistent prediction across thousands of studies spanning the past 50 years. The meta-analytic findings clearly indicate that the 'factor-based approach' is unlikely to yield a magic recipe for suicidality. Research should instead assume complexity and indeterminacy, taking a more algorithmic approach. Indeed, recent machine learning findings strongly support this view. However, there remains a tendency for researchers to use machine learning to identify 'THE algorithm' or 'the factors that are actually most important in suicidality.' This is applying a new method to an old paradigm. Consistent with complexity and indeterminacy, we should stop trying to understand suicidality at the level of specific factors and instead try to understand how suicidality emerges from a complex and indeterminate set of factors.
By analogy, let's imagine that instead of figuring out what causes suicide, our task is to try to figure out what causes a good book.
One strategy would be to look for differences between letters in books (e.g., maybe good books tend to have far more m's and z's in them than bad books). This would obviously not be helpful -- this approach is clearly 'too zoomed in.' Yet it's analogous to focusing on specific factors to try to determine who is or isn't going to engage in suicidal behaviors.
Another strategy would be to try to look for differences in words between good/bad books. Again, this would obviously be a poor strategy -- the specific words in a book don't have much to do with whether it's a good or bad book. Again, this strategy is too zoomed in. But this is analogous to what we do when we try to understand suicide in terms of a small set of factors.
Yet another strategy would be to try to look for differences in sentences/paragraphs between good and bad books. This is certainly a better strategy than the previous two, but it's also too zoomed in. The major problem here is that books are too variable to contain the same sentences or paragraphs. This is analogous to adding a large number of factors together and hoping that it will account for suicidality.
A slightly different strategy would be to take one good book as a whole and judge other books based on how similar they are to this book in terms of letters, words, and sentences. This approach is obviously misguided -- there is no one configuration of words/sentences that makes a good story. This is analogous to assuming that there is a singular magic machine learning algorithm to be found and focusing on the factors that constitute a particular algorithm.
The best strategy for judging good/bad books is the one that people actually use: we consider books on the level of the story as a whole. The story is what emerges from the complex and indeterminate combination of words in the book. To focus on the words themselves misses the whole point -- the story. Continuing the analogy, where we as humans get caught up is the fact that the story could not exist without the words (a truism; cf. the truism that whole algorithms are made up of factors), so we assume that we can better understand the story by focusing on the words (a reductionist fallacy; cf. the assumption that we can best understand an algorithm by looking at its individual factors). If we wanted to dig deeper, we might sometimes evaluate stories in terms of characters, plot, mood, tone, and historical relevance; but we never do this on the level of letters, words, sentences, or even paragraphs because the meaning of the story does not exist on this level -- it emerges at a higher level. In suicide research, this is analogous to a complex and indeterminate view. We should not try to make sense of suicidality in terms of specific factors (cf. letters or words); we should try to understand how suicide emerges from a complex and indeterminate set of factors (cf. the story as a whole, or broader story elements analogous to psychological primitives).
b. Brain imaging and suicidality meta-analyses. Researchers have long pointed to brain abnormalities as a potential cause of self-injurious thoughts and behaviors. Several studies have detected brain abnormalities in groups of people with a history of self-injury/suicidality, but these findings are highly variable. Rarely do two studies find the same structural or functional abnormalities. To try to understand this literature better, we worked with Dr. Derek Nee's lab at FSU to conduct a meta-analysis. In short, findings showed that - based on the current literature - there are no consistent structural or functional brain imaging abnormalities among people with a history of any kind of self-injurious thought or behavior. This manuscript is currently under review. As a whole, these findings are consistent with the position that self-injurious thoughts and behaviors cannot be reduced to brain abnormalities and have no specific neural signature.
c. Meta-analyses of treatments/interventions for self-injurious thoughts and behaviors. It is unclear which interventions work and what might moderate their efficacy. To gain a clearer understanding of this literature, we collaborated with Drs. Kathryn Fox at Harvard and Christine Cha at Columbia University to meta-analyze hundreds of randomized controlled trials that had included self-injurious thoughts and behaviors as an outcome. No treatment significantly reduced suicide death, few interventions significantly affected other outcomes, no treatment type was significantly better than any other, and there were clear issues with group equivalence (e.g., differences in pre-test SITB rates, differential attrition). This manuscript is currently under review. Broadly, these findings indicate that radically different approaches to treatment may be needed to successfully treat those at risk for suicide and self-injury.
Basic Psychological Science and Beyond
Our meta-analytic work emphasized to us that, despite decades of hard work, suicidality largely remains a mystery. As noted above, the meta-analyses have produced some broad patterns that are foundational to our lab's current understanding of suicidality. But they also raised many perplexing questions that we didn't (and in some cases, still don't) have cogent answers to. In our view, we must go far beyond suicide research and clinical science to gain more insight into these issues. We briefly discuss some of these below and note a few directions that we are currently exploring.
What are psychological phenomena and how do they come about? In the suicide research field (and most other subfields of psychology), phenomena such as thoughts are assumed. In other words, we don't ask "what is a thought?," we ask, "what is a suicidal thought?" -- and explanations typically declare that suicidal thoughts are thoughts with some kind of suicide-related content (different researchers differ on the specifics of this content). We think that this may be putting the cart before the horse. Shouldn't we have a good understanding of what a 'thought' is before exerting a lot of energy trying to understand, predict, and prevent 'suicidal thoughts'?
So, then, what is a thought? There is unfortunately no straightforward or definitive answer to this kind of question yet, but there have been some exciting, evidence-based advances in recent years. In our view, the sum of the existing evidence is most consistent with the "psychological constructionist" explanation for psychological phenomena (and how they fit with biological and social processes). The radical implications of this view are highly consistent with our meta-analytic work and experimental work. We are currently conducting several studies to thoroughly test these novel hypotheses.
What is cause? Most questions in suicide research are directly or indirectly related to the central question of what causes suicidality. Most theories are postulations about what causes suicidality; interventions seek to target/interrupt the causes of suicidality; prediction studies typically assume that causal factors will make for good risk factors; and most correlational studies conclude that their findings could eventually be shown to have relevance to the causes of suicidality. Given the major importance of causality, we were curious about what 'cause' actually is, how it works, and how to study it. In most of the research that we read, cause was rarely defined and was assumed to correspond to a vague, intuitive notion of causality. We wondered whether causality was really this straightforward or whether it was a bit more complicated.
Our reading of metaphysics and epistemology unfortunately showed us that causality is a bit more nebulous and complicated than we had hoped. A few elements are consistent across many different philosophical positions on cause. These include causal complexity and indeterminacy; concepts such as causal simultaneity, over-determination, and preemption; and the utility of the counterfactual dependence test of causation in science. Each of these concepts is consistent with our meta-analytic and experimental work, and implies the need for a re-evaluation of some of the fundamental assumptions about the causes of suicidality. These concepts also highlight the need for experimental studies that can put the counterfactual dependence test of causation to use. Our recent virtual reality work reflects our efforts to provide a method of safely but meaningfully studying the causes of suicidality.
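As an illustration (a purely hypothetical sketch with made-up variable names, not drawn from our studies), the counterfactual dependence test asks whether the effect would still have occurred had the candidate cause not occurred. Over-determination shows why this test can mislead:

```python
# Hypothetical sketch of the counterfactual dependence test: C caused E
# if, had C not occurred, E would not have occurred. Over-determination
# (two independently sufficient causes) makes the test fail for each
# cause alone.

def effect(cause_a, cause_b):
    # E occurs if either sufficient cause occurs.
    return cause_a or cause_b

def counterfactually_depends(effect_fn, actual, cause):
    """True if the effect disappears when the named cause is removed."""
    counterfactual = dict(actual, **{cause: False})
    return bool(effect_fn(**actual)) and not effect_fn(**counterfactual)

# Single sufficient cause: the test behaves as expected.
single = {"cause_a": True, "cause_b": False}
print(counterfactually_depends(effect, single, "cause_a"))  # True

# Over-determination: both causes present, so neither passes the test,
# even though each plausibly 'caused' the effect.
both = {"cause_a": True, "cause_b": True}
print(counterfactually_depends(effect, both, "cause_a"))  # False
print(counterfactually_depends(effect, both, "cause_b"))  # False
```

In the over-determined case, removing either cause leaves the effect intact, so neither cause passes the counterfactual test on its own -- one reason causal inference about multiply-caused outcomes is harder than it first appears.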
What are complexity and indeterminacy? As briefly noted above, our meta-analytic evidence, machine learning evidence, and experimental evidence all clearly point to complexity and indeterminacy (vs. traditional assumptions of simple and determinate cause/prediction/differences). As a heuristic, you can think of complexity as meaning that it will take a large number of things combined in a complicated way (i.e., non-linearly, beyond addition and multiplication) to accurately account for cause (or prediction or differences). Also as a heuristic, you can think of indeterminacy as meaning that there may be near-infinite sets of causes (or predictors or differences). Importantly, this does not mean that "anything goes" -- indeterminacy does not imply meaninglessness or make it impossible to understand or predict a phenomenon. Consider the equation X + Y = 1: there are near-infinite solutions to this equation, but there are even more non-solutions. The human mind tends toward a dichotomy between "one single recipe or solution" and "anything goes," but this is a false dichotomy -- there is a middle way, and that middle way is indeterminacy. These heuristic explanations are necessary oversimplifications of what we actually mean by complexity and indeterminacy. For more information on these concepts, we recommend reading up on complex adaptive systems. We believe that all human behavior, including suicidality, is the product of a complex adaptive system.
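The X + Y = 1 analogy can be made concrete with a small sketch: over a finite grid of integer pairs, there are many solutions (indeterminacy) but far more non-solutions (not "anything goes").

```python
# A minimal sketch of "indeterminate but not anything-goes," using the
# X + Y = 1 analogy: over a grid of integer pairs there are many
# solutions, but vastly more non-solutions.

N = 100
pairs = [(x, y) for x in range(-N, N + 1) for y in range(-N, N + 1)]
solutions = [(x, y) for x, y in pairs if x + y == 1]

print(len(solutions))               # 200 solutions in this grid
print(len(pairs) - len(solutions))  # 40201 non-solutions
```

Many distinct pairs satisfy the equation, so there is no single "correct" solution -- yet the overwhelming majority of pairs fail it, so the constraint is still highly informative.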
New Paradigms and Theories
(Please note that, for now, all of what follows in this section should be cited as Franklin, J.C., & Ribeiro, J.D. All our data are pointing toward a radically different paradigm for suicide research. Paper presented at the annual meeting for the American Association of Suicidology, Washington, D.C.)
The Protean Paradigm. Our meta-analytic, machine learning, and experimental work, along with our in-depth reading of the broader literatures, all suggest the need for a new paradigm for suicide research. We mean paradigm in the sense that philosopher of science Imre Lakatos used the term "research programme" (and slightly different from how Thomas Kuhn used the term paradigm). From this perspective, a paradigm is a common set of fundamental assumptions that can be shared by many different theories. Theories are more specific elaborations on a paradigm; falsification of a given theory does not necessarily falsify the paradigm that contains the theory. However, when the fundamental assumptions that make up a paradigm are falsified, it falsifies all theories within that paradigm and indicates that it's time for a new paradigm. Based on our review of the literature, we believe it is time for a new paradigm for suicide research. We call this the "Protean Paradigm" and contrast it with what we term the "Classical Paradigm" of suicide research. The Protean Paradigm consists of four fundamental and interlocking assumptions.
1. Complexity and indeterminacy of cause, prediction, and differences. Classically, in suicide research we have assumed that the causes (and predictors/differences) of suicidality are simple and determinate. In other words, we assumed that our mission was to find the single recipe (or the very small number of recipes) that accounted for suicide (i.e., the determinate assumption) and that this recipe would include a small handful of factors combined in a straightforward way (i.e., the simple assumption). These assumptions are entirely reasonable and intuitive -- they are the default assumptions that we humans make about most things. Unfortunately, nature's default is complexity and indeterminacy. The sum of the evidence does not point toward a simple and determinate explanation of suicidality; instead, it points strongly toward complexity and indeterminacy. To be consistent with this Protean Paradigm assumption, a theory would have to move beyond a factor-level focus (i.e., the level of Classical Theories) to try to explain how suicidality emerges from a complex and indeterminate set of factors.
2. Complex and indeterminate biological contributions to suicidality. This assumption is a subset of assumption #1 above, but receives special focus because of the mass of suicide research that centers on biological contributions. Classically, in suicide research we have assumed that our mission was to find some kind of biological signature for suicidality -- a particular brain structure or function, neurotransmitter level, gene, etc. This signature search stems from a simple and determinate assumption about the biological contributions to suicidality. In contrast, the Protean Paradigm assumes that biological contributions to suicidality will be complex (they will require the consideration of a large number of biological features combined in a complicated way) and indeterminate (there is no specific recipe or biological signature for suicidality; there may be near-infinite combinations). Complexity and indeterminacy are the default in biological science, and the sum of our evidence suggests that suicidality conforms to this assumption as well. To be consistent with the Protean Paradigm, a theory would have to move beyond a focus on specific biological features (i.e., the level of Classical Theories) to try to explain why the biological contributions to suicidality are complex and indeterminate.
3. Non-essentialized categories of suicidality (including suicidality itself). Classically, we have assumed that suicidality and its subtypes are "natural kind" phenomena. That is, we have assumed that each of these phenomena has an essence (i.e., a necessary and sufficient feature) that distinguishes it from all other phenomena in the universe, provides strong and clear boundaries between it and all other phenomena in the universe, and exists independent of human perception. This is a natural assumption -- we humans assume this about most phenomena we encounter because it makes the world much easier to understand and manipulate. Many physical phenomena meet criteria as natural kinds. For example, chemical elements are natural kinds: the number of protons is each element's essence and boundary, and elements exist independent of human perception. Frustratingly, psychological phenomena are not natural kinds. Instead, they are non-essentialized categories: they do not have an essence, which means they do not have firm boundaries, and they do not exist outside of collective human agreement that they exist. This means that there is no "universally correct" taxonomy of suicidality, no essence of any particular suicidality phenomenon (e.g., ideation vs. attempt), and no essence of particular groups of people (e.g., ideators vs. attempters). Instead, the differences between ideation and attempt (and ideators and attempters) are complex and indeterminate. These groups are separable, but not in the way that carbon and oxygen are separable; they are separable in the way that good books and bad books are separable. To be consistent with the Protean Paradigm, a theory would have to move beyond a focus on finding essences of suicidality and of people who engage in suicidality (i.e., a focus of Classical Theories) to try to explain how people categorize their own behavior (and the behavior of others) as being a particular suicide-related phenomenon.
4. Direct extension of basic science. Classically, most suicide theories have been very specific to suicidality. But this is not how other, more programmatic areas of science operate. For example, there is no independent theory about how to make methamphetamine; there is a general theory of chemistry, and meth-making is derived from that theory. Likewise, there is no independent theory of comets; there is a general theory of physics (or astrophysics), and comet behavior is derived from that general theory. By analogy, there should be no standalone theory of suicide; there should instead be a general theory of psychology from which we derive an understanding of suicidality. Until recently, this was not possible because there was no evidence-based general theory of psychology. However, we believe that psychological constructionist theory represents such a general theory, so it may now be possible to derive a suicide theory that is a direct extension of more basic science. At the level of the Protean Paradigm, we do not specify a particular or "correct" general theory of psychology for this purpose. However, to be consistent with the Protean Paradigm, a theory of suicide must be directly derived from a more general (and highly evidence-based) theory of psychology.
We are currently working on a manuscript outlining the Protean Paradigm and a theory derived from the Protean Paradigm called the Most Sensible Theory of Suicide. Once it is published, we will describe these ideas in more detail here.