a. Suicide/Self-Injury Prediction Meta-Analyses. All of our meta-analyses on this topic have produced the same two findings. First, no factor or small set of factors (e.g., depression, prior SITBs, risk questionnaires) predicts future self-injurious thoughts and behaviors much better than random guessing. Second, there are hundreds - and potentially thousands - of things that predict future self-injurious thoughts and behaviors slightly better than random guessing. Many factors can tell us a very small part of the story about suicide risk, but none alone can tell us much more than that.
What does this clear pattern tell us about the nature of suicidality? In our view, it means that there is no simple recipe for self-injurious thoughts and behaviors. In other words, there is no magic factor or combination of 3-4 (or even 23-24) factors that will account for all or even a sizable portion of these thoughts and behaviors. Accurately predicting (and explaining) suicidality will likely require considering a large number of factors. We estimate that this will require a diverse set of 50-500 predictors for consistent 95% accuracy in predicting eventual suicidal behavior (i.e., the who question), and ~10,000 for consistent 95% accuracy in predicting the specific date of suicidal behavior (i.e., the when question). This pattern also suggests that there is no determinate recipe for self-injurious thoughts and behaviors -- no one magic formula, even if that formula includes 800 things. Instead, suicidality prediction/causation is likely indeterminate, meaning that there may be a near-infinite set of factor combinations that could accurately account for suicidality (similar to how there are near-infinite solutions [and non-solutions] to the equation X + Y = 1). In short, the nature of suicidality (and its causes and predictors) appears to be complex and indeterminate (vs. simple and determinate). This conclusion is further supported by recent machine learning evidence from our group and several others. Simple algorithms (even those that consider hundreds of factors) are poor predictors of suicidality, but complex algorithms using these same factors are accurate. Consistent with indeterminacy, there is no one magical algorithm -- very different algorithms can produce similarly good predictive accuracy on the same data set.
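The core pattern described above -- many factors that each predict only slightly better than chance, yet combine into accurate prediction -- can be sketched with a toy simulation. This is our own illustrative example, not the lab's actual modeling; all numbers (500 people, 200 factors, 55% single-factor agreement, a simple majority vote as the "algorithm") are made up for demonstration:

```python
import random

random.seed(0)

N_PEOPLE, N_FACTORS, P_AGREE = 500, 200, 0.55

# Simulate: each person has a binary outcome, and each risk factor
# agrees with that outcome only slightly more often than chance (55%).
outcomes = [random.random() < 0.5 for _ in range(N_PEOPLE)]
factors = [[o if random.random() < P_AGREE else not o for _ in range(N_FACTORS)]
           for o in outcomes]

def accuracy(preds):
    return sum(p == o for p, o in zip(preds, outcomes)) / N_PEOPLE

# Any single factor: barely above the 50% chance level.
single = accuracy([f[0] for f in factors])

# A crude "algorithm" (majority vote over all 200 weak factors): far better.
combined = accuracy([sum(f) > N_FACTORS / 2 for f in factors])

print(f"single factor: {single:.2f}, 200 factors combined: {combined:.2f}")
```

Even this deliberately simplistic combination rule turns a pile of near-chance predictors into strong prediction, which is the intuition behind why complex multi-factor algorithms can succeed where individual risk factors fail.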
What does this clear pattern tell us about risk assessment protocols, risk factor/warning sign guidelines, and clinician judgment? Unfortunately, these meta-analyses represent conclusive evidence that traditional risk assessment protocols, risk factor/warning sign guidelines, and clinician judgment are too inaccurate to be clinically helpful. These methods produce tremendous numbers of false negatives and false positives. At present, there is no better alternative available for clinical use. Our group, along with a few others, is attempting to remedy this situation by figuring out how to integrate accurate machine learning risk algorithms into clinical practice.
What does this clear pattern tell us about how suicidality research should proceed? Most fundamentally, this pattern makes clear the need to shift from a "factor-based approach" to understanding and predicting suicidality to an "algorithmic approach." To this point, suicide research has taken a highly intuitive approach based on identifying THE factor or small set of factors that truly accounts for suicidality (i.e., the simple and determinate approach). This is how humans naturally approach most things -- we typically assume that there is a specific and relatively simple recipe for everything. Unfortunately, most things in nature - including suicidality - are complex and indeterminate. This is why the search for 'THE factor or small set of factors' has produced only weak and inconsistent prediction across thousands of studies spanning the past 50 years. The meta-analytic findings clearly indicate that the 'factor-based approach' is unlikely to yield the magic recipe for suicidality. Research should instead assume complexity and indeterminacy, taking a more algorithmic approach. Indeed, recent machine learning findings strongly support this view. However, there remains a tendency for researchers to use machine learning to identify 'THE algorithm' or 'the factors that are actually most important in suicidality.' This is applying a new method to an old paradigm. Consistent with complexity and indeterminacy, we should stop trying to understand suicidality on the level of specific factors and instead try to understand how suicidality emerges from a complex and indeterminate set of factors.
By analogy, let's imagine that instead of figuring out what causes suicide, our task is to try to figure out what causes a good book.
One strategy would be to look for differences between letters in good and bad books (e.g., maybe good books tend to have far more m's and z's in them than bad books). This would obviously not be helpful -- this approach would be 'too zoomed in.' Yet it's analogous to focusing on specific factors to try to determine who is or isn't going to engage in suicidal behaviors.
Another strategy would be to try to look for differences in words between good/bad books. Again, this would obviously be a poor strategy -- the specific words in a book don't have much to do with whether it's a good or bad book. Again, this strategy is too zoomed in. But this is analogous to what we do when we try to understand suicide in terms of a small set of factors.
Yet another strategy would be to try to look for differences in sentences/paragraphs between good and bad books. This is certainly a better strategy than the previous two, but it's also too zoomed in. The major problem here is that books are too variable to contain the same sentences or paragraphs. This is analogous to adding a large number of factors together and hoping that it will account for suicidality.
A slightly different strategy would be to take one good book as a whole and judge other books based on how similar they are to this book in terms of letters, words, and sentences. This approach is obviously misguided -- there is no one configuration of words/sentences that makes a good story. This is analogous to assuming that there is a singular magic machine learning algorithm to be found and focusing on the factors that constitute a particular algorithm.
The best strategy for judging good/bad books is the one that people actually use: we consider books on the level of the story as a whole. The story is what emerges from the complex and indeterminate combination of words in the book. To focus on the words themselves obviously misses the whole point -- the story. Continuing the analogy, where we humans get caught up is the fact that the story could not exist without the words (which is a truism; cf. the truism that algorithms are made up of factors), so we assume that we can better understand the story if we focus on the words (which is a reductionist fallacy; cf. the assumption that we can best understand an algorithm by looking at its individual factors). If we wanted to dig deeper, we might sometimes evaluate stories in terms of characters, plot, mood, tone, and historical relevance; but we never do this on the level of letters, words, sentences, or even paragraphs because the meaning of the story does not exist on this level -- it emerges at a higher level. In suicide research, this is analogous to a complex and indeterminate view. We should not try to make sense of suicidality in terms of specific factors (cf. letters or words); we should try to understand how suicide emerges from a complex and indeterminate set of factors (cf. the story as a whole, or broader story elements analogous to psychological primitives).
b. Brain imaging and suicidality meta-analyses. Researchers have long pointed to brain abnormalities as a potential cause of self-injurious thoughts and behaviors. Several studies have detected brain abnormalities in groups of people with a history of self-injury/suicidality, but these findings are highly variable. Rarely do two studies find the same structural or functional abnormalities. To try to understand this literature better, we worked with Dr. Derek Nee's lab at FSU to conduct a meta-analysis. In short, findings showed that - based on the current literature - there are no consistent structural or functional brain imaging abnormalities among people with a history of any kind of self-injurious thought or behavior. This manuscript is currently under review. As a whole, these findings are consistent with the position that self-injurious thoughts and behaviors cannot be reduced to brain abnormalities and have no specific neural signature.
c. Meta-analyses of treatments/interventions for self-injurious thoughts and behaviors. It is unclear which interventions work and what might moderate their efficacy. To gain a clearer understanding of this literature, we collaborated with Drs. Kathryn Fox at Harvard and Christine Cha at Columbia University to meta-analyze hundreds of randomized controlled trials that had included self-injurious thoughts and behaviors as an outcome. No treatment significantly reduced suicide death, few interventions significantly affected other outcomes, no treatment type was significantly better than any other, and there were clear issues with group equivalence (e.g., pre-test differences in SITB rates, differential attrition). This manuscript is currently under review. Broadly, these findings indicate that radically different approaches to treatment may be needed to successfully treat those at risk for suicide and self-injury.
Basic Psychological Science and Beyond
Our meta-analytic work emphasized to us that, despite decades of hard work, suicidality largely remains a mystery. As noted above, the meta-analyses have produced some broad patterns that are foundational to our lab's current understanding of suicidality. But they also raised many perplexing questions that we didn't (and in some cases, still don't) have cogent answers to. In our view, we must go far beyond suicide research and clinical science to gain more insight into these issues. We briefly discuss some of these below and note a few directions that we are currently exploring.
What are psychological phenomena and how do they come about? In the suicide research field (and most other subfields of psychology), phenomena such as thoughts are assumed. In other words, we don't ask "what is a thought?," we ask, "what is a suicidal thought?" -- and explanations typically declare that suicidal thoughts are thoughts with some kind of suicide-related content (different researchers differ on the specifics of this content). We think that this may be putting the cart before the horse. Shouldn't we have a good understanding of what a 'thought' is before exerting a lot of energy trying to understand, predict, and prevent 'suicidal thoughts'?
So, then, what is a thought? There is unfortunately no straightforward or definitive answer to this kind of question yet, but there have been some exciting, evidence-based advances in recent years. In our view, the sum of the existing evidence is most consistent with the "psychological constructionist" explanation for psychological phenomena (and how they fit with biological and social processes). The radical implications of this view are highly consistent with our meta-analytic work and experimental work. We are currently conducting several studies to thoroughly test these novel hypotheses.
What is cause? Most questions in suicide research are directly or indirectly related to the central question of what causes suicidality. Most theories are postulations about what causes suicidality; interventions seek to target/interrupt the causes of suicidality; prediction studies typically assume that causal factors will make for good risk factors; and most correlational studies conclude that their findings could eventually be shown to have relevance to the causes of suicidality. Given the major importance of causality, we were curious about what 'cause' actually is, how it works, and how to study it. In most of the research that we read, cause was rarely defined and was instead assumed to correspond to a vague, intuitive notion of causality. We wondered whether causality was really this straightforward or whether it was a bit more complicated.
Our reading of metaphysics and epistemology unfortunately showed us that causality is a bit more nebulous and complicated than we had hoped. A few elements are consistent across many different philosophical positions on cause. These include causal complexity and indeterminacy; concepts such as causal simultaneity, over-determination, and preemption; and the utility of the counterfactual dependence test of causation in science. Each of these concepts is consistent with our meta-analytic and experimental work, and implies the need for a re-evaluation of some fundamental assumptions about the causes of suicidality. These concepts also highlight the need for experimental studies that can put the counterfactual dependence test of causation to use. Our recent virtual reality work reflects our efforts to provide a method of safely but meaningfully studying the causes of suicidality.
What are complexity and indeterminacy? As briefly noted above, our meta-analytic evidence, machine learning evidence, and experimental evidence all clearly point to complexity and indeterminacy (vs. traditional assumptions of simple and determinate cause/prediction/differences). As a heuristic, you can think of complexity as meaning that it will take a large number of things combined in a complicated way (i.e., non-linear combinations that go beyond simple addition and multiplication) to accurately account for cause (or prediction or differences). Also as a heuristic, you can think of indeterminacy as meaning that there may be near-infinite sets of causes (or predictors or differences). Importantly, this does not mean that "anything goes" -- indeterminacy does not imply meaninglessness or make it impossible to understand or predict a phenomenon. This is similar to the equation X + Y = 1: there are near-infinite solutions to this equation, but there are even more non-solutions. The human mind tends toward a dichotomy of "one single recipe or solution" vs. "anything goes", but this is a false dichotomy -- there is a middle way, and that middle way is indeterminacy. These heuristic explanations are necessary oversimplifications of what we actually mean by complexity and indeterminacy. For more information on these concepts, we recommend reading up on complex adaptive systems. We believe that all human behavior, including suicidality, is the product of a complex adaptive system.
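To make this "middle way" concrete, here is a small toy example of our own (the grid bounds are arbitrary, chosen only for illustration) that enumerates integer pairs and checks them against X + Y = 1: many pairs solve the equation, but far more do not, so the constraint is indeterminate without being meaningless:

```python
# Enumerate all integer pairs (x, y) with -10 <= x, y <= 10 and
# count how many satisfy the constraint x + y == 1.
grid = range(-10, 11)
solutions = [(x, y) for x in grid for y in grid if x + y == 1]
non_solutions_count = len(grid) ** 2 - len(solutions)

# Many distinct solutions exist, e.g. (-9, 10), (0, 1), and (7, -6),
# but the vast majority of pairs on the grid are non-solutions.
print(f"{len(solutions)} solutions, {non_solutions_count} non-solutions")
```

On this 21 x 21 grid there are 20 solutions and 421 non-solutions: no single "correct" pair exists, yet random pairs usually fail the constraint, which is the sense in which indeterminacy is neither "one recipe" nor "anything goes."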
New Paradigms and Theories
We plan to release a formal description of a new paradigm (and a new theory) of suicidality within the next year. We will keep you posted!