It is difficult to advance knowledge if you don't know what's known. Unfortunately, academia disincentivizes having a broad and deep view of one's research area. The demands on most graduate students' and professors' time and attention make it impossible to read the tidal wave of new studies published each year. This leaves most of us with an impressionistic and woefully incomplete understanding of our area, which makes it difficult to identify broad patterns in findings and major gaps in knowledge. This is a recipe for stagnation. To try to avoid this, we devote a lot of time to large meta-analyses that summarize broad areas of suicide and self-injury research. We briefly summarize our previous and ongoing work here:
a. Suicide/Self-Injury Prediction Meta-Analyses. All of our meta-analyses on this topic have produced the same two findings. First, no factor or small set of factors (e.g., depression, prior self-injurious thoughts and behaviors [SITBs], risk questionnaires) predicts future self-injurious thoughts and behaviors much better than random guessing. Second, there are hundreds - and potentially thousands - of things that predict future self-injurious thoughts and behaviors slightly better than random guessing. Many factors can tell us a very small part of the story about suicide risk, but none alone can tell us much more than that.
See here for our published studies in this area.
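The second of these findings - many weak predictors, none strong alone - can be illustrated with a toy simulation. Everything here is synthetic and hypothetical (the outcome, the 100 "factors", and the per-factor effect size are invented for illustration, not drawn from our studies): each simulated factor alone barely beats a coin flip, yet a plain sum of all of them discriminates far better.

```python
import random

random.seed(0)
N, K, SIGNAL = 2000, 100, 0.15  # people, factors, per-factor effect size (all invented)

# Synthetic outcome (1 = future SITB, for illustration only) and 100 weakly
# informative factors: each factor is mostly noise plus a tiny outcome signal.
y = [i % 2 for i in range(N)]
X = [[SIGNAL * y[i] + random.gauss(0, 1) for _ in range(K)] for i in range(N)]

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic.
    0.5 = random guessing, 1.0 = perfect discrimination."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank = [0.0] * len(scores)
    for r, i in enumerate(order, start=1):
        rank[i] = r
    n1 = sum(labels)
    n0 = len(labels) - n1
    rank_sum = sum(rank[i] for i in range(len(labels)) if labels[i] == 1)
    return (rank_sum - n1 * (n1 + 1) / 2) / (n0 * n1)

# Each factor alone: barely better than chance (AUC hovers near 0.5).
single_aucs = [auc([X[i][j] for i in range(N)], y) for j in range(K)]

# All 100 factors combined (here, a simple sum): substantially more accurate.
combined_auc = auc([sum(row) for row in X], y)
```

The simple sum stands in for "considering a large number of factors"; real algorithmic approaches are far more sophisticated, but the qualitative point - weak alone, useful in aggregate - is the same.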
What does this clear pattern tell us about the nature of suicidality? In our view, it means that there is no simple recipe for self-injurious thoughts and behaviors. In other words, there is no magic factor or combination of 3-4 (or even 23-24) factors that will account for all or even a sizable portion of these thoughts and behaviors. Accurate prediction (and causal explanation) of suicidality will likely require the consideration of a large number of factors (i.e., at least 50, maybe hundreds). The pattern also suggests that there will be no determinate recipe for self-injurious thoughts and behaviors - that is, no one magic formula (even if the formula includes 800 things). Instead, suicidality prediction/cause is likely indeterminate, meaning that there may be a near-infinite set of factor combinations that could accurately account for suicidality (similar to how there are near-infinite solutions [and non-solutions] to the equation X + Y = 1). In short, the nature of suicidality (and its causes and predictors) appears to be complex and indeterminate (vs. simple and determinate). This conclusion is further supported by machine learning evidence referenced in the 'advancing knowledge' section. Simple algorithms (even those that consider hundreds of factors) are poor predictors of suicidality, but complex algorithms using these same factors are accurate. Consistent with indeterminacy, there is no one magical algorithm -- very different algorithms can produce similarly good predictive accuracy on the same data set.
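The indeterminacy point - very different factor combinations can account for the outcome about equally well - can also be sketched with a toy simulation. Again, everything here is synthetic and hypothetical (summing a random subset of invented factors is a stand-in for "a different algorithm", not any actual model from the literature): five models built from different random subsets of the factors reach roughly the same accuracy, even though they use different factors.

```python
import random

random.seed(1)
N, K, SIGNAL = 2000, 100, 0.15  # people, factors, per-factor effect size (all invented)

# Synthetic outcome and 100 weakly informative factors, as in the illustration above.
y = [i % 2 for i in range(N)]
X = [[SIGNAL * y[i] + random.gauss(0, 1) for _ in range(K)] for i in range(N)]

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank = [0.0] * len(scores)
    for r, i in enumerate(order, start=1):
        rank[i] = r
    n1 = sum(labels)
    n0 = len(labels) - n1
    rank_sum = sum(rank[i] for i in range(len(labels)) if labels[i] == 1)
    return (rank_sum - n1 * (n1 + 1) / 2) / (n0 * n1)

# Five different "algorithms": each scores people by summing a different
# random subset of 50 of the 100 factors.
subset_aucs = []
for _ in range(5):
    cols = random.sample(range(K), 50)
    subset_aucs.append(auc([sum(X[i][j] for j in cols) for i in range(N)], y))

# All five land in a narrow accuracy band despite using different factors --
# a toy version of "no one magical algorithm."
```

This mirrors the X + Y = 1 analogy: many distinct combinations solve the problem, so searching for THE combination is misguided.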
What does this clear pattern tell us about risk assessment protocols, risk factor/warning signs guidelines, and clinician judgment? Unfortunately, these meta-analyses represent conclusive evidence that traditional risk assessment protocols, risk factor/warning signs guidelines, and clinician judgment are too inaccurate to be clinically helpful. These methods produce a tremendous number of false negatives and false positives. At present, there is no better alternative available for clinical use. Our group, along with a few others, is attempting to remedy this situation by figuring out how to integrate accurate machine learning risk algorithms into clinical practice.
What does this clear pattern tell us about how suicidality research should proceed? Most fundamentally, this pattern makes clear the need to shift from a "factor-based approach" to understanding and predicting suicidality to an "algorithmic approach." To this point, suicide research has focused on a highly intuitive approach based on identifying THE factor or small set of factors that truly accounts for suicidality (i.e., the simple and determinate approach). This is how humans naturally approach most things -- we typically assume that there is a specific and relatively simple recipe for everything. Unfortunately, most things in nature - including suicidality - are complex and indeterminate. This is why the search for 'THE factor or small set of factors' has produced only weak and inconsistent predictive accuracy across thousands of studies spanning the past 50 years. The meta-analytic findings clearly indicate that the 'factor-based approach' is unlikely to yield the magic recipe for suicidality. Research should instead assume complexity and indeterminacy, taking a more algorithmic approach. Indeed, recent machine learning findings strongly support this view. However, there remains a tendency for researchers to try to use machine learning to identify 'THE algorithm' or to identify 'the factors that are actually most important in suicidality.' This is applying a new method to an old paradigm. Consistent with complexity and indeterminacy, we should stop trying to understand suicidality on the level of specific factors and instead try to understand how suicidality emerges from a complex and indeterminate set of factors.
By analogy, let's imagine that instead of figuring out what causes suicide, our task is to figure out what makes a good book.
One strategy would be to look for differences between letters in books (e.g., maybe good books tend to have far more m's and z's in them than bad books). This would obviously not be helpful -- this approach is clearly 'too zoomed in.' Yet, it's analogous to focusing on specific factors to try to determine who is/isn't going to engage in suicidal behaviors.
Another strategy would be to try to look for differences in words between good/bad books. Again, this would obviously be a poor strategy -- the specific words in a book don't have much to do with whether it's a good or bad book. Again, this strategy is too zoomed in. But this is analogous to what we do when we try to understand suicide in terms of a small set of factors.
Yet another strategy would be to try to look for differences in sentences/paragraphs between good and bad books. This is certainly a better strategy than the previous two, but it's also too zoomed in. The major problem here is that books are too variable to contain the same sentences or paragraphs. This is analogous to adding a large number of factors together and hoping that it will account for suicidality.
A slightly different strategy would be to take one good book as a whole and judge other books based on how similar they are to this book in terms of letters, words, and sentences. This approach is obviously misguided -- there is no one configuration of words/sentences that makes a good story. This is analogous to assuming that there is a singular magic machine learning algorithm to be found and focusing on the factors that constitute a particular algorithm.
The best strategy for judging good/bad books is the one that people actually use: we consider books on the level of a story as a whole. If we want to dig deeper, we might sometimes evaluate stories in terms of characters, plot, mood, tone, and historical relevance. In suicide research, this is analogous to a complex and indeterminate view. We do not try to make sense of suicidality in terms of specific factors; instead, we try to understand how suicide emerges from a complex and indeterminate set of factors (or, even more broadly, in terms of psychological primitives).
b. Brain imaging and suicidality meta-analyses. Researchers have long pointed to brain abnormalities as a potential cause of self-injurious thoughts and behaviors. Several studies have detected brain abnormalities in groups of people with a history of self-injury/suicidality, but these findings are highly variable. Rarely do two studies find the same structural or functional abnormalities. To try to understand this literature better, we worked with Dr. Derek Nee's lab to conduct a meta-analysis. In short, findings showed that - based on the current literature - there are no consistent structural or functional brain imaging abnormalities among people with a history of any kind of self-injurious thought or behavior. This manuscript is currently under review. As a whole, these findings indicate that self-injurious thoughts and behaviors cannot be reduced to brain abnormalities.
c. Meta-analyses of treatments/interventions for self-injurious thoughts and behaviors. It is unclear which interventions work and what might moderate their efficacy. To try to gain a clearer understanding of this literature, we collaborated with Drs. Kathryn Fox at Harvard and Christine Cha at Columbia University to meta-analyze hundreds of randomized controlled trials that had included self-injurious thoughts and behaviors as an outcome. No treatment significantly reduced suicide death, few interventions significantly affected other outcomes, no treatment type was significantly better than any other, and there were clear issues with group equivalence (e.g., pre-test SITB rate differences, differential attrition). This manuscript is currently under review. Broadly, these findings indicate that radically different approaches to treatment may be needed to successfully treat those at risk for suicide and self-injury.