Mobile Treatment Apps
Based on some of our early laboratory work, we developed and tested a mobile app that aims to reduce suicidal and nonsuicidal self-injury, an approach we evaluated in three web-based randomized controlled trials. We call this approach Therapeutic Evaluative Conditioning, or TEC. The results of these initial trials were encouraging, though we caution that these studies necessarily had many limitations and that TEC should not take the place of other forms of treatment. In response to a spate of requests for the app, we developed a publicly available version, which is free for Apple and Android devices; our website offers more information on the publicly available app.
Although our work with this app was promising, we believe that we can do much better. Rather than iterating on the specific approach we used with TEC, we are currently working to develop a much better understanding of the most fruitful treatment targets for self-injury. We have new ideas about what these targets might be, and we believe that TEC's targets only partially intersect with the most central causes of self-injurious behaviors. Through the Advancing Knowledge arm of our work, we are currently learning about these new targets and how best to address them. If we are successful with this more basic groundwork, we will then begin exploring how to translate that knowledge into a free and publicly available intervention option. In other words, we hope to repeat the process we followed with TEC, but with much better treatment targets and more advanced technologies.
System-Wide AI-Based Risk Detection
It is clear from our work and the work of our colleagues that machine learning algorithms have the potential to accurately identify who will go on to attempt suicide or die by suicide. This represents a major breakthrough in suicide science, but it unfortunately does not automatically translate into real-world impact. To have an impact, many other difficult problems must be addressed. For example:
a. How do we implement these algorithms in large hospital systems?
b. How do we integrate risk algorithm information seamlessly into the normal (and over-burdened) clinical workflow?
c. How do we solve the "when" problem (i.e., accurately predicting not just who will engage in suicidal behavior, but when)?
d. What do providers do once they encounter someone with an elevated risk score?
e. How do we continuously ensure the validity of algorithms across locations, groups, and time?
Along with our colleagues Drs. Jessica Ribeiro and Colin Walsh, we are currently conducting a large project aimed at answering questions (a) and (b) by developing and applying an algorithmic approach within a large hospital system. Through this process, we are developing Clinical Decision Support Tools that we hope will serve as prototypes for the technology that providers will one day use to understand their patients' risk levels and what to do about them.
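To make the general idea of a risk-algorithm-driven decision support flow concrete, here is a purely illustrative sketch. The feature names, weights, intercept, and alert threshold below are all hypothetical assumptions for demonstration; they are not the actual model, features, or cutoffs used in this or any clinical project.

```python
import math

# Hypothetical feature weights for a toy risk model (illustrative only;
# not the features or weights of any real suicide-risk algorithm).
WEIGHTS = {
    "prior_attempts": 1.2,
    "recent_ed_visits": 0.6,
    "medication_changes": 0.3,
}
BIAS = -3.0            # assumed intercept
ALERT_THRESHOLD = 0.5  # assumed cutoff for surfacing an alert to the provider

def risk_score(record: dict) -> float:
    """Logistic score in [0, 1] from a patient's EHR-derived features."""
    z = BIAS + sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def clinical_decision_support(record: dict) -> str:
    """Return the workflow action a decision-support tool might surface."""
    score = risk_score(record)
    if score >= ALERT_THRESHOLD:
        return f"ALERT: elevated risk ({score:.2f}); prompt structured assessment"
    return f"no alert ({score:.2f}); continue routine care"

# Example: a record with two prior attempts and one recent ED visit.
print(clinical_decision_support({"prior_attempts": 2, "recent_ed_visits": 1}))
```

Even this toy version surfaces the practical questions listed above: where the score appears in the clinical workflow, what the alert instructs the provider to do, and how the threshold and weights would need ongoing validation across sites and over time.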