An “if, then” statement may save a person’s life

The Translation Bureau is a compilation team covering technology, business, the workplace, lifestyle, and other fields, with a focus on new technologies, new ideas, and new trends from abroad.

Editor’s note: Parents and teachers, as traditional “gatekeepers”, can step in when young people show suicidal tendencies, but when members of the LGBTQ community struggle with psychological problems, those gatekeepers may be unable to intervene for many historical reasons. So, can Google and artificial intelligence help? Sidney Fussell, the author of the article, discusses Google’s support for the suicide-prevention nonprofit the Trevor Project. Both parties hope that an algorithm can identify people at high risk of suicide at the earliest possible moment and enable more timely intervention. The original was published in The Atlantic under the title “The AI That Could Help Curb Youth Suicide.”

Life-saving AI: Algorithms that identify people at high risk of suicide

Image source: ALASTAIR GRANT / AP PHOTO

The suicide-prevention literature suggests that in any community there are people who can help when someone is having suicidal thoughts. They are called “gatekeepers.” The term has no rigorous definition, but it usually refers to teachers, parents, coaches, and older colleagues: people who hold some form of authority and have the ability to intervene when they see signs of psychological distress around them.

Does the “gatekeeper” also include Google?

When users search for keywords related to suicide methods, Google highlights the phone number of the National Suicide Prevention Lifeline. But this is by no means foolproof. Google cannot edit specific web pages, only its own search results, which means that someone looking for a suicide method can easily find one through links, forums, and so on, without ever using a search engine. At the same time, on today’s internet, telling someone to “go die” is more often fan-circle banter than a genuine cry for help, and a machine may not grasp the nuance. And when people search in languages other than English, Google’s artificial intelligence is much less effective at detecting suicidal ideation.

In general, intervening through search results is useful as a preventive strategy, but it is too broad. After all, anyone can search for anything for any reason.

Google’s latest attempt to use algorithms to prevent suicide is more targeted: it focuses on people who are already looking for help.

“What happened?”

In May of this year, the technology giant donated $1.5 million to the Trevor Project, a California-based nonprofit that provides psychological counseling to LGBTQ teens through a hotline (TrevorLifeline), text messaging (TrevorText), and an instant-messaging platform (TrevorChat). The project’s leaders hope to improve TrevorText and TrevorChat by using machine learning to automatically assess the suicide risk of those who write in. Everything starts with the first question a Trevor counselor asks: “What happened?”

Sam Dorison, director of the Trevor Project, said: “If they have suicidal thoughts, we want to make sure we talk with them about it in a nonjudgmental way and let them guide the whole conversation. Do they want to discuss coming out? Do they need LGBT resources in their own community? We really let them guide the conversation toward whatever would be most helpful.”

At present, those seeking help have to wait in line. Trevor’s average waiting time is short, no more than five minutes, but in some emergencies every second counts. Trevor’s leadership hopes that, as the artificial intelligence develops, it will eventually be able to identify callers at high risk of suicide by analyzing their answers to that first question and immediately route them to a human counselor.

Google will train the artificial intelligence on two data points: the initial phase of a conversation between a teenager and a counselor, and the suicide-risk assessment the counselor completes after talking with them. The idea is that by comparing the data from the initial phase with the final risk assessment, the artificial intelligence can learn to predict suicide risk from the earliest responses alone.
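To make the setup concrete, here is a minimal sketch of that kind of supervised learning in Python. It is only an illustration of the general technique (a text classifier trained on opening messages labeled with the counselor’s final risk assessment), not the Trevor Project’s or Google’s actual system; the dataset, labels, and model choice are all hypothetical.

```python
# A toy sketch of training a risk classifier on (first message, counselor
# risk label) pairs. Hypothetical data and model; not the real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: the opening messages of past conversations
# and the counselor's post-conversation risk assessment.
first_messages = [
    "i don't think i can keep doing this",
    "i just want to talk to someone about coming out",
]
risk_labels = ["high", "lower"]

# Bag-of-words features plus a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(first_messages, risk_labels)

# At intake time, the predicted probability of the "high" class could be
# used to move a new texter to the front of the queue.
new_message = ["nothing matters anymore"]
high_idx = list(model.classes_).index("high")
print(model.predict_proba(new_message)[0][high_idx])
```

In practice a system like this would involve far more data, careful evaluation, and human oversight; as the article notes, counselors would continue to make their own assessments.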

John Callery, technical director of the Trevor Project, said: “We believe that if we can train the algorithm on the first few messages and the final risk assessment, we will find things that humans can’t detect but the machine can recognize. That may help us learn more.” Callery added that counselors will continue to make their own assessments.

Algorithms have remarkable potential to identify unknown patterns, but being a good “gatekeeper” means stepping forward and intervening when problems arise. Although we already do this in some respects, whether we really want to build it into technology remains an open question. Public-health programs in Canada and the UK mine social-media data to predict suicide risk. On Facebook, once an algorithm detects self-harm or violence in a live video, it quickly flags the video and alerts the police.

We Google “how to ease a hangover,” we search for medical advice, we search for “how to get over a breakup.” We use Google to make sense of everything. The results may be mixed with irrelevant or even misleading information, but the search itself passes no judgment.

Stephen Russell, head of the Department of Human Development and Family Sciences at the University of Texas at Austin, said: “(Students) go home and get online, where they can disclose this information to anyone in the world.” Russell has spent decades doing groundbreaking research on the LGBTQ community. He said that although students with psychological problems really “shouldn’t have to turn to Google to solve these problems,” getting every real-life gatekeeper to be open and affirming toward LGBTQ youth is genuinely difficult, because the stigma and prejudice go back decades. He said: “Even today, I hear some managers say, ‘We don’t have such children here.’ That has always been the dilemma in reality.”

This is where the Trevor Project comes in. Eventually, the nonprofit’s leadership hopes to build an artificial-intelligence system that can predict what resources LGBTQ young people will need (housing, support with coming out, psychotherapy) just by analyzing the first few messages of a chat. In the long run, they want the AI to evolve to identify patterns in metadata rather than only scanning the initial messages. For example, if the AI could infer information such as a writer’s education level from their messages, could it learn how such structural factors affect suicide risk?
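One way to picture the resource-prediction idea is as multi-label text classification: each opening message may map to several needed resources at once. The sketch below is purely illustrative, with a hypothetical dataset and label set; it is not the Trevor Project’s pipeline.

```python
# A toy illustration of multi-label resource prediction from opening
# messages. Hypothetical data and labels; not the actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical examples: opening messages and the resources counselors
# ultimately pointed each person toward.
messages = [
    "my parents kicked me out and i have nowhere to stay",
    "i want to tell my friends i'm gay but i'm scared",
]
resources = [["housing"], ["coming_out_support", "therapy"]]

# Encode each resource set as a binary indicator vector.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(resources)

# One binary classifier per resource type over shared TF-IDF features.
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression()))
model.fit(messages, y)

# Predicted resource labels for a new message.
pred = model.predict(["i think i need someone to talk to"])
print(mlb.inverse_transform(pred))
```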

A dazzling tangle of “if, then” statements may not seem like something that could save a person’s life, but soon it just might.

Translator: Xitang