Thursday, March 23, 2017

Artificial Intelligence is Learning to Predict and Prevent Suicide

Megan Molteni, a writer for Wired, recently published an article on how social media giants such as Facebook and Instagram are using artificial intelligence to spot users at risk of suicide and get them help. She describes how doctors at research hospitals, as well as the US Department of Veterans Affairs, have also turned to technology for suicide prevention. The goal of the research is to build artificial intelligence programs that are highly accurate at identifying people who may be suicidal.
I did a little research and found that suicide rates are increasing. From 1999 to 2014, the suicide rate rose by more than 24%. I also found that suicide is the 10th leading cause of death in the U.S. Out of all the things people can die from, I wouldn't have guessed that suicide would rank so high. Since so many people today are on social media, or at least use the internet in general, suicide-prevention software could be a significant factor in preventing suicide, and artificial intelligence in particular could be huge when it comes to spotting potential victims. In my opinion it will be most helpful for younger users, since most young people use social media as a platform to express themselves. I have heard of cases where the victim posted things online before committing suicide; the posts were almost cries for help that should have been easy for others to notice.
Facebook and Instagram already use artificial intelligence to find posts relating to suicide, self-harm, or anything else that would make others worry about the user. The sites also let users flag such posts so that employees can review the individual and determine what should be done. It is great that current social media sites have already been accumulating this data: the more data researchers have, the more effective the artificial intelligence programs will be.
Artificial intelligence would be able to spot individuals as soon as they post and send help right away. I think the growing use of the internet could actually be a good thing for reducing the suicide rate. Once the algorithms are refined on past data and the technology matures, artificial intelligence will be able to police the internet for these warning signs. I'm certain most people who use the internet regularly have heard of, or even seen, cases where people posted online before they committed suicide. I have heard about more than one, and to me it's awful that they weren't prevented when they so easily could have been. I suspect it is because many people online don't take the threats seriously, and that is where artificial intelligence programs are going to help the most: every single post pertaining to suicide will be analyzed. Most of the flagged posts will turn out to be false alarms, but that doesn't matter. It is so much better to be safe than sorry.
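Neither the article nor Facebook describes the model behind this, but the basic pattern is easy to picture: a text classifier scores each post, and anything above a threshold goes into a queue for human reviewers. The sketch below is purely hypothetical — the training examples, the threshold, and the `flag_for_review` helper are all made up for illustration and are not Facebook's or Instagram's actual system.

```python
# Hypothetical sketch: score posts with a simple text classifier and queue
# anything above a threshold for human review. Illustrative only — the
# training data, threshold, and helper function are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder training set: posts previously labeled 1 (concerning) or 0 (not)
# by human reviewers. A real system would need far more data, reviewed with
# clinical input.
train_posts = [
    "example post a reviewer marked as concerning",
    "another post a reviewer marked as concerning",
    "ordinary post about weekend plans",
    "ordinary post about a football game",
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(train_posts), train_labels)

# Deliberately low threshold: a false alarm only costs a human reviewer's
# time, while a miss could cost a life.
REVIEW_THRESHOLD = 0.5

def flag_for_review(post: str) -> bool:
    """Return True if the post should be sent to the human review queue."""
    score = classifier.predict_proba(vectorizer.transform([post]))[0][1]
    return score >= REVIEW_THRESHOLD
```

The key design point is the one the post makes: the model never acts on its own. It only decides whether a human looks at the post, which is why erring on the side of false alarms is acceptable.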


 https://www.wired.com/2017/03/artificial-intelligence-learning-predict-prevent-suicide/

5 comments:

Michael Bacci said...

Sadly, I have been close to many incidents of suicide within my lifetime. I am optimistic that artificial intelligence can have a tangible benefit. I am also realistic: as the article mentions, this seemingly altruistic effort could very easily be about trying to win back public opinion after the bad publicity of people broadcasting suicide and self-harm through the website's new live feature. A recent notable case of a Miami teen hanging herself in a two-hour-long Facebook Live video is horrible publicity, and it was not an isolated incident. Additionally, I am unsure to what extent artificial intelligence can help prevent suicide. In my experience, it was usually already understood that the person was struggling and going through a tough time, so if the artificial intelligence only alerts people when there is a problem, I don't see that having a significant impact, since most of the time this is already known. Where I do see the ability to make a difference is in alerting to cases where some significant event puts someone's well-being in jeopardy. When someone experiences a traumatic event that drastically changes their psychological well-being, AI could catch a change in behavior that hints something significant has happened and that the person may suddenly want to harm themselves. Another way I could see AI helping is through tailored treatments for individuals who are struggling. This is briefly mentioned in the article and is the part I agree with. AI can be an effective tool for gathering information about what matters most to the individual, and thus what things may help them feel better; with that information, it could hopefully suggest treatments. Still, a primary problem with AI is that it is hard for it to mimic a specialist's knowledge and experience in an effective way. Until the technology can be improved with proven results, AI should only be used sparingly. Despite my pessimism, I support any effort that helps reduce suicide and self-harm.

Bobby Austin said...

I think that artificial intelligence could revolutionize how we identify potential suicide victims on social media. For example, the software could constantly monitor Facebook activity, identify people who need help, and notify a human specialist team to proceed from there. Facebook's CEO, Mark Zuckerberg, "acknowledged the need to detect signs of suicidal users to offer help before it's too late. 'There have been terribly tragic events that perhaps could have been prevented if someone had realized what was happening and reported them sooner'" (Garun).

Facebook has also adapted to its recent launch of Facebook Live; sadly, there have been numerous cases of people broadcasting their own suicide live on Facebook. "If Facebook believes a reported Live streamer may need help, that user will receive notifications for suicide prevention resources while they're still on the air. The person who reported the video will also get resources to personally reach out and help their friend" (Garun). I think that this is a great development; however, the potential flaw lies in Facebook's AI algorithm. What if someone slips through the cracks of the suicide notification system?

Facebook will continue to develop this potential asset further to reduce the chance of such misses. It has partnered "with organizations like the National Suicide Prevention Lifeline, the National Eating Disorder Association, and the Crisis Text Line so when users' posts are flagged and they opt to speak to someone, they can connect immediately via Messenger" (Garun).

http://www.theverge.com/2017/3/1/14779120/facebook-suicide-prevention-tool-artificial-intelligence-live-messenger

matt cannon said...

It is no surprise that suicide rates are increasing along with the development of technology. It saddens me to say that as technology develops, kids are gaining more ways to target others through cyberbullying. The idea that artificial intelligence can predict suicides is a nice one, but in reality I don't think it will have much impact on reducing suicides. Using algorithms to predict this successfully and actually make a difference will be nearly impossible, because there is a lot more that goes into suicide prevention than looking at posts on social media. I think the focus needs to be on preventing cyberbullying and harassment if you want to stop anything like this online. Making the internet a safer place for kids should be the first step toward preventing suicide and self-harm, instead of trying to examine and understand the aftermath. Waiting that long can be too late, and even if you stop the act, it does not undo the emotional and mental scarring kids carry from it. I also worry that if this kind of system flags the wrong people, it will have the reverse of its intended effect: wrongly labeling someone as suicidal can push them toward it, drawing attention to something that should not be highlighted. I also don't see how this will work in a practical sense. It takes time to accumulate and process data, and once it is processed, who does it go to? Who handles that situation? There are too many unanswered questions for this to be successful, and with something like this you don't always have time. It also should not be left up to the people who work for these companies to decide what happens to flagged content, because how do they know what is actually going on and what needs to be done? All in all, I like the idea this article highlights, but I do not think it will be a successful implementation, nor is it the right one. I think the only way to engage in suicide prevention effectively via IT is to stop the problem from occurring rather than trying to find a solution after the fact.

Matt Lodato said...

There is no question that suicide has grown to become a leading cause of death, making doctors and hospitals more interested in doing anything in their power to keep the already high numbers from increasing. Most people probably know someone, or have heard of someone, who was a victim of suicide. More often than not it is unexpected, catching people by great surprise. I, like you, find the statistics you uncovered very surprising. I agree that it is striking that suicide is the 10th leading cause of death and that the rate has risen by more than 24% in 15 years. Bringing artificial intelligence to social media sites is a great start toward lowering this rate.
I think that it is very smart to use social media sites like Facebook to find potential suicide victims. As you said, this is where people often go to express themselves, especially when they might not have anyone to talk to in person. I believe the use of artificial intelligence on social media sites will have a great impact, as it will be able to flag questionable posts and send help to individuals who might need it. I too have heard of a few cases where people who committed suicide posted on a social media site shortly before, but it wasn't taken seriously until after the fact. As this technology becomes more advanced in the years to come, I think it will definitely have an impact in preventing suicide and providing help for those who need it. Social media sites are becoming significantly smarter over time, learning more and more about their users, so it is no surprise to me that soon they will be able to save lives.

Jason Baskind said...

Honestly, this seems like a terrific idea to me. If we can use technology to potentially save lives within our community, then why not? It is awesome that we have developed artificial intelligence to the point where it can detect suicidal thoughts or behaviors. A lot of times, before a person decides to take their own life, they leave the world hints or notes. In this day and age, when we are so dependent on and involved with social media, those hints or notes often show up on social media, and the fact that artificial intelligence can be used to detect them is quite helpful. The person can then be sent help right away, and a lot of times a little help is all it takes to save someone from taking their own life. Overall, this is a genius idea! I agree that even though the artificial intelligence will make plenty of mistakes, it is much better to be safe than sorry. Hopefully this lowers the high suicide rate in the upcoming years. It certainly should help!