RTInsights is proud to be a media partner of the Women on Stage Conference taking place February 10-11 online. In this article, we interview NLP expert Neta Snir who is one of the speakers.

6Q4: NLP Expert Neta Snir, on How AI Can Save Lives


NLP Expert Neta Snir explains how NLP can be used to detect high-risk situations online and save lives.

Our “6Q4” series features six questions for the leaders, innovators, and operators in the real-time analytics arena who are leveraging big data to transform the world as we know it.

RTInsights recently asked NLP expert Neta Snir, who is a speaker at the Women on Stage Conference taking place February 10-11, about how AI can be used for good, her journey, and more.

Q1: When did you become interested in tech and what is your technical background?


During my second year of studying for a degree in mathematics, I noticed that two of my fellow students were occasionally disappearing for long days off campus. I had to ask, and they told me about another subject they were majoring in, called generative linguistics. The following semester, I was enrolled in the linguistics department. I would always try to take the classes that leaned toward logic in language and cognitive science, and I thought I was going to become a professor. Then I hit “NLP 101” with my dear teacher Dr. Mori Rimon, who got me hooked, and I’ve never looked back since.

Q2: What is your current position and how did you get there?

I’m an NLP expert who works with early-stage startups on building their NLP and data science technology, usually from scratch. My expertise is in bringing tech into entrepreneurship and helping C-level management think in terms of data.

I’ve always been a tech geek and I find that being a freelancer is a great way for me to continuously engage with various technologies, trends and innovative companies, and simply to stay in sync with the ecosystem.

Q3: You’ve developed technology for detecting suicidal posts on the internet. Can you describe the application and how it is used?

For seven years, I have volunteered at a helpline NGO as part of the “detection force,” a team of trained volunteers who scan the internet and read hundreds of web posts daily in order to find cries for help and reach out in time. It was natural for me to look for an automated way to do it.

There’s a lot of knowledge you gain from seeing all those cases – the language, the patterns, the colors… Something there tells you that there’s more than the text reveals. But it takes years to train a human to pick up that intuition. It’s not always the explicit words that describe despair. Sometimes silence is also informative, as people choose when to respond to their commenters and what to respond to. My work was to take this “gut feeling” and turn it into measurable, machine-readable features. There’s a whole book of “web emotional behaviors” to be written, and I believe AI is necessary for writing at least some of its chapters.
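To make the idea concrete, here is a minimal, illustrative sketch in Python of what turning such explicit and implicit signals into machine-readable features might look like. The feature names, toy lexicon, and defaults are assumptions for illustration only; they are not taken from Snir’s actual system.

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Post:
    text: str                                              # the post itself
    timestamp: float                                       # publish time (seconds since epoch)
    reply_timestamps: list = field(default_factory=list)   # when the author replied to commenters

def extract_features(posts):
    """Turn explicit and implicit behavioral signals into numeric features.
    The lexicon and feature choices here are illustrative assumptions only."""
    tokens = [t.lower().strip(".,!?") for p in posts for t in p.text.split()]
    lexicon = {"alone", "hopeless", "goodbye"}             # toy despair lexicon
    features = {
        # Explicit signal: rate of despair-related words in the text.
        "keyword_rate": sum(t in lexicon for t in tokens) / max(len(tokens), 1),
        # Implicit signal ("silence"): share of posts the author never replied to.
        "silent_post_ratio": sum(not p.reply_timestamps for p in posts) / max(len(posts), 1),
    }
    # Implicit signal: how long the author waits before replying to commenters.
    latencies = [r - p.timestamp for p in posts for r in p.reply_timestamps]
    features["mean_reply_latency_hours"] = mean(latencies) / 3600 if latencies else -1.0
    return features

In a sketch like this, a downstream classifier would consume such a feature vector alongside the raw text itself.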

Q4: What does the application do when it detects something? For example, does it escalate the situation, alert a human, or something else?

Using technologies like mine, detecting emotional posts and high-risk situations becomes fast and efficient, but that’s not enough if you’re not connecting the ends: any detection technology needs an alerting system that interacts with emergency care or help centers, or monetization apps. The goal is to achieve traceability and auditability in streamlining the information, but there needs to be a human there to receive it, reach out to the person at risk, and provide them with the required help.
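As a rough sketch of what connecting those ends might look like (a detection score becoming a traceable, auditable alert that a human responder receives), consider the following illustrative Python snippet. The threshold, field names, and notification hook are assumptions made for illustration, not details of any specific product.

import time
import uuid

RISK_THRESHOLD = 0.85  # illustrative value, not taken from any real system

def route_detection(post_id, risk_score, notify_responder, audit_log):
    """Turn a high-risk detection into a traceable alert that a human closes out.
    Every step is appended to audit_log so the flow stays auditable end to end."""
    audit_log.append({"event": "post_scored", "post_id": post_id, "risk_score": risk_score})
    if risk_score < RISK_THRESHOLD:
        return None
    alert = {
        "alert_id": str(uuid.uuid4()),
        "post_id": post_id,
        "risk_score": risk_score,
        "created_at": time.time(),
        "status": "pending_human_review",    # a human responder always closes the loop
    }
    audit_log.append({"event": "alert_created", "alert_id": alert["alert_id"]})
    notify_responder(alert)                  # e.g., page an on-call volunteer or help center
    audit_log.append({"event": "responder_notified", "alert_id": alert["alert_id"]})
    return alert

The design choice mirrored here is the one Snir describes: the algorithm only raises and routes the alert, while a person receives it, reaches out, and provides the actual help.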

Q5: How was it done before, and why does a solution like this do a better job than humans?

The internet is where you exist, now more than ever with Covid-19 and its global effect on social disassociation. But when you turn to the internet, you never know what you’re going to find. Since trained human resources are not necessarily accessible to everyone, and many people don’t actually turn to professional help, anyone online is basically a helper (or not). That creates a need for a positive online presence that can be the connection between people who are seeking help, explicitly or implicitly, and the actual help they deserve to get. Algorithms trained for that role would be able to scale, avoid cognitive bias, adapt quickly to new trends and slang, and of course respond faster than any human.

Q6: What other high-risk situations can it be applied to?

Similar technologies could be applied to detecting bullying, sexual harassment, loneliness, and any other emotional behavior you’re interested in flagging. Each has its own map of features and needs to be developed independently. I’m now working with a pioneering startup called MoodKnight that develops a distress detection tool, and I see a great future for using AI for Impact.

Bonus Question: What advice, comments, or tips do you have for other women in tech?

Everyone, in any industry: Find your squad. Collaborate with peers and colleagues who share your journey. It’s reassuring to know where you stand, it’s crucial to see where you can go, and it’s good karma to show others they can get there too.


About Lisa Damast

Lisa Damast is a senior writer and the Head of Marketing at RTInsights.com. She has over 12 years of experience in online media as a reporter and marketing manager. Follow her on Twitter @Lisa_Damast.
