Thanks to digital tools equipped with artificial intelligence, we’re (theoretically) better than we used to be. Devices and apps track our workouts, our sleep patterns, our periods, our sexual encounters. We give these digital spies access to whatever intimate parts of our lives they demand, because we assume more data will show us where we’re failing and help us improve accordingly.
But if you had data on the kinds of conversations you have with other people, would it make you a better person? Can AI actually teach us to communicate better with, you know, other humans?
Startup founder Nancy Lublin thinks the answer is yes. She founded Loris.ai with the intention of helping managers tackle difficult workplace conversations. The company is named after the slow loris, a primate with a toxic bite; according to the company’s website, botched workplace conversations can be just as venomous, poisoning relationships or even ending companies.
We really don’t know how Loris.ai intends to accomplish its stated goal, or how AI will be involved at all. The company has not yet started beta testing, and details about its inner workings are scarce. Yet it already has a clear draw for investors: the company has raised $2 million in seed funding.
It’s easy to guess that Loris.ai will make money by collecting and selling user data. That is, after all, how big tech companies from Facebook to Google make their sizable earnings.
Lublin got a head start on that kind of data collection through her nonprofit Crisis Text Line. Founded four years ago, the organization offers text-based support to people in emotional crisis. It used machine learning to analyze the millions of messages exchanged through the service, looking for patterns of behavior, and then used those insights to improve training for its 12,000 counselors. Last year, Crisis Text Line partnered with Facebook to improve the social network’s response to users in crisis.
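Crisis Text Line hasn’t published the details of its pipeline, but a minimal sketch of the general approach, assuming a standard unsupervised setup of TF-IDF features plus k-means clustering in Python’s scikit-learn, with an invented sample corpus, might look like this:

```python
# Illustrative sketch only: Crisis Text Line's actual methods are not public.
# One common way to surface patterns in a large message corpus is to vectorize
# the text and cluster it, then inspect each cluster's dominant terms.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Stand-in corpus; a real system would ingest millions of anonymized texts.
messages = [
    "I can't sleep and everything feels hopeless",
    "my boss keeps yelling at me and I dread going in",
    "I haven't slept in days, I'm exhausted",
    "work stress is crushing me, my manager ignores me",
]

# Convert each message into a weighted bag-of-words vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(messages)

# Group similar messages; recurring clusters hint at common crisis themes.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Print the top terms per cluster to give each discovered pattern a label.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(kmeans.cluster_centers_):
    top = center.argsort()[::-1][:3]
    print(f"cluster {i}: {[terms[t] for t in top]}")
```

In a setup like this, recurring clusters (say, sleep problems or workplace conflict) are the kind of behavioral pattern that could then inform counselor training.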
Now Lublin is applying similar techniques at her new startup. According to Wired, the company will likely use the insights gleaned from Crisis Text Line to offer “empathy lessons” to interested companies, training managers and employees on how to improve their communication.
“Managers are nervous having a one-on-one meeting with a direct report of a different gender, and that holds women back,” Lublin told Wired. “People worrying about inclusion worry they’ll get it wrong, and that holds people back.”
AI can give us data, buckets of it if we want, complete with specific advice about how to fix the things that ail us. But humans ultimately have to decide whether, and how, to act on those instructions. In the right hands, actionable insights like the ones Loris.ai may offer could reduce workplace discrimination or clean up toxic work environments. In the wrong ones, they might just be more information to ignore as we keep making the same dumb mistakes we always have.
Yes, humans have to choose to use that information, at least for now. Soon we may wonder why we ever needed human managers at all, when our robo-bosses exert control so much more easily.
Source: Futurism