Chatbots and virtual assistants are growing in popularity, thanks in large part to the success Amazon has had with Alexa and its Echo line of voice-activated speakers. But there can be a darker side to this technology if companies don’t find ways to benefit people while also protecting their personal information and maintaining their trust.
Aiming to be a leader in developing so-called responsible conversational artificial intelligence, Microsoft recently laid out guidelines that it developed in-house and hopes will be embraced by the industry at large. After all, it knows a thing or two about the dark side of AI and chatbots. In March 2016, it was forced to shut down Tay, its AI-powered chatbot that interacted with users on Twitter, GroupMe and Kik. The bot had trouble recognizing offensive statements, and though it wasn’t coded to be racist, it quickly learned from those it was interacting with online. It didn’t take long for Tay to spout fake information and make controversial political statements, which prompted its demise.
Microsoft’s guidelines emphasize the development of conversational AI that is responsible and trustworthy from the start, and they encourage companies to think about how a bot will be used and to take steps to prevent abuse, Lili Cheng, Microsoft’s corporate vice president of conversational AI, wrote in a blog post. “We think earning that trust begins with transparency about your organization’s use of conversational AI. Make sure users understand they may be interacting with a bot instead of – or in addition to – a person, and that they know bots, like people, are fallible,” wrote Cheng. “Acknowledge the limitations of your bot, and make sure your bot sticks to what it is designed to do. A bot designed to take pizza orders, for example, should avoid engaging on sensitive topics such as race, gender, religion, and politics.” What’s more, the executive said companies should view conversational AI as an extension of the brand and be cognizant that when a customer interacts with a bot, the bot is a representative of that company. If the bot violates the customer’s trust, it can hurt their view of the entire organization.
Furthering its push to drive responsible AI in chatbots, Microsoft also announced it is acquiring XOXCO, a software product design and development studio known for its conversational AI and bot development capabilities, for an undisclosed sum. XOXCO has been in the market since 2013 and is behind Howdy, the first commercially available bot for Slack, which helps schedule meetings, and Botkit, which provides bot development tools to developers on GitHub. It marked the latest in a string of AI acquisitions Microsoft has made in the past six months: in May it acquired Semantic Machines, AI company Bonsai was added to the roster in July, and Lobe was acquired in September. Its $7.5 billion acquisition of GitHub, completed in October, furthers its vision that communities will fuel the next wave of bot development.
“Our goal is to make AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology,” Microsoft said when announcing the deal in the middle of November. “To do this, Microsoft is infusing intelligence across all its products and services to extend individuals’ and organizations’ capabilities and make them more productive, providing a powerful platform of AI services and tools that makes innovation by developers and partners faster and more accessible, and helping transform business by enabling breakthroughs to current approaches and entirely new scenarios that leverage the power of intelligent technology.”