Did Albert Einstein wear socks? How do you prevent tears when cutting an onion? Did Burt Reynolds marry Sally Field? What makes wasabi green? The average person might not know the answer to these questions, but Amazon Alexa, through the new Alexa Answers portal that was announced Thursday, might. Well, more accurately, an Alexa user could.
An online community where anyone who logs in can suggest answers to user-supplied questions posed to the voice-activated Alexa A.I. assistant, Alexa Answers is designed to answer the tough questions that can’t already be answered by the voice-enabled assistant. Once the answers are submitted, they are vetted for accuracy, scored, and if they are good enough, make their way back to Alexa users.
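That submit-vet-score loop can be pictured with a minimal sketch. Amazon has not published how its scoring actually works, so every name, threshold, and rule below is a hypothetical illustration of the workflow the company describes, not its real implementation:

```python
# Hypothetical sketch of a crowdsourced-answer pipeline as described:
# answers are submitted, vetted, scored, and only good ones are served.
# The Answer class, min_score threshold, and vote-based scoring are
# illustrative assumptions, not Amazon's actual system.
from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    upvotes: int = 0
    downvotes: int = 0
    vetted: bool = False  # has a human or automated check passed it?

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes


def best_answer(answers, min_score=2):
    """Return the highest-scoring vetted answer, or None if nothing qualifies."""
    eligible = [a for a in answers if a.vetted and a.score >= min_score]
    return max(eligible, key=lambda a: a.score, default=None)
```

Under a scheme like this, an unvetted or low-scoring submission simply never reaches users, which is the behavior the portal's description implies.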
But is crowdsourcing Alexa's smarts a good idea? From a Microsoft chatbot subverted by racist trolls to Yahoo Answers, a similar service to Alexa Answers that has become notoriously rife with bad information, the past few years have been littered with cases of user-generated data systems gone bad. So it's not hard to imagine the worst-case scenario: an Alexa-backed smart speaker blithely spouting fake news, dangerous conspiracy theories, or white supremacist talking points.
Describing Alexa Answers to Fast Company, Bill Barton, Amazon’s Vice President of Alexa Information, struck an optimistic tone. “We’re leaning into the positive energy and good faith of the contributors," he said. "And we use machine learning and algorithms to weed out the noisy few, the bad few.”
Experts on data use and its impacts are markedly less cheery.
“We have plenty of examples of why this is not going to play out well,” says Dr. Chris Gillard, who studies the data policies of Amazon and other tech companies at Macomb Community College near Detroit. Crowdsourcing data, and then using that data in training the Alexa algorithm, he says, presents “pitfalls that Amazon seem intent on stepping right into.”
The race to beat Google
While better assistants and smart speakers drive sales of accessories like voice-activated lights, Google’s decades in the search business seem to have given it an advantage over Amazon when it comes to understanding queries and returning data. Google's smart speaker has steadily gained market share against the Echo, and Google Assistant has almost uniformly outperformed Alexa in comparison tests.
In fact, almost all of the questions above, from Einstein's socks to wasabi's color, are currently answered by Google Assistant, even though they were taken directly from the Alexa Answers website. Google's answers come from its search engine's results, featured snippets, and knowledge graph. Amazon is trying to use crowd-supplied answers to catch up in this space.
“Amazon’s not Google,” says Dr. Nicholas Agar, a technology ethicist at Victoria University of Wellington, New Zealand. “They don’t have Google’s [data] power, so they need us.”
Beyond just providing missing answers to individual questions, data from Alexa Answers will be used to further train the artificial intelligence systems behind the voice assistant. “Alexa Answers is not only another way to expand Alexa's knowledge,” an Amazon spokesperson tells Fortune, “but also... makes her more helpful and informative for other customers.” In its initial announcement of Alexa Answers, Amazon referred to this as Alexa “getting smarter.”
Money for nothing, facts for free
As important as Alexa Answers might be for Amazon, contributors won’t get any financial compensation for helping out. The system will have human editors who are presumably paid for their work, but contributed answers will be rewarded only through a system of points and ranks, a practice known in industry parlance as ‘gamification.’
Agar believes this will be effective, because Amazon is leveraging people’s natural helpfulness. But he also thinks a corporation leveraging those instincts should give us pause. “There’s a difference between the casual inquiry of a human being, and Amazon relying on those answers," he says. "I think it’s an ethical red flag.”
Gillard also thinks Amazon should pay people to provide answers, whether that means using its own workers or partnering with an established fact-checking group.
Amazon certainly has the infrastructure to do it. The ecommerce giant already runs Mechanical Turk, a ‘gig’ platform that pays “Turkers” for performing small, repetitive tasks, and would seem well-suited to supplementing Alexa’s training.
But Gillard believes that relying on a ‘community’ model insulates Amazon if Alexa starts spouting bad or offensive answers, based on crowd input. “I think not paying people lets you say, well, it was sort of the wisdom of the crowd,” he says. “If you pay people, you’re going to be accused of bias.”
A gamified incentive system, though, is not without its own risks. In 2013, Yahoo Answers disabled part of its user voting system, allegedly because some participants had created fake accounts to upvote their own (not necessarily accurate) answers, according to accounts circulating on Quora. The episode is itself a cautionary example of how crowdsourcing can undermine reliability.
The biggest question facing Alexa Answers is whether Amazon can effectively prevent abuse of its new platform. Amazon declined to answer questions from Fortune about the precise role of human editors in the system. But their presence alone represents an acknowledgment that automated systems, in their current state, can't reliably detect offensive content or evaluate the accuracy of facts.
Amazon has never grappled with these challenges as directly as companies like Facebook and Twitter, though according to some critics, it has failed even to consistently detect fake reviews in its own store. Barton told Fast Company that Amazon will try to keep political questions out of the system, a subtle task Gillard says will likely fall to humans. “A.I. can’t do those things," he says, "It can’t do context.”
Yet automated systems can easily detect and block individual offensive terms, though even that has its downsides. In a test, this reporter attempted to reference the ‘90s rock band Porno for Pyros when suggesting an Alexa Answer. The answer was rejected, not because of inaccuracy, but because of the word ‘porno.’ According to a notification, “Alexa wouldn’t say that.”
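The false positive described above is characteristic of naive word blocklists, which match terms without regard to context. The sketch below illustrates the general technique; the blocklist contents and function names are hypothetical, not Amazon's actual filter:

```python
# A naive word-blocklist filter, sketched to illustrate the kind of
# context-free matching that rejects the band name "Porno for Pyros".
# BLOCKLIST and the rejection logic are illustrative assumptions only.
BLOCKLIST = {"porno"}


def is_rejected(answer: str) -> bool:
    """Reject an answer if any of its words appears on the blocklist."""
    words = answer.lower().split()
    # Strip common punctuation so "porno," still matches "porno".
    return any(w.strip(".,!?'\"()") in BLOCKLIST for w in words)


print(is_rejected("Porno for Pyros was a '90s rock band"))  # True: a false positive
print(is_rejected("Wasabi is green because of chlorophyll"))  # False
```

Because the filter sees only isolated words, it cannot distinguish a proper noun from profanity, which is exactly the "can't do context" limitation Gillard describes.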
Not everything has an answer
Barton told Fast Company that “we’d love it if Alexa can answer any question people ask her,” but that’s clearly impossible. Alexa cannot be expected to know, for instance, what the meaning of life is, and crowdsourcing answers to inherently unanswerable questions could make the entire system more fragile. In a 2018 study, researchers found that search queries with limited relevant data, which they called “data voids,” were easier for malicious actors to spoof with fake or misleading results.
And trolls aren’t the only risk to Alexa’s mental hygiene. Even well-intentioned questions can wind up nonsensical, if Alexa doesn’t properly interpret the questioner’s speech. For example, the question “What is a piglet titus?” appeared on Alexa Answers Friday morning. It seems likely the user actually asked “What is Epiglottitis?” (Answer: a rare throat condition). If enough users tried to answer the nonsense question—perhaps Winnie the Pooh fans, or users hungry for points—it could muddy the data pool, instead of improving it.
It’s unclear how Alexa's overall performance might be affected by messy or malicious data; crowd-supplied answers are still a ways from reaching most users. But it's fair to wonder whether, after all the stumbles of similar systems, Amazon is taking the risks of crowdsourced answers seriously.