Microsoft Pitches Technology That Can Read Facial Expressions at Political Rallies
TECHNOLOGY, 8 Aug 2016
4 Aug 2016 – On the 21st floor of a high-rise hotel in Cleveland, in a room full of political operatives, Microsoft’s Research Division was advertising a technology that could read each facial expression in a massive crowd, analyze the emotions, and report back in real time. “You could use this at a Trump rally,” a sales representative told me.
At both the Republican and Democratic conventions, Microsoft sponsored event spaces for the news outlet Politico. Politico, in turn, hosted a series of Microsoft-sponsored discussions about the use of data technology in political campaigns. And throughout Politico’s spaces in both Philadelphia and Cleveland, Microsoft advertised an array of products from “Microsoft Cognitive Services,” its artificial intelligence and cloud computing division.
At one exhibit, titled “Realtime Crowd Insights,” a small camera scanned the room, while a monitor displayed the captured image. Every five seconds, a new image would appear with data annotated for each face — an assigned serial number, gender, estimated age, and any emotions detected in the facial expression. When I approached, the machine labeled me “b2ff” and correctly identified me as a 23-year-old male.
It interpreted my facial expression as “neutral,” with a bit of “surprise.”
I asked Christina Pearson, a nearby Microsoft spokesperson, to confirm that the technology was meant to be used on a large crowd, like at a Trump rally. “Yes,” she confirmed. “Or it’s meant to be the Super Bowl, whatever you want.”
“Realtime Crowd Insights” is an Application Programming Interface (API), a software tool that connects web applications to Microsoft’s cloud computing services. Through Microsoft’s emotional analysis API — a component of Realtime Crowd Insights — an application sends an image to Microsoft’s servers, which analyze the faces in it and return an emotional profile for each one.
In a November blog post, Microsoft said that the emotional analysis could detect “anger, contempt, fear, disgust, happiness, neutral, sadness or surprise.”
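To make that request/response cycle concrete, here is a minimal sketch in Python of how an application might call such an emotion-analysis endpoint. The endpoint URL, subscription key, and response field names below are illustrative assumptions modeled on Microsoft’s public Cognitive Services documentation of the time, not a confirmed specification of the Realtime Crowd Insights product.

```python
import requests

# Illustrative values: the real endpoint and key are whatever Microsoft
# issues to a developer; this URL is an assumption for the sketch.
API_URL = "https://api.projectoxford.ai/emotion/v1.0/recognize"
API_KEY = "YOUR_SUBSCRIPTION_KEY"

def analyze_emotions(image_path):
    """Send one image frame to the cloud endpoint; return one entry per
    detected face, each with a bounding box and per-emotion scores."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        API_URL,
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    return response.json()

# The scores cover the eight categories named in Microsoft's blog post:
# anger, contempt, fear, disgust, happiness, neutral, sadness, surprise.
for face in analyze_emotions("crowd_frame.jpg"):
    scores = face["scores"]  # e.g. {"anger": 0.01, ..., "surprise": 0.2}
    dominant = max(scores, key=scores.get)
    print(face["faceRectangle"], dominant, round(scores[dominant], 2))
```

Run against a photo of a crowd, a loop like this would produce one emotion readout per visible face every few seconds, which is essentially what the convention display was doing.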
Microsoft’s sales representatives told me that political campaigns could use the technology to measure the emotional impact of different talking points — and political scientists could use it to study crowd response at rallies.
But the use of facial analysis at political events is eerily reminiscent of George Orwell’s 1984, where the government monitors faces for any sign of dissatisfaction, or “facecrime.” In Orwell’s world, “to wear an improper expression on your face (to look incredulous when a victory was announced, for example) was itself a punishable offense.”
Microsoft’s Realtime Crowd Insights could pick out the stern faces of dissenters, or the angry faces of future protestors, all in a matter of seconds.
Donald Trump’s security personnel have already tried to pre-empt protests at rallies by kicking out people they thought likely to protest. At one rally in February, security asked 30 black students to leave before Trump started speaking. According to USA Today, the students had planned to sit in silent protest, but one 19-year-old student said, “We didn’t plan to do anything.”
In Politico’s suite in Cleveland, one passerby told me he was “slightly creeped out,” and another asked me why Microsoft was collecting their facial information. The machine also picked up on a small range of negative responses in the room, including “fear, contempt, and disgust.”
When I attended the “Realtime Crowd Insights” display in Philadelphia, I asked to speak with a spokesperson and was introduced to Kathryn Stack, a managing director with the public affairs firm Burson-Marsteller. I asked Stack whether the product could be used to identify protestors or dissidents at rallies or political events.
“I think that would be a question for a futurist, not a technologist,” she responded.
Facial recognition technology — the identification of faces by name — is already widely used in secret by law enforcement, sports stadiums, retail stores, and even churches, despite being of questionable legality. As early as 2002, facial recognition technology was used at the Super Bowl to cross-reference the faces of the 100,000 attendees against a database of known criminals. The technology is controversial enough that in 2013, Google tried to ban facial recognition apps from its Google Glass system.
But “Realtime Crowd Insights” is not true facial recognition — it could not identify me by name, only as “b2ff.” It did, however, store enough data on each face to keep identifying it by the same serial number, even hours later. The display demonstrated that capability by distinguishing between the total number of faces it had seen and the number of unique serial numbers.
Instead, “Realtime Crowd Insights” is an example of facial characterization technology — where computers analyze faces without necessarily identifying them. Facial characterization has many positive applications — it has been tested in the classroom, as a tool for spotting struggling students, and Microsoft has boasted that the tool will even help blind people read the faces around them.
But facial characterization can also be used to assemble and store large profiles of information on individuals, even anonymously.
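Microsoft has not disclosed how the demo matched returning faces or how long their data was retained. The following is a speculative Python sketch of how such anonymous profiling is commonly built, not Microsoft’s implementation: each detected face is reduced to a numeric template, and each new template is compared against the stored ones by distance, reusing the old serial number when a close match is found.

```python
import secrets
import numpy as np

class AnonymousFaceTracker:
    """Speculative sketch: assign persistent serial numbers to faces
    without ever learning a name."""

    def __init__(self, match_threshold=0.6):
        self.templates = {}        # serial number -> stored face template
        self.total_faces_seen = 0  # every sighting, repeats included
        self.match_threshold = match_threshold

    def observe(self, template):
        """Return a serial number like 'b2ff' for this face template,
        reusing an existing one if a stored template is close enough."""
        self.total_faces_seen += 1
        template = np.asarray(template, dtype=float)
        for serial, stored in self.templates.items():
            if np.linalg.norm(template - stored) < self.match_threshold:
                return serial  # the same face, seen again
        serial = secrets.token_hex(2)  # four hex characters, e.g. 'b2ff'
        self.templates[serial] = template  # kept until deliberately deleted
        return serial

    def unique_faces(self):
        return len(self.templates)
```

Nothing in this matching loop involves a name or asks for consent, yet the templates accumulate indefinitely unless someone deliberately deletes them — precisely the retention question raised later in this piece.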
Microsoft has traditionally adopted an “opt-in” policy for facial recognition, requiring users’ consent before Microsoft can store an image of their face. The Kinect sensor on an Xbox, for example, allows users to sign in through facial recognition technology — but requires users to first give consent, according to Microsoft’s privacy policy.
Microsoft has a similar code of conduct for APIs, which requires developers to “obtain the consent of the people whose data (such as images, voices, video or text) are being processed by your app.”
Alvaro Bedoya, a professor at Georgetown Law School and expert on privacy and facial recognition, has hailed that code of conduct as evidence that Microsoft is trying to do the right thing. But he pointed out that it leaves a number of questions unanswered — as illustrated in Cleveland and Philadelphia.
“It’s interesting that the app being shown at the convention ‘remembered’ the faces of the people who walked by. That would seem to suggest that their faces were being stored and processed without the consent that Microsoft’s policy requires,” Bedoya said. “You have to wonder: What happened to the face templates of the people who walked by that booth? Were they deleted? Or are they still in the system?”
Microsoft officials declined to comment on exactly what information is collected on each face and what data is retained or stored, instead referring me to their privacy policy, which does not address the question.
Bedoya also pointed out that Microsoft’s marketing did not seem to match the consent policy. “It’s difficult to envision how companies will obtain consent from people in large crowds or rallies.”
________________________________
Alex Emmons – ✉alex.emmons@theintercept.com
Go to Original – theintercept.com