With widespread adoption among law enforcement, advertisers, and even churches, face recognition has undoubtedly become one of the biggest threats to privacy out there.
By itself, the ability to instantly identify anyone just by seeing their face already creates massive power imbalances, with serious implications for free speech and political protest. But more recently, researchers have demonstrated that even when faces are blurred or otherwise obscured, algorithms can be trained to identify people by matching previously observed patterns around their head and body.
Read more at Motherboard | VICE
Ref: Faceless Person Recognition: Privacy Implications in Social Media. arXiv:1607.08438 [cs.CV] (28 July 2016) | arXiv.org/abs/1607.08438
As we shift more of our lives into the virtual domain, the volume of data shared on the web keeps increasing and presents a threat to our privacy. This work contributes to the understanding of the privacy implications of such data sharing by analysing how recognisable people are in social media data. To facilitate a systematic study, we define a number of scenarios considering factors such as how many heads of a person are tagged and whether those heads are obfuscated or not. We propose a robust person recognition system that can handle large variations in pose and clothing, and can be trained with few training samples. Our results indicate that a handful of images is enough to threaten users' privacy, even in the presence of obfuscation. We show detailed experimental results and discuss their implications.
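The core intuition behind the paper can be illustrated with a toy sketch: even if the head region of a photo is completely blanked out, features drawn from the surrounding body region (here, a crude clothing-colour statistic standing in for the paper's learned features) can still match a person against a small gallery of tagged photos. All names, image sizes, and the nearest-neighbour matcher below are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photo(clothing_color, size=32):
    """Toy 'photo': top quarter is the head, the rest is clothing
    with per-pixel noise around a person-specific colour."""
    img = np.empty((size, size, 3))
    img[: size // 4] = 0.5  # head region (placeholder face)
    img[size // 4 :] = clothing_color + rng.normal(
        0, 0.05, (size - size // 4, size, 3)
    )
    return np.clip(img, 0.0, 1.0)

def obfuscate_head(img):
    """Blank out the head region, as face blurring/redaction would."""
    out = img.copy()
    out[: img.shape[0] // 4] = 0.0
    return out

def body_feature(img):
    """Mean colour of the non-head region -- a stand-in for the
    learned body/context features used in the paper."""
    return img[img.shape[0] // 4 :].mean(axis=(0, 1))

# A 'handful' of tagged photos for two hypothetical users,
# each with a distinct typical clothing colour.
people = {
    "alice": np.array([0.8, 0.2, 0.2]),
    "bob": np.array([0.2, 0.3, 0.8]),
}
gallery = {
    name: [body_feature(make_photo(colour)) for _ in range(3)]
    for name, colour in people.items()
}

def identify(img):
    """Nearest-neighbour match on body features, ignoring the face."""
    f = body_feature(img)
    return min(
        gallery,
        key=lambda n: min(np.linalg.norm(f - g) for g in gallery[n]),
    )

# The head is fully obfuscated, yet the match still succeeds.
query = obfuscate_head(make_photo(people["alice"]))
print(identify(query))
```

The point of the sketch is only that obfuscating the face removes none of the identifying signal carried by the rest of the image; the paper's contribution is a far more robust version of this matching that tolerates pose and clothing variation.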