1 comment

  • m_2000 4 hours ago
    The original Fawkes project ( https://github.com/Shawn-Shan/fawkes ) came on my radar when I started working on adversarial examples in deep learning. Fawkes cloaks facial images by adding pixel-level perturbations to the original images that are barely visible to the human eye. This bypasses face-recognition systems (or at least weakens their confidence).

    I highly recommend reading the original paper, available from the developers' website ( https://sandlab.cs.uchicago.edu/fawkes/ ), to understand how it works. Generally speaking, Fawkes computes _cloaks_ by maximizing feature similarity to an unrelated target face while minimizing DSSIM (structural dissimilarity) against the original image.

    These cloaks are then applied to the original images, producing cloaked images. The cloaked images look as similar as possible to the originals, while their feature-space representations deviate as much as possible from them.
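    To make that objective concrete, here is a minimal PyTorch sketch of the optimization, not the project's actual TensorFlow code: `extractor`, `target_feat`, the budget value, the penalty weight, and the single-window DSSIM approximation are all illustrative assumptions on my part.

      import torch
      import torch.nn.functional as F

      def dssim(a, b):
          # Simplified single-window structural dissimilarity: (1 - SSIM) / 2.
          # (Assumption: the real tool uses a windowed SSIM computation.)
          mu_a, mu_b = a.mean(), b.mean()
          var_a, var_b = a.var(), b.var()
          cov = ((a - mu_a) * (b - mu_b)).mean()
          c1, c2 = 0.01 ** 2, 0.03 ** 2
          ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
              (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
          return (1 - ssim) / 2

      def compute_cloak(extractor, x, target_feat, budget=0.007, steps=500, lr=0.01):
          # x: original image tensor in [0, 1]; target_feat: feature vector of an
          # unrelated target face. Optimize a perturbation delta that pulls the
          # cloaked image's features toward the target while keeping DSSIM under
          # the (hypothetical) visibility budget.
          delta = torch.zeros_like(x, requires_grad=True)
          opt = torch.optim.Adam([delta], lr=lr)
          for _ in range(steps):
              cloaked = (x + delta).clamp(0, 1)
              feat_loss = F.mse_loss(extractor(cloaked), target_feat)  # feature match
              penalty = torch.relu(dssim(x, cloaked) - budget)         # visibility cap
              loss = feat_loss + 100.0 * penalty
              opt.zero_grad()
              loss.backward()
              opt.step()
          return (x + delta).clamp(0, 1).detach()  # the cloaked image

    In the real tool, the feature extractor is the bundled model mentioned at the end of this comment.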

    DISCLAIMER: I am not the developer of Fawkes; I merely developed a web interface for it. Big thanks to the original researchers from SAND Lab, Chicago: https://people.cs.uchicago.edu/%7Eravenben/publications/abst...

    Yes, the trained model is part of the repo, just in case you're irritated by its size (~300 MB).