How to confuse Artificial Intelligence

There was a lot of excitement last week about fooling AI image recognition by changing just one pixel,
as in the title image from arXiv:1710.08864 [one pixel really is about 0.1% of the pixels in those low-resolution images]

This post goes a bit further, questioning the summary statistics and underlying statistical assumptions such classifiers rely on:

A starting point is this animated GIF and the accompanying study, which generates very different graphs that share identical statistical characteristics:

Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing
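The core point can be shown with a tiny hand-made example (the numbers below are made up for illustration, not taken from the paper): two datasets with identical means and standard deviations can look completely different when plotted.

```python
import statistics

x = list(range(1, 8))

# Two toy y-series: the same seven values, in different orders.
y_line   = [1, 2, 3, 4, 5, 6, 7]   # plotted against x: a clean upward line
y_zigzag = [4, 1, 6, 3, 7, 2, 5]   # plotted against x: a jagged zigzag

for y in (y_line, y_zigzag):
    print(statistics.mean(y), statistics.pstdev(y))
# Both series report mean 4.0 and the same standard deviation, yet the
# (x, y) plots look nothing alike -- summary statistics alone cannot
# tell them apart.
```

The paper's simulated-annealing approach is far more general (it also preserves correlations while morphing the plot into, say, a dinosaur), but the lesson is the same: always look at the data, not just its summaries.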


However, what humans call common sense isn't just the fine-tuning of one algorithm; it is a comparison of different algorithms and sensory inputs that cross-check each other's results.

Or to put it another way: If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
If we only hear a quack, we can’t be sure.

One thought on “How to confuse Artificial Intelligence”

  1. On further investigation, Google actually published a paper on this back in 2015, showing for example that by applying only a 4% adjustment, this panda could be recognised as a gibbon…
    [Image: Panda to Gibbon]

    A good summary here.

    The one-pixel attack sounds new and sensational, but it relies on images with a very low number of pixels. On the other hand, many exercises do use images with very few pixels.
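The 2015 result mentioned in the comment works by nudging every pixel a tiny amount in the direction of the model's gradient (the fast gradient sign method). Here is a minimal sketch on a toy linear classifier; the weights, input, and budget are hypothetical numbers chosen for illustration, not from the paper.

```python
# For a linear score s = w . x, the gradient of s with respect to x is
# just w, so the strongest perturbation under a small per-pixel budget
# eps is eps * sign(w) -- every pixel moves only slightly, yet the
# classifier's score flips sign.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

w   = [0.5, -0.25, 0.1, -0.4]   # hypothetical trained weights
x   = [0.5, 0.1, 0.5, 0.1]      # hypothetical input "image"
eps = 0.3                       # small per-pixel budget

# Push each pixel slightly against the gradient to lower the score.
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(score(w, x), score(w, x_adv))  # positive before, negative after
```

No single pixel changes by more than eps, but because every pixel moves in the worst-case direction, the small changes add up across the whole input; this is why a barely visible adjustment can turn a panda into a gibbon.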

