There’s no guaranteed way to tell at a glance that an image is AI-generated. Certain anomalies can be big giveaways, but they aren’t foolproof: a human artist’s image can be every bit as asymmetrical and illogical, with hands every bit as questionable-looking, as one an AI has generated.
This isn’t to say, though, that technology hasn’t been created in turn to try to answer this question. Many AI images are created by means of Generative Adversarial Networks (GANs). Ian J. Goodfellow and his co-authors at the Université de Montréal introduced and explained the term in their 2014 paper “Generative Adversarial Nets.” “The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency,” Goodfellow wrote. “Both teams [attempt] to improve their methods until the counterfeits are indistinguishable from the genuine articles.”
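The counterfeiter-versus-police dynamic can be sketched in a toy one-dimensional GAN: a one-parameter “generator” tries to match a known “real” distribution while a logistic “discriminator” tries to tell the two apart. Everything here — the distributions, learning rate, and update rules — is an illustrative assumption, not code from the paper or from any tool discussed:

```python
import math
import random

random.seed(0)

def sigmoid(s: float) -> float:
    return 1.0 / (1.0 + math.exp(-s))

# "Real" data are samples near 4.0; the generator g(z) = theta + z
# starts far away (theta = 0) and must learn to counterfeit them.
# The discriminator D(x) = sigmoid(w*x + b) plays the police.
theta, w, b = 0.0, 0.0, 0.0
lr = 0.05

for _ in range(3000):
    x_real = random.gauss(4.0, 0.5)          # "genuine currency"
    x_fake = theta + random.gauss(0.0, 0.5)  # "counterfeit"

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = -(1.0 - d_real) * x_real + d_fake * x_fake
    grad_b = -(1.0 - d_real) + d_fake
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    x_fake = theta + random.gauss(0.0, 0.5)
    d_fake = sigmoid(w * x_fake + b)
    theta -= lr * (-(1.0 - d_fake) * w)  # d(x_fake)/d(theta) = 1

print(f"generator mean ~ {theta:.2f} (real mean is 4.0)")
```

After a few thousand alternating updates the generator’s output drifts toward the real distribution, at which point the discriminator can no longer separate the two — the “indistinguishable from the genuine articles” equilibrium Goodfellow describes.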
A GAN detector, then, is a means of determining whether such a model (and, by extension, AI generation) was used to create an image. Mayachitra offers a demo version of such a detector, though its results are not conclusive: users can query the origin of a small selection of provided images or upload their own, but the verdicts appear limited to “maybe GAN generated,” “probably GAN generated” and “probably not GAN generated.” Another option comes from Hive, which similarly offers a demo of its tool.
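A detector like this typically produces a raw confidence score that gets bucketed into coarse verdicts. As a purely hypothetical sketch using the three labels Mayachitra’s demo reports, one could threshold a score in [0, 1] — the 0.5 and 0.8 cutoffs below are invented for illustration, not the tool’s actual behavior:

```python
def gan_verdict(score: float) -> str:
    """Map a detector confidence score in [0, 1] to a coarse verdict.

    The labels come from Mayachitra's demo output; the 0.5/0.8
    thresholds are illustrative assumptions, not the real cutoffs.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= 0.8:
        return "probably GAN generated"
    if score >= 0.5:
        return "maybe GAN generated"
    return "probably not GAN generated"
```

The coarse buckets mirror an honest design choice: rather than reporting a false-precision percentage, the demo commits only to hedged verdicts.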