
Research-backed deep dive

Can recruiters detect AI headshots? What the research says


The recruiter glances at your photo for half a second. Your colleague stares at it for ten. That gap is the whole story. Buyers ask whether anyone will catch the AI headshot. The studies say the recruiter almost never does. The colleague often does.

Joshua Albanese · May 4, 2026 · 14 min read

The short version

The studies are blunter than the marketing copy on either side of this debate.

  • Untrained people classified AI faces correctly 48.2% of the time, worse than a coin flip (Nightingale and Farid, PNAS, 2022).
  • AI faces were rated 7.7% more trustworthy than real ones in the same study.
  • A 2024 survey of 1,087 recruiters found a 39.5% correct identification rate, with 80% thinking they were "accurate or very accurate."
  • The AI face passes the stranger test. It fails the friend test, because colleagues see your unposed face every week.
  • Aurawave's grading checklist is built around the specific tells the studies and the photo trade have named.


The short answer

The recruiter probably can't. Your colleague can. The AI face passes the stranger test and fails the friend test.

That is the post in one line. The studies back it. So does the recruiter survey data. So does the eye-tracking research on how long anyone actually looks at a profile photo before they move on. The trick is not pretending the gap doesn't exist. It is knowing which gap your photo has to cross.

"The recruiter probably can't. Your colleague can. The AI face passes the stranger test and fails the friend test."

What the research actually says

Four academic studies frame the question. Together they tell one story: humans are bad at this, even when they think they aren't.

Nightingale and Farid, PNAS, 2022. Three experiments, around 600 participants, 400 StyleGAN2 faces balanced for gender, race, and age. Untrained participants got it right 48.2% of the time. Trained participants with feedback only reached 59%. AI faces were rated 7.7% more trustworthy than real ones. The PNAS finding that humans cannot reliably distinguish AI faces is the foundation of every honest answer here.

Tree et al., Cognitive Research, 2025. Researchers at Swansea, Lincoln, and Ariel University tested whether familiarity helps. Participants were shown AI versions of celebrity faces, with and without reference photos. The improvement was modest. Even prior knowledge of the person produced limited gains.

Kramer et al., Applied Cognitive Psychology, 2024. Individual humans hovered near chance. Aggregating multiple human judgments pushed accuracy up. A team in a hiring committee, comparing notes on a candidate, gets closer to detection. A lone recruiter glancing once does not.

Dunn et al., British Journal of Psychology, 2025. Super-recognizers sit in the top fraction of a percent of the population for face memory. They were only modestly better than typical observers. The study's title named the giveaway: AI faces are too symmetric, too average. Most people, the study found, are overconfident about their detection skill.

The headline numbers do the heavy lifting. 48.2% accuracy for untrained viewers. 7.7% more trustworthy than real faces. Those two figures set the floor of every conversation about whether anyone can tell.

48.2%: accuracy rate for untrained humans classifying AI faces, per Nightingale and Farid, PNAS 2022. Worse than a coin flip.

The 30-second LinkedIn pass

A LinkedIn profile photo lives or dies inside a 30-second profile scan. Eye-tracking research on professional recruiters found that 19% of total profile-viewing time is spent fixating on the photo. The initial fixation runs under one second before the eye moves to job titles and the headline.

Industry coverage of the same recruiter-screening behavior reports that 86% of recruiters screen LinkedIn profiles within 30 seconds. The photo is a Gestalt impression: lighting, framing, symmetry. The recruiter is not auditing pore detail. The recruiter is checking that the photo reads as professional, posed, and recent.

That window is too short for the deliberate inspection the academic detection studies measured. The studies' subjects were given seconds, sometimes minutes, with the face. The recruiter does not get that. The recruiter is in motion.

Pair the eye-tracking number with the trustworthiness finding from PNAS. AI faces were rated 7.7% more trustworthy than real ones. The recruiter is making a Gestalt trust call in under a second. The studies say the AI face is winning that call by a small but real margin.

Why colleagues spot it when recruiters don't

Two things separate the colleague from the recruiter. One is time. One is baseline.

The colleague sees the new LinkedIn photo in a feed update and sits with it. They have a baseline, which is your face at the desk next to theirs. The recruiter has neither. A Benchmark Reviews recruiter forum thread named the test plainly: would coworkers recognize you at 8 a.m. on a bad hair day? AI photos optimize toward a posed-best version. The gap between "AI's best of you" and "Tuesday-morning of you" is what the colleague locks onto.

The Kramer et al. 2024 finding sharpens the point. Individual humans hovered near chance. Aggregating multiple judgments pushed accuracy up. A hiring committee comparing notes is closer to the crowd condition. A lone recruiter glancing once is the worst-case detector configuration.
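The aggregation effect is easy to make concrete. Here is a toy majority-vote simulation; the numbers are illustrative assumptions, not Kramer et al.'s data. Each judge is modeled as independently correct 55% of the time, slightly above chance.

```python
import random

random.seed(0)

def majority_vote_accuracy(p_correct: float, n_judges: int, trials: int = 100_000) -> float:
    """Fraction of trials in which a majority of independent judges,
    each correct with probability p_correct, reaches the right call."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_judges))
        if correct_votes > n_judges / 2:
            wins += 1
    return wins / trials

for n in (1, 3, 7, 15):  # odd panel sizes avoid ties
    print(n, round(majority_vote_accuracy(0.55, n), 3))
# 1 judge: ~0.55. 15 judges: ~0.65. Uncorrelated errors wash out.
```

The caveat is built into the assumption: voting only helps when individual judges are above chance and their errors are uncorrelated. A committee whose members all miss the same tells gains nothing from comparing notes.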

Joshua's working observation, from 20 years of headshot direction: real headshots intentionally preserve some asymmetry, fatigue, and texture. The photographer's job is to make the photo still resemble the human. The AI's default is to smooth toward an average. Average reads wrong to anyone who knows the original.

The specific tells AI photos give off

When AI fails, it fails in patterned ways. The academic literature, the recruiter forums, and the 1-star competitor reviews name the same tells. Each one is craft-attributable.

Plastic skin, over-smoothed texture. The Dunn et al. 2025 study described AI faces as "too average." The customer language is more direct. A Substack first-person tester called the result "a wax figure of yourself that's 95% accurate but somehow makes you question your own face." Pore detail, freckles, and micro-shadows go missing. The skin reads as a print of skin, not skin.

Dead eyes and wrong specular highlights. The catchlight is the reflection of the light source in the eye. It is the first place the brain reads "alive." AI generators routinely drop catchlights or place them asymmetrically. One HeadshotPro 1-star reviewer wrote, "the skin texture is still a plastic version of skin... computers have no soul." The eyes are where viewers feel that.

Mangled teeth, asymmetric ears. Generators do well on the central face and poorly on its edges. Teeth blur. One ear runs higher than the other in ways a real face does not. These are the GAN-artifact tells the academic detection literature has tracked since 2022.

Head plopped on a body. The neckline, jaw, and shoulder fit is where the structural failure shows up. An Aragon 1-star reviewer named it. "It was clear it was AI as my head was just plopped on." The collar does not align with the jaw. The neck is the wrong length. Gravity does not read as gravity.

Background and foreground inconsistency. The studio backdrop is too clean. The light direction on the face does not match the light direction on the shoulders. The hair edge softens into the background instead of sitting in front of it.

Identity drift. This is the dominant complaint pattern in the competitor 1-star reviews. Hair color shifts. Age shifts. Skin tone shifts. Face shape shifts. Ben Adams, an Aragon customer, wrote that "my employer couldn't use any of them." The photo no longer looks like the person it claims to be.

Too symmetric, too average. Real faces have asymmetry. Eyebrow heights differ slightly. The jaw is not perfectly even. The Dunn et al. 2025 study named this directly. AI tools that smooth both eyebrows to the same height are the ones a familiar viewer spots first.
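The symmetry tell is the easiest one to score mechanically. The sketch below is a toy asymmetry metric, not a production detector: it mirrors a grayscale face crop and measures how much the two halves disagree. The idea that low scores flag AI-style smoothing is an illustrative assumption.

```python
import numpy as np

def asymmetry_score(face: np.ndarray) -> float:
    """Mean absolute difference between a grayscale face crop and its
    mirror image, normalized to [0, 1]. Real faces keep some asymmetry;
    faces smoothed toward an average score lower."""
    mirrored = face[:, ::-1].astype(float)
    return float(np.mean(np.abs(face.astype(float) - mirrored)) / 255.0)

# Toy usage on synthetic data, not a real photo.
rng = np.random.default_rng(0)
crop = rng.integers(0, 256, size=(128, 128))
symmetrized = (crop + crop[:, ::-1]) / 2          # perfectly mirror-symmetric
print(asymmetry_score(crop))                      # noisy crop: high score
print(asymmetry_score(symmetrized))               # symmetrized crop: 0.0
```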

What Aurawave does about each tell

Joshua designed the grading checklist before a line of code shipped. The Intelligence Engine encodes a working photographer's curation step. It generates more candidates than the order, grades each one, deletes the failures, and regenerates until the delivered set passes.

Texture preservation. The grade keeps pore detail, freckles, and the micro-shadows that read as skin. Joshua's check: if the surface looks like a print, kill it.

Catchlight check. Both eyes carry consistent, naturally placed highlights. Joshua's rubric reads catchlights at roughly the same clock position, soft-edged, not glassy. AI outputs that drop a catchlight or place them asymmetrically fail the check.

Identity and anatomy grade. Teeth, ears, and jaw geometry are checked against the reference photos. Outputs with a drifted ear-line or a mangled smile do not ship.

Ear-line and jaw separation. This is where most generators fail structurally. A real head sits on a real neck with the jaw clearing the collar at a specific angle. Joshua's check looks at the ear-line first because it is the earliest signal of head-on-body drift.

Fill ratio and environmental separation. Side-light to fill should match the apparent room. Over-filled lighting kills dimensionality and produces the wax-figure effect. The grade checks the light's behavior on the face against the light's behavior on the shoulders and background.

Identity preservation. Same hair color, skin tone, age, and face shape as the reference photos. The dominant 1-star complaint pattern across the competitor set is identity drift, and the grade kills any output that drifts.
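Read as a workflow, the checklist above is a filter inside a loop: the generate, grade, regenerate pattern described at the top of this section. The sketch below is a minimal illustration of that pattern, not Aurawave's implementation; the function names, the check tuple, and the 0.8 threshold are all hypothetical.

```python
from dataclasses import dataclass, field

CHECKS = ("texture", "catchlight", "anatomy", "ear_line", "fill_ratio", "identity")
PASS_THRESHOLD = 0.8  # illustrative cutoff, not a published spec

@dataclass
class Candidate:
    image_id: str
    scores: dict = field(default_factory=dict)  # hypothetical per-check scores in [0, 1]

def passes_grade(c: Candidate) -> bool:
    """A candidate ships only if every check clears the bar."""
    return all(c.scores.get(check, 0.0) >= PASS_THRESHOLD for check in CHECKS)

def graded_set(generate_batch, order_size: int, max_rounds: int = 10) -> list:
    """Over-generate, keep only passing candidates, and loop until the
    delivered set is full or the round budget runs out."""
    delivered = []
    for _ in range(max_rounds):
        batch = generate_batch(order_size * 3)  # generate more than the order
        delivered.extend(c for c in batch if passes_grade(c))
        if len(delivered) >= order_size:
            return delivered[:order_size]
    return delivered  # an under-filled set signals a generation problem
```

The structural point is the deletion step. A tool that ships every output leaves the failure cases in the set. A tool that grades and regenerates only ships what passed.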

"Other tools show you their work. Aurawave shows you their graded work."

The recruiter behavior data

The strongest evidence that recruiters specifically cannot tell comes from a June 2024 Ringover survey of 1,087 recruiters. The survey is the post's structural punchline.

Recruiters were shown a mixed set of AI and real headshots. 76.5% preferred the AI versions when they did not know which were AI. When asked to spot the AI photos, the correct identification rate was 39.5%, worse than chance for a binary task. The overconfidence gap was the headline: 80% of the same recruiters thought they had been "accurate or very accurate" at spotting AI. The PetaPixel coverage of the Ringover recruiter survey reports both numbers.
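As a sanity check on "worse than chance": treating the survey as 1,087 independent binary calls is a simplifying assumption, since the survey's actual design likely showed each recruiter several photos. Under that assumption, a one-sided binomial test confirms the gap is not noise.

```python
from scipy.stats import binomtest

n = 1087              # recruiters surveyed
k = round(0.395 * n)  # ~429 correct identifications under our one-call assumption
result = binomtest(k, n, p=0.5, alternative="less")
print(f"{k}/{n} correct, p-value vs. chance: {result.pvalue:.1e}")
# p << 0.001: on this simplified model, 39.5% is reliably below a coin flip.
```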

A second finding in the same survey matters for the ethics frame. 66% of recruiters said they would be put off by a candidate once they realized the photo was AI. 88% said candidates should be required to disclose AI-generated photos. Recruiters cannot catch it on first pass. They say they care about it anyway.

The Ringover survey also separated tool tiers. Top-tier AI generators fooled recruiters 60% of the time. Free generators were detected 58.9% of the time. The tool matters. The studies do not say all AI headshots pass. They say good ones pass and bad ones fail.

A Benchmark Reviews recruiter, Mike34, named the deal-breakers in customer language. "Hard no, plastic skin, glowing eyes, fantasy backgrounds, extreme age reduction." The recruiter cannot detect AI on first pass at scale. The recruiter can absolutely detect a bad AI tool's output. The two facts are not in conflict. The first describes the average case. The second describes the failure case the bad tools ship as standard.

Where this might change

Algorithmic detection is improving. Tools like Hive, Reality Defender, and Sensity report high accuracy on AI faces from models they were trained on. Independent benchmarks from 2024 put Hive's image detection at around 98% accuracy on its training distribution. The UNSW research on overconfidence in spotting AI faces frames the human side bluntly. People think they are good at this. They are not.

The catch is generalization. Detectors trained on one model's output drop sharply on faces from a model they were not trained on. The arms race is real. A recruiter is not running a detector on every candidate. Even if they did, the detector is in a cat-and-mouse loop with the next generator.
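A toy model of that drop, with made-up numbers: a threshold detector tuned on one generator's artifact scores loses most of its edge when a newer generator shifts the score distribution toward the real photos.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-dimensional "artifact scores"; higher reads more AI-like.
real  = rng.normal(0.0, 1.0, 10_000)  # real photos
gen_a = rng.normal(2.0, 1.0, 10_000)  # the generator the detector was tuned on
gen_b = rng.normal(0.7, 1.0, 10_000)  # a newer generator with fewer artifacts

THRESHOLD = 1.0  # tuned to split real from gen_a

def balanced_accuracy(ai_scores: np.ndarray) -> float:
    caught = np.mean(ai_scores > THRESHOLD)  # AI flagged as AI
    passed = np.mean(real <= THRESHOLD)      # real left alone
    return float((caught + passed) / 2)

print(f"in-distribution:     {balanced_accuracy(gen_a):.2f}")  # ~0.84
print(f"out-of-distribution: {balanced_accuracy(gen_b):.2f}")  # ~0.61
```

Nothing about the detector changed between the two lines. Only the generator did. That is the cat-and-mouse loop in one number.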

The Tree et al. 2025 study added an honest caveat. Even familiar viewers can be fooled by current 2025-era models. Familiarity helps. It does not close the gap. Aurawave's grading workflow narrows the gap. It does not claim to close it. The honest answer is that the human side of detection is unlikely to improve much. The algorithmic side will get better, but it will always lag the latest generators by a release.

The disclosure question is the other moving part. The 88% who said candidates should disclose AI use are signaling a norm in formation. A graded headshot that looks like you on your best day is closer to a flattering studio shot than a fake persona. The NPR coverage of AI-generated faces on LinkedIn reports a Stanford finding of 1,000+ fake profiles. That story is about deception. Professional polish is not the same thing.

What this means practically

The bad-hair-day test is the practical version of all of the above. The photo has to look like the person who shows up to the meeting. If a colleague who sees you every week reads the photo as you, the recruiter will read it the same way. Probably better.

Aurawave's strongest customer signal is the colleague test passing. Jeremy Bengtson, a 5-star Trustpilot reviewer, wrote: "former colleagues messaged asking when I had a professional shoot done." That is the outcome the grading checklist is built for. If the photo passes the people who know you, it will pass the people who don't.

The buying decision is downstream of one question. Will this photo survive the half-second glance? The studies say the average AI photo, from a competent tool, will. The follow-up question is whether it will survive the people who know your face. That is the harder bar, and the grading workflow is what either clears it or doesn't.

About the author

Joshua designed Aurawave's grading checklist after 20 years of reading light on real faces in his Chicago and Fort Myers studios. The catchlight, fill-ratio, and ear-line checks named in this post are the same checks he runs on every studio shoot.

Joshua Albanese is the founder of Aurawave AI. He spent 20+ years as a working professional headshot photographer. He founded a top-10 US headshot studio in Chicago and has delivered 15,000+ studio sessions and 3M+ images across his career.

Joshua Albanese, founder, Aurawave AI


Try the photo that passes the colleague test

The grading checklist is the product. We deliver the photos that passed. If the photo doesn't look like you, we redo it.

See sample headshots · Read the next post


Ready to try it?

Upload 10 selfies. Get a hand-picked set of professional AI headshots in under 90 minutes. Every photo graded.