The original Princeton paper, the 76% number every AI headshot company quotes, and the LinkedIn data that closes the loop. Three sources, one story, and what each one actually says about your face on a profile.
By Joshua Albanese, founder, Aurawave AI · May 4, 2026 · 11-minute read.
The short version
Five things to take with you, each tied to its real source.
- A face is read in 100 milliseconds, per Willis and Todorov 2006 in Psychological Science.
- Competence judged from a photo predicted 71.6% of US Senate races, per Todorov 2005 in Science.
- A 2019 industry study from HeadShots Inc found a 76% lift in perceived competence after a professional headshot.
- LinkedIn's own platform data shows profiles with photos get 21x more views and 9x more connection requests.
- The research describes perception, not reality. Aurawave's grading checklist is built around that distinction.
In this post
- What the Princeton research actually measured
- The 100-millisecond verdict
- Why competence is the trait that decides things
- The 76% number you have seen quoted everywhere
- What this means on LinkedIn
- The limits the research itself acknowledges
- What a competence-cued photo looks like in practice
- Why an AI headshot can pass the 100-millisecond test
- How Aurawave's grading checklist operationalizes the research
- What this means for buyers
- About the author
What the Princeton research actually measured
The cleanest study on first impressions from faces is Willis and Todorov, published in 2006 in Psychological Science. The paper is open access: Willis and Todorov 2006, First Impressions: Making Up Your Mind After a 100-Ms Exposure to a Face, DOI 10.1111/j.1467-9280.2006.01750.x.
The setup was simple. About 200 Princeton students looked at faces. Each face appeared for 100 milliseconds, 500 milliseconds, or 1 second.
Participants rated five traits per face: attractiveness, likability, trustworthiness, competence, and aggressiveness. A separate group then rated the same faces with no time pressure.
The researchers compared the snap judgments to the unhurried ones. The question was whether the brain finishes its read before the conscious mind catches up.
The answer was yes. Across all five traits, the 100-millisecond ratings tracked the unrushed ratings closely. Trustworthiness produced the tightest correlation. Competence was close behind.
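To make "tracked closely" concrete: the paper's core comparison boils down to a correlation between the time-limited ratings and the no-time-limit ratings of the same faces. Here is a minimal sketch with invented numbers; the real per-face data lives in the paper, not in this post.

```python
# Sketch of the paper's core comparison: do 100 ms ratings agree with
# unhurried ratings of the same faces? The ratings below are invented
# for illustration; see Willis and Todorov 2006 for the real data.
import numpy as np

# Hypothetical mean competence ratings for ten faces.
snap_100ms = np.array([3.1, 4.2, 2.8, 5.0, 3.9, 4.5, 2.5, 3.3, 4.8, 3.6])
unhurried  = np.array([3.0, 4.4, 2.6, 5.2, 4.0, 4.3, 2.7, 3.1, 5.0, 3.8])

# A correlation near 1.0 means the snap judgment and the unhurried
# judgment rank the same faces the same way.
r = np.corrcoef(snap_100ms, unhurried)[0, 1]
print(f"correlation: {r:.2f}")
```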
The 100-millisecond verdict
A face is read in a hundred milliseconds, and the read does not change when the recruiter stares longer. It only gets more confident.
That is the load-bearing finding. Snap judgments and unrushed judgments lined up across all five traits. More viewing time did not change the verdict, only the conviction behind it.
Trustworthiness was the fastest trait to assess. Competence tracked close behind.
"If given more time, people's fundamental judgment about faces did not change. Observers simply became more confident in their judgments as the duration lengthened." (Alexander Todorov, Princeton, press release)
The press release is the cleanest journalistic anchor for what Princeton itself said the paper showed.
Why competence is the trait that decides things
A year before the 100 millisecond paper, Todorov and three coauthors published a different study in Science. Subjects rated competence from US congressional candidate headshots, with no other information.
The numbers were striking. Competence judged from a face predicted 71.6% of Senate races and 66.8% of House races. The full study is Todorov et al. 2005, Inferences of Competence from Faces Predict Election Outcomes, Science, DOI 10.1126/science.1110589.
Other traits did not predict outcomes. Not attractiveness, not trustworthiness, not charisma, not likability. Only competence.
A 2007 follow-up in PNAS extended the result to gubernatorial races. The same competence cue still worked at 100-millisecond exposures, so a snap read of a face still predicted who won an election.
Todorov said the findings surprised him. He did not believe them at first, per the Princeton press summary of the 2005 paper. Inferences about other people, he wrote, are often automatic. They can happen outside conscious awareness.
The implication for a working professional is direct. Whatever the recruiter, client, or patient is going to decide about you from a photo, they are going to decide quickly. The photo carries the verdict before the resume gets opened.
The 76% number you have seen quoted everywhere
This is where the citation honesty matters. The 76% statistic is real. It is not from Princeton.
It comes from a 2019 industry study run by HeadShots Inc on the photofeeler.com rating platform. Sample size was 243 participants over four months. Subjects rated headshots before and after a professional shoot.
The headline result was a 75.93% average lift in perceived competence. There was also a 62.03% lift in perceived influence and a 9.7% lift in likability. Some subjects swung as high as 115%.
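For clarity on what a "lift" means here: it is the relative change between the average rating before and after the shoot. A sketch with invented before/after values follows; the study reports the percentage, not the underlying means, so these numbers are placeholders.

```python
# What a "75.93% lift" means mechanically: relative change in the
# average rating. The before/after values are invented placeholders;
# the HeadShots Inc study reports only the percentage.
before_avg = 4.1   # hypothetical mean competence rating, old photo
after_avg  = 7.2   # hypothetical mean competence rating, pro headshot

lift = (after_avg - before_avg) / before_avg * 100
print(f"perceived-competence lift: {lift:.1f}%")  # ~75.6% with these stand-ins
```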
We treat this as industry data, not academic data. It is not peer reviewed, and the platform's business model depends on people caring about portrait quality. Useful for marketing claims, useful as corroboration, but not the same authority level as a Princeton paper.
76% more competent: the lift in perceived competence after a professional headshot, per the 2019 HeadShots Inc industry study run on photofeeler.com (n=243). Industry data, not peer reviewed.
The number is cite-worthy. The source belongs in the citation.
What this means on LinkedIn
LinkedIn publishes its own profile photo statistics. Members with a photo get 21x more profile views and 9x more connection requests than members without one. The source is the LinkedIn profile photo guidance from Talent Solutions, published March 14, 2017.
This is platform data, not academic research. LinkedIn measured its own users, then published the numbers in a marketing post.
We treat it the same way. The 21x and 9x are real, the source is named, the framing stays honest.
The combined picture is what the post is actually about. Princeton tells us how fast the face is read. The election paper tells us competence is the trait that drives decisions. The industry study quantifies the lift from a polished portrait. LinkedIn shows what that lift converts into on the profile that recruiters actually see.
The limits the research itself acknowledges
A research-honest post says where the research falls short. The 2006 paper used about 200 Princeton undergraduates in lab conditions. That is a narrow sample.
A high correlation between a snap judgment and an unrushed one is not proof the judgment is correct. Confident first impressions can be confidently wrong. The paper measures consensus, not accuracy.
Todorov himself addressed this in a 2015 review in Annual Review of Psychology. He argued that the diagnostic accuracy of face-based social judgments has been overstated in popular coverage. Faces predict perception, not always reality.
The election outcomes paper has its own caveats. The candidates were US politicians from races between 2000 and 2004. Cultural and demographic factors carried over from the campaigns. The 2007 follow-up addressed some of those. The paper is still not a universal law of how all faces are judged everywhere.
The HeadShots Inc 2019 study is even narrower. A 243-person sample, on a paid rating platform, run by a vendor whose business depends on the result. We cite it because the number is real and the methodology is documented, not because it is peer reviewed.
We agree with the cautious framing. The research doesn't say your headshot has to be accurate. It says it has to be readable. Aurawave grades for readability.
What a competence-cued photo looks like in practice
This is where the craft side of the post lives. Joshua spent twenty years figuring out which catchlight position reads as competent. The grading checklist is what he wrote down.
"Joshua spent twenty years figuring out which catchlight position reads as competent. The grading checklist is what he wrote down."
Joshua's read on the research lines up with the rubric. Catchlight in the eye signals attention and presence. The brain reads the absence of one as off before it can name what is wrong.
Fill ratio shapes the face. Too flat and the photo reads as a snapshot. Too contrasted and it reads as severe.
Ear line and jaw separation control the angle. A clear jaw line and ears not collapsed against the head: that is what a confident face looks like in 100 milliseconds.
Joshua's take is straightforward. When a working photographer reads a face for catchlight first, that is not aesthetic preference. It is the craft version of the competence cue the research describes.
Why an AI headshot can pass the 100-millisecond test
Here is the bridge from theory to product. A 2022 PNAS study by Nightingale and Farid found that AI-generated faces were rated 7.7% more trustworthy than real faces. The sample was 223 participants rating on a 1 to 7 scale.
Real versus AI classification accuracy in the same study landed at 48.2%, which is at chance. People could not reliably tell synthetic from real.
That does not mean every AI headshot reads as trustworthy. It means a synthetic face can land on the trustworthy side of the distribution if it carries the right cues.
The operationalization question becomes obvious. What does the synthetic face have to look like to land on the trustworthy side instead of the uncanny one?
How Aurawave's grading checklist operationalizes the research
Aurawave's Intelligence Engine grades every output before it ships. The rubric is the photographer's checklist, encoded.
Catchlight position, fill ratio, ear line, jaw separation. Every candidate image is scored against those cues. Failures are deleted. Replacements regenerate until 25 photos all pass.
The customer never sees the rejects. Other tools ship 40 or 100 photos that mix keepers with failures. We ship the curated set.
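As a sketch of that regenerate-until-pass loop: every name below (generate_candidate, score_image, CUE_THRESHOLDS) is a hypothetical stand-in, and the thresholds are made up. This illustrates the behavior described above, not Aurawave's actual engine or API.

```python
# Illustrative grade-and-regenerate loop. All names and thresholds are
# hypothetical stand-ins, not Aurawave's actual engine.
import random

CUE_THRESHOLDS = {
    "catchlight_position": 0.7,  # catchlight present and well placed
    "fill_ratio": 0.6,           # shadow-to-highlight balance on the face
    "ear_line": 0.6,             # ears visible, not collapsed against the head
    "jaw_separation": 0.6,       # clear jaw-to-neck separation
}

def generate_candidate() -> str:
    """Stand-in for the image generator; returns a fake image id."""
    return f"img_{random.randrange(10**6)}"

def score_image(image: str) -> dict:
    """Stand-in for the cue scorer; real scoring would inspect pixels."""
    return {cue: random.random() for cue in CUE_THRESHOLDS}

def passes_rubric(scores: dict) -> bool:
    """An image ships only if every cue clears its threshold."""
    return all(scores[cue] >= t for cue, t in CUE_THRESHOLDS.items())

def curate(target: int = 25, max_attempts: int = 2000) -> list:
    """Regenerate until `target` images all pass; failures are dropped."""
    keepers = []
    while len(keepers) < target and max_attempts > 0:
        max_attempts -= 1
        image = generate_candidate()
        if passes_rubric(score_image(image)):
            keepers.append(image)  # the customer only ever sees these
    return keepers

print(len(curate()))  # 25 passing images; rejects never surface
```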
"The research doesn't say your headshot has to be accurate. It says it has to be readable. Aurawave grades for readability."
That is the line the whole post turns on. Perception research describes how a face is judged. Our engine kills the photos that fail that judgment so you never ship one.
What this means for buyers
If you are refreshing your LinkedIn before a job hunt, the photo is doing more work than you think. The recruiter is not looking long. They are looking briefly and then moving on.
If you are a lawyer, doctor, or realtor, the photo is part of how clients decide. It earns the call before the consultation. Avvo, Healthgrades, Zillow: the same rule applies.
If you are on a dating app, the same research applies, just with different downstream stakes. The first frame decides the swipe.
The research is the floor, not the ceiling. A readable face is the price of entry. After that, the rest of your profile takes over.
The buyer takeaway is simple. The 76% number is real. The Princeton 100-millisecond paper is real. The 21x LinkedIn lift is real. They come from three different places, and all three describe the same effect from different angles. Your headshot is doing more work than your resume in the seconds before the resume gets read.
That work is finished before the recruiter knows it has started. The only question is whether your photo earns the conviction or contradicts it.
About the author
Joshua read the original Princeton papers twice for this post. He tested the findings against fifteen years of his own studio sittings. The grading rubric Aurawave runs is the one he wrote in his Chicago studio, long before any AI was involved.
Joshua Albanese, founder, Aurawave AI
Read next
Three more on this topic.
- Can recruiters detect AI headshots? What the colleague at the next desk sees that the recruiter probably does not. (/blog/can-recruiters-detect-ai-headshots)
- The ethics of AI headshots, from a working photographer. Joshua's first person take on the question his clients ask in his studio every week. (/blog/ethics-of-ai-headshots)
- LinkedIn headshots, built for the recruiter who is looking for two seconds. The vertical page for the LinkedIn refresh before that Tuesday call. (/linkedin-headshots)
Build the headshot the research describes
The Princeton work tells you how fast the face is judged. Our engine deletes the photos that fail that judgment before you ever see them.