> His agnostic leanings and heterosexual orientation align with a more liberal political stance.
That's a very safe guess
> He is susceptible to confirmation bias, in-group bias, the availability heuristic and out-group homogeneity... he may also be prone to excessive phone use, binge-watching TV shows, and impulse buying.
I'd be surprised if that was required. OpenAI's o3 was already professional-human-level at guessing location from a photo, so unless the companies intentionally stopped training on those datasets, modern models should be too.
Felt quite off for me: Wrong salary guess, wrong food preferences, wrong political affiliation, partially correct hobbies, wrong ad targeting ideas. What was surprisingly accurate: location and the fact that I (an academic male in his thirties photographed close to the mountains) might like hiking and coffee.
If you knew which bike model I was googling yesterday, almost all of these guesses might have been more accurate.
"The person seems to have low self-esteem, displays introversion, poor honesty, low emotional stability, very little adventurousness and poor self-control hence we can target them with both niche and common products and services."
Amusing to me how wrong this is... I don't know how you can determine such characteristics from a photo in any direction. I will admit that my appearance tends to throw mixed and incorrect signals (not an accident). I find the entire concept of appearance signaling pretty off-putting, so I guess this is a great result.
The only things Google Lens has succeeded at for me are age, race, and location. Basically everything else has been very wrong.
By mistake I uploaded the same picture twice, and while both results were vague, the descriptions and ”data” were wildly different.
E.g. first time I was an extrovert, second time introvert… About the only thing that stayed the same was ”heterosexual”, but that’s a statistically safe guess.
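A toy sketch (purely illustrative; this is not the site's actual code, and the base rates below are made up) of why two runs on the same photo can disagree: if the generator just samples traits from demographic priors, a heavily skewed trait like "heterosexual" stays stable across runs, while a near 50/50 trait like introvert/extrovert flips freely.

```python
import random

# Hypothetical base rates; illustrative only, not the site's actual priors.
TRAIT_PRIORS = {
    "orientation": {"heterosexual": 0.9, "other": 0.1},
    "temperament": {"introvert": 0.5, "extrovert": 0.5},
}

def profile(seed=None):
    """Sample one 'profile' by drawing each trait from its base rate."""
    rng = random.Random(seed)
    return {
        trait: rng.choices(list(opts), weights=list(opts.values()))[0]
        for trait, opts in TRAIT_PRIORS.items()
    }

# Two unseeded runs can easily disagree on temperament (a coin flip),
# while both will usually report "heterosexual" (a 90% prior) -- the
# "only thing that stayed the same" effect described above.
```

Run `profile()` twice and the near 50/50 traits flip freely, while the skewed ones almost never do.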
I tried a photo of myself. Not only did it get virtually everything (country, beliefs, politics, interests) wrong, it also seems like an offensive and inappropriate stereotype as a service:
> Based on his demographic and location, he may adhere to Hinduism, and likely identifies as heterosexual. Considering the socio-political landscape of India, he might lean towards the Bharatiya Janata Party. His biases could include ageism and elitism, along with casteism and colorism. He seems contemplative and calm. He is wearing a grey t-shirt and sunglasses. His interests might span reading, travelling, and exercising, but on the darker side, he may exhibit road rage, neglect family, and overeat.
This is what it said for me (my GitHub profile pic, about ten years old, maybe more, but I still look about the same):
The image features a middle-aged man, likely in his mid-50s, standing against a plain dark backdrop. He is wearing a collared shirt and glasses, which suggest a professional or intellectual persona. The lack of any discernible location implies a controlled environment like a studio.
The man appears to be Caucasian. His estimated income range falls between $70,000 and $120,000. It is speculated that he is agnostic, heterosexual, and possibly aligned with an independent political stance. His biases might include confirmation bias and in-group bias, alongside racial prejudices such as in-group and out-group bias. His interests potentially include constructive activities like reading and hiking, as well as darker pastimes such as compulsive gambling. Overall, he seems calm, but hints of hidden emotions lurk behind his composed demeanor.
This man seems to possess emotional stability mixed with introversion, leading him to value experiences that allow introspection. Therefore, we can target him with products that offer both intellectual stimulation and a sense of control, such as genealogy kits by AncestryDNA, premium coffee subscriptions by Blue Bottle Coffee, classical music streaming services by IDAGIO, handcrafted whiskey by WhistlePig, online poker platforms by PokerStars, self-help books by The Subtle Art of Not Giving a F*ck, high-end audio equipment by Sonos, and ergonomic office chairs by Herman Miller.
They may have problems selling me whiskey and poker (but, to be fair, the picture doesn't indicate that). They may want to rethink "high-end audio" if they think I'm older. My ears really don't care as much as they used to.
Most comments seem to focus on the accuracy here. That does not seem to be the point of the website though.
They merely seem to want to point out that this is the way Google, Meta, and anyone else with access to your photos look at them, and that they will abuse that access by mining the photos for data to sell you stuff.
I uploaded a picture of Philip Seymour Hoffman from The Big Lebowski, the scene outside Lebowski's house, when the Dude is talking to Bunny by the pool.
"This image shows an adult man, likely in his 40s or 50s, wearing a suit and tie. The location appears to be outdoors, possibly in front of a building or large house, suggested by the architectural details visible in the background and the surrounding trees. He is wearing glasses, and seems to be smiling widely, creating a sense of approachability.
This man is likely Caucasian, and could be earning between $100,000 and $500,000 per year. He is likely Christian, probably heterosexual, and leaning toward the Republican party."
It said PSH had an "ageism" bias, and it said the same about me. It also said he and I have a proclivity for gambling and a poor diet, lol.
Hmm, this seems to just identify the features of the person in the picture and then extrapolate generic demographic information from that, mostly from race. Is it doing more?
It's hard to know what to make of this. It feels like it's listing stereotypes and superficial guesses. The tool accurately detected my age, my ethnicity and my location. Then it just kind of "vibed out" a bunch of things. Some of those vibes are strangely accurate on their own, but taken together the set of guesses is laughably inaccurate.
It would be interesting to do something similar with a series of photos. You could maybe interface with a user's photo library and select photos grouped by facial recognition. After all, none of these tracking companies are using just one data point.
Completely wrong on the political side (maybe because I'm not US-based), but otherwise not bad at all:
- astonishing geoguessing
- very good inference of some character traits
- and finally quite good ad targeting
EDIT: I tried with a few photos (different people in various settings) and each time I got this: "racial bias towards immigrants" - which was always very false. Intriguing.
EDIT2: different photos of the same person (me) in different settings give many totally opposed characteristics. Very unreliable, but I guess with several photos (a lifetime's photos in the case of Google) it's another story.
This is just a stereotype machine. It incorrectly identified me as Christian, Republican, and earning $50-75K, all of which are far off base, and apparently just because I’m white and wearing khakis.
Picture of me, outdoors. According to this, I lost 10 years and gained €10k in salary. I also moved to the Netherlands, started smoking, don't like computer games, and am racist. I might be interested in pottery and painting, and also in unhealthy food. The main thing it got right is hetero male. It will try to sell me garden gnomes.
I have no idea who this man is, and am not worried at all for my privacy. Maybe it works better on US inhabitants?
Everyone is saying how awful and terrible this is, but I thought it was quite fascinating.
I showed it some pictures from when I was in a bad headspace, and it successfully associated me with introversion, procrastination, and isolation. For one picture it said an interest of mine was stealing, which was accurate at that time in my youth.
The tech exists and has existed forever; as creepy as it is, I'd rather it be public and accessible than not.
I uploaded a picture of a board game table from when I was playing with friends last week. Other than the obvious (they’re friends who like playing board games), it got literally everything wrong. The photo was taken in Asia and it guessed a western city. The ethnicity, sexuality, median income, and political views it ascribed to us were all hilariously inaccurate.
I don't know much about the Google Vision API that it claims to use, but
uploading the same passport photo of mine twice produced wildly different results in the "data" tab.
There are fields like interests, income, and biases/prejudices, which vary the most, so I assume that's just the site pulling things from its own database of racist stereotypes?
I uploaded a picture, maybe a difficult one. It described somebody both like me and like the opposite of me. Every single commercial suggestion was for something I have never bought in my life. I got the feeling of those weather forecasts that show an icon of cloud, sun, and rain for the same location on the same day.
Largely incorrect. It described me as (oddly enough) a Green party voter and misidentified my home office décor (describing it as generic and safe when it is anything but). The things it got right it could have gotten from EXIF data or from just being vague.
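For what it's worth, pulling a location out of EXIF needs no AI at all: EXIF stores GPS coordinates as degrees/minutes/seconds plus a hemisphere reference. A minimal sketch (assuming Pillow is installed; the function names here are my own, and the file path would be whatever photo you upload):

```python
def dms_to_decimal(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) plus N/S/E/W ref to decimal degrees."""
    degrees = dms[0] + dms[1] / 60 + dms[2] / 3600
    return -degrees if ref in ("S", "W") else degrees

def gps_from_photo(path):
    """Return (lat, lon) from a photo's EXIF GPS sub-IFD, or None if absent."""
    from PIL import Image, ExifTags  # lazy import; assumes Pillow >= 9.3
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(ExifTags.IFD.GPSInfo)
    if not gps:
        return None
    lat = dms_to_decimal(gps[ExifTags.GPS.GPSLatitude], gps[ExifTags.GPS.GPSLatitudeRef])
    lon = dms_to_decimal(gps[ExifTags.GPS.GPSLongitude], gps[ExifTags.GPS.GPSLongitudeRef])
    return lat, lon

# E.g. a photo tagged 59°54'27"N, 10°45'12"E decodes to roughly
# (59.9075, 10.7533) -- the Oslo Opera House mentioned elsewhere in the thread.
```

If a site "guesses" your exact location this precisely, a plain tag read like this is the more likely explanation than any visual reasoning.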
Uploaded my face pics: the data has 13 fields, and 12 were incorrect (it only correctly guessed the emotion on the face and some objects in the picture). That was surprising; no photo app has ever missed my age by ±20 years before. Pretty useless demo imo.
Is this one of those things like the old Facebook games where people were answering personal questions that can be used to guess all kind of secret answers for taking over accounts? Are people actually uploading real photos of themselves here?
Very generic. Shows the same obvious info for two completely different people. "interests in tech... struggles with procrastination...".
Also funny - red shirt means you are Labor (in Australia).
Ente Photos competes with Apple Photos and Google Photos. It is also more open, and you can share an album link with someone and they can add their photos without signing up.
The execution may have a few blind spots and mistakes, but I get the idea behind the message. I’m sure the big companies have a lot more data and ways to do this better, too.
About as accurate as a horoscope. Sherlock Holmes it is not.
It got the location (EXIF, I guess) and was able to identify that I was a balding, mediocre, middle-aged guy, but the more specific it got, the more wrong (and insulting) it was.
"He appears tired and introspective. He may exhibit biases such as confirmation bias, anchoring bias, in-group bias and out-group bias. His interests could involve reading, hiking, and programming, coupled with less constructive activities like smoking, excessive drinking, and gambling.
This individual seems to possess low self-esteem, exhibits introversion, a lack of emotional stability, and low self-control, making them susceptible to targeted advertising."
Seems like nonsense to me. I'd love to see the prompt. From one of the sample images:
"They likely share an agnostic worldview and identify as heterosexual. Their clothing is casual, and their interests revolve around skateboarding, music, and hanging out. Given their age and attire, they likely lean towards a liberal political affiliation. They display signs of classism and ageism, with potential for racial profiling and stereotype threat." - Wow, really?! Were the system instructions asking to be as judgmental as possible?
And then some really weird stuff: "he may also have a penchant for video games, excessive drinking, and skipping work".
Guessing my exact location - Oslo Opera House - was impressive though.
> He is susceptible to confirmation bias, in-group bias, the availability heuristic and out-group homogeneity... he may also be prone to excessive phone use, binge-watching TV shows, and impulse buying.
That's literally everyone
> Guessing my exact location - Oslo Opera House - was impressive though.
Not really. They have almost global picture coverage thanks to Google Street View.
They only need small snippets of a picture to geolocate you.
But, can I offer a quandary? Some companies won't care if it's wrong.
If some executive decides to buy into AI profiling like this and make customer decisions based on it, then how would the customer ever know:
1. why they are being treated differently, or
2. how to correct it?
I don't know if it's scarier being RIGHT or WRONG
I'm not sure feeding it personal pictures is a good idea at all.
> They likely share an agnostic worldview and identify as heterosexual.
I wonder how the model would know that they are heterosexual?
Let's be careful about categorizing people so easily and in such a simplistic way.
Clicking on the example photo of a white family with 3 small children in a field.
> Biases: Ageism, classism, racial profiling, microaggressions
???
e.g. https://i.imgur.com/FlnYwrK.png
Also, the people didn't look "low income" at all, but they were black, so maybe this tool is also racist.
I will say I am impressed with Instagram advertising me music software. I really like making music and I’ve bought quite a few things off of Instagram ads.
That’s advertising done right!
> He is presumed to be agnostic, heterosexual, and politically aligned with the Democratic party
Thanks a fucking lot, robot.
"They likely share an agnostic worldview and identify as heterosexual. Their clothing is casual, and their interests revolve around skateboarding, music, and hanging out. Given their age and attire, they likely lean towards a liberal political affiliation. They display signs of classism and ageism, with potential for racial profiling and stereotype threat." - Wow, really?! Were the system instructions asking to be as judgmental as possible?
Also it's a blatant ad considering the source.