I just teased my friend about this. He doesn’t have a dog, but does dog sitting, so he sent me a photo of a gorgeous white husky on a giant dog bed on his deck. The dog/bed combo looked so surreal and the dog so perfect, I pretended it was an AI-generated dog and congratulated him on his AI acquisition and he kept texting back, “No, it’s a real dog, I swear!” and sending me pics of the dog in his living room and other places as “proof.” The problem is that there is no acceptable proof!
Basically, if people think it’s real you can’t convince them otherwise, and if they think it’s fake, they can’t be convinced it’s real. We’re in a real mess!
I started noticing that phony, over-processed look several years ago, and it has only gotten worse. Yuck! But we can’t stop people from unrealistically pumping up their photos when everyone has a smartphone camera that is so sophisticated that “photographers” don’t really have to know much more than how to press the shutter button.
I don’t know that this is possible. With currency, they watermark the authentic item (the counterfeit item lacks the correct watermark). They’re also dealing with a physical product, manufactured by a very limited number of producers, where new versions supplant old versions.
With images, you’re dealing with a digital item with millions of different producers. If you’re going to flag/watermark the authentic item, how do you upgrade all existing cameras to watermark the original? (And what about other authentic image sources, like scans of film photos?) How do you enable all the camera manufacturers to flag authentic items while preventing the fake image generation programs (some of which are coded by bad actors) from including the same flag? (If you’re going to flag only the fakes, how do you force the fake image creators to include the flag and prevent its subsequent removal by anyone with image editing capabilities?)
Not to mention: what do you do about the billions of existing photographs? If tools can create a fake of a contemporary photo, they can also create a fake of an older or archival photo, and the real photos from that era will never be marked as authentic.
Sadly, I don’t think there’s a real solution out there.
Adobe is trying to create some kind of system to mark AI-generated images, but I couldn’t find much here about how it works technically or why it wouldn’t be trivial to remove the watermarking:
This might work for fully-AI generated images, but if you just added a bit of AI-gen into another image or manipulated/distorted the generated image, wouldn’t the watermarking be lost?
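That fragility is easy to demonstrate. Here is a minimal sketch (my own illustration, not Adobe’s actual scheme, which is based on the C2PA provenance standard): a proof of authenticity is computed over the exact image bytes, so changing even one byte invalidates it, and stripping the accompanying metadata removes the proof entirely.

```python
import hashlib

def sign(image_bytes: bytes, secret: bytes) -> str:
    # Stand-in for a real digital signature: a keyed hash over the pixels.
    return hashlib.sha256(secret + image_bytes).hexdigest()

def verify(image_bytes: bytes, signature: str, secret: bytes) -> bool:
    return sign(image_bytes, secret) == signature

secret = b"camera-private-key"         # hypothetical key baked into the camera
original = b"\x89PNG...pixel data..."  # stand-in for real image bytes

sig = sign(original, secret)
print(verify(original, sig, secret))   # untouched image: True

edited = original.replace(b"pixel", b"pixEl")  # a one-byte "retouch"
print(verify(edited, sig, secret))     # any edit at all: False
```

Note that this only ever proves an image is *unmodified since signing*; it says nothing about an image that carries no signature at all, which is exactly the problem with the billions of existing photos.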
I’m also not sure it matters, any more than using clip art or stock photos, which don’t necessarily require disclosure. The only time I see it being an issue is when you’re trying to pass off AI-gen as original art or as reality: in those cases it’d be nice to have a way to prove the image is generated.
Reminds me of all the well-intentioned but hopelessly naïve approaches to dealing with spam that were going around in the ’90s.
“Let’s specify a mail header to clearly indicate if something is an ad”.
But, of course, there’s no possible way to make the spammers use it, and almost all refused, because people would filter on that header to auto-delete the ads.
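The recipient-side filtering that spooked the spammers would have been trivial — something like this hypothetical rule (the `X-Advertisement` header name is my invention for illustration):

```python
# Hypothetical auto-delete rule: drop anything flagged as an ad.
inbox = [
    {"From": "friend@example.com", "Subject": "Lunch?"},
    {"From": "deals@example.com", "Subject": "SALE!", "X-Advertisement": "yes"},
]

kept = [msg for msg in inbox if msg.get("X-Advertisement") != "yes"]
print(len(kept))  # 1 -- only the friend's message survives
```

One line of filtering wipes out every honestly labeled ad, which is precisely why no sender would ever label honestly.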
You could pass a law mandating its use, but how are you going to convince people in other countries to obey? And what makes you think a criminal/scam operation would care?
This is pretty much the same thing. All the laws and regulations and standards might make sure your attempt to paste your cat into last year’s vacation snapshot is clearly labeled, and social media may therefore ban the picture, but it’s not going to do a thing about politicians, governments and criminal organizations using AI-generated photos with intent to deceive the masses.
Yep, although in this case, if all the main tool vendors and platforms adopt or require it, that could push everyone else into compliance. I’m not holding my breath, though, much as I wish them the best of luck.
One thing that troubles me about this entire issue is that memory can be notoriously unreliable. We’re pretty good at the gist of an event, but details are often lost or changed over time. It’s one of the reasons that people can disagree about what happened at a shared event. (Another is that we may even notice or perceive different things.)
We’ve long believed that photos somehow capture an objective reality that could aid our fallible memories. You may not remember who was at an event, but a photo might make it clear. If photos become so easily modifiable that we can no longer trust them—someone was retouched out of the photo—that may further confuse what we remember or create additional opportunities for confusion.
But there’s nothing really new here anyway. Even before Photoshop, photos could capture a selective, non-representative view of an event—perhaps there were people outside the frame—or they could have been staged entirely. I have a nice photo of a friend breaking the tape at a race she won, but it was restaged well after the finish since there was no tape for real and she came in surrounded by a bunch of guys. I know that, but no one else seeing it would guess.
The other claim is that the photo should represent what we remember, and altering it creates a false memory. But given the fallibility of memory, who’s to say what we do remember? If we’re editing out some random person who happens to be in the background, we probably wouldn’t remember that they were present anyway.
And again, there’s nothing new here. We tell people to smile for the camera, and many do regardless once they realize they’re having their picture taken. But what if two people were having a bitter argument moments before a photographer pops up and gets them to smile for a photo? Those people’s memories probably don’t include them being happy with each other, despite the photographic evidence. The photographer didn’t alter the photo, but they did alter the scene.
In the end, I think I’m on the side of thinking that all photos and videos are constructed creations. For the most part, I don’t plan to assume or worry that significant changes were made by AI or the photographer, but it’s always a possibility. In personal photos and videos, I can’t see it being a major problem, though I can see it becoming a greater issue in fields where some level of authenticity is required, such as journalism and law enforcement. Fact-checking will have to expand to visual artifacts and evidence.
Is reality real?
There was this family with their kids sitting at the beach front. Two dolphins were spotted in the water. One of the kids said: “Ohh look, it’s very realistic.”
My working life started as a press photographer in 1979. We did typical darkroom manipulations: dodging, burning, etc. When we went to digital in the late 90s, I had moved into management but was asked to contribute to our photo policy. It was simplified to a single line: you can change the content but not the context of the image.
This allowed us to tidy up an image but it was required to have the same meaning as the original. We could remove rubbish on the ground, eliminate someone flipping the bird at us or cover up a number plate on a car. I was once asked to ‘paint’ black tights onto a young ballet dancer to protect her modesty. Most of what we did could have been done with more advanced retouching in the darkroom and management were comfortable with it - we never struck a problem.
As for personal photos, it’s personal choice. If someone prefers super saturated oceans with dolphins and Sophia Loren that’s their choice. It’s not for me but I’m not going to preach what others can do.
I have no issue with what Adam did but the artefacts are a bit rough - I’d probably just have cropped it. I’m definitely a minimalist when it comes to editing - rarely doing anything other than basic colour correction, cropping and sharpening. My true love is black and white so I’m more likely to go that route.
As for AI, it’s not photography; it’s ‘art’. In the same way as a painting or sculpture, people are free to create whatever they want, and reality doesn’t have to be a part of it.
There’s a big difference (in my mind) between spotting dust off a scanned negative and changing the reality as portrayed. I’ve hated HDR photographs since they were invented, with lurid colours that look like the cover of a science fiction paperback rather than the world we live in.
But the colours in traditional photography have never looked like the ‘world we live in’. Apart from the different colour casts that depended on film stock, non-HDR photos have never come close to capturing the range of shadow to light that the human eye sees. From this perspective, traditional photos are often very flat with either the shadows a big black blob or the light areas blown-out white. I’m not saying that non-HDR photos can’t look good (I personally almost never use HDR), but I don’t see how anyone can say they look like the ‘real’ world as seen through human eyes.
It’s a matter of picking the right film, isn’t it? Fuji Pro400H or Kodak Portra 160 show us reality, whilst Ektar 100 is more saturated like the HDR of the film world. But it never gets quite as lurid as digital HDR.
As a photographer, I find it disturbing people are so willing to alter reality just for a cleaner picture.
I assume you are not a fan of SF. Didn’t like 2001 or Star Trek or Star Wars, for example. Or any action movie. Or practically any other movie. Or the music of Les Paul and Mary Ford. And on and on and on. Leonardo’s La Gioconda may have been life-like, but the background was not real, at least as related to that painting.
As a photographer, I feel that I am entitled to present my original work in any manner that suits my fancy. But I promise not to reproduce or alter your work.