That “tap” is not what you expect. Apparently, you cannot tap on a person or object in the photo to delete it. But on a few (currently very rare) photos, there are odd-ball objects that are “glowing” and which, if you tap them, will get deleted. That works fairly well. But in a busy photo with a ton of objects, how it picked one tiny thing to be deletable, I don’t know. Maybe over time it will find more things in the same photos? That seems unlikely, though: why not look right now while I wait? I only have one photo open. Odd.
circle works, sort of. But if this is the long-awaited intelligence, then I’m really disappointed. Notice how it amputated my daughter’s hand.
Could I zoom in like crazy and try to “circle” around each finger and maybe get a better result? Actually, no. You can only zoom in so far, and the tool is pretty wide. And anyway, that’s not intelligence; it’s brute force.
brush works a lot like the retouching tool you are already used to, though the latter was only ever in macOS Photos, not iOS Photos, IIRC. Nothing spectacular, although I did see it try to be “smart” and mark for removal content outside my brushed region that appeared related. In the photo above, I brushed over “SPOOKY” and it removed that and “NIGHT” as well. Luckily, I found that using the “circle” alternative gave me the control to have only the one word removed, not both.
TL;DR
“Clean Up” will be useful, but I’m not feeling a wow factor.
I am rather impressed, actually. I started with an ordinary photo taken at a restaurant a few weeks ago on my birthday. The full photo is of our table and my relatives eating with the street just outside in the background. The front window of the restaurant had glare and distracting cars out on the street. Just for fun I tried Clean Up on it, rubbing out two cars and some window reflections. Bingo, they were gone!
Closeup of the edited street view Before and After:
Sure, if you look closely the road’s wavy and there are wonky details, but 99% of people won’t notice that. They will be looking at the people and the food and ignoring the irrelevant background. Most wouldn’t notice anything amiss without having the original to compare it with.
What impressed me was the ease (30 seconds and my first attempt using Clean Up with zero instructions) and how AI generated the rest of that window across the street with almost no information (the red car was blocking 90% of the place). I doubt it’s what was actually there behind the car in real life, but it looks pretty amazing. It even took out the person standing next to the car.
In the full photograph, this allows you to focus on the important things, not the background. I am impressed, though I’m sure I’ll run into bad use cases as I use it more.
Yea, that’s quite astounding. It would be particularly impressive if it recognized the location from its Maps database, fetched the storefront that it knows is actually behind the car, and then doctored it up to fit into your scene :-) That would be impressive.
Mostly, though, I’m glad your family didn’t suffer any amputations :-)
I actually thought about that! But it’s a tiny town of 2K people, so I seriously doubt that. Though it is a tourist (wine tasting) town, so there could be lots of photos online.
It is impressive that the result looks as good as it does, but it is clearly not what was really behind the removed objects, because we can see objects near that car that were also removed:
The menu board on the wall, just above the red car’s rear window, was deleted and seems to have been replaced with three lights on the wall over a black rectangle.
The person was not deleted. At least not completely: his head is still there.
The contents of the window above the red car are completely different. In the original, it appears to be an entryway to the building, but the replacement looks more like the contents of a refrigerator case of some kind.
It is impressive, and you may prefer it for your scrapbook, but the content used to replace deleted objects is clearly constructed from whole cloth.
I’m writing about Clean Up soon. It’s far from perfect, but it can be very good when used in the right situations. It really likes small, discrete objects against easily faked backgrounds. You’ll see much worse results with more overlap, larger objects, and more detailed backgrounds.
Look Around isn’t available here, even in the “big” town of 35K nearby.
Probably available on Google Maps, though who knows if it’s up-to-date (a lot of those small shops change frequently).
Oh, sure. We were just having fun with the idea of how it generated that shop behind the car. It still did a decent job, especially for the time involved. It would have taken me hours in Photoshop to do it. I could see myself using Clean Up and then doing some finer retouching in Photoshop to remove some of the artifacts (cleaning up Clean Up).
(I hadn’t even noticed the guy’s head is still there! Pretty funny. It looks just like a pattern on the wall and blends in. You’d never know it was a head without the original photo to compare it with.)
Obviously you’re still best off taking the best picture you can (garbage in, garbage out), but if you do end up with a snapshot with something distracting in it, Clean Up seems useful. (In another photo, I removed a green can of Sprite that felt like obnoxious product placement and Clean Up instantly removed it.)
I wouldn’t use it on a photo you plan to blow up to poster size, but for casual snapshots and bad photographers (I know someone who just took a photo with his fingers at the edge of the lens), Clean Up should be helpful.