Agreed. And I think we’ve long since passed the point where non-professionals would even notice more pixels.
Here’s my reasoning:
A standard “35mm” negative is 24x36 mm (about 0.9 x 1.4").
My personal experience with scanners (and consumer films at ISO 200 and 400) shows that resolutions above about 3600 dpi don't give you any more information. At higher resolutions, you end up just scanning the film's grain - interesting to view, but usually pointless.
So that negative’s maximum practical resolution is around 3400x5100 pixels or 17 megapixels. Maybe a bit higher if you’re using a particularly fine-grained film.
If printed at 300 dpi, that will produce a print that's about 11.33 x 17". That's larger than a typical "8x10" portrait and is therefore going to be large enough for most people. Yes, some printers print photos at higher resolutions, but most people need to look very closely or use magnification to see the difference, especially if a good-quality printer and photo paper are used.
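If you want to check the arithmetic, here's a quick Python sketch using the same figures as above (the 35mm frame size, my ~3600 dpi ceiling, and 300 dpi printing):

```python
# Back-of-the-envelope math for scanning and printing a 35mm negative.
MM_PER_INCH = 25.4
film_w_mm, film_h_mm = 36.0, 24.0  # standard 35mm frame
scan_dpi = 3600                    # practical ceiling before you're just scanning grain

px_w = film_w_mm / MM_PER_INCH * scan_dpi  # ~5102
px_h = film_h_mm / MM_PER_INCH * scan_dpi  # ~3402
print(f"scan: {px_w:.0f} x {px_h:.0f} px = {px_w * px_h / 1e6:.1f} MP")  # ~17.4 MP

print_dpi = 300
print(f'print at {print_dpi} dpi: {px_w / print_dpi:.1f}" x {px_h / print_dpi:.1f}"')  # ~17.0" x 11.3"
```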
So I'm not sure what the big deal is about cameras with resolutions substantially higher than this. Extra pixels give you room to zoom in and still have a good result, and they reduce round-off errors during image processing, but that's really only important during the editing process. Once the editing is done, they just make the files bigger.
For the final file, I think resolutions like 48 MP are only going to be useful for professional purposes - like printing posters and banners, applications where historically a standard 35mm camera would also have been insufficient.
And that’s exactly it and why high MP is, in fact, important, for prosumers like me. In many cases it can function as an alternative to having better zoom.
For the majority of consumers who never crop in post or print posters, high MP is just fluff. But they're also not candidates for the Pro models, which is what this thread is entirely about.
And that’s exactly it and why high MP is, in fact, important, for prosumers like me. In many cases it can function as an alternative to having better zoom.
If all pixels were equal, that would be great and I’d be all-in because I love more pixels on target–there’s no such thing as too much (useful) magnification. Unfortunately there are trade offs.
Oversimplifying to the benefit of the higher pixel count: given the same sensor size and technology, a pixel on a 24MP sensor will be able to collect 4 times as much light as a pixel on a 48MP sensor. The larger pixel will have much less noise in the signal, increasing accuracy. It will also have a larger dynamic range and allow for shorter shutter speeds to reduce motion blur.
To make up for the poor data, Apple is inserting a lot of calculations. For many things that works pretty well, but a big draw of closeup/macro is the fine detail of small things. The imaging process can make things up. This can happen even with careless minimal processing - over-sharpening, for example - and the possibilities for artifacts grow with more complex operations. So you might get an impressive picture, but some of those details could be a lie without your knowing it. Things like fine hairs can be particularly troublesome, and they're sometimes important for species ID. The sensitivity of fine detail to processing could be a (small) part of why Apple keeps kicking macro shots to 24MP instead of 48MP.
I certainly understand not wanting to carry around a DSLR. But the sheer ease of getting far better results for closeups, even with a non-macro lens, keeps my Olympus E-M5 III with me most of the time. I also have an Olympus TG-6, which was (and maybe still is) one of the best compact cameras for closeups, but it's fiddly and disappointing compared to the E-M5 III because of the physical limitations of tiny sensors–not only tiny pixels, but diffraction effects, severely limited control of f-stop, short working distance, and more. On the other hand, it's small, light, and pretty versatile, and easier to use than the phone camera when I don't want to carry the 'real' camera.
Your entire post is based on the premise that Apple concocted 48 MP on the same sensor that was previously just giving something lower, like 24 MP, and therefore has a lower light/pixel ratio. Do you know that for a fact? It’s not 48 MP delivered on a more sensitive sensor?
No, I'm saying that Apple (or Sony, or Canon) makes a sensor built on some particular technology, including the light sensitivity of the material. They then have a choice of how many pixels to put on that sensor: a lot of small pixels, or fewer big pixels. Choosing bigger pixels has a number of advantages due to physics, and you can't have all the good things at once.

There's a compromise option: pixel binning on demand–merging the photon counts of several tiny pixels to mimic a bigger pixel. That's presumably what Apple is doing with the 48MP vs 24MP on that one sensor. But binning is never as good as a bigger pixel.

It gets complex, because there's light collection area lost to the parts needed to read the pixel data, there's the color filter array choice, and more. But every complication I know about biases things in favor of bigger pixels on similar sensors being better for photography–higher dynamic range, higher useful ISO, better color accuracy, less diffraction (which reduces resolution). It matters more for some types of photography than others, but closeup/macro is one of the areas where it's often noticeable.
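For anyone curious what binning looks like, here's a toy sketch (my own illustration, not Apple's actual pipeline). It ignores the color filter array and read noise, and note that a straight 2x2 bin quarters the count (48 to 12), so whatever produces the 24MP output is more involved:

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of photosite counts into one 'big' pixel."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rng = np.random.default_rng(0)
# Photon arrival is Poisson, so shot noise comes along for free; a small
# patch stands in for the full mosaic of tiny pixels.
small = rng.poisson(50.0, size=(600, 800))
binned = bin_2x2(small)  # quarter the pixel count, 4x the light per "pixel"

print(small.std() / small.mean())    # ~0.14 relative noise (1/sqrt(50))
print(binned.std() / binned.mean())  # ~0.07 - merging 4 counts halves it
```

The toy shows the upside (lower relative shot noise); the real-world downsides the post mentions, like per-photosite read noise and lost collection area, are exactly what it leaves out.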
There's a different high-resolution scheme for larger sensors that's simultaneously much better and worse than using smaller pixels. You can move the sensor precisely, one pixel at a time, in three directions (up, sideways, down), leaving the color filter array in place. This results in four images at overlapping pixel locations. Then the camera merges the four images in the right way to quadruple the effective pixel count. It also reduces color artifacts from the color filter array. The sad trade-off is that you need a good tripod and a slow or stationary subject, so I haven't played with it much. Some of the newer high-end cameras are getting fast enough at it to do some hand holding, but those aren't in my likely future. At those prices I could get more microscopy stuff instead…
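Vendors implement the shifting differently (some use whole-pixel shifts purely for better color sampling, others use half-pixel shifts to grow the grid), but the merge step of the grid-quadrupling variant is easy to sketch. A toy with no color filter array, perfect registration, and a stationary subject:

```python
import numpy as np

def merge_pixel_shift(frames: dict[tuple[int, int], np.ndarray]) -> np.ndarray:
    """Interleave four half-pixel-shifted frames into a 2x-denser grid.

    `frames` maps (row_offset, col_offset) in half-pixel units -- (0,0),
    (0,1), (1,0), (1,1) -- to equally sized captures.
    """
    h, w = frames[(0, 0)].shape
    out = np.empty((2 * h, 2 * w), dtype=frames[(0, 0)].dtype)
    for (dr, dc), frame in frames.items():
        out[dr::2, dc::2] = frame  # each frame fills every other row/column
    return out

# Four small stand-in frames: 0.12 MP each -> one 0.48 MP merge.
frames = {off: np.zeros((300, 400)) for off in [(0, 0), (0, 1), (1, 0), (1, 1)]}
print(merge_pixel_shift(frames).shape)  # (600, 800): 4x the pixel count
```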
I mean, by your logic we should still be doing 640x480 on this sensor because that will be crystal clear and super high SNR per pixel. But we aren't, because our sensor is high enough quality to support lots of pixels, presumably 48 megapixels. But here:
To make up for the poor data
you clearly don’t think so.
To understand my challenge here, ask yourself this question: what is the RIGHT number of pixels to put in a given sensor? How do you make that determination?
And to give you a sneak peek at the point I'm making, it's that I think you're presenting a solution to a problem that doesn't exist. Or at least I haven't seen anyone here claim that Apple's choice to carve 48 MP out of this sensor is presenting an image quality issue. The problem here is that we can't get a 48 MP closeup.
I mean, by your logic we should still be doing 640x480 on this sensor because that will be crystal clear and super high SNR per pixel.
There are applications where that's almost exactly true. In microscopy, the higher the magnification, the fewer pixels you need for a photo. For 40x through the optical limit of 1000x, ~1.3MP is the standard–as long as the sensor is good enough, which in this case means CCD, not CMOS, and preferably temperature controlled. For research-grade cameras, prices at the low end start at several thousand dollars for monochrome and go up from there. (Part of that is what the market will bear, but then so are iPhones…)

VGA CMOS sensors are often sold for low-end hobby microscopes, and they're a good match for the resolution provided by the kind of lighting that those scopes have. More pixels would be wasted and would also reduce the frame rate for video. I have a 5MP camera for my stereoscope, because that has a lowest magnification of 7x (and because the 4MP wasn't on sale). When you use a DSLR in a camera tube on the scope, or attach a phone to an eyepiece, you do need some more pixels, because the image is a little circle in the frame, and it's that circle that needs the appropriate number of pixels.
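You can put rough numbers on "higher magnification needs fewer pixels" with the Rayleigh diffraction limit. A sketch with illustrative values (a 1/2"-ish sensor and common objective specs; the trend matters more than the exact megapixel figures):

```python
# How many pixels can a microscope objective actually feed?
# Rayleigh criterion: smallest resolvable feature ~ 0.61 * wavelength / NA.
def megapixels_needed(magnification, na, sensor_w_mm=6.4, sensor_h_mm=4.8,
                      wavelength_um=0.55):
    resolvable_um = 0.61 * wavelength_um / na     # at the specimen
    on_sensor_um = resolvable_um * magnification  # blown up onto the sensor
    nyquist_pitch_um = on_sensor_um / 2           # 2 pixels per resolvable feature
    return (sensor_w_mm * 1000 / nyquist_pitch_um) * (sensor_h_mm * 1000 / nyquist_pitch_um) / 1e6

# Typical objective magnification/NA pairs:
for mag, na in [(4, 0.10), (10, 0.25), (40, 0.65), (100, 1.25)]:
    print(f"{mag:>3}x NA {na:.2f}: {megapixels_needed(mag, na):.2f} MP")
# ~0.7 MP at 4x and 10x, ~0.3 MP at 40x, ~0.2 MP at 100x
```

Since magnification climbs faster than NA, the pixel budget shrinks as you zoom in; around a megapixel really does cover it.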
Everything has tradeoffs. For tripods, you want sturdy, lightweight, and cheap, but you can have at best two of those. To design a phone camera sensor, you need to choose between what people want, what people can be convinced that they want, and at least a reasonable level of quality within what's physically possible–but that quality is unlikely to be even-handed across all potential uses, because improving one thing will likely make something else worse. Apple seems to design for videographers at the high end, then average people who basically want people-scale things, and then adds features for some people outside the average interest range.
I think it's great that Apple is making some effort to introduce people to closeups (maybe even bugs!), though it's somewhat misleading to call it macro. They also do a fairly good job of using a non-specialized sensor/lens system for a specialized kind of photography. But you need to choose between the convenience of having just one device that gives adequate results for your purpose (if it does) and, if you get disappointed or hooked enough, diving into the specialization a bit and maybe branching out into hardware that can do a much better job, with more latitude, more easily.
The issue isn't the sensitivity of the sensor, it's the area of each pixel, which determines how much light that pixel is exposed to. If you take the same overall sensor size – which is limited by the device size – and divide it into twice as many pixels, then each pixel will be exposed to half as much light, regardless of the sensitivity of the sensor. If you divide the sensor into half as many pixels, then each pixel will get twice as much light, again regardless of the sensitivity of the sensor.
An iPhone camera–given our current understanding of an “iPhone”–will always have to have a smaller sensor than is possible on a larger camera. And for a given number of pixels, a smaller sensor will be exposed to less light per pixel than a larger sensor.
(If the size of a pixel is doubled in both dimensions, then you would have 4x the light per pixel and 1/4 as many pixels – I think the original quote should have said “2 times as much light” rather than “4 times” when comparing 24MP to 48MP.)
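A quick sanity check of that correction, using a made-up but representative sensor size:

```python
# Same sensor area carved into different pixel counts (the sensor size is an
# illustrative large-phone-sensor figure, not a spec).
sensor_w_mm, sensor_h_mm = 9.8, 7.3
area_um2 = sensor_w_mm * sensor_h_mm * 1e6

for mp in (12, 24, 48):
    print(f"{mp} MP: {area_um2 / (mp * 1e6):.2f} um^2 per pixel")
# 12 MP: 5.96, 24 MP: 2.98, 48 MP: 1.49 -- halving the count doubles the
# light per pixel; the 4x figure requires quartering the count (48 -> 12).
```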
Sebastiaan de With, the creator of the Halide camera app, has released his annual iPhone camera review. In it, he has the most concise explanation of what is going on with Macro mode:
"One very compelling bonus of the 48 MP upgrade is that you get more than for the high-resolution shots. It does wonders for macro photography.
Since the iPhone 13 Pro, the ultra-wide camera on iPhone has had the smallest focus distance of any iPhone. This let you get ridiculously close to subjects.
The problem was that… it was an ultra-wide lens. The shot above is a tight crop of a very wide frame. If you wanted a close-up shot like that, you ended up with a lot of extra stuff in your shot, which you'd ultimately crop out.
In the past, that meant a center crop of your 12 MP ultra-wide image would get cropped down to a 3 MP image. In Halide, we worked around this with the help of machine learning to intelligently upscale the image.
With a 48 MP image, however, a center crop delivers a true 12 MP image. It makes for Macro shots that are on another level."
I did a similar experiment to the rest of you, shooting a Macro image and then a 48MP Ultra Wide image from the same location (in this case, my keyboard with my arm stable on the armrest of my desk chair). I then cropped both images so that they both showed the same portion of the keyboard. The final pixel dimensions were very close, showing that the Macro photo used the 48MP ultra-wide mode. As a control, I also took the same picture in normal ultra-wide mode and cropped it similarly. The final pixel dimensions were about 1/2 the others in each dimension. So, the picture had 1/4 the number of pixels, which is just what you would expect.
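The arithmetic behind both de With's numbers and my control shot is just area scaling; a trivial sketch (the 0.5 crop fraction matches his 12 MP to 3 MP example, and the 1/4 ratio holds whatever the crop fraction is):

```python
# Megapixels left after a centered crop keeping `frac` of each dimension.
def cropped_mp(full_mp: float, frac: float) -> float:
    return full_mp * frac * frac  # pixel count scales with area

print(cropped_mp(12, 0.5))  # 3.0  -> center crop of the old 12 MP ultra-wide
print(cropped_mp(48, 0.5))  # 12.0 -> the same crop from a 48 MP frame
print(cropped_mp(12, 0.5) / cropped_mp(48, 0.5))  # 0.25 -> the 1/4 I measured
```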
Yes this is exactly the same test and explanation I posted above. I will probably not sue for plagiarism. But if there’s money in it….
So yeah, we're not getting 48 MP macro, we're getting 12 MP. If the previous equivalent for the same workflow was 3 MP, then sure, this is an improvement. It still seems like Apple's pitch of 48 MP macro is misleading.
He also doesn't appear to add any intel to our discussion about the Auto Macro feature, and whether it's performing some type of processing or is just an indicator that iPhone is going to use the UW lens despite your zooming in.
Meanwhile, a good friend of mine just posted this amazing macro shot:
Having now used the very latest iPhone Pro, something told me this was not shot on iPhone. And sure enough, she said it was shot with:
Samsung S23; I've gotten similar results with the S20 and Huawei P30 as well
And in a sister thread here, we have been discussing that Samsung also has the enviable 10x optical zoom feature.
I hate spending $1000 on a top-of-the-line Apple product and still feeling inferior to Google. Maybe the S23 IS the droid I'm looking for?? But I just can't…
Bigger is always better when it comes to collecting light, which is why people still pay a premium for "full frame" camera bodies, noting that they also result in larger cameras and lenses. It's of course also why old cameras had gigantic photographic plates in them.
But the manufacturers are always finding ways to get better photos with the same number of photons - or even fewer photons. They get higher ISO, lower noise, etc. with fancy electronics. You can see some of the timeline of that with Canon's DIGIC history:
This tech is, of course, why that tiny lens (and sensor) on your iPhone is taking pix that rival the heavy, pro cameras with large aperture lenses and 35mm sensors.
I'm not interested in doing a lot of editing, so I tried a somewhat different test, one that makes more sense to my non-photographer brain, which wants to fill the viewfinder with what I'm shooting rather than assume I can crop what I want out of a larger image.
I took the exact same shot on my iPhone 16 Pro using Auto Macro mode and Manual Macro mode, moving the iPhone closer to my trail mix in the Manual Macro mode. Since I still have it, I took the same shot with the iPhone 15 Pro, and we’ll start with that.
It seems pretty clear to me that the Manual Macro mode shot is the worst, despite being 47MP (I still don’t understand why the size of these is often under 48MP). The almond on the left is fuzzy, and the color isn’t as good. I’d give the nod to the iPhone 16 Pro in Auto Macro mode over the iPhone 15 Pro, but they’re fairly comparable to my eye.
But what if you do want to crop? I cropped the same portion of both iPhone 16 Pro photos, getting a 3MP image from Auto Macro mode and a 9MP image from Manual Macro mode.
I still prefer the color on the Auto Macro crop, but it is distinctly less sharp when viewed closely.
It sounds like what Apple considers “48MP macro photography” really is about giving professionals the flexibility to use the 48MP Ultra Wide camera close up and then crop out something useful that’s potentially larger than what could be achieved with 12MP shots.
I can hardly see myself ever using this mode. I just like to stuff my iPhone into pretty flowers and see what comes out.
Yes, I agree, but that certainly spins it in the most favorable light for Apple.
I guess Apple is getting the UltraWide lens to serve double duty:
to take true wide-angle shots, such as in a small room, or a closer group shot, or an outdoor landscape view;
leveraging the fact that wide-angle lenses have an inherently deeper depth of field, including close focus, they use it to take macro / close-up shots (see the sketch below)
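To illustrate that depth-of-field point, here's a rough sketch using the standard approximation, with illustrative full-frame numbers rather than iPhone specs. Note the comparison assumes the same f-number and subject distance; it changes if you move to match framing:

```python
# Rough total depth of field when focused well short of hyperfocal:
#   DoF ~ 2 * N * c * s^2 / f^2
# N = f-number, c = circle of confusion, s = subject distance, f = focal length.
def dof_mm(f_mm, N, s_mm, c_mm=0.03):  # c = 0.03 mm is a full-frame convention
    return 2 * N * c_mm * s_mm**2 / f_mm**2

for f in (24, 50, 100):
    print(f"{f:>3} mm lens: DoF ~ {dof_mm(f, N=8, s_mm=500):.0f} mm")
# 24 mm: ~208 mm, 50 mm: ~48 mm, 100 mm: ~12 mm --
# halving the focal length roughly quadruples the depth of field.
```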
I’m not a pro photographer, but any time I did “pro” style macro work, the subject was pretty much 1:1, or what iPhone calls 1x. That is, it didn’t look like it got pushed away from you even when you were close, as does the UW in 0.5x mode (the only way to get 48 MP). So I did a little homework:
It does seem that you can technically do 0.5x and call it macro, as iPhone 16 Pro does. But that appears to be bottom-of-the-barrel vs. real macro gear that often brings the subject CLOSER than it appears in real life, not only making it easier to shoot live subjects, but also saving you from having to lose pixels by zooming (before shooting) or cropping (in post).
But I'm being a demanding, unreasonable customer. This is a phone, for goodness' sake. It's an impressive feat what it can do, and the higher-density UW lens is better still than it was before. And like you, @ace, I will probably (after all this research, lol) just use the Auto Macro feature, given the trade-offs previously discussed.
But I’m still intrigued how the Samsung is taking such great macros…
One thing I’ve just discovered is that the Macro mode icon is only present if you turn on Macro Control in Settings > Camera, and it’s only there so you can turn Macro mode off! Apple added it after complaints about iOS 15 and the iPhone 13 Pro.
Yes, I'm sorry, I thought you knew that. My previous phone was the iPhone 13 Pro. Some people found it weird the way iPhone would switch lenses, sometimes back and forth, especially when up super close, drastically changing what was in your frame and which part of it was in focus.
Of course, disabling Auto Macro means your close up will be blurry unless you tap 0.5x or zoom out. Leaving it on ensures that iPhone will do that for you automatically, flappy though it may be.
If the sensor itself is 48MP, and the firmware is using multiple exposures in order to implement image stabilization, then the small vibrations from your hand may result in the edges (pixels not present in every exposure) being cropped away from the final result.
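If that's the mechanism, the arithmetic is easy to sketch (purely speculative on my part, with exaggerated made-up offsets):

```python
# Toy model: after aligning several hand-shake-shifted exposures, only the
# scene region present in EVERY frame survives, so the output is a bit
# smaller than the sensor. Offsets are (dx, dy) in pixels per exposure.
def common_region(width, height, offsets):
    left   = max(max(dx, 0) for dx, dy in offsets)
    right  = width + min(min(dx, 0) for dx, dy in offsets)
    top    = max(max(dy, 0) for dx, dy in offsets)
    bottom = height + min(min(dy, 0) for dx, dy in offsets)
    return right - left, bottom - top

# 8064 x 6048 is the nominal 48 MP frame; four jittered exposures:
w, h = common_region(8064, 6048, [(0, 0), (35, -20), (-42, 15), (28, 50)])
print(w, h, f"-> {w * h / 1e6:.1f} MP")  # 7987 5978 -> ~47.7 MP, not quite 48
```

That would at least be consistent with the 47MP files mentioned earlier in the thread.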
I just like to stuff my iPhone into pretty flowers and see what comes out.
An often useful technique. My favorite lucky dip was a crab spider eating a fly inside of an ocean spray flower cluster with my first digital camera (1.3 MP Casio QV-8000). But do try some side lighting occasionally–you can hold the phone in one hand and the light in the other and avoid fussing with a rig to meld them together.
any time I did “pro” style macro work, the subject was pretty much 1:1, or what iPhone calls 1x
Apple's X is a focal length* multiplication factor, not a reproduction ratio. They use their 24mm lens, which has a wide-angle perspective, as the basis for the ratios. That's 1X. It's not wrong, though the choice is mostly marketing. They then call their normal-perspective 48mm lens (approximately the perspective of human eyeballs) "2X Telephoto"; telephoto is wrong and raised a loud chorus of "tsk tsk"s when they first did it.
But you can measure/estimate the reproduction ratio yourself. What’s the smallest area that you can photograph in focus? Graph paper is a good subject because it makes the measuring easy. Use some side light with a lamp or flashlight if you need to get more even illumination so that nothing washes out too much to count the squares. Then find out the sensor size and take the ratio of sensor size/actual subject size.
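A minimal sketch of that calculation (the sensor width here is a placeholder; look up your camera's actual spec):

```python
# Reproduction ratio = subject size on the sensor / subject size in real life.
# With the subject exactly filling the frame, that's sensor width / subject width.
def reproduction_ratio(sensor_w_mm, subject_w_mm):
    return sensor_w_mm / subject_w_mm

# e.g. a sensor ~9.8 mm wide (placeholder value) filling the frame
# with 24 mm of graph paper:
print(f"{reproduction_ratio(9.8, 24.0):.2f}:1")  # ~0.41:1; true macro gear reaches 1:1 or better
```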
* Focal length is a physical property of a particular lens, independent of sensor size. The practice of using "35mm equivalents", which pretty much everyone including Apple does, is a bit oversimplified but useful for comparisons between different systems.