Do Megapixels Matter When Converting Negatives With A Digital Camera

Do you get any more detail from a negative when shooting with a 50 megapixel camera compared to a 12 or 24 megapixel camera?

Negative Lab Pro :

Dropbox link for photos :

Real Sir Robin :

Cameras used in this video.

Sony A1
Sony A7c
Sony A7s II

This video was shot on the Sony FX3 and the Sony 35mm f/1.4 G-Master lens.

negative lab pro
Comments

I owned a $400,000 Kodak Pro Photo CD film scanning system from 1993 to 2007 which used true RGB linear scanning arrays and Schneider apochromatic lenses. We could scan 35mm frames up to 4k x 6k resolution (24MP) at 16-bit Lab colour, but we did this rarely, as scans of our studio calibration photos (taken on EVERY film stock) showed that only Kodachrome 25 ASA (now ISO) transparencies shot in ideal studio-lit conditions with the very best lenses and apertures could provide enough film detail to justify that 24MP resolution. We did over two million scans for archive producers and movie companies (all the stills for Star Wars etc.) and nobody ever wanted more than the 4k x 6k Pro resolution. It was the 16-bit tonal range and the hundreds of custom negative and positive film terms developed by Kodak and us that made the scanning quality the best in the world! David Myers, Digital Masters Australasia.

dummatube

Adding the timestamp at the beginning of the video is such a good-natured, generous gesture (losing watch time, an important ad-revenue metric) that it deserves this user-engagement datapoint post and a thumbs up. Wish all content creators were this awesome.

posysajrazdwatrzy

Paused at 2:44. There are conflicting forces. Let's start with the 35mm negatives and their resolution. In the film days, resolution was expressed in a linear unit because that directly relates to human perception of detail: line pairs per millimeter, LP/mm, a pair being one black and one white line. Twice the LP/mm is seen as twice as sharp (ceteris paribus, assuming both systems under comparison can reproduce it). You might argue that a 36mm x 24mm negative needs at least 7,200 x 4,800 samples (two per line pair), and this gets us to 34.56 million photosites. Why that number? Because 100 LP/mm was attainable with good film and professional-grade lenses. Nikon glass would do better in the center and a bit less towards the edges when stopped down 2 to 3 f-stops from fully open. With Leica glass you might see some higher values in the center of the frame and a bit more fall-off towards the edges and corners. So would a 36MP digital camera equate that resolution? Well, only if the black and white lines in the subject precisely align with the photosites in the sensor.
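The arithmetic in that paragraph can be checked in a few lines (a sketch: the 100 LP/mm figure comes from the comment, and the two-samples-per-line-pair factor is the Nyquist criterion):

```python
# Megapixels needed to sample a 36 x 24 mm frame at a given film resolution.
# Nyquist: each line pair (one black + one white line) needs >= 2 samples.
def sensor_megapixels(lp_per_mm: float, width_mm: float = 36.0, height_mm: float = 24.0) -> float:
    samples_w = width_mm * lp_per_mm * 2   # 36 mm * 100 LP/mm * 2 = 7200
    samples_h = height_mm * lp_per_mm * 2  # 24 mm * 100 LP/mm * 2 = 4800
    return samples_w * samples_h / 1e6

print(sensor_megapixels(100))  # 34.56 -> the 34.56 million figure above
```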
But there is a problem with a digital camera. You see, film has RGB at every coordinate because the color lives in three layers stacked on top of each other, with subtractive filtering between them. A digital camera does not. The sensor is colorblind, and to get color into the raw file there is a filter grid over the sensor with a repeating pattern of 4 photosites that together form an R.G.G.B quartet. The problem then is to convert this into RGB pixels, and that is done in raw processing. It happens in camera for JPEG and MPEG shots, and in your raw processing software for your raw shots. Your camera comparisons in Lightroom therefore tell you more about Lightroom than about the cameras. The 14 bits of tonal resolution you had at the R.G.G.B photosite level in your raw file have been degraded to fewer than 9 by raw processing.
That raw processing is a mathematically precise and repeatable guessing of the missing colors for the R.G.G.B quartets in the raw file, so as to change R to RGB, G to RGB and B to RGB. This creates artifacts like moiré if we recognize it and "noise" if we do not. Application of basic computer-vision-type AI can help a bit, but without "deep knowledge" in the raw processing software about what is in the photo, raw processing is not helped a lot.
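A minimal sketch of that guessing, assuming the simplest bilinear interpolation (real demosaicing algorithms are far more sophisticated) and hypothetical photosite values:

```python
# Bilinear demosaicing sketch: estimate the missing green value at a red or
# blue photosite by averaging its four green neighbours (illustrative data).
RGGB = [["R", "G"], ["G", "B"]]  # repeating Bayer filter pattern

raw = [  # one value per photosite, as a raw file stores it
    [10, 200, 12, 210],
    [190, 30, 195, 32],
    [14, 205, 16, 215],
    [192, 34, 198, 36],
]

def color_at(row, col):
    return RGGB[row % 2][col % 2]

def green_at(row, col):
    """Green value at (row, col); interpolated if the site is red or blue."""
    if color_at(row, col) == "G":
        return raw[row][col]
    # In a Bayer mosaic, the 4-neighbours of an R or B site are all green.
    neighbours = [raw[r][c]
                  for r, c in [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
                  if 0 <= r < len(raw) and 0 <= c < len(raw[0])]
    return sum(neighbours) / len(neighbours)

print(green_at(1, 1))  # green guessed at a blue site: (200 + 205 + 190 + 195) / 4 = 197.5
```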
Lower-resolution (than 36MP) cameras make raw processing easier with a so-called anti-aliasing (AA) or low-pass filter. These filters disperse light meant for one photosite onto its surrounding neighbors. Hence I call it the fuzzy filter. It helps raw processing, and the idea was already applied in the 1970s to aid image processing of data from scanning electron microscopes.
Or, the presence of a fuzzy filter must be part of the reasoning. We see that higher than 35MP resolution cameras generally have done away with the fuzzy filter - making lenses sharper and depth of field shallower.
The problem with digital reproduction of film is the grain in the film. With increasing resolution, this generates a raw file that has more detail of the grain and this is not likely understood by your raw processing software. So we have to find an optimum here between these conflicting forces.
But my negative is black and white! Unless you shoot a Leica Monochrom (which is even more expensive for not having the R.G.G.B filter layer, and may still have the fuzzy filter, by the way, because Leica's old lens designs are not well adapted to sensors), you always have an R.G.G.B raw file that needs raw processing.
A scanner can run a multipass scan and build an RGB.RGB.RGB.RGB file that is better than raw and needs no raw processing. If it can create a TIFF with more than 8 bits per color channel, you basically outperform the raw file. So a 16-bit TIFF is far better than 14-bit raw, even though at the digital level there is only a factor of 4 between them.
Some people would say that 24MP is enough. Well it might be the optimum, bottom line, after these conflicting forces got sorted out.
The problem with digital is not only that we look at the results of raw processing (where we lost a lot of quality) but also that we look at digitally upscaled or even upsampled representations of our images whenever we display them larger than 100%. This gives an illusion of image quality. Even when we display our images at smaller scales there is some digital processing applied, such as anti-aliasing for the display, depending on your software. Some defunct guy would call the digital photo on your computer "fake", and here he would have a point.
I tested Topaz's Gigapixel AI app last year as a way to get control over what is sent to the printer. Its ability to upsample, making guesses about what detail to add when blowing up a raw file, was incredible. A year later, Adobe have improved their Enhance Super Resolution features by a bit.
The magic of a 12MP Nikon D700 is in these aspects: for human perception, two times as sharp as 12MP is 48MP, and all the wow conversations about in-between values are extremely naive. It explains people's remarks that the gain from 12 to 24 is not as great as they expected: it is only about 41% in linear terms.
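The square-root relationship behind that "two times as sharp" claim can be sketched as:

```python
import math

# Perceived sharpness tracks linear resolution, so doubling sharpness
# quadruples the megapixel count; 12 MP -> 24 MP only gains ~41 % linearly.
def linear_gain(mp_from: float, mp_to: float) -> float:
    return math.sqrt(mp_to / mp_from) - 1.0

print(f"{linear_gain(12, 24):.0%}")  # 41% (the "only about 41%" above)
print(f"{linear_gain(12, 48):.0%}")  # 100%, i.e. twice as sharp
```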
The magic is also in its discrete analog to digital circuit (i.e. not bundled with, stacked on, the sensor) and it is in the relation between Nikon F-glass's qualities and the fuzzy filter.
To circumvent raw processing, we could mimic the multipass of scanners by shifting the original a tiny bit and shoot again, later match/stack the layers in Photoshop. But our raw images already have had raw processing at that moment and damage was done.
What about sensor shift - cameras have IBIS, so that should be easy, right? Well, the wild-assed guessing of RGB colors in raw processing gets stuck at the edges of the image, where there are no neighboring cell values to make guesses from. The difficult solution is to have two algorithms in raw processing, of which one is specific to the edge problem. The other solution is to have a sensor with more columns and rows at the edges and record their data but never allow these columns and rows into the displayed image - they only serve as an aid in RGB guessing. Well, throw in a couple more, and these rows and columns can also be used in IBIS without moving the sensor. If there are enough rows and columns you might only move the sensor, say by 5 photosite units, when you detect a 5-unit shift at image level. Or, I don't expect the IBIS systems to be able to make the single-photosite-unit steps that we would need for RGB shots of 14.14.14 bits natively.
Or, if you make a living in scanning negatives and want to speed up - the promise of the digital camera - then I would look at a Pentax K-something that can do this sensor shift actually. I would not use it anywhere else, by the way.
All this still leaves us with the film grain problem. With increasing resolution the raw file has more grain detail and the software may have difficulties abstracting that grain noise away.
When we stick with the regular Bayer-filter filtered sensor and its raw files, a comparative test is required indeed. And it will be valid until somebody develops a raw processing program for film scans.

jpdj

prefer the sharper grain on the higher res

faiosung

Back when digital cameras were barely more than a glimmer in Kodak's eye, I read an article in one of the photography magazines that said 24 megapixels would be enough to capture all the data that a 35mm slide contains. This was all based on math, of course, because no one had ever seen a 24 megapixel sensor at the time.

GrantSR

Great video Darryl! Really did us all a service with the comparison, and nice production value 👍

pushingfilm

Such a good comparison! Was so looking forward to this.

AnupamSingh_nz

For 35mm, I’ve been really happy to use 24 megapixels. I have an older Nikon D600 that I use exclusively for my 35mm scans so I don’t waste shutter count on my newer cameras and it works great for that purpose. I’m glad to learn that 24 works well for medium format. I have quite a few medium format negatives to convert. Great video. Looking forward to your video on the Kaiser system

ericlarson

Hmm, nice experiment - I have the same setup and have run many of my own.

I think you will find that higher megapixels will excel once you print (if that is your target output). A 6x7 negative, for example, can achieve 1:1 optical resolution at 300dpi all the way up to 30”x45” which is equivalent to the 1:1 output of a 63 megapixel camera sensor. A 24 megapixel sensor is only capable of rendering a 16x24 print at the same (optimal) resolution, so capturing a 6x7 negative with a 24 megapixel sensor will limit your output options. There are some great resources for this online. I keep a chart at my desk.
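For anyone rebuilding such a chart, the underlying print arithmetic can be sketched like this (300 dpi is one common target; the 6000 x 4000 sensor below is illustrative, roughly a 24 MP camera):

```python
# Megapixels required to print a given size at a given resolution (dpi),
# and the largest print a given sensor can make at that resolution.
def required_megapixels(width_in: float, height_in: float, dpi: int = 300) -> float:
    return (width_in * dpi) * (height_in * dpi) / 1e6

def max_print_size(sensor_w_px: int, sensor_h_px: int, dpi: int = 300):
    return sensor_w_px / dpi, sensor_h_px / dpi  # inches

print(required_megapixels(16, 24))   # megapixels needed for a 16" x 24" print at 300 dpi
print(max_print_size(6000, 4000))    # largest 300 dpi print from a 6000 x 4000 sensor
```

Exact figures shift with the dpi you target, which is why published charts for the same print sizes do not always agree.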

Enjoyed the video! You have a new subscriber .

groovejunky

Really interesting to see these comparisons, thank you. I do disagree on the grain comparison though - I felt the one on the left (50MP) was more pleasing, and the grain to me looks smoother and more "shaped" than the grain in the right photo, which looked more pixellated to me (although I'm looking at it through a compressed video). Guess it all boils down to testing for yourself and seeing what your own preference is.

JimTheEngineer

Thanks for a great comparison!
My 18 megapixel aps-c sensor does a great job for me, lots of resolution for my photos!
Have a good week!

SinaFarhat

This is an amazing video! Would be interesting to test how big of a difference there’s between a (24Mp in my case) full frame and an (24Mp) apsc cameras. Great work nonetheless, as always!

ubaldosaracco

I’ve always had this question, ever since the lab I use rejected 2 photos I wanted to blow up to 36” x 36” and 60” x 60”. So I’ve been testing a small, inexpensive digital camera with a resolution of 48MP. With the high-megapixel sensor, you can blow the picture up to larger sizes with even greater sharpness and clarity. I think 24MP, and even 12MP, is fine for up to an 8” x 10” print. If you want larger prints, like poster size and so forth, the increased resolution of 48MP or 50MP really pays off. It’s really all about what size of print you want to end up with.

Quark.Lepton

More is less! This video confirmed my thoughts about high-resolution cameras and scanners used for film scanning. Years ago I began scanning my 120 black and white negs with a Nikon 8000 dedicated film scanner - and I hated the results!!! The grain structure was overly sharp and drew way too much attention. Then during the pandemic I rescanned them all with a Canon 5D Mark IV (30 megapixels) and loved the results - slightly softer and way more natural, and they print beautifully :)

FlyingOrbProductions

Really enjoyed this. I was looking at getting a high resolution camera for scanning negatives recently. You’ve just saved me a lot of money!

kieranplaymusic

I have boxes of my grandfather's negatives to scan and I've been wondering if my 26MP camera is enough, now I know! Thank you!

Flburr

Man I’m sooo glad I just found this video, I shoot with an X-Pro3 (26MP) as well as an M6 and a Hassy 501cm and just ordered the 80mm fuji macro to “scan” my film negatives, thank you for this Awesome experiment.

MARKLINMAN

A high resolution 35mm negative, properly exposed and developed, contains around 12~16 MP of data. A 24 MP digital camera is more than adequate to extract all the information recorded on the negative. The situation is different if one is digitizing MF negs. Here a different technique is required.

lensman

To me a far more interesting comparison is: How does a wet darkroom PRINT of say, 16x20, from a medium format negative compare with a 16x20 PRINT produced from a scanned medium format negative?

buskman

Dynamic range is a big factor, too. Digital cameras are getting better, but I'd be curious to see how they compare with top-end flatbed and drum scanners specifically on dynamic range.

lrochfort