> You don't need an APS-C or larger sensor to get decent images.
I don't have the space or time to debate the merits here, but there is a reason they exist: there are lots of things you get from a large sensor (including a different aesthetic and better SNR for low-light images), and I want those things with a hackable interface and programmatic control of whatever the sensor is capable of.
I've been doing photography with full-frame sensors for a decade after upgrading from APS-C, and telling me "you don't need an APS-C camera" without understanding why I use a full-frame camera, or the work I produce with it, isn't really helpful.
> even mid-range DSLRs might have 20MP sensors or more.
This is actually a funny thing. The APS-C Canon 90D has a slightly higher resolution sensor than the full frame 5Dmk4, and significantly higher than the brand new, also full frame, 1DXmk3. Sony’s APS-C bodies are all 24MP, the same as the a7iii and the flagship a9/a9ii.
Other than dedicated high-res models (5DS, a7R, D850/Z7), high-end cameras routinely have much lower pixel density than cheaper ones, because resolution doesn't matter that much for most applications, and pros care about a whole bunch of other features that scale poorly with resolution (such as low-light performance and buffer drain speed).
> Thanks to all the CMOS sensor advancements, we now have cameras that can produce amazing dynamic range and very little noise
I shoot with a pretty recent APS-C Nikon D7500 and good glass these days, and I can tell you that its dynamic range is still nothing compared to what a new (or even older) iPhone can get by bracketing a shot at 960fps and stacking it automatically for you. I would happily sacrifice half its resolution for the kind of dynamic range you effortlessly get with an iPhone.
I had a good Fuji mirrorless X-T2, and its low-latency electronic viewfinder sucked the battery dry in an hour and lagged + looked like shit in low light.
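For what it's worth, the bracket-and-stack trick is easy to reproduce offline with a few lines of numpy. This is a toy Mertens-style exposure fusion run on a synthetic bracket, not what any phone actually ships; all the values here are made up for illustration:

```python
import numpy as np

def fuse_exposures(frames):
    """Toy exposure fusion: weight each pixel by how close it is to
    mid-gray (well-exposedness), then blend the weighted frames."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-12
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Synthetic high-dynamic-range scene bracketed at roughly -2/0/+2 EV,
# each exposure clipping to the sensor's 0..1 range
scene = np.linspace(0.0, 4.0, 64).reshape(8, 8)
bracket = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.25, 1.0, 4.0)]
fused = fuse_exposures(bracket)
```

The fused frame keeps detail both where the long exposure clipped and where the short one crushed shadows; real pipelines add frame alignment (which is where the high-fps burst helps) and tone mapping on top.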
>> I just said people spend an insane amount of time arguing who is a "real" photographer because of that difference. Like "you cannot possibly shoot wedding on APS-C!!!" until it became the most popular format for weddings, and now it is "you cannot possibly shoot wedding with m43!!!"
The thing is that you can say this for just about anything where someone can take themselves a little too seriously, whether it be cameras, coffee, wine, vinyl records, operating systems or programming languages.
Myself, I buy cameras that suit my shooting use cases, and more often than not, a small sensor is perfectly fine. I went from using an APS-C sensor to using 1" and 2/3" sensors, and have been perfectly happy with my transition, although I'm sure some camera purists would scoff at my choices.
> Better signal processing, amplification, read noise and probably lot more that I can’t even imagine.
Yeah, I get that, but I don't think there is an improvement here by sticking a modern smartphone CPU and OS into a camera.
Making improvements to those metrics ultimately requires newer sensors with lower read noise and better on-chip ADCs. The camera manufacturers can't do 24-bit RAWs @ 20 fps when the sensor only outputs 12 bits @ 10 fps.
If we are doing multiple exposures, then as you know, cameras already have exposure bracketing, and I can do the rest on my PC.
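To put rough numbers on that readout ceiling: sensor throughput scales as pixels × bit depth × frame rate, so doubling both bit depth and frame rate quadruples the data the sensor has to move off-chip. A quick sketch (the 20MP count is an illustrative assumption, not any specific sensor):

```python
def sensor_throughput_gbps(megapixels: float, bits: int, fps: int) -> float:
    """Raw readout bandwidth in gigabits per second."""
    return megapixels * 1e6 * bits * fps / 1e9

# A hypothetical 20MP sensor: today's 12-bit @ 10 fps readout vs the
# wished-for 24-bit @ 20 fps
current = sensor_throughput_gbps(20, 12, 10)  # 2.4 Gbit/s
wanted = sensor_throughput_gbps(20, 24, 20)   # 9.6 Gbit/s
print(f"{wanted / current:.0f}x more off-chip bandwidth needed")
```

That 4x jump has to come from new silicon, not from a faster CPU bolted on behind the sensor.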
> The amount of light that’s let in depends entirely on the aperture diameter, not the size of the sensor.
That’s wrong. The size of the sensor, the aperture size, and of course the distance between the two are all factors that together determine this.
Saying the sensor size alone is the reason is also wrong, but the size of the sensor is a factor in the equation - and the sensor being so small forces manufacturers to go with wide-open apertures. That's not ideal for every shot.
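To make the relationship concrete: the f-number sets the illuminance per unit of sensor area, and the sensor area then scales how much total light gets collected. A toy calculation (the sensor dimensions are round-number assumptions, not any particular camera or phone):

```python
# Illuminance on the sensor ~ 1 / f_number^2, so total light collected
# ~ sensor_area / f_number^2. This is why "equivalent aperture" is the
# f-number multiplied by the crop factor.

FULL_FRAME_AREA = 36.0 * 24.0    # mm^2
PHONE_SENSOR_AREA = 6.4 * 4.8    # mm^2, roughly a 1/2.55" sensor

def relative_light(f_number: float, sensor_area: float) -> float:
    return sensor_area / f_number ** 2

ff = relative_light(2.8, FULL_FRAME_AREA)      # full frame at f/2.8
phone = relative_light(1.8, PHONE_SENSOR_AREA) # phone at f/1.8
print(f"full frame f/2.8 gathers {ff / phone:.1f}x the total light")
```

Even stopped down to f/2.8, the full-frame sensor collects an order of magnitude more total light than the phone wide open, which is the SNR gap that stacking tries to paper over.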
> Even with a wide aperture lens and 400 ISO film
If I was shooting at night, why would I use ISO 400?
> You can drive yourself insane trying to get a 4x5 negative that doesn't have any uneven development or scratches.
Sounds like a good time for a hobbyist.
> Thus, it annoys me when people compare these incredible devices unfavourably with "real cameras". It's pure gatekeeping.
I’m not the one gatekeeping. Cell phone cameras are real cameras. They’re just different.
You say it’s good for general shooting, I’m talking about professional and hobbyist use.
I will say, though, that as far as I can tell it's just an interesting fact: digital cameras are still behind, or are only just hitting parity, in terms of dynamic range.
Lenses for full frame cameras are super cheap -- you can find tons of old Russian, Japanese, and East German lenses that will work really well. Many of those lenses are built like tanks and can be had for <$100, some <$50. Most of them produce very nice images and aesthetically have much better look than what I see out of these CCTV lenses for the Pi HQ camera. CCTV lenses were never designed for art, and among other things produce horrible out-of-focus highlights.
> The image processing code
Well yes, that's also the point: with an open-source APS-C or full-frame camera you can tinker to your heart's content with the image processing code.
I use Magic Lantern extensively and there's only so much you can do with it, and it's a pain in the ass to recompile code for it. Having a full-fledged Linux system with gcc, opencv, python, and pytorch at my disposal on camera, and with Wi-Fi, Bluetooth, USB, and running an SSH server, and the ability to connect arbitrary I2C and SPI sensors, would be freaking amazing, to say the least.
Wildlife camera with thermal camera trigger and a neural net that recognizes mountain lions? You got it.
LIDAR-based insanely accurate servo-driven autofocus? You got it.
Microphone array that figures out who in the picture is talking and refocuses the camera to that person? You got it.
Home-made Alt-Az tracker with built-in autoguider and remote Wi-Fi progress monitoring? You got it.
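As a taste of how simple the trigger half of the wildlife-camera idea could be, here is a minimal frame-differencing sketch in plain numpy. A real build would feed the changed region to the mountain-lion classifier before firing the shutter; the thresholds and frames here are made up for illustration:

```python
import numpy as np

def motion_trigger(prev, curr, pixel_thresh=0.05, min_fraction=0.01):
    """Return True when enough of the frame changed between captures.
    A crude stand-in for a neural-net detector: real code would classify
    the changed region before triggering the shutter."""
    diff = np.abs(curr.astype(np.float64) - prev.astype(np.float64))
    changed_fraction = (diff > pixel_thresh).mean()
    return changed_fraction > min_fraction

# Simulated 64x64 grayscale frames: an animal-sized blob enters the scene
prev = np.zeros((64, 64))
curr = prev.copy()
curr[20:40, 20:40] = 0.8
```

On a Pi-class board this loop runs comfortably in real time, leaving the heavy classification to run only on the frames that actually change.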
And if it can be made to work with the Pi, someone will hopefully also make it work with a Jetson Nano or Xavier NX and then voila I could do some neural net processing in real-time on-board. I've been able to blow Canon's in-camera denoising out of the water with state-of-the-art neural nets by postprocessing RAW images, and if I had a Xavier or Nano on-board I could easily put those neural nets in-camera for convenience.
The possibilities are endless, which is why I really want this hardware so much.
> I would like to see computational photography applied to raw images from DSLRs and MILCs with APS-C and larger sensors.
Wash your mouth out with soap! I did not spend five thousand dollars on a D850-based macro rig to have it produce results no better in quality than what I can get from my phone.
>The only reason I might want to upgrade from my Canon 40D is a drastically improved dynamic range, a compacter size and the ability to shoot movies. So far it's all not quite there yet to be worth it, from my point of view.
There are modern sensors with 2-3x the megapixels in much more compact cameras, with 2-3x the high-ISO performance and 2-3 stops more dynamic range. If that's not a drastic improvement, what is?
The idea that the MP race has given us worse cameras seems very prevalent but doesn't seem to be based on fact. The current crop of Sony full-frame sensors shows that very well:
The 36MP sensor is better in all respects than the 24MP one. The 12MP sensor has great high-ISO performance at the cost of resolution, dynamic range and color quality.
>The role of the camera is generally overrated, however there are certain characteristics that you only get with expensive full frame cameras such as the Canon EOS 5D Mark III he appears to be using
The "shallow DOF" can be achieved with a longer lens on a crop-factor camera. And most landscape pictures (such as those) use the hyperfocal distance anyway.
An "expensive full frame" camera will have better "low noise", but only marginally so compared to an APS-C or even a 4/3rds camera with the latest (4-5 year) generation of sensors. We're simply past the point where new cameras give anything beyond marginal returns in ISO utility: not because they don't get better ISO, but because so much sensitivity is useless for most kinds of photography, including most landscape work (plus, it affects color rendition). In any case, in all the history of film photography, celebrated photographs seldom strayed outside something like ISO 800-1600. Not sure why we need 6400+ today, except for pissing contests and/or stalking.
Now, while most of the points are real, they are all marginal returns, and depend so much upon the conditions at the shoot and the skill of the photographer that they might as well not matter at all.
I'd go as far as to say that a $300 APS-C camera used well will take just as good photos as any $5K full frame (yeah, it might lose a couple of stops of DOF; just use a faster lens on it), and even an expensive lens on the latter won't make that much of a difference at any normal print size (at least up to A3).
>> So I fail to see how this camera would outperform a DSLR.
It might be better to look at it with a less narrow perspective. If you're a hardcore photographer, what you are saying is absolutely true.
But a lot of "normals" want DSLR-like quality without bulk.
A lot of people buy DSLRs because they want better quality photos and more flexibility, but after the honeymoon period, they get tired of lugging a big camera with them.
This is probably why Sony's RX100 series has done so well.
"The best camera is the one you have with you" -- that's typically someone's smartphone. If you can give a consumer a camera that size that roughly performs like a DSLR, it will probably make a lot of people happy.
>Making improvements to those metrics ultimately require newer sensors with lower read noise and better on-chip ADCs.
Which deep-pocketed tech companies could do :)
>If we are doing multiple exposures, then as you know, cameras already have exposure bracketing, and I can do the rest in my PC.
Sure, but we have gyroscopes, accelerometers, electronic shutters and 2Ghz processors on this dream device of mine.
Surely we can do better than using a tripod, exposing entire scenes multiple times, and trying to align and blend them on the computer hours later.
>OK, but that seems to be turning the camera into a point-and-shoot
Not at all.
>For me, I would ultimately want to control things like exposure, shutter opening, etc... myself (who is to say a "bad" over/under-exposed photo or some blur won't make for a better photo?).
100% agreed. But if instead of "shit, I missed this shot" I could have three frames taken before I even pressed the shutter, and one of them is perfect, I'd take it in a blink. And that's besides all the pie-in-the-sky stuff that I imagine could be done as well.
>I think people who want a point-and-shoot like experience are by-and-large are happy with smartphone cameras today (especially with advanced processing like Android's Night Sight), and they are not going to carry around a bulky ILC just to take photos.
You're probably right. I don't know if it's a viable market, but one can dream.
> It seems far more likely to me that perceived image quality after in device post processing was similar.
That’s just what they said. The purpose of cameras is to produce images we find pleasing, for a few different values of “pleasing” (recording memories, aesthetics, etc).
Nobody cares about the “how”. Whether it’s a photographer with an MFA doing pixel-by-pixel adjustments on a RAW image or an algorithm in an ISP, nobody cares.
Ok, not nobody, but no casual user, which is 99.99% of the market. For most of us, we take a picture and look at the picture. Insisting that one technology is better even though it produces no user benefit is missing the point.
> The problem with DSLRs/mirrorless is that they're proper tools and just like with a brush and a canvas you have to put time and effort to master them. No amount of computational photography will make you a good photographer and if a modern 2k$ camera is the bottleneck it certainly is more of your fault than the camera.
This is a good analogy, I will have to borrow it.
I have friends who ended up buying expensive DSLRs after they saw my photos taken with an f/1.8 lens. But they had kit lenses and were complaining about their cameras, so I told them to get an f/1.8 lens. Then they were complaining about the lack of zoom. Also, they could not use maximum aperture outside, so I helped them get ND filters. Now they are complaining about that, and they forget to take the ND filters off indoors, get way too many photos incorrectly focused, etc.
Most people expect pro cameras to act just like cell phone cameras.
> There’s nothing inherently interesting about photos with a blurry background
Congrats! You took a bad photo with a shallow depth of field! I guess the photographer does matter more than the equipment.
> Achieving it with software effects is perfectly legitimate
Sure. But if you think software bokeh can compete with any dedicated camera's bokeh (even a vintage 35mm camera, for that matter), then you need to try a dedicated camera with a good prime lens, e.g. a 50mm f/2.
The biggest thing that a dedicated camera can get you is simply options. You can choose the depth of field of the shot. You can choose the shutter speed. You can choose the focal length. Sure you have some control over these things in some apps, but it’s not the same level of control.
I shot on my iPhone for years, but after I got my mirrorless camera I have taken so many pictures that never would have been possible on an iPhone.
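The depth-of-field control mentioned above is where the physics actually bites. Using the standard thin-lens approximation DOF ≈ 2·N·c·s²/f² (valid well inside the hyperfocal distance), here is a rough comparison; the phone numbers are illustrative assumptions, not any specific model:

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate total depth of field (subject well inside the
    hyperfocal distance): DOF ~ 2 * N * c * s^2 / f^2."""
    return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

# Portrait at 2 m: a 50mm f/2 full-frame prime vs a typical phone main
# camera (~6mm real focal length, f/1.8, smaller sensor -> smaller CoC)
dof_prime = depth_of_field_mm(50, 2.0, 2000)              # ~192 mm
dof_phone = depth_of_field_mm(6, 1.8, 2000, coc_mm=0.004)  # ~1600 mm
```

The phone's depth of field at the same framing comes out roughly eight times deeper, which is exactly why phones have to synthesize background blur in software instead of getting it optically.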