At what point do you start seeing diminishing returns on pixel count with such a small sensor? (Or can you always get value if you're willing to bump up your exposure time?)
I guess if you don't mind reducing the resolution, you can e.g. average 4 pixels to reduce the noise (but I don't know if that gives you a lower noise floor than just a 16MP sensor in the first place...)
You would actually have more noise, or at least, not less noise. There is readout noise associated with reading the information out from each pixel. Using one bigger pixel would give you less noise than four smaller ones :)
The summed value scales linearly with the pixel count but the combined readout noise only with the square root of the pixel count.
Though, there are indeed other reasons to prefer a 16MP sensor in the first place (most notably, dynamic range, but also allowing slower readout).
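To make the quadrature point concrete, here's a toy Python comparison (all numbers invented; it just illustrates why four readouts are noisier than one):

    import math

    # One big pixel vs. a 2x2 bin of small pixels covering the same area.
    # Shot noise is sqrt(signal); each readout adds its own read noise.
    signal = 1000.0    # photoelectrons collected over the big-pixel area
    read_noise = 3.0   # e- RMS per readout (assumed)

    # One big pixel: a single readout.
    snr_big = signal / math.sqrt(signal + read_noise**2)

    # Four small pixels summed after readout: same total signal, but the
    # read noise adds in quadrature -> variance is 4 * read_noise**2.
    snr_binned = signal / math.sqrt(signal + 4 * read_noise**2)

    print(f"big pixel SNR:  {snr_big:.1f}")   # slightly higher
    print(f"2x2 binned SNR: {snr_binned:.1f}")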
It’s mostly a problem of pixel size. The smaller the pixels, the less light each pixel gets. That means really terrible low-light performance. And no software can magically make that as good as a pixel 4x the size.
The only advantage here is that it lets you get away with more crimes, like cropping and digital zoom, in brighter light. That may be an advantage depending on the application.
I’d rather have a 12MP sensor twice that size with some glass in front though for most applications. Far more useful.
Since it sounds like you're familiar with the field, any chance you've come across a CS- or C-mount lens with an autofocus motor? I've looked around a bunch and haven't had much luck. The current solution is a full C-mount-to-Canon-EF-mount adapter that has drive electronics in it, but it's eye-wateringly expensive and huge & heavy.
Never had a reason to use them, but I do recall that Edmund Optics has some C-mount units with liquid lenses in their catalog. The driver modules will allow control over USB or I2C.
I've used them in some industrial vision applications with Dalsa cameras. They worked fine, though I would have preferred a fixed focus and a manual adjustment when the requirements changed (it was much more difficult to get good real-unit measurements and to compensate for distortion).
Awesome, good to know! Yeah, the EO Liquid ones came up in my searches too, but I'm completely unfamiliar with that tech (coming from a conventional DSLR world). Thanks!
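In case it helps anyone else going down this path: driving one of those over I2C from a Pi should reduce to something like this smbus2 sketch. The device address and register below are made-up placeholders; check the actual driver module's datasheet:

    from smbus2 import SMBus

    LENS_ADDR = 0x18   # hypothetical 7-bit I2C address
    FOCUS_REG = 0x02   # hypothetical 16-bit focus-power register

    def set_focus_power(value):
        # Write a 16-bit focus setpoint to the (hypothetical) lens driver.
        with SMBus(1) as bus:  # I2C bus 1 on a Raspberry Pi
            bus.write_word_data(LENS_ADDR, FOCUS_REG, value & 0xFFFF)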
My original Canon Digital Rebel XT, I think, is 6MP, and its output can be printed poster-size with no visible degradation at any normal viewing distance (like six inches or farther).
So yeah, I'll take some good glass and literally any sensor post-2005 (when I paid $1100 for that 6MP)
If you use the Rayleigh criterion to calculate the maximum angular resolution from the aperture (θ = 1.22 * λ / d), taking green light for the wavelength (and focal length / focal ratio to get the aperture diameter), and then spread that angular resolution across the stated 84-degree diagonal field of view, you find the lens can only resolve about half as many points as the sensor has pixels. Combined with lens imperfections and other sources of distortion (as opposed to the other commenters talking about exposure times), the extra pixels are probably not adding any new information, or are right at the point where the returns from extra pixels are more or less zero.
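If anyone wants to sanity-check the arithmetic, here's an equivalent back-of-envelope version on the image plane (Rayleigh spot size vs. pixel pitch). The f/1.8 focal ratio and 550nm wavelength are my assumptions; plug in the real lens spec:

    # Rayleigh-criterion spot size on the image plane vs. pixel pitch.
    wavelength = 550e-9   # green light, metres (assumed)
    f_number = 1.8        # assumed; use the actual lens spec

    airy_radius = 1.22 * wavelength * f_number   # smallest resolvable spot
    pixel_pitch = 7.4e-3 / 9150                  # ~0.8 um on this sensor

    print(f"diffraction spot: {airy_radius * 1e6:.2f} um")  # ~1.2 um
    print(f"pixel pitch:      {pixel_pitch * 1e6:.2f} um")  # ~0.8 um
    # Pixels smaller than the spot mostly oversample the lens blur.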
>can you always get value if you're willing to bump up your exposure time
No, there are hard limits on how much extra information you can get with more exposure time.
What about pixel size compared to light wavelength?
If my maths is right, the 7.4mm sensor side divided by 9,150 pixels gives a pixel pitch of about 800nm. Visible light is 400-700nm. And presumably not all of the pixel is actually light-sensitive.
Is this a valid comparison though? I’m not good at quantum...
The Rayleigh criterion above is exactly what is used to answer the question you're asking :) it tells you what size objects you can resolve with a given wavelength of light, which, thought of another way, also tells you the smallest useful pixel size. Two smaller pixels next to one another would be able to distinguish no more information than a combined one.
Actually, that's not true: you can resolve beyond the Rayleigh limit. However, doing so requires deconvolution of the diffraction pattern, which scales with wavelength, so it's only practical if you have narrowband radiation (it stops working at 10~20% FWHM bandwidth).
Suitable sources include monochromatic LEDs, most lasers, and some gas discharge lamps (notably low pressure sodium).
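Toy demonstration of the idea, if anyone's curious: blur a synthetic scene with a known PSF, then run Richardson-Lucy deconvolution (scikit-image). The Gaussian PSF here is just a stand-in for a real Airy pattern:

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage import restoration

    rng = np.random.default_rng(0)
    scene = np.zeros((128, 128))
    scene[::16, ::16] = 1.0   # grid of point sources

    # Gaussian stand-in for the (narrowband, hence fixed) diffraction PSF.
    x = np.arange(-7, 8)
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
    psf /= psf.sum()

    blurred = fftconvolve(scene, psf, mode="same")
    blurred = np.clip(blurred + rng.normal(0, 1e-4, blurred.shape), 0, None)

    # num_iter in recent scikit-image (older releases call it iterations).
    restored = restoration.richardson_lucy(blurred, psf, num_iter=30)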
Nice, I'm right now working on a Raspberry Pi camera solution to create a timelapse of something I'm building that will take about 6 months. I have the original camera module (Raspberry Pi camera rev 1.3), but I'd love to try this 64MP one out someday.
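A minimal picamera2 loop is probably all the timelapse itself needs; untested sketch, and the interval and output path are placeholders:

    import time
    from picamera2 import Picamera2

    picam2 = Picamera2()
    picam2.configure(picam2.create_still_configuration())
    picam2.start()
    time.sleep(2)  # let exposure and white balance settle

    frame = 0
    while True:
        picam2.capture_file(f"/home/pi/timelapse/frame_{frame:06d}.jpg")
        frame += 1
        time.sleep(600)  # one frame every 10 minutes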
The sensor will have a Bayer pattern [1] and the data will go across the bus in Bayer format. The processor will then perform debayering. So no need to multiply by 3 channels per pixel :)
While many image sensors will perform debayering for you if you want, doing so means you need 3x the bandwidth for not much benefit, so very few people do it.
No - go check out that Wikipedia link about Bayer filters.
For the vast majority of cameras, if you buy a "4 megapixel" image sensor, you get 1 million red pixels, 2 million green pixels, and 1 million blue pixels.
When you load the image into your image editor and see 3 colours per pixel, 2 of the 3 are interpolated from neighbouring pixels.
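Quick numpy illustration of that layout, if it helps (schematic "4 megapixel" RGGB mosaic):

    import numpy as np

    h, w = 2000, 2000   # a schematic 4 MP mosaic
    bayer = np.empty((h, w), dtype="<U1")
    bayer[0::2, 0::2] = "R"
    bayer[0::2, 1::2] = "G"
    bayer[1::2, 0::2] = "G"
    bayer[1::2, 1::2] = "B"

    for c in "RGB":
        print(c, (bayer == c).sum())
    # R 1000000, G 2000000, B 1000000: one colour sample per photosite.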
Well, you still end up with less sensor surface area with 4x as many pixels on the same substrate, because there’s a gap around each cell and the microlenses are smaller. So you can aggregate them, but you’re still getting less light.
There’s a happy medium somewhere on density vs. performance. The iPhone 13 Pro seems to have it about right for a “reasonable compromise”.
This really depends on whether the packing ratio changes with smaller pixels, or whether there are nonlinear effects between light collection and pixel size.
The classic geometry problem here is maximum circle packing: regardless of the size of identical circles, the maximum area they can cover in a plane is ~90.7%. So if the "wasted" space on a sensor is proportional to pixel size, you don't lose any collected light by combining smaller pixels vs. having one larger pixel covering the same area as n smaller ones.
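For reference, the ~90.7% figure is the hexagonal circle-packing density, pi / (2 * sqrt(3)), which is indeed independent of the circle radius:

    import math

    print(math.pi / (2 * math.sqrt(3)))  # 0.9068996...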
> Turning your Raspberry Pi into a DSLR-like camera.
This is stretching the truth. The sample images show the low image quality I expected from a cheap tiny sensor and lens: zooming in just gives you noise instead of detail. It's like the cheap "4k" IP CCTV cameras I have at home, which aren't much sharper than the HD 1920x1080 cameras I had before.
Though perhaps hacks like sensor cooling to reduce noise, and replacing the cheap optics with quality glass, could improve things.
>"Future Raspberry Pi board may be able to take advantage of the higher video resolution and framerate enabled by the new camera module."
This is interesting; the community has been asking for more powerful Raspberry Pi models, with more PCIe lanes too. Models like that would be amazing for building NAS boxes, routers, and IoT devices.
I’ve always wanted to run a small Pi box as a router with pfsense. Right now my only other option for a small, all-in-one, low-power pfsense box is maybe an Intel NUC?
For pfsense you'd really do a lot better with a small x86-64 system (mini-itx motherboard) that has four or more gigabit ethernet ports with Intel chipset onboard, attached to PCI-E 3.0 bus. In a small mini-itx case it would definitely be a lot bigger than a raspberry pi but also a great deal more capable.
Depending on your needs you can also get systems that have several 10GbE SFP+ cages on-motherboard.
If you search "mini firewall appliance" or "pfsense router" on your favorite marketplaces, you'll get results for custom Intel hardware that has everything you need in a package smaller than mITX.
If you're looking for something cheaper than a Pi, try finding used thin clients, which are basically x86 industrial PCs. With a managed switch behind one you can use separate VLANs to run pfsense with both WAN and LAN on a single NIC. Obviously this reduces speed on the WAN-LAN path due to the single shared NIC, but if you're considering a Pi anyway, the networking performance of a thin client may be adequate for your needs.
Really wish they had put a C-mount for lenses on it. Also, every single Raspberry Pi CSI connector or camera I have (in the tens now) has broken under light handling.
I want to use this in a situation where I can control the focal distance (a microscope). In principle you could even make an autofocus system with a zoom lens and a servo, although the engineering is definitely non-trivial; see the sketch below.
Is there a standard mount (other than DSLRs) that supports autofocus?
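For the zoom-lens-plus-servo idea, the software half (contrast-detect autofocus) is simple enough to sketch with OpenCV; set_focus() below is a hypothetical stand-in for whatever drives your servo or lens, and the step range is arbitrary:

    import cv2

    def sharpness(frame):
        # Variance of the Laplacian: a standard focus metric.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def autofocus(camera, set_focus, steps=range(0, 1024, 32)):
        best_pos, best_score = None, -1.0
        for pos in steps:
            set_focus(pos)             # hypothetical servo/lens driver
            ok, frame = camera.read()  # e.g. a cv2.VideoCapture
            if not ok:
                continue
            score = sharpness(frame)
            if score > best_score:
                best_pos, best_score = pos, score
        if best_pos is not None:
            set_focus(best_pos)
        return best_pos

The mechanical half (backlash-free focus drive) is where the real engineering lives, as you say.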