Then you should know that tilting is rotation. Unless by rotation you mean specifically around the optical axis to differentiate from pitch and yaw... in which case you should know that such rotation is uncorrectable by optical systems - though it is correctable by sensor-shift systems.
I think the point you're trying to make with "a few degrees has magnitudes more of an impact" is that rotation has more impact than shift. I agree that the rotation caused by moving the lens a few millimeters has more impact than shifting the camera a few millimeters for sufficiently distant subjects. This is why early IS systems were focused on mitigating pitch and yaw as opposed to shifts: moving lens groups to counter millimeters of rotation is within the physical capabilities of a system that can fit within a lens and it gives a lot of bang for the buck in a narrow sweet spot. Rotational movement is most prevalent in hand-held cameras while operating the camera - for example pressing the shutter button - and less so when it's just being held. This is why there was a standard trick in still photography of using bursts of continuous shooting to get clearer captures: the first frame is impacted by sudden pitch but subsequent ones are much less so. OIS does an excellent job of addressing that first frame blur.
Due to inertia, when a camera is attached to a larger object in motion (athlete, surfboard, vehicle, etc.) random rotational movement is much less pronounced while continuous shifting is. Correcting sustained directional movement - be it rotational or shift - is not something OIS systems are designed to handle. They are meant to correct random movement around a mean, essentially they provide an instant mechanical "reversion to the mean" of sorts. Correcting tens of centimeters of shift is well outside their capabilities and shifting a camera tens of centimeters during exposure will absolutely result in noticeable blur. It doesn't remotely require "the fastest possible athlete" to achieve that level of shift during a 1/30s exposure.
The point is not that EIS is better than OIS - the systems serve different purposes. The point is that OIS doesn't bring as much value to action camera use cases and it comes with the significant downside of reduced durability.
Certain Hasselblad cameras have had this for a long time, and Pentax, Olympus, Panasonic, and Sony have it on various models, using the sensor shift image stabilization to implement it.
Rather than a single lens element being shifted to compensate for camera shake, Nokia's OIS system moves the entire optical assembly in perfect synchronisation with the camera movement - or, to be more precise, with unintended camera shake. Nokia's new OIS system can cater for around 50% more movements per second than conventional OIS systems - up to around 500 movements every second. Shutter speeds slower than 1/30th of a second typically result in camera shake. Depending on the amount of camera movement requiring compensation, we've found in testing that shutter speeds as long as 1/4th of a second can be used. This is a 3EV improvement, or an 8x longer shutter speed.
If you know anything about cameras and low light photography, the sensor shift OIS is a HUGE deal, especially combined with an f/1.6 lens. Instant purchase!
OIS systems move a subgroup of lens elements to compensate for movement of the camera. The degree of that movement is very limited. The main purpose of OIS is to approximate a stationary camera platform from a near-stationary one. Many OIS systems get turned off for panning - while more sophisticated ones support being turned off in the panning axis - specifically because they don't handle movement outside of a very limited range. For applications where the camera is intended to be in-motion the appropriate mechanical solution is using a gimbal, not OIS.
The main purpose of gimbals or EIS is not to produce sharper individual frames but to reduce jarring across a sequence of frames by smoothing the camera movement - gimbals do it physically while EIS simulates it after the fact by using a subset of the total sensor area.
At 30fps the longest possible per-frame exposure is 1/30s. An action camera-wearing athlete moving at a leisurely 20mph will travel about a foot in 1/30s - and that's just movement in one axis. OIS is not going to help with per-frame motion blur when the camera moves that much or more during the exposure.
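The "about a foot" figure checks out with simple arithmetic:

```python
MPH_TO_FTPS = 5280 / 3600  # feet per second per mph

speed_mph = 20    # a leisurely pace for an athlete
exposure_s = 1 / 30  # longest possible per-frame exposure at 30fps

shift_ft = speed_mph * MPH_TO_FTPS * exposure_s
print(f"{shift_ft:.2f} ft of travel during one exposure")  # ~0.98 ft
```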
The respective pros and cons of EIS & OIS don't change with light level. EIS works by cropping part of the image which results in a narrower FoV and a lower resolution, however these effects occur regardless of how bright the scene is.
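A minimal numpy sketch of the EIS crop (illustrative only - real EIS uses gyro data and subpixel warping, but the FoV/resolution cost of the reserved margin is the same idea):

```python
import numpy as np

def eis_crop(frame, shift_xy, margin=64):
    """Crop a stabilized window out of the full frame. The margin
    reserved for shifting is lost FoV/resolution regardless of how
    bright the scene is."""
    h, w = frame.shape[:2]
    dx, dy = shift_xy
    # clamp the measured shake to the reserved margin
    dx = int(np.clip(dx, -margin, margin))
    dy = int(np.clip(dy, -margin, margin))
    return frame[margin + dy : h - margin + dy,
                 margin + dx : w - margin + dx]

frame = np.zeros((1080, 1920), dtype=np.uint8)
stab = eis_crop(frame, shift_xy=(10, -5))
print(stab.shape)  # smaller than the sensor, whatever the light level
```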
The additional moving parts of OIS systems make them less suitable for ruggedized cameras.
Pre-OIS Google did this with image stacking which was a ghetto version of a long exposure (stacking many short exposure photos, correcting the offsets via the gyro, was necessary to compensate for inevitable camera shake). There is nothing new or novel about image stacking or long exposures.
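The gyro-aligned stacking described above can be sketched in a few lines (a toy version - `np.roll` wraps at the edges, and real pipelines do subpixel warping and outlier rejection):

```python
import numpy as np

def stack_frames(frames, gyro_offsets_px):
    """Align short exposures using per-frame, gyro-derived pixel
    offsets, then average - approximating one long exposure."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dx, dy) in zip(frames, gyro_offsets_px):
        # undo the shake by rolling each frame back into alignment
        acc += np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
    return acc / len(frames)

rng = np.random.default_rng(0)
base = rng.integers(0, 64, size=(120, 160)).astype(np.float64)
offsets = [(0, 0), (2, -1), (-3, 4)]
# simulate shaken, noisy captures, then re-align and average
frames = [np.roll(np.roll(base, dy, axis=0), dx, axis=1)
          + rng.normal(0, 8, base.shape) for dx, dy in offsets]
result = stack_frames(frames, offsets)
```

Averaging N aligned frames cuts the noise by roughly sqrt(N), which is why the stack approximates a single long exposure.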
What are they doing here? Most likely it's simply enabling OIS and enabling longer exposures than normal (note the smooth motion blur of moving objects, which is nothing more than a long exposure), and then doing noise removal. There are zero camera makers who are flipping their desks over this. It is usually a "pro" hidden feature because in the real world subjects move during long exposure and shooters are just unhappy with the result.
The contrived hype around the Pixel's "computational photography" (which seems more impressive in theory than in practice) has reached an absurd level, and the astroturfing is out of control.
In fact the Olympus implementation of this feature moves the sensor by half a photosite diagonally. They do it by re-purposing the existing in-body stabilization mechanism to move the sensor around.
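To make the half-photosite shift concrete, here's a toy sketch of the sampling geometry (illustrative only - the real Olympus high-res mode captures several frames and merges full RAW data):

```python
import numpy as np

# Photosite centers on a coarse grid, plus a second exposure with the
# sensor shifted half a photosite diagonally: sampling density doubles.
pitch = 1.0  # photosite pitch, arbitrary units
xs, ys = np.meshgrid(np.arange(4) * pitch, np.arange(4) * pitch)
grid1 = np.stack([xs.ravel(), ys.ravel()], axis=1)
grid2 = grid1 + pitch / 2  # diagonal half-photosite shift

samples = np.vstack([grid1, grid2])
print(len(samples))  # twice as many distinct sample positions
```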
User 'twic' posted a link to a very interesting article that describes this and also explains the difference between photosites and pixels:
On the one hand that's correct. On the other hand you could completely avoid that if you did long exposures the way a Google Pixel (2+ I think) does it (in software):
Take lots of short exposures and fuse them. If the device is handheld you get variability in positioning for free, if it's on a tripod, it will automatically wiggle the OIS slightly to achieve the same effect.
Most cameras these days--other than the low end and the very high end--have IBIS (In-Body Image Stabilization) built in, plus stabilization built into the longer lenses (typically > 100mm). In higher-end/more recent cameras, the IBIS and lens stabilization can work together to improve how effectively the system works. I don't know if it's universally true, but the recent cameras in Nikon's ecosystem, which I'm familiar with, use a 5-axis IBIS unit. A quick search suggests the K-1 II and some other Pentax cameras also moved to 5-axis IBIS--probably one of the reasons most brands are claiming 5+ stops in-body these days.
The OM-1, from OM System (formerly Olympus), has some very cool tricks using the tiny Micro 4/3 sensor combined with a sick IBIS unit, allowing hand-held astrophotography that the larger companies haven't bothered with.
Where did I discount EIS? EIS+OIS is a golden solution. It's what the Pixel 3 does. It's what the iPhone 8 does. It's what the Huawei P20 does.
This all gets very reductionist, but EIS over a series of bursts is a bad alternative to OIS. It will be garbage in->garbage out. EIS with OIS, however, gives you the benefits of OIS, with the safety valve and "time travelling" effect of EIS (in that it can correct where OIS made the wrong presumption, like the beginning of a pan).
>and even using something like Olympus' top range 5-axis IBIS
The ability of OIS to counter movement is a function of the focal length. Your Olympus probably has a 75mm equivalent or higher lens, where a small degree of movement is a large subject skew. That smartphone probably has a 26-28mm equivalent lens. Small degrees of movement are much more correctable.
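The focal-length dependence is easy to quantify: for a distant subject, an angular shake of theta moves the image by roughly f * tan(theta). Using the two (35mm-equivalent) focal lengths above and an assumed 0.5-degree shake:

```python
import math

def image_shift_mm(focal_mm, shake_deg):
    """Image-plane displacement caused by an angular shake,
    for a distant subject: shift ~= f * tan(theta)."""
    return focal_mm * math.tan(math.radians(shake_deg))

# the same 0.5 degree shake at a telephoto vs. a smartphone-wide lens
for f in (75, 27):
    print(f"{f}mm: {image_shift_mm(f, 0.5):.3f} mm on the image plane")
```

The displacement scales linearly with focal length, so the 75mm lens sees nearly 3x the image shift of the 27mm lens for the same shake.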
EIS is brilliant. OIS is better for small movements, but add EIS and it's great. Computational photography is brilliant. However Google has really, really been pouring out the snake oil for their Pixel line.
Shift shifts which part of the image you are implicitly cropping to. (Every photo is a crop: you’re taking a circular image plane and cropping a rectangle out of it.) Tilt tilts the focal plane.
I occasionally wish someone would make a camera with a circular sensor that captures the entire usable field of view from the lens. If nothing else, a professional camera like this could skip the extra side grip and save some bulk because there would be no need to ever hold the camera sideways.
Pentax cameras take a different approach to stabilization--rather than stabilizing inside the lens, which means every lens ships its own stabilization solution, they stabilize the sensor itself.
It limits stabilization to two axes, but now any lens is essentially stabilized. And it also lets them do some tricks, since it's so integrated. One is to do sub-pixel sensor shifts for higher res photos, and another is to do astrophotography tracking when GPS data is available.
I can see your point: this basically just looks like long exposures, or stacked exposures, which is basically the same thing of letting more photons hit the sensor, aligned using OIS.
Any thoughts on why Apple, as the other leading phone maker with a heavy emphasis on camera quality, has not implemented anything like it? Not to discount the difficulty, but OIS aligned long exposures kind of seems like low hanging fruit. Instead, they keep trying to open the aperture more.