I'm speaking as a colorist working in film/television here. TV's already have enough issues destroying the look we carefully craft with "features" like dynamic lighting, motion smoothing, vivid mode, etc. A huge color shift is going to throw this off even more and dramatically change the look and tone to the detriment of the work. This is great for reading text on a screen, but should be disabled for viewing media - you wouldn't look at the Mona Lisa with amber glasses.
Cyanogen runs one of the nodes. I'm not sure where Whisper Systems receives their funding from. At one point Twitter was involved with them, though I believe that is no longer the case.
It's mostly developers posting here, but I thought I'd reach out from the post production world.
I'm a colorist with lots of experience in music videos, commercials, and short- and long-form narrative. Experience with major brands/networks/artists (MTV, ESPN, BET, Coke, Sony, Chevy, Janelle Monae, Diplo, Travis Porter, etc). Knowledge of Alexa, Epic, F65, etc. Fully calibrated DaVinci suite with RAIDs, LTO, etc.
That's a valid concern, though you can root, install and run the app, and then unroot. It's not a good solution, but it is a solution until Android L builds in the ability to use different passwords.
I'm a colorist. I spend all day looking at color intensely with very expensive monitors. It makes me really excited to see people who actually care about color reproduction over things like resolution.
With that said, I have to ask why these groups are interested in 10-bit when I'm essentially certain they cannot view in 10-bit. Only workstation GPUs (Quadro and FirePro) output 10-bit (consumer GPUs are intentionally crippled to 8-bit), and I can't really think of any monitors with true 10-bit panels under about $1,000 (though there are many with 10-bit processing, which is nice but doesn't get you to 10-bit monitoring). There are some output boxes intended for video production that can get around the GPU problem, but by the time you've built a full 10-bit environment you're at $1,500 bare minimum, which seems excessive for most consumer consumption.
So I guess what I'm asking is: are these groups interested in 10-bit because it's nominally better and more desirable (a placebo quality effect), or are they actively watching these in full 10-bit environments?
This is the same reason we try to keep post production workflows at 10-bit or better (my programs are all 32-bit floating point). A lot of cameras are capable of 16-bit internal processing but are limited to 8- or 10-bit for encoding (outside of some raw solutions). An ideal workflow is that raw codec (though it's often a 10-bit file instead of raw) going straight to color (me working at 32-bit), and then I deliver at 10-bit, from which 8-bit final delivery files are generated (outside of theatrical releases, which work off 16-bit files and incidentally use 24-bit audio). So all that makes sense to me.
I was mostly curious why people were converting what I assume are 8-bit files into 10-bit. The responses below about the bandwidth savings and/or quality increase in that final compressed version seem to be what I was missing!
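For anyone curious about the underlying arithmetic, the banding argument comes down to how many code values each bit depth gives you. A quick illustrative sketch (using a full-range 0.0-1.0 scale for simplicity; real video typically uses limited-range levels):

```python
# How many discrete levels each bit depth provides, and the smallest
# brightness step it can represent on a normalized 0.0-1.0 scale.

def code_values(bits: int) -> int:
    """Total integer levels available at a given bit depth."""
    return 2 ** bits

def step_size(bits: int) -> float:
    """Smallest representable step on a 0.0-1.0 scale."""
    return 1.0 / (code_values(bits) - 1)

for bits in (8, 10, 12, 16):
    print(f"{bits:2d}-bit: {code_values(bits):>6,} levels, "
          f"step = {step_size(bits):.2e}")
# 10-bit gives 4x the levels of 8-bit, so gradients quantize to much
# finer steps -- which is where the reduced banding comes from.
```

Going from 8-bit (256 levels) to 10-bit (1,024 levels) shrinks each quantization step to a quarter of its size, which is also why a 10-bit encode can spend fewer bits fighting banding in smooth gradients.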
I can't find any details on the Crossover, but the Achieva is an 8-bit panel with FRC to attain effective 10-bit. It's not true 10-bit, and I wouldn't recommend using it in a professional setting (the benchmark being a true 10-bit panel with usually 12-bit processing, such as Flanders Scientific http://www.flandersscientific.com/index/cm171.php).
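For what it's worth, FRC works by temporal dithering: the 8-bit panel alternates between two adjacent native levels so the average over a few frames approximates the in-between 10-bit value. A toy sketch of the idea (the fixed 4-frame cycle is an assumption for illustration; real FRC implementations vary):

```python
# Toy model of frame-rate control (FRC): approximating a 10-bit level
# on an 8-bit panel by alternating adjacent 8-bit levels over time.

def frc_sequence(level_10bit: int, frames: int = 4) -> list[int]:
    """8-bit levels shown over `frames` frames for one 10-bit target."""
    base = level_10bit // 4       # nearest 8-bit level below (1024 / 256 = 4)
    remainder = level_10bit % 4   # how far between the two 8-bit levels
    # Show the higher level on `remainder` out of every 4 frames.
    return [base + 1 if i < remainder else base for i in range(frames)]

seq = frc_sequence(513)              # a 10-bit value with no exact 8-bit match
print(seq)                           # [129, 128, 128, 128]
print(sum(seq) / len(seq) * 4)       # temporal average in 10-bit terms: 513.0
```

The average hits the target, but each individual frame is still only 8-bit, which is why it doesn't count as true 10-bit monitoring for color critical work.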
And while NVIDIA does output 10-bit on GeForce, it's limited to
> full screen Direct X surface
which means no OpenGL (a must with pro apps). I suppose it might be possible to use a DirectX decoder in your video player to output 10-bit video, but I haven't heard of anyone doing it or tried it myself.
All that said, I was originally talking about consumer facing 10bit, so the monitors are probably a valid point. As someone who cares about color reproduction, I hope to see more of that - especially as we move towards rec2020.
Of course this argument falls apart with the many truly free OS's that would work perfectly well in the general market (as long as you didn't tell consumers they were using GNU/Linux and made it look the same as their Windows box).
I agree with you that "free" as a business model needs to go in one direction or the other: truly free, as with my Fedora install (ignoring some binaries), or pay for privacy.
The problem with pay lies in the search for more revenue. Cable television used to be ad free. There wasn't a need to advertise since they were already getting subscription money - now look at it. The temptation to add easy profit is too great for many businesses (which is why we now have targeted ads within an OS we already quasi-paid for).
Meanwhile Red Hat isn't doing too bad with their blend of paid and free.
None of what I'm saying is particularly well written or fleshed out after a long day, but this is also something I think about a lot.
Worth noting that those "HDR" televisions are still only Rec709, which is limited to about 6 stops of dynamic range. Sure, some are approaching the Rec2020 color gamut, but we're still cramming all that gamma down into 6-9 stops. Even a consumer DSLR can shoot 12.
I work in post and have moved our workflow over to the Academy Color Encoding System (ACES), which supports up to 27 stops (more than the eye can see, which is about 24 best case). ACES allows us to dynamically change our output to Rec709, Rec2020, P3, etc., and while 2020 has better colors, it's definitely still a highly compressed image (in terms of dynamic range).
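To make the stop counts above concrete: a stop is a doubling of light, so dynamic range in stops is just log2 of the usable contrast ratio. A quick sketch (the specific ratios are illustrative examples, not measured values):

```python
# Stops of dynamic range <-> contrast ratio: each stop is a doubling,
# so stops = log2(max luminance / min luminance).
import math

def stops(contrast_ratio: float) -> float:
    """Dynamic range in photographic stops for a given contrast ratio."""
    return math.log2(contrast_ratio)

def ratio(n_stops: float) -> float:
    """Contrast ratio needed to represent a given number of stops."""
    return 2.0 ** n_stops

print(f"{stops(1000):.1f} stops in a 1000:1 display")      # ~10.0 stops
print(f"{ratio(12):,.0f}:1 needed for a 12-stop DSLR")     # 4,096:1
print(f"{ratio(24):,.0f}:1 for ~24 stops (eye, best case)")
```

This is why the gap between a 6-9 stop display and a 24+ stop scene is so brutal: every extra stop doubles the contrast ratio the display would need to reproduce.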
I don't work with much HDR so I can't help you there.
As for file formats, it varies entirely by show. Higher end, fx heavy shows tend to use DPX and occasionally OpenEXR, while lower end shows are typically some flavor of ProRes (422HQ and 4444 being most common). DNxHD 175 or 220 are also occasionally used. Some people are trying to move to the newer high quality flavors (ProRes 4444 XQ or DNxHR).
Delivery specs for whatever network are almost always ProRes 422HQ 720p or 1080p. Features we'll output to DPX or TIFF, which will be converted to DCP (JPEG2000) in the P3 color space (XYZ instead of RGB or YUV).
I'm excited to see higher bitrates for delivery, but it's going to be a long time on the broadcasters' end. They're wary of upgrading so soon - especially when a lot of people already have trouble telling 720p from 1080p.
ProRes 4444 is lossy, but it's still more than enough for most projects. Drive was shot in ProRes 4444 and I deliver network television shows shot in 422HQ or 4444 all the time.
Raw is nice but definitely not strictly necessary.
I've been following this camera since everyone considered RED vaporware. Development has been agonizingly slow, and now that they're getting close there are better options for less money. It's great if you want something open source (which I can't applaud enough), but in terms of shaking things up it's too little too late.
Just a note here. RED can't do remote focus, iris, zoom (FIZ). You need a remote follow focus with zoom and iris gears to do that, something like the Preston FIZ. In fact, RED doesn't even make one of these.
You are correct that you can adjust ASA, shutter speed, etc. with the REDmote add-on. Historically, though, they are rather finicky pieces of hardware.
Instead they've created a world where I specifically avoid phones (Samsung for example) because their software is so awful and bloated (not to mention ugly).
I believe you're talking about Aspera, recently bought by IBM. It's pretty fast and reliable, and I've never had a problem with it even over residential Internet speeds (many master files for delivery are well over 100GB). Bitmax is the big CDN that uses it for media delivery.
A lot of my work is in music videos. I've got hundreds of millions of views on vevo and even a VMA winning video under my belt. There is almost no one in this industry who works for a single artist. This week, for example, I've got a video coming out from a big artist from a major label, a brand new band a major label decided will be cool, a major YouTube star (top 10 YouTube artist), a tiny artist on a tiny label, and a totally unsigned rapper (busy week). You'd be surprised how close the paychecks for a lot of these are.
Point is all of us work with dozens or hundreds of artists because outside of maybe ten mega artists you can't depend on a single "brand" to support you. Granted this applies to video more than other areas, but even agents and managers will have a few to several artists they work with.
It's not a money problem; it's a marketing one. I work alongside the music industry in the color world (won a VMA along the way) and see how labels really work. The problem isn't so much about album sales and streaming money going straight to the creator (though not enough does, by a long shot) but rather that if you're not signed to a label with access to their marketing engine, no one will ever hear your music.
There are plenty of places like bandcamp where you can find great music and buy it directly from the artist with them taking most of the cut, but the numbers of people buying are so low it doesn't matter. With labels, I've literally watched them take a nobody band and decide they're going to be the next thing. They finish their album, label foots the bill for a big music video, PR team call$ pitchfork or world star or whatever site is genre appropriate, band gets a frontpage feature, radio play start$, and next thing you know album sales are way up, tour tickets are sold out and label made bank.
The problem is discovery and curation, and Ethereum doesn't fix that.
But then you lose easy access for the half of seats not on the sidewalk side. There's not a good way to lose the aisle without condensing down to only a couple seats across.
While trying to be vague, I can say the tobacco industry is ready to profit off of marijuana legalization. They have lots of stuff ready to go, but not public. In the meantime, however, they are more than happy to continue raking in their current profits and not rocking the boat.
It's only about 6K best case scenario and usually a little under 4K in practical resolution. Of course, some of the major 6K or 8K cameras on the market are only a little better than half that in practical resolution. There's also a lot more to final image quality than raw pixels or grain; resolution is just a small piece of the puzzle, which is why a 3.2K camera may look substantially better than a 6K alternative.
VFX are substantially more expensive to complete at higher resolutions. Even in major films released at 4K, effects are frequently done at lower resolutions because of time and budget constraints. Going back and rescanning older films that have lots of CGI usually results in very ugly VFX scenes.
And again, resolution is just a small component of the final image quality. There is a major 3.2(ish)K camera on the market that has substantially better IQ than 4 and 6K alternatives. And another 6K camera that has substantially better IQ than 8K options.
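As a rough sketch of the advertised-vs-practical gap described above: Bayer-pattern sensors resolve less luma detail than their photosite count suggests. The 0.65 factor here is an assumed ballpark chosen for illustration, not a measured value; real results vary by sensor, OLPF, and debayer algorithm:

```python
# Rough sketch: advertised "K" vs practically delivered resolution for
# Bayer sensors. The 0.65 factor is an assumed ballpark, not a spec.

BAYER_FACTOR = 0.65  # assumed effective luma resolution fraction

def practical_k(advertised_k: float, factor: float = BAYER_FACTOR) -> float:
    """Approximate delivered horizontal resolution in K."""
    return advertised_k * factor

for k in (3.2, 4.0, 6.0, 8.0):
    print(f"{k}K advertised -> ~{practical_k(k):.1f}K practical")
# Under this assumption a "6K" camera delivers a little under 4K,
# which is roughly the gap described above.
```

The exact fraction is debatable, but the shape of the argument holds: marketing resolution and measured resolution are different numbers, and the gap compounds with everything else (lens, OLPF, compression) in the imaging chain.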
As for upscaling, we're probably better off leaving that for the end user as better upscaling algos are created and deployed all the time and baking it in from the studio locks us into what will soon be old methodology.
The latest IPCC report does not account for Arctic methane release. There were too many variables and they decided to omit it and avoid being alarmist. This is discussed on page 28 of WG1 AR5 SPM (the final 2013 IPCC policy maker report).
The NFL actually has very similar rules involving suspected head injuries though in practice they seem arbitrarily enforced - especially if it involves a key player.
I am a colorist and I also want to add that the technical details check out - minus the Red One being better than film. It was most decidedly worse in every way - but it was almost as good, which was the important thing. I recently went back and looked at One and MX footage and couldn't believe how poorly it has held up, while the Alexa sensor is still great today, as are the 16mm and 35mm scans I work with.
That aside, I found the discussion about the misuse of ACES particularly interesting. It's like they completely misunderstood the tool and how it's intended to be used (which isn't uncommon with the tech, unfortunately).
FWIW, I'm a colorist and have very expensive color critical monitors and calibration probes. For GUI monitors I've started recommending gaming panels for suites that don't want to invest in professional graphics displays. Once calibrated these monitors are surprisingly accurate and hold the calibration well (at least as good as professional displays many times more expensive - though still a far cry from color critical displays). The expanded refresh rates are a nice bonus for eye relief as well.
Black and white is fairly simple and straightforward but still requires a totally dark room (or bag) and a bunch of sort of expensive chemicals that go bad (and some of which are fairly toxic for the environment).
Color is similar but more complicated with more expensive chemicals and more room for mistakes. You're almost always better off paying someone else to develop (in money and quality) though it's not nearly as satisfying.
Of course after you develop you need more stuff to make prints or a film scanner to get something useful from your negatives. More investment here - labs usually do this for you for a very small fee if you get it developed there.
I can't recommend developing your own color film, but black and white is fun to play with if you want to get a better understanding of the process. If you're just doing a few rolls occasionally though then development is probably the best option.
The L shutdown has nothing to do with funds - it's mostly covered by federal Sandy money. The issue is that the tunnel needs to be essentially gutted and replaced after being submerged in salt water for days after Sandy. There are no reroutes available, so the only option is a shutdown. It can either be done all at once in less than 18 months (and the MTA actually has a good track record of coming in under time on repair work) or one direction at a time over three years with a fraction of the current Brooklyn-Manhattan service level.
A full shutdown is absolutely the best option as much as it sucks for people dependent on the line (like me and my neighbors).
The L line has CBTC and the 7 will as well. The MTA has a plan to deploy this to all their lines, but they have almost 900 miles of track with more than 6,000 cars, and it's an expensive and slow process - especially when the lines run 24/7, unlike everywhere else.
Auto updates to Windows users of interest (about whom Microsoft already tracks a plethora of data, making deanonymization trivial)? Many people use Windows with a Microsoft account logged in, making it even easier to identify them.
If you ignore the IPCC projection of 1.5m, which was based on incomplete data and consequently intentionally ignored several positive feedback loops, the most recent studies suggest 3m-4m by 2075-2100 is very possible and even likely. That would totally flood places like Miami, Osaka, all three airports in NYC, and many of the coastal Chinese cities, plus parts of Vancouver; Bangladesh is gone. I haven't looked in detail at Silicon Valley, but the western coast tends to fare better because there is a rapid rise from the coast (many cliffs as well, though those are extra vulnerable to increased erosion and you will see many collapses). And because the rise won't be even (due to gravity and other variables), places like the East Coast will see a much greater share of that water concentrate there.
Displays also drift over time. So even if you have a professional display calibrated at the factory it needs to be recalibrated a minimum of once per year with 6 months being preferred in a professional environment.
This is true with even the highest quality color critical displays like we use in film/tv color correction ($5-$50k+ for 25" panels).
I'm a colorist. Many of us do use windows, many use OSX, many more use Linux. Every major color critical application supports many types of LUTs and color management.
Further, HP Dreamcolors have tons of problems and aren't considered solid for color critical work (but are fine for semi color accurate stuff like intermediate comps etc). Color accurate work is done over SDI with dedicated LUT boxes handling the color transforms and the cheapest monitors being $7500 Flanders Scientific Inc 25" OLED panels.
The EX3 and Spyder are not good probes. The cheapest quality probe I know of is the X-Rite i1Display Pro. Everything below that is basically a toy. I've never used a ColorHug, though, so I'm not sure how it stacks up.
As for software, DisplayCAL is actually very well regarded in pro color and considered one of the only three serious choices, the others being CalMAN and the big dog, Light Illusion.
This is just not true; you're spreading misinformation. I am a colorist, and I'm the person making these final decisions. Every single high-end color suite I've ever been in runs Linux. In fact, the full version of Baselight (one of the de facto color correction suites) only runs on Linux. DaVinci Resolve (one of the other major ones) ran only on Linux for the majority of its existence, and the full panel version (the pro choice) only ran on Linux until last year.
Every major color house I've worked in runs Linux exclusively in their suites (CO3, The Mill, Technicolor, etc).
That's not to say windows and OSX suites don't exist, I use them and my own suite runs windows, but the highest end of color is basically Linux only.