> They choose the LGPL which is certainly a less known/understood license and they tack on a bunch of extra requirements.
If it had extra requirements, it wouldn't be using the LGPL (and I don't think it would even be compatible). Those are just restatements of parts of the license; you can see which sections just by reading it, as it's quite short.
> The standard provides for arithmetic coding, but no one implements it because of the patent.
That is, there is an implementation in the libjpeg source tree, but it's disabled. I thought the patent had actually expired now, though.
Also, H.264's arithmetic coding mode saves ~20% bitrate; Vorbis or AAC could presumably benefit too, but neither of them uses an arithmetic coder, even though AAC should have no patent problems. Maybe there's some technical problem I don't know about.
I have the same opinion and I don't see anything weird about it.
They're both OO languages with relatively few features compared to C++ or something, but Obj-C is dynamic and much more flexible (see NSProxy, NSArray class clusters, etc.) whereas Java is not dynamic at all.
Plus Obj-C is actually C, so you can write "if (!i)" without the compiler deciding you're too stupid to handle implicit type conversions.
By the way, I've noticed that people (who don't seem to use it often) complain about the extremely long method names in Obj-C, but if you count by the number of tokens instead it can be pretty terse. And I think that's how reading natural language is supposed to work, anyway…
They certainly don't bind methods to objects at the same time - Obj-C does it when you send the message. You can send and catch messages with no methods attached to them, or that weren't even declared in the first place (at the expense of a compiler warning).
Of course there are other differences, but the most important one is just that the class library is much better - UI doesn't rely on inheritance, there's no class named LinkedHashMap (see http://ridiculousfish.com/blog/archives/2005/12/23/array/), and there _is_ a class named NSMutableArray.
Of course you don't. That would be like having to get your own patent license to use iTunes.
You do actually need a license to use the QuickTime AAC encoder in your own program, though - but only on Windows. Or so I remember from mailing list traffic; I haven't tried it myself.
> I can't wait to see benchmarks of how much of a difference this makes.
Some difference, but it's not the best thing ever, and not even the most important problem Flash has.
For one thing, the H.264 decoder in Flash is already faster than QuickTime's software decoder and competitive with FFmpeg, so video decoding is absolutely not the reason people think it takes too much CPU.
For the size of video you usually watch in a browser, drawing other elements on top of the video (and the necessary conversion to RGB) takes more CPU than actually decoding it (see http://blogs.adobe.com/penguin.swf/2010/01/solving_different...). I don't know if they've improved that, maybe not - I guess the best solution would involve doing the whole thing in OpenGL which is tricky.
The other problem is that the browser plugin API is really old and doesn't match up with the double-buffered window system that OS X has always used. This is supposedly better in Flash 10.1, and if so it would improve things more than anything done here. Of course 10.1 isn't out yet, so who knows.
> Ok great, so Flash is also "open enough": fully published specs, anyone is allowed to use the format (creating, playing) without royalties. SWF is more open than h264.
But does anyone do this? I see no usable reimplementations of Flash Player, and the spec cannot possibly be complete.
H.264 is a real standard with a conformance suite and bit-accuracy, and no scripting languages or multiple versions with different bugs to reimplement.
Flash also includes VP6 and RTMPE, which are proprietary and must be reverse-engineered.
> They don't care about open standards - that they would is, on the face of it, absurd.
They really do care about open standards. There's no reason why WebKit and several other projects have to be run in the open - especially after they stopped releasing several parts of Darwin - but they are.
At worst, this is because there is more than one person at Apple, and the people who want things to be proprietary don't care about this and have left it to the people who like open standards to deal with.
What promise do you mean? Cocoa has existed for 10 years now. It was never going to go away at any point.
Finder was kept in Carbon as a demonstration that large codebases ported from OS 9 would still be supported (it would take quite some time to rewrite them, after all).
iTunes's lack of a rewrite is probably because it has to run on Windows, and they didn't want to immediately take on the extra porting work that Safari needed; but at this point it doesn't really have an excuse. At least it supports background scrolling, so it's still beating Word...
It doesn't have to be run in the open; they have to release the source whenever they release a new Safari update, but they don't have to do project administration in public or let anyone else access their SVN. This is how several of the Darwin opensource projects work, notably Apple's gcc branch and the kernel.
Mozilla and Opera have been promising everyone that they would be sued for royalties (and there's no reason not to believe they're sincere), but I've never seen evidence that it would happen. Nobody is ever going to sue Perian/ffdshow/VLC for royalties.
The difference in maintainability is pretty big - iTunes and FCP are some of Apple's oldest application codebases at this point and both probably need a total rewrite to be up to their modern standards and work with 64-bit/background threading/etc. (They also both started out at other companies, though who knows what that means.)
I assume the reason they haven't been rewritten is the same reason that QuickTime 10 isn't finished yet. But I don't know what that reason is.
I copied this High@3.0 file out of my iTunes library and it successfully syncs to my 3GS.
I can't get it to play over Wifi, while I can get a Main@3.0 file to play, but I think this has something to do with not enforcing some bitrate limit rather than the profile.
By the way, the folklore.org book is also inaccurate; I seem to remember it including some of the website comments, but not any from Jef, and some of them were important. Can't remember any of them offhand, though.
And, of course, the bio section didn't mention that he'd died before the book came out.
A non-fun thing about that license: it seems to me that it's not compatible with the GPLv2, which doesn't allow adding extra restrictions on the end user. I can't remember if it's compatible with LGPL or not, though GPLv3 seems safe.
This has been raised on ffmpeg-devel, so hopefully Google has some opinion about it.
The work in x264 hasn't suddenly become irrelevant. H.264 has great market penetration and isn't going anywhere. And x264 could already be adapted into a VP8 encoder if necessary; it would work better than libvpx, as the fundamental algorithms and psy optimizations are all better tuned.
"-Werror=implicit" would be more useful here. -Wall won't prevent that from compiling, but there's hardly any situation where you'd want it to compile.
> though he does make a few minor technical blunders
Such as?
> I mean he says it's better than H.264 baseline. If you're watching it on an iPod or iPhone then that's the only profile they support. In other words Android phones will have better maximum quality video than Apple's. Is that part getting overlooked somehow?
"Better" doesn't mean "much better"; a better residual coder is useful, but H.264 has a better reference frame system - On2 seems to love idiosyncratic systems which aren't as good - and much more flexible adaptive quantization, which is the basis of x264's psy optimizations, so the same (fundamentally good) ideas probably can't be implemented as well.
> It's only a matter of time and at least the Xiph guys have experience of doing exactly this on a similar codebase and they've been working on VP8 for weeks already.
IIRC Theora barely implemented rate-distortion optimizations in the first place, let alone Psy-RD - you have to get that out of the way before you can do the really interesting things.
However, I don't think VP8 uses the weird Hilbert coding of the older system, so it might be easier to implement all of that. Besides, you can just copy it from x264.
It reads more like a conspiracy theory than anything to me. It is certainly long, which makes it look "sober", but what does that have to do with being accurate?
The press were invited to the keynote; they didn't take up any of the conference slots. I doubt that many people come just to be there, though I did see Gruber around last year and not this year.
> In my case it's Google as they are more open and willing to crack old monopolies that also get in my way (as a dev).
Well, Apple quite successfully introduced an app store that didn't require carrier approval for everything. Without that, you wouldn't have been able to get past the carriers in the first place.
Or did Google announce something like that in Android first? I can't remember now, but I doubt it.
Handbrake is the only GUI I would recommend on any platform.
This isn't because I really like it - the implementation of presets is broken, for one thing - but mostly because so many other ones are license violators.
If you aren't doing DVD ripping, I'd consider it a failure of the CLI tool if it's not easier to use than any GUI frontend.
x264 is nearly there, since "x264 -o out.mp4 in.something" should be all you need to get a good encode after this GSoC.
It uses a standard Internet protocol (SIP+SRTP) as they said in the keynote.
Skype actually does use a proprietary codec (On2 VP7, which Google hasn't released) in their Skype<>Skype video chat. Does Fring use that, or does it support 3G too?
In any case I'd see not using 3G as a major advantage for FaceTime, since H.264+AAC is much better than low-bitrate H.263 used in 3G.
I don't see what that has to do with an "anonymous Web" at all. Nobody on Facebook thinks they're an anonymous forum poster, and I doubt anyone goes around mentioning their zip code and date of birth on forums anyway. (Although many profile pages do have city and year of birth, which might be enough…)
Besides, if it was an anonymous forum (http://wakaba.c3.cx/shii/), nobody could tie your separate posts together even if you put some identifying information in them.
On the other hand, people on Facebook are happy to put their real names on angry political statements. So maybe this article shouldn't have bothered confusing "anonymous" and "uses pseudonyms" to make its point.
They were both on wifi. The problem on the 4 was likely caused by it supporting N, which is of course much better than 3GS's G but is more sensitive to the edge case of ~500 people in the same room as you running wireless base stations without bothering to set up channels properly.
I don't think a change to the antenna would affect how it behaves in the same room as its base station, anyway.
Browsing the store does not send them location information except for an IP address. The system never requests location information without giving a confirmation dialog first.
This hasn't ever changed and I don't see why it would have. The only thing that has changed is that you read a misleading article.
It's required so you can map phone numbers to the other person's IP address. I assume it's also used for STUN (http://en.wikipedia.org/wiki/STUN).
(This is just me guessing, of course, but the "don't automatically assume a huge conspiracy theory" method almost always ends up with me being right, so there.)
> Aaaand we've come full circle again, where people start to realize that Apple is, like Microsoft, a gigantic company that doesn't care about anyone.
I hope nobody realizes this, since they're quite different. MS's primary market is OEMs and businesses, and they care about selling to them.
> Gee, it took a shiny user interface and a meticulously-engineered corporate image to sucker in that last batch, how low can be set the bar this time?
Would you prefer it if shiny interfaces actively drove people away?
OS X will still fragment large files - they don't fit in the catalog so HFS+ compression won't work on them (not that it works on user files anyway), and the auto-defragmenter only works on small things.
The difference is Apple doesn't ship a defragmenter, so you don't care. I think this solution would work just as well for MS.