I don't think that makes mkstemp obsolete as it may be important to have named temp files.
(And of course, though there is a slight race, you can unlink the file you get from mkstemp while keeping it open; that seems to be the strategy used by Python's tempfile.TemporaryFile.)
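In code, that pattern looks roughly like this (a sketch, with the path made up and error handling kept minimal):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        char path[] = "/tmp/example-XXXXXX";  /* mkstemp fills in the Xs */
        int fd = mkstemp(path);               /* creates and opens atomically */
        if (fd == -1) { perror("mkstemp"); return 1; }

        unlink(path);  /* the name is gone; the open fd keeps the inode alive */

        dprintf(fd, "scratch data\n");  /* use it as an anonymous temp file */
        close(fd);                      /* the inode is freed here */
        return 0;
    }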
I can't think of any case, except if you are too lazy to pass a file descriptor to another process and it's easier to pass the name (which cannot be ruled out, it is easier!). You can't safely close and reopen the file, or it might have been modified.
There is still the possibility that an attacker opens the file before you unlink it and starts writing to it, which O_TMPFILE gets rid of: the file never has a name, so it can never be opened by anyone else.
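Something like this (Linux-only, and the filesystem has to support it):

    #define _GNU_SOURCE  /* O_TMPFILE is a Linux extension, not POSIX */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Create an unnamed file on /tmp's filesystem. There is never
           a path, so no other process can race to open it by name. */
        int fd = open("/tmp", O_TMPFILE | O_RDWR, 0600);
        if (fd == -1) { perror("open"); return 1; }
        dprintf(fd, "private scratch data\n");
        close(fd);  /* the file simply disappears */
        return 0;
    }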
Even ignoring the fact that these are not exactly the same... How does introducing a new interface necessarily "obsolete" the old one? Particularly when the old one is part of a standard, and the new one is not.
Some part of me would love to see a new group - one with a more pragmatic approach - fix the POSIX spec in a way similar to what happened with HTML5 and WHATWG. The other part of me wonders if it's even worth the trouble - this would involve taking the time of knowledgeable people away from projects worthy of that time. What value would we get out of it? Would it be worth the cost?
So you want many broken implementations with quirks of their own? (HTML5 + WHATWG's output).
There needs to be a single dictator with a single standard. The problem so far is that the dictator and the standard need to be hit with the clue stick many times, rather than pushing corporate features and agendas. POSIX, HTML5, Java are all design-by-committee crapfests.
Golang is about right on this: one core standard and reference implementation, opinionated and built by people who know their shit.
As I understood it, HTML5 was specifically about getting away from the design-by-committee mode that was causing problems in XHTML. It was a group of people that got together and said, "Okay, what are people actually doing right now? That is what needs to be documented in the standard." They did not want a prescriptive standard, but rather a descriptive one.
And it's not like they wanted a descriptive standard, it's that they knew a prescriptive one would not fly and would just be completely ignored due to the conflicting interests of the half-dozen actors.
A standard everybody ignores is utterly pointless.
Unlike modern web browsers, unix variants are a very unequal market. Linux can and does introduce new APIs that programs then depend on, without consulting anyone else or trying to standardize them, rather like IE in the bad old days (see recent cgroups/systemd). Until another unix-like achieves Firefox-like success we're unlikely to see any change - it's not in Linux's interest to cooperate, and Linux doesn't even bother staying source-compatible with previous versions of itself, never mind other systems.
There's another platform with a larger marketshare that has a POSIX API available too. The question is whether it's the API that developers for that platform actually use.
It doesn't really have much of a POSIX API, though. It could, but what it has is deliberately very incomplete, with just enough to bootstrap the other environment...
Strangely this post managed to give me the exact opposite impression of what the poster intended.
Based on the description in this post, it seems to me that shm_open() has been a successful addition to the standard. It's basically a semantic annotation: "I intend to use this file descriptor to share memory between processes". The OpenBSD people looked at projects that use this API and discovered that they're overwhelmingly using it to share memory between processes owned by one user (e.g. WebKit), so they solidified that practice into a hard limit as part of their implementation.
Isn't this perfectly in the spirit of why we have all these Unix variants in the first place? Some of them will be at the bleeding edge of implementing new APIs, whereas others like OpenBSD take the cautious and security-minded approach. Thanks to the work of the OpenBSD crew and the (granted, apparently largely implicit) semantics of shm_open(), WebKit is now a bit safer to use in a multi-user scenario on OpenBSD than elsewhere. Maybe others will adopt this interpretation of shm_open() and everyone wins.
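For anyone who hasn't used it, the single-user case looks something like this (segment name and size made up; on Linux you'd link with -lrt):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        /* 0600: only processes of the owning user can open the segment,
           which is the usage pattern OpenBSD chose to enforce. */
        int fd = shm_open("/example-segment", O_CREAT | O_RDWR, 0600);
        if (fd == -1) { perror("shm_open"); return 1; }
        ftruncate(fd, 4096);  /* size the new segment */
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        /* ... hand the mapping to related processes of the same user ... */
        munmap(p, 4096);
        close(fd);
        shm_unlink("/example-segment");  /* remove the name when done */
        return 0;
    }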
It seems like this is only really a problem if someone uses mode = 0777, and not every caller is going to do that. Is there really such a large group of bad programmers still using low-level C interfaces? And can't OpenBSD just audit the handful of programs that use shared memory?
[note: if your reply is going to be some variant of "all programmers are shit"-- not interested, heard it before.]
I think the attack goes something like: You figure out how you should name a temp file by checking to see if files with the same name already exist. Between when you decide on the name and when you open the file, a nefarious user creates her own temp file with mode = 0777. You open your file and write to it, not realizing that another user can now read all your temporary data. Because the file was already created when you opened it, whether your umask is set properly doesn't matter.
It sounds like the OpenBSD implementation would throw an error if the file was owned by someone else when you tried to shm_open() it, which mitigates this race attack. mkstemp mitigates this attack by atomically determining the name and opening the file without the opportunity for a nefarious process to touch the file system in between.
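To make the race concrete, the vulnerable pattern is something like this (a sketch; the function name is made up):

    #include <fcntl.h>
    #include <unistd.h>

    /* DON'T do this: the existence check and the open are two separate
       system calls, and an attacker can create the file in between. */
    int racy_temp_open(const char *path) {
        if (access(path, F_OK) == 0)  /* 1. "is this name free?" */
            return -1;                /*    taken, caller tries another */
        /* <-- attacker creates `path` with mode 0777 right here */
        return open(path, O_CREAT | O_RDWR, 0600);  /* 2. opens their file */
    }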
I suppose it depends on how good you are at coming up with random file names. You could also use O_EXCL | O_CREAT to fail if the file already existed. The more I think about it, though, the worse this whole interface is starting to smell.
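Something like this, assuming you generate the candidate names yourself (a sketch, name made up):

    #include <errno.h>
    #include <fcntl.h>

    /* O_CREAT | O_EXCL makes creation atomic: if anything already exists
       at the path (even an attacker's symlink), the call fails with
       EEXIST instead of opening someone else's file. */
    int excl_temp_open(const char *path) {
        int fd = open(path, O_CREAT | O_EXCL | O_RDWR, 0600);
        if (fd == -1 && errno == EEXIST)
            return -1;  /* caller picks another random name and retries */
        return fd;
    }

That is essentially what mkstemp does internally, minus the name generation.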
As I see it from my own programming experience, the main value of the "everything is a file" philosophy is in the unified namespace. Except sockets are file descriptors but not in the filesystem namespace... except when they are "unix domain". Process identifiers have their own namespace, but they are sorta in the filesystem via the Linux-specific /proc. And of course there's SysV shared memory, which comes with its own namespace (except if you mmap an actual filesystem node and just... ugh... open it from two different processes). Sigh.
From what I've always understood, the philosophy isn't "everything is a file" (the unified namespace variant), but "everything is a file descriptor" (the API kind).
Well my point was that I personally find the idea of a unified namespace for global system objects (files, pipes, sockets, processes etc) more powerful than the idea of a unified namespace of fd's within a single process.
Perhaps I should expand the history, for those who don't know. In the early 80s, QNX was the first fully multitasking operating system available for the IBM PC architecture. It was small, efficient, real-time, and somewhat idiosyncratic. It was fully ten years ahead of its time.
At the same time, in another sphere, POSIX was shaping up as a factor in the emerging Open Systems wars. It drove compatibility between big vendors like IBM and HP and Sun (over other idiosyncratic offerings, like Apollo).
Perhaps POSIX as a user response to corporate power had value, but there was also collateral damage. As I understand the story, Canadian colleges standardized on POSIX, and so a great little Canadian OS (QNX) was left out in the cold. They had to become like others to survive.
Conformity served to reduce innovation and developer choice.
Now fast forward to 2013, and leaving aside shared memory details, what is POSIX driving today? Don't forget that ultimately a non-POSIX OS killed them all. And don't forget that users have their own means of bubbling up new features and architectures in the Open Source projects. Their control is far beyond what users got out of POSIX in the Open Systems age.