Hey! I remember that interaction (and sorry for my crass language, I was having a hard time by then).
Your explanation was certainly satisfactory. Now I understand a bit of the motivation of people who want to compile different programs depending on what happens to be installed on their computers at a particular moment. I still think that it is "an exceptionally bad idea which could only have originated in California" ;)
“You can set it before compiling” is such a foreign concept to the real world that it’s almost meaningless. If that is the only way to change the behavior, it’s pretty crazy.
I'm assuming the rationale is that someone doing complex system-level stuff on unix/linux should be able to change things deep in the weeds without pulling the entire codebase out into IntelliJ or some other IDE.
No snark at all. I'm sorry you read my comment that way. What I got from this release announcement was a feeling of coming full circle, back to an older time when configuring a system meant compiling with different compile-time options (possibly with compilation happening on another system), as opposed to something more complex like compiling once and reading config options from an rc file. (I was reminded of this also by dwm, which is likewise configured at compile time rather than at runtime.)
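For anyone who hasn't seen it, the dwm approach is roughly this: the configuration is a C header full of constants, and changing a setting means editing the header and rebuilding. A minimal sketch of the idea (the names are illustrative, not dwm's actual config.h):

    /* config.h -- compile-time configuration, dwm style (illustrative names) */
    static const unsigned int border_px = 2;                    /* window border width */
    static const char        *font      = "monospace:size=10";

    /* main.c */
    #include <stdio.h>
    #include "config.h"

    int main(void) {
        /* the settings are baked into the binary: changing them means
           editing config.h and recompiling, not editing an rc file */
        printf("border=%u font=%s\n", border_px, font);
        return 0;
    }

The runtime alternative is the familiar one: parse the same values out of an rc file at startup, so the user never needs a compiler.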
It's a matter of degree, not kind. It's much more of a hassle to compile a custom version of the program than to modify the default configuration, but the point is that in both cases there's a default that does something that the user doesn't want.
Yeah, I totally get it - I realized I was being a hypocrite when I posted this; just a few weeks ago I put together a custom build environment with an obscure version of gcc because I'm dealing with some code that depends on some of those "bugs".
I just don't think it's healthy to embrace this as an alternative to proper maintenance.
This creates compatibility issues with applications that inspect the command line of running programs and, for example, restart them with the same command line. It also probably ties into a lot of general program-execution use cases like the Task Scheduler.
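For concreteness, this is the kind of thing such tools do on Linux: read /proc/<pid>/cmdline, where the arguments are separated by NUL bytes, and rebuild the command line from it. A rough sketch (Linux-only, minimal error handling):

    /* print the command line of a running process from /proc/<pid>/cmdline */
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
        char path[64];
        snprintf(path, sizeof path, "/proc/%s/cmdline", argv[1]);
        FILE *f = fopen(path, "rb");
        if (!f) { perror("fopen"); return 1; }
        char buf[4096];
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        /* arguments are NUL-separated; join them with spaces for display */
        for (size_t i = 0; i + 1 < n; i++)
            putchar(buf[i] ? buf[i] : ' ');
        putchar('\n');
        return 0;
    }

Tools like this assume the command line they read back is something they can re-exec verbatim.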
He’s arguing for changing the behaviour via an environment variable, and not recompiling/linking at all:
> The difference between changing a console variable to get a different behavior versus running an old exe, let alone reverting code changes and rebuilding, is significant.
I’ve not tried his way, but he’s pretty explicit about what The Way is.
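The mechanism itself is simple, though; a minimal sketch of an environment-variable toggle, with a made-up variable name (not anything from his codebase):

    /* choose a behavior at startup based on an environment variable */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const char *v = getenv("MYAPP_OLD_BEHAVIOR");   /* hypothetical name */
        int old_behavior = (v != NULL && strcmp(v, "0") != 0);
        printf("running with %s behavior\n", old_behavior ? "old" : "new");
        return 0;
    }

Usage would be something like MYAPP_OLD_BEHAVIOR=1 ./myapp, with no rebuild involved.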
So as a user, I have to recompile in deference to making things easier for the developer. Definitely not like Windows...for all their sins, they don't pull entitled BS like that on users.
It creates more and worse problems than it solves. People find it alright to write code that only works in a specific combination of language and package versions, and that is a shameful state of affairs.
You've said it is a bad technique no matter how it is used, and you have said it is an ok technique if used at some levels (like the Kernel) but not others (like the WTF project).
So you can see how I'm confused about what your opinion really is, I hope.
Personally, while it would leave my life totally unaffected, it'd be one of those thoughts in the back of my head: knowing that there is some random text being output (need to optimize everything!), that there is something intentionally uncontrollable there, and, to a lesser degree, that there is something there to intentionally bother me.
Can I ask who came up with the idea, and what other alternatives were discussed? Was this requested by the sponsors?
edit: as a further question, if this results in a fork (well, I guess that's a bit late, but) or an increase in the number of from-source builds, would you seek to make those two options more difficult? (e.g. legal pressure, or mangling the codebase in some way with hard-to-acquire dependencies)
I do this now today too, but not even for potentially malicious software. Some projects that are not inherently malicious are just written poorly or stupidly by design. Just the other day I ran a program that, before I even requested it to do anything, appended 3 lines to my ~/.bashrc. I didn't even notice until days later. I can't fathom why any developer thinks this is a good idea, and is exactly the kind of thing that makes me sandbox every foreign piece of code I run now.
This level of bad behavior is basically commonplace now, because… I’m not even sure why companies and projects think it’s okay. Do they think it’s too hard to figure out? That their users can’t be bothered to read and follow instructions? Or do they just assume they know what’s good for their users better than the users do?
Between messing with profiles, no backups first (so no “.bak” or anything to revert to), and things like Google’s Cloud tooling deciding to just install an extra copy of 1GB-plus client tools and god-knows-what-else when adding an IDE plugin, it’s getting realllllly tempting to start isolating stuff a lot harder.
Sucky working code is better than code that doesn't exist, yes.
But isn't it silly that every Linux system needs to place this exact file at this exact path or else no program for another system will possibly be able to run on it?
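Assuming the "exact file at this exact path" here is the ELF interpreter, i.e. the dynamic loader at /lib64/ld-linux-x86-64.so.2 on x86-64 Linux, you can see the path a given binary hard-codes with a sketch like this (64-bit ELF only, minimal error handling):

    /* print the PT_INTERP path embedded in a 64-bit ELF executable */
    #include <elf.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        Elf64_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) { fclose(f); return 1; }
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
            if (fread(&ph, sizeof ph, 1, f) != 1) break;
            if (ph.p_type == PT_INTERP) {          /* the hard-coded loader path */
                char interp[256] = {0};
                size_t len = ph.p_filesz < sizeof interp ? ph.p_filesz : sizeof interp - 1;
                fseek(f, ph.p_offset, SEEK_SET);
                if (fread(interp, 1, len, f) == len)
                    printf("%s\n", interp);
            }
        }
        fclose(f);
        return 0;
    }

In practice readelf -l or file will tell you the same thing; the point is that the loader path is baked into every dynamically linked executable, which is why that path has to exist on every system that wants to run them.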