I like JavaScript, and given the choice I'd use it server-side as well. It's an excellent language: minimal, well suited to concise but powerful code, and able to do pretty much anything given a decent API.
Most of the JavaScript you look at is terrible because most of it is copy-pasted from tutorials dating back many years, when JavaScript was, at best, not well understood and not the domain of serious applications. Modern libraries like jQuery and Prototype show what JavaScript can be, though.
Well, it's not borrowing another language's engine (I'm not sure what that would even mean), but server-side JavaScript with OS integration is becoming more feasible: http://www.aptana.com/jaxer
> I'm assuming that by capital-A versus lowercase-a atheists, you are trying to underscore the difference between people who understand that religion is a hoax but think "it doesn't affect me right now, so I'll let it pass", and those who have been directly affected by religion and its evils and see no other way but to fight it as hard as they push it?
I can't speak for the other poster, nor for a capital/lowercase distinction, but your own distinction fails to include me.
To me, religion inhabits a spectrum ranging from vague reverence for the numinous, through stories (either parable or claimed history) told for moral direction, to cynical, doctrinaire oppression. I have directly experienced some of each category (though not the most extreme of the oppressive end). The latter category absolutely affects me, both in terms of solidarity and the threat of future expansion, and it absolutely deserves a rigorous fight. The rest is not so cut and dried, and some of it certainly deserves not just to be allowed to "pass" but to be protected.
With that said...
> what do you hope to gain by talking to them?
A lot can be gained by talking to people you disagree with, even people you fiercely oppose. Aside from the other poster's excellent answer ("I like to be friendly with my neighbors, if not actual friends"), engaging ideas that you don't share is a good cure for your own ignorance and may provide ammunition for the battles actually worth having.
> don't let them in. keep them very far away from you.
Your post suggests you go beyond your claim that religion is a "mental illness": you seem to believe said illness is contagious even to the unafflicted. Have a little more confidence in your convictions!
In addition to the other suggestions, take a personal interest in technologies you may not be able to use professionally. If you write Java at work, maybe go learn Erlang. If you build web pages, go learn database optimization. If you do anything other than security, learn how exploits are discovered.
When I encounter functions that large, where the body isn't almost entirely declarative, they tend to be doing way too much. A function longer than 10 lines is a smell, IMO.
The article begins by laying out the premise that not all work is equal, in the sense that contributors will contribute work that maintainers aren't inclined to do themselves, for one reason or another.
What follows, though, is the claim that this creates an additional work burden for maintainers in terms of review. But what is each contribution worth, and how does it feed future work by maintainers and contributors alike?
Projects with a lot of attention benefit not just from "work" but from advancing design and efficiency, where, as the author says, "each contributor brings fresh eyes and experience to the project". Counting hours worked doesn't account for this, because a tremendous amount of work may be avoided entirely by this kind of contribution.
I'd suggest adopting a model that works for your workflow and your project goals, and asking contributors to adopt it as well. GitHub can also help accommodate this if you want. Example:
- Your project has two mainline branches, one against which development is done and the other from which releases are cut.
- The development mainline is the default branch: it's the first thing contributors see, and the first branch PRs target
- You "adopt" PRs into the development branch, and when you feel the work is suitable, release it to the release mainline branch
I don't agree. If the PR you're reviewing is expected to be merged into a development branch, you can either:
1. Merge without comment. Unless the contributor is closely following development, they won't even notice what you do with the contribution between your development and release branches.
2. Merge with a comment that you'll be cleaning it up before release, and, if you want the person to continue contributing, explain what you'll be doing.
Differentiating stewardship from producing solutions can go a long way toward eliminating debate over minutiae.
If there's still tense back and forth, it's probably over approach, understanding of the problem, or awareness of consequences. In none of those cases should the PR be merged. If you want to engage with the contribution, explain what the contributor missed and close the PR. If you don't think the contribution is going to be productive, just close it.
> That they have no harmony does not mean that a preference for consonance over dissonance isn't baked into universal human auditory processing, because if they were to develop harmony, they might end up with the same preferences we have, since we share the same human auditory processing.
Or they might develop the same preferences because they're exposed to the same stimuli. Or they might not develop the same preferences at all. Or they might prefer alternation between both.
> I'm more mentally exhausted by the endlessness of pure contrarianism on the Internet. It gets worse and worse with each year and I'm losing my enjoyment of social communities.
Let me be the first to say I agree. And I'm a contrarian!
It just can't be that every discrepancy deserves the effort to refute and correct; we're biologically wired to approximate as a matter of priority.
As mentioned, `eval` executes code. While macros are generally about manipulating code for eventual evaluation, `read` enables more than that, for instance static analysis.
The distinction is also explicit in the term "REPL" (read eval print loop), though many REPLs just pass the input to an interpreter.
In addition to losing ACID guarantees, you lose single-query joins. They're performed in memory, sequentially, with all the incurred costs of a waterfall series of requests and whatever network communication is involved.
My understanding, which is admittedly minimal, is that MongoDB provides some conveniences around partitioning and distributing your dataset that Postgres does not without additional work. Simply supporting JSON accommodates MongoDB's approach to schemas (not having them; er, pushing constraints to application code), but doesn't address the distribution concern.
How well does MongoDB support this need, and how hard is it to achieve equivalent results with Postgres? What are the drawbacks?
How well does it stand up against other "NoSQL" solutions that compete in the same space? My impression has been that it tends to compete poorly with a number of other solutions on data integrity, and my intuition is that its appeal is correlated with the proliferation of JavaScript and the convenience of interoperating with it.
I recently stopped regularly visiting Slashdot, and started regularly visiting HN.
Besides regularly finding more appealing content, I have found that HN comments are typically less hateful, disparaging, and isolating.
One (minor) version of this distinction is the constant debate on Slashdot about what should be published and what should not. There, content is expected to meet some assumed standard to be considered "news for nerds" or whatever. For the most part I haven't seen that here, and I've appreciated that fact.
Because, fuck me, if you can't just pass by a link among thirty or so, give up and stop using the web. It's full of shit you don't think belongs there. Just stop shitting on things I want to read because you don't want to read them too.
[Edited because upon re-reading, I really wished it had line breaks.]
I'm really disappointed at how many negative comments I found shortly after this was posted.
While the author surely could use some editing guidance, both on grammar and on finding an appropriate way to express bona fides, the content clearly comes from a person who has already faced an uphill battle learning how to participate as an outsider in existing tech circles, and has learned how to bring in others who have been treated as outsiders. This should be encouraged!
My advice to the author is that if you're publishing an article for general consumption, declaring your qualifications may be taken as defensive. Unfortunately, the expectation is that your qualifications will stand out and that you'll be barely noticeable in your humility. "Your work" is supposed to "speak for itself". Fortunately for you, your work absolutely does and will.
Learning is great, but we don't actually need more people to do it. Need is a really important term, and we shouldn't use it inappropriately.
What we actually need is to decide whether the people left behind by advances in technology deserve to not be left behind. We can't continue to live in a world where people increasingly find themselves with zero options. Either they must become productive with no exceptions (and be eliminated if they aren't!), or we must support them.
I think the choice is clear. We are not just advancing technology, we're creating innumerable surpluses. The only deficit created by those advances is a construct of profit.
If you have (created!) a job that closely resembles a work of dystopian fiction, laughing that off is absolutely lacking in human empathy. That's not even the first problem with this line of work, but since you're also laughing off the problem, it deserves a rebuttal.
If I said to you that I was going to create a network of surveillance devices that also serves as mindless entertainment and routinely broadcasts faith routines that non-participants will be punished for, and you told me that sounds like something out of 1984, and I told you that you were paranoid, you'd think I was mad.
And the advance of technology unhindered is not a universal good. Algorithms only have better judgment than humans according to the constraints they were assigned. If there's a role for automation in criminal justice, that role must be constantly questioned and adjusted for human need, just as the role of human intervention should be. Because it's all human intervention.
I find them more readable than the endless wall of indistinguishable text of pick-a-random-news-site. Of course all the text is readable, and there are innumerable ways of increasing readability. But their art direction and effort to distinguish particular content helps it stand out, and helps overcome the kind of fatigue that causes readers to drop off within a couple of paragraphs, the kind that has plagued other content-oriented sites with horrifying "minutes to read" estimates.
> I fail to see what is unscientific about stating conditional probabilities.
1. Create conditions which disadvantage and impoverish a segment of society.
2. Refine those conditions for centuries, continually criminalizing the margins that segment of society is forced to live in.
3. Identify that many of the people in that segment of society are likely to be identified as criminals.
4. Pretend that you're doing math rather than reinforcing generations of deleterious conditioning, completely ignoring the creation of those conditions that led to the probabilities you're identifying.
And science can't be divorced from ethics. These are human pursuits.
I frankly don't understand your response. I described a list of despicable things humans have done, and you're suggesting that I'm not pessimistic about people.
In my experience, Bing is substantially better than Google for certain use cases.
Google tends to be better for things I'm looking for very specifically, unless those things are likely to match DMCA takedown search terms (where there are DMCA notices at the bottom of results pages).
Bing tends to be better for things that are vaguely like what I'm searching for, and for specifics if they're likely to match DMCA takedown search terms.
Suburbs destroy public life, create anxiety and a sense of atomization, and turn natural spaces into scar tissue. Even if you could perfect transportation technology, suburbs would still be the worst configuration for humans to live in the world.
Immutability makes reasoning with state and concurrency much easier. Persistent data structures help make immutability practical in terms of performance and memory overhead.
> In 2016, there is really very little reason to pick a dynamically typed language for any serious project.
I wish this were true. I wholeheartedly agree that, all other things being equal, static typing is better than dynamic typing. If there were no other factors, I would choose static typing without a second thought. But there are other factors, and all other things are not equal.
I write ClojureScript every day. Its lack of static typing is a genuine source of pain. Every day. But when I compare it to Elm, which I'm quite interested in:
- JS interop. Maybe I suck at googling, but it looks like it's not a first-class concern of the language or the community.
- Macros. Defining language features is great. It's the reason a lisp can be small in terms of core language constructs (even if its standard library is enormous, *cough* Clojure). In 2016, there is really very little reason to design a language without metaprogramming in mind.
- Community, which admittedly could and should and will grow in size and sophistication for Elm.
When I compare it to TypeScript?
- State. Welp.
- No seriously, why is everything mutable? It's 2016 and a language that was designed with concurrency and functional programming in mind from the start still hasn't caught on to mutability by default as a serious problem, and TypeScript inherits all this baggage.
- There's no simple, clear path to even set up a TypeScript project. There are a million different ways, with different tooling choices, and that fractured environment is evident whenever you search for answers. The world of JavaScript constantly bleeds through. If I want to know how to use TS, I need to know WebPack and NPM and Bower and who knows how many other things that are not even for TS.
- Forget metaprogramming.
- What is the TS community? Microsoft and a weird has-been cult team at Google? (Sorry guys, I did the Angular thing, and I watched it become a circus from the time Angular 2 was announced to the time your weird conference was more like a dance club than a tech conference.)
I could simplify this by calling out your last line: JavaScript rules the roost, and you'd better play nicely with it if you want to build a web app.
Honestly, the web is going to be an increasingly fractured thing for many years. There are just too many ways to do things, and they're all compromised in awful ways.
Postscript: I'm sorry this is so ranty. I mean no harm and I mean none of my criticisms to be taken as attacks. I want very much to do good work as a web dev and I'm just heartbroken that it's such a pain.
Not to give banks any wiggle room, but I would argue that gaining access to random Gmail accounts poses a greater risk to most people than gaining access to random bank accounts, because the former is effectively a superset of the latter: control of an email account usually yields control of any bank accounts tied to it via password resets.
Since email is used as the focal point of trust in most online transactions, it should be the most secure.
On the other hand, it's basically impossible to negotiate fair terms with a bank: it's almost impossible to survive without a bank account, going without one carries extraordinary costs, and bank customers have basically zero leverage. You can argue that the bank's unilateral terms don't support a moral claim, but that doesn't invalidate the morality of the claim, only its legality.
Please correct me if I'm wrong, but last I looked the trend is toward:
- Traditional banks are jacking up fees
- For people who already don't need it (good credit, savings) those fees can be waived
- Credit unions can reject you for credit scores (obvs)
Where do the online banks fall on that spectrum? If they require good credit or stable cash you can't spend, that leaves a whole lot of people out in the cold. Never mind that many of the same class of people can't always reliably use the tubes to do their transactions (a phone as their primary Internet device, often shut off, with frequently changing numbers).
It's a great video, and clearly the design behind Swift has tried to address many of the biggest problems with modern OOP. But Swift is by necessity a multi-paradigm language that has to interface with existing OOP code.
If you're writing a Mac or iOS app, you're generally writing Cocoa with either Swift or ObjC syntax. Cocoa is unbearably stateful. For a simple app, not only do you not get to work with value types, you typically don't even get functions or methods that return anything.
Obviously for something more complex, your non-UI code can absolutely be written with the principles described in that video in mind, but if you try to apply those principles to Cocoa APIs directly, you're going to be fighting against its stateful nature constantly.
Years of education have been horribly ineffective, which is why phishing continues to work. Meanwhile, the ignorant audience keeps growing. A whole lot of UX work is about making ambiguous things less ambiguous and making the "right" or "safe" choice the default. In those scenarios, the best UX will make the non-default choice available with some additional effort (changing settings, diving deeper), which this feature in Safari absolutely does.
Hang on to your hat, because every other browser has been playing with the exact same feature, for the exact same set of reasons.
I appreciate the map nerdery, having worked with a bunch of geo nerds, but I also want to say this is my go-to transit app. It offers the features of traditional mapping apps (routing) and the features of One Bus Away, but without requiring knowledge of routes.
I would love to have a Mac version of Edge, both for compatibility testing and for evaluating it as a main browser, thanks to its (ahem) spartan UI/featureset.
The Ars article is probably a better link, because it shows that Ballmer's position was uncontroversial in technology circles at the time. With hindsight it's obvious that the iPhone would be a success, because the entire smartphone industry is now modeled after the UI and interaction model of iOS. But that was a really tough thing to imagine 10 years ago.
iPhone, iPad, and Mac sales have all dropped; the iPhone drop was 50% greater (proportionally) than the Mac drop.
It's likely that Apple wants to continue to focus attention on the iPhone, not because of MBP sales, but because the iPhone continues to make up more than half of Apple's revenue (and probably a greater proportion of its profits).
Your use of "bent on" seems to suggest intent, I'm not sure if that's what you meant.
I think you would be hard pressed to find an application of technology (in normal human terms, not the uptight definition of "applied knowledge") that does not consume resources.
Electronics in particular consume electricity, which must be generated and stored, as well as refined mining products. The production of these resources consumes additional resources, as do distribution and eventual disposal. All of this happens at a greater rate than it would absent electronics.
You might consider these costs worth the benefits of having these electronics. Or you might, if you're more careful, pepper "some" throughout the prior sentence. Many of the impacts could certainly be reduced, but almost none of them can be eliminated.
Making things has a cost, often a greater cost than not making them. Dismissing that as insufficiently narrow for discussion is either ignorant or dishonest.
I can think of few vulnerabilities with more monetary value than an arbitrary exploit of a password manager in broad use by a class of people who have access to huge numbers of private systems.
> I think that if you've created a design that requires you to update a 50K row table 500 times a second that itself is heavily indexed and used heavily in joins, you have a software design problem more than a database problem.
This would almost certainly be true if the design were widespread (which it's not as far as I know), but it isn't necessarily true for all cases.
I think it would be better framed as an optimization problem. If you design for a domain that actually models 500 events per second in a dataset of 50K items, the simplest correct implementation will implement exactly that. If that domain also involves reads which benefit from joins and indices that make those writes prohibitively slow, you have a conflicting set of optimization paths. The fact that some tools don't accommodate that well is an implementation detail, and addressing that fact is optimization, not necessarily a primary design consideration.