Trusting science leaves people vulnerable to believing pseudoscience (journalistsresource.org)
143 points by kvee | 2021-09-26 | 170 comments




Another thing I've noticed that makes "smart" people believe in bullshit is the use of very scientific-sounding language and vague science concepts.

I noticed one of my friends (college-educated and an ex-FAANG executive) reading a book about astrology that basically tried to explain it with science. She made me read a chapter to prove that it's not bullshit. And it really didn't "sound" like bullshit: the author used many scientific terms, quoted papers on quantum mechanics, and referred to astronomy. But once you actually tried to process the gist of it, it was complete nonsense, literally zero information or stated facts. A horoscope dressed in scientific language.


Some people go through school the same way, and that is probably partly the cause. They never really understood the subject, just went through learning about it and accepting it, without deeper understanding. This shapes people into believing basically anything, as long as it comes from a source with enough sheen of authority.

This is literally everyone; look at experts trying their hand outside their field of expertise.

The issue isn’t that we need to take some things on trust; it’s how to work out whom to trust.

Even scientifically literate people find this hard, you’ll often find people arguing something quite general by linking a paper that is very specific (or actually says the opposite!).

Less honest people will outright lie and link to a source in the hopes you won’t read it and trust them to be diligent. Just the other day a user here stated some facts and linked to a wiki page that immediately and outright refuted them.

We’re not just in an environment where we need to make sure people aren’t making mistakes. It’s also highly adversarial.


It's about something more than just having to work out whom to trust - that just sounds like a restatement of the relativization.

Not everyone can be scientifically literate. And I agree, there are many things masquerading as scientific literacy - including picking out single papers while being clueless about the wider stream of knowledge in that field.

Yes, it really is much better to ask someone in the field - for example - what a paper means when seen in context, and so on. That's what the good science journalists used to do.

But, we can't just give up. We can't be perfectionists in the sense that - "either you're an expert and know the field or you just don't know anything". We can all be curious and humble about what we don't know, while still trying to understand things in their context and fit them into our model of the world.

> We’re not just in an environment where we need to make sure people aren’t making mistakes. It’s also highly adversarial.

Here I'm a bit unsure what you mean. Maybe, if I understand you correctly: I don't think zero mistakes is the goal. Mistakes are allowed. The differentiator is always how we handle and correct our mistakes.


I’m not advocating giving up.

By the last bit I mean that people are not just making mistakes; they are also deliberately lying, creating things that lead people into making mistakes.


I think it's worse than that. Sure, there are intentional liars, but they are the easy-to-spot bullshit-factory managers. What's harder is dealing with the true believers, who lie to save their own worldview, who need that thing to be true.

Bingo. You figured out public school.

> Bingo. You figured out public school.

what do you see as alternatives?


Mandatory, universal indoctrination of the youth by the state long into adulthood wasn’t even normal until about a century ago. I suppose the Ottoman Janissary Corps is the only older example I can think of.

Outside of knowledge-based fields, most adults I know obtained all of their practical knowledge in the course of their careers.


Out of curiosity, what book was it? I might get it as an April Fools' joke for an astronomer friend of mine.

Sorry, I don't remember. But it was definitely written by a Prof. Something Something, PhD. And of course it had a mesmerizing rendering of some cosmic occurrence on the cover.

I recall a video where a biologist talked about how students from prestigious universities had become completely incapable of doing science.

He would take them in the field, and they were utterly unable to study the real world. To make predictions, observe the world, challenge their assumptions. All they could do was fall back on peer-reviewed papers.

If it wasn't in a peer-reviewed paper, it simply didn't exist.


There's a saying that goes something like "sometimes half an hour in the library can save you two weeks in the lab". I think that's true, but at the same time, sometimes half an hour in the lab can save you two weeks in the library.

In my experience, the entire latest esoteric scene tries this: using pseudo-scientific language to impress the simple-minded and, when more elaborate, also the more clever ones.

Some of the funnier ones explained why the DNA nucleus of the new vaccines changes the energy field of the receiver. Meaning you lose your soul if you get vaccinated.

The scientific explanations around the Flat Earth Society are hilarious. They try, in all seriousness, to work out physics models that work with a flat earth. Deep shit, it seems.


Or just using complicated language to say something simple in the first place.

Annoyingly, science is sometimes also guilty of this - as is the medical profession. It's a form of gatekeeping. It would be better if we all just used the simplest set of words to communicate a concept. There are valid reasons for using more complex terms; they serve as a kind of data-compression mechanism, but more often than not they are simply obfuscation.


Ain't HN the king of that...

"you are being disingenuous": 110 results

"you are being stupid": 4 results


I get your point but disingenuous doesn’t mean remotely the same thing as stupid.

Oh, you're right. But still:

"you are being deceitful": 1 result

"you are being dishonest": 42 results

... if I tried them all they might add up to over 100, but still. It's hard to make the case with a few searches, maybe I shouldn't have started this ^^ It's just something I notice constantly. E.g. half of the time, "I'm not entirely sure I agree" is a sincere statement, the other half it's just "I disagree" in more words that are supposed to give an impression of depth that isn't there.


But "dishonest" is a simpler language register than "deceitful". Doesn't that weaken your argument?

I didn't mean to contrast the two with each other, but both with "disingenuous", of which they're synonyms.

My takeaway from your data is that people call each other various shades of dishonest a lot more than they call each other stupid.

And those are two very different things that don't have much to do with how big the words you type are.

It's easy to not be entirely honest in an argument due to various biases, or when you have some sort of emotional attachment to an argument. That can be a reasonable thing to point out.

Calling people stupid, on the other hand, that is pointless and purely inflammatory.


Using complicated language because it is more precise is not gatekeeping. Boiling it down into simpler language is going to make it less accurate and more prone to misunderstanding.

> Using complicated language because it is more precise is not gatekeeping.

But that's not what OP said.

They said "science is sometimes also guilty of this". They didn't say "always guilty".


The Bogdanov Affair happened, where papers that were later concluded to be devoid of any real meaning, but full of jargon, were published in physics journals - and most likely it was not a troll attempt; the authors were actually serious.

“Zero information” was a common criticism of these papers when they were later more heavily scrutinized: they had a tendency to use paragraphs of jargon to say the obvious and nothing new.


I recall during my first degree, I once rewrote an essay, making the essential points clearer. I still believe it was my best essay, but I got the lowest mark with a comment that it was "too simple".

I got the highest mark for an essay I wrote the night before it was due with no knowledge of the subject. Just used a bunch of clever sounding words and threw in a fake bibliography at 8am in the library.


I think there's a logical explanation for why being born at a particular time of year might have an effect on how your personality develops, and that's that the time of year you're born determines where you are placed within your cohort at school. Only just old enough for this year's intake? You're more likely to be shorter and weaker than everyone else on the sports team. Almost too old, but just squeaked in? You have both a physical and mental advantage over everyone else.

While 'horoscopes' might not have any basis in reality, it's not unimaginable to me that groups of personalities might arise who all share a particular birth month.


In general I agree with you, but the more I consider topics like this the more I see it as a problem of correlation strength vs. effect size. That is, at the population level there may very well be a strong correlation between birthday and certain personality traits, but the effect size is likely to be infinitesimal compared to everything else we know about what shapes a person’s psychology.

I actually think this is one of the ways that published science tends to distort rather than clarify our view of reality, as the strength of correlation tends to be overly emphasized as measure of quality or relevance.
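The correlation-strength vs. effect-size point can be made concrete with a little arithmetic. This is a hypothetical sketch (the r = 0.05 "birth-month correlation" is made up for illustration, and the significance threshold uses the rough large-sample approximation that the standard error of a small Pearson r is about 1/sqrt(n)):

```python
import math

def variance_explained(r: float) -> float:
    """Fraction of outcome variance explained by a Pearson correlation r (r squared)."""
    return r * r

def min_n_for_significance(r: float, z: float = 1.96) -> int:
    """Rough sample size at which |r| clears p < .05, assuming the
    large-sample standard error of r is approximately 1/sqrt(n)."""
    return math.ceil((z / r) ** 2)

r = 0.05  # a tiny, hypothetical birth-month/personality correlation
print(f"variance explained: {variance_explained(r):.2%}")            # 0.25%
print(f"sample size needed to detect it: {min_n_for_significance(r)}")
```

So a population-level study with a couple of thousand subjects could truthfully report such a correlation as statistically significant, even though it is swamped by everything else that shapes a personality.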


That's most definitely true. But describing one's character or predicting behavior based on that is simply random stereotyping. And even if we could make some use of it, the astrology crowd would never embrace it.

That's exactly the sort of thought I've always had about astrology: if there's any truth at all to it, it's because all the astrology BS is just being used as a clock. The effect is the result of other factors that are periodic such as seasons, social factors as you say, etc.

Any such effect would be infinitesimally small at a city, state, or nation level, not to mention such effects would not be self-repeating, defeating the purpose of such a book.

My sister, who is sharp but not highly educated, gave me a book on numerology once. I read the first chapter at least, just to humor her, and came away with the exact same feeling: zero information.

She’s a very “intuitive” kind of person, if you know what I mean. I do my best to stay curious and avoid judging her narratives about her personal experience; even if her explanations seem nonsensical I can’t deny the phenomenology of it.

However, I definitely draw a firm line with her on numerology. When you’re talking up theories about historical events and real people and organizations, you’re in the realm of falsifiability and you’d better bring evidence.


So true. I studied algebra and I am fairly acquainted with modern cryptography.

Sometimes I meet "seriously looking" texts that mix cryptographic and information theoretic expressions ("code", "one time pad", "asymmetric", "stream cipher", "entropy", "quantum") in a hodgepodge that does not make any sense, but looks convincing to the amateur reader.

I suppose it is the same with, say, immunology. People can cook up a Sokal-like text that throws all the buzzwords around and look knowledgeable on Facebook, without actually saying anything of value.


It’s quite alarming how much misunderstanding I find in articles the mainstream media channels in my country write about anything I have professional/deep knowledge in. I wonder how much nonsense I’m reading, and perhaps blindly accepting, about subjects in which I am not as knowledgeable.

Same. I take raw video with a grain of salt.

I run into that all the time. I've got a solid grasp of biology and have a lot of knowledge about animals. It's pretty rare that I find an article that draws the conclusions I feel should be drawn from the information. I had to stop reading an article posted on this site the other day because, early in the article, they claimed that no one knows why plants produce latex. Obviously, that's not true, so I went to see how hard the info would be to find, and it's right there in the Wikipedia article on natural latex. I can forgive not knowing, but I can't forgive making a claim like that when the information is so accessible.

So what am I being told about economics or international relations that's just completely untrue? I don't even know enough to know what's an obvious lie, so how do I know what to read up on to verify the claims? How do I know that the source I'm using to verify the article isn't complete bunk itself? It's kind of terrifying.


But isn't this evidence that you should rely on the conclusions of experts? What you are describing here is that amateurs fail dramatically to synthesize correct beliefs about a subject, but if you spend years immersed in that subject you are much more capable of doing this correctly.

Even then, the problem is that it's sometimes hard to say whether there is nothing of value in there or whether you just don't get it. At least I read quite a few papers where I knew the information was in there, but it was formulated cryptically enough that it took me a long time to dig it out.

Or one exaggerates results. Has anybody read The Wisdom of Crowds? I went back to the original Nature paper (was it from 1904? Not sure) and I think the book cited the result with ten times higher precision than in the paper.

I wonder if she'll peer-review some Sokal-esque papers some day.

I like to express the same point with something like "new age is when you take someone with zero understanding of quantum mechanics and even less understanding of vedantic philosophy and then combine the vocabulary of both"

I feel there is a big confounder in their study that I suspect may spoil their results.

They only used two examples of scientific stories, and one of them is an issue around the health effects of GMOs.

Here's the thing: I think you'll likely find more people on the left/liberal (US meaning of the word) end of the spectrum who will say they trust in science. You'll also find more people on the left being skeptical about GMOs.

In fact, if there's one issue where you'll find views disagreeing with science on the left, it's probably GMOs. It's simply the classic and most prominent example.

So I wonder if that study didn't tell much more than "People tend to believe what they believed before, even if they say they trust in science".


Also nuclear power is a big taboo for the European left, often spoken of in fear-mongering/puritan language, even if most scientists support it.

It's also possible that objections from some left leaning people to (for example) GMOs or nuclear power aren't scientific objections as such.

They may very well believe that those things work the way "science" says they work, but have downstream fears about the corporations that control them, or people implementing them properly, or taking long term care into the future, greed, shortcuts etc

Well, in my head that is one way someone could still generally agree with "science", but be against some of those things and stay somewhat consistent.


Sure. At the end we are speaking of political decisions. One thing is measuring how much energy you can get from 1 g of uranium (science), another thing is building a nuclear plant with safety constraints (engineering) but deciding whether to build a nuclear plant or not is a political decision.

It's just that in my experience some people in this camp make objections that are very emotional, and that vibe is what dominates the discussion.

Of course there are concerns with GMOs and nuclear power, and you can trust science and still object to them on the grounds you cite. It's just that there are many people who actually behave inconsistently on that topic. At least from what I've experienced.


In my experience, the pro-nuclear arguments have become more BS filled lately. It's a tell that a position is bad when the arguments used to justify it are bad. If there were good arguments, why didn't they use them?

GMO objections are 95% objections about what's done with GMOs (e.g. engineering crops to be more resistant to cancer-causing Roundup).

Reporting on GMO objections is 95% about repeating the same studies that GMO crops themselves are safe to eat.

Both sides are scientific and neither is lying, but one is profit-driven and disingenuous, and has a far larger megaphone to not just drown out the other side but entirely misrepresent its objections in the eyes of the public (and link them to astrology, antivaxxers, climate-change denial, etc.).


Quite a lot of anti-nuke sentiment is anti-statist. If you worry about authority, nuclear power empowers the state above and beyond the norm. It led to increases in armed police in Britain, and a huge rise in secretive policing.

Personally, having opposed the Torness AGR nuclear plant in the 70s, I find I've returned to a more open-minded view. I would hope we can build cheap, reliable nuclear power, but the legal costs almost exceed the build costs. It's not on a declining build-cost curve, the way wind and solar are.

GMO is bound up in the obscenity of IPR in genetics. Mostly, people ignore benign routine genetic modification as in breeding and concentrate on Monsanto and roundup. Golden rice and the like have huge upsides.

I agree the left is a bit conflicted on this. Communism is Soviet power and electricity!


NPR ("Science" Friday) ran a segment on a nuclear plant decommission the other day. All fear mongering and swooning over the "waste" issues. No mention of the dire need to roll out new nuclear plants (for clean energy!) that run on...nuclear waste.

I was appalled but not surprised. I have seen much better science coverage on YouTube by indie content producers.


That's because there is no dire need to roll out new nuclear plants that run on nuclear waste.

BTW, the idea that all nuclear waste can be recycled is a lie. Fission products cannot be recycled. Some actinides can be recycled, but only to a limited extent (once through again as MOX fuel) in any reactor that's approved for building today.


This one is more about engineering and trust in regulatory bodies. Science alone won't keep a nuclear plant from breaking or leaking. It's about trust in the companies that build the stuff.

Also, "scientists" is kind of a nebulous group. I am unsure who exactly you mean and what exactly they support.


>You'll also find more people on the left being skeptical about GMOs

I've found it very strange that the same people who talk about the greed of Big Pharma, its price gouging, and its selling prescriptions to people who don't need them, think the same big pharma companies wouldn't take advantage of a pandemic to make a fortune.

Oh well, my Pfizer and Moderna stock is doing great. Make sure to tell your friends to get their boosters.


This has been my take on it. Not that GMOs will poison your body or anything woo like that. Rather, it's an enormous amount of power to give to an industry with tight margins and few scruples. I can't help but envision vegetables full of simple sugars, pesticides, and so forth in their very DNA. It's a complex picture and there are no doubt great cases for its use, e.g. golden rice. But this is the same industry that sprays RoundUp on wheat just to ripen it[0].

[0] https://en.wikipedia.org/wiki/Crop_desiccation


And also sues farmers who save seeds from the plants to use next year. You have to pay the company every year for new seed instead of being able to just save some seed back and never having to purchase more again. Gives complete power to the companies and dependency on them.

There are a number of reasons to be skeptical of agribusiness, but this one is especially overblown. The widely known suits around "saved seeds" were not about that specifically.

So, you're against hybrid crop varieties? Because you can't save the seeds of those and get the same results.

You aren't at the mercy of the companies supplying the seeds because they compete with each other.

Also, presumably if this were the objection then something like Roundup-Ready soybeans would now be ok, since those are now off patent. You're ok with those now, right?


I've recently noticed that pseudoscientific claims are now often accompanied by references to supposedly credentialed and authoritative scientists. Basically, the inventors of the fakes are adapting to the public's growing literacy. And I don't think this is going to stop or slow down as people become more "methodologically literate". The fakes will just get more sophisticated.

I've experienced this first-hand.

I'm an author on a fairly big-impact SARS-CoV-2 paper and made the mistake of looking what people were tweeting about it. People are referencing it as evidence that vaccines are weakening the immune system and posting links to the paper to make it appear like a legitimate scientific claim, despite the paper saying nothing of the sort (pretty much the opposite actually).

I don't know if it's malicious and they're relying on the typical anti-vax Twitter user not reading or understanding the article, or just a lack of scientific literacy and they've genuinely misinterpreted it.


It's mostly stupidity and blind faith IMO. Most of those posts are reshares, and it's hard to find who was actually the first to refer to a paper.

I once saw such a claim from antivaxers on Facebook and decided to try to read the paper, although I'm not an epidemiologist... So first of all, the paper was behind a paywall; second, it was written in Spanish (which neither I nor the poster can read); third, already in the abstract it said the opposite of what the Facebook poster claimed... It's all just a waste of energy, really. They just search for titles that sound like they support them and don't bother considering who wrote it, where it was published, or even what it says in the abstract. It seems that no one even comes close to actually reading a paper and evaluating its merits.


People gain a kind of power and influence over others when they manage to create an aura of credibility. Typically this is paired with strong, oversimplified, or even false claims. People want to hear it and believe it, so abusers get to reign.

This is very typical and likely most problematic in politics. Cults are a smaller but often more extreme instance.

The root of the problem is three-headed:

For one, there is simply not enough respect for education. It’s seen as a cost center, the best education is reserved for a small elite, and education overall is too rigid, when it should be a flexible part of our lives over a much longer period.

Secondly, there are perverse incentives and little accountability. Abusers are rewarded with power and very rarely do they take real responsibility.

Third, there are many who feel disconnected, because they are. They get lied to, manipulated and exploited from all sides. Seen as resources instead of real human beings.


I see this all the time, but more indirectly.

People make often quite politicized claims and reference popular-science news articles to back them up. Those articles reference actual results which, when scrutinized, tell quite a different story than the articles' liberal interpretation.


The Internet has trained us to raise out-of-context citation to an art form. People scan the text looking for matches to their pre-existing convictions and largely ignore everything else.

Not trusting it isn't exactly a great option either, though.

This article talked a lot about an experiment's results and how this may affect public policy. The experiment being discussed was about the behaviour of people who apply scepticism to scientific results.

It is ironic then that the article barely gave any account of the experiment itself. Just its findings. Readers are given no opportunity to critically evaluate the experiment.


Right. The whole purpose of science is for others to repeat the experiment and verify its results. Never trust a scientist! “Scientists” these days are really more like modern-day priests.

I cannot tell whether your comment is sincere or facetious. And its first word is unwarranted as your comment does not actually follow on from its parent.

I didn't read the paper on which this article is based. But I understand it's not so much about why to distrust scientists, as it is about the benefit of employing critical thinking to scientific results.

My earlier comment wasn't about the experiment per se. Rather, the way the article was written.


It was sincere. The scientific method is based on not trusting a scientist. Record your result, explain the experiment so others can repeat it. Why? Because you shouldn’t trust what someone says. The scientific process is the collective work of all those experiments and results. Trust the collective body of work produced by people. Not what someone says because they are a scientist.

Scientists were given the ring of power in political culture because it seemed the natural move and they would be the most cautious with its corrupting nature. Sadly much of science now is not real science and people are trusting them as once did priests.

In my opinion.


I don't know about "priests." I have recently seen crowds of rock-dumb idiots gathered outside scientists' homes threatening their lives and their families' lives.

I don't think priests ever had to worry about the rock-dumb laity showing up with pitchforks.


You may be forgetting about the man who started it all...

Thank you

I am more than happy to trust a scientist that clearly explains his work and thought process. I am wary of scientists that only have conclusions and hide behind jargon and "it's too complicated to explain to rubes, they'll never understand" elitism.

> The researchers note their findings conflict with ongoing campaigns to promote trust in science as a way to fight misinformation about the COVID-19 pandemic, mask-wearing and COVID-19 vaccines.

Interesting they bring up mask-wearing, because it was the scientists who were telling us in the beginning that we should not be wearing masks.


This is a hard problem. At some level, all people have to rely on experts to interpret scientific results for them. In science, the standard of truth is experimental repeatability, and most results that are interesting enough to get published require quite a lot of effort to actually run the experiments. And at some level you either have to run the experiment for yourself or trust somebody's word that they got the result they got and didn't make any mistakes.

So, since replicating results is hard we rely on reputation and authority instead. Is this paper peer reviewed and accepted to a respected journal or conference? Are respectable journalists writing about it? Are government officials saying things that match what the paper seems to be saying? And so on.

There are shortcuts. If an article is claiming that scientists have created a perpetual motion machine, they haven't. If an article was plainly written by someone who doesn't understand the difference between a kilowatt and a kilowatt-hour, it can safely be ignored. And so on. Science literacy is helpful at spotting obviously wrong articles, but I think there's a danger that it can lull one into overconfidence in one's own abilities to spot falsehoods. Sometimes the things we're told are just plain wrong, and some of those things aren't obviously nonsense. Some of them come from seemingly reputable sources and could plausibly be true, and the only way to know for sure is to see if other reputable scientists are getting the same results.
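The kilowatt vs. kilowatt-hour confusion is a good example of a cheap sanity check: a kilowatt measures power, a kilowatt-hour measures energy (power multiplied by time). A throwaway sketch of the distinction (the 2 kW heater example is made up):

```python
def energy_kwh(power_kw: float, hours: float) -> float:
    """Energy (kWh) = power (kW) x time (h)."""
    return power_kw * hours

# A 2 kW heater running for 3 hours consumes 6 kWh of energy:
print(energy_kwh(2.0, 3.0))  # 6.0

# One kWh in joules: 1000 W sustained for 3600 s = 3.6 MJ
print(1000 * 3600)  # 3600000
```

An article that quotes a battery's capacity in "kilowatts" or a power plant's output in "kilowatt-hours" has mixed up the two, which is exactly the kind of tell described above.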


It’s a problem that’s been solved for hundreds of years, and depending on the way you look at it, it’s not even a problem at all.

For starters, it depends on what you’re supposed to be trusting science to do. If you’re trusting science to provide you with a collection of knowledge that is definitely true, then you’ve misplaced your trust, because that’s not what science is. Science is simply a methodology for refining knowledge; on its best day, science will provide you with the best understanding we have currently achieved on a particular topic, and no more than that.

Are you trusting science to be a pure discipline, motivated only by the search for the truth? Because it’s not that either. Science and academia are just as infested with politics, ideology, perverse incentives, and corruption as any other human institution.

Are you trusting the knowledge provided by science to be particularly sophisticated? Because depending on the field it might not be that either. If you require science to be capable of making repeatable predictions, then there are entire scientific fields that are not capable of that. Psychology, for instance, has failed to reliably predict anything other than some rather broad trends; by some standards the entire field would be considered pseudoscience.

In any case, the solution we came up with for dealing with this issue was democracy. I think the real question being asked here is “why are technocratic policies often not popular?” There are lots of reasons you could postulate for any particular policy, but in all cases democracy is doing its job properly by acting as a safeguard against unpopular technocratic policies.


> Are you trusting science to be a pure discipline, motivated only by the search of the truth?

A distinction can be made between science on the one hand and scientists on the other.


Yes, but science in this context usually refers to the work done by its practitioners.

Archaeology: the use of scientific methods to determine facts. The facts are then used for large swathes of conjecture.

Psychology IS indeed pseudoscience, one of the most well-known fields of pseudoscience. What made you think otherwise?

I agree with most of that; in fact, I've said similar things myself. I guess I just want anyone to be able to distinguish science that came from actual experiments run by honest scientists from pseudoscience that came from people just making stuff up and faking results. I mean, that shouldn't be a very high bar, but it's actually pretty hard to meet when you think about it. Respected scientists can get away with faking results for years sometimes.

As for technocrats: I find it hard to trust anyone who thinks their policy positions are simply the result of direct application of pure logic. That's not how it works. Logic and science can help you arrive at a policy position once you decide what your goals and priorities are, but different people have different goals and priorities, and those aren't determined by logic. Maybe there are some good technocrats out there, but to me it seems like their goals and priorities tend to somehow always gravitate towards the goals and priorities of lobbyists, think tanks, and wealthy people generally. Maximize shareholder value and don't rock the boat.


People believe what they want to believe. For some people, any scientists whose words can be twisted to agree with their worldview are the "real scientists", while others are just "bribed scientists" who just write what they are told to write.

If there is one dodgy study made against GMO, people who are against GMO will just focus on that one, and will completely ignore 100s of studies which contradict it.


For accuracy I would prefer the title had been:

> Trusting science [without understanding the scientific method] leaves people vulnerable to believing pseudoscience

But it was already a wordy title.


“the scientific method” is incredibly easy to understand and on top of that comes in many different forms and standards.

Understanding the design of a particular experiment and the theories and values it attempts to put to the test is quite another matter. Scientists and laymen in completely different fields all understand “the scientific method”, but they often cannot even begin to read or understand each other's work or experiments.

Luckily, the convenient thing is that the experiments used to support highly politicized claims are often quite easy for a layman with some rudimentary knowledge to understand, while it's the experiments in fields such as particle physics or quantum chemistry, which have almost no political influence, that are incomprehensible without at least 8 years of formal education in the specific field.


I don't think the scientific method is actually that easy for most people to grasp.

For a layperson, simply talking about experiments and repeatability doesn't really convey the reality of how science works.

The way I describe the scientific method to regular people is by its purpose: it is simply the growing and improving collection of methods we have found for separating fact from fiction, illusion, bias, wishful thinking, tradition, coincidence, and every other form of misconception or non-predictive source.

It is the only system of knowledge that is based on relentlessly weeding out any false or unproven conjectures we have about reality.

This conforms to actual practice: actual scientific activity includes falsifiable predictions, experiments, reviews and repeatability, but also math (a biggie obviously), statistics, revealed conflicts of interest, regular meta-studies of active areas, tests for data bias, simulations for exploring implications, discussion groups vetting and encouraging each other's ideas, cataloging of fruitful concepts for guiding inquiry such as symmetry, ... and on and on and on.

And as new tests, methods or ideas are found useful, the scientific method accumulates them.

What all these activities have in common is they decrease the chance that something incorrect will be accepted.

And given all that testing and vetting and care, science holds itself to a higher standard than any other search for knowledge except pure math.

And given that science is an inherently decentralized activity, carried out with experts in numerous overlapping fields, organizations, and areas of research, and the fact that nature itself contains no contradictions, it is very difficult for errors to stick around unnoticed in any area attracting much attention.

Another practical part of science many people don't get is that most scientists are actively working at the intersection of the known and unknown, so they are very aware of science's limitations, and also of the destructive danger to their own progress and credibility if they unwittingly or strategically choose to accept false premises for political, economic or other reasons.

Not that bad science is never done, but it is very risky for the scientist's own credibility and ability to make any meaningful discovery. Coalitions intentionally pushing false narratives would know they were severely handicapping themselves from further discovery, and sabotaging their own relevance to downstream scientific discovery and technological applications.

Those of us familiar with science tend to take all this context for how and why science works for granted.


I really appreciate your willingness to jump in there and describe what you see; sharing your knowledge.

> I don't think the scientific method is actually that easy for most people to grasp.

Do you think most people have the actual opportunity to learn and grasp it though?

My worries/experience: https://news.ycombinator.com/item?id=28661179


I guess I would boil scientific awareness down to two parts.

I. How science works so well, i.e. the body of methods for vetting ideas, scientists' brutal incentives to be honest and competent, nature's inherent consistency as the final unassailable arbiter of truth, etc.

II. Then how to judge the products of science:

None of us can understand more than a tiny fraction of science directly; we must all learn to judge what science we can trust.

From General Relativity to the game theory behind self-organizing ants, I think people need to recognize that although with effort they could understand individual effects, everyone has to trust other experts for most of their scientific knowledge.

The best indicators of solid science, after personal understanding, are ideas that are widely accepted by many experts in the field, especially experts working across multiple overlapping fields.

(Climate science is a fantastic example of the latter: Experts including chemists, atmospheric physicists, geologists, biologists, ecologists, solar physicists, .... almost every kind of science.)

And another strong indicator of both stable and promising science is that it provides a workable foundation for further scientific and even technological advances.

And finally, when non-scientist experts, whose career success objectively depends on recognizing good science (i.e. the military, insurance companies, ...) treat particular areas of science as credible, we can assume it is safe for us to as well.


Thanks for your response, yet I'm not sure you've read my question thoroughly enough, because I'm not seeing a response to the question I posed, which is concerned with lack of access to scientific-literacy opportunities.

You are right. I gave a misdirected response because I didn't have a good answer for you. Thanks for calling me on that so that I am more aware.

This is something I personally want to figure out. We are working on math/scientific educational software (as a spin off of a larger mission) and addressing basic issues of illiteracy is a big deal.

I will continue to think very hard about your question!

--

A random idea: a simulation game of doing science, including brainstorming, making conjectures, testing, getting reviewed, etc. while doing the same for others would be great.

The blockchain-like aspect: building on others' ideas and having others build on one's own. The need to vet some ideas, while learning when efficiency requires trusting the output of reliable groups whose results have already been widely accepted.

Getting to see how the system spreads good ideas and removes bad ones, the incentives for individual self-honesty, etc. would be great.

Not an easy "game" to design, but something very active like that, maybe even between schools would be really neat.

And there would be a whole game design issue as to topic choice and progression for this to work.

That's a pie in the sky!!!! - and not a solution for the public at large. I will re-watch your video and be thinking more.

--

For now, we are only working with one school as our lab for educational redesign in the areas of music and math, with science and crafting areas to follow later.

An improvement common to all topics is an RPG-like skill graph, broken down into the dependencies of each micro-skill. This is both to give students greater awareness of their progress relative to material, and also let them manage their own learning.

Students are unable to move past a task until it is very well understood, and teachers can see when a student is moving slowly and needs individual help. That avoids the problem of slower students being undermined by pressure to skip ahead too soon, and of their hurdles being overlooked.

Both required and many optional skills are included, giving students the chance to stick to what is required, or to follow their curiosity if they are able to move more quickly. This results in greater education, but also solves the problem of faster students getting bored, by providing neat topics and a clear sense of special accomplishment.
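A skill graph with dependencies like that is essentially a directed acyclic graph, and a valid learning order is a topological order over it. A minimal sketch (all skill names here are illustrative, not from any actual curriculum):

```python
from graphlib import TopologicalSorter

# Hypothetical micro-skill dependency graph: each skill maps to the
# set of skills that must be mastered first.
skills = {
    "addition": set(),
    "multiplication": {"addition"},
    "fractions": {"multiplication"},
    "ratios": {"fractions"},  # an optional enrichment branch
}

# static_order() yields skills so that every prerequisite
# appears before the skills that depend on it.
order = list(TopologicalSorter(skills).static_order())
print(order)  # prerequisites first, e.g. addition before multiplication
```

A real system would attach mastery state per student to each node and only unlock a skill once all of its predecessors are marked "very well understood".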


> An improvement common to all topics is an RPG-like skill graph, broken down into the dependencies of each micro-skill. This is both to give students greater awareness of their progress relative to material, and also let them manage their own learning.

I love that! I would have benefitted from that a lot in school.

> The need to vet some ideas, while learning when efficiency requires trusting the output of reliable groups whose results have already been widely accepted.

Your project sounds great! I'm curious if you like Dr. Alex Freeman's approach to a new type of scientific publishing system, with her http://Science-Octopus.org (aims to make science more accountable and open access):

"Q: What problem are you aiming to tackle?

A: Fundamentally, I believe that the whole concept of the scientific paper drives people to tell stories rather than worry about the quality of the actual research. It's true that during the scientific process, you usually start with a problem and you finish with a real-world use of your solutions. But the whole chain, from one end to the other, takes a whole range of different skills and different people and different amounts of time. Forcing researchers to guard their work and get to the end of the chain before they can publish anything makes people work in silos, when we'd all be much better off if everybody shared their work as they went along and everybody collaborated to get to the end of the process.

There is a real problem in science—with widespread questionable research practices, lack of replication, publication bias, inequalities in access, slow progress, and wasted resources. And I think that the root of it all is the publication system, and the pressure to publish as the only way that scientists are being judged. The new publishing platform I envisage would stop forcing researchers to try to tell a story, allowing them to concentrate on doing their work to the best of their ability and on constructively collaborating.

Q: How would this new publication system work?

A: The overall idea behind the new platform, which I called Octopus, is to break the standard unit of publication up into eight smaller stages or pieces. These include formulated scientific problems, hypotheses, methods and protocols, data, analysis, interpretation, and real-world applications. The eighth piece is reviews where people share their comments about the other stages, but it's also treated as a publication itself because it is a constructive piece of collaboration. Researchers will be able to revisit their publications to acknowledge others' comments or suggestions and even include some of the useful reviewers as authors, with the old version remaining available so that readers can track the evolution of ideas.

The formulated scientific problem automatically becomes the beginning of a chain, and all the other pieces are vertically linked upward. If you have a great hypothesis, you can publish that and link it straight up to the problem that it is trying to address. And if you've collected some small amount of data, you can publish that, linking it directly up to the protocol that you followed. It will also be possible to create horizontal links between publications; maybe somebody has come up with an amazing algorithm whilst studying starling flocking behavior and somebody else working on oil pipelines has decided to use it.

Octopus will thus allow researchers to publish in much smaller author groups, and that can make the system much more accountable and meritocratic. If, say, you're an applied statistician, at the moment you're very rarely a first author on anything. But with Octopus, statisticians can do analyses of anybody else's data and get credit for it as the author of that publication. Octopus will also be made language-agnostic through automatic translation to lower language barriers and broaden access.

Another way to make sure that we are judging people on the actual quality of their work and not on the stories is through rating. Anybody who's logged in to the system will be able to rate entries as soon as they are published based on pre-set criteria reflecting what we as a scientific community value at each stage of the scientific process. So, for instance, there might be a rating for the size of the dataset, and another rating for how well it is annotated and how usable it is. Just like on eBay or Amazon, the most highly rated pieces of work will automatically become more easily visible.

Octopus will also allow the evaluation of individual authors through a professional page that shows their affiliations, their conflicts of interest, and all their publications and their ratings. Seeing these metrics, you will be able to get a feel for what type of researcher this is. Are they publishing lots of well-rated hypotheses? That's an idea-generating person. Are they writing a lot of constructive reviews? That's a very collaborative person."

Source: Meet Octopus, a new vision for scientific publishing: https://www.science.org/careers/2018/11/meet-octopus-new-vis...

Talk: Dr. Alexandra Freeman | Octopus - a radical new approach to scientific publishing | 29 October 2018: https://www.youtube.com/watch?v=Af6aITLEoD8

Or have I misunderstood your aims/interests and you are not interested in open access?


I agree with the sibling comment by Nevermark: the scientific method isn't easy to understand at all, partly because nobody has a consistent definition of what it is. Your own comment illustrates the difficulty. You say the scientific method is easy to understand and then start talking about experiments.

Experiments? Is that actually a required part of the scientific method? Apparently not: lots of fields presented to the public as scientific can't do experiments. Most obviously, climatology, epidemiology and economics don't do experiments, despite being fields with enormous direct impact on governments and therefore on people's lives. You can try to refine it to "making testable predictions", but then it would require that the predictions are actually tested - real scientific papers often make predictions without testing them against reality.

What about computer science? Is it really a science? Most people would say the scientific method is a way to discover knowledge about the natural world, but there's nothing natural about computers. I myself am not convinced computer science is a science, but, it's treated as one by the institutions.

The prevalence of papers that simply present unvalidated statistical/algebraic models raises huge questions about what science is in theory vs what it is in practice. In theory, science is about discovering knowledge about the natural world, by refining hypotheses which make testable predictions into validated theories. In practice science is whatever granting agencies, universities and journals find interesting. Nothing anywhere in the so-called scientific system actually tests that the work is following the scientific method, nor is there even a working definition of what that would mean. The result is that scientific journals routinely publish pseudo-scientific claims without realizing that it's what they're doing (see my other comment).

In fact many fields are almost entirely a-theoretical: social psychology papers like the one this thread is about aren't rooted in an attempt to refine a grand unified theory of the mind. They're just testing academic folk intuitions, and are often driven by political or ideological concerns more than an attempt to derive theory.

In such a world, deification of The Scientific Method™ is naive. There is no such method, just a hodgepodge of tactics and social conventions that seems to work better than nothing, but which may or may not be actually used by something labelled as scientific.


There is no scientific method.

> "Now let us begin our course. The course is in scientific method—it is called ‘Introduction to Scientific Method’—and the first thing that I want to say as an introduction is that this is a subject that does not exist."

-- Karl Popper

https://twitter.com/DavidDeutschOxf/status/14391227643261255...


I agree, but would word it differently. There is a scientific method, but it is not a set of known and universally applicable concrete steps.

The real scientific method is: collect every means you can to test and vet ideas against nature as directly, efficiently and reliably as possible. Then do those things. As you do them, keep growing and fine tuning those methods.

The only thing the "methods" of the "scientific method" share is their purpose: eliminating bad ideas, in order to more quickly find promising ideas, and then narrow those to the most promising at any given time.

Experiments, repeatability, etc. are just some of many activities; they don't even apply in situations where they are not practical, but other methods help there.


> The researchers note their findings conflict with ongoing campaigns to promote trust in science as a way to fight misinformation about the COVID-19 pandemic, mask-wearing and COVID-19 vaccines.

If you trust in science, you know that ordinary surgical masks and cloth masks cannot protect you from viruses. You also know that the COVID vaccines are experimental and have no long-term studies done on them. You also know that the risk of dying from COVID below an age of 40 is about as high as winning a large sum of money in the lottery. You also know that highly vaccinated countries like Israel had COVID cases explode, making you wonder how exactly the vaccine protects people. Trusting the science means trusting the scientific method, not trusting some random authorities who have every financial and political incentive on the planet to claim what they claim.


"Trusting astrology leaves people vulnerable to pseudoastrology."

I think this made-up sentence makes it pretty clear why this isn't actually a problem.


Richard Feynman on pseudoscience:

https://www.youtube.com/watch?v=tWr39Q9vBgo


The concept of 'trusting science' is utterly absurd.

We may choose to trust certain scientists because no one has the time or inclination to critically assess all primary literature, but the nebulous idea of 'trusting science' should be thrown in the dustbin. No matter what side of the political spectrum you live on.

Edit: to add, I'm certain that a large majority of good scientists would agree with the above.


The fundamental principle of science is doubt, not trust.

“Believe in science” is just a new religion for the intellectual class.

Obviously, knee-jerk doubt is no better than knee-jerk trust, but there needs to be a balance, and recently the marketers & manipulators (even those employed by governments, corporations, international organisations, …) have caught up on the fact that labelling misinformation as “science” makes it much more viral…


Scientismic appeals commonly appear in relation to political agendas.

In popular discourse "trusting science" can be better understood as an appeal for conformity of thought. Scientists naturally disagree about a great many things. When people are asked to "trust The Science", the speaker is asking for unquestioning faith in the experts they feel are truthful. Scientism is just another way to denigrate dissent as dangerous heresy.

The core of the issue revolves around the lack of trust in government and media institutions. It is a poor substitute for reflecting upon how the trust deficit came about and attempting to address those issues. The process would undoubtedly involve addressing past misdeeds, accepting responsibility and attempting to make amends. Understandably, this is not palatable. In lieu of this we are offered more divisive finger pointing and comically inept generalizations of the heathen unbelievers as "flat-earthers".

The end result is a doubling down on the trust deficit, digging us deeper with further divisiveness. At a certain point this approach and the institutions advancing it must be recognized as bankrupt.


> “Believe in science” is just a new religion for the intellectual class.

What does that even mean? If I believe the earth is round, then science is my religion? If I believe it is flat, then I am an atheist?


If you haven't confirmed it yourself (it's quite simple, there are various experiments you can do) then yes it is belief. You simply believe that the evidence is persuasive. That is good enough for most things, but for things of great importance, you wanna dig down deeper, do your own research. If you just trust whatever the current scientific authorities say, then yes, this behavior is indistinguishable from religion. Completely indistinguishable.

Belief in something (true or not) without concern for evidence is religion. If your faith can be changed by evidence that contradicts it, then you might begin to practice science by seeking more evidence.

So if the restaurant claims the tomatoes are locally sourced and I don't personally verify... that is religion?

I know the concept of religion is hard to define unambiguously, but come on.


"Believe in science" doesn't mean believing in any specific scientific fact. It means believing whatever "science" tells you. The problem is that most people aren't equipped to tell the difference between science and pseudoscience, so if they "believe science" then they also end up believing pseudoscience.

I think what the GP is getting at is that science is a process, not a priesthood. Being part of the social institution of science doesn't automatically make one's opinions more valid; what makes their opinions more valid than the layperson's is the correct application of the scientific method. Believing someone "because they're a scientist, therefore they are right", like most of the general public does, is still a faith-based approach; believing someone because their paper managed to convince most of their peers they were onto something and produced reproducible results is a scientific approach. The GP is essentially claiming that many people are believing the right things for the wrong reason, if I've interpreted the point correctly.

The problem is in this brave new post-pandemic world we've elevated scientists to the social position of prophets without the general public really being aware of how the scientific method is supposed to work or the limitations of things like statistical modelling. Science is a mere process that society has foisted the role of a priesthood onto, and I really think that this is asking for trouble because for the most part people have trouble differentiating science from pseudoscience. This rising tide has lifted all boats, including the quacks and charlatans of the world.


It seems a lot of everyday things are a religion then. What if I hire a plumber to fix my sink and he says he needs to replace a pipe to make it work? I know nothing about plumbing, so of course I trust the plumber, even though I know any human makes mistakes. Does that mean I have "elevated plumbers to a priesthood"?

It seems like almost any human interaction involves a level of trust - but if everything is "religion" then the word is meaningless.


Hardly, nobody's saying that you can use plumbing to completely ignore ethical philosophy as though moral principles can just drop out of logic if we derive them in a clever enough way, but lots of people seem to make this claim (intentionally or not) about science. There's even a word for this: Scientism.

I wouldn't ask a plumber OR a scientist about making moral decisions as though they had some special advantage.


Can you link to an example to someone making such a claim? Just so I'm sure I understand what you are referring to.

Forget the curvature of the earth. The controversy is how a belief in science, which is necessary, translates into a non-trivial understanding of human life, which is voluntary - this is where things get "religious", in the sense of tendentious ideological reductive thinking.


>There needs to be balance

It is not possible on top of the social media/news media landscape. Whatever gets the most attention "wins".

It's not that people have "caught up to it". It's that it's the only game in town worth playing, and there are no other games that can get you influence as quickly.


Not possible? I'm not even sure it's desirable. "Balance" is not "truth."

> “Believe in science” is just a new religion for the intellectual class.

Amen. Credentialed scientists are now the high priests of yesterday.

The layperson is expected to defer to their authority in all cases, and to ignore that they too are human.


Well, as a scientist, I tend to defer to the fact that scientists in other domains know their respective domains much better than I do.

Witnessing knee-jerk reactions (recent and not-so-recent) by laypeople who have no clue what they are talking about but feel entitled to criticize specialists who have dedicated their lives to studying a phenomenon, just because they manage to catch one or two words of vocabulary, is... frustrating.

Whenever (some fringe of) media decide to hold (some piece of) science as controversial, said specialists find themselves the object of an intellectual DDoS whereby they are put in a loop of permanent justification and rejustification to people who simply do not understand the arguments, nor typically care.

Now, the solution is of course not blind trust. It's probably making sure that more laypeople have a clue about science.


> scientists in other domains know their respective domains much better than I do

But knowledge can also be misused. Those are exactly the people most capable of deceiving you, if they want to (e.g. Peter Daszak).


How has Peter Daszak deceived us?

He strongly pushed the "natural origin" theory and branded "lab escape" theory as a conspiracy theory, despite knowing (and without revealing) that the lab was involved exactly in that kind of virus research (in order to conceal his involvement).

Since I have not seen any evidence that directly supports the lab escape theory, I do not feel deceived. It’s a pretty big jump from not disclosing some proposed experiments that may or may not have been done to lying about likely origins of the virus.

The real issue is that other members of your group have seriously damaged its credibility. If you fail to remedy that then the frustrations you describe are sure to increase.

Sure it’s ego-preserving to instead blame the rubes and look down on them, but it’s also counterproductive. The “uneducated” masses may not understand the scientist’s field, but they easily comprehend condescension and contempt, and it amounts to efficient anti-persuasion. Also, plenty of non-scientists have considerable subject matter expertise in bullshitting. They may not understand the science, but they know when a scientist is communicating with the intent to persuade rather than inform.

Then of course there’s the possibility that an inability to explain something to lay people indicates a deficiency in the scientist’s understanding. Feynman certainly thought so and he was probably the most publicly trusted scientist of his generation. His QED[1] is a shining example of explaining a very difficult subject to the lay audience.

Edit: Einstein is another example of a publicly trusted scientist who made every effort to clearly and completely explain his work to lay people. In his Relativity[2] he mostly succeeds, and where he falls short he acknowledges it and still communicates the concepts of general relativity pretty well.

[1] https://www.goodreads.com/book/show/5552.QED

[2] https://www.goodreads.com/book/show/15852.Relativity


I think you're overestimating your importance, and for me that notion is indeed the crux of the matter, that ultimately also leads to QAnon, etc:

It's a misconception that scientists gather in their ivory towers and spend weeks and months scheming, how they can rub it in to the plebes that they are smarter. Most are not interested in that, and, I'd say, all have something better to do.

There are many, much easier, and much better ways to get into a position where you can live out your inferiority complexes. You think that's the motivation of someone going through grad school, doing a PhD, a couple of postdocs, then tenure track, etc.?

I admire Feynman and Einstein as much as the next guy, but they lived in an academic environment that is vastly different from what we have today.


> they know when a scientist is communicating with the intent to persuade rather than inform.

Persuasion is part of a good scientist's responsibilities to his/her community.

What is not is: "Here is our explanation, add it to the list of other explanations you got from your hairdresser, neighbour, priest, chief, etc. and choose which ever one makes your feel better."


> feel entitled to criticize specialists who have dedicated their lives to studying a phenomenon, just because they manage to catch one or two words of vocabulary, is... frustrating

I think "feel entitled" is the wrong way to think of this. People's skepticism of scientific experts or scientific facts is a failure of both journalism and science:

1. Science journalism is almost universally terrible, so people already get sold half truths and sometimes even outright falsehoods from allegedly reputable sources. Messaging needs dramatic improvement.

2. The replication crisis has shown that up to 50% of published results in medicine can't be replicated (and up to 66% in social sciences), and there are virtually no incentives to replicate or publish negative results, and too many incentives to data mine/p-hack and publish sensationalized results. There are now some efforts towards correcting this, but it's only just beginning.

Depending on the field and the topic, someone questioning expert advice actually has a coin flip's chance of being right and the expert wrong.

> Now, the solution is of course not blind trust. It's probably making sure that more laypeople have a clue about science

I don't think that will fix anything. Having a clue about how science should work and then seeing its failures only validates science skepticism.

For example, Fauci said early in the pandemic that people had no reason to wear masks, and in fact, that mask wearing could be worse than not wearing them. He later publicly admitted that he lied to allegedly avoid PPE shortages. So a public health official whose duty was to convey accurate scientific information admitted he lied to the public, thus breaking public trust. Exactly what additional knowledge about science do you think will correct this violation of public trust?

You see, science is not the problem here; journalists' and scientists' public conduct is the problem.


No. Believe in science is very different than believe in scientists. I believe in science. That does not mean I inherently trust any scientist. Science is different than the people who practice it.

The fundamental core of science is repeatability and prediction. Belief in anything else, belief in things that are not repeatable and offer no external predictions, THAT is religion. When faced with a new phenomenon I will to my last breath believe that there is a scientific explanation. That doesn't make me a fanatic. It makes me rational.


> core of science is repeatability

It is indeed. But with experiments and theories getting ever more complex and difficult to verify [1], we are edging towards just having to believe what we are told. Verifying Newton's theory of gravity was relatively simple, just throw stuff from your balcony and verify that s = 1/2 g * t^2!
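That balcony check can even be scripted; a minimal sketch (the balcony height and stopwatch reading are made-up values, and air resistance is ignored):

```python
import math

g = 9.81       # m/s^2, standard gravity
height = 5.0   # m, a hypothetical balcony height

# From s = 1/2 * g * t^2, the predicted fall time is t = sqrt(2s/g).
t_predicted = math.sqrt(2 * height / g)

# Compare against a hypothetical stopwatch reading of the actual drop:
t_measured = 1.01  # s
s_implied = 0.5 * g * t_measured**2  # the height the theory implies

print(f"predicted t = {t_predicted:.2f} s, implied s = {s_implied:.2f} m")
```

If the implied height agrees with the measured one to within your timing error, you have personally reproduced Newton's prediction, which is exactly the kind of verification that is no longer feasible for, say, a collider experiment.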

So we are deep inside the second Middle Ages, where the priesthood tells us what to believe and how to behave, without us having any chance of verifying their claims.

I greatly wonder if and how humanity will get out of this one, and what a second Age of Enlightenment would look like.

[1] https://en.wikipedia.org/wiki/Replication_crisis


The 'priesthood' mostly isn't interested in telling you what to believe and how to behave. The vast majority of the 'priesthood' would much rather just return to their lab and do exciting research.

You're writing this on a device developed using physical insight into quantum mechanics. I am sure you can, without understanding the theories (neither do I), verify that your device does, indeed, work. Surely you are able to verify that your GPS works without understanding the intricacies of either general relativity or the literal rocket science needed to make this happen.


>The 'priesthood' mostly isn't interested in telling you what to believe and how to behave. The vast majority of the 'priesthood' would much rather just return to their lab and do exciting research.

Politicians on the other hand are more than happy to use science and the social credibility of science to coerce and manipulate though. It wasn't too long ago in terms of human generations that "the science" was that we should chemically castrate those who didn't fit traditional society's norms (like Alan Turing), or that the economic benefits of burning fossil fuels were worth the ecological catastrophe for future generations.

I think a discussion on scientific ethics needs to be had by the general public as well as better scientific education all around. It's not good enough to blindly trust "the science" when there's an ethical dimension in play which there almost always is.


Exactly! Fuck those people who studied a topic for 40 years -- why would they know one iota more about it than I do, or the facebook group of my crazy uncle -- they all have access to Google, after all!

I, for one, refuse to defer to the authority of so-called experts, 'so-called' merely because of decades of experience in the field. Pah. For questions about cancer treatment, I typically just ask someone else in the Walmart aisle standing in front of Sauerkraut; for investment advice I ask the feral cat outside, when my car makes funny noises, I just throw a dart at a printout of an amazon page after I searched for 'car funny noises'.

Edit: Also, fuck pilots. Why do they think they can do anything better than I can? They hide behind their locked doors and just Google stuff, I just know it!


1.) What do you think has made people scientifically incompetent (and thus distrustful of experts)?

2.) What do you think the impact of our current economic system has been on science? About the way science has been commoditized and feedback loops of the collective learning of humanity have been enclosed/black-boxed and made artificially scarce (innovations in the forms of patents, and other commoditized knowledge)? [1]

> Exactly! Fuck those people who studied a topic for 40 years -- why would they know one iota more about it than I do, or the facebook group of my crazy uncle -- they all have access to Google, after all!

I'm glad to see you state that access to Google and the 'commercial' web (can't think of better word) is not a good benchmark for access to humanity's shared inheritance. I see that claim too often.

[1] https://www.ipwatchdog.com/2019/02/19/dont-fooled-patent-pur...


Science is the process of gaining, evaluating, and revising knowledge and understanding. Science is not a body of knowledge. It's an algorithm, not data. So clearly, having access to the largest book ever written does not say anything about scientific literacy.

Here's the thing: people love to trust experts and, also, love to not think for themselves.

And they do trust experts, i.e. when it comes to having their cars repaired, getting medical treatment outside of COVID, etc. It's not like millions of people came up with using malaria medication and horse dewormer on their own after conducting studies in their basement. They are following other people, while, ironically, praising themselves for being independent and most certainly not sheep (it's horse dewormer, after all).


> Science is the process of gaining, evaluating, and revising knowledge and understanding, Science is not a body of knowledge. It's an algorithm, not data. So clearly, having access to the largest book ever written does not say anything about scientific literacy.

Great description, loved "It's an algorithm, not data." especially.

> Here's the thing: people [...] love to not think for themselves.

Wtf? I agree with what you write before, but then in this comment you're grossly generalizing and calling people dumb. Why? Does it make you feel better? Maybe you could try to dig into your judgement here and share what emerges? Why do people 'love not to think for themselves', and then why are you different? Or what makes you and a few others uniquely able to see this?

I tried to honestly state my personal perspective here if you're interested or need inspiration to describe your lens or your experience: https://news.ycombinator.com/item?id=28661179


You're reading a lot into my comment that isn't there.

So in your experience, people don't read movie reviews, don't listen to sports commentaries, don't read amazon comments? People don't listen to TV personalities telling them that malaria medication is good against COVID? In your experience, people read Nietzsche and Schopenhauer all day long, and discuss existentialism, rather than watching Netflix? Etc.


> So in your experience, people don't read movie reviews, don't listen to sports commentaries, don't read amazon comments? People don't listen to TV personalities telling them that malaria medication is good against COVID? In your experience, people read Nietzsche and Schopenhauer all day long, and discuss existentialism, rather than watching Netflix? Etc.

Your words, not mine. The question I'm trying to get your perspective on is: despite all this (movie review, amazon comments, TV personalities), why do people make bad decisions? You keep saying that they do (and I agree with you) and I'm asking you why you believe that this is the case. Why do they make bad decisions?

And a follow up question: why aren't people more hungry to learn (e.g. reading Nietzsche and Schopenhauer all day long and discussing existentialism if they want to)? Why do they instead watch Netflix, etc.?

Am I misunderstanding you somewhere? Please let me know.


Not "my words", I asked you a question.

Oh, I am not claiming I have answers! A lot of things can be explained by people not having time and just rolling with whatever other people have strong opinions on (ever been in an organizing committee, or in a group trying to figure out what to watch on Netflix - at some point you just want this to be over).

I think another aspect is simply vanity. It makes some feel special when you have "inside information", and you're "not a sheep" (TM), when in reality, all you do, is run after some other leader.

Edit: Systematic misinformation. Have you compared FoxNews and CNN during the Trump presidency? It's like they were reporting about different planets. One group has a significantly better outcome when it comes to COVID, for example (the motivation behind this is, however, a complete mystery to me).


> Systematic misinformation. Have you compared FoxNews and CNN during the Trump presidency? It's like they were reporting about different planets. One group has a significantly better outcome when it comes to COVID, for example (the motivation behind this is, however, a complete mystery to me).

Agreed. I fear that deeply traumatized people like Tucker Carlson aren't able to care much about people.

https://www.youtube.com/watch?v=9E0NcmcGdSI

https://www.youtube.com/watch?v=rQwP0XRBjq4&t=37s

https://www.youtube.com/watch?v=wzUC4FUTKII

> Have you compared FoxNews and CNN

or is there more? https://www.youtube.com/watch?v=--TsGaNyr0U&t=1785s


Systematic misinformation follows from motivated reasoning. MSNBC, CNN and Fox all had plenty of it under Trump. MSNBC basically did to Trump what Fox did to Obama.

Plenty of studies have shown that liberals and conservatives can read the same studies, and both come away thinking the data supported their positions. And they were both wrong. Nuance is dead.


I agree with 'Nuance is dead' - it's horrible and I'm not sure how that can be fixed. I started reading Reuters to actually get information. I think CNN did themselves a big disservice by including too many opinion pieces and polemics.

The problem is that the layperson sees experts being wrong all the time. From the doctor who misdiagnosed their cousin, to the epidemiological modelling that turned out incapable of making accurate predictions, to entire fields of apparent bullshit like social psychology. They also see credentialed experts vociferously disagreeing about the truth of important matters. They watch news reports that tell them meat causes cancer and that you should eat red meat to reduce your chances of getting cancer. They read news articles telling them cloth masks will save your life and that cloth masks are useless.

People are bombarded with contradictory expert advice, so it's hardly surprising that they decide experts don't know what the hell they're talking about.


I see what you mean, and I think it's about expectations.

1.) Most of us are trained to have to expect to defer to authority, because that's what today's school teaches, due to the commodification and artificial scarcity of science, and to the division of labor of our profit-seeking economic system.

2.) They expect not to be able to understand things in depth, because capital controls the amount of knowledge workers that are 'allowed' to reach expert status, so as to control the labor supply of knowledge workers.


I expect to have to defer to authority because I recognise that it's infeasible for me personally to accumulate the total sum of human knowledge and experience. It might have been different in Galileo's day.

But they do love to follow "experts" who recommend malaria medication and horse dewormer as a cure for COVID. At the same time, blindly following a different group of people is somehow branded as "independent thinking" and "not being a sheep". Brilliant!

You're unnecessarily narrowing this conversation to focus on a political group you don't like. I'm not interested in that.

Thank you for the heads up, I will always keep that in mind whenever I write another comment.

Let's not forget that you brought politics into this, not me. Let's not forget that I did not talk about who I like at all. Finally, let's not forget that this group, that you (not me) associate with a political side, is performing mass suicide by following TV personalities rather than experts, accepting massive collateral damage to others. Hundreds of thousands are dead, more than died in the Civil War, hundreds of thousands could have been saved had one not taken medical and epidemiological advice from TV personalities.


>>>But they do love to follow “experts“ who recommend malaria medication and horse dewormer as cure against COVID

Not to be confused with the FDA...which approved horse steroids for COVID treatment[1][2] at the same time it was lambasting the population via its Twitter account for taking horse dewormer. This sort of conflicting messaging is again why people are ignoring "experts".

[1] https://www.fda.gov/news-events/press-announcements/coronavi... [2] https://www.zerohedge.com/covid-19/never-say-neigh-fda-lists...


I think I have to apologize: of course you're not sheep but very much independent thinkers if you blindly follow the right guy (TM). How stupid of me.

The epidemiological modeling actually did not turn out all that bad. The expectation of perfect accuracy would be unreasonable to begin with. But the models I have seen clearly stated assumptions and goals. And within those parameters they generally did not do that badly.

You really have to look at the political manipulations of them - like taking a worst-case-scenario model and claiming it was wrong because the situation was not the worst case.


In general, I totally agree with you. I was thinking specifically of Neil Ferguson in the UK, who has a history of loudly promoting the worst possible outcomes based on opaque models that others in the field have described as an unreliable, buggy mess. Yet his data and models were at the forefront of the argument against any relaxation of lockdown or mask-wearing rules (all while he was flagrantly flouting lockdown to conduct an affair). People like that do not give the general public confidence in the reliability of experts.

> for investment advice I ask the feral cat outside

You could do much worse, given monkeys throwing darts outperform average investors.

Not trusting science by default is what makes science have to be trustworthy. Scientists don't "trust" scientists. That's why every paper goes through rigorous questioning before being published - and even then lots of bullshit results still get published. If anything we need to trust science less.

Also, do you trust politicians even when they have over 40 years of experience?


Fair enough, with the investment advice.

As I wrote somewhere else, Science is an algorithm, not data. That algorithm was honed over hundreds of years. That algorithm is the reason that the ideas of some patent clerk in Switzerland were able to dethrone the ideas of one Isaac Newton, who some consider the greatest genius who ever lived. It's not about trusting scientists at all (ideally).

My whole spiel here is that you should ask experts in their field of expertise. Would I take medical advice from a career politician like Hillary Clinton or advice on climate change from Ted Cruz? Absolutely not. Would I take Obama's advice on how to raise money or negotiate a deal with Netflix? Maybe, he seems to be good at that.

Edit: Haha, what's with the downvote? Hillary being worried that I negatively impact her side-career as a chiropractor?


Planes are simple, they almost always land.

Medicine is complex, the track record is inherently muddier. That's why generally it's a good idea to get several independent opinions, for example about an expensive irreversible procedure. And then there are cases where medical experts do something else than what they recommend to their patients, e.g. end of life decisions.

Then there are places like certain social-science subfields, where there is no track record and which are just a disgraceful naked politics fest parasitising the word 'Science' in order to gain unearned legitimacy.

> Study Suggests Medical Errors Now Third Leading Cause of Death in the U.S.

https://www.hopkinsmedicine.org/news/media/releases/study_su...

> It’s not a frequent topic of discussion, but doctors die, too. And they don’t die like the rest of us. What’s unusual about them is not how much treatment they get compared to most Americans, but how little. For all the time they spend fending off the deaths of others, they tend to be fairly serene when faced with death themselves. They know exactly what is going to happen, they know the choices, and they generally have access to any sort of medical care they could want. But they go gently.

https://www.zocalopublicsquare.org/2011/11/30/how-doctors-di...


Sounds like you realize doctors do have some expert knowledge after all.

“believe in science” always makes me think of Nacho Libre

Correct. The author of this article and millions of others prove to us in their titles that they have no clue what science is... They often even personify it! Blind leading the blind.

You should trust the epistemological methods developed by science, not the scientists who use them. That means you understand why scientists believe in their discoveries instead of taking their word for it. But you can still trust that someone's credentials are better than your own and that they are giving you better judgement than your own.

But that isn't what the article is about.

Most (many) people have no (basic but proper) understanding of the scientific method and statistics.

When I hear "trust science" it most times doesn't mean "trust the scientific method but be wary about interpretations of results yielded by it" but instead "fully, unquestioningly trust a specific interpretation of some scientific results" - worse, often a biased, speculative interpretation built on further speculation on top of the scientific results.

As such, "trusting science" because "it's science" is, in many people's vocabulary, not something I can call good. Especially when politicians use the term.


Agreed, the term "trust science" is not what we need, at least as commonly used. It should be "understand the scientific method", which helps put claims into perspective in terms of how much confidence they deserve.

I don’t really even trust the method used by scientists. What I trust is the method used by engineers. When I get on an airplane I’m not trusting science, I’m trusting over a century of rigorous engineering that took planes from being death traps to being almost absurdly safe.

Not trusting the methods used by scientists is pretty natural if you get to see(experience) how the sausage is made.

> But you still can trust someone's credentials are better than your own and that they are giving you a better judgement than your own.

You should also understand that scientists are human and subject to all of human foibles like racism, sexism, etc.

https://en.wikipedia.org/wiki/Piltdown_Man

Not to mention politics, such as when "jewish science/math/etc" was rejected.

You should trust the replicable experiments, not the scientists. Especially so when "science" and politics gets intermixed.

When a "scientist" says "trust me" or when a politician says "trust the scientist I say to trust", that's when you should be the most distrusting.


I wholeheartedly agree, but I also think it's an inevitability of removing religion.

It's not that I think people should be religious but that humans have the following proclivities:

- Mystical thinking

- Defensiveness against ideas that threaten the ego at the individual scale

- Slowness or lag against canonizing new information

- Poor introspection

A current day example of many of those attributes are many of the intellectual liberals of my generation, who are ironically quite conservative in the true sense of the word, just not in the way that most visualize conservatism. Challenge any ideas that they consider to be common sense, especially if they are ordained by authority figures, and they speak as if you have committed blasphemy against the holy church. It doesn't even matter if you agree with them on 99% of points; there's something wrong with you for disagreeing with The Science.

Experts themselves fall prey to these flaws. As someone who is freaking obsessed with nutrition and digestive science, it's baffling how many authority figures ignore information that either doesn't support the establishment or doesn't support a theory that their ego is riding upon. I can't help but conclude that the current axioms around human health are lagging at least 20 years behind where they should be.

This is why I use science as a compass, and why it's not something I simply just believe in. Science isn't even designed to be a system of belief. Belief can only come from within, and society gets closer to the truth faster when there's a confluence of true individual belief, not because everyone believes in a system.

The mistake we've made IMO is the mindset that, by simply dumping knowledge on people, they will make better decisions and be smarter. In other words, the greatest tragedy of our time is in telling people they are smarter than they actually are without requiring any sort of rigor before they can reach that level of "smartness", if you will.

How else would "believe in the science" work if we didn't remove religion and didn't essentially tell people they are smart enough to understand enough science that they can believe in it? We threw science at the masses, from intelligent to mentally-challenged, and they turned into a cargo cult.

In fact, I often find intellectuals to be worse in this regard because the rationalization of cognitive dissonance requires intelligence. Many intellectuals will do anything to defend The Science as the one true way to examine anything, often turning philosophy into some sort of historic curiosity that shouldn't really be taken seriously once you've passed your humanities prerequisites in college. At least Forrest Gump would have the openness to consider the meaning that "Life is a box of chocolates". Some intellectuals would scoff and spend time coming up with reasons to conclude that life is entirely predictable, often not even to consider an idea themselves but to use the other party as a neuron-treadmill.

> Obviously, knee-jerk doubt is no better than knee-jerk trust, but there needs to be a balance,

If the knee-jerk leads to everyone running off a cliff, I fail to see how knee-jerks are "obviously" preferable to essentially doing nothing. There are problems that resolve faster without intervention, or are at least not made worse when people don't do what they think is obvious. It's obvious to many that if you get stabbed that the first step is to pull the knife out, but immediately pulling the knife out can make an already bad situation 100x worse.

Certainly, there must be trust, or else you don't have a society. Trust doesn't come from condescension or having information withheld. That can work for a period of time, but isn't sustainable, just as most despotic dictatorships don't last long in the 21st century (American invasions notwithstanding) in the face of another regime or a revolution. With a globalized world that efficiently propagates information, you can't proselytize The Science without a significant number of people disagreeing for any number of reasons. The only way for The Science to get its way is to suppress the scientific process and to cut individuals off from information channels for their own good.

I'm pessimistic. The Science is too profitable from all angles. Ironically, it's becoming the next step in religiosity. We went from many gods with specific roles to believing in one more amorphous and omnipresent God to an even more abstract system that doesn't specifically describe any universal truth. When the people know too much, religiosity has to become murkier in order to survive. Science turned out to be a superior host for religiosity because, the more research you read, the more you realize you don't know; black holes and white holes of knowledge can be manufactured at will.

Skepticism has become a tainted idea, in my eyes, and too often becomes "The Skepticism", but it is the only way forward that I think there may be, at least for individuals. They have to understand that skepticism doesn't mean knee-jerk disbelieving in everything but that gnosis is arrived at from both questioning and faith, or not having a position in an idea until a position is justified.

We have to choose to teach this as a virtue to our children. But will we do so?


I feel like this study is missing the forest for the trees. Which section of society has the highest trust in scientists? Probably it's other scientists, closely followed by science journalists. Yet this article is set in the paradigm in which people are divided into "the public" and "scientists", with the latter being universally correct and the former in need of education by the latter.

Over the past few years I've read a lot of scientific papers outside my normal field of reading (computer science/cryptography). My guess is over 100 in the past two years alone, or more if you count scan reading or checking only particular sections to follow up on a citation. The overwhelming impression this left is that many research fields are actually overrun with pseudo-science and that the institutions cannot detect this (or can but prefer not to).

This new study is especially ironic because social psychology is one of the biggest offenders when it comes to spreading misinformation. The entire field is rightly seen as junk due to the prevalence of non-replicable studies on 35 undergrads being generalized to the entire world, which are often then used to demand major political change. This particular study probably will replicate, because the claim it's making is so obvious as to be near tautological, but that is hardly better.

A good example of academic pseudo-science is claims about bots that subtly manipulate people's politics via Twitter. There are around 10,000 published papers on this phenomenon, which actually does not exist at all [1]. Claims it does are mostly derived from an academic "bot detector" that is claimed to be 95% accurate, but which actually has FP rates of around 30%-40% i.e. it looks and sounds like science, but is actually pseudo-science. The people writing these papers know their claims are junk (for example they almost never give lists of claimed bot accounts), but don't care because it lets them manipulate governments. California and Germany have both passed laws based on these claims.
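The gap between "95% accurate" and a 30-40% false positive rate is the classic base-rate problem. A minimal sketch of the arithmetic; all numbers below are illustrative assumptions, not the actual detector's figures:

```python
# Why a classifier advertised as "95% accurate" can still be mostly wrong.
# Assumed for illustration: 5% of accounts are bots, the detector flags
# 95% of bots, but also falsely flags 30% of humans.

bot_rate = 0.05        # prevalence: P(bot)
sensitivity = 0.95     # P(flagged | bot)
false_positive = 0.30  # P(flagged | human)

true_pos = bot_rate * sensitivity            # bots correctly flagged
false_pos = (1 - bot_rate) * false_positive  # humans wrongly flagged

# Bayes' rule: of all flagged accounts, what share are actually bots?
precision = true_pos / (true_pos + false_pos)
print(f"Share of flagged 'bots' that are real bots: {precision:.0%}")  # 14%
```

With these assumptions, roughly six out of seven accounts the detector calls "bots" are humans, even though the per-class accuracy figures sound impressive.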

Moreover, the study claims to have relevance to "COVID-19 misinformation". The formal COVID literature is a disaster zone. It's absolutely overrun with pseudo-science to an unbelievable degree. If you'd tried to tell me about the scale of the problem two years ago I'd never have believed you, so I won't be surprised if you don't believe me now. The sheer extent of pseudo-science in public health is a bit like the Matrix. Nobody can be told about it: really, you have to see it for yourself. Nonetheless, here are a few examples.

In [2] the very first sentence of the paper is a claim about public statistics that's factually false. Nobody seems to have noticed this despite that the entire paper is premised on the false claim, comes from Imperial College London, has 19 authors and was peer reviewed and then published in Nature. Having started with misinformation the paper goes downhill from there and makes several other pseudo-scientific claims.

In [3] the same team presents a model that is used to "prove" that lockdowns saved 3.1 million lives. The methodology is circular: the model by construction could not conclude anything else because it took as axiomatic that only government interventions can end epidemics (i.e. it assumed all epidemics have a single wave). The authors know that isn't true: the paper itself explains that it is "illustrative only" and that "in reality even in the absence of government interventions we would expect Rt to decrease and therefore [we] would overestimate deaths in the no-intervention model", i.e. the authors admit that it's a fictional illustration of a world that isn't real and that its conclusions are wrong by an uncharacterized amount. Yet scientists don't care: this pseudo-scientific paper has been cited over 1300 times.

In [4] the authors claim that that model is "validated", which they define as: the model outputs roughly match the results from other models, not reality. This is a circular, pseudo-scientific definition of validity.

It's not hugely surprising that this stuff happens. There are strong social conventions in science that you always assume good faith on the part of other scientists, often to an absurd degree. In [5] an obviously Photoshopped/airbrushed image got peer reviewed and published in a medical journal. When someone pointed it out, the authors claimed that the patient merely wore the same shirt six months later, and the journal accepted this explanation! Anyone with their brain switched on can see it's the same image, but until the journal was mocked and embarrassed in public "science" was defending an obvious fraud.

So before social psychologists start lecturing the public on how they should be better at detecting pseudo-science, how about they clean up their own house first? Once academics are reliably detecting and flagging pseudo-scientific papers in their own journals, then and only then should they expound on the dangers of misinformation.

[1] https://blog.plan99.net/fake-science-part-ii-bots-that-are-n...

[2] https://dailysceptic.org/2021/08/24/examining-the-latest-pap...

[3] https://nicholaslewis.org/did-lockdowns-really-save-3-millio...

[4] https://github.com/ptti/ptti/blob/master/README.md

[5] https://scienceintegritydigest.com/2019/11/11/photoshop-as-a...


Science, in political hands, has become a religion, a sledge hammer, a means of dividing populations in order to get votes, and more. When I was a kid my late grandfather always said: Never believe politics or religion, at the core, they are the same, dirty, manipulative, dishonest extremes of the human experience.

When politicians use "believe science" there's also the reality that some (most?) don't actually practice what they preach. This week our Vice President handed anti-vaxxers and vaccine doubters some of the most powerful ammunition they could have ever hoped for.

For those who don't know, Harris was scheduled for an in-person interview at a popular TV show in the US called "The View". Moments before she was to join the hosts live on set, two of the four hosts tested positive for COVID-19.

What did they do? Believe the science?

No, of course not. The two hosts were removed from the set. Harris conducted the interview remotely from another room in the building.

They squandered an opportunity to "believe science". She is vaccinated. Everyone in her circle is vaccinated. Everyone in the TV studio is vaccinated. And yet, she, effectively, ran away and hid.

Believe science? Any doubter watching this can rightly ask: You ask us to believe what you say, get vaccinated, and you don't even trust the vaccine yourself? And masks? You tell us to use masks, you are vaccinated and you didn't even trust the science enough to put on a mask and hold the interview?

My entire family is vaccinated. I have no problem at all walking into a location where someone tests positive. At all. For me it isn't about belief --that's religious-- it's about intellectually understanding what's going on and, yes, trusting the system, methods and scientific process to deliver scientific truth to the extent possible.

This is where these politicians come off as the fake manipulators they actually are, all of them. Every time I hear one of them use "believe the science" as some high and mighty pedestal on which they stand I want to barf.


The key fact that's missing from your argument is that no vaccine is 100% effective, and that efficacy relies in part on herd immunity.

For example, several years ago, there were several instances of outbreaks of measles because the percentage of vaccinated individuals dropped below the threshold required to maintain herd immunity.
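That threshold comes from the textbook SIR approximation: roughly a fraction 1 - 1/R0 of the population must be immune. A minimal sketch; the R0 values and vaccine efficacy below are illustrative assumptions, not clinical data:

```python
# Textbook herd-immunity threshold from the simple SIR model:
# a fraction 1 - 1/R0 of the population must be immune.
# R0 and efficacy values below are illustrative assumptions.

def herd_immunity_threshold(r0, vaccine_efficacy=1.0):
    """Vaccination coverage needed so effective immunity reaches 1 - 1/R0."""
    return (1 - 1 / r0) / vaccine_efficacy

# Measles is highly contagious (R0 around 15), so coverage must stay very
# high; even a small dip below the threshold opens the door to outbreaks.
print(f"R0=15, perfect vaccine: {herd_immunity_threshold(15):.0%}")           # 93%
print(f"R0=5, 90%-effective vaccine: {herd_immunity_threshold(5, 0.9):.0%}")  # 89%
```

Note the second case: with an imperfect vaccine the required coverage is higher than 1 - 1/R0 itself, which is why individual vaccination still depends on everyone else's.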


At >90% effectiveness I have no issue at all being in a group of positive-testing people. Herd immunity isn’t a problem for me and others. It’s a problem for those who are not vaccinated. If they were smart they’d get vaccinated and we could put this behind us. The only reason I have to put on a mask when going to the store or flying is because of those who are not vaccinated.

What about mutations?

Sure. Evolution is a heartless beast. It does not care. It just happens. What a lesson for evolution deniers this pandemic has been!

There is no such thing as a life with a 100% safety guarantee. Just a couple of weeks ago two kids from our local high school got killed racing cars on the street. Nobody can predict or guarantee outcomes in life. You take reasonable precautions; not racing on the street and getting vaccinated are two such precautions. Everything else is up to cosmic randomness, Jean-Luc Picard or the Flying Spaghetti Monster, take your pick.


Also, news media making blanket statements like "increased 400%" and never telling you the underlying numbers is highly misleading. Was it 1 to 5? That is a lot different from 1000 to 5000.
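The point is easy to demonstrate; a minimal sketch using those same hypothetical numbers:

```python
# A "400% increase" is the same relative change whether the base is 1 or
# 1000; the absolute numbers differ enormously.

def pct_increase(before, after):
    """Relative change in percent."""
    return (after - before) / before * 100

assert pct_increase(1, 5) == 400.0
assert pct_increase(1000, 5000) == 400.0
print("1 -> 5 adds", 5 - 1, "cases; 1000 -> 5000 adds", 5000 - 1000, "cases")
```

Same headline percentage, three orders of magnitude apart in actual impact, which is exactly what a headline without the base numbers hides.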

It's great to see a lot of comments with an awareness that many in today's society aren't verifying (or able to verify?) the science they come across.

Yet I'm not seeing much talk yet of theories and frameworks that take this reality as a starting point. That takes this widespread scientific non-competence as a starting point to then chart a path forward to a competent society.

In other words, what are some strategies to support and guide people into becoming competent? How do we prevent this non-competence from showing up in the next generations?

As someone with a shoddy base when it comes to science, I often miss the societal acknowledgement that 1.) it is hard to find patient, compassionate teachers and suitable material if you didn't do well early on (e.g. my sister did, I didn't), and 2.) catching up means both a.) untangling existing miseducation (a shoddy, neglected base) and b.) allowing enough time and space for the right scientific concepts to fall into place (i.e. for competence or 'mastery' to start occurring). In other words: many of you here agree that people aren't able to derive or verify their own scientific explorations, so what are the best learning frameworks for such adults to re-base their scientific/critical thinking skills?

As I said before, I currently have quite a shoddy base (to my great sorrow), how and where do i start remedying this? How do I rebuild my base, starting from where I am at now? Who has done this before? Who has achieved this? What was their path like?

Personally, I currently believe the big intellectual 'gap' between the intellectually competent and the less-than-competent is a combination of 1.) an artificial scarcity of learning material and opportunities caused by the profit motive in academia/the sciences, together with the criminalization of cooperation through intellectual property laws (the propertied class enclosing and monopolizing knowledge) [1], and 2.) the knock-on effect this has on the non-propertied class (those who do not own the black-box (intellectual) property they create, and who receive wages to work on this property for a firm), as they are forced to focus on their own skillset and position [2] over fostering widespread peer learning (e.g. passing on their knowledge) and the shared building and emergence of a rich network/web of collective intelligence [3].

Added on top of this brew are the psychological effects on society of a small group of people who believe they are innately intellectually superior (because they have been told throughout their lives that they are, and have been put on a pedestal: 'rewarded' [4]), while living and working in today's black-box society. That small group does not believe it is possible to get a 'second chance' at building a scientific base, because they unconsciously believe that the current economic system (the bourgeois property system and its profit motive) hands out work to the 'right' people, every time. [4] Essentially, they have come to completely attach their identity and self-worth to their professional and technical strengths/skills. [5]

In my experience there is unfortunately no talking with most of these people. Until they personally witness the miraculous intellectual growth of another human being who emerged from a place different than they have, or in an area they didn't expect, their ego will protect them. This ego-protection is not necessarily a bad thing in our current dog-eat-dog world, yet it does mean that a new, groundbreaking, compassionate learning framework is unlikely to emerge from their circles.

So to summarize, and to spell out what the above reality means for me: to today's society I am 'not worth spending time nurturing' because I am seen as a 'very late bloomer' (worst framing: I am a lost cause because I was lazy and supposedly didn't 'get with the program' early enough [6]). In many areas I do not yet have the competences to actively contribute to a specific scientific or technological endeavor, yet I also have very few places to turn to in order to properly, sustainably and affordably 'skill up'. Both 1.) the curse of knowledge [7] and 2.) the lack of time and patience of many existing knowledge workers, due to the stress they experience, mean that finding allies for growth is nearly impossible. To conclude, and to emphasize my aforementioned arguments: the underlying challenge seems to me to be how we remove the profit motive from science and from technological research and development.

"The profit motive, in conjunction with competition among capitalists, is responsible for an instability in the accumulation and utilization of capital which leads to increasingly severe depressions. Unlimited competition leads to a huge waste of labor, and to that crippling of the social consciousness of individuals which I mentioned before.

This crippling of individuals I consider the worst evil of capitalism. Our whole educational system suffers from this evil. An exaggerated competitive attitude is inculcated into the student, who is trained to worship acquisitive success as a preparation for his future career.

I am convinced there is only one way to eliminate these grave evils, namely through the establishment of a socialist economy, accompanied by an educational system which would be oriented toward social goals. In such an economy, the means of production are owned by society itself and are utilized in a planned fashion. A planned economy, which adjusts production to the needs of the community, would distribute the work to be done among all those able to work and would guarantee a livelihood to every man, woman, and child. The education of the individual, in addition to promoting his own innate abilities, would attempt to develop in him a sense of responsibility for his fellow men in place of the glorification of power and success in our present society."

— Albert Einstein, Why Socialism? [8]

So anyway, what beautiful frameworks do what I described above (and earlier) [9]? What will help me become scientifically competent? Who has gone there before (from my starting point); who should I talk to?

-

[1] Marianna Mazzucato, https://web.archive.org/web/20160204223931/https://nybooks.c...

[2] Wendy Liu, Gatekeeping in the tech industry: https://dellsystem.me/posts/fragments-50

[3] Aaron Swartz, Guerilla Open Access Manifesto: https://archive.org/stream/GuerillaOpenAccessManifesto/Goamj...

[4] Alfie Kohn, Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A's, Praise and Other Bribes: https://www.goodreads.com/book/show/541132.Punished_by_Rewar...

[5] Discussion on 'The Role of Jargon in Left Politics?': https://www.reddit.com/r/CriticalTheory/comments/l49hci/the_...

[6] Devon Price, Laziness Does Not Exist - But unseen barriers do: https://archive.is/s4Dn0 / https://humanparts.medium.com/laziness-does-not-exist-3af27e...

[7] Wikipedia, Curse of knowledge: https://en.m.wikipedia.org/wiki/Curse_of_knowledge

[8] Albert Einstein, Why Socialism?: https://monthlyreview.org/2009/05/01/why-socialism/

[9] Me on a new distributed rhizomatic education system: https://news.ycombinator.com/item?id=25753856


Two short comments.

(1) There is no such thing as "artificial scarcity of learning material"; there have never been better times in that regard. Just look at the many free course programs (Coursera, etc.), and at the complete (sometimes not quite legal) availability of many textbooks and papers online (Sci-Hub comes to mind). You want to learn something? Never in the history of mankind was there a time with better access.

(2) Have you been to a socialist country? Have you visited Poland in the 80s, or Yugoslavia, etc.? I don't know where you currently live, but it's most likely a huge step down in quality of life for the vast majority of people.


> There is no such thing as "artificial scarcity of learning material"

There is when someone (like me) isn't methodologically literate or has a learning framework with which to compare, structure and integrate what they cover and will cover (i.e. a system for framing the clear-ish paths/lessons, that shows the overarching path forward, that helps set and manage expectations (with the help of more experienced guides/peers), and that gives clarity on what could be gained). After falling through the cracks of the Hunger Games-esque capitalist education system, I am not offered a second chance. Ours is a 'zero second chance' society.

So yes, maybe to you it seems like "there were never better times", yet that is not my reality. There are few bridges from where I'm located.

This is what Denise-Marie Ordway, author of the article linked to on this thread, says about it: "The researchers note that it’s tough for lay audiences to fully understand complex topics [...] They suggest a more sustainable solution for curbing misinformation is helping the public develop a type of scientific literacy known as methodological literacy. People who understand scientific methods and research designs can better evaluate claims about science and research, they explain."

> Have you been to a socialist country?

Socialism isn't a project for building socialist 'nations'. Capitalist borders only exist for the propertied class to control their capital. Socialism aims to build up and connect the exploited and alienated working class globally.

I'm living in a capitalist hellscape and I want to abolish Silicon Valley.

https://tribunemag.co.uk/2019/01/abolish-silicon-valley


> or has a learning framework

or doesn't have* a learning framework


(1) So when were there better times in your reality? Again, I maintain that never in the history of mankind did anyone have better access to both material (books, articles) and curricula (e.g. from open courses, MIT OpenCourseWare, Coursera, Udemy, etc.). It's all available; it's up to you to use it.

(2) >Socialism isn't a project for building socialist 'nations'.

It very much is, though.

>Capitalist borders only exist for the propertied class to control their capital.

Socialist borders, on the other hand, exist to keep their own people in, with minefields, guard towers, automatic firing devices, etc. We tried socialism; it doesn't work, and the outcome is often dehumanizing and horrible. I suggest you visit a museum of the former border between East and West Germany.


> Science is the belief in the ignorance of experts. When someone says ‘science teaches such and such’, he is using the word incorrectly. Science doesn’t teach it; experience teaches it

-- Richard Feynman

Feynman is an expert on science; so if you believe you should trust experts, you should trust him, and hence distrust experts. It's a self-contradictory position to take.


The main problem is that science and philosophy have gotten muddled. Philosophers like to call themselves scientists today, even though they just write down their thoughts on a subject rather than trying to create new testable theories or test old ones.

In general, scientists tend to agree with each other on most things. Philosophers, however, have little basis in reality and form political camps fighting each other. When scientists disagree, it is about the best methods to move the field forward, not about the current state of the field. If scientists disagree, they go look, or run experiments, and see who was right. If they disagree about the interpretation of an experiment, they are acting as philosophers rather than scientists: an experiment so badly designed that people can't even agree on what it shows wasn't good enough to be science in the first place. But such experiments will still pass peer review from people of the same faction, since to them everything is obviously correct.

Yes, this means that much of today's "science" isn't really science, but instead a ton of philosophers bickering about whose beliefs should be seen as correct. All this bickering generates a lot of papers and citations, but since none of it is rooted in well-designed experiments, it suffers from the same problems the old Greek philosophers did.

