Well, I personally haven't consented to socializing the cost of anything. I'm not sure what the ethical grounds are for imposing this on me, and in so doing diluting the accuracy of available data about people.
These feedback loops you're referring to, what order are you proposing for your chicken and your egg? What's causing what in your view?
This is one reason the field of behavioral economics is important to society: there are things that an individual or company would never publicly admit to doing — but which they're fine admitting to under NDA, to be used as a data point in an anonymized dataset for academic research.
So, while no individual company will tell you that everyone's using dark patterns or hiring their friends, behavioral economists can put forth evidence-backed arguments that this is the case — and save you the trouble of chasing the carrot.
Surely a lot of these “behaviors observed after X hours of Y” studies are colored by the researchers' own views, and broader social opinion, on what “normal” is.
I don’t disagree on the monetization part, but daily life is an implicit game of risk avoidance. We are cognitively tuned to play out that simulation.
My hesitancy is with the social belief that we all must be on board with playing “the real world” simulation as dictated by traditional political beliefs, which heavily influence which studies get funded.
Maybe utilitarian day jobs aren’t the only busy work we should expect of each other.
Frankly, as a social scene, I’d rather people argue over D&D rules than over how much profit they can make if more people go hungry or die rather than receive the insurance benefits they paid for.
Perhaps the behavioral-economics math we use to advertise and market tribal belief in our team’s superior product or service should be set aside, letting folks navigate the sim as they wish, with real economic activity adjusting to satisfy that?
Social norms have always followed technology. Maybe the perspectives we apply are no longer correct in this contemporary time.
Articles like this just go to show how little acceptance there still is for behavioral economics data in mainstream policy making.
Just a few years ago, data from lab experiments was not widely accepted because people were unsure whether it reflected how people act in the real world. Since then, the effectiveness of lab experiments has been demonstrated countless times.
I am looking forward to the day when people finally accept that traditional economic models are not perfect (duh!) and that people are not perfect, rational, decision making robots.
I find it odd that economists generally ignore the field of human psychology / behavior in their assessments.
Psychology, especially social psychology, is an epistemological disaster area[1][2] and everyone would be better off ignoring its "findings", not just economists.
So you admit that you would change your behavior in response to such a policy. That's my point: People change their behaviors in response to changed incentives.
Now, since society is a very dynamic system, it is extremely difficult to predict the consequences of such changes in dynamics, or even to ascribe causation afterwards. That, however, is not an argument to throw caution to the wind.
> I'm asking you to trust other people to make decisions as good as you would make, rather than making assumptions on their behalf.
I find that very hard to do, as I have multiple hard examples of people making poorer decisions than me. Mostly they consist of people asking for my advice, then not following it, then failing.
By the way, by the same measure, I am guilty of making poorer choices than other people as well, it's not a linear ranking.
Necessary caveat on the current state of irreproducibility and outright fraud in behavioural economics. See, for example, the thread a few days back about Dan Ariely:
Most of these experiments come out of the US where there is overwhelming pressure by society that your value as a human is how well you can sell yourself. This is especially true as more and more people are priced out of broader economic participation by how high the cost of living is vs available income opportunity.
The messy, volatile human element is the product of the calculation process. Let's say you want to know whether consumers should spend $20 at Chuck E. Cheese or $200 at Walt Disney World. You can let them spend their money and see: some will be happy with their choices and make the same choice next time, some will be regretful and choose differently, and some will be on the margin, concluding they are regretful or satisfied depending on psychological and social factors.
Now imagine that you simulated all these people and determined that 70% of them would prefer Chuck E. Cheese and 30% would prefer Disney. What do you do with this information?
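To make that concrete, here's a toy Monte Carlo sketch of such a simulation. The 70% preference probability, agent count, and everything else here are invented for illustration, not taken from any real study:

```python
import random

random.seed(42)

# Hypothetical parameters, invented for illustration: the probability that a
# simulated consumer ends up preferring the $20 option over the $200 one.
P_PREFERS_CHEAP = 0.7
N_AGENTS = 10_000

prefers_cheap = sum(random.random() < P_PREFERS_CHEAP for _ in range(N_AGENTS))
share = prefers_cheap / N_AGENTS
print(f"{share:.0%} of simulated consumers prefer the $20 option")
```

With enough simulated agents the observed share converges to the assumed probability — which is exactly the point: the simulation can only hand back the preference distribution you fed it, and the question of what to do with that number remains open.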
There's a danger here, of which I imagine they're aware - higher-order effects are very hard to estimate in chance and magnitude, or in how your actions specifically contributed to them. This makes them perfect for justifying whatever you want, intentionally or accidentally. Overemphasize some positive second-order effects, fail to notice some negative second-order effects, and suddenly your first-order selfish choice looks like a charity. Societies and markets are dynamic systems, so predicting outcomes is less like predicting the path of a rocket from Newton's laws, and more like predicting the weather.
It’s the model most practise, except I do so knowingly. I’m not against information sharing; it’s just that I do so in a parochially altruistic manner.
Besides we all know everyone will gladly partake in a negative sum game if it means they come out better. I just do that knowingly. That is, if other participants have x and I have y, I will gladly take actions that lead to long-term effects of me having y+delta and them having x-epsilon even if delta < n times epsilon. Everyone gladly does this. So I don’t have a zero sum view of the world but that’s irrelevant to this because the sum of human utility is less important than the utility to my parochial group.
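As a toy calculation (all numbers invented for illustration), the arithmetic of that negative-sum trade looks like this:

```python
# All numbers invented for illustration.
n = 3          # number of other participants
delta = 1.0    # my long-term gain
epsilon = 0.5  # each other participant's long-term loss

my_change = delta            # +1.0: I come out ahead
group_change = -n * epsilon  # -1.5: the group as a whole loses more
total_change = my_change + group_change

assert delta < n * epsilon   # the game is negative-sum overall...
assert my_change > 0         # ...yet it still pays off for me individually
print(total_change)          # -0.5
```

Total utility drops, but the individual payoff is positive — which is why, as the comment says, weighting your own (or your group's) utility above the sum makes the trade attractive.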
A practical example is that I will often share stuff I know better than others but I won’t try too hard to convince them.
I think the key difference is whether the distortion is intentional or just incidental. All things being equal, we'd prefer fewer incidental distortions of smaller impact.
As for intentional distortions, I think the idea goes that they're all harmful in some way (some more than others) but that the benefit of behavioural change can outweigh the cost. I don't think that would be a terribly controversial statement among economists.
Even your example doesn’t seem that pathological if you assume that the individuals are actually pursuing their preferences rather than deliberately trying to inflate their group GDP.
I'm arguing that lamenting "mechanization" of social interaction - whether between corporate employee and client as in this example, or even between individuals - is akin to appealing to ether as a virtue of the universe.
If interactions between actors can be measured (which they can to some level of specificity in certain contexts), and the desires of the actors can be understood to such a degree (which we're starting to be able to do), then we can model actions which increase or reduce the likelihood of desired outcome, and following from that can produce and optimize decision support systems that nudge users toward some mutually beneficial state that may have otherwise been opaque to both actors.
People seem to hate this idea because it basically puts hard determinism right in their face - in that your past behaviors, if known well enough, should be predictive of future actions, ceteris paribus.
It's often argued that such granularity of measurement in social dimensions is technically impossible, or that the fact of being measured changes people's behavior. I wouldn't dispute either point; only that the measurement need not be perfect to improve the overall optimization of the system.
So instead of saying "yes, let's use measurement to optimize our system of interactions across commerce and relations," people bristle at the mere concept of social engineering in the Popperian sense of the term, because it feels restrictive of our sense of "free will." I argue the opposite: doing so would simply make us more aware of our predilections and much more likely to be able to align them across groups.
I remember being annoyed on the 2nd day of Econ class when the prof mentioned that empirically we know human preference graphs can be cyclic but we have to pretend that's not true for any micro to work at all.
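The classic illustration is an intransitive cycle like A > B, B > C, C > A. A brute-force check (illustrative code, not from any textbook) shows why such a cycle breaks standard micro: no ranking, and hence no utility function, can represent it:

```python
from itertools import permutations

def has_consistent_utility(prefers, items):
    """A utility function exists iff some strict ranking of the items
    agrees with every stated pairwise preference (a preferred to b)."""
    for ranking in permutations(items):
        rank = {item: i for i, item in enumerate(ranking)}  # lower index = better
        if all(rank[a] < rank[b] for a, b in prefers):
            return True
    return False

# Transitive preferences admit a utility function...
print(has_consistent_utility({("A", "B"), ("B", "C")}, ["A", "B", "C"]))  # True
# ...but a cycle (A > B, B > C, C > A) does not.
print(has_consistent_utility({("A", "B"), ("B", "C"), ("C", "A")},
                             ["A", "B", "C"]))  # False
```

The cycle would require rank(A) < rank(B) < rank(C) < rank(A), which no ordering can satisfy — hence the "pretend it's not true" assumption the professor mentioned.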
Dan Ariely and Francesca Gino were two of the most well-known behavioral economists. Hell, Ariely even published a board game about it (as well as a bunch of popular books). They've both been accused of data manipulation this year.
My friends in the field are worried that the whole field will be tainted. It grew out of a controversial idea - that despite what past models demand, people don't consistently behave in a rational way. If the two biggest practitioners of a new discipline are outed as frauds, what does that do to the reputation of the discipline as a whole? Will people be skeptical of any behavioral hypothesis that bucks tradition because Ariely was accused of fraud?
The "economically rational actor" thing was debunked long ago (see Kahneman and Tversky). There are much better ways to get people to behave the way you want than appealing to their statistical skills.
Psychology definitely plays a role in biasing how people interpret things they see and their priorities, but I think much of this is better understood through an economic lens.
If you consider the actions you take to preserve your privacy, it's a strategy you developed over time. And no one can claim to have a formula for devising the perfect strategy when there are non-trivial unknowns. This isn't just common, but normal in economics.
An economic actor can a. estimate the potential costs and benefits, b. observe other actors' strategy and outcomes but must ultimately c. execute their own strategy.
None of these are fully rational, and you have scarce resources (time & money, ability to survey the problem, limited exposure to the actions and consequences of other actors) to allocate to (a) and (b).
If everything works, you stick to your strategy. If you get burned, you adjust your strategy in response. (Though a strategy may be "eat a cost less than $X.")
And if you observe it working for others or others getting burned, you might also adjust your strategy. This does lead to a natural selection of successful strategies; actors are "eventually optimal."
There are also robustness costs: many heuristics that get poked at by champions of behavioral economics are behaviors that tend to give improved results when reasoning over noisy inputs (in particular, inputs with unknown levels of noise or even malicious distortion), compared to a formal decision theory that handles low-quality inputs poorly.
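A toy simulation of that point (all payoffs, probabilities, and noise levels here are made up): an agent that naively maximizes expected value using a noisy probability estimate can underperform a crude "always take the sure thing" heuristic that ignores the estimate entirely:

```python
import random

random.seed(0)

# All payoffs and noise levels are invented for illustration.
SAFE_PAYOFF = 1.0    # the sure thing
RISKY_PAYOFF = 3.0   # the gamble pays this...
TRUE_P = 0.3         # ...with this true probability (true EV = 0.9 < 1.0)
NOISE_SD = 0.3       # noise on the agent's probability estimate
TRIALS = 100_000

ev_total = heuristic_total = 0.0
for _ in range(TRIALS):
    # The EV maximizer only sees a noisy, clipped estimate of TRUE_P.
    p_hat = min(1.0, max(0.0, random.gauss(TRUE_P, NOISE_SD)))
    win = random.random() < TRUE_P
    if p_hat * RISKY_PAYOFF > SAFE_PAYOFF:   # naive expected-value rule
        ev_total += RISKY_PAYOFF if win else 0.0
    else:
        ev_total += SAFE_PAYOFF
    heuristic_total += SAFE_PAYOFF           # heuristic: ignore the estimate

print(f"EV maximizer avg payoff: {ev_total / TRIALS:.3f}")        # below 1.0
print(f"Heuristic avg payoff:    {heuristic_total / TRIALS:.3f}")  # exactly 1.0
```

Because the true EV of the gamble is below the sure thing, every noise-induced decision to gamble costs the "rational" agent money, while the heuristic's indifference to the unreliable estimate protects it.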
These limitations are only a concern when people attempt to suggest that behavioral economics is telling us to rework how we live our lives. Generally such claims are unjustified extrapolations from very basic studies which in no way support the pop-economics advice. It's as if someone ran experiments and determined that submerging people in water often killed the test subjects, and then other people ran to the presses recommending that no one drink anymore because water was scientifically proven to kill... :)