Lindy effect (en.wikipedia.org)
225 points by bushido | 2017-07-30 | 76 comments




I love it when I find an article about something that has crossed the edge of my mind, but which I've never given proper thought to.

What's that term where you learn of a new concept or word and then immediately see it referenced soon after learning it? While rereading Zero to One by Peter Thiel I came across the Lindy effect, and now here's the wiki page on HN.

Baader-Meinhof.


Synchronicity

Bullshit Casserole


> you learn of a new concept or word and then immediately see it referenced soon after

It happens to me all the time. For anyone who hasn't experienced this, I offer this experiment. Pick one or more words from the list below that you don't already know. Repeat each to yourself a few times, read the definition, and invent a sentence using it. Remember, choose a word or words you don't know.

frisson -- shudder of emotion with goose bumps when deeply affected by music

comity -- courtesy between nations for the laws of another

blithe -- happy and not worried; not realizing how bad a situation is

voxel -- 3D analogue to 2D pixel; a portmanteau of "volumetric" and "pixel"

pratfall -- comical fall landing on the buttocks

deus ex machina -- a god introduced into a Greek or Roman play to resolve the plot; for example, in Raiders of the Lost Ark, the hero's problem is solved for him rather than having him solve it himself; the film would end the same way even if Indiana Jones didn't exist: the Nazis open the ark and kill themselves; pronounced day-us-eks-mah-kah-nah

marmite -- large cooking pot having legs and a cover; also a British sandwich spread made from yeast extract

Maybe you can come back to this comment 48 hours from now and tell us if you've heard your chosen word in real life soon after learning it.


I once studied the expected remaining length of a game of chess as a function of moves played: https://chess.stackexchange.com/questions/2506/what-is-the-a... What was interesting is that for the first 20 moves (40 half-moves) the expected remaining length decreases at a near-linear rate, but then it levels off at about 25 remaining moves, and after 45 moves every additional move played _increases_ the expected remaining length.

At the time it surprised me, but of course it is natural to expect long games to be long.
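
For anyone curious, the computation behind a curve like that is straightforward. A minimal sketch in Python, with made-up game lengths standing in for a real database (the function and data are illustrative, not the original analysis):

    # Sketch: expected remaining length of a game, conditional on the number of
    # moves already played. `game_lengths` stands in for total game lengths (in
    # moves) pulled from some database; the toy numbers below are made up.
    def expected_remaining(game_lengths, moves_played):
        survivors = [n for n in game_lengths if n >= moves_played]
        if not survivors:
            return None
        return sum(n - moves_played for n in survivors) / len(survivors)

    game_lengths = [28, 31, 35, 38, 40, 42, 47, 55, 68, 90, 120]  # hypothetical data
    for m in (0, 10, 20, 45, 60):
        print(m, expected_remaining(game_lengths, m))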


The threshold at 45 probably corresponds to endgames where the kings have to walk around the board to take care of pawns.

Not only that, but endgames frequently involve long periods of positional maneuvering that can take dozens of moves before one side realizes an edge, or before it becomes clear that it's heading toward a draw.

The longest possible chess game:

https://www.chess.com/blog/kurtgodden/the-longest-possible-c...

Chess AIs, perhaps needless to say, are very good at computing at the depth necessary to win drawn-out endgames.


How did you control for player level? I imagine novice games show a different pattern than grand master games

Looks like he chose only games where the players had Elo ratings above 2000.


> At the time it surprised me, but of course it is natural to expect long games to be long.

Indeed, which is why I am skeptical of purely anecdotal claims of the Lindy effect, as they may be skewed by survivorship bias. In this case, however, you have the numbers to back up the observation.


The Lindy effect and survivorship bias are one and the same. That's kind of the idea.

Take programming languages for example. If I asked you to bet on 30 different programming languages which would still be in use in 10 years, and all you knew about them was how long they had already been in use, you'd probably correlate your bets to some degree with their age.


No, they're not at all the same thing.

Consider radioactive decay. An isotope has a half-life that remains constant over time. After one half-life, the remaining material of that isotope isn't there because it's "special" or robust or less likely to decay, it just got lucky. Its half-life from this point remains unchanged, so it does not display the Lindy effect.

Similarly, old humans give us plenty of opportunity to talk about survivorship bias, but our lifetimes do not display the Lindy effect -- older humans are not expected to have longer remaining lives than young humans.
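
A quick simulation makes the contrast concrete. This is a minimal sketch: the distributions and parameters below are illustrative choices (an exponential with mean 10 for the memoryless case, a Pareto with shape 1.5 for the Lindy-like case), not anything from the article:

    # Sketch: memoryless vs. Lindy-like survival. The exponential (like radioactive
    # decay) has a constant expected remaining life; a fat-tailed Pareto does not.
    import random

    def mean_remaining(samples, t):
        survivors = [x - t for x in samples if x > t]
        return sum(survivors) / len(survivors) if survivors else float("nan")

    random.seed(0)
    exponential = [random.expovariate(1 / 10) for _ in range(100_000)]      # mean 10
    lindy_like  = [10 * random.paretovariate(1.5) for _ in range(100_000)]  # Pareto, scale 10

    for t in (10, 20, 30, 40):
        print(t, round(mean_remaining(exponential, t), 1),   # stays near 10
                 round(mean_remaining(lindy_like, t), 1))    # grows roughly like 2 * t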


There's a similar effect when waiting for a bus or other public transport. At first, the expected time you'll have to wait decreases as time goes by: if there's a bus every 10 minutes, after waiting 8 minutes you expect one to arrive within a minute, compared to 5 minutes when you started waiting. Stand there longer without a bus arriving, however, and the Lindy effect starts to apply. After 15 minutes without a bus, most likely the bus broke down, but you can expect another within 5 minutes. After 30 minutes, well, maybe the drivers are on strike today, or this bus route got cancelled, or you misremembered the frequency of the bus - either way, expect to keep waiting.

Anyone know of a term for this kind of behaviour? I've never seen it named, though I do recall an article that made the HN front page that demonstrated this effect with the New York subway.
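
To illustrate the shape of that curve, here is a rough sketch in which the total wait is a mixture of scenarios; every probability and interval below is made up purely for illustration:

    # Sketch with made-up numbers: model the total wait W as a mixture of scenarios
    # (normal service, a broken-down bus, a strike/cancelled route) and look at the
    # expected remaining wait E[W - t | W > t] as the minutes tick by.
    import random

    def sample_wait():
        r = random.random()
        if r < 0.95:
            return random.uniform(0, 10)     # normal 10-minute service
        elif r < 0.99:
            return random.uniform(10, 20)    # bus broke down; catch the next one
        else:
            return random.uniform(60, 120)   # strike or cancelled route

    random.seed(1)
    waits = [sample_wait() for _ in range(200_000)]

    for t in (0, 5, 8, 15, 30):
        remaining = [w - t for w in waits if w > t]
        print(t, round(sum(remaining) / len(remaining), 1))  # falls at first, then climbs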


Heuristics? It seems like a rough heuristic way to find a likely path.


Yes, this is it exactly! Even the framing of it as waiting for a bus is given as one of the examples, "Problem 2, The Standard Wait Problem" but the other examples also have similar behaviour.

You seem to be assuming that, given you have waited eight minutes already and the schedule is 10 minutes, half of the remaining period is the expected time until the bus. That seems like a fair heuristic, but I'd personally read the timetable. To get a more exact answer you also need to know the distribution of actual bus arrival times. If they turn up at your stop every 10 minutes on average but with a standard deviation of, say, two minutes, then your heuristic is not the best.

Far more interesting is that two of the buggers will always turn up when you don't actually need one. I can prove that by assertion now and fairly confidently be able to appeal to around 60M Britons for testimony as required.

This is not related to "virality", which is my newly made-up term for the Lindy effect.


You might be interested in queuing theory: https://en.wikipedia.org/wiki/Queueing_theory

Well .. if you take hope and mistakes out of the equation, I think this is the memoryless property?

One type of cause for this behavior is the sunk cost fallacy, or more broadly, escalation of commitment: https://en.wikipedia.org/wiki/Escalation_of_commitment

If buses arrive at regular 10-minute intervals and you arrive at a random time, your wait will average 5 minutes.

However, if the buses arrive independently by a purely random process averaging one every 10 minutes (i.e. a Poisson process), your wait will average 10 minutes.

I always remember this result from probability when waiting for something. It helps me feel less unlucky: of course, I tell myself, my arrival is more likely to land in one of the widely spaced gaps between buses, because those intervals take up more of the timeline.
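
A small simulation backs this up. It is a sketch under the stated assumptions (Poisson arrivals with a 10-minute mean headway, a random observer); the sample sizes are arbitrary:

    # Sketch: simulate Poisson bus arrivals with a 10-minute mean headway, drop an
    # observer in at a random time, and measure the wait. For perfectly regular
    # 10-minute buses the answer would be 5 minutes; here it comes out near 10.
    import bisect
    import random

    random.seed(2)
    mean_headway = 10.0

    arrivals, t = [], 0.0
    for _ in range(200_000):                   # Poisson process: exponential gaps
        t += random.expovariate(1 / mean_headway)
        arrivals.append(t)

    waits = []
    for _ in range(50_000):                    # random observer arrival times
        when = random.uniform(0, arrivals[-2])
        nxt = arrivals[bisect.bisect_right(arrivals, when)]
        waits.append(nxt - when)

    print(round(sum(waits) / len(waits), 2))   # ~10.0: the inspection paradox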



I was thinking about what this had to do with HN and then it hit me: JavaScript is going to live forever, and C will outlast it.

We can also see why it's really difficult to compete with early [surviving] frameworks, since they will last for a really long time.


Bitcoin.

The longer it continues increasing adoption and users (like any network effect) the more useful it will become to more people. Also, the higher the market cap, the more people will be invested in its success. The longer it continues to work as designed, the more people will trust it.


Apple, Microsoft and Intel have dominated the personal computer industry for almost all of its existence, and their relative positions within that market have been remarkably stable. Therefore as every year passes with that still true, the expected future lifetime of that triopoly increases.

I am not sure companies follow this as they can go bankrupt and be gone in a matter of months.

There are many counterexamples in business - Nokia, Blackberry, Sears (the names live on, but as shells of their former selves.) One has to be wary of survivorship bias when looking for the Lindy effect.

Blockchain, perhaps. Bitcoin itself seems to not have gathered sufficient momentum or acceptance to gain any use outside of the use it currently has. Bitcoin is not dying, but is not thriving.

I feel like JavaScript could be in real danger when WebAssembly comes out: any programming language could be compiled to WebAssembly, so there would really be no reason to keep this ugly mess around.

Fortran is forever.

The canonical examples are COBOL and Java ("Java is the new COBOL"). Microsoft has managed to cram C# into the race by sheer effort and expense.

JS got there by happenstance of history.


Plus, C# is a great language. A better Java.

Lisp. Unless you don't count Clojure, Lisp could be expected to persist another 50 years.

Interesting that you mention C, but not Unix. Unix, an operating system invented in the 1970s, is still one of the most used server operating systems today (in its Linux, BSD, Solaris, etc. variations).

In the corporate IT world, there are IBM operating systems, such as z/OS, that are direct descendants of 1960s operating systems.

Also, instruction sets for IBM mainframes and Intel x86 machines have been around for a very long time, and are not likely to go away any time soon.

Not to mention standards like 120 V/60 Hz AC and the shapes of electrical connectors, which have been around even longer and will thus probably survive for a very, very long time.

And going back to software, editors like Emacs and vi have been around so long (both come from the 1970s, and were invented for CRT terminals) that they're likely to keep on being used for a long time to come.


I've had this same idea for quite a while, and it's nice to see it has a proper name. I always wondered whether it makes sense when applied to jobs - e.g. jobs like chef or bartender will be around for a long, long time, while programmer probably not.

I once heard a similar description of the life expectancies of cancer patients, new motorcyclists, and hard drives. There is a high initial mortality rate, but once you get past the hump your expectancy increases with every day, up to a limit.

Does anyone know if there is a different name for those distributions?


Pareto

"Bathtub mortality" perhaps? https://en.wikipedia.org/wiki/Bathtub_curve

Weibull distribution [1]. That's why we used to run our mail-order PCs non-stop for a week to burn them in: if there were faulty components, we wanted them to fail within the 30-day return policy.

[1] https://en.wikipedia.org/wiki/Weibull_distribution
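
For reference, the Weibull hazard rate has a simple closed form, and a shape parameter below 1 is exactly the "infant mortality" regime that burn-in exploits. The parameter values in this sketch are illustrative, not measured component data:

    # Sketch: the Weibull hazard rate h(t) = (k / lam) * (t / lam) ** (k - 1).
    # Shape k < 1 means a falling hazard (infant mortality) -- the regime where
    # burn-in pays off; k > 1 means wear-out; k = 1 is the memoryless case.
    def weibull_hazard(t, k, lam):
        return (k / lam) * (t / lam) ** (k - 1)

    for hours in (1, 24, 24 * 7, 24 * 30):
        print(hours, round(weibull_hazard(hours, k=0.5, lam=1000), 5))  # keeps shrinking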


I'm wondering whether one should generalize that idea to other things we buy, so as to make sure that nothing we buy has faulty components. For instance, if we buy a new car, should we stress it to check whether something breaks while still within the legal warranty period? Of course, one should not stress it so much that it wears out prematurely (as in the end of the bathtub curve).

Do you know of any literature, articles, etc., about this? I'm asking more from a consumer's viewpoint.


> For instance, if we buy a new car, should we stress it to check whether something breaks while still within the legal warranty period? Of course, one should not stress it so much that it wears out prematurely (as in the end of the bathtub curve).

If it is a brand-new car (and depending on the kind of car, too), you probably don't want to do this, at least not immediately.

This is because the engine has not been run much since it left the factory. There is an engine "break in" period (you can read about it in your owner's manual) during which you need to follow the instructions properly, or you can actually cause damage to the engine and decrease its life dramatically.

Essentially, it involves not running the engine under extreme loads or speeds for the first so many miles (500-1000 is normal, I think). After that, you might also do an oil change and some other early maintenance. For certain sports cars or other high-performance vehicles, the procedure can be even stricter.

It basically has to do with the frictional components wearing into each other, with the lubrication carrying away (and the filter capturing) the small bits of metal scraped off. Even with the fine tolerances that engines, transmissions, etc. are machined to, the parts aren't exactly matched, and the wear-in period allows for this (then you change the fluids and filters to remove the contaminants). Running the engine hard during this period puts higher friction and stress on the system, which causes more metal than normal to be removed - essentially wearing the engine more than needed, if that makes sense.

This is also basically the same thing you have to do when you get an engine overhauled or otherwise modified (i.e., new pistons, rings, porting, honing, etc.).


Are we talking about Bitcoin here?

Anyone living in Austin Texas who has seen the disaster of the Mopac Highway "Improvement" Project sees this effect first hand. Construction started in late 2013, originally scheduled to be finished Sept 2015, actual completion "3 to 6 months away" ever since.

I teach the Boston "Big Dig" as it ties the Lindy effect to schedule estimation. Fred Brooks was knocking on the door of this with The Mythical Man-Month: adding developers to a late project only makes it later. But the Lindy effect would say that late projects, in and of themselves, tend to get later still.

Same difference, I guess.


I present to you https://en.wikipedia.org/wiki/Berlin_Brandenburg_Airport: scheduled to open in 2010, the current estimate is 2019. The stories around its delay are sometimes comical.

I teach this as part of an internal developer class; one important thing to note is that it applies to certain classes of non-perishable items - not the books themselves, but the ideas the books contain. We talk about how things like the presentation framework du jour (e.g., many JavaScript frameworks) change rapidly, while tech deeper in the stack (middleware, operating systems, etc.) turns over less frequently. And we ask why some tech survives.

The lesson here is that things that last have developed certain adaptations to make them last. It's always worth studying why some oft-repudiated or outdated tech won't die; it is almost always because it possesses some key attribute that is essential. If you're proposing a new framework, or promoting a new idea, it is essential that you understand why those crufty old incumbents are still around, and see whether your new framework or idea embodies those old adaptations.

I've learned a lot about how flashy surface features (which compete well against new tech at the surface level) can be inferior to tech that embodies what the incumbents did well.


This is a main rule of thumb when I am choosing technology stacks for my clients (I may choose new, experimental ones only for hobby projects/experiments). It's also basically the reason I code in Lisp (Clojure).

Given Wikipedia's standards I am a little surprised that the article is light on the math. One tool to measure this with is the hazard rate.

Say at age x my probability density of dying at that instant is f(x). Now we condition on the obvious fact that I must have lived at least x before dying (counting from 0). The distribution function F(x) is the probability that I die before x. So the conditional probability density of dying at this instant is

    h(x) = f(x) / (1 - F(x))
If this quantity is constant (in other words, independent of x), then I am Peter Pan. I don't age. I will die by some random accident that has no preference over time.

If h(x) is an increasing function of x, then I am more human: I age.

If it's a decreasing function of x, I am probably the Joker: "... makes me stronger". Whenever h(x) is a decreasing function of x, one encounters the Lindy effect.

The Pareto distribution is called out in the article, but anything with a fatter tail than the exponential distribution will suffice. The lifetimes of a database query, a search-engine request, etc. likely all fall into this category. In such cases it is on us engineers to try to make those latencies have an increasing hazard function.
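
As a rough sketch of how one might check this on real latency data, here is a crude binned hazard estimator; the "latencies" below are simulated from a Pareto, and the helper is mine, not a proper survival-analysis tool:

    # Sketch: a crude empirical hazard rate from a sample of lifetimes/latencies,
    #   h(x) ~= P(ends in [x, x + dx)) / (dx * P(still alive at x)).
    # The simulated "query latencies" below are Pareto-distributed, so the hazard
    # should fall with x -- the Lindy effect applied to latency.
    import random

    def empirical_hazard(samples, x, dx=1.0):
        alive = [s for s in samples if s >= x]
        dying = [s for s in alive if s < x + dx]
        return len(dying) / (dx * len(alive)) if alive else float("nan")

    random.seed(3)
    latencies = [5 * random.paretovariate(1.2) for _ in range(100_000)]  # made-up data
    for x in (5, 10, 20, 40):
        print(x, round(empirical_hazard(latencies, x), 4))   # roughly 1.2 / x, decreasing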


Don't forget you can edit Wikipedia!


It probably holds true for startups. The longer the startup has existed, the more has been built - code, user base... so one would expect it to last longer.

I wonder if it also holds true for profitable companies. That would mean that - all other things being equal - a company 10 years old is worth as much as a company that is one year old but makes 10x the profits.


I would expect a bimodal distribution here. The way most startups write code, velocity per developer decreases as the number of lines of code increases. You can throw more developers at it, up to a point, but eventually the codebase becomes worthless and all the value is in the development team's heads.

There is kind of an escape velocity then... Some teams will be able to refactor or replace their way out of this, back to a point where more developer hours equals more features.

Other companies will fail to achieve escape velocity, and the dev team will expand to use all available resources while the application becomes slowly worse relative to competitors.

That process can be stretched out, and money made while the velocity nears zero, but the company will eventually die.

Whether your codebase and development team is an asset or a liability depends on whether you have that escape velocity.


"It probably holds true for startups. The longer the startup existed, the more has been built. Code, userbase... So one would expect it to last longer."

That's only if what they're building is capable of eventually turning a profit. History is full of dead startups that had ample funding and lots of customers, and went on for many years, but could never figure out how to turn a profit. For example:

https://en.wikipedia.org/wiki/Homejoy


The key insight is that you are unlikely to be experiencing the thing at a special time in its life. This is the Copernican principle (which J. Richard Gott uses in his version of this that Wikipedia mentions), which was basically "we (on Earth) are unlikely to occupy a special place in the solar system -- it's much more likely that some other object is the center."

Gott says you can be 95% confident that you're experiencing the thing in the middle 95% of its life. Let's say x is its life so far. If x is 2.5% of its eventual life (one extreme of the middle 95%), then the thing still has 39x to go. If you're at 97.5% (the other extreme), then the thing only has x/39 left. So the 95% confidence interval for the remaining life is between x/39 and 39x.
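
In code, that arithmetic looks like this (a minimal sketch; the function name and example age are mine):

    # Sketch of Gott's delta-t argument: if the elapsed fraction f = x / T of the
    # total lifetime T is uniform on (0, 1), then with 95% confidence f lies in
    # (0.025, 0.975), so the remaining time T - x = x * (1 - f) / f lies between
    # x / 39 and 39 * x.
    def gott_interval(age_so_far, confidence=0.95):
        lo_f = (1 - confidence) / 2   # e.g. 0.025
        hi_f = 1 - lo_f               # e.g. 0.975
        return age_so_far * (1 - hi_f) / hi_f, age_so_far * (1 - lo_f) / lo_f

    print(gott_interval(50))   # ~ (1.28, 1950.0): a 50-year-old thing has 1.3 to 1950 years left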

Of course, 5% of the time you actually are experiencing something at the very beginning or very end of its life (outside the middle 95%), which is a unique thing. But that's why it's a confidence interval < 100% :)

I prefer this form of the principle a lot more than "the expected life is equal to 2X, always."

Side note: I took J. Richard Gott's class in college called The Universe. Maybe not the best use of a credit in hindsight, but we studied some really interesting things like this.


And the real fun is when you apply this to humanity itself: https://en.m.wikipedia.org/wiki/Doomsday_argument

Lots of interesting stuff in there. The problem I have with naive versions of this is that they assume that, as random people, we don't live in a special time in human history; but if you look at human history so far, the current era is both extremely short and spectacularly atypical in almost every conceivable way. It is also a period of still very rapid change. It's hard to get my head around what that means for estimating future trends or outcomes.

Funny thing with exponential curves, no matter where on the curve you are, everyone behind you seems mind-numbingly slow and everyone ahead seems mind-bogglingly fast.

This always confused me - people talk about an exponential explosion, but the rate of change of e^x is e^x, so there is no actual 'knee' in the curve with a huge speedup afterwards...

or, if you prefer, every point is a 'knee' in the curve with a huge speedup afterward
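
A tiny numerical check of that claim (the sample points are arbitrary):

    # Sketch: the exponential is self-similar, so the "speedup just ahead" relative
    # to the "speedup just behind" is the same everywhere -- no point is special.
    import math

    for x in (0.0, 5.0, 10.0, 50.0):
        ahead = math.exp(x + 1) - math.exp(x)
        behind = math.exp(x) - math.exp(x - 1)
        print(x, round(ahead / behind, 4))   # always e, about 2.7183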

Except in reality, when samples are infrequent and there is significant 'noise' in the data, it can be far from clear what shape the graph is. This is usually particularly true in the early stages of the development of a trend. How do you know you're in the early stages? You don't, but it's a big mistake to think that trends which eventually turn out to be exponential must therefore be obviously so at all times.

You're betraying an extremely, and I mean extraordinarily, laughably modernistic world view. We were making Acheulean hand axes for well over a million years, and technological development during that period really didn't look very exponential. How do you measure change when literally nothing new is developed for tens of thousands of generations?

If you only look at development since agriculture then yes, but that's less than one percent of the time since the taming of fire.


I wonder if this applies to human relationships as well; friendship, love, employment, etc.

I'm trying to figure out whether this can live peacefully with the Red Queen Hypothesis:

https://en.wikipedia.org/wiki/Red_Queen_hypothesis

Basically, from Van Valen's data, species have a constant chance of going extinct, regardless of how long they've been around. His hypothesis is that even though speciation appears to be a discrete event, these species are constantly jockeying with each other for survival in a dynamic environment -- they have to run faster and faster to stay in the same place, and are always subject to falling out of the race, as it were.

The devil's in the details, of course, but it's intriguing to jam these two ideas together.


Now imagine what the Lindy effect implies when technology becomes available that undoes the damage caused by the mechanisms called 'aging'.
