I have a coworker who describes everything in a very anthropomorphic and casual way, and their code is excessively imperative: everything is accomplished through conditionals rather than by designing the code in such a way that functions only run on data structures that they support.
I would share this with him, but I imagine it would go completely over his head.
> The implied abstraction, in which time has disappeared from the picture, is however beyond the computing scientist imbued with the operational approach that the anthropomorphic metaphor induces. In a very real and tragic sense he has a mental block: his anthropomorphic thinking erects an insurmountable barrier between him and the only effective way in which his work can be done well.
I was a foreign language specialist in the military. When we were learning a new language, a few of the students hit a wall because they were approaching the material as "analogous to English." You can't think that way with most non-romance languages.
I'm now doing sysadmin work, and do a little bit of scripting here and there while working on teaching myself Python and C. Do you have any advice to avoid the kind of anthropomorphic thinking you mentioned your coworker does? The last thing I want to do is learn the "wrong way" and potentially struggle to rethink how to code.
I've been learning Japanese for a bit over a year now, and while I am still very far away from being happy with my level in the language, I definitely noticed a distinct shift for the better in my understanding of the language, and in how fast I picked up new grammatical constructs, once I stopped trying to relate it to the western languages I already knew.
You have to accept that Japanese grammar might have some basic concepts in common with the grammar of western languages (verbs, objects, adjectives, etc) but that really doesn't get you far.
Speaking as someone who is studying both linguistics and Japanese, this type of comment is just cringey to read. In the grand scheme of potential "languages", Japanese and English (as well as all natural human languages, including signed languages) have incredibly similar grammars. It is only when you restrict your view to the range of human languages that Japanese and English start to look incredibly different.
With regards to your link. Japanese absolutely has subjects.
The question of precisely what -ga marks is a bit more controversial, but calling it a subject marker is a perfectly defensible position. Indeed, having studied linguistics before studying Japanese, I found that thinking of -ga as a subject marker and -wa as a topic marker made the distinction easy for me.
The point of this comment is not to say that we should be teaching native-English-speaking students of Japanese that Japanese does have subjects and that -ga marks them. I am not qualified to make that judgement, and my benefit likely came as a result of leveraging my linguistic studies.
As others have pointed out, thinking of Japanese as analogous to English is counter-productive for most students. This is despite the fact that Japanese is analogous to English in most cases, including (potentially) cases where seeing the analogy could be counter-productive.
Returning to the original article: this is a good indication that just because something is wrong does not mean that it is not useful. Anthropomorphizing might not be technically correct, but it could still be a useful tool to help our human brains make sense of the world or subject matter. In the same way, saying "Japanese does not have subjects" might not be technically correct, but it might still be useful.
> Speaking as someone who is studying both linguistics and Japanese, this type of comment is just cringey to read. In the grand scheme of potential "languages", Japanese and English (as well as all natural human languages, including signed languages) have incredibly similar grammars. It is only when you restrict your view to the range of human languages that Japanese and English start to look incredibly different.
That seems like a total non sequitur. Do you also find the expression "like comparing apples to oranges" cringey, since apples and oranges are quite similar when compared to black holes?
That might not have been the best way for me to make my point, which is just that, qualitatively, all human languages appear to be incredibly similar. I am not aware of any way to quantify this observation. I do know, however, that children who do not acquire any language during the critical period find it much more difficult to acquire a language as an adult, relative to a typical adult acquiring a second language [0]. This means that a native English speaker would be relying heavily on his knowledge of English when learning Japanese (although not necessarily at a conscious level).
I tried to avoid the "apples to oranges" objection you are raising by restricting the domain to that of "potential languages", which seems like the "fair" point of reference to take. Since Japanese and English are both natural human languages, I would consider comparing them more akin to comparing red apples and green apples.
The main point I was trying to make (which admittedly got buried at the end of my post) was that "wrong" and "not useful" are two very different concepts.
[0] This is best observed in deaf children who are not exposed to sign language at a young age.
So what? You understood, and to all appearances agreed with, the point being made ("thinking of Japanese as analogous to English is counter-productive for most students"), right?
The poster clearly wasn't making any broader claim about the "qualitative similarity of all human languages" (whatever that might be).
I suppose learning a language that forces you to work in a declarative style (such as Haskell) rather than imperative might be enlightening, but if that's not practical you could try using declarative style in C and/or Python. For instance, don't use global mutable variables and whenever possible write functions that take in const arguments and produce some return value without changing the state of anything.
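To make that concrete, here's a minimal sketch in Python (the names and numbers are invented for illustration, not from any particular codebase): the first function leans on shared mutable state, the second is a pure function of its arguments.

    # Imperative style: the function reaches out and mutates shared state.
    discounts = {}

    def apply_discount_imperative(order_id, total):
        if total > 100:
            discounts[order_id] = total * 0.1  # side effect on a global dict
        else:
            discounts[order_id] = 0.0

    # More declarative style: a pure function. Given the same input it always
    # returns the same output and changes nothing else, so you can reason
    # about it without replaying the program's history in your head.
    def discount_for(total):
        return total * 0.1 if total > 100 else 0.0

    print(discount_for(250))  # 25.0, no hidden state involved

The second version is also trivially testable, which tends to be the practical payoff of writing in this style.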
Another language I might suggest for exposing yourself to declarative / functional programming is Erlang or Elixir. I've tried to learn Haskell myself but the number of new concepts can be really daunting. Erlang is a little easier to get going with, and exposes you to many similar declarative concepts.
I think your language metaphor works here, but I won't try to extend it for fear of misleading you with anthropomorphism :)
Anthropomorphized code reads like someone reading off a gigantic checklist that keeps getting longer and longer. Each time a new problem comes up, we solve it by adding another "if A then B", except it's something like this:
    if foo.bar.baz[0][1]['value']
        if (foo.bar.baz[0][1]['value'] == 'undocumented business rule')
            globalVariable = false # why? who knows
        else
            globalVariable = true
        globalVaraiable = foo.bar.baz[0][1]['value']
Relying on "if A then B" quickly gets out of hand. The number of possible paths that your code can take grows quickly, even exponentially, depending on the structure of your program. I don't want to be dogmatic, but always be asking yourself whether you can structure your code so as to avoid relying heavily on conditionals, especially nested ones.
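Here's a hedged sketch in Python of what that can look like (the message kinds and handler names are made up for illustration): validate the input once at the boundary, then dispatch through a plain lookup table, so each handler only ever runs on data it supports.

    # Hypothetical example: replace nested if/else chains with
    # (1) one validation step at the boundary and (2) a table mapping
    # each known case to a small function that handles it.

    def handle_order(message):
        return f"processing order {message['id']}"

    def handle_refund(message):
        return f"refunding {message['id']}"

    HANDLERS = {
        "order": handle_order,
        "refund": handle_refund,
    }

    def dispatch(message):
        # Validate once, up front; after this the handlers can assume well-formed input.
        if "kind" not in message or "id" not in message:
            raise ValueError(f"malformed message: {message!r}")
        try:
            handler = HANDLERS[message["kind"]]
        except KeyError:
            raise ValueError(f"unknown message kind: {message['kind']!r}")
        return handler(message)

    print(dispatch({"kind": "order", "id": 42}))

Adding a new case becomes a new entry in the table plus a small function, instead of yet another branch threaded through existing conditionals.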
Focus on the problem you're trying to solve, not the operational details of the language or other tools you're using. However, keep exposing yourself to more languages that differ from each other in semantics and run-times. Learn about different programming paradigms: declarative, imperative, functional, etc., focusing on semantics and not syntax.
Dear god, I think this is what would drive coders mad if some Lovecraftian old one were a coder. I have a colleague who writes code like that last one. I know he won't change, but I wish he'd at least use a damned switch statement.
Actually the first example looks perfectly reasonable to me. I would have used chains of "if () return" instead of a huge OR, but the result would have been the same.
The second example I can't judge. Probably the variable name is bad, but not extraordinarily so. It's a lookup table; I can't tell if it is a good or bad idea without knowing what is being looked up.
I think in the second one, a simple list of values that correspond to true would be more reasonable than putting this in your source code, if you just need a lookup table. The way the comments are structured, though, I wonder if each row is an option map (annotated by the comments along the top).
What could possibly be the advantage of hard-coding divisibility checks against every prime number? There are so many problems with this. The simplest thing would be to just make an array of primes and iterate through it. There are also faster algorithms for checking primality.
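Something along these lines, as a rough sketch (the table, its cutoff, and the function name are arbitrary here, not taken from the article's code):

    # Trial division against an explicit table of primes, instead of one
    # hard-coded divisibility check per prime. Only valid below 61**2 = 3721,
    # since beyond that a composite could have no factor in the table.
    SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61]

    def is_prime_small(n):
        if n >= 61 * 61:
            raise ValueError("n too large for this fixed prime table")
        if n < 2:
            return False
        for p in SMALL_PRIMES:
            if p * p > n:
                return True   # no prime factor up to sqrt(n), so n is prime
            if n % p == 0:
                return False
        return True

    print([x for x in range(2, 60) if is_prime_small(x)])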
That code is making an array of primes and iterating through it. It's just that the array is stored in instruction memory, and there is no explicit fiddling with indices or pointers.
As for more advanced primality testing methods, I guess it depends on where the fancier algorithms begin to pay off. It would not surprise me if this simple algorithm were the fastest for any input small enough to be stored in a single machine word.
I think the answer here is analogous between spoken and programming languages. When I was learning Japanese, the thing that made me grok the grammar was when I stopped "learning" and started imitating - that is, instead of trying to know all the rules needed to translate what I wanted to say from English, I concentrated on reading lots of sentences in the target language and saying things in Japanese using the patterns I saw there.
So for programming, the same gimmick would amount to this: rather than just building code out of the constructs you're most comfortable with, also read code written by experts and try to imitate the patterns you find there.
"But then Galileo made the troubling discovery that the heavier stone does not fall any faster than the lighter one."
It does fall faster, if you drop both stones in the atmosphere; the heavier stone has more mass per unit of surface area, to better overcome a constant level of air resistance.
Aristotle's physics were based on pretty accurate observation of the pre-industrial world, although they're surprisingly short on first principles. The really serious shortcomings were mostly related to impetus, the Aristotelian theory of motion -- it accidentally models friction well for objects in continuous contact with the ground, but it's very hard for Aristotelian physics to explain why an arrow keeps flying after it leaves the bowstring, and even gains speed after its apogee.
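If it helps to see the air-resistance claim above quantitatively, here is a rough numerical sketch (the stone sizes, drop height, and drag parameters are made up; this is not a reconstruction of anyone's actual experiment). It integrates dv/dt = g - (rho * Cd * A / (2 * m)) * v^2 with crude Euler steps; because the larger sphere of the same material has more mass per unit of cross-sectional area, drag slows it proportionally less.

    import math

    def fall_time(radius_m, density_kg_m3, height_m, dt=1e-3):
        g, rho_air, cd = 9.81, 1.2, 0.47            # gravity, air density, drag coefficient of a sphere
        mass = density_kg_m3 * (4 / 3) * math.pi * radius_m ** 3
        area = math.pi * radius_m ** 2
        k = rho_air * cd * area / (2 * mass)        # drag deceleration per v^2
        v = y = t = 0.0
        while y < height_m:                         # simple Euler integration
            v += (g - k * v * v) * dt
            y += v * dt
            t += dt
        return t

    # Two stones of the same material (~2700 kg/m^3), dropped from ~50 m:
    print(fall_time(0.02, 2700, 50))   # small stone
    print(fall_time(0.10, 2700, 50))   # larger, heavier stone -- lands slightly sooner

With numbers like these the gap comes out to a small fraction of a second over the whole drop, which is why it is so easy to argue about.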
> It does fall faster, if you drop both stones in the atmosphere; the heavier stone has more mass per unit of surface area, to better overcome a constant level of air resistance.
Not necessarily; for one thing, if the masses of the two stones are not significantly different, the difference in the effects of air resistance would be unmeasurable given the instruments of the day.
For another, the shapes of the stones are important. One with a much greater sectional density, oriented correctly, could fall faster than the other, even if it were lighter.
But that's just nitpicking. If I remember correctly, Galileo used inclined planes because there were only imprecise water clocks available to measure the passage of time. His choice of apparatus was brilliant :)
I see I shouldn't have let the article's reference to stones stand; the specific experimental apparatus used by both sides was lead balls -- Galileo rolled them down inclined planes, while his opponents dropped them from the Leaning Tower of Pisa.
The article lost me when the author blithely claimed that it was Galileo who dropped two <droppable-item>s from the Leaning Tower of Pisa. The author clearly has no idea what happens when you do that, while Galileo was far too good at choosing the experimental setup that would give him the answer he wanted, while ignoring any such setup that would prove him wrong, to make such an obvious mistake.
while Galileo was far too good at choosing the experimental setup that would give him the answer he wanted, while ignoring any such setup that would prove him wrong
By this you mean that he was good at designing experiments that were sensitive to the basic physics of what was going on, while being insensitive to confounding factors?
This is the Renaissance we're talking about; I wouldn't be at all surprised if he just picked the experiment he wanted and laughed off the rest, applying Petrarch's style of rhetoric in the sciences.
This issue is pernicious in the biological sciences. Evolution is often depicted as having a will and is referred to as a marvelous creator, which completely misses the point of evolution: a system driven by simple instructions that over time create emergent complexity (like a cellular automaton). People assume that there is some mechanism for adaptation being exercised. Adaptation happens at the species level, and the only thing exercised is survival.
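As an aside, the "simple instructions create emergent complexity" point is easy to demonstrate in a few lines; here is a minimal sketch of an elementary cellular automaton (Rule 110, chosen just as a familiar example, not because it has anything specific to do with biology):

    RULE = 110  # Wolfram rule number: its bits give the next state for each 3-cell neighbourhood

    def step(cells):
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 40 + [1] + [0] * 40   # start from a single live cell
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)

Each cell follows one tiny local rule, yet the global pattern that unfolds is anything but simple -- no intention or will required.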
> What lets you determine when a composite object or pattern in an automata has crossed that threshold?
There isn't really a threshold, beyond that pattern or object somehow communicating to you that it does indeed possess a will or some form of consciousness. The default is to assume it doesn't until it shows it does, rather than invoking an animist world view that imbues every complex phenomenon (weather, death, etc.) with a spirit.
I'm in agreement with you though in general. It is arrogant to think that humans are at the terminal end of emergent complexity. Maybe our minds are too limited to conceive of something arising from a global or galactic scale.
Why is the animist view not the default assumption?
That seems a strictly more complicated model (with equivalent or even lesser predictive power), in that it supposes two classes of objects rather than a single class (in some sort of distribution), and supposes there must be some special quality to things, wherein they gain an extra trait.
The simpler assumption (at least to me) would seem to be the animist one, albeit that most wills don't look much like ours (since most things don't look like us).
I mean, I could see it if you were arguing that humans don't have a will, but evolution doesn't either -- but to divide them into categories based on feelings (which seems to be the case) seems to needlessly complexify the model.
> Why is the animist view not the default assumption?
> That seems a strictly more complicated model (with equivalent or even lesser predictive power), in that it supposes two classes of objects rather than a single class (in some sort of distribution), and supposes there must be some special quality to things, wherein they gain an extra trait.
Although you raise an interesting point, that was the default view of many, if not most, societies until modern times. Once that threshold is crossed, we reflexively imbue it with other superstitious traits. In theory it makes sense; in practice, it'll lead to shit like human sacrifice.
> There isn't really a threshold beyond that pattern or object somehow communicating to you that it does indeed possess a will or some form of consciousness
I agree. It is a further anthropomorphism to even assume that the integrated sensory and memory perception phenomenon that we apparently experience as consciousness is inherently "human" in its quality rather than something more qualitatively similar to the bending of space-time by matter. An incidental consequence of homeostasis is that arbitrary pieces of matter are somehow a "self" and a cluster of sensory signals are somehow an "experience". The animist view isn't even necessary if we do not insist that "consciousness" be "a state of existence" for thinking creatures to "have" or not.
We don't have to get caught up in definitions of consciousness.
What we see in the case of microbe cell division is goal-directed behaviour. The physical details of how it happens are very important to cell biologists, but the bigger point is that one way or another the thing will behave in ways that help it grow and divide.
In that light it seems perfectly fine to say that the cell wants to divide. Indeed it is the most correct explanation I can think of -- and then I can invoke evolution as an explanation of how it came to want this.
You're assuming it's a matter of some threshold (presumably of something like "complexity").
We have good reason to think that anything like "will" (leaving aside "free will" here, and just talking about things like the ability to achieve ends by developing plans and utilising information about the state of the environment) requires specific information processing capabilities. Capabilities that a process like natural selection does not have.
Or a threshold in information processing capabilities, yes. But could you describe these information processing capabilities?
Since evolution does process information about iterative states, it's not the mere existence of feedback, so there must be some threshold in capabilities that requires crossing.
Please describe them in a manner that includes or excludes as you see fit babies, cats, and bacteria, but doesn't leave it a continuum (i.e., there must be a cutoff where on one side you have some and on the other none).
Describing how evolution works, and why it is completely blind, is not something for the size constraints of a HN comment. I'd recommend books like Dawkins's "The Blind Watchmaker" or "Climbing Mount Improbable", or Dennett's "Darwin's Dangerous Idea" for good, readable descriptions.
> I disagree with your assessment that it's any blinder of a process than your own will is.
I understand what you're saying, but the two concepts cannot be effectively compared. An organism's "will" is a label for the reactive tendencies we observe in physical systems that exhibit homeostasis, whereas evolution is the description of a physical process that is perpetually ongoing and has no physically discrete meaning.
Your comment is no more insightful than "I understand how to draw a box around one of them".
"You" as an entity are really just a process emergent from chemical reactions on some area, and your "will" is just the result of feedback between many regions of that process and outside stimuli.
Biological evolution is a process happening in more dispersed reactions, but still has all manner of internal feedback mechanisms and responds to external stimuli.
I don't see how you've drawn a meaningful distinction between them, except to say that one is easy to observe in total (e.g., you can draw a box around it) and is sort of like you, so you feel you can understand it.
Homeostasis and physical locality don't seem particularly germane traits when discussing whether or not something has a will.
Further, you (just as evolution) are ongoing until you're not, and "physically discrete meaning" sounds like a dressed up "well, I just know it when I see it".
> You" as an entity are really just a process emergent from chemical reactions on some area, and your "will" is just the result of feedback between many regions of that process and outside stimuli.
Agreed.
> Biological evolution is a process happening in more dispersed reactions, but still has all manner of internal feedback mechanisms and responds to external stimuli.
Disagree. Evolution does not "respond to external stimuli", there is no "internal" or "external" as far as evolution is concerned, that's like saying "erosion responds to external stimuli"; "it" does not respond, "it" is a description of a process.
> I don't see how you've drawn a meaningful distinction between them, except to say that one is easy to observe in total (e.g., you can draw a box around it) and is sort of like you, so you feel you can understand it.
I can't draw a box around it because it is not a thing with a position in space, unlike "you", which is.
> Homeostasis and physical locality don't seem particularly germane traits when discussing whether or not something has a will.
Of course they do, if words are to have any meaning at all. Whatever it is you're trying to describe that is common between an organism and the description of a physical process is not called "will" by any useful definition of the word. It's like if I said "evolution has no guiding principles" and you replied "well, neither do humans really, because our 'guiding principles' are just the result of a deterministic evolutionary process", but that's not true, because "guiding principles" is a human concept that applies to beings that reason about their environment, even if the foundation of that reasoning is determined by constituent factors.
I think the point here was more that saying people have will but evolution doesn't is a lot like saying fish swim but submarines don't. Obviously the two can be distinguished, but we distinguish them because we look at the mechanisms by which they function - whereas if we only looked at the externalities, we might consider them one and the same.
Personally I think it's quite an interesting point; much moreso than the point you seem to have replied to (i.e. the notion that evolution isn't blind, which I don't think GP was arguing by any stretch).
I always found it interesting to ask if my fish toy swims: it has a fish shape and propels itself through the water by wiggling its tail.
Similarly, if cephalopods swim. And if they do, why not jetskis?
Also, I was arguing it, but mostly to see the responses -- everyone seems so sure it is, and yet, the answers come down to "because it doesn't remind me of monkey cognition". (Similar to why I asked about animism -- everyone is so sure it was wrong, but doesn't seem to know why.)
I think it is a bit presumptuous to say that evolution is completely blind.
Here is a paper that, based on deep learning, explores how evolution can be seen as learning from previous experience like a neural net.
In my experience that claim gets one an earful from biologists. (However, it is of course always interesting what people in different fields tell themselves all the time, because those are the easy mistakes.)
That one has always annoyed me a bit, too. I think it is partly to sugar-coat the mechanisms for humans. "Adaptation" sounds much nicer when one thinks of adapting to a hotter climate or a smaller paycheck than what actually happens.
>This issue is pernicious in the biological sciences.
It is?
I only ever see this come up in layman's discussions of evolution. I've never seen the anthropomorphism of evolution come up among actual biologists / grad students.
Apparently Dijkstra thought that if we could somehow think more abstractly and mathematically then we'd avoid making mistakes.
But in practice, it seems to be the opposite: most people have a hard time thinking abstractly. We need analogies to make sense of things.
For example, there was an experiment [1] showing that even basic logic is easier to handle if it's thought of as "detecting cheating" than as a pure logic problem.
Dijkstra wanted us to think on paper, use abstractions that accurately describe real-world systems, and arrive at answers methodically with a trail of checkable work. If our data and methods are sound our answers will always be correct.
Using just-so stories does nothing to aid in the methodical derivation of conclusions from premises; and can serve to obfuscate and confuse the issues.
> Dijkstra wanted us to think on paper, use abstractions that accurately describe real-world systems, and arrive at answers methodically with a trail of checkable work. If our data and methods are sound our answers will always be correct.
I understand the aspiration, but I always think of the counterpoint implicit in Knuth's famous statement: "Beware of bugs in the above code; I have only proved it correct, not tried it."
The point is, if a system's behavior can be characterized with a set of equations, use the equations to talk about the behavior -- not a flawed metaphor for human cognition. If you find out that your equations incorrectly or incompletely characterize the system's behavior, Occam's razor requires you to assume that what you need is a better set of equations.
> if a system's behavior can be characterized with a set of equations, use the equations to talk about the behavior
I agree, and I don't think Knuth's quote was saying anything different.
> If you find out that your equations incorrectly or incompletely characterize the system's behavior, Occam's razor requires you to assume that what you need is a better set of equations.
Yes, but you might not have them, and they might not be easy to find. So you might have to face the fact that, now and for the foreseeable future, you might not be able to use your equations to completely predict or characterize the system's behavior, so you need to actually test your code instead of just proving it correct, as Knuth said.
As a separate point from my other post, characterizing the system's behavior isn't enough. You also need to characterize the requirements that the system is supposed to meet. Even if you have equations for the former, you might not for the latter. So your equations might completely and precisely predict system behavior that turns out not to do what the user actually wants. I think that's part of what Knuth's quote is talking about.
We somehow managed to repurpose our brains for mathematics in a very short amount of time on the evolutionary timescale.
So while we are capable of abstraction, our brains work better when we rethink the problem in terms of throwing rocks. Anthropomorphism helps us make our primitive mind and higher functions cooperate.