C implements a fairly low-level but general model of how a computer works, and a programmer's understanding of pointers is something of a proxy for their understanding of how that model works. Therefore, C has a sort of built-in test for a level of competence in a programmer. Unfortunately, it is by no means perfect, and you can still find people writing in C who have no idea why they should not return a pointer to an automatic variable.
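To make the point concrete, here is a minimal sketch of the mistake (my own example, written as C-style code that also compiles as C++): the buffer has automatic storage, so it no longer exists by the time the caller uses the returned pointer.

    #include <stdio.h>

    /* BUG: buf has automatic storage duration, so it ceases to exist when
       make_greeting returns; the returned pointer dangles. */
    const char *make_greeting(void) {
        char buf[32] = "hello";
        return buf;              /* using this afterwards is undefined behaviour */
    }

    int main(void) {
        const char *s = make_greeting();
        printf("%s\n", s);       /* may print garbage, crash, or appear to work */
        return 0;
    }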
If this were valid, there would be no need for an exception type hierarchy at all.
In reality, we have different exception types (and error codes) because there are circumstances when the type of exception determines how it may be handled. Therefore, a professional programmer needs to be able to find out what exceptions could occur in any given call (which is not the same as saying that she needs to consider each one individually after every call - this is where your 'what can be handled here?' principle has some use.)
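To illustrate with a made-up example (not from the article): the choice of handler below depends entirely on which exception type arrives, which is why the caller needs to know what the call can throw.

    #include <iostream>
    #include <stdexcept>
    #include <string>

    int port_or_default(const std::string& s) {
        try {
            return std::stoi(s);  // may throw std::invalid_argument or std::out_of_range
        } catch (const std::invalid_argument&) {
            return 8080;          // not a number at all: fall back to a default
        } catch (const std::out_of_range&) {
            // a number, but a nonsensical one: treat it as a configuration error
            throw std::runtime_error("port value out of range: " + s);
        }
    }

    int main() {
        std::cout << port_or_default("abc") << '\n';    // prints 8080
        std::cout << port_or_default("9090") << '\n';   // prints 9090
    }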
Terminate-on-exception might just be a workable policy for a simple or unimportant application, but it is not how you create a robust software infrastructure or safety-critical software.
The original article describes a style of programming in which the author's programs are built directly on OS services without middleware, and in which the programmer is aware at all times of what might go wrong. I think it is this, rather than the distinction between error codes and exceptions, that contributes to the reliability of the programs he is writing about.
Creating a new exception type that could propagate out of the abstraction you are working on is a serious matter, so it would not be a bad thing if it were a non-trivial thing to do. Unfortunately, the attempts to make this so in Java and C++ put most of the burden on the users of the abstraction instead, and this seems to be built into the nature of the problem.
Reasoning about the correctness of lock-free concurrency is much more difficult than doing the same for sequential algorithms, and gives the lie to the original author's claim that successful concurrent programming is just a matter of avoiding mutable global state.
"Manually using a lock is a smell in the same way that goto is a smell."
The presence of a lock tells you nothing about whether a program is well-designed. In fact, the whole idea that you can argue about the correctness of a program from the presence or absence of "code smells" is absurdly simplistic. I have seen plenty of bad code littered with while statements.
The author seems to think it is all about avoiding mutable global state, but it is more than that: it is about shared resources and temporal ordering, and these are not issues that can, in general, be avoided in the simple way he supposes.
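A small sketch of the point (my own example): there is no mutable global state here at all - the vector is local and merely shared by reference - yet the two threads still contend for a shared resource, and the ordering of their results is still nondeterministic.

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    void worker(std::vector<int>& shared, std::mutex& m, int id) {
        for (int i = 0; i < 3; ++i) {
            std::lock_guard<std::mutex> lock(m);  // remove this and you have a data race
            shared.push_back(id * 10 + i);
        }
    }

    int main() {
        std::vector<int> results;  // shared state, but not global
        std::mutex m;
        std::thread a(worker, std::ref(results), std::ref(m), 1);
        std::thread b(worker, std::ref(results), std::ref(m), 2);
        a.join();
        b.join();
        for (int v : results) std::cout << v << ' ';  // interleaving varies from run to run
        std::cout << '\n';
    }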
I am curious as to how you think this should inform the way we program. You could also say 'coding errors are harmful to the usefulness of programs', but that gives us essentially no insight into how to do things better.
Do you realize that this observation does not answer the question? There are multiple possible suppositions as to what, if anything, you are implying, so why don't you stop beating about the bush and state clearly the message you are trying to convey?
My previous comment ("If that's the way you work, then your employer has made a mistake") was ambiguous, and actually illustrates a point regarding comments (and documentation in general). Like that comment, the short verb phrases that make up source code cannot always express the intent of the programmer (extreme 'self-documenting code' advocates think otherwise, but their experience with simple programs and algorithms doesn't generalize.) In such cases, comments are a good way to explain a design issue that is not obvious.
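A trivial, invented illustration: the code below can only say what it does; the comment records the intent, which is the part a maintainer cannot infer from the statements alone.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Deliberately keep duplicate IDs: the reporting stage counts repeats to
    // detect retried requests, so do not 'clean this up' with std::unique.
    std::vector<int> collect_ids(std::vector<int> ids) {
        std::sort(ids.begin(), ids.end());
        return ids;
    }

    int main() {
        for (int id : collect_ids({7, 3, 7, 1})) std::cout << id << ' ';  // 1 3 7 7
        std::cout << '\n';
    }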
Who cares about these things, which don't change the code? Anyone who needs to understand the code, which includes anyone working on it, including its original author - writing a brief explanation of a tricky point can help you spot mistakes before you compile or test, or even before you write the code. The goal is to produce correct code, not just code, and the one thing that is faster than continuous testing is avoiding mistakes in the first place.
FWIW, I think the idea that comments need to be complete sentences is silly. Sentences can be as wrong, ambiguous, misleading or uninformative as simple phrases.
This article illustrates one of the many ways by which software becomes unjustifiably complex.
Firstly, there is nothing about the problem that calls for object-orientation. As the author realizes, the classes in the class-based implementation are non-contributing baggage.
All the problem description actually calls for is three functions and a conditional clause. The implementation using functions presented here, however, is considerably more complicated than that, possibly because the author chose (deliberately or not) to attempt to mirror the form of the class-using solution.
This code was written for illustrative purposes, but the same sort of thing happens in real-world programming when people make a priori decisions about the style of a solution, rather than being guided by the requirement. Having a preference for 'sophisticated' styles exacerbates the problem.
A common justification for this sort of complexity is that it makes the code more reusable, and in some cases that is true. Whenever that argument is made, however, one should consider how much simplification the reuse would actually achieve, and how likely that reuse is, and weigh it against the immediate cost of the extra complexity.
Another justification is that it promotes correct use, and again this is sometimes true. Note, however, that neither of the solutions enforces one of the few usage rules of the example: that the normalization function must be applied before the calculation. Anyone creating his own concrete derivation from AbstractCalc has to remember to call the normalization function, and gets no warning if he fails to do so.
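For what it's worth, here is a sketch of how that rule could actually be enforced (my own code; everything beyond the AbstractCalc name is invented): the base class fixes the normalize-then-calculate order in one non-virtual method, and derived classes supply only the calculation step, so they cannot forget the normalization.

    #include <cmath>
    #include <iostream>

    class AbstractCalc {
    public:
        double compute(double x) const {
            return calculate(normalize(x));   // ordering enforced here, once
        }
        virtual ~AbstractCalc() = default;
    protected:
        static double normalize(double x) { return std::fmod(x, 360.0); }  // invented rule
    private:
        virtual double calculate(double x) const = 0;  // the only override point
    };

    class SineCalc : public AbstractCalc {
        double calculate(double x) const override {
            return std::sin(x * 3.14159265358979 / 180.0);
        }
    };

    int main() {
        SineCalc c;
        std::cout << c.compute(450.0) << '\n';  // ~1, i.e. sin(90 degrees)
    }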
'Will the state pay for all these expenses?' For the most part, no.
I strongly suspect that one of the reasons for using a subpoena here is that it transfers the effort and cost of performing an investigation to the respondents, who have been put in the position of proving their innocence, or rather that no crime has actually occurred.
The interrogatories show that the attorney-general's office has made no effort to understand what this software does, and has made no investigation into whether anything suspicious has actually happened. They are on a fishing trip, with the subpoena as the dynamite.
Are there no aspects of programming that present any sort of challenge to you? Lockless concurrency? Cryptography? The various things that the AI community has been struggling with over the last quarter-century?
"What I'm saying is that if you're the kind of person that cares, you'll find a higher-paying job using that knowledge for something more important."
It is not just a matter of developers' professional and personal integrity. We have to live with the consequences of bad software produced through the employment of amateurs, even if we, personally, care about doing a good job.
This is safety-critical software, and the burden of proof rests on the manufacturer. One of the things that distinguishes an engineering culture from a tinkering one is that in the former, it is not acceptable to assume everything is OK just because we haven't noticed anything going wrong yet.
"Andrew Ng, who worked or works on Google’s AI, has said that he believes learning comes from a single algorithm."
As algorithms can be combined, the existence of any set of algorithms satisfying this goal would automatically imply the existence of a single algorithm incorporating all of them.
If a sufficiently-detailed physical simulation of a human's brain satisfied this goal, then that would be one such algorithm.
That's the most interesting and informative issue presented by this scenario.
The problem with the sort of test you propose is that just because a human uses intelligence to solve a problem, it does not follow that the task requires intelligence. For example, playing chess.
From an engineering point of view, your black box may be as good as the real thing, though you couldn't really trust it beyond the areas of its demonstrated competence. Knowing how it works, however, would be the most significant achievement.
Furthermore, it's going to be hard to build one of these black boxes without a reasonably good idea of how it is going to work.
There is also the risk that your weighting scheme will rule out the only algorithms that have a chance of succeeding, because I bet they are pretty complex.
In the article's comments, the author of the code in question says "I 'solved' the problem by sticking in one trace call that made the problem go away. That kind of fix makes me vaguely nauseous, but ugly working code beats pretty broken code every time." He does not appear to have an explanation of the error that the optimizer is allegedly making, or of how this change fixes it.
If you have made a 'fix' but you don't know what problem it solves or how it solves it, you probably haven't fixed anything, and you can expect further trouble from the root cause.
This offers a counterexample to the simplistic notion that 'duck typing' results in programs that automagically do the right thing. The reality is that duck typing does not relieve you of the responsibility of understanding the semantics of the elements from which you construct a program.
The risk is not primarily to electronics, but to the power distribution system. CMEs induce large, very low frequency currents in transmission lines, which can destroy the large transformers connected to them. There is no large reservoir of replacement transformers, and they take some time to build, so after a widespread event, they will not be quickly replaced. IEEE Spectrum had a detailed discussion of the problem and steps to mitigate it: http://spectrum.ieee.org/energy/the-smarter-grid/a-perfect-s...
You are assuming that the lines of sight of each of the arrays are parallel with one another, so that their separation on the ground is the same as the separation of the arrays on the satellite. If they are at an angle to one another, the separation of these lines at the surface will be different from (and possibly in a different order than) the separation of the arrays. It is the surface separation, divided by the speed of the satellite, that determines the time between images of the same point on the surface by different arrays.
If the arrays all share the same lens, that would cause this effect.
Polarized glasses help with reflection at a shallow angle, but the observers will be looking down, not across the surface, and when the sun is low on the horizon, all observers will be looking away from the sun, as it would be a waste of time to try to see anything up-sun, even with polarized glasses.
In addition, using polarized glasses might lead to seeing the colored bands of stress birefringence in the window, depending on what material it is made of.
Each sensor will be on the focal plane of the lens it uses. If they are all looking through the same lens, their lines of sight will all cross one another at the optical center of the lens, and so will be at an angle to one another. Therefore, each sensor has a different view of the surface - it is an example of parallax.
(EDIT: I have changed 'lens system' to 'lens', because I think the former might be misleading. All I meant by 'lens system' is that to make a decent camera, you have to have several lens elements along the optical axis. Subsequently, I realized 'lens system' might be taken as implying an array of lenses. What I actually mean by 'lens' is simply as in 'telephoto lens'.)
I did a quick back-of-the-envelope calculation based on this assumption. The idea was to see if the implied focal length for the lens is plausible.
I actually used the green-blue pair for the calculation, because I think it is slightly more likely that each pair shares a lens than that they all do.
The green and blue images seem to be something more than a wingspan apart. We don't know what type of aircraft these are, so I picked a 737 as a mid-sized example. The latest models have a 34m wingspan, so let's say the aircraft has moved about 50m between images.
Assuming the aircraft are moving relatively slowly, because they are searching, I used 100 m/s (200kts) for their speed, which means there is 0.5 sec between the images.
The satellite is moving at "almost 5 miles per sec", so let's say 7 km/sec. That means the views of the two sensor arrays have a 3.5 km separation at the surface.
Zizzer gives the satellite altitude as about 630 km. The ratio of the focal length of the lens to the separation of the sensor arrays is the same as the ratio of the satellite's altitude to the separation, on the surface, between the areas the two arrays are imaging: f/a = 630/3.5, or f = 180a (where a is the array separation).
From the picture of the sensors, and principally using the connectors' pin-holes for a sense of scale, I guess the blue and green arrays are about 0.5 cm apart, implying f = 90 cm, which seems plausible to me.
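For anyone who wants to check or adjust the numbers, the whole chain of guesses amounts to this (the values are my rough estimates from above, not measurements):

    #include <iostream>

    int main() {
        const double plane_offset_m  = 50.0;     // guessed shift between green and blue images
        const double plane_speed_mps = 100.0;    // roughly 200 kts
        const double sat_speed_mps   = 7000.0;   // "almost 5 miles per sec"
        const double altitude_km     = 630.0;
        const double array_sep_cm    = 0.5;      // guessed from the photo of the sensor arrays

        const double dt_s          = plane_offset_m / plane_speed_mps;             // 0.5 s between images
        const double ground_sep_km = sat_speed_mps * dt_s / 1000.0;                // 3.5 km at the surface
        const double focal_cm      = (altitude_km / ground_sep_km) * array_sep_cm; // ~90 cm

        std::cout << "time between images:  " << dt_s << " s\n"
                  << "ground separation:    " << ground_sep_km << " km\n"
                  << "implied focal length: " << focal_cm << " cm\n";
    }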
To the first question: yes, and after a bit more thought, I realize that this assembly must be designed to use a single lens, as the sensor arrays are close together, which would make it difficult to have any other optical arrangement. The pictured assembly would form the back of the camera, where the rectangular CCD array (or film) goes in a conventional camera.
The picture shows that the red sensor is further from the green one than the blue is, simply because it is on a separate sub-assembly. If the infra-red image showed up in the published pictures, it would be as far from the red one as the green one is from the blue. Given the order of the arrays on the camera, the IR image of the plane would be ahead of the red image, meaning that the IR is the last of the four images taken.
Incidentally, the pictures show the aircraft are experiencing considerable wind drift, especially in the case of the top-left picture, where you can see the contrail. This indicates strong winds, and probably also a relatively low airspeed for the aircraft. The orientation of the waves suggests a head- or tail-wind at the surface, but to produce that drift, the wind at the airplane's altitude must be different (and that altitude probably is not high if it is searching.)
You do not actually have to define "truly random". You do have to trust that the mechanism you are using to generate keys (radioactive decay or rolling dice, for example) is unpredictable.
There is a more general point here, that sometimes seems to be lost in philosophical arguments: reality doesn't pay any attention to the meaning of words. If "unbreakable" is not well-defined (I am not sure that is so), then it is a problem within the domain of language, not cryptography.
It worked in the sense that the lifeboats on the Titanic worked - they did, after all, save hundreds of lives.
The combination of time and severity in this case should mean that we can move on from naive 'all bugs are shallow' dogma towards developing a more evidence-based approach to the verification of critical software.
If it can be satisfied in a way that is materially different from what was intended, then it is both under-specified and incorrect. This is a not-uncommon source of errors.
Natural language can be used with precision, and this is an important skill for engineers. Being able to identify ambiguity and inconsistency is an important part of that skill, so yes, noticing it within a question should count in favor of the candidate. Dismissing it as pedantry is the wrong response, because if you are working on something critical (secure communications software, for example) you need to handle ideas with precision.
I think there is an important and quite general point here: it is not just about programming languages, but also about programming knowledge and skills. You are replying to someone who was unaware of the state of the art in program verification (to be fair, he recognized the issue to be solved, which is an important start.)
As the example in the original post demonstrates, programming in languages that have this level of support for verification is very different from programming as it is currently commonly practiced. Not everyone will be capable of making the switch, and for an organization to simply say 'from now on, we are going to use this safe language', without addressing the skills issue, is setting up for failure.
I wasn't intending to cast doubts on any particular individual's skills, which is why I wrote 'knowledge and skills'. I am learning this stuff myself.
You make some good points about where safety matters most, but I think a greater general awareness would help drive adoption where it matters. Furthermore, while this problem had widespread consequences due to it being in widely-deployed system- or middle-level software, 'ordinary' programming can have quite serious vulnerabilities, too.
I think schools, especially below the first tier, could do more to promote awareness of static verification and other safe practices, and that might modify the way their graduates approach development, even though they probably will not be using formal methods.
There are things that can be done to improve safety in general-purpose programming languages. I feel certain that garbage collection and the avoidance of pointers have made programming safer, but I suspect 'duck' typing has had the opposite effect.
In the past, the DOD has been a driver of code safety, though it has backed down from its possibly ill-advised 'nothing but Ada' position. In fact, Ada might be the counter-example to the idea that you can drive safety through language choice.
You would think the banks would have a vested interest in improving things. Perhaps they could divert a fraction of their bonus payments to create incentives...
TDD is but one example of the apparently inevitable fate of good ideas in software development: it becomes the One True Way, universally applicable, and the new litmus test for distinguishing Real Developers from troglodytes. Whenever a developer gets a new tool, she is supposed to throw out the old one (there will never, of course, be more than one in the toolbox at any given time.)
That appears to be so, but the committee's application of that preference is rather self-contradictory. Without an explanation, the result would not even have been a contender for the prize, and it was Alpher who first provided the explanation.
As Alpher also showed that the universal ratio of hydrogen and helium isotopes can be explained by nucleosynthesis in the big bang, he seems to have been seriously overlooked.
"He has not insulted any people though. He has said bad things about the piece of software called gcc-4.9.0..."
A number of people have made that point.
So if I were to say "Hacker News' emotional maturity ranks below that of 12-year-olds, who generally recognize that contempt expressed for an inanimate entity is actually targeted at the persons who created the flaw in question", that would not be taken personally?
Perhaps your first paragraph explains why the Spitfire had thin wings. IIRC, one unintended consequence of this choice was that, as power and speed increased through WWII, the Spitfire avoided compressibility problems.
Documentation produced that way is actually consistent with the statement you first quoted. I think what is happening here is that the researchers, being properly careful, verified what the software does, rather than trust to descriptions by others.
If there had been a formal specification, I imagine they would have both analyzed the specification for vulnerabilities, and checked the code for correct implementation.
Part of the problem is that, thanks partly to precedents from earlier decisions, 'obvious' and 'prior art' have taken on special meanings that do not conform to either common usage or common sense.
This is a very timid conclusion. If the author had followed his reasoning to the bitter end, he could have proclaimed 'each method should contain only one operator' and left no doubt as to the power of his insight.
For the most part, neither the financial industry nor economics is particularly interested in causal theories. If an investment manager or an economist finds a correlation that looks reasonably robust, they are not going to wait for causality to be established before taking a position (in the former case) or publishing (in the latter case).
The difficulty of performing controlled experiments makes causality hard to establish. Furthermore, numerical results look so much more certain than vague, hand-waving causal explanations.
I am pretty certain that the problem the prof. is writing about is well-known, and controlled for, in the more sophisticated parts of the financial industry. He seems to acknowledge this, but suggests that the industry may be influenced by sloppy academic work. I think he is exaggerating the importance of academic work here: while the industry may look to academe for ideas, it doesn't take them on faith when real money is involved.
I am writing about a more sophisticated part of the industry than you are. The existence of this element does not invalidate or contradict your comments about the industry in general. The segment that practices insider trading and government hacking is pursuing an approach that owes little, if anything, to economics or econometrics, and so is outside the scope of the prof's comments.
This is an interesting perspective, but I think the author has made too much of it. OO is one of several developments that have added ways to put abstraction to work, and they are all double-edged swords, providing additional ways for a confused coder to express his confusion and work around his initial misconceptions of the problem he is attempting to solve.
The problem is not in the programming language features, any more than they are panaceas: the problem is in how we approach problem-solving. Guessing at the solution and then debugging it into acceptability, or 'guard-rail programming', as Rich Hickey has described it, is not the optimal approach in any language.
I agree with your skepticism of the idea that dynamically typed languages are more expressive. It is not as if we are writing poetry here; we are actually trying to specify, with great accuracy, a virtual machine. Fortunately, we can state for sure that there is no fundamental difference between the capabilities of Turing-equivalent languages; if this were not the case, the arguments over expressiveness would be interminable.