Reminds me of an old programming contest: the 8 Queens problem, posed by Byte Magazine in the '80s.
All the solutions presented started by declaring an 8X8 board, with a 1 or 0 in each cell to represent a queen. Then there was some two-dimensional iteration over the board, testing if any queen was on the same row, column or diagonal as any other.
I'd dismissed that solution as inefficient from the start. Since only one queen can be in each column, I represented the board as an 8-element array with each element containing the row number of the queen in that column.
My second insight was that only one queen can be in each row, so the elements of the board array were simply a permutation of the numbers 1..8. The search consisted of permuting those digits, then testing pairs of columns for diagonal conflicts, i.e. verifying abs(row[i1] - row[i2]) != abs(i1 - i2): the difference between their row numbers must not equal the difference between their column indexes.
Finally, when permuting, you could test the elements as you went and trim the permutation tree drastically if you found a failure. No sense permuting the right-most elements of the array if there's already a diagonal conflict in the digits placed so far.
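For anyone curious, here's a minimal sketch of that approach in Java (my reconstruction, not the original BASIC): board[col] holds the row of the queen in column col, each row is used at most once, and the recursion prunes as soon as a new queen shares a diagonal with one already placed.

class EightQueens {
    static int solutions = 0;

    public static void main(String[] args) {
        place(new int[8], new boolean[8], 0);
        System.out.println(solutions); // prints 92
    }

    // Try every unused row for the queen in column 'col', pruning diagonal conflicts early.
    static void place(int[] board, boolean[] usedRow, int col) {
        if (col == 8) { solutions++; return; }
        for (int row = 0; row < 8; row++) {
            if (usedRow[row] || diagonalConflict(board, col, row)) continue;
            board[col] = row;
            usedRow[row] = true;
            place(board, usedRow, col + 1);
            usedRow[row] = false; // backtrack
        }
    }

    // A queen at (col, row) clashes diagonally with an earlier queen if the
    // row difference equals the column difference.
    static boolean diagonalConflict(int[] board, int col, int row) {
        for (int c = 0; c < col; c++) {
            if (Math.abs(board[c] - row) == col - c) return true;
        }
        return false;
    }
}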
The whole thing ran in trivial time. The contest winners' programs ran in hours or days (this was all in BASIC on sub-1 MHz processors back then). Made me wish I'd actually entered the contest. But I didn't bother; I figured many people would have found the same solution.
I entered a programming contest once, where the task was to take some input and construct an optimal solution to a geometric problem. I got irritated at the rules not being clear and well thought out, and I wasn't smart enough to work out a good implementation of the real-world algorithm they were hoping contestants would come up with. So I wrote code to provide pretty much the stupidest (most trivial) possible solution that fulfilled the stated constraints for a valid output. Entrants were scored on a combination of speed, code size, and quality, so my entry came in a close second: it was by far the smallest and fastest. I kicked myself for not trying harder to optimize the quality even a little bit above "braindead".
A definition which doesn't differentiate is a definition which provides no information. If your terminology is broad enough to encompass everything, it's also too broad to tell you anything.
Defining a function is already a mechanism of abstraction. You take a snippet of code and give it a name. Usually you generalize by adding parameters. Maybe you can even make it generic (C++ templates, Java generics, Haskell typeclasses, etc.). Removing the abstraction of a function means inlining the code, which you could do manually (copy & paste).
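A tiny Java illustration of that progression (names are purely illustrative): the raw snippet copy&pasted, the snippet given a name and parameters, and finally the generic version.

class AbstractionSteps {
    void withoutTheAbstraction(int[] xs) {
        // the same snippet inlined twice (the copy & paste route)
        int firstTwoMax = (xs[0] > xs[1]) ? xs[0] : xs[1];
        int lastTwoMax  = (xs[2] > xs[3]) ? xs[2] : xs[3];
        System.out.println(firstTwoMax + " " + lastTwoMax);
    }

    void withTheAbstraction(int[] xs) {
        System.out.println(max(xs[0], xs[1]) + " " + max(xs[2], xs[3]));
    }

    // the snippet, named and parameterized...
    static int max(int a, int b) {
        return (a > b) ? a : b;
    }

    // ...and made generic
    static <T extends Comparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }
}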
Understanding the problem > any abstraction you pick without understanding the problem first. In this case chess was a terrible choice for an interview. In chess the biggest issue is tree size. If you don't start there then no abstraction will ever save you.
I'm going to disagree with you here. In chess the problem is not about reducing the tree size: you can get a "tree" by picking random moves. The problem is about reducing the tree size intelligently.
My chess program - Dorpsgek - has an 800 byte board structure (though I recently reduced that to about 400). It uses techniques that the computer chess community considered to be impractical. But still it works, and it works because the space that I use per board contains very useful information, such as attacks to a given square.
The advantage that bitboards give is not space in any meaningful quantity - for an alpha-beta search of depth d with a branching factor of b you will allocate at most d boards - or even just 1 board - at any given time, not b^d boards. The advantages that bitboards give are speed and information density.
And to anybody who thinks that a bitboard approach is not sufficiently generic for different board sizes, wrap the bitboard using an arbitrary size bit set and the problem is essentially solved.
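For what it's worth, Java's standard library already has such a bit set, so a board-size-agnostic bitboard is only a few lines (a minimal sketch; the class and method names are mine):

import java.util.BitSet;

class GenericBitboard {
    private final int width;
    private final BitSet bits;

    GenericBitboard(int width, int height) {
        this.width = width;
        this.bits = new BitSet(width * height);
    }

    private int index(int file, int rank) { return rank * width + file; }

    void place(int file, int rank)       { bits.set(index(file, rank)); }
    void remove(int file, int rank)      { bits.clear(index(file, rank)); }
    boolean occupied(int file, int rank) { return bits.get(index(file, rank)); }
}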
I have no idea what abstraction actually is. It seems to me it's never called abstraction unless it leads to complicated designs for the simplest problems. I've been accused of hating abstraction because I wrote in plain C and worked out my own concepts, instead of just using a ready-made framework with ill-fitting ones (Qt in that case).
A much better term than abstraction, to me, is "semantic compression" (I got that from Casey Muratori of Handmade Hero). It basically means factoring out common bits with the goal of expressing the solution with the least amount of repetition (while of course taking care not to factor out parts which are only coincidentally common - I figure that's the "semantic" part).
To do semantic compression you need abstraction, but not pointless abstraction -- just the right amount.
Abstraction, to me, is not about reducing lines of code but reducing cognitive load by hiding complexity. I don't need to know what kind of list, or how the list is implemented, I just need to know it's a list of some sort and has the usual list interface.
For example, I recently needed to randomly reorder a collection of objects, weighting some of them more heavily than others such that 'heavier' objects are more likely to come first. My first implementation was a mess, and was hard to test. So I created an abstract data structure that allowed inserts with weights and polls for a random value, applying the weighting. My business logic could use that abstract thing presuming it worked. The tricky part was hidden, and the code was cleaner.
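Something like the following is a minimal sketch of that abstract data structure in Java (the class and method names are mine, not the commenter's): insert items with weights, then poll repeatedly to get a weighted random ordering where heavier items tend to come out first.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class WeightedBag<T> {
    private final List<T> items = new ArrayList<>();
    private final List<Double> weights = new ArrayList<>();
    private final Random random = new Random();
    private double totalWeight = 0.0;

    void insert(T item, double weight) {
        items.add(item);
        weights.add(weight);
        totalWeight += weight;
    }

    // Removes and returns one item, chosen with probability proportional to its
    // weight. Assumes the bag is non-empty.
    T poll() {
        double r = random.nextDouble() * totalWeight;
        for (int i = 0; i < items.size(); i++) {
            r -= weights.get(i);
            if (r <= 0) {
                totalWeight -= weights.get(i);
                weights.remove(i);
                return items.remove(i);
            }
        }
        // Floating-point slack: fall back to the last item.
        totalWeight -= weights.get(weights.size() - 1);
        weights.remove(weights.size() - 1);
        return items.remove(items.size() - 1);
    }
}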
In the case of the article, the abstraction the interviewer wants doesn't actually make the code clearer. It's just ticking boxes: did the candidate write the word 'class' and jump through the hoops the way the interviewer expected?
IMO, the key issue is to differentiate between abstractions in language-space and abstractions in problem-space. Turning a chess board into an object is language-space, since it affects vocabulary; writing a function to count the pawns is problem-space, since it defines a step towards the problem one wants to solve.
Nobody programs without problem-space abstractions any more; this is effectively what you get with functions and libraries. When someone uses a prebuilt tangent function, that's working in problem-space.
Language-space abstractions don't pull the same weight. If they did, Haskell programmers would be so much more productive than C programmers that the latter would simply be competed out of the market. Instead we see marginal benefits against marginal costs, and the gamut of C, C++, Go, Haskell, Python, JavaScript, etc. is, if not equally good, at least sufficiently similar that there is genuine debate.
If in doubt, abstract the problem. If you do an abstraction and don't have less work to do afterwards, maybe you've abstracted the wrong thing.
> If they did, Haskell programmers would be so much higher productivity than C programmers that the latter would be simply competed out of the market.
That implies a much smarter market than I believe exists.
I think it’s possible (not arguing it’s true) that Haskell is dramatically better yet entrenched interests (e.g., managers afraid of not being able to find programmers, and programmers afraid of being obsoleted) block the logical conclusion.
I’m pretty sure most of the Haskell crowd doesn’t believe this. There are benefits to learning, thinking in, and using Haskell, but raw easily measured productivity increases isn’t one of them (nor is it for almost any other programming language).
Well, I think the lack of a definition for what is meant by abstract or abstraction is critical. For example, there is some discussion of an abstract base class, but the article keeps missing that in Object Oriented Programming the term "abstract class" already exists. It describes an 'incomplete' class which cannot be used to instantiate objects but can be used to hold common behavior.
Abstraction is a tool to help in your thinking. The abstractions that are useful or needed depend on what you're doing. An abstraction is basically a license to not care about the details once you have sufficient proof that you can do so within relevant limits.
All work has abstractions, even physical work. Consider a hand saw. In detail, its operation and efficiency rely on the teeth, their angle and sharpness in relation to what you're sawing. But the abstraction is that you can take a hand saw, place it onto a tree branch or wooden plank, and start moving it back and forth until the cut is complete. Surely some of the teeth will be worse than others, and eventually they will be so blunt that the abstraction fails and you cannot complete the cut until you sharpen the saw. But on average, the abstraction of sawing works for hours and hours, even days and days, in a row.
Having such an abstraction elevates our thinking to consider different ways to cut wood, make joints, fit ends, etc. That is all possible only because of the abstraction of how a saw works: it frees you from thinking about how the saw operates while using it, letting you spend those mental cycles on what you want to saw and how.
Programming is the same. There are abstractions for modelling the problem, there are abstractions for building the program, there are abstractions for data presentation and flow, and the whole computer you're using is a huge stack of abstractions so that when you press 'b' the letter will appear on your screen as 'b' no matter what the hardware and system your computer is running.
Sometimes - actually more often than not - abstractions are an easy way to add useless complexity. It usually happens when you don't yet know how to solve a problem, so you first abstract it to finish something else and effectively postpone the hard problem further. The problem is you can do it again, and then again.
When you finally do get around to solving the big problem, you already have an interconnected mesh of various abstractions and mechanisms in place - those that you concocted back when you were still guessing what the solution might look like - and you have to retrofit the correct solution on top of those.
Ideally you would have solved the hard part first, at which point you'd already know which abstractions, if any, you need to express the problem and the solution in simple and understandable terms.
As an example of a good abstraction regarding chess: a deep-learning algorithm that wasn't hand-tuned specifically for chess just beat the best computer chess player, which had a hand-tuned algorithm (in which case the neural-network representation of the board performed better than the bit representation).
The Bitboard is very abstract. The interface it instantiates will likely not leak its concise internal representation and will discuss the operations on the game state at a domain level. This is abstraction.
So this is a failure of OO abstraction. Why? OO abstraction is really complex and makes many forced moves that provide little value here. Inheritance and a focus on “nouns” makes idiomatic code highly specialized to certain domain models. Unfortunately, these are rarely useful.
OO design is often about the tradeoff between over-engineering and over-specialization. The rules of chess can usually be assumed not to change, which is different from most programming tasks.
If you would design for Chess 2.0 and you expect some game designers to change the rules every week (thinking up new kinds of pieces, changing the rules for existing pieces, changing the board layout, etc), would you still use Bitboard? Maybe it would be better to focus on the "nouns" the game designers use and keep optimizations like BitBoard in mind for later?
The point of abstraction is to not be tied to Bitboard--or any concrete representation whatsoever. Chess 2.0 changing its rules is exactly what can be protected against.
You assume a Queen object and also some compiler details. These might be in practice relevant some of the time but (a) how often, really? And (b) are those forced decisions or decisions of convenience and momentum?
Fwiw, as soon as you say “object” I’m betting you’re taking on expensive, unnecessary OO mental modeling.
The class-based solution provides more information about the developer's approach than the single struct for the BitBoard.
For example, where would the equivalent of "findMoves" be declared in the latter case? Is the bitmath abstracted out into a set of helpers, or is it done in one big function?
The OP has translated the interviewer's question of "design a readable and maintainable abstraction for a chessboard" into "design the most space-optimized chessboard possible", then wonders why the interviewer doesn't like his solution.
Or maybe the OP understood that everyone who has ever implemented a chessboard was also building a chess engine, where memory efficiency and speed actually matter a lot.
It's the interviewers who are ignorant here: they took a real problem and translated it badly into a toy problem. Then they failed to realize what they'd done.
GP is correct. The Quora answer has a false dilemma that you've just perpetuated here, viz. that an OO program cannot use a compact representation. But there are standard patterns for using a compact representation of data for domain-model objects.
The article's author isn't just unaware of them, they've written a petulant, arrogant article demonstrating their own lack of experience and adaptability.
They could be building an online social game site, or implementing a UI on top of an existing AI. In which case the OO version will be easier to code, understand, render, and use to show game history in chess notation. The only things his method is good for are implementing an AI and serialization.
I think the author's approach is data-centric rather than object-centric.
His representation would be easy to persist: write the longs to a file. Easy to render: based on bit positions, draw a piece. Easy to understand: I looked at it and immediately knew what was happening (which almost never happens when I have to look at a massive class hierarchy). History would be trivial with his approach: since it's space-efficient, you could literally store the bitboards for each previous board configuration, or if you wanted to optimize more, you could store a simple sequence of moves.
I highly disagree. Implementing an AI requires being able to work effectively in chess notation. Either you can choose long algebraic notation, which is pure squares (which can be encoded/decoded into a move type) or short algebraic notation, which requires knowing piece types (which is an AND between a bit representing the square and piece type bitboards) and being able to differentiate between multiple attacks to a square (which is an AND between the move square and attacks by other pieces of the same type).
This is complicated by the two approaches not being apples-to-apples, however.
Trivially, you'd iterate i from 0-63 and then do 10 bit checks per value of i in order to determine what piece, if any, is on tile i. You could probably optimize it further, but that'd be hella fast on any device I can think of, and would take very little code to implement.
For each type of piece, if the bit is set, draw the piece at that position.
If you click on a square and there is a piece in that square (there is a bit set at that position in any of the piece bitboards), get the valid moves and draw them.
If you then click an empty valid square, render the piece moving, etc.
Most of the details are usually hidden behind functions. Your renderer should not care about the internal representation of your data.
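Concretely, the rendering loop being described is only a few lines; a sketch assuming twelve bitboards (six piece types x two colours) and a stand-in drawSprite() for whatever the UI layer actually provides:

class BitboardRenderer {
    // bitboards[i] holds the squares occupied by piece kind i; sprites[i] is its image.
    void render(long[] bitboards, String[] sprites) {
        for (int square = 0; square < 64; square++) {
            long mask = 1L << square;
            for (int i = 0; i < bitboards.length; i++) {
                if ((bitboards[i] & mask) != 0) {
                    drawSprite(sprites[i], square % 8, square / 8);
                    break; // at most one piece per square
                }
            }
        }
    }

    // Stand-in for the real draw call.
    void drawSprite(String sprite, int file, int rank) {
        System.out.println(sprite + " at " + file + "," + rank);
    }
}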
Yeah, while reading this I was wondering how someone could miss the point to that degree.
The point was to test for an understanding of object-oriented design. Yet he talks about how awesomely space-efficient his implementation is compared to using objects.
But the interviewers seemed to also miss the ball, making irrelevant and false objections.
It's a poor choice of scenario for an interview, because no-one who knew anything about writing chess software would ever write that sort of OO code. If the point was to test for an understanding of OO design, wouldn't it be a good idea to provide a problem where OO design was appropriate?
Is there any reason to believe that code using efficient data representations like bitboards can't scale just as well, if not better, than OO versions full of classes?
Or that the interface to use that code would be harder to understand or maintain?
If not, why mandate the use of OO at all, and if it's OO you want to test, why not pick a better example where it might actually make sense?
Oh please. As if the interviewer's next question wouldn't be "make it faster/more efficient" anyway. The interviewer is the problem here because he wants to build a chess game in an extremely stupid way using OOP for no other reason than to use OOP. I'd say most applications of OOP fail in similar ways in real life, creating monsters of complexity where simplicity could have existed. The author should consider himself lucky: this sounds like an extremely shitty company and the interview has shown him that. It probably wouldn't even be worth it for him to complete the interview at this point. No one wants to work for idiots.
The interviewer gave the OP a simple question that could be answered within a short time to gauge how well they understood OO design, and OP specifically implemented it in a way to frustrate them. So again, who's the idiot?
If OP wanted to crush the question, they could quickly answer it by saying this is how you could implement using classes and inheritance, now let me explain why OO is a poor solution for this specific problem, and what kind of problem sets OO design is most useful in.
How is the interviewee supposed to even guess that this is an OOP question, as opposed to a "solve this problem" question? Unless the interviewer goes out of his way to mandate an OOP solution, he will just get whatever the interviewee's best guess is.
In my case, this is unlikely to yield an OOP solution. I'm even less likely to guess an OOP solution was even expected. And if I fail the interview because this particular company takes OOP for granted, I should fail the interview, and may even be glad I did.
The article does not mention how the question was put, or in which context. If it was clear from the start that the OP was expected to use OO, then it would not be smart to start with the bitboard implementation and then continue to argue about it. ... But if the question was simply "how would you design a chess game", with no other context, then it would be reasonable to assume that the interviewer wanted to hear your best solution... and for that he chose the non-OO bitboard version, based on his previous experience with chess engines.
I'd like to see the rest of this implemented. What do these global functions look like? Are they swamped with if statements (I hope not), or are you storing some game logic in structs to replace polymorphism? I don't disagree with the compact pawn struct as a starting point, but to further prove your point I would love to see the rest / pseudocode.
Let's say a black pawn takes the white queen: unset the bit representing this black pawn, set the bit where the black pawn is now, set the white queen long to zero. One load and two write operations to memory. The bit fiddling is probably negligible in terms of time on a modern CPU.
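As bit operations, that capture looks roughly like this (a sketch with illustrative names; clearing only the destination bit, rather than zeroing the whole queen bitboard, also stays correct if a promotion has produced a second queen):

class Captures {
    // Returns { updated black pawn bitboard, updated white queen bitboard }.
    static long[] blackPawnTakesWhiteQueen(long blackPawns, long whiteQueens, int from, int to) {
        blackPawns &= ~(1L << from);   // unset the bit the pawn moves away from
        blackPawns |=  (1L << to);     // set the bit it lands on
        whiteQueens &= ~(1L << to);    // remove the captured queen from that square
        return new long[] { blackPawns, whiteQueens };
    }
}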
In case anyone was curious how a bitboard chess engine works, I wrote one from scratch in Python and included a writeup describing my general approach (mostly focused on the move generation aspect):
I don't know why he wants to waste so much space with twelve unsigned longs when he could do it in eight (one for each piece type plus two more masks for color).
If you're going to go that far, you might as well use 64 bits to say whether each square contains a piece. There are a maximum of 32 pieces, and for each (potentially) occupied square there are 12 possibilities for what might be there (not counting "nothing" as a possibility in a square since that's already covered by the 64 bits indicating whether each square is occupied), which can be represented in 4 bits per square. So that's 64 bits to say which squares have pieces, plus 32 * 4 = 128 bits to say what is in each occupied square, for a total of 192 bits to store the chessboard.
You can take it even further, too. For the 32 possibly occupied spaces, you can encode what is in them as a 32 digit base 12 integer, then convert to binary, which is guaranteed to fit in 115 bits. So you can get away with 115 + 64 = 179 bits.
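(Sanity check on the 115-bit figure: a 32-digit base-12 number has at most 12^32 distinct values, and log2(12^32) = 32 * log2(12) ≈ 32 * 3.585 ≈ 114.7, so 115 bits are indeed enough.)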
It seems nuts to me that anyone would expect the Queen class to extend Bishop just because they happen to move diagonally. A Queen "IS NOT A" Bishop.
If anything, specify traits that define the movement of the pieces and have each piece extend that trait (with the Queen extending both CAN_MOVE_DIAGONALLY and CAN_MOVE_LATERALLY).
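In Java terms, a minimal sketch of that idea (interface and method names are mine; it ignores blocking pieces and board bounds):

interface MovesDiagonally {
    default boolean diagonalMove(int fromFile, int fromRank, int toFile, int toRank) {
        return fromFile != toFile
            && Math.abs(toFile - fromFile) == Math.abs(toRank - fromRank);
    }
}

interface MovesLaterally {
    default boolean lateralMove(int fromFile, int fromRank, int toFile, int toRank) {
        // moves along exactly one axis
        return (fromFile == toFile) != (fromRank == toRank);
    }
}

class Bishop implements MovesDiagonally { }
class Rook   implements MovesLaterally { }
class Queen  implements MovesDiagonally, MovesLaterally { }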
Yes. A similar way to argue is via Liskov's substitution principle [0]. It is easy to come up with properties that always hold for bishops but not for queens. For example, bishops never move from black squares to white squares or vice versa.
It's hard to ask this question without sounding snarky, but I hope you will understand that I'm seriously curious to hear what you think.
> No, my observation has nothing to do with composition vs/ inheritance.
> It's about defining what a class IS vs/ what a class HAS.
What do you feel is the difference between these two sentences? In other words, why is inheritance vs composition not the same as what a class is vs what a class has?
In practice, inheritance is a very special case of composition that is only advantageous in a very narrow set of use cases. Basically almost never.
In fact the number of use cases is so low that I would prefer that modern languages just omit it entirely, because the added value is incredibly tiny compared to the cost of confused developers.
The primary difference between an interface and inheritance is that with inheritance you do not have to implement every method and can use the default implementation of the base class.
Example:
class Piece {
    int x;
    int y;

    boolean validMove(int newX, int newY) {
        return false;
    }
}

class Queen extends Piece {
    // no override: we use the default implementation inherited from Piece
}

Is equivalent to this

interface IPiece {
    boolean validMove(int newX, int newY);
}

class Piece implements IPiece {
    int x;
    int y;

    public boolean validMove(int newX, int newY) {
        return false;
    }
}

class Queen implements IPiece {
    Piece piece = new Piece();

    public boolean validMove(int newX, int newY) {
        // delegate to the default implementation
        return piece.validMove(newX, newY);
    }
}
Inheritance doesn't actually offer any code reuse. All it does is choose the default implementation of the parent class. If you have dozens of polymorphic functions in a class and only want to change 3 it might be convenient. I would question why you need so much polymorphism to warrant a special language feature. C and Rust developers are fine without it. Functional programmers are fine without it. But for some reason in OOP you need it everywhere and are actually worse off because it's overused.
Modern language designers, please do not implement inheritance of classes in your new languages!
> In practice inheritance is a very special case of composition that is only advantageous in a very narrow set of usecases.
You are confused. See my other message below where I show that composition and inheritance are not mutually exclusive (and arguably, the best of both worlds is using inheritance via composition).
Edit: Sigh... This message is badly written, mainly because I confused your point :-) I'm not quite sure why I was thinking you were arguing that it was not an ISA relationship. In any case, removing my addled brain from the equation is probably useful -- you can probably figure out that there is more than one way to look at this. And while you clearly think that your way is correct, it is useful to view it from other perspectives.
While it is interesting to see your viewpoint, you may also consider that not everybody will see it the same way that you do. For example, why do you suppose that inheritance via composition makes it not an ISA relationship? Is a Swiss Army knife a screwdriver? You may say "no", but others may say "yes" -- it is a screwdriver by way of containing a screwdriver. You can verify this usage of the language simply by listening to sales pitches for the crap they sell on the shopping channel :-) ISA and HASA do not necessarily need to be exclusive properties.
It's easy to get wrapped up in your own definitions of things and to block out how others may view the universe. This is precisely why I originally asked the question. I appreciated the enlightenment. I hope that the above will help you achieve the same.
Having read a Python implementation of bitboards in the thread, I believe it's for search pruning. If you search the many possible boards after, say, three moves, there will be some duplicates (transpositions). Preferably each board should only be evaluated (for which player it favours) once, and that can be done by storing them in a hash table or similar structure, which requires storing the board.
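A minimal sketch of that idea (my names; a real engine would key on a Zobrist hash rather than the raw bitboards, but the principle is the same):

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

class EvaluationCache {
    private final Map<String, Integer> cache = new HashMap<>();

    // Evaluate a position at most once; repeated (transposed) positions hit the cache.
    int evaluate(long[] bitboards) {
        String key = Arrays.toString(bitboards);
        return cache.computeIfAbsent(key, k -> expensiveEvaluation(bitboards));
    }

    private int expensiveEvaluation(long[] bitboards) {
        return 0; // stand-in for a real evaluation function
    }
}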
I can relate to the pain of going through an interview knowing that the interviewer is expecting a specific approach/answer (an OO answer!) that my hard-won experience has already long ruled out. And morals and/or my sense of identity — and/or a fear that the Gods are watching — refuses to allow me to play the interviewer’s game (at my own practical loss).
I don't quite understand. If I were interviewing the OP and asked this chess question, then got his answer presented much like it was on Quora ("Well, the thing to understand about representing a chess board is that if you want to build a usable chess engine, the size of chess board representation is a critical constraint..."), I would be impressed. Hire that guy. He understands that the important thing is to figure out the solution constraints.
On the flip side, if the interviewer learns something from you in an interview and that means he doesn't want to hire you, aren't you glad to have escaped that work environment? Assuming, of course, that the interviewer is representative of the people you would be doing actual work with.
I agree w/ you. So in that sense it’s not truly at my loss—that part I was being (somewhat) rhetorical about.
However it has definitely been the case where I’ve contended with some Senior Engineer who has found the light in OO programming, inheritance, Template Method, etc, etc and no amount of dialectic will change his mind.
This programmer often doesn't see the severe impedance that his OO code, his type hierarchies, etc. are causing to agile delivery of new and newer software. How do you show an OO-religionist... the light, so to speak? These questions are not easy to take into a lab to get hard numbers.
Just an example here: matrix algebra is easiest done with first-class matrix abstractions. So my compeer puts everything into some other abstraction. And I say, look, look how much easier it is to do our work when our data is modeled differently. And he says "that's not easier" - okay, you are objectively right, but your peer doesn't "see it", or refuses to see it, or what, I don't know - but this is almost the de facto industry situation: you can't prove that your ergonomics are better, because your peer doesn't even speak in proofs anyway.
Now I could go find another job, but my equity is actually worth something here (it really is), so I put up with it. But it's frustrating nevertheless - and having a better abstraction would only cost my coworker the effort of learning something. It's not going to make my equity any less valuable - it might make it more valuable, if anything.
Is this for real? The solution he proposes is specific to Chess!
I mean, does this guy really think the company cares about efficiently representing chess game states??
Obviously, obviously, OBVIOUSLY, the chess aspects of this question are irrelevant. What they are trying to find out is how well you will work on their actual codebase, which is presumably not comprised of global functions operating on bitfields.
The guy is openly arguing with an interviewer after taking a question too literally. I suspect my interview impressions would've included "candidate may be on the spectrum".
However, this entire thesis is founded on a false dilemma, because you can write an OO domain model with a compact representation.
Why is the onus not on the interviewer? Why is he bringing up chess if he just wants OO answers? Aren't there better problems to pose, more suited to testing OO knowledge? Bitboard is a standard way of writing chess engines. I think most would agree that a good programmer is one who seeks the best solution, not one who dogmatically follows his preconceived ideas.
Yeah. Based on the HN comments, I went into this article expecting it to be an uninformed tirade. I came away from the article thinking, "I'd hire that guy immediately."
The interviewers are definitely using the wrong problem if they're trying to test knowledge of OOP.
Because you can write an OO domain model using a bitboard representation. It's an opportunity to demonstrate advanced OO knowledge, which this Quora answer most definitely fails at.
A variant of the flyweight pattern using contextual identity. All board representation remains a bunch of uint64_t, all messages sent to boards result in bit manipulation (i.e. not massive duplication of objects), and you instantiate the piece objects once per game.
There's simply no need for "instantiate all the things" approach assumed in the Quora answer. That is OO at its most naive.
OOP is messaging, encapsulation, and late binding. What you do inside the capsule is up to you; and at a higher level, how the behaviour emerges from a composition of objects is also up to you.
I have the sinking feeling that your first paragraph is probably what the original author had in mind all along… He advocated for a representation of the entire board, and that's apparently what you would do as well.
I originally thought you were advocating for a representation of each piece on the board, where "messages" were "sent" to individual pieces, not the entire board. I was curious about how you'd map that interface to a compact representation.
I am. You should definitely be able to send messages to piece objects. That's not incompatible with the Board object having a bitboard for its internal representation of board configuration. The piece objects have contextual identity based on the board configuration. Hint: they don't have positional state.
That still doesn't tell me how you map the interface to the implementation. I need a piece of code here. You won't get away by leaving this as an exercise for the reader.
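For the record, here is a minimal sketch of the kind of mapping being discussed (all names are illustrative): the Board keeps bitboards as its internal representation, and a Piece object is a flyweight that asks the board for its squares instead of storing positional state.

import java.util.ArrayList;
import java.util.List;

enum PieceKind { WHITE_QUEEN /* ...and the other eleven kinds */ }

class Board {
    private long whiteQueens = 1L << 3; // d1 in a 0..63 numbering, as an example

    long squaresOf(PieceKind kind) {
        return (kind == PieceKind.WHITE_QUEEN) ? whiteQueens : 0L;
    }

    void move(PieceKind kind, int from, int to) {
        if (kind == PieceKind.WHITE_QUEEN) {
            whiteQueens = (whiteQueens & ~(1L << from)) | (1L << to);
        }
    }
}

// One Piece object per kind per game; it has no positional state of its own.
class Piece {
    private final PieceKind kind;
    Piece(PieceKind kind) { this.kind = kind; }

    List<Integer> squares(Board board) {
        List<Integer> result = new ArrayList<>();
        long bits = board.squaresOf(kind);
        while (bits != 0) {
            result.add(Long.numberOfTrailingZeros(bits));
            bits &= bits - 1; // clear the lowest set bit
        }
        return result;
    }

    void moveTo(Board board, int from, int to) {
        board.move(kind, from, to);
    }
}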
Sorry for going off-topic, but I couldn't shake this out of the back of my mind. You said "candidate may be on the spectrum"; do you discriminate against people based on their medical conditions?
Actually the opposite. Best practice is to correct for them once you realise they may be affecting someone's interview performance. As represented, this interviewer failed to do so.
A candidate on the spectrum is less suitable for a position that requires communicating with customers or other people, that is under stress/pressure, or that requires interpreting ambiguous analysis, cooperating with other departments independently, and such.
On the other hand, he/she can provide deeper insight into your work AND be able to communicate. People on the spectrum with a special interest in computers tend to speak like they program a computer: simply, succinctly, efficiently.
Another plus side: a worker on the spectrum will most likely not be able to lie to you and will often be completely honest.
Does it really matter, if he/she can't look you in the eye or can't understand the social hierarchy and games played in the office?
No, they do not speak efficiently, and they do not listen efficiently. That is the very core of their communication problems. And they are able to lie, and they easily misinterpret situations (e.g. their honesty is often not an accurate representation of reality).
> Does it really matter, if he/she can't look you in the eye or can't understand the social hierarchy and games played in the office?
No, it does not matter whether they look you in the eye. Many of them can do that, though. Yes, it does matter when their communication toward juniors or customers conveys disdain and a lack of regard, to the point where juniors were afraid to speak or have ideas, despite them not really wanting to cause that. It does matter that they confuse their own preferences with what is objectively better. The inability to imagine themselves in someone else's shoes leads to unwanted unfairness.
It does matter that others are suddenly required to put up with insults and have to spend a lot of time learning how to communicate with and solve problems for that person. All of those being symptoms of autism.
Yes, if the other colleagues have high social skills, a lot of that can be mitigated. But when that is not the case, the communication can become quite toxic.
I worked with people on the spectrum and they were a benefit to the team. But the framing in which they "don't play politics" and thus it is all sunshine and roses is not accurate. It is naive. You have to learn to predict problems and have to spend additional time solving them. And you have to put them in a position that is not freaking set up to fail.
People with autism suffer from the consequences of all that and should be helped. So, often, do those around them.
I did not know about your experiences; it seems very strange to me that people on the spectrum can lie. But I can imagine that some people on the spectrum can.
Maybe there is a difference between autism and Asperger's? Many people on the tech scene start programming at an early age and are obsessed with computers at some point in their lives. Thus many of us can at least relate to people with Asperger's.
Nevertheless, I still think an interviewer should note that the candidate is stubborn or pretentious, rather than just writing "on the spectrum". As you've said, there could be people on the milder side of the spectrum who could work really well.
Asperger's is now officially just mild autism. Of course they can lie, though it is harder for them to lie convincingly.
> Many people on the tech scene start programming at an early age
That has absolutely nothing to do with anything except having the right kind of adults around.
> and are obsessed with computers at some point in their lives
Sure, just like people in other professions get obsessed over this or that at some point in their lives. And it is not nearly the same as when someone with autism displays that symptom. A healthy person's obsession is much different and causes far fewer difficulties for that person.
> Nevertheless, I still think an interviewer should note that the candidate is stubborn or pretentious, rather than just writing "on the spectrum".
> Candidate on the spectrum is less suitable for position that requires communication with customer or other people
A candidate with certain communication-skill deficits is less suitable for those positions; while that may correlate to some degree with being on the spectrum, it neither necessarily implies nor is necessarily implied by being on the spectrum, and while an interviewer hopefully is qualified to assess that kind of deficit, they probably aren't qualified to make an ASD diagnosis.
Further, while it's absolutely legal to discriminate on skills, doing so on actual or perceived medical condition is more legally troublesome. So, it's far better all around for the interviewer to confine themselves to observed facts and assessments that are both relevant and that they are qualified to make, rather than playing psychiatrist.
If you aren't argu^H^H^H^Hdiscussing with the interviewer, you're doing it wrong. If that results in an unfavourable evaluation, that's good data for you on whether you should take that job.
> Is this for real? The solution he proposes is specific to Chess!
Some aspects of his solution aren't. He just took a holistic approach instead of the classic reductionist/analogy-with-a-not-so-real-world one. Holistic approaches are applicable to much more than chess. Game engines, for instance, often benefit from Entity Component Systems, which are effectively in-memory domain-specific databases, quite unlike OOP. More generally, the Data-Oriented Design of Mike Acton is applicable in any setting where performance matters. Another example is functional programming, highly suited to data transformation and symbolic processing, such as found in compilers and interpreters (probably not the high-performance ones, though).
Competent entity systems are OOP; entity is synonymous with object (bring out some other object-like system, and a thesaurus is usually used to find another word for object). You just don't use classes, but there have been plenty of classless OOP systems since the Treaty of Orlando. Anything with a nounish tint will lean more OO than functional (of course, taxonomies are never perfect).
Objects have always been about modeling and design, a way to talk about a problem using language that is more similar to how you might talk about it naturally. Object-thinking is much more important than anything else. There are many different ways to spin this, but most game engines and simulation environments benefit from this in spades (and have ever since Simula 67).
It is scary that the broad definition given in the '80s has been supplanted by a narrow "Java or C++" definition today. If that's what you mean by it, then no, we aren't talking about the same thing. I've built plenty of OO systems that don't correspond to that definition at all and that were, at any rate, still considered OO systems by the OO community (at least by the people who attend ECOOP and OOPSLA).
When I see "OOP" in a forum today, I assume it means the narrow "Java or C++". And I think that more than 90% of the time, I'm right. If you want to use a different, perhaps better, definition, the onus is on you to mention that you don't talk about the same thing as everyone else.
I didn't say that. Obviously, if you're talking about OO and mention Self in the same sentence, then you mean OO to encompass prototypes. But if you do not mention Self, don't be surprised when most people think you meant C++/Java.
We aren't really arguing about that though. When people exclaim that OOP is dead, they use Java/C++ as their strawman, but that doesn't mean objects are intrinsically flawed or really dead. It's like declaring you hate FP because Haskell isn't to your liking.
My original comment was that ECS is just another kind of object system. It has objects, it conforms to ontological object thinking, why would that not be OOP? Just because it isn’t Java/C++? Or would you argue that OOD and OOP are separate things?
Of course ECS has an object system, it simulates objects. Know of any program whatsoever that simulates stuff without having a stuff system? Does that mean it is Stuff Oriented?
By bundling ECS in the OOP umbrella term, you trivialize the fundamental differences between ECS and class/object hierarchies found in older game engines and Qt, and you water down the "OOP" term to the point of meaninglessness. You wouldn't be the first one to do so.
Anyway, ECS wasn't my main point. My main point was, and remains, when someone mentions "OOP" in Hacker News or /r/programming, unless mentioned otherwise they mean C++/Java most of the time. The most notable exception are discussions over the meaning of "OOP".
> Is this for real? The solution he proposes is specific to Chess!
Yes, that is, it is precisely addressing the question asked, not some unstated hidden intent.
If the tester wants to test the ability to apply a particular paradigm, that paradigm should be part of the explicit question. If you want to use a chess representation as a vehicle for testing OOP skills, then specify that object orientation is a requirement in the question.
Otherwise, you should be judging the interviewee on how well they did on the challenge you actually posed. If you don't, what you are really judging is whether OOP is the tool the interviewee reaches for no matter whether it's really appropriate to the problem they are faced with.
The CPU does not care about your abstractions. Abstractions are entirely wrt the human dealing with them. A good abstraction is only “good” with respect to some context/purpose—there is no such thing as a universally/generally good abstraction. And often a “great” abstraction will hurt your performance — so then is it so great?
But setting aside performance concerns, when we speak of a “good” abstraction we are usually (or should be) saying this is good for some purpose—good for readability, for example.
But even better - or of utmost importance in the real world - is this: is the abstraction in question "good for business"? And that is entirely asking this: does the abstraction allow for rapidly acting on business ideas, creativity, needs, etc.?
However, I believe that once the context is fixed/agreed upon, there is an objective answer as to which of this or that abstraction is better. However, experience in the practical world of today's software development has painfully shown me that the "better" abstractions are harder to come by... and when "found", don't tend to stick. This is because most practitioners don't have the ability to produce powerful algebraic systems (which is what "good" abstractions are - "algebras" over a business domain), because practitioners are generally not mathematicians and even have a philistine dislike/disdain for powerful systems if they have a whiff of mathematical power to them at all.
In this sense one could argue for an abstraction being "good" with respect to the social context in which it is operated (i.e., if your team members don't understand how to correctly wield an abstraction, is that abstraction good?). However, I don't like these kinds of arguments, because a lesser system is still capped in its power even if all its users understand it.
There are limits to what you can do with, say, Euclidean geometry, even if it is much simpler to understand than other geometries. A frequent retort to this is "No it isn't." But that usually comes from perspectives with limited imagination. That said, many businesses are fine and thrive with limited imaginations.
Imho an abstraction by itself can be incomplete or unsuitable [0]. If it is neither, it is a good abstraction. I concede that suitability is often hard to quantify.
What you describe with algebras, I would label the "precision" [1] of an abstraction. Precise abstractions/algebras require a good specification though and that is often missing in the business domain.
Great abstractions are not always algebraic. Words and vocabulary are incredibly successful as abstractions, as are ontologies. In fact, an entire programming paradigm has been constructed around such abstractions and has been reasonably successful.
Words and vocabulary aren’t enough. Not the colloquial/dictionary sense, at least.
So if we’re talking about “facts” in an “ontology” — these are still (they must be if they are going to be processed by a machine) concrete formalisms - that’s what I mean by algebras.
If we are not machine-processing these ontologies but just printing them out for users, then I don’t count that. Because we’re not really programming over those abstractions. We’re just giving them back to the humans.
Instructing someone how to do something and instructing the computer how to do it are basically the same thing. Yes, humans have more latitude in how they follow instructions, but the same skills of abstraction are applied. The whole point of OOP is that you can still wield abstractions without being a mathematician.
Now, how often do people learn chess algebraically?
“But first of all, the OO inheritance here is irrelevant. The queen is the only piece which actually “inherits” properties from other pieces! We don't need an object model to simply reuse some functions to calculate legal moves given a position. Just a few global functions.”
Don’t all pieces have a location? Aren’t all pieces movable? Might we want to display pieces? Do we want to journal them to files to save game state, or their moves to streams to play remotely?
All of these things can be done procedurally, but they also fit nicely into an OO design.
But then why use OO? What does OO offer that this method does not? In fact, one could argue that this method offers better decoupling, since it separates the rendering from the game data.
You use OO because your interviewer wants an example that demonstrates you understand it.
Secondly, OO is more easily extensible. If the company wants the game to support other board sizes, the bitwise storage has to be ripped out. 90% of the work in 90% of the jobs is writing clear, easily maintainable and extensible code, not maximizing performance. Frankly, if I interview someone who answers a question like OP did, I pass, thinking their code will be a premature-optimization nightmare.
Except, this is not premature optimization. Chess programs are a domain the author is familiar with. He goes out of his way to explain why-- based on his experience-- chess code should be written with space-optimization in mind.
Secondly, if you want to support non-standard boards, his solution requires a small tweak (depending on the language): long -> bigint and some parameter to denote board size.
Yea, you are describing code that’s more difficult to enhance and maintain.
Your piece and board objects can have serialize/deserialize methods that write/read them to/from bitboards for the most efficient storage for whatever variant of chess you want, while still having your code in an OO design that's easier to understand and extend. And that allows you to demonstrate the skills the gdmn interviewer wanted you to demonstrate, which is the actual point of the exercise!
I'd rather fail the interview than work for the one who failed me in this case.
Assuming I am asked to design something in a domain I am familiar with, where an OOP-looking solution is not a good approach, I will simply go with a good approach instead. If this triggers a discussion, wonderful. If the interviewer assumes I don't understand OOP, shame on them for not explicitly asking for an OOP design in the first place. And if the interviewer suggests I should go for an OOP design instead of my solution, I will contradict them on the spot —politely. I may comply for the sake of the exercise, but will repeat and insist that there are better approaches to this class of problems —politely.
More generally, I am highly sceptical of claims that OO designs are in general easier to understand, maintain, and extend (this includes this chess example, where I believe the bit-board approach is competitive even for variable board sizes). They're often not, if only because being OO for the sake of it tends to generate more code. More code to understand, more code to debug, more code to extend… you get the idea.
I also tend to believe OOP is not the best approach for most problem domains, possibly even including GUI (I have yet to study this in detail). I'd rather not work for shops that insist on using OOP everywhere.
OO is better because bundling data and code together is better, because code is data, and I have no idea what I'm babbling about. </strawman>
The real advantage of OOP, as used today in Java/C++, is instantiation. Instead of having procedures working on global variables, you have procedures working on local variables, including complex data structures.
Instantiation took some time to get widespread adoption. Originally, even local variables were actually global, scoped variables, which forbade even recursive calls (see earlier versions of FORTRAN). Programming languages have since used a stack to instantiate their local variables (and return addresses), thus enabling recursion. The instantiation of more complex data structures (arrays, user-defined structs…) followed.
Ironically, instantiation took some time to become ubiquitous in the C language even though it fully supported it from the very beginning, with a stack and user-defined compound types. Case in point: Lex/Yacc, which use global names (and state) by default.
Now however, instantiation is so pervasive (global variables are assumed evil by default), that we don't even call it OO any more. We just call it good practice.
>The real advantage of OOP, as used today in Java/C++, is instantiation. Instead of having procedures working on global variables, you have procedure working on local variables, including complex data structures.
You can have the same in functional programming, by creating closures on demand.
> Now however, instantiation is so pervasive (global variables are assumed evil by default), that we don't even call it OO any more. We just call it good practice.
This is another one of my pet peeves... If it's global (a static resource), make it a global. Local variables for static resources make code so much less readable. The only argument against globals is testing, and that's only an argument because common OO languages have no support for resetting global "objects"! Solution: Just don't use OO syntax in the first place - it's wrong. Just write init() and exit() functions.
No, the newer argument against globals is testing, but that's mostly a side effect of the older issue, that globals limit composability/reusability, which was the main objection to globals before TDD became a popular religion.
The "reusability" argument is just as wrong... True reusability (without any changes) is not possible in most cases anyway, and furthermore I don't see a reason why surrounding some static object (which can only exist once in a program, like a sound module, a network module, etc) with braces and a "class" keyword would somehow increase this vague idea of reusability.
It's only a syntactic change. It's not changing what should actually happen. It's just making it less readable. How in the world can that be an improvement on any frontier?
> I don't see a reason why surrounding some static object (which can only exist once in a program, like a sound module, a network module, etc) with braces and a "class" keyword would somehow increase this vague idea of reusability.
Often, because the assumption that it can exist only once is wrong; this is particularly true of instances of some descriptive class of interface to an external hardware resource, which covers all of your examples.
> It's only a syntactic change.
No, it's usually not; while the syntactic change is usually necessary, it usually isn't the whole difference between the desirable modular code and the bad global-using code that should be made, and quite often if you aren't writing it as a global resource in the first place, you never make the other wrong decisions that would need to be changed.
Using globals may occasionally be justified (either as an optimization or, even more rarely, as “correct” from a fundamental design perspective), but most often it's a symptom of sloppy thinking.
> Often, because the assumption that it can exist only once is wrong; this is particularly true of instances of some descriptive class of interface to an external hardware resource, which covers all of your examples.
The key here is how to understand the word "can". It's "only" a design decision! Of course you can do almost anything on a computer. However, most programs don't make sense with two sound modules or network modules or graphics modules. So "can" here means, "it absolutely makes no sense, and I'm never realistically going to instance two or more of this thing". (And if I really want to do that later, 1 in 1000 times, I'll just edit the code).
> it usually isn't the whole difference between the desirable modular code and the bad global-using code that should be made, and quite often if you aren't writing it as a global resource in the first place, you never make the other wrong decisions that would need to be changed.
Give an example - I don't think there are any. I can easily give you some bad things that happen when avoiding globals to represent static resources: many more input and output arguments to type. Then the syntactically ugly, useless, meaningless Singleton. I've seen it many times, and it is the best proof that it made no sense to avoid the global in the first place; it even potentially leads to nondeterministic initialization.
And more importantly, an occasional reader has a much harder time browsing through the code because she never knows where the local variables are pointing at.
Most VR games make sense with 2 displays (Headset + screen), and 2 sound systems (binaural for the headset + stereo for the audience). You'd likely know that from the outset, though.
Yes - and for each of these examples, there would likely be still a single "object instance" managing these devices (e.g. "Display" module). Or two entirely different modules ("Headset" module + "Screen" module), again each having only a single "instance".
The concept of instancing is inappropriate in most situations. The things that we have more than one ("dynamically many") instance of are typically dead data, but then again those are typically managed in a single pool. (That's "tables", again.)
> The concept of instancing is inappropriate to most situations.
At the application level, maybe. At the library level, this can easily kill reuse. See Lex/Yacc. They used to assume a program would only have one parser. Then some naive developer tried to parse both JSON and Lua tables in the same program. Oops.
Another example: standard libraries (including home grown ones). They instantiate everywhere: arrays, hash tables, file descriptors…
Sure, there are exceptions. The other side goes the same way: If it's not a static resource, don't make it a global...
> arrays, hash tables, file descriptors
Most arrays / hashtables in my own use are actually static resources as well...
File descriptors, same thing. They are (or correspond to) a process-wide resource managed by the OS. Most maintainable way is to manage them in a global pool.
There's no question you instantiate a substantial number of arrays and hash tables in your program. Maybe most of them are global variables, but you still spawn several instances of arrays and hash tables. You do not have a single giant array and a single giant hash table, you have several of them.
Same thing with parsers. Lex/Yacc used to allow only one parser of any kind in the whole program. But if you want to parse 2 languages, you need 2 parsers. Of course, you'd need only one parser per language, and perhaps those two parsers' state will be stored in global/static variables. You still instantiated 2 parsers.
Of course, I would use a global whenever appropriate. But I would try and limit the number of entry points to that global (for instance by limiting its scope, or putting a big fat comment about how we're supposed to use it), so I don't end up making implicit data dependencies spaghetti.
One should always avoid unnecessary dependencies. But I don't use special syntax or complicated scope rules to achieve that. This only makes the code more complex.
No -- I just don't depend. I don't reference the global where it's not needed. Simple :-)
Come on, please don't claim that you need to add more piece types independently and so absolutely need virtual functions. That's hilarious for a chess game (and most other applications), but even if it were needed, one could still pass function pointers in those few places.
Anyway: no, they design their program to be efficient and readable. There is more often than not an alternative to switch statements: don't mix data of different types together. (Where by types I don't mean crazy artificial types built with a modern language, but types with regard to the implementation of a functionality.)
If this is about drawing chess pieces, you need only a single draw function for all of them. Just maintain a table that maps a piece to its material (sprite, mesh, whatever). Or alternatively, have a table that maps pieces to "types" and another static one that maps those types to materials. (Much better: materialize the type as a simple enum. You can actually use that information, as opposed to overhyped static types.)
More general answer: The best way to do it is usually not a switch statement but a data table. Because usually the switching is not about behaviour, but about data. (Not that switch statements are so bad).
Code gets so, so simple, robust, and maintainable if developers are not preoccupied with static types, OO, and whatnot.
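A minimal sketch of that table-driven approach (the names and file paths are made up):

import java.util.EnumMap;
import java.util.Map;

enum PieceType { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING }

class TableDrivenRenderer {
    // Data table instead of a switch: piece type -> material/sprite.
    private static final Map<PieceType, String> MATERIALS = new EnumMap<>(PieceType.class);
    static {
        MATERIALS.put(PieceType.PAWN,   "pawn.png");
        MATERIALS.put(PieceType.KNIGHT, "knight.png");
        MATERIALS.put(PieceType.BISHOP, "bishop.png");
        MATERIALS.put(PieceType.ROOK,   "rook.png");
        MATERIALS.put(PieceType.QUEEN,  "queen.png");
        MATERIALS.put(PieceType.KING,   "king.png");
    }

    void draw(PieceType type, int file, int rank) {
        String material = MATERIALS.get(type);
        System.out.println(material + " at " + file + "," + rank); // stand-in for the real draw call
    }
}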
I'm not sure why that's apparent, but even if it is, is there any reason to believe that switch statements (or pattern matching, or look-up tables, or other related language constructs) couldn't handle any plausible requirement in this case?
My opinion is that this post reaches the same conclusion as I would, but I disagree with the logic behind it.
In a modern, state-of-the-art chess program that uses alpha-beta with a branching factor b and depth d, you will have at most d boards allocated, or even just 1 board. Neither of those figures approaches b^d. That makes board size largely irrelevant, except for the amount of memory copying needed, which for a d-board approach would be b^d copies (a single-board approach would only mutate that board).
EDIT: One major issue I just noticed with the article is that the two differing implementations are not apples-to-apples equivalent. One uses bitboards, based around manipulation of bits, and another one is akin to representing the board with an array.
The whole point of abstract data types is to separate implementation from interface in order to allow a change in representation.
The point is to allow programming things like the AI algorithm without caring at all about if the board is represented with a hash map, nested arrays, or a bitboard. In all cases, all the AI cares about is move generation, not internal representation.
Abstraction does not hinder ideas like the bitboard, it enables them. Both the interviewer and the interviewee are missing the point.
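Both points can be made concrete with a small sketch: an alpha-beta search written against an abstract board interface, which neither knows nor cares whether the board is a bitboard, and which mutates a single board with make/unmake so at most d boards ever exist. The type and method names here are placeholders, not any particular engine's API.

abstract class AlphaBetaSketch<Board, Move> {
    abstract int evaluate(Board board);
    abstract Iterable<Move> generateMoves(Board board);
    abstract void makeMove(Board board, Move m);
    abstract void unmakeMove(Board board, Move m);

    // Negamax-style alpha-beta that mutates one shared board and undoes each move,
    // so only one board object exists at any time, never b^d of them.
    int alphaBeta(Board board, int depth, int alpha, int beta) {
        if (depth == 0) return evaluate(board);
        for (Move m : generateMoves(board)) {
            makeMove(board, m);                                    // mutate in place
            int score = -alphaBeta(board, depth - 1, -beta, -alpha);
            unmakeMove(board, m);                                  // restore the same board
            if (score >= beta) return beta;                        // cutoff
            if (score > alpha) alpha = score;
        }
        return alpha;
    }
}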