Although the library coverage for Go is growing, it's tiny compared to what's available in Python. Otherwise, I much prefer Go to Python for many of the reasons given by the author.
I've been toying with the idea of doing massive rewrites of Python libraries to Go using syntax-directed translation. Getting to somewhere like 98% fidelity is a very practical goal (I've done it and been paid for it!) and gives a tremendous boost to programmer productivity.
Not sure why the downvotes - perhaps the "wat" sounded too trollish - but yes, this strikes me as a very ignorant statement... even good old Java is another example. So are many compiled Lisps, Haskell, Lua...
Something which has been neglected, but which has clearly shown its value in VMs, is the information gained by runtime tracing.
We should make this more available to the programmer! It should be possible for a programmer to highlight a section of code in an IDE and choose "Apply Trace," whereupon the IDE will apply static type annotations based on saved runtime trace information. The IDE should also be able to present the same information as profiling data to let the programmer quickly home in on the 15% or so of the app which is most performance critical. Using techniques like this should let us get C-like speeds from many dynamic languages.
(Admittedly, this sort of thing would also be highly dangerous. One couldn't apply such semi-automated annotations in ignorance. This could also break otherwise correct programs.)
Directing programmers to hotspots based on profiling info is pretty reliable. In your case, it depends on how confident the annotations are. The Self papers have a lot about that - JIT compilers can nonchalantly make riskier optimizations, because they can also easily undo them.
My idea is that the programmers can add expert knowledge and human intelligence into the mix. They have the ability to just let the VM know: "Okay, in this part these things are always going to be this type and things go like that, so just go to town and optimize the bejesus out of it!"
In other words, a good programmer should be able to just let the VM know ahead of time what's up in critical sections of stable production code. Even better, we should be able to combine datasets from many different runs of many VM instances! This sort of technique would give us very high degrees of confidence for things like web apps in server farms, where gathering tens of thousands of runtime tracing datasets would be easy to do.
This is where a dynamic language with optional type annotations would really shine.
I'm not sure about Go specifically (as I have yet to write a non-trivial project in Go) but D at least allows you to do manual memory management, if you so desire--down to using std.c.stdlib.malloc, which is exactly what it sounds like--although garbage collection is on by default. I usually tell people it's "...mostly garbage-collected," which tries to convey the flavor of a choice between fine-grained systems-level control and instantiate-and-forget application-level control.
To be fair to the author, I'm going to assume that the most esoteric (read: non-C-like) language he knows is probably Python, because he doesn't compare it to anything besides Python, C, C++, or Java. I also assume he left out Java because by "compiled languages" he meant "compiled to native code without relying on a runtime which is external to the compiled application." He's still missing a whole swath of compiled languages, but there is a point of view from which his comment makes sense, i.e. native-code compilation of C-like systems or applications languages.
("...you will find that many of the truths we cling to depend greatly on our own point of view.")
There. Manual memory management. What? It's not as if when you free() something that was malloc()'ed, it gets returned to the operating system right away. It just goes back on the free list. Garbage collection amortizes that.
as manual memory management, after a fashion (which it is). The kind of manual memory management I'm concerned about is less about having precise control over when memory passes in and out of my program's control, and more about controlling fragmentation and time spent searching for unused memory. For example, would slab allocation--an extremely useful technique for operating systems, databases, &c--be possible in Go?
Addendum: I'm not trying to denigrate Go, and I don't mean to suggest having or not having fine control over memory makes or breaks a programming language, because it's a valuable tool in certain instances and a hindrance in others. I honestly don't know whether or not it's possible in Go; I suspect it is, to some extent, although it would no doubt be discouraged by the design of the language.
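For what it's worth, here's a minimal sketch (my own; all names hypothetical) of a slab-style free list in Go. It only reduces GC pressure and fragmentation for one object size - it doesn't give the placement control a kernel slab allocator has:

    package main

    import "fmt"

    // Buf is the fixed-size object we want to slab-allocate.
    type Buf [4096]byte

    // Slab grabs one big backing array up front and hands out
    // pointers into it via a free list, so the GC sees a single
    // allocation instead of thousands of small ones.
    type Slab struct {
        store []Buf  // backing array, allocated once
        free  []*Buf // free list
    }

    func NewSlab(n int) *Slab {
        s := &Slab{store: make([]Buf, n)}
        for i := range s.store {
            s.free = append(s.free, &s.store[i])
        }
        return s
    }

    func (s *Slab) Get() *Buf {
        if len(s.free) == 0 {
            return nil // slab exhausted; caller decides what to do
        }
        b := s.free[len(s.free)-1]
        s.free = s.free[:len(s.free)-1]
        return b
    }

    func (s *Slab) Put(b *Buf) { s.free = append(s.free, b) }

    func main() {
        s := NewSlab(128)
        b := s.Get()
        defer s.Put(b)
        fmt.Println(len(b)) // 4096
    }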
"What", exactly. If Go really deleted the object pointed to by x when you set x to nil, it would be seriously flawed (though I wouldn't actually be surprised if this was the case, it certainly is flawed in many other ways).
From the FAQ: " Although Go has types and methods and allows an object-oriented style of programming, there is no type hierarchy. The concept of “interface” in Go provides a different approach that we believe is easy to use and in some ways more general. There are also ways to embed types in other types to provide something analogous—but not identical—to subclassing. Moreover, methods in Go are more general than in C++ or Java: they can be defined for any sort of data, not just structs."
http://golang.org/doc/go_faq.html#Is_Go_an_object-oriented_l...
The lack of type hierarchy makes code easier to understand.
Interfaces are explicitly defined in Go as they are in Java. The difference is that you do not declare the interfaces implemented by a type. If a type has methods that match an interface, then the type implicitly implements the interface.
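A tiny illustration (type names are mine, not from the spec): Point never declares that it implements Stringer, but it satisfies the interface just by having the right method:

    package main

    import "fmt"

    type Stringer interface {
        String() string
    }

    type Point struct{ X, Y int }

    // Point never mentions Stringer; the matching method set
    // is all that's needed for it to satisfy the interface.
    func (p Point) String() string {
        return fmt.Sprintf("(%d, %d)", p.X, p.Y)
    }

    func describe(s Stringer) {
        fmt.Println(s.String())
    }

    func main() {
        describe(Point{1, 2}) // prints (1, 2)
    }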
Having never written Go, I suspect it means that Go isn't prone to code that requires you to read eleven different files in their entirety just to determine which parts of a method will be polymorphically dispatched to from a given line - code can be reasoned about locally.
The only thing I could see myself leaving Python for is Haskell.
Go seems to fall into a very small niche. The inclusion of a GC means it is not appropriate for a lot of tasks that C/C++ are suited for. Also, it is verbose enough that you can't bang out stuff as fast as in Python/Ruby.
It's really a big deal for Haskell to sit in that position now instead of "that weird language with Spacesuitmonads". I hear the sentiment a lot and every time I'm somewhat surprised and happy to realize it.
I find that I can write working code with Go as fast as I can with Python. Although I need to type more with Go, the static typing helps me iterate faster to working code. With the exception of the library problem that I mention in another comment, Go can cover much of the space covered by Python.
Go, today, is a niche language by usage but is not niche by features.
It's designed as a server-oriented (i.e. aimed at a still growing programming category) language that is close in syntax to C/Java (i.e. already familiar to many), almost as easy to write in as Python/Ruby (both very successful, non-niche languages) but an order of magnitude faster and much more memory efficient than either.
As a bonus it has much better concurrency support.
It's designed for the same "niche" that Python/Ruby/Java serves on the server, with many important improvements over them.
Having written code in Python and Go, Go seems to me an improvement over Python in many important areas (speed, memory efficiency, concurrency), and the parts that are not as good are both less important and not significantly worse. Static vs. dynamic typing is a toss-up (it's nice not to declare types, but it's also nice when the compiler catches a type-mismatch typo that Python will only complain about at runtime), Python still has slightly cleaner syntax, etc.
I didn't experience Go being more verbose than Python to a degree that it mattered. In some respects the syntax is actually less verbose (Python classes require more typing than Go interfaces).
Go is still a young language and young implementation. On one hand it means it's anyone's guess whether it'll become non-niche at some point but at the same time I'm pretty sure at year one both Python and Ruby were much less polished and much less popular than Go is at the same stage.
Personally, I'm bullish on Go and if I were writing server side code, I would use Go (even though at the moment I know Python better).
>As a bonus much better aligned with today's multi-core reality than Python or Ruby.
Not substantially. It made the concurrency decisions for you ahead of time, and if you need a different concurrency model, you're up shit creek.
> but an order of magnitude faster and much more memory efficient than Python or Ruby.
So is my Radio Flyer Wagon. You still can't do systems programming, embedded, high performance, or real-time work with it.
I know too much about C and the constraints it works well in to believe Go is anything but awkwardly crammed between two realms of programming.
A language that forces its own GC upon you (note that you can have GC in C/C++, you merely have to choose one), and its own concurrency model upon you without giving you at least a couple of choices, is not a well-designed language.
Even Clojure gives you a couple ways to approach concurrency, and it doesn't even make the same claims as Go has.
I don't know a single person who's done substantial systems programming who takes Go seriously in terms of their field.
> Not substantially. It made the concurrency decisions for you ahead of time, and if you need a different concurrency model, you're up shit creek.
That's not true. I'm not sure what concurrency models you feel you need, but let's go through the most popular ones. There's event-based programming, which is essentially single-threaded and predicated on structuring your code around a poll/kqueue/whatever loop. You can do that in Go, since you can call any OS syscall from Go; it just leads to awkward code.
There's shared-memory multithreading with locks to protect data structures. You can do that in Go (except you use goroutines, which are multiplexed onto OS threads by the Go scheduler, instead of using OS threads directly as in C).
And then there's Go's preferred (but not exclusive) solution of channels/goroutines and the idea of sharing memory by communicating (as opposed to communicating by sharing memory, as in the threads/locks model).
What concurrency models are available in C or Python or Java that you can't do in Go?
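To make the last two concrete, here's a minimal sketch (mine, not from the parent) of the same counter written first with a lock and then with a channel-owning goroutine:

    package main

    import (
        "fmt"
        "sync"
    )

    // Model 1: shared memory plus a lock.
    func lockedCounter() int {
        var mu sync.Mutex
        var wg sync.WaitGroup
        n := 0
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                mu.Lock()
                n++
                mu.Unlock()
            }()
        }
        wg.Wait()
        return n
    }

    // Model 2: share memory by communicating - the counter state
    // lives in a single goroutine and is updated via a channel.
    func channelCounter() int {
        incr := make(chan struct{})
        done := make(chan int)
        go func() {
            n := 0
            for range incr {
                n++
            }
            done <- n
        }()
        for i := 0; i < 100; i++ {
            incr <- struct{}{}
        }
        close(incr)
        return <-done
    }

    func main() {
        fmt.Println(lockedCounter(), channelCounter()) // 100 100
    }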
> So is my Radio Flyer Wagon. You still can't do systems programming, embedded, high performance, or real-time work with it.
As a rebuttal (?), it doesn't follow. As to your point, your definition of systems programming is different from the one used by Go's designers, who are very insistent on calling Go a systems programming language. Please clarify: what kind of systems programming can you do in C/Java that you can't do in Go?
As to high performance: I don't follow. Go is fast. What kind of high-performance programming can't you do in Go that you can do in C or Java?
As to embedded - sure, that niche is owned by C. But how is Go different from Python or Java there, and how does that make Go a niche language?
The same goes for real-time, where real-time systems are even more niche and are as much a property of the OS as of the language. You can't do real-time in any language on a stock Linux kernel, given that the kernel can preempt any application at any time for any period of time.
You make a lot of statements but nothing concrete enough to support them.
I've shown that in Go you can use 3 concurrency models, 2 of which are currently most popular in the C/Java worlds.
As to systems programming - you don't provide a definition or examples of what kinds of programs you consider systems programming, so it's hard to argue at that level.
Does a web server qualify? (You can write one in Go; see the sketch below.)
Would a distributed database like HBase qualify? Even though one hasn't been written, HBase is written in Java, and there's nothing you can write in Java that you can't write in Go, with potentially better performance due to compilation to native code and more efficient memory usage.
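On the web-server point, a minimal sketch using the standard net/http package (the handler text and port are my own placeholders):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // net/http serves each incoming request in its own goroutine.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "hello from Go\n")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }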
Goroutines sound a lot like private queues in Apple's GCD. Their documentation is quite clear that in performance-critical code ... surprise ... they might be a bad fit.
> Does a web server qualify? (You can write one in Go.)
Being able to write something in Go is beside the point. Go being Turing complete and having the essential faculties and libraries for a basic subset of tasks means that, yes, you can write a web server in Go.
Doesn't mean you should.
In any realm where real-time is a concern, where performance is the utmost concern, or where you need real control over the hardware, Go is inappropriate.
It's also inappropriate for Rapid Dev.
>I've shown that in Go you can use 3 concurrency models, 2 of which are currently most popular in the C/Java worlds.
No. C and Java aren't the same world, and your idea of popular is absurd.
I admired Pike and Ken when I was younger, but I can't speak to why they came up with Go and I strongly suspect its design wasn't actually their first choice.
You overestimate the power they have in a company as highly political as Google.
It's possible to critique things based on design, just as much as you can critique a project based on architecture if you know the goals/objectives it aims for.
That's the basis upon which I choose to dismiss Go for a selection of uses unless they make some fundamental changes.
Python -> Haskell seems like a serious leap. Even to the point of not seeming all that realistic. Care to elaborate on why you think that's a natural progression for you?
I like Python, but FP is a second-rate style there.
Well, Python does give you a little taste of the FP world with things like list comprehensions.
Haskell has very good concurrency solutions like sparks (async), MVars and forkIO (threaded), and data parallelism. Writing a program for 128 cores doesn't look very different from writing a program for 2 cores. I think this will become very important sometime in the next decade as computers come with more and more cores.
I left Python for Lua years ago, haven't looked back. Even if you don't care about tail-call optimization, etc., LuaJIT (http://shootout.alioth.debian.org/u32/benchmark.php?test=all...) is pretty compelling. (Also, OCaml, though I use Lua more day-to-day.)
LuaJIT has a memory limit of 4GB. When it is compiled for x64, it has a limit of 1GB. You cannot use Lua Lanes to bypass this; the Lua states must be in different processes.
Lua has try/catch-like functionality, it's just called pcall ("protected call"), or xpcall (http://www.lua.org/manual/5.1/manual.html#pdf-xpcall) if you want to run a callback before the stack unwinds. Running debug.debug inside xpcall's handler will drop into a shell on error, which is particularly handy.
error("error tag") is comparable to raising an exception.
I wasn't aware of LuaJIT's 1GB limit, but it's neither hard nor unusual to run several Lua states in multiple worker processes.
I love Lua (almost as much as I enjoy Python and Erlang), but how do you cope with the lack of libraries? Since last year I've been trying to use Lua on personal projects, but I find the generally poor documentation and lack of libraries very annoying (and unproductive).
I haven't found libraries lacking, but we probably work on different things, and I don't have any problem with writing quick wrappers for existing C libraries. There are a fair number of Lua libs that are insufficiently documented or tested, or have bit-rotted (esp. due to changes in Lua itself), but the situation is getting better.
Having picked up Python as my first programming language earlier this year, I am always really surprised to hear that so many programmers hate the indentation.
I don't ever have indentation errors - or if I do, they're never of a sort that PyScripter can't spot.
"So the danger to paste stuff at the wrong indent level is far less than in Python. "
I would argue that this is actually a feature of the language. This "danger" actually forces the programmer to read through the code he is pasting by indenting.
Also, how many pairs of source code fragments share the same variable names? This argument is quite shallow.
I use Emacs, and I can't remember the last time I manually indented a line of code. Most of my non-web programming is in Erlang, Lisp, and C. My editor does the indentation for me.
This "danger" actually forces the programmer to read through the code he is pasting by indenting.
Read through, or partly rewrite? More of the latter than the former, IMO. If you're only reading then 4 spaces is often indistinguishable from a tab.
Sounds like the "block indenting" style in Smalltalk. Kent Beck favored it because it made bad Smalltalk code look bad. (Hard to read.) I like the idea, but in reality, it's too idealistic and not nearly practical enough.
I built a fairly large app with Python years ago. I was neither ecstatic with it nor did I hate it. I didn't love or hate the whitespace thing then; I managed everything so it just worked.
A couple of years ago, however, I tried to help a friend trace an error in code on an active server. It came down to rewriting the code so that it looked exactly the same as before but somehow had the right whitespace instead of the wrong whitespace.
That experience left me with the strong belief that Python's whitespace approach is totally wrong and broken. Things that break in ways that are difficult to understand I can accept. Things that break in ways that are impossible to even see are unacceptable.
Yes, the whitespace thing can always be fixed - except on the random server where it desperately needs fixing. Systems where meaningful symbols are invisible are broken - it's not that a given problem with such a symbol will appear often, it's how extreme the problem is when it does appear.
I have programmed in Python for almost 5 years, on medium-large (several hundred kloc) projects, and this has never been an issue. If you mix tabs and spaces you can have issues, but that's nothing that any decent text editor can't fix/highlight.
I've never experienced anything like this - nor do I really understand what you mean (and nor could I without actually seeing some code).
Given this, it's impossible for me to understand your claim that the whitespace approach is 'totally wrong and broken'. The leap from a vague example (with a sample size of one) to an all-encompassing abstraction is too great.
Having said that, I'm keen to ensure that I don't pursue the wrong path. I picked python because many people I respected said it was a good 'jam for beginners'. But that can't be true if it's 'totally wrong and broken'.
Care to pony up with a proper account of your position?
It was probably a tabs vs spaces issue, leading to an oddity of a block of code at the wrong depth.
The fix is to simply accept only tabs OR spaces as valid whitespace... then again, this is one of the things that annoys me about make, but I digress.
But all the fixes depend on completely controlling the environment and the code production process. They work, but they add a need for "global control" that is a bit more extreme than in other languages. This qualifies as a "weird fragility" in my mind. Considering how many other levels of any given system might have weird fragilities, why be willing to accept a language that guarantees you'll need to deal with such fragility by its very structure?
I'm sure I overstated the problem when I first stated it ... but whenever I remember the situation, I feel the terribleness of it, so I don't think it's unimportant.
I meant to suggest that the fix be made in the language. Changing your process to accommodate a misfeature is not a fix, it is a work-around. We agree that this is a case where computer-enforced order is more important than programmer freedom, though you could create human processes to work around the danger that such freedom affords you.
I'm starting to get to the point of just wanting sexps everywhere...
I think it's a corner case.* While I don't personally care for Python anymore, I still think it's a good beginner language.
* And after debugging production issues with proprietary compilers that had almost perfectly reliable hot-code loading...that's pretty minor. Not mixing up tabs and spaces probably covers it.
I don't think the indentation is a big deal. It's just a surface detail that people new to the language tend to bitch about, like all of those parens in Lisp, ; vs . in Erlang, indexing from 1 in Lua, + vs +. in OCaml, etc. Once you're using the language idiomatically, it isn't a big deal. Good semantics beat good syntax any day.
The indentation issue is annoying when you use separate editors to edit the code and they have different indentation standards (2-space tabs, 4-space tabs, tab-stop tabs, etc.). It's also annoying when you copy/paste code from another source (a website, let's say) or download some other library, start editing it with your editor without realizing the tabs are different, and then hit an indentation error when trying to run it. "Fix tabs" doesn't always work, and it's not always evident how bad it is.
I don't see where he got the impression that D is not actively maintained. See http://www.digitalmars.com/d/2.0/changelog.html, which is the record of frequent, active and extensive updates.
"- It is compiled into machine code (no interpreter, unlike Python)."
Anyone have any kind of measure of how bytecode compares to machine code these days? (Most "interpreted" languages have a hidden compile step these days.) I understand that the answer depends largely on the runtime and the task being executed.
Your question is relatively meaningless, since 'bytecode' covers such a wide gamut of possibilities.
Python's bytecode is very high-level and basically just saves the expense of parsing the code. The interpreter is not JIT-ed and thus very slow (compared to machine code).
Java's bytecode is much less dynamic. 'a + b' can execute arbitrary code in Python, but in Java it is known at compile time whether it is addition between native types or string concatenation. Java has a very good JIT compiler and can come within 1.5x of the speed of C.
Yeah, Python is bytecode-compiled and run in a virtual machine. Hell, you can even JIT it into machine code (Psyco, Jython on a JIT-enabled JVM, IronPython on a JIT-enabled CLR). Python never had a traditional interpreter.
There were tons of people walking around 10 years ago thinking Smalltalk ran on "traditional" bytecode interpreters, when the majority of VM instances running back then were JIT VMs. There was also widespread ignorance about Lisp implementation and performance, and much the same holds true today. The sad part: the degree of misconception is still surprisingly bad.
The Interpreter/VM distinction had become hazy and lost its meaning years ago. I think people still hold onto it only as a means of excusing poor language performance.
Speed? http://shootout.alioth.debian.org/ is an up-to-date, super-detailed answer to that question. A rule-of-thumb simplification I go by is: current Python and Ruby implementations are at least 10x slower than C at algorithmic work.
A really good bytecode VM can, of course, get much closer to C performance (as shown by Java, C# or even modern JavaScript implementations) especially if JIT is given enough time to profile the app at runtime and generate highly optimized code based on that profile data.
There are of course other aspects you can compare. Bytecode is inherently cross-platform; machine code isn't.
Bytecode is usually more compact than equivalent machine code (but then you need the constant overhead of the runtime to interpret that bytecode).
Python is my main language, and I think you underestimate the speed difference for the same implementation by an order of magnitude. That is, CPU-bound tasks will most likely be around 100x slower.
The real argument, of course, is that in a given time frame, with people of similar skills, you will not get the same implementation unless your team contains only Vulcans. Several people in the scipy community have reported moving from C++ to numpy/scipy and getting faster at the same time - because C++ is so hard to use correctly, people whose job is not even programming ended up writing something very fast but running it a million times more often than necessary, because they didn't understand their own code.
This point is surprisingly poorly understood by a majority of programmers. Most of the time, you see benchmarks for some trivial or even non-trivial algorithm, well specified, and conclude "look, this language is N times faster". But in my experience, this almost never happens in real life - the specification keeps changing, and you constantly need to redesign what you're doing.
I think 10x understates the difference. Python (I'm not familiar with Ruby) eats memory, so a lot of the number crunching I do turns into a swap disaster. You end up with Python using 10% of the processor, waiting around for disk I/O.
It's a very poetic sentiment but doesn't really address the issue of trade-offs.
Is writing if a { .. } else { .. } so much worse than if a then .. else .. end? (Ruby uses then/end rather than braces.)
Python's indentation-based syntax (which I think is great) cleverly solves the problem of requiring explicit statement delimiters, but it's also the reason lambda: is limited to one-liners. There's no free lunch.
To Go's credit, they did put a lot of effort and design thought into removing line noise from the syntax (compared to C), e.g. by adding an automatic semicolon insertion rule, eliminating the parentheses around if conditions, etc.
Overall, while syntax is not as clean as Python's, it comes pretty damn close.
(which prints 6). Looks like a major problem to me, and at any rate it's a far cry from having made real (or any) progress. Python _did_ get whitespace right.
Python's approach has always been highly pragmatic. Go's is as well, but with somewhat different goals. The Go team's goal seems to be putting together a clean-enough syntax, performance that's fast enough for systems programming, and a powerful-enough high-level concurrency model. They don't want to win the gold in any one category; they're going for the decathlon!
In any case, language syntax should always take into account the expectation of programmer communities. If you push syntax too far, you lose too many people.
The point of my example was simple - Go's gross mishandling of whitespace is a permanent trap for the unwary and a huge liability for people coming from other languages.
When D was nascent, many people proposed various schemes for making semicolons optional. I think it's great D didn't fall for such.
Only a fallacy if it's an argument for something. It's actually a criticism of your criticism. Oh hey, you tried to spin my comment with a different meaning. I guess you're guilty of an entirely different fallacy.
I don't really follow what in your example shows "gross mishandling of whitespace".
The most popular languages are C, C++, Java and C# and Go's syntax is very similar to those. If anything, Python is the weird one in how it treats the whitespace and hence most likely to confuse people coming from other languages.
As to making semicolons optional - what is the problem with that? Eliding them makes the code cleaner looking, and gofmt will remove them for you even if you type them out of C/Java reflex. I don't see the downside of making them optional.
Please make sure you give the example a close read, as it surprises even experienced Go programmers. Both examples look similar to code in the likes of C, C++, Java, and so on, _except_ that the semantics changes radically depending on one line feed, which is very unlike those languages. I hope this clarifies things.
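(The snippet under discussion isn't reproduced in this thread. As a stand-in of my own, here's the simplest way a single line feed interacts with Go's automatic semicolon insertion; in this particular case the compiler rejects it rather than silently changing the meaning:)

    package main

    import "fmt"

    func main() {
        x := 1
        if x > 0 { // fine: the brace shares the line, so no semicolon is inserted
            fmt.Println("positive")
        }
        // Moving the brace down does not compile: a semicolon is
        // automatically inserted after "x > 0" at the line break,
        // leaving the if without a body.
        //
        // if x > 0
        // {
        //     fmt.Println("positive")
        // }
    }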
I wrote a lot of C/C++ code, and my implicit assumption is that C syntax is serviceable. Since I don't see C syntax (or JavaScript syntax or Java syntax) as a major problem, and Go is an improvement over those, I don't see Go syntax as a major problem.
I agree with you that Python's syntax is cleaner, but I would like to bring up a few novel syntax-related things Go did that are not as commonly known and in some cases are improvements even over Python.
In general I don't like deep indentation, and Go has a few things to help keep indentation to a minimum.
1. defer statement
In Go you can say:
    f, err := os.Open("myfile.dat") // Open also returns an error; check err before using f
    defer f.Close()
That guarantees that f.Close() will be called at function exit. In Python/Java/C#, to get the same guarantee you would have to use a try/finally block, introducing an additional level of indentation for the code using f.
Several of the things he cites are a matter of preference (and of using the right tool for the job) - namely dynamic vs. static typing, compiled vs. interpreted.
> The documentation is very good and can be consulted instantly from the command line or from a browser, indifferently.
I can't speak for Go, but I've always found Python's documentation to be fantastic. And very few things make me as happy as docstrings.