
It's a style and design problem. Ruby is designed to make it easier on the developer when he's trying to do stuff. Part of that design is being able to read the code. If I went into a codebase and saw classes being used for functional constructs instead of repositories for holding state, I'd quietly close the project down and find another gem.

Ruby has a way to do immutability. It has methods to do common stuff like nil-checking if you pull in ActiveSupport, something I now do on all my projects. You just don't need to inflict such a heavy-handed approach for the meagre gains it will bring. If you find yourself implementing another language in the one you're using, then it's either the entire point, as in something like Opal, or there's no point at all and you should just go use that other language you're implementing.
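To ground that: core Ruby's `freeze` plus the safe-navigation operator already cover a lot of the immutability and nil-checking ground; the `blank?`/`presence` style helpers are ActiveSupport additions, mentioned here only in a comment rather than executed:

```ruby
# Immutability in core Ruby: Object#freeze.
COLORS = ["red", "green"].freeze

COLORS.frozen?     # => true
begin
  COLORS << "blue" # raises FrozenError
rescue FrozenError
  # mutation is rejected at runtime
end

# Nil-checking without ActiveSupport: the safe-navigation operator.
name = nil
name&.upcase       # => nil, instead of raising NoMethodError

# With ActiveSupport loaded, helpers like `nil.blank?`, `"".blank?`
# and `value.presence` cover the common nil/empty cases.
```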

It's not a question of performance, it's a question of interfacing and code style. The biggest bottleneck is programmer attention, and you're doing a real disservice to the next guy down the line if you mutate Ruby's conventions this way, especially if you don't have clean interfaces written.




Part of the problem is that one of Ruby's good sides is that most of the time, it does what you want without requiring most developers to delve into the darker corners of the language. Or indeed care much about performance.

While there are lots of productive, opinionated Ruby developers, the vast majority of us don't know much about the internals of the language.

Heck, for my part I'm working on a Ruby compiler (and have been writing about it for 7 years now... unfortunately I have far too little time to devote to it), and part of the challenge is that there's no proper language spec, and even most people working on MRI do not know the full semantics of the language.

My favourite example of this is that for a long time during the 1.8.x series, the inheritance of one of the core classes was different from what was documented - I wrote an article about the Ruby object model and was "corrected"; except my diagram had been created by instrumenting the interpreter to verify what actually happened. It was like that for years without anyone noticing (and to be fair, it was a dark corner of the language - very little code depends on the specifics of the inheritance of the eigenclasses of the core classes).
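For the curious, the eigenclass (singleton class) chain being described can be inspected directly; in modern CRuby it mirrors the class hierarchy, which is the behavior that differed under 1.8:

```ruby
# The eigenclass of a core class is itself a class...
String.singleton_class                       # => #<Class:String>

# ...and its superclass follows the ordinary hierarchy (1.9+ behavior):
String.singleton_class.superclass == Object.singleton_class  # => true

# The chain eventually reaches Class itself:
String.singleton_class.ancestors.include?(Class)             # => true
```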

Of those of us who love Ruby and are interested in compiler technology and have the motivation to slog it out to try to get working compilers - whether JIT or AOT - there are extremely few.

And most of that effort is going into JRuby, which is definitely seeing some impressive improvements (e.g. with Truffle) but which a lot of us avoid because of the JVM.

Ruby will get faster. And will get better implementations, but it may still take some time to have much impact.


Ruby shines because it is a terse/expressive language that doesn't need heavy use of patterns, so you can grok a lot of logic at a glance. You want a hash? Write it inline, you don't need a lazy-loaded memoized function.

The original was the easiest to understand, and the refactored versions didn't expose much that would benefit from testing or code reuse. This feels over-engineered to me.

And not to nitpick, but since when is class << self an ugly form? It makes perfect sense if all methods will be class methods.
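For readers who haven't seen the idiom, `class << self` opens the eigenclass so that everything inside is a class method, which is arguably tidier than prefixing `self.` on each definition (the `Config` name here is just an illustration):

```ruby
class Config
  class << self
    # Everything in this block is a class-level method on Config,
    # including these accessors.
    attr_accessor :setting

    def configured?
      !setting.nil?
    end
  end
end

Config.setting = 42
Config.configured?  # => true
```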


Slight counterpoint re: Ruby: I complain a lot about Ruby's, and the community's, tendency to monkey-patch everything. It's a dangerous practice that needs to be really well thought out. Just like operator overloading in C++, I've had both good and bad experiences with it. I don't think these features need to be removed from their languages, but I do think good, experienced developers should loudly and often proclaim the dangers of overuse.
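A small sketch of why the practice needs care: adding a new method to a core class is the relatively tame case, while redefining an existing one silently changes behavior for every caller in the process, including code inside gems (`shout` is an invented example method):

```ruby
# The tame case: adding a brand-new method to a core class.
class String
  def shout
    upcase + "!"
  end
end
"hi".shout      # => "HI!"

# The dangerous case: silently redefining behavior everyone relies on.
# Every Array#sum in the process, including inside gems, now sees this:
class Array
  def sum
    "oops"
  end
end
[1, 2, 3].sum   # => "oops", not 6
```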

I'm not a Ruby programmer, so maybe I'm misunderstanding the article -- are we talking about changing an object's class hierarchy after it's already been instantiated and then complaining that it's the performance aspect that is pathological? That sounds like a completely insane way of writing software to me.

And yet Ruby has a global interpreter lock and is an interpreted language. Ruby, I assume as a language, has had all the time in the world to develop well. Why don't we fully utilize our hardware while still allowing for high usability such as with Rust with algebraic data types or (if you don't want to deal with a borrow checker) recent C++ with auto pointers, or Haskell which is garbage collected but is still quite fast and of course highly expressive?

Everything in Ruby happens at runtime. Even the definition `class X` becomes a runtime `Class.new` invocation. It's imperative from the inside out. Even a JIT compiler can't solve the fundamental flaws permanently baked into the language. If you want performance, Ruby probably shouldn't be the first tool you reach for.
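That equivalence can be shown directly; the `class` keyword is roughly sugar for a constant assignment of a `Class.new` call, evaluated when the code runs:

```ruby
# Equivalent to `class X; def greet; "hi"; end; end`:
X = Class.new do
  def greet
    "hi"
  end
end

X.new.greet   # => "hi"

# Which is also why a class body can contain arbitrary runtime logic:
class Y
  include Comparable if ENV["USE_COMPARABLE"]
end
```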

I hate Ruby with a passion (it's also the language I mostly use in my day job currently).

It's not necessarily the language itself, which is a reasonable language, it's the patterns that are common in the ecosystem:

1. Use of inheritance for code sharing. It makes it extremely hard to reason about any piece of code I am looking at. Where does the `param` variable come from? Is it injected by any ancestors of the class I'm looking at? Who knows, I have to use the debugger to find out. I cannot reason about the code without it. Of course you can use Ruby while preferring composition over inheritance, it's just rarely done by the community. Other modern languages like Go and Rust wisely leave out inheritance for code sharing in their object model and arguably have much more readable (albeit verbose) code.

2. Global mutable state is everywhere. Not sure if it's Rails architecture that encouraged using global state as request local state (until they found out that this makes concurrency hard), but Ruby codebases are full of global mutable state. It's everywhere and again makes it hard to reason about dependencies between objects and how they interact with each other. Again, this is nothing that the language forces people to do, it's conventions.

3. Overuse of metaprogramming. Ruby's metaprogramming is really well done, in my opinion. The problem is that lots of people want to use it, and do so in places where it doesn't provide enough value to justify the costs. It's an authentication library; it doesn't need its own DSL.
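The readability complaint in point 1 can be made concrete; the class names here are invented for illustration. With inheritance the reader must walk the ancestor chain to find a definition, while composition names the dependency at the call site:

```ruby
# Inheritance for code sharing: where does `params` come from?
class Base
  def params
    { id: 1 }
  end
end

class UsersController < Base
  def show
    params[:id]   # defined... somewhere up the chain; grep to find out
  end
end

# Composition: the same dependency, passed in explicitly.
class UsersPresenter
  def initialize(params)
    @params = params
  end

  def show
    @params[:id]
  end
end

UsersController.new.show        # => 1
UsersPresenter.new(id: 2).show  # => 2
```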


I think the general concept of using immutability, queues, and isolating side effects, is lost and muddied by using Ruby, and it's a misuse of the term "functional."

Ruby used functionally is great, but it takes an already time-inefficient language and amplifies that weakness (lots of creating new collections and returning them, whereas normally in Ruby you would modify elements within the collection).
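The allocation difference being described is visible with `map` versus `map!`:

```ruby
nums = [1, 2, 3]

# Functional style: each call allocates and returns a new Array,
# leaving the receiver untouched.
doubled = nums.map { |n| n * 2 }   # => [2, 4, 6]

# Conventional Ruby often mutates the collection in place instead:
nums.map! { |n| n * 2 }            # nums itself is now [2, 4, 6]
```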

Ruby has types. Not everyone enjoys working with statically typed languages.

Refactoring tools are nice (I guess; I've honestly only used them in Java; I'm mostly an Emacs person), but my main gripes with Ruby maintenance are harder to fix with type annotations:

- In duck-typed languages you have to write a lot of tests to verify things that the compiler does for you in statically typed languages. That neuters much of the benefit of the concision of such languages. Crystal shoots for the best of both worlds. (Static type checking, but usually without explicit signatures.) My main refactoring tool in statically typed languages is the compiler: if I break something, it'll tell me.

- Monkey-patching, open classes, etc. I do it too. Pretty much every Ruby-ist does. But it makes it damn near impossible to track down bugs sometimes, because even finding out what file the relevant code is in isn't trivial. Again, Crystal seems to mostly side-step that pitfall.

- Speed. I don't even attempt to write fast code in Ruby (though I have written a few C++ extensions for Ruby in a pinch). But if I could get near-to systems-language level performance out of something that was almost Ruby, that'd be pretty amazeballs.
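On the "which file is the relevant code even in" problem, core Ruby's `Method#source_location` helps more than people expect (the `whisper` method here is an invented stand-in for a monkey patch):

```ruby
# A Ruby-defined method, e.g. a monkey patch somewhere in the codebase:
class String
  def whisper
    downcase
  end
end

# Ruby-defined methods report their defining file and line:
"x".method(:whisper).source_location  # => [<this file>, <line of def>]

# C-defined core methods return nil, a hint the definition is native:
"x".method(:upcase).source_location   # => nil
```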


[Chris has done some fantastic work on a Truffle / Graal backend for jruby; for my part I'm (slowly) working on a "mostly-ahead-of-time" Ruby compiler]

I'm not sure I'd agree with "never", though I do agree Ruby is a hard language to optimize.

There are two challenges with optimizing Ruby: What people do and don't know the potential cost of, and what people pretty much never do, but that the compiler / VM must be prepared for.

The former includes things like "accidentally" doing things that trigger lots of copying (e.g. String#+ vs String#<< - the former creates a new String object every time); the latter includes things like overriding Fixnum#+, which breaks all attempts at inlining maths.
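The String#+ vs String#<< difference is easy to observe via object identity:

```ruby
a = "x"
a_id = a.object_id
a += "y"               # String#+ allocates a brand-new String
a.object_id == a_id    # => false, new object every time

b = "x"
b_id = b.object_id
b << "y"               # String#<< appends in place
b.object_id == b_id    # => true, same object throughout
```

In a loop building a large string, the `+=` form allocates (and later garbage-collects) one throwaway String per iteration, which is exactly the accidental copying being described.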

The former is a matter of education, but much of the benefit is masked by today's slow VMs, which in many cases make it pointless to care about specific methods, and by an expectation not to think much about performance. (Incidentally, it's not that many years ago that C++ programmers tended to make the same mistakes en masse.)

The latter can be solved (there are other alternatives too), or at least substantially alleviated, in the general case by providing means of making a few small annotations in an app. E.g. we could provide a gem with methods that are no-ops for MRI but that signals to compilers that you guarantee not to ever do certain things, allowing many safeguards and fallbacks to be dropped.

Ruby's dynamic nature is an asset here in that there are many things where we can easily build a gem that provides assertions that on e.g. MRI throws an error if violated or turns into a no-op, but that on specific implementations "toggle" additional optimizations that will break if the assertions are not met. E.g. optional type assertions to help guide the optimizer for subsets of the app.
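A purely hypothetical sketch of what such a gem's surface might look like (no such gem exists as described; the `Guarantee` module name and its methods are invented for illustration): no-ops or runtime checks on MRI, optimization hints on an implementation that understands them.

```ruby
# Hypothetical assertion API. On MRI these are plain runtime checks;
# an optimizing implementation could treat them as guarantees and
# drop the corresponding safeguards.
module Guarantee
  # Promise that core numeric operators have not been patched.
  # On MRI: a no-op returning true. On an AOT/JIT compiler:
  # permission to inline integer maths.
  def self.frozen_core_ops!
    true
  end

  # Optional type assertion: enforced on MRI, an optimizer hint elsewhere.
  def self.type(value, expected)
    raise TypeError, "expected #{expected}" unless value.is_a?(expected)
    value
  end
end

Guarantee.frozen_core_ops!
n = Guarantee.type(42, Integer)  # => 42
```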

In other words: how optimized Ruby's implementations become depends entirely on whether people start trying to use Ruby for things where they badly need the performance.


1) I never said it wasn't a concern. In fact, I emphasized the important difference between can't and what translates to "shouldn't" in most circumstances.

2) I apologize if I was vague; I was trying to be concise. The nature of a "type" is not a fixed thing. Some languages are said to be strongly typed, some weakly typed; some are type-safe, others are not. If we were to draw a line with absolute typedness at one end and absolute untypedness at the other, then "sufficiently typed" would mean the lowest point along that line that still supports the goal. In this instance, the goal is refactoring. So you may say my argument sounds circular: "refactoring is easy when you use a language in which refactoring is easy." But the important point I am making is that Ruby's amount and implementation of typing is what facilitates its ease of refactoring. In particular, I have found its reflection features, its implementation of polymorphism ("duck typing"), and its hooks into the type system (look at Object, Module, Class and Method for more info) particularly helpful.

Also, because there is not too much typedness, we don't have to worry about sending data of a new or different type to a function, as long as it satisfies the assumptions the function makes about it (which can be inferred by a number of techniques). Delegator and Forwardable are also pretty nifty. One of the least appreciated tricks is calling .dup on an instance of Class (that is, an object that represents a class definition, not an instance of a class), but thankfully it is rarely necessary: you need it when you want to a) inherit from Foo and still b) have `super` refer to the implementation in Foo's parent class.
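The Forwardable mention deserves a concrete example; it's in the standard library and delegates named methods to a wrapped object, giving composition without inheritance (the `Playlist` class is an invented illustration):

```ruby
require "forwardable"

class Playlist
  extend Forwardable
  # Delegate these calls to the wrapped Array; no inheritance needed.
  def_delegators :@songs, :size, :each, :<<

  def initialize
    @songs = []
  end
end

pl = Playlist.new
pl << "Bohemian Rhapsody"
pl.size  # => 1
```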

3) I suppose the above might give some clues into my favorite debugging techniques. If you'd like me to go further in depth, contact me via email, my name @gmail.com I didn't think the details were salient to this discussion.

Edit: C++ makes some of the worst decisions w.r.t. supporting the notion of type, in syntax and in features. That being said, Bjarne is still one of my favorite authors, and learning C++ was a great entry into OO for me.


The maintainability criticism is nothing new, nor unique to Ruby; it has been leveled at every dynamically-typed language (e.g., Python, Perl) ever since they started to become popular for writing big apps. And it's still untrue.

It comes from prejudice and fear about the lack of compile-time checks in dynamic languages. I know this personally, because I remember starting out with Python, and Java and C++ felt very safe — on a visceral level — in comparison, and Python felt very unsafe. I have since learned to recognize these feelings as irrational and unfounded.

Any language allows you to shoot yourself in the foot, in different ways. It's just as easy to write an unmaintainable app in C++ as it is in Ruby.


I think you're onto what might be the biggest internal struggle that I've noticed since starting in Ruby in 2011: the tension between object-oriented and functional. In many ways the two are not in conflict, but occasionally they do clash, and that's where things get messy. As someone who loves the functional approach, it saddens me when object-oriented ways get in the way of a great functional feature or pattern, but at the end of the day Ruby is thoroughly an object-oriented system. IMHO it should err on the side of object-oriented. For people who really want functional, it's probably better to look at a language like Elixir that is inspired greatly by Ruby but functional from the start.

I think there is a great middle line for a language that is easy to work with and provides enough safety to be maintainable for a long time. Environments like C++ and enterprise Java miss this middle line by being too cumbersome. However, Ruby equally misses this middle line, just in the other direction: it's too simple and dynamic, which leads to a lack of maintainability in the long run. The best solution will be somewhere in between.

Until metaprogramming, autoloading, DSLs, and everything else that eventually makes Ruby code unreadable creep in due to personal preference.

Ruby is clever, it can be beautiful, but I've never seen a good codebase using it grow well without enforcing strictly opinionated ways of writing it to ensure maintainability. Which breaks a lot of the expectations of some Ruby developers that chose the language because they like to write it in their own way.


Exactly. I did want to express that, but forgot to mention that it's not a black-and-white issue. I mentioned the healing process as fascinating because the cells act as independent actors: when the wound happens, they first work against each other, but then they start cooperating to close the wound, to provide cover for the new tissue and to form it. We are far from building systems as smart as what the process of evolution could build.

Ruby is more dynamic than other languages, and in a bad sense. I always get a kick out of thinking about the purpose of the `class` keyword: it opens the class context, or creates the class as a side effect if it doesn't exist yet. Ruby is built for runtime mutation of types, which is good in certain contexts, but unfortunately you cannot scope those mutations, leading to the ultimate side-effecting hairball if you're not careful about both your own code and other people's libraries. It would be useful to be able to say "modify the String type" or "import this library", but only for this block of code.
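For what it's worth, refinements (Ruby 2.0+) were added to provide roughly this kind of scoped patching, though their lexical-scoping rules and various reflection caveats mean many teams still avoid them; a minimal sketch (`Shouting` and `shout` are invented names):

```ruby
module Shouting
  refine String do
    def shout
      upcase + "!"
    end
  end
end

# `using` activates the patch only for the rest of this file (or the
# enclosing class/module body), rather than process-wide:
using Shouting
"hi".shout   # => "HI!"
```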


You're right. I guess I'm just a huge sucker for the whole dynamic languages with a REPL thing. I mean, Ruby is just so goddamn neat. The stuff that takes me 2 lines in Ruby can take me 10-15 in Java or C, and the difference only gets bigger as the program gets larger.

Maybe I should try out static languages "done right" (type inference, etc.) before I give up on them.

