Why Is JRuby Slow?

14 minute read


Adam Gordon Bell

Not too long ago, I made some contributions to the continuous integration process for Jekyll. Jekyll is a static site generator created by GitHub and written in Ruby, and it uses Earthly and GitHub Actions to test that it works with Ruby 2.5, 2.7, 3.0, and JRuby.

The build times looked like this:

2.5 8m 31s
2.7 8m 33s
3.0 7m 47s
JRuby 45m 16s
A Representative Jekyll CI Build

The Jekyll CI does a lot of things that a simple Jekyll site build would not, but clearly JRuby was slowing the whole process down by a significant amount, and this surprised me: wasn't the whole point of using JRuby, and its newer sibling TruffleRuby, speed? Why was JRuby so slow?

Even building this blog and using all the techniques I've found, the performance of Ruby on the JVM still looks like this:

MRI Ruby 2.7.0 2.64 seconds
Fastest Ruby on JVM build 25.7 seconds
Building this blog is 10x slower

So why is JRuby slow in these examples? It turns out that the answer is complicated.

What’s JRuby?

I was very excited to discover the JRuby project: my favorite programming language running on what's probably the best virtual machine on the planet. – Peter Lind

JRuby is an alternative Ruby interpreter that runs on the Java Virtual Machine (JVM). MRI Ruby, sometimes called CRuby, is written in C and is the standard interpreter and runtime for Ruby.

Install JRuby

On my MacBook, I can switch from MRI Ruby to JRuby like this.

Install rbenv:

brew install rbenv

List possible install options:

rbenv install -l     


rbenv install jruby-

Set a specific project to use JRuby:

rbenv local jruby-

Why do people use JRuby?

OMG #JRuby + java.util.concurrent FTW! Doing a recursive backtrace through billion+, I've made it 30,000x faster than 1.9.3. 30 THOUSAND.

— /dave/null (@bokmann) September 21, 2013

There are several reasons people might want JRuby, several of which have to do with performance.

Getting past the GIL

MRI Ruby, much like Python, has a global interpreter lock. This means that even though you can have many threads in a single Ruby process, only one will ever be running at a time. If you look at many of the benchmark shootout results, parallel multi-core solutions dominate. JRuby lets you sidestep the GIL as a bottleneck, at the cost of having to worry about writing thread-safe code.

Library Access and Environment Access

A common driver for JRuby usage is the need for a Java-based library or the need to target the JVM. You might be trying to write an Android app or a Swing app using JRuby, or perhaps you already have an existing Ruby codebase but want it to run on the JVM. My 2 cents is that if you're starting from scratch and need to target the JVM, JRuby shouldn't be the first option you consider. If you do want JRuby, be warned that you will need a good grasp of Java, the JVM, and Ruby: if you're coming to the JVM for Java libraries and performance, then JRuby won't save you from having to read Java.

Long-Running Process Performance

MRI Ruby is known to be slow compared to the JVM and even Node.js. According to The Computer Language Benchmarks Game, it's often 5-10x slower than a comparable Java solution. Small performance benchmarks are not always the best way to evaluate real-world performance, but one place the JVM is known to perform very well compared to interpreted languages is in long-running server applications, where adaptive optimizations can make a huge difference.

Why Is My JRuby Program Slow?

The JVM may be fast at running Java in benchmark games, but that doesn't necessarily carry over to JRuby. The JVM makes different performance trade-offs than MRI Ruby. Notably, an untuned JVM process has a slow start-up time, and with JRuby this can get even worse, as a lot of standard library code is loaded at start-up. The JVM starts out as a bytecode interpreter and compiles "hot" code as it goes, but in a large Ruby project, with lots of gems, the overhead of JITing all the Ruby code to bytecode can lead to a considerably slower start-up time.

If you are using JRuby at the command line, or starting many short-lived JRuby processes, then it is likely that JRuby will be slower than MRI Ruby. However, the JVM is widely tunable, and it's possible to tune things to behave more like standard Ruby. If you want your JRuby to behave more like MRI Ruby, you almost certainly want to pass the --dev flag, like this:



jruby --dev file.rb

In my Jekyll use case, this change and some other small JVM parameter tweaking made a sizable difference. I was able to get the build time down from 45m 16s to 24m 1s.

JRuby 45m 16s
JRuby --dev 24m 1s
--dev gets us closer to MRI Ruby

Inside --dev

The --dev flag indicates to JRuby that you're running it as a developer and would prefer fast startup time over absolute performance. JRuby, in turn, tells the JVM to only do a single-level JIT (-J-XX:TieredStopAtLevel=1) and not to worry about verifying the bytecode (-J-Xverify:none). More details on the flag can be found here.

Why Is My JRuby Program Broken?

Ruby's built-in types were built with the GIL in mind and are not thread-safe on the JVM. If you move to JRuby to sidestep the GIL, assume that you may be introducing threading bugs. If you get unexpected or non-deterministic results from your concurrent array usage, you should look at concurrent data structures for the JVM, like ConcurrentHashMap or ConcurrentSkipListMap. You may find that they not only fix the threading issues but can also be orders of magnitude faster than the idiomatic Ruby approach. Jekyll is not multi-threaded, however, so this is not a problem I needed to worry about.

What’s TruffleRuby?

GraalVM is a JVM with different goals than the standard Java virtual machine.

According to Wikipedia, these goals are:

  • To improve the performance of Java virtual machine-based languages to match the performance of native languages.
  • To reduce the start-up time of JVM-based applications by compiling them ahead-of-time with GraalVM Native Image technology.
  • To allow freeform mixing of code from any programming language in a single program.

Increased performance and better start-up time sound exactly like what we need to improve on JRuby, and this fact didn't go unnoticed: TruffleRuby is a fork of JRuby that runs on GraalVM. Because GraalVM supports both ahead-of-time compilation and JIT, it's possible to optimize either for peak performance of a long-running service or for start-up time, which is useful for shorter-running command-line apps like Jekyll.

TruffleRuby explains the trade-offs of AOT vs. JIT like this:

                                        Native (AOT)                     JVM (JIT)
Time to start TruffleRuby               about as fast as MRI start-up    slower
Time to reach peak performance          faster                           slower
Peak performance (also considering GC)  good                             best
Java host interoperability              needs reflection configuration   just works

Install TruffleRuby

Install rbenv:

brew install rbenv

List possible install options:

rbenv install -l     


rbenv install truffleruby+graalvm-21.0.0  
rbenv local truffleruby+graalvm-21.0.0  
ruby --version
truffleruby 21.0.0, like ruby 2.7.2, GraalVM CE Native [x86_64-darwin]

Set the mode to --native


Performance Shoot-out

TruffleRuby is significantly better in CPU-heavy performance tests than JRuby, whose performance is in turn significantly better than MRI Ruby's. PragTob has a great breakdown:

Performance of JVM Ruby runtimes in small tests looks good but is it too good to be true?

However, in my testing with Jekyll and the Jekyll CI pipeline, JRuby and TruffleRuby are significantly slower than MRI Ruby. How can this be?

I think there are two reasons for this:

  1. Real-world projects like Jekyll involve far more code, and JITing that code has a high start-up cost.
  2. Real-world code like Jekyll or Rails is optimized for MRI Ruby, and many of those optimizations don't help, or actively hinder, the JVM.

Failure Of Fork

The most obvious place you see this difference is in multi-process Ruby programs. The GIL is not a problem across processes, and the relatively fast start time of MRI Ruby is an advantage when forking a new process. JVM programs, on the other hand, are usually written in a multi-threaded style, where code only has to be JIT'd once and the start-up cost is shared across threads. And really, if you ignore language shoot-out games, where everything is a single process, and instead compare an MRI multi-process approach to a TruffleRuby multi-threaded approach, many benefits of the JVM seem to disappear.

Multi-process MRI Ruby is close in performance to multi-threaded TruffleRuby

This chart comes from Benoit Daloze1, the TruffleRuby lead. The benchmark in question is a long-running server-side application using a minimal web framework. It is in the sweet spot of Graal and TruffleRuby, with little code to JIT and plenty of time to make up for a slow start. And even so, MRI Ruby does well.

Which brings me back to my original question: why is JRuby slow for Jekyll? I don't see comparable times but considerably slower ones. Jekyll is not forking processes, so that's not the problem. Hugo, the static site builder written in Go, is significantly faster than Jekyll. So we know that Jekyll is not at the limits of the hardware, where there is simply no more performance to squeeze out.

Testing With rbspy

To dig into this, let's take a look at a flame graph of the Jekyll build for this blog, using rbspy:

MRI Ruby 2.7.0 2.64 seconds
TruffleRuby --dev 25.7 seconds
This blog is 10x slower to build on TruffleRuby

Jekyll Test 1

sudo RUBYOPT='-W0' rbspy record -- bundle exec jekyll build --profile
A flame graph shows most time is file access

What we see is that 50% of the wall time was spent writing files:

    # Write static files, pages, and posts.
    # Returns nothing.
    def write
      each_site_file do |item|
        item.write(dest) if regenerator.regenerate?(item)
      end
      Jekyll::Hooks.trigger :site, :post_write, self
    end

And 16% of the time was spent reading files.

    # Read Site data from disk and load it into internal data structures.
    # Returns nothing.
    def read
      Jekyll::Hooks.trigger :site, :post_read, self
    end

In total, only 22% of the time was spent doing the actual work of generating HTML:

    # Render the site to the destination.
    # Returns nothing.
    def render

      payload = site_payload

      Jekyll::Hooks.trigger :site, :pre_render, self, payload

      Jekyll::Hooks.trigger :site, :post_render, self, payload
    end

In other words, most of the time is spent reading from and writing to the disk. Clearly, the Hugo case shows us this could be faster: we aren't hitting a hardware limit. But why does this run even slower in JRuby and TruffleRuby than it does in MRI Ruby? Let's try another test.

Jekyll Test 2

Testing the build process for another Jekyll site gives similar timings: TruffleRuby is considerably slower.

MRI Ruby 2.7.0 20 seconds
TruffleRuby --dev 116 seconds
A larger Jekyll site
This time most of the time is Liquid template rendering

This time the flame graph shows most time is spent rendering Liquid templates rather than in IO. I wasn't able to figure out a way to get a flame graph out of TruffleRuby.

So what does this mean? My guess is that the filesystem Ruby code and the Liquid templates don't benefit from being on the JVM. On the contrary, they seem to run slower.

It might be possible to reimplement write and read to follow JVM high-performance file access best practices, and it might be possible to reimplement Liquid templates in a Java-native way. That should bring a speed-up, but I'm not sure whether it would make JRuby faster than MRI Ruby for Jekyll or only bring it up to comparable performance.

Performance Advice

All this leaves me with only the most generic performance advice: you should test your Ruby codebase with different runtimes and see what works best for you.

If your code is long-running, CPU-bound, and thread-based, and if the GIL limits you, TruffleRuby may be a win. Additionally, if that's the case and you tweak your code to use Java concurrent data structures rather than the Ruby defaults, you may achieve an order-of-magnitude speed-up. If the garbage collector is a bottleneck in your app, that could be another reason to try a different runtime.

However, if your existing Ruby codebase is not CPU-bound and not multi-threaded, it will probably run slower on JRuby and TruffleRuby than with the MRI Ruby runtime.

Also, I may be wrong. If I missed something important, I'd love to hear from you. Here at Earthly we take build performance very seriously, so if you have more ideas for speeding up Ruby or Jekyll, I'd love to hear them.2


Update #1

Both @ChrisGSeaton, the creator of TruffleRuby, and @headius, who works on JRuby, have responded on Reddit with ideas and requests for repro steps. I'm going to put together an example repo to share.

Update #2

The place where JRuby and TruffleRuby shine is long-running processes that have had time to warm up. Based on this feedback, I put together an example of a simple, small Jekyll site being built 20 times by the same process, in a repo here. After 20 builds with the same running process, the build times do start to converge, but even after that, MRI Ruby is still the fastest.

Update #3

I have filed a bug with TruffleRuby and got some performance advice that I think is worth sharing here:

Some Notes on the Assumptions of the Article

Hi there, here are some notes on the blog post.

“TruffleRuby is a fork of JRuby”

Technically correct from a repository point of view, and that's what the README says (I'll update that), but in practice it's like >90% of the code is not from JRuby. They're quite different technologies.

"Hugo, the static site builder written in Go, is significantly faster than Jekyll. So we know that Jekyll is not at the limits of the hardware, where there is simply no more performance to squeeze out."

I'd guess that's in part due to a different design. Most likely Hugo is better optimized and can do much less work due to different constraints.

"However, if your existing Ruby codebase is not CPU-bound and not multi-threaded, it will probably run slower on JRuby and TruffleRuby than with the MRI Ruby runtime."

I think there is no such simple rule, and there is also the question of how much of "not CPU-bound" is time spent in the kernel. TruffleRuby can be faster on many Ruby workloads; as long as there is Ruby code to run, there is potential for optimization. Obviously, if 90% is spent in read/write system calls, only 10% of it can be optimized by a Ruby implementation, but I'd expect that's pretty rare.

The main point, I think, is that if it's not almost entirely IO-bound, then there is potential to speed it up. And the only way to know for sure is to try it, as you say.

"I'd especially love to hear how to get a flame graph out of TruffleRuby"

Thanks for filing the issue. Currently TruffleRuby has multiple Ruby-level profilers (--cpusampler, VisualVM, Chrome Inspector). We're working on an easy way to get a flamegraph (right now we use this, but one needs to clone the truffleruby repo, which is less convenient). Java-level profiling is also possible via VisualVM. async-profiler needs something like JDK>=15 to work well with Graal compilations, IIRC.

eregontp on reddit

Some Advice for Making Ruby Single-Process

Your best bet for both JRuby and TruffleRuby would be to set it up to use a single process to do everything. In JRuby, it is trivial to start up a separate, isolated JRuby instance within the same process:

my_jruby = org.jruby.Ruby.new_instance

The code given will run in the same process but in a completely separate JRuby environment. It could be adapted to run your "subprocesses" without starting a new JVM every time.

TruffleRuby likely has something similar based on the GraalVM polyglot APIs.

If you managed to get your CI run to use a single process, I'd be surprised if it was not as fast as or faster than standard C-based Ruby.

headius on reddit

Note: The article does cover --dev, and I did put together a single-process example in update #2. The results are much better, but still slower.

Using Application Class Data Sharing

AppCDS is a way to significantly improve startup speed without impacting overall runtime performance. It'd be interesting to see if employing that helps with performance.

Finally, if you can, I'd be curious to see what happens if you work from a ramdisk rather than the HD. I believe the IO problems you were likely seeing are due to a lot of communication between the app and the disk; what happens if you take that out of the picture?

cogman on reddit

The feedback from the JRuby and TruffleRuby people has been wonderful. They've been offering ideas and asking for tickets and repro steps. These are great projects that keep improving, and I'm excited to hear that making Jekyll run faster than it does on CRuby is very possible with some extra elbow grease.
