Boost the ARM JIT engine with Nitro Extreme. Brace for impact.

There was a post about Nitro Extreme on this site some time ago, so it is time to recap what has happened since. Nitro Extreme is not a branch anymore: it has been merged into mainline and has replaced the old JavaScript value (JSValue) representation on 32-bit machines. To be precise, this has happened only on x86 so far.

Nitro Extreme developers encouraged all port maintainers to support this new representation (JSValue32_64), since maintaining several different JSValue representations is a nightmare, and they will surely get rid of the old representation (JSValue32) at some point in the future. Based on my preliminary results, I was not convinced that the new representation would lift our port to a new speed level. Regardless, it never hurts to give it a try.

Once the implementation was finished, we decided to measure the gain on different benchmark sets. We built eight binaries of revision 51068: with and without Nitro Extreme support, interpreter and JIT, each compiled with both the -O2 and -O3 optimization flags (2 x 2 x 2 = 8 executables). To our surprise, it turned out that the gain of -O3 over -O2 is less than 1%, so we omitted those results from the figures.

Comparison on SunSpider:

              Interpreter     JIT            gain
JSValue32     16594.8 ms      9800.4 ms      1.69x as fast
JSValue32_64  14770.9 ms      7218.3 ms      2.05x as fast
gain          1.123x as fast  1.36x as fast

The math benchmarks show a considerable speedup with the JIT, usually around 50% - 150%. However, the benchmarks that copy (or move) large amounts of JSValues suffer a performance drop (around 20% - 50%). In the case of the interpreter, the behaviour is similar to the JIT's, but the gain is scaled down (around a 30% gain on math, and a 20% slowdown on the others).

Comparison on v8 benchmark:

              Interpreter     JIT             gain
JSValue32     90290.0 ms      32655.1 ms      2.76x as fast
JSValue32_64  98274.8 ms      35389.1 ms      2.78x as fast
gain          1.088x as slow  1.084x as slow

In the case of V8, only the raytrace benchmark gets faster, by about 30% with the JIT. The others suffer a performance loss of around 8-20%. This is true for the interpreter as well, although the values are slightly lower (a 25% gain on raytrace, and a 5-15% loss on the others).

Comparison on WindScorpion:

              Interpreter     JIT             gain
JSValue32     159524.8 ms     270898.3 ms     1.70x as slow
JSValue32_64  170509.5 ms     272213.8 ms     1.60x as slow
gain          1.069x as slow  1.005x as slow

As for WindScorpion, the two JSValue representations have roughly the same runtime speed when using the JIT, and about a 7% loss when using the interpreter. However, comparing JIT and interpreter is a different story: the JIT suffers a great performance loss here, mainly because of one benchmark, called WS-email. The result looks better for the JIT if we omit this particular benchmark:

              Interpreter     JIT             gain
JSValue32     145493.3 ms     113798.3 ms     1.28x as fast
JSValue32_64  155249.3 ms     113914.5 ms     1.36x as fast
gain          1.067x as slow  same

JSValue32_64 is really effective for DES cryptography (2.34x as fast with the JIT, and 1.62x as fast with the interpreter). On the other hand, array-handling algorithms (such as bubble sort and the Floyd-Warshall algorithm) slowed down by about 10-20%.

cartman (not verified) - 11/24/2009 - 09:16

Is there any chance that the JIT will work on WinCE any time soon?

zoltan.herczeg - 11/27/2009 - 12:04

Unfortunately, we are not working on the WinCE platform yet (no engineering resources, budget, etc.). But it would be good to extend our work to more platforms if we could find support for it.

liu (not verified) - 12/16/2009 - 10:16

Could the JSValue32_64 feature work on the ARM platform?

zoltan.herczeg - 12/16/2009 - 14:44

Yes, the necessary patches have landed. Just specify WTF_USE_JSVALUE32_64=1 in your build system, or change it in JavaScriptCore/wtf/Platform.h.
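
For reference, the Platform.h route is a one-line change; the macro name is the real one, while the comment and surrounding context are abbreviated here:

```cpp
/* JavaScriptCore/wtf/Platform.h */
/* Select the 64-bit JSValue representation for this port. */
#define WTF_USE_JSVALUE32_64 1
```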

liu (not verified) - 12/17/2009 - 09:26

Has the patch landed on WebKit trunk as of r51068? If not, where can I find the patch?

zoltan.herczeg - 12/17/2009 - 11:39

Hi, the bug report is here

According to Zoltan Horvath, the patch "Landed in 51067." You are a lucky guy :)

Anonymous (not verified) - 04/12/2010 - 18:44

Do you have any idea when you will start working on a WinCE JIT implementation? Or does anybody know where I can find one?

zoltan.herczeg - 04/13/2010 - 07:14

As far as I know, WinCE is supported in WebKit. I helped review the WinCE JIT patches, and they have already landed, although I have never tried them myself.

Anonymous (not verified) - 04/13/2010 - 21:11

Do you know the revision number that includes the WinCE JIT patches?
