Upgrade to Jasmine `3.x`

Summary

The old version of Jasmine that Joplin currently uses to run tests fails silently on unhandled promise rejections. I propose we upgrade Joplin to a more recent version, which fixes that design flaw.

Details

Joplin is currently on version "^2.6.0" of jasmine, which does not fail when uncaught errors occur in a test. It only fails if something you've asserted is not true! Luckily, they've fixed this in 3.0.

What this means for Joplin is that unhandled promise rejections are not caught in tests. We get output like this, where at a quick glance it appears there were no failures:

CliClient git:evernote-html-export ❯ ./run_test.sh EnexToHtml
> jasmine "tests-build/EnexToHtml.js"

Testing with sync target: memory
Started
ReferenceError: this_variable_does_not_exist_muahahaha is not defined
    at asyncTest (/Users/devonzuegel/dev/joplin/CliClient/tests-build/EnexToHtml.js:37:15)

1 spec, 0 failures
Finished in 0.217 seconds
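
For reference, the failure mode can be reproduced with a spec shaped roughly like this (a hypothetical reconstruction, not the actual EnexToHtml test):

// Hypothetical reconstruction of the failure mode, not the actual Joplin test.
it('passes even though it blew up', (done) => {
  Promise.resolve().then(() => {
    // Throws inside the promise chain, producing an unhandledRejection.
    this_variable_does_not_exist_muahahaha;
  });
  // done() runs before the rejection fires, so Jasmine 2.x records a pass.
  done();
});

Jasmine 2.x never sees that rejection; the ReferenceError above only shows up because something else logs it.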

While it's fairly obvious here that something went wrong, it would not be so easy to see if we were running many tests, and either way the 0 failures claim is just plain wrong since the code didn't even run, let alone run successfully. We're left logging the errors on our own, which is better than letting Jasmine fail silently but still far from ideal, e.g.:
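The handlers scattered through the suite look something like this sketch; the exact code in Joplin's tests may differ:

// Sketch of the kind of ad-hoc logging we rely on today; the real
// handlers in the test suite may differ.
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled rejection:', reason);
});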

Next steps

I believe this is just a matter of (a) upgrading Jasmine and then (b) going through existing tests and removing the process.on('unhandledRejection', ...) handlers scattered through the test suite. Famous last words though. :see_no_evil:
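
Concretely, (a) would be something like the following, assuming jasmine is a devDependency of the CliClient package (an assumption based on the test output above, not something I've confirmed):

npm install --save-dev 'jasmine@^3.0.0'

For (b), Jasmine 3.x reports uncaught exceptions and unhandled promise rejections as failures of whatever spec or suite is running when they occur, so the ad-hoc handlers sketched above should become redundant once the upgrade is in.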


I think this is a great idea.

But ultimately the decision lies with @laurent.

I’d accept a PR for this as long as the existing tests still pass.


I think that’s the problem in the first place. Some tests fail, but no one even notices, because they are not marked as failures.

Edit: or they might not even fail, because the appropriate code was never run.

Which ones? I'm almost certain they all pass. As mentioned by @devonzuegel, a test might pass when there's an uncaught exception, but those would still display an error message, and I check for these messages when I run the tests.

Hmm, ok, but this means you have to check them manually. All I’m saying is that it would be a lot more convenient if the tests handled those exceptions as well.

Yes, definitely. Initially I just cobbled it together with a bash script, but there’s room for improvement.