It's a 6th gen Intel Core i5
No errors when running
npm run test -- --filter="services_Revision"
50 times. Seems good to me.
Update: No failures in "integration_ShowAllNotes"
Update: I'm done testing. No failures
Good work!
@mic704b, what was the issue with services_Revision? I remember seeing that error once on CI but didn’t look at the issue at the time, and it didn’t happen afterwards.
Thanks @PackElend! Hopefully we’re nearly there
Excellent! Thanks again @naviji
Some of the tests in Revisions rely on timing, and the margins in those tests, although seemingly generous, appear to be too tight for some machines.
For example, the test should delete old revisions (1 note, 3 revs) creates a note, saves three revisions of it with sleeps in between, and then deletes revisions older than 2s.
It expects to find two revisions afterwards, but when it fails, it finds only one.
The test has a 1s margin, which is pretty generous. But if the steps marked with a (*) in the sketch below take longer than 2s in total, the test fails.
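Roughly, the shape of such a test is sketched below. This is only an illustrative reconstruction from the description above: the helper names (createNote, updateNote, collectRevisions, deleteOldRevisions, getRevisions) and the exact ordering are stand-ins, not the actual test code.

```js
// Illustrative sketch of the flaky pattern, not the real test code.
const note = await createNote('v1'); // first revision, expected to be deleted
await collectRevisions();
await sleep(1000);

await updateNote(note.id, 'v2'); // second revision, expected to survive
await collectRevisions();        // (*)
await sleep(1000);               // (*)
await updateNote(note.id, 'v3'); // (*) third revision, expected to survive
await collectRevisions();        // (*)

// Delete revisions older than 2s of real wall-clock time. The second
// revision is as old as the (*) steps took in total, so if they exceed
// 2s it is deleted too, and the assertion below finds one revision
// instead of the expected two.
await deleteOldRevisions(2000);
expect((await getRevisions(note.id)).length).toBe(2);
```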
So we applied a patch that doesn't delete revisions older than a fixed 2s. Instead, it records the actual time at the start of the first sleep and sets the delete interval based on that. This removes the real-time aspect of the test, and it is now passing.
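In terms of the same sketch, the patched version would look something like this (again with stand-in names, and note creation plus the first collectRevisions happening before this point):

```js
// Sketch of the patched approach: anchor the delete cutoff to a recorded
// timestamp instead of a fixed 2s interval, so machine speed no longer
// affects the outcome.
const sleepStart = Date.now(); // recorded at the start of the first sleep
await sleep(1000);

await updateNote(note.id, 'v2');
await collectRevisions();
await sleep(1000);
await updateNote(note.id, 'v3');
await collectRevisions();

// Delete everything created before the first sleep began, however long
// the steps in between actually took. Exactly one revision is then
// always older than the cutoff, making the test deterministic.
await deleteOldRevisions(Date.now() - sleepStart);
expect((await getRevisions(note.id)).length).toBe(2);
```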
The problem could be due to a bug, but by making the test deterministic, I think the chance of that is reduced. Or it could be that some machines are just slow.
The proposed solution, to only test the logic and ignore the performance aspects, assumes the latter. To be 100% sure, though, we would need to investigate and debug further on naviji's machine, but that is not so practical to do remotely.
(Note that I was not able to reproduce the problem myself).
(The changes can be seen here.)
Thanks for clarifying. Timers and async side effects in the code make it harder to write unit tests for it, and your solution makes sense.
I guess ideally we'd mock the timers, db calls, etc. to make sure it's all deterministic, but that would be a massive undertaking at this point.
Bags not me! ; )
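For what it's worth, mocked timers could look something like the sketch below, assuming a Jest-style runner (jest.useFakeTimers and jest.advanceTimersByTime are standard Jest APIs; the test body itself is hypothetical):

```js
// Minimal sketch: fake timers make time-based logic deterministic by
// advancing virtual time instead of sleeping in real time.
jest.useFakeTimers();

test('expires items older than 2s without real waiting', () => {
  const onExpired = jest.fn();
  setTimeout(onExpired, 2000); // e.g. a 2s revision TTL

  jest.advanceTimersByTime(1999); // 1999ms of virtual time pass instantly
  expect(onExpired).not.toHaveBeenCalled();

  jest.advanceTimersByTime(1); // cross the 2s boundary
  expect(onExpired).toHaveBeenCalled();
});
```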
Have the changes to the test harness been pushed to master? I've been getting similar issues when running the tests. I opened an issue on GitHub, then came across this topic; I seem to be getting similar failures.
Link to pastebin
I'm getting variable errors, so I don't know which particular one to retry. The example in the pastebin has 3 tests failing, but I've had as many as 7 fail.
Not yet, the PR is under review.
Both of your errors look like they could be the same problem identified here.
They are ready on master now. Please update your branch and try again. If you have any issues, please create a new discussion thread.
The tests are passing now. Thanks!
Excellent! Thanks for the update.
Just to bring to your attention: there are a few new tests that may fail in the same way. They have been fixed, but are awaiting review and so are not merged yet. The affected test files have names starting with feature_