That’s embarrassing. Thanks @ibilon, that was the missing command-line argument I needed to make it work.
Interestingly, the test used to boot up really fast (sub-second); adding lime
makes it boot in 4-5 seconds. I’m not complaining though
After losing (and redoing) some work, I’ve created something that works. Sort of.
All three test cases currently fail (the SVGs appear to be truncated on the right side by at least a few pixels). I'd like a preliminary check @singmajesty – is this what we discussed? Does this direction still make sense?
You can run it by cloning my fork here and running haxelib run munit test
from the root directory.
I've just invited you as a contributor to the SVG project. Please feel free to add unit tests there.
Thanks! This is a great honour (and heavy responsibility …) which I will do my best to uphold.
For now, I have to think about this more. I generated the expected-value PNGs in GIMP, and they don't exactly match the OpenFL-rendered ones. Instead of hash-comparing, I think I need a "softer" comparison (e.g. pass if <= 10% of pixels differ).
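As a rough sketch of what that "softer" comparison could look like (the class and function names here are illustrative, not from the repo, and the 10% threshold is just the example from above):

```haxe
import openfl.display.BitmapData;

class SoftCompare {
    // Pass if no more than maxDiffPercent of pixels differ exactly.
    public static function similarEnough(expected:BitmapData, actual:BitmapData,
            maxDiffPercent:Float = 10):Bool {
        if (expected.width != actual.width || expected.height != actual.height) {
            return false;
        }
        var differing = 0;
        for (y in 0...expected.height) {
            for (x in 0...expected.width) {
                if (expected.getPixel32(x, y) != actual.getPixel32(x, y)) {
                    differing++;
                }
            }
        }
        var total = expected.width * expected.height;
        return (differing / total) * 100 <= maxDiffPercent;
    }
}
```

This treats any per-pixel mismatch as a "difference", so a 1px anti-aliasing shift still counts each affected pixel; a per-channel tolerance would be gentler.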
How do I load a PNG using Lime/OpenFL in a test? The usual methods don’t work. (I just need access to the image’s pixel data.)
The OpenFL unit test is actually still built using "openfl build", which would enable assets to work. Alternatively, you could use BitmapData.fromFile (etc.), though that won't work in HTML5. You could also use a loader, but again, these become reliant on the output file structure to find the assets.
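For the BitmapData.fromFile route, a minimal sketch might look like this (assuming a native target, since fromFile is unavailable on HTML5; the helper name and path handling are made up for illustration):

```haxe
import openfl.display.BitmapData;

class FixtureLoader {
    // Load an expected-value PNG from disk so its pixels can be compared.
    public static function load(path:String):BitmapData {
        var bitmapData = BitmapData.fromFile(path); // returns null on failure
        if (bitmapData == null) {
            throw 'Could not load fixture: $path';
        }
        return bitmapData;
    }
}
```

Because the path is resolved at runtime relative to the working directory, tests using this remain dependent on the output file structure, as noted above.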
Thanks. I had to put this down for a while. I will come back to it again.
I think the current approach is a bit off – generating expected-value PNGs in GIMP won't work. If we have 100 SVG tests and we change the rendering, do we have to regenerate all 100 PNGs by hand? That won't scale.
Instead, I think the tests should be in two parts. The workflow will be:
At any point, we can regenerate the expected-value PNGs from the SVGs.
Does this make more sense?
Related point: telling if two images are “similar enough” is not an easy thing to do. There are lots of fun edge cases (like a black line that’s off by 1px) that make this difficult.
This doesn't make sense. It's too manual. (Feel free to keep ignoring my posts, by the way.)
My current idea is to try this:
This fits the svg library philosophy better: it's not perfect, but Good Enough.
Final post, for posterity: the bitmapData.compare method handles much of what I need. I got this working without splitting the image up – calculating the average pixel diff (expected vs. actual pixels) worked really well.
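A sketch of that final approach, assuming BitmapData.compare's Flash-style contract (it returns a difference BitmapData, or an Int: 0 when identical, negative on mismatched dimensions); the class name and the averaging over RGB channels are my own illustration, not necessarily the exact code in the repo:

```haxe
import openfl.display.BitmapData;

class DiffScore {
    // Average per-channel difference (0–255) between two images.
    public static function averagePixelDiff(expected:BitmapData, actual:BitmapData):Float {
        var result:Dynamic = expected.compare(actual);
        if (Std.isOfType(result, Int)) {
            // 0 means identical; negative values mean width/height mismatch
            return (result == 0) ? 0 : Math.POSITIVE_INFINITY;
        }
        var diff:BitmapData = cast result;
        var total = 0.0;
        for (y in 0...diff.height) {
            for (x in 0...diff.width) {
                var pixel = diff.getPixel(x, y);
                total += ((pixel >> 16) & 0xFF)  // red delta
                       + ((pixel >> 8) & 0xFF)   // green delta
                       + (pixel & 0xFF);         // blue delta
            }
        }
        return total / (diff.width * diff.height * 3);
    }
}
```

A test would then assert that the returned average stays below some small threshold, which tolerates anti-aliasing noise without requiring pixel-perfect output.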
Cheers – your help was invaluable @singmajesty