Sun SVG icons truncated

That’s embarrassing. Thanks @ibilon, that was the missing command-line argument I needed to make it work.

Interestingly, the test used to boot up really fast (sub-second); adding Lime makes it boot in 4-5 seconds. I’m not complaining though :slightly_smiling:

After losing (and redoing) some work, I’ve created something that works. Sort of.

All three test cases currently fail (it looks like the SVGs are truncated on the right side by at least a few pixels). I would like a preliminary check @singmajesty – is this what we discussed? Does this direction still make sense?

You can run it by cloning my fork here and running haxelib run munit test from the root directory.

I’ve just invited you as a contributor to the SVG project. Please feel free to add unit tests there :slight_smile:

Thanks! This is a great honour (and heavy responsibility …) which I will do my best to uphold.

For now, I have to think about this more. I generated the expected-value PNGs in GIMP, and they don’t exactly match the OpenFL ones. Instead of hash-comparing, I think I need a “softer” compare (e.g. no more than 10% of pixels are different).
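
Something like this is what I have in mind for the softer compare (untested sketch; SoftCompare and the 10% default are just placeholders, and it assumes both images have the same dimensions):

```haxe
import openfl.display.BitmapData;

class SoftCompare {

	// Returns true if no more than maxDiffRatio of the pixels differ
	// between the two images (assumes equal dimensions).
	public static function similar(expected:BitmapData, actual:BitmapData, maxDiffRatio:Float = 0.10):Bool {

		if (expected.width != actual.width || expected.height != actual.height) return false;

		var differing = 0;

		for (y in 0...expected.height) {
			for (x in 0...expected.width) {
				if (expected.getPixel32(x, y) != actual.getPixel32(x, y)) differing++;
			}
		}

		return differing / (expected.width * expected.height) <= maxDiffRatio;
	}
}
```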

How do I load a PNG using Lime/OpenFL in a test? The usual methods don’t work. (I just need access to the image’s pixel data.)

The OpenFL unit test is actually still built using “openfl build”, which would enable assets to work. Or you could use BitmapData.fromFile (etc.), though that won’t work in HTML5. You could also use a loader, but again, these will become reliant on the output file structure to find the assets.
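
For example, something along these lines (untested; the helper and path argument are just placeholders):

```haxe
import openfl.Assets;
import openfl.display.BitmapData;

class LoadExpected {

	// Loads an expected-value PNG so its pixel data can be read in a test.
	public static function load(path:String):BitmapData {

		#if html5
		// No synchronous file access on HTML5, so go through the asset system
		// (the PNG must be listed as an asset in the project file).
		return Assets.getBitmapData(path);
		#else
		// On native targets, read the PNG directly from disk.
		return BitmapData.fromFile(path);
		#end
	}
}
```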

Thanks. I had to put this down for a while. I will come back to it again.

I think the current approach is a bit off – generating expected-value PNGs in GIMP won’t work. If we have 100 SVG tests, and we change the generation, do we have to regenerate the 100 PNGs? That won’t scale.

Instead, I think the tests should be two parts. The workflow will be:

  • User generates “expected value” images (or uses existing ones) via a script that renders them through OpenFL
  • User runs the tests, opens the HTML files, and verifies if everything looks good (we show expected and actual images side-by-side)
  • User tweaks the SVG generation code until the expected/actual images are Good Enough
  • User checks in and tests pass

At any point, we can regenerate the expected-value PNGs from the SVGs.
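
As a rough, untested sketch of what that regeneration script could look like (GenerateExpected and svgToPng are names I made up; it leans on the svg haxelib’s format.SVG plus OpenFL’s PNG encoder, so native targets only):

```haxe
import format.SVG;
import openfl.display.BitmapData;
import openfl.display.PNGEncoderOptions;
import openfl.display.Shape;
import sys.io.File;

class GenerateExpected {

	// Renders a single SVG file to a PNG of the given size.
	public static function svgToPng(svgPath:String, pngPath:String, width:Int, height:Int):Void {

		// Parse the SVG and render it into a Shape's graphics.
		var svg = new SVG(File.getContent(svgPath));
		var shape = new Shape();
		svg.render(shape.graphics, 0, 0, width, height);

		// Rasterize the shape into a transparent BitmapData.
		var bitmapData = new BitmapData(width, height, true, 0x00FFFFFF);
		bitmapData.draw(shape);

		// Encode as PNG and write it to disk.
		var bytes = bitmapData.encode(bitmapData.rect, new PNGEncoderOptions());
		File.saveBytes(pngPath, bytes);
	}
}
```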

Does this make more sense?

Related point: telling if two images are “similar enough” is not an easy thing to do. There are lots of fun edge cases (like a black line that’s off by 1px) that make this difficult.

That plan doesn’t make sense – it’s too manual. (Feel free to keep ignoring my posts, by the way :slight_smile: )

My current idea is to try this:

  • Split the image into a 2x2 or 3x3 grid (test and see which works better)
  • Calculate the average hue, saturation, and value for each block of pixels (or maybe just the average RGB value)
  • Compare it to the same calculated values on the “expected” image
  • If they’re within some threshold, we’re good

This fits the svg library philosophy better: it’s not perfect, but Good Enough.
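
A rough, untested sketch of that block-average idea (BlockCompare, the default grid size, and the tolerance are placeholders; it sticks to average RGB rather than HSV for simplicity, and assumes the image is at least gridSize pixels in each dimension):

```haxe
import openfl.display.BitmapData;

class BlockCompare {

	// Splits both images into a gridSize x gridSize grid and compares the
	// average RGB of each block against a per-channel tolerance (0-255).
	public static function similar(expected:BitmapData, actual:BitmapData, gridSize:Int = 3, tolerance:Float = 16):Bool {

		if (expected.width != actual.width || expected.height != actual.height) return false;

		for (row in 0...gridSize) {
			for (col in 0...gridSize) {
				var a = averageRGB(expected, col, row, gridSize);
				var b = averageRGB(actual, col, row, gridSize);
				for (channel in 0...3) {
					if (Math.abs(a[channel] - b[channel]) > tolerance) return false;
				}
			}
		}

		return true;
	}

	// Average red, green, and blue over one block of the grid.
	static function averageRGB(image:BitmapData, col:Int, row:Int, gridSize:Int):Array<Float> {

		var x0 = Std.int(image.width * col / gridSize);
		var x1 = Std.int(image.width * (col + 1) / gridSize);
		var y0 = Std.int(image.height * row / gridSize);
		var y1 = Std.int(image.height * (row + 1) / gridSize);

		var r = 0.0, g = 0.0, b = 0.0;
		var count = (x1 - x0) * (y1 - y0);

		for (y in y0...y1) {
			for (x in x0...x1) {
				var pixel = image.getPixel(x, y);
				r += (pixel >> 16) & 0xFF;
				g += (pixel >> 8) & 0xFF;
				b += pixel & 0xFF;
			}
		}

		return [r / count, g / count, b / count];
	}
}
```

Averaging per block rather than per pixel is what should absorb the “black line off by 1px” cases – a shifted line barely moves a block’s average.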

Final post, for posterity: the BitmapData.compare method handles much of what I need. I got this working without splitting the image up – calculating the average pixel diff (expected vs. actual pixels) worked really well.
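
Roughly what that ended up looking like (simplified, untested sketch; CompareTest and averageDiff are made-up names):

```haxe
import openfl.display.BitmapData;

class CompareTest {

	// Uses BitmapData.compare to get a per-pixel diff image, then averages
	// the per-channel differences (0-255).
	public static function averageDiff(expected:BitmapData, actual:BitmapData):Float {

		var result = expected.compare(actual);

		if (Std.is(result, Int)) {
			// compare returns 0 when the images are identical, and a
			// negative value when the dimensions don't match.
			return (result == 0) ? 0 : Math.POSITIVE_INFINITY;
		}

		var diff:BitmapData = cast result;
		var total = 0.0;

		for (y in 0...diff.height) {
			for (x in 0...diff.width) {
				var pixel = diff.getPixel(x, y);
				total += ((pixel >> 16) & 0xFF) + ((pixel >> 8) & 0xFF) + (pixel & 0xFF);
			}
		}

		return total / (diff.width * diff.height * 3);
	}
}
```

A test then just asserts that averageDiff(expected, actual) stays below a small threshold.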

Cheers – your help was invaluable, @singmajesty.