I’m trying to get my head around something that should be obvious, but in actual fact is a lot more complicated than you might expect.
I once asked a question here about loading an image from memory, and my method of compression has become a bit more advanced since then.
In a previous topic, I was having issues reading a file written with GZip compression. Now that I have resolved that after some searching around, I have been toying with TTF files within this folder. I used the .NET Framework and WinForms to do the compression; assuming the fonts were compressed correctly, the resulting file size is 300 KB. Here is the code:
using (FileStream fs = new FileStream(txtOutput.Text, FileMode.Create))
{
    using (GZipStream zip = new GZipStream(fs, CompressionMode.Compress))
    {
        using (BinaryWriter writer = new BinaryWriter(zip))
        {
            // File count
            writer.Write(list.Count);

            // Calculate offset for starting position of compressed data
            var intSize = Marshal.SizeOf(typeof(int));
            var offset = intSize;
            foreach (var file in list)
            {
                var relPath = file.Substring(txtPath.Text.Length);
                offset += (relPath.Length * sizeof(char)) + (intSize * 3);
            }

            // Write the file data
            foreach (var file in list)
            {
                var bytes = File.ReadAllBytes(file);
                var relPath = file.Substring(txtPath.Text.Length);
                writer.Write(relPath.Length * sizeof(char));
                writer.Write(relPath);
                writer.Write(offset);
                writer.Write(bytes.Length);
                offset += bytes.Length;
            }

            // Actual data, compressed
            foreach (var file in list)
            {
                var bytes = File.ReadAllBytes(file);
                writer.Write(bytes);
            }
        }
    }
}
So the resulting compressed data reads as follows:
- count: the number of files to compress.
- The file data, which consists of the file path pointing to its respective offset and length within the compressed file.
- The actual, raw bytes for each of the files, encoded using the GZip compression algorithm.
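For cross-checking later, the first fileOffset stored in the header can be predicted by mirroring the writer's arithmetic. Here is a minimal sketch in Haxe (expectedFirstOffset is just an illustrative helper; sizeof(char) is 2 and sizeof(int) is 4 in the C# above):

// Mirrors the C# offset calculation so the first fileOffset read back
// from the header can be sanity-checked against a known file list.
static function expectedFirstOffset(relPaths:Array<String>):Int
{
    var offset = 4; // the leading file count
    for (p in relPaths)
        offset += (p.length * 2) + (4 * 3); // path chars + three ints per entry
    return offset;
}
// e.g. expectedFirstOffset(["\\Gudea-Regular.ttf"]) == 4 + (18 * 2) + 12 == 52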
This is my code as implemented in Haxe:
import haxe.io.Bytes;
import sys.io.File;
// Assuming the "format" haxelib's GZip reader and OpenFL's Font/ByteArray here:
import format.gz.Reader;
import openfl.text.Font;
import openfl.utils.ByteArray;

class ResourceReader
{
    private var fontCache:Map<String, Font>;

    public var art:Map<String, Resource>;
    public var sound:Map<String, Resource>;
    public var maps:Map<String, Resource>;
    public var data:Map<String, Resource>;
    public var fonts:Map<String, Resource>;

    public function new()
    {
        fontCache = new Map<String, Font>();
        art = new Map<String, Resource>();
        sound = new Map<String, Resource>();
        maps = new Map<String, Resource>();
        data = new Map<String, Resource>();
        fonts = new Map<String, Resource>();
    }

    public function loadData()
    {
        var fontsPath = "data/fonts.dat";
        var fontsFile = new Reader(File.read(fontsPath));
        var data:Bytes = fontsFile.read().data;
        var offset = 0;
        var count = data.getInt32(offset);
        for (i in 0...count)
        {
            var resource = new Resource();
            // Skip the count on the first pass, or the previous entry's length int after that
            offset += 4;
            // C# wrote relPath.Length * sizeof(char), so halve it for the character count
            var pathLength = Std.int(data.getInt32(offset) / 2);
            // Skip the 4-byte length int plus BinaryWriter's 1-byte string length prefix
            offset += 5;
            var path = data.getString(offset, pathLength);
            offset += pathLength;
            var fileOffset = data.getInt32(offset);
            offset += 4;
            var fileLength = data.getInt32(offset);
            resource.offset = fileOffset;
            resource.length = fileLength;
            fonts.set(path, resource);
        }
    }

    public function getFont(path:String):Font
    {
        if (fontCache.exists(path))
        {
            return fontCache.get(path);
        }
        else
        {
            if (fonts.exists(path))
            {
                var resource:Resource = fonts.get(path);
                trace(resource);
                var fontsFile = new Reader(File.read("data/fonts.dat"));
                var data:Bytes = fontsFile.read().data;
                var bytes = data.sub(resource.offset, resource.length);
                trace(bytes.length);
                var font = Font.fromBytes(ByteArray.fromBytes(bytes));
                trace(font.fontName);
                fontCache.set(path, font);
                return font;
            }
            else
            {
                return null;
            }
        }
    }
}
I have tested this, and the code does decompress the archive and extract each file's entry into its appropriate Resource instance, which stores where that file's raw data sits within the compressed file and how many bytes it spans.
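To rule Font.fromBytes out, one thing I can do is dump an extracted slice back to disk and compare it byte-for-byte against the original TTF in a hex editor. A minimal sketch, assuming my Resource class from above and an arbitrary output path:

// Writes the slice described by a Resource out to disk so it can be
// diffed against the original TTF file.
static function dumpResource(data:haxe.io.Bytes, resource:Resource, outPath:String):Void
{
    var slice = data.sub(resource.offset, resource.length);
    sys.io.File.saveBytes(outPath, slice);
    trace('wrote ${slice.length} bytes to $outPath');
}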
In the below code, everything works until you reach var font = reader.getFont(file);
var reader = new ResourceReader();
reader.loadData();
var file = "\\Gudea-Regular.ttf";
var font = reader.getFont(file);
The reason why, based on past experience, is the call var font = Font.fromBytes(ByteArray.fromBytes(bytes)); as provided in the first Haxe code example.
It’s difficult to verify that the bytes are correct in Haxe because the data was compressed using C#. I’m not sure what the default headers for the GZip compression algorithm are in C#, but if they are the same in both implementations then it isn’t a problem with the algorithm, but a problem with my implementation.
I’m just not sure what, exactly.
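One thing I can check cheaply on the Haxe side: a standard GZip stream starts with the magic bytes 0x1F 0x8B followed by 0x08 for deflate, and as far as I know C#'s GZipStream writes exactly that. So before anything else I can trace the first few bytes of the archive:

// Sanity-check the raw archive before decompressing: a well-formed
// GZip stream begins with 0x1F 0x8B, then 0x08 for the deflate method.
static function looksLikeGZip(path:String):Bool
{
    var raw = sys.io.File.getBytes(path);
    trace(StringTools.hex(raw.get(0), 2) + " " + StringTools.hex(raw.get(1), 2) + " " + StringTools.hex(raw.get(2), 2));
    return raw.get(0) == 0x1F && raw.get(1) == 0x8B && raw.get(2) == 0x08;
}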
Unless there is a blatant and obvious solution that I have missed, I think this will involve thorough testing on both the Haxe and the C# side to verify certain things. I can always rewrite the Reader as a Writer using HaxeUI and see whether the bytes come out correct that way; if they do, it confirms that I shouldn't be using multiple languages for this manual resource system!
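Another cheap sanity check before going that far: a TrueType font should begin with the big-endian sfnt version 0x00010000 (or the ASCII tag "OTTO" for CFF-flavoured OpenType), so I can verify the first four bytes of the extracted slice before handing it to Font.fromBytes:

// A TTF starts with the big-endian sfnt version 0x00010000; OpenType
// fonts with CFF outlines start with the tag "OTTO" instead.
static function looksLikeFont(slice:haxe.io.Bytes):Bool
{
    if (slice.length < 4)
        return false;
    var isTrueType = slice.get(0) == 0x00 && slice.get(1) == 0x01
        && slice.get(2) == 0x00 && slice.get(3) == 0x00;
    return isTrueType || slice.getString(0, 4) == "OTTO";
}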