
BlackJoker

Member
  • Content Count

    407
  • Joined

  • Last visited

  • Days Won

    1

BlackJoker last won the day on July 24

BlackJoker had the most liked content!

Community Reputation

1336 Excellent

About BlackJoker

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming


  1. Hi, I am writing my own engine in C# using .NET Core 3, and I wrapped a macOS window to output my triangle demo there. It actually renders, BUT... I get only ~140 FPS. I investigated a little and found that calls to vkCmdEndRenderPass(), vkEndCommandBuffer() and vkQueueSubmit() take far too long, and I have no idea why. And that is not the end of it: the most interesting part starts when I swipe to another virtual screen on my Mac. Right after that, the FPS becomes ~5000 (which is what it should be all the time), and when I come back to the screen where my window renders, the FPS drops to 120-140 again. I use MoltenVK and Vulkan SDK version 1.1.121, and I don't see this strange behaviour on Windows. Obviously this is not normal. Does anyone know why this happens and how to fix it?
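One thing worth ruling out (my guess, not something confirmed in the thread): the swapchain's present mode. VK_PRESENT_MODE_FIFO_KHR waits for vertical blank, which can throttle submission while the window is visible, whereas an off-screen window may not be throttled at all. A sketch of preferring VK_PRESENT_MODE_IMMEDIATE_KHR at swapchain creation, written against a hypothetical C# binding that mirrors the C API names:

```csharp
// Hypothetical C#-style Vulkan binding that mirrors the C names
// (the thread doesn't say which binding is used, so treat this as a sketch).
var presentMode = VkPresentModeKHR.VK_PRESENT_MODE_FIFO_KHR; // always available, vsync-locked

// Query what the surface actually supports...
uint count = 0;
vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, ref count, null);
var modes = new VkPresentModeKHR[count];
vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, ref count, modes);

// ...and prefer IMMEDIATE, which does not wait for the display's refresh.
if (Array.IndexOf(modes, VkPresentModeKHR.VK_PRESENT_MODE_IMMEDIATE_KHR) >= 0)
    presentMode = VkPresentModeKHR.VK_PRESENT_MODE_IMMEDIATE_KHR;

swapchainCreateInfo.presentMode = presentMode; // VkSwapchainCreateInfoKHR
```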
  2. BlackJoker

    Wrap macOS window using .Net Core 3.0

    OK, I managed to deal with it by writing a custom macOS framework with C functions and callbacks and then P/Invoking it from the C# side of my engine. It was not that hard. Luckily, .NET Core 3 allows marshaling calls into macOS frameworks and dylibs.
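For anyone finding this later, a minimal sketch of the pattern described above. All names here are hypothetical (the actual framework isn't shown in the thread): export plain C functions from the macOS framework, then P/Invoke them from .NET Core 3.

```csharp
// C side, compiled into e.g. MyWindowKit.framework (hypothetical names):
//
//   typedef void (*ResizeCallback)(int width, int height);
//   void* CreateWindow(int width, int height, const char* title);
//   void  RunEventLoop(void* window);
//   void  SetResizeCallback(void* window, ResizeCallback cb);

using System;
using System.Runtime.InteropServices;

internal static class MacWindow
{
    // .NET Core resolves "MyWindowKit" to MyWindowKit.framework/MyWindowKit
    // (or libMyWindowKit.dylib) via its normal native-library probing rules.
    [DllImport("MyWindowKit")]
    public static extern IntPtr CreateWindow(int width, int height, string title);

    [DllImport("MyWindowKit")]
    public static extern void RunEventLoop(IntPtr window);

    // Callbacks cross the boundary as unmanaged function pointers;
    // keep a managed reference alive so the delegate isn't collected.
    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    public delegate void ResizeCallback(int width, int height);

    [DllImport("MyWindowKit")]
    public static extern void SetResizeCallback(IntPtr window, ResizeCallback cb);
}
```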
  3. BlackJoker

    Wrap macOS window using .Net Core 3.0

    Do you have some code examples that I could look at to understand how it all works?
  4. BlackJoker

    Wrap macOS window using .Net Core 3.0

    376 views and zero replies? Does nobody really know how to accomplish this?
  5. Hi, sorry if this is not the correct forum, but I would like to ask how to wrap a macOS window using .NET Core 3.0. Which frameworks/dylibs do I need to reference, and which functions do I need to P/Invoke, to build a minimal framework for running a window on macOS? I don't want to use Mono or Xamarin; I want to do it using pure .NET Core 3.0. I haven't found any tutorial for this yet. Could someone help me with this, or at least point me in the right direction?
  6. BlackJoker

    Custom GIF decoder issue

    So, after a huge amount of research I finally fixed all the issues related to parsing GIF images. First, I solved the issue with incorrect results which I described in my previous post. The root cause was that I should use the TransparentColorIndex from the PREVIOUS frame in the case of DoNotDispose. This is how a frame should be correctly decoded. Second, I ported an LZW compressor/decompressor to C#, which works perfectly. Here it is, in case someone needs a C# implementation:

    internal class LZW
    {
        private int EOF = -1;
        private int MAXBITS = 12;
        private int HSIZE = 5003; // 80% occupancy

        private int[] masks =
        {
            0x0000, 0x0001, 0x0003, 0x0007, 0x000F, 0x001F, 0x003F, 0x007F,
            0x00FF, 0x01FF, 0x03FF, 0x07FF, 0x0FFF, 0x1FFF, 0x3FFF, 0x7FFF, 0xFFFF
        };

        public byte[] Compress(int[] indexData, int colorDepth)
        {
            var outData = new List<byte>();
            int[] htab = new int[HSIZE];
            int[] codetab = new int[HSIZE];
            int hsize = HSIZE; // for dynamic table sizing
            int maxbits = MAXBITS;
            int maxmaxcode = 1 << MAXBITS; // should NEVER generate this code
            int initCodeSize = Math.Max(2, colorDepth) + 1;
            int remaining = indexData.Length;
            int curPixel = 0;
            int cur_accum = 0;
            int cur_bits = 0;
            int fcode;
            int i /* = 0 */;
            int c;
            int disp;
            int hsize_reg;
            int hshift;

            // g_init_bits - initial number of bits
            int g_init_bits = initCodeSize;
            bool clear_flg = false;
            int n_bits = g_init_bits;
            int maxcode = GetMaxCode(n_bits);
            int ClearCode = 1 << colorDepth;
            int EOFCode = ClearCode + 1;
            int free_ent = ClearCode + 2; // first unused entry

            int GetMaxCode(int bits)
            {
                return (1 << bits) - 1;
            }

            void ClearHash(int size)
            {
                for (var i = 0; i < size; ++i)
                    htab[i] = -1;
            }

            int GetNextPixel()
            {
                if (remaining == 0)
                    return EOF;
                --remaining;
                var pixel = indexData[curPixel++];
                return pixel & 0xff;
            }

            void Output(int code, List<byte> outs)
            {
                cur_accum &= masks[cur_bits];
                if (cur_bits > 0)
                {
                    cur_accum |= code << cur_bits;
                }
                else
                {
                    cur_accum = code;
                }
                cur_bits += n_bits;
                while (cur_bits >= 8)
                {
                    outs.Add((byte)(cur_accum & 0xff));
                    cur_accum >>= 8;
                    cur_bits -= 8;
                }
                // If the next entry is going to be too big for the code size,
                // then increase it, if possible.
                if (free_ent > maxcode || clear_flg)
                {
                    if (clear_flg)
                    {
                        maxcode = GetMaxCode(n_bits = g_init_bits);
                        clear_flg = false;
                    }
                    else
                    {
                        ++n_bits;
                        if (n_bits == maxbits)
                        {
                            maxcode = maxmaxcode;
                        }
                        else
                        {
                            maxcode = GetMaxCode(n_bits);
                        }
                    }
                }
                if (code == EOFCode)
                {
                    // At EOF, write the rest of the buffer.
                    while (cur_bits > 0)
                    {
                        outs.Add((byte)(cur_accum & 0xff));
                        cur_accum >>= 8;
                        cur_bits -= 8;
                    }
                }
            }

            void ClearBlock(List<byte> outs)
            {
                ClearHash(hsize);
                free_ent = ClearCode + 2;
                clear_flg = true;
                Output(ClearCode, outs);
            }

            var ent = GetNextPixel();
            hshift = 0;
            for (fcode = hsize; fcode < 65536; fcode *= 2)
            {
                ++hshift;
            }
            hshift = 8 - hshift; // set hash code range bound
            hsize_reg = hsize;
            ClearHash(hsize_reg);

            Output(ClearCode, outData);

            outerLoop:
            while ((c = GetNextPixel()) != EOF)
            {
                fcode = (c << maxbits) + ent;
                i = (c << hshift) ^ ent; // xor hashing
                if (htab[i] == fcode)
                {
                    ent = codetab[i];
                    continue;
                }
                else if (htab[i] >= 0) // non-empty slot
                {
                    disp = hsize_reg - i; // secondary hash (after G. Knott)
                    if (i == 0)
                    {
                        disp = 1;
                    }
                    do
                    {
                        if ((i -= disp) < 0)
                        {
                            i += hsize_reg;
                        }
                        if (htab[i] == fcode)
                        {
                            ent = codetab[i];
                            goto outerLoop;
                        }
                    } while (htab[i] >= 0);
                }
                Output(ent, outData);
                ent = c;
                if (free_ent < maxmaxcode)
                {
                    codetab[i] = free_ent++; // code -> hashtable
                    htab[i] = fcode;
                }
                else
                {
                    ClearBlock(outData);
                }
            }
            Output(ent, outData);
            Output(EOFCode, outData);
            return outData.ToArray();
        }

        public int[] Decompress(byte[] compressedData, int minimumCodeSize)
        {
            uint mask = 0x01;
            int inputLength = compressedData.Length;
            var pos = 0;

            int readCode(int size)
            {
                int code = 0x0;
                for (var i = 0; i < size; i++)
                {
                    var val = compressedData[pos];
                    int bit = (val & mask) != 0 ? 1 : 0;
                    mask <<= 1;
                    if (mask == 0x100)
                    {
                        mask = 0x01;
                        pos++;
                        inputLength--;
                        if (inputLength == 0)
                        {
                            // Break here if the data ends and there is still no EOI code
                            // (maybe because of a buggy encoder).
                            break;
                        }
                    }
                    code |= (bit << i);
                }
                return code;
            }

            var indexStream = new List<int>();
            var clearCode = 1 << minimumCodeSize;
            var eoiCode = clearCode + 1;
            var codeSize = minimumCodeSize + 1;
            var dict = new List<List<int>>();

            void Clear()
            {
                dict.Clear();
                codeSize = minimumCodeSize + 1;
                for (int i = 0; i < clearCode; i++)
                {
                    dict.Add(new List<int>() { i });
                }
                dict.Add(new List<int>());
                dict.Add(null);
            }

            int code = 0x0;
            while (inputLength > 0)
            {
                int last = code;
                code = readCode(codeSize);
                if (code == clearCode)
                {
                    Clear();
                    continue;
                }
                if (code == eoiCode)
                {
                    break;
                }
                if (code < dict.Count)
                {
                    if (last != clearCode)
                    {
                        var lst = new List<int>(dict[last]);
                        lst.Add(dict[code][0]);
                        dict.Add(lst);
                    }
                }
                else
                {
                    //if (last != clearCode)
                    {
                        var lst = new List<int>(dict[last]);
                        lst.Add(dict[last][0]);
                        dict.Add(lst);
                    }
                }
                indexStream.AddRange(dict[code]);
                if (dict.Count == (1 << codeSize) && codeSize < 12)
                {
                    // If we're at the last code and codeSize is 12, the next code
                    // will be a clearCode, and it'll be 12 bits long.
                    codeSize++;
                }
            }
            return indexStream.ToArray();
        }
    }

    That's all from my side.
  7. BlackJoker

    Custom GIF decoder issue

    OK, it seems your basic idea is correct. I updated my decoder with the following code:

    pixels = new byte[gifImage.Descriptor.Width * gifImage.Descriptor.Height * 3];
    var baseFrame = gifImage.Frames[frameIndex - 1];
    var originalIndexStream = baseFrame.IndexData;
    var currentIndexStream = new List<int>(originalIndexStream).ToArray();

    for (int i = 0; i < frame.Descriptor.Height; ++i)
    {
        var dstIndex = ((frame.Descriptor.OffsetTop + i) * gifImage.Descriptor.Width) + frame.Descriptor.OffsetLeft;
        var srcIndex = i * frame.Descriptor.Width;
        Array.Copy(frame.IndexData, srcIndex, currentIndexStream, dstIndex, frame.Descriptor.Width);
    }

    for (int i = 0; i < currentIndexStream.Length; i++)
    {
        if (currentIndexStream[i] == 63)
        {
            currentIndexStream[i] = originalIndexStream[i];
        }
        var colors = colorTable[currentIndexStream[i]];
        pixels[offset] = colors.R;
        pixels[offset + 1] = colors.G;
        pixels[offset + 2] = colors.B;
        offset += 3;
    }

    frame.IndexData = currentIndexStream;
    frame.RawPixels = pixels;

    Here I copy the current frame's index buffer into a copy of the previous frame's buffer so the sizes fully match, and then I fill the pixel buffer with colors; if the current index is 63, I take the index from the previous frame instead. And here is my result for the 10th frame: BUT: currently my algorithm is hardcoded to index 63, because I know this index returns an incorrect color. How do I solve this in the general case? How do I know which color index is correct and which is not? The color table itself seems to have no incorrect indexes at all. How do standard GIF decoders solve this?
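A general-case answer to the question above (my sketch, not from the thread): GIF89a carries per-frame transparency in the Graphic Control Extension (block label 0xF9). When its transparency flag is set, pixels equal to its TransparentColorIndex should show through to the previous frame rather than being looked up in the color table, so the hardcoded 63 can be replaced by the value parsed from that block. Assuming a hypothetical GifFrame type holding the parsed extension:

```csharp
// Hypothetical types: GifFrame.GraphicControl is the parsed 0xF9 block.
int? GetTransparentIndex(GifFrame frame)
{
    var gce = frame.GraphicControl;          // null if the frame has no GCE
    if (gce != null && gce.HasTransparency)  // bit 0 of the packed flags byte
        return gce.TransparentColorIndex;    // 4th data byte of the block
    return null;                             // this frame has no transparency
}

// In the compositing loop, instead of the hardcoded 63:
// var t = GetTransparentIndex(frame);
// if (t.HasValue && currentIndexStream[i] == t.Value)
//     currentIndexStream[i] = originalIndexStream[i];
```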
  8. Hi, currently I am writing my own custom decoder for GIF, and I am facing an issue: when the file has only a global color table, the output image has incorrect colors. Here is my LZW decompression algorithm:

    private Image DecodeInternalAlternative(GifImage gif)
    {
        var frame = gif.Frames[70];
        var data = frame.CompressedData;
        var minCodeSize = frame.LzwMinimumCodeSize;
        uint mask = 0x01;
        int inputLength = data.Count;
        var pos = 0;

        int readCode(int size)
        {
            int code = 0x0;
            for (var i = 0; i < size; i++)
            {
                var val = data[pos];
                int bit = (val & mask) != 0 ? 1 : 0;
                mask <<= 1;
                if (mask == 0x100)
                {
                    mask = 0x01;
                    pos++;
                    inputLength--;
                }
                code |= (bit << i);
            }
            return code;
        }

        var indexStream = new List<int>();
        var clearCode = 1 << minCodeSize;
        var eoiCode = clearCode + 1;
        var codeSize = minCodeSize + 1;
        var dict = new List<List<int>>();

        void Clear()
        {
            dict.Clear();
            codeSize = frame.LzwMinimumCodeSize + 1;
            for (int i = 0; i < clearCode; i++)
            {
                dict.Add(new List<int>() { i });
            }
            dict.Add(new List<int>());
            dict.Add(null);
        }

        int code = 0x0;
        int last = 0;
        while (inputLength > 0)
        {
            last = code;
            code = readCode(codeSize);
            if (code == clearCode)
            {
                Clear();
                continue;
            }
            if (code == eoiCode)
            {
                break;
            }
            if (code < dict.Count)
            {
                if (last != clearCode)
                {
                    var lst = new List<int>(dict[last]);
                    lst.Add(dict[code][0]);
                    dict.Add(lst);
                }
            }
            else
            {
                if (last != clearCode)
                {
                    var lst = new List<int>(dict[last]);
                    lst.Add(dict[last][0]);
                    dict.Add(lst);
                }
            }
            indexStream.AddRange(dict[code]);
            if (dict.Count == (1 << codeSize) && codeSize < 12)
            {
                // If we're at the last code and codeSize is 12, the next code
                // will be a clearCode, and it'll be 12 bits long.
                codeSize++;
            }
        }

        var width = frame.Descriptor.width;
        var height = frame.Descriptor.height;
        var colorTable = frame.ColorTable;
        var pixels = new byte[width * height * 3];
        int offset = 0;
        for (int i = 0; i < width * height; i++)
        {
            var colors = colorTable[indexStream[i]];
            pixels[offset] = colors.R;
            pixels[offset + 1] = colors.G;
            pixels[offset + 2] = colors.B;
            offset += 3;
        }
        ...
    }

    For gifs which have local color tables, everything seems to decode fine, but if a gif has only a global color table, or has X and Y offsets, then the first frame is good and the other frames are not. Here are some examples: first, gifs with a local color table and NO offsets (all frames are the same size). And here is a gif file which has only a global color table and different frame sizes. Since the first frame is built correctly, I think this issue might be happening because of the horizontal and vertical offsets (which can differ for each frame), but the one thing I cannot understand in that case is why it happens at all: after decoding, I must have valid color table indices in the decoded array, and I should not care about offsets. I must also admit that the LZW decoding algorithm itself seems to work fine; at least it always produces an array of the expected size. I also attach my full test GIF files here. So, could someone point me in the right direction and explain why I see this behavior in one case and not in the other?
  9. BlackJoker

    Decode ICO file manually

    @frob Actually you are right. I found out that the last structure I am reading should be 40 bytes instead of 46 bytes, so those 6 bytes were producing all these errors. ICO files now seem to be read correctly.
  10. BlackJoker

    Decode ICO file manually

    BTW, one more interesting thing I found: if an ICO file has 24-bit color depth, the colors are good, but if it has 32-bit, the colors are messed up.
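A plausible explanation worth checking (my note, not confirmed in the thread): the DIB pixel data embedded in an ICO is stored bottom-up in BGRA order for 32-bit icons, so reading it as top-down RGB breaks both the channel order and the row order, while 24-bit data read with an ad-hoc swap may happen to look right. A sketch of the conversion:

```csharp
// Sketch: convert a bottom-up BGRA DIB buffer (as stored in 32-bit ICO
// images) into a top-down RGBA buffer.
byte[] BgraToRgba(byte[] src, int width, int height)
{
    var dst = new byte[src.Length];
    for (int y = 0; y < height; y++)
    {
        int srcRow = (height - 1 - y) * width * 4; // bottom-up -> top-down
        int dstRow = y * width * 4;
        for (int x = 0; x < width; x++)
        {
            int s = srcRow + x * 4;
            int d = dstRow + x * 4;
            dst[d]     = src[s + 2]; // R
            dst[d + 1] = src[s + 1]; // G
            dst[d + 2] = src[s];     // B
            dst[d + 3] = src[s + 3]; // A
        }
    }
    return dst;
}
```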
  11. BlackJoker

    Decode ICO file manually

    @xycsoscyx Regarding channels - I tried various combinations, but none of them gives me anything even close to the colors in the original ico file.
  12. BlackJoker

    Decode ICO file manually

    I tried with another ico and found that there are also RGB, AND and XOR masks (there is no such data in the icon above). The colors are indeed not correct, but it seems they are supposed to be modified by these masks somehow. Does anyone know how to correctly apply these masks?
  13. BlackJoker

    Decode ICO file manually

    @xycsoscyx Very interesting. I thought the colors inside an ico were in BGRA format... That is very strange. I need to check. Thanks for the pointer.
  14. Hi to all. Currently I am trying to decode an ICO file with my own loader, but I am getting weird results. Here is a screenshot from the debugger. I have an icon of 170*170 px with 32-bit color, but in the header info I see that the height is twice as big. That would be fine if the image size also corresponded to that size, but it does not: 170*170*4 = 115,600 and 170*340*4 = 231,200, but the image size is 119,680. After I read 115,600 bytes, there is a 4,080-byte array which is mostly filled with zeroes. If I understood correctly, this is the transparency buffer. I applied it, but the colors do not change either way. Here is how I read the color data:

    int realBitsCount = (int)iconImageInfo.Header.bitCount;
    bool hasAndMask = /*(realBitsCount < 32) &&*/ (height != iconImageInfo.Header.height);
    dataPtr = IntPtr.Add(dataPtr, Marshal.SizeOf<IconImageInfo>());
    var buffer = new byte[width * height * 4];
    var stream = new UnmanagedMemoryStream((byte*)dataPtr, iconInfo.BytesInRes);
    var sizeinBytes = realBitsCount / 8;
    int bufferOffset = 0;
    if (realBitsCount == 32)
    {
        for (int i = 0; i < buffer.Length; i += sizeinBytes)
        {
            stream.Read(buffer, bufferOffset, sizeinBytes);
            Utilities.Swap(ref buffer[bufferOffset], ref buffer[bufferOffset + 2]);
            bufferOffset += sizeinBytes;
        }
    }

    and here is how I read and apply the transparency:

    int shift;
    int shift2;
    byte bit;
    int mask;
    int boundary = width * realBitsCount; //!!! 32 bit boundary (http://www.daubnet.com/en/file-format-ico)
    while (boundary % 32 != 0) boundary++;
    boundary = width;
    while (boundary % 32 != 0) boundary++;
    var bufSize = iconInfo.BytesInRes - (width * height * (realBitsCount / 8));
    var transparencyBuffer = new byte[bufSize];
    stream.Read(transparencyBuffer, 0, (int)bufSize);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            shift = 4 * (x + y * width) + 3;
            bit = (byte)(7 - (x % 8));
            shift2 = (x + (height - y - 1) * boundary) / 8;
            var b = transparencyBuffer[shift2];
            mask = (0x01 & (b >> bit));
            var before = buffer[shift];
            buffer[shift] *= (byte)(1 - mask);
        }
    }

    And here is what I receive versus what I should receive (from left to right). Does anyone know what I am doing wrong here? Please help me.
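For reference (my own summary, not from the thread): the AND mask is a 1-bit-per-pixel bitmap whose rows are each padded to a 32-bit boundary, and the doubled height in the header exists precisely because the entry stores the XOR (color) image and the AND mask stacked together. The byte stride the loop above needs can be computed directly:

```csharp
// Each AND-mask row is `width` bits, rounded up to a multiple of 32 bits.
int andMaskStride = ((width + 31) / 32) * 4;   // bytes per mask row

// For pixel (x, y) in a bottom-up DIB:
//   byte index in the mask: (height - 1 - y) * andMaskStride + x / 8
//   bit within that byte (MSB first): 7 - (x % 8)
// A set bit means "screen" (transparent); a clear bit means opaque.
```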
  15. BlackJoker

    Vertices does not rendering

    Ok, thanks. I will try it.
GameDev.net is your game development community. Create an account for your GameDev Portfolio and participate in the largest developer community in the games industry.