TheUnnamable

Member Since 20 Aug 2009
Offline Last Active Yesterday, 02:52 PM

Topics I've Started

Texture format for heightmap brushes

24 January 2014 - 04:51 PM

I'm writing a framework which makes it easier for my team to implement procedural terrain generation algorithms. The output would be a nice, randomly generated heightmap. The basic idea is to share work between the CPU and the GPU, so I've written a texture class which can be used for primitives, drawn onto (GPU), and written/read pixel by pixel (CPU). I upload/download texture data to/from the GPU when needed; a rough sketch of that sync is below.
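Conceptually, the sync between the two sides looks like this (a simplified sketch with made-up member names, assuming GLEW for the GL headers; not the actual class):

    #include <GL/glew.h>
    #include <vector>

    // Simplified sketch of the CPU<->GPU sync (hypothetical names)
    class SyncedTexture
    {
        GLuint             m_tex;            // GL texture object
        int                m_width, m_height;
        std::vector<float> m_pixels;         // CPU-side copy, one float per texel

    public:
        // Push the CPU-side pixels to the GPU after pixel-by-pixel edits
        void upload()
        {
            glBindTexture(GL_TEXTURE_2D, m_tex);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_width, m_height,
                            GL_LUMINANCE, GL_FLOAT, m_pixels.data());
        }

        // Pull the texture back to the CPU after drawing into it on the GPU
        void download()
        {
            glBindTexture(GL_TEXTURE_2D, m_tex);
            glGetTexImage(GL_TEXTURE_2D, 0, GL_LUMINANCE, GL_FLOAT,
                          m_pixels.data());
        }
    };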

We're targeting DX9-compatible hardware.

 

When deciding on a texture format, I settled on GL_LUMINANCE + GL_FLOAT, since I only need one channel for the height. Later I wanted to add brush support, so I switched to GL_LUMINANCE_ALPHA. I've started changing my framework so that it uses the highest available precision (so GL_FLOAT is more of a hint, no longer a requirement).

 

I did some empirical research on GL_LUMINANCE_ALPHA's support and realized that it's far from optimal. On my NVIDIA GeForce 710M it's supported, but drawing with it is ridiculously slow. My integrated Intel HD 3000 lacks support for both GL_LUMINANCE and GL_LUMINANCE_ALPHA.

Looking for a better-supported replacement, I noticed GL_RG, which also has two components, but by default it doesn't expand the way I need. I've checked texture swizzling, but it has only been around since ~2008, which is not optimal for me.

I could expand my data to GL_RGBA, which has the best chance of being supported, but that way I'd have two unused channels.
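A sketch of what that expansion could look like - hypothetical heights/brushAlpha arrays, with the height in R and the brush mask in A. GL_RGBA32F_ARB comes from ARB_texture_float; on stricter hardware I'd have to drop to GL_RGBA16F_ARB or plain GL_RGBA8:

    #include <GL/glew.h>
    #include <vector>

    // Sketch of the GL_RGBA fallback: R = height, A = brush mask,
    // G and B simply wasted (hypothetical input arrays)
    GLuint uploadHeightmap(const std::vector<float>& heights,
                           const std::vector<float>& brushAlpha,
                           int width, int height)
    {
        // Interleave the two CPU-side channels into an RGBA buffer
        std::vector<float> rgba(width * height * 4, 0.0f);
        for(size_t i = 0; i < heights.size(); i++)
        {
            rgba[i*4 + 0] = heights[i];    // R: height
            rgba[i*4 + 3] = brushAlpha[i]; // A: brush mask
        }

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height,
                     0, GL_RGBA, GL_FLOAT, rgba.data());
        return tex;
    }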

Or I could have two separate single-channel (GL_R) textures, one for the height and one for the alpha. That seems valid too, but then I have to draw everything twice.

 

I'd appreciate any input on my plans - which method should I use, how much compatibility should I expect, and so on.

 

tl;dr: GL_LUMINANCE_ALPHA is poorly supported; I need an alternative that can be used on DX9-compatible GPUs.


Best of Game Maker Hungary

16 November 2013 - 02:46 PM

Hello!

We have a local community of Game Maker users, and we decided to put together some of our doodles and games from the past years and create a video :) We would like to share this video with as many people as possible, to get some feedback, and maybe reach some fellow Game Maker users, Hungarian and non-Hungarian alike :)

[ five thumbnail screenshots from the video ]

Click here or on the images to see the video!

( they are just thumbnails )

 

Also, there will be some new videos in the near future, so stay tuned :)


How to handle endianness?

17 August 2013 - 08:03 AM

I've always wanted to write a library to read and write simple data types (integers, floats, etc.) to/from files. I also wanted the library to be as cross-platform as it can be - if I write an integer to a file on a Mac, I want to get the same number back from that file on Windows.

 

I've actually written that library, but I feel it's overkill. It's designed to handle little-endian data. It gives you a buffer class, to which you can write integers, floats, or just arbitrary blobs of data. The overkill I mentioned happens when writing specific data formats: it constructs everything bit by bit, e.g.:

    buffer& operator<<(buffer& os, unsigned short x)
    {
        // Grow the buffer if there's no room for two more bytes
        if(os.size() < os.tellp()+2) { os.resize(os.size()+2); }
        size_t offs = os.tellp()*8; // Bit index to start writing at

        // Write all 16 bits, least significant first (little-endian)
        for(byte_t i=0; i<16; i++) { os.set_bit(offs+i, (x>>i)&1); }

        os.seekp(2, 1); // Advance the write position by two bytes
        return os;
    }

I'm suspecting I could do something like this:

    buffer& operator<<(buffer& os, unsigned short x)
    {
        os.put(x % 256); // Low byte first (little-endian)
        os.put(x / 256); // Then the high byte
        return os;
    }

And it would still be readable on most platforms.
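The matching read side would be just as simple (assuming the buffer class gets a get() method - hypothetical - that returns the next byte and advances the read position):

    buffer& operator>>(buffer& is, unsigned short& x)
    {
        unsigned short lo = is.get(); // Low byte was written first
        unsigned short hi = is.get();
        x = lo | (hi << 8);
        return is;
    }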

 

So my questions:

  • If I handle bytes instead of bits, will it still be readable on other platforms? Or on other computers?
  • With the simplified method, how would I handle signed integers? Use the absolute value but write inverted bytes (the ~x operator)? (see the sketch after this list)
  • Floats?
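Here is the kind of thing I have in mind for both - a sketch that assumes the target platforms use two's complement integers and IEEE-754 floats (true on practically every platform I care about):

    #include <cstdint>
    #include <cstring>

    // Signed integers: casting to the unsigned type of the same width
    // keeps the two's complement bit pattern, so the byte-splitting
    // trick works unchanged
    buffer& operator<<(buffer& os, short x)
    {
        unsigned short u = static_cast<unsigned short>(x);
        os.put(u % 256); // Low byte first
        os.put(u / 256);
        return os;
    }

    // Floats: copy the IEEE-754 bit pattern into a uint32_t and write
    // that little-endian (memcpy avoids the aliasing issues of a cast)
    buffer& operator<<(buffer& os, float x)
    {
        uint32_t u;
        std::memcpy(&u, &x, sizeof u);
        for(int i = 0; i < 4; i++)
        {
            os.put(u & 0xFF); // Lowest byte first
            u >>= 8;
        }
        return os;
    }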

Also, a note: another concern while writing this library was conforming to some kind of standard. So, for integers, I just checked how Windows' calc.exe handles them; for floats, I checked Wikipedia (http://en.wikipedia.org/wiki/Single-precision_floating-point_format).

You can check the whole source at https://github.com/elementbound/binio. The interesting parts are binio/buffer.h (the buffer class), and binio/formats.h and .cpp.


Mysterious Segmentation Fault around WSARecv

19 July 2010 - 02:34 AM

I'm using C++ and ( currently ) targeting Windows.
I wrote a socket handler class for easier use (it uses select, by the way), and I want to port it to overlapped I/O on Windows, because I've read it's much better, especially for handling many connections.
Receiving is built around a loop, which is started by StartReceive(), then continued by checkRecv() and Recv().

The class may have more problems, but I want to solve this one first: on a successful connection, StartReceive() is called, and the app crashes inside the WSARecv call. I get a segmentation fault and can't figure out why.
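For reference, I understand a single overlapped receive should look roughly like this (simplified, made-up names; not my actual class). From what I've read, the WSABUF, the data buffer, and the WSAOVERLAPPED all have to stay alive until the operation completes, so they can't be locals:

    #include <winsock2.h>

    // Minimal overlapped receive sketch (hypothetical names)
    struct Connection
    {
        SOCKET        sock;
        char          data[4096];
        WSABUF        buf;
        WSAOVERLAPPED overlapped;
    };

    bool startReceive(Connection& c)
    {
        c.buf.buf = c.data;          // WSABUF must point at live storage
        c.buf.len = sizeof(c.data);
        ZeroMemory(&c.overlapped, sizeof(c.overlapped)); // must be zeroed

        DWORD flags = 0;
        int result = WSARecv(c.sock, &c.buf, 1, NULL, &flags,
                             &c.overlapped, NULL);
        // A merely pending overlapped operation is reported as an
        // "error", so treat WSA_IO_PENDING as success
        return result == 0 || WSAGetLastError() == WSA_IO_PENDING;
    }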

Here's a pastebin: http://pastebin.com/P9mxDR0d
I tried to make it well-commented.

[ Sorry, English is not my native language - if I left out any needed information, just ask ]
