# GPU-friendly compression of a 2D signal

## Recommended Posts

Hey

I want to try shading particles by computing a "small" number of samples, e.g. 10, in the VS. I only need to compute the intensity of the light, so essentially it's a single scalar signal over 2 dimensions.

Now I want to compress this data, pass it on to the PS and decompress it there (the particle is a single quad and the data is passed through interpolators). I will accept a certain amount of error as long as there are no hard edges, i.e. the result stays blurred.

The compressed data has to be small and compression/decompression fast. Does anyone know of a good way to do this?

Maybe I could do something Fourier-based, but I'm not sure what basis functions to use.

Thanks

##### Share on other sites

Combine the results of the samples using Spherical Harmonics and output that in the VS.
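For example, a 2-band projection looks roughly like this (a plain C++ sketch of the idea, not shader code; the struct and function names are made up for this example, only the SH basis constants are standard):

```cpp
#include <cassert>
#include <cmath>

// 2-band spherical harmonics: 4 floats total.
// c[0] is the constant (L0) band, c[1..3] the directional (L1) band.
struct SH2 { float c[4]; };

// Project one intensity sample from direction dir into the SH coefficients.
// This would run per sample in the vertex shader.
void AddSample(SH2& sh, const float dir[3], float intensity)
{
    const float Y00 = 0.282095f;  // sqrt(1 / (4*pi))
    const float Y1  = 0.488603f;  // sqrt(3 / (4*pi))
    sh.c[0] += Y00 * intensity;
    sh.c[1] += Y1 * dir[0] * intensity;
    sh.c[2] += Y1 * dir[1] * intensity;
    sh.c[3] += Y1 * dir[2] * intensity;
}

// Evaluate the reconstructed intensity toward a direction (pixel shader side).
float Eval(const SH2& sh, const float dir[3])
{
    return 0.282095f * sh.c[0]
         + 0.488603f * (sh.c[1]*dir[0] + sh.c[2]*dir[1] + sh.c[3]*dir[2]);
}
```

The 4 coefficients fit in a single interpolator, and evaluation is just a dot product.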

##### Share on other sites

If you only need 2D, full SH is more than necessary, but I'm unsure what you're trying to do.

However, I use the code below to calculate the curvature directions of a mesh. The problem there is that the curvature direction is the same on opposite sides, e.g. vec2(0.7, 0.7) equals vec2(-0.7, -0.7), so I cannot simply add vectors to get an average curvature direction.

Instead I express directions with a sine wave that has two lobes pointing forwards and backwards; the phase is the direction and the amplitude is the intensity. Adding two sine waves of the same frequency always results in another single sine wave, so I get an accurate result from summing any number of samples. (The same principle is used in SH and the Fourier transform.)

So, if this sounds interesting to you, you could do the same for lighting, but you would want the lobe pointing in only one direction and not the opposite as well, which means replacing the factors of 2 with 1 and adjusting some other things.

But: for lighting I would just sum up vector-wise and accept the error that comes with that. Also note that my approach does not have a constant band like SH, so the same amount of light coming from right and left would sum to zero - it might be worth adding such a band for lighting.

```cpp
#include <cmath>

static const float PI = 3.14159265f;

// A sinusoid amplitude * cos(2*angle + phase): two opposite lobes,
// so a direction and its negation map to the same wave.
struct Sinusoid
{
	float phase;
	float amplitude;

	Sinusoid ()
	{
		phase = 0;
		amplitude = 0;
	}

	Sinusoid (const float phase, const float amplitude)
	{
		this->phase = phase;
		this->amplitude = amplitude;
	}

	Sinusoid (const float *dir2D, const float amplitude)
	{
		this->amplitude = amplitude;
		phase = PI + atan2 (dir2D[1], dir2D[0]) * 2.0f;
	}

	float Value (const float angle) const
	{
		return cos(angle * 2.0f + phase) * amplitude;
	}

	// Add another sinusoid of the same frequency;
	// the sum is again a single sinusoid.
	void Add (const Sinusoid &op)
	{
		float a = amplitude;
		float b = op.amplitude;
		float p = phase;
		float q = op.phase;

		phase = atan2(a*sin(p) + b*sin(q), a*cos(p) + b*cos(q));
		float t = a*a + b*b + 2*a*b * cos(p-q);
		amplitude = sqrt(fmax(0.0f, t));
	}

	float PeakAngle () const
	{
		return phase * -0.5f;
	}

	float PeakValue () const
	{
		return Value(PeakAngle ());
	}

	void Direction (float *dir2D, const float angle) const
	{
		float scale = (amplitude + Value (angle)) * 0.5f;
		dir2D[0] = sin(angle) * scale;
		dir2D[1] = cos(angle) * scale;
	}
};
```
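The key property the combine step relies on can be checked numerically. This standalone sketch repeats the same amplitude/phase formulas and verifies that the sum of two equal-frequency sinusoids really is a single sinusoid:

```cpp
#include <cassert>
#include <cmath>

// A single wave amplitude * cos(x + phase).
struct Wave { float amplitude, phase; };

// Combine a*cos(x+p) and b*cos(x+q) into one wave,
// using the same formulas as the struct above.
Wave Combine(float a, float p, float b, float q)
{
    Wave w;
    w.phase = std::atan2(a*std::sin(p) + b*std::sin(q),
                         a*std::cos(p) + b*std::cos(q));
    float t = a*a + b*b + 2*a*b*std::cos(p - q);
    w.amplitude = std::sqrt(std::fmax(0.0f, t));
    return w;
}

// True if the combined wave reproduces the pointwise sum everywhere sampled.
bool IdentityHolds(float a, float p, float b, float q)
{
    Wave w = Combine(a, p, b, q);
    for (int i = 0; i < 100; ++i)
    {
        float x = i * 0.1f;
        float sum = a*std::cos(x + p) + b*std::cos(x + q);
        if (std::fabs(sum - w.amplitude*std::cos(x + w.phase)) > 1e-4f)
            return false;
    }
    return true;
}
```

This also covers the cancellation case mentioned above: equal amplitudes with opposite phases combine to amplitude zero.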

##### Share on other sites

Thanks for the replies!

SH could be an option and it did cross my mind. It's 2D indeed. The domain when using SH is usually a sphere. In my case I want to use it on a quad. Would that have an impact? I understand that spherical coords can be regarded as a square. I'm just wondering if the domain has any impact & if there would be some other basis functions more suited for an actual quad.

@JoeJ your suggestion looks similar to a Fourier series, which also crossed my mind. The fact that sin/cos operations are expensive on GPUs made me a little less keen. The general idea of treating the problem as some sort of curve is good though. I could use something like a power function, which could be encoded in 4 params - a uv intensity multiplier & a uv exponent - given that I pass on the actual colour of one of the corners (on the other hand, this approach would only be able to depict gradients).

I'm not 100% sure how much detail I need to encode in the quad, but preferably as much as possible for as little cost as possible.
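Sketched in plain C++, the power-function idea might look like this (the additive form and the parameter names are just one possible interpretation, not a worked-out design):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical 4-parameter gradient: a per-axis multiplier and exponent,
// interpolated across the quad, plus one known corner intensity.
// At uv = (0,0) the result is exactly the stored corner intensity;
// each axis then adds a monotonic power-curve ramp on top of it.
float GradientIntensity(float u, float v,
                        float mulU, float expU,
                        float mulV, float expV,
                        float cornerIntensity)
{
    return cornerIntensity
         + mulU * std::pow(u, expU)
         + mulV * std::pow(v, expV);
}
```

As noted, this can only represent monotonic gradients along each axis, so it trades detail for decode cost.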

##### Share on other sites
2 hours ago, 51mon said:

> Thanks for the replies!
>
> SH could be an option and it did cross my mind. It's 2D indeed. The domain when using SH is usually a sphere. In my case I want to use it on a quad. Would that have an impact? I understand that spherical coords can be regarded as a square. I'm just wondering if the domain has any impact & if there would be some other basis functions more suited for an actual quad.

You can wrap the Fourier transform around a circle and treat the circle as a square. You can then decide how many bands you need: the 1st band is a constant term, the 2nd can encode a lobe towards a single direction, and adding more bands means you can approximate multiple lights more accurately.

SH is similar: the smallest 2-band version has one number for the constant term (1st band) and a 3D vector (2nd band) for a directional bump. (This band tells you the dominant light direction if you gather many samples, similar to my curvature example.)

So for 2D you would need 3 numbers: a constant term and a 2D direction (or angle and amplitude like I did, but a direction avoids the trig functions when decoding).
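A minimal sketch of that 3-number scheme in plain C++ (names are hypothetical; Encode would run per particle in the VS, Decode in the PS):

```cpp
#include <cassert>
#include <cmath>

// Three floats: a constant (ambient) term plus a 2D dominant-direction
// vector. This is a 2-band circular-harmonics / Fourier-on-a-circle encoding.
struct Encoded2D { float constant; float dirX, dirY; };

// Accumulate n light samples, each given as an angle plus an intensity.
Encoded2D Encode(const float* angles, const float* intensities, int n)
{
    Encoded2D e = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < n; ++i)
    {
        e.constant += intensities[i];                    // band 0: average
        e.dirX += std::cos(angles[i]) * intensities[i];  // band 1: direction
        e.dirY += std::sin(angles[i]) * intensities[i];
    }
    e.constant /= n;
    e.dirX /= n;
    e.dirY /= n;
    return e;
}

// Reconstruct the intensity toward a query angle. If the query direction is
// already available as a unit vector, this is just a dot product - no trig.
float Decode(const Encoded2D& e, float angle)
{
    return e.constant + e.dirX * std::cos(angle) + e.dirY * std::sin(angle);
}
```

Note that equal light from opposite sides cancels in the direction vector but survives in the constant term, which addresses the zero-sum caveat from the curvature example.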
