Graphing RGB Waveform (HLSL?)

Started by Michael Tanczos
3 comments, last by remigius 14 years, 1 month ago
I'm at a loss as to how to approach this. I'm using Direct3D9 and want to take an image and, for each vertical line of pixels, create a graph that plots the red components of each of those pixels on a new image. So if a point on the image being analyzed has coordinates (x, y), then for x = 4 all the red components from y = 0 to the image height would be plotted on the graph. The corresponding coordinates for drawing pixels on the new graph would be x = 4, with y ranging from 0 to 255 (the red component range). The result looks something like this: Waveform

However, I can't figure out how to actually graph the red components without reading the texture back into system memory (causing a GPU stall) and building the graph from that. Is this even possible with a shader? The problem I'm seeing is that for a given pixel coordinate, the shader alters the color attributes of the pixel at only that coordinate; i.e., you can't alter other pixels based on the pixel you are currently analyzing in the pixel shader, only the one you are currently shading.

- Michael Tanczos
How about "abusing" displacement mapping? Normally displacement maps are used in 3D to move the vertices of a mesh according to a height-map texture, so why not use the same trick in 2D? As far as I know you do need Shader Model 3.0 for it to work, though (texture fetch in the vertex shader).

Here's the idea:

- Set up a "mesh" covering one column of your image (several rectangles, most likely one per pixel); a triangle strip is probably most appropriate.
- In the vertex shader, transform the position to graph space, interpret the (original) position as a texcoord, sample your image there, and use the color channel as an offset in the y-direction for the position.

Then just blend, for instance additively. This is done for every column and for every color channel (a shader sketch follows the tweaks below).

Tweaks:
- Use a mesh that covers more columns, so fewer draw calls are needed.
- Sampling: if your image is really big, interpolation might come in handy (a mesh with a smaller resolution, sampling a filtered copy of the original image).
- Use a float-valued render target; this may give better results.
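Something along these lines, as a minimal vs_3_0 sketch (the sampler, struct, and entry-point names here are all made up; the mesh vertices are assumed to encode normalized (column, row) coordinates into the source image):

```hlsl
// vs_3_0 sketch of the displacement idea. The input "position" is really
// a normalized (column, row) coordinate into the source image.
sampler2D SourceSampler : register(s0); // source image, point filtering

struct VSIn  { float2 pos : POSITION; };   // (u, v) in [0, 1]
struct VSOut { float4 pos : POSITION; };

VSOut DisplaceVS(VSIn input)
{
    VSOut output;
    // Texture fetch in the vertex shader: SM 3.0 only, and tex2Dlod is
    // required since there are no derivatives in the vertex stage.
    float red = tex2Dlod(SourceSampler, float4(input.pos, 0, 0)).r;

    // Keep x at the column position; displace y by the red value.
    // Both are mapped from [0, 1] into clip space [-1, 1].
    output.pos = float4(input.pos.x * 2 - 1, red * 2 - 1, 0, 1);
    return output;
}
```

One D3D9 caveat: vertex texture fetch support on SM 3.0 hardware is patchy (some cards only sample 32-bit float formats, others don't support it at all), so test on your target hardware.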



Two possible approaches come to mind:

1. If you're targeting hardware that supports efficient vertex texturing (basically any DX10-capable hardware), you could just have a line or point list with the number of vertices equal to the height of your image. Then for each vertex, you just sample the texture and calculate the position on screen. The results wouldn't be pretty since it would just be a hard line, but it would be pretty easy to set up.

2. Draw a quad covering the area on-screen where your graph will be drawn. For each pixel in the graph, determine where you want to sample your image (based on the x position of the pixel) and sample it to get the red component. Then, based on the value of the red component and the y position of the pixel, determine whether you should output a bright color or a dark color. This would be more expensive, but could give you nice smooth lines. You also wouldn't need anything better than ps_2_0 (a rough sketch follows).
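As a hedged sketch of #2 (all names are made up; ps_3_0 is used here so the column scan can be a single dynamic loop, whereas a strict ps_2_0 version would need the scan split across several passes or limited to a fixed handful of samples):

```hlsl
// ps_3_0 sketch: for each graph pixel, scan the matching source column and
// count how many source pixels have a red value near this pixel's height.
sampler2D SourceSampler : register(s0);
float SourceHeight;  // source image height in pixels
float Tolerance;     // how close red must be to count, e.g. 1.0 / 255.0

float4 WaveformPS(float2 uv : TEXCOORD0) : COLOR0
{
    float hits = 0;
    // SM 3.0 caps a single loop at 255 iterations; nest loops (or issue
    // several passes) for taller images.
    for (int row = 0; row < 255; row++)
    {
        if (row >= SourceHeight)
            break;
        float v = (row + 0.5) / SourceHeight;
        // tex2Dlod avoids gradient trouble inside dynamic flow control.
        float red = tex2Dlod(SourceSampler, float4(uv.x, v, 0, 0)).r;
        // uv.y runs 0..1 top to bottom, so (1 - uv.y) is the red level
        // this graph row represents.
        if (abs(red - (1 - uv.y)) < Tolerance)
            hits += 1;
    }
    // Scale the hit count into a visible brightness.
    float lum = saturate(hits * 0.1);
    return float4(lum, lum, lum, 1);
}
```

The per-pixel cost grows with the image height, which is why this is the more expensive option of the two.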
Quote: Original post by MJP
(two approaches quoted in full above)


I'll have to look into #1 and the idea of displacement mapping proposed by unbird.

For #2, if I'm writing a pixel shader that operates on the finished "graph", I couldn't figure out how exactly to sample the source image for a given (x, y) pixel. Whether (x, y) should be lit up at all depends on there being some pixel in the entire x column of the source whose red brightness is y.

Another thought: would some type of GPU-based particle system work? The waveform does resemble a point-based particle system. I'm wondering if this could be done in two passes, with a pixel shader to transform each RGB pixel into some type of particle data and then a second shader to render that particle data (rough sketch below).

Something like this? I haven't found any demos of this technique yet.
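The rough idea I have in mind, as a sketch (all names are made up; it assumes vs_3_0 vertex texture fetch, one point primitive per source pixel, and additive blending; with vertex texture fetch the separate particle-data pass may not even be needed, since the source image itself can serve as the particle data):

```hlsl
// vs_3_0 sketch: treat every source pixel as a "particle". Each vertex of a
// D3DPT_POINTLIST carries the (u, v) of its source pixel; the shader moves
// the point to (column, red value) in graph space. With additive blending,
// overlapping points brighten, which gives the waveform its density look.
sampler2D SourceSampler : register(s0);

struct VSIn  { float2 srcUV : TEXCOORD0; }; // which source pixel this point represents
struct VSOut { float4 pos : POSITION; float4 color : COLOR0; };

VSOut ParticleVS(VSIn input)
{
    VSOut output;
    float red = tex2Dlod(SourceSampler, float4(input.srcUV, 0, 0)).r;
    output.pos   = float4(input.srcUV.x * 2 - 1,  // graph x = source column
                          red * 2 - 1,            // graph y = red value
                          0, 1);
    output.color = float4(0.05, 0, 0, 1);         // dim; additive blending accumulates
    return output;
}
```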

I'm in territory I haven't explored before, so it's a shot in the dark. I appreciate your help.

- Michael Tanczos
Quote: Original post by Michael Tanczos
I haven't found any demos of this technique yet.


My little demo here looks like it's doing something similar to what that paper suggests for D3D9.
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!

