
# Nik02

Member Since 18 Dec 2002

### #5306542 The use of spherical harmonics

Posted by Nik02 on 18 August 2016 - 06:33 AM

The array of spherical harmonic coefficients (that you now presumably have) can be thought of as the result of a discrete cosine transform of the lighting signal, where the spatial parameter space is a sphere.

To reconstruct the approximation of the signal (the light strength), you basically sum the basis functions together, weighted by the coefficients, using the same frequency multipliers and weights that you used to calculate the coefficients. This reconstruction closely resembles the process by which you obtain said coefficients. The forward and inverse operations on SHs are very similar to those of the DCT and FFT, for example.

Note that you should have three sets of SH coefficients, one for each channel in RGB. The summation is relatively simple if you perform the SH calculation on the RGB signal so that you get an array of float3s (RGB triplets) as a result; this is because in HLSL or GLSL, you can directly operate on 1- to 4-element vector primitives.
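A minimal sketch of that reconstruction, assuming band-0/band-1 (4-coefficient) real SH and per-channel RGB coefficients; the constants are the standard normalization factors:

```python
# First four real SH basis functions (bands 0 and 1), evaluated at a
# unit direction (x, y, z). The constants are the usual normalizations.
def sh_basis(x, y, z):
    return [
        0.282095,            # Y_0^0
        0.488603 * y,        # Y_1^-1
        0.488603 * z,        # Y_1^0
        0.488603 * x,        # Y_1^1
    ]

def reconstruct(coeffs_rgb, direction):
    """Sum the basis functions weighted by the per-channel coefficients.
    coeffs_rgb is three lists of 4 coefficients, one list per channel."""
    x, y, z = direction
    basis = sh_basis(x, y, z)
    return tuple(
        sum(c * b for c, b in zip(coeffs_rgb[ch], basis))
        for ch in range(3)
    )
```

In a shader you would do the same per-texel with float3 coefficients, so all three channels sum in one vector operation.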

### #5300676 Questions About Storing Opcodes With Operands

Posted by Nik02 on 14 July 2016 - 03:34 AM

Opcodes are stored in the virtual address space of the process. They are simply data, with a unique number for each opcode and variation. Parameters - or the actual data to be processed - can reside in other parts of the process's address space.

If you open a debugger on a running process and go to memory view, you can see the actual opcodes in the binary section of the process address space. Or, you can dump an executable without running it and view the program as a chunk of memory.

The CPU has instructions to copy data between RAM and registers, and these instructions themselves have their own opcodes.
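A toy sketch of the idea: opcodes and their operands are just numbers sitting side by side in one flat memory array, and the machine walks through them with a program counter. The opcode values here are made up, not real CPU encodings:

```python
# Made-up opcode numbers for a tiny stack machine.
PUSH, ADD, HALT = 0x01, 0x02, 0xFF

def run(memory):
    """Execute a program stored as plain bytes in a flat address space."""
    pc, stack = 0, []
    while True:
        op = memory[pc]
        if op == PUSH:                  # operand is the very next byte
            stack.append(memory[pc + 1])
            pc += 2
        elif op == ADD:                 # operands come from the stack
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == HALT:
            return stack.pop()

# "2 + 3" encoded as opcodes and inline operands:
program = [PUSH, 2, PUSH, 3, ADD, HALT]
```

Dumping `program` is exactly the "chunk of memory" view a debugger would show: you cannot tell opcode bytes from operand bytes without decoding them in order.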

### #5300020 IFFT?

Posted by Nik02 on 10 July 2016 - 01:36 PM

Cricket FFT has a fast implementation of the inverse as well. The difference between the forward and inverse algorithms is very small; the only effective difference is the sign of the exponent applied to the input terms when integrating over them (plus a normalization factor).
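The similarity can be sketched with a naive DFT; this illustrates the principle, not Cricket FFT's actual implementation - the inverse differs only in the exponent's sign and a 1/N scale:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) DFT. Forward and inverse share the same loop;
    only the exponent sign and the 1/N normalization differ."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n))
           for j in range(n)]
    if inverse:
        out = [v / n for v in out]
    return out
```

A round trip (`dft` followed by `dft(..., inverse=True)`) recovers the original signal up to floating-point error.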

Wikipedia has a good summary on the theory.

Here's the page for FFT specifically.

### #5295720 alpha value of texture always be 1.0f

Posted by Nik02 on 08 June 2016 - 11:13 PM

That code does not set the texel values, because you are using a read-only lock.

### #5295029 Shape interpolation?

Posted by Nik02 on 04 June 2016 - 08:44 PM

Two completely arbitrary meshes cannot be smoothly interpolated between. The minimum common attribute is the topology (number of holes and/or open edges). When you do have the same topology, you can try to parametrize the surfaces in 2 dimensions to establish the common mapping between them. After you have that, it is relatively easy to actually interpolate between the geometries.
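Once a common parametrization gives you a one-to-one vertex correspondence, the final interpolation step really is just a lerp; a minimal sketch, assuming both meshes already share topology and vertex ordering:

```python
def lerp_mesh(verts_a, verts_b, t):
    """Blend two vertex lists that share a common parametrization,
    i.e. verts_a[i] corresponds to verts_b[i]. t=0 gives mesh A,
    t=1 gives mesh B."""
    return [tuple(a + t * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]
```

All the hard work is in establishing that correspondence; the blend itself is trivial.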

### #5280049 Registry Entry Locator

Posted by Nik02 on 07 March 2016 - 01:05 PM

MSDN is Microsoft's official documentation.

### #5280015 Having problems with 3D pipeline

Posted by Nik02 on 07 March 2016 - 08:30 AM

World space is too early - consider that the camera transformation can (and likely will) rotate the geometry so that faces which were "backfacing" (if that even makes sense without a camera) are no longer oriented in their original directions. Projection can also affect the orientation of the faces with regard to the screen. Backface culling should be done, at the earliest, after projection.
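After projection, the test reduces to the sign of the screen-space triangle area; a sketch, where the winding convention (counter-clockwise front faces) is an assumption:

```python
def is_backfacing(p0, p1, p2):
    """Twice the signed area of a projected (screen-space) triangle,
    via the 2D cross product of its edge vectors. With CCW front
    faces, a negative sign means the face points away from the viewer."""
    area2 = ((p1[0] - p0[0]) * (p2[1] - p0[1])
             - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return area2 < 0
```

Note that the same triangle tested in world space could give the opposite answer once the camera and projection transforms are applied, which is exactly why the test belongs after projection.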

### #5278653 How to clear this offset bit?

Posted by Nik02 on 29 February 2016 - 02:16 AM

You could verify the function by reading the documentation of the bitwise operators used therein, and testing it by feeding various values to it and comparing the returned values with known, expected values. It would also be a good chance to learn unit testing.
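A minimal sketch of that testing approach, using a hypothetical bit-clearing function (the name and mask logic here are assumptions for illustration, not the code from the thread):

```python
def clear_bit(value, bit):
    """Hypothetical example: clear bit `bit` of `value` by AND-ing
    with the inverted single-bit mask."""
    return value & ~(1 << bit)

# Unit-test style checks: feed known inputs, compare against
# known, expected outputs.
assert clear_bit(0b1111, 1) == 0b1101
assert clear_bit(0b1000, 3) == 0
assert clear_bit(0, 5) == 0   # clearing an already-clear bit is a no-op
```

The same pattern scales up: wrap the assertions in a test framework and run them on every change.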

It is dangerous to use copy/pasted code in production projects, if you don't understand what it does. When stuff breaks, it is your fault, whether or not you understand why.

### #5278045 HLSL SM3 Loop into a 1280x720 sampler texture

Posted by Nik02 on 25 February 2016 - 04:13 AM

256x256 does fit within your hardware's capabilities. Still, 65536 samples per pixel is a very large number.

Depending on what you're trying to do, you could use mip-mapping to downsample your source texture so that you'd drastically reduce the number of samples needed. Of course, this sacrifices a little bit of precision, but you would gain a lot of performance in return.
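A sketch of one mip step - averaging 2x2 blocks - which quarters the number of texels (and therefore samples) per level; assumes a single-channel texture with even dimensions:

```python
def downsample_2x(pixels, w, h):
    """One mip step: average each 2x2 block into one texel.
    `pixels` is a flat, row-major list of floats; w and h are even."""
    out = []
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            s = (pixels[y * w + x] + pixels[y * w + x + 1]
                 + pixels[(y + 1) * w + x] + pixels[(y + 1) * w + x + 1])
            out.append(s / 4.0)
    return out
```

Eight such steps reduce a 256x256 region to a single averaged texel, so one sample from a deep mip level can stand in for tens of thousands of samples from the base level.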

Posted by Nik02 on 21 February 2016 - 01:27 AM

Consider that the GPU can usually cache the output of the vertex shader, but not the geometry shader. This is because very little can be assumed about GS output in advance.

### #5275915 Draw arc from 2 points and radius

Posted by Nik02 on 16 February 2016 - 08:26 AM

See the attached file.

Consider that n1 is the normalized vector from the center to the arc start point p1. Its tangent t1 with respect to the circle is simply n1 rotated 90 degrees. If you need to "place" the tangent vector, move it so that it starts from the arc point. However, for measuring its screen-space angle, you don't have to offset it.

angleOfTangent1 = atan2 (t1.y, t1.x)

Since you calculate n1 already, you can find the angle of the tangent vector directly from it: the 90-degree rotation maps (x, y) to (-y, x), so feed the swapped components of n1 to atan2, negating the one that becomes the x argument - that is, atan2 (n1.x, -n1.y).

You can find t2 and its angle in exactly the same way.

Note that common implementations of atan2 return the angle in radians; 90 degrees is pi/2 radians. Also, atan2 is usually a helper layer on top of atan that considers whether either or both components are zero or negative, and adjusts the return value accordingly to arrive at the correct angle across the full circle. atan, by itself, can only distinguish angles within a half circle.
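Putting the above together in a small sketch (the coordinate system and counter-clockwise rotation direction are assumptions):

```python
import math

def tangent_angle(center, p):
    """Angle of the circle tangent at point p, for a circle at `center`.
    The tangent is the radial direction rotated 90 degrees CCW:
    (x, y) -> (-y, x), so atan2 receives the swapped components of n1
    with the second one negated."""
    nx, ny = p[0] - center[0], p[1] - center[1]
    length = math.hypot(nx, ny)
    nx, ny = nx / length, ny / length     # normalized n1
    return math.atan2(nx, -ny)            # angle of t1 = (-n1.y, n1.x)
```

For example, at the rightmost point of a circle the radial direction points along +x, so the tangent points straight up (pi/2 radians).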

#### Attached Thumbnails

Posted by Nik02 on 04 February 2016 - 08:20 AM

If you don't have the skills to develop the game prototype yourself, you'll have to pay for other people to develop it, or have such an impressive portfolio that you attract people to work for free initially, with the promise of future profits.

Professionals usually get the job done, but you generally need to pay at least something up front, because their livelihood is tied to the work. Some developers may work for free initially, but if you don't promise anything concrete, you run the risk of losing them without notice; they are not obligated to work for you, because you are not obligated to pay them.

Depending on which of these is your approach, you could post in "help wanted" (if seeking free help) or "classifieds" (if seeking professional help for a fee).

Finally, everyone has ideas. Even if you have the greatest game idea ever, it is the execution of the idea that matters.

### #5273806 How find sample positions in mulisampling rendering?

Posted by Nik02 on 02 February 2016 - 01:50 AM

MJP's approach works even when GetSamplePosition is not available or applicable.

### #5272822 Maximum image size of a WriteableBitmap?

Posted by Nik02 on 27 January 2016 - 06:07 AM

It is entirely possible to load the compressed files into memory and unpack them only when you need to display them. Simply load them into byte vectors and, when you need the Bitmap object, initialize it from memory instead of from a file.

Note that scroll viewer virtualization (as in not keeping all items in memory) is a very common performance optimization technique in data-heavy user interfaces.

In combination with the aforementioned things, you could use an alternative PNG library that lets you specify the rectangle to load from the image, while not consuming the memory for the rest of the image. The format itself makes this possible, but I don't know specific libraries off the top of my head that can achieve this.
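A sketch of the keep-compressed, unpack-on-demand idea; zlib stands in here for the PNG decoding step a real viewer would perform on the stored bytes:

```python
import zlib

class CompressedImageCache:
    """Keep image data compressed in memory; decompress only when an
    item must actually be displayed."""
    def __init__(self):
        self._store = {}

    def add(self, key, raw_bytes):
        # Store compressed; a real app would keep the original PNG
        # bytes, which are already compressed on disk.
        self._store[key] = zlib.compress(raw_bytes)

    def get(self, key):
        # Unpack on demand, e.g. when the item scrolls into view.
        return zlib.decompress(self._store[key])
```

Combined with scroll-viewer virtualization, only the handful of visible items ever exist in uncompressed form.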

### #5272352 Going 64-bit, can't use Jet4 any more :(

Posted by Nik02 on 23 January 2016 - 07:37 AM

Also, consider replacing the Jet engine with SQL Server.
