

Member Since 06 Dec 2011
Offline Last Active Feb 24 2013 05:25 PM

Posts I've Made

In Topic: which cpu to model in VM interpreter?

12 January 2013 - 07:29 PM

The 68k is a total CISC beast. I wouldn't recommend trying to emulate it as something "simple." 


If you want extremely simple, do the 6502. It has only a handful of registers and no more than 256 opcodes. 


Of course, if you don't mind using someone else's emulator core to create your VM, you could use just about anything. You could also create your own VM's machine code, but that would require you to write an assembly back end for whatever C compiler you used, which is fairly nontrivial. 

In Topic: Quality of my code

12 January 2013 - 12:14 AM

My pet peeve is not making it clear which variables are member variables, but that's more of a C++ism than a C# one, I guess. The problem is that someone reading your code will have a hard time telling what scope a variable is in without something like m_ (or self., as in Python) on the front of it. It's just a readability nitpick, but at work, forgetting m_ (or the less common mistake of putting m_ on local-scope variables) is punishable by death. >:]
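A quick C++ illustration of the convention being described; the class and names here are hypothetical, just to show why the prefix helps:

```cpp
#include <cassert>
#include <string>
#include <utility>

// The m_ prefix marks member scope at a glance, so inside rename()
// there is no ambiguity between the parameter and the member.
class Player {
public:
    explicit Player(std::string name) : m_name(std::move(name)) {}

    void rename(const std::string& name) {
        m_name = name;  // member on the left, parameter on the right: obvious
    }

    const std::string& name() const { return m_name; }

private:
    std::string m_name;  // m_ = member variable
};
```

Without the prefix, the constructor and rename() would need this-> qualification (or different parameter names) to stay unambiguous, which is exactly the readability cost the post is complaining about.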

In Topic: On a scale from 1 to 10 how bad of an idea would it be to use a JSON like for...

10 January 2013 - 06:46 PM

Why not use Google Protocol Buffers instead? It will be much faster, and it supports cross-language communication. The only drawback is the awkward build process.
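For anyone unfamiliar with Protocol Buffers: instead of hand-writing a JSON-like format, you declare your messages in a .proto file and the protoc compiler generates the (de)serialization code for each language. A hypothetical definition for game data (field names are illustrative, not from this thread) looks like this, in the proto2 syntax that was current at the time:

```
// savegame.proto -- hypothetical example message
message SaveGame {
  required int32 version       = 1;
  optional string player_name  = 2;
  repeated int32 inventory_ids = 3;
}
```

The numbered tags are what make the wire format compact and forward-compatible, which is where most of the speed advantage over text formats comes from.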

In Topic: Need some help with XAudio2

28 December 2012 - 02:36 PM

Clicks? Maybe you put incorrect data into the buffer and play it using the voice interface.

Do you parse the wave file to extract the buffer?

PCM is supported by all versions of XAudio2.

This is a low-level API, and I think IEEE float would not work

(but I do not know that format).


P. S.

Which XAudio2 version is your program linked against?

For PCM, WAVEFORMATEX would be enough.


I'm not using a wav file; I'm generating the data for the buffers on the fly (as shown in the streamNextChunk function).  According to the header, I'm using XAudio2 version 2.7 (from the June 2010 SDK). 


The clicks I'm hearing are presumably a buffer underflow since I'm purposely starving the voice, but I would expect this to trigger a "voice starved for data" warning or a warning about an audio glitch; it does not. 

In Topic: Is OpenCL what I need?

28 December 2012 - 01:15 AM

OpenCL is certainly an option, but why not just use graphics shaders? OpenCL most likely won't give you access to hardware filtering, whereas a graphics shader will. OpenCL is more general purpose and can run on any kind of parallel hardware (multiple cores on a CPU, or CUDA cores on an nvidia card for example), but since you're working with a graphics algorithm anyway, you might as well use something like GLSL or HLSL.

The problem with GLSL/HLSL is that they are deeply integrated into the OpenGL/D3D pipeline. A shader operates on a single pixel, but my algorithm draws columns at a time. Ideally I would use a shader that, instead of outputting one pixel, could write pixels wherever it chooses. In short, if I were to use shaders to get the same effect, I would need exponentially more rays. This is a 2D raycaster like Wolfenstein 3D or Doom, and almost exactly like Comanche; it's not a good fit for the shading pipeline.


In that case, why not draw a series of 1-pixel-wide quads on the screen and do your ray casting in the vertex shader? That way the vertex shader is your "column" drawer, and the pixel shader just uses whatever value the vertex shader passed it to calculate the final color. 


Another idea is to have your pixel shader figure out which column its pixel falls in and use that information for the color. The pixel shader gets executed for every pixel covered by a rasterized primitive, so why not take advantage of that parallelism?