magicstix

Members
  • Content count

    46
  • Joined

  • Last visited

Community Reputation

191 Neutral

About magicstix

  • Rank
    Member
  1. which cpu to model in VM interpreter?

    The 68k is a total CISC beast; I wouldn't recommend trying to emulate it as something "simple." If you want the extreme in simplicity, do the 6502. It only has a handful of registers and no more than 256 opcodes.

    Of course, if you don't mind using someone else's emulator core to create your VM, you could use just about anything. You could also create your own VM machine code, but that would require you to write the assembly back end for any C compiler you used, which is fairly nontrivial.
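    (Illustrative sketch, not from the original post: a minimal fetch-decode-execute loop for a homemade bytecode VM; the opcodes and names are invented for the example.)

[CODE]
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Invented toy instruction set for a stack-based VM.
enum Op : uint8_t { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

void run(const std::vector<uint8_t>& code)
{
    std::vector<int32_t> stack;
    std::size_t pc = 0;                 // program counter
    for (;;)                            // fetch, decode, execute
    {
        switch (code[pc++])
        {
        case OP_PUSH:  stack.push_back((int8_t)code[pc++]); break;  // next byte is the immediate
        case OP_ADD:   { int32_t b = stack.back(); stack.pop_back();
                         stack.back() += b; } break;
        case OP_PRINT: std::printf("%d\n", stack.back()); break;
        case OP_HALT:  return;
        }
    }
}

int main()
{
    run({OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT});   // prints 5
}
[/CODE]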
  2. Quality of my code

    My pet peeve is not making it clear which variables are member variables, but I guess that's more of a C++ism than a C# one. The problem is that someone reading your code will have a hard time telling what scope a variable is in without something like m_ (or self., in the case of Python) on the front of it. It's just a readability nitpick, but at work, forgetting m_ (or, less commonly, putting m_ on a local-scope variable) is punishable by death. >:]
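    (Illustrative snippet, not from the original post: the class and names are made up, but it shows the m_ convention being described.)

[CODE]
// Member variables carry the m_ prefix; locals don't, so scope is obvious at a glance.
class Player
{
public:
    void takeDamage(int amount)
    {
        int remaining = m_health - amount;      // local: no prefix
        m_health = remaining < 0 ? 0 : remaining;
    }

private:
    int m_health = 100;                         // member: m_ prefix
};
[/CODE]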
  3. Why not use Google Protocol Buffers instead? They'll be much faster, and they have inter-language communication capabilities. The only drawback is the awkward build process.
  4. Need some help with XAudio2

      I'm not using a wav file; I'm generating the data for the buffers on the fly (as shown in the streamNextChunk function). According to the header, I'm using XAudio2 version 2.7 (from the June 2010 SDK).

      The clicks I'm hearing are presumably caused by buffer underflow, since I'm purposely starving the voice, but I would expect this to trigger a "voice starved for data" warning or a warning about an audio glitch; it does not.
  5. Is OpenCL what I need?

    [quote]The problem with GLSL/HLSL is that they are deeply integrated into the OpenGL/D3D pipeline. A shader operates on a single pixel, but my algorithm draws columns at a time. Ideally I would be using a shader that, instead of outputting one pixel, can write pixels wherever it chooses. In short, if I were to use shaders to get the same effect, I would need exponentially more rays. This is a 2D raycaster like Wolfenstein 3D or Doom, and almost exactly like Comanche; it's not a good fit for the shading pipeline.[/quote]

    In that case, why not draw a series of 1-pixel-wide quads on the screen and do your ray casting in the vertex shader? That way your vertex shader is your "column" drawer, and the pixel shader just uses whatever value the vertex shader hands it to calculate the final color.

    Another idea is to have your pixel shader figure out which column its pixel falls in and use that information for the color. The pixel shader will always be executed for each pixel that falls on a rasterized primitive, so why not take advantage of that parallelism?
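    (Illustrative sketch, not from the original post: one way to set up the "1-pixel-wide quad per column" idea on the CPU side; the struct and function names are invented, and the per-column ray cast itself would live in the vertex shader.)

[CODE]
#include <vector>

struct ColumnVertex
{
    float x, y;     // position in normalized device coordinates
    float column;   // column index the vertex shader would ray-cast for
};

// Builds two triangles per screen column, each quad exactly one pixel wide
// and spanning the full screen height.
std::vector<ColumnVertex> buildColumnQuads(int screenWidth)
{
    std::vector<ColumnVertex> verts;
    verts.reserve(screenWidth * 6);
    const float pixel = 2.0f / screenWidth;     // width of one column in NDC
    for (int c = 0; c < screenWidth; ++c)
    {
        float x0 = -1.0f + c * pixel;
        float x1 = x0 + pixel;
        float col = (float)c;
        ColumnVertex quad[6] = {
            {x0, -1.0f, col}, {x0, 1.0f, col}, {x1, 1.0f, col},
            {x0, -1.0f, col}, {x1, 1.0f, col}, {x1, -1.0f, col},
        };
        verts.insert(verts.end(), quad, quad + 6);
    }
    return verts;
}
[/CODE]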
  6. Is OpenCL what I need?

    OpenCL is certainly an option, but why not just use graphics shaders? OpenCL most likely won't give you access to hardware filtering, whereas a graphics shader will. OpenCL is more general-purpose and can run on any kind of parallel hardware (multiple cores on a CPU, or CUDA cores on an NVIDIA card, for example), but since you're working with a graphics algorithm anyway, you might as well use something like GLSL or HLSL.
  7.   That certainly makes more sense from a patch standpoint, but the OP was discussing the benefits of a separate lobby program that doesn't necessarily have to be a patcher; it's just one of the other things that it could do if it were separated from the main game program.  :>
  8. The main reason to keep them separate is modularity. It's easier to deal with interactions and bugs between the two programs when they're separate and specialized, plus, as mentioned before, you can use your lobby program to download and patch the real game program.

    If you put them both into the same program, you're essentially writing three programs: one for your lobby, one for your game, and one to join them together into a single application. That makes things more complex and prone to bugs.
  9. [quote]In general, using rand() is perfectly fine for basically everything unless you're running a gambling site, an ultra-high-precision probabilistic algorithm, cryptographic infrastructure, or other highly specialized tasks. Nothing like information overload to put somebody off programming.[/quote]

    Using rand() is perfectly fine *if* you only need a uniform distribution. There are a lot of times where that isn't the case (for example, generating white noise, where you want a Gaussian distribution). In those cases you'll need a different kind of random number generator. I'd recommend the boost::random library, which lets you specify any distribution you want.

    Also, if you want a "truly random" number and you're on Linux, call srand() once with a number read from /dev/random or /dev/urandom. These devices provide about as close to a truly random number as you can get. Your sequence will still be predictable *if* you know the seed, but with a truly random seed, your numbers are also less predictable (i.e. "more" random).

    If you're wondering why you can't just read from /dev/random or /dev/urandom every time you need a random number, technically you *can,* you just *shouldn't,* since it can deplete the kernel's entropy pool (particularly with /dev/random) and affect other programs running on the machine that need random numbers, such as ssh.
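    (Illustrative sketch, not from the original post: it uses the standard <random> header, which provides the same engine/distribution pieces that boost::random does; the particular engine, distributions, and seeding shown are just example choices.)

[CODE]
#include <cstdio>
#include <random>

int main()
{
    std::random_device rd;                                  // OS entropy (e.g. /dev/urandom on Linux)
    std::mt19937 engine(rd());                              // seed the PRNG once with a hard-to-predict value
    std::normal_distribution<float> gaussian(0.0f, 1.0f);   // Gaussian, e.g. for white noise
    std::uniform_int_distribution<int> die(1, 6);           // the uniform case rand() would cover

    for (int i = 0; i < 5; ++i)
        std::printf("gaussian: %f  die: %d\n", gaussian(engine), die(engine));
}
[/CODE]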
  10. Need some help with XAudio2

    As an aside, if I purposely change the timing of the callback to, say, 1.2 seconds, but still only generate 1 second of data each time around, I'll get audible clicks at a 1-second interval, but no debug warning messages saying the voice is starved for data, nor any warnings about audio glitches. This seems a little suspicious to me.
  11. Need some help with XAudio2

    I gave your suggestion a try, and it still doesn't work, unfortunately. I also tried using __int16 and WAVE_FORMAT_PCM as my format/datatype, and still nothing. I'm not getting any debug warnings saying the buffers are starved, nor any other complaints from XAudio2. I'm also positive the streamNextChunk function is being called, since its debug printouts indicate XAudio2 is queuing buffers and increasing its sample count.

    Here's my latest setup code attempt:

[CODE]
//Audio init section
if(FAILED(CoInitializeEx(NULL, COINIT_MULTITHREADED)))
{
    return false;
}

UINT32 flags = XAUDIO2_DEBUG_ENGINE;
if(FAILED(XAudio2Create(&g_xAudioEngine, flags)))
{
    MessageBox(NULL, L"Failed on XAudio2Create", L"Sadface", MB_OK);
    CoUninitialize();
    return false;
}

if(FAILED(g_xAudioEngine->CreateMasteringVoice(&g_masterVoice, XAUDIO2_DEFAULT_CHANNELS, (UINT32) SAMPLERATE, 0, 0, NULL)))
{
    MessageBox(NULL, L"Failed to create mastering voice!", L"Sadface", MB_OK);
    CoUninitialize();
    return false;
}

XAUDIO2_BUFFER buff = {0};

WAVEFORMATEX wfx = {0};
wfx.wFormatTag = WAVE_FORMAT_EXTENSIBLE;
wfx.nChannels = 1;
wfx.nSamplesPerSec = 44100;
wfx.nAvgBytesPerSec = 44100 * sizeof(float);
wfx.nBlockAlign = sizeof(float);
wfx.wBitsPerSample = sizeof(float) * 8;
wfx.cbSize = 22;

WAVEFORMATEXTENSIBLE wfxe = {0};
wfxe.Format = wfx;
wfxe.SubFormat = KSDATAFORMAT_SUBTYPE_IEEE_FLOAT;
wfxe.Samples.wValidBitsPerSample = 32;
wfxe.dwChannelMask = SPEAKER_FRONT_CENTER;

std::stringstream debugstream;
debugstream << "nBlockAlign: " << wfx.nBlockAlign << " bitspersample: " << wfx.wBitsPerSample << std::endl;
OutputDebugStringA(debugstream.str().c_str());

if(FAILED(g_xAudioEngine->CreateSourceVoice(&g_sourceVoice, (WAVEFORMATEX*)&wfxe)))
{
    MessageBox(NULL, L"Failed to create source voice!", L"sadface", MB_OK);
}
[/CODE]

    Any other suggestions to try?
  12. Hi all, I've been trying to get some streaming code working with XAudio2. Unfortunately, at best all I've gotten are a handful of clicks, so I was wondering if someone could provide a little help.

    I've implemented the code according to a few tutorials I've seen, and it's fairly simple. I just have one source voice and a master voice, and I'm submitting buffers to the source voice periodically.

    Here's the setup code:

[CODE]
//Audio init section
if(FAILED(CoInitializeEx(NULL, COINIT_MULTITHREADED)))
{
    return false;
}

UINT32 flags = XAUDIO2_DEBUG_ENGINE;
if(FAILED(XAudio2Create(&g_xAudioEngine)))
{
    MessageBox(NULL, L"Failed on XAudio2Create", L"Sadface", MB_OK);
    CoUninitialize();
    return false;
}

if(FAILED(g_xAudioEngine->CreateMasteringVoice(&g_masterVoice, XAUDIO2_DEFAULT_CHANNELS, (UINT32) SAMPLERATE, 0, 0, NULL)))
{
    MessageBox(NULL, L"Failed to create mastering voice!", L"Sadface", MB_OK);
    CoUninitialize();
    return false;
}

WAVEFORMATEX wfx = {0};
wfx.wFormatTag = WAVE_FORMAT_IEEE_FLOAT;
wfx.nChannels = 1;
wfx.nSamplesPerSec = 44100;
wfx.nAvgBytesPerSec = 44100 * sizeof(float);
wfx.nBlockAlign = sizeof(float);
wfx.wBitsPerSample = sizeof(float) * 8;
wfx.cbSize = 0;

std::stringstream debugstream;
debugstream << "nBlockAlign: " << wfx.nBlockAlign << " bitspersample: " << wfx.wBitsPerSample << std::endl;
OutputDebugStringA(debugstream.str().c_str());

if(FAILED(g_xAudioEngine->CreateSourceVoice(&g_sourceVoice, (WAVEFORMATEX*)&wfx)))
{
    MessageBox(NULL, L"Failed to create source voice!", L"sadface", MB_OK);
}
[/CODE]

    I've also tried this with signed 16-bit PCM instead of IEEE float, to no avail.

    Here's where I'm feeding the source voice:

[CODE]
void streamNextChunk(const boost::system::error_code& error, boost::asio::deadline_timer & timer)
{
    static bool comInitThisThread = false;
    if(!comInitThisThread)
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED);
        comInitThisThread = true;
    }

    timer.expires_from_now(boost::posix_time::milliseconds(1000));
    timer.async_wait(boost::bind(streamNextChunk, _1, boost::ref(timer)));

    static __int64 sampleCount = 0;
    float* buff = new float[44100];
    for(size_t i = 0; i < 44100; i++)
    {
        float t = (float)sampleCount++;
        t /= 44100.0;
        buff[i] = cos(2 * 3.14159 * t * 260);
    }

    XAUDIO2_BUFFER xAudioBuff = {0};
    xAudioBuff.AudioBytes = (44100) * sizeof(float);
    xAudioBuff.pAudioData = (BYTE*) buff;

    if(FAILED(g_sourceVoice->SubmitSourceBuffer(&xAudioBuff)))
    {
        OutputDebugStringA("Failed on submit source buffer!!! D:\n");
    }

    g_sourceVoice->Start(0, XAUDIO2_COMMIT_NOW);

    XAUDIO2_VOICE_STATE state;
    g_sourceVoice->GetState(&state);

    float volume;
    g_sourceVoice->GetVolume(&volume);

    std::stringstream debugstream;
    float gvolume;
    g_masterVoice->GetVolume(&gvolume);
    debugstream << "PTR: " << buff << " Q: " << state.BuffersQueued << " S: " << state.SamplesPlayed
                << " SC: " << sampleCount << " V: " << volume << " GV: " << gvolume << std::endl;
    OutputDebugStringA(debugstream.str().c_str());
}
[/CODE]

    This function is on a boost::asio timer that ensures it's called once per second. The timer is handled in a separate thread from where the XAudio2 library is initialized (the ASIO run thread). The function also generates 1 second of data each call. As you can see, I'm trying to generate a simple 2600Hz test tone, but at best all I have been able to get are some short clicks. (Yes, I'm aware this leaks memory, but at the moment my main concern is getting any sound output at all.) The debug printouts show that the source voice seems to be queuing the buffers, and the SamplesPlayed count is increasing as expected.
This code seems to be simple enough, and mostly matches the code examples I've seen for streaming wav files off a disk (with the obvious exception that I'm creating the data on the fly).    Can anyone tell me what I'm missing?
  13. OpenGL Simulating CRT persistence?

    [quote name='Hodgman' timestamp='1354590401' post='5006939'] Did you try CryZe's blend mode, AKA "alpha blending"? [quote name='Such1' timestamp='1354585556' post='5006898']it will never fade completely(theoretically), but it should get really close.[/quote]You've got to keep the 8-bit quantization in mind with regards to this. If the background is 1/255, then when you multiply by 0.99, you still end up with 1/255 -- e.g. [font=courier new,courier,monospace]intOutput = round( 255 * ((intInput/255)*0.99) )[/font] Instead of directly blending the previous contents and the current image, there's other approaches you could try. e.g. you could render the previous contents into a new buffer using a shader that [i]subtracts[/i] a value from it, and then add the current image into that buffer. This way you'll definitely reach zero, even in theory [img]http://public.gamedev.net//public/style_emoticons/default/wink.png[/img] [/quote]

    Yes, I tried CryZe's recommendation; however, it didn't look right either. I like how color blending looks better than pure alpha anyway, since I can fade the individual channels separately and get a "warmer" fade that looks even more like a CRT. I see your point about the dynamic range, and I agree that subtracting would be best, except that when you subtract 1 from 0 you still clamp at zero, so the accumulation buffer's dark bits would block out where the "new" accumulated yellow bits should go.

    I think I'll try to get around the dynamic-range issue by rendering into a second texture, one that's 32-bit float, instead of using the backbuffer. This is how it'd be used in practice anyway, so using the backbuffer for this test is probably not a real representation of the technique. Hopefully the greater dynamic range will let the accumulation eventually settle at zero.

    Here's what I mean by the "warmer" look of using color blending instead of alpha; it looks a lot more phosphor-like:

    [img]http://s12.postimage.org/cklbi8jlp/warmblend.png[/img]
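    (Illustrative sketch, not from the original thread: it just demonstrates the 8-bit quantization point quoted above; a multiplicative fade of a 1/255 value never reaches zero after rounding, while subtracting a fixed step does.)

[CODE]
#include <algorithm>
#include <cmath>
#include <cstdio>

int main()
{
    int multiplied = 1, subtracted = 1;   // both start at 1/255
    for (int frame = 0; frame < 5; ++frame)
    {
        // round(255 * ((1/255) * 0.99)) == 1, so the multiplicative fade stalls
        multiplied = (int)std::lround(255.0 * ((multiplied / 255.0) * 0.99));
        // subtracting one 8-bit step per frame reaches 0 and stays there
        subtracted = std::max(subtracted - 1, 0);
        std::printf("frame %d: multiply=%d subtract=%d\n", frame, multiplied, subtracted);
    }
}
[/CODE]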
  14. OpenGL Simulating CRT persistence?

    [quote name='unbird' timestamp='1354568386' post='5006774'] This might actually be a precision problem. Are you using low-color-resolution rendertargets/backbuffer/textures (8 bit per channel) ? [/quote]

    I'm using 32-bit color for the backbuffer (R8G8B8A8) but 32-bit float for the texture render target. I didn't know your backbuffer could go higher than 32-bit (8 bits per channel) color... When I try R32G32B32A32_FLOAT for the back buffer, I get a failure in trying to set up the swap chain. Maybe I need to accumulate in a second texture render target instead of the back buffer?

    -- Edit --

    I forgot to mention I've changed my blending a bit. I'm using a blend factor now instead of a straight alpha blend, but I'm still having the same effect of not getting it to fade completely to zero. Here are my current settings:

[CODE]
rtbd.BlendEnable = true;
rtbd.SrcBlend = D3D11_BLEND_SRC_COLOR;
rtbd.DestBlend = D3D11_BLEND_BLEND_FACTOR;
rtbd.BlendOp = D3D11_BLEND_OP_ADD;
rtbd.SrcBlendAlpha = D3D11_BLEND_ONE;
rtbd.DestBlendAlpha = D3D11_BLEND_ONE;
rtbd.BlendOpAlpha = D3D11_BLEND_OP_ADD;
rtbd.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

/* .... */

float blendFactors[] = {.99, .97, .9, 0};
g_pImmediateContext->OMSetBlendState(g_pTexBlendState, blendFactors, 0xFFFFFFFF);
[/CODE]

    If I understand this correctly, it should eventually fade to completely black, since the blend factor will make it slightly darker every frame, yet I'm still left with the not-quite-black trail.
  15. OpenGL Simulating CRT persistence?

    [quote name='Such1' timestamp='1354491221' post='5006437'] I think you are not clearing the buffers after u used them. [/quote]

    Like I said in the post, I'm not clearing the back buffer. That's intentional, because it's what produces the accumulated trail in the first place. The problem is that the trail never reaches zero.