Emulator plays faster than it should

Started by
11 comments, last by Wyrframe 8 years, 9 months ago

I'm trying to emulate a gameboy, but it plays a little too fast.

This is how I'm doing the timing inside my main loop.


if (cpu.T >= CLOCKSPEED / 40) // if at least 1/40th of a second's worth of cycles have passed
{
    // Get milliseconds passed
    QueryPerformanceCounter(&EndCounter);
    unsigned long long counter = EndCounter.QuadPart - LastCounter.QuadPart;
    MSperFrame = 1000.0 * ((double)counter / (double)PerfCountFrequency);
    LastCounter = EndCounter;

    // if 1/40th of a second hasn't passed, wait until it passes
    if (MSperFrame < 25)
        Sleep(25 - MSperFrame);

    cpu.T -= CLOCKSPEED / 40;
}

CLOCKSPEED is the emulated CPU's clock rate in cycles per second (4194304).

cpu.T is the number of cycles elapsed so far.

I'm using Visual Studio 2013. I even tried switching to C++ and using steady_clock but nothing changed. What could be the problem?


if (MSperFrame < 25)
    Sleep(25 - MSperFrame);

Read the MSDN documentation on Sleep: https://msdn.microsoft.com/en-us/library/windows/desktop/ms686298%28v=vs.85%29.aspx

This function causes a thread to relinquish the remainder of its time slice and become unrunnable for an interval based on the value of dwMilliseconds. The system clock "ticks" at a constant rate. If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.

You can't expect Sleep to delay your thread for an exact number of milliseconds.

Why are you syncing every 40th of a second when emulating a system that runs at 59.7 FPS?

Man, I wish I had problems with my code running too fast ;)

Devil: you need to time-slice your emulation processing instead of trying to run it synchronously at a perfect rate. Read http://gafferongames.com/game-physics/fix-your-timestep/ and use a fixed rate of, say, 60 or 120 "physics" ticks per second, except that in this case the "physics" you are running is actually 1/60th or 1/120th of one second's execution of the emulated CPU, sound chips, and graphics chips of a Game Boy.

RIP GameDev.net: launched 2 unusably-broken forum engines in as many years, and now has ceased operating as a forum at all, happy to remain naught but an advertising platform with an attached social media presence, headed by a staff who by their own admission have no idea what their userbase wants or expects. Here's to the good times; shame they exist in the past.

Yes, that's how I have always done it.

I calculate the number of t states the emulated system can execute in 1/60th of a second. Say the CPU runs at 8 MHz: one t state is 0.000000125 seconds.

1/60 ≈ 0.0166667 seconds

so the number of t states per frame is 8,000,000 / 60 ≈ 133,333.33

Each instruction has a t state cost, which you can get from a mapping table. For example, the 6502 LDA #<byte> (load A immediate) has a t state count of 2.

So then all you have to do is run a loop until you have burnt at least 133,333 t states, and Bob's your father's brother.

The same applies to graphics chips.

If you want a real level of accuracy, though, I would advise running the systems more like they were run on the real hardware.

For example, in my Speccy simulator I broke each line of the display down into three areas: <left><visible><right>.

In the left area I called the interrupt handlers for N t states, then I ran the CPU until I hit the right area of the line, then I ran the interrupt handlers again.

I then repeated this for each line of the display.

The end result was that on screen glitches that happened on real hardware also appeared in the emulator.

More specifically: lots of early consoles and machines could do things to the framebuffer "in between" scan lines, and the code had to either have super-fast interrupt service routines or exact timing to take advantage of them. The Atari ST could completely change the 16-colour screen palette between scan lines, allowing for wild vertical gradient effects. The NES and SNES could change several GPU registers between scanlines; the most important of these were, on the NES, the scroll register (both X and Y), and on the SNES, the 2x2 matrix used by the Mode 7 layer (which is how all of those perspective map effects were done).

So, to emulate those, you can't just run your CPU for 1/60th of a second every physics timestep. Assuming a vertical resolution of, say, 256 pixels, you have to run your CPU for 1/(60*256)th = 1/15,360th of a second, then run one scanline of the GPU, and repeat until all 256 scanlines are done and you have a complete framebuffer to present.

If you start under-running on time (i.e. your host PC isn't fast enough to emulate the entire guest at full speed), you could disable the actual writing-to-screen in a cycle, getting "dropped frames" while still running the guest CPU at full speed... or you could just put a cap on emulation speed, and let the guest slow down to below realtime.



The Atari ST could completely change the 16-colour screen palette between scan lines, allowing for wild vertical gradient effects.

The Atari 8-bit machines could change the screen resolution between scan lines!

So you could swap between hires low colour and lowres high colour per screen line.

That was fun!


Yeah, changing the horizontal resolution between scanlines is a pretty cool trick. But I'm betting those machines only had enough VRAM to pre-compose one scanline at a time anyway, as opposed to a 75kB framebuffer?




... VRAM? You don't need no stinkin' VRAM!

You fill the target memory address with the color you want the beam to paint for wherever the beam currently is, so make sure you time everything with the correct cycle counts between each on-screen pixel. And then you get time as the beam scans back to the beginning of the line to do some heavier calculation, and even MORE time when the beam scans back to the top of the screen!

(Or I'm remembering farther back than Atari...)

This topic is closed to new replies.
