Need help on performance problems

Started by
13 comments, last by Solias 19 years, 1 month ago
Well, I ran the demo...

I left it in the default resolution on my machine of 1600 x 1200... and it sat there for a very long time on your loading screen until I finally pressed Enter and it closed down. Upon trying a different resolution (640 x 480), the loading screen came up and an error dialog box popped up saying "Could not initialise graphics adaptor. Please run settings.exe ....."

The log file states :

"ERROR: Device creation failed, falling back to Software Vertex Processing...
Device creation failure!
->VB Error: Automation error
->DirectX Error: D3DERR_NOTAVAILABLE "

It's a crummy GeForce 4 MX 440, so it's not surprising. Thought I'd just let you know anyway. Perhaps you have some sort of hardware-dependent feature enabled? (I hate this MX440 so much!!! - Work card :o\)

GCoder
Quote:Original post by GCoder
Well, I ran the demo...

I left it in the default resolution on my machine of 1600 x 1200... and it sat there for a very long time on your loading screen until I finally pressed Enter and it closed down. Upon trying a different resolution (640 x 480), the loading screen came up and an error dialog box popped up saying "Could not initialise graphics adaptor. Please run settings.exe ....."

The log file states :

"ERROR: Device creation failed, falling back to Software Vertex Processing...
Device creation failure!
->VB Error: Automation error
->DirectX Error: D3DERR_NOTAVAILABLE "

It's a crummy GeForce 4 MX 440, so it's not surprising. Thought I'd just let you know anyway. Perhaps you have some sort of hardware-dependent feature enabled? (I hate this MX440 so much!!! - Work card :o\)

GCoder


I also use a GeForce 4 MX440. Have you tried many (if not all) screen resolutions? I seem to remember that 32-bit mode at 640x480 and 800x600 causes problems. Try 16-bit too.

My initialisation code is designed so that it checks the availability of everything before creating a device: it validates the screen resolution, enumerates depth buffers and picks the best one, and checks the vertex processing types (pure, hardware, software). Even if the device creation fails at that point, it tries again using software vertex processing. It shouldn't cause problems.
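The fallback order described above can be sketched roughly like this (Python purely as illustration; the real code is VB6 + DirectX 8.1, and the mode/callback names here are invented):

```python
# Illustrative sketch of the init fallback order described above:
# for each candidate display mode, try the best vertex processing
# type first, falling back to software vertex processing last.
# try_create(mode, vp) stands in for the real CreateDevice call.

def create_device(try_create, modes):
    """Return the first (mode, vertex_processing) pair that the
    try_create callback accepts, or None if everything fails
    (the "Device creation failure!" path in the log)."""
    for mode in modes:
        for vp in ("pure", "hardware", "software"):
            if try_create(mode, vp):
                return (mode, vp)
    return None
```

The point of ordering pure > hardware > software is that each step is strictly less demanding of the adapter, so the first success is also the fastest option the card supports.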

I usually work at 1024x768x32 on my GeForce 4.
Faraz Azhar - Intel 700MHz, 128 MB RAM, nVidia GeForce4 MX440 - Using DirectX 8.1 SDK, VB6
I ran your demo on my computer and these were the results for me.

* In 640x480 resolution, the game ran at 160 fps
* In 800x600 resolution, the game ran at 90 fps

I believe this is because there is much more stuff to draw; the game also gets slower when you zoom in.

* When I click the mouse's middle button, the game crashes.

* The sun effects and such do not work; I click the sun icon
and nothing happens.

* For determining which tile the mouse is over, you're using the
same method I use for my real-life simulation game. There is a
method I believe would be faster, but I ran into a problem with it.
Have you ever programmed a tile-based game like an RPG? To
determine which tile the mouse is over, you take the mouse coords
(x, y) and divide each coord by the tile size: Tile.X = (Mouse.X / 32).
There is more adjustment to do to that value, but roughly that
gives you the X tile the mouse is over.
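For a plain 2D grid, the divide-by-tile-size idea above is just integer division plus the camera scroll offset. A minimal sketch (Python for illustration only; the 32-pixel tile size and the scroll parameters are example values, not from the demo):

```python
TILE_SIZE = 32  # pixels per tile (example value)

def tile_under_mouse(mouse_x, mouse_y, scroll_x=0, scroll_y=0):
    """Map screen-space mouse coords to tile coords: add the camera
    scroll offset to get map-space pixels, then divide by the tile
    size. The "more adjustment" mentioned above is the scroll offset
    (and, for non-square layouts, any per-axis origin shift)."""
    tile_x = (mouse_x + scroll_x) // TILE_SIZE
    tile_y = (mouse_y + scroll_y) // TILE_SIZE
    return tile_x, tile_y
```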

If you need any more information, feel free to e-mail me at
vbmaul@yahoo.com ; I would be glad to help.
Quote:Original post by digital_phantom
* When I click the mouse's middle button, the game crashes.


No idea why that is happening. I haven't coded anything at all for the mouse's middle button, and I'm not even using DirectInput; I'm using the plain GetAsyncKeyState API.

Quote:
* The sun effects and such do not work; I click the sun icon
and nothing happens.


Yes, it's turned off. Ignore that.

Quote:
* For determining which tile the mouse is over, you're using the
same method I use for my real-life simulation game. There is a
method I believe would be faster, but I ran into a problem with it.
Have you ever programmed a tile-based game like an RPG? To
determine which tile the mouse is over, you take the mouse coords
(x, y) and divide each coord by the tile size: Tile.X = (Mouse.X / 32).
There is more adjustment to do to that value, but roughly that
gives you the X tile the mouse is over.


I've searched just about everywhere, but I can't find a good method of determining which tile the mouse is over. The suggestion you're giving is only good for transformed graphics (fixed sizes and pre-determined positions of tiles and other objects, all in 2D coordinates). I'm using an actual 3D orthographic projection, so I can't use this method here. Well... that's what I think. Am I wrong? Is there a better way to solve this problem?
Faraz Azhar - Intel 700MHz, 128 MB RAM, nVidia GeForce4 MX440 - Using DirectX 8.1 SDK, VB6
Quote:Original post by itz_faraz
I've searched just about everywhere, but I can't find a good method of determining which tile the mouse is over. The suggestion you're giving is only good for transformed graphics (fixed sizes and pre-determined positions of tiles and other objects, all in 2D coordinates). I'm using an actual 3D orthographic projection, so I can't use this method here. Well... that's what I think. Am I wrong? Is there a better way to solve this problem?


Well, yes and no. As long as you are using an orthographic projection and your tiles are all the same size, you could use the same logic a 2D tile-based game would use to calculate tile coords from screen coords. You are both putting a regular tile pattern into screen space.

The problem is that your tiles are not flat in the x,y plane; they have a z component (assuming that z is "up"). This means that more than one tile can "own" a given part of the screen, with one drawn in front of the other. This problem actually exists for sprite-based games as well if they support multiple heights.

In your case there is a well-defined mapping between screen space and world space: the inverse of the transform and projection matrices you applied to the terrain when you rendered it. The problem is that a point in screen space becomes a ray in world space. One option would be to read back the depth value stored in the depth buffer and use that as the z coord for the window position, then transform that value back into 3D space. The OpenGL utility library provides a helper function for this (gluUnProject); D3D might have something similar in its utility library.
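For a pure orthographic projection the inverse mapping is simple enough to write out by hand, since each axis is just a linear remap of the view volume. A sketch (Python for illustration only; the viewport size and view-volume bounds below are invented example values, and depth is the 0..1 value read back from the depth buffer):

```python
def ortho_unproject(win_x, win_y, depth,
                    viewport_w, viewport_h,
                    left, right, bottom, top, near, far):
    """Invert an axis-aligned orthographic projection: map window
    coords plus a depth-buffer value (0..1) back into view space.
    Window y grows downward while world y grows upward, hence the
    flip on the y axis."""
    wx = left + (win_x / viewport_w) * (right - left)
    wy = top - (win_y / viewport_h) * (top - bottom)
    wz = near + depth * (far - near)
    return (wx, wy, wz)
```

With a rotated camera (the usual isometric setup) you would additionally multiply this result by the inverse of the view matrix, which is exactly what the utility-library unproject helpers do for you in the general case.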

Another option would be to partition your terrain in 3D space such that you only need to test the tiles that could intersect the ray. An octree or quadtree could work for this (the quadtree would need to test against the projection of the ray), but you need to be careful that the overhead of the partition is worth it. You could also use it for culling portions of the terrain that are off-screen when drawing.
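The pruning test at the heart of that quadtree/octree idea is a ray-vs-box check: if the pick ray misses a node's bounding box, every tile inside it can be skipped. A minimal 2D slab test as a sketch (Python for illustration only; origin/direction are the ray projected into the quadtree's plane):

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """2D slab test: does a ray (origin + t*direction, t >= 0)
    intersect an axis-aligned box? Used to decide whether a
    quadtree node, and all the tiles under it, can be pruned
    during picking. Handles zero direction components."""
    t_min, t_max = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False  # ray parallel to slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_min, t_max = max(t_min, t1), min(t_max, t2)
            if t_min > t_max:
                return False  # slab intervals don't overlap
    return True
```

Recursing into a node only when this test passes is what makes the partition pay off: for an N-tile map you touch roughly O(log N) nodes per pick instead of testing every tile.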


This topic is closed to new replies.
