

DwarvesH

Member Since 12 Jun 2013
Offline Last Active Jan 29 2015 05:27 AM

Posts I've Made

In Topic: Ways to render a massive amount of sprites.

29 January 2015 - 05:31 AM

I think you are over-thinking it!

 

What exactly are you trying to do? Is it a 2D sprite-based game? If so, there is almost no way you will run into performance problems, and the more old-school it is, the fewer you will have. Just don't render the whole level: do some frustum culling, keep a good map/level representation, sit down and code, and you should see more than 200 FPS with dynamic 2D lighting.

 

But for a more scalable solution, you should divide the level into chunks, say 32x32 sprites each. As you scroll, create new chunks as needed and cull on a per-chunk basis.
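The per-chunk culling described above boils down to one small computation: map the camera rectangle to a range of chunk indices and draw only those. A minimal sketch, with illustrative names and sizes (the chunk and sprite dimensions here are assumptions, not from the post):

```cpp
// Minimal sketch of per-chunk culling; constants are illustrative.
constexpr int kChunkSize = 32;   // sprites per chunk side (as suggested above)
constexpr int kSpritePx  = 16;   // pixels per sprite (assumed)

struct ChunkRange { int x0, y0, x1, y1; };  // inclusive chunk indices

// Given the camera position and view size in pixels, return the range
// of chunk indices overlapping the view. Only these chunks get drawn.
ChunkRange VisibleChunks(int camX, int camY, int viewW, int viewH)
{
    const int chunkPx = kChunkSize * kSpritePx;
    ChunkRange r;
    r.x0 = camX / chunkPx;
    r.y0 = camY / chunkPx;
    r.x1 = (camX + viewW - 1) / chunkPx;
    r.y1 = (camY + viewH - 1) / chunkPx;
    return r;
}
```

With these numbers a 1280x720 view touches a 3x2 block of 512-pixel chunks, so only six chunks are submitted regardless of level size.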

 

On the other hand, if you are incorporating sprites in a full 3D game, like for particles, the situation changes.


In Topic: Is there a way to draw super precise lines?

28 January 2015 - 05:23 AM

Review the rasterization rules and compare them to your code; they are likely related to your issue.

 

Thanks!

 

It seems there is no way to do what I want based on those rules.

 

But I did find a solution that works: drawing the line list a second time, but this time as points.
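The idea behind that workaround can be illustrated on the CPU: line rasterization rules are half-open (the final pixel of a segment is not produced), so a second pass that plots the same vertices as points fills the gap. A toy sketch using horizontal lines as a stand-in for the GPU's rules (names are mine, not the poster's code):

```cpp
#include <set>
#include <utility>

using Pixels = std::set<std::pair<int, int>>;

// Stand-in for the GPU line rules: half-open, so x1 is never plotted.
void RasterizeLineHalfOpen(int x0, int x1, int y, Pixels& out)
{
    for (int x = x0; x < x1; ++x)   // note: x1 excluded
        out.insert({x, y});
}

// Second pass: draw the same vertices again as points, covering the
// pixels the line pass skipped.
void DrawEndpoints(int x0, int x1, int y, Pixels& out)
{
    out.insert({x0, y});
    out.insert({x1, y});
}
```

In the real renderer this corresponds to drawing the vertex buffer once with a line-list topology and once more with a point-list topology.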

 

Another problem: because the lines snapped to pixels as I moved my character/camera, they had horrible temporal aliasing, jittering up and down by one pixel.

 

Again, I found no solution for this except line AA:

https://dl.dropboxusercontent.com/u/45638513/sprite19.png

 

Or alternatively a post-processing edge detection algorithm.

 

Well, at least now I know how Door Kickers got those thin but soft lines: they must have used line AA too!

http://inthekillhouse.com/wordpress/wp-content/uploads/2013/08/2013_7_24_17_48_59.jpg


In Topic: How to properly switch to fullscreen with DXGI?

23 January 2015 - 07:09 AM


Like how the back buffer in windowed mode can't have odd dimensions

 

Do you have any reference for that?

Never had that problem, can use windows with client area of prime x prime without issue...

 

 

The back buffer can have odd dimensions and 3D rendering works great with it. It is 2D rendering that does not work well, for the GUI and such. I am using an orthographic camera and a SpriteBatch-type class (written by myself) to render the GUI, with filtering enabled so that stretched controls look nice and smooth. Without filtering, odd-sized buffers work; with filtering, they don't. It is probably my fault rather than a real issue with DX, since I did not pay attention to texel centers and the other details needed when dealing with bilinear filtering.
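For what it's worth, the texel-center detail mentioned above comes down to one mapping: with bilinear filtering, texel centers sit at (i + 0.5) / size, and sampling anywhere else blends neighboring texels. A sketch of the mapping (the function name is mine, not from the SpriteBatch in question):

```cpp
// Map an integer texel coordinate to the UV of its center.
// Sampling exactly at texel centers makes bilinear filtering
// return that texel unblended, which is what crisp GUI
// rendering needs.
float TexelCenterU(int x, int texWidth)
{
    return (static_cast<float>(x) + 0.5f) / static_cast<float>(texWidth);
}
```

When the quad-to-pixel mapping is off by half a texel (easy to hit when widths are odd and get halved somewhere), every sample lands between centers and the output goes soft.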

 

I may investigate this further, and if it turns out to be an issue with SpriteBatch, I may disable the code that forces the window dimensions to be even. But right now I need to get the port from C#/DX9 to C++ (agnostic between DX10 and DX11) done ASAP! I will be at least one week late :).

 

 

Do you or do you not use MakeWindowAssociation with DXGI_MWA_NO_WINDOW_CHANGES?

 

No, I do not. I tried to do as little as possible and let DXGI do as much as possible. It probably would have been much easier to handle everything myself.
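For reference, the call being asked about looks like this; factory creation and error handling are elided, and this is a sketch of the documented usage, not the poster's code:

```cpp
// After creating the swap chain, opt out of DXGI's built-in
// Alt+Enter / window-change monitoring so the application can
// manage mode switches itself. `factory` is the IDXGIFactory
// that created the swap chain; `hwnd` is the render window.
factory->MakeWindowAssociation(hwnd, DXGI_MWA_NO_WINDOW_CHANGES);
```

With this flag DXGI stops watching the window's message queue entirely, trading its automatic fullscreen handling for full manual control.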


In Topic: no vsync means double buffering to avoid tearing, right?

23 January 2015 - 04:18 AM

single buffering = not possible on modern OS's

 

Sometimes I wish it were possible, if only to create less laggy GUIs...


In Topic: OpenGL to DirectX

23 January 2015 - 04:17 AM

 


But if you want the highest of the high end of graphics, super AAA-quality rendering, then OpenGL is inferior. Maybe the latest version is great, but the ones I tried were a lot behind. For starters, a big chunk of OpenGL is implemented in the GPU driver, and there is a huge quality gap between super cheap GPUs never meant to do real rendering work and high-quality hardware.
[citation needed]

 

 

I can't offer you a citation, only anecdotal evidence.

 

In historic order:

1. Linux. This is probably a problem of early Linux drivers rather than of OpenGL itself, but OpenGL reliability on Linux was very bad. Sure, if you ran glxgears there were no problems. But if you tried more complicated stuff, some things would not render exactly as expected, with no way to fix it except to change X settings, mess around with the driver, or move to another PC. There were small bugs that affected only parts of the pipeline, letting you render the scene while some effects simply would not show up.

2. Windows Vista + some ATI cards. When Vista came out, some ATI cards had broken OpenGL drivers, and it took so many months for them to be fixed that I gave up on Vista. The performance was very bad.

3. Even today, there can be subtle differences. Things like Toksvig specular AA are highly dependent on the exact behavior of the GPU's filtering hardware. Even right now, if I tried, I could get at least 3 different results on 3 different PCs: same code, same resolution, different GPUs. The probability of getting it pixel-perfect is lower than in DirectX, and OpenGL has no mechanism to report that it is doing things differently than you expect. It says: OK, rendered correctly! And indeed there is output on the screen, and if you examine it superficially it looks fine. Maybe the probability is lower only by a little and I had bad luck, but I'm pretty sure it is not higher.

 

What I got out of all these experiences is a common theme of slightly lower reliability. This is of course an issue only on the PC. If you target hardware that only has OpenGL, has historically only had OpenGL, and where everybody uses OpenGL for 100% of the graphics work, there will probably be no such issues.

