
JohnnyCode

Member Since 10 Mar 2008
Offline Last Active Today, 03:55 AM

Posts I've Made

In Topic: does video memory cache affect efficiency

Today, 02:33 AM

 

The cache lines started with 16 vertices on the first GeForce generation and have been growing since then.

I think you mean 16 bytes, not 16 vertices.

 

 

Hardly?

I think it would be 16 indices, of 16-bit size, to be rasterized, and if the vertex cache for this indexed rasterizing burst were 16 bytes, we would all be set quite trivially. (This is also the reason why indices should reference vertices as close together as possible: to avoid exceeding the vertex cache size too much and to minimize vertex cache misses during such an indexed draw burst.)

16 bytes sounds more like some operational register or the closest-level cache.
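To see why index locality matters, here is a minimal sketch that estimates post-transform vertex cache hits for an index buffer, assuming a FIFO cache with a configurable number of entries. Real GPU cache sizes and replacement policies vary, so the numbers are only illustrative:

```cpp
// Minimal sketch: count post-transform vertex cache hits for an index
// buffer, assuming a FIFO cache. Real GPUs differ in size and policy.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

static int countCacheHits(const std::vector<uint16_t>& indices, std::size_t cacheSize)
{
    std::deque<uint16_t> cache; // FIFO of recently transformed vertices
    int hits = 0;
    for (uint16_t idx : indices)
    {
        if (std::find(cache.begin(), cache.end(), idx) != cache.end())
        {
            ++hits; // vertex already transformed, no new vertex work
        }
        else
        {
            cache.push_back(idx);
            if (cache.size() > cacheSize)
                cache.pop_front(); // evict the oldest entry
        }
    }
    return hits;
}

int main()
{
    // Two triangles sharing an edge: close indices reuse the cache...
    std::vector<uint16_t> close     = {0, 1, 2, 2, 1, 3};
    // ...while far-apart indices never hit it.
    std::vector<uint16_t> scattered = {0, 100, 200, 300, 400, 500};

    std::cout << "close hits: "     << countCacheHits(close, 16)     << '\n'  // 2
              << "scattered hits: " << countCacheHits(scattered, 16) << '\n'; // 0
}
```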


In Topic: Antialiasing in 3D with Core OpenGL (Windows)

22 May 2016 - 07:50 AM

I was always thrilled by the solution of rendering to a double-resolution target and resampling it to the actual screen size. It creates edge antialiasing and also corrects texture frequencies (texture antialiasing) without blurring actual texture content. The overkill is fill-rate bound, and quite serious at that.
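For reference, a minimal sketch of that technique with core OpenGL 3.0+. It assumes a current context with extensions loaded; drawScene is a placeholder for the application's scene rendering, and real code would create the framebuffer once rather than every frame:

```cpp
// Minimal 2x supersampling sketch: render the scene at double resolution,
// then blit down to the default framebuffer with linear filtering.
#include <GL/glew.h>

void renderSupersampled(int screenW, int screenH, void (*drawScene)())
{
    const int w = screenW * 2, h = screenH * 2; // double-resolution target

    // Color renderbuffer + FBO (depth attachment omitted for brevity).
    GLuint color, fbo;
    glGenRenderbuffers(1, &color);
    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, w, h);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, color);

    // Render the scene at double resolution.
    glViewport(0, 0, w, h);
    glClear(GL_COLOR_BUFFER_BIT);
    drawScene();

    // Downsample to the default framebuffer; GL_LINEAR does the resampling
    // that produces the edge/texture antialiasing described above.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, w, h, 0, 0, screenW, screenH,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);

    // Cleanup (in real code, keep the FBO alive and reuse it per frame).
    glDeleteFramebuffers(1, &fbo);
    glDeleteRenderbuffers(1, &color);
}
```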


In Topic: How can I optimize Linux server for lowest latency game server?

13 May 2016 - 08:56 AM


What can I do for my Linux server to minimize latency as much as possible? Are there any config params?

If your application correctly acquires a socket/TCP connection/UDP endpoint, I don't see what problems you are trying to search for in the OS instead of in the application itself.

 

Java is a memory-managed language, which is great for active documents over HTTP and the like, but instead of advising you to write the server application in C++, you can still do the following:

- Make your server application allocate new objects as little as possible, so that memory is not freed, fragmented, and moved too often, since you are using a memory-managed language.

(You can design the server to allocate/free memory only when a client joins or leaves; this would help latency since, I guess, your game protocol has a fairly constant set of clients. If the maximum number of clients is known, preallocate them; see the sketch after this list.)

- What about your client-serving logic? How do you read their data? How do you send data back to them?

- How do you measure response latency?
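A minimal sketch of the preallocation idea, written in C++ for brevity; in Java the equivalent is a pool of reusable client-state objects allocated once at startup. All names here are made up:

```cpp
// Sketch: all per-client memory is allocated once, for the server's whole
// lifetime. Joining or leaving only flips a flag, so the steady-state hot
// path performs no allocation at all.
#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kMaxClients = 64;   // known upper bound on clients
constexpr std::size_t kBufferSize = 4096; // per-client I/O buffer

struct ClientSlot
{
    bool    inUse    = false;
    int     socketFd = -1;
    uint8_t recvBuf[kBufferSize]; // reused, never reallocated
    uint8_t sendBuf[kBufferSize];
};

std::array<ClientSlot, kMaxClients> g_clients; // preallocated pool

ClientSlot* acquireSlot(int fd)
{
    for (ClientSlot& s : g_clients)
        if (!s.inUse) { s.inUse = true; s.socketFd = fd; return &s; }
    return nullptr; // server full
}

void releaseSlot(ClientSlot& s)
{
    s.inUse = false;
    s.socketFd = -1;
}
```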


In Topic: Meshes rendered with the aid of shaders corrupted in windows 64 bit

10 May 2016 - 03:43 AM

OK, problem solved. The root cause is that for the Intel graphics card I really should have used software vertex processing, not hardware, as the graphics card doesn't support shaders.

Here is an article about that:

https://software.intel.com/en-us/articles/intel-gma-3000-and-x3000-developers-guide

So my code and shader are OK and there is no bug in them.

Thanks to all for the hints and help.

Yes, extremely weak GPUs support only the rasterizing stage in hardware, but your device creation should have indicated a failure somewhere with HARDWARE_VERTEXPROCESSING. You should properly check that in your program and attempt conditional device creation, along these lines:
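Assuming this thread is about Direct3D 9 (HARDWARE_VX_PROCESSING reads like D3DCREATE_HARDWARE_VERTEXPROCESSING), a minimal sketch of such a fallback could look like this; d3d, hwnd, and pp are assumed to be set up by the surrounding application code:

```cpp
// Sketch: try hardware vertex processing first, fall back to software
// vertex processing for GPUs (like the Intel GMA parts discussed above)
// that only rasterize in hardware.
#include <d3d9.h>

IDirect3DDevice9* createDeviceWithFallback(IDirect3D9* d3d, HWND hwnd,
                                           D3DPRESENT_PARAMETERS* pp)
{
    IDirect3DDevice9* device = nullptr;

    // First attempt: hardware vertex processing.
    HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                   D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                   pp, &device);
    if (SUCCEEDED(hr))
        return device;

    // Fallback: software vertex processing, so shaders still run (on the
    // CPU) even when the card cannot execute them.
    hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                           D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                           pp, &device);
    return SUCCEEDED(hr) ? device : nullptr;
}
```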


In Topic: how to load compiled effect file?

09 May 2016 - 02:47 AM

You really should compile the effect file and shaders on the very machine they are supposed to run on.

I am also sure there are plenty of preprocessor definitions you can write in HLSL or effect files that should sort out your custom compilation issues. For example:
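Assuming the D3DX9 effect framework, here is a minimal sketch of compiling the .fx source on the target machine while feeding it a per-machine preprocessor macro; LOW_END and shader.fx are made-up names:

```cpp
// Sketch: compile the effect source locally, passing D3DXMACRO defines so
// one .fx file can adapt (via #if LOW_END etc.) to the local hardware.
#include <d3dx9effect.h>

ID3DXEffect* compileEffectOnTarget(IDirect3DDevice9* device, bool lowEndGpu)
{
    D3DXMACRO defines[] = {
        { "LOW_END", lowEndGpu ? "1" : "0" }, // visible to the .fx preprocessor
        { nullptr, nullptr }                  // list terminator
    };

    ID3DXEffect* effect = nullptr;
    ID3DXBuffer* errors = nullptr;
    HRESULT hr = D3DXCreateEffectFromFile(device, "shader.fx", defines,
                                          nullptr, 0, nullptr,
                                          &effect, &errors);
    if (FAILED(hr))
    {
        if (errors) // compiler messages from this machine's compile
        {
            OutputDebugStringA((const char*)errors->GetBufferPointer());
            errors->Release();
        }
        return nullptr;
    }
    return effect;
}
```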

 

