Quat

Member Since 15 Sep 2003
Offline Last Active Apr 29 2016 06:28 PM

Topics I've Started

Syncing Client/Server

05 April 2016 - 10:29 AM

First, I don't have a lot of networking background, so sorry if this question shows my noobness.  In my particular scenario there is going to be one service per client (one-to-one).  I'm sending game data from the server to the client so that the client can do some processing with it.  In theory the data could change every frame, but in practice it does not, so I'm trying to send only deltas.  However, I ran into a problem with my approach.

 

My approach was to keep a dictionary<ID, data> on the server, so when it came time to send data over the wire I could look up the last data I sent, check which pieces changed, and send only those.  A set of dirty bits would also be sent so the client knew which data values were being updated.  The client keeps the data cached as well, so it only needs the updates.

 

The problem I ran into is that the server starts up before any clients connect and starts sending the data (to nowhere).  This builds the cache, so by the time a client connects it only receives deltas, even though it never received the full object in the first place because it wasn't connected yet.

 

Since the client/service is one-to-one, I could probably modify the server to not start building a cache until a client connects.  However, I wondered if missed packets would be a problem (maybe our socket API automatically resends, so I don't need to worry about that).  I'm wondering what kind of systems games use to efficiently sync client/server data so that only deltas need to be sent.
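To make it concrete, here's roughly the shape of what I have (a sketch; GameData, SendFullObject, and SendChangedFields are placeholders for my real types and send routines):

#include <cstdint>
#include <map>

struct GameData { float x, y; std::int32_t health; };  // placeholder payload

enum DirtyBits : std::uint32_t { DIRTY_X = 1, DIRTY_Y = 2, DIRTY_HEALTH = 4 };

void SendFullObject(int id, const GameData& d);                         // placeholder
void SendChangedFields(int id, std::uint32_t bits, const GameData& d);  // placeholder

std::map<int, GameData> lastSent;  // snapshot of what this client last received

void SendDelta(int id, const GameData& current)
{
    auto it = lastSent.find(id);
    if (it == lastSent.end()) {
        // Client has never seen this object: send the full state as a baseline.
        SendFullObject(id, current);
        lastSent[id] = current;
        return;
    }

    std::uint32_t dirty = 0;
    if (current.x != it->second.x)           dirty |= DIRTY_X;
    if (current.y != it->second.y)           dirty |= DIRTY_Y;
    if (current.health != it->second.health) dirty |= DIRTY_HEALTH;

    if (dirty != 0) {
        // Bitmask goes first so the client knows which fields follow.
        SendChangedFields(id, dirty, current);
        it->second = current;  // update the snapshot
    }
}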

Updating Vertex Buffers OpenGL ES 2.0

05 March 2016 - 11:12 AM

I'm implementing a sprite batching system.  My first strategy was as follows:

 

1. Use one big dynamic vertex buffer.

2. Sort sprites by render state/texture.

3. For each sprite batch

  a. Fill dynamic vertex buffer with next batch using glBufferSubData.

  b. Issue a draw call to draw the batch.

 

My thinking was that the driver would discard the old buffer and allocate a new one for the next batch so there would be no stalls.  This is how Direct3D works with the Map/Discard/WriteOnly flags.  Can I expect similar behavior on mobile OpenGL ES 2.0?
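In code, the per-batch fill I described looks roughly like this (a sketch; SpriteVertex and SpriteBatch are my placeholder types, and I'm assuming the vertex attribute pointers are already set up):

#include <vector>
#include <GLES2/gl2.h>

struct SpriteVertex { GLfloat x, y, u, v; };
struct SpriteBatch  { GLuint texture; std::vector<SpriteVertex> vertices; };

// Strategy 1: one big dynamic VBO, refilled once per sprite batch.
void DrawBatches(GLuint spriteVbo, const std::vector<SpriteBatch>& batches)
{
    glBindBuffer(GL_ARRAY_BUFFER, spriteVbo);
    for (const SpriteBatch& batch : batches) {  // already sorted by state/texture
        glBindTexture(GL_TEXTURE_2D, batch.texture);
        // Overwrite the start of the buffer with this batch's vertices.
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        batch.vertices.size() * sizeof(SpriteVertex),
                        batch.vertices.data());
        glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.vertices.size());
    }
}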

 

Later, I was thinking that even if the driver does discard the buffers, it still allocates a lot of buffers per frame (not sure if this is a big deal or not).  So then I was thinking of the following fill strategy:

 

3. For each sprite batch

  a. Fill a system-memory vertex array with the next batch.

  b. push_back a struct BatchData = {VertexStart, VertexCount, Texture*, etc.} so I know what region of vertex buffer corresponds to a draw call.

4. Copy all the vertices for all the draw calls to the dynamic vertex buffer using glBufferSubData.

5. For each batch

  a. Draw the BatchData using its offset into the vertex buffer.

 

To me the advantage of the 2nd approach is that I won't discard a whole buffer when a sprite batch is small (like 1-2 sprites), but it does require a 2nd pass.
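Sketched out, the second approach would look something like this (again, BatchData and the other types are placeholders, and pass 1's sorting and filling is assumed to have happened already):

#include <vector>
#include <GLES2/gl2.h>

struct SpriteVertex { GLfloat x, y, u, v; };

// One record per draw call: which region of the shared VBO it uses.
struct BatchData { GLint vertexStart; GLsizei vertexCount; GLuint texture; };

void DrawFrame(GLuint spriteVbo,
               const std::vector<SpriteVertex>& cpuVertices,  // filled in pass 1
               const std::vector<BatchData>& batchList)
{
    glBindBuffer(GL_ARRAY_BUFFER, spriteVbo);

    // Step 4: one upload for the whole frame's sprites.
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    cpuVertices.size() * sizeof(SpriteVertex),
                    cpuVertices.data());

    // Step 5: one draw call per batch, offsetting into the shared buffer.
    for (const BatchData& b : batchList) {
        glBindTexture(GL_TEXTURE_2D, b.texture);
        glDrawArrays(GL_TRIANGLES, b.vertexStart, b.vertexCount);
    }
}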

 

Any thoughts?

Basic TCP/IP Question

12 February 2016 - 05:07 PM

I'm new to network programming and have to write a tool that runs on Windows and communicates with a Linux box over the network.  The Linux box has a TCP/IP server set up in C++ with Boost.

 

My tool on Windows needs to connect.  For the Windows side, I am writing the tool in C# and looked at this tutorial: http://www.codeproject.com/Articles/10649/An-Introduction-to-Socket-Programming-in-NET-using

 

Basically, the Linux box is going to send packets with "event data" to the Windows client at certain times.  What's the best way for the client to wait for incoming data?  The tutorial above uses a while loop to send/receive data over the network stream.  But is just looping and continuously polling for a packet the right design?
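What I have in mind for the receive side is basically the tutorial's loop; here it is sketched in C++ with plain BSD-style sockets, since that's what the server side uses (HandleEventData is a placeholder for my own handler):

#include <sys/types.h>
#include <sys/socket.h>

void HandleEventData(const char* data, ssize_t len);  // placeholder handler

// Receive loop meant to run on its own thread.
// 'sock' is an already-connected TCP socket.
void ReceiveLoop(int sock)
{
    char buffer[4096];
    for (;;) {
        // On a blocking socket, recv() puts the thread to sleep until data
        // arrives or the peer disconnects; it does not busy-spin.
        ssize_t n = recv(sock, buffer, sizeof(buffer), 0);
        if (n <= 0)
            break;  // 0 = orderly shutdown, negative = error
        HandleEventData(buffer, n);
    }
}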


Normalized Blinn Phong

02 February 2016 - 06:11 PM

In the 3rd edition of Real-Time Rendering, the authors base the specular reflectance on roughness and the Fresnel effect:

 

$\frac{m+8}{8\pi}\, R_F(\alpha_h)\, \cos^m(\theta_h)$

 

 

It seems that for a really large m, when the normal and half vector are close to each other, the above expression evaluates to a number much greater than 1.

 

Won't this amplify the amount of reflected specular light?  It's not making sense to me because then won't the reflected light be greater than the incoming light?  How is that possible?
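As a sanity check I tried writing down what energy conservation would actually require; I believe the constraint is on the hemispherical integral, not on the BRDF's pointwise value (my own derivation, not from the book, and only approximate since the m+8 factor is itself an approximation):

$$\int_{\Omega} \frac{m+8}{8\pi}\, R_F(\alpha_h)\, \cos^m(\theta_h)\, \cos\theta_i \, d\omega_i \;\le\; 1$$

As m grows, the $\cos^m$ lobe narrows, so the large peak multiplies an ever smaller solid angle and the integral stays bounded. If I'm reading that right, a pointwise value above 1 doesn't by itself break conservation, but is that the right way to think about it?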


Reflections with Fresnel

29 September 2015 - 09:23 PM

My old material class used to have a "reflectWeight" to tweak how reflective an object is.  Now I am using the Schlick approximation for Fresnel to determine how reflective an object is, and all my materials can be reflective based on the material's "Fresnel 0" (F0) value.  The problem I am seeing is that for low-reflectance objects like dull wood, at glancing angles (large angles between normal and reflection vector) the amount of reflection ramps up and makes even dull wood look reflective.  This does not seem right.  What's the right way to solve this?
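For reference, the Schlick term I'm computing is just this (a sketch; fresnel0 is the material's F0 and cosTheta is the relevant dot product):

#include <cmath>

// Schlick's approximation: reflectance rises from F0 at normal incidence
// toward 1.0 at grazing angles, regardless of how small F0 is.
float FresnelSchlick(float fresnel0, float cosTheta)
{
    return fresnel0 + (1.0f - fresnel0) * std::pow(1.0f - cosTheta, 5.0f);
}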

