
CC Ricers

Member Since 04 Jul 2004
Offline Last Active May 13 2013 10:08 PM

Posts I've Made

In Topic: xna dead. now what?

09 April 2013 - 02:19 PM

MonoGame has the best future of the open-source alternatives, though it still has its share of troubles with setting it up for the various other platforms and building programs on them. It's also rough trying to port something more graphically intensive with many custom shaders. One big missing feature for me is hardware instancing. Still, it is the most complete in functionality, beating all of the other ones I've browsed; one in particular was full of NotImplementedExceptions!
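For anyone curious what that missing feature looks like, here is a rough sketch of XNA 4's hardware instancing call. The buffers and effect are assumed to be created elsewhere (and the effect must be written to read the per-instance stream); this is just the shape of the API, not code from a real project.

using Microsoft.Xna.Framework.Graphics;

static class InstancingSketch
{
    // Draws one model many times in a single call. instanceBuffer holds
    // per-instance data (typically a world transform); the binding with
    // instance frequency 1 advances it once per instance, not per vertex.
    public static void DrawInstanced(GraphicsDevice device, Effect effect,
        VertexBuffer modelVertexBuffer, IndexBuffer indexBuffer,
        VertexBuffer instanceBuffer, int instanceCount)
    {
        device.SetVertexBuffers(
            new VertexBufferBinding(modelVertexBuffer, 0, 0),
            new VertexBufferBinding(instanceBuffer, 0, 1));
        device.Indices = indexBuffer;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.DrawInstancedPrimitives(PrimitiveType.TriangleList,
                0, 0,
                modelVertexBuffer.VertexCount,
                0,
                indexBuffer.IndexCount / 3,
                instanceCount);
        }
    }
}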

 

I will be sticking with it for its multiplatform support. Before, I was entertaining porting routes such as going from XNA to pure DirectX and then to OpenGL, or sticking with C# and using the OpenTK library. MonoGame abstracts the use of OpenTK for the Linux and Mac platforms.


In Topic: Texturing big landscapes

05 April 2013 - 08:21 PM

Another thing that you might need is triplanar texture projection for mountains, so that the stone texture doesn't look stretched.

 

Triplanar texture mapping also carries another advantage: by using the normals of the terrain mesh to indirectly derive the texture coordinates, it may no longer be necessary to have texture coordinate data in your vertex structure. I did away with it when I noticed that the texture coordinate semantic was no longer being used, and for a large enough terrain it can reduce the memory footprint a bit.
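To illustrate the math (normally done in the pixel shader), here is the weighting and UV derivation written out in plain C#. This is a minimal sketch using System.Numerics on modern .NET purely for illustration, not my terrain code; the sharpness value is an arbitrary assumption.

using System;
using System.Numerics;

static class TriplanarSketch
{
    // Blend weights for the three planar projections, derived purely from
    // the surface normal -- no per-vertex texture coordinates needed.
    public static Vector3 BlendWeights(Vector3 normal, float sharpness = 4f)
    {
        var w = new Vector3(
            MathF.Pow(MathF.Abs(normal.X), sharpness),
            MathF.Pow(MathF.Abs(normal.Y), sharpness),
            MathF.Pow(MathF.Abs(normal.Z), sharpness));
        return w / (w.X + w.Y + w.Z);   // normalize so the weights sum to 1
    }

    // The three UV pairs come straight from world position: project the
    // point onto the YZ, XZ, and XY planes and scale by texture tiling.
    public static (Vector2 uvX, Vector2 uvY, Vector2 uvZ) Uvs(
        Vector3 worldPos, float tiling)
    {
        return (new Vector2(worldPos.Y, worldPos.Z) * tiling,
                new Vector2(worldPos.X, worldPos.Z) * tiling,
                new Vector2(worldPos.X, worldPos.Y) * tiling);
    }
}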


In Topic: Loading and storing game assets in a Entity Component system

05 April 2013 - 01:33 PM

So, extending this into the component approach, here are my current systems and how they may act upon different objects (a rough code sketch follows the list). All except for the Quad and Post-Process renderers also use a Transform component.

 

Basic Forward Renderer -> uses Light, Camera, and Geometry components

Custom Forward Renderer (provide your own shader) -> Light, Camera, and Geometry

Debug Renderer -> Camera, Bounding Volume

---

G-Buffer Renderer -> Camera and Geometry

Depth Map Renderer -> Camera and Geometry

Light Pass Renderer -> Light, Camera, and Quad

Final Pass Renderer -> Quad

---

Post-Process Renderer -> Quad
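As a rough picture of what I mean, each renderer system declares which components it needs and only walks the entities that carry them. A minimal sketch with hypothetical Entity and component types, not my actual classes:

using System;
using System.Collections.Generic;

// Hypothetical component types; the real ones would hold matrices,
// vertex buffers, light parameters, and so on.
class Transform { }
class Camera { }
class Light { }
class Geometry { }
class Quad { }

class Entity
{
    private readonly Dictionary<Type, object> components =
        new Dictionary<Type, object>();

    public void Add<T>(T component) { components[typeof(T)] = component; }
    public T Get<T>() { return (T)components[typeof(T)]; }
    public bool Has<T>() { return components.ContainsKey(typeof(T)); }
}

// The basic forward renderer from the list above wants Geometry plus
// Transform, and is handed the Camera to render with.
class BasicForwardRenderer
{
    public void Draw(IEnumerable<Entity> entities, Camera camera)
    {
        foreach (Entity e in entities)
        {
            if (e.Has<Geometry>() && e.Has<Transform>())
                DrawGeometry(camera, e.Get<Transform>(), e.Get<Geometry>());
        }
    }

    private void DrawGeometry(Camera camera, Transform t, Geometry g)
    {
        // issue the actual draw calls here
    }
}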

 

A proper mesh object would have Transform and Geometry.

Camera objects have a Camera component, of course, which would contain the view and projection matrices. A Transform component for the World matrix is something I might consider, but then cameras would not use the scaling part.

A screen-space Quad is just a Quad; it would not need a Transform, as the renderer would do the job of positioning it on the screen.

Light objects are only a Light component for now. The reasoning for not having a Transform is that most lights don't need a lot of data to represent them. Point lights have a size and position but no rotation, and directional lights have neither position nor scale.
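Using the same hypothetical Entity and component types from the sketch above, composing the objects described here would look like this:

static class CompositionSketch
{
    // Builds the objects described above out of components; relies on the
    // Entity, Transform, Camera, Light, Geometry, and Quad sketch earlier.
    public static void BuildSceneObjects()
    {
        Entity mesh = new Entity();
        mesh.Add(new Transform());
        mesh.Add(new Geometry());

        Entity cameraObject = new Entity();
        cameraObject.Add(new Camera());   // view and projection live here; no Transform

        Entity pointLight = new Entity();
        pointLight.Add(new Light());      // position and size stored in the Light itself

        Entity screenQuad = new Entity();
        screenQuad.Add(new Quad());       // no Transform; the renderer positions it
    }
}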

 

Geometry contains all the information about the textures within the model (embedding them is an offline process), and the same goes for Quad. I don't know if I would separate out the textures, as I think that would make it too generic for now. I haven't started thinking about other kinds of rendering, like blending transparent objects after the G-Buffer pass, restoring depth, etc.

 

I guess I'm thinking out loud here. My architecture is actually set up a lot like this already, but the model and camera classes have gotten too big for my liking. It looks like my code would benefit from CES in organizing the data and making it easier to maintain.


In Topic: When does the failure end?

05 April 2013 - 12:46 PM

I'm not sure if you intended to post in the Music forum; this sounds more like a Breaking In topic, unless you were applying for a musician job and did mean to post here. Also, some background on your education and jobs would be helpful. I can relate to Kylotan's words on the harsh reality. My own harsh analogy is that each interview is like taking a college exam where the "teacher" will only pass the top 5% of those who take it :o

 

I recently interviewed for a (non-gaming) developer job in which they commonly used a framework I am not familiar with, but my knowledge of MVC and of other frameworks/CMSes using that methodology would have made me quick to learn it. They were accepting of that fact and would have provided ramp-up time. Unfortunately, and perhaps not surprisingly, they interviewed someone who did know the framework, and he was hired.

 

So on-the-job training and ramp-up time to get working with the tools a company uses is not an impossibility. But with competition that can bypass that need, it becomes more and more improbable to end up in that situation.

 

Honestly, the closest I've ever gotten to getting into the games industry (even if it was just QA) fell short for what was, to me, a pretty dumb and disappointing reason: I have no car and their location was out of my reach, so I could not plan an in-person interview, much less travel there every day for work. I was in college at the time.


In Topic: Suggestions for Keeping track of Lights and Entities in a scene. [Design Ques...

05 April 2013 - 12:35 PM

Some people go with uber-shaders that handle everything thrown at them, and others prefer compiling specialized shaders for each possible lighting and material combination at run-time. If you go the uber-shader route, you would probably want to keep all your rendering in a single pass if possible, to keep your GPU work low.
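In XNA terms, one common middle ground is a single effect file with several techniques and the application picking one per draw. A minimal sketch; the technique names here ("OneLight", "TwoLights", "ManyLights") are assumptions and would have to exist in your .fx file:

using Microsoft.Xna.Framework.Graphics;

static class UberShaderSketch
{
    // Select a technique compiled for the exact light count rather than
    // branching on it per-pixel inside one giant shader.
    public static void ApplyLighting(Effect effect, int activeLightCount)
    {
        string name = activeLightCount <= 1 ? "OneLight" :
                      activeLightCount == 2 ? "TwoLights" : "ManyLights";
        effect.CurrentTechnique = effect.Techniques[name];

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            pass.Apply();
    }
}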

 

What you are suggesting (splitting up the work amongst several shaders) would fit better in setups like deferred rendering and deferred lighting. Sending the geometry multiple times sounds wasteful, and I've only seen it justified in light pre-pass rendering, on hardware setups where the framebuffer memory is limited.

 

If you have many lights visible in your scene at one time, deferred rendering may be the preferred choice. Your idea of drawing spheres as point lights and additively blending them into the framebuffer is commonly used in deferred renderers.
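The render-state side of that light pass is simple in XNA. A minimal sketch, with the drawLightSphere callback standing in for however you actually submit the sphere mesh:

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework.Graphics;

static class LightPassSketch
{
    // Draw each visible point light's sphere with additive blending so
    // overlapping lights accumulate; depth is read but not written.
    public static void DrawPointLights<TLight>(GraphicsDevice device,
        IEnumerable<TLight> visibleLights, Action<TLight> drawLightSphere)
    {
        device.BlendState = BlendState.Additive;
        device.DepthStencilState = DepthStencilState.DepthRead;

        foreach (TLight light in visibleLights)
            drawLightSphere(light);   // sphere scaled to the light's radius

        // restore defaults for subsequent passes
        device.BlendState = BlendState.Opaque;
        device.DepthStencilState = DepthStencilState.Default;
    }
}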

 

My deferred rendering setup works as follows: I have a Scene class that contains lists of Models, Directional Lights, and Point Lights (spotlights and others I would like to support eventually). It directly adds all these objects to the Scene, calling the constructors for each. Note the lack of Cameras in the Scene; I wanted the flexibility to easily swap in the current camera to show the Scene with.

 

In the drawing phase, the Scene and the active Camera get passed to a SceneCuller class, which culls all the Models that fall outside the Camera's view. I'm just using a brute-force method for everything; it works so far with thousands of sphere-frustum tests (I don't use AABBs). Another culling function culls the Point Lights the same way. Then it's ready to draw all the objects to the screen.
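The sphere-frustum test itself comes free with XNA's math types. A minimal sketch of the brute-force pass, assuming each model can hand back a BoundingSphere via the boundsOf callback (not my actual SceneCuller):

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

static class SceneCullerSketch
{
    // Brute force: test every model's bounding sphere against the frustum
    // built from the active camera's view * projection matrix.
    public static List<T> CullBySphere<T>(Matrix view, Matrix projection,
        IEnumerable<T> models, Func<T, BoundingSphere> boundsOf)
    {
        BoundingFrustum frustum = new BoundingFrustum(view * projection);
        List<T> visible = new List<T>();

        foreach (T model in models)
        {
            if (frustum.Intersects(boundsOf(model)))
                visible.Add(model);
        }

        return visible;
    }
}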

