lephyrius

Member Since 20 Apr 2003
Offline Last Active May 21 2015 07:17 AM

Topics I've Started

GLSL version management?

20 February 2015 - 02:29 AM

I have a shader and it works well. The problem is that it only works on my desktop so far, and I might port this to an iOS or Android device.

The thing is that on desktop the shader uses things like:

in/out instead of attribute/varying (one of the differences between GLSL and GLSL ES)

So I feel like I need two different versions of every shader.

I want to keep it simple, and people expect things to run anywhere, especially if it's a 2D game with some 3D effects.

What is the best course of action?

1. Have different shader files for different platforms.

2. Preprocessor defines for things like in/out versus attribute/varying.

3. Something different, like some kind of universal GLSL that compiles to the different versions in the asset "baking" process.

Also, is it possible to define constants from the C++ code, or is it easier to just prepend them to the GLSL source?
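For what it's worth, option 2 and the constants question can be combined: glShaderSource accepts an array of strings, so a per-platform preamble (the #version line, keyword #defines, constants) can be passed in front of the shader body without touching the file on disk. A minimal sketch, assuming the GL loader is already set up and that the shader body contains no #version line of its own (MAX_LIGHTS is just an illustrative constant):

GLuint compileShader(GLenum type, const char* body, bool gles) {
  // Keyword differences (attribute/varying vs. in/out) can be bridged
  // with per-stage #defines in the same preamble.
  const char* preamble = gles
      ? "#version 100\n"
        "precision mediump float;\n"
        "#define MAX_LIGHTS 4\n"
      : "#version 330 core\n"
        "#define MAX_LIGHTS 4\n";

  const char* sources[2] = { preamble, body };
  GLuint shader = glCreateShader(type);
  glShaderSource(shader, 2, sources, NULL);  // Preamble first, then the body.
  glCompileShader(shader);
  return shader;
}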


Algorithm for covering as many dots as possible?

14 September 2014 - 10:24 AM

Say I have a rectangle of random size with random dots in it (call it rectangle A).

I want to place a rectangle inside A so that it covers as many dots as possible.

 

Is there any way of doing this without going brute force: iterating over all the dots, using each one as a corner of the candidate rectangle, and keeping only the placement that covers the most dots?
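For reference, here's a minimal sketch of that brute force, assuming the inner rectangle has a fixed width and height (the Dot struct and bestCover name are just illustrative). It's O(n²) in the number of dots:

#include <cstddef>
#include <vector>

struct Dot { float x, y; };

// Try each dot as the top-left corner of a w-by-h rectangle and
// count how many dots fall inside; keep the best count.
std::size_t bestCover(const std::vector<Dot>& dots, float w, float h) {
  std::size_t best = 0;
  for (const Dot& corner : dots) {
    std::size_t count = 0;
    for (const Dot& d : dots) {
      if (d.x >= corner.x && d.x <= corner.x + w &&
          d.y >= corner.y && d.y <= corner.y + h)
        ++count;
    }
    if (count > best) best = count;
  }
  return best;
}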


Render queue ids/design?

04 November 2013 - 05:13 AM

I've decided that I want a render queue!

I have decided two things:

1. Not rebuilding the queue every frame (that seems to create a lot of memory fragmentation). This means the queue will need some reference into the scene graph. Is that a bad thing?

2. Having separate queues for "opaque 3D", "transparent 3D" and "2D", because they have slightly different sort orders.

I'm using OpenGL in my examples, but I feel this could apply to any other API as well.

So there is a 64-bit ID that you sort by.

The parameters, in priority order for the opaque 3D queue, are:
1. Shader id
2. Material id (probably a UBO, or a combination of texture ids)
3. VBO id
4. IBO id

Now how do I combine the ids into one 64-bit integer?
Something like this?

uint64_t queue_id = ((uint64_t)(shader_id   & 0xFFFF) << 48) |
                    ((uint64_t)(material_id & 0xFFFF) << 32) |
                    ((uint64_t)(vbo_id      & 0xFFFF) << 16) |
                     (uint64_t)(ibo_id      & 0xFFFF);
// Note: cast to 64 bits before shifting, and mask with & 0xFFFF rather than
// % USHRT_MAX (which maps 65535 back to 0 and shifts a 32-bit value by 48).
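(Side note: once every draw call carries such a key, sorting the queue itself is simple. A minimal sketch; the Draw struct is just illustrative:)

#include <algorithm>
#include <cstdint>
#include <vector>

struct Draw {
  uint64_t key;  // The packed shader/material/VBO/IBO id from above.
  // ... whatever else the draw call needs ...
};

void sortQueue(std::vector<Draw>& queue) {
  std::sort(queue.begin(), queue.end(),
            [](const Draw& a, const Draw& b) { return a.key < b.key; });
}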

Is there some other way of compacting the 32-bit GL ids into 16 bits?

Or should I maybe create more of a wrapper around the GL ids?
So, the shader permutation class:

class Shader {
public:
  uint16_t id;       // Compact id used when building the 64-bit sort key.
  GLuint shader_id;  // The underlying GL program object.
};


And have a manager (factory) that takes care of assigning compact ids.
 

class ShaderManager {
public:
  Shader& get(const char* filename);  // The shader configuration file.

  std::map<std::string, uint16_t> loaded_shaders;
  std::vector<uint16_t> free_ids;
  std::vector<Shader> shaders;
};

Hmmm..
This solution is probably a bit more compact and robust.
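For example, get() could hand out compact ids something like this (a rough sketch; loadShaderFromFile is a hypothetical helper that compiles the program, and freed ids are recycled from free_ids so the 16-bit space stays dense):

Shader& ShaderManager::get(const char* filename) {
  auto it = loaded_shaders.find(filename);
  if (it != loaded_shaders.end())
    return shaders[it->second];  // Already loaded: reuse its slot.

  uint16_t id;
  if (!free_ids.empty()) {       // Recycle an id freed by an unload.
    id = free_ids.back();
    free_ids.pop_back();
  } else {
    id = static_cast<uint16_t>(shaders.size());
    shaders.emplace_back();
  }
  shaders[id].id = id;
  shaders[id].shader_id = loadShaderFromFile(filename);  // Hypothetical helper.
  loaded_shaders[filename] = id;
  return shaders[id];
}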

Hmmm...
I don't think I'll ever have 65535 different shaders, materials, VBOs or IBOs, at least not at the same time. Then I could use uint8_t ids and add a z order to the key as well.
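For example (the field layout here is just a guess at what I'd want):

#include <cstdint>

// 8 bits per resource id leaves the low 32 bits free for depth,
// e.g. for front-to-back ordering of opaque geometry.
uint64_t make_key(uint8_t shader, uint8_t material,
                  uint8_t vbo, uint8_t ibo, uint32_t z) {
  return (uint64_t(shader)   << 56) |
         (uint64_t(material) << 48) |
         (uint64_t(vbo)      << 40) |
         (uint64_t(ibo)      << 32) |
          uint64_t(z);
}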

Maybe I should have different kinds of queues instead, so that:
 

// (Classes ordered bottom-up so the nested vectors have complete types.)

class MaterialCommand {
public:
  size_t material_id;
  GLuint textures[4];  // Probably will be UBOs, but I use textures for now for simplicity.

  // The next level of the tree (VBO/IBO queues) would hang here.
};

class MaterialQueue {
public:
  MaterialCommand& add();
  void remove(MaterialCommand& command);
  void render();

  std::vector<MaterialCommand> commands;
};

class ShaderCommand {
public:
  GLuint shader_id;

  std::vector<MaterialQueue> material_queues;
};

class ShaderQueue { // The root of the tree.
public:
  ShaderCommand& add();
  void remove(ShaderCommand& command);
  void render();

  std::vector<ShaderCommand> commands;
};

// This is just an example of the render function.
void ShaderQueue::render() {
  // Sort the commands based on shader_id. (skipped)
  GLuint current_shader_id = 0;
  for (ShaderCommand& command : commands) {
    if (command.shader_id != current_shader_id) {
      glUseProgram(command.shader_id);  // Apply the shader only when it changes.
      current_shader_id = command.shader_id;
    }
    for (MaterialQueue& queue : command.material_queues) {
      queue.render();
    }
  }
}

The problem I feel with this approach is that it probably creates more memory fragmentation (and maybe hurts the cache), and it's harder to change materials on things (though I wonder how often that needs to be done). On the other hand, it's more of a bucket approach. Another problem is that sorting will need a lot of copying unless I store the commands as pointers. And a mesh node in the scene graph would need to hold ShaderCommand, MaterialCommand, VBOCommand and IBOCommand references so it can change its material, shader and VBOs/IBOs.
At least it solves generating the ids.

Am I overthinking this now?
Is there something I have totally missed, or something else I need to think about?


Scripting garbage collection and destructors

15 October 2013 - 01:18 AM

I've been thinking of using Squirrel as a scripting language.

My idea was to use the garbage collector as a memory manager (Squirrel as an example script):

class Model
{
    constructor(filename)
    {
        model_id = load_model(filename);
    }

    destructor()
    {
        unload_model(model_id);
    }

    model_id = null;
}

Something like this. (load_model and unload_model are native functions that are really easy to bind; no need to worry about binding complicated classes.)
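(For context, binding such a function really is only a few lines with the stock Squirrel C API; roughly like this, assuming a native load_model(const char*) that returns an integer id:)

#include <squirrel.h>

SQInteger sq_load_model(HSQUIRRELVM v) {
  const SQChar* filename;
  sq_getstring(v, 2, &filename);            // Arg 1 is at stack index 2 ('this' is 1).
  sq_pushinteger(v, load_model(filename));  // Push the model id as the result.
  return 1;                                 // One value returned to the script.
}

void register_load_model(HSQUIRRELVM v) {
  sq_pushroottable(v);
  sq_pushstring(v, _SC("load_model"), -1);
  sq_newclosure(v, sq_load_model, 0);
  sq_newslot(v, -3, SQFalse);  // roottable.load_model = closure
  sq_pop(v, 1);                // Pop the root table.
}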

It's obvious that this isn't going to work, because Squirrel doesn't have destructors.

Am I thinking about this wrong?

Is there a reason why Squirrel doesn't have destructors?

Am I just using scripting the wrong way?

Are there any alternatives to Squirrel that are similar to it but have destructors?

Should I rethink how I bind to the native functions?


GLExpert and GLee errors?

29 October 2010 - 01:55 AM

I'm using GLee for GL 2.0 support. That's not a problem.
But now I'm investigating a bug, so I need NVIDIA GLExpert for debugging. What include order do I need for "nvapi.h" and "Glee.h"?
I only get these errors:
1>graphicssubsystem.cpp(239): error C3861: 'glDrawBuffers': identifier not found
1>graphicssubsystem.cpp(257): error C3861: 'glGenBuffers': identifier not found
1>graphicssubsystem.cpp(258): error C3861: 'glBindBuffer': identifier not found
1>graphicssubsystem.cpp(265): error C3861: 'glBufferData': identifier not found
1>graphicssubsystem.cpp(277): error C3861: 'glDrawBuffers': identifier not found
1>graphicssubsystem.cpp(406): error C3861: 'glBlendEquation': identifier not found
1>graphicssubsystem.cpp(468): error C3861: 'glBindBuffer': identifier not found
1>graphicssubsystem.cpp(494): error C3861: 'glBindBuffer': identifier not found

The errors are the same no matter what order I include them in.

PARTNERS