About too_many_stars


  1. too_many_stars

    Dealing with Non-Center-of-Mass Translations

    Thanks so much again for the help, Randy. I will do as you suggest and try to make some sense of it. Mike
  2. Hello Everyone, It's very common in 2D and 3D to use the center of mass (COM) as the point of rotation. This also makes it easy to fit the model into a bounding sphere, AABB, capsule, whatever. This is what I have always done.

     However, now I am trying to model a 3D hinged door. Due to the nature of its rotation, this door will not be centered at the COM; instead, it has to be translated locally either to the left or right, depending on where the hinge should be. My question is: what is the best way of dealing with non-COM rotations?

     The problem is that once the symmetry is gone from a non-COM model, my bounding volumes are thrown off, as they rely on the COM. And now, when I am transforming from local to world coordinates, it becomes harder to gauge where the object will be and how it will fit into the rest of the world. I also have to introduce new variables such as glm::vec3 local_position to keep track of the new object center. It makes everything a lot messier.

     Thanks again for all the help, Mike
  3. too_many_stars

    glsl - material, texture and light relationship

    Thanks for the response, Silence. I do like the lookup idea; it's going on the to-do list. Mike
  4. Hi Guys, A quick question: I have a 3D grid sector made out of AABB boxes (see picture below). When I turn the light on, the light catches the extremities of the boxes, creating ugly line artifacts. Just wondering if there's a way to solve this problem. Thanks again, Mike
  5. too_many_stars

    glsl - material, texture and light relationship

    Thanks for the response, Silence and @Sid. The reason I want a material with a texture is so I can simply change the texture color. I guess I could introduce another RGB variable just to tint the texture, but that seems like a waste of memory. Mike
  6. Hello Everyone, I am having a difficult time coming up with a relationship between a light, a material, and a texture inside a shader. I have a model with the following properties: 1.) a texture, 2.) a material, and a world light source.

     If the light is on, inside the fragment shader I do the following:

     vec3 light_frag = phongADS( material );
     gl_FragColor = vec4( light_frag , 1 ) * texture( sampler , uv );

     This calculates the light fragment based on the material, with ambient, diffuse, and specular components, and then multiplies it by the texture fragment.

     However, when the light is off and I am only dealing with a material and a texture, what is the relationship between the two? Does one just take, for example, the diffuse component of the material, like so:

     gl_FragColor = vec4( material.diffuse , 1 ) * texture( sampler , uv );

     More generally, my question is this: given a model (say an AABB) with a material, a texture, and a light source, what is the proper per-fragment calculation (whether the light is on or off) that takes all three into account?

     Thanks, Mike
  7. too_many_stars

    GLubyte array and getPixel function

    Hi Sponji, Thanks so much, that's the correct answer. I thought I was tying into the array properly (I even did a few examples on paper to make sure it worked). Mike
  8. Hello Everyone, I am having issues with extracting a pixel from a GLubyte array. Here is what I have:

     GLubyte* pixels = SOIL_load_image( file_path.c_str() , &width , &height , &channels , SOIL_LOAD_AUTO ); // load the pixel array

     // later in the code
     // RGB holds GLint r, g, b
     RGB Texture::getPixel( const int x , const int y ) const{
         if( x < 0 || x > width - 1 || y < 0 || y > height - 1 ) return RGB();
         int w = channels * width;
         int idx = y * w + x;
         GLubyte r = pixels[ idx + 0 ];
         GLubyte g = pixels[ idx + 1 ];
         GLubyte b = pixels[ idx + 2 ];
         return RGB( (GLint) r , (GLint) g , (GLint) b );
     }

     I think I am tying into the array correctly. I do get integer values for my RGB struct, but they are all wrong; I get colors that don't even appear on the texture. If anyone knows what I am doing wrong, please let me know. Thanks so much, Mike
  9. too_many_stars

    GLSL Rendering Multiple Materials Across a Model

    Thank you so much Koen, that's a great answer. I figured there was going to have to be some kind of batching process. Mike
  10. Hello Everyone, Here is what I have so far when it comes to rendering a model with GLSL:

      glsl->setUniform( "Ka" , material.ambient );
      glsl->setUniform( "Kd" , material.diffuse );
      glsl->setUniform( "Ks" , material.specular );
      glsl->setUniform( "mat_shine" , material.shininess );

      glBindVertexArray( vao_id );
      glDrawArrays( GL_TRIANGLES , 0 , vertices.size() );
      glBindVertexArray( 0 );

      This works fine for a single material applied across the entire model. However, I am now importing an .obj model with various materials from Blender. After mapping triangle faces to material ids, I am unsure how the above call to glDrawArrays( GL_TRIANGLES , 0 , vertices.size() ) needs to change in order to render more than one material on a single model. Any help would be greatly appreciated, Mike
  11. too_many_stars

    Shader class instancing question

    Thanks for the reply. Maybe a singleton registry with a std::map< std::string , GLSL >? That would give a quick lookup table for a particular shader and would make it available in any render method. Thanks, Mike
  12. Hello Everyone, I have been going over a number of books and examples that deal with GLSL. After viewing the source code, it's common to see something like this:

      class Model{
      public:
          Model();
          void render();
      private:
          GLSL glsl_program;
      };

      ////// .cpp
      Model::Model(){
          glsl_program.compileAndLinkShaders();
      }

      void Model::render(){
          glsl_program.use();
          // render something
          glsl_program.unUse();
      }

      Is this how a shader program should be used in real-time applications? For example, if I have a particle class, do I want to compile and link a vertex and fragment shader for every particle that's created? It seems to a noob such as myself that this might not be the best approach for real-time applications. If I am correct, what is the best workaround? Thanks so much for all the help, Mike
  13. too_many_stars


    That makes sense, thanks so much!
  14. too_many_stars


    Thanks so much again swiftcoder, that did the trick. I think I completely confused myself going back and forth. One more question, if I may: is there a performance increase to using indices? Thanks, Mike
  15. too_many_stars


  15. Thanks for the response swiftcoder, I already tried that with the following code:

      GLfloat verts[] = {  1.0f ,  1.0f , 1.0f ,   // [ 0 ]
                          -1.0f ,  1.0f , 1.0f ,   // [ 1 ]
                          -1.0f , -1.0f , 1.0f ,   // [ 2 ]
                           1.0f , -1.0f , 1.0f };  // [ 3 ]

      GLushort indices[] = { 0 , 1 , 2 , 2 , 1 , 3 };

      GLfloat uv[] = { 1.0f , 1.0f ,
                       0.0f , 1.0f ,
                       0.0f , 0.0f ,
                       1.0f , 0.0f };

      And the result is a quad with a slice taken out of it on the left-hand side. I should also note that when I use glDrawArrays (after changing the number of verts and uv's) I don't have any issues.
Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
