
Search the Community

Showing results for tags 'OpenGL'.



Found 1000 results

  1. So I've started learning OpenGL, and I found the NeHe website with its Legacy Tutorials (here is the link: http://nehe.gamedev.net/tutorial/lessons_01__05/22004/). But the problem is that I can't download any of the code examples (neither the C examples nor those in any other language). As you can see, it is impossible to download http://nehe.gamedev5.net/data/lessons/pelles_c/lesson01.zip. Does anyone have examples of C code for these lessons? And a second question: if I prefer coding in C, should I choose GLUT rather than plain OpenGL?
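
     For context on that last question: GLUT is not a replacement for OpenGL but a small toolkit that creates the window and GL context that OpenGL draws into, so the usual answer is "GLUT (or freeglut) plus OpenGL, from C". A minimal sketch, assuming freeglut is installed and linked:

        /* Minimal freeglut program in C: open a window and clear it every frame. */
        #include <GL/glut.h>

        void display(void)
        {
            glClearColor(0.1f, 0.2f, 0.3f, 1.0f);   /* dark blue background */
            glClear(GL_COLOR_BUFFER_BIT);
            glutSwapBuffers();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
            glutInitWindowSize(640, 480);
            glutCreateWindow("Lesson 01");
            glutDisplayFunc(display);
            glutMainLoop();   /* runs until the window closes */
            return 0;
        }
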
  2. Hello! I'm trying to understand how to load models with Assimp. Learning how to use the library isn't that hard; the question is how to use the data. From what I understand so far, each model consists of several meshes, which you render individually in order to get the final result (the model). Assimp's documentation also says: "One mesh uses only a single material everywhere - if parts of the model use a different material, this part is moved to a separate mesh at the same node". The only thing that confuses me is how to create the shader that will use these data to draw a mesh. Let's say I have all the information about a mesh, like this:

        class Meshe
        {
            std::vector<Texture>      diffuse_textures;
            std::vector<Texture>      specular_textures;
            std::vector<Vertex>       vertices;
            std::vector<unsigned int> indices;
        };

     And let's make the simplest shaders.

     Vertex shader:

        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aNormal;
        layout(location = 2) in vec2 aTexCoord;

        uniform mat4 model;
        uniform mat4 view;
        uniform mat4 projection;

        out vec2 TextureCoordinate;
        out vec3 Normals;

        void main()
        {
            gl_Position = projection * view * model * vec4(aPos, 1.0f);
            TextureCoordinate = aTexCoord;
            Normals = normalize(mat3(transpose(inverse(model))) * aNormal);
        }

     Fragment shader:

        #version 330 core
        out vec4 Output;

        in vec2 TextureCoordinate;
        in vec3 Normals;

        uniform sampler2D diffuse;
        uniform sampler2D specular;

        void main()
        {
            Output = texture(diffuse, TextureCoordinate);
        }

     Will this work? I mean, Assimp says that each mesh has only one material, but how many diffuse and specular textures can that material have? Does it make sense for a material to have more than one diffuse or more than one specular texture? If each material has only two textures, one diffuse and one specular, then it's easy: I use the specular texture in the lighting calculations and the diffuse in the actual output. But what happens if there are more textures? How do I declare them in the fragment shader without knowing the actual number, and how do I use them?
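
     On the last question: materials with several diffuse layers do exist (detail maps, blend maps), but the shader then has to be told how to combine them; it cannot guess. A minimal sketch with two layers and an explicit blend uniform (blendFactor is an illustrative name, not part of Assimp):

        #version 330 core
        out vec4 Output;

        in vec2 TextureCoordinate;

        uniform sampler2D diffuse1;   // first diffuse layer
        uniform sampler2D diffuse2;   // optional second layer
        uniform float blendFactor;    // 0.0 = only diffuse1, 1.0 = only diffuse2

        void main()
        {
            vec4 base  = texture(diffuse1, TextureCoordinate);
            vec4 layer = texture(diffuse2, TextureCoordinate);
            Output = mix(base, layer, blendFactor);
        }

     In practice most meshes end up with exactly one diffuse and one specular map, which is why the simple two-sampler shader is usually enough.
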
  3. Hello everyone, While I do have a B.S. in Game Development, I am currently unable to answer a very basic programming question. In the eyes of OpenGL, does it make any difference whether the program uses integers or floats? More specifically, characters, models, and other items have coordinates, and right now I am very tempted to use integers for those coordinates. My two reasons are accuracy and perhaps optimizing calculations: if multiplying two floats is more expensive for the CPU, then that is a very powerful reason not to use floats for the positions or vectors of game objects. Please forgive my naiveté and my not knowing the preferences of the GPU. I hope this thread becomes a way for us to learn how to program better as a community. -Kevin
  4. I'm having trouble with GLEW sharing some defines that I can't resolve. Does anyone know of a way to get the following statements working without an include of GLEW (GLEW resolves the red squigglies too)?

        glColor3f(0.0f, 1.0f, 0.0f);
        glRasterPos2i(10, 10);

     I really want to use a quick GLUT command for now; that command uses the statements above. Thank You, Josheir
  5. I am trying to use OpenGL to draw some simple text quickly, but I cannot get the top two commands below to be defined:

        glColor3f(rgb.r, rgb.g, rgb.b);
        glRasterPos2f(x, y);
        glutBitmapString(font, string);

     I tried this:

        #include <C:/Program Files (x86)/Microsoft SDKs/Windows/v7.1A/Include/gl/GL.h>

     May I have some help, please? The last command, the GLUT one, is fine. Apparently the problem is not with GLUT or FreeGLUT, but with the Visual Studio headers. I am using Visual Studio 2017, C++. Thank you, Josheir
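
     For reference, missing defines like these usually come down to include order on Windows rather than GLUT itself: <GL/gl.h> needs the APIENTRY machinery from <windows.h>, and GLEW (if used) insists on being included before any other GL header. A sketch using the standard include paths rather than an absolute path:

        // Option A: plain Win32 + GLUT; link opengl32.lib and the GLUT import library.
        #include <windows.h>     // defines APIENTRY etc. that GL.h expects
        #include <GL/gl.h>
        #include <GL/glut.h>     // or <GL/freeglut.h>; it pulls in gl.h itself too

        // Option B: with GLEW, it must come first:
        // #include <GL/glew.h>
        // #include <GL/freeglut.h>
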
  6. Hi all, I'm targeting OpenGL 2.1 (so no VAOs for me 😢) and I have multiple shaders rendering quite different things each frame; for the sake of argument, say that I have just two shaders: one draws the background (so it uses vertex attributes for texture coordinates, 2D positions, etc.) while the other draws some meshes (so it uses vertex attributes for colors and 3D positions, and no textures). The data provided to each of the shaders changes every frame. Now, I've realized I can choose between two different strategies when it comes to binding vertex attribute locations. With the first strategy I reuse vertex attribute indices, whereby the first shader binds to, say, attribute locations 0, 1, 2, 3 and the second shader binds to, say, 0, 1, 2. With this approach I have to constantly call glVertexAttribPointer for these indices, since each frame one shader requires one set of VBOs to feed 0, 1, 2, 3 and the other shader requires another set of VBOs to feed 0, 1, 2. With the second strategy, instead, I use "dedicated" vertex attribute indices: the first shader binds to 0, 1, 2, 3 and the second shader binds to 4, 5, 6. In other words, each vertex attribute index has its own dedicated VBO feeding data to it. The advantage of this approach is that I need to call glVertexAttribPointer only once per index, say at program initialization, but at the same time it limits my capacity to "grow" shaders in the future (my GPU only supports 16 vertex attributes, hence it will be hard to add new shaders or to add new attributes to existing shaders). Now, in my implementation I can't see any performance benefit of one versus the other, but truth be told, I'm developing on an ancient Dell laptop with an Intel mobile card... 😩 Nonetheless, I would like this code to run as fast as possible on modern GPUs. Is there any performance benefit to choosing one of the two strategies over the other? And in case there's no performance benefit, is one strategy preferable over the other for reasons that I can't think of at the moment? Thanks so much in advance for any tips!
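
     For concreteness, the two strategies differ only in where the indices come from and how often the pointers are respecified (a sketch; program and buffer names are illustrative). Note that glVertexAttribPointer captures whatever buffer is bound to GL_ARRAY_BUFFER at call time, which is what makes the "set once" variant possible:

        // Strategy 1: shared indices 0..N -- respecify pointers on every shader switch.
        glUseProgram(backgroundProg);                  // its attributes link to 0,1,2,3
        glBindBuffer(GL_ARRAY_BUFFER, bgPositionsVbo);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
        // ...repeat for 1,2,3, draw, then do the same for meshProg with its own VBOs.

        // Strategy 2: dedicated indices -- specify each pointer once at init.
        glBindAttribLocation(meshProg, 4, "aColor");   // before glLinkProgram(meshProg)
        glBindBuffer(GL_ARRAY_BUFFER, meshColorsVbo);
        glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(4);                  // never touched again afterwards
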
  7. Hello, I was just wondering whether it is a thing to use a component-based architecture to handle models in OpenGL, much like in an entity component system. This structure has especially helped me in cases where I have models that need different resources. By that I mean: some models need a texture and texture coordinate data to go with it, others just need some color data (vec3). Some have a normals buffer, whereas others get their normals calculated on the fly (geometry shader). Some have an index buffer (rendered with glDrawElements), whereas others don't and get rendered using glDrawArrays, etc. Instead of branching off into complicated hierarchies to create models that have only certain resources, or coming up with strange ways to resolve problems around checking which resources a model has, I just attach components to each model, such as a vertex buffer, texture coordinate buffer, or index buffer. So I was just wondering whether I was using the other style of model handling wrong, whether this style of programming is a viable option, and whether there are flaws that I am unable to foresee.
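
     A minimal sketch of that idea (hypothetical types, not from any particular engine):

        #include <memory>
        #include <vector>

        // Every GPU resource a model may or may not own is a component.
        struct Component
        {
            virtual ~Component() = default;
            virtual void Bind() = 0;   // bind the VBO/EBO/texture before drawing
        };

        struct IndexBufferComponent : Component
        {
            unsigned int ebo = 0;      // GL buffer handle
            void Bind() override { /* glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo); */ }
        };

        struct Model
        {
            std::vector<std::unique_ptr<Component>> components;
            bool indexed = false;      // true once an IndexBufferComponent is attached
        };

        // At draw time: bind all components, then pick the call from what is present:
        // if (model.indexed) glDrawElements(...); else glDrawArrays(...);
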
  8. So, there is one thing that I don't quite understand (probably because I didn't dive that deep into PBR lighting in the first place). Currently I have implemented a very basic PBR renderer (with a microfacet BRDF shading model) in my engine. The lighting system I have is pretty basic (one directional/sun light, deferred point lights, and one ambient light), and I don't have a GI solution yet (only a very basic world-space ambient occlusion technique). Now, what I would like to do is give the shadows a slightly blueish tint, to simulate the blueish light from the sky. Unreal seems to implement this too, and it gives the scene a much more natural look. My renderer renders in HDR, and I use exposure/tonemapping to bring the result down to LDR. The first image used a sun light with an RGB value of (40,40,40) and an indirect light of (15,15,15); the same picture with an ambient light of (15,15,15) * ((109,162,255) / (255,255,255)) should give the blueish tint. The problem: the shadows do get the desired color (more or less), but all lit pixels are also affected, giving the whole scene a blue tint. Reducing the ambient light intensity results in way too dark shadows; increasing it makes the shadows look alright, but then the whole scene is affected far too much. In the shader I basically have:

        color = directionalLightColor * max(dot(normal, sunNormal), 0.0) + ambientLight;

     The result is that the blue component of the color is always higher than the other two. I could of course fake it (only adding the ambient light when the pixel is in shadow), but I want to stay as close to PBR as possible and avoid hacks like that. My question is: how is this effect done properly, with PBR / proper physically based lighting?
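
     For what it's worth, the usual physically based framing treats the sky as its own (blueish) light source: the sun term is gated by its shadow factor, the sky term is dimmed only by occlusion (AO), and neither tints the other. A sketch in shader terms (shadowFactor and ao are illustrative inputs, not from the original post):

        vec3 sunLight = sunColor * max(dot(normal, sunDir), 0.0) * shadowFactor; // 0 when shadowed
        vec3 skyLight = skyColor * ao;       // blueish ambient, reduced only by occlusion
        vec3 color    = albedo * (sunLight + skyLight);
        // In shadow the sun term vanishes, so the blue sky term dominates; in sunlight the
        // much brighter sun term swamps the blue, which keeps lit pixels looking neutral.
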
  9. Hello again. Recently I was trying to apply 6 different textures to a cube, and I noticed that some textures would not apply correctly, but if I change the texture image to another one it works just fine. I can't really understand what's going on; I will also attach the image files. So could this have anything to do with my code, or is it just the image's fault? The failing one is a high-quality 2048x2048 texture, brick1.jpg, while a 512x512 texture, container.jpg, gets applied correctly with the exact same texture coordinates.

     Vertex shader:

        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aNormal;
        layout(location = 2) in vec2 aTexCoord;

        uniform mat4 model;
        uniform mat4 view;
        uniform mat4 proj;

        out vec2 TexCoord;

        void main()
        {
            gl_Position = proj * view * model * vec4(aPos, 1.0);
            TexCoord = aTexCoord;
        }

     Fragment shader:

        #version 330 core
        out vec4 Color;

        in vec2 TexCoord;

        uniform sampler2D diffuse;

        void main()
        {
            Color = texture(diffuse, TexCoord);
        }

     Texture loader:

        Texture::Texture(std::string path, bool trans, int unit)
        {
            //Reverse the pixels.
            stbi_set_flip_vertically_on_load(1);

            //Try to load the image.
            unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0);

            //Image loaded successfully.
            if (data)
            {
                //Generate the texture and bind it.
                GLCall(glGenTextures(1, &m_id));
                GLCall(glActiveTexture(GL_TEXTURE0 + unit));
                GLCall(glBindTexture(GL_TEXTURE_2D, m_id));

                //Not transparent texture.
                if (!trans)
                {
                    GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data));
                }
                //Transparent texture.
                else
                {
                    GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data));
                }

                //Generate mipmaps.
                GLCall(glGenerateMipmap(GL_TEXTURE_2D));

                //Texture filters.
                GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
                GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
                GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
                GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
            }
            //Loading failed.
            else
                throw EngineError("There was an error loading image: " + path);

            //Free the image data.
            stbi_image_free(data);
        }

        Texture::~Texture()
        {
        }

        void Texture::Bind(int unit)
        {
            GLCall(glActiveTexture(GL_TEXTURE0 + unit));
            GLCall(glBindTexture(GL_TEXTURE_2D, m_id));
        }

     Rendering code:

        Renderer::Renderer()
        {
            float vertices[] = {
                // positions          // normals           // texture coords
                -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,
                 0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 0.0f,
                 0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
                 0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
                -0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 1.0f,
                -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,

                -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,
                 0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 0.0f,
                 0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
                 0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
                -0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 1.0f,
                -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,

                -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
                -0.5f,  0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
                -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
                -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
                -0.5f, -0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
                -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

                 0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
                 0.5f,  0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
                 0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
                 0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
                 0.5f, -0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
                 0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

                -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,
                 0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 1.0f,
                 0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
                 0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
                -0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 0.0f,
                -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,

                -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f,
                 0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 1.0f,
                 0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
                 0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
                -0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 0.0f,
                -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f
            };

            //Create the Vertex Array.
            m_vao = new Vao();

            //Create the Vertex Buffer.
            m_vbo = new Vbo(vertices, sizeof(vertices));

            //Create the attributes.
            m_attributes = new VertexAttributes();
            m_attributes->Push(3);
            m_attributes->Push(3);
            m_attributes->Push(2);
            m_attributes->Commit(m_vbo);
        }

        Renderer::~Renderer()
        {
            delete m_vao;
            delete m_vbo;
            delete m_attributes;
        }

        void Renderer::DrawArrays(Cube *cube)
        {
            //Render the cube.
            cube->Render();

            unsigned int tex = 0;
            for (unsigned int i = 0; i < 36; i += 6)
            {
                if (tex < cube->m_textures.size())
                    cube->m_textures[tex]->Bind();
                GLCall(glDrawArrays(GL_TRIANGLES, i, 6));
                tex++;
            }
        }
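
     One classic cause of exactly this symptom (some images fine, others skewed or garbled) is row alignment: OpenGL assumes 4-byte-aligned rows by default, so tightly packed RGB data whose width times channel count is not a multiple of 4 uploads incorrectly unless the alignment is loosened. A one-line sketch to try before the glTexImage2D calls:

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // default is 4; stb_image rows are tightly packed
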
  10. Hi, I was studying how to make a bloom/glow effect in OpenGL, following the tutorials from learnopengl.com and ThinMatrix (YouTube), but I am still confused about how to generate the bright-colored texture to be used for the blur. Do I need to put lights in the locations where I want the glow to happen, so those areas are brighter than the other objects in the scene? Does that mean I need to draw the scene with the lights first? Or can the brightness be extracted, based on how the color of the model was rendered/textured, through a formula or something? I have a scene where I want the crystal to glow; can somebody enlighten me on how to do it, or what the correct approach is? Really appreciated!
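
     The learnopengl approach extracts brightness from the scene you have already rendered (into an HDR color buffer) rather than from extra lights: a "bright pass" keeps only pixels whose luminance exceeds a threshold, and that texture is what gets blurred. A sketch of that pass (the 1.0 threshold assumes HDR values; lower it for LDR scenes):

        #version 330 core
        out vec4 BrightColor;

        in vec2 TexCoords;

        uniform sampler2D sceneTexture;   // the already-rendered scene

        void main()
        {
            vec3 color = texture(sceneTexture, TexCoords).rgb;
            // Perceptual luminance weights; keep only sufficiently bright pixels.
            float brightness = dot(color, vec3(0.2126, 0.7152, 0.0722));
            BrightColor = brightness > 1.0 ? vec4(color, 1.0) : vec4(0.0, 0.0, 0.0, 1.0);
        }

     A glowing crystal is then usually made to pass the threshold via an emissive color or material flag, rather than by placing a real light inside it.
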
  11. phil67rpg

    plane game

    well I have drawn two plane sprites and have them move around the screen using keys. One of them can also shoot bullets, and I have drawn an animated collision sprite.
  12. I'm currently learning how to import models. I created some code that works well following this tutorial, which uses only one sampler2D in the fragment shader, and the model loads just fine with all the textures. The thing is, what happens when a mesh has more than one texture? The tutorial says to define N diffuse and specular samplers inside the fragment shader with the format texture_diffuseN, texture_specularN and set them via code, where N is 1, 2, 3, ..., max_samplers. I understand that, but how do you use them inside the shader? In the tutorial the shader is:

        #version 330 core
        out vec4 FragColor;

        in vec2 TexCoords;

        uniform sampler2D texture_diffuse1;

        void main()
        {
            FragColor = texture(texture_diffuse1, TexCoords);
        }

     which works perfectly for the test model the tutorial provides. Now let's say you have the general shader:

        #version 330 core
        out vec4 FragColor;

        in vec2 TexCoords;

        uniform sampler2D texture_diffuse1;
        uniform sampler2D texture_diffuse2;
        uniform sampler2D texture_diffuse3;
        uniform sampler2D texture_diffuse4;
        uniform sampler2D texture_diffuse5;
        uniform sampler2D texture_diffuse6;
        uniform sampler2D texture_diffuse7;
        uniform sampler2D texture_specular1;
        uniform sampler2D texture_specular2;
        uniform sampler2D texture_specular3;
        uniform sampler2D texture_specular4;
        uniform sampler2D texture_specular5;
        uniform sampler2D texture_specular6;
        uniform sampler2D texture_specular7;

        void main()
        {
            //How am I going to decide here which diffuse texture to output?
            FragColor = texture(texture_diffuse1, TexCoords);
        }

     Can you explain this to me with a cube example? Let's say I have a cube, which is one mesh, and I want to apply a different texture to each face (6 total):

        #version 330 core
        out vec4 FragColor;

        in vec2 TexCoords;

        uniform sampler2D texture_diffuse1;
        uniform sampler2D texture_diffuse2;
        uniform sampler2D texture_diffuse3;
        uniform sampler2D texture_diffuse4;
        uniform sampler2D texture_diffuse5;
        uniform sampler2D texture_diffuse6;

        void main()
        {
            //How am I going to output the correct texture for each face?
            FragColor = texture(texture_diffuse1, TexCoords);
        }

     I know that the texture coordinates will apply the texture to the correct face, but how do I know which sampler to use each time the fragment shader is called? I hope you understand why I'm frustrated. Thank you.
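
     For the cube case specifically, one GL 3.3-friendly alternative to numbered samplers is a texture array: all six images live in one sampler2DArray, and the layer index travels with the texture coordinates, so the shader never has to choose a sampler (a sketch; the third coordinate would come from an extra per-vertex attribute you add):

        #version 330 core
        out vec4 FragColor;

        in vec3 TexCoords;                    // xy = uv, z = face layer (0..5)

        uniform sampler2DArray texture_diffuse;

        void main()
        {
            FragColor = texture(texture_diffuse, TexCoords);  // z picks the face's image
        }

     The numbered-sampler scheme from the tutorial, by contrast, assumes each mesh is drawn with one set of textures bound; per-face variation within a single draw call is exactly what it cannot express, which is why multi-material models get split into submeshes.
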
  13. babaliaris

    OpenGL Fragment Position

    Take a look at my shaders.

     Vertex shader:

        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aNormal;

        uniform mat4 model;
        uniform mat4 view;
        uniform mat4 proj;

        out vec3 FragPos;
        out vec3 Normal;

        void main()
        {
            gl_Position = proj * view * model * vec4(aPos, 1.0);
            FragPos = vec3(model * vec4(aPos, 1.0));
            Normal = normalize(mat3(transpose(inverse(model))) * aNormal);
        }

     Fragment shader:

        #version 330 core
        out vec4 Color;

        uniform vec3 viewPos;

        struct Material
        {
            vec3 ambient;
            vec3 diffuse;
            vec3 specular;
            float shininess;
        };
        uniform Material material;

        struct Light
        {
            vec3 position;
            vec3 ambient;
            vec3 diffuse;
            vec3 specular;
        };
        uniform Light light;

        in vec3 FragPos;
        in vec3 Normal;

        void main()
        {
            //Calculating the light direction from the fragment towards the light source.
            vec3 lightDirection = normalize(light.position - FragPos);

            //Calculating the camera direction.
            vec3 camDirection = normalize(viewPos - FragPos);

            //Calculating the reflection of the light.
            vec3 reflection = reflect(-lightDirection, Normal);

            //Calculating the diffuse factor.
            float diff = max(dot(Normal, lightDirection), 0.0f);

            //Calculate specular.
            float spec = pow(max(dot(reflection, camDirection), 0.0f), material.shininess);

            //Create the 3 components.
            vec3 ambient = material.ambient * light.ambient;
            vec3 diffuse = (material.diffuse * diff) * light.diffuse;
            vec3 specular = (material.specular * spec) * light.specular;

            //Calculate the final fragment color.
            Color = vec4(ambient + diffuse + specular, 1.0f);
        }

     I can't understand how this:

        FragPos = vec3(model * vec4(aPos, 1.0));

     is actually the fragment position. This is just the vertex position transformed to world coordinates. The vertex shader is going to get called n times, where n is the number of vertices, so the above code is also going to get called n times. The actual fragments are far more numerous, so how can the above code generate all the fragment positions? Also, what is a fragment's position? Is it the (x, y) you need to move on the screen in order to find that pixel? I don't think so, because I read that while you are in the fragment shader the actual pixels on the screen have not been determined yet, since the viewport transformation happens after the fragment shader.
  14. In the GIF image below you can see the effect that my code produces. When I do not rotate the cube, diffuse lighting works properly (faces that face the light source are brighter), but when I rotate the cube this is what happens: GIF

     Cube's vertex shader:

        #version 330 core
        layout(location = 0) in vec3 aPos;
        layout(location = 1) in vec3 aNormal;

        uniform mat4 model;
        uniform mat4 view;
        uniform mat4 proj;

        out vec3 FragPos;
        out vec3 Normal;

        void main()
        {
            gl_Position = proj * view * model * vec4(aPos, 1.0);
            FragPos = vec3(model * vec4(aPos, 1.0));
            Normal = normalize(aNormal);
        }

     Cube's fragment shader:

        #version 330 core
        out vec4 Color;

        uniform vec3 lightPos;
        uniform vec3 lightColor;
        uniform vec3 objectColor;

        in vec3 FragPos;
        in vec3 Normal;

        void main()
        {
            float ambient = 0.3f, diffuse;
            vec3 result;
            vec3 lightDirection = normalize(lightPos - FragPos);
            diffuse = max(dot(Normal, lightDirection), 0.0f);
            result = (ambient + diffuse) * lightColor;
            Color = vec4(result * objectColor, 1.0f);
        }

     Cube's bindings and matrices code (this code is called every frame for each cube object):

        void Cube::Render()
        {
            //---------------------Run The Brains---------------------//
            for (unsigned int i = 0; i < m_brains->size(); i++)
            {
                //Call the Start Method.
                if (m_brains->at(i)->m_started)
                {
                    m_brains->at(i)->Start();
                    m_brains->at(i)->m_started = false;
                }
                //Call the Update Method.
                else
                {
                    m_brains->at(i)->Update();
                }
            }
            //---------------------Run The Brains---------------------//

            //Bind everything before the draw call.
            m_texture->Bind();
            m_program->Bind();

            //Initialize the mvp transformations.
            glm::mat4 model = glm::mat4(1.0f);
            glm::mat4 view = glm::mat4(1.0f);
            glm::mat4 proj = glm::mat4(1.0f);

            //Translate.
            model = glm::translate(model, m_pos);

            //Rotate the model.
            model = glm::rotate(model, glm::radians(m_rotation.x), glm::vec3(1.0f, 0.0f, 0.0f));
            model = glm::rotate(model, glm::radians(m_rotation.y), glm::vec3(0.0f, 1.0f, 0.0f));
            model = glm::rotate(model, glm::radians(m_rotation.z), glm::vec3(0.0f, 0.0f, 1.0f));

            //Scale.
            model = glm::scale(model, m_scale);

            //View.
            view = m_core->m_cam->GetView();

            //Projection.
            proj = glm::perspective(glm::radians(m_core->m_cam->m_fov), (float)m_core->m_width / m_core->m_height, 0.1f, 100.0f);

            //Set the transformation matrices on the shader.
            m_program->SetUniformMat4f("model", model);
            m_program->SetUniformMat4f("view", view);
            m_program->SetUniformMat4f("proj", proj);

            //Draw.
            GLCall(glDrawArrays(GL_TRIANGLES, 0, 36));
        }

     Vertex data:

        float vertices[] = {
            //Positions           //Normals
            -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
             0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
             0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
             0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
            -0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
            -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,

            -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,
             0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,
             0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,
             0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,
            -0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,
            -0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,

            -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,
            -0.5f,  0.5f, -0.5f, -1.0f,  0.0f,  0.0f,
            -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,
            -0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,
            -0.5f, -0.5f,  0.5f, -1.0f,  0.0f,  0.0f,
            -0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,

             0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,
             0.5f,  0.5f, -0.5f,  1.0f,  0.0f,  0.0f,
             0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,
             0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,
             0.5f, -0.5f,  0.5f,  1.0f,  0.0f,  0.0f,
             0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,

            -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,
             0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,
             0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,
             0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,
            -0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,
            -0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,

            -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,
             0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,
             0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,
             0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,
            -0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,
            -0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f
        };

        //Create the Vertex Array.
        m_vao = new Vao();

        //Create the Vertex Buffer.
        m_vbo = new Vbo(vertices, sizeof(vertices));

        //Create the attributes.
        m_attributes = new VertexAttributes();
        m_attributes->Push(3);
        m_attributes->Push(3);
        m_attributes->Commit(m_vbo);
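
     One detail worth comparing with the fragment-position shaders in the previous post: here the normal is never transformed by the model matrix, so it stays in object space while FragPos rotates with the cube. A sketch of the usual correction (the same normal-matrix construction used in the earlier vertex shader):

        Normal = normalize(mat3(transpose(inverse(model))) * aNormal);
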
  15. thecheeselover

    Zone generation

    Subscribe to our subreddit to get all the updates from the team! I have integrated the zone separation with my implementation of the Marching Cubes algorithm, and now I have been working on zone generation. A level is separated in the following way:

       • Shrink the zone map to exactly fit an integer number of Chunk2Ds, which are 32² m² each.
       • For each Chunk2D, analyse all zones inside its boundaries and determine all possible heights for Chunk3Ds, which are 32³ m³ each. Imagine this as a three-dimensional array used as a hash map: we are trying to figure out all the Chunk3D keys for a given Chunk2D.
       • Create and generate a Chunk3D for each height found.
       • Execute the Marching Cubes algorithm to assemble the geometry of each Chunk3D.

     In our game, we want levels to look and feel like a certain world. The first world we are creating is the savanna. Even though each Chunk3D is generated using 3D noise, I made a noise module that maps 3D noise into 2D, to be able to apply 2D perturbation to the terrain. I also tried some funkier procedural noises: an arch! The important thing with procedural generation is to have a certain level of control over it. With the new zone division system, I have achieved a minimum on that path for my game.
  16. Subscribe to our subreddit to get all the updates from the team! Recently I've been tackling more organic low-poly terrains. The default way of creating indices for a 3D grid geometry is the regular diagonal pattern (credits). A way to introduce small differences that make the geometry slightly more complicated, and thus more organic, is to vertically swap the indices of each adjacent quad; in other words, each quad adjacent to a centered quad is its vertical mirror, as sketched below. Finally, by not sharing the vertices, and hence creating two triangles per quad, a coherent noise generator (Joise) produces the faceted look known as flat shading.
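
     A sketch of that alternating-diagonal indexing for a cols x rows grid of quads (names are illustrative; flipping the split on a checkerboard gives each quad's neighbour the mirrored diagonal):

        std::vector<int> indices;
        for (int y = 0; y < rows; ++y) {
            for (int x = 0; x < cols; ++x) {
                int tl = y * (cols + 1) + x;    // top-left vertex of this quad
                int tr = tl + 1;                // top-right
                int bl = tl + (cols + 1);       // bottom-left
                int br = bl + 1;                // bottom-right
                if ((x + y) % 2 == 0)           // diagonal from tl to br
                    indices.insert(indices.end(), { tl, bl, br,  tl, br, tr });
                else                            // mirrored: diagonal from tr to bl
                    indices.insert(indices.end(), { tl, bl, tr,  tr, bl, br });
            }
        }
        // For flat shading, duplicate the vertices instead of sharing them, so each
        // triangle carries its own (non-interpolated) normal.
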
  17. I'm doing research at the moment, and I came across some interesting claims about window styles for OpenGL on the Windows platform:

       • WS_CLIPCHILDREN and WS_CLIPSIBLINGS are both required for OpenGL applications.
       • WS_POPUP is preferred over WS_OVERLAPPEDWINDOW, WS_OVERLAPPED, and WS_POPUPWINDOW, but it isn't said whether it's required, or explicitly unnecessary, for OpenGL applications.
       • WS_BORDER is also required for OpenGL applications.
       • WS_VISIBLE is necessary as a precaution.

     All of these claims come from comments and posts spread across many different forums, StackOverflow answers, and old MSDN blog comments; there are too many to list, so I chose brevity over citing quite a few references. Anyway, what I personally don't understand about the window styles is:

       • How do I describe the effects of WS_CLIPCHILDREN and WS_CLIPSIBLINGS on an OpenGL application? Their use seems related to how OpenGL applications have their own pixel formats supported by the device context, and how a parent window might spawn child windows that somehow clip each other. Or maybe the flags are there mainly so the parent window, the primary OpenGL application window, is prevented from being clipped by any children and parent siblings (windows)?
       • Why is it preferable to use WS_POPUP over WS_OVERLAPPEDWINDOW? I speculate it has to do with toggling between fullscreen and windowed mode, but does that mean that just setting WS_POPUP is enough to have fullscreen toggling support? Or does the window itself need to set/remove certain window styles, so WS_POPUP = fullscreen, and then WS_OVERLAPPED | ~(WS_POPUP) is used for windowed mode?
       • Why is WS_VISIBLE necessary? Does it just mean a step of setting the window to be visible can be skipped?
       • Is WS_BORDER for windowed mode only, and thus required? Or do fullscreen and windowed mode both need WS_BORDER to function?

     So, those are my questions right now. Am I correct in assuming all of these? Thanks in advance. (See the sketch after this list for a common baseline.)
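
     For comparison, a common baseline that works in practice (a sketch of typical usage, not an authoritative reading of the docs; class and title names are illustrative): WS_CLIPCHILDREN | WS_CLIPSIBLINGS are always set, as SetPixelFormat's documentation asks, WS_OVERLAPPEDWINDOW provides the windowed-mode frame, and borderless fullscreen swaps the style to WS_POPUP:

        // Windowed mode: standard decorated frame plus the two clip styles.
        DWORD windowedStyle   = WS_OVERLAPPEDWINDOW | WS_CLIPCHILDREN | WS_CLIPSIBLINGS;
        // Fullscreen: undecorated popup sized to the whole monitor.
        DWORD fullscreenStyle = WS_POPUP | WS_CLIPCHILDREN | WS_CLIPSIBLINGS;

        HWND hwnd = CreateWindowEx(0, TEXT("GLWindowClass"), TEXT("OpenGL App"),
                                   windowedStyle | WS_VISIBLE,  // WS_VISIBLE saves a ShowWindow() call
                                   CW_USEDEFAULT, CW_USEDEFAULT, 1280, 720,
                                   NULL, NULL, GetModuleHandle(NULL), NULL);

        // Toggling fullscreen later is a style change, not a property of one style:
        // SetWindowLongPtr(hwnd, GWL_STYLE, fullscreenStyle | WS_VISIBLE);
        // then SetWindowPos(..., SWP_FRAMECHANGED) and resize to the monitor.
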
  18. EDIT: SOLVED - MORE OR LESS! I have been trying to get a .obj file to load and display in three dimensions. The object is a cube. I have spent three days on this and need help, please! I have tried numerous .obj files trying to get one to work, and I replaced the double slashes with single slashes in the .obj file so that my super-simple loader can read the vertices and indices. Since I am very stuck, the help would be of great benefit to my experience and learning; may I have the help, please? Also, I recently switched the functionality from:

        glDrawArrays(GL_TRIANGLES, 0, 3);

     to:

        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

     The code is here (pastebin): https://pastebin.com/cTLuqaae The .png of the model and the .obj file are attached too. Thank you, Josheir My email is Joshuaeirm@gmail.com. cube100.obj
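
     For anyone landing here, a minimal sketch of the kind of loader involved (not the poster's pastebin code): it reads only "v" and "f" lines from a triangulated .obj and converts the 1-based face indices to 0-based, ignoring any normal/texcoord references on the f lines:

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <vector>

        void LoadObj(const std::string &path,
                     std::vector<float> &positions, std::vector<unsigned> &indices)
        {
            std::ifstream file(path);
            std::string line;
            while (std::getline(file, line)) {
                std::istringstream s(line);
                std::string tag;
                s >> tag;
                if (tag == "v") {                    // "v x y z"
                    float x, y, z;
                    s >> x >> y >> z;
                    positions.insert(positions.end(), { x, y, z });
                } else if (tag == "f") {             // "f i j k" (1-based, maybe "i/t/n")
                    std::string v;
                    while (s >> v)                   // keep only the part before any '/'
                        indices.push_back(std::stoul(v.substr(0, v.find('/'))) - 1);
                }
            }
        }
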
  19. phil67rpg

    shooting bullets

    well I am able to get my sprites to rotate and move in all directions, and I have drawn two plane sprites. I am also able to shoot a bullet in the up direction, and I want to shoot bullets in all directions, just like my plane rotates. I just need a hint on how to proceed; go easy on me, this is new stuff to me. However, I am making progress.
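
     The usual hint (a sketch; the zero-angle convention is an assumption about the sprite): derive the bullet's velocity from the plane's current rotation instead of hard-coding "up".

        // Using std::sin/std::cos from <cmath>; angle in radians, 0 = facing up,
        // increasing clockwise (assumed convention for the sprite).
        bullet.vx = std::sin(plane.angle) * BULLET_SPEED;
        bullet.vy = std::cos(plane.angle) * BULLET_SPEED;
        // Each frame: bullet.x += bullet.vx * dt;  bullet.y += bullet.vy * dt;
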
  20. I tried to port a DirectX program to an OpenGL version. My planets were inverted horizontally (opposite rotation around the Y axis), and the Z coordinate was inverted too. It took me some months to figure this out: I finally recognized that there are two different coordinate systems, left-handed and right-handed. DirectX uses left-handed coordinates, but OpenGL uses right-handed coordinates by default. I googled it and learned a lot. I would prefer the left-handed coordinate system for easier programming. I also tried to google a projection matrix formula for left-handed coordinates, but did not find any sources. Does anyone know a matrix formula for a left-handed coordinate system? Does anyone know any good guides on DirectX-to-OpenGL conversion for porting? Tim
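
     For reference: the left-handed perspective transform differs from the right-handed one only in the sign of the z-related terms, and it can be built by hand if your math library lacks it (recent GLM versions also provide glm::perspectiveLH). A sketch, column-major as OpenGL expects, mapping depth to OpenGL's [-1, 1] clip range:

        #include <cmath>

        // Left-handed perspective projection (camera looks down +Z), GL-style depth.
        void PerspectiveFovLH(float m[16], float fovY, float aspect, float zn, float zf)
        {
            const float f = 1.0f / std::tan(fovY * 0.5f);   // cot(fovY / 2)
            for (int i = 0; i < 16; ++i) m[i] = 0.0f;
            m[0]  = f / aspect;
            m[5]  = f;
            m[10] = (zf + zn) / (zf - zn);        // the RH version negates this term...
            m[11] = 1.0f;                         // ...and uses w = -z instead of w = +z
            m[14] = -2.0f * zf * zn / (zf - zn);
        }
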
  21. phil67rpg

    plane sprite

    well I have managed to get a plane sprite to move around the screen. I want to know what to add to my game; I hope this question is appropriate.
  22. Hello. I'm trying to implement normal mapping. I've been following this: http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html The problem is that my tangent vectors appear rather obviously wrong, but only one of them, never both. Here's my code for calculating the tangents:

        this.makeTriangle = function(a, b, c) {
            var edge1 = VectorSub(b.pos, a.pos);
            var edge2 = VectorSub(c.pos, a.pos);
            var deltaU1 = b.texCoords[0] - a.texCoords[0];
            var deltaV1 = b.texCoords[1] - a.texCoords[1];
            var deltaU2 = c.texCoords[0] - a.texCoords[0];
            var deltaV2 = c.texCoords[1] - a.texCoords[1];
            var f = 1.0 / (deltaU1 * deltaV2 - deltaU2 * deltaV1);
            var vvec = VectorNormal([
                f * (deltaV2 * edge1[0] - deltaV1 * edge2[0]),
                f * (deltaV2 * edge1[1] - deltaV1 * edge2[1]),
                f * (deltaV2 * edge1[2] - deltaV1 * edge2[2]),
                0.0
            ]);
            var uvec = VectorNormal([
                f * (-deltaU2 * edge1[0] - deltaU1 * edge2[0]),
                f * (-deltaU2 * edge1[1] - deltaU1 * edge2[1]),
                f * (-deltaU2 * edge1[2] - deltaU1 * edge2[2]),
                0.0
            ]);
            if (VectorDot(VectorCross(a.normal, uvec), vvec) < 0.0) {
                uvec = VectorScale(uvec, -1.0);
            }
            /*
            console.log("Normal: "); console.log(a.normal);
            console.log("UVec: ");   console.log(uvec);
            console.log("VVec: ");   console.log(vvec);
            */
            this.emitVertex(a, uvec, vvec);
            this.emitVertex(b, uvec, vvec);
            this.emitVertex(c, uvec, vvec);
        };

     My vertex shader:

        precision mediump float;

        uniform mat4 matProj;
        uniform mat4 matView;
        uniform mat4 matModel;

        in vec4 attrVertex;
        in vec2 attrTexCoords;
        in vec3 attrNormal;
        in vec3 attrUVec;
        in vec3 attrVVec;

        out vec2 fTexCoords;
        out vec4 fNormalCamera;
        out vec4 fWorldPos;
        out vec4 fWorldNormal;
        out vec4 fWorldUVec;
        out vec4 fWorldVVec;

        void main() {
            fTexCoords = attrTexCoords;
            fNormalCamera = matView * matModel * vec4(attrNormal, 0.0);
            vec3 uvec = attrUVec;
            vec3 vvec = attrVVec;
            fWorldPos = matModel * attrVertex;
            fWorldNormal = matModel * vec4(attrNormal, 0.0);
            fWorldUVec = matModel * vec4(uvec, 0.0);
            fWorldVVec = matModel * vec4(vvec, 0.0);
            gl_Position = matProj * matView * matModel * attrVertex;
        }

     And finally the fragment shader:

        precision mediump float;

        uniform sampler2D texImage;
        uniform sampler2D texNormal;
        uniform float sunFactor;
        uniform mat4 matView;

        in vec2 fTexCoords;
        in vec4 fNormalCamera;
        in vec4 fWorldPos;
        in vec4 fWorldNormal;
        in vec4 fWorldUVec;
        in vec4 fWorldVVec;

        out vec4 outColor;

        vec4 calcPointLight(in vec4 normal, in vec4 source, in vec4 color, in float intensity) {
            vec4 lightVec = source - fWorldPos;
            float sqdist = dot(lightVec, lightVec);
            vec4 lightDir = normalize(lightVec);
            return color * dot(normal, lightDir) * (1.0 / sqdist) * intensity;
        }

        vec4 calcLights(vec4 pNormal) {
            vec4 result = vec4(0.0, 0.0, 0.0, 0.0);
            ${CALC_LIGHTS}
            return result;
        }

        void main() {
            vec4 surfNormal = vec4(cross(vec3(fWorldUVec), vec3(fWorldVVec)), 0.0);

            vec2 bumpCoords = fTexCoords;
            vec4 bumpNormal = texture(texNormal, bumpCoords);
            bumpNormal = (2.0 * bumpNormal - vec4(1.0, 1.0, 1.0, 0.0)) * vec4(1.0, 1.0, 1.0, 1.0);
            bumpNormal.w = 0.0;

            mat4 bumpMat = mat4(fWorldUVec, fWorldVVec, fWorldNormal, vec4(0.0, 0.0, 0.0, 1.0));
            vec4 realNormal = normalize(bumpMat * bumpNormal);
            vec4 realCameraNormal = matView * realNormal;

            float intensitySun = clamp(dot(normalize(realCameraNormal.xyz), normalize(vec3(0.0, 0.0, 1.0))), 0.0, 1.0) * sunFactor;
            float intensity = clamp(intensitySun + 0.2, 0.0, 1.0);

            outColor = texture(texImage, fTexCoords) * (vec4(intensity, intensity, intensity, 1.0) + calcLights(realNormal));
            //outColor = texture(texNormal, fTexCoords);
            //outColor = 0.5 * (fWorldUVec + vec4(1.0, 1.0, 1.0, 0.0));
            //outColor = vec4(fTexCoords, 1.0, 1.0);
            outColor.w = 1.0;
        }

     I rendered the object showing its normal render, the uvec, the vvec, and the texture coordinates (each commented out in the fragment shader code), plus the normal map itself. The uvec, as far as I can tell, should not be all over the place like it is; either that, or some other mistake, causes the normal vectors to be all wrong, so you can see on the normal render that, for example, there is a random dent on the left side which should not be there. As far as I can tell, my code follows the math from that tutorial. I use right-handed coordinates. So what could be wrong?
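
     For comparison, the standard tangent-space derivation (the same system that tutorial sets up) solves

        $$E_1 = \Delta u_1\,T + \Delta v_1\,B, \qquad E_2 = \Delta u_2\,T + \Delta v_2\,B$$

     for the tangent $T$ and bitangent $B$, giving

        $$T = f\,(\Delta v_2 E_1 - \Delta v_1 E_2), \qquad B = f\,(-\Delta u_2 E_1 + \Delta u_1 E_2), \qquad f = \frac{1}{\Delta u_1 \Delta v_2 - \Delta u_2 \Delta v_1}.$$

     Note the plus sign on the second term of $B$; the uvec computation above uses a minus there, which is worth double-checking against the tutorial.
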
  23. I got a book called "High Dynamic Range Imaging" (published by Elsevier) and read some pages. I learned that HDR requires deep color depth (10/12-bit color depth). I heard that many vendors and independent developers have implemented HDR features in their games and applications, because the UHD standard requires deep color depth (HDR) to eliminate noticeable color banding. I have seen color banding on SDR displays and want to get rid of it. I googled HDR for OpenGL, but learned that it supposedly requires Quadro or FireGL cards. How do they get HDR working on consumer video cards? That's why I want an HDR implementation for my OpenGL programs. Tim
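
     For what it's worth, the HDR most games implement runs on any consumer card: the scene is rendered internally to a floating-point framebuffer and then tonemapped down to the display's bit depth, so no 10/12-bit output (and no Quadro/FireGL) is required. A sketch of the setup (core since OpenGL 3.0; width/height are illustrative):

        // 16-bit-float color attachment for HDR rendering.
        GLuint colorTex;
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);

        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
        // Render the scene here with values above 1.0 permitted, then run a tonemapping
        // pass that reads colorTex and writes to the default (8-bit) framebuffer.

     True 10-bit output to the monitor is a separate topic from this internal HDR pipeline.
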
  24. phil67rpg

    sprite sheet

    I am learning how to use a sprite sheet to animate a sprite. Are there any good tutorials that would help me, or do you have any suggestions? I am using SOIL, OpenGL, and C++.
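
     Whatever tutorial you pick, the core of sprite-sheet animation is just computing the frame's UV rectangle from a frame index (a sketch; the grid layout names are illustrative):

        // Sheet laid out as framesX * framesY equal cells, frames advancing left to right.
        int   frame = int(timeSeconds * fps) % (framesX * framesY);
        float du = 1.0f / framesX, dv = 1.0f / framesY;
        float u0 = (frame % framesX) * du;   // horizontal offset of the cell
        float v0 = (frame / framesX) * dv;   // vertical offset (flip if your origin differs)
        // Quad texcoords: (u0, v0), (u0+du, v0), (u0+du, v0+dv), (u0, v0+dv)
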
  25. Hello! I would like to introduce Diligent Engine, a project that I've been working on recently. Diligent Engine is a light-weight cross-platform abstraction layer between the application and the platform-specific graphics API. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan, while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGLES. Diligent Engine exposes a common front end for all supported platforms and provides interoperability with the underlying native API. A shader source code converter allows shaders authored in HLSL to be translated to GLSL and used on all platforms. Diligent Engine supports integration with Unity and is designed to be used as a graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. It is distributed under the Apache 2.0 license and is free to use. Full source code is available for download on GitHub.

     Features:

       • True cross-platform
         • Exact same client code for all supported platforms and rendering backends
         • No #if defined(_WIN32) ... #elif defined(LINUX) ... #elif defined(ANDROID) ...
         • No #if defined(D3D11) ... #elif defined(D3D12) ... #elif defined(OPENGL) ...
         • Exact same HLSL shaders run on all platforms and all backends
       • Modular design
         • Components are clearly separated logically and physically and can be used as needed
         • Only take what you need for your project (do not want to keep samples and tutorials in your codebase? Simply remove the Samples submodule. Only need core functionality? Use only the Core submodule)
         • No 15,000-line source files
       • Clear object-based interface
       • No global states
       • Key graphics features:
         • Automatic shader resource binding designed to leverage next-generation rendering APIs
         • Multithreaded command buffer generation (50,000 draw calls at 300 fps with the D3D12 backend)
         • Descriptor, memory and resource state management
       • Modern C++ features to make code fast and reliable

     The following platforms and low-level APIs are currently supported:

       • Windows Desktop: Direct3D11, Direct3D12, OpenGL
       • Universal Windows: Direct3D11, Direct3D12
       • Linux: OpenGL
       • Android: OpenGLES
       • MacOS: OpenGL
       • iOS: OpenGLES

     API Basics

     Initialization

     The engine can perform initialization of the API or attach to an already existing D3D11/D3D12 device or OpenGL/GLES context. For instance, the following code shows how the engine can be initialized in D3D12 mode:

        #include "RenderDeviceFactoryD3D12.h"
        using namespace Diligent;

        // ...

        GetEngineFactoryD3D12Type GetEngineFactoryD3D12 = nullptr;
        // Load the dll and import the GetEngineFactoryD3D12() function.
        LoadGraphicsEngineD3D12(GetEngineFactoryD3D12);
        auto *pFactoryD3D12 = GetEngineFactoryD3D12();

        EngineD3D12Attribs EngD3D12Attribs;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[0] = 1024;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[1] = 32;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[2] = 16;
        EngD3D12Attribs.CPUDescriptorHeapAllocationSize[3] = 16;
        EngD3D12Attribs.NumCommandsToFlushCmdList = 64;

        RefCntAutoPtr<IRenderDevice> pRenderDevice;
        RefCntAutoPtr<IDeviceContext> pImmediateContext;
        SwapChainDesc SwapChainDesc;
        RefCntAutoPtr<ISwapChain> pSwapChain;
        pFactoryD3D12->CreateDeviceAndContextsD3D12(EngD3D12Attribs, &pRenderDevice, &pImmediateContext, 0);
        pFactoryD3D12->CreateSwapChainD3D12(pRenderDevice, pImmediateContext, SwapChainDesc, hWnd, &pSwapChain);

     Creating Resources

     Device resources are created by the render device. The two main resource types are buffers, which represent linear memory, and textures, which use memory layouts optimized for fast filtering. To create a buffer, you need to populate a BufferDesc structure and call IRenderDevice::CreateBuffer(). The following code creates a uniform (constant) buffer:

        BufferDesc BuffDesc;
        BuffDesc.Name = "Uniform buffer";
        BuffDesc.BindFlags = BIND_UNIFORM_BUFFER;
        BuffDesc.Usage = USAGE_DYNAMIC;
        BuffDesc.uiSizeInBytes = sizeof(ShaderConstants);
        BuffDesc.CPUAccessFlags = CPU_ACCESS_WRITE;
        m_pDevice->CreateBuffer(BuffDesc, BufferData(), &m_pConstantBuffer);

     Similarly, to create a texture, populate a TextureDesc structure and call IRenderDevice::CreateTexture() as in the following example:

        TextureDesc TexDesc;
        TexDesc.Name = "Sample 2D Texture";
        TexDesc.Type = TEXTURE_TYPE_2D;
        TexDesc.Width = 1024;
        TexDesc.Height = 1024;
        TexDesc.Format = TEX_FORMAT_RGBA8_UNORM;
        TexDesc.Usage = USAGE_DEFAULT;
        TexDesc.BindFlags = BIND_SHADER_RESOURCE | BIND_RENDER_TARGET | BIND_UNORDERED_ACCESS;
        m_pRenderDevice->CreateTexture(TexDesc, TextureData(), &m_pTestTex);

     Initializing Pipeline State

     Diligent Engine follows the Direct3D12 style of configuring the graphics/compute pipeline: one big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer and blend state descriptions, etc.).

     Creating Shaders

     To create a shader, populate the ShaderCreationAttribs structure. An important member is ShaderCreationAttribs::SourceLanguage. The following are valid values for this member:

       • SHADER_SOURCE_LANGUAGE_DEFAULT - The shader source format matches the underlying graphics API: HLSL for D3D11 or D3D12 mode, and GLSL for OpenGL and OpenGLES modes.
       • SHADER_SOURCE_LANGUAGE_HLSL - The shader source is in HLSL. For OpenGL and OpenGLES modes, the source code will be converted to GLSL. See the shader converter for details.
       • SHADER_SOURCE_LANGUAGE_GLSL - The shader source is in GLSL. There is currently no GLSL-to-HLSL converter.

     To allow grouping of resources based on their expected frequency of change, Diligent Engine introduces a classification of shader variables:

       • Static variables (SHADER_VARIABLE_TYPE_STATIC) are expected to be set only once. They may not be changed once a resource is bound to the variable. Such variables are intended to hold global constants such as camera attributes or global light attributes constant buffers.
       • Mutable variables (SHADER_VARIABLE_TYPE_MUTABLE) define resources that are expected to change on a per-material frequency. Examples may include diffuse textures, normal maps, etc.
       • Dynamic variables (SHADER_VARIABLE_TYPE_DYNAMIC) are expected to change frequently and randomly.

     This post describes the resource binding model in Diligent Engine. The following is an example of shader initialization:

        ShaderCreationAttribs Attrs;
        Attrs.Desc.Name = "MyPixelShader";
        Attrs.FilePath = "MyShaderFile.fx";
        Attrs.SearchDirectories = "shaders;shaders\\inc;";
        Attrs.EntryPoint = "MyPixelShader";
        Attrs.Desc.ShaderType = SHADER_TYPE_PIXEL;
        Attrs.SourceLanguage = SHADER_SOURCE_LANGUAGE_HLSL;

        BasicShaderSourceStreamFactory BasicSSSFactory(Attrs.SearchDirectories);
        Attrs.pShaderSourceStreamFactory = &BasicSSSFactory;

        ShaderVariableDesc ShaderVars[] =
        {
            {"g_StaticTexture",  SHADER_VARIABLE_TYPE_STATIC},
            {"g_MutableTexture", SHADER_VARIABLE_TYPE_MUTABLE},
            {"g_DynamicTexture", SHADER_VARIABLE_TYPE_DYNAMIC}
        };
        Attrs.Desc.VariableDesc = ShaderVars;
        Attrs.Desc.NumVariables = _countof(ShaderVars);
        Attrs.Desc.DefaultVariableType = SHADER_VARIABLE_TYPE_STATIC;

        StaticSamplerDesc StaticSampler;
        StaticSampler.Desc.MinFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MagFilter = FILTER_TYPE_LINEAR;
        StaticSampler.Desc.MipFilter = FILTER_TYPE_LINEAR;
        StaticSampler.TextureName = "g_MutableTexture";
        Attrs.Desc.NumStaticSamplers = 1;
        Attrs.Desc.StaticSamplers = &StaticSampler;

        ShaderMacroHelper Macros;
        Macros.AddShaderMacro("USE_SHADOWS", 1);
        Macros.AddShaderMacro("NUM_SHADOW_SAMPLES", 4);
        Macros.Finalize();
        Attrs.Macros = Macros;

        RefCntAutoPtr<IShader> pShader;
        m_pDevice->CreateShader(Attrs, &pShader);

     Creating the Pipeline State Object

     To create a pipeline state object, define an instance of the PipelineStateDesc structure. The structure defines the pipeline specifics such as whether the pipeline is a compute pipeline, the number and format of render targets, and the depth-stencil format:

        // This is a graphics pipeline.
        PSODesc.IsComputePipeline = false;
        PSODesc.GraphicsPipeline.NumRenderTargets = 1;
        PSODesc.GraphicsPipeline.RTVFormats[0] = TEX_FORMAT_RGBA8_UNORM_SRGB;
        PSODesc.GraphicsPipeline.DSVFormat = TEX_FORMAT_D32_FLOAT;

     The structure also defines depth-stencil, rasterizer, blend state, input layout and other parameters. For instance, the rasterizer state can be defined as in the code snippet below:

        // Init rasterizer state.
        RasterizerStateDesc &RasterizerDesc = PSODesc.GraphicsPipeline.RasterizerDesc;
        RasterizerDesc.FillMode = FILL_MODE_SOLID;
        RasterizerDesc.CullMode = CULL_MODE_NONE;
        RasterizerDesc.FrontCounterClockwise = True;
        RasterizerDesc.ScissorEnable = True;
        //RSDesc.MultisampleEnable = false; // do not allow msaa (fonts would be degraded)
        RasterizerDesc.AntialiasedLineEnable = False;

     When all fields are populated, call IRenderDevice::CreatePipelineState() to create the PSO:

        m_pDev->CreatePipelineState(PSODesc, &m_pPSO);

     Binding Shader Resources

     Shader resource binding in Diligent Engine is based on grouping variables into the three groups described above (static, mutable and dynamic). Static variables are bound directly to the shader object:

        PixelShader->GetShaderVariable("g_tex2DShadowMap")->Set(pShadowMapSRV);

     Mutable and dynamic variables are bound via a new object called a Shader Resource Binding (SRB), which is created by the pipeline state:

        m_pPSO->CreateShaderResourceBinding(&m_pSRB);

     Dynamic and mutable resources are then bound through the SRB object:

        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "tex2DDiffuse")->Set(pDiffuseTexSRV);
        m_pSRB->GetVariable(SHADER_TYPE_VERTEX, "cbRandomAttribs")->Set(pRandomAttrsCB);

     The difference between mutable and dynamic resources is that mutable ones can only be set once per instance of a shader resource binding, while dynamic resources can be set multiple times. It is important to set the variable type properly, as this may affect performance: static variables are generally the most efficient, followed by mutable, while dynamic variables are the most expensive from a performance point of view. This post explains shader resource binding in more detail.

     Setting the Pipeline State and Invoking a Draw Command

     Before any draw command can be invoked, all required vertex and index buffers as well as the pipeline state must be bound to the device context:

        // Clear render target.
        const float zero[4] = {0, 0, 0, 0};
        m_pContext->ClearRenderTarget(nullptr, zero);

        // Set vertex and index buffers.
        IBuffer *buffer[] = {m_pVertexBuffer};
        Uint32 offsets[] = {0};
        Uint32 strides[] = {sizeof(MyVertex)};
        m_pContext->SetVertexBuffers(0, 1, buffer, strides, offsets, SET_VERTEX_BUFFERS_FLAG_RESET);
        m_pContext->SetIndexBuffer(m_pIndexBuffer, 0);
        m_pContext->SetPipelineState(m_pPSO);

     Also, all shader resources must be committed to the device context:

        m_pContext->CommitShaderResources(m_pSRB, COMMIT_SHADER_RESOURCES_FLAG_TRANSITION_RESOURCES);

     When all required states and resources are bound, IDeviceContext::Draw() can be used to execute a draw command, or IDeviceContext::DispatchCompute() to execute a compute command. Note that for a draw command the graphics pipeline must be bound, and for a dispatch command the compute pipeline must be bound. Draw() takes a DrawAttribs structure as an argument. The structure members define all attributes required to perform the command (primitive topology, number of vertices or indices, whether the draw call is indexed, instanced, indirect, etc.). For example:

        DrawAttribs attrs;
        attrs.IsIndexed = true;
        attrs.IndexType = VT_UINT16;
        attrs.NumIndices = 36;
        attrs.Topology = PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        pContext->Draw(attrs);

     Tutorials and Samples

     The GitHub repository contains a number of tutorials and sample applications that demonstrate API usage:

       • Tutorial 01 - Hello Triangle: shows how to render a simple triangle using the Diligent Engine API.
       • Tutorial 02 - Cube: demonstrates how to render an actual 3D object, a cube; shows how to load shaders from files and create and use vertex, index and uniform buffers.
       • Tutorial 03 - Texturing: demonstrates how to apply a texture to a 3D object; shows how to load a texture from file, create a shader resource binding object, and sample a texture in the shader.
       • Tutorial 04 - Instancing: demonstrates how to use instancing to render multiple copies of one object using a unique transformation matrix for every copy.
       • Tutorial 05 - Texture Array: demonstrates how to combine instancing with texture arrays to use a unique texture for every instance.
       • Tutorial 06 - Multithreading: shows how to generate command lists in parallel from multiple threads.
       • Tutorial 07 - Geometry Shader: shows how to use a geometry shader to render a smooth wireframe.
       • Tutorial 08 - Tessellation: shows how to use hardware tessellation to implement a simple adaptive terrain rendering algorithm.
       • Tutorial 09 - Quads: shows how to render multiple 2D quads, frequently switching textures and blend modes.

     The AntTweakBar sample demonstrates how to use the AntTweakBar library to create a simple user interface. The atmospheric scattering sample is a more advanced example; it demonstrates how Diligent Engine can be used to implement various rendering tasks: loading textures from files, using complex shaders, rendering to textures, using compute shaders and unordered access views, etc. The repository also includes an Asteroids performance benchmark based on this demo developed by Intel. It renders 50,000 unique textured asteroids and lets you compare the performance of the D3D11 and D3D12 implementations; every asteroid is a combination of one of 1000 unique meshes and one of 10 unique textures.

     Integration with Unity

     Diligent Engine supports integration with Unity through the Unity low-level native plugin interface. The engine relies on Native API Interoperability to attach to the graphics API initialized by Unity. After the Diligent Engine device and context are created, they can be used as usual to create resources and issue rendering commands. The GhostCubePlugin sample shows how Diligent Engine can be used to render a ghost cube that is only visible as a reflection in a mirror.