Search the Community

Showing results for tags 'OpenGL'.
Found 17465 results

  1. Good evening. I want to make a 2D game that involves displaying some debug information, especially for collisions, enemy sight lines and so on. First I thought about all the shapes I will need for debugging purposes: circles, rectangles, lines, polygons. I am really stuck right now on a fundamental question: where do I store the vertex positions for each line (object)? Currently I am not using a model matrix, because I am using an orthographic projection and store the final position directly in the VBO. That means that whenever I add a new line I have to grow the "points" array and re-upload it (call glBufferData again) every time. The other method would be to use a model matrix and a fixed VBO for a single line, but then it gets messy to create a line from exactly (0,0) to (100,20) by working out the rotation and scale that make it fit. If I proceed with option 1, updating the array each frame, I was thinking of issuing four draw calls per frame: one each for the lines VAO, the polygons VAO, and so on. In addition, I am planning to use some sort of ECS-based architecture, so the other question is: should I treat these debug objects as entities or as components? To me it makes sense to treat them as entities, but that creates a new issue with the array approach above, because each one would then have a transform component and a render component - a special render component for debug objects (no texture etc.). For me the transform component is also just a matrix, but how would I then define a line? Treating them as components wouldn't be a good idea in my eyes, because then I would always need an entity anyway. Then again, an entity is just an ID, so maybe it is a component after all? Regards, LifeArtist
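    For what it's worth, re-uploading a CPU-side array every frame is the usual approach for debug geometry, and it is cheap if the buffer is orphaned rather than grown in place. Below is a minimal sketch of such a debug-line renderer; the class and function names are made up for illustration, and it assumes a simple color shader (with attribute locations 0 and 1) is bound elsewhere:

        #include <vector>
        #include <GL/glew.h>

        struct DebugVertex { float x, y, r, g, b; };

        class DebugDraw {
            GLuint vao = 0, vbo = 0;
            std::vector<DebugVertex> verts;          // CPU-side scratch, reused each frame
        public:
            void init() {
                glGenVertexArrays(1, &vao);
                glGenBuffers(1, &vbo);
                glBindVertexArray(vao);
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glEnableVertexAttribArray(0);        // position
                glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(DebugVertex), (void*)0);
                glEnableVertexAttribArray(1);        // color
                glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(DebugVertex), (void*)(2 * sizeof(float)));
            }
            void addLine(float x0, float y0, float x1, float y1, float r, float g, float b) {
                verts.push_back({x0, y0, r, g, b});
                verts.push_back({x1, y1, r, g, b});
            }
            void flush() {                           // once per frame, after all addLine() calls
                glBindVertexArray(vao);
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                // Orphan the old storage instead of growing it; drivers handle this pattern well.
                glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(DebugVertex), verts.data(), GL_DYNAMIC_DRAW);
                glDrawArrays(GL_LINES, 0, (GLsizei)verts.size());
                verts.clear();
            }
        };

    Circles and rectangles decompose into line segments pushed into the same buffer, which also sidesteps the ECS question: debug shapes can be transient per-frame data rather than entities or components.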
  2. Hello. I am coding a small thingy in my spare time. All I want to achieve is to load a heightmap (as the lowest possible walking terrain), some static meshes (elements of the environment) and a dynamic character (meaning I can move, collide with the heightmap and static meshes, and hold a varying item in a hand). I've got a bunch of questions, or rather problems, I can't find solutions to myself. Nearly all deal with graphics/GPU rather than the coding part; my C++ is at a high enough level. Let's go:

    Heightmap - I obviously want it to be textured; the size is hardcoded to 256x256 squares. I can't have one huge texture stretched over the entire terrain because every pixel would be enormous. That's why I decided to use two dedicated textures. The first will be a tileset consisting of 16 square tiles (UV ranging from 0 to 0.25 for the first tile and so on), and the second a 256x256 buffer with values 0-15 giving the tileset index for every heightmap square. The problem is: how do I blend the edges nicely and make some computationally cheap changes so it isn't obvious there are only 16 tiles? Is it possible to generate such terrain with some existing program?

    Collisions - I want to use bounding spheres and AABBs. But should I store them per model or per entity instance? Meaning: I have 20 identical trees spawned from the same tree model, but every entity has its own transformation (position, scale etc.). Storing a collision component per instance grants faster access, and it is precalculated and pre-transformed (it takes additional memory, but who cares?), so I should stick with this, right? What should I do if an object is dynamically rotated? The AABB is no longer aligned, and recalculating the per-vertex min/max every time the object rotates or scales is pretty expensive, right? (See the sketch after this post.)

    Drawing AABBs - a problem similar to the above (storing AABB data per instance or per model). This time, in my opinion, per model is enough, since every instance also does not have its own vertex buffer but uses the shared one (so 20 trees share a reference to one tree model). So rendering an AABB is a matter of taking the model's AABB, transforming it with the instance matrix, and voila. What about the AABB vertex buffer (this is more of a cosmetic question; I just bumped into it while writing this)? Is it better to make it 8 points plus an index buffer (12 lines), or only 2 vertices with min/max x/y/z and have the shaders dynamically generate the 6 other vertices and draw the box? Or maybe there should be just ONE 1x1x1 cube template, moved and scaled per entity?

    What if one model has a diffuse texture and a normal map, and another has only a diffuse texture? Should I pass some bool flag to the shader with that info, or just assume that my game supports only diffuse maps without fancy stuff? There were several more questions, but I forgot or solved them while writing this. Thanks in advance.
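    On the rotated-AABB point: no per-vertex pass is needed. The standard trick (from Jim Arvo's Graphics Gems piece) recomputes a world-space AABB from the model-space AABB and the instance matrix using only the 3x3 part plus the translation. A hedged sketch, assuming GLM and its column-major matrices:

        #include <glm/glm.hpp>

        struct AABB { glm::vec3 min, max; };

        // Recompute a world-space AABB from a local AABB and a transform,
        // without touching the mesh vertices (Arvo's method).
        AABB transformAABB(const AABB& local, const glm::mat4& m) {
            glm::vec3 t(m[3]);                 // translation column
            AABB out{t, t};
            for (int i = 0; i < 3; ++i) {      // world axis
                for (int j = 0; j < 3; ++j) {  // local axis; m[j][i] = row i, col j
                    float a = m[j][i] * local.min[j];
                    float b = m[j][i] * local.max[j];
                    out.min[i] += a < b ? a : b;
                    out.max[i] += a < b ? b : a;
                }
            }
            return out;
        }

    The resulting box is conservative (slightly larger than a tight fit after rotation), which is normally fine for a broad phase.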
  3. Modern OpenGL GLSL Camera

    Hi all. I'm reading the tutorials on the learnopengl site (nice site) and I have a question about the camera chapter (https://learnopengl.com/Getting-started/Camera). I had always seen the camera manipulated with lookAt, but in the tutorial I saw the camera driven through the MVP matrices, which seem to move not the camera but the scene itself.

    Vertex shader:

        #version 330 core
        layout (location = 0) in vec3 aPos;
        layout (location = 1) in vec2 aTexCoord;

        out vec2 TexCoord;

        uniform mat4 model;
        uniform mat4 view;
        uniform mat4 projection;

        void main()
        {
            gl_Position = projection * view * model * vec4(aPos, 1.0f);
            TexCoord = vec2(aTexCoord.x, aTexCoord.y);
        }

    Then the matrices are set up:

        glm::mat4 projection = glm::perspective(glm::radians(fov), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
        ourShader.setMat4("projection", projection);
        ...
        glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
        ourShader.setMat4("view", view);
        ...
        model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
        ourShader.setMat4("model", model);

    So, some doubts: Why do it like that? Is it okay to manipulate the camera that way? In this approach, isn't it the vertex positions that change rather than the camera? Do I need to pass the MVP to the shaders of every object in my scene? It seems the camera stands still and it is the scenery that moves... is that right? Thank you.
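    That intuition is correct, and it is how every OpenGL camera works: there is no camera object, only a view transform that moves the whole world so the camera ends up at the origin looking down -Z. glm::lookAt simply builds that matrix directly. A small sketch of the equivalence, assuming GLM:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // A camera placed in the world like any other object...
        glm::mat4 cameraWorld = glm::translate(glm::mat4(1.0f), cameraPos);

        // ...becomes a view matrix by inverting its transform: moving the
        // camera right is the same as moving the whole world left.
        glm::mat4 view = glm::inverse(cameraWorld);

        // lookAt builds the same inverse directly from eye/target/up; for the
        // canonical orientation (front = (0,0,-1), up = (0,1,0)) the two agree.
        glm::mat4 view2 = glm::lookAt(cameraPos,
                                      cameraPos + glm::vec3(0.0f, 0.0f, -1.0f),
                                      glm::vec3(0.0f, 1.0f, 0.0f));

    And yes, every shader that positions vertices needs projection and view (shared per frame) plus the object's own model matrix; a uniform buffer object is the common way to share the first two across programs.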
  4. I am sampling a floating-point texture whose alpha channel holds 4 bytes of data packed into the float. I don't know how to cast the raw memory so I can treat it as an integer and perform bit-shifting operations.

        int rgbValue = int(textureSample.w); // 4 bytes of data packed as a color
        // The algorithm might not be correct and the endianness might need switching.
        vec3 extractedData = vec3(
            rgbValue & 0xFF000000,
            (rgbValue << 8) & 0xFF000000,
            (rgbValue << 16) & 0xFF000000);
        extractedData /= 255.0f;
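    A note that may help: int(x) converts the float's numeric value, which destroys the bit pattern. GLSL 3.30+ provides floatBitsToUint/floatBitsToInt for exactly this reinterpretation, and GLSL 4.10+ (ARB_shading_language_packing) or ES 3.0 adds unpackUnorm4x8. A hedged sketch; the byte order shown is an assumption that depends on how the data was packed:

        // Reinterpret the float's raw bits as a 32-bit unsigned integer.
        uint bits = floatBitsToUint(textureSample.w);

        // Manual unpack (byte order is an assumption - swap the shifts if needed):
        vec4 unpacked = vec4(
            float((bits >> 24) & 0xFFu),
            float((bits >> 16) & 0xFFu),
            float((bits >>  8) & 0xFFu),
            float( bits        & 0xFFu)) / 255.0;

        // Or, on GLSL 4.10+ / ES 3.1: vec4 unpacked = unpackUnorm4x8(bits);

    One caveat worth knowing: round-tripping arbitrary bit patterns through a floating-point texture can silently corrupt them (NaN flushing, filtering, sRGB conversion), so an integer format such as GL_R32UI sampled through a usampler2D is the safer container for packed bytes.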
  5. OpenGL Uniforms

    While writing a simple renderer in OpenGL, I ran into an issue with the glGetUniformLocation function: for some reason the location comes back as -1. Does anyone have any idea what I should do?
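    For reference, -1 usually means the uniform is not active: the name is misspelled, the program failed to link, or the compiler optimized the uniform away because it does not contribute to any output. A small diagnostic sketch (plain C++/OpenGL; the program variable is assumed) that lists what actually survived linking:

        #include <cstdio>
        #include <GL/glew.h>

        GLint linked = 0;
        glGetProgramiv(program, GL_LINK_STATUS, &linked);
        if (!linked) {
            char log[1024];
            glGetProgramInfoLog(program, sizeof(log), nullptr, log);
            fprintf(stderr, "link failed: %s\n", log);
        }

        // Enumerate the uniforms the linker kept; anything missing here
        // was optimized out or never declared.
        GLint count = 0;
        glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
        for (GLint i = 0; i < count; ++i) {
            char name[256]; GLint size; GLenum type;
            glGetActiveUniform(program, (GLuint)i, sizeof(name), nullptr, &size, &type, name);
            printf("active uniform %d: %s\n", i, name);
        }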
  6. Hello everyone! I'm trying to complete the OpenGL tutorial at http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/ and I ran into a problem when loading a texture from a DDS file onto my triangle. Here is my code (basically from the tutorial): DDS loading (Texture.cpp): https://pastebin.com/Y5TKPvue Vertex shader: https://pastebin.com/2pPXQkS9 Fragment shader: https://pastebin.com/4nF2jVMy DDS file: https://drive.google.com/open?id=1yYF2oyLbqn-OMB_QxZKyzKwBwxl_9sCb Rendered window: https://imgur.com/a/1pP92 In the main file, before drawing the triangle, I call texture.Bind(). I also tried manually setting UV points in the fragment shader, and that seems to work (the color changes based on the coordinates). If you have any ideas, they would be really appreciated. Regards, Esentiel
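    Without the pastebin code it's hard to say for certain, but the most common failure mode with hand-rolled DDS loaders is an incomplete mipmap chain: GL_TEXTURE_MIN_FILTER defaults to GL_NEAREST_MIPMAP_LINEAR, so if fewer mip levels were uploaded than the filter expects, sampling returns black. A hedged checklist (mipMapCount and program are assumed names; "myTextureSampler" is the uniform name the tutorial uses):

        // After uploading mip levels 0 .. mipMapCount-1 with glCompressedTexImage2D:
        if (mipMapCount <= 1) {
            // No mips in the file: either disable mipmapped filtering...
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        } else {
            // ...or tell GL exactly how many levels exist.
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mipMapCount - 1);
        }

        // Also rule out a texture-unit/sampler mismatch:
        glActiveTexture(GL_TEXTURE0);
        texture.Bind();   // assumed to bind to unit 0
        glUniform1i(glGetUniformLocation(program, "myTextureSampler"), 0);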
  7. Hi all. I have been looking for a real-time global illumination algorithm to use in my game. I've found voxel cone tracing, and I'm debating whether it's worth investing my time researching and implementing. I have doubts for the following reasons: (1) I see a lot of people say it's really hard to implement; (2) apparently the algorithm requires some Nvidia extension to work efficiently, according to the original paper (I highly doubt that, though); (3) barely real-time performance, meaning it may be too slow to ship in a game. So, in order to decide whether I should invest time in voxel cone tracing, I want to ask: Is the algorithm itself flexible enough that I can increase performance by tweaking it (probably lowering the GI quality at the same time, but I don't care)? And can I implement it without any driver requirement or special extensions, as the paper claims?
  8. Consider the following situation: we have an FBO with two identical color attachments. Bind shader program 1 and render an object to FBO attachment 0. Bind the texture on attachment 0 for sampling. Bind shader program 2 and draw a full-screen quad; in the fragment shader we sample from the texture on attachment 0 and write its value to the texture on attachment 1. Can framebuffer objects be used in this way? The reason I'm considering this is to reduce the number of FBOs I create; I'm experimenting to see whether I can perform all of my rendering passes with a single FBO equipped with multiple attachments. In my current implementation this setup does not work as expected, and I'm trying to determine whether there's a problem with my implementation or whether this is even possible. Any insight would be appreciated!
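    For context: sampling a texture that is attached to the currently bound draw framebuffer is the classic "rendering feedback loop", and the GL spec makes the result undefined when that attachment is also writable. Restricting the draw buffers so attachment 0 cannot be written during the second pass is the usual way to stay on defined ground. A hedged sketch (fbo, attachment0Tex, program1/program2 are assumed names):

        // Pass 1: render the object into attachment 0 only.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glDrawBuffer(GL_COLOR_ATTACHMENT0);
        glUseProgram(program1);
        // ... draw object ...

        // Pass 2: write only to attachment 1 while sampling attachment 0.
        glDrawBuffer(GL_COLOR_ATTACHMENT1);   // attachment 0 is no longer a render target
        glBindTexture(GL_TEXTURE_2D, attachment0Tex);
        glUseProgram(program2);
        // ... draw full-screen quad ...

    Even so, some drivers are stricter than the spec here; if artifacts persist, ping-ponging between two FBOs (or glTextureBarrier on GL 4.5+) is the safe fallback.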
  9. Hi all! I am trying to implement a sun shafts post-process effect in my 3D engine, but I have some artifacts in the final image (please see the attached images). The effect consists of the following passes: (1) a scene depth pass; (2) a "shafts" pass using the depth-pass texture plus the RGBA back-buffer texture; (3) a final pass combining the shafts texture with the RGBA back buffer.

    Shafts shader for pass 2:

        uniform sampler2D FullSampler;   // RGBA back buffer
        uniform sampler2D DepthSampler;
        varying vec2 tex;

        #ifndef saturate
        float saturate(float val) { return clamp(val, 0.0, 1.0); }
        #endif

        void main(void)
        {
            vec2 uv = tex;
            float sceneDepth = texture2D(DepthSampler, uv.xy).r;
            vec4 scene = texture2D(FullSampler, tex);
            float fShaftsMask = (1.0 - sceneDepth);
            gl_FragColor = vec4(scene.xyz * saturate(sceneDepth), fShaftsMask);
        }

    Final shader:

        uniform sampler2D FullSampler;   // RGBA back buffer
        uniform sampler2D BlurSampler;   // shafts sampler
        varying vec4 Sun_pos;
        const vec4 ShaftParams = vec4(0.1, 2.0, 0.1, 2.0);
        varying vec2 Tex_UV;

        #ifndef saturate
        float saturate(float val) { return clamp(val, 0.0, 1.0); }
        #endif

        vec4 blendSoftLight(vec4 a, vec4 b)
        {
            vec4 c = 2.0 * a * b + a * a * (1.0 - 2.0 * b);
            vec4 d = sqrt(a) * (2.0 * b - 1.0) + 2.0 * a * (1.0 - b);
            // TODO: check what the original Crysis code actually does here
            // return (b < 0.5) ? c : d;
            return any(lessThan(b, vec4(0.5, 0.5, 0.5, 0.5))) ? c : d;
        }

        void main(void)
        {
            vec4 sun_pos = Sun_pos;
            vec2 sunPosProj = sun_pos.xy;
            // float sign = sun_pos.w;
            float sign = 1.0;

            vec2 sunVec = sunPosProj.xy - (Tex_UV.xy - vec2(0.5, 0.5));
            float sunDist = saturate(sign) * saturate(1.0 - saturate(length(sunVec) * ShaftParams.y));
            sunVec *= ShaftParams.x * sign;

            vec4 accum;
            vec2 tc = Tex_UV.xy;
            tc += sunVec;
            accum = texture2D(BlurSampler, tc);
            tc += sunVec;
            accum += texture2D(BlurSampler, tc) * 0.875;
            tc += sunVec;
            accum += texture2D(BlurSampler, tc) * 0.75;
            tc += sunVec;
            accum += texture2D(BlurSampler, tc) * 0.625;
            tc += sunVec;
            accum += texture2D(BlurSampler, tc) * 0.5;
            tc += sunVec;
            accum += texture2D(BlurSampler, tc) * 0.375;
            tc += sunVec;
            accum += texture2D(BlurSampler, tc) * 0.25;
            tc += sunVec;
            accum += texture2D(BlurSampler, tc) * 0.125;

            accum *= 0.25 * vec4(sunDist, sunDist, sunDist, 1.0);
            accum.w += 1.0 - saturate(saturate(sign * 0.1 + 0.9));

            vec4 cScreen = texture2D(FullSampler, Tex_UV.xy);
            vec4 cSunShafts = accum;
            float fShaftsMask = saturate(1.00001 - cSunShafts.w) * ShaftParams.z * 2.0;
            float fBlend = cSunShafts.w;

            vec4 sunColor = vec4(0.9, 0.8, 0.6, 1.0);
            accum = cScreen + cSunShafts.xyzz * ShaftParams.w * sunColor * (1.0 - cScreen);
            accum = blendSoftLight(accum, sunColor * fShaftsMask * 0.5 + 0.5);
            gl_FragColor = accum;
        }

    Demo project: Demo Project (the post-process shaders are under Shaders/SunShaft/). What am I doing wrong? Thanks!
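    One concrete suspect: the commented-out per-component ternary was replaced with any(lessThan(...)), which collapses four independent per-channel branches into a single all-or-nothing one, so a pixel where only one channel is below 0.5 gets the wrong blend on the other three. A branch-free per-component version (a sketch, not tested against the demo project):

        vec4 blendSoftLight(vec4 a, vec4 b)
        {
            vec4 c = 2.0 * a * b + a * a * (1.0 - 2.0 * b);
            vec4 d = sqrt(a) * (2.0 * b - 1.0) + 2.0 * a * (1.0 - b);
            // step(0.5, b) is 0.0 where b < 0.5 and 1.0 otherwise,
            // so this selects c or d independently for each channel.
            return mix(c, d, step(vec4(0.5), b));
        }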
  10. I'm trying to implement Weighted, Blended Order-Independent Transparency in my engine, and I need to read the default framebuffer's depth buffer in an off-screen transparency pass. What is the best way to do that? Is there a way to read the default depth buffer directly, instead of copying it every frame? If not, how can I copy it? After all, I don't even know the default depth format, do I?
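    For reference, the default framebuffer's depth buffer cannot be bound as a texture, so the usual options are either to render the opaque pass into your own FBO (so you own the depth texture outright) or to blit the default depth into an FBO once per frame. A hedged sketch of the format query and the blit; transparencyFbo, width and height are assumed names:

        // Ask what the default framebuffer's depth actually is.
        GLint depthBits = 0;
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
            GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);

        // Copy the default depth into an FBO whose depth attachment matches that format.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, transparencyFbo);
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_DEPTH_BUFFER_BIT, GL_NEAREST);

    Depth blits require identical formats on both sides, hence the query; the cleaner long-term layout is to render opaques into your own FBO so no per-frame copy is needed at all.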
  11. I've been dabbling in OpenGL for the past few days, and I've made a simple renderer which can display textures. After some time I realized that this would be too slow for a full game, so I started researching batching. I have successfully rendered a colored triangle/rect while batching, but the texture has stumped me for a day. For my texture-loading reference I used this OpenGL tutorial. The attachments show the picture I'm trying to render and what actually happens when I start my project. I have tried changing the RGB to RGBA settings and so on, and nothing seemed to work. Also, when I remove the glBindTexture() call, the result is the same. Here is my code:

    Main.cpp (there is some initialization, but I doubt that will be necessary):

        VertexData one;
        one.vertex = glm::vec2(-0.5f, 0.5f);
        one.color = glm::vec3(1.0f, 0.0f, 0.0f);
        one.texcoord = glm::vec2(0.0f, 0.0f);

        VertexData two;
        two.vertex = glm::vec2(0.5f, 0.5f);
        two.color = glm::vec3(0.0f, 1.0f, 0.0f);
        one.texcoord = glm::vec2(1.0f, 0.0f);

        VertexData three;
        three.vertex = glm::vec2(0.5f, -0.5f);
        three.color = glm::vec3(0.0f, 0.0f, 1.0f);
        one.texcoord = glm::vec2(1.0f, 1.0f);

        VertexData four;
        four.vertex = glm::vec2(-0.5f, -0.5f);
        four.color = glm::vec3(1.0f, 1.0f, 1.0f);
        one.texcoord = glm::vec2(0.0f, 1.0f);

        Shader shader(vertexSource, fragmentSource);
        Batch renderer(shader);

        while (!glfwWindowShouldClose(game.getWindow()))
        {
            glfwPollEvents();
            if (glfwGetKey(game.getWindow(), GLFW_KEY_ESCAPE) == GLFW_PRESS)
                glfwSetWindowShouldClose(game.getWindow(), GL_TRUE);

            renderer.Begin();
            renderer.Submit(&one);
            renderer.Submit(&two);
            renderer.Submit(&three);
            renderer.Submit(&three);
            renderer.Submit(&four);
            renderer.Submit(&one);
            renderer.End();

            glfwSwapBuffers(game.getWindow());
        }

    Batch.h:

        #pragma once
        #include <GL\glew.h>
        #include <iostream>
        #include <SOIL2\SOIL2.h>
        #include "Renderer.h"
        #include "Shader.h"
        #include "VertexArray.h"
        #include "VertexBuffer.h"
        #include "VertexBufferLayout.h"

        #define RENDERER_MAX_VERTICES 10
        #define WINWIDTH 800
        #define WINHEIGHT 600

        struct VertexData
        {
            glm::vec2 vertex;
            glm::vec3 color;
            glm::vec2 texcoord;
        };

        class Batch
        {
        private:
            Renderer m_Renderer;
            unsigned int shaderProgram;
            unsigned int _vbo;
            unsigned int _vao;
            VertexData *_buffer;
            unsigned int _vertexCount;
            bool m_Drawing;
        public:
            Batch(Shader &shader)
                : _vertexCount(0), shaderProgram(shader.getProgram())
            {
                glGenBuffers(1, &_vbo);
                glGenVertexArrays(1, &_vao);
                glBindVertexArray(_vao);
                glBindBuffer(GL_ARRAY_BUFFER, _vbo);
                glBufferData(GL_ARRAY_BUFFER, RENDERER_MAX_VERTICES * sizeof(VertexData), NULL, GL_DYNAMIC_DRAW);

                // Specify the layout of the vertex data
                GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
                glEnableVertexAttribArray(posAttrib);
                glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 7 * sizeof(GLfloat), 0);

                GLint colAttrib = glGetAttribLocation(shaderProgram, "color");
                glEnableVertexAttribArray(colAttrib);
                glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE, 7 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));

                GLint texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
                glEnableVertexAttribArray(texAttrib);
                glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE, 7 * sizeof(GLfloat), (void*)(5 * sizeof(GLfloat)));

                // Load texture
                GLuint tex;
                glGenTextures(1, &tex);
                glBindTexture(GL_TEXTURE_2D, tex);

                int width, height;
                unsigned char* image = SOIL_load_image("sample.png", &width, &height, 0, SOIL_LOAD_RGB);
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
                SOIL_free_image_data(image);

                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

                glBindBuffer(GL_ARRAY_BUFFER, 0);
                glBindVertexArray(0);
            }

            ~Batch() { }

            void Begin()
            {
                glBindBuffer(GL_ARRAY_BUFFER, _vbo);
                _buffer = (VertexData*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
            }

            void Submit(VertexData* data)
            {
                *_buffer = *data;
                _buffer++;
                _vertexCount++;
            }

            void End()
            {
                glUnmapBuffer(GL_ARRAY_BUFFER);
                glBindBuffer(GL_ARRAY_BUFFER, 0);

                glClear(GL_COLOR_BUFFER_BIT);
                glBindVertexArray(_vao);
                glUseProgram(shaderProgram);
                glDrawArrays(GL_TRIANGLES, 0, _vertexCount);
                glBindVertexArray(0);
                _vertexCount = 0;
            }
        };

    Vertex and fragment shaders:

        // Shader sources
        const GLchar* vertexSource = R"glsl(
            #version 150 core
            in vec2 position;
            in vec3 color;
            in vec2 texcoord;
            out vec3 Color;
            out vec2 Texcoord;
            void main()
            {
                Color = color;
                Texcoord = texcoord;
                gl_Position = vec4(position, 0.0, 1.0);
            }
        )glsl";

        const GLchar* fragmentSource = R"glsl(
            #version 150 core
            in vec3 Color;
            in vec2 Texcoord;
            out vec4 outColor;
            uniform sampler2D tex;
            void main()
            {
                outColor = texture(tex, Texcoord) * vec4(Color, 1.0);
            }
        )glsl";

    I thank everyone in advance who tries to help :-) Also, I'm open to suggestions for optimizations or better techniques.
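    One observation worth checking before anything GL-related: there is a copy-paste slip in the vertex setup above - every texcoord is written to one, so two, three and four are drawn with uninitialized texture coordinates. The intended setup would presumably be:

        two.texcoord   = glm::vec2(1.0f, 0.0f);   // was: one.texcoord
        three.texcoord = glm::vec2(1.0f, 1.0f);   // was: one.texcoord
        four.texcoord  = glm::vec2(0.0f, 1.0f);   // was: one.texcoord

    That alone would explain garbled sampling even though the colored, untextured version worked.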
  12. Hello. I'm trying to make an Android game and I have come across a problem. I want to draw different map layers at different Z depths, so that some of the tiles are drawn above the player while others are drawn under him. But there is an issue where pixels with alpha are drawn above the player. This is the code I'm using:

        int setup() {
            GLES20.glEnable(GLES20.GL_DEPTH_TEST);
            GLES20.glEnable(GL10.GL_ALPHA_TEST);
            GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        }

        int render() {
            GLES20.glClearColor(0, 0, 0, 0);
            GLES20.glClear(GLES20.GL_ALPHA_BITS);
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT);
            GLES20.glBlendFunc(GLES20.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // do the binding of textures and drawing vertices
        }

    My vertex shader:

        uniform mat4 MVPMatrix; // model-view-projection matrix
        uniform mat4 projectionMatrix;
        attribute vec4 position;
        attribute vec2 textureCoords;
        attribute vec4 color;
        attribute vec3 normal;
        varying vec4 outColor;
        varying vec2 outTexCoords;
        varying vec3 outNormal;

        void main()
        {
            outNormal = normal;
            outTexCoords = textureCoords;
            outColor = color;
            gl_Position = MVPMatrix * position;
        }

    My fragment shader:

        precision highp float;
        uniform sampler2D texture;
        varying vec4 outColor;
        varying vec2 outTexCoords;
        varying vec3 outNormal;

        void main()
        {
            vec4 color = texture2D(texture, outTexCoords) * outColor;
            gl_FragColor = vec4(color.r, color.g, color.b, color.a);
        }

    I have attached a picture of how it looks. You can see the black squares near the tree; these squares should be transparent, as they are in the PNG image. It's also strange that in the second picture, instead of alpha or just black, it displays the grass texture beneath the player and the tree. Any ideas on how to fix this? Thanks in advance.
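    For context: GL_ALPHA_TEST and glEnable(GL_TEXTURE_2D) are fixed-function ES 1.x states and do nothing in ES 2.0 (they raise GL_INVALID_ENUM), and blending cannot fix this on its own, because the depth test runs regardless of alpha - a fully transparent texel still writes depth and occludes whatever is drawn behind it later. With a shader pipeline the alpha test has to be done manually with discard. A sketch of the usual fix:

        precision highp float;
        uniform sampler2D texture;
        varying vec4 outColor;
        varying vec2 outTexCoords;

        void main()
        {
            vec4 color = texture2D(texture, outTexCoords) * outColor;
            // Manual alpha test: transparent texels write neither color nor depth.
            if (color.a < 0.1)
                discard;
            gl_FragColor = color;
        }

    Note that glEnable(GL_BLEND) is also missing from the setup above if partially transparent edges are wanted; for hard-edged tiles, discard alone is enough.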
  13. Releasing is scary

    I have released my first free prototype! https://yesindiedee.itch.io/is-this-a-game How terrifying! It is strange that I have been working toward the moment of releasing something to the public for all of my adult life, and now that I have, I find it pretty scary. I have been a developer for over 20 years, and in that time I have released a grand total of 0 products.

    The Engine: The engine is designed to be flexible with its components, but so far it uses OpenGL, OpenAL, Python (scripting) and CG; everything else is built in.

    The Games: When I started developing a game I had a pretty grand vision: a 3D exploration game called Cavian (image attached). And yep, it was far too complex for my first release. Maybe I will go back to it one day. I took a year off after that; I had to sell most of my stuff anyway, as not releasing games isn't great for your financial situation.

    The Release: When I came back I was determined to actually release something! I lowered my sights to a car game. It is basically finished, but unfortunately my laptop is too old to handle the deferred lighting (Thinkpad X220, Intel graphics), so I can't really test it; I'm going to wait until I can afford a better computer before releasing it. Still determined to release something, I decided to focus more on the gameplay than the graphics.

    Is This A Game? Now I have created an experimental prototype. It's released and everything: https://yesindiedee.itch.io/is-this-a-game So far I don't know if it even runs on another computer. Any feedback would be greatly appreciated! If you have any questions about any process in the creation of this game - design, coding, scripting, graphics, deployment - just ask, and I will try to make a post on it. Have a nice day! I have been lurking on here for ages but never really said anything... I like my cave.
  14. For those that don't know me: I am the individual whose two videos are listed under Setup at https://wiki.libsdl.org/Tutorials. I also run grhmedia.com, where I host the projects and code for the tutorials I have online. Recently I received a notice from YouTube that they will be implementing their new policy on protecting video content, under which I won't be monetized until I meet their required number of viewers and views each month. Frankly, I'm pretty sick of YouTube. I put up a video, someone else learns from it and puts up another video, and because of the way YouTube does placement they end up with more views. Even guys who clearly post false information, such as one individual who said GLEW 2.0 was broken because he didn't know how to compile it. In short, he didn't know how to modify the script he used, because he didn't understand makefiles and how the changed requirements of the compiler and library needed some different flags. At the end of the month, when they implement this, I will take down the content and host it purely on my own server, and it will be a paid system and/or Patreon. I get that my videos may be a bit dry; I generally figure people are there to learn how to do something, and I would rather not waste their time. I also used to help people for free, even those coming from the other videos; I used to just take anyone's email and work with them (my email is posted on the site). That won't be the case any more. I don't expect to get the required number of subscribers in that time, or increased views; even if I did, it wouldn't take care of each recurring month. I figure this is simpler, and I don't plan on charging some exorbitant fee for a monthly subscription or the like. I was thinking along the lines of a few dollars (1, 2 and 3), where the larger subscription gets you assistance with the content in the tutorials if needed that month, and maybe another fee if the question is related but not directly covered by the content. The fees would serve to cut down on the number of people who ask for help, and maybe encourage some of them to actually pay attention to what is said rather than doing their own thing - that actually turns out to be 90% of the issues. I spent six hours helping one individual last week. I must have asked him 20 times whether he had done exactly what I said in the video, and even pointed directly to the section. When he finally sent me a copy of what he had entered, I knew then and there that he had not. I circled it and pointed out that it wasn't what I said to do in the video. I didn't tell him what was wrong or how I knew, so that he would go back and actually follow the instructions. He then reported that it worked. Yeah, no kidding: following directions works. But hey, he isn't alone, and it's part of the learning process. So the point of this isn't to be a gripe session; I'm just looking for a bit of feedback. Do you think the fees are unreasonable? Should I keep the YouTube channel and just do the fees through Patreon, or do you think locking the content to my site and requiring a subscription is a good idea? I'm just looking at the fact that it is unrealistic to think YouTube/Google will actually get things right, or that YouTube viewers will bother to look for more accurate videos.
  15. OpenGL glError 1282

    I get error 1282 in my code:

        sf::ContextSettings settings;
        settings.majorVersion = 4;
        settings.minorVersion = 5;
        settings.attributeFlags = settings.Core;

        sf::Window window;
        window.create(sf::VideoMode(1600, 900), "Texture Unit Rectangle", sf::Style::Close, settings);
        window.setActive(true);
        window.setVerticalSyncEnabled(true);

        glewInit();

        GLuint shaderProgram = createShaderProgram("FX/Rectangle.vss", "FX/Rectangle.fss");

        float vertex[] =
        {
            -0.5f,  0.5f, 0.0f,   0.0f, 0.0f,
            -0.5f, -0.5f, 0.0f,   0.0f, 1.0f,
             0.5f,  0.5f, 0.0f,   1.0f, 0.0f,
             0.5f, -0.5f, 0.0f,   1.0f, 1.0f,
        };

        GLuint indices[] =
        {
            0, 1, 2,
            1, 2, 3,
        };

        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertex), vertex, GL_STATIC_DRAW);

        GLuint ebo;
        glGenBuffers(1, &ebo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        glVertexAttribPointer(0, 3, GL_FLOAT, false, sizeof(float) * 5, (void*)0);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(1, 2, GL_FLOAT, false, sizeof(float) * 5, (void*)(sizeof(float) * 3));
        glEnableVertexAttribArray(1);

        GLuint texture[2];
        glGenTextures(2, texture);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        sf::Image* imageOne = new sf::Image;
        bool isImageOneLoaded = imageOne->loadFromFile("Texture/container.jpg");
        if (isImageOneLoaded)
        {
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageOne->getSize().x, imageOne->getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageOne->getPixelsPtr());
            glGenerateMipmap(GL_TEXTURE_2D);
        }
        delete imageOne;

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, texture[1]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        sf::Image* imageTwo = new sf::Image;
        bool isImageTwoLoaded = imageTwo->loadFromFile("Texture/awesomeface.png");
        if (isImageTwoLoaded)
        {
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageTwo->getSize().x, imageTwo->getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageTwo->getPixelsPtr());
            glGenerateMipmap(GL_TEXTURE_2D);
        }
        delete imageTwo;

        glUniform1i(glGetUniformLocation(shaderProgram, "inTextureOne"), 0);
        glUniform1i(glGetUniformLocation(shaderProgram, "inTextureTwo"), 1);

        GLenum error = glGetError();
        std::cout << error << std::endl;

        sf::Event event;
        bool isRunning = true;
        while (isRunning)
        {
            while (window.pollEvent(event))
            {
                if (event.type == event.Closed)
                {
                    isRunning = false;
                }
            }
            glClear(GL_COLOR_BUFFER_BIT);

            if (isImageOneLoaded && isImageTwoLoaded)
            {
                glActiveTexture(GL_TEXTURE0);
                glBindTexture(GL_TEXTURE_2D, texture[0]);
                glActiveTexture(GL_TEXTURE1);
                glBindTexture(GL_TEXTURE_2D, texture[1]);
                glUseProgram(shaderProgram);
            }

            glBindVertexArray(vao);
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
            glBindVertexArray(0);

            window.display();
        }

        glDeleteVertexArrays(1, &vao);
        glDeleteBuffers(1, &vbo);
        glDeleteBuffers(1, &ebo);
        glDeleteProgram(shaderProgram);
        glDeleteTextures(2, texture);
        return 0;
        }

    And this is the vertex shader:

        #version 450 core
        layout(location = 0) in vec3 inPos;
        layout(location = 1) in vec2 inTexCoord;

        out vec2 TexCoord;

        void main()
        {
            gl_Position = vec4(inPos, 1.0);
            TexCoord = inTexCoord;
        }

    And the fragment shader:

        #version 450 core
        in vec2 TexCoord;
        uniform sampler2D inTextureOne;
        uniform sampler2D inTextureTwo;
        out vec4 FragmentColor;

        void main()
        {
            FragmentColor = mix(texture(inTextureOne, TexCoord), texture(inTextureTwo, TexCoord), 0.2);
        }

    I was expecting awesomeface.png on top of container.jpg.
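    A likely cause, offered as an educated guess: 1282 is GL_INVALID_OPERATION, and glUniform1i sets state on the currently bound program, so calling it before any glUseProgram is exactly the kind of call that raises that error. Two hedged alternatives:

        // Either bind the program before setting its uniforms...
        glUseProgram(shaderProgram);
        glUniform1i(glGetUniformLocation(shaderProgram, "inTextureOne"), 0);
        glUniform1i(glGetUniformLocation(shaderProgram, "inTextureTwo"), 1);

        // ...or use the GL 4.1+ form, which names the program explicitly:
        glProgramUniform1i(shaderProgram, glGetUniformLocation(shaderProgram, "inTextureOne"), 0);
        glProgramUniform1i(shaderProgram, glGetUniformLocation(shaderProgram, "inTextureTwo"), 1);

    Also worth noting: sf::Image::getPixelsPtr() returns RGBA pixels, so the GL_RGBA upload is fine even for the JPG.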
  16. Hi, I have an OpenGL application, but without the possibility of writing my own shaders, and I need to perform a small vertex-shader modification. Is it possible to do this in an alternative way? Are there apps or driver modifications that will catch the shader being sent to the GPU and override it?
  17. Is a sync needed to read texture contents after accessing a texture image in a compute shader? My simple code is below:

        glUseProgram(program.get());
        glBindImageTexture(0, texture[0], 0, GL_FALSE, 3, GL_READ_ONLY, GL_R32UI);
        glBindImageTexture(1, texture[1], 0, GL_FALSE, 4, GL_WRITE_ONLY, GL_R32UI);
        glDispatchCompute(1, 1, 1);
        // Is a sync needed here?
        glUseProgram(0);

        glBindFramebuffer(GL_READ_FRAMEBUFFER, framebuffer);
        glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, texture[1], 0);
        glReadPixels(0, 0, kWidth, kHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, outputValues);

    The compute shader is very simple: it imageLoads from texture[0] and imageStores to texture[1]. Is a sync needed after glDispatchCompute?
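    For reference: yes. Image stores are incoherent writes, so an explicit glMemoryBarrier is required between the dispatch and any consumer of the data; without it, the glReadPixels may see stale contents. The barrier bit names how the data will be consumed next - here it is read through a framebuffer attachment:

        glDispatchCompute(1, 1, 1);
        // Make imageStore() results visible to framebuffer reads (glReadPixels).
        glMemoryBarrier(GL_FRAMEBUFFER_BARRIER_BIT);
        // ...attach texture[1] and glReadPixels as above...

    If the texture were instead sampled in a later draw, GL_TEXTURE_FETCH_BARRIER_BIT would be the relevant bit; GL_ALL_BARRIER_BITS is the blunt but safe catch-all.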
  18. Transforming Angular Velocity

    My question: is it possible to transform multiple angular velocities so that they can be reinserted as one? My research is below:

        // This works
        quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
        quat quaternion2 = GEMultiplyQuaternions(quaternion1, GEQuaternionFromAngleRadians(angleRadiansVector2));
        quat quaternion3 = GEMultiplyQuaternions(quaternion2, GEQuaternionFromAngleRadians(angleRadiansVector3));
        glMultMatrixf(GEMat4FromQuaternion(quaternion3).array);

        // The first two work fine but not the third. Why?
        quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
        vec3 vector1 = GETransformQuaternionAndVector(quaternion1, angularVelocity1);
        quat quaternion2 = GEQuaternionFromAngleRadians(angleRadiansVector2);
        vec3 vector2 = GETransformQuaternionAndVector(quaternion2, angularVelocity2);
        // This doesn't work
        //quat quaternion3 = GEQuaternionFromAngleRadians(angleRadiansVector3);
        //vec3 vector3 = GETransformQuaternionAndVector(quaternion3, angularVelocity3);

        vec3 angleVelocity = GEAddVectors(vector1, vector2);
        // Does not work: vec3 angleVelocity = GEAddVectors(vector1, GEAddVectors(vector2, vector3));

        static vec3 angleRadiansVector;
        vec3 angularAcceleration = GESetVector(0.0, 0.0, 0.0);

        // Sending it through one angular velocity later in my motion engine
        angleVelocity = GEAddVectors(angleVelocity, GEMultiplyVectorAndScalar(angularAcceleration, timeStep));
        angleRadiansVector = GEAddVectors(angleRadiansVector, GEMultiplyVectorAndScalar(angleVelocity, timeStep));
        glMultMatrixf(GEMat4FromEulerAngle(angleRadiansVector).array);

    Also, how do I combine multiple angularAcceleration variables? Is there an easier way to transform the angular values?
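    A general note that may help frame the problem: angular velocities are ordinary vectors, so they add directly, but only once they are expressed in the same frame; a body-space angular velocity must first be rotated into world space by the body's current orientation. Integrating the summed velocity through the quaternion derivative then avoids Euler-angle round trips entirely. A sketch with GLM (the GE* functions above are the poster's own library, and q1..q3, w1..w3 and orientation are assumed variables, so this is a parallel illustration rather than a drop-in fix):

        #include <glm/glm.hpp>
        #include <glm/gtc/quaternion.hpp>

        // Rotate each body-space angular velocity into the common (world) frame,
        // then sum them; vectors in the same frame combine by simple addition.
        glm::vec3 wWorld = q1 * w1 + q2 * w2 + q3 * w3;   // quat * vec3 rotates the vector

        // Integrate orientation with the quaternion derivative dq/dt = 0.5 * w * q.
        glm::quat dq = 0.5f * glm::quat(0.0f, wWorld) * orientation;
        orientation = glm::normalize(orientation + dq * timeStep);

    Angular accelerations combine the same way: transform each into the common frame, add them, then fold the sum into the angular velocity with w += a * dt.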
  19. In hopefully the first blog of a (fairly!) regular series, I would like to introduce the game I've been working on for a while now. With a working title of COG, the game is a 3D action adventure game with a steampunk theme and strong platformer elements. This is very much an indie project: the coding of the home-grown engine is done by me, the 3D modeling is (largely) done by me, the level design is done by me and... well, you get the idea! About the only support I'm getting is from my kids, who are doing the 'QA' (and who are super critical when something doesn't work!).

    So where am I with this project? The game engine itself is pretty much feature complete, and the game is playable, even if there is still a lot to do in optimising and polishing the different components. Most of the standard platformer elements (conveyor belts, lifts, moving platforms, spike traps, switches, etc.) are already working and available through the level editor, with the 'levels' (more accurately, map sections) loading on the fly to create a continuous, seamless world. Basic monster code is already in place, and it is pretty easy to add new monster types as needed for the game. The lighting and particle systems are currently being reintegrated into the game (the code is complete, but was deactivated while creating the first builds), so hopefully the landscape will soon be full of splashing water and rising steam and smoke (pretty important to have rising steam in a steampunk game!). The camera system is nearing completion: it allows the player full control of the camera angle, with objects close to the camera position fading out to improve player visibility (this is the last bit being worked on here, with the drawing order of semi-transparent buildings being tightened up and optimised).

    Looking at the content, there is currently some placeholder material in use, the worst offender being the player's character, which is currently a (not very steampunk) kiwi rather than what will eventually be a small robot. In addition, while the 3D models are almost all self-created, the textures are a mix of ones I've created myself, ones I've purchased, and a handful of temporary placeholders. These placeholders have allowed for rapid prototyping (especially as creating textures is a slow and painful process for me), but will obviously need to be replaced in the near future. Fortunately this shouldn't be a huge task, as I'm using texture atlases that allow for easy swapping in and out of new textures; there should be relatively few cases where it is necessary to tweak the texture mapping. For information, the screenshots and video contain this placeholder content.

    As a result of all this, there is now quite a nice library of 'Lego bricks' available, including a range of basic building blocks (platforms, towers, walls, pipes, etc.), more exotic props (the already-mentioned spike traps, barriers, lifts and switches, as well as the mandatory barrels and movable crates), plants, rocks and other landscape elements. Finally, there is a series of animated models, including waterwheels, steam engines, pistons and so on, whose clanking and whirring will hopefully give the impression of a living world around the player.

    So things are going pretty well at the moment (especially given the limits on the time I can devote to this), and COG seems to be getting to the stage where it is moving from a process of writing code to one where the current task is to complete the game - a pretty rare thing for me! Please let me know what you think of the screenshots and video; one of the main reasons for starting this blog is to start getting feedback to help shape the remainder of the game, and all constructive comments would be very much appreciated. Also, if there is interest in a particular aspect of the game development process, I would be happy to share more details in a later blog entry. Thank you for your time!
  20. I have the code below in both my vertex and fragment shader; however, when I request glGetUniformLocation for "Lights[0].diffuse" or "Lights[0].attenuation", it returns -1. It only gives me a valid uniform location if I actually use the diffuse/attenuation variables in the VERTEX shader. Because I use position in the vertex shader, that member always returns a valid location. I've read that I can share uniforms across both vertex and fragment shaders, but I'm confused about what this even compiles to if that is the case.

        #define NUM_LIGHTS 2
        struct Light
        {
            vec3 position;
            vec3 diffuse;
            float attenuation;
        };
        uniform Light Lights[NUM_LIGHTS];
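    Some context that may explain the behaviour: uniform locations belong to the linked program, not to individual stages, and any struct member that does not contribute to a stage's output is optimized away; if no stage actually reads Lights[0].diffuse, it is inactive and reports -1. That the member "works" when referenced from the vertex shader suggests the fragment-shader reference is either dead code or not part of the linked program. A minimal fragment shader that keeps the members alive, as a sanity check (a sketch, not the poster's shader):

        #version 330 core
        #define NUM_LIGHTS 2
        struct Light { vec3 position; vec3 diffuse; float attenuation; };
        uniform Light Lights[NUM_LIGHTS];
        out vec4 fragColor;

        void main()
        {
            // Reading the members in code that reaches the output keeps them active.
            vec3 c = Lights[0].diffuse * Lights[0].attenuation + Lights[1].diffuse;
            fragColor = vec4(c, 1.0);
        }

    If the location is still -1 with this, the usual culprits are querying a stale program object or the fragment shader not being attached before linking.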
  21. Hello. I have a Bachelor's project on the topic "Implement the 3D boids algorithm in OpenGL". All the OpenGL parts work fine for me - rendering and so on - but when I started implementing the boids algorithm itself, things got worse and worse. I read this article (http://natureofcode.com/book/chapter-6-autonomous-agents/) and took inspiration from other code (here: https://github.com/jyanar/Boids/tree/master/src), but it still doesn't work like in the tutorials and videos. The main problem, for example: when I apply cohesion (one of the three main boid rules), it creates a "cycling knot". Second, when one flock touches another, the boids abruptly change coordinates or respawn at the origin (x: 0, y: 0, z: 0) - just strange things. I have followed many tutorials and tried changing everything, but it still isn't smooth and lag-free like in the other videos. I really need your help. My code (the "Optimalizing" branch): https://github.com/pr033r/BachelorProject/tree/Optimalizing Exe file (if you want to have a look) and models folder (for those who download the sources): http://leteckaposta.cz/367190436 Thanks for any help...
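    One pattern worth checking, since it produces exactly the orbiting "knot" behaviour described: cohesion implemented as a raw force toward the flock centre never decelerates, so boids overshoot and circle it. Reynolds' formulation steers toward a desired velocity instead, with both the desired speed and the steering force clamped. A hedged sketch with GLM (the Boid struct and tuning constants are assumptions, not taken from the linked repository):

        #include <glm/glm.hpp>

        struct Boid { glm::vec3 position, velocity; };

        const float MAX_SPEED = 3.0f;    // tuning values are assumptions
        const float MAX_FORCE = 0.05f;

        glm::vec3 limit(glm::vec3 v, float maxLen) {
            float len = glm::length(v);
            return (len > maxLen) ? v * (maxLen / len) : v;
        }

        // Cohesion as steering: seek the local centre of mass rather than
        // accelerating straight at it.
        glm::vec3 cohesion(const Boid& self, const glm::vec3& flockCenter) {
            glm::vec3 desired = flockCenter - self.position;
            if (glm::length(desired) < 1e-5f) return glm::vec3(0.0f);  // avoid normalize(0)
            desired = glm::normalize(desired) * MAX_SPEED;   // want full speed toward centre...
            glm::vec3 steer = desired - self.velocity;       // ...corrected from current velocity
            return limit(steer, MAX_FORCE);                  // clamped turn rate = smooth paths
        }

    The respawn-at-origin symptom is typically a NaN from normalizing a zero-length vector (e.g. two boids at the exact same position), which the early-out above guards against.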
  22. Hey, I recently upgraded to Nsight 5.4.0.17240 (Visual Studio), and OpenGL debugging just crashes Nsight. Has anyone had a similar experience? I would post on the Nvidia DevForums, but they have never answered any of my previous questions... Thanks
  23. I am currently trying to implement shadow mapping in my project, but although I can render my depth map to the screen and it looks okay, when I sample it with shadowCoords there is no shadow. Here is my light-space matrix calculation:

        mat4x4 lightViewMatrix;
        vec3 sun_pos = {SUN_OFFSET * the_sun->direction[0],
                        SUN_OFFSET * the_sun->direction[1],
                        SUN_OFFSET * the_sun->direction[2]};
        mat4x4_look_at(lightViewMatrix, sun_pos, player->pos, up);
        mat4x4_mul(lightSpaceMatrix, lightProjMatrix, lightViewMatrix);

    I will tweak the values for the size and frustum of the shadow map later; for now I just want to draw shadows around the player position. the_sun->direction is a normalized vector, so I multiply it by a constant to get the position. player->pos is the camera position in world space. The light projection matrix is calculated like this:

        mat4x4_ortho(lightProjMatrix, -SHADOW_FAR, SHADOW_FAR, -SHADOW_FAR, SHADOW_FAR, NEAR, SHADOW_FAR);

    Shadow vertex shader:

        uniform mat4 light_space_matrix;

        void main()
        {
            gl_Position = light_space_matrix * transfMatrix * vec4(position, 1.0f);
        }

    Shadow fragment shader:

        out float fragDepth;

        void main()
        {
            fragDepth = gl_FragCoord.z;
        }

    I am using deferred rendering, so I have all my world positions in the g_positions buffer. My shadow calculation in the deferred fragment shader:

        float get_shadow_fac(vec4 light_space_pos)
        {
            vec3 shadow_coords = light_space_pos.xyz / light_space_pos.w;
            shadow_coords = shadow_coords * 0.5 + 0.5;
            float closest_depth = texture(shadow_map, shadow_coords.xy).r;
            float current_depth = shadow_coords.z;
            float shadow_fac = 1.0;
            if (closest_depth < current_depth)
                shadow_fac = 0.5;
            return shadow_fac;
        }

    I call the function like this:

        get_shadow_fac(light_space_matrix * vec4(position, 1.0));

    where position is the value I got from sampling the g_position buffer. Here is my depth texture (I know it will produce low-quality shadows, but I just want to get it working for now). Sorry about the compression; the black smudges are trees: https://i.stack.imgur.com/T43aK.jpg

    EDIT: Depth texture attachment:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, fbo->width, fbo->height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, fbo->depthTexture, 0);
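    Two hedged things to check with a depth-only shadow FBO like this. First, a framebuffer with no color attachment is incomplete unless the color draw buffer is explicitly disabled (and the out float fragDepth above writes to a color buffer that doesn't exist; the fragment shader of a depth-only pass can simply be empty). Second, without a small bias, self-shadowing ("acne") can make early results look like no shadow at all:

        // When creating the shadow FBO: no color output, depth only.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo->handle);   // 'handle' is an assumed field name
        glDrawBuffer(GL_NONE);
        glReadBuffer(GL_NONE);

        // In the deferred shader, compare with a bias:
        //   float bias = 0.005;
        //   if (closest_depth < current_depth - bias) shadow_fac = 0.5;

    It may also be worth treating shadow_coords outside [0,1] (everything beyond the ortho frustum) as lit, rather than sampling the clamped edge texel.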
  24. Getting OpenGL to work with Xcode (on a MacBook) is a wrestling match, but I've managed to get everything to work except the image-loading library SOIL2. Has anyone had success with this? I have done the following steps: compiled SOIL2 with premake and make on the Mac; added all the header files to /usr/local/include/SOIL2; added a reference to /usr/local/include to Header Search Paths in Build Settings; added a reference to the directory containing the libsoil2-debug.a file to "Link Binary with Libraries" under Build Phases; and added #include <SOIL2/SOIL2.h> to the program. The error message I get is from the linker: it can't find the soil2 library. I tried also adding a path to the .a file to the Library Search Paths, but then I get lots more errors. After a couple of days going in circles, I'm really stuck. I had no problem getting SOIL2 working on the PC under VS. Any ideas for Xcode?
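    For what it's worth, the usual shape of this fix (an assumption about this specific setup, since the exact errors aren't shown) is: Library Search Paths must contain the directory holding libsoil2-debug.a, and Other Linker Flags must name the library. "Lots more errors" once the library is found usually means undefined symbols from Apple frameworks the library depends on, which get added under Link Binary with Libraries. Conceptually, as linker arguments:

        LIBRARY_SEARCH_PATHS = /usr/local/lib        # wherever libsoil2-debug.a lives
        OTHER_LDFLAGS = -lsoil2-debug \
                        -framework OpenGL \
                        -framework CoreFoundation    # the exact frameworks are an educated guess

    Pasting the first few "Undefined symbols" lines from the new errors would pinpoint exactly which framework is missing.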
  25. The title is vague, so I'll explain. I was frustrated with UI development (in general), and even more so when working with an application that needed OpenGL with an embedded UI. What if I wanted to make a full application with OpenGL? (Custom game engine, anyone?) Well, I did want to, and I'm working on it right now. But it started me on making what I think is a great idea for a styling language. I present KSS (pronounced "kiss"), the much more programmable version of CSS (desktop development only).

        /* It has all the 'normal' styling stuff, like you'd expect. */
        elementName {
            /* This is a styling field. */
            font-color: rgb(0,0,0),
            font: "Calibri",
            font-size: 12px
        }

        .idName {
            color: rgba(255,255,255,0)
        }

        uiName : onMouse1Click {
            color: vec3(0,0,0)
        }

    BUT it also has some cool things. I've taken the liberty of adding variables, templates (style inheritance), hierarchy selection, events (as objects), native function calls, and in-file functions.

        var defaultColor: rgb(0,0,0)
        var number : 1.0
        /* Types include: rgb, rgba, vec2, vec3, vec4, number, string, true, false, none (null), this */

        fun styleSomeStuff {
            .buttons {
                color: defaultColor,
                text-color: rgb(number,255,number)
            }
        }

        template buttonStyle {
            color: rgb(255,255,0)
        }

        .buttons {
            use: buttonStyle, otherTemplateName  /* copies the templates' styling fields */
            color: defaultColor;
        }

        .buttons : onMouse1Click {
            /* events not assigned as a value are initialized when read from file */
            styleSomeStuff();        /* call the in-file function */
            nativeFunctionCall();    /* call a native function that's bound */
            var a : 2;               /* assign a variable, even if not initially defined */
        }

        /* Storing an event in a 'value' lets you operate on the event itself.
           It is only run when 'connected', below. */
        val ON_CLICK2 = .buttons : onMouse2Click {
            use: templateName
        }

        connect ON_CLICK2;
        disconnect ON_CLICK2;

        /* You can make a function to do the same. */
        fun connectStuff {
            connect ON_CLICK2;
        }

    But wait, you ask: what if I need to select items from a hierarchy? Surely element names and IDs couldn't be robust enough! Well:

        /* The > indicates that the next element is a 'child' of itemA in the hierarchy. */
        itemA > itemAChild : onMouse1Click {
        }

        .itemId > .itemAChild > itemAChildsChild {
        }

        /* Want to get all children of the element at hand? */
        elementName > .any {
            /* this will style all the element's children */
        }

        /* What if we want conditional styling, based on whether a variable
           or tag inside the element is or isn't something? */
        var hello : false;
        var goodbye : true;

        itemA [var hello = false, var goodbye != false] {
            /* passes */
        }

        itemA [@tagName = something] {
            /* passes if the tag equals the value you asked for */
        }

    The last things to note are event pointers, how tagging works, 'this', and the general workflow. Tagging works (basically) the same as in Unity: you write @tagName : tagValue inside a styling field to assign that element a tag that's referable in the application. You can refer to any variable assigned within the styling sheet from, say, source code on the back end. As such, being able to set a variable to 'this' (the element being styled) allows you to operate on the button currently in focus, or to set the parent of an item to another element. All elements are available as variables inside and outside the styling sheet, so an event can effectively parent an element or group of elements to a UI you specify; I'll show that in the figure below. Event pointers exist so that you can trigger an event with one UI component but affect another:

        /* The -> points to a new element, or group of elements, to style. */
        .buttons : onMouse1Click -> .buttons > childName {
            visible : false
            parent: uiName;
        }

        /* In this case, we style something named "childName" that is parented to
           anything with the id 'buttons'. Likewise, if there were an element named
           'uiName', it would parent the childName element to it. */

    Lastly, the results and workflow. I'm in the process of building my first (serious) game engine after learning OpenGL, and I'm building it with Kotlin and Python. I can bind both Kotlin and Python functions for use inside the styling sheet, and although I didn't show the layout language (because it's honestly not good), the attachments show the result after a day of work using these two UI languages I've made. It's also important to note that while KSS adopts the CSS box model, it is not a cascading layout. That's all I had to show. Essentially, right now it's a Kotlin-backed creation, but it could be made for Java specifically, C++, C#, etc. I'm planning on adding more to the language to make it even more robust. What's more, it doesn't need to be OpenGL-backed: it can use Java paint classes, SFML, and so on; I just need to standardize the API for it. I guess what I'm wondering is: if I put it out there, would anyone want to use it for desktop application development? P.S. Of course the syntax is subject to change, via suggestion. P.S.[2] The image in the middle of the editor is static, but it won't be once I put 3D scenes in the scene view.