Search the Community

Showing results for tags 'OpenGL'.



Found 17453 results

  1. Releasing is scary

    I have released my first free prototype! https://yesindiedee.itch.io/is-this-a-game How terrifying! It is strange that I have been working toward the moment of releasing something to the public for all of my adult life, and now that I have, I find it pretty scary. I have been a developer for over 20 years, and in that time I have released a grand total of 0 products.

    The Engine
    The engine is designed to be flexible with its components, but so far it uses OpenGL, OpenAL, Python (scripting), and Cg; everything else is built in.

    The Games
    When I started developing a game I had a pretty grand vision: a 3D exploration game. It was called Cavian (image attached), and yep, it was far too complex for my first release. Maybe I will go back to it one day. I took a year off after that; I had to sell most of my stuff anyway, as not releasing games isn't great for your financial situation.

    THE RELEASE
    When I came back I was determined to actually release something! I lowered my sights to a car game. It is basically finished, but unfortunately my laptop is too old to handle the deferred lighting (ThinkPad X220, Intel graphics), so I can't really test it. I'm going to wait until I can afford a better computer before releasing it. Still determined to release something, I decided to focus more on the gameplay than the graphics.

    Is This A Game?
    Now I have created an experimental prototype. It's released and everything: https://yesindiedee.itch.io/is-this-a-game So far I don't know if it even runs on another computer. Any feedback would be greatly appreciated! If you have any questions about any process in the creation of this game (design, coding, scripting, graphics, deployment) just ask, and I will try to make a post on it. Have a nice day! I have been lurking on here for ages but never really said anything... I like my cave.
  2. For those that don't know me: I am the individual whose two videos are listed under setup at https://wiki.libsdl.org/Tutorials I also run grhmedia.com, where I host the projects and code for the tutorials I have online. Recently, I received a notice from YouTube that they will be implementing their new policy on protecting video content, under which I won't be monetized until I meet their required number of viewers and views each month. Frankly, I'm pretty sick of YouTube. I put up a video, someone else learns from it and puts up another video, and because of the way YouTube does their placement they end up with more views. Even guys that clearly post false information, such as one individual who said GLEW 2.0 was broken because he didn't know how to compile it. In short, he didn't know how to modify the script he used because he didn't understand makefiles and how the changed requirements of the compiler and library needed some different flags. At the end of the month, when they implement this, I will take down the content and host it purely on my own server, and it will be a paid system and/or Patreon. I get that my videos may be a bit dry; I generally figure people are there to learn how to do something, and I'd rather not waste their time. I used to also help people for free, even those coming from the other videos. That won't be the case any more. I used to just take anyone's emails and work with them; my email is posted on the site. I don't expect to get the required number of subscribers in that time, or the increased views. Even if I did, it wouldn't take care of each recurring month. I figure this is simpler, and I don't plan on charging some sort of exorbitant fee for a monthly subscription or the like. I was thinking along the lines of a few dollars (1, 2, and 3), where the larger subscription gets you assistance with the content in the tutorials if needed that month, and maybe another fee if it is related but not directly in the content.
     The fees would serve to cut down on the number of people who ask for help, and maybe encourage some of them to actually pay attention to what is said rather than do their own thing. That actually turns out to be 90% of the issues. I spent 6 hours helping one individual last week; I must have asked him 20 times, "Did you do exactly what I said in the video?" and even pointed directly to the section. When he finally sent me a copy of what he had entered, I knew then and there he had not. I circled it and pointed out that it wasn't what I said to do in the video. I didn't tell him what was wrong or how I knew, so that he would go back and actually follow what it said to do. He then reported it worked. Yeah, no kidding: following directions works. But hey, he isn't alone, and it's part of the learning process. So the point of this isn't to be a gripe session. I'm just looking for a bit of feedback. Do you think the fees are unreasonable? Should I keep the YouTube channel and just do the fees with Patreon, or do you think locking the content to my site and requiring a subscription is a better idea? I'm just looking at the fact that it is unrealistic to think YouTube/Google will actually get stuff right, or that YouTube viewers will actually bother to start looking for more accurate videos.
  3. OpenGL glError 1282

    I get error 1282 in my code:

    sf::ContextSettings settings;
    settings.majorVersion = 4;
    settings.minorVersion = 5;
    settings.attributeFlags = settings.Core;

    sf::Window window;
    window.create(sf::VideoMode(1600, 900), "Texture Unit Rectangle", sf::Style::Close, settings);
    window.setActive(true);
    window.setVerticalSyncEnabled(true);
    glewInit();

    GLuint shaderProgram = createShaderProgram("FX/Rectangle.vss", "FX/Rectangle.fss");

    float vertex[] = {
        -0.5f,  0.5f, 0.0f,   0.0f, 0.0f,
        -0.5f, -0.5f, 0.0f,   0.0f, 1.0f,
         0.5f,  0.5f, 0.0f,   1.0f, 0.0f,
         0.5f, -0.5f, 0.0f,   1.0f, 1.0f,
    };
    GLuint indices[] = {
        0, 1, 2,
        1, 2, 3,
    };

    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertex), vertex, GL_STATIC_DRAW);
    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    glVertexAttribPointer(0, 3, GL_FLOAT, false, sizeof(float) * 5, (void*)0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, false, sizeof(float) * 5, (void*)(sizeof(float) * 3));
    glEnableVertexAttribArray(1);

    GLuint texture[2];
    glGenTextures(2, texture);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    sf::Image* imageOne = new sf::Image;
    bool isImageOneLoaded = imageOne->loadFromFile("Texture/container.jpg");
    if (isImageOneLoaded)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageOne->getSize().x, imageOne->getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageOne->getPixelsPtr());
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    delete imageOne;

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, texture[1]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    sf::Image* imageTwo = new sf::Image;
    bool isImageTwoLoaded = imageTwo->loadFromFile("Texture/awesomeface.png");
    if (isImageTwoLoaded)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageTwo->getSize().x, imageTwo->getSize().y, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageTwo->getPixelsPtr());
        glGenerateMipmap(GL_TEXTURE_2D);
    }
    delete imageTwo;

    glUniform1i(glGetUniformLocation(shaderProgram, "inTextureOne"), 0);
    glUniform1i(glGetUniformLocation(shaderProgram, "inTextureTwo"), 1);

    GLenum error = glGetError();
    std::cout << error << std::endl;

    sf::Event event;
    bool isRunning = true;
    while (isRunning)
    {
        while (window.pollEvent(event))
        {
            if (event.type == event.Closed)
            {
                isRunning = false;
            }
        }
        glClear(GL_COLOR_BUFFER_BIT);
        if (isImageOneLoaded && isImageTwoLoaded)
        {
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, texture[0]);
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, texture[1]);
            glUseProgram(shaderProgram);
        }
        glBindVertexArray(vao);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
        glBindVertexArray(0);
        window.display();
    }

    glDeleteVertexArrays(1, &vao);
    glDeleteBuffers(1, &vbo);
    glDeleteBuffers(1, &ebo);
    glDeleteProgram(shaderProgram);
    glDeleteTextures(2, texture);
    return 0;
    }

    And this is the vertex shader:

    #version 450 core
    layout(location = 0) in vec3 inPos;
    layout(location = 1) in vec2 inTexCoord;
    out vec2 TexCoord;
    void main()
    {
        gl_Position = vec4(inPos, 1.0);
        TexCoord = inTexCoord;
    }

    And the fragment shader:

    #version 450 core
    in vec2 TexCoord;
    uniform sampler2D inTextureOne;
    uniform sampler2D inTextureTwo;
    out vec4 FragmentColor;
    void main()
    {
        FragmentColor = mix(texture(inTextureOne, TexCoord), texture(inTextureTwo, TexCoord), 0.2);
    }

    I was expecting awesomeface.png on top of container.jpg.
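Since the question above hinges on decoding the raw number printed by glGetError, here is a minimal, context-free sketch that maps the decimal codes OpenGL returns to their names (the numeric values are fixed by the GL specification; the helper name glErrorName is my own). Error 1282 is 0x0502, i.e. GL_INVALID_OPERATION; one common cause of it is calling glUniform* while no program is bound with glUseProgram.

```cpp
#include <cassert>
#include <cstring>

// Error codes as fixed by the OpenGL specification
// (hex: GL_INVALID_ENUM = 0x0500, GL_INVALID_VALUE = 0x0501, ...).
const char* glErrorName(unsigned int code) {
    switch (code) {
        case 0:    return "GL_NO_ERROR";
        case 1280: return "GL_INVALID_ENUM";
        case 1281: return "GL_INVALID_VALUE";
        case 1282: return "GL_INVALID_OPERATION";
        case 1283: return "GL_STACK_OVERFLOW";
        case 1284: return "GL_STACK_UNDERFLOW";
        case 1285: return "GL_OUT_OF_MEMORY";
        case 1286: return "GL_INVALID_FRAMEBUFFER_OPERATION";
        default:   return "UNKNOWN";
    }
}
```

Printing glErrorName(glGetError()) instead of the bare integer makes logs like the one above self-explanatory. Note also that glGetError should be called in a loop until it returns 0, since errors accumulate in a queue.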
  4. Hi, I have an OpenGL application but without the possibility to write my own shaders. I need to perform a small VS modification - is it possible to do this in an alternative way? Are there apps or driver modifications which will catch the shader sent to the GPU and override it?
  5. Is a sync needed to read texture contents after accessing a texture image in a compute shader? My simple code is as below:

    glUseProgram(program.get());
    glBindImageTexture(0, texture[0], 0, GL_FALSE, 3, GL_READ_ONLY, GL_R32UI);
    glBindImageTexture(1, texture[1], 0, GL_FALSE, 4, GL_WRITE_ONLY, GL_R32UI);
    glDispatchCompute(1, 1, 1);
    // Is a sync needed here?
    glUseProgram(0);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, framebuffer);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, texture[1], 0);
    glReadPixels(0, 0, kWidth, kHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, outputValues);

    The compute shader is very simple: it imageLoads content from texture[0] and imageStores content to texture[1]. Is a sync needed after glDispatchCompute?
  6. Transforming Angular Velocity

    My question: is it possible to transform multiple angular velocities so that they can be reinserted as one? My research is below:

    // This works
    quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
    quat quaternion2 = GEMultiplyQuaternions(quaternion1, GEQuaternionFromAngleRadians(angleRadiansVector2));
    quat quaternion3 = GEMultiplyQuaternions(quaternion2, GEQuaternionFromAngleRadians(angleRadiansVector3));
    glMultMatrixf(GEMat4FromQuaternion(quaternion3).array);

    // The first two work fine but not the third. Why?
    quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
    vec3 vector1 = GETransformQuaternionAndVector(quaternion1, angularVelocity1);
    quat quaternion2 = GEQuaternionFromAngleRadians(angleRadiansVector2);
    vec3 vector2 = GETransformQuaternionAndVector(quaternion2, angularVelocity2);
    // This doesn't work
    //quat quaternion3 = GEQuaternionFromAngleRadians(angleRadiansVector3);
    //vec3 vector3 = GETransformQuaternionAndVector(quaternion3, angularVelocity3);
    vec3 angleVelocity = GEAddVectors(vector1, vector2);
    // Does not work: vec3 angleVelocity = GEAddVectors(vector1, GEAddVectors(vector2, vector3));
    static vec3 angleRadiansVector;
    vec3 angularAcceleration = GESetVector(0.0, 0.0, 0.0);

    // Sending it through one angular velocity later in my motion engine
    angleVelocity = GEAddVectors(angleVelocity, GEMultiplyVectorAndScalar(angularAcceleration, timeStep));
    angleRadiansVector = GEAddVectors(angleRadiansVector, GEMultiplyVectorAndScalar(angleVelocity, timeStep));
    glMultMatrixf(GEMat4FromEulerAngle(angleRadiansVector).array);

    Also, how do I combine multiple angularAcceleration variables? Is there an easier way to transform the angular values?
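One way to read the question above: angular velocities are plain vectors, so they can be summed once they are expressed in the same frame, and rotating each one into a common frame is exactly a quaternion-vector transform. Below is a small self-contained sketch of that idea; the Vec3/Quat types and function names are mine, not the GE* library from the post.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Quat { double w, x, y, z; };  // assumed unit length

// Build a quaternion from a unit-length axis and an angle in radians.
Quat fromAxisAngle(Vec3 axis, double angle) {
    double s = std::sin(angle * 0.5);
    return { std::cos(angle * 0.5), axis.x * s, axis.y * s, axis.z * s };
}

// Rotate v by q using v' = v + 2w(u x v) + 2 u x (u x v), where u = (q.x, q.y, q.z).
Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 u{ q.x, q.y, q.z };
    Vec3 t{ 2 * (u.y * v.z - u.z * v.y),
            2 * (u.z * v.x - u.x * v.z),
            2 * (u.x * v.y - u.y * v.x) };
    return { v.x + q.w * t.x + (u.y * t.z - u.z * t.y),
             v.y + q.w * t.y + (u.z * t.x - u.x * t.z),
             v.z + q.w * t.z + (u.x * t.y - u.y * t.x) };
}

// Rotate each local angular velocity into the common frame, then sum them.
Vec3 combineAngularVelocities(const std::vector<Quat>& frames,
                              const std::vector<Vec3>& velocities) {
    Vec3 sum{ 0, 0, 0 };
    for (size_t i = 0; i < frames.size(); ++i) {
        Vec3 w = rotate(frames[i], velocities[i]);
        sum.x += w.x; sum.y += w.y; sum.z += w.z;
    }
    return sum;
}
```

Once every velocity is in one frame, adding a third or fourth term is the same vector addition as the two-term case, so if adding the third term breaks things in the original code, the quaternion feeding it (or the frame it is expressed in) is the first thing to check.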
  7. In hopefully the first blog of a (fairly!) regular series I would like to introduce the game I've been working on for a while now. With a working title of COG, the game is a 3D action adventure game with a Steampunk theme and strong platformer elements. This is very much an indie project ... the coding of the home grown engine is done by me, the 3D modeling is (largely) done by me, the level design is done by me and ... well you get the idea! About the only support I'm getting is from my kids who are doing the 'QA' (and who are super critical when something doesn't work!). So where am I with this project? The game engine itself is pretty much 'feature complete' and the game playable even if there is still a lot to do in optimising and polishing the different components. Most of the standard platformer elements (conveyor belts, lifts, moving platforms, spike traps, switches, etc, etc) are already working and available through the level editor with the 'levels' (more accurately map sections) loading on the fly to create a continuous, seamless world. Basic monster code is already in place with it being pretty easy to add new monster types as needed for the game. The lighting and particle systems are currently being reintegrated into the game (the code is complete, but was deactivated while creating the first builds) so hopefully the landscape will soon be full of splashing water and rising steam and smoke (pretty important to have rising steam in a steampunk game!). The camera system is nearing completion - it allows the player to have full control of the angle of the camera with objects close to the camera position fading out to improve player visibility (this is the last bit being worked on here with the drawing order of semi-transparent buildings being tightened up and optimised). 
Looking at the content, some placeholder material is currently being used, with the worst offender being the player's character, which is currently a (not very steampunk) kiwi rather than what will eventually be a small robot. In addition, while the 3D models are almost all self-created, the textures used are a mix of those I've created myself, those I've purchased and a handful of temporary placeholder textures. These placeholders have allowed for rapid prototyping (especially as creating textures is a slow and painful process for me), but will obviously need to be replaced in the near future. Fortunately this shouldn't be a huge task, as I'm using texture atlases that allow for easy swapping in and out of new textures ... there should be relatively few cases where it may be necessary to tweak the texture mapping. For information, the screenshots and video contain this placeholder content. As a result of all this there is now quite a nice library of 'Lego bricks' available, including a range of basic building blocks (platforms, towers, walls, pipes, etc.), more exotic props (the already mentioned spike traps, barriers, lifts and switches, as well as the mandatory barrels and movable crates), plants, rocks and other landscape elements. Finally, there is a series of animated models including waterwheels, steam engines, pistons and so on, whose clanking and whirring will hopefully give the impression of a living world around the player. So things are going pretty well at the moment (especially given the limits on the time I can devote to this) and COG seems to be getting to a stage where it is moving from a process of writing code to one where the current task is to complete the game - a pretty rare thing for me! Please let me know what you think of the screenshots and video - one of the main reasons for starting this blog is to start getting some feedback to help shape the remainder of the game ... all constructive comments would be very much appreciated.
Also, if there is interest in a particular aspect of the game development process, I would be happy to share more details in a later blog entry. Thank you for your time!
  8. I have the code below in both my vertex and fragment shaders, but when I request glGetUniformLocation("Lights[0].diffuse") or "Lights[0].attenuation", it returns -1. It only gives me a valid uniform location if I actually use the diffuse/attenuation variables in the VERTEX shader. Because I use position in the vertex shader, it always returns a valid uniform location. I've read that I can share uniforms across both vertex and fragment shaders, but I'm confused about what this even compiles to if that is the case.

    #define NUM_LIGHTS 2
    struct Light
    {
        vec3 position;
        vec3 diffuse;
        float attenuation;
    };
    uniform Light Lights[NUM_LIGHTS];
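For context on the question above: a GLSL linker may report any uniform that does not contribute to the linked program's output as inactive, and glGetUniformLocation then returns -1 for it; with structs and arrays this happens per member and per element. A typical defensive pattern is to treat -1 as "silently skip" rather than as an error. A sketch of that guard follows, using a stand-in setter function instead of the real glUniform1i so it runs without a GL context (the stub and helper names are mine):

```cpp
#include <cassert>

// Stand-in for glUniform1i so the pattern is demonstrable without a GL context.
static int g_setterCalls = 0;
void stubUniform1i(int /*location*/, int /*value*/) { ++g_setterCalls; }

// Guarded setter: a location of -1 means the uniform was optimized out; skip it.
bool setUniform1iGuarded(int location, int value, void (*setter)(int, int)) {
    if (location == -1) return false;  // inactive uniform, not necessarily a bug
    setter(location, value);
    return true;
}
```

In real code the setter would be glUniform1i and the location would come from glGetUniformLocation(program, "Lights[0].diffuse"); glGetActiveUniform can be used to list which uniforms actually survived optimization.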
  9. Hello, I have a Bachelor's project on the topic "Implement a 3D boids algorithm in OpenGL". All the OpenGL parts work fine for me - all the rendering etc. But when I started implementing the boids algorithm it got worse and worse. I read an article (http://natureofcode.com/book/chapter-6-autonomous-agents/) and took inspiration from other code (here: https://github.com/jyanar/Boids/tree/master/src), but it still doesn't work like in the tutorials and videos. For example, the main problem: when I apply cohesion (one of the three main laws of boids) it creates some "cycling knot". Second, when one flock touches another, the coordinates change erratically or the boids respawn at the origin (x: 0, y: 0, z: 0). Just some strange things. I have followed many tutorials and tried changing everything, but it isn't as smooth and lag-free as in the other videos. I really need your help. My code (Optimalizing branch): https://github.com/pr033r/BachelorProject/tree/Optimalizing Exe file (if you want to look) and models folder (for those who will download the sources): http://leteckaposta.cz/367190436 Thanks for any help...
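Since cohesion is where the post above goes wrong, here is a minimal, GL-free sketch of the rule along the lines of the Nature of Code chapter linked there: steer toward the average neighbor position, and cap the magnitude of the steering force. Uncapped cohesion forces, or using the centroid itself as a force instead of the offset toward it, are a frequent cause of orbiting behavior like the "cycling knot" described. Types and names here are mine, not from the linked repository:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

// Cohesion: steer toward the centroid of the neighbors, capped at maxForce.
V3 cohesion(const V3& self, const std::vector<V3>& neighbors, double maxForce) {
    if (neighbors.empty()) return {0, 0, 0};
    V3 c{0, 0, 0};
    for (const V3& n : neighbors) { c.x += n.x; c.y += n.y; c.z += n.z; }
    double inv = 1.0 / neighbors.size();
    // The desired direction is the OFFSET from self to the centroid, not the centroid.
    V3 d{ c.x * inv - self.x, c.y * inv - self.y, c.z * inv - self.z };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len < 1e-12) return {0, 0, 0};
    double s = maxForce / len;  // normalize, then scale to the force cap
    return { d.x * s, d.y * s, d.z * s };
}
```

This is simplified; the full Reynolds version subtracts the boid's current velocity from the desired vector before capping. Separation and alignment follow the same shape (compute a desired vector, normalize, cap), and the three capped forces are then summed into the boid's acceleration.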
  10. Hey, I recently upgraded to NSight 5.4.0.17240 (Visual Studio) and OpenGL debugging just crashes NSight. Has anyone had a similar experience? I would post on the NVIDIA DevForums, but they haven't answered any of my previous questions... Thanks
  11. I am currently trying to implement shadow mapping in my project. Although I can render my depth map to the screen and it looks okay, when I sample it with shadowCoords there is no shadow. Here is my light-space matrix calculation:

    mat4x4 lightViewMatrix;
    vec3 sun_pos = {SUN_OFFSET * the_sun->direction[0], SUN_OFFSET * the_sun->direction[1], SUN_OFFSET * the_sun->direction[2]};
    mat4x4_look_at(lightViewMatrix, sun_pos, player->pos, up);
    mat4x4_mul(lightSpaceMatrix, lightProjMatrix, lightViewMatrix);

    I will tweak the values for the size and frustum of the shadow map, but for now I just want to draw shadows around the player position. the_sun->direction is a normalized vector, so I multiply it by a constant to get the position. player->pos is the camera position in world space. The light projection matrix is calculated like this:

    mat4x4_ortho(lightProjMatrix, -SHADOW_FAR, SHADOW_FAR, -SHADOW_FAR, SHADOW_FAR, NEAR, SHADOW_FAR);

    Shadow vertex shader:

    uniform mat4 light_space_matrix;
    void main()
    {
        gl_Position = light_space_matrix * transfMatrix * vec4(position, 1.0f);
    }

    Shadow fragment shader:

    out float fragDepth;
    void main()
    {
        fragDepth = gl_FragCoord.z;
    }

    I am using deferred rendering, so I have all my world positions in the g_positions buffer. My shadow calculation in the deferred fragment shader:

    float get_shadow_fac(vec4 light_space_pos)
    {
        vec3 shadow_coords = light_space_pos.xyz / light_space_pos.w;
        shadow_coords = shadow_coords * 0.5 + 0.5;
        float closest_depth = texture(shadow_map, shadow_coords.xy).r;
        float current_depth = shadow_coords.z;
        float shadow_fac = 1.0;
        if (closest_depth < current_depth)
            shadow_fac = 0.5;
        return shadow_fac;
    }

    I call the function like this:

    get_shadow_fac(light_space_matrix * vec4(position, 1.0));

    where position is the value I got from sampling the g_position buffer. Here is my depth texture (I know it will produce low-quality shadows, but I just want to get it working for now): https://i.stack.imgur.com/T43aK.jpg (sorry about the compression; the black smudges are trees).

    EDIT: Depth texture attachment:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, fbo->width, fbo->height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, fbo->depthTexture, 0);
  12. Getting OpenGL to work with Xcode (on a MacBook) is a wrestling match, but I've managed to get everything to work except the image loading library SOIL2. Has anyone had success with this? I have done the following steps:
      • compiled SOIL2 with premake and make on the Mac
      • added all the header files to /usr/local/include/SOIL2
      • added a reference to /usr/local/include to Header Search Paths in Build Settings
      • added a reference to the directory containing the libsoil2-debug.a file to "Link Binary with Libraries" under Build Phases
      • added #include <SOIL2/SOIL2.h> to the program
    The error message I get is from the linker: it can't find the soil2 library. I also tried adding a path to the .a file to the Library Search Paths, but then I get lots more errors. After a couple of days going in circles, I'm really stuck. I had no problem getting SOIL2 working on the PC under VS. Any ideas for Xcode?
  13. The title is vague, so I'll explain. I was frustrated with UI dev (in general), and even more so when working with an application that needed OpenGL with an embedded UI. What if I wanted to make a full application with OpenGL? (Custom game engine, anyone?) Well, I did want to, and I'm working on it right now. But it started me on making what I think is a great idea for a styling language; I present KSS (pron. Kiss), the multitudes-more-programmable version of CSS (only for desktop dev).

    /* It has all the 'normal' styling stuff, like you'd expect. */
    elementName {
        /* This is a styling field. */
        font-color: rgb(0,0,0),
        font: "Calibri",
        font-size: 12px
    }
    .idName {
        color: rgba(255,255,255,0)
    }
    uiName : onMouse1Click {
        color: vec3(0,0,0)
    }

    BUT it also has some cool things. I've taken the liberty to add variables, templates (style inheritance), hierarchy selection, events (as objects), native function calls, and in-file functions.

    var defaultColor: rgb(0,0,0)
    var number : 1.0
    /* Types include: rgb, rgba, vec2, vec3, vec4, number, string, true, false, none (null), this */
    fun styleSomeStuff {
        .buttons {
            color: defaultColor,
            text-color: rgb(number,255,number)
        }
    }
    template buttonStyle {
        color: rgb(255,255,0)
    }
    .buttons {
        use: buttonStyle, otherTemplateName /* copies the templates' styling fields */
        color: defaultColor;
    }
    .buttons : onMouse1Click {
        /* events not assigned as a value are initialized when read from file */
        styleSomeStuff();      /* call the in-file function */
        nativeFunctionCall();  /* call a native function that's bound */
        var a : 2;             /* assign a variable, even if not initially defined */
    }
    /* Storing an event in a 'value' will allow you to operate on the event itself. It is only run when 'connected', below. */
    val ON_CLICK2 = .buttons : onMouse2Click {
        use: templateName
    }
    connect ON_CLICK2;
    disconnect ON_CLICK2;
    /* you can make a function to do the same */
    fun connectStuff {
        connect ON_CLICK2;
    }

    But wait, you ask... what if I need to select items from a hierarchy?
    Surely using element names and ids couldn't be robust enough! Well:

    /* We use the > to indicate that the next element is a 'child' of itemA in the hierarchy. */
    itemA > itemAChild : onMouse1Click {
    }
    .itemId > .itemAChild > itemAChildsChild {
    }
    /* Want to get all children of the element at hand? */
    elementName > .any {
        /* this will style all the element's children */
    }
    /* What if we want to use conditional styling? Like if a variable or tag inside the element is or isn't something? */
    var hello : false;
    var goodbye : true;
    itemA [var hello = false, var goodbye != false] {
        /* passes */
    }
    itemA [@tagName = something] {
        /* passes if the tag is equal to whatever value you asked for */
    }

    The last things to note are event pointers, how tagging works, 'this', and the general workflow. Tagging works (basically) the same as in Unity: you say @tagName : tagValue inside a styling field to assign that element a tag that's referable in the application. You have the ability to refer to any variable you assign within the styling sheet from, say, source code. The backend. As such, being able to set a variable to 'this' (the element being styled) allows you to operate on buttons that are currently in focus, or set the parent of an item to another element. All elements are available to use as variables inside and outside the styling sheet, so an event can effectively parent an element or group of elements to a UI you specify. I'll show that in a figure below. Event pointers are so that you can trigger an event with one UI component but affect another:

    /* We use the -> to point to a new element, or group of elements, to style. */
    .buttons : onMouse1Click -> .buttons > childName {
        visible : false
        parent: uiName;
    }
    /* In this case, we style something with the name "childName" that is parented to anything with the id 'buttons'. Likewise, if there were an element with the name 'uiName', it would parent the childName element to it. */

    Lastly: the results/workflow.
    I'm in the process of building my first (serious) game engine after learning OpenGL, and I'm building it with Kotlin and Python. I can bind both Kotlin and Python functions for use inside the styling sheet, and although I didn't show the layout language (because it's honestly not good), this is the result after a day of work so far, while using these two UI languages I've made (in the attachments). It's also important to note that while it does adopt the CSS box model, it is not a cascading layout. That's all I had to show. Essentially, right now it's a Kotlin-backed creation, but it can be made for Java specifically, C++, C#, etc. I'm planning on adding more to the language to make it even more robust. What's more, it doesn't need to be OpenGL-backed; it can use Java paint classes, SFML, etc. I just need to standardize the API for it. I guess what I'm wondering is: if I put it out there, would anyone want to use it for desktop application dev? P.S. Of course the syntax is subject to change, via suggestion. P.S.[2] The image in the middle of the editor is static, but won't be when I put 3D scenes in the scene view.
  14. We've just released all of the source code for the NeHe OpenGL lessons on our GitHub page at https://github.com/gamedev-net/nehe-opengl. Code for 43 total platform, configuration, and language combinations is included. Now operated by GameDev.net, NeHe is located at http://nehe.gamedev.net where it has been a valuable resource for developers wanting to learn OpenGL and graphics programming.
  16. Hi, I'm learning about creating 2D games with SDL, but I want to learn OpenGL too, because it can give much higher-quality graphics, and SDL also supports the use of OpenGL at the same time. I want to keep making 2D games for a while because I feel that I still have a lot to learn about programming itself, so I don't want to jump into 3D this early (I'll learn 3D in school later, btw). The game ideas that I'm thinking about at the moment would require a graphics quality like this: https://www.youtube.com/watch?v=fsbECSpwtig How hard is it to get to this level of OpenGL for a complete beginner, and could you recommend some useful sources (mostly books) that only deal with 2D problems?
  17. Hello there. I'm not really the blogging type; this is my first ever blog, so I'll do my best. I've been trying to showcase my from-scratch video game engine in different professional forums, with scant mixed results. I'm currently a happily employed 3D graphics programmer in the medical device field who also loves creating graphics programs as a side hobby. It's been my experience that most people who aren't graphics programmers simply don't appreciate how much learning goes into being able to code even a small fraction of this from scratch. Most viewers will simply compare this to the most amazing video game they've ever seen (developed by teams of engineers) and dismiss it without considering that I'm a one-man show. What I'm hoping to accomplish with this: I'm not totally sure. I spent a lot of my own personal time creating this from the ground up using only my own code (without downloading any existing make-my-game-for-me SDKs), so I figured it's worth showing off.

    My design:
    1. Octree for scene/game management (optimized collision detection and scene rendering path) from scratch in C++.
    2. All math (linear algebra, trig, quaternions, vectors), containers, sorting, and searching from scratch in C++.
    3. Sound system (DirectSound, DirectMusic, OpenAL) from scratch in C++.
    4. Latest OpenGL 4.0-and-above mechanisms (via GLEW on win32/win64) (GLSL 4.4). Very heavy usage of GLSL.

    Unusual/skilled special effects/features captured in the video worth mentioning:
    1. Volumetric explosions via a vertex-shader-deformed sphere with a shock-wave animation, further enhanced with bloom post-processing (via compute shader).
    2. Lens flare generator, which projects variable-edge polygon shapes along a screen-space vector from the center of the screen to the light position (again in screen space; the size and number of flares are based on the intensity and size of the light source).
    3. Real-time animated procedural light-ray texture (via fragment shader) additively blended with the volumetric explosions.
    4. Active camouflage (aka Predator camouflage).
    5. Vibrating shield bubble (with the same sphere data as the volumetric explosion) accomplished using a technique very similar to active camouflage.
    6. Exploding mesh: When I first started creating this, I started out using the fixed-function pipeline (years ago). I used one vertex buffer, one optimized index buffer, then another special unoptimized index buffer that traces through all the geometry one volume box at a time, and each spaceship "piece" was represented with a starting and ending index offset into this unoptimized index buffer. Unfortunately, the lower the poly resolution, the more obvious it is what I was doing; as a result, when the ship explodes you see the triangle jaggies on the mesh edges. My engine is currently somewhat married to this design, which is why I haven't redesigned that part yet (it's on the list, but priorities). If I were to design this over again, I'd simply represent each piece with a different transform depending upon whether the interpolated object-space vertex (input to the pixel shader) was in front of or behind an arbitrary "breaking plane". If the position was beyond the breaking-point boundary planes, discard the fragment. This way, I can use one vertex buffer plus one optimized index buffer and achieve better-looking results with faster code.
  18. Hey guys, I hope this is an appropriate question. I've tried to design a little part of my rendering layer, oriented on bgfx. Question: Is this a good design, and what changes would you suggest?
vboID CreateDynamicVertexBuffer(VertexBatch, Duration);
vboID CreateStaticVertexBuffer(VertexBatch, Duration);
void UpdateVertexBuffer(vboID);
void DeleteVertexBuffer(vboID);
- My interface works with IDs that are structs, for type safety. An ID maps to something the corresponding method(s) can interpret; it doesn't necessarily map to OpenGL handles etc. A Create* method yields a valid ID. If the creation fails, an exception is thrown.
- VertexBatch is responsible for capturing the vertex data. It provides convenient OpenGL immediate-mode-style syntax. It has no virtual methods because it needs to be fast.
- Duration is an enum that lets you specify how long the data will remain in GPU memory. The reason is that I can batch static level geometry together in one big vertex buffer and just return the offset into this buffer. Other types of memory could be handled differently, but I don't know how... Question: How do you normally handle this scenario, and what about data that changes frequently?
- A dynamic vertex buffer cannot be updated; a static vertex buffer can.
- If a vboID is deleted, I may or may not destroy the underlying resource, depending on the type of memory (frequently changing etc.).
If this question somehow doesn't belong here, that's okay. Thank you, Jan
  19. I'm currently learning OpenGL and started to make my own 2D engine. So far I've managed to render a rectangle made out of 2 triangles with `glDrawElements()` and apply a custom shader to it. This is how my code looks so far:

#include <all>

int main() {
    Window window("OpenGL", 800, 600);
    Shader shader1("./file.shader");

    float vt[8] = { -0.5f, -0.5f, 0.5f, -0.5f, 0.5f, 0.5f, -0.5f, 0.5f };
    const GLuint indices[6] = { 0, 1, 2, 2, 3, 0 };

    GLuint vao, vbo, ebo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vt), vt, GL_STATIC_DRAW); // note: pass vt here, not an undeclared "vertices"
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    while (window.isOpen()) {
        window.clear(0.0, 0.5, 1.0);
        shader1.use();
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL);
        window.update();
    }

    glDeleteBuffers(1, &vbo);
    glDeleteBuffers(1, &ebo);
    glDeleteVertexArrays(1, &vao);
    return 0;
}

There are multiple ways to do it, but I've heard that drawing and switching shaders one by one is pretty expensive. I'm looking to preload all the data (vertices, shaders) into one buffer, or whatever is more efficient, and when it's all ready, draw it in one call, similar to a batch. Thanks!
  20. Hi, please read the Linking Vertex Attributes section of this page. I've read the term "a vertex attribute" many times on this page, but I'm not sure what it really means or what the author meant by it. If possible, please tell me the exact meaning the author intended by "vertex attributes".
  21. I'm looking to render multiple objects (rectangles) with different shaders. So far I've managed to render one rectangle made out of 2 triangles and apply a shader to it, but when it comes to rendering another, I get stuck. I searched for documentation or anything that could help me, but everything shows how to render only 1 object. Any tips or help is highly appreciated, thanks! Here's my code for rendering one object with a shader:

#define GLEW_STATIC
#include <stdio.h>
#include <iostream> // needed for std::cerr
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "window.h"

#define GLSL(src) "#version 330 core\n" #src

// #define ASSERT(expression, msg) if(expression) {fprintf(stderr, "Error on line %d: %s\n", __LINE__, msg);return -1;}

int main() {
    // Init GLFW
    if (glfwInit() != GL_TRUE) {
        std::cerr << "Failed to initialize GLFW" << std::endl;
        exit(EXIT_FAILURE);
    }

    // Create a rendering window with an OpenGL 3.2 core context
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
    glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

    // assign window pointer
    GLFWwindow *window = glfwCreateWindow(800, 600, "OpenGL", NULL, NULL);
    glfwMakeContextCurrent(window);

    // Init GLEW
    glewExperimental = GL_TRUE;
    if (glewInit() != GLEW_OK) {
        std::cerr << "Failed to initialize GLEW" << std::endl;
        exit(EXIT_FAILURE);
    }

    // ----------------------------- RESOURCES ----------------------------- //
    const GLfloat positions[8] = {
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.5f,  0.5f,
        -0.5f,  0.5f,
    };
    const GLuint elements[6] = { 0, 1, 2, 2, 3, 0 };

    // Create Vertex Array Object
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // Create a Vertex Buffer Object and copy the vertex data to it
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

    // Specify the layout of the vertex data
    glEnableVertexAttribArray(0); // layout(location = 0)
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

    // Create an Element Buffer Object and copy the element data to it
    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);

    // Create and compile the vertex shader
    const GLchar *vertexSource = GLSL(
        layout(location = 0) in vec2 position;
        void main() {
            gl_Position = vec4(position, 0.0, 1.0);
        }
    );
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexSource, NULL);
    glCompileShader(vertexShader);

    // Create and compile the fragment shader
    // (gl_FragColor must not be redeclared in the core profile; use a custom output)
    const char *fragmentSource = GLSL(
        out vec4 fragColor;
        uniform vec2 u_resolution;
        void main() {
            vec2 pos = gl_FragCoord.xy / u_resolution;
            fragColor = vec4(1.0);
        }
    );
    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
    glCompileShader(fragmentShader);

    // Link the vertex and fragment shader into a shader program
    GLuint shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertexShader);
    glAttachShader(shaderProgram, fragmentShader);
    glLinkProgram(shaderProgram);
    glUseProgram(shaderProgram);

    // Get the uniform's location by name and set its value
    // (the name must match the shader exactly: "u_resolution", not "u_Resolution")
    GLint uRes = glGetUniformLocation(shaderProgram, "u_resolution");
    glUniform2f(uRes, 800.0f, 600.0f);

    // ---------------------------- RENDERING ------------------------------ //
    while (!glfwWindowShouldClose(window)) {
        // Set the clear color, then clear the screen
        glClearColor(0.0f, 0.5f, 1.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        // Draw a rectangle made of 2 triangles -> 6 indices
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, NULL);

        // Swap buffers and poll window events
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    // ---------------------------- CLEANUP ------------------------------ //
    // Delete allocated resources
    glDeleteProgram(shaderProgram);
    glDeleteShader(fragmentShader);
    glDeleteShader(vertexShader);
    glDeleteBuffers(1, &ebo);
    glDeleteBuffers(1, &vbo);
    glDeleteVertexArrays(1, &vao);
    return 0;
}
  22. Hi guys, I'm having a little problem fixing a bug in my program since I multi-threaded it. The app is a little video converter I wrote for fun. To help you understand the problem, I'll first explain how the program is made. I'm using Delphi for the GUI/Windows part of the code, then I'm loading a C++ DLL for the video conversion. The problem is not related to the video conversion, but to OpenGL only. The code works like this:

DWORD WINAPI JobThread(void *params)
{
    for each files
    {
        ...
        _ConvertVideo(input_name, output_name);
    }
}

void EXP_FUNC _ConvertVideo(char *input_fname, char *output_fname)
{
    // Note that I'm re-initializing and cleaning up OpenGL each time this function is called...
    CGLEngine GLEngine;
    ...
    // Initialize OpenGL
    GLEngine.Initialize(render_wnd);
    GLEngine.CreateTexture(dst_width, dst_height, 4);

    // decode the video and render the frames...
    for each frames
    {
        ...
        GLEngine.UpdateTexture(pY, pU, pV);
        GLEngine.Render();
    }

cleanup:
    GLEngine.DeleteTexture();
    GLEngine.Shutdown();

    // video cleanup code...
}

With a single thread, everything works fine. The problem arises when I start the thread a second time: nothing gets rendered, but the encoding works fine. For example, if I start the thread with 3 files to process, all of them render fine, but if I start the thread again (with the same batch of files or not...), OpenGL fails to render anything. I'm pretty sure it has something to do with the rendering context (or maybe the window DC?). Here's a snippet of my OpenGL class:

bool CGLEngine::Initialize(HWND hWnd)
{
    hDC = GetDC(hWnd);
    if (!SetupPixelFormatDescriptor(hDC)) {
        ReleaseDC(hWnd, hDC);
        return false;
    }
    hRC = wglCreateContext(hDC);
    wglMakeCurrent(hDC, hRC);
    // more code ...
    return true;
}

void CGLEngine::Shutdown()
{
    // some code...
    if (hRC) { wglDeleteContext(hRC); }
    if (hDC) { ReleaseDC(hWnd, hDC); }
    hDC = hRC = NULL;
}

The full source code is available here. The most relevant files are:
- OpenGL class (header / source)
- Main code (header / source)
Thanks in advance if anyone can help me.
  23. I've started building a small library that can render a pie menu GUI in legacy OpenGL, planning to add some traditional elements of course. Its interface is similar to something you'd see in an IMGUI. It's written in C. Early version of the library. I'd really love to hear anyone's thoughts on this. Any suggestions on what features you'd want to see in a library like this? Thanks in advance!
  24. I have this 2D game which currently eats up to 200k draw calls per frame. The performance is acceptable, but I want a lot more than that. I need to batch my sprite drawing, but I'm not sure what's the best way in OpenGL 3.3 (to keep compatibility with older machines). Each individual sprite moves independently almost every frame, and there is a variety of textures and animations. What's the fastest way to render a lot of dynamic sprites? Should I map all my data to the GPU and update it all the time? Should I set up my data in RAM and send it to the GPU all at once? Should I use one draw call per sprite and let the matrices apply the transformations, or should I compute the transformations into a world VBO on the CPU so that everything can be rendered in a single draw call?
  25. Hi! I've recently started with OpenGL and just managed to write my first shader using the Phong model. I want to learn more, but I don't know where to begin. Does anyone know of good articles or algorithms to start with? Thanks in advance.