

Popular Content

Showing content with the highest reputation on 10/10/19 in all areas

  1. 3 points
    Challenge: complete! It's basically a complete game, even though it's short. Normally I make the levels way too difficult and have to tone them down, but this time I think they ended up about right. They aren't very difficult, just about right for the first three levels of a game. My project went mostly according to plan. I had planned to be done in six weeks, but it took me seven. Part of that was getting used to the level-building tools. Part of it was forgetting to add some things without which the game wouldn't work. Another part was down to balancing the mobs. Thankfully there was only one minor packaging error, which took about five minutes to clear up. In the last challenge I did, baking the project was almost catastrophic and I had to roll back to a previous version. There are only three levels in the game, and one is a rip-off of the first level of classic Doom. It turns out I need to work more on my own level-building skills, because I can tell there is a big difference in design quality between the Doom level and my other two. It was a good learning experience for me. Well, it's done, and here it is:
  2. 2 points
    Way back I went over how to do this using GLUT. All you need is the elapsed time in order to have a working time step: 'glutGet(GLUT_ELAPSED_TIME);'. That said, GLUT is not the proper tool here and isn't good for anyone requiring accuracy, which is why it is no longer recommended these days. Phil has been told countless times to drop GLUT, as there are better options out there, e.g. GLFW, SDL2, SFML. You're free to type out the answer in code or use any other method to explain it to Phil better.
  3. 2 points
    Compilation depends on many circumstances. First the compiler has to handle your preprocessor directives; these are includes, but also anything conditional you define with a # in front. The preprocessing unit has to resolve all of these statements from top to bottom before anything else can take place. Every include will be processed: the file will be loaded and added to the process too. After the conditionals are resolved and the preprocessor knows what code to push to the compiler, macro replacement takes place. Any macro definition (a define with additional arguments) is resolved recursively wherever you used it in code. It is truly recursive, until either the preprocessor doesn't find any more defined identifiers in the macro code, or the macro calls itself, at which point the preprocessor stops. These steps may happen at the same time; this is preprocessor-dependent. The preprocessor I wrote in C# to detect dependencies between C++ files does all of this on the fly, for example. Templates are resolved (specific code is generated for each template for each different set of arguments passed to it, which is why templates may cause code bloat), and then the code is finally pushed to the compilation unit. So to answer your question, it depends: how many include files do you have, how many and how complex are the macros you use, how many templates do you use with different arguments? Did you set the include guards correctly (so a file isn't included twice once it has already been processed in this compilation unit)? Did you include more files than necessary? Could a simple forward declaration be used instead of the whole header? By the way, 40 seconds is nothing. Huge projects like game engines (Unreal, for example) use so-called "unity files", where everything is included at once in one file. This is an attempt to reduce the huge build times that may otherwise occur; our Unreal project, for example, took more than 10 minutes to compile before we toggled it to generate a unity file.
What I like to do in such cases is to have our custom build tool generate a dependency graph of each include and where it has been used. This not only helps avoid circular dependencies and helps modularize the project, but also shows unnecessary include directives that can cause much higher compile times.
  4. 2 points
    When you don't understand something, you ask for clarification, or look it up, or just don't join the discussion. What you don't do is try to convince the OP that they don't know what they are talking about. Games with simultaneous turns exist, and they are very much turn-based. Two examples off the top of my head are Frozen Synapse and Battlestar Galactica: Deadlock. There are more examples (and more nuanced subtypes of turns) on Wikipedia: https://en.wikipedia.org/wiki/Turns,_rounds_and_time-keeping_systems_in_games
  5. 1 point
    These two video tutorials are awesome. I will rewrite my instructions later. For example, we want to add a cube to a scene: bpy.ops.mesh.primitive_cube_add() You can read about this API function in the documentation: primitive_cube_add

    1. Create a work folder with the name: mock-object-for-primitive_cube_add-api
    2. Open Blender and save the project in the work folder
    3. Open the "Scripting" tab in Blender from the top menu
    4. Open your favourite Python editor. I use VSCode. You can read about how to use VSCode with Python here: Python in Visual Studio Code
    5. Create a file with the name "main.py" in your favourite Python editor. This file must be placed in the "mock-object-for-primitive_cube_add-api" folder
    6. Write in "main.py": print("hello from blender")
    7. You can run this code from a command line terminal or from the VSCode internal terminal. Press "Ctrl+`" in VSCode and enter the command: python main.py You will see this message in the console terminal:
    8. If you opened the "Scripting" tab in Blender you will see an option to open a Python script in Blender. Click the "Open" button in the script editor inside Blender
    9. Choose the "main.py" file and click the "Open Text Block" button
    10. Open the Blender console terminal. For this, select "Window" in the main menu of Blender and then "Toggle System Console"
    11. Run the "main.py" script from Blender. For this, place your mouse pointer over the text area and press "Alt+P". You will see this message in the Blender console terminal:
    12. If you change the code in an external editor like VSCode, you need to reload it in the Blender text editor. For this, press "Alt+R+R"

    You only need to add one file, "main.py", to the Blender text editor.
The other files you need to place in the work directory "mock-object-for-primitive_cube_add-api". Copy this code to the "main.py" file:

```python
# main.py
import bpy
import sys
import os

# Get a path to the directory with the .blend file.
# The scripts are in this directory
dir = os.path.dirname(bpy.data.filepath)

# Is the directory in the list of included
# directories? If not, include the directory
if not dir in sys.path:
    sys.path.append(dir)

import object3d_service

# Reload the module. It is necessary if you use an
# external editor like VSCode.
# For reloading the module you need to press
# Alt + R + R in Blender
import importlib
importlib.reload(object3d_service)

# Note: you do not need to open all scripts in Blender,
# you only need this script
from object3d_service import Object3DService


def main():
    object_service = Object3DService()
    object_service.create_cube()


if __name__ == "__main__":
    main()
```

These are the other files that you need to copy to the work directory:

```python
# test_blender_service.py
import unittest
from unittest.mock import MagicMock

from object3d_service import Object3DService


class BlenderServiceTest(unittest.TestCase):
    def test_create_cube(self):
        # Arrange
        object3d_service = Object3DService()
        object3d_service.blender_api.create_cube = MagicMock("create_cube")
        # Act
        object3d_service.create_cube()
        # Assert
        object3d_service.blender_api.create_cube.assert_called_once()
```

```python
# object3d_service.py
from blender_api import BlenderAPI


class Object3DService:
    def __init__(self):
        self.blender_api = BlenderAPI()

    def create_cube(self):
        self.blender_api.create_cube()
```

```python
# blender_api.py
import bpy


class BlenderAPI:
    def create_cube(self):
        bpy.ops.mesh.primitive_cube_add()
```

Delete the default cube from the scene. Now you can reload the Blender code editor ("Alt+R+R") and run the code ("Alt+P"). You will see that a new cube is created. You can set breakpoints in "main.py" because there is a mock object for the Blender API.
And you can run the unit tests using this command: python -m unittest. You will see that the unit tests pass.
  6. 1 point
    Several years ago I made a series of training videos to teach artists how to write real-time shaders in HLSL. The series starts out with very simple concepts so that someone with no programming experience at all can start right from the beginning and learn to write shaders. Further into the series, I cover more advanced topics such as parallax mapping, global illumination, reflection and refraction, and vertex animation. Everything is explained step by step so it's easy to follow. Originally, the videos shipped on 3 DVDs that sold for $60 each. Sales were pretty successful, but the company that sold them went out of business just a short time after the 3rd DVD was released. So I've decided to upload them to YouTube so that everyone can watch them for free! My hope is that they'll help aspiring artists and programmers learn the art of shader creation. Although some of the material is out of date, almost all of the principles taught are still valuable to learn and understand as a foundation to learning to write shaders. Here's my channel: https://www.youtube.com/user/bcloward The series consists of 40 videos. I'll be releasing a new video each weekday starting this week - so if you're interested, be sure to subscribe to the channel and check back often. If I get enough interest, my plan is to start creating new videos that demonstrate more recent shader techniques. Let me know what you think. I'd love to hear your feedback! -Ben
  7. 1 point
    I’m pleased to announce that the small3d game framework now works across all major desktop operating systems as was always the case and, thanks to having been migrated to Vulkan, it has also been adapted to Android and iOS. For the time being I will continue to maintain the OpenGL edition. It is easy to port a game’s codebase between that and the Vulkan edition because the two APIs are almost identical. Still, I am keeping them separate, with a view to someday dropping support for OpenGL completely.
  8. 1 point
    Wow (no pun intended :)) You really took your time to answer that, thanks! And I never even thought of using vector graphics for that purpose. Actually, I just dabbled in polygon sprites for the first time a couple of hours ago to create icons for the minimap, and although it's not quite vector graphics, it still works very well to combat jagged edges on non-rectangular shapes. Textures tend to scale a lot better (within reason), so I'm not too worried about those.
  9. 1 point
    Introduction

Yet another Ludum Dare event came around, and I found time to participate again (this is actually the 10th Ludum Dare I've participated in so far). So, first of all: if you are interested, just scroll down the post and play the game. Short gameplay video:

Fig. 01 - A short gameplay video.

What went right?

This time around, we decided to dive into the wonderful world of procedural generation quite fast; the actual work on the game started early Saturday morning. Being heavily inspired by Angband, the decision was clear: our dungeon levels must be generated. Procedural generation of such dungeons is not really that hard. I started off with a simple algorithm and worked from there:

  • Generate a 2D axis-aligned BSP tree, randomly terminating after at most N levels or when a criterion is met
  • Shrink the rooms represented by the leaves of the BSP tree
  • Make connections in bottom-up order (first between leaves, then between interior nodes and leaves, always picking the room closest to the currently processed one)

The results were quite promising, producing Blocktober-worthy images like:

Fig. 02 - Randomly generated dungeon level

At this stage the entrance and exit were placed, followed by the monsters and power-ups which you can pick up in the dungeon. The last thing to solve in level design were the first and last levels. Since our levels during the generation phase are defined as byte arrays (containing the actual chars defining what is on a given tile), we simply used a file and injected it into the process. Our first level looks like this:

###
###X###
#.#...#
#....##
##....#
#...#.#
#######

Where:

  # - represents a wall
  . - represents empty space
  X - represents the exit out of the level

Apart from the actual procedural generation, development went quite smoothly. I enjoyed making non-realistic, cartoonish models, with skeletal animation enabled just for the character's hands. The models ended up being quite cute:

Fig. 03 - Mushroom in the Unity engine editor, version without hands

The game mechanics also felt quite comfortable overall (finished enough to be playable, I'd say) compared to some of my previous entries. Picking a smaller-scope game definitely played in favor of this.

What went wrong?

As usual: time. In every game jam (and it doesn't matter whether it is Ludum Dare or anything else), there comes a point where you have to start cutting features to finish on time. Because at the end of the day, in a game jam you either finish a game or you don't. We had to cut various features that would have made the game a LOT better, the most important of which was audio. Sound effects and music tend to improve the atmosphere a lot; to be precise, audio is one of the most underestimated features of a game, with which the atmosphere rises or completely falls. Due to our time constraints we ended up with no audio in the game at all.

User interface: the second item we left for the last moments of development was the user interface, and we ended up keeping it very simple. While it is informative enough for the game, making it graphically more attractive would improve the gameplay a lot (compared to having just default text giving you all the information you need).

Next time around, I'd like to dedicate at least a few hours to these two topics, even at the cost of some visual effects or world-generation details, to improve the atmosphere of the game with audio and UI.

Conclusion

As said, this was my 10th Ludum Dare, and I enjoyed working on it very much. I'm quite satisfied with the game we ended up with. Next time, assuming I have a free weekend, I'd like to participate again. Also, if any of you who read this article participated in Ludum Dare and would like me to play your game, please leave a comment here, and I will definitely play it during the following weeks.
Goodies

So, for the curious ones (I'm intentionally leaving just the Itch.io link to the game, as I never know when I will remove it from the server, which would make the link inactive):

  Source code is available at: GitHub
  Game is available at: Itch.io
  Ludum Dare page is at: LDJAM
  10. 1 point
    I would have a pre-set UI scale per resolution, then add an option with a slider that allows the user to scale up or down by (x) points per stop, with a preview prior to applying the setting. I normally just take screenshots of the different resolutions into Photoshop and overlay my UI to see which sizes fit best, then reflect that through code. If your UI is done with vector graphics then scaling isn't a problem, but if you use raster graphics then you essentially make the largest scaled version first and scale down as needed. You still might have to make different versions depending on quality loss. WoW and many other games have this feature:
  11. 1 point
    Thanks :) It's not something I had even considered; maybe I should :) It's not going to be at the top of my list though, I think, but perhaps it's worth taking into account when choosing sprite sizes and import settings for the final UI, to make sure they will scale well. Could you give me some examples of how you might choose to scale it? Not the UI, but I want to enable the player to zoom the scene camera in and out so they can look at their pretty character. I think, though, that most people will want to play zoomed out as much as possible, at least if it gets dangerous.
  12. 1 point
    Looking good! Are you going to consider having an option for scaling up or down the UI?
  13. 1 point
    InnovateHer is teaming up with Sony's PlayStation brand to expand its eight-week tech programme for teenage girls to more locations across the country. The Digital Bootcamp programme aims to give girls aged 12 to 16 key tech and interpersonal skills whilst encouraging them to consider STEM subjects and careers in tech. Currently, girls make up only 20% of computer science entries at GCSE, and just 10% at A-level, with nine times more boys than girls gaining an A-level in Computer Science this year. InnovateHer, whose mission is "to get girls ready for the tech industry, and the industry ready for girls", recently pledged to tackle these figures by committing to work with schools to reach over 1,000 girls by 2020. PlayStation previously worked with InnovateHer's sister brand Liverpool Girl Geeks to deliver a similar educational programme in Liverpool in 2016. That programme saw 20 girls take part in technology-themed workshops across six weeks, and included an invitation to PlayStation's Wavertree offices to meet technical staff and learn more about how games are developed and tested. Now, InnovateHer is working with PlayStation again to develop a scalable eight-week Digital Bootcamp programme in order to reach more girls in new locations across the UK. The after-school programme will teach girls technical skills, build confidence, and highlight local opportunities within the tech and digital industries. Working with PlayStation has allowed InnovateHer to extend the programme further afield, including Guildford and London. Programmes will start in selected schools during January 2020, and graduates of the programme will have a chance to showcase their work at next year's Develop conference in Brighton. Chelsea Slater, Co-founder of InnovateHer, says, "We're proud to be working with PlayStation again on our tech programme for girls.
The issues we see around the gender pay gap and the low numbers of women in the tech community are the culmination of seeds that are sown early in young women's academic careers. Our mission is to get girls ready for the tech industry, and to get the industry ready for girls, and a huge part of this is challenging the misconception that girls "can't do" STEM subjects like Computer Science, and equally that the STEM industry doesn't cater for women. That's why it's important for us that our programme reaches girls not just locally but nationally too, and that it aims to show young women just what opportunities are open to them. Thanks to PlayStation's support and recognition, we are able to do just that." If your school is based in London, Liverpool or Guildford and wishes to take part in the InnovateHer programme, an expression of interest form can be found here: http://bit.ly/iher2020 To find out more about InnovateHer, or to enquire about partnerships, visit: www.innovateher.co.uk
  14. 1 point
    Last month I posted a tech demo for my game. Only a handful of people played it, but the ones who did gave me some useful feedback.

Audio: Background music and sound effects are high on the list. I haven't implemented any audio yet. I'm currently talking to a composer; with a little luck he will be able to create something nice. If I get around to working with audio before that happens, I will put in some free or cheap assets to get a feel for the workflow. The composer pointed me to FMOD; it's worth taking a look at.

Audio/visual feedback: It wasn't immediately apparent when items dropped, or that you could click on an NPC to talk to them. I've implemented hover UI for items and mobs. This includes the name of the item/mob, and for mobs it also includes a health bar. This can be expanded upon with more info: colors for item quality, extra info for mobs, etc. I'm not sure if NPC hover should include a health bar; you probably won't be allowed to attack them in the final game anyway. A dynamic cursor was also implemented to show whether you can pick up items, attack mobs, talk to NPCs, etc. It does seem a little busy/confusing when the cursor changes, so I'm not sure if that's a good idea. Perhaps using the same basic cursor with a sub-icon will make it easier on the eyes.

The first NPC asks you to kill some spiders, but there are no spiders in the game... will do :) The first area needs some love: a bit of story, better flow and a basic quest line. Quests are not yet implemented, but the flow can be simulated with dialogues.

Monsters tend to clump together in one spot, and when you kill them they respawn immediately at their spawn point. The respawn behaviour is something I will address soon, but fixing the clumping requires quite a bit of work. Originally I wasn't planning on implementing any sort of monster blocking or dynamic obstacle avoidance. I'm afraid this simplistic approach won't cut it. Monster blocking is not in itself a must-have, but the clumping must be dealt with. Implementing dynamic pathfinding / steering behaviour, in particular in a way that will perform well enough to scale to many players, is going to be a bit of a hurdle.

I got started on a minimap. The approach I have used is to visualize the navmesh by extracting the triangulation and turning it into a mesh, which is then rendered by another camera into a RenderTexture and shown on the screen. I need to find or write a good shader to make it look right. I also need to visualize enemies, NPCs, portals and other entities on the minimap, and visualize camera and player rotation. (Come to think of it, I also need to implement camera rotation...)

One thing I would really love to have is an installer and a launcher application that can auto-update the game files. High on the wish list is also a code-signing certificate, but due to costs this will unfortunately have to wait. I really want to have the launcher in place soon, though.

A new build can be downloaded at: https://treacherousjourneys.com/downloads/journeys.zip. I really appreciate all the feedback I can get :)
  15. 1 point
    I'll give it a go this weekend and post back!
  16. 1 point
    I don't think I've seen anything on Surface Nets specifically, but it came up a lot regarding Dual Contouring. In the examples I've seen, the non-manifold sections usually look like hourglass shapes, where two parts of a surface converge at the same vertex. This link (https://www.boristhebrave.com/2018/04/15/dual-contouring-tutorial/) describes it a little under the 'Manifolds' section. There is also a paper on Manifold DC (https://web.archive.org/web/20161020191119/http://faculty.cs.tamu.edu/schaefer/research/dualsimp_tvcg.pdf). As Surface Nets and DC only really differ in how they position the vertex inside the cell, it might be possible to use the same solution.
  17. 1 point
    Yes, but it depends on what physics you need. If it's just enemies/vehicles but no box stacks, LOD could work. You could also replace the physics simulation with procedural animation at a distance.
  18. 1 point
    The thing is, I'm targeting very large worlds with sub-half-meter triangle sizes. Even something like a matrix of values fed into a voxel algorithm would take up far too much disk space on a player's system. I therefore generate everything from terrain functions at run time as the player walks around, and that all takes some CPU cycles. The FPS can't actually bog down, because the generation runs in a separate thread that uses the "current" build of the world while the next build is being calculated in the background. Ideally I should also be able to do sub-second world builds. However, in some odd cases where the world build lags, I don't want the physics to be affected. So, as I said, I have a second voxel tree where leaf voxels only exist right around the player, for physics. They can also be built ahead of ranged projectiles and the like if necessary. The general idea is to only build terrain right ahead of any possible collisions, so it's fast. For physics I don't care about voxel level transitions, because it's all at one level, which simplifies things. I also don't actually need a mesh, because I can access triangles in the tree directly. ...So yes, physics is done in an entirely separate octree, in the same thread that calculates collisions. Since it uses the same functions, it matches the graphics terrain at its highest resolution. This whole system is really particular to my project. If you aren't using run-time procedural generation there are probably better ways (I'm sure there are better ways anyway :P). For one thing, you don't have all the terrain available for path-finding at any given time, which is a problem. I have thought of a cheating way of doing path-finding, but I'll have to try it out when I get that far. I'm basically forced to do all this because I want to support several thousand large planets. That's why I was saying I'm not sure how much help I can be, since my project is kind of odd.
  19. 1 point
    Testing this on Chrome, the embedding succeeds. On Firefox, the video link stays as a text representation (with my browser versions). The art here is amazing. Top-shelf goodness.
  20. 1 point
    Hi gamedevs, We are happy to announce the release of Nanotale in early access on the 23rd of October 2019. http://www.youtube.com/watch?v=5IohCH2aCx8 Wishlist the game on Steam Why in Early Access? The initial plan was to release the full game in September 2019 but then, we noticed that if we really wanted to release the game we dreamed of along with our community, it would take a little bit more time. So, we decided to polish the first 35% of the game and put it in your hands to make sure that we are on the right track while finishing Nanotale. We give more info about the early access and its content in the following video. http://www.youtube.com/watch?v=Z2kd_ams2fo&t=4s The more feedback we get, the better our game will be at release. So, thank you in advance for your support! Have a wonderful week. -VirginRedemption
  21. 1 point
    Almost. The root actually has 20 children, but this is simply because I'm using an extruded icosahedron to generate my top 20 prism voxels. After that it's a normal octree. If you're not doing planets there is no advantage in this. It's really done to meet my requirements: a) I'm building voxels on a sphere, b) voxels have to be oriented the same way in relation to the surface of the sphere, and c) I want voxel size to be as consistent as possible over the sphere. These last two are so that similar terrain generated at any point on the planet will give similar mesh results. Also, I'm going to try voxel morphing to generate reasonably realistic-looking trees (i.e. not blocky), so the starting orientation is important. Well... with certain algorithms like marching cubes, you can't easily change voxel size (i.e. adjacent voxels can't easily have different sizes). To take care of this you have to use something like the Transvoxel algorithm; I'm using voxel tessellation. However, you mentioned you use Surface Nets, so I don't think that's a problem with that algorithm. The downside is that you can end up with non-manifold geometry, which can't happen with marching cubes (or prisms in my case). For physics you can normally just use your smallest voxel size for calculating meshes, so that's not a problem. However, depending on what you are doing, you may have other issues. For instance, if you are using the same set of voxels for collision as for graphics, you can end up with some race conditions, especially if a player is moving fast. In my case I'm generating everything from functions at run time, and I use a completely separate set of voxels for physics to avoid that problem altogether. With the physics there is no LOD supported, and geometry is regenerated just around the player. I actually don't even build meshes for physics: since the geometry is already generated in an octree, I just use it as it is.
That's one other advantage of algorithms that keep geometry confined to a single voxel.
  22. 1 point
    Probably a misunderstanding on my side: because you mentioned LOD, I assumed you want more detail near the player and less in the distance, so that if the player moved around, the terrain would change and a resting stack of boxes could fall over, for example. I did not mean the transition issues like gaps and discontinuities that often arise in LOD terrain approaches. A compromise might be to only detect contacts on the GPU and send them to the PhysX engine used by Unity, but that's still difficult and likely messes with the contact caching the engine uses internally. So I would stick with your idea of downloading the geometry. (Be sure to observe the performance cost of submitting new geometry to physics; this usually involves building an acceleration structure if it's not just a height map, so that's another reason to use low-poly meshes.) A rough idea of mine, though I'm not sure it has ever been used in games; I talked about it recently in a PM, copy-pasted: the idea is, if you have a maze, pour water in at the target until it flows out at the source, and then just follow the path where the height of the water increases the most.
  23. 1 point
    F.E.A.R.'s AI success (or "advancedness", if you will) comes more from presentation than from algorithms. It's not GOAP that made the good impressions, but the barking system. In fact, I remember reading somewhere that GOAP in F.E.A.R. (or it might have been another game using GOAP) was underutilized, since plans changed so often that there was never time to execute deeper plans. And that is important for this question: in games, the player notices things on the surface, not things under the hood. What we see as "advanced AI" often ends up being presentation tricks, hardcoded special cases or even scripted sequences. Which leads to @IADaveMark's question again: what do you really mean by "advanced AI", and what do you really mean by "seeing" that advanced AI?
  24. 1 point
    Hello All, I am interested in this potential project team. I am a programmer. Here is a game I developed about 10 years ago: https://www.youtube.com/watch?v=OCPqUhgt7Dc
  25. 1 point
    Here's an example of using an index buffer. I'd recommend using the website docs.gl; it lets you filter functions based on GL version.

```cpp
GLuint vao, buffer, indexBuffer;

Vertex object[] = {
    Vertex(glm::vec3( 0.5f,  0.5f, 0.0f), glm::vec4(1.0f, 0.0f, 0.0f, 1.0f)),
    Vertex(glm::vec3( 0.5f, -0.5f, 0.0f), glm::vec4(0.0f, 1.0f, 0.0f, 1.0f)),
    Vertex(glm::vec3(-0.5f, -0.5f, 0.0f), glm::vec4(0.0f, 0.0f, 1.0f, 1.0f)),
    Vertex(glm::vec3(-0.5f,  0.5f, 0.0f), glm::vec4(0.0f, 0.0f, 1.0f, 1.0f))
};

unsigned int indices[] = {
    0, 1, 2,
    3, 0, 2
};

// Init
glCreateVertexArrays(1, &vao);
glCreateBuffers(1, &buffer);
glNamedBufferStorage(buffer, sizeof(object), object, GL_MAP_READ_BIT); // GL_STATIC_DRAW isn't a valid param

// Position
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(vao, 0, 0);

// Color
glEnableVertexArrayAttrib(vao, 1);
// The relative offset is the size in bytes up to the first "color" attribute
glVertexArrayAttribFormat(vao, 1, 4, GL_FLOAT, GL_FALSE, sizeof(glm::vec3));
glVertexArrayAttribBinding(vao, 1, 0);

// The stride is the number of bytes between each "Vertex"
glVertexArrayVertexBuffer(vao, 0, buffer, 0, sizeof(Vertex));

// Create index buffer
glCreateBuffers(1, &indexBuffer);
glNamedBufferStorage(indexBuffer, sizeof(indices), indices, GL_MAP_READ_BIT);
glVertexArrayElementBuffer(vao, indexBuffer);

// Draw
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0); // the second parameter is the number of indices
```
