    1. Past hour
    2. johannesg

      PDF manual?

      Just a quick question... Is there a PDF manual somewhere that describes the scripting language? I am of course familiar with the website, and it contains pretty much exactly what I need, but the dark forces of the universe (*cough*QA*cough*) love to see their printed, or at least printable, manuals... Thanks!
    3. This is arguable. Maybe we have two different definitions of what realtime means, and I'm assuming your definition isn't as nuanced as it should be. Looking and seeing are two different acts. The latter implies the answer is in front of you; the former implies you did some prior research before making an assertion.
    4. Yes, I realize that. Hence the code here: float sy = tmp.y + ( m_nMaxYOffset - glyph->offsetY ) * m_fScaleVert; And it seemed that I had fixed the issue with the various widths, until I increased the font size and the same thing happened again (font size 22, font size 23). Someone else had the same issue and it seemed he fixed it by subtracting the offset from the character T's offset. Thanks.
    5. This video is "old" now after work over the weekend but here it is anyways. When I am able to work on this project, I end up working until it's quite late trying to nail down just "one more thing". And I either forget to leave time for capturing a video or tell myself that tomorrow I'll have something better. This video shows the PC which I simplified to a rectangle with feet and a single dot for an eye walking around int the darkness of the labyrinth, picking up the occasion torch rock (that needs to be re positioned to appear in his hand) and exploring a bit. Eventually our hero comes to a section of the maze where some inhabitants from the Frogger challenge still persist, including some cars that have had their sprites changed to very plain squares. Nothing there is particularly harmful but he jumps past one of the sleeping denizens of the labyrinth a couple times before demonstrating that it is a bad idea to attempt to swim in the swamp.
    6. Oberon_Command

      Anyone who wants to write a little game engine?

      This. And it applies to physics and AI, too. Video games are fundamentally interactive magic shows - "smoke and mirrors." Very few games simulate anything you see on the screen with a high degree of physical accuracy. And this is usually intentional - artists and designers tend not to even want physical accuracy. They want the game to look like the image they have in their head of how it should look. They almost always prefer gameplay that is "fun" over gameplay that has accurate physics. They don't want AI that is genuinely "smart," they want AI that feels smart and loses to the player in an interesting way. In Doom, why do barrels of radioactive waste explode when you shoot them? Radioactive waste is not inherently explosive. Answer: because it's fun and players love the gameplay opportunities it affords.
    7. Promit

      Engine for 2D turn based games

      How about Godot? https://godotengine.org/
    8. SoldierOfLight

      D3D12 Fence and Present

      "Doesn't calling WaitForSingleObject on a fence block the CPU thread?" Yes, explicitly calling WaitForSingleObject on an event which will be signaled by SetEventOnCompletion is related to fences. All I meant was that any implicit blocking within the Present API call is not necessarily related to fences; it's only related to the "maximum frame latency" concept. To answer your specific questions: A. Yes. B. Yes. C. Yes. Work which is submitted against a resource that is being consumed by the compositor or screen is delayed until the resource is no longer being consumed. The fact that the command list writes to the back buffer is most likely detected during the call to the resource barrier API, and implicitly negotiated with the swapchain and graphics scheduler at ExecuteCommandLists time, to ensure that the command list doesn't begin execution until the resource is available. Also, to clarify: by "GPU thread" we're talking about the command queue. If you had a second command queue, or a queue in a different process, it would still be possible for that queue to execute while the one writing to the back buffer is waiting.
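      For reference, here is a minimal sketch of the explicit fence wait being described (D3D12, C++). The member names (m_queue, m_fence, m_fenceEvent, m_nextFenceValue) are hypothetical and error handling is omitted:

      // Signal the fence from the queue, then block the CPU only if the GPU hasn't reached it yet.
      const UINT64 fenceValue = ++m_nextFenceValue;
      m_queue->Signal(m_fence.Get(), fenceValue);

      if (m_fence->GetCompletedValue() < fenceValue)
      {
          m_fence->SetEventOnCompletion(fenceValue, m_fenceEvent); // event fires when the fence reaches fenceValue
          WaitForSingleObject(m_fenceEvent, INFINITE);             // this is the explicit CPU-side block
      }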
    9. acerskyline

      D3D12 Fence and Present

      Doesn't calling WaitForSingleObject on a fence block the CPU thread? Also, I am wondering: does Present block the GPU thread? Assume I have called Present 3 times very quickly. Before the 4th time I call Present, I call ExecuteCommandList, then Signal, and then Present. So it looks like this:
      0. We have already completed steps 1 to 8 three times (i = 1, 2, 3; now i = 1 again).
      1. WaitForSingleObject(i)
      2. Barrier(i) present -> render target
      3. Record commands...
      4. Barrier(i) render target -> present
      5. ExecuteCommandList
      6. Signal
      7. Present
      8. Go to step 1
      Under these circumstances, please answer my following questions:
      A. Step 1 may block the CPU thread if the previous work of frame 1 is not finished on the GPU. Am I right?
      B. Assume the previous work of frame 1 has finished on the GPU. Step 7 may block the CPU thread if none of the previous 3 frames is done - really done, as in on-screen. Am I right?
      C. If the answer to B is yes, then the CPU thread will be blocked at step 7, but the command list has already been submitted, so what will happen to the GPU thread? Will the GPU thread be blocked? If yes, by what? (If the answer is yes, I suspect it is by the barrier recorded in step 2, the present -> render target barrier.) If no, where will the GPU render to when none of the previous 3 frames is done - really done, as in on-screen?
    10. Today
    11. I think we're saying the same thing... Have you tried using Reverse-Thrust + Holding-Right-Click (I should probably make this a toggle)? It aligns your view exactly with your velocity/firing trajectory. I'm still gonna work on the free camera though, but I know it's going to make me want to add twist movement into the player so you can aim where you look in free camera mode... Yeah, what I'll probably do is make the "align with velocity" camera angle a toggle that aligns the camera with the current velocity angle, not just the reverse angle... Then whenever you right click, it toggles to a perfectly aligned view of your shot no matter what direction you are currently heading. Then overlaying targeting information might be a little easier actually... hmmm.
    12. There are a couple ways to approach this. The simplest, as mentioned above, is to simply implement the deformation effect in the vertex shader. If you're dealing with a simple one in, one out style of effect then this is a great way to do it and this is how skinning for example is done. The next step up in sophistication is to not supply the vertex directly to the vertex shader, but to give it access to the entire buffer and use the vertex index to look up the vertices in a more flexible format. (Some GPUs only work this way internally.) That way your vertex shader can use multiple vertices or arbitrary vertices to compute its final output. The most complex version of this is to write a buffer to buffer transformation of the vertices, which can either be done via stream out in the simple cases or a compute shader in the advanced cases. This lets you store the results for later, not just compute them instantaneously for that frame.
    13. Aha, yes, I'm working on how to filter out the clicks properly... It's very annoying when you are trying to click on GUI elements (the menu should be easy though, since its GUI element takes focus anyhow). There are also a few other instances where fire triggers are caught when they shouldn't be at all, which I've got on my fix list. A trajectory indicator is also something I would like to do; it's an even tougher one though... Predicting physics math... not my strength. The snowball does always fire in the same direction the board is traveling (the y axis exactly), but the velocity is a combination of player velocity and applied forces + gravity. I've been working this over in my head since I first built the mechanic, but I haven't tackled the code yet... lol. By free-camera/unlock I'm assuming you mean lock the camera to the mouse cursor? I had a free-camera mode initially but it wasn't very useful, so I disabled it. I think if I actually locked the view to the cursor (so you can pan and tilt) and locked its follow position to its normal follow position, that might make it work better; it was a little too free-wheeling and hard to control at these speeds in the previous design... Does that sound about right? haha. Thanks again! Appreciate the feedback immensely!
    14. Yes, D3D11 is still quite modern. D3D9 is over 14 years old at this point. Releasing a D3D9 game (other than for free) would be a technical support nightmare in 2019.
    15. What you described is exactly what the vertex shader does. You are responsible for determining the screen position of each vertex in that shader, which also means you are free to displace vertices on the fly however you like. You don't need to re-upload vertex data every time.
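    Since the question mentions upload buffers, this sounds like D3D12, so here is a minimal sketch of that idea on the C++ side: keep the vertex buffer untouched and only rewrite a small per-object constant buffer each frame. The names (m_objectCB, playerX/Y/Z) are hypothetical:

    // m_objectCB is an ID3D12Resource created on an UPLOAD heap, bound as the object's constant buffer.
    struct ObjectConstants { DirectX::XMFLOAT4X4 world; };

    ObjectConstants constants;
    DirectX::XMStoreFloat4x4(&constants.world,
        DirectX::XMMatrixTranspose(DirectX::XMMatrixTranslation(playerX, playerY, playerZ)));

    void* mapped = nullptr;
    D3D12_RANGE readRange = { 0, 0 };             // we won't read the buffer on the CPU
    m_objectCB->Map(0, &readRange, &mapped);
    memcpy(mapped, &constants, sizeof(constants));
    m_objectCB->Unmap(0, nullptr);
    // The vertex shader then multiplies the unchanged vertex positions by 'world'.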
    16. stratusfer

      Engine for 2D turn based games

      Just C++, or I was also considering D for some reason.
    17. Okay, then you have no choice: you need to produce proper contact points and switch to OBBs instead. For sphere vs OBB you need just one contact point; for OBB vs OBB you can have at most 4 contact points. I cannot explain how the generation works in actual 3D, but the concepts for 2D are similar. First off, you use SAT to determine the axis with the most overlap and record the penetration depth and the normal/axis -> this is what you actually have right now. You also need to record the body and the face where the axis was found; this is either A or B. Treat A as the reference body and B as the incident body first, but if the most penetrating axis is found on B, then B is your reference body and A is your incident body - it's just swapped. The recorded face on the reference body is your reference face. The most anti-parallel face on the incident body is your incident face. Next, clip the incident face against the side planes of the reference face (Sutherland-Hodgman clipping) -> you don't clip against the reference face itself. You save all the clipped points that lie below the reference face. Note that these clipped points are on the incident face - not on the reference face! For better coherence you project the saved clip points onto the reference face. Now you have all the contact points on A. To get the second contact point it's as simple as: PointOnB = PointOnA + -Normal * PenetrationDistance; There you have it. Again, everything I am talking about is well explained in "DirkGregorius_Contacts.pdf" - even with code! One last tip I can give you: visualize everything! Draw the face planes, the clipping planes, the contact points and the penetration depth. Colorize the reference body red and the incident body blue. Trust me, this helps a lot!
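      A minimal 2D sketch of those last steps (keep the clip points below the reference face, project them onto it, then derive the point on B). Vec2 and Dot are assumed helpers, refNormal is assumed to point from the reference body A toward the incident body B, and refFaceOffset = Dot(refNormal, any point on the reference face):

      struct ContactPoint { Vec2 onA, onB; float depth; };

      int BuildManifold(const Vec2* clipped, int clippedCount,
                        Vec2 refNormal, float refFaceOffset,
                        ContactPoint* out)
      {
          int count = 0;
          for (int i = 0; i < clippedCount; ++i)
          {
              float separation = Dot(refNormal, clipped[i]) - refFaceOffset;
              if (separation <= 0.0f)                            // keep only points below the reference face
              {
                  ContactPoint c;
                  c.depth = -separation;
                  c.onA   = clipped[i] - refNormal * separation; // project the clip point onto the reference face
                  c.onB   = c.onA - refNormal * c.depth;         // PointOnB = PointOnA + -Normal * PenetrationDistance
                  out[count++] = c;
              }
          }
          return count;
      }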
    18. A constant buffer is used to change the transformation matrix of objects, and that happens per user interaction. But this only changes the camera. What if I wanted to deform and move a cube (let's say I am rendering a cube) on user input? Since the cube's vertices/indices are in vertex and index buffers, I can modify them per frame using some upload buffer. But using an upload buffer to update coordinates (due to user input) doesn't seem performance friendly. Let's say that cube is a character the user is controlling. The user might be moving the character very fast. So I was wondering whether it's possible to access a large buffer of vertex and index resource type in the shader, so I could manipulate the character directly on the GPU by sending some transformation data through a constant buffer. Which resources are used to represent highly movable objects in the game world? Index/vertex buffers, constant buffers, or what?
    19. You need to hook this one up soon: check from 3:07.
    20. https://docs.microsoft.com/en-us/windows/desktop/gdi/wm-displaychange It says the message only informs you of the change when it's already done. Maybe don't change it in the first place, or don't use other programs that are doing it?
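      If it helps, here is a minimal sketch of handling it in a window procedure; per the linked docs, wParam carries the new bit depth and lParam packs the new resolution (the surrounding switch and variables are assumed):

      case WM_DISPLAYCHANGE:
      {
          int bitsPerPixel = (int)wParam;
          int newWidth     = LOWORD(lParam);   // new horizontal resolution
          int newHeight    = HIWORD(lParam);   // new vertical resolution
          // The mode change has already happened; all you can do here is adapt,
          // e.g. resize the swap chain or reposition the window.
          break;
      }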
    21. drawing.kai

      Neo arcana is looking for an artist

      Hello! I'd love to do some art for this kind of game. My style is very cutesy, which sounds like it could work for your project. Would you like me to send some examples of my art?
    22. Rutin

      GameDev - Dungeon Crawler Challenge - Part 2

      Thanks! I didn't work on this over the weekend, but I'll be doing some level design today. New post coming shortly!
    23. Rutin

      Dungeon Crawler Challenge - Update 3

      I like the concept of having jumps reduced depending on weight. You could make certain areas only accessible to near-empty inventories for special loot, and maybe have a one-way door or portal to get out of the room.
    24. Hi all, I'm really new, so excuse my inexperience with this forum. A few friends and I have been thinking about setting up a dev team. At the start, we want to create something simple in our spare time (a runner game for mobile is one of our ideas), and if it makes any revenue, continue creating small games until we have the budget for more. The problem is that we have 2 people who know how to model in 2D/3D, and I have more experience with the business/management side (and, for the future, HR and recruitment). I was hoping to get any advice on where I could find an artist and a programmer. Any revenue we make will be split equally, of course! If this is the right place to find people: please contact me! Our goal is to start small, but make better and bigger games once/if we achieve financial stability.
    25. Sound Master

      Windows 10 makes old games crash fullscreen

      I might install a newer DirectX and start over again. The problem is I'd like to release my DX9 game (a bit late). If the people who play it get these crashes where you have to turn off the power, they will blame it on my game. I have an ATI Radeon HD 5850 thanks to a donation; it can handle DX11.2. Is DX11.2 OK without problems?
    26. Hey guys, I just was looking for some advice on what to look for/steps to take as far as how to get started coming from my background/and what I'm wanting to do w/ the program. I'm mostly a classically trained musician, focusing almost exclusively on solo piano. I was way too much of a purist on just the piano growing up, and I never showed any interest or discipline in actually understanding the electronic composing/recording/production side of things. I'm trying to right my wrongs nowadays and I would like to use my musical skill set on the computer. I've only recently spent some months learning how to record and mix my own live recordings through an interface and Studio One 4, which I'm still learning about. The only real experience I've had composing on a computer without adding live instruments or recorded tracks is working with Finale 2008 for a few years off and on. It's very comfortable to me but it's obviously outdated, and from the looks of things, serious DAWs aren't as old school or simple as something like finale. But I know that you can assign nice VST instruments to midi composed tracks so.. if I'm most comfortable just composing on a score can I actually make nice electronic music that is appropriate for projects/business/my own production just sequencing midis, assigning VST instruments and I guess mixing it appropriately? Or should I really just learn/start messing with one of the popular DAW programs that people generally use? I'm not trying to limit myself to classical instruments or just electronic orchestral music, I'm open to composing many styles of music. I'm sorry I'm very new to this whole thing but I'm trying to figure out what's right for me and I'm trying to ask around. This is generally the kind of stuff I'm composing: https://www.instagram.com/p/Br9B1vNBQCm/
    27. Hi everyone! I'm looking for an internship for the summer of 2019! I'm a graduate student at NJIT, and I have over 4 years of working experience in game development and real-time rendering development. I'm looking for an internship in game development or graphics development for the summer of 2019 in the USA. If you have any internship opportunities, please contact me. Thanks so much! PS: the following pictures are screenshots of the real-time renderer I'm developing with C++ and OpenGL.
    28. I'm following this tutorial about the stencil test and I can't figure out how it works. What I'm trying to do for now is to draw a white cube and clip all the rest of the cubes (I'm rendering 6 cubes in total, 5 with a texture and 1 with just a white color). So I'm trying to draw the white cube and clip all the other fragments. This is what I get: at the beginning of the video the camera is facing the 5 textured cubes (the white cube is behind the camera at this moment) and, as you can see, everything is clipped as I expected. But when I look at the white cube and then back at the textured cubes, they appear! And not only that: if you look at the bottom left you will see that there is an area that gets clipped some of the time. This is what I do before and after I draw the white cube:

      void OnUpdate() override
      {
          GLCall(glStencilFunc(GL_ALWAYS, 1, 0xFF));
          GLCall(glStencilOp(GL_KEEP, GL_REPLACE, GL_REPLACE));
          GLCall(glStencilMask(0xFF));

          //Render the white cube.
          this->core->GetRenderer()->DrawBasicCube(this->mesh, this->program);

          GLCall(glStencilMask(0x00));
          GLCall(glStencilFunc(GL_EQUAL, 1, 0xFF));

          //after this the rest of the cubes are going to be rendered.
      }

      What I think I'm doing here is: I always draw the white cube and update its stencil data to 1. After that I change the function to render only fragments whose stencil value is equal to 1. Since I changed the stencil values to 1 only for the white cube, shouldn't the rest of the textured cubes not get rendered, because their fragments' stencil value is not 1?
    29. @Finalspace I'm making a cricket game. So far, I've been able to resolve collisions for ball-ground (ball without any rotation after hitting the ground) and ball-stump (not accurately). I need an accurate contact point (c->p) to compute the rotation part in the collision resolution code. As you can see in the code posted above, (c->p) is computed using (c->n), so the cross products of ra and rb with n go to zero in the collision resolution code.

      void CollisionResponse(Contact *c, float epsilon)
      {
          XMVECTOR padot = c->a->GetVelocityAtPoint(c->p);
          XMVECTOR pbdot = c->b->GetVelocityAtPoint(c->p);
          XMVECTOR n = c->n;
          XMVECTOR ra = (c->p - c->a->GetPosition());
          XMVECTOR rb = (c->p - c->b->GetPosition());
          XMVECTOR vrel = XMVector3Dot(c->n, (padot - pbdot));

          float numerator = (-(1.0f + epsilon) * vrel.m128_f32[0]);
          float term1 = (1.0f / c->a->GetMass());
          float term2 = (1.0f / c->b->GetMass());
          XMVECTOR term3 = XMVector3Dot(c->n, XMVector3Cross(XMVector4Transform(XMVector3Cross(ra, n), c->a->GetIInverse()), ra));
          XMVECTOR term4 = XMVector3Dot(c->n, XMVector3Cross(XMVector4Transform(XMVector3Cross(rb, n), c->b->GetIInverse()), rb));

          float j = (numerator / (term1 + term2 + term3.m128_f32[0] + term4.m128_f32[0]));
          XMVECTOR f = (j * n);

          c->a->AddImpulse(f);
          c->b->AddImpulse(-f);
          c->a->AddImpulsiveTorque(XMVector3Cross(ra, f));
          c->b->AddImpulsiveTorque(-XMVector3Cross(rb, f));
      }

      ALM 2019-01-21 19-59-58-83_x264.mp4
    30. SoldierOfLight

      D3D12 Fence and Present

      Blocking the CPU thread has nothing to do with fences. If you call Present 3 times very quickly, then the 4th time you call Present, it'll block until one of your previous 3 frames is done - really done, as in on-screen. Unless, of course, you use the waitable object swapchain, in which case the waits for frame latency are done manually by the application, instead of in the call to Present. Additionally, this is the only way to change the maximum frame latency value in DX12 from the default of 3, and if you use this waitable object, the default for that scenario is 1.
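      A minimal sketch of the waitable-object path mentioned above (C++/DXGI, hypothetical variable names; requires IDXGISwapChain2 or later and opting in at creation time):

      DXGI_SWAP_CHAIN_DESC1 desc = {};
      desc.BufferCount = 3;
      desc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;  // opt in when creating the swapchain
      // ...fill in width/height/format/swap effect, then CreateSwapChainForHwnd...

      swapChain->SetMaximumFrameLatency(2);                             // override the default of 1 for this path
      HANDLE frameLatencyWaitable = swapChain->GetFrameLatencyWaitableObject();

      // Per frame, before building any work:
      WaitForSingleObjectEx(frameLatencyWaitable, INFINITE, FALSE);     // the wait Present would otherwise do implicitly
      // ...record, ExecuteCommandLists, Signal, Present...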
    31. Zakwayda

      Engine for 2D turn based games

      Are you wanting to stick with C++? Or are you open to other languages?
    32. If you set up the projection matrix the "default way", it will transform the near plane to -1 (or 0 for Direct3D) and the far plane to 1 (for both APIs). If you'd prefer the other way around, you can do that too, but you'll have to build your projection matrix properly for your convention. Edit: reversing might even be a good idea: see this blog post.
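      For the reversed convention, a minimal sketch of one way to set it up in OpenGL 4.5+ (assuming a floating-point depth buffer and a projection matrix built to match):

      glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE); // map clip-space z to [0,1] like Direct3D
      glDepthFunc(GL_GREATER);                      // nearer fragments now have the larger depth value
      glClearDepth(0.0);                            // "far" is now 0, so clear the depth buffer to 0
      // ...and build the projection so the near plane maps to 1 and the far plane to 0.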
    33. Depth values are between 0 and 1. The z coordinates get mapped to this range with a formula, so that 0 is closest and 1 is furthest away. When comparison happens with GL_LESS, a fragment that is in front (closer, has a lower depth value) will pass and be rendered, and a fragment that is in the back (further, has a higher depth value) will be discarded, so it won't get rendered. In short, fragments that are behind other fragments will be skipped.
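      A minimal sketch of the default setup being described (OpenGL, C++):

      glEnable(GL_DEPTH_TEST);
      glDepthFunc(GL_LESS);                                // fragments with a smaller (closer) depth value pass
      // each frame, before drawing:
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // resets every depth value to the far value (1.0)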
    34. rh_galaxy

      Looking for a Hobby-Dev Partner

      Hi Lucas, although this game is already in the making and the game idea is loosely decided, I think it might suit you (both C# and Blender)... Have a look. I live in Sweden. The problem with not having something decided at the beginning is that it may never be finished. This will be, and it's not an overly complicated project. If you don't have VR, I will buy you an Oculus Rift when it is finished.
    35. That's shadows. Shadow mapping is a different thing from lighting, but you'll get to it.
    36. Typically, yes. Imagine you have doors and windows; there it would become visible that the walls are thin like paper. What you're talking about is called... a 'shadow'. https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping
    37. fatamorgana-999

      2D

    38. Hi, I'm seeking people interested in helping make "Galaxy Forces VR" a great game. I'm currently alone and would like someone who can create simple 3D models, and a programmer, to join the team. The project is being developed using Unity/C#. It's a 2D game, but viewed in 3D. The game is based on my project "Galaxy Forces V2", but this will be single player only and support the game modes race and mission. There will be global hi-scores on the website, with replays of the record scores and ranking of top players, as it adds a lot to the game and also keeps players interested for longer. Everything is not written in stone; there are possibilities for someone creative to add their own ideas. In fact, I encourage any team member to test and think about different options for how to make the game better. A change from the original is that this will be easier to play for beginners, to make it more attractive, but not easier to master fully. This is the original complete version: http://www.galaxy-forces.com/

      I'm new to Unity, and only have a little knowledge of how to create 3D models. I know C++ fully, but C# is mostly the same. I might get the coding done myself, but it would feel better to have one more person coding and testing their ideas. I'm sharing a picture from development, and a demo version in its current state, so you can decide if you want to join. The plan is to release it on the Oculus Store in half a year. I have a todo list, and I promise to do what I can and spend much time on this to get it done to completion. I'd like to share the profits with those who join and have actually done something that pushes the project forward. The demo runs on Windows without VR, but an Oculus Rift is recommended to be able to choose a level in the menu (otherwise press Return to play the default level)... Galaxy Forces VR v0.1 https://drive.google.com/open?id=1GpcfMzAsgsBPkht_RV3lTJcRR0zI3AKo

      The presentation right now may not be the final result; especially the menu needs a new look with more content. It is true that it is a hobby project, but I think it has a great chance of getting accepted by Oculus. There is a large amount of content, 50 levels for one thing, and the levels don't need much work to reuse for this project. Also, since VR is not mainstream and there aren't so many games released, it will not drown in the noise as easily. And I think VR people buy more games than most; at least I do. This is the full TODO list, which covers most of the needed work from now to release. I think the time plan is realistic...

      Map elements
      - Landing zone, hangar building - low poly model (only visual)
      - Landing zone, antenna - low poly model (only visual)
      - Map decorations, trees - low poly model (visual, and collision in map)
      - Map decorations, barrels - low poly model (visual, and collision in map)
      - Map decorations, red/green house in 3 parts (left, center, right) - low poly model (visual, and collision in map)
      - Z-objects for decoration: objects that can be placed in a map that are larger and stick out towards the player, to make it more visually pleasing to play in VR, for example brick walls in different shapes (only visual, placed inside walls not accessible to the ship)

      The levels
      - Now there are 23 race levels and 23 mission levels. There also exist levels for dogfight and mission_coop; take these levels and convert them to race and mission to get 50+ levels
      - Need to fix the editor to make it possible to place the new Z-objects in the maps, and go through each map and add them

      Door element
      - A low poly model for the end points of the door (only one needs to be created, can be rotated in 4 different angles to make all parts)
      - Implement the door element in code

      Enemies
      - Create them in 3D [enemy 0..6]
      - Implement them in code

      Game Status
      - Race: show Time, Current checkpoint, Current lap/Total laps, Health bar
      - Mission: show Lives, Health bar, Fuel bar, Cargo bar, Score

      Sound
      - Add existing music to the menu and game
      - Add existing sound fx in the game

      Menu
      - More content (could be game name text/logo, animated objects, clouds, anything really)
      - Better gfx (different textures for the level select elements)
      - Show your score vs the world best on each level, also show your rank (gold, silver or bronze)
      - Make part 2 of the menu - after a level is selected in part 1, it shows 3 options: play, play your best replay, play the world record replay
      - Settings to turn the music on/off (+ a minimum of other settings)

      The VR room around the player
      - More content (could be clouds and a skybox, or a room, or blackness in space, anything really)

      Replay
      - Implement replays in code
      - Online hi-scores - that is, be able to send/load the replays to the website (either HTTP or HTTPS if possible; maybe it's easy to do HTTPS with C#?)
      - The hi-score implementation on the website (mostly done already)

      Website
      - www.galaxy-forces-vr.com exists
      - Better/more content + the hi-scores

      Release
      - Images in different sizes for release on the Oculus Store
      - Gameplay promotion video
      - Test/fix it working on minimum required hardware

      The demo plus the todo list should give a picture of what this game will be, and help you decide if you want to join and if you have the skills needed. Hope to hear from you.
    39. Well, your function computes the minimum translation vector - which is nothing more than a vector you can apply to resolve a penetration in the most basic way. I don't know what you are making, but if this is enough for you, then it's totally fine. For example, for a simple game this may be totally fine. If not, check out Dirk's great post about contact generation, or other resources on this topic. Also, if you want to apply rotation to your bodies, then this system does not work at all - then you definitely need to deal with OBBs instead.
    40. Hello! I'm currently learning how depth testing works in OpenGL from these tutorials, and the tutorial says that "By default the depth function GL_LESS is used that discards all the fragments that have a depth value higher than or equal to the current depth buffer's value." If I assume that the depth value is the z coordinate that I pass through the vertex data, then the above statement should not be true: fragments with small z values should be discarded, because depth goes towards the -Z axis, not fragments with a higher z value. Are the depth values created in some other way from the z coordinate of the fragment? So if the depth value is a number from 0...N, and let's say a fragment has a depth value of 5 and the one behind it has 10, the one with 5 will pass the test?
    41. Kuxy

      SAT OBB-OBB - collision normal

      The normals are working perfectly, and now I feel pretty bad because it really is easy and intuitive. I've looked a bit into clipping and I'll do my best to make it work. After that I'll try the Gauss map optimisation that you showed in your talk. Thanks again for all the help and all the information you guys provide.
    42. babaliaris

      Lighting: Inside faces are getting lighted too?

      I have one more question. My lighting works great right now, but it doesn't have any logic for occlusion: if an object is in front of another and blocks the light source, the object behind it should not get any light. I understand why this is happening. For example, in my diffuse calculations you can see that I'm using the direction from the fragment towards the light source and the normal of the fragment in order to get the angle between them, and create the factor that is going to reduce or increase the light of the fragment based on that angle. But this considers only the current fragment and the light source, not other objects' fragments, so if an object is behind another and faces the light source, that face is going to be lit anyway, no matter how many objects are in front of it blocking the light. Is this an advanced topic in lighting? Should I wait? I'm following these tutorials; I just finished Model Loading and am heading to the Advanced OpenGL section.
    43. suliman

      MMORPG Brilliant Game Idea.

      I'm sorry, but such a vague idea is hardly brilliant. Brilliance is finding something unique and making it work - often something simple that stands out or works better than something that already exists. Your "idea" comes off as just saying "put all game ideas into one super large game, so all players will like the game, since all players like SOME kind of game". Compare to: many players like Call of Duty, many players like FIFA 18, so make a game that includes the mechanics of both in the same game. You will double your player base! In theory that could work. In reality that is a VERY bad idea. Games need to be polished and work well. Add too much different stuff and you'll just end up with lots of boring or half-completed pieces of the puzzle. We dream in theory. We make games in reality (sadly!)
    44. babaliaris

      Lighting: Inside faces are getting lighted too?

      Oh, I understand now. So the logic is to use multiple cubes to build the house, right? Not using one big cube and seeing the inside of it.
    45. Hi everyone, it's been a long time coming, but we've finally released on Itch.io and Steam. We are in Early Access at the moment, but we've just added update #3 and are well on schedule to release fully in March. We're a small team - there's just the two of us, myself and a programmer called Dazza. This really has been a labour of love, especially as this is pretty much the fourth year of development (not full time, though), and we're really happy to have got this far. What is it? I hear you ask. Well, Smith and Winston is a voxel-based twin stick shooter with a focus on exploration, with fully destructible hand-crafted levels and fun puzzles, along with engaging combat against a variety of enemies and bosses, and an assortment of items and achievements to collect. We think it's a lot of fun, and from feedback with players, they agree. Smith and Winston can be found here on Itch.io. You take on the role of Smith or Winston (or both, as we have local co-op planned), two hapless adventurers, as you explore a shattered ring world and uncover its secrets, fight invading aliens and prevent the impending doom of the star system. Only you can save the world from the evil that's approaching, and perhaps ultimately the galaxy itself... or not. Smith and Winston can be found here on Itch.io. Also head over to our TIGSource devblog if you would like to see how the game has developed over the years. And if you have any questions, please ask away.
    46. I left some forum pages open for several days. Most of them I left open as background tabs without touching them. Today, when I looked at the memory usage of those tabs, it was hundreds of megabytes per page, with the heaviest pages taking over 2 gigabytes. Closing any of those tabs and opening the same URL again results in normal memory usage (<100MB in this case). This is a Windows 10 PC with Google Chrome displaying the pages. I didn't have any similar leaking problems with other sites I leave open, and I have quite a few of them.
    47. The word "engine" gets thrown around too much these days. And every time someone says it, a bunch of people show up to argue about what an engine is and why you shouldn't make one. (case in point: this topic) It's like they do it on purpose. My advice to you is to never use the word "engine" if you can help it. You will get a much better discussion that way. In your case, just say that you want to make "a renderer". That should get you more useful responses. And it seems to be closer to what you really want to do.
    48. GameDev2017

      BIZARRE Sprint 33 plus 34

      Happy new year! 🙂 The new sprint no. 34 is published today. Let's see what we did during the last two sprints, 32 and 33. First of all, we finished texturing the first game scene, Clearwater's apartment. All furniture is modelled and textured, placed and polished, as you can see below. We put more effort into the park design opposite Clearwater's home: we placed benches, trees and lanterns in the park and made them all look good. The metro is running regularly and the little fountain is built up, but what we still miss is the grass. We also installed lamps, lights and bulbs in the houses around Clearwater's apartment, so his neighbors look a bit livelier. Plus, we worked on Clearwater's gunplay behavior. He is now able to pull his gun when a button is pressed, he can shoot as long as his magazine isn't empty, he reloads when the magazine is empty (in case he has another one), and he stops shooting when all bullets are fired. You can see an extract in this video (#WIP). We implemented all the gunplay behavior in C++ but also in Blueprint. We blended various animations together, so that he can walk and aim, shoot and switch fluently. Clearwater's gunplay is still unfinished; we still need to do a looot of things. #WIP I wrote dream no. 12 and felt that I had forgotten everything I wrote before, which is why I decided to re-read the whole book, hoping I'll like it and can easily write dream no. 13. With dream no. 13 the writing is finished and the book is completed. The #WORDSproductions team is currently reading the book too, at least all released chapters I felt OK with. The book BIZARRE Episode I is pretty thick, isn't it? Our plans for the current sprint:
      - Improve Clearwater's gunplay (+ appearance)
      - Improve Clearwater's walking and running animations
      - Improve Clearwater's facial animations
      - Implement a shoot button and a gun status bar (UI)
      - Adapt the collision boxes in Clearwater's apartment
      - Furnish Clearwater's terrace on the first floor
      - Implement a stop-walking button and model a proper animation for when he touches a collision
      - Re-read and complete the book BIZARRE Episode I (me)
      - Texture the metro and some stuff in Clearwater's world-outside environment
      It's a damn long way to our new gameplay video, as everything is built up but nothing is 100% completed, let alone perfect. I meditate now.
    49. You give your house walls thickness. So you have triangles on the outside wall (which are front-facing) and you have triangles on the inside wall (which are different triangles, and also front-facing). If you want to be able to enter a house, it should have thick walls anyway. I wanted to find a nice image to demonstrate this, but best I could find was this: In this image, the "house" is not a cube. Instead, there are 4 cuboids and the "inside" is outside, but between them. Every triangle there is front-facing, and you never have to go inside a cuboid.
    50. Of course hardware optimisation (which can give a 10x speed-up of computations) is very important where it can work together with algorithmic optimisation (which can reduce the required calculations by factors of tens or hundreds of thousands). For example, for collision detection, even non-cache-friendly space partitioning algorithms will be many times faster than ideally hardware-optimized each-against-each tests. And collision prediction algorithms are much faster and more accurate than any collision detection algorithm, because they only recalculate something for the colliding objects, and only at collision time, instead of testing every object for collisions every frame. So it is again mathematical/physical tricks, covered by a thin layer of hardware optimisation tricks where possible.
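      To make the algorithmic point concrete, here is a toy sketch of a uniform-grid broad phase in C++ (helper details assumed or omitted): the expensive pair test then only runs for objects that share a cell, instead of for every pair.

      #include <cmath>
      #include <cstdint>
      #include <unordered_map>
      #include <vector>

      // Pack the 2D cell coordinates of a position into a single map key.
      uint64_t CellKey(float x, float y, float cellSize)
      {
          int32_t cx = (int32_t)std::floor(x / cellSize);
          int32_t cy = (int32_t)std::floor(y / cellSize);
          return ((uint64_t)(uint32_t)cx << 32) | (uint32_t)cy;
      }

      // cell key -> indices of the objects whose centers fell into that cell;
      // only objects mapped to the same cell are passed to the narrow phase.
      std::unordered_map<uint64_t, std::vector<int>> grid;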