
All Activity


  1. Past hour
  2. Hello Brody, sadly I already committed myself to another project and made myself pretty indispensable, and I won't let them down. I hope you find someone, because your game seems pretty neat. With best wishes, Ansgar
  3. tamlam

    Change Animation in OpenGL

    Thanks for the comment. I tried, but it did not work. The problem is rolling a cube along its edge. As in the YouTube link, there are three steps: 1. translate the cube until the contact edge for rolling lies on the Y axis; 2. rotate the cube 90 degrees around the Y axis in the new coordinates; 3. translate the rotated cube to its new position. When animating, only the rotation should be visible, but OpenGL has to perform all three steps. How can I show only the rotation when it has to pass through these three steps?
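One way to see this, sketched in plain Python rather than OpenGL (the helper and the edge position are hypothetical, and the rotation axis is assumed parallel to Y through the contact edge): compose all three steps into a single pivot transform each frame and animate only the angle. The translations are constant, so the visible motion is just the rotation about the edge.

```python
import math

def roll_about_edge(pivot, point, deg):
    """Rotate `point` by `deg` degrees about the axis parallel to Y
    that passes through `pivot`.  The three steps (translate the edge
    to the axis, rotate, translate back) collapse into one operation,
    so on screen only the rotation is visible."""
    px, py, pz = pivot
    x, y, z = point
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    # Step 1: translate so the contact edge sits on the Y axis.
    dx, dz = x - px, z - pz
    # Steps 2 and 3: rotate about Y, then translate back to the pivot.
    return (px + c * dx + s * dz, y, pz - s * dx + c * dz)

# Animate by re-evaluating the composite at an increasing angle each frame:
corner = (0.0, 0.0, 0.0)
edge = (1.0, 0.0, 0.0)   # hypothetical contact-edge position
frames = [roll_about_edge(edge, corner, a) for a in (0, 30, 60, 90)]
```

The same idea in matrix form is M(t) = T(pivot) * Ry(angle(t)) * T(-pivot): rebuild M every frame with the interpolated angle instead of animating the three steps separately.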
  4. Today
  5. Psychopathetica

    Specular Lighting seems Backwards

    Sure, I can post some code and screenshots. It went through lots of changes over time, though, since I added cubemapping for reflection and other goodies. But I can't for the life of me figure out why the specular lighting is backwards. I know the normals are correct, because the objects I created in Autodesk 3D Studio Max show the normals pointing out of the object.

    Vertex Shader:

```glsl
#version 300 es

uniform mat4 u_model_matrix;
uniform mat4 u_mvp_matrix;
uniform vec3 u_camera_position;

layout (location = 0) in vec4 a_position;
layout (location = 1) in vec4 a_color;
layout (location = 2) in vec2 a_texturecoordinates;
layout (location = 3) in vec3 a_normal;

out vec2 v_texturecoordinates;
out vec4 v_color;
out vec3 v_normal;
out vec4 v_position;
out vec3 v_model_normal;
out vec3 v_model_position;
out vec3 v_model_camera_position;

void main()
{
    v_color = a_color;
    v_texturecoordinates = a_texturecoordinates;
    gl_Position = u_mvp_matrix * vec4(vec3(a_position), 1.0);
    v_normal = a_normal;
    v_position = a_position;
    v_model_position = vec3(u_model_matrix * vec4(vec3(a_position), 1.0)).xyz;
    v_model_normal = vec3(u_model_matrix * vec4(a_normal, 0.0));
    v_model_camera_position = vec3(u_model_matrix * vec4(u_camera_position, 1.0)).xyz;
}
```

    Fragment Shader:

```glsl
#version 300 es
precision highp float;

// Texture units
uniform sampler2D u_textureunit;
uniform samplerCube u_cubemap_unit;

// Texture enabled
uniform int u_texture_enabled;
uniform int u_cubemap_enabled;

// Matrices
uniform mat4 u_view_matrix;
uniform mat4 u_model_matrix; // model: local and world matrices combined
uniform mat4 u_model_view_matrix;

// Light enabled
uniform int u_light_enabled;

// Light colors
uniform vec4 u_ambient_color;
uniform vec4 u_diffuse_color;
uniform vec4 u_specular_color;

// Enabled light types
uniform int u_ambient_enabled;
uniform int u_diffuse_enabled;
uniform int u_specular_enabled;

// Light intensities  // TODO: diffuse intensity
uniform float u_specular_intensity;

// Positions
uniform vec3 u_object_position;
uniform vec3 u_camera_position;
uniform vec3 u_light_position;

// Angles
uniform vec3 u_camera_angle;
uniform vec3 u_light_direction;

// Selectable color
uniform vec4 u_RGBA;

// Enable two-sided lighting
uniform int u_two_sided_enabled;

// Reverse reflection
uniform int u_reverse_reflection;
uniform int u_invert_normals;
uniform int u_invert_x_normal;
uniform int u_invert_y_normal;
uniform int u_invert_z_normal;

// Light type
uniform int u_light_type;

in vec2 v_texturecoordinates;
in vec4 v_color;
in vec3 v_normal;
in vec4 v_position;
in vec3 v_model_normal;
in vec3 v_model_position;
in vec3 v_model_camera_position;

out vec4 color;

vec4 ambient_color;
vec4 diffuse_color;
vec4 specular_color;
mat4 view_matrix;
mat4 model_matrix;
mat4 model_view_matrix;
vec3 model_position;
vec3 model_camera_position;
vec3 model_light_position;
vec3 model_normal;

// Not used
//////////////////////////////////
vec3 modelview_position;
vec3 modelview_camera_position;
vec3 modelview_light_position;
vec3 modelview_normal;
//////////////////////////////////

float diffuseFactor;
float specularityFactor;
vec3 finalLitColor;
vec3 linearColor;
vec3 gamma = vec3(1.0 / 2.2);
vec3 hdrColor;
vec3 toneMap;
vec3 normal;
vec3 camera_world_position;
vec3 object_world_position;
vec3 light_world_position;
vec3 light_direction;

void ambientLight()
{
    if (u_ambient_enabled == 1) {
        ambient_color = u_ambient_color;
    } else {
        ambient_color = vec4(1.0, 1.0, 1.0, 1.0);
    }
}

void diffuseLight()
{
    if (u_diffuse_enabled == 1) {
        // Use only modelNormal, not modelViewNormal.
        // Reason is that it will change colors as you move the camera around,
        // which is not a real-world scenario.
        // Observations:
        // normal = normalize(v_model_normal) means the values of the normal never change.
        // For example, when the object rotates, the light colors are stuck on
        // that side of the object!
        diffuseFactor = max(0.0, dot(normal, light_direction));
        diffuse_color = clamp(vec4(diffuseFactor * u_diffuse_color.rgb, 1.0), 0.0, 1.0);
    }
}

void specularLight()
{
    if (u_specular_enabled == 1) {
        // Use model_position, not vec3(v_model_position). When the object rotates, once the
        // normals point the other way, specular light disappears! With model_position, the
        // light stays on them at least the whole way round.
        // The reflect() function does this formula: reflect(I, N) = I - 2.0 * dot(N, I) * N
        vec3 reflect_direction = reflect(-light_direction, normal);
        vec3 camera_direction = normalize(camera_world_position - object_world_position);
        float cos_angle = max(0.0, dot(camera_direction, reflect_direction));

        specularityFactor = 0.0;
        if (diffuseFactor >= 0.0) {
            specularityFactor = pow(cos_angle, u_specular_intensity);
        }
        specular_color = clamp(vec4(vec3(u_specular_color) * specularityFactor, 1.0), 0.0, 1.0);
    }
}

void main()
{
    model_matrix = u_model_matrix;
    mat4 TI_model_matrix = transpose(inverse(u_model_matrix));
    model_position = vec3(model_matrix * vec4(vec3(v_position), 1.0));
    model_camera_position = vec3(model_matrix * vec4(u_camera_position, 1.0));
    model_light_position = vec3(model_matrix * vec4(u_light_position, 1.0));
    model_normal = normalize(vec3(TI_model_matrix * vec4(v_normal, 0.0)));

    if (u_invert_normals == 1) {
        model_normal = model_normal * vec3(-1.0, -1.0, -1.0);
    }
    if (u_invert_x_normal == 1) {
        model_normal = model_normal * vec3(-1.0, 1.0, 1.0);
    }
    if (u_invert_y_normal == 1) {
        model_normal = model_normal * vec3(1.0, -1.0, 1.0);
    }
    if (u_invert_z_normal == 1) {
        model_normal = model_normal * vec3(1.0, 1.0, -1.0);
    }

    object_world_position = model_position;
    camera_world_position = model_camera_position;
    light_world_position = u_light_position;
    normal = model_normal;

    vec4 texture_color0 = texture(u_textureunit, v_texturecoordinates);

    /* reflect ray around normal from eye to surface */
    vec3 incident_eye = normalize(camera_world_position - object_world_position);
    vec3 reflect_normal = model_normal;
    vec3 reflected = reflect(-incident_eye, reflect_normal);
    reflected = vec3(inverse(u_view_matrix) * vec4(reflected, 0.0));
    vec4 texture_color1 = texture(u_cubemap_unit, reflected);

    if (u_light_enabled == 1) {
        if (u_light_type == 0) {
            // Point light
            light_direction = normalize(light_world_position - object_world_position);
        } else if (u_light_type == 1) {
            // Directional light
            light_direction = normalize(u_light_direction);
        } else if (u_light_type == 2) {
            // TODO: spot light
            light_direction = vec3(0.0);
        }

        ambientLight();
        diffuseLight();
        specularLight();

        finalLitColor = vec3(1.0, 1.0, 1.0);
        if (u_light_type == 0) {
            // Point light
            float dist = distance(light_world_position, object_world_position);
            float attenuation_constant = 1.0; // Infinite light emission for now.
            float attenuation_linear = 0.0;
            float attenuation_exp = 0.0;
            float attenuation = 1.0 / (attenuation_constant
                                       + attenuation_linear * dist
                                       + attenuation_exp * dist * dist);
            linearColor = vec3(u_ambient_color.r + attenuation * (diffuse_color.r + specular_color.r),
                               u_ambient_color.g + attenuation * (diffuse_color.g + specular_color.g),
                               u_ambient_color.b + attenuation * (diffuse_color.b + specular_color.b));
            vec3 gamma = vec3(1.0 / 2.2);
            finalLitColor = pow(linearColor, gamma);
        } else {
            // Directional light
            linearColor = vec3(u_ambient_color.r + diffuse_color.r + specular_color.r,
                               u_ambient_color.g + diffuse_color.g + specular_color.g,
                               u_ambient_color.b + diffuse_color.b + specular_color.b);
            vec3 gamma = vec3(1.0 / 2.2);
            finalLitColor = pow(linearColor, gamma);
        }

        if (u_texture_enabled == 1 && u_cubemap_enabled == 1) {
            // In case you forget how to add a toneMap over time...
            // gl_FragColor = vec4(toneMap.rgb, texture(u_textureunit, v_texturecoordinates).a) *
            // Problem is that it is not that colorful.
            // It's a dull, Resident Evil greyish world then.
            color = vec4(texture_color0.r + texture_color1.r,
                         texture_color0.g + texture_color1.g,
                         texture_color0.b + texture_color1.b,
                         texture_color0.a + texture_color1.a)
                  * vec4(v_color.r * finalLitColor.r * u_RGBA.r,
                         v_color.g * finalLitColor.g * u_RGBA.g,
                         v_color.b * finalLitColor.b * u_RGBA.b,
                         v_color.a * u_RGBA.a);
        } else if (u_texture_enabled == 1 && u_cubemap_enabled == 0) {
            color = vec4(texture_color0.r, texture_color0.g, texture_color0.b, texture_color0.a)
                  * vec4(v_color.r * finalLitColor.r * u_RGBA.r,
                         v_color.g * finalLitColor.g * u_RGBA.g,
                         v_color.b * finalLitColor.b * u_RGBA.b,
                         v_color.a * u_RGBA.a);
        } else if (u_texture_enabled == 0 && u_cubemap_enabled == 1) {
            color = vec4(texture_color1.r, texture_color1.g, texture_color1.b, texture_color1.a)
                  * vec4(v_color.r * finalLitColor.r * u_RGBA.r,
                         v_color.g * finalLitColor.g * u_RGBA.g,
                         v_color.b * finalLitColor.b * u_RGBA.b,
                         v_color.a * u_RGBA.a);
        } else {
            color = vec4(v_color.r * finalLitColor.r * u_RGBA.r,
                         v_color.g * finalLitColor.g * u_RGBA.g,
                         v_color.b * finalLitColor.b * u_RGBA.b,
                         v_color.a * u_RGBA.a);
        }
    } else {
        // No light
        if (u_texture_enabled == 1) {
            color = texture(u_textureunit, v_texturecoordinates)
                  * vec4(v_color.r * u_RGBA.r, v_color.g * u_RGBA.g,
                         v_color.b * u_RGBA.b, v_color.a * u_RGBA.a);
        } else {
            color = vec4(v_color.r * u_RGBA.r, v_color.g * u_RGBA.g,
                         v_color.b * u_RGBA.b, v_color.a * u_RGBA.a);
        }
    }
}
```

    Now, with this current code, I get this on the front side... and this on the back side when I move the camera over. And here you can clearly see the light source, which is slightly behind the camera toward the left, assuming I was facing the front. As you can see, the specular light is backwards, hitting the opposite side of where it is supposed to be. But it's only backwards for the Z and Y directions, not the X. Besides being backwards on the Z side, when the object is above the light source, the light hits the top of the object (it's supposed to hit the bottom). When it's below the light source, the light hits the bottom of the object (it's supposed to hit the top). However, the X direction is correct, which is strange.

    So in my specularLight() function, where I'm reflecting:

```glsl
vec3 reflect_direction = reflect(-light_direction, normal);
```

    if I take away the negative light direction and make it positive, I get this: Which doesn't make sense, because now the math is backwards. But I get what seems to be correct results... sort of. Keyword: seems. Now this is OK for the Mario guy and the sphere in the background, and seems legit from all sides, positions, and angles, but when I use a tubular object such as the ship in the image, or a flat polygonal quad, the specular light now goes the wrong way in the X direction. Take a look at what happens when I move the ship to the left of the light: And the same is true vice versa. I don't know if the code written in those articles is for left-handed coordinate systems or what, but I'm using right-handed, and OpenGL by default is right-handed. So I'm completely confused as to why it's backwards, whether just in the X direction or in the Y and Z directions. Hopefully I was clear enough there.
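As an aside on the point-light branch in the shader above: the distance attenuation and the final gamma step are plain arithmetic and can be sanity-checked on the CPU. A minimal Python sketch of the same two formulas (helper names are made up, not part of the shader):

```python
def attenuation(dist, constant=1.0, linear=0.0, quadratic=0.0):
    # Same falloff the shader uses: 1 / (c + l*d + q*d^2).
    # With linear = quadratic = 0 the result is always 1.0, matching
    # the "infinite light emission for now" comment in the shader.
    return 1.0 / (constant + linear * dist + quadratic * dist * dist)

def to_display(linear_rgb, gamma=2.2):
    # The pow(linearColor, vec3(1.0 / 2.2)) step from the fragment shader:
    # raise each linear channel to 1/gamma before display.
    return tuple(c ** (1.0 / gamma) for c in linear_rgb)
```

With the default coefficients, attenuation is 1.0 at any distance, so turning on a nonzero linear or quadratic term is what actually makes a point light dim with range; the gamma step brightens mid-tones (for example, 0.25 in linear space maps to roughly 0.53 for display).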
  6. Yyanthire Studio


    Album for Moonrise
  7. TotoGuau

    I need ideas

    Hello friends. For a long time I have wanted to create a game or mobile application, since I have basic programming knowledge, but I have never had a particular idea. So I'm asking any creative mind who sees this post to help me decide what to do. It's my first post, so I don't know if there are rules for this; if there are, let me know. Goodbye
  8. Fulcrum.013

    A good 3D math library in C for OpenGL?

    There are two options for using a DLL: import separate functions from it, or import interfaces using COM. Both seriously affect performance, because any call to a DLL function is indirect, the same as a virtual call, and cannot be inlined. So the best way to use DLLs for pluggable modules is to break the system into large, closed subsystems that require inter-module calls very rarely. For example, put the physics engine in one DLL, the renderer in another, and the scene in the main module. As a result, you can export only a few functions from each DLL and call them once per frame. The internal implementation inside each DLL can use inlining, classes, and other advantages of C++ for anything that does not require inter-DLL calls.
  9. Wyrframe

    Specular Lighting seems Backwards

    Are you clamping the reflection angle to zero, so that light only reflects off the "outer" face of a shape? Post a minimum-reproduction code sample, post screenshots, and explain your observations and how they differ from the expected results.
  10. Have you considered this Single-pass Wireframe Rendering approach? It works by rendering filled triangles and calculating the outlines of the triangles in a fragment shader. The algorithm is pretty efficient; the major drawback is that it requires you to add an additional per-vertex attribute that stores the barycentric coordinate of the vertex. This barycentric coordinate is an input to your vertex shader, where you pass it through to the fragment shader to get the interpolated point on the triangle. Using this point you can derive the distance from the edge of the triangle; if the point is near an edge of the triangle, you color the fragment black (or whatever color you want your outlines to be). You should be able to find some tutorials for implementing this algorithm online.
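The per-fragment edge test described above can be sketched outside a shader. A minimal Python stand-in (names are made up; a real fragment shader would typically use smoothstep and screen-space derivatives to get constant-width lines):

```python
def wireframe_color(bary, width=0.02,
                    line=(0.0, 0.0, 0.0), fill=(1.0, 1.0, 1.0)):
    """Given the interpolated barycentric coordinate of a fragment,
    return the outline color near a triangle edge, else the fill.
    The smallest barycentric component is proportional to the distance
    from the nearest edge, so a value near zero means "on an edge"."""
    return line if min(bary) < width else fill
```

For example, a fragment with barycentric coordinate (0.0, 0.4, 0.6) lies exactly on the edge opposite the first vertex and gets the line color, while (0.3, 0.3, 0.4) is in the interior and gets the fill color.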
  11. GoliathForge

    GC : Explosive Balls (game play)

    Attached is an updated version. Hope to get a bow on it this next week. GDN_SideScroller_goliathForge_TEST_02.zip
  12. Hello, I'm currently searching for additional talented and passionate members for our team that's creating a small horror game. About the game: it will be a small sci-fi/post-apocalyptic survival horror 3D game with FPS (first-person shooter) mechanics and an original setting and story, based on a scene from a book I'm writing, in which a group of prisoners are left behind in an abandoned underground facility. It will play similarly to Dead Space combined with Penumbra and SCP: Secret Laboratory, with the option of playing solo or multiplayer. Engine that will be used to create the game: Unity. About me: I'm a music composer with 4 years of experience, fairly new to the game development world, and I'm currently leading the team that's creating this beautiful and horrifying game. I decided that making the book I'm writing into a game would be really cool, and I got more motivated about doing so some time ago when I got a bunch of expensive Unity assets for a very low price. However, I researched how to do things right in game development, so I reduced the scope as much as I could; that's why the game is based on a single scene of the book and not the entire thing. I'm also currently learning how to use Unity and how to program. Our team right now consists of: me (game designer, creator, music composer, writer), 3 3D modelers, 2 game programmers, 1 sound effect designer, 1 concept artist, 1 3D animator, and 1 community manager. Who I am looking for: a talented and passionate programmer who is experienced with Unity and C#. Right now the game is in mid-early development; you can see more information about it and follow our progress on our Game Jolt page here: https://gamejolt.com/games/devilspunishment/391190 . We expect to finish some sort of prototype 3 months from now.
    This is a contract rev-share position. If you are interested in joining, contributing, or have questions about the project, then let's talk. You can message me on Discord: world_creator#9524
  13. Basically I copied some code from https://www.tomdalling.com/blog/modern-opengl/07-more-lighting-ambient-specular-attenuation-gamma/ to do specular lighting, but I ran into an issue. The specular highlights are on the opposite side of the object. On top of that, it seems reversed not only on the Z side but on the Y side as well. If I make the incidence vector positive instead of negative, though, the X side of the specular highlight is reversed on flat surfaces facing the viewer, while on round 3D objects such as a person or a sphere it is perfectly fine. Like in this bit of code from the shader:

```glsl
vec3 incidenceVector = -surfaceToLight; // a unit vector
vec3 reflectionVector = reflect(incidenceVector, normal);
```

    I literally tried everything, such as flipping the normals of the object or flipping just the X normals, but it comes out wrong at certain angles. I'm using the right-handed system in OpenGL. Any help is appreciated. Thanks.
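Not a diagnosis of the bug, but the sign convention in that snippet can be checked numerically. The sketch below (plain Python, not GLSL) evaluates GLSL's reflect() formula for a light above a surface facing +Z. The negated form, reflect(-surfaceToLight, N), gives a reflection vector on the far side of the normal but still above the surface; the un-negated form gives one pointing below the surface. So if the un-negated form looks better on screen, that usually hints that some input (the normal, or one of the positions) is in a different coordinate space than intended, though that is an inference to verify, not a certainty.

```python
def reflect(i, n):
    # GLSL reflect(): R = I - 2 * dot(N, I) * N, where I points
    # toward the surface and N is assumed to be normalized.
    d = sum(a * b for a, b in zip(n, i))
    return tuple(a - 2.0 * d * b for a, b in zip(i, n))

n = (0.0, 0.0, 1.0)                  # surface normal
surface_to_light = (0.6, 0.0, 0.8)   # unit vector, light above the surface

r_neg = reflect(tuple(-a for a in surface_to_light), n)  # reflect(-L, N)
r_pos = reflect(surface_to_light, n)                     # reflect(L, N)

# Project each result onto the normal: positive means "above the surface".
above_neg = sum(a * b for a, b in zip(r_neg, n))  # +0.8: above, as expected
above_pos = sum(a * b for a, b in zip(r_pos, n))  # -0.8: into the surface
```

Comparing dot(camera_direction, R) against these two vectors for a camera above the surface shows why the two variants place the highlight on opposite sides.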
  14. Sounds interesting. Do you have a blog or website? I'm way into the planetary stuff: simplex noise, run-time smooth terrain generation with voxels, JIT terrain for collision, rebasing the origin on the GPU, etc. ... but unfortunately I'm fully committed to my own C++ planetary engine at the moment. I wouldn't mind following you guys, though. Good luck!
  15. Haven't looked at your system yet because I don't have time right now, but it DOES remind me that I need to get the C# version of my entire IAUS up on the Unity Asset Store.
  16. bok!

    Bad Bunch - Announcement Trailer

    Dive-bomb into an alternate reality where air combat rages above land and sea. Red versus Blue. Choose your color, launch your airframe, and fight to be the baddest bunch of Aces in the sky. Visit https://badbunch.net for more info.
  17. GoliathForge

    GC : Explosive Balls (game play)

    No worries.. that will give me a chance to put in an update, maybe tomorrow. Sound has improved a long way. I was missing on my sound release time and it made things muddy, but an accidental fix cleared it up. Sounds great now. Maybe not so accidental, but certainly a surprise. Re-arranged the three-bar looping music; it's groovy now instead of just noise. Added difficulty by scaling time. It's pretty fast; some collision types ghost through at the faster speeds, so I need to break the step up relative to the ball radius (x2?) perhaps. Been playing it, trying to make it fun. Meh, not bad-ish. Still need to finish up the pipe configuration arrays. It's a play, add/smudge, play routine now with the driving data.
  18. Yesterday
  19. Rutin

    GC : Explosive Balls (game play)

    I'll give it a try in a few days and post back.
  20. Hmmm, maybe we'll get a couple of pets instead. We got it right the first time so no sense in rolling the dice a second and third time.
  21. JustinKase

    Battletech Developer Journal - 07

    Very glad you put in that randomization code for the drop timing; being able to vary an animation like that to provide a degree of randomness is another excellent touch on an already superb game. I can see where some might not like the more 40k-style drop pods over jet packs and chutes, but these definitely look amazing!! Maybe down the road for BT 2 they'll let you introduce variable drop types. Congrats on the HCT-3X!!!! I suppose you had better warn your wife that once the BT universe moves up to 3050 and the Clans, you will need to have 2 more kids for the HCT-5X
  22. I'm Chris Eck, and I'm the tools developer at HBS for the Battletech project. I've recently been given permission to write up articles about some of the things I work on, which I hope to post on a semi-regular basis. Feel free to ask questions about these posts or give me suggestions for future topics. However, please note I am unable to answer any questions about new/unconfirmed features.

    The last few weeks I've been in an unofficial bug-fixing contest. A bug-fixing contest is a lot like a pie-eating contest: the reward for fixing a bug is you get more bugs to fix. >.< I cleaned up some performance fallout, worked on updating region labels, and put some finishing touches on drop pods. Of those, drop pods are the coolest, so let's talk about that.

    Drop Pods

    The Urban environment is a lot more cramped than the open terrain maps of old. Tall skyscrapers sometimes prevent a dropship from coming in to drop off mechs. The dropships are so large that their wings will clip right through the buildings in the flyby animations, and that just won't do. Someone modeled a bomb-like drop pod. Someone else modeled the open drop pod. Will worked up a VFX and designed a ParticleSystem that had the flaming drop pod slamming into the ground and kicking up a huge cloud of debris (and built a separate one for each biome). Rob worked on new sound effects. And I wired it all up for the designers and spawned it during the game.

    First, I had to add a new SpawnMethodType so that designers could specify when a lance should spawn with this new animation. When Drop Pods is selected, the units need to plummet from the sky during spawn. I instantiate the prefab at the UnitSpawnPoint location and the VFX plays out. Here is the particle system in action.
    ParticleSystems don't have a method for communicating events, so I had to watch the VFX and write down when different things happened (when the pod hits the ground, when the cloud is big enough to hide the mech teleporting in, when the effect is over). After I had the numbers in constants for the different events, I wrote a Coroutine to wait the appropriate amount of time. A coroutine is like a function that has the ability to pause execution and return control to Unity, but then continue where it left off on the following frame.

    My first test had all the pods landing at the exact same time. I didn't like that, so I introduced another constant to put a delay between each one. I still wasn't satisfied, so I added a random delay on top of that. Now there won't be a regular pattern, and if two lances spawn drop pods at the same time there will be some variance instead of their units landing in the exact same pattern with the exact same timing. It's very jarring when that happens.

```csharp
private const float dropPodImpact = 1f;
private const float dropPodSpawnDelay = dropPodImpact + 2f;

public IEnumerator StartDropPodAnimation(float initialDelay, ParticleSystem dropPodVfxPrefab,
    GameObject dropPodLandedPrefab, Action unitDropPodAnimationComplete, int sequenceGUID)
{
    // Only do work if we actually have a unit to spawn.
    if (HasUnitToSpawn)
    {
        // Wait a random amount of time before starting
        float delay = Random.Range(.5f, 1.75f) + initialDelay;
        yield return new WaitForSeconds(delay);

        // Play the sound effect
        WwiseManager.PostEvent(AudioEventList_play.play_dropPod_projectile, WwiseManager.GlobalAudioObject);

        // And start the VFX
        if (dropPodVfxPrefab != null)
        {
            ParticleSystem instance = Instantiate(dropPodVfxPrefab, transform);
            instance.transform.position = hexPosition;
            instance.Play();
        }
        else
        {
            LogError("Null drop pod animation for this biome.");
        }

        // Wait until the drop pod hits, play the sounds, and kill whoever is standing in the spot
        yield return new WaitForSeconds(dropPodImpact);
        WwiseManager.PostEvent(AudioEventList_play.play_dropPod_impact, WwiseManager.GlobalAudioObject);
        yield return ApplyDropPodDamageToSquashedUnits(sequenceGUID);

        // Wait a bit more, teleport the units in, and spawn the landed drop pod.
        yield return new WaitForSeconds(dropPodSpawnDelay);
        TeleportUnitToSpawnPoint(dropPodLandedPrefab);

        // Wait a couple more seconds for the drop pod VFX to finish playing, then we can say we're done.
        yield return new WaitForSeconds(2f);
    }

    unitDropPodAnimationComplete();
}
```

    If you aren't familiar with how Coroutines work, this might look a little weird with all these yield returns. Basically, you're yielding control back to the caller and will continue doing work on future frames. Here's the Coroutine documentation if you're interested in learning more: https://docs.unity3d.com/Manual/Coroutines.html

    While testing, another thing I noticed is that we bunch our spawn points up, but the physical drop pods are pretty big, and they overlap each other like so. To give the designers some in-editor indication, I added some code to the UnitSpawnPoint "Gizmo" (an in-editor widget). Look up Unity's OnDrawGizmos for more information; it gives you ways to draw things in the editor's scene view. After stringing everything together, here's what it looks like.
    (in my super ugly all-flat test level). Also, I'm in the editor, so pay no attention to the lag spike. *jedi hand wave*

    There are definitely some Rule of Cool physics going on here. Any person inside the cockpit of that drop pod animation would be turned into strawberry jam, and the mech would be a mass of twisted metal and myomer. Drop pods are supposed to split up in the atmosphere, and then jump jets and parachutes are supposed to let the mech drift safely down. I brought this up as a concern, but I also said I wouldn't change anything after seeing the drop pod animations in the game. B)

    New Hatchetman Variant

    It was announced that we were releasing a few variants, and one of those is a brand new Hatchetman variant. I was tasked with statting it out, so I did some research, but most of the variants are way past the 3025 era. I saw the HCT-5K and that sounded pretty interesting, so I designed an earlier prototype version, the HCT-3K. There was some lore debate about whether a Kurita variant would be this far "south" in the periphery, so after the forum goers read about the 3X consideration they demanded that we change it. In lore, the X stands for experimental. But for me it stands for 3 Ecks (me, my wife, and my daughter). I can't wait for this to ship, because I can finally point at something concrete in the game and say I MADE THAT! Also, there's a small chance it will get added to Battletech canon, so it's yet another cool thing about #livingthedream.

    Blood Bowl Update

    In more casual news, I've been streaming my Wood Elves and finally won a couple of games. It's vastly different from my typical bashy-ork play style, but I feel like I'm learning. The league I joined, TRBBL (Totally Relaxed Blood Bowl League), has a cool bunch of laid-back coaches in it. Season 3 will be starting soon, and I plan to stream those matches when I play them (roughly every other week). Lately I've been streaming sporadically, about once a week. Feel free to come hang out when I do: https://www.twitch.tv/eck314 or you can check out the matches that I upload on Youtube: https://www.youtube.com/watch?v=VQaN12XU5Wg&list=PL2M43bS2cfSMNvtzWDR0DxuQgh0Ky7yrA

    If you're interested in playing Blood Bowl 2, there's a big Steam sale on all Games Workshop related products. I think the Standard Edition costs about $5 and the Legendary Edition is around $15, which comes with all the teams. It's a good time to pick it up. https://store.steampowered.com/sub/192166/

    Links

    Previous Journal: https://www.gamedev.net/blogs/entry/2267220-battletech-developer-journal-06/
    Next Journal: Stay tuned!
    Twitter Post: https://twitter.com/Eck314/status/1130221812179185664
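Unity's C# coroutines won't run outside the engine, but the pause-and-resume control flow generalizes. As an analogy only, the same drop-pod sequence can be sketched with a Python generator, where each yield hands a wait duration back to a scheduler, just as yield return new WaitForSeconds(...) hands control back to Unity. The timing constants mirror the snippet in the journal; the scheduler here is a toy that simply totals the waits.

```python
def drop_pod_sequence(initial_delay, impact_time=1.0, spawn_delay=3.0):
    # Each yield means "wait this many seconds, then resume here".
    yield initial_delay   # staggered, randomized start per pod
    # ...play projectile sound, start VFX...
    yield impact_time     # wait for the pod to hit the ground
    # ...play impact sound, squash anything underneath...
    yield spawn_delay     # the dust cloud hides the mech teleporting in
    # ...teleport the unit in, spawn the landed pod...
    yield 2.0             # let the VFX finish before reporting done

def run(seq):
    # Toy scheduler: a real one would sleep between steps; summing the
    # waits gives the total length of the sequence in seconds.
    return sum(seq)

total = run(drop_pod_sequence(0.5))
```

With an initial delay of 0.5 s and the defaults above (impact 1 s, spawn delay 1 + 2 = 3 s, 2 s tail), the whole sequence spans 6.5 seconds, which is the kind of bookkeeping the constants-plus-coroutine approach makes easy to reason about.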
  23. VoxycDev

    Dynamic lighting in Fateless

    When I put out the desktop/VR version, the controls will not be showing. For now, too much work/cost to remove them (need to develop support for a Bluetooth gamepad controller and buy a new phone). The Android screen recorder app I use won't record internal audio (it's not allowed by Google), so unfortunately I have to mic the sound from the tablet's speaker. Yeah, the shadows were working on Windows and Mac OS at one time and they were written in desktop OpenGL (GLSL 330), so most of the code won't work on mobile. I tried to bring them back at one time, unsuccessfully, and gave up. There is a huge gap between the walls and the shadows they cast. I played with the math, but could not solve it. Will definitely try again sometime. Then there is a matter of porting that to OpenGL ES. I'm guessing for now my time is better spent on more dynamic lighting, doors, keys, more levels, weapons, monsters. What do you think? Also, what about driving tanks? Especially to fight the big mechs. That could be fun. I already wrote most of the tank driving code for a different game, so it wouldn't be that hard to plug that in.
  24. I'm interested. I would definitely like to contribute to your project. I am a working professional so I'll not be available full time. I have 2-years+ Programming experience in C++/Python. I'm a complete novice and have just begun in Game Programming. I have experience with AI (Reinforcement Learning in particular) as well.
  25. Second Life, which is halfway between a game and a game engine, supports a pathfinding system. It combines a reasonably good path planner with a terrible path follower. I've been bugging Linden Lab about this for a while. The bugs are not fixed yet. So I made a video. This is Second Life pathfinding under overload conditions in a hard situation. The video is to demonstrate to Second Life developers what needs to be fixed. In open ground, not under overload, pathfinding performs much better. But it's never been reliable enough for wide use. The NPC won't reach its goal. This is why there are few moving NPCs in Second Life.

    So I've been trying to work around the bugs and make better NPCs. I wrapped recovery code around the pathfinding operations to restart them when they give up or just stop working, do a bit of random motion when they get stuck, and take recovery action if they get out of bounds. The built-in pathfinding system is supposed to handle all that. It doesn't.

    Pathfinding in Second Life has a reasonably good planner, and you can see its waypoints with a dev tool. That part is not bad, except that it clips corners and causes unnecessary collisions. It also sometimes generates near-coincident waypoints, which confuses the path follower. Then there's the execution of the path. That works through the physics engine. For physics purposes, the NPC is a capsule, always upright. The path follower gives it a velocity vector and starts it on its way. Collisions, including collisions with ground surfaces, can get it off course. The NPC continues to move in the direction indicated until the pathfinding system does another cycle and provides a new movement vector. Under overload, there's lots of overshoot, of course. That's the cause of most of the trouble. We can slow down the movement under overload, which helps.

    (As a user, I don't have access to the simulator code, or I'd try to do something about this. The client is open source C++, and I've put fixes into that. Not fun. Little internal documentation, few comments, and not enough separation of policy and machinery. I'm told that the simulator code is equally bad. Second Life remains the main big, shared, persistent, user-modifiable world that works, and I like the options that offers. The SpatialOS people have made a lot of claims that they can do all this better, and have spent an insane amount of money on it, but so far nothing has shipped that demonstrates it. Nostos just slipped to October 2019.)
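The recovery-wrapper idea described above can be sketched in a few lines. This is a toy 1-D model with a hypothetical interface, not a Second Life API: the follower sometimes freezes mid-path, and the wrapper detects the lack of progress and replans instead of trusting the follower to finish.

```python
class FlakyFollower:
    """Toy 1-D stand-in for an unreliable path follower: it moves
    halfway to the goal each tick, but freezes on the ticks listed in
    `stalls`.  Purely illustrative; not any real engine's interface."""
    def __init__(self, position, stalls=()):
        self.position, self.goal = position, 0.0
        self.tick, self.stalls = 0, set(stalls)

    def path_to(self, goal):
        self.goal = goal

    def step(self):
        self.tick += 1
        if self.tick in self.stalls:
            return True   # froze: claims to be working, made no progress
        self.position += (self.goal - self.position) * 0.5
        return abs(self.position - self.goal) > 0.005  # False when arrived

def navigate(agent, goal, max_restarts=5, stuck_eps=1e-6):
    # Recovery wrapper: watch for lack of progress and restart the
    # plan, rather than assuming the follower will reach the goal.
    for _ in range(max_restarts):
        agent.path_to(goal)
        last = None
        while agent.step():
            if last is not None and abs(agent.position - last) < stuck_eps:
                break   # stuck: abandon this run and replan
            last = agent.position
        if abs(agent.position - goal) <= 0.005:
            return True
    return False
```

A real wrapper would also add the random nudge and out-of-bounds checks mentioned above; the essential pattern is the outer retry loop around an inner progress watchdog.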
  26. Our project is a sci-fi game that takes place 50 years in the future, as humanity takes its first steps onto the galactic stage. It takes elements from a few game genres but is primarily a 4X-RPG. We currently have progress in the Metagame and Space Combat areas and are looking to expand our programmer team to fill out some other areas. Ideal applicants are highly self-motivated and have plenty of free time. Unity, 3D, C#.

    Gameplay Programmer: if you're interested in spaceship, infantry, and vehicle combat, as well as tactics and strategy.

    Planetary Programmer: if you're interested in procedural planets, floating origins, double-precision coordinate systems, and relative scientific accuracy.

    Volumetric Programmer: if you're interested in gorgeous volumetric effects for engine trails, planetary accretion disks, and more.

    If one or more of these positions interests you, feel free to contact me on Discord at mushroomblue#0384
  27. Are you sure you need this? Images of that dimension easily run into the gigabytes and might choke up the average computer. You also likely want to use compression on such a thing, which kind of eliminates the 'easy' options. Not sure what you're aiming at but it's probably more manageable if you cut it up in sections and use a library that works with a more common format (TGA is rather old and superseded). Decoding file formats is a nice exercise but usually a waste of time.
  28. Unfortunately, the Targa TGA format only supports widths and heights up to 65535 pixels (or, 5.55 metres at 300 pixels per inch), stored as a couple of short unsigned ints in the TGA header data. Do you have experience with a simple format that doesn't have such a limitation?
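For illustration, those 16-bit fields are easy to see by decoding the header directly. A Python sketch (field offsets per the TGA specification: width and height are little-endian unsigned shorts at byte offsets 12 and 14; the helper name is made up):

```python
import struct

def tga_dimensions(header):
    # TGA stores width and height as little-endian unsigned shorts at
    # byte offsets 12 and 14 of the 18-byte header, hence the hard
    # 65535-pixel ceiling on each dimension.
    width, height = struct.unpack_from("<HH", header, 12)
    return width, height

# Build a minimal 18-byte header claiming the maximum possible size:
header = bytearray(18)
struct.pack_into("<HH", header, 12, 65535, 65535)
```

Trying to pack 65536 into an unsigned short raises struct.error, which is exactly the wall the format puts up; formats with 32-bit dimension fields (PNG, TIFF, OpenEXR) avoid it, at the cost of the compression machinery mentioned above.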