
About Axiverse


  1. I'm wondering when upload buffers are copied to the GPU. Basically, I want to pool buffers, and I need to know when it's safe to reuse one and write new data into it.
  2. Axiverse

    Open Source Direct3D 12 Game Engine?

    @237cookies I'm currently working on building a DirectX 12 game engine as well, possibly a bit further along: it renders 3D models and has a 2D UI. I'm looking for co-working buddies or collaborators, and would be happy to chat if you're interested. I generally find that it's more motivating when you have others to talk to who are working towards similar goals. I've been looking at a lot of game engines; many abstract the hardware layer (DX/GL) from the core of the engine itself. For open source, Xenko is the one I've been looking at. The DirectX 12 samples include MiniEngine (https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine), and Unreal Engine 4's source is available (you just have to register on their website). In relation to this, I recently posted:
  3. Hey everyone, I'm looking for people who would be interested in working on a hobby game engine and server. I want to build a game engine in order to get a good end-to-end understanding of all the components, as well as a distributed entity-component-system backend (think SpatialOS). I'm wondering if there are other proficient developers who are also interested in understanding and building this system from scratch, learning from and building on the best practices of existing game engines. I'm open to collaborating on the same engine, or to people working together but on their own engines. I'm open to the engine being open source, and possibly to providing some compensation depending on the circumstances.

In conjunction with the engine, I'm also aiming to build a lightweight space econ/battle simulation game, much in the style of Eve Online. I would like this to be developed into a released game, though that would be further in the future.

My background: I've been in the game dev community for quite a while now, with a pretty broad range of knowledge. During the day I'm an engineer at Google, having worked on large projects at Amazon and Microsoft in the past, so I have experience with large-scale systems and projects, and a decent background in building scalable and distributed systems.

Technology stack: C#, DirectX 12 via SharpDX, Protobuf & gRPC for networking.

Current progress: I have a working renderer which renders OBJ models via a (primitive) scene graph. There is a 2D UI framework built on top of Direct2D using the DirectX 11 interoperability layer. Next steps include fleshing out the UI system as well as implementing deferred rendering, physics, etc.

Interested? I'm partly gauging interest at this point. If you are, please let me know what your background and skill sets are, as well as which particular areas you're interested in. Thank you! Aaron
  4. Axiverse

    Need Help Choosing Art Style

    I agree that the bottom one looks more polished, but I think that's because you only use black for the line art in the top screenshot. If you played with the colors in the line art, it could look quite good. Look around at other examples: usually the line art is a dark color, but there are cases where it can be a very light color against a dark background, etc. It can also be used to give the art more character. If you can get that to work, I'd go with the top; otherwise, the bottom.
  5. So, I've found that a PID controller solves what I'm trying to do. This thread seems to be a pretty good starting point. I'm still trying to figure out how to handle velocity, as I'm shooting through the target at max velocity at the moment. https://forum.unity.com/threads/spaceship-control-using-pid-controllers.191755/
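To illustrate why the derivative term stops the overshoot described above, here's a minimal 1-D sketch in Python (hypothetical setup; the gains kp/kd and time step are made-up values, not from the thread):

```python
# Minimal 1-D PD controller sketch: P pulls toward the target, D damps
# velocity so the ship doesn't fly through the target at max speed.
# (Hypothetical gains; kd = 2*sqrt(kp) gives critical damping.)

def simulate(target, kp=4.0, kd=4.0, dt=0.01, steps=2000):
    x, v = 0.0, 0.0
    for _ in range(steps):
        accel = kp * (target - x) - kd * v  # P on position error, D on velocity
        v += accel * dt
        x += v * dt
    return x

final = simulate(10.0)
print(final)  # settles near 10.0 with no overshoot
```

With kd set to zero, the same loop oscillates through the target forever, which is exactly the "shooting through at max velocity" symptom.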
  6. I'm working on a spaceship simulation and am implementing autopilot. Setting the right torque to turn the ship's heading toward the target direction has been tricky. The parts I'm not sure about are how to properly translate the quaternion difference into torque, and how to do it while ignoring the roll of the ship. When I set the target to an identity quaternion, it seems to align just fine, but with anything else it goes haywire. The simulation starts with a random normalized quaternion. The steps I'm currently taking are:

1. Compute the target heading (forward is unit Y; I want it to point to unit X, which gives {[ 0, 0, -0.7071068, 0.7071068 ]} in xyzw).
2. Compute the delta orientation (I don't know how to tell it to ignore roll, or to choose the closest roll, with this rotation).
3. Convert it to Euler angles.
4. Subtract the current angular velocity from the Euler velocity; the Euler velocity is the target we are aiming for.
5. Apply it as torque.

```csharp
Quaternion target = Quaternion.FromVectors(Vector3.UnitY, Vector3.UnitX); // Quaternion.Identity;
Quaternion current = Entity.AngularPosition;
Quaternion deltaOrientation = target * current.Inverse();
Vector3 euler = Quaternion.ToEuler(deltaOrientation);
Vector3 difference = euler - Entity.AngularVelocity;
//Quaternion deltaVelocity = deltaOrientation * Quaternion.FromEuler(Entity.AngularVelocity).Inverse();
//difference = Quaternion.ToEuler(deltaOrientation);
//Entity.AngularVelocity = new Vector3();

Entity.ResetForces();
Entity.ApplyTorque(difference);

// Primary thrusters
var heading = Entity.AngularPosition.Transform(Vector3.UnitY);
Entity.ApplyCentralForce(heading);

base.Step(delta);
Console.WriteLine($"\tNavigating {euler}\n\tHeading {heading}");
```

The integration code is based on Bullet Physics. I think the angular velocity is convertible to an Euler rotation, but I'm not sure. Also, Euler angles are generally clamped, which doesn't work here; since the equations I've come across all seem to expect the clamp, I don't know whether they work correctly with unclamped values.

```csharp
float angle = angularVelocity.Length();
Vector3 axis;
if (angle < 0.001f)
{
    // Use the Taylor expansion of the sinc function near zero.
    axis = angularVelocity * (0.5f * delta - (delta * delta * delta) * 0.020833333333f * angle * angle);
}
else
{
    axis = angularVelocity * ((float)Math.Sin(0.5f * angle * delta) / angle);
}
Quaternion deltaOrientation = new Quaternion(axis, (float)Math.Cos(0.5f * angle * delta));
angularPosition = angularPosition * deltaOrientation;
angularPosition = angularPosition.Normalize();
```
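A common alternative to the Euler conversion is to turn the error quaternion directly into an axis-angle vector and run a PD loop on that, which sidesteps the clamping problem entirely. A rough self-contained sketch in Python (unit inertia, hypothetical gains, my own helper names, not the engine's API):

```python
import numpy as np

# Quaternions stored as [w, x, y, z].
def q_mul(a, b):
    aw, av = a[0], a[1:]
    bw, bv = b[0], b[1:]
    return np.concatenate(([aw * bw - av @ bv],
                           aw * bv + bw * av + np.cross(av, bv)))

def q_conj(q):
    return np.concatenate(([q[0]], -q[1:]))

def q_rotate(q, v):
    return q_mul(q_mul(q, np.concatenate(([0.0], v))), q_conj(q))[1:]

def error_axis_angle(q_target, q):
    # Shortest-arc error rotation, expressed as axis * angle.
    e = q_mul(q_target, q_conj(q))
    if e[0] < 0.0:
        e = -e  # take the short way around
    s = np.linalg.norm(e[1:])
    if s < 1e-9:
        return np.zeros(3)
    angle = 2.0 * np.arctan2(s, e[0])
    return e[1:] / s * angle

# Target: rotate forward (unit Y) onto unit X, i.e. -90 degrees about Z,
# which matches the [0, 0, -0.7071068, 0.7071068] xyzw value in the post.
q_target = np.array([np.cos(-np.pi / 4), 0.0, 0.0, np.sin(-np.pi / 4)])

rng = np.random.default_rng(1)
q = rng.normal(size=4); q /= np.linalg.norm(q)  # random start orientation
omega = np.zeros(3)                             # world-frame angular velocity

kp, kd, dt = 4.0, 4.0, 0.002
for _ in range(10000):
    torque = kp * error_axis_angle(q_target, q) - kd * omega
    omega += torque * dt                               # unit inertia tensor
    q = q + 0.5 * dt * q_mul(np.concatenate(([0.0], omega)), q)
    q /= np.linalg.norm(q)

heading = q_rotate(q, np.array([0.0, 1.0, 0.0]))
print(heading)  # converges to ~[1, 0, 0]
```

Unlike Euler angles, the axis-angle error has no gimbal or wrap-around issues, and it converges from any starting orientation; ignoring roll would then amount to zeroing the error component along the ship's forward axis (not shown here).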
  7. I was trying to debug some z-fighting issues in my D3D12 renderer. I was using a depth stencil with the D32_FLOAT format; my camera was set to perspective with near at 0.0001f and far at 20.0f. The camera was 5 units away, and the 3D model in question was approximately 1 unit deep. The z-fighting seemed strange since the camera range was so narrow. When I used the graphics debugger, I found that the depth buffer only used values in the range 0.9999-1, which seems like a really narrow range given how the camera was set up. Changing near to 0.01f fixed the z-fighting, but the z-buffer still only utilizes 0.998-1. I thought depth buffers were linear by default; have they changed to logarithmic, or how is depth calculated?
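What the debugger shows is consistent with standard perspective depth, which is not linear but hyperbolic in view-space z: depth = f/(f-n) - f*n/((f-n)*z), so almost all precision sits right next to the near plane and a tiny near value wastes nearly the whole range. A quick sketch reproducing the numbers from this post (near/far/camera values taken from the post):

```python
def d3d_depth(z, near, far):
    """Post-projection depth in [0, 1] for a standard perspective matrix
    (D3D convention). Hyperbolic in view-space z: precision concentrates
    near the near plane."""
    return far / (far - near) - (far * near) / ((far - near) * z)

# Camera 5 units away, model ~1 unit deep -> visible z roughly 4.5..5.5.
for near in (0.0001, 0.01):
    print(near, d3d_depth(4.5, near, 20.0), d3d_depth(5.5, near, 20.0))
# near=0.0001 -> both depths are 0.9999+; near=0.01 -> roughly 0.998+
```

Pushing near out as far as the scene allows (or using reversed-Z with a float depth buffer) is the usual fix, since the far plane has almost no effect on precision compared to the near plane.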
  8. I've gotten pretty far with earth rendering, as in the attached picture. Now I'm looking into rendering stars. I really like how the stars are rendered in Elite Dangerous, but I have no clue how to start working towards that. Has anyone worked on something similar who can give me some pointers on where to start?
  9. I'm trying to cut polygons into smaller polygons constrained by a grid. I'm wondering if there's a faster way to do this than general poly-poly clipping, because I'm dealing with a large number of points on a consumer's computer (via JavaScript). The application is that I want to cut country boundaries along longitude and latitude lines before tessellating them, so they don't get clipped by the sphere they will be mapped onto. If I don't do this, large triangles will dip inside the sphere.
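Since the cut lines are axis-aligned, one cheap approach is a single Sutherland-Hodgman half-plane pass per grid line instead of general poly-poly clipping. A small sketch in Python (lat/lon treated as plain 2-D coordinates; function names are mine):

```python
def clip_half(poly, axis, value, keep_leq):
    """One Sutherland-Hodgman pass: keep the part of a polygon on one side
    of an axis-aligned line (axis 0 = x/longitude, 1 = y/latitude)."""
    out = []
    for i in range(len(poly)):
        cur, nxt = poly[i], poly[(i + 1) % len(poly)]
        cur_in = (cur[axis] <= value) if keep_leq else (cur[axis] >= value)
        nxt_in = (nxt[axis] <= value) if keep_leq else (nxt[axis] >= value)
        if cur_in:
            out.append(cur)
        if cur_in != nxt_in:  # edge crosses the line: emit the intersection
            t = (value - cur[axis]) / (nxt[axis] - cur[axis])
            out.append((cur[0] + t * (nxt[0] - cur[0]),
                        cur[1] + t * (nxt[1] - cur[1])))
    return out

def area(poly):  # shoelace formula
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                   - poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2.0

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
left = clip_half(square, 0, 1.0, True)    # x <= 1
right = clip_half(square, 0, 1.0, False)  # x >= 1
print(area(left), area(right))  # 1.0 1.0: the two halves partition the square
```

Each half-plane test is O(n) with no point-in-polygon work, so slicing by a whole grid is just repeated passes. One caveat: for concave country outlines, Sutherland-Hodgman can emit zero-width "bridge" edges joining disconnected pieces; these are usually harmless for tessellation but worth knowing about.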
  10. I'm making a planet renderer, kind of like Google Earth. I want to implement tiling, so that different parts of the map load as you pan or zoom, like in Google Maps. I'm having a hard time figuring out which areas of the sphere are in the view frustum and should be loaded or displayed. Any suggestions? Thanks.
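One common way to decide which tiles to load is to extract the six frustum planes from the combined view-projection matrix (the Gribb-Hartmann method) and test each tile's center or bounding sphere against them. A rough Python sketch with a bare perspective matrix (OpenGL-style clip space, camera at the origin looking down -Z; all parameter values are made up):

```python
import math
import numpy as np

def perspective(fovy, aspect, near, far):
    f = 1.0 / math.tan(fovy / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def frustum_planes(m):
    """Gribb-Hartmann plane extraction from a (projection @ view) matrix."""
    rows = [m[i] for i in range(4)]
    return [rows[3] + rows[0], rows[3] - rows[0],   # left, right
            rows[3] + rows[1], rows[3] - rows[1],   # bottom, top
            rows[3] + rows[2], rows[3] - rows[2]]   # near, far

def point_visible(planes, p):
    hp = np.array([p[0], p[1], p[2], 1.0])
    return all(plane @ hp >= 0.0 for plane in planes)

planes = frustum_planes(perspective(math.radians(60), 16 / 9, 1.0, 100.0))
print(point_visible(planes, (0, 0, -5)))  # True: in front of the camera
print(point_visible(planes, (0, 0, 5)))   # False: behind the camera
```

For sphere tiles, replace the point test with a signed-distance test of each tile's bounding sphere (normalize the planes first), and additionally cull tiles whose surface normal faces away from the camera, since back-facing parts of the planet pass the frustum test but are never visible.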
  11. I'm trying to render a starfield properly. I'm doing something similar to the new Google Maps zoomed out, where you can see the entire planet and the stars. Right now my implementation basically draws a sphere with the stars in the scene. When you zoom in or out, the stars zoom too, but I think the right behavior is that they don't move. How would I render that properly? Basically, right now the sky is a sphere(5000) and the earth is a sphere(10), and the camera moves in and out looking at the earth. When I zoom in, a narrower area of the sky is shown; I don't think this is correct, is it? Should I scale the sky inversely to the camera zoom, so that the more the camera is zoomed, the smaller the sky is?
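Rather than scaling the sky sphere, the usual trick is to treat stars as directions at infinity: draw the sky with a copy of the view matrix whose translation is zeroed, so camera position never changes which stars you see; only rotation and FOV do. A tiny sketch of the effect (simplified axis-aligned camera; helper names are mine):

```python
import numpy as np

def look_from(eye):
    # Minimal 'camera at eye, axes aligned, looking down -Z' view matrix.
    m = np.eye(4)
    m[:3, 3] = -np.asarray(eye, dtype=float)
    return m

def sky_view(view):
    """View matrix for the sky pass: same rotation, translation zeroed."""
    m = view.copy()
    m[:3, 3] = 0.0
    return m

star = np.array([3.0, 5.0, -8.0, 1.0])  # a star on the scene's sky sphere

# Full view matrix: moving the camera changes the star's view-space position.
near_cam = (look_from((0, 0, 5.0)) @ star)[:3]
far_cam = (look_from((0, 0, 50.0)) @ star)[:3]
print(np.allclose(near_cam, far_cam))   # False: the sky parallaxes with zoom

# Sky view matrix: the star's direction is independent of camera position.
a = (sky_view(look_from((0, 0, 5.0))) @ star)[:3]
b = (sky_view(look_from((0, 0, 50.0))) @ star)[:3]
print(np.allclose(a, b))                # True: the sky stays fixed
```

Render the sky first with depth writes disabled. Note the distinction: if your "zoom" moves the camera (a dolly), the sky should stay fixed, which this gives you; if it narrows the FOV (a true zoom), the sky magnifying along with the planet is physically correct.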
  12. What approaches would be best for space partitioning on a sphere? I'm working on visualizing data on the surface of the earth. Some operations that would be relevant:

- Find the closest stored point to a given point.
- See if a given point is within a region.
- Add and remove points and regions.

Currently, some of the approaches I've thought of taking are:

- Start with an octahedron and tessellate, with 4 child triangles per parent triangle. I'm not sure how to approach indexing nodes from latitude/longitude, but the partitions would be basically equal-area.
- A quadtree, with the first level separated by northern/southern hemisphere (positive/negative y) and then a normal quadtree based on the x, z coordinates. This might be a bit easier to index, but the leaves might not be balanced well.

Are there any other common approaches that you know of or can think of? Thanks!
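For the octahedron route, the level-0 index is actually easy: the sign bits of the point's unit vector pick one of the 8 root triangles, and each subdivision level then picks one of 4 children. A sketch of just the root-level indexing from latitude/longitude (helper names are mine, not from any library):

```python
import math

def latlon_to_unit(lat_deg, lon_deg):
    """Unit vector on the sphere from latitude/longitude in degrees."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def root_octant(lat_deg, lon_deg):
    """Level-0 cell of an octahedral subdivision: 3 sign bits -> 0..7."""
    x, y, z = latlon_to_unit(lat_deg, lon_deg)
    return (x >= 0) << 2 | (y >= 0) << 1 | (z >= 0)

print(root_octant(45, 45))    # 7: +x, +y, northern hemisphere
print(root_octant(-45, 200))  # 0: -x, -y, southern hemisphere
```

Deeper levels split each triangle at its edge midpoints (re-normalized onto the sphere); locating a point then means testing which of the 4 children contains it. The resulting cells are near-equal-area, which avoids the pole clustering that a plain lat/lon quadtree suffers from.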
  13. Yeah, I'm working with arbitrary polygons. I think the better approach is converting between projections. I found a family of projections called equal-area projections, which keep polygon areas correct because points near the poles are drawn closer together vertically.
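For concreteness, the simplest equal-area example is the sinusoidal projection: x = R * lon * cos(lat), y = R * lat. Meridians squeeze together by cos(latitude), which is exactly the factor by which real east-west distance shrinks on the globe, so planar polygon area computed in these coordinates matches the true spherical area. A quick sketch:

```python
import math

def sinusoidal(lat_deg, lon_deg, radius=1.0):
    """Sinusoidal (equal-area) projection: x shrinks by cos(latitude)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return radius * lon * math.cos(lat), radius * lat

x60, _ = sinusoidal(60, 90)
x0, _ = sinusoidal(0, 90)
print(x60 / x0)  # ~0.5: at 60 deg latitude a degree of longitude is half as wide
```

The trade-off is shape distortion away from the central meridian; equal-area projections preserve area, not angles, so they suit area computation rather than display.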
  14. Axiverse

    How to calculate polyhedra in realtime...

    The internet is slow here, so I won't post any links since I can't do any searches... it's annoying being out of the country. Anyway, from a technical standpoint, you can create a sphere from n points if you look up "distributing points on a sphere"; there's a formula based on minimizing electrostatic repulsion that looks pretty good. Then you can convert those points to a mesh using triangulation. This is probably overkill for any practical purpose, but good to know for intellectual purposes. =)
  15. How would you go about calculating the area of a polygon on a map, or does anyone know of any articles/references on it? (Remember that maps are distorted towards the poles, so the actual area isn't just the planar polygon area.) Any suggestions on how I might go about calculating this?
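One way to get the true area is to work on the sphere itself: convert the vertices to unit vectors and use the spherical excess (Girard's theorem). Sum the interior angles; for a polygon with n great-circle edges, area = (angle sum - (n - 2) * pi) * R^2. A sketch for convex spherical polygons (my own helper names):

```python
import math
import numpy as np

def to_unit(lat_deg, lon_deg):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return np.array([math.cos(lat) * math.cos(lon),
                     math.cos(lat) * math.sin(lon),
                     math.sin(lat)])

def spherical_area(latlons, radius=1.0):
    """Area of a convex polygon with great-circle edges, via Girard's
    theorem: spherical excess = interior angle sum - (n - 2) * pi."""
    pts = [to_unit(lat, lon) for lat, lon in latlons]
    n = len(pts)
    angle_sum = 0.0
    for i in range(n):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % n]
        # Project both neighbours onto the tangent plane at b, then
        # measure the angle between the projections.
        u = a - (a @ b) * b
        v = c - (c @ b) * b
        angle_sum += math.atan2(np.linalg.norm(np.cross(u, v)), u @ v)
    return (angle_sum - (n - 2) * math.pi) * radius ** 2

# One octant of the sphere: two equator vertices plus the north pole.
octant = [(0, 0), (0, 90), (90, 0)]
print(spherical_area(octant))  # pi/2, i.e. 1/8 of the sphere's area 4*pi
```

For concave outlines you would track signed angles instead, and for real countries you would multiply by Earth's radius squared (R of roughly 6371 km). This sidesteps map distortion entirely, since no projection is involved.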
