DX11 Models, model matrices, and rendering

Posted by noodleBowl

I was thinking about how to render multiple objects: sprites, truck models, plane models, boat models, and so on. I'm not too sure about the process.

Let's say I have a vector of Model objects

class Model
{
  Matrix4 modelMat;
  VertexData vertices;
  Texture texture;
  Shader shader;
};

Since each model has its own model matrix, as all models should, does this mean I now need one draw call per model?

Each model that needs to be drawn could change the MVP matrix used by the bound vertex shader, meaning I have to keep updating/mapping the constant buffer my MVP matrix is stored in, which is used by the vertex shader.

Am I thinking about all of this wrong? Isn't this horribly inefficient?


The options:

1. One draw call per model (see the sketch after this list).

2. Instancing, which is one call per N models of the same type.

3. Pre-transforming the vertices of some static models so the vertices are in world space.

4. If DX12, draw indirect driven from the CPU or from a GPU-driven pipeline.

5. If DX11, instancing with manual vertex fetch for clustered rendering.

6. If DX11, draw indirect with either virtual texturing or a thin g-buffer with deferred texturing.

I think that's all of them.

edit - there's also merge instancing, but that's similar to the fifth one I listed.

edit2 - look into texture atlases to help with batching draw calls.
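As a reference point for the first option, here is a minimal sketch of the "one draw call per model" loop in D3D11: map a dynamic constant buffer with the model's MVP, bind that model's resources, draw, repeat. The Model fields, PerObjectConstants layout, and function names here are placeholder assumptions, not code from the thread.

#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>
#include <cstring>

// Hypothetical per-model data; not the exact Model class from the original post.
struct Model
{
    DirectX::XMFLOAT4X4       worldMatrix;   // stored row-major on the CPU
    ID3D11Buffer*             vertexBuffer;
    ID3D11ShaderResourceView* textureSRV;
    UINT                      vertexCount;
    UINT                      vertexStride;
};

struct PerObjectConstants
{
    DirectX::XMFLOAT4X4 mvp; // world * view * projection, transposed for HLSL
};

void DrawModels(ID3D11DeviceContext* context, ID3D11Buffer* mvpConstantBuffer,
                const std::vector<Model>& models, DirectX::FXMMATRIX viewProj)
{
    for (const Model& model : models)
    {
        // Re-map the per-object constant buffer with this model's MVP matrix.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        if (FAILED(context->Map(mvpConstantBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            continue;
        PerObjectConstants constants;
        DirectX::XMMATRIX world = DirectX::XMLoadFloat4x4(&model.worldMatrix);
        DirectX::XMStoreFloat4x4(&constants.mvp, DirectX::XMMatrixTranspose(world * viewProj));
        std::memcpy(mapped.pData, &constants, sizeof(constants));
        context->Unmap(mvpConstantBuffer, 0);

        // Bind this model's resources and issue one draw call.
        context->VSSetConstantBuffers(0, 1, &mvpConstantBuffer);
        UINT offset = 0;
        context->IASetVertexBuffers(0, 1, &model.vertexBuffer, &model.vertexStride, &offset);
        context->PSSetShaderResources(0, 1, &model.textureSRV);
        context->Draw(model.vertexCount, 0);
    }
}

Whether one Map/Unmap per draw is actually a problem depends on the driver and how many models there are, which is what the replies below get into.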

2 hours ago, noodleBowl said:

Since each model has its own model matrix, as all models should, does this mean I now need one draw call per model?

There are other options listed above.

2 hours ago, noodleBowl said:

Am I thinking about all of this wrong? Isn't this horribly inefficient?

You aren't necessarily thinking about this wrong, just incompletely, since there are other options. As far as efficiency goes, it depends on the order you draw your models in for DX11. And you do have a "draw call budget" to think about, but it depends on your CPU/GPU load. Basically, if you are on DX11 the simple thing to do is sort by shader, then texture, then other state changes, and use instancing where possible.
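To illustrate the sorting idea, here is a minimal sketch of a sort key built from shader and texture IDs; the DrawItem struct and the ID fields are hypothetical, not from the original post.

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw item: small IDs stand in for the actual shader/texture/model handles.
struct DrawItem
{
    uint16_t shaderId;   // most expensive state change, so it gets the highest bits
    uint16_t textureId;  // next most expensive
    uint32_t modelIndex; // index into the model list
};

// Pack the state into a single key so one sort orders draws by shader, then texture.
static uint64_t MakeSortKey(const DrawItem& item)
{
    return (uint64_t(item.shaderId) << 48) |
           (uint64_t(item.textureId) << 32) |
           uint64_t(item.modelIndex);
}

void SortDrawItems(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b)
              { return MakeSortKey(a) < MakeSortKey(b); });
    // After sorting, consecutive items that share a shader/texture can skip redundant
    // binds, and runs of identical geometry are candidates for instancing.
}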


Might be a stupid question here, but what is considered a "static" model? Would a sprite be considered a static model because it does not move, even though you can animate it?

If I were to pretransform my vertices would it only work for static models?


It means no movement too. Think about it: if you pre-transform the vertices and then move them, they would need to be transformed again... what's the point? Think walls that don't move in any way, or buildings in a cityscape. I'm no expert on this technique since I never bothered using it. Maybe @Hodgman can explain the different variations of the technique better than me; I think I remember him mentioning it once.
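As a rough illustration of pre-transforming, here is a sketch that bakes a static model's world matrix into its vertex positions at load time, so the vertex shader can skip the model transform. The StaticVertex layout and names are assumptions, not something from the thread.

#include <DirectXMath.h>
#include <vector>

// Hypothetical vertex layout for a static mesh.
struct StaticVertex
{
    DirectX::XMFLOAT3 position;
    DirectX::XMFLOAT3 normal;
    DirectX::XMFLOAT2 uv;
};

// Bake the world transform into the vertices once, at load/build time.
// Afterwards the mesh is drawn with view * projection only, so many static
// meshes can share one buffer and one set of constants.
void PreTransformToWorldSpace(std::vector<StaticVertex>& vertices,
                              const DirectX::XMFLOAT4X4& worldMatrix)
{
    using namespace DirectX;
    XMMATRIX world = XMLoadFloat4x4(&worldMatrix);
    // Normals use the inverse-transpose so non-uniform scale doesn't skew them.
    XMMATRIX normalMatrix = XMMatrixTranspose(XMMatrixInverse(nullptr, world));

    for (StaticVertex& v : vertices)
    {
        XMVECTOR p = XMVector3TransformCoord(XMLoadFloat3(&v.position), world);
        XMVECTOR n = XMVector3Normalize(
            XMVector3TransformNormal(XMLoadFloat3(&v.normal), normalMatrix));
        XMStoreFloat3(&v.position, p);
        XMStoreFloat3(&v.normal, n);
    }
}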

But like I said for now you're better off just batching properly and using instancing where possible with texture atlases to make the batches bigger.

Oh and here are some presentations and papers that describe some of the techniques

http://www.humus.name/Articles/Persson_GraphicsGemsForGames.pptx

http://advances.realtimerendering.com/s2015/aaltonenhaar_siggraph2015_combined_final_footer_220dpi.pptx

 


@Infinisearch Thanks for the above!

After looking at those presentations I do have some basic/general questions.

Using these as an example, let's say I have the following meshes:

Knight
Airplane
Cruise Ship

Because I would want to draw a variety of the above, which may all have different transforms, I cannot put them into one buffer and save on draw calls, correct (excluding instancing in this case)?

Even if I needed to draw something enough times to warrant instancing and they all had different positions, rotations, etc., can I still instance? I thought for instancing to work everything had to be the same.

34 minutes ago, noodleBowl said:

Because I would want to draw a variety of the above, which may all have different transforms, I cannot put them into one buffer and save on draw calls, correct (excluding instancing in this case)?

 

36 minutes ago, noodleBowl said:

Even if I needed to draw something enough times to warrant instancing and they all had different positions, rotations, etc., can I still instance? I thought for instancing to work everything had to be the same.

First of all, if they are of the same vertex type you should be able to stick them in the same buffer, thus reducing state changes between draw calls (look at the arguments to a draw call to understand what I mean). Alright, forget about pre-transforming vertices since that would be for static objects only.

Option one is to use one draw call per model.

Option two: for each model that has exactly the same geometry data (not the transform and other constants), use instancing.

Option three is packing textures into a texture atlas (DX11 and before; DX12 is different) and then using instancing on the same models (but now with different textures in addition to different transforms and constants).

Option four is merge instancing, in which you combine instancing with manual vertex fetch and a texture atlas (with this you can have different geometry, textures, transforms, and constants). The only constraint is that the different models should be approximately the same size, otherwise you waste performance on degenerate triangles.

Option five is an extension of merge instancing in which, instead of using an instance size as big as the biggest model of the group, you use an instance size that is much smaller than the model size. This requires you to split your models into triangle clusters of the same size and potentially use triangle strips. But this technique allows you to do cluster-based GPU culling, which can be a big performance win.

Then there is draw indirect, which is different in DX11 and DX12, but in DirectX 12 it will allow for some nice tricks on models that vary.

So to answer your question: for standard instancing in DX11 the geometry and texture have to be the same, but the transform and other constants like color can vary. In DX11, if you implement a texture atlas you can vary the texture while using instancing, but the geometry stays the same. In DX11, if you use manual vertex fetching you throw away the post-transform vertex cache, but now you can use instancing to draw different geometry. There are two ways to do this; I described them above and posted links to the techniques in my previous post.
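As a rough sketch of what the per-instance data for option three might look like (standard instancing plus a texture atlas): the world matrix and an atlas UV offset/scale vary per instance while the geometry stays shared. The names and layout here are assumptions, not taken from any particular tutorial.

#include <DirectXMath.h>

// Hypothetical per-instance data for instancing with a texture atlas:
// every instance shares the same vertex/index buffers, but gets its own
// transform and its own sub-rectangle of the atlas.
struct InstanceData
{
    DirectX::XMFLOAT4X4 world;         // per-instance world matrix
    DirectX::XMFLOAT2   atlasUVOffset; // top-left of this instance's region in the atlas
    DirectX::XMFLOAT2   atlasUVScale;  // size of the region, so uv' = uv * scale + offset
};

// In the vertex shader the mesh UV is remapped into the atlas, e.g.
//   float2 atlasUV = input.uv * input.atlasUVScale + input.atlasUVOffset;
// so different instances can appear to use different textures without breaking
// the single instanced draw call.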

6 hours ago, Infinisearch said:

First of all if they are of the same vertex type you should be able to stick them in the same buffer thus reducing state changes between draw calls. (look at the arguments to a draw call to understand what I mean)

I'm actually not really sure what you are talking about here? The Draw call only has a vertex count and a StartVertexLocation. Am I looking at the wrong function? The only thing I can think of is the D3D11_INPUT_ELEMENT_DESC needed for an input layout

D3D11_INPUT_ELEMENT_DESC inputElementDescription[] = {
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

 

I'm looking at this tutorial on standard instancing and I don't 100% understand the input layout when it comes to the instance data. More specifically, I don't understand why they have changed the InputSlot to 1. Is this because they are binding 2 buffers, and using 1 would point to the second buffer (m_instanceBuffer) where the instance modifications are stored? OR is it really just that they are reusing a semantic (TEXCOORD) and the two bound buffers (m_vertexBuffer and m_instanceBuffer) are treated as one big buffer?

In the tutorial they create an InstanceType struct to hold the modifications they want to make to the vertex positions. But in the case of using a transform (model) matrix to modify the vertex data, would it be done the same way instead of using a constant buffer?

8 hours ago, Infinisearch said:

Alright forget about pretransforming vertices since that would be for static objects only.

Wouldn't it make sense to pretransform dynamic meshes too?

Thinking of skinning, tessellating, etc. multiple times for each shadow map, I assume pre-transforming would be faster even if this means additional reads/writes to global memory. Drawing all models with one call is another advantage, GPU culling another; everything becomes less fragmented.

But I never tried that yet.

One thing I tried is to store a matrix index in vertex data (position.w) and load the matrix per vertex. That worked surprisingly well, although on AMD it wastes registers. I did not notice a performance difference between drawing 2 million boxes with a unique matrix per box or just using one global transform. Seems it was rasterizer limited (the boxes were just textured, not lit).
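As a rough illustration of that trick, the vertex might carry the matrix index in the otherwise unused w component; this layout is an assumption about what is being described, not the actual code.

#include <DirectXMath.h>

// Hypothetical vertex that packs a matrix index into position.w.
// The HLSL side reads the index and fetches the matrix itself, e.g. from a
// StructuredBuffer<float4x4>, instead of relying on a per-object constant buffer.
struct IndexedVertex
{
    DirectX::XMFLOAT4 position; // xyz = object-space position, w = matrix index
    DirectX::XMFLOAT2 uv;
};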

 

14 hours ago, noodleBowl said:

Because I would want to draw a variety of the above, which may all have different transforms, I cannot put them into one buffer and save on draw calls, correct (excluding instancing in this case)?

 

13 hours ago, Infinisearch said:

First of all if they are of the same vertex type you should be able to stick them in the same buffer thus reducing state changes between draw calls. (look at the arguments to a draw call to understand what I mean)

6 hours ago, noodleBowl said:

I'm actually not really sure what you are talking about here? The Draw call only has a vertex count and a StartVertexLocation. Am I looking at the wrong function? The only thing I can think of is the D3D11_INPUT_ELEMENT_DESC needed for an input layout

I think I might have read you wrong in the first quote and added the statement in parentheses after; I don't really remember what I was thinking when I wrote that. Ignore it for now... if I remember my line of thought I will post it.

 

7 hours ago, noodleBowl said:

More specifically, I don't understand why they have changed the InputSlot to 1. Is this because they are binding 2 buffers, and using 1 would point to the second buffer (m_instanceBuffer) where the instance modifications are stored? OR is it really just that they are reusing a semantic (TEXCOORD) and the two bound buffers (m_vertexBuffer and m_instanceBuffer) are treated as one big buffer?

This is vertex streams... you should look into them, not just for instancing. Basically, let's say you have three vertex components: position, normal, and texture coordinate. You can stick that data into one struct, two structs, or three structs (when I say structs, I mean arrays of structs). If you use multiple arrays you need a way to bind all the arrays; this is what input slots are for. But for instancing, the reason you use multiple slots is that the step rate for fetching data from those buffers is different: per vertex vs. per instance.
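For illustration, here is a minimal sketch of an input layout using two slots: slot 0 carries per-vertex data and slot 1 carries a per-instance world matrix spread across four TEXCOORD rows with a step rate of 1. The semantics and layout are one common convention, not necessarily what the tutorial uses.

#include <d3d11.h>

// Slot 0: per-vertex data (stepped once per vertex).
// Slot 1: per-instance data (stepped once per instance) holding a 4x4 world matrix
//         as four float4 rows, reusing the TEXCOORD semantic with indices 1..4.
static const D3D11_INPUT_ELEMENT_DESC kInstancedLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,
      D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_VERTEX_DATA,   0 },

    { "TEXCOORD", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TEXCOORD", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TEXCOORD", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TEXCOORD", 4, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// At draw time both buffers are bound to their slots and one instanced call is made:
//   ID3D11Buffer* buffers[2] = { vertexBuffer, instanceBuffer };
//   UINT strides[2] = { sizeof(Vertex), sizeof(InstanceData) };
//   UINT offsets[2] = { 0, 0 };
//   context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
//   context->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);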

 

7 hours ago, noodleBowl said:

In the tutorial they create an InstanceType struct to hold the modifications they want to make to the vertex positions. But in the case of using a transform (model) matrix to modify the vertex data, would it be done the same way instead of using a constant buffer?

Yeah, you don't need to use a constant buffer. But there is also another way to implement instancing, using the system value SV_InstanceID (I think that's it). But you should learn that later.
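For completeness, a rough sketch of the SV_InstanceID style: instead of a second vertex buffer, the per-instance matrices live in a structured buffer that the vertex shader indexes by instance ID. The names are placeholders and this is only one way to set it up.

#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>

// Create a structured buffer of per-instance world matrices and an SRV for it.
// In HLSL the vertex shader would declare something like
//   StructuredBuffer<float4x4> gInstanceWorlds : register(t0);
// and index it with the SV_InstanceID system value.
bool CreateInstanceMatrixBuffer(ID3D11Device* device,
                                const std::vector<DirectX::XMFLOAT4X4>& worlds,
                                ID3D11Buffer** outBuffer,
                                ID3D11ShaderResourceView** outSRV)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth           = UINT(worlds.size() * sizeof(DirectX::XMFLOAT4X4));
    desc.Usage               = D3D11_USAGE_DYNAMIC;            // updated per frame
    desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags      = D3D11_CPU_ACCESS_WRITE;
    desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(DirectX::XMFLOAT4X4);

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = worlds.data();
    if (FAILED(device->CreateBuffer(&desc, &init, outBuffer)))
        return false;

    D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format              = DXGI_FORMAT_UNKNOWN;             // required for structured buffers
    srv.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
    srv.Buffer.FirstElement = 0;
    srv.Buffer.NumElements  = UINT(worlds.size());
    return SUCCEEDED(device->CreateShaderResourceView(*outBuffer, &srv, outSRV));
}
// Bind with context->VSSetShaderResources(0, 1, &srv) before the instanced draw call.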

4 hours ago, JoeJ said:

Wouldn't it make sense to pretransform dynamic meshes too?

Thinking of skinning, tessellating, etc. multiple times for each shadow map, I assume pre-transforming would be faster even if this means additional reads/writes to global memory. Drawing all models with one call is another advantage, GPU culling another; everything becomes less fragmented.

But I never tried that yet.

One thing I tried is to store a matrix index in vertex data (position.w) and load the matrix per vertex. That worked surprisingly well, although on AMD it wastes registers. I did not notice a performance difference between drawing 2 million boxes with a unique matrix per box or just using one global transform. Seems it was rasterizer limited (the boxes were just textured, not lit).

I was speaking in the context of reducing draw calls for static data, as in data without per-frame changes. But if there are per-frame changes, you're right, there might be gains to be had by pre-transforming skinned or tessellated meshes. But pre-transforming per frame on the GPU will reduce calls depending on how you implement it... stream-out or compute shader. That's interesting that you had no performance degradation with a matrix index per vertex. But like you seem to imply, the results might differ with more complicated shaders.

On 7.10.2017 at 3:58 PM, Infinisearch said:

But pre-transforming per frame on the GPU will reduce calls depending on how you implement it... stream-out or compute shader.

I never considered pretransforming by vertex shader and stream-out. Is it possible to stream out to GPU memory with DX12/VK?

Actually I planned to do it with a compute shader, but somehow it feels wrong to reimplement tessellation on my own if there is already hardware for that. On the other hand, compute seems more flexible than the hardware, e.g. if we want Catmull-Clark subdivision.

Also, having good compute but weak graphics experience, I tend to think: 'geometry and tessellation stages are useless - use compute and pretransform instead.' But then why did AMD spend so much effort to improve those things for Vega?

Any thoughts about this should help me to get a better picture... :)

2 hours ago, JoeJ said:

Is it possible to stream out to GPU memory with DX12/VK?

I've never done it but I don't see why not.

2 hours ago, JoeJ said:

Actually I planned to do it with a compute shader, but somehow it feels wrong to reimplement tessellation on my own if there is already hardware for that. On the other hand, compute seems more flexible than the hardware, e.g. if we want Catmull-Clark subdivision.

I've read like one or two papers where they implement tessellation using the compute shader, and I think they said flexibility was one of the benefits... don't remember much else.  I think the last presentation I posted above (optimizing graphics with compute) has a section on using compute on tessellation.

3 hours ago, JoeJ said:

Also, having good compute but weak graphics experience, I tend to think: 'geometry and tessellation stages are useless - use compute and pretransform instead.' But then why did AMD spend so much effort to improve those things for Vega?

I don't have any tessellation experience and have kept away from it on purpose.  IIRC Vega hasn't really improved tessellation performance that much and Nvidia still kicks their butt in it. (at least with high tessellation factors)

3 hours ago, JoeJ said:

Any thoughts about this should help me to get a better picture...

Well, like I said, I have no experience with tessellation, but again, like I said earlier, IIRC there were a few papers I read that seemed to implement it using compute. The only thing I can definitively say is that implementing it through the graphics pipeline will take up draw calls; doing it through compute will have a performance advantage over fixed function on hardware with lots of compute units, but this advantage might be lost because instead of the expanded vertices being stored in the cache they'd go through memory.

Maybe someone with more experience can chime in.


I've worked on some games recently where we pre-transformed skinned meshes on the CPU. It's not ideal or a typical way to do things, but we did have spare CPU cycles available and were struggling for every GPU cycle we could find, so there was no reason for us to move that logic from the CPU to a compute shader.

5 hours ago, JoeJ said:

Is it possible to stream out to GPU memory with DX12/VK?

It's possible in DX10/GL, and DX12/VK haven't lost the ability ;)

5 hours ago, JoeJ said:

But then why did AMD spend so much effort to improve those things for Vega?

Because they're playing catch-up with NVidia :D Tessellation is used quite a bit by some games, and not at all for others. "Pass-through" geometry shaders (no geometry amplification) are useful for some things, and NVidia is really good at doing them with no typical GS penalty.
GS is used in a few modern tricks that might catch on soon -- e.g. NV encourages people to use the GS stage as part of a technique to perform variable resolution rendering for VR, where the edges of the viewport have less resolution than the center.

11 hours ago, Infinisearch said:

BTW @noodleBowl have I been clear enough?  Is there anything you don't understand?

There is a whole lot I don't understand haha, but that is because my graphics experience / knowledge is very fragmented. Just need more practice

On 10/7/2017 at 9:45 AM, Infinisearch said:

This is vertex streams... you should look into them, not just for instancing. Basically, let's say you have three vertex components: position, normal, and texture coordinate. You can stick that data into one struct, two structs, or three structs (when I say structs, I mean arrays of structs). If you use multiple arrays you need a way to bind all the arrays; this is what input slots are for

Not sure if you are talking about interleaved vs non-interleaved buffers? Or if you are talking about streaming data back to the CPU from the GPU to do further processing?

If you are talking about interleaved vs non-interleaved buffers (I think this is what you mean, or the option that makes the most sense to me), why would I want to have non-interleaved buffers?

On 10/7/2017 at 9:45 AM, Infinisearch said:

Yeah you don't need to use a constant buffer

Just a general question about constant buffers/buffers, might be stupid, but in that tutorial they created an extra vertex buffer to hold position modifications, so then for certain situations should I just use/bind an extra (non-constant) buffer?

For example, the MVP matrix is not really constant and it can change every frame, so would it be better suited to a buffer that does not use the D3D11_BIND_CONSTANT_BUFFER flag (even though you can set the usage to dynamic)? Whereas something like a light's brightness, a value that wouldn't change, should go into a buffer that is created with the D3D11_BIND_CONSTANT_BUFFER flag?

Or is that all nonsense? Are there some optimizations going on behind the scenes, or is it just better to have them split up (coming from the viewpoint that there are probably far fewer constant buffer binds than binds that involve other buffer types, like vertex data, which would need to be rebound per mesh)?

7 hours ago, noodleBowl said:

Just a general question about constant buffers/buffers, might be stupid, but in that tutorial they created an extra vertex buffer to hold position modifications, so then for certain situations should I just use/bind an extra (non-constant) buffer?

For example, the MVP matrix is not really constant and it can change every frame, so would it be better suited to a buffer that does not use the D3D11_BIND_CONSTANT_BUFFER flag (even though you can set the usage to dynamic)? Whereas something like a light's brightness, a value that wouldn't change, should go into a buffer that is created with the D3D11_BIND_CONSTANT_BUFFER flag?

Or is that all nonsense? Are there some optimizations going on behind the scenes, or is it just better to have them split up (coming from the viewpoint that there are probably far fewer constant buffer binds than binds that involve other buffer types, like vertex data, which would need to be rebound per mesh)?

Constant buffers are basically made for data that changes per frame, so you're fine if you use them. In fact, IHVs optimize constant buffer access if I'm remembering right. As far as that tutorial goes, it's because of instancing that they're using an extra vertex buffer, most probably because there are size constraints on constant buffers (64KB IIRC).
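For reference, a minimal sketch of creating a per-frame constant buffer along the lines of this answer: the D3D11_BIND_CONSTANT_BUFFER flag combined with dynamic usage so it can be re-mapped every frame. The struct contents are just an example.

#include <d3d11.h>
#include <DirectXMath.h>

// Example per-frame constants; constant buffer sizes must be multiples of 16 bytes.
struct PerFrameConstants
{
    DirectX::XMFLOAT4X4 viewProjection;
    DirectX::XMFLOAT4   lightDirectionAndIntensity;
};

// A constant buffer that changes every frame: CONSTANT_BUFFER bind flag plus
// DYNAMIC usage and CPU write access, updated later via Map(WRITE_DISCARD).
ID3D11Buffer* CreatePerFrameConstantBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = sizeof(PerFrameConstants);
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);
    return buffer; // nullptr on failure
}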

7 hours ago, noodleBowl said:

Not sure if you are talking about interleaved vs non-interleave buffers? Or if you are talking about streaming out data back to the CPU from the GPU to do further processing?

If you are talking about interleaved vs non-interleave buffers (I think this is what you mean or the option that make the most sense to me), why would I want to have non-interleave buffers?

I guess they are called interleaved buffers, that makes sense, but I am definitely NOT talking about streaming data back to the CPU. However, you should know the other names for it... vertex streams or structure-of-arrays layout (as in SoA vs AoS layout). As far as why you'd want non-interleaved buffers, let's start with what I said before about instancing. With instancing, some data is indexed per vertex and other data is indexed per instance... to simplify fetching this data they are separated into separate buffers. Also, some hardware likes the SoA format for performance reasons (AMD IIRC), although using too many vertex streams can hinder performance. Finally there are multiple passes and shaders, and the wasting of space in a cache line, which again is about performance. For example, let's say your engine uses shadow maps, so when you render your shadow maps the only data you need is position. If your data is packed tightly into one buffer, then when the GPU loads a cache line most of it goes to waste. A quick search of Google shows this page: https://anteru.net/blog/2016/02/14/3119/index.html which explains it in more depth.
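To make the shadow-map example concrete, here is a hypothetical split of the vertex data into two streams: a position-only stream that depth-only passes can bind by itself, and a second stream with the remaining attributes. The struct names are illustrative only.

#include <DirectXMath.h>

// Stream 0: positions only. A shadow/depth pre-pass binds just this buffer,
// so every byte the GPU pulls into a cache line is actually used.
struct PositionStream
{
    DirectX::XMFLOAT3 position;
};

// Stream 1: everything else, bound only for the main shading pass.
struct AttributeStream
{
    DirectX::XMFLOAT3 normal;
    DirectX::XMFLOAT2 uv;
};

// Input layout sketch: same vertex, two input slots, both stepped per vertex.
//   { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
//   { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
//   { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    1, D3D11_APPEND_ALIGNED_ELEMENT,
//     D3D11_INPUT_PER_VERTEX_DATA, 0 },
// The shadow pass uses a reduced layout containing only the POSITION element.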

 

