

Member Since 28 May 2011

#5238195 Portal view

Posted by Waterlimon on 03 July 2015 - 09:32 AM

To render the scene from the other portal's viewpoint, the camera position/rotation must be the same relative to the other portal as it was to the first portal.


Of course the other portal will be facing out of the wall, while the first portal will be facing into the wall.


So if your camera is 5 meters in front of the first portal, it needs to be 5 meters inside the wall when rendering the scene from the other portal's point of view.



This 'relative position/rotation' would normally be encoded in a 4x4 affine matrix and you would use matrix math to calculate the correct location for the camera.

The logic would be something akin to (portal1 and portal2 are matrices that hold the pos/rot of the portals):

Matrix relative = portal1:toObjectSpace(camera)
Matrix renderFrom = portal2:toWorldSpace(relative) //if portal2 faces out of the wall and portal1 into it, this works; if not, I think you need to invert (or transpose) the relative matrix first

I don't know how to express that using 'proper' matrix notation. Probably something like

relative = camera * inverse(portal1)

renderFrom = portal2*relative

(renderFrom = portal2*inverse(relative) if portal2 is pointing into wall just like portal1)

But I'm really not sure.


If you are not using matrices, you can use the same logic as in the first piece of code, except instead of matrices you do the operations on position/rotation or whatever it is you're using.


Also, since the camera will be inside the wall when rendering the scene from the other portal's PoV, you need to make sure nothing behind the portal gets rendered somehow (unless required for reflections or whatever).
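A minimal sketch of the transform logic above, using (rotation matrix, translation) pairs in place of 4x4 affine matrices. All the helper names and the example poses are made up for illustration; the assumption is that portal2 already faces out of the wall, so no extra flip of the relative transform is needed:

```python
def mat_vec(R, v):
    # 3x3 matrix times 3-vector
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def compose(a, b):
    # Transform that applies b first, then a
    Ra, ta = a
    Rb, tb = b
    R = [[sum(Ra[i][k] * Rb[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = [x + y for x, y in zip(mat_vec(Ra, tb), ta)]
    return (R, t)

def inverse(a):
    # Inverse of a rigid transform: transpose the rotation, rotate-negate the translation
    R, t = a
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return (Rt, [-x for x in mat_vec(Rt, t)])

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
TURNED   = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]  # rotated 180 degrees about Y

portal1 = (IDENTITY, [0, 0, 0])   # facing into the wall
portal2 = (TURNED,   [10, 0, 0])  # facing out of the wall
camera  = (IDENTITY, [0, 0, 5])   # 5 m in front of portal1

relative   = compose(inverse(portal1), camera)  # camera in portal1's space
renderFrom = compose(portal2, relative)         # same pose relative to portal2
print(renderFrom[1])  # [10, 0, -5]: 5 m "inside the wall" behind portal2
```

With a real math library this collapses to `renderFrom = portal2 * inverse(portal1) * camera` in column-vector convention.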

#5238038 Edge artifact with basic diffuse shading

Posted by Waterlimon on 02 July 2015 - 12:51 PM

Looks like you have face-culling disabled and the depth-buffer precision isn't enough to avoid Z-fighting at the edges.

Create a 24-bit depth-buffer, or increase the distance to your near-plane in the projection matrix, like changing 0.01 to 0.1 or even 1.


It's implicit here, but you should also enable face culling if you don't need it disabled. At least this particular scene does not need it disabled, assuming the cube meshes are defined correctly (triangle vertices in counterclockwise order).

#5237212 Hash Table Element Lookup

Posted by Waterlimon on 28 June 2015 - 01:16 AM

If you did that, you would also have to compare the actual values at the end (due to possibility of collisions).


So either you:

A. Compare each key in list until correct one is found

B. Do A, except add a hash as part of the key to more quickly reject nonmatching keys


Given that:

1. You probably won't have very many keys in the list

2. The keys will probably be rejected pretty quickly anyway (for random strings in the map, most can be rejected after a single char-char or string-size comparison!)

3. You have to do the full comparison at least once anyway


Storing the hash and using that doesn't seem to add much when it comes to optimizing the search for the correct element in the list. Perhaps if all your strings tend to begin with the same initial characters and there are many of those characters (so you can use the hash to 'early out' of the costly equivalence check).



But there is another good reason to store the hash with the key-value pair. If your key is expensive to hash (depends on the key and hash function), resizing the hash table will be very expensive, since all keys have to be rehashed to place them in the newly resized table. If you store the hash, no rehashing needs to be done. This won't be very useful if the table doesn't need to be resized, or if the keys are not expensive to hash, as mentioned.
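Both points can be shown in a toy bucket-lookup sketch (not any particular hash-map implementation): the stored hash gives a cheap reject before the full key comparison, and a resize can reuse it instead of rehashing.

```python
def bucket_lookup(bucket, key):
    """Look up `key` in one collision list of (hash, key, value) triples.
    Comparing the stored hash first cheaply rejects non-matching keys;
    keeping the hash around also means a table resize never rehashes."""
    h = hash(key)
    for stored_hash, stored_key, value in bucket:
        if stored_hash == h and stored_key == key:  # hash first, then full compare
            return value
    return None

bucket = [(hash("apple"), "apple", 1), (hash("apricot"), "apricot", 2)]
print(bucket_lookup(bucket, "apricot"))  # 2
```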

#5236919 Researchers, Archeologists and...

Posted by Waterlimon on 26 June 2015 - 08:16 AM

Reverse engineers?

#5236483 Glitches in Navigating my NavMesh

Posted by Waterlimon on 23 June 2015 - 08:59 PM

It seems you want the agent to overshoot the waypoint if entering the portal 'region' from the left, and undershoot if from the right (i.e. it would leave the portal region if it overshot).


Let p1 and p2 be the waypoints that define the portal, with p1 our waypoint and p2 the other one.


Find (p2-p1).unit and take the dot product of that and the agent's position relative to p1.


If negative, overshoot. If positive, undershoot. This should ensure you never go too far or get stuck in corners, etc.
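The sign test above can be sketched as follows (function and parameter names are made up; it assumes the agent position is measured relative to p1):

```python
def along_portal(p1, p2, agent):
    """Signed distance of `agent` along the portal edge direction,
    measured from waypoint p1 toward p2."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = (dx * dx + dy * dy) ** 0.5
    ux, uy = dx / length, dy / length          # (p2 - p1).unit
    return ux * (agent[0] - p1[0]) + uy * (agent[1] - p1[1])

# Negative: the agent sits past p1's side of the portal, so overshoot;
# positive: undershoot.
print(along_portal((0.0, 0.0), (1.0, 0.0), (-2.0, 1.0)))  # -2.0
```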

#5236472 how to get length of a int vector without overflow on 32 bit int

Posted by Waterlimon on 23 June 2015 - 06:20 PM

Divide the vector by a constant before finding the length, then do sqrt(dot(v,v)), then multiply the result back up. This avoids overflow in the dot operation. Ideally you would only do this when required, i.e. if any component is so big it would overflow when squared (>65535?), and the constant would 'normalize' the vector with respect to this maximum limit, so the largest vector is brought down to 65535 or below.



int largest = findLargestComponent(vec);
int bits = 16;
if (largest > 65535) //squaring would overflow
    bits = ceil(log2(largest)); //how many bits the largest component takes - squaring roughly doubles this
int scale = bits - 16; //by how much we must shift down to avoid overflow
if (scale > 0)
    vec >>= scale; //adjust every component (division by 2^scale)
return sqrt(dot(vec, vec)) << scale; //scale the length back up (a shift by 0 is fine)

So that's how I would solve the overflow when squaring in dot.


Then there's the issue of overflow when adding the squared components in dot. This can increase the bit usage by at most 2 bits (multiplication by 3 if each component is maximum size), so maybe just add 2 to the 'bits' variable (or calculate whether it's 0, 1, or 2, but that feels like a waste of performance):

int largest = findLargestComponent(vec);
int bits = 16;
if (largest > 65535) //squaring would overflow
    bits = ceil(log2(largest)) + 2; //bits the largest component takes, plus extra to make room for the additions
int scale = bits - 16; //by how much we must shift down to avoid overflow
if (scale > 0)
    vec >>= scale; //adjust every component (division by 2^scale)
return sqrt(dot(vec, vec)) << scale; //scale the length back up (a shift by 0 is fine)

Now the only issue is the final shift to the left to scale the result back to the original coordinate system, since, as you say, the vector length can be bigger than what 32 bits can hold.


The only solution is to either:

-convert the sqrt result to 64 bit before returning, shift, return that

-don't shift at all, but return the unshifted number along with the shift amount.

-return as float. We have the mantissa (what the sqrt returns) and the exponent (the shift amount). This is pretty much what floating-point numbers were made for. E.g. convert the sqrt result to float and multiply it by 2^shift. But at this point you might just want to do the whole length calculation using floats...


EDIT: oh you already did this... :>
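The shift-down/shift-up scheme can be sketched like this (Python ints never overflow, so this only illustrates the scheme for an assumed 32-bit budget; the function name and the 65535 limit follow the pseudocode above):

```python
from math import ceil, isqrt, log2

def int_length(v, limit=65535):
    """Length of an integer vector: shift components down until squaring
    is safe for a 32-bit int, take sqrt(dot(v, v)), then shift back up.
    The shifted-off low bits are lost, so the result is approximate
    for large inputs."""
    largest = max(abs(c) for c in v)
    shift = 0
    if largest > limit:  # squaring would overflow
        shift = ceil(log2(largest)) - 16
    scaled = [c >> shift for c in v]
    return isqrt(sum(c * c for c in scaled)) << shift

print(int_length([300000, 400000, 0]))  # 500000 (exact here by luck)
print(int_length([3, 4, 0]))            # 5 (no scaling needed)
```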

#5236125 Read a text file from a .zip container

Posted by Waterlimon on 22 June 2015 - 03:02 AM

So what exactly do you get instead of the expected data?

#5235871 Model View Controller

Posted by Waterlimon on 20 June 2015 - 10:16 AM

AFAIK the controller exists to provide an interface between the source of input and the model. So you could control the model using the keyboard, AI, the network, or some complex mess with little bits from here and there.


Maybe have something like

worldModel.attemptMovePlayer(x, y)

And in the controller you somehow link that to the correct input 'device' (at the simplest, maybe you'll just read the keyboard state to determine the x and y for the above call)


Just design the interface between model and controller such that the division is actually useful for different controllers you might have - worldModel.attemptMovePlayer(keyboardkey) would be kind of pointless.
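The division of labor might look something like this toy sketch (class and method names are invented, echoing the worldModel.attemptMovePlayer call above):

```python
class WorldModel:
    """Model: owns game state; knows nothing about input devices."""
    def __init__(self):
        self.player_pos = [0, 0]

    def attempt_move_player(self, dx, dy):
        # The model decides whether the move is legal (toy rule here).
        if abs(dx) <= 1 and abs(dy) <= 1:
            self.player_pos[0] += dx
            self.player_pos[1] += dy

class KeyboardController:
    """Controller: turns raw key state into model calls. Could be
    swapped for an AIController or NetworkController without the
    model changing at all."""
    def __init__(self, model):
        self.model = model

    def update(self, keys_down):
        dx = (1 if "right" in keys_down else 0) - (1 if "left" in keys_down else 0)
        dy = (1 if "up" in keys_down else 0) - (1 if "down" in keys_down else 0)
        self.model.attempt_move_player(dx, dy)

model = WorldModel()
controller = KeyboardController(model)
controller.update({"right", "up"})
print(model.player_pos)  # [1, 1]
```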

#5234911 Calculate volume of convex solid?

Posted by Waterlimon on 15 June 2015 - 10:57 AM

The volume algorithm works by taking each face. Then we work in a space where the face is axis-aligned. The dot product in this space gets the distance from the face plane to the origin (it doesn't matter which vertex is used - they're all on the same plane!).


The edge length/area of the polygon is proportional to this origin-to-plane distance - it's 0 at the origin and reaches its full value at the plane.


For 2D we integrate length of edge over this distance to get area. For 3D we integrate face area over the distance to get volume.



Integrate(0->1) edge*x dx  (edge(x) = edge*x)

=> edge*x*x/2 (x = 1 for full area)

=> edge/2 (so in 2D you'd divide by 2 instead of 3)


Integrate(0->1) area*x*x dx (area(x) = area*x*x because area grows/shrinks on 2 dimensions as x changes instead of just 1)

=> area*x*x*x/3 (x = 1 for full volume)

=> area/3 (so we divide by 3 in 3D)


I guess you can generalize it so it always divides by the number of dimensions...
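For a triangulated convex solid, the per-face "area times distance over 3" sums collapse into the classic signed-tetrahedron formula. A sketch, assuming the triangles wind counterclockwise as seen from outside:

```python
def convex_volume(triangles):
    """Volume of a closed triangle mesh (CCW winding seen from outside).
    Each face contributes the signed volume of the tetrahedron it forms
    with the origin; the 1/3 from the integral and the 1/2 from the
    triangle area fold into the final 1/6."""
    total = 0.0
    for a, b, c in triangles:
        # scalar triple product a . (b x c) = 6 * signed tet volume
        total += (a[0] * (b[1] * c[2] - b[2] * c[1])
                + a[1] * (b[2] * c[0] - b[0] * c[2])
                + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return total / 6.0

# Unit right tetrahedron with one corner at the origin; volume = 1/6:
tet = [
    ((0, 0, 0), (0, 1, 0), (1, 0, 0)),  # bottom face (z = 0), normal -z
    ((0, 0, 0), (1, 0, 0), (0, 0, 1)),  # side face (y = 0), normal -y
    ((0, 0, 0), (0, 0, 1), (0, 1, 0)),  # side face (x = 0), normal -x
    ((1, 0, 0), (0, 1, 0), (0, 0, 1)),  # slanted face, normal (1,1,1)
]
print(convex_volume(tet))  # ~0.1667 = 1/6
```

The origin can be anywhere (even outside the solid); faces "behind" it contribute negative volume that cancels exactly.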

#5234879 Scale world geometry with an aspect ratio?

Posted by Waterlimon on 15 June 2015 - 07:26 AM

If the user is looking at the center of the screen, the more 'stretched' image at the edges looks correct (the edges of the screen take up less of your visual field because they're farther away, so this 'reverses' the stretching from the user's point of view).


It only looks incorrect if:

-The user doesn't look at the center of the screen (and unless you're going to use a webcam or one of those head-direction trackers to figure out where they look, you can do nothing about this)

-FoV is wrong


I don't recommend doing any math on the vertices. Triangles will remain linear, so if you do nonlinear math on vertices, the triangles will not line up.

You need a per-pixel process.


E.g. render as:

1. Draw scene to offscreen texture normally

2. Use shader to draw a fullscreen quad that reads from the texture. Modify the texture coords based on position, so the result is stretched in center and compressed at edges.


I don't recommend that either, as it will probably make other things harder to implement, and it probably won't even look correct.


If your game doesn't have the mouse fixed at the center of the screen, perhaps you can use that as the "position the user is looking at" and translate the projection so it's centered there. This way the stretching will not happen where the user is focused.

#5234876 Scale world geometry with an aspect ratio?

Posted by Waterlimon on 15 June 2015 - 07:13 AM

Are you setting the aspect ratio as x/y or y/x? (It should be the former.)




Or do you perhaps mean how the geometry at the edges of the screen appears "stretched"?

#5234848 Slow down when using OpenMP with stl vector per thread

Posted by Waterlimon on 15 June 2015 - 03:06 AM

You could try using vector.reserve(n) if you know how large the vectors need to be at the end (depends on how much memory you're willing to waste if you estimate too high) - this ensures no allocations need to be done (apart from the first one) as the vector grows, until it grows beyond n elements.


EDIT: You say it's single-threaded; how long an execution time overall are we talking about here? There's probably some overhead for creating and running the thread, etc. How's the performance with 4 threads or however many you're going to use in practice? (And this is in release mode, right?)

#5234508 Good name for a zombie game

Posted by Waterlimon on 12 June 2015 - 02:50 PM

"Generic zombie apocalypse simulator"


Then you shorten it to GZAS and stylize the hell out of that shortened name to make it not as generic as it reads. And figure out a catchy way to pronounce it.

#5233894 Primitives scrambled when vertex buffer becomes "large"

Posted by Waterlimon on 09 June 2015 - 02:13 PM

Are your index buffers 16-bit only? If so, try using 32-bit indices instead. (300*300 ~ 90000, while a 16-bit index can hold a max value of 65535, so the maximum map size that works would be around 250*250.)
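A quick sanity check on the arithmetic (note that some APIs reserve the index value 0xFFFF for primitive restart, which is one reason "around 250*250" is a safe rule of thumb rather than the exact limit):

```python
# A 16-bit index can address at most 65536 vertices (values 0..65535):
ADDRESSABLE = 1 << 16
print(300 * 300)   # 90000 vertices: needs 32-bit indices
print(256 * 256)   # 65536 vertices: the most a 16-bit index can reach
```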

#5233456 Levers and momentum

Posted by Waterlimon on 07 June 2015 - 06:37 PM

You probably want to find the impulse at the point of contact, as well as the contact time, to calculate damage (more time = the biomass has time to displace, reducing damage).

Also the area (the smaller the area the impulse is distributed over, the more damage - sharp weapons).







Area is basically a weapon-specific constant (small for sharp weapons, big for blunt weapons), and time is how long it takes for the weapon to stop after making contact (I don't know if time really matters - if it does, the difference probably won't be too great - you could just apply some "reduction factor" between 0.5 (slow hammers, etc.) and 1.0 (swords/arrows, etc.) or something).


And more realistically, those variables would vary over time (think of a sword: it might at first have a very small contact area, but then it cuts deeper and the area increases while the force decreases - whereas for a hammer the area is constant and only the force decreases).


So, how much impulse is at point of contact?

This depends on the point of contact, the velocity, the angular velocity, the mass, the moment of inertia, whether the weapon stopped fully when hitting or continued moving after hitting, etc.


So what's the velocity/angular velocity at the point of contact?

Well, that depends on how long the weapon was accelerated, how much force each hand applies (your fulcrum approach assumes the other hand applies no force and stays static), where the hands are placed, what path the weapon moves along (if you lift it up, it will take longer because of gravity and thus more energy will be 'charged' into the weapon), and where it starts from (if you start holding the weapon high in the air, you get gravitational potential energy as extra - if you start with the weapon at the ground, you lose energy overall due to lifting it a greater distance than you lower it).


And much of this is just raw data you have to provide (the path of the weapon, how much force the hands apply...), so unless you actually intend to display everything visually, you might as well skip some of the simulation and use numbers that make it feel correct.