

tonemgub

Member Since 21 Oct 2011
Offline Last Active Dec 16 2014 03:10 AM

#5188269 input in realtime.

Posted by tonemgub on 21 October 2014 - 02:40 AM

You will never get truly real-time input from any Windows operating system, because these are not real-time operating systems (RTOS). Windows lets you receive key presses and releases as events - window messages sent to a window, along with the internal timer value for when each event happened - or you can poll the state of the keys (pressed or not) at any time. It's up to you to use all of these in such a way that the user thinks everything is happening in real time.
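For a GUI program, the two paths look roughly like this (a minimal sketch, not code from this thread; the window setup and Sleep() are only there to make it self-contained):

// Window messages deliver key events with a queue timestamp (GetMessageTime),
// while GetAsyncKeyState polls the current key state once per frame.
#include <windows.h>
#include <stdio.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_KEYDOWN:
    case WM_KEYUP:
        // GetMessageTime() is the tick count at the moment the event was
        // placed in the queue, not the moment we process it here.
        printf("key 0x%02X %s at %ld ms\n", (unsigned)wp,
               msg == WM_KEYDOWN ? "down" : "up", (long)GetMessageTime());
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int main(void)
{
    WNDCLASS wc = {0};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = GetModuleHandle(NULL);
    wc.lpszClassName = TEXT("InputDemo");
    RegisterClass(&wc);
    CreateWindow(wc.lpszClassName, TEXT("Input demo"),
                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                 CW_USEDEFAULT, CW_USEDEFAULT, 320, 240,
                 NULL, NULL, wc.hInstance, NULL);

    for (;;)
    {
        MSG m;
        while (PeekMessage(&m, NULL, 0, 0, PM_REMOVE))
        {
            if (m.message == WM_QUIT) return 0;
            TranslateMessage(&m);
            DispatchMessage(&m);
        }
        // Polling path: ask for the key state "right now" (high bit = down).
        if (GetAsyncKeyState(VK_SPACE) & 0x8000)
        {
            // e.g. apply thrust this frame
        }
        Sleep(1); // stand-in for the rest of the frame
    }
}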

 

Also, console programs are a bit different, as they put the key events into an input buffer that you can read from, and in this case, I don't think you can get the timer value for when the event happened...

 

Since you haven't specified what sort of program you are writing (console or GUI), the question still remains too broad for us to give you a straight answer.




#5188087 Rotation Around a Faked Axis

Posted by tonemgub on 20 October 2014 - 12:56 AM


> It ever rotates around a primary axis in the effective space.

Fixed that for you. The distinction between "primary axis", "arbitrary axis" and "primary axis-aligned" is really important here. If your persistent matrix is the identity matrix (no rotation or translation), then the first rotation you apply to it (multiplying it on either the left or the right by the rotation matrix) is a rotation around a primary axis. If the persistent matrix already contains a translation, the rotation becomes a rotation around a primary axis-aligned axis (so it will not produce a "look-at" type of transformation). If the persistent matrix also contains rotation (in addition to translation), and you want to apply a new rotation around one axis independently of the translation and of the other axes, then you'd have to somehow remove the existing rotation around that axis from the persistent matrix (which I think involves at least a matrix inversion), change the rotation, and re-multiply the persistent matrix with the new rotation... That is far more work than just keeping separate variables for position and rotation.

 

Anyway, is anything I said not true? I can't work through your math explanations right now, sorry... The reason I know what I said is true is that I once had the same problem described in the OP, and it turned out I couldn't just keep a persistent matrix and multiply it indefinitely to the left or right to get yaw and pitch rotations to act independently of each other and of the translation (position).

 

 

The simple explanation for how matrix concatenation works is that the transformations are applied in the order in which their corresponding matrices are multiplied. Each transformation by itself is relative to the origin (for translation) or the axes (for rotation) of the coordinate system, but when you concatenate transformations, the origin and axes of the coordinate system are relative to the previous transformations. So a translation matrix will move the origin of future transformations, and a rotation matrix will rotate the axes of future transformations.

 

So if you have YawRotation * Translation * YawRotation in the persistent matrix, you will obviously not get what you want, because the second rotation will be affected by the previous rotation and translation (which results in a new origin and rotated X, Y, Z axes). On the other hand, if you only wanted to apply two transformations (translation and yaw), then you could always multiply the yaw rotation matrix to the right of the persistent matrix and the translation matrix to the left, and you would get correct results. However, if you also have pitch, which needs to be applied to the right as well, but AFTER the yaw rotation, then your pitch and yaw will end up affecting each other...
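As a quick illustration of that order dependence (a small sketch using DirectXMath helpers; none of this code is from the thread):

// With D3D-style row vectors, v * A * B applies A first, then B.
#include <DirectXMath.h>
#include <cstdio>
using namespace DirectX;

int main()
{
    XMVECTOR p = XMVectorSet(0.0f, 0.0f, 1.0f, 1.0f);
    XMMATRIX T = XMMatrixTranslation(5.0f, 0.0f, 0.0f);
    XMMATRIX R = XMMatrixRotationY(XM_PIDIV2); // 90 degrees around Y

    // Rotate first, then translate: the rotation happens around the original origin.
    XMVECTOR a = XMVector3TransformCoord(p, R * T);
    // Translate first, then rotate: the translation gets swept around by the rotation.
    XMVECTOR b = XMVector3TransformCoord(p, T * R);

    printf("R*T: %.1f %.1f %.1f\n", XMVectorGetX(a), XMVectorGetY(a), XMVectorGetZ(a));
    printf("T*R: %.1f %.1f %.1f\n", XMVectorGetX(b), XMVectorGetY(b), XMVectorGetZ(b));
    // The two results differ, which is exactly why WHERE you multiply the new
    // matrix into the persistent one matters.
    return 0;
}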

 

Note: My previous recommendation of using Translation * RotationYaw * RotationPitch works for a view matrix, but I think it should work the same for a world matrix, except the angles and position coordinates are negated... In the end, I think it's always best to figure out your own explanation for how matrix concatenation works. Different people will have different ways of explaining it (even though they're explaining the same thing :) ). For myself, I found it better to stick to the simple explanation above (third paragraph) than a mathematical one...




#5187798 Rotation Around a Faked Axis

Posted by tonemgub on 18 October 2014 - 02:08 AM


> Currently, for each object I am storing a persistent matrix that represents all of the translations and rotations that have occurred since it was created. Each frame, this matrix is incrementally updated.

I believe this is where your problem lies. You cannot use the same matrix over and over again and just multiply it with another matrix (y-rotation matrix) to get what you want.

 

If you just re-use the same matrix and multiply it "to the right" with the new rotation matrix, your new rotation will always be affected by whatever rotation and translation there already is in the old matrix. If you multiply "to the left", your new matrix will affect all of your old translations and rotations. So there's no way to get your new matrix to act independently of the old, persistent one.

 

Instead, you should keep separate yaw, pitch and position variables for each object, and re-build the whole matrix for each object every time one of the yaw, pitch or position variables changes (or just before drawing your objects). To get a first-person-like camera, you have to multiply all the matrices obtained from the three variables in this order: Translation * RotationYaw * RotationPitch.
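A minimal sketch of that rebuild-from-variables approach, using DirectXMath (the struct and function names are mine, not from this thread):

#include <DirectXMath.h>
using namespace DirectX;

struct Transform
{
    float yaw   = 0.0f;                      // rotation around Y, radians
    float pitch = 0.0f;                      // rotation around X, radians
    XMFLOAT3 position{0.0f, 0.0f, 0.0f};
};

// Rebuild the whole matrix from the three variables whenever one of them
// changes (or simply once per frame, just before drawing), in the
// Translation * RotationYaw * RotationPitch order suggested above.
XMMATRIX BuildMatrix(const Transform& t)
{
    XMMATRIX translation = XMMatrixTranslation(t.position.x, t.position.y, t.position.z);
    XMMATRIX rotYaw      = XMMatrixRotationY(t.yaw);
    XMMATRIX rotPitch    = XMMatrixRotationX(t.pitch);
    return translation * rotYaw * rotPitch;
    // Whether this is used as a view or a world matrix determines whether the
    // angles and position should be negated first - see the note in the post
    // above about view vs. world matrices.
}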

 

You could also use a lookAt transformation like others suggested, but I think that might be more computationally-expensive (rotating the lookAt point requires sine/cosine operations, which is more work than just incrementing/decrementing the yaw or pitch variables and delaying the sine/cosine rotations until draw-time).




#5187616 DLL-Based Plugins

Posted by tonemgub on 17 October 2014 - 03:20 AM

Yes, you do have to avoid memory allocation/deallocation across the DLL boundaries, but C makes it easier to not make that mistake, because you can't pass C++ object pointers to the plugin's C-only functions. You'd have to specifically cast your objects to C pointers to pass them to your DLL, and cast them back to the original C++ object type in your plugin's code. If you avoid this, you can just stick to passing C structures and data types, and if you still want objects, you can implement them as COM-style interfaces, where you implement the allocation and deallocation explicitly in exactly one module - either the plugin or the main program - rather than splitting them across both, as the C++ new/delete operators and the C malloc/free functions let you do by accident. Although if you stick to malloc/free, you shouldn't have any problems across different runtime versions... they are just wrappers around the Win32 heap functions, IIRC.
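As a rough sketch of what such a C-only plugin boundary can look like (all names here are made up for illustration; the point is that the module that allocates is also the module that frees):

#include <stdlib.h>

#ifdef __cplusplus
extern "C" {
#endif

typedef struct PluginObject PluginObject;            /* opaque handle for the host */

typedef struct PluginParams { int width, height; } PluginParams;

/* Exported by the DLL (e.g. with __declspec(dllexport) or a .def file). */
PluginObject* Plugin_Create(const PluginParams* params);
void          Plugin_Update(PluginObject* obj, float dt);
void          Plugin_Destroy(PluginObject* obj);      /* frees what Plugin_Create allocated */

#ifdef __cplusplus
}
#endif

/* --- plugin-side implementation --- */
struct PluginObject { PluginParams params; float time; };

PluginObject* Plugin_Create(const PluginParams* params)
{
    PluginObject* obj = (PluginObject*)malloc(sizeof(PluginObject)); /* allocated here... */
    if (obj) { obj->params = *params; obj->time = 0.0f; }
    return obj;
}

void Plugin_Update(PluginObject* obj, float dt) { obj->time += dt; }

void Plugin_Destroy(PluginObject* obj) { free(obj); } /* ...and freed here, in the same module */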




#5187359 scan a string character one by one & hexadecimal code

Posted by tonemgub on 16 October 2014 - 05:09 AM

I think you need to "normalize" your Unicode string before you can extract the characters you want: http://www.unicode.org/reports/tr15/tr15-23.html

The Win32 function to do this is NormalizeString: http://msdn.microsoft.com/en-us/library/windows/desktop/dd319093%28v=vs.85%29.aspx I don't know if VB.NET already has an equivalent function, or if you have to PInvoke the Win32 one...
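For reference, a small C++/Win32 sketch of calling NormalizeString directly (in VB.NET you'd P/Invoke it the same way, or look for a built-in equivalent); it links against Normaliz.lib and needs Windows Vista or later:

#include <windows.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "Normaliz.lib")

int main()
{
    const wchar_t* src = L"\u0645\u0627\u0644";   // the three characters discussed above

    // First call with no destination buffer to get the required length.
    // Pick the normalization form you actually need (C, D, KC or KD).
    int needed = NormalizeString(NormalizationC, src, -1, NULL, 0);
    if (needed <= 0) { printf("NormalizeString failed: %lu\n", GetLastError()); return 1; }

    std::vector<wchar_t> dst(needed);
    int written = NormalizeString(NormalizationC, src, -1, dst.data(), needed);
    if (written <= 0) { printf("NormalizeString failed: %lu\n", GetLastError()); return 1; }

    // Dump the normalized code units one by one.
    for (int i = 0; i < written && dst[i]; ++i)
        printf("U+%04X\n", (unsigned)dst[i]);
    return 0;
}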




#5187350 scan a string character one by one & hexadecimal code

Posted by tonemgub on 16 October 2014 - 04:45 AM

The characters in the string you posted are (Unicode UTF-16): 0x0645, 0x0627, 0x0644. Why would you expect the first two of them to be 0xFEE3 and 0xFE8E?




#5187335 Rendering a texture with transparency

Posted by tonemgub on 16 October 2014 - 02:52 AM


> How do I make the red transparent instead of black, or in other words, how do I clear the texture to transparent instead of a solid color?

I think you also need to use ID3D11DeviceContext::OMSetBlendState to enable alpha blending when rendering the second cube.
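Something along these lines (a sketch of standard alpha blending; the device/context are parameters here so it compiles, and in real code you'd keep and Release the state object):

#include <d3d11.h>

// Enables SrcAlpha / InvSrcAlpha blending before drawing the second cube.
void EnableAlphaBlending(ID3D11Device* device, ID3D11DeviceContext* context)
{
    D3D11_BLEND_DESC bd = {};
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* alphaBlend = nullptr;
    device->CreateBlendState(&bd, &alphaBlend);

    const float blendFactor[4] = { 0, 0, 0, 0 };
    context->OMSetBlendState(alphaBlend, blendFactor, 0xFFFFFFFF);
    // Pass nullptr instead of alphaBlend later to restore default (opaque) blending.
}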

 

 

 


> How do I make it so that the inside faces of the cube render a color or texture when exposed to the camera via transparency or otherwise?

This is the age-old question of how to do order-independent transparency. But for a cube, or any other convex object, it is simple: first render only the back faces (draw the object with (counter-)clockwise front faces), then the front faces on top (draw the same object again with the opposite front-face setting)... For non-convex objects, you can either split them into convex parts and draw each part separately using the above method, or sort all of your object's faces in back-to-front order and draw them in that order without any backface culling. If some faces intersect, you'd also have to split them into separate faces along the intersection line.
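A sketch of the two-pass version for a convex object in D3D11 (the device/context and the draw callback are placeholders for your own code; the rasterizer states would normally be created once, not per draw):

#include <d3d11.h>

void DrawConvexTransparent(ID3D11Device* device, ID3D11DeviceContext* context,
                           void (*drawCube)())
{
    D3D11_RASTERIZER_DESC rd = {};
    rd.FillMode = D3D11_FILL_SOLID;

    ID3D11RasterizerState* cullFront = nullptr; // keeps only back faces
    ID3D11RasterizerState* cullBack  = nullptr; // keeps only front faces
    rd.CullMode = D3D11_CULL_FRONT;
    device->CreateRasterizerState(&rd, &cullFront);
    rd.CullMode = D3D11_CULL_BACK;
    device->CreateRasterizerState(&rd, &cullBack);

    // Pass 1: the faces farthest from the camera (the inside of the cube).
    context->RSSetState(cullFront);
    drawCube();
    // Pass 2: the faces nearest the camera, blended on top.
    context->RSSetState(cullBack);
    drawCube();
}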

 

 


> The inner result of scaling the wooden crate texture is kind of jaggy I assume due to not being affect by the antialiasing sampler. Is there a setting I need to change to alleviate this, or is simply that the wood texture is too large (1024x1024) and I need to use mimapping to achieve a better result?

Mipmapping and trilinear (or anisotropic) filtering (both during rendering the first cube to the texture and when rendering the second cube).
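Roughly like this (a sketch; it assumes the render-target texture was created with D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE and D3D11_RESOURCE_MISC_GENERATE_MIPS, and the device/context/SRV are placeholders passed in as parameters):

#include <d3d11.h>

void SetupMipmappedSampling(ID3D11Device* device, ID3D11DeviceContext* context,
                            ID3D11ShaderResourceView* textureSRV)
{
    D3D11_SAMPLER_DESC sd = {};
    sd.Filter        = D3D11_FILTER_MIN_MAG_MIP_LINEAR;   // or D3D11_FILTER_ANISOTROPIC
    sd.MaxAnisotropy = 16;
    sd.AddressU      = D3D11_TEXTURE_ADDRESS_WRAP;
    sd.AddressV      = D3D11_TEXTURE_ADDRESS_WRAP;
    sd.AddressW      = D3D11_TEXTURE_ADDRESS_WRAP;
    sd.MaxLOD        = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* trilinear = nullptr;
    device->CreateSamplerState(&sd, &trilinear);

    // After rendering the first cube into the texture, build its mip chain...
    context->GenerateMips(textureSRV);
    // ...then bind the sampler for the pass that draws the second cube.
    context->PSSetSamplers(0, 1, &trilinear);
}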




#5186694 Trying To Plot Points Around A Disk From Direction Vector.

Posted by tonemgub on 13 October 2014 - 07:28 AM


> Can Someone verify that thats the right way to get a perpendicula vector.

The vector perpendicular to two other vectors in 3D is given by the cross product of those two vectors. The direction of the resulting perpendicular vector follows the "right-hand" rule.

 

I'm not sure what your "Perpendicular" function is trying to do. All this does is return the (absolute values of the) x, y, and z components of "direction" into id, jd and kd:

// three mutually perpendicular basis vectors
float3 i = float3(1, 0, 0);
float3 j = float3(0, 1, 0);
float3 k = float3(0, 0, 1);

// measure the projection of "direction" onto each of the axes
float id = abs(dot(i, direction)); // i.dot(direction);
float jd = abs(dot(j, direction)); // j.dot(direction);
float kd = abs(dot(k, direction)); // k.dot(direction);

 

And the final return value from your Perpendicular function will be the cross product between the input "direction" (your p1p2 vector) and one of the i, j, k vectors (the one "least parallel" to "direction"/p1p2)...
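In C++ (DirectXMath), the same idea would look something like this (the function name is mine, not from your shader):

#include <DirectXMath.h>
#include <cmath>
using namespace DirectX;

// Pick the primary axis "least parallel" to direction, then take the cross
// product to get a vector perpendicular to direction.
XMVECTOR Perpendicular(FXMVECTOR direction)
{
    const XMVECTOR axes[3] = {
        XMVectorSet(1, 0, 0, 0),
        XMVectorSet(0, 1, 0, 0),
        XMVectorSet(0, 0, 1, 0),
    };

    // The axis with the smallest |dot| is the one least parallel to direction.
    int best = 0;
    float bestDot = fabsf(XMVectorGetX(XMVector3Dot(axes[0], direction)));
    for (int i = 1; i < 3; ++i)
    {
        float d = fabsf(XMVectorGetX(XMVector3Dot(axes[i], direction)));
        if (d < bestDot) { bestDot = d; best = i; }
    }
    return XMVector3Cross(direction, axes[best]);
}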

 

Anyway, it seems all other position vectors are in world space, so maybe you need to rotate the i, j and k basis vectors with the rotation part of your world matrix as well...




#5186644 Trying To Plot Points Around A Disk From Direction Vector.

Posted by tonemgub on 13 October 2014 - 01:57 AM

Why are you passing p1p2 (assuming this is the P2-P1 vector from your algorithm?) and then calculating p1 and p2 separately? Also, isn't the "dir" variable the same as the normalized P2-P1 vector? Why are you passing it separately to your shader (or is it just a constant)? Are you sure all of these variables match the algorithm you described? By the looks of it, they don't. You should only pass P1 and P2 into your shader and calculate everything else from those. If you change any of these variables (p1, p2, dir, p1p2) - even just the sign of one of their components - it affects what all of the other variables should be. For example, if you just change the sign of dir.y, then P1's and P2's y values should also be swapped with one another (assuming x and z are 0), and the sign of p1p2.y should also be reversed.

 

Also, are you initializing all of your shader constant buffers properly?

 

Also, you are declaring the local variable "Particle p" with the same name as the global variable "float3 p" (or are they all local variables? - if so, then your shader really makes no sense at all, because most of them are not initialized anywhere)... things can go wrong here as well.

 

Is that even a shader or just C++ code? :) Anyway, the problem is clear: a lot of your variables are not initialized anywhere.




#5185499 avoid localhost // verify url

Posted by tonemgub on 07 October 2014 - 05:14 AM

You could use gethostbyname or getaddrinfo to detect if the DNS name you're trying to connect to resolves to the same IP/address as localhost (or any non-routable IP/addresses)...
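A Winsock sketch of that check (only the loopback range is shown; other non-routable ranges would be handled the same way):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

// Returns true if any address the host name resolves to is a loopback address.
bool ResolvesToLoopback(const char* host)
{
    addrinfo hints = {};
    hints.ai_family = AF_UNSPEC;
    addrinfo* results = nullptr;
    if (getaddrinfo(host, nullptr, &hints, &results) != 0)
        return false; // could not resolve at all

    bool loopback = false;
    for (addrinfo* a = results; a; a = a->ai_next)
    {
        if (a->ai_family == AF_INET)
        {
            const sockaddr_in* v4 = (const sockaddr_in*)a->ai_addr;
            if ((ntohl(v4->sin_addr.s_addr) >> 24) == 127)   // 127.0.0.0/8
                loopback = true;
        }
        else if (a->ai_family == AF_INET6)
        {
            const sockaddr_in6* v6 = (const sockaddr_in6*)a->ai_addr;
            if (IN6_IS_ADDR_LOOPBACK(&v6->sin6_addr))        // ::1
                loopback = true;
        }
    }
    freeaddrinfo(results);
    return loopback;
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    printf("localhost -> %s\n", ResolvesToLoopback("localhost") ? "loopback" : "ok");
    WSACleanup();
    return 0;
}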

Of course, anyone who could circumvent your public HTTPS certificate could probably also disable the IP/address checks even faster.

 

The best you can do is hide the public certificate in your executable (the code section if possible), and maybe even encrypt it with a password stored somewhere else.




#5185497 WinAPI - wait for certain calls to finish before continuing?

Posted by tonemgub on 07 October 2014 - 04:46 AM

Each pair (or more) of your click messages (WM_LBUTTONDOWN immediately followed by WM_LBUTTONUP?) is probably being converted into a double-click message (WM_LBUTTONDBLCLK), because the clicks arrive faster than the double-click time threshold and the target window's class has the CS_DBLCLKS style. You can disable this behaviour by removing the CS_DBLCLKS class style from the window class(es) of the windows you're trying to click. But for this to be the cause, it also means that the windows you're sending the messages to belong to a different thread than the thread you're sending from?
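If those target windows were created by your own process, removing the style looks like this (a sketch; note that SetClassLongPtr cannot modify a window class owned by another process):

#include <windows.h>

void DisableDoubleClicks(HWND hwnd)
{
    // Clear CS_DBLCLKS from the window's class style so the second
    // WM_LBUTTONDOWN is no longer turned into WM_LBUTTONDBLCLK.
    ULONG_PTR style = GetClassLongPtr(hwnd, GCL_STYLE);
    SetClassLongPtr(hwnd, GCL_STYLE, (LONG_PTR)(style & ~(ULONG_PTR)CS_DBLCLKS));
}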

 

Also, if you can let us know what exactly it is you're trying to do, we could propose alternative methods... Sending mouse messages to windows the way you're trying to doesn't feel like a very good idea to me.




#5185229 speeding up terrain collision checks

Posted by tonemgub on 06 October 2014 - 01:49 AM

If your terrain is height map-based, you could just sample the interpolated height from the height map texture, and use that for the collision math - most of the time, all you have to do is clamp each object's position against the terrain's height.

For the water, if it's always at the same height, you can just check your objects against that height and the terrain height (the water bed), to see if the objects are in water... You'll always have to do this, so I don't see how a collision map helps.
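A sketch of that height-map sampling (assuming "heights" is a row-major width*depth array of per-vertex heights; the function names are mine):

#include <algorithm>

// Bilinearly interpolate the four surrounding height samples at (x, z),
// where x and z are in height-map cell units.
float SampleHeight(const float* heights, int width, int depth, float x, float z)
{
    x = std::max(0.0f, std::min(x, (float)(width - 1) - 0.001f));
    z = std::max(0.0f, std::min(z, (float)(depth - 1) - 0.001f));
    int ix = (int)x, iz = (int)z;
    float fx = x - ix, fz = z - iz;

    float h00 = heights[iz * width + ix];
    float h10 = heights[iz * width + ix + 1];
    float h01 = heights[(iz + 1) * width + ix];
    float h11 = heights[(iz + 1) * width + ix + 1];

    // Blend along x, then along z.
    float h0 = h00 + (h10 - h00) * fx;
    float h1 = h01 + (h11 - h01) * fx;
    return h0 + (h1 - h0) * fz;
}

// "Collision" against the terrain is then just a clamp of the object's height.
void ClampToTerrain(float& objectY, const float* heights, int width, int depth,
                    float x, float z)
{
    objectY = std::max(objectY, SampleHeight(heights, width, depth, x, z));
}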

 

Anyway, it does not seem like your problem is the amount of memory used, but the number of objects that you have to check against each other? What I was trying to explain is that you shouldn't do collision checks of one object from one of your sparse maps against ALL objects from ALL other sparse maps:

 

 


> a single entity is currently checked for collisions with 260 trees and 1300 rocks in an 800x800 sparse matrix

For example, if you have four sparse maps - one for each object type - you should only have to do four checks: one per map, at the grid position of the object. You also don't have to do this every frame, only when placing the objects from the sparse maps into the world and/or collision maps. I'm assuming you'd only do that full all-objects collision pass once, in a "world generation" step during game load, so its speed shouldn't matter that much.

Once the objects have been placed in the collision map, you only have to check the dynamic objects for collisions against each other and against the static objects. The static objects were already checked against each other during world (or sparse map) generation, and they are not going to move, so there's no point re-checking them every frame. In fact, there's no point adding them to the collision map in the first place - it will be a lot faster to check each dynamic object directly against the static objects from the sparse maps, even if you still have to do the extra object-in-water check for each static object.

Also, if your sparse maps contain no overlapping objects to begin with (you can enforce that when you generate them, as a "loading" step), you won't have to check the static objects against each other at all - only dynamic objects against static ones. The only "special case" left to check every frame is the object-in-water case, and for static objects (assuming the water is also static) you could just cache that result, maybe with an "objectInWater" flag in your collision map.

 

 

 


> at 800x800, there are 1089 plant maps in a single map square, and there are 250,000 map squares in the world map. that works out to 272.25 million plant maps. obviously, with that many, they need to be procedurally generated

I thought you said the plant maps are tiled (i.e., the same plant map is tiled many times?). Do you mean that this tiling is implemented by memcpy-ing the same "plant map" over and over, 250,000 times? Wouldn't you be better off just using the original map every time, and displacing the positions of the objects inside it by the current tile's position during collision checks? If that's not the case (if each tile is generated differently), then why do you need the sparse maps in the first place - couldn't you just use values from your RNG directly (every time for all static objects, and for the initial positions of the dynamic objects), assuming your RNG is seed-based and always returns the same value for the same (x, y, seed) input?




#5184732 speeding up terrain collision checks

Posted by tonemgub on 03 October 2014 - 02:49 AM


> movement collision checks test each entity vs a sparse matrix list of rocks, a sparse matrix list of trees, and a list of world objects

You could simplify this, and get rid of the tree-in-hut problem, by using only one sparse matrix for all of your object types, instead of one matrix per type. You can also represent objects that are allowed to overlap with other objects (like grass around a tree) by using bitmask values in the matrix, instead of straightforward object-type values.

 


> a quick check of the code reveals that in the test case above, a single entity is currently checked for collisions with 260 trees and 1300 rocks in an 800x800 sparse matrix "plant map".

You should only check each entity against the matrix cells that contain the entity's position... that's the whole point of doing grid-based collision.
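A sketch combining the two suggestions (one grid, bitmask cells, one lookup per entity; all names are made up for illustration):

#include <cstdint>
#include <vector>

enum : std::uint8_t {
    CELL_TREE  = 1 << 0,
    CELL_ROCK  = 1 << 1,
    CELL_HUT   = 1 << 2,
    CELL_GRASS = 1 << 3,   // grass is allowed to overlap other types
};

struct CollisionGrid {
    int width = 800, height = 800;
    std::vector<std::uint8_t> cells = std::vector<std::uint8_t>(800 * 800, 0);

    std::uint8_t& at(int x, int y) { return cells[y * width + x]; }
};

// One lookup per entity instead of a scan over every tree and rock.
bool BlockedAt(CollisionGrid& grid, float worldX, float worldY, float cellSize)
{
    int cx = (int)(worldX / cellSize);
    int cy = (int)(worldY / cellSize);
    std::uint8_t mask = grid.at(cx, cy);
    // Grass alone doesn't block movement; anything else does.
    return (mask & ~CELL_GRASS) != 0;
}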




#5184718 Trying to find a webpage that teaches you how to do normal maps for textures...

Posted by tonemgub on 03 October 2014 - 01:06 AM

http://www.katsbits.com/tutorials/textures/how-not-to-make-normal-maps-from-photos-or-images.php

Next time you just can't seem to find what you're searching for, try using private/incognito browsing mode, or disable personalized search results in your Google account.




#5184291 DirectX to OpenGL matrix issue

Posted by tonemgub on 01 October 2014 - 05:20 AM

Seems to me like you're not setting the viewport: https://www.khronos.org/opengles/sdk/docs/man/xhtml/glViewport.xml

 

In your Ubuntu screenshot this looks like what is happening: the viewport appears to have been left at a tiny default size, so only the center of your cube shows through it.

 

In your Windows example, it's not clear what's happening - OpenGL probably takes the viewport size from the window's client area when the context is created, but then you resize the window, and the old viewport no longer matches the new window size?

 

Check the examples in this thread for how to implement a resize callback and use it to set the viewport: http://www.gamedev.net/topic/661407-why-triangle-is-not-drawing/
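A minimal sketch of the resize-callback approach, assuming GLFW 3 here (the linked thread shows other windowing setups as well):

#include <GLFW/glfw3.h>

// Keep the viewport in sync whenever the framebuffer size changes.
static void OnFramebufferResize(GLFWwindow* /*window*/, int width, int height)
{
    glViewport(0, 0, width, height);
}

int main()
{
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(640, 480, "Cube", nullptr, nullptr);
    glfwMakeContextCurrent(window);

    // Set the viewport once for the initial size, then keep it updated.
    int w, h;
    glfwGetFramebufferSize(window, &w, &h);
    glViewport(0, 0, w, h);
    glfwSetFramebufferSizeCallback(window, OnFramebufferResize);

    while (!glfwWindowShouldClose(window))
    {
        // ... draw the cube ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}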





