1. Past hour
2. ## How to make Windows executable from VS 2017 maximally toolset non-specific

Using the older platform toolset should suffice. You may also have to look out for API calls that are only available in newer Windows versions. I've used VS 2015 with the XP toolset (and a statically linked CRT) for a game which runs fine on XP.
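For the API-availability point, the usual pattern is to resolve newer functions at runtime instead of importing them, so the binary still loads on older Windows. A sketch, using GetTickCount64 (Vista and later) purely as an example of a newer kernel32 export:

```cpp
#include <windows.h>

// Resolve a Vista+ API at runtime so the executable still loads on XP,
// where the import would otherwise fail at load time.
typedef ULONGLONG (WINAPI *GetTickCount64Fn)();

ULONGLONG tickCount() {
    static GetTickCount64Fn fn = reinterpret_cast<GetTickCount64Fn>(
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "GetTickCount64"));
    if (fn)
        return fn();        // Vista and later
    return GetTickCount();  // 32-bit fallback, wraps after ~49.7 days
}
```

The same pattern works for any API newer than your oldest target OS.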
3. Today
4. ## Starting a Career in the Gaming Industry - Help Required

Yes. Working as a tester. Applying for a game job. Changing careers into games.
5. ## How do I pass this view and projection matrix?

I'm using a library for gizmos (translate, rotate, scale) so that I can control objects in a scene: https://github.com/CedricGuillemet/ImGuizmo. There is a function called Manipulate that takes const float* as the type for the projection and view matrices, and float* for the object matrix. I have those matrices stored in DX11 XMMATRIX variables. How do I pass that data?
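One common approach, sketched here and not tested against any particular setup: store each XMMATRIX (which lives in SIMD registers) into an XMFLOAT4X4, which is a plain 4x4 float array, and pass the address of its first element.

```cpp
#include <DirectXMath.h>
#include "ImGuizmo.h"

using namespace DirectX;

// Spill the SIMD matrices to plain float arrays that ImGuizmo can
// read (view/projection) and write (object matrix).
void editTransform(XMMATRIX view, XMMATRIX proj, XMMATRIX& object) {
    XMFLOAT4X4 v, p, o;
    XMStoreFloat4x4(&v, view);
    XMStoreFloat4x4(&p, proj);
    XMStoreFloat4x4(&o, object);

    ImGuizmo::Manipulate(&v._11, &p._11,
                         ImGuizmo::TRANSLATE, ImGuizmo::WORLD, &o._11);

    object = XMLoadFloat4x4(&o); // Manipulate writes the edited matrix back
}
```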
6. ## 2D Tile Art, Looking for suggestions

Hello friends! I've been working on some pretty simplistic pixel art lately, so I put together a picture of the parts of a tile set I'm making. In the picture, you can see grass (one patch with a lighter shade), a stone path, and some cliffs/hills. I've uploaded it to the post. I'm hoping to get a little bit of feedback on it - is it too simplistic that it doesn't do a good job of portraying what it's supposed to be? Is there anything you would recommend changing to make it look a little more realistic? Thanks so much, have a great rest of your day!

8. ## Best Practices for Organizing a Sprite Sheet that needs to support concurrent states

Hello, I'm working on developing a 2D platformer. In the game, the player can run/jump/etc, and they can also often shoot at the same time that they're performing these other actions. For programming these actions, I have no issue constructing the relevant FSMs and getting them to behave as intended. My issue comes when I need to find the right frames or animations for the sprite to show. Should the game just check if the player is shooting and what state they're in and then have the right animations hardcoded in to play in each possible outcome of that if statement? I've considered some potentially more elegant solutions like having a normal sprite sheet and then a shooting sprite sheet where the shooting sheet has the corresponding shooting frame at the same position in the sheet. Then the animator would just have to load the correct frame of its current sheet no matter what, and whenever the player is shooting, it simply swaps out the regular sprite sheet for the shooting one. Basically I can think of a few different ideas that sort of solve this problem, but none of them seem ideal. I wanted to know what the generally accepted best practices are for this type of problem. Any advice would be appreciated.
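The parallel-sheet idea in the post can be sketched very compactly, assuming a layout where each movement state owns a row and both sheets share the same grid; the names and layout here are hypothetical, not from any particular engine:

```cpp
#include <cassert>

// Hypothetical layout: one row per movement state, same column count in
// both sheets; the "shooting" sheet mirrors the normal sheet frame-for-frame.
enum class State { Idle = 0, Run = 1, Jump = 2 };
enum class Sheet { Normal, Shooting };

struct Frame { Sheet sheet; int index; };

constexpr int kColumns = 8; // frames per row, an assumed sheet width

// The movement FSM chooses (state, frameInState); the shooting flag only
// swaps which sheet the same index is read from.
Frame pickFrame(State s, int frameInState, bool shooting) {
    int index = static_cast<int>(s) * kColumns + frameInState;
    return { shooting ? Sheet::Shooting : Sheet::Normal, index };
}
```

The point is that the shooting check stays out of the animation logic entirely; it only selects the sheet.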
9. ## Time complexity considerations of Sutherland–Hodgman clipping in one-shot contact generation

Consider the worst case of roughly 12 vertices against 12 vertices. That's 144 iterations. However, this will operate on memory inside a very small space in L1 cache, so it's going to be extremely fast.
10. ## Orx now uses gamepad controller mapping

Last week, support was added to Orx for gamepad controller mapping, as well as half-axes, analog thresholds, and stick/button/trigger remapping. This is made possible by utilizing the SDL Game Controller Database initiative to standardize controller sticks and buttons. What this means for games is that no matter what type of controller you plug in, JOY_LX_1 means the X axis on the left stick of the first controller. This is much better than the previous JOY_X_1, which meant the same thing but gave no guarantee which stick on your controller it would be. This largely removes the need for adding a re-mapper screen to your games (though one is still good for convenience). An example of use would look like the following. First the config: [MainInput] JOY_LX_1 = LeftRight Then in code: if (orxInput_IsActive("LeftRight")) { orxFLOAT x = orxInput_GetValue("LeftRight"); //do something } There are a number of joystick tutorials that cover the new features.
11. ## Silly Input Layout Problem

The OP's VS code is fine; the shader compiler automatically converts a float4x4 to 4 float4 attributes in the input signature (with sequential semantic indices).
12. Yesterday
13. ## Move view(camera) matrix

Your view matrix is just the inverse of a matrix representing the world-space transform for your camera. "LookAt" functions will automatically invert the transform for you, but you can also build a "normal" transformation matrix for your camera and then compute the inverse to get a view matrix.
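A minimal sketch of that inverse, using nothing but a plain rigid transform (rotation plus translation) and no particular math library: inverting with the transpose trick produces exactly what a LookAt function would hand you.

```cpp
#include <cassert>
#include <cmath>

// A rigid transform: 3x3 rotation (row-major) plus a translation.
struct Rigid {
    float r[3][3];
    float t[3];
};

// Invert a rigid transform: transpose the rotation, then rotate-and-negate
// the translation. Applied to a camera's world transform, this yields the
// view matrix.
Rigid inverseRigid(const Rigid& m) {
    Rigid out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.r[i][j] = m.r[j][i];
    for (int i = 0; i < 3; ++i) {
        out.t[i] = 0.f;
        for (int j = 0; j < 3; ++j)
            out.t[i] -= out.r[i][j] * m.t[j];
    }
    return out;
}

// Transform a point by the rigid transform.
void apply(const Rigid& m, const float p[3], float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = m.r[i][0] * p[0] + m.r[i][1] * p[1] + m.r[i][2] * p[2] + m.t[i];
}
```

A quick sanity check: applying the view matrix to the camera's own position should give the origin, since the camera sits at the origin of view space.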
14. ## Mobile touchscreen performance expectation.

Your input implementation is likely the problem; you may need to implement your own system.

16. ## The blackboards of behavior trees

My understanding of behavior trees is that you define a whole bunch of functions in your game code that return a boolean. These can be functions that get information about the situation (Is there an enemy nearby? Do I have ammo? Is my health above 60%?) or functions that actually do something and return whether they succeeded (Move to take cover. Shoot at the weakest enemy within range. Say "Halt!"). A behavior tree is then an expression that joins these functions with the equivalent of the operators || and && from C. The strength of this paradigm is that it's impossible for the game designer to go crazy with the complexity of the behaviors. If you need to define a whole bunch of variables in a blackboard, perhaps this very simple architecture is not a good fit for your task.
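The post's view can be sketched directly in code; `sequence` and `selector` here are the && and || combinators it describes (the names are mine, not from any particular BT library):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// A node is just a function returning success/failure, matching the
// post's "functions that return a boolean" view of behaviour trees.
using Node = std::function<bool()>;

// Sequence: succeeds only if every child succeeds (the && of the post).
Node sequence(std::vector<Node> children) {
    return [children] {
        for (const auto& c : children)
            if (!c()) return false;
        return true;
    };
}

// Selector: succeeds as soon as one child succeeds (the ||).
Node selector(std::vector<Node> children) {
    return [children] {
        for (const auto& c : children)
            if (c()) return true;
        return false;
    };
}
```

Conditions like "Is there an enemy nearby?" and actions like "Move to take cover" both slot in as leaves; the tree is then just a nested expression of these two combinators.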
17. ## FBX SDK skinned animation

While Dirk is searching for his beautiful code, you can try reading this code to become familiar with skeletal animation and skinning. Note, however, that this code ignores the geometric transform. Since most of my test models are from Mixamo, I haven't needed such a transform. If you do need it, simply transform the mesh vertices and normals with it, as Dirk pointed out.
18. ## Time complexity considerations of Sutherland–Hodgman clipping in one-shot contact generation

I've been using Sutherland-Hodgman for years with good results. Yes, it's fast enough and simple to implement. The good thing is that it preserves the polygon vertex order. I'm not aware of temporal coherency being exploited in clipping itself. However, in the context of contact creation, temporal coherency can be exploited by reclipping the touching features. If two faces are chosen as the touching features in one frame, you run clipping and cache these features. If these two faces are still the touching features in the next frame, you check whether the relative orientation of the shapes has changed beyond some small tolerance. If not, you reclip those features. Of course, this assumes you can detect the touching features using an algorithm such as the SAT.
19. ## Time complexity considerations of Sutherland–Hodgman clipping in one-shot contact generation

Hi, most people seem to prefer some version of Sutherland–Hodgman clipping when generating one-shot contacts. Sutherland–Hodgman has O(n*m) time complexity. Is this fast enough for most practical purposes, or is there a compelling reason to pick a different algorithm with better time complexity? I guess I could always profile, but I'm also asking in case there are other considerations that I may have overlooked... Is there, for example, some case of temporal coherency that can be exploited? Thanks!
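For reference, a minimal 2D Sutherland–Hodgman sketch, which makes the O(n*m) structure visible: one pass over the (current) subject polygon per clipper edge.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct V2 { float x, y; };

// Clip the subject polygon against one directed edge (a->b) of a convex,
// counter-clockwise clipper; points to the left of the edge are kept.
static std::vector<V2> clipEdge(const std::vector<V2>& poly, V2 a, V2 b) {
    auto inside = [&](V2 p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x) >= 0.f;
    };
    auto intersect = [&](V2 p, V2 q) {
        float dx = q.x - p.x, dy = q.y - p.y;
        float num = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        float den = (b.y - a.y) * dx - (b.x - a.x) * dy;
        float t = num / den;
        return V2{ p.x + t * dx, p.y + t * dy };
    };
    std::vector<V2> out;
    for (size_t i = 0; i < poly.size(); ++i) {
        V2 cur = poly[i], nxt = poly[(i + 1) % poly.size()];
        bool cin = inside(cur), nin = inside(nxt);
        if (cin) out.push_back(cur);
        if (cin != nin) out.push_back(intersect(cur, nxt));
    }
    return out;
}

// O(n*m): m clipper edges, each scanning the n-ish subject vertices.
std::vector<V2> clip(std::vector<V2> subject, const std::vector<V2>& clipper) {
    for (size_t i = 0; i < clipper.size() && !subject.empty(); ++i)
        subject = clipEdge(subject, clipper[i], clipper[(i + 1) % clipper.size()]);
    return subject;
}
```

With n and m around 12, the inner work is a handful of multiplies on data that stays resident in L1, which is why the asymptotic complexity rarely matters here in practice.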
20. ## Moving the Masses

I'm very pleased to showcase the new functionality I've incorporated allowing the user to select many Simulin at once. This feature was always on the back-burner because there were more pressing things to address. I'll quickly go through the technique I used. But first, please watch the following videos! I created a two-faced plane whose opposite corners are defined by the vertices obtained from the right-click down and up events. Then I raycast to the planet twice and have all four corners for the plane. After that I normalized the vertices and then raycast from the position of every visible Simulin to the center of the planet. It's not 100% perfect, because the perspective camera obscures things a bit and the box that is drawn is an HTML element. For this reason, Simulin along the edges of the selection box will sometimes not be selected. But really, big deal... Once the target Simulin are identified, the user selects the destination. This is where the scripted behaviour of the Simulins' movement comes in.
Rather than having all the Simulin converge on a single point, I created a movable plane whose vertices are defined in a spiral: $cINT.definePositionPlane = function( u ){ // u is the unit scale for the world var a = new THREE.Geometry(); var mat = new THREE.MeshLambertMaterial( { side: THREE.DoubleSide } ); a.vertices.push($m.v( 0 , 0 , 0 ) ); for( var i=0, j=61, k=61, l=1, q=-1, p=1, b=15; i<121; i+=0 ){ if( l < 11){ for( var n=0; n<l; n++ ){ j += q; a.vertices.push( $m.v( ( j - 61 ) * u * b , 0 , ( k - 61 ) * u * b ) ); i++; } for( var n=0; n<l; n++ ){ k += p; a.vertices.push($m.v( ( j - 61 ) * u * b , 0 , ( k - 61 ) * u * b ) ); i++; } q*=-1; p*=-1; l++; } else { for( var n=0; n<10; n++ ){ j += q; a.vertices.push( $m.v( ( j - 61 ) * u * b , 0 , ( k - 61 ) * u * b ) ); i++; } i = 121; } } var f = new THREE.Face3( 0 , 10 , 120 ); a.faces.push( f ); f = new THREE.Face3( 0 , 120 , 110 ); a.faces.push( f ); a.computeFaceNormals(); var m = new THREE.Mesh( a , mat ); m.visible = true; return m; }; The plane is moved and then oriented over the point the user chose. Each Simulin requesting a path is given an incremental vertex obtained from the selection plane. A raycast call is done from that vertex's location once it is extrapolated to its global position. From there I get the sphere position, and that becomes the unique goal for the Simulin in question. I think the effect looks awesome. Right now you draw the selection box by holding down the right mouse button, and when you release it the calculations are done. I'm also thinking that in the future I can program a custom routine and color for the direction in which you draw the selection box. If you go left to right, top to bottom, then maybe you select every Simulin; right to left, top to bottom, then maybe you select only hunters, or some such idea. Let me know what you think....
21. ## Thumbstick position doesn't match input angle

Hello all, I'm working on some code to position an arrow in the direction of the left thumbstick. I believe I cannot get the behavior I want due to imprecise hardware. What do you think? Is there a way to solve this? Here is what I want to happen: as the player does a full 360-degree rotation with the left thumbstick fully pushed, I want the aim arrow to do a smooth 360-degree rotation. Here is what happens: as the player does the full 360-degree rotation, the arrow sticks at the angles of 45, 135, -45, and -135 degrees (the four corners). Why this happens: the raw input I get for the analog stick hits a value of 1 well before full extension. This means there is a large region where x's and y's absolute values are both pinned at one, which in turn leads to that stickiness. My thoughts: because this happens at the hardware level and the value's max is one, there is nothing I can do about it. What do you think? Is there a way to get the behavior I want, that is, a nearly exact mapping of the extended joystick's angle to the angle I get from the input?
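For what it's worth, a common starting point is to derive the angle with atan2 from both raw axes and clamp only the magnitude; this cannot recover precision the hardware clipped away in the saturated corner region, but it is the usual baseline before adding deadzone or range calibration (sketch, no particular input API assumed):

```cpp
#include <cassert>
#include <cmath>

// Convert raw stick axes into an aim angle and deflection.
// atan2 gives the full 360-degree angle from both axes at once;
// the magnitude is clamped so the square "corner" region where both
// axes saturate still reads as full deflection rather than > 1.
struct Aim { float angleRadians; float deflection; };

Aim stickToAim(float x, float y) {
    float mag = std::sqrt(x * x + y * y);
    Aim a;
    a.angleRadians = std::atan2(y, x);    // -pi..pi, continuous
    a.deflection = mag > 1.f ? 1.f : mag; // clamp corner overshoot
    return a;
}
```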
22. ## shooting bullets

glPushMatrix(); glTranslatef(-4.5f, 0.5625f,0.0f); glRotatef(angle, 0.0f, 0.0f, 1.0f); glTranslatef(4.5f, -0.5625f, 0.0f); glBegin(GL_POLYGON); glTexCoord3f(0.0f, 0.0f, 0.0f); glVertex3f(-4.4375f, 0.625f+up+vertical, 0.0f); glTexCoord3f(1.0f, 0.0f, 0.0f); glVertex3f(-4.4375f, 0.5f+up+vertical, 0.0f); glTexCoord3f(1.0f, 1.0f, 0.0f); glVertex3f(-4.5625f, 0.5f+up+vertical, 0.0f); glTexCoord3f(0.0f, 1.0f, 0.0f); glVertex3f(-4.5625f, 0.625f+up+vertical, 0.0f); glEnd(); glPopMatrix(); I am trying to draw a bullet pointing down, but it only draws an upward bullet.
23. ## C++ Temporary string literals

Consider the following code: std::string function() { std::string test("This is some very long string..."); // ... use test return test; } I know that string literals assigned to pointers (char* psz = "some string";) are stored in the .rodata section of an exe file, and that string literals used to initialize an array are stored wherever the array is stored: on the stack if the array is on the stack, or in the object's heap allocation if the object is on the heap. This can be seen here. What about the code above? For long strings, std::string stores its characters on the heap. But before they get to the heap, is the string "This is some very..." stored somewhere? Just by intuition I would say no. If it were stored somewhere, say .rodata, it would just unnecessarily clutter the exe file, so it doesn't make a lot of sense. Am I right? I know that this might be implementation-defined, but I'm asking only about x86.
24. ## Mobile touchscreen performance expectation.

I appreciate the feedback and have done some more testing with it in mind... The "touch" seems to work fine with the tips of my thumbs, but not so much with the undersides, as you would hold a gamepad. I notice this is easier to do with a smaller device, like my Huawei Y300, where it's not much of a problem and actually works very well... However, it is harder to do on my Nexus 7 tablet... which is because it's clamped into a sodding protective cover! Taking it out of the cover, suddenly it's not so much a problem and a far better experience... Sigh, if there was ever a time for a face-palm then it's definitely now! So really, the problem appears to be a matter of thumb reach to the buttons. Improvements I can think of: placing the left and right buttons closer to the left-hand side of the screen, and instead of the three buttons' current "corner" arrangement, I shall try them in a vertical arrangement along the right-hand side of the screen... Once again, thank you all for the feedback and your time. I feel I am much closer to a far better control system for my game. Cheers! Steve.
25. ## Starting a Career in the Gaming Industry - Help Required

Hi, unfortunately I don't really have any programming experience, which I feel is something that could potentially hold me back. My current background is electrical engineering, and ideally I'd love to work my way up to game development one day and have control over making and producing games. Thanks for the reply.
26. ## Starting a Career in the Gaming Industry - Help Required

Do you have any programming experience? Even though I think it's not mandatory, I've heard that QA positions in general (and I guess in the gaming industry as well) have started to consider at least basic scripting a valuable skill. Python is probably a good technology to learn. Is your current job related to tech at all? What is your target position? I mean, what job would you like to get to after X years in the industry? Regards!

Alright https://discord.gg/9HTVyWg
28. ## Just Smash It! - New hardcore physics arcade

Hi guys, let me introduce my new project: Just Smash It! It's all about destruction! Break your way through, smashing objects with aimed shots! * Realistic destruction physics * Smooth game flow * Pleasant graphics and sound design * Infinite mode after passing the basic set of levels * Small size, great time-killer! Google Play: https://play.google.com/store/apps/details?id=com.blackspoongames.smashworld Feedback is welcome!
29. ## Water Realism

Just a little preview of the water in our game at night. It's a work in progress. Since down the road I wish to do simulations, the water material takes light and calculates the color. The opacity is based on muck and various particles in the water, which changes. The ripples will generally travel more in the direction of the wind. The wind is determined by barometric pressure in the varying layers of the atmosphere, and temperature is determined by the angle of a geographic location and its distance from the sun, sunlight blocked by clouds, surface temps, and time of day. Hope you enjoyed this little preview. Getting water to look like this was no simple task!
30. ## C++ How to make Windows executable from VS 2017 maximally toolset non-specific

I have made a simple 2D game, intended to run on Windows only, using SFML. The executable is built using Visual Studio 2017. I would like to make sure that it can run on as many Windows machines as possible (even on Windows 7/8 if possible). What steps can I take to ensure that? I have done the following: built for the Win32 (x86) platform, not x64; and in Project Properties -> C/C++ -> Code Generation, set Runtime Library to Multi-threaded (so the runtime library is statically linked into the exe), not Multi-threaded DLL. Would the following help with compatibility (and not cause problems with forward compatibility)? Using Windows SDK version 8.1? Using an older platform toolset (not Visual Studio 2017 v141, but perhaps VS 2015 v140 or even VS 2015 - Windows XP)? What else can I do, and what should I be aware of? Thanks.
31. ## Starting a Career in the Gaming Industry - Help Required

Hi all, I'm looking for a career change, as the job I currently do is neither a passion nor something that I really want to be doing for the rest of my life. I would ideally like to begin a career in the gaming industry as, like most others, I have a strong passion for gaming and all things related. I have been looking into a junior test analyst QA job and was wondering if this is the correct place to start. I'm a dedicated worker, so I don't mind working my way up, and I love being hands-on with things. I was wondering if anyone had any advice regarding this, or how I can go about gaining experience in this field to give myself the best chance. I'm more than willing to do either weekend work or free work to get my foot in the door, so if there is any advice or help anyone could give me, that would be great. Thanks for reading, Dan
32. ## Renderdoc can see my model but...

For me, the next step is usually to rule out depth/stencil/backface issues. I'd go over to the "Texture Viewer" tab, select the color/depth output textures, and check these "Overlay" visualizations: Highlight drawcall. It should show something since your mesh is visible in the VS Output window. Depth test + Stencil Test (if you're using stencil). It should be green. If it's red, then you have something wrong with your depth buffer or depth test settings. Backface Cull. It should be green. Red means it's being culled.

34. ## FBX SDK skinned animation

Vector is a 3D vector, and the std::fill initializes them to zero so I can correctly accumulate. I need to search through my repository to find the FBX cluster and mesh extraction code. Then I will post it here, but this might take until tomorrow. Sorry for the delay; I switched to exporting directly from Maya since I added ragdolls and cloth and wrote my own plug-ins, so it didn't make sense to use FBX for me anymore. I need to dig it up if I find time later today...
35. ## Beginning developing

To further continue the question of what happens when your games fail (or you generally fail miserably on the business side), I will post a real-world example: Telltale Games. https://twitter.com/telltalegames/status/1043252010999410689 The studio recently terminated the majority of its positions and seems to be closing up.
36. ## Programming and Higher Mathematics

Everyone, thank you for responding. I was wondering, can I see a code example of how much easier it is to do something with knowledge of Calculus/Trigonometry? Preferably in Java or some other C-Like language? Currently, I am learning about logarithms and graphing in my college Precalculus class.

39. ## One week to Summer Baseball Challenge

The idea isn't necessarily the problem, but it looks like we need to improve the way it's implemented. Alternatively, and as a temporary fix, follow the Challenges blog and the forum.
40. ## Move view(camera) matrix

I was seeing it as much more difficult than it is...
41. ## Move view(camera) matrix

You have to translate by the vector between pos and m_pos, so use m_pos - pos instead of just m_pos. (But in this case I'd simply call D3DXMatrixLookAtRH again with m_pos instead of pos.)
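Since the thread is using D3DX, the "just call LookAt again" option might look like this (a sketch built from the question's own variables):

```cpp
#include <d3dx9math.h>

// Simplest fix: rebuild the view matrix from the new eye position.
// Keeping lookAt fixed turns the camera as it moves; add the same
// delta to lookAt as well if you want a pure translation instead.
D3DXMATRIX viewFromPosition(const D3DXVECTOR3& m_pos,
                            const D3DXVECTOR3& lookAt,
                            const D3DXVECTOR3& up) {
    D3DXMATRIX view;
    D3DXMatrixLookAtRH(&view, &m_pos, &lookAt, &up);
    return view;
}
```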
42. ## DirectX interfaces

The idea of information hiding via interfaces is a very general concept that exists far beyond the scope of OOP. It's a common recurring concept in almost every different software paradigm. It's a key technique to manage complexity of software, by reducing the number of moving parts to be considered at any one time... Within OOP languages, there tends to be specific keywords that make creation of "interfaces" (the general idea) easy -- e.g. in C# the "interface" keyword, or in C++ the "class" and "virtual" keywords. Within OOD, there's a lot of thought given to how interfaces (the general idea, not the language-specific features) should be used in your software architecture. e.g. "smaller interfaces are better than big ones", or "Decouple modules by creating interfaces that sit in between them", or "If using polymorphic interfaces, implementations must always behave according to the base interface", or "Try to make interfaces that aren't going to need to be changed by future programmers, and then put the bits that will change over time inside the implementations of those interfaces"... You could use these bits of advice in any language -- even ones that don't have a "class" or an "interface" keyword. e.g. in C libraries, it's common to write something that embodies the idea of an interface by using function pointers. They use it for polymorphism too -- e.g. ID3D11Resource is a polymorphic interface (ID3D11Buffer is a ID3D11Resource). When you call CreateTexture2D/etc, internally, it might have some internal class that's called something like CTex2D, which implements ID3D11Texture2D, for all we know... but that's hidden from us. All we know is that it creates some kind of object that implements the ID3D11Texture2D interface, and we can use this interface to talk to the object. Yep. For an example of the first concept -- non polymorphic interfaces, look at PIMPL (aka Opaque pointers).
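As a concrete instance of the C-style pattern mentioned above, here is a tiny "interface" built from function pointers plus an opaque state pointer; all of the names are hypothetical, invented for the sketch:

```cpp
#include <cassert>

// A C-style "interface": a table of function pointers plus an opaque
// state pointer. Callers only see this struct, never the implementation.
struct Logger {
    void (*write)(void* state, int value);
    void* state;
};

// One hidden "implementation": records how many values it was given.
struct CountingState { int count = 0; int last = 0; };

static void countingWrite(void* state, int value) {
    auto* s = static_cast<CountingState*>(state);
    s->count++;
    s->last = value;
}

// Factory: the only place that knows the concrete type, analogous to
// CreateTexture2D handing back an ID3D11Texture2D.
Logger makeCountingLogger(CountingState* s) {
    return Logger{ &countingWrite, s };
}
```

Swapping in a different `write` function (and state) changes the behaviour without touching any calling code, which is the same decoupling the D3D COM interfaces give you.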
43. ## DX11 Move view(camera) matrix

D3DXVECTOR3 m_pos; D3DXVECTOR3 pos; D3DXVECTOR3 lookAt; D3DXVECTOR3 up; D3DXMATRIX m_cameraMatrix; D3DXMATRIX translate, result; D3DXMatrixLookAtRH(&m_cameraMatrix, &pos, &lookAt, &up); D3DXMatrixIdentity(&translate); D3DXMatrixTranslation(&translate, m_pos.x, m_pos.y, m_pos.z); result = m_cameraMatrix * translate; return result; I want to move the camera to the m_pos position, as I do with a world matrix, but it doesn't seem to work. Any ideas?

45. ## Shooting towards the player after a few seconds

For this case you don't need the distance and angle, only the direction. As @Lactose said, you can calculate it as direction = enemy - player (the direction toward the enemy) and normalize it. Since the enemy moves at some speed, by doing this calculation you will probably miss. If you want to hit every time, you will have to shoot in front of the enemy, which complicates the calculation a bit.
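A minimal sketch of that direction calculation, with plain structs and no engine types assumed:

```cpp
#include <cassert>
#include <cmath>

struct V2 { float x, y; };

// Direction from shooter toward target: subtract positions, normalize.
// Leading a moving target would replace `target` with a predicted
// future position, but the shape of the calculation stays the same.
V2 aimDirection(V2 shooter, V2 target) {
    V2 d{ target.x - shooter.x, target.y - shooter.y };
    float len = std::sqrt(d.x * d.x + d.y * d.y);
    if (len > 0.f) { d.x /= len; d.y /= len; }
    return d;
}
```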
46. ## Horizontal lines of pixels "missing" in windowed mode...

Migrating all my code to another engine would be close to impossible for this project, unfortunately... What pixel sizes should I make the window to avoid the problem? I'm unsure how to progress. Just setting it much larger doesn't help (which would seem to mean there are enough pixels to avoid squeezing the screen together and skipping lines, but it doesn't work either).
47. ## DirectX interfaces

Could you please explain this a bit further, as I think I haven't fully grasped it yet? So you are saying we would use an interface in two possible scenarios: in one case to hide the implementation and provide only the "public" interface, and in the other for polymorphism. I always kind of conflated the two concepts. As for the DirectX part, I guess they use interfaces only for the first scenario, if I understood you correctly? Whereas in the second scenario we would create a class interface to be able to use a base pointer for derived classes, i.e. polymorphism.
48. ## Mobile touchscreen performance expectation.

On my phone I noticed that holding one point, then touching another while still holding, makes it forget the first, though it might depend on the app. Maybe you are experiencing something similar and could work around it?
49. ## Silly Input Layout Problem

You don't get access to them via a float4x4 in your shader; you retrieve your instance matrix as four "float4 INSTANCEx" attributes and then assemble those into a matrix again. Something like this: struct VS_INPUT { ..... float4 World0 : MWORLD0; float4 World1 : MWORLD1; float4 World2 : MWORLD2; float4 World3 : MWORLD3; }; cbuffer Buffer1 : register(b0) { float4x4 MatrixWorld; ..... }; VS_OUTPUT main(VS_INPUT Input) { VS_OUTPUT Output; float4x4 mworld = float4x4(Input.World0, Input.World1, Input.World2, Input.World3); mworld = mul(MatrixWorld, mworld); .....
