
??????? ?????

Members
  • Content count

    46
  • Joined

  • Last visited

Community Reputation

143 Neutral

About ??????? ?????

  • Rank
    Member
  1. [b]Long post ahead[/b] (you should probably ignore the last 2 code snippets). I searched through the forums and noticed some old threads about people having trouble ditching the D3DXVECTOR# values and working with the XMFLOAT# ones, and there I saw a bunch of links with explanations from MSDN and the like, so I got the basics of how to work with them (using XMVECTOR for the intrinsic XM functions and XMFLOAT# for storage; same with XMMATRIX and XMFLOAT4X4). The problem is that whenever I do anything with them, I hit a breakpoint somewhere in the xnamath header that doesn't really tell me anything. For instance, when I just use a test variable like this:

[CODE]
XMFLOAT4X4 test = XMFLOAT4X4(1.0f, 0.0f, 0.0f, 0.0f,
                             0.0f, 2.0f, 0.0f, 0.0f,
                             0.0f, 0.0f, 4.0f, 0.0f,
                             1.0f, 2.0f, 3.0f, 1.0f);
XMMATRIX I = XMMatrixIdentity();
XMStoreFloat4x4(test, I);
[/CODE]

it triggers a breakpoint. The top three entries of the call stack are the call to XMStoreFloat4x4(test, I);, then the "XMASSERT(pDestination);" line in xnamathconvert.inl:

[CODE]
XMFINLINE VOID XMStoreFloat4x4
(
    XMFLOAT4X4* pDestination,
    CXMMATRIX M
)
{
#if defined(_XM_NO_INTRINSICS_) || defined(XM_NO_MISALIGNED_VECTOR_ACCESS)
    XMStoreFloat4x4NC(pDestination, M);
#elif defined(_XM_SSE_INTRINSICS_)
    XMASSERT(pDestination); // <<<<<<< THIS ONE
    _mm_storeu_ps( &pDestination->_11, M.r[0] );
    _mm_storeu_ps( &pDestination->_21, M.r[1] );
    _mm_storeu_ps( &pDestination->_31, M.r[2] );
    _mm_storeu_ps( &pDestination->_41, M.r[3] );
#else // _XM_VMX128_INTRINSICS_
#endif // _XM_VMX128_INTRINSICS_
}
[/CODE]

and the third, topmost call is at "__debugbreak();" in xnamathmisc.inl:

[CODE]
XMINLINE VOID XMAssert
(
    CONST CHAR* pExpression,
    CONST CHAR* pFileName,
    UINT LineNumber
)
{
    CHAR aLineString[XMASSERT_LINE_STRING_SIZE];
    CHAR* pLineString;
    UINT Line;

    aLineString[XMASSERT_LINE_STRING_SIZE - 2] = '0';
    aLineString[XMASSERT_LINE_STRING_SIZE - 1] = '\0';
    for (Line = LineNumber, pLineString = aLineString + XMASSERT_LINE_STRING_SIZE - 2;
         Line != 0 && pLineString >= aLineString;
         Line /= 10, pLineString--)
    {
        *pLineString = (CHAR)('0' + (Line % 10));
    }

#ifndef NO_OUTPUT_DEBUG_STRING
    OutputDebugStringA("Assertion failed: ");
    OutputDebugStringA(pExpression);
    OutputDebugStringA(", file ");
    OutputDebugStringA(pFileName);
    OutputDebugStringA(", line ");
    OutputDebugStringA(pLineString + 1);
    OutputDebugStringA("\r\n");
#else
    DbgPrint("Assertion failed: %s, file %s, line %d\r\n", pExpression, pFileName, LineNumber);
#endif

    [b]__debugbreak(); // <<<<<< THIS ONE[/b]
}
[/CODE]

(The same thing happens with vectors and XMFLOATs.) So XMStoreFloat4x4 calls XMASSERT, which fails, so it calls __debugbreak(); to trigger a breakpoint? (Excuse my ignorance here.) But that doesn't really tell me why xnamath isn't working in my code. My CPU has XNA Math support for sure, since I can compile and run Frank Luna's XMMATRIX code without a problem, and yet the same things he does crash when I try them in my code. d3dx10math and the rest of the framework worked perfectly before I tried using xnamath. Here is Frank Luna's tutorial code:

XMMATRIX:

[CODE]
#include <windows.h> // for FLOAT definition
#include <xnamath.h>
#include <iostream>
using namespace std;

// Overload the "<<" operators so that we can use cout to
// output XMVECTOR and XMMATRIX objects.
ostream& operator<<(ostream& os, FXMVECTOR v)
{
    XMFLOAT4 dest;
    XMStoreFloat4(&dest, v);
    os << "(" << dest.x << ", " << dest.y << ", " << dest.z << ", " << dest.w << ")";
    return os;
}

ostream& operator<<(ostream& os, CXMMATRIX m)
{
    for(int i = 0; i < 4; ++i)
    {
        for(int j = 0; j < 4; ++j)
            os << m(i, j) << "\t";
        os << endl;
    }
    return os;
}

int main()
{
    // Check support for SSE2 (Pentium4, AMD K8, and above).
    if( !XMVerifyCPUSupport() )
    {
        cout << "xna math not supported" << endl;
        return 0;
    }

    XMMATRIX A(1.0f, 0.0f, 0.0f, 0.0f,
               0.0f, 2.0f, 0.0f, 0.0f,
               0.0f, 0.0f, 4.0f, 0.0f,
               1.0f, 2.0f, 3.0f, 1.0f);

    XMMATRIX B = XMMatrixIdentity();
    XMMATRIX C = A * B;
    XMMATRIX D = XMMatrixTranspose(A);
    XMVECTOR det = XMMatrixDeterminant(A);
    XMMATRIX E = XMMatrixInverse(&det, A);
    XMMATRIX F = A * E;

    cout << "A = " << endl << A << endl;
    cout << "B = " << endl << B << endl;
    cout << "C = A*B = " << endl << C << endl;
    cout << "D = transpose(A) = " << endl << D << endl;
    cout << "det = determinant(A) = " << det << endl << endl;
    cout << "E = inverse(A) = " << endl << E << endl;
    cout << "F = A*E = " << endl << F << endl;

    system("pause");
    return 0;
}
[/CODE]

XMVECTOR:

[CODE]
#include <windows.h> // for FLOAT definition
#include <xnamath.h>
#include <iostream>
using namespace std;

// Overload the "<<" operators so that we can use cout to
// output XMVECTOR objects.
ostream& operator<<(ostream& os, FXMVECTOR v)
{
    XMFLOAT4 dest;
    XMStoreFloat4(&dest, v);
    os << "(" << dest.x << ", " << dest.y << ", " << dest.z << ", " << dest.w << ")";
    return os;
}

int main()
{
    cout.setf(ios_base::boolalpha);

    // Check support for SSE2 (Pentium4, AMD K8, and above).
    if( !XMVerifyCPUSupport() )
    {
        cout << "xna math not supported" << endl;
        return 0;
    }

    XMVECTOR p = XMVectorSet(2.0f, 2.0f, 1.0f, 0.0f);
    XMVECTOR q = XMVectorSet(2.0f, -0.5f, 0.5f, 0.1f);
    XMVECTOR u = XMVectorSet(1.0f, 2.0f, 4.0f, 8.0f);
    XMVECTOR v = XMVectorSet(-2.0f, 1.0f, -3.0f, 2.5f);
    XMVECTOR w = XMVectorSet(0.0f, XM_PIDIV4, XM_PIDIV2, XM_PI);

    cout << "XMVectorAbs(v) = " << XMVectorAbs(v) << endl;
    cout << "XMVectorCos(w) = " << XMVectorCos(w) << endl;
    cout << "XMVectorLog(u) = " << XMVectorLog(u) << endl;
    cout << "XMVectorExp(p) = " << XMVectorExp(p) << endl;
    cout << "XMVectorPow(u, p) = " << XMVectorPow(u, p) << endl;
    cout << "XMVectorSqrt(u) = " << XMVectorSqrt(u) << endl;
    cout << "XMVectorSwizzle(u, 2, 2, 1, 3) = " << XMVectorSwizzle(u, 2, 2, 1, 3) << endl;
    cout << "XMVectorSwizzle(u, 2, 1, 0, 3) = " << XMVectorSwizzle(u, 2, 1, 0, 3) << endl;
    cout << "XMVectorMultiply(u, v) = " << XMVectorMultiply(u, v) << endl;
    cout << "XMVectorSaturate(q) = " << XMVectorSaturate(q) << endl;
    cout << "XMVectorMin(p, v) = " << XMVectorMin(p, v) << endl;
    cout << "XMVectorMax(p, v) = " << XMVectorMax(p, v) << endl;

    system("pause");
    return 0;
}
[/CODE]

And one last thing, just to make sure I've got this straight: for shaders, when you want to pass, say, a constant buffer with a matrix in it, you use XMFLOAT4X4 and not XMMATRIX, right?
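For what it's worth, the assert that fires here, XMASSERT(pDestination), is a pointer-validity check: XMStoreFloat4x4 takes the *address* of the destination, so the call should read XMStoreFloat4x4(&test, I). Below is a minimal stand-in in plain C++ (hypothetical types, not the real xnamath header) that mimics the shape of the call and the check:

```cpp
#include <cassert>

// Hypothetical stand-ins for the XNA Math types, for illustration only.
struct Float4x4 { float m[4][4]; };
struct Matrix   { float r[4][4]; };

// Like the real XMStoreFloat4x4, the destination is passed by POINTER;
// the XMASSERT line in xnamathconvert.inl is exactly this kind of check.
void StoreFloat4x4(Float4x4* pDestination, const Matrix& M)
{
    assert(pDestination); // fires (and __debugbreak()s) when the pointer is bad
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            pDestination->m[i][j] = M.r[i][j];
}
```

With the real library the fix has the same shape: `XMStoreFloat4x4(&test, I);` — pass the address of the XMFLOAT4X4, never the object itself.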
  2. I'll be working on a game this summer, and I need to be able to fly a spaceship down to a huge planet and land on the surface. I've got the "fly spaceship" part, but I have no idea how to generate a planet. As far as I know, procedural planets are way too heavy to be used in a game. For instance, I get stable frame rates in the new Alien vs Predator game on the highest settings, but every "procedural planet engine" I've tried has FPS drops, and the terrain still doesn't look all that good. Are there other good methods for simulating a planet? Like maybe having different zones/levels for each altitude from the atmosphere down to the surface, with the planet being a premade 3D model while you're in space? Or maybe I just haven't stumbled on a good procedural planet example. I'm not really sure how to write one though, so I can't judge.
  3. [quote name='mhagain' timestamp='1339060195' post='4946986'] Don't assume that VS is broken and needs patching - it's most likely your code and/or project settings. Why are you running it under a command shell? You'll need to post some code so that people can see what you're doing and comment further. [/quote] I'm using Frank Luna's DirectX 11 samples: [url="http://www.d3dcoder.net/d3d11.htm"]http://www.d3dcoder.net/d3d11.html[/url] Pretty much all of them give me this error; right now I'm trying to compile the GeometryShader example from Part 1.
  4. [color=#000000][font=arial, helvetica, sans-serif][size=3]When I played around with sample DirectX 9 projects from tutorial sites, they ran with no problem, but whenever I try to compile a DirectX 11 project, I get[/size][/font][/color] [quote]error MSB6006: "cmd.exe" exited with code 9009[/quote] Is there a way to fix this? Do I need some kind of patch for VS? (I'm using Visual Studio 2010 on Windows 7 64-bit.)
  5. [left][size=4]Well, how close or far the mountains look depends on how you've drawn your heightmap. You shouldn't use pure white as the primary color in the Clouds filter (I assume you made the heightmap in Photoshop), since white means maximum height, which will make the peaks look pointy. Try applying some grey here and there with a soft brush. It's also a good idea to split the terrain into chunks and render only the ones that are actually close to the camera; for the farther ones, use a low-poly version plus some fog/blur to make the transition smoother, otherwise you're rendering millions of vertices just for the terrain. For the texture to look realistic in 3ds Max, click the model, open the modifiers, select UVW Map, make it Planar, and give it a scale 5-6 times smaller than the actual size of the model, so it tiles 5-6 times along the surface. You'd need a seamless texture for that, though.

Edit: I've attached a heightmap that you might want to try:[/size] [attachment=9201:heightmap.bmp][/left]
  6. [quote name='Bacterius' timestamp='1338521955' post='4945190'] The client really should be the one generating this stuff, sending actual assets over the network isn't the best use of bandwidth (of course, maybe you have something different in mind, but from what you said it sounds like an overly complicated method to do things) [/quote] [i]The thing is, the terrain must be persistent and stay on the server, and during the game players can affect and change it from time to time. It's not really a problem, though: each zone has a tileset, and each tileset has 8 different terrain types, so that's values 0-7 (3 bits for the tile type), and the same for the tileset - 8 tileset types.[/i] The point is that the client has to turn this received info into a blendmap, pixel by pixel, when it receives the terrain chunk data.
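As a sketch of that encoding (the names here are mine, not from any actual codebase): 3 bits for the tile type plus 3 bits for the tileset fit in a single byte per tile, which keeps the per-chunk packet small:

```cpp
#include <cassert>
#include <cstdint>

// Pack a 0-7 tile type and a 0-7 tileset type into one byte (2 bits spare).
uint8_t PackTile(uint8_t tileType, uint8_t tilesetType)
{
    return static_cast<uint8_t>((tileType & 0x7) | ((tilesetType & 0x7) << 3));
}

// Recover both fields on the client before painting the blendmap.
void UnpackTile(uint8_t packed, uint8_t& tileType, uint8_t& tilesetType)
{
    tileType    = packed & 0x7;
    tilesetType = (packed >> 3) & 0x7;
}
```

The server would send an array of these packed bytes per chunk; the client unpacks each one and writes the corresponding blendmap pixel.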
  7. [quote name='InvalidPointer' timestamp='1338516602' post='4945166'] Since this hasn't been asked yet, I'll go ahead and bite the bullet: What Are You Doing, Really? Any time the subject of CPU readbacks come up, there's like a 90% chance a better way to do what you're trying to accomplish exists. [/quote] Well, I have a server program that sends the client a data struct with info for the current terrain piece. The info is generated randomly by the server, so I want the client to draw a terrain blendmap from the received struct and then use that blendmap in the shader. This only happens when the client receives a new terrain piece (the player walks into a new zone) or when he burns the ground or something. Either way it's not every frame, so I suppose it shouldn't cause a noticeable slowdown?
  8. [quote name='jamby' timestamp='1338500528' post='4945100'] You need to call Map() on the texture to get access to the texture data. Don't forget to call Unmap() afterwards. You will have had to create the texture with the proper flags to be able to write to the texture. [url="http://msdn.microsoft.com/en-us/library/windows/desktop/bb173869%28v=vs.85%29.aspx"]ID3D10Texture2D[/url] [/quote] Wait, will this work on DirectX [b]9[/b]? Edit: [b]IDirect3DTexture9 has a LockRect() method, but I'm not sure exactly how to use it. Could someone explain its arguments?[/b] [b][CODE]
[in]  UINT           Level,
[out] D3DLOCKED_RECT *pLockedRect,
[in]  const RECT     *pRect,
[in]  DWORD          Flags
[/CODE][/b]
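In case it helps a future reader: Level is the mip level to lock (0 for the top), pLockedRect is an output D3DLOCKED_RECT whose pBits points at the pixel data and whose Pitch is the byte width of one row, pRect is the sub-rectangle to lock (or NULL for the whole level), and Flags takes lock flags such as D3DLOCK_DISCARD or D3DLOCK_READONLY (0 for a plain lock). A hedged sketch of typical usage, assuming pTexture is a lockable 32-bit ARGB texture (it must have been created with D3DUSAGE_DYNAMIC or in a CPU-accessible pool):

```cpp
// Sketch only; this fragment needs d3d9.h and an initialized device to run.
D3DLOCKED_RECT lr;
if (SUCCEEDED(pTexture->LockRect(0, &lr, NULL, 0)))
{
    // Pitch is in BYTES and may be wider than width * 4, so index row by row.
    for (UINT y = 0; y < height; ++y)
    {
        DWORD* row = (DWORD*)((BYTE*)lr.pBits + y * lr.Pitch);
        for (UINT x = 0; x < width; ++x)
            row[x] = D3DCOLOR_ARGB(255, 255, 0, 0); // write one pixel
    }
    pTexture->UnlockRect(0); // always pair LockRect with UnlockRect
}
```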
  9. I know that in Allegro it's [CODE]
BITMAP* tex[10];

int& pixel(BITMAP* bmp, int x, int y)
{
    return ((int**)(bmp->line))[y][x];
}
[/CODE] but what's the syntax for this in DirectX?
  10. [quote name='taby' timestamp='1338390050' post='4944690'] Are you familiar with convolution? Perlin noise? If you'd like an effect kinda (sorta, maybe) like what you've got going in that sample image, I guess you could make a separate, temporary grayscale channel out of Perlin noise and then use the grayscale value at each pixel in that channel as the number of times you'd run a blur filter at that same pixel in the main channels (the RGB channels with the boxes). Just an idea. Oh, this looks like a good reference: [url="http://www.jhlabs.com/ip/blurring.html"]http://www.jhlabs.com/ip/blurring.html[/url] Wow: [url="http://www.jhlabs.com/ip/filters/index.html"]http://www.jhlabs.co...ters/index.html[/url] [/quote] Ok, I got it: the trigonometric functions I tile the squares with + Smear + Blur.
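For anyone landing here later, the blur half of that can be as simple as a box filter run over each channel of the blendmap. A minimal single-channel version (plain C++, row-major layout assumed; a real blendmap would run this once per RGB channel, possibly several passes):

```cpp
#include <cassert>
#include <vector>

// 3x3 box blur over a single w x h channel; edges average fewer neighbors.
std::vector<float> BoxBlur(const std::vector<float>& src, int w, int h)
{
    std::vector<float> dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h)
                    {
                        sum += src[ny * w + nx];
                        ++count;
                    }
                }
            dst[y * w + x] = sum / count;
        }
    return dst;
}
```

Running it two or three times softens the hard square edges progressively, which is usually enough before uploading the result as a blendmap texture.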
  11. I'm trying to create a blendmap on the CPU once at program start and then use it in the shader. So far I've produced the first (left) picture; however, I need to smear and blur it like the second (right) picture. The program basically makes a 2D array of random square types (red, green, blue) and paints the first picture. Now I have to smear and blur it so that, when used as a blendmap in the game, it looks smoother and more realistic, since plain squares look ugly. [attachment=9146:example.jpg]
  12. I can give you a sample where the camera is pretty much FPS-style (controlled by the mouse, locked to the terrain height). If you're using Visual Studio 2010, when you convert it, find any reference to dxerr9 and change it to just dxerr, and likewise in Linker->Input change dxerr9.lib to dxerr.lib. The camera class in that sample is pretty clean and simple - http://www.2shared.com/file/lW3WhoUH/Walk_Terrain_Demo.html
  13. [quote name='MJP' timestamp='1338335610' post='4944489'] Framerate is not linear, so you shouldn't use it to determine performance impact. Going from 1000 to 700 fps is equivalent to adding ~0.4ms of frame time, while going from 700 to 500fps is equivalent to adding ~0.6ms of frame time. AMD has [url="http://developer.amd.com/tools/shader/Pages/default.aspx"]GPU ShaderAnalyzer[/url] which can convert your HLSL to actual hardware-specific microcode (not D3D shader assembly, which is just an intermediate format) and can give you cycle counts for the shader. But overall performance at runtime is still going to vary based on a lot of things, such as how many pixels you shade or what else is currently executing on the GPU. [/quote] Exactly what I was looking for, thanks. I was planning to use parallax normal mapping on every object in my game, so I was curious how heavy the technique actually is.
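The arithmetic in that quote is worth spelling out: frame *time*, not frame rate, is what adds up linearly, so the same "300 fps drop" can represent very different real costs:

```cpp
#include <cassert>
#include <cmath>

// Convert frames-per-second to milliseconds-per-frame.
double FrameTimeMs(double fps)
{
    return 1000.0 / fps;
}

// 1000 -> 700 fps adds ~0.43 ms per frame; 700 -> 500 fps adds ~0.57 ms.
// Similar-looking fps drops, but the second one costs more actual time.
```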
  14. For instance, without any shaders, I can render models made of thousands of polys at 800 fps, and maybe millions at 200-300 fps, but if I use a simple shader of any kind on a simple mesh, it causes a noticeable FPS drop (from 1000 to 700). Then if I use a more complex shader, like one that calculates multiple lights, the framerate drops to 500. So it's not a linear decrease, but is there a way to compare shader performance exactly without having to compile the whole solution? Like maybe an HLSL -> ASM converter that would let me see the instruction counts?
  15. [quote name='Tom KQT' timestamp='1338190187' post='4943923'] I don't think there is a surface type with 5 channels. But if you want to try this method then I have a question - do you really need the Alpha channel for anything? Terrain usually isn't transparent, so instead of RGBAT where T is terrain type, couldn't you use RGBA where A is terrain type? Anyway, I would suggest something completely different, if I understand your problem correctly. The term for this approach probably is [b]texture splatting[/b] as ryan20fun mentioned. The point is that you have for example three different tiled textures (let's say grass, dirt, sand). Then you have one special texture covering the whole terrain without tiling - this texture defines in its color channels how the tiled textures should be combined. If the color is (1, 0, 0) then only grass will be visible etc. And the huge bonus is that you can blend the textures because (0.7, 0.3, 0) will give you 70 % of grass blended with 30 % of dirt. This will be much easier to work with and much more flexible, because in your suggested idea you have only one channel defining the terrain type, while here you have a full texture (4 channels). And also - this texture can have different texture coordinates (different tiling) than the subtextures. [/quote] Yeah, I figured it out - I'll generate 2 blendmaps (2xRGBA, for 8 terrain textures total) on the CPU, and each time a piece of the terrain changes, I'll lock the blendmaps, modify them on the CPU, then unlock them again. It shouldn't cause a slowdown, since the terrain won't change more often than once every 5-10 seconds.
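Per texel, the splatting blend described in that quote is just a weighted sum of the tiled textures' samples. A CPU-side sketch (the grass/dirt/sand names are the quote's example, not anything from a real codebase; a shader would do the same math per pixel):

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// blend.r/g/b are the blendmap weights for the three tiled textures;
// (0.7, 0.3, 0) gives 70% grass mixed with 30% dirt, as in the quote.
Color Splat(const Color& grass, const Color& dirt, const Color& sand,
            const Color& blend)
{
    Color out;
    out.r = blend.r * grass.r + blend.g * dirt.r + blend.b * sand.r;
    out.g = blend.r * grass.g + blend.g * dirt.g + blend.b * sand.g;
    out.b = blend.r * grass.b + blend.g * dirt.b + blend.b * sand.b;
    return out;
}
```

For weights to stay meaningful, the blendmap channels should sum to 1 at each texel; the 2xRGBA scheme above extends the same sum to 8 textures.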