Martin Perry

Members
  • Content count
    113

Community Reputation

1553 Excellent

About Martin Perry

  • Rank
    Member

Personal Information

  • Location
    Czech Republic
  1. I am creating a multiplatform game that will run on several devices. My font rendering engine uses FreeType. I have set the font size to 12 pt, and the "pixel" sizes are calculated based on DPI. However, this doesn't look nice. The fonts have the same physical size across all devices, but on mobile phones with high DPI they are too big relative to the overall screen size, while on a classic 24" monitor the heights are quite OK. How should I handle font sizes? Setting the size in pixels directly doesn't seem correct either, since on a high-DPI screen it could end up unreadable.
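     A common alternative (a minimal sketch, not from the original post) is to derive the pixel size from a fraction of the screen height rather than from physical DPI, clamped to a readable minimum; screenHeightPx, kFontFraction, and kMinReadablePx are assumed names:

        // Hedged sketch: size text relative to screen height instead of physical DPI.
        // FT_Set_Pixel_Sizes is the standard FreeType call; everything else is assumed.
        #include <algorithm>
        #include <ft2build.h>
        #include FT_FREETYPE_H

        void SetFontSizeForScreen(FT_Face face, int screenHeightPx)
        {
            const float kFontFraction  = 0.025f; // ~2.5 % of screen height (tunable)
            const int   kMinReadablePx = 9;      // never go below a readable size
            int px = std::max(kMinReadablePx, int(screenHeightPx * kFontFraction));
            FT_Set_Pixel_Sizes(face, 0, px);     // width 0 = derive from height
        }

     This keeps text proportional to the screen regardless of DPI, at the cost of no longer matching a physical point size.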
  2. I have used transform feedback for a particle system update in my Win32 app, where everything works correctly. I have ported the app to OpenGL ES 3.0 on iOS. However, when I run it, no data are shown. If I look at the content of the vertex buffer, I can see positions (OUTPUT.xy) with values like 1e+38 etc. The rand function returns numbers in [0, 1], so there is no reason why x and y should end up at such high values (the app runs for approx. 5 s before I pause it and examine the buffers). I also don't see why the shader condition for RebornParticle should ever be met, since the initialized z is always 0 and w is always > 0. It seems to me that the input values to transform feedback are not initialized, but on the other hand, on Win32 everything is OK, and rendering the buffers without transform feedback gives the correct initial result. If I don't run transform feedback and just render the initialized buffers, I can see the correct random points on screen at their initial positions.

     Here is my code (only the relevant parts):

        typedef struct WindParticle
        {
            float x;
            float y;
            float t;     // elapsed lifetime
            float maxT;  // maximum lifetime
        } WindParticle;

     Initialization:

        auto mt = std::mt19937(rd());
        auto dist01 = std::uniform_real_distribution<float>(0.0f, 1.0f);

        std::vector<WindParticle> particles;
        for (int i = 0; i < particlesCount; i++)
        {
            WindParticle wp;
            wp.x = dist01(mt);
            wp.y = dist01(mt);
            wp.t = 0;
            wp.maxT = 5 * dist01(mt);

            particles.push_back(wp);
        }

        GL_CHECK(glGenTransformFeedbacks(2, transformFeedback));
        GL_CHECK(glGenBuffers(2, vertexBuffer));
        for (int i = 0; i < 2; i++)
        {
            GL_CHECK(glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, transformFeedback[i]));
            GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[i]));
            GL_CHECK(glBufferData(GL_ARRAY_BUFFER, sizeof(WindParticle) * particles.size(), particles.data(), GL_DYNAMIC_DRAW));
            GL_CHECK(glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vertexBuffer[i]));
        }

        // init transform feedback VAOs
        G_Effect * ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_update");
        int posId = ef->attributes["PARTICLE_DATA"][0]->location;

        GL_CHECK(glGenVertexArrays(2, this->transformFeedbackVao));
        for (int i = 0; i < 2; i++)
        {
            GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[i]));
            GL_CHECK(glBindVertexArray(this->transformFeedbackVao[i]));
            GL_CHECK(glEnableVertexAttribArray(posId));
            GL_CHECK(glVertexAttribPointer(posId, 4, GL_FLOAT, GL_FALSE, sizeof(WindParticle), 0)); // x, y, time, maxTime
        }

        // init render VAOs
        ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_render");
        posId = ef->attributes["PARTICLE_DATA"][0]->location;

        GL_CHECK(glGenVertexArrays(2, this->renderVao));
        for (int i = 0; i < 2; i++)
        {
            GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[i]));
            GL_CHECK(glBindVertexArray(this->renderVao[i]));
            GL_CHECK(glEnableVertexAttribArray(posId));
            GL_CHECK(glVertexAttribPointer(posId, 2, GL_FLOAT, GL_FALSE, sizeof(WindParticle), (const GLvoid*)0)); // x, y
        }

        GL_CHECK(glBindVertexArray(0));
        GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, 0));

        // init default indices for ping-ponging the two buffers
        currVB = 0;
        currTFB = 1;

     Update:

        G_Effect * ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_update");
        ef->SetFloat("rndSeed", dist01(mt));
        ef->Start();

        GL_CHECK(glEnable(GL_RASTERIZER_DISCARD));
        GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[currVB]));
        GL_CHECK(glBindVertexArray(this->transformFeedbackVao[currVB]));
        GL_CHECK(glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, transformFeedback[currTFB]));

        GL_CHECK(glBeginTransformFeedback(GL_POINTS));
        GL_CHECK(glDrawArrays(GL_POINTS, 0, this->particlesCount));
        GL_CHECK(glEndTransformFeedback());

        GL_CHECK(glBindVertexArray(0));
        GL_CHECK(glDisable(GL_RASTERIZER_DISCARD));
        ef->End();

     Render:

        G_Effect * ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_render");
        ef->Start();
        GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[currTFB]));
        GL_CHECK(glBindVertexArray(this->renderVao[currTFB]));
        GL_CHECK(glDrawArrays(GL_POINTS, 0, this->particlesCount));
        GL_CHECK(glBindVertexArray(0));
        ef->End();

        // swap buffers for the next frame
        currVB = currTFB;
        currTFB = (currTFB + 1) & 0x1;

     And in the shader, I have created just a dummy update:

        #version 300 es  // for iOS; on Win32 this is #version 330

        in vec4 PARTICLE_DATA;
        out vec4 OUTPUT;

        uniform float rndSeed;

        float rand(vec2 co)
        {
            float a = 12.9898;
            float b = 78.233;
            float c = 43758.5453;
            float dt = dot(co.xy, vec2(a, b));
            float sn = mod(dt, 3.14);
            return fract(sin(sn) * c);
        }

        vec4 RebornParticle(vec4 old)
        {
            vec4 p = vec4(0.0);
            p.x = rand(old.xy * rndSeed);
            p.y = rand(old.yx * rndSeed);
            p.z = 0.0;
            p.w = old.w;
            return p;
        }

        void main()
        {
            vec4 p = PARTICLE_DATA;
            p.x += 1.0;
            if (p.z > p.w)
            {
                OUTPUT = RebornParticle(p);
                return;
            }
            OUTPUT = p;
        }
  3. I have changed the quadtree to a fixed grid. I can occasionally use the quadtree's advantage by rendering higher-level nodes, but usually I need to traverse the full tree to the desired depth. I don't want to render a minimap; I want to render a full-screen map, e.g. you pause the game and it shows the world map with missions and other things. The number of render calls may become a problem in the future, but that can be solved within the current logic: I can render one quad and handle the tiling with textures only, using a cubemap (reducing 6 draw calls to 1) or texture arrays (depending on HW support; I am planning a mobile version of this as well). The only thing I will have to do is update texture coordinates and texture IDs (into the cubemap or the array).
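     For the texture-array variant, a minimal sketch (tileW, tileH, layerCount and the pixel upload are assumed names, not from the original post):

        // Hedged sketch: pack tiles into a GL_TEXTURE_2D_ARRAY so one draw call
        // can sample any tile by layer index.
        GLuint tiles;
        glGenTextures(1, &tiles);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tiles);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, tileW, tileH, layerCount);
        // upload one tile into layer i
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, tileW, tileH, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, tilePixels);

     The fragment shader then selects the tile by layer:

        #version 330
        uniform sampler2DArray tiles;
        in vec2 uv;
        flat in float layer;   // tile index passed from the vertex stage
        out vec4 color;
        void main() { color = texture(tiles, vec3(uv, layer)); }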
  4. I am rendering a world map in 2D fullscreen. I would like to know what you think about my design, and whether something should be improved / done differently:

     1) Calculate the current view bounds.
     2) Iterate the quadtree to find tiles within the bounds at the desired zoom.
     3) Rendering:
        3.1) For every visible tile: load the tile texture from file / web.
        3.2) If the texture is loaded, render the tile at the actual depth.
        3.3) For every visible tile whose texture is not loaded: walk up through the tile's parents until an existing texture is found, and render that tile with the depth updated to be behind the tiles already rendered in step 3.2.

     Basically, I render tile after tile, and if there is no texture for a tile, I go up to its parents and render a "lower" resolution one that fills the hole. Is this an appropriate design? The number of draw calls does not seem to be an issue, since I can only see about 10-20 tiles at once, so one render call for each of them is OK.
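     A minimal sketch of the fallback walk described above (Tile, HasTexture, RenderTile and the depth values are assumed names, not from the original post):

        // Hedged sketch: render each visible tile, falling back to the nearest
        // ancestor that already has a texture, pushed behind the sharp tiles.
        #include <vector>

        void RenderVisibleTiles(const std::vector<Tile*>& visible)
        {
            for (Tile* t : visible)
                if (HasTexture(t))
                    RenderTile(t, /*depth*/ 0.0f);       // sharp tile in front

            for (Tile* t : visible)
            {
                if (HasTexture(t)) continue;
                for (Tile* p = t->parent; p != nullptr; p = p->parent)
                {
                    if (HasTexture(p))
                    {
                        RenderTile(p, /*depth*/ 0.5f);   // coarse fallback behind
                        break;
                    }
                }
            }
        }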
  5. I have a 2D planar "model", basically triangulated geometry in a plane. On this, I have a set of vertices selected as control points. I have a second model, this time 3D, with the same number of control points, and I know the mapping (which control point should be mapped to which). What I need to do is map the 2D "model" onto this 3D one. However, the mapping is not a single transformation: the 2D model must stretch / shrink / translate / rotate differently in different parts. Think of it as a base 2D rubber plate being anchored to 3D control points. How can this be solved (if it is even possible)?
  6. I am using a navigation mesh (triangulated) together with Bullet. In the navigation mesh, I find a path using A*. This gives me a set of "points" inside triangles from the navigation mesh. From those points, I create a Bézier, Catmull-Rom, or other interpolant. The problem is how to actually "move" the NPC. My ideas:

     1) I calculate the new position via "dt", and the forward vector (or local coordinate system) via the derivative at the new point. This gives me the correct NPC orientation and rotation, but it kind of "breaks" other logic, because I "teleport" the NPC from position to position (I have a constant update rate, but still... if there is some small / thin Bullet collision body, the NPC may go through it).

     2) Use a similar approach as for the player: methods like "MoveForward", "MoveLeft", "RotateAroundUp" etc. However, in this case I am not really sure how to calculate the order of operations from the interpolant. I have a current and a final position (and orientation), and I need to get a series of "move" and "rotate" operations to reach that final position.

     3) Something else?

     What is the preferred solution? And how would I potentially do 2)? I have been looking around the net and found different ways to do this based on 1) or 2), but none of them was very descriptive.
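     For reference, evaluating idea 1) on a Catmull-Rom segment (a minimal sketch; Vec3 with the usual arithmetic operators is an assumed type):

        // Hedged sketch: position and tangent (forward vector) on a Catmull-Rom
        // segment between p1 and p2, with p0/p3 as the neighbouring control points.
        Vec3 CatmullRom(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, float t)
        {
            float t2 = t * t, t3 = t2 * t;
            return 0.5f * ((2.0f * p1) +
                           (-p0 + p2) * t +
                           (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t2 +
                           (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t3);
        }

        Vec3 CatmullRomTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, float t)
        {
            float t2 = t * t;
            return 0.5f * ((-p0 + p2) +
                           2.0f * (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t +
                           3.0f * (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t2);
        }

     Feeding the difference CatmullRom(t + dt) - CatmullRom(t) to the physics body as a velocity, rather than teleporting the transform, is one common way to avoid tunnelling through thin collision bodies.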
  7. Turning raw input off / on when the window loses focus helped... Thanks for the tip.
  8. In my C++ engine, I am using raw input for keyboard controls. Now I have added an external GUI with text input boxes. However, the keyboard input is "eaten" by raw input and never passed to the GUI, even when it has focus. If I focus other windows, like a web browser, everything is OK. I register the keyboard like this:

        RAWINPUTDEVICE Rid[1];
        Rid[0].usUsagePage = 0x01;
        Rid[0].usUsage = DEVICE_KEYBOARD;
        Rid[0].dwFlags = RIDEV_NOLEGACY;
        Rid[0].hwndTarget = windowHandle;

        if (RegisterRawInputDevices(Rid, count, sizeof(Rid[0])) == FALSE)
        {
            MY_LOG_ERROR("Failed to register device");
        }

     and then check for the WM_INPUT message. If I do not call RegisterRawInputDevices, the GUI works correctly, but the app obviously does not, since I have not registered the keyboard. Is there a way to solve this?
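     In line with the fix mentioned in 7., a minimal sketch of toggling raw input on focus changes (windowHandle is from the original post; the rest is an assumption, including handling these messages in the window procedure):

        // Hedged sketch: unregister the raw keyboard when the window loses focus
        // and re-register it when focus returns. RIDEV_REMOVE requires a NULL target.
        case WM_KILLFOCUS:
        {
            RAWINPUTDEVICE rid = { 0x01 /*usUsagePage*/, 0x06 /*keyboard usage*/, RIDEV_REMOVE, NULL };
            RegisterRawInputDevices(&rid, 1, sizeof(rid));
            break;
        }
        case WM_SETFOCUS:
        {
            RAWINPUTDEVICE rid = { 0x01, 0x06, RIDEV_NOLEGACY, windowHandle };
            RegisterRawInputDevices(&rid, 1, sizeof(rid));
            break;
        }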
  9. Ideally precise... I am using it in an "editor"-like system, where I need to pick the actual triangle and the position within it. I would also like to try ray-tracing on my scene, so I need precise ray intersections. I am currently trying to implement it based on the outputs from the deferred renderer (depth G-buffer plus an added triangle ID buffer) in combination with an octree (quadtree).
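     A minimal sketch of filling such a triangle ID buffer (the attachment index and the objectID uniform are assumed, not from the original post):

        #version 330
        // Hedged sketch: write (object ID, triangle ID) into an integer
        // G-buffer attachment; gl_PrimitiveID indexes triangles within the draw.
        uniform uint objectID;
        layout(location = 2) out uvec2 idOut;   // assumed GL_RG32UI attachment
        void main()
        {
            idOut = uvec2(objectID, uint(gl_PrimitiveID));
        }

     The picked pixel's depth then reconstructs the position, and the ID pair identifies the exact triangle.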
  10. I have already implemented bounding primitive / ray intersection. The problem I have is how to detect the actual triangle. Calculating it brute-force (even when I already have the object from the AABB or other primitive-ray intersection) is quite slow, and with the bounding geometry alone I am unable to get precise points on the surface.
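     For the per-triangle test itself, the Möller-Trumbore intersection is the usual choice (a minimal sketch; Vec3, Dot and Cross are assumed helpers):

        // Hedged sketch: returns true and fills t (ray parameter) and u, v
        // (barycentric coordinates) if the ray hits triangle (v0, v1, v2).
        #include <cmath>

        bool RayTriangle(const Vec3& orig, const Vec3& dir,
                         const Vec3& v0, const Vec3& v1, const Vec3& v2,
                         float& t, float& u, float& v)
        {
            Vec3 e1 = v1 - v0;
            Vec3 e2 = v2 - v0;
            Vec3 p  = Cross(dir, e2);
            float det = Dot(e1, p);
            if (std::fabs(det) < 1e-8f) return false;  // ray parallel to triangle
            float invDet = 1.0f / det;

            Vec3 s = orig - v0;
            u = Dot(s, p) * invDet;
            if (u < 0.0f || u > 1.0f) return false;

            Vec3 q = Cross(s, e1);
            v = Dot(dir, q) * invDet;
            if (v < 0.0f || u + v > 1.0f) return false;

            t = Dot(e2, q) * invDet;
            return t >= 0.0f;                          // hit in front of the origin
        }

     Combined with an octree or BVH to cut down the candidate set, this usually leaves only a handful of triangles to test per ray.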
  11. What is the best approach for scene ray-casting? I want to use it for calculating shoot targets (decal positions), A.I. (visibility), picking (moving units across the map), and possibly other ray-casting (some effects). There could also be other use cases in the future. I was thinking about an octree, but I am not quite sure what to store inside. Storing single triangles in the leaves will lead to a huge waste of space (ID of geometry, ID of triangle, plus the tree itself). Yes, I can use simplified models (fewer triangles), which will work for A.I., but not for obtaining the shoot target position for decal rendering, and picking can also become inaccurate. For picking alone, I can use the depth buffer, but it is not very precise for positions further from the near plane. I was looking into Unity's solution and they have Physics.Raycast; however, I am not able to find what is behind it.
  12. I have come across these two terms in the new DirectX: Tiled (Volume) Resources and sparse volume textures. From the description, they both seem the same to me... is this just another name for the same thing? Another thing: I am unable to find sample OpenGL code for either technique. Is there some sample, or perhaps some other name for it?
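     In OpenGL, the closest analogue appears to be the ARB_sparse_texture extension (a minimal sketch, assuming the extension is available and pageW/pageH/pageD match the page size the driver reports):

        // Hedged sketch: reserve a huge virtual 3D texture, then commit physical
        // memory only for the regions that are actually populated.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
        glTexStorage3D(GL_TEXTURE_3D, 1, GL_R8, 2048, 2048, 2048); // address space only

        // commit one page-aligned region before uploading data into it
        glTexPageCommitmentARB(GL_TEXTURE_3D, 0, x, y, z, pageW, pageH, pageD, GL_TRUE);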
  13. I have large volume data (60 GB+) and I need to visualize its isosurface (and also store the triangles). What is the best free SW for this task? It must run on this configuration: 32 GB RAM, GTX 970, Core i7 @ 4 GHz, and SSDs in RAID 1. I could program this myself, but I thought I would look for an existing solution first. Thank you
  14. Everything that has been said here is correct, but no one has answered the subquestion: what is the difference (mathematically) between the two methods when they are used on a single triangle?
  15. But if I look at the first approach and compare the values needed to compute the tangent with the ShaderX5 book, they calculate du1 = u1 - u0 (and the other differences in a similar way), which is the same as what the first approach does. I understand why it is good to move the calculation to the GPU pixel shader (you can calculate it independently of the geometry, plus today you can generate or tessellate geometry). What I don't understand from the two articles is why I can't use the first one in a shader, because for a single triangle, if I write the solution out on paper, they both use the same differences.
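     For reference, the per-triangle tangent built from those differences (a minimal sketch; Vec3 with the usual arithmetic operators is an assumed type):

        // Hedged sketch: tangent/bitangent of one triangle from the position edges
        // (E1, E2) and the corresponding UV deltas, as in the ShaderX-style derivation.
        void TriangleTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                             float u0, float v0, float u1, float v1, float u2, float v2,
                             Vec3& T, Vec3& B)
        {
            Vec3 E1 = p1 - p0, E2 = p2 - p0;
            float du1 = u1 - u0, dv1 = v1 - v0;
            float du2 = u2 - u0, dv2 = v2 - v0;
            float r = 1.0f / (du1 * dv2 - du2 * dv1); // assumes non-degenerate UVs
            T = (E1 * dv2 - E2 * dv1) * r;
            B = (E2 * du1 - E1 * du2) * r;
        }

     Whether computed per triangle on the CPU or from screen-space derivatives in the pixel shader, for a single flat triangle both routes reduce to these same deltas.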