
Martin Perry

Member
  • Content Count

    115
  • Joined

  • Last visited

Community Reputation

1554 Excellent

About Martin Perry

  • Rank
    Member

Personal Information

Social

  • Github
    https://github.com/MartinPerry


  1. Not a mobile game, but a mobile/desktop OpenGL app. On older devices, when the shaders are stored inside assets (on Android), unpacking and loading them takes longer. (A sketch of reading a shader from Android assets appears after this post list.)
  2. I am shipping my app and I am facing the question of how to represent shaders. Should I embed the shaders directly in the code, or use external files and load them via "fopen"?

     For the first approach (embedded in the binary):
     1) faster load time
     2) shaders are "less easy to copy" (but only against beginners - the text is still visible in a hex dump of the binary)

     Against the first approach:
     1) larger binary - more memory needed
     2) a shader is basically loaded only once at startup anyway

     For the second approach (external files):
     1) easier shader updates - just edit the file, no rebuild needed
     2) smaller binary - less memory needed
     3) unified virtual file system - load everything with one piece of logic

     Against the second approach:
     1) slower load (especially on mobile devices, where the files sit in resources - unpacking them may be slow)

     What are your thoughts? I am more inclined towards the second approach - external files - but maybe I am missing something? (A sketch of both approaches appears after this post list.)
  3. I am creating a multiplatform game that will run on several devices. In my font rendering engine I use FreeType. I have set the font size to 12 pt and the "pixel" sizes are calculated based on DPI. However, this doesn't look nice: fonts have the same physical size across all devices, but on mobile phones with high DPI the fonts are too big relative to the overall screen size, while on a classic 24" monitor the heights are quite OK. How should I handle font sizes? Setting the size in pixels directly doesn't seem correct either, since with high DPI it could end up unreadable. (A sketch of DPI-relative sizing with FreeType appears after this post list.)
  4. I have used transform feedback for the particle system update in my Win32 app, where everything works correctly. I have ported the app to OpenGL ES 3.0 on iOS. However, when I run the app, no data are shown. If I look at the content of the vertex buffer, I can see positions (OUTPUT.xy) with values like 1e+38. The rand function returns numbers in [0, 1], so there is no reason why x and y should end up with such high values (the app runs for approximately 5 s before I pause it and examine the buffers). I also don't see how the shader condition for RebornParticle could be fulfilled, since the initialized z is always 0 and w is always > 0. It seems to me that the input values to transform feedback are not initialized, but on the other hand, on Win32 everything is OK, and rendering the buffers without transform feedback gives the correct initial result. If I don't run transform feedback and just render the initialized buffers, I can see the correct random points on the screen at their initial positions. (A sketch for reading the output buffer back on the device appears after this post list.)

     Here is my code (only the relevant parts):

     typedef struct WindParticle {
         float x;
         float y;
         float t;
         float maxT;
     } WindParticle;

     Initialization:

     auto mt = std::mt19937(rd());
     auto dist01 = std::uniform_real_distribution<float>(0.0f, 1.0f);

     std::vector<WindParticle> particles;
     for (int i = 0; i < particlesCount; i++)
     {
         WindParticle wp;
         wp.x = dist01(mt);
         wp.y = dist01(mt);
         wp.t = 0;
         wp.maxT = 5 * dist01(mt);

         particles.push_back(wp);
     }

     GL_CHECK(glGenTransformFeedbacks(2, transformFeedback));
     GL_CHECK(glGenBuffers(2, vertexBuffer));
     for (int i = 0; i < 2; i++)
     {
         GL_CHECK(glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, transformFeedback[i]));
         GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[i]));
         GL_CHECK(glBufferData(GL_ARRAY_BUFFER, sizeof(WindParticle) * particles.size(), particles.data(), GL_DYNAMIC_DRAW));
         GL_CHECK(glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, vertexBuffer[i]));
     }

     //init transform feedback vao
     G_Effect * ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_update");
     int posId = ef->attributes["PARTICLE_DATA"][0]->location;

     GL_CHECK(glGenVertexArrays(2, this->transformFeedbackVao));
     for (int i = 0; i < 2; i++)
     {
         GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[i]));
         GL_CHECK(glBindVertexArray(this->transformFeedbackVao[i]));
         GL_CHECK(glEnableVertexAttribArray(posId));
         GL_CHECK(glVertexAttribPointer(posId, 4, GL_FLOAT, GL_FALSE, sizeof(WindParticle), 0)); // x, y, time, maxTime
     }

     //init render vao
     ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_render");
     posId = ef->attributes["PARTICLE_DATA"][0]->location;

     GL_CHECK(glGenVertexArrays(2, this->renderVao));
     for (int i = 0; i < 2; i++)
     {
         GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[i]));
         GL_CHECK(glBindVertexArray(this->renderVao[i]));
         GL_CHECK(glEnableVertexAttribArray(posId));
         GL_CHECK(glVertexAttribPointer(posId, 2, GL_FLOAT, GL_FALSE, sizeof(WindParticle), (const GLvoid*)0)); // x, y
     }

     GL_CHECK(glBindVertexArray(0));
     GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, 0));

     //init default id
     currVB = 0;
     currTFB = 1;

     Update:

     G_Effect * ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_update");
     ef->SetFloat("rndSeed", dist01(mt));
     ef->Start();

     GL_CHECK(glEnable(GL_RASTERIZER_DISCARD));
     GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[currVB]));
     GL_CHECK(glBindVertexArray(this->transformFeedbackVao[currVB]));
     GL_CHECK(glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, transformFeedback[currTFB]));

     GL_CHECK(glBeginTransformFeedback(GL_POINTS));
     GL_CHECK(glDrawArrays(GL_POINTS, 0, this->particlesCount));
     GL_CHECK(glEndTransformFeedback());

     GL_CHECK(glBindVertexArray(0));
     GL_CHECK(glDisable(GL_RASTERIZER_DISCARD));
     ef->End();

     Render:

     G_Effect * ef = MyGraphics::G_ShadersSingletonFactory::Instance()->GetEffect("particle_position_render");
     ef->Start();

     GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer[currTFB]));
     GL_CHECK(glBindVertexArray(this->renderVao[currTFB]));
     GL_CHECK(glDrawArrays(GL_POINTS, 0, this->particlesCount));
     GL_CHECK(glBindVertexArray(0));
     ef->End();

     currVB = currTFB;
     currTFB = (currTFB + 1) & 0x1;

     And in the shader, I have created just a dummy update:

     #version 300 es  //for iOS ... in Win32, there is #version 330

     in vec4 PARTICLE_DATA;
     out vec4 OUTPUT;

     uniform float rndSeed;

     float rand(vec2 co)
     {
         float a = 12.9898;
         float b = 78.233;
         float c = 43758.5453;
         float dt = dot(co.xy, vec2(a, b));
         float sn = mod(dt, 3.14);
         return fract(sin(sn) * c);
     }

     vec4 RebornParticle(vec4 old)
     {
         vec4 p = vec4(0.0);
         p.x = rand(old.xy * rndSeed);
         p.y = rand(old.yx * rndSeed);
         p.z = 0.0;
         p.w = old.w;
         return p;
     }

     void main()
     {
         vec4 p = PARTICLE_DATA;
         p.x += 1.0;

         if (p.z > p.w)
         {
             OUTPUT = RebornParticle(p);
             return;
         }

         OUTPUT = p;
     }
  5. Martin Perry

    Rendering map of the world

    I have changed the quadtree to a fixed grid. I can occasionally use the quadtree advantage by rendering higher-level nodes, but usually I need to traverse the full tree to the desired depth. I don't want to render a minimap, I want to render a full-screen map - e.g. you pause the game and it shows the world map with missions and other things. The number of render calls may be a problem in the future, but that can be solved within the current logic: I can render one quad and solve the tiling with textures only - use a cubemap (reduce 6 draw calls to 1) or use texture arrays (depending on HW support - I am planning a mobile version of this as well). The only thing I will have to do is update the texture coordinates and texture IDs (from the cubemap or from the array). (A sketch of the texture-array upload appears after this post list.)
  6. I am rendering a world map in 2D fullscreen. I would like to know what you think about my design and whether something should be improved / done differently:

     1) Calculate the current view bounds
     2) Iterate the quadtree to find tiles within the bounds at the desired zoom
     3) Rendering
        3.0) for every visible tile
        3.1) load the tile texture from file / web
        3.2) if the texture is loaded - render the tile at the actual depth
        3.4) goto 3.0
        3.6) for every visible tile
        3.7) if the texture is not loaded - go up to the tile's parents until there is an existing texture and render that tile; update the depth so it sits behind the tiles already rendered in loop 3.0
        3.8) goto 3.6

     Basically, I am rendering tile after tile, and if there is no texture for a tile, I go up to its parents and render a "lower" resolution one that fills the hole. (A sketch of this parent-fallback loop appears after this post list.)

     Is this an appropriate design? The number of draw calls does not seem to be an issue, since I can only see about 10-20 tiles at once, so one render call for each of them is OK.
  7. I have a 2D planar "model" - basically a triangulated geometry in a plane. On it, I have a set of vertices selected as control points. I have a second model, this time in 3D, with the same number of control points, and I know the mapping (which control point should be mapped to which). What I need to do is map the 2D "model" onto the 3D one. However, the mapping is not a single transformation - the 2D model must stretch / shrink / translate / rotate differently in different parts. Think of it as a base 2D rubber plate being anchored to the 3D control points. How can this be solved (if it is even possible)?
  8. I am using a navigation mesh (triangulated) together with Bullet. In the navigation mesh, I find a path using A*. This gives me a set of "points" inside triangles of the navigation mesh. From those points, I create a Bezier, Catmull-Rom or other interpolant. The problem is how to actually "move" the NPC.

     My ideas:

     1) I calculate the new position via "dt" and the forward vector (or local coordinate system) from the derivative at the new point. This gives me the correct NPC orientation and rotation, but it kind of "breaks" other logic, because I "teleport" the NPC from position to position (I have a constant update rate, but still... if there is some small / thin Bullet collision body, the NPC may go through it).

     2) Use a similar approach as for the player - methods like "MoveForward", "MoveLeft", "RotateAroundUp" etc. However, in this case I am not really sure how to derive the sequence of operations from the interpolant. I have a current and a final position (and orientation) and I need to get a series of "move" and "rotate" operations to reach that final position.

     3) Something else?

     What is the preferred solution? And how would I potentially do 2)? I have been looking around the net and found different ways to do this based on 1) or 2), but none of them was very descriptive. (A sketch of approach 1 with a Catmull-Rom segment appears after this post list.)
  9. Martin Perry

    RawInput blocking input for other windows

    Turning raw input off / on when the window loses focus helped... Thanks for the tip. (A sketch of this focus-based toggle appears after this post list.)
  10. In my C++ engine, I am using raw input for keyboard controls. Now I have added an external GUI with text input boxes. However, the keyboard input is "eaten" by raw input and never passed to the GUI, even if it has focus. If I focus another window, like a web browser, all is OK. I register the keyboard via this:

      RAWINPUTDEVICE Rid[1];
      Rid[0].usUsagePage = 0x01;
      Rid[0].usUsage = DEVICE_KEYBOARD;
      Rid[0].dwFlags = RIDEV_NOLEGACY;
      Rid[0].hwndTarget = windowHandle;

      if (RegisterRawInputDevices(Rid, count, sizeof(Rid[0])) == FALSE)
      {
          MY_LOG_ERROR("Failed to register device");
      }

      and then check for the WM_INPUT message. If I do not call RegisterRawInputDevices, the GUI works correctly, but the app obviously does not, since the keyboard is not registered. Is there a way to solve this?
  11. Martin Perry

    Scene / ray-intersection

    Ideally precise... I am using it in an "editor"-like system, where I need to pick the actual triangle and the position within it. I would also like to try ray-tracing on my scene, so I need precise ray intersections. I am currently trying to implement it based on the outputs from the deferred renderer (depth G-buffer plus an added triangle-ID buffer) in combination with an octree (quadtree).
  12. Martin Perry

    Scene / ray-intersection

    I have already implemented bounding primitive / ray intersection. The problem I have is how to detect the actual triangle. Calculating it brute force (even if I already have the object from an AABB or other primitive/ray intersection) is quite slow, and with bounding geometry alone I am unable to get precise points on the surface. (A ray/triangle intersection sketch appears after this post list.)
  13. What is the best approach for scene ray-casting? I want to use it for calculating shoot targets (decal positions), A.I. (visibility), picking (moving units across the map) and possibly ray-casting for some effects. There could also be other use cases in the future. I was thinking about an octree, but I am not quite sure what to store inside. Storing single triangles in the leaves will lead to a huge waste of space (ID of geometry, ID of triangle, plus the tree itself). Yes, I can use simplified models (fewer triangles), which will work for A.I., but not for obtaining the shoot-target position for decal rendering, and picking can also become inaccurate. For picking alone, I could use the depth buffer, but it is also not very precise for positions further from the near plane. I was looking into Unity's solution and they have Physics.Raycast; however, I am not able to find what is behind it.
  14. I have come across these two terms in the new DirectX - Tiled Volume Resources and sparse volume textures. From the description, they both seem the same to me... is this just another name for the same thing? Another thing is, I am unable to find sample OpenGL code for either technique. Is there a sample, or maybe some other name? (A sparse-texture sketch appears after this post list.)
  15. I have a large volume dataset (60 GB+) and I need to visualise its isosurface (and also store the triangles). What is the best free software for this task? It must run on this configuration: 32 GB RAM, GTX 970, Core i7 @ 4 GHz and SSDs in RAID 1. I could program this myself, but I thought I would look for an existing solution first. Thank you
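
The sketches below relate to the posts above; none of the code is taken from the original projects, it is illustrative only.

Regarding post 1 (shaders stored inside Android assets): a minimal sketch of reading a shader source out of the APK with the NDK AAssetManager, assuming the AAssetManager* has already been obtained (e.g. from the JNI side). The name LoadAssetText is illustrative.

    #include <android/asset_manager.h>
    #include <string>

    // Read a whole text asset (e.g. a shader) from the APK into a string.
    // 'mgr' must be a valid AAssetManager*, 'fileName' is a path inside assets/.
    std::string LoadAssetText(AAssetManager* mgr, const char* fileName)
    {
        AAsset* asset = AAssetManager_open(mgr, fileName, AASSET_MODE_BUFFER);
        if (asset == nullptr)
        {
            return "";                       // asset not found
        }

        const size_t len = AAsset_getLength(asset);
        std::string text(len, '\0');
        AAsset_read(asset, &text[0], len);   // copies the (decompressed) asset data
        AAsset_close(asset);
        return text;
    }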
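
Regarding post 2 (embedding shaders in the binary vs. loading external files): a minimal sketch showing the external-file load via fopen with an embedded string as fallback. The names defaultVertexShaderSrc and LoadShaderSource are illustrative, not from the original code.

    #include <cstdio>
    #include <string>

    // Approach 1: shader text compiled directly into the binary.
    static const char* defaultVertexShaderSrc =
        "#version 300 es\n"
        "in vec4 POSITION;\n"
        "void main() { gl_Position = POSITION; }\n";

    // Approach 2: load the shader from an external file via fopen,
    // falling back to the embedded text if the file is missing.
    std::string LoadShaderSource(const char* fileName)
    {
        FILE* f = fopen(fileName, "rb");
        if (f == nullptr)
        {
            return defaultVertexShaderSrc;   // file missing - use embedded fallback
        }

        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);

        std::string src(static_cast<size_t>(size), '\0');
        fread(&src[0], 1, static_cast<size_t>(size), f);
        fclose(f);
        return src;
    }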
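
Regarding post 3 (font size vs. DPI): a minimal sketch, assuming the FT_Face is already loaded, of plain point/DPI sizing plus one possible workaround - scaling the point size by the physical screen height relative to a reference device, so text covers roughly the same fraction of the screen everywhere. The function names and referenceScreenHeightInch are illustrative.

    #include <ft2build.h>
    #include FT_FREETYPE_H

    // Plain DPI-based sizing: e.g. 12 pt rendered at the device DPI.
    // FreeType takes the size in 1/64 of a point.
    void SetFontSizePt(FT_Face face, float sizePt, int dpiX, int dpiY)
    {
        FT_Set_Char_Size(face, 0, (FT_F26Dot6)(sizePt * 64.0f), dpiX, dpiY);
    }

    // Workaround: scale the point size so the glyph height stays a similar
    // fraction of the screen. screenHeightPx / dpiY is the physical screen
    // height in inches; referenceScreenHeightInch is an assumed "design" device.
    void SetFontSizeRelative(FT_Face face, float sizePt,
                             int dpiX, int dpiY, int screenHeightPx)
    {
        const float referenceScreenHeightInch = 12.0f;   // a 24" 16:9 monitor is ~11.8" tall
        float screenHeightInch = (float)screenHeightPx / (float)dpiY;
        float scale = screenHeightInch / referenceScreenHeightInch;

        FT_Set_Char_Size(face, 0, (FT_F26Dot6)(sizePt * scale * 64.0f), dpiX, dpiY);
    }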
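
Regarding post 4 (transform feedback producing 1e+38 values on iOS): a minimal sketch of reading the output buffer back on the device with glMapBufferRange (GL_MAP_READ_BIT is core in OpenGL ES 3.0), to confirm what the feedback pass actually wrote. It does not diagnose the problem itself; it is also worth double-checking that glTransformFeedbackVaryings is called for OUTPUT before the program is linked.

    #include <OpenGLES/ES3/gl.h>   // iOS; use the matching GLES3 / GL header elsewhere
    #include <cstdio>

    // Dump the transform-feedback output buffer. 'buffer' is the GL buffer
    // written by the last feedback pass, 'count' the particle count
    // (4 floats per particle, matching WindParticle).
    void DumpParticleBuffer(GLuint buffer, int count)
    {
        glBindBuffer(GL_ARRAY_BUFFER, buffer);

        const GLsizeiptr size = count * 4 * sizeof(float);
        const float* data = (const float*)glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                                                           GL_MAP_READ_BIT);
        if (data != nullptr)
        {
            for (int i = 0; i < count; i++)
            {
                printf("particle %d: x=%f y=%f t=%f maxT=%f\n",
                       i, data[i * 4 + 0], data[i * 4 + 1],
                          data[i * 4 + 2], data[i * 4 + 3]);
            }
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }

        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }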
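
Regarding post 5 (reducing tile draw calls with a texture array): a minimal sketch of allocating a GL_TEXTURE_2D_ARRAY and uploading one tile per layer. glTexStorage3D / glTexSubImage3D are core in OpenGL ES 3.0 and desktop GL 4.2+ (ARB_texture_storage); tileSize, layerCount and the pixel source are illustrative.

    #include <GLES3/gl3.h>   // or the desktop GL loader header

    // Allocate an immutable 2D texture array: one layer per map tile.
    GLuint CreateTileArray(int tileSize, int layerCount)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, tileSize, tileSize, layerCount);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

    // Upload one decoded tile into a given layer; the shader then samples
    // the array with a vec3 coordinate (u, v, layerIndex).
    void UploadTile(GLuint tex, int tileSize, int layerIndex, const unsigned char* rgbaPixels)
    {
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, layerIndex,          // x, y, layer offset
                        tileSize, tileSize, 1,     // width, height, one layer
                        GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
    }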
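
Regarding post 6 (filling holes with lower-resolution parent tiles): a minimal sketch of the parent walk described in steps 3.6-3.8, written against a hypothetical Tile structure (parent pointer, HasTexture(), Render()); none of these names come from the original code.

    // Hypothetical tile node; the real engine's structure will differ.
    struct Tile
    {
        Tile* parent = nullptr;      // nullptr at the root
        bool  HasTexture() const;    // texture already loaded from file / web?
        void  Render(float depth);   // draw the tile quad at a given depth
    };

    // Second pass: for tiles whose texture is not loaded yet, walk up the
    // parents and render the first ancestor that has a texture, pushed
    // behind the tiles already drawn in the first pass.
    void RenderFallback(Tile* tile, float behindDepth)
    {
        if (tile->HasTexture())
        {
            return;                          // already handled in the first pass
        }

        for (Tile* p = tile->parent; p != nullptr; p = p->parent)
        {
            if (p->HasTexture())
            {
                p->Render(behindDepth);      // lower-resolution tile fills the hole
                return;
            }
        }
        // no ancestor loaded yet - leave the hole for this frame
    }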
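
Regarding post 8 (moving an NPC along the interpolated path): a minimal sketch of approach 1 - evaluating a Catmull-Rom segment and its derivative, then advancing the curve parameter by dt with a speed cap so the NPC moves smoothly instead of teleporting. Vec3, speed and the segment points are illustrative placeholders.

    #include <cmath>

    struct Vec3 { float x, y, z; };   // placeholder math type

    // Catmull-Rom position on the segment between p1 and p2, t in [0, 1].
    Vec3 CatmullRom(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, float t)
    {
        float t2 = t * t, t3 = t2 * t;
        auto term = [&](float a, float b, float c, float d) {
            return 0.5f * (2.0f * b + (-a + c) * t
                           + (2.0f * a - 5.0f * b + 4.0f * c - d) * t2
                           + (-a + 3.0f * b - 3.0f * c + d) * t3);
        };
        return { term(p0.x, p1.x, p2.x, p3.x),
                 term(p0.y, p1.y, p2.y, p3.y),
                 term(p0.z, p1.z, p2.z, p3.z) };
    }

    // Derivative of the same segment - gives the forward (tangent) direction.
    Vec3 CatmullRomTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, float t)
    {
        float t2 = t * t;
        auto term = [&](float a, float b, float c, float d) {
            return 0.5f * ((-a + c)
                           + 2.0f * (2.0f * a - 5.0f * b + 4.0f * c - d) * t
                           + 3.0f * (-a + 3.0f * b - 3.0f * c + d) * t2);
        };
        return { term(p0.x, p1.x, p2.x, p3.x),
                 term(p0.y, p1.y, p2.y, p3.y),
                 term(p0.z, p1.z, p2.z, p3.z) };
    }

    // Per-frame update: the world-space step is speed * dt, converted to a
    // parameter step via the local tangent length; the normalized tangent
    // gives the NPC's facing direction.
    void StepAlongSegment(float& t, float dt, float speed,
                          const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3,
                          Vec3& outPos, Vec3& outForward)
    {
        Vec3 tan = CatmullRomTangent(p0, p1, p2, p3, t);
        float len = std::sqrt(tan.x * tan.x + tan.y * tan.y + tan.z * tan.z);
        if (len > 1e-6f)
        {
            outForward = { tan.x / len, tan.y / len, tan.z / len };
            t += (speed * dt) / len;         // dt mapped to curve parameter
            if (t > 1.0f) t = 1.0f;          // caller switches to the next segment here
        }
        // else: degenerate tangent - keep the previous forward direction
        outPos = CatmullRom(p0, p1, p2, p3, t);
    }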
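
Regarding posts 9 and 10 (raw input eating keyboard input for the GUI): a minimal sketch of the focus-based toggle mentioned in post 9 - unregister the keyboard with RIDEV_REMOVE when the window loses focus and re-register it on focus gain. RIDEV_REMOVE requires hwndTarget to be NULL; usage 0x06 corresponds to the keyboard (DEVICE_KEYBOARD in the original snippet).

    #include <windows.h>

    // Register / unregister the raw keyboard based on window focus, so that
    // ordinary keyboard messages can reach the GUI when it has focus.
    void EnableRawKeyboard(HWND windowHandle, bool enable)
    {
        RAWINPUTDEVICE rid;
        rid.usUsagePage = 0x01;                         // generic desktop controls
        rid.usUsage     = 0x06;                         // keyboard
        rid.dwFlags     = enable ? RIDEV_NOLEGACY : RIDEV_REMOVE;
        rid.hwndTarget  = enable ? windowHandle : NULL; // RIDEV_REMOVE needs a NULL target

        if (RegisterRawInputDevices(&rid, 1, sizeof(rid)) == FALSE)
        {
            // handle / log the failure
        }
    }

    // In the window procedure:
    //   case WM_SETFOCUS:  EnableRawKeyboard(hwnd, true);  break;
    //   case WM_KILLFOCUS: EnableRawKeyboard(hwnd, false); break;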
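
Regarding posts 11-13 (getting the exact triangle and hit point after the bounding-volume test): a minimal sketch of the Moller-Trumbore ray/triangle intersection, the usual last step once the octree / AABB pass has narrowed down the candidate triangles; it is not taken from the original engine.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  Sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  Cross(const Vec3& a, const Vec3& b) { return { a.y * b.z - a.z * b.y,
                                                                a.z * b.x - a.x * b.z,
                                                                a.x * b.y - a.y * b.x }; }
    static float Dot(const Vec3& a, const Vec3& b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Moller-Trumbore: returns true on a hit and fills the distance t along
    // the ray plus the barycentric coordinates (u, v) inside the triangle.
    bool RayTriangle(const Vec3& orig, const Vec3& dir,
                     const Vec3& v0, const Vec3& v1, const Vec3& v2,
                     float& t, float& u, float& v)
    {
        const float EPS = 1e-7f;
        Vec3 e1 = Sub(v1, v0);
        Vec3 e2 = Sub(v2, v0);

        Vec3 p = Cross(dir, e2);
        float det = Dot(e1, p);
        if (std::fabs(det) < EPS) return false;      // ray parallel to triangle

        float invDet = 1.0f / det;
        Vec3 s = Sub(orig, v0);

        u = Dot(s, p) * invDet;
        if (u < 0.0f || u > 1.0f) return false;

        Vec3 q = Cross(s, e1);
        v = Dot(dir, q) * invDet;
        if (v < 0.0f || u + v > 1.0f) return false;

        t = Dot(e2, q) * invDet;
        return t > EPS;                              // hit in front of the ray origin
    }

    // Hit point: orig + dir * t; triangle / geometry IDs stored in the octree
    // leaf identify which triangle of which object was picked.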
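
Regarding post 14 (Tiled Resources vs. sparse textures): the two terms generally refer to the same concept (partially resident textures); in OpenGL the feature is exposed by the ARB_sparse_texture extension. A minimal sketch, assuming the extension is available and an extension loader (e.g. glad / GLEW) provides the entry points; the brick sizes and GL_R8 format are illustrative.

    // Requires ARB_sparse_texture - check the extension string first.
    GLuint CreateSparseVolume(int w, int h, int d)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);

        // Mark the texture as sparse before allocating immutable storage;
        // this reserves only virtual address space, no physical memory yet.
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
        glTexParameteri(GL_TEXTURE_3D, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, 0);
        glTexStorage3D(GL_TEXTURE_3D, 1, GL_R8, w, h, d);
        return tex;
    }

    // Commit physical memory for one brick of the volume and fill it. Offsets
    // and sizes must be multiples of the page size (GL_VIRTUAL_PAGE_SIZE_X/Y/Z_ARB,
    // queried with glGetInternalformativ).
    void CommitBrick(GLuint tex, int x, int y, int z, int w, int h, int d, const void* voxels)
    {
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexPageCommitmentARB(GL_TEXTURE_3D, 0, x, y, z, w, h, d, GL_TRUE);
        glTexSubImage3D(GL_TEXTURE_3D, 0, x, y, z, w, h, d, GL_RED, GL_UNSIGNED_BYTE, voxels);
    }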