
All Activity


  1. Past hour
  2. Problem with sleep

    I don't know if this happens to you guys, but every day that I program, I usually wake up 4-6 times in the night with some kind of "resolution" that I don't even remember afterwards, which makes me wake up feeling a little tired. Yesterday (Sunday), I didn't program at all, and I slept very well and woke up feeling restored. Does this happen to you too?
  3. Lighting space

    Oh yeah, missed that one. I currently use this to apply tangent-space normal mapping (with the surface normal in view space), though without pre-computation; just on the fly in the PS. Currently, I have separate object-to-camera and camera-to-projection matrices inside my vertex shader. If I am going to switch, object-to-world and world-to-projection are the "obvious" replacements, though for precision I had best keep the camera-to-projection separate. What do you advise for the typical gamut of "Unity3D-kind-of" games? All my code uses camera space, though I am tempted to use world space (which can also be changed more easily later on to offset-world-space, I guess)? Especially the reduction in map/unmap invocations seems like a holy grail at the moment. Not that I experience any bottlenecks so far, but it still seems wasteful in the case of multiple passes.
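    A small sketch of the switch being discussed (all names illustrative): concatenating object-to-world with world-to-projection on the CPU gives the same result as applying the two matrices separately in the vertex shader, up to float rounding, so the choice is about update frequency and precision rather than correctness:

```cpp
#include <array>

// Minimal row-major 4x4 matrix, just enough to show that a pre-combined
// transform and two separately applied transforms agree (up to rounding).
using Mat4 = std::array<float, 16>;

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

std::array<float, 4> transform(const Mat4& m, const std::array<float, 4>& v) {
    std::array<float, 4> r{};
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            r[i] += m[i * 4 + k] * v[k];
    return r;
}
```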
  4. Lighting space

    I agree with your conclusions and think that's a pretty common way to go. If you're dealing with a very large world, float starts to get really bad precision a few kilometres away from the origin, enough to cause moving lights to flicker as their positions are quantised. In that case, it's common to use an offset-world-space, where you move the "shading origin" to the camera's position from time to time. Also, going from object to world to camera has worse precision than going from object to camera directly. In something like a planetary / solar scale renderer you would notice this, so you would be back to updating constant data per object per frame just to get stable vertex positions, in which case you may as well do camera-space shading. Another older convention you missed is tangent space, where +Z is the surface normal and X/Y are the surface tangents. This was popular during the early normal mapping era, as you could transform your light and view directions into tangent space in the vertex shader, and then in the pixel shader the raw normal map is the shading normal (no TBN matrix multiply required per pixel).
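    A minimal sketch of the precision problem and the offset-world-space fix described above, assuming authoritative positions are kept in double on the CPU (all names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Spacing between adjacent representable floats near a value: at ~100 km
// from the origin a 32-bit float can only step in ~7.8 mm increments,
// which is what makes distant moving lights visibly quantised.
inline float floatStepAt(float v) {
    return std::nextafterf(v, INFINITY) - v;
}

// Camera-relative ("offset world") transform: keep positions in double
// precision on the CPU and subtract the shading origin (e.g. the camera
// position) before converting to float for the GPU.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

inline Vec3f toCameraRelative(const Vec3d& world, const Vec3d& camera) {
    return { static_cast<float>(world.x - camera.x),
             static_cast<float>(world.y - camera.y),
             static_cast<float>(world.z - camera.z) };
}
```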
  5. Hello! I have a problem with a reflection shader for D3D11:

         1>engine_render_d3d11_system.obj : error LNK2001: unresolved external symbol IID_ID3D11ShaderReflection

     I tried to add this, as MSDN tells me:

         #include <D3D11Shader.h>
         #include <D3Dcompiler.h>
         #include <D3DCompiler.inl>
         #pragma comment(lib, "D3DCompiler.lib")
         //#pragma comment(lib, "D3DCompiler_47.lib")

     but still no luck. I think a lot of people have done this already; what am I missing?
  6. How Important are extra features?

    Coming up with extra features is easy. The hard part is throwing out the ones that distract from the good parts of the game or only serve to complicate things without making it more fun.
  7. But there is also the need to actually write a replacement SpriteBatch in order to support instancing... I'll go head-in-the-sand for the time being
  8. Nah, I wouldn't call it premature optimization, since the actual decision in both cases can have a large impact on the code base. I mean, one should think things through to a certain level before coding, instead of rushing off and stubbing your toes on every stone along the way. Of course, if you try to foresee everything in advance, you won't have written a single line of code by the end of the day. On the other hand, continuous refactoring once the problems start to appear is wasteful as well, since refactoring by itself does not result in added value. So just find a balance between designing (standing still) and coding (moving forward)
  9. Trickjumping in games

    Partly true; some techniques are pretty hard to learn (circle jumps, plat-strafes, air-strafes). Some are pretty easy if you have a good tutorial - but there was always a lack of tutorials for that stuff, or the tutorials were just pretty showcases... For example, vertical plasma climbing is the easiest thing: stand at a wall, look at the edge between floor and wall but move the cursor above the edge, jump, hold down the forward key, and start shooting plasma - without moving the mouse or pressing additional keys at all. After a few minutes anyone can do it. Of course, doing it higher, climbing curves, switching sides, etc. is harder...
  10. Lighting space

    From a theoretical point of view, it does not really matter in which space, or coordinate system, you evaluate the rendering equation. Without introducing artificial spaces, you can choose among object, world, camera and light spaces. Since the number of objects and lights in typical scenes is way larger than the number of cameras, object and light spaces can be ruled out: if you selected object space, all the lights (of which the number is dynamic) would need to be transformed to object space in the pixel shader; if you selected light space, all objects would need to be transformed to each light space in the pixel shader. Both cases clearly waste precious resources on "useless" transformations at a per-fragment level.

    So world and camera space remain. Starting with a single camera in the scene, camera space seemed the most natural choice. Lights can be transformed just once in advance. Objects can be transformed inside the vertex shader (at a per-vertex level). Furthermore, a position in camera space directly gives the (negated, unnormalized) view direction used in BRDFs, so no offsetting calculations need to be performed, since the camera is always located at the origin of its own space.

    Given that you can use multiple cameras, each having its own viewport, the process repeats itself, but now for a different camera space. The data used in the shaders must be updated accordingly; for example, the object-to-camera transformation matrix should reflect the right camera. This implies many map/unmap invocations for object data. If, however, lighting is performed in world instead of camera space, I could just allocate a constant buffer per object, update all the object data once per frame, and bind it multiple times in case of multiple passes. Finally, given my current and possible future voxelization problems, world space seems more efficient than camera space.

    It is possible to use a hybrid of camera and world space, but this would involve many "useless" transforms back and forth, so I would rather stick to one space. Given all this, I wonder if camera space is still appealing? Even for LOD purposes, length(p_view) and length(p_world - eye_world) are pretty much equivalent with regard to performance.
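    On the LOD remark: the two distance expressions are not just similar in cost but mathematically identical, because the view transform is a rigid rotation applied after subtracting the eye position, and rotations preserve length. A tiny sketch (using a hypothetical 90-degree yaw as the rotation part):

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

inline float length(const V3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// world -> view: translate by -eye, then rotate 90 degrees about Y.
// Any rotation would do; it cannot change the length of the vector.
inline V3 worldToView(const V3& p, const V3& eye) {
    V3 t{ p.x - eye.x, p.y - eye.y, p.z - eye.z };
    return V3{ t.z, t.y, -t.x };
}
```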
  11. Hey, thanks for the replies. I think I was chasing down performance issues I don't actually have - premature optimisation. I already have a solution for depth sorting on the CPU by only allowing a small finite number of layers, and I will ignore GPU instancing until I have more experience with DirectX, and stick with SpriteBatch.
  12. Today
  13. Trickjumping in games

    I played Q3 on LAN for years with friends, and later Q3 Arena for some time. I love the physics and the flow of the game. But those trick jumping skills never came to me. None of them. I tried quite a few times, watching some videos... but I don't get it. So what I think is: this stuff is too inaccessible to the average player, badly introduced and too hard to learn. Probably because a lot of it was not intended by the developers, AFAIK. But this could be improved and reused in actual games.
  14. This can be really advantageous given that you have quite some data stored at the vertices. Currently, DirectXTK only uses a position, a color and a pair of texture coordinates per vertex for sprites, which is pretty much the equivalent of a single transformation matrix.
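    Rough back-of-the-envelope numbers, assuming a DirectXTK-style sprite vertex of position (float3), color (float4) and texture coordinates (float2); the exact layout may differ:

```cpp
#include <cstddef>

// Illustrative per-sprite bandwidth comparison between four full sprite
// vertices and a single per-instance float4x4 transform.
struct SpriteVertex {
    float position[3]; // 12 bytes
    float color[4];    // 16 bytes
    float texcoord[2]; //  8 bytes
};

constexpr std::size_t kBytesPerQuad   = 4 * sizeof(SpriteVertex); // 4 verts per sprite
constexpr std::size_t kBytesPerMatrix = 16 * sizeof(float);       // one 4x4 matrix
```

    On these assumptions, four full vertices per quad cost 144 bytes while a per-instance matrix costs 64, which is where instancing would save bandwidth.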
    In a very basic setting, your sprites will contain opaque and transparent fragments (e.g. text). These transparent fragments need to be blended correctly with the fragments behind them to let the correct background show through. This can be achieved with the depth buffer in at least two separate passes for one layer of transparency (you can use more passes for multiple layers you want to support on top of each other as well). Alternatively, you can sort the sprites on the CPU, requiring only a single pass when rendering the sprites in order (based on the sorting).

    For transparent 3D objects, you can use the depth buffer as well, but CPU sorting is not guaranteed to be possible in all cases. You can have interlocked transparent triangles, for example, which cannot be sorted once for the whole image, but should rather be sorted per pixel. Sprites, on the other hand, are just stacked on top of each other, so you can always sort once for the whole image, instead of per pixel, allowing CPU sorting in all cases.

    So given the above, I would say to carefully profile the GPU depth buffer approach for sprites, because I expect the CPU sorting to be faster for a common load of sprites. Even if you have an extreme number of sprites, you can always rely on insertion-sort-based sorting algorithms while exploiting coherency between frames.
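    The insertion-sort idea can be sketched like this; on an almost-sorted array (last frame's order with a few depth changes) the inner loop rarely runs, so the re-sort is close to linear (names are illustrative):

```cpp
#include <vector>
#include <cstddef>

// CPU depth sorting for sprites with frame-to-frame coherence:
// insertion sort runs in O(n) on an almost-sorted array, so re-sorting
// last frame's order after a few sprites moved is cheap.
struct Sprite { float depth; int id; };

void insertionSortByDepth(std::vector<Sprite>& sprites) {
    for (std::size_t i = 1; i < sprites.size(); ++i) {
        Sprite key = sprites[i];
        std::size_t j = i;
        while (j > 0 && sprites[j - 1].depth > key.depth) {
            sprites[j] = sprites[j - 1]; // shift larger depths right
            --j;
        }
        sprites[j] = key;
    }
}
```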
  16. Trickjumping in games

    Yeah, that's speed-capturing. Really important if you want to be really fast; a lot of modern games do this totally wrong. With horizontal overbounces in Q3 this gets more insane: for example, in q3ctf4 you can normally capture the flag in around 10 seconds, but with a well-timed rocket and one horizontal OB you get it in a third of the time. If you can do such a thing in a professional match without getting fragged, you rule that match pretty fast.
  17. Trickjumping in games

    I like trickjumping in games. It reminds me of a friend of mine who was top-ranked back in Unreal Tournament 2004. He and his buddy were so good at trickjumping that on one CTF map, they would trickjump from the top of the enemy tower back to the home tower to score a point (you are supposed to go back through the crossings in the middle). This is the map: That kind of move was one way to confuse the opponents. You know your flag is taken, but you don't know where the fuck it is, as you don't see it crossing through the middle. Instead it's flying right above you at high speed.
  18. C# 10,000 PNG files!

    When developing a game, we always have texture memory limits, so what every game needs to do is, first, manage the resources currently used and loaded into memory and, second, pack and compress those resources that are predicted to be used together. This will result in more than one but fewer than 10,000 texture maps of an arbitrary size (most games tend to use POT textures). You then need a texture manager that controls a certain amount of memory you consider to be the right one, loading the necessary resources and unloading unused ones. There exist techniques like ref-counting or predictive loading to manage this. Sure, access is faster when everything is already in memory, but it is obviously bad program design to load 1.5 GB of data into memory just to access it fast.

    I ran into this on a commercial project using Unity where all the sounds were loaded into RAM when the game started. It was incredibly slow at startup and incompatible with non-64-bit systems (it exceeded 6 GB in total), so I had to fix it with some manager classes. In the end the loading time dropped to a few seconds, which also removed the need for a loading screen, and more importantly the user didn't notice any impact: sounds were there as though they had been loaded into memory at startup.

    Lastly, data packing is important. Storing 10,000 PNGs means also storing 10,000 PNG headers that need to be processed. Game engines, for example, preprocess those assets before shipping into something they can store and read more rapidly and efficiently, in a data format that can be accessed quickly. Zip is OK, but to get the best disk performance you should either use or write your own format that is more disk-cache friendly. HDDs have a fixed amount of cache space that data is copied from while reading, and it is reloaded as needed. Small data triggers more frequent reads from the real physical disk, while chunking your data into packages tends toward less caching but faster reading.

    In my game engine I use a data format that is packed into 64 KB blocks and, when generated, arranges those chunks so that larger data lands in adjacent chunks. Smaller pieces of data are puzzled into the remaining spaces where possible, so that wasted space is reduced to a minimum. Without going too deep into the algorithms: paired with some other techniques, this gets you loading times of several GB in a few seconds. Even though these are game dev techniques, you could also use them for any other kind of program.
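    A minimal sketch of the ref-counting technique mentioned above (the names and the "loading" itself are placeholders):

```cpp
#include <cstddef>
#include <map>
#include <string>

// Minimal ref-counted resource cache: a resource is "loaded" on first
// acquire and "unloaded" when the last user releases it. Real loading
// and unloading would happen where the booleans are returned.
class TextureCache {
public:
    // Returns true if this call actually (pretend-)loaded the texture.
    bool acquire(const std::string& name) {
        return ++refs_[name] == 1;
    }
    // Returns true if this call actually (pretend-)unloaded it.
    bool release(const std::string& name) {
        auto it = refs_.find(name);
        if (it == refs_.end()) return false;
        if (--it->second == 0) { refs_.erase(it); return true; }
        return false;
    }
    std::size_t residentCount() const { return refs_.size(); }
private:
    std::map<std::string, int> refs_;
};
```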
  19. Trickjumping in games

    What do you think about trickjumping in games - or games which are heavily based on this concept? Which games do you like? Are there enough games in that kind of field? My answer: I still play Quake 3 DeFRaG (a trickjump mod since 2000) regularly and still think there is no game equal to its trickjump abilities:
    - Strafejumping in multiple ways (half-beat, inverted, side-ways, forward-only, backward, etc.)
    - Circlejumps
    - Air strafing
    - Plasma climbing
    - Rocket jumps (3x, 4x, 5x, 7x stacked rockets in one place...)
    - Overbounces
    - Promode is a great addon (ramping, stair-jumps, teleport-jumps, bunny-hopping, faster in general)
    - It's so easy to make maps for (GTKRadiant)
    Sure, there are games which come close, but I still think Q3 is the best in trickjump abilities. Reflex-fps comes close, but lacks a few things... And there are hundreds of good movies made with this mod: https://www.youtube.com/watch?v=UYbQIsAtlnY https://www.youtube.com/watch?v=7tG9xoyNVzY https://www.youtube.com/watch?v=MP9IZju7L_U https://www.youtube.com/watch?v=x-ScsQD92BY
  20. Hello, I have 3 years of experience in mobile software development. I always wanted to work in gamedev and make games, but unfortunately I haven't had many opportunities to do it. During my computer science studies I learned the basics needed to work as a software developer. After graduation I easily got a job as an iOS developer. It is a good job, well paid, but it is not what I always wanted to do. I feel that I'm missing something, wasting my life on the wrong things. Now my life is more stable, I have money to live on, and I want to try to start making games for real. I heard gamedev is hard and doesn't pay well. Is it true that you have to work much more than in regular software development? Is it true that people accept earning less and working more because they love doing it so much? And that they usually have to stay late at work? How much time do I need to learn to work as a junior PC game developer? Should I quit my job to follow my dreams? Or is it too much risk? I recently released a small game for iOS as a warm-up. I don't think I want to make mobile games; they seem so limited in gameplay and controls. I read that the mobile game market is saturated and it is very hard to make a living from it. Please give me some advice. Thanks!
  21. Goodbye!

    I'll never forget those times you hit me with warning points. Also, that one time you told me a lot about swing dancing. RIP Josh. I wish you could see how much I've grown.
  22. Let's say you have a game like flappy bird. Basically zero extra features. You go for the high score and that's it. Somehow it becomes one of the most downloaded games ever. I'm planning on making a game similar to that. But I could add extra features to it. These include temporary powerups, permanent upgrades, levels, story, maybe bosses, challenge modes, achievements, and so on. How important are extra features to you, and when do you prefer a barebones game vs. a game with lots of extra features?
  23. [UNITY] Character Control Problem

    I multitask; I was on Unity and Chrome with dual monitors. Also, it worked
  24. [UNITY] Character Control Problem

    Lol, that was the fastest reply of my life. I just closed the tab and got the email notification xDD I hope it worked
  25. C++ OOP and DOD methodologies

    Regarding the difference between OOP and DOP, I made a sample n-body particle simulation a while ago to see the actual differences in maintainability and performance. I tried this by making the same application in 4 different code styles:
    - Naive, poor OOP without thinking about hardware at all (most software is written in this style!)
    - Still OOP, but with a bit more thought, though not that much
    - Still OOP, but with performance and cache in mind
    - DOP, but poorly implemented
    But this experiment had a few problems:
    - It has a lot of shared code (app handling, math, multithreading, rendering, etc.) which was equal across all 4 code styles, so the differences can only be seen in some areas (neighbor search, for example).
    - The DOP approach is just poorly implemented; I am pretty sure this can be done much better - so this thing needs a fifth style: good DOP!
    - An SPH fluid simulation may not be the best scenario for that experiment.
    Why did I create this? First of all, to see for myself: is there actually a difference, and how big is it? How good is the compiler at optimizing? But more importantly, I wanted to show people that naive, poorly written OOP may be really bad for performance. My result: the compiler is pretty good at optimizing the obvious things, such as optimizing getters and setters away, inlining code, eliminating branches, etc. So some OOP concepts make no difference compared to a DOP style. Removing virtual functions is unnecessary when you only call them a few times per frame - but it makes a difference when you call them tens of thousands of times per frame. If anyone is interested and wants to try it out, or wants to help me write the fifth style: https://github.com/f1nalspace/nbodysimulation_experiment
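    The layout difference behind OOP-vs-DOP results like these can be sketched as array-of-structs vs struct-of-arrays; this is an illustrative sketch, not code from the linked repository:

```cpp
#include <cstddef>
#include <vector>

// AoS: each particle drags all its fields through the cache, even in a
// pass that only needs positions and velocities.
struct ParticleAoS { float px, py, vx, vy, mass, radius; };

// SoA: each field lives in its own array, so a pass streams only the
// arrays it actually touches.
struct ParticlesSoA {
    std::vector<float> px, py, vx, vy, mass, radius;
};

// Position update touching only positions and velocities; mass and
// radius never enter the cache in the SoA layout.
void integrateSoA(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.px.size(); ++i) {
        p.px[i] += p.vx[i] * dt;
        p.py[i] += p.vy[i] * dt;
    }
}
```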
  26. [UNITY] Character Control Problem

  27. [UNITY] Character Control Problem

    Activate Apply Root Motion in the Animator component. That will apply movement from animations.