NikiTo

Member
  • Content count

    117
  • Joined

  • Last visited

Community Reputation

169 Neutral

1 Follower

About NikiTo

  • Rank
    Member

Personal Information

  • Role
    3D Artist
  • Interests
    Art
    Design
    Programming


  1. NikiTo

    Assembly language?

    For Uncharted 4, they used intrinsics to make the multicore architecture work at maximum capacity for the Ambient Occlusion. Assembler gives you the deepest control over the hardware. You can combine the beginner tutorials with Intel's documentation, which is extremely explicit and available for free online. You can also look for the assembler community on G+; they like to help, and I have seen people there still making games in assembler, even nowadays. If you are going to use ASM, please be aware that in a perfect scenario ASM should be used in conjunction with C/C++.
  2. I am interested in bitwise operations. I use ^ & | << >> ~ the most, and + - from time to time; never / * %. What about firstbitlow/firstbithigh?
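    A minimal HLSL sketch of the kind of thing I mean (the buffer name, thread-group size and the packing of the result are made up purely for illustration):

        // Illustrative compute shader: shifts/AND/OR plus the firstbitlow/firstbithigh intrinsics.
        RWStructuredBuffer<uint> gOut;   // hypothetical output buffer

        [numthreads(64, 1, 1)]
        void CSMain(uint3 id : SV_DispatchThreadID)
        {
            // Set one low bit chosen by the thread index, plus the top bit.
            uint flags = (1u << (id.x & 31u)) | 0x80000000u;

            uint lowest  = firstbitlow(flags);    // position of the least significant set bit
            uint highest = firstbithigh(flags);   // position of the most significant set bit (31 here)

            // Clear the lowest set bit without knowing its position.
            uint cleared = flags & (flags - 1u);

            gOut[id.x] = (highest << 16u) | (lowest << 8u) | (cleared & 0xFFu);
        }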
  3. NikiTo

    When a shader is too long?

    In the beginning a novice could think that the GPU computes the whole 16K x 10K texture in parallel, in one tick of the clock. In that case, a pixel shader approach would be better in my situation. But knowing that the GPU handles it all with only about 1K cores (384 in my case) changes the situation. Compute is better as long as I can keep the GPU doing something most of the time. If I had 160 million cores, I would go back to my pixel pipeline and have it all computed in 100 ticks of the clock. My current compute pipeline would never be able to saturate a million cores. So my old pixel pipeline is still good, it's super parallel, but in practice it is not faster, because GPUs only fake parallelism to a certain extent. I started coding with the idea of maximum parallelism, but I was wrong. GPUs are not perfectly parallel.
  4. NikiTo

    When a shader is too long?

    Ah, ok. I've thought about it. But with the previous pipeline I was moving huge textures around in device memory. I mean 1.5 GB of textures moved around about 5000 times per second. My idea now, my challenge now, is to do all this with one fetch and one write only. That's why I use so much of the LDS: I try not to use device memory at all, and so effectively save the device memory bus 1.5 GB x 5000 of traffic per second. What is more, I load the compute shader only once to do that and call Dispatch(x,y,z) only once. With the previous pixel-based pipeline I was also changing shaders all the time. My challenge is to say goodbye to all of this: one read -> one shader -> one write. That's why my shader is huge too.

    Another advantage I can feel now is the memory usage. My GPU has 2 GB of dedicated memory, but I can rarely create resources above 1.5 GB in total. So now I not only save work for the bus, I save space too. This is not the whole app, just part of it, but effectively, for the same work, I was moving mountains in device memory before.

    Something that could force me to do what you say, to slice the problem into multiple shaders, is if I exceed the maximum number of operations per shader. Wikipedia says a shader can execute up to 64K operations in total, so if my nested loops produce more than that in total, I will hit that wall, if I understand Wikipedia correctly. I wish the GPU manuals were as explicit as the Intel CPU manuals. GPU manuals are like: "This instruction sums stuff. Bye!"
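    A rough sketch of that "one read -> one shader -> one write" shape (the tile size, group size and resource names are invented for the example, not my real shader):

        // Single-dispatch pass: one fetch from device memory, all intermediate
        // work in LDS (groupshared), one final write back to device memory.
        Texture2D<float4>   gSrc : register(t0);   // hypothetical input texture
        RWTexture2D<float4> gDst : register(u0);   // hypothetical output texture

        groupshared float4 tile[16][16];           // LDS scratch for the whole group

        [numthreads(16, 16, 1)]
        void CSMain(uint3 gtid : SV_GroupThreadID, uint3 dtid : SV_DispatchThreadID)
        {
            // One read per thread from device memory into LDS.
            tile[gtid.y][gtid.x] = gSrc[dtid.xy];
            GroupMemoryBarrierWithGroupSync();

            // ...all the heavy nested work would happen here, touching only "tile"...
            float4 result = tile[gtid.y][gtid.x] * 2.0f;   // placeholder for the real math

            // One write per thread back to device memory.
            gDst[dtid.xy] = result;
        }

    The barrier is only there so every thread sees the whole tile in LDS before the heavy part starts.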
  5. NikiTo

    When a shader is too long?

    I guess you mean something like this:

        float4 A, M;
        float4 B, C, L, R, N;
        for (int i = 0; i < 4; i++)
        {
            A[i] = B[i] + C[i];
            M[i] = L[i] * (R[i] + N[i]);
        }

        // to this
        A = B + C;
        M = L * (R + N);

    But on a modern GPU this would still produce four times the instructions, because vector operations are broken down into single-element operations:

        A.r = B.r + C.r;  M.r = L.r * (R.r + N.r);
        A.g = B.g + C.g;  M.g = L.g * (R.g + N.g);
        A.b = B.b + C.b;  M.b = L.b * (R.b + N.b);
        A.a = B.a + C.a;  M.a = L.a * (R.a + N.a);

    And it is harder to slice when there are other loops and IFs inside the loop. It could be done, but sometimes it takes more operations to accomplish. That is the main reason I am not slicing the code inside the loop body: it is very complex. This is a simple example, but when the body of the loop is huge, the extra operations saved on computing the loop conditions don't pay off if the shader overflows into slow memory. BTW, in the last few days I have been unrolling it manually, and at the cost of a few more registers I was able to reuse a big part of the code. So my manual unroll is much better than just using [unroll], because I guess the compiler simply creates four copies of the body and replaces the "i"s with the counter value. I guess a compiler does little or nothing more than that.
  6. NikiTo

    When a shader is too long?

    Cannot make those work. I'm using DX from a console application, and it shows me all kinds of errors. The application runs and saves the correct result to disk, but the profiler fails to capture anything. I can see the messages the profiler injects into the console. (I am not using a swap chain; I render to resources and then copy those resources to .txt and .bmp files on the hard drive.) Anyway, I am used to working without debuggers. Sometimes inside the shaders I introduce dummy outputs so I can see what is happening. The only problem is that I cannot measure performance.
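    For what it's worth, the "dummy output" trick looks roughly like this; the debug buffer, its register slot and the assumed 1024-element row width are made up for the example:

        // Extra UAV used only for debugging: dump an intermediate value per thread,
        // then inspect it in the .txt/.bmp files the app already writes to disk.
        RWStructuredBuffer<float4> gDebug : register(u1);

        [numthreads(8, 8, 1)]
        void CSMain(uint3 dtid : SV_DispatchThreadID)
        {
            float intermediate = sin(dtid.x * 0.1f);   // stand-in for the value I want to inspect
            gDebug[dtid.y * 1024 + dtid.x] = float4(intermediate, dtid.x, dtid.y, 1.0f);
        }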
  7. NikiTo

    When a shader is too long?

    After unrolling, the total number of instructions I can count is nearly one thousand. I am not calling any functions, not even intrinsic functions, so I can better count instructions and register usage. In total, almost 1000 instructions. I hope this many instructions will not make the shader overflow into slow memory.
  8. NikiTo

    When a shader is too long?

    I am even afraid that my shader, which works well now, could run nicely one month and then, the next month, when the manufacturer updates the driver, run slowly. Even Visual Studio could update the compiler from one month to the next. That's why I always try to find "a rule of thumb" or stay close to the best practices.

        for (int i = 0; i < 4; i++)
        {
            //..lot of code here using vect[i]
        }

    I think the code was changed when I posted it as plain text. It should be "vect" indexed by "i". My bad.
  9. In the CPU they have an instruction cache, but it is not much. Apart from the time needed to load a new large shader, is there some issue with already loaded large shaders? I have something like:

        float4 vect = { 0.0f, 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < 4; i++)
        {
            //..lot of code here using "vect"
        }

    So I am considering manually unrolling this loop so I can use "vect.r .g .b .a" instead, because I don't want numerical indexing to end up as switch statements. But it will make the shader code longer. (I don't have to reload those shaders very often. With every shader I dispatch a pretty good number of threads, so I don't worry about reload times as much as execution times.)
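    A rough sketch of the unrolled form I have in mind; the math is only a placeholder for the real loop body:

        // Manually unrolled: every access is a fixed swizzle, so there is no
        // dynamic indexing into the vector and nothing for a switch to branch on.
        float4 ManualUnrollExample(float4 vect)
        {
            vect.r = vect.r * vect.r + 1.0f;   // placeholder for the "lot of code" at i == 0
            vect.g = vect.g * vect.g + 1.0f;   // i == 1
            vect.b = vect.b * vect.b + 1.0f;   // i == 2
            vect.a = vect.a * vect.a + 1.0f;   // i == 3
            return vect;
        }

    Whether vect[i] would really turn into a switch depends on the compiler and target, but with fixed swizzles the question disappears.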
  10. NikiTo

    WHO recognising 'gaming disorder'

    Not really. Reflexes and the autonomic nervous system work on their own. Or are you going to tell me that a child taking its first steps can explain inverse kinematics?
  11. NikiTo

    WHO recognising 'gaming disorder'

    You are right! My bad. The problem with games is that the brain enters learning mode. For hours and hours the player observes how Aloy's hair passes through her clothes. Have you never tilted your whole body trying to take a curve in Need For Speed? For a movie with bad CGI, I don't think this is so harmful, because there is no learning for the autonomic nervous system. When the game is 2D, the player does not lose the notion of reality so much; it is like learning to play the piano. But when the game pretends to fake reality and fails in any of the techniques it uses, that is when, in my opinion, it could be harmful. When you watch somebody play a 2D game on hard mode, you can see his fingers working pretty fast, but his torso reacting little, if at all, to the game. When you watch people play CS, you can see them leaning to the side to look around the corners of the walls. The player ducks behind a 3D wall made out of a single polygon, with a triangle rasterized to have straight sides, while in nature, in infinite-point perspective, straight lines are rarely observed.
  12. NikiTo

    WHO recognising 'gaming disorder'

    @Fulcrum.013 I didn't mean learning physics for an exam in school. Rather, their bodies learning how to use physics, developing reflexes that are more realistic for the real world. An extreme example would be making a real-life boxer fight a Tekken champion. VR is OK once they are past a certain age; still, it produces headaches in adults too. I would prefer kids to manipulate holograms in school instead of wearing VR helmets. Holograms would be healthier.
  13. NikiTo

    WHO recognising 'gaming disorder'

    Well, hunting Pokemon can kill people (driving a car while hunting Pokemon). I personally think games that try to fake reality are harmful for a growing brain. All those realistic-looking games are not real, and they fool a growing brain. Parallax, bump maps, ambient occlusion (and almost every technique in those games) are only almost real. I would not let my kid play 3D video games or VR for more than an hour a day. He has to learn real physics by kicking a real ball. Playing with real Lego builds a kid's neuronal connections between hands, space and everything else, while Minecraft builds a fake link between a 3D space projected onto a 2D screen surface and a joystick. Two players playing a video game are watching the wrong angle of the perspective for hours; the perspective was meant to be seen from the center. (Don't get me wrong, I love 3D games, and I am addicted to them, so I don't play at all. When I start playing I can play 20 hours without going to pee. That's why I don't play: because I know myself. I love 3D games, but I would not recommend them to my kids. It is like smoking parents who don't want their children to smoke.) Still, I am not sure that taxing the game producers is a fair thing to do. Maybe parents should be penalized somehow. (Many logical/puzzle 2D games actually make the brain think, so they are good for people.) I can foresee people suing the game companies because they lost their jobs to playing, or because while playing they forgot to take their medicine and almost died.
  14. NikiTo

    WHO recognising 'gaming disorder'

    This could lead to abuse of the gaming industry, like game producers having to pay some tax for addicting people. Like Spain taxing memory devices because of piracy, or tobacco being taxed because of cancer.
  15. NikiTo

    Is an IF always a bad thing?

    AVX-512 is a 16-dword-wide warp, and if such a CPU had 32 cores it would be (much?) more expensive than the expensive GPUs. Such a CPU could become faster because of assembler language. If GPUs somehow provided a genuine assembler language, they would be unbeatable. It could be one ASM for ATI and another for NVidia, just as today there is ASM for AMD CPUs and for Intel. It would not be a big problem to introduce it and maintain it for future products. For example, we would no longer have to create several variants of a shader in order to semi-blindly find the best code; we would be able to go directly for the best code. (They could also decide to keep that ASM for important game developers only.)