pcmaster

Member
  • Content count

    263
  • Joined

  • Last visited

Community Reputation

1031 Excellent

2 Followers

About pcmaster

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming


  1. pcmaster

    Size of enum class? (c++)

    As everybody said, there's no guarantee they'll be the same size if you don't use enum classes. One practice I've seen is to put a final member into the enum, like this: enum action : uint8_t { hold, attack, defend, MAX_ACTION } and then have hand-written compile-time assertions: static_assert(action::MAX_ACTION == 3 && sizeof(action) == sizeof(uint8_t), "Did we change a serialised enum?") It seems very redundant, and it is, but if you change serialised types, your code will stop compiling and you'll have to take time to think about how it broke and what to do about it. In this example it's total overkill, don't do it :) But for some more complicated structures, not (only) enums, which are serialised without any reflection, it might be useful. I also can't recommend enough getting used to very explicitly sized types like int8_t, uint8_t, int32_t, uint64_t, etc., for everything that goes to disk. Only use int/uint when you know you don't care about their size. A compilable version of the snippet is below.
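    Here's the same pattern as a self-contained snippet (C++11 or newer; the values are just the ones from this example, not from any real codebase):

```cpp
#include <cstdint>

// A final MAX_ACTION member marks the number of real values in the enum.
enum action : uint8_t { hold, attack, defend, MAX_ACTION };

// If anyone adds a value or changes the underlying type of this serialised
// enum, compilation stops right here and forces a look at the on-disk format.
static_assert(action::MAX_ACTION == 3 && sizeof(action) == sizeof(uint8_t),
              "Did we change a serialised enum?");
```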
  2. pcmaster

    CopyResource with BC7 texture

    Can you show the descriptors of the source and destination textures? CopyResource between same-sized resources with the same format shouldn't be a problem. "Decompressing" BC in a compute or pixel shader will work, of course, and won't even be slow, if that's what you're after. But writing into BC... not so easy, as @turanszkij says.
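    For illustration, a minimal sketch of what "same-sized, same-format" means for the copy, assuming D3D11 (the helper names and descriptor choices here are mine, not from the thread):

```cpp
#include <d3d11.h>

// Source and destination must match in width, height, mip count, array size
// and format (here DXGI_FORMAT_BC7_UNORM) for CopyResource to be legal.
ID3D11Texture2D* CreateBc7Texture(ID3D11Device* device, UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_BC7_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* texture = nullptr;
    device->CreateTexture2D(&desc, nullptr, &texture);
    return texture;
}

void CopyBc7(ID3D11DeviceContext* context, ID3D11Texture2D* dst, ID3D11Texture2D* src)
{
    // Whole-resource copy of the compressed blocks; no format conversion here.
    context->CopyResource(dst, src);
}
```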
  3. pcmaster

    Avoid branch instructions

    Let's talk about the expected case - many bottles and the empty one is around the middle. The first branch cost on HasLiquid() will be totally negligible. If AddLiquid() isn't simple and/or won't be inlined -> a branch in case 2, no matter if there's a ternary operator. I don't think omitting the break helps - looping over the rest of the array can't be worth it. Rather, you can hint to the compiler that bottle.Empty() is unlikely (see __builtin_expect()) in the first case, so it will expect to skip AddLiquid and you'll pay the price of the branch only once, when you encounter the first (unexpected) empty bottle and actually end. EDIT: Okay, oops... a CPU missing branch prediction. Well... Are there still such architectures? Anyway. The worst overhead I can imagine is discarding the instructions already in flight in the individual stages of the CPU, as happens on simple architectures like the MIPS taught at schools. A CPU lacking branch prediction won't have a very deep instruction processing pipeline. Am I wrong? Are we talking PS3 SPUs or something?
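    A minimal sketch of that hint, guessing at the shape of the loop from this thread (Bottle, Empty() and AddLiquid() are illustrative names, and __builtin_expect is GCC/Clang-specific):

```cpp
#include <cstddef>

struct Bottle {
    int liquid = 0;
    bool Empty() const { return liquid == 0; }
    void AddLiquid(int amount) { liquid += amount; }
};

void FillFirstEmpty(Bottle* bottles, std::size_t count, int amount)
{
    for (std::size_t i = 0; i < count; ++i) {
        // Hint that Empty() is unlikely: the expected path skips AddLiquid()
        // and keeps scanning, so the branch is only mispredicted once, when
        // the first empty bottle is actually found and the loop ends.
        if (__builtin_expect(bottles[i].Empty(), 0)) {
            bottles[i].AddLiquid(amount);
            break;
        }
    }
}
```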
  4. And while we're at it, I'll throw in an optimisation you can try out immediately, while doing it, for alpha-tested objects (grass, leaves, fences, ... not alpha-blended, though).

     Naive approach:
       • In the PrePass, do the discards based on e.g. alpha texture sampling
       • In the main geometry pass(es), do the discards the same way as in the PrePass

     Better approach:
       • In the PrePass, do the discards based on e.g. alpha texture sampling
       • In the main geometry pass(es), use EQUAL depth comparison (not LESS EQUAL/GREATER EQUAL) and don't even bother computing alpha again

     Why does this work? Depths of fragments which you want discarded will not match the depths in the depth buffer - there's already a hole from the PrePass - so they will not be shaded at all (early Z) and won't write to the colour targets. A sketch of the depth states follows below.
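    A minimal sketch of the depth-state side of this, assuming D3D11 (the comparison ops map directly to GL/Vulkan); the PrePass pixel shader does the alpha-test discard, the main-pass shader doesn't touch alpha at all:

```cpp
#include <d3d11.h>

void CreateAlphaTestDepthStates(ID3D11Device* device,
                                ID3D11DepthStencilState** prePassState,
                                ID3D11DepthStencilState** mainPassState)
{
    D3D11_DEPTH_STENCIL_DESC desc = {};
    desc.DepthEnable    = TRUE;
    desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    desc.DepthFunc      = D3D11_COMPARISON_LESS;        // PrePass: lay down depth, discards punch the holes
    device->CreateDepthStencilState(&desc, prePassState);

    desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // main pass: depth is already there
    desc.DepthFunc      = D3D11_COMPARISON_EQUAL;       // only fragments that survived the PrePass match
    device->CreateDepthStencilState(&desc, mainPassState);
}
```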
  5. I'm not sure the OP tried to start another programming language flame war. I insist that trying to make a small game in ANY language is good for everyone. Nothing prevents anyone from trying 2 or more languages (environments) and eventually deciding which one is good for their further little experiments.
  6. Learning to make SMALL games in ANY language IS the right way. Go ahead! Make a small 2D game in JS/HTML5, for example. You'll learn a lot. I do C++ on consoles... but I also made a tiny JS game back in the day just to try JS out: http://pcmaster.koinbahd.com/game0. I've also tried (and made similarly small and unfinished games in) Unity3D, C#, Java, Python, Delphi, OpenGL and many more, longer ago. There's never enough
  7. pcmaster

    Need help on how to start.

    Hi OP, while studying mechanical engineering, you'll surely learn things that are at least somehow useful for game development:
      • they'll force a lot of math upon you
      • physics, numerical methods, simulations, electrical engineering
      • basic programming
      • CAD / modelling
      • Matlab, Mathematica, Excel
      • economics, company finances, law
      • foreign languages

    For a start, do you have an idea whether you'd like to do technical stuff (programming) or artistic stuff (modelling, animation, ...)? Or all of it? The most obvious advice is... obvious :) Start making SMALL games. Go make Snake. Tetris. Asteroids. Space Invaders. Go. Make. One. Now. You'll figure EVERYTHING out while making a small game. Good luck! .P
  8. On GCN, LDS and GDS are totally usable from pixel shaders. It just isn't exposed in DX11 or DX12 SM5 (for obvious reasons), so it isn't going to help you on PC. Also, from the AMD GCN block diagrams - 1 rasteriser reads 1 triangle per cycle and outputs 16 pixels per cycle. After shading, each Render Back-End (there are usually 2) can do multiple blends, depth and stencil samples per cycle. Looks pretty dedicated to me.
  9. Hi! Just out of curiosity, since I'm a bit out of context, how do you "add" instruction-level parallelism? CUs always execute a whole wavefront worth of instructions (64 threads on AMD GCN doing the same instruction) at once, in lock-step. But I'm sure you know this. If you have such a low occupancy, you'll suffer on memory operations, since the GPU won't be able to switch to a different thread group to execute some other instructions in order to hide the memory latency. If bandwidth is blocking you 90% of the time, optimising the other 10% (ALU) isn't going to help you much. Profile, profile, profile?
  10. Render Targets (and Depth-Stencil Targets) won't be UAVs internally, at least not on AMD GCN. Where did you read that? AMD pixel shaders have the exp instruction which exports colour; those writes go through a different cache hierarchy, namely the colour-block cache, and bypass the L1 and L2 caches completely. The dedicated colour-block hardware ("output-merger"-ish) serialises concurrent writes (there is NO guarantee that 2 polygons won't write to the same pixel, nor of the order in which they'll do so), handles blending, etc. UAV writes, on the other hand, use the image_store class of instructions, which just write to an address, usually via L2, bypassing L1. For a reference of what the HW is capable of, you can have a very quick look at the AMD Southern Islands Instruction Set PDF. Does anyone know how different this is on NVIDIA or Intel? Why do you think that writing to a UAV will not be "optimised"?
  11. pcmaster

    Do you still play video games?

    I've been in the industry for 8 years and I've almost stopped playing. I'd say I play 1-8 hours a month. I generally try to avoid staring at any kind of screen outside of work and spend time out with friends and family. I do play many indie under-construction games-to-be, though, but that doesn't add up to much time. EDIT: Oh, how sad it is when I read it back :( I'd also like to state that I like new games a lot, as well as old games.
  12. Hey GC! Many, many more. Bigger games use even more. For example, pulled out of my finger:
        • Z-PrePass
        • One per each dynamic shadow-casting light
        • Several for the directional lights (Sun, Moon, ...)
        • G-Buffer pass
        • Screen-space ambient occlusion
        • Transparent surfaces
        • Countless complicated post-process passes:
            • Light + shadow application
            • Water
            • Anti-aliasing
            • Depth of field
            • Motion blur
            • Fog, volumetrics...
            • Global illumination
            • Reflections...
            • Glows
            • Vignette
            • Edge-detection / highlight
        • Rear-view mirror
        • GUI

      Easily 10-20. Each might contain several sub-passes. Depends a lot (deferred? forward? ...). So I'd say you're safe if you're within your performance budget.
  13. It's exactly as @jbadams says. For example, our studio has offices in Europe and the United States. The offices are a combination of open space, office rooms and even cubicles. In all locations, you "punch in" with a keycard when you come in in the morning and "punch out" when you leave in the evening. During the day, you don't punch in/out (for example for lunch). Most people usually stay for 8 hours anytime between 07:00 and 19:00. On some days you come later, on some earlier. As long as nobody abuses the hours and everyone is available for all the scheduled meetings (if any), the company is fine and trusts us a lot. We, too, trust our company not to pull any bullshit. At the end of a full month, you should have (number of days times eight) hours on your "punch card"; the exact distribution is up to everyone. Most people have a bit more. There's almost no home office in our company. There is occasional overtime, well communicated, not enforced, and paid if it's a lot. On average, developers (coders, artists, testers, etc.) won't work more than 1-4 Saturdays a year, usually 0 Saturdays. Crunches depend on production's ability to estimate how long the operations will take. Their estimate depends on developers' ability to estimate how long it will take them. Ideally, there's enough buffer and there are no crunches.
  14. pcmaster

    Bits & Pieces Music

    I like your tunes :) What about teaming up with someone doing a game jam like Ludum Dare 42 (ldjam.com) in a few weeks and seeing your music in a game?
  15. pcmaster

    Rendering and compositing G-buffers.

    COLOR2 isn't a depth buffer. It's just a colour buffer on slot 2. What does "output a stencil value on a float2" mean? What samplers are you talking about?

    On the OUTPUT of a pixel shader, there's at most one depth-stencil target and up to 8 colour render targets. Stencil and depth are implicit, i.e. they depend on the fragment position and external stencil settings; you cannot set them manually in your pixel shader, there's specialised HW handling them. The colours are explicit and you must calculate them in the pixel shader.

    On the INPUT to any shader, there are up to 128 textures (I'm simplifying). Some of those textures can be the same memory as your render targets or depth-stencil targets. A sampler is a recipe for sampling the neighbourhood (neighbouring pixels and neighbouring mipmaps) at a UV coordinate on your texture to fetch you a colour. Any texture can be sampled with any sampler. Example samplers are: point, bilinear, anisotropic, etc. The definition of a sampler in GLSL (texture + recipe) and in HLSL (recipe only) is fundamentally different, watch out. A small sampler-state sketch follows below.
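    To make the "recipe only" part concrete, a minimal sketch assuming D3D11 (the function and parameter names are mine): the same texture can be sampled with any of these states, which only describe filtering and addressing.

```cpp
#include <d3d11.h>

void CreateExampleSamplers(ID3D11Device* device,
                           ID3D11SamplerState** pointSampler,
                           ID3D11SamplerState** bilinearSampler,
                           ID3D11SamplerState** anisoSampler)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxLOD   = D3D11_FLOAT32_MAX;

    desc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;         // nearest texel, nearest mip
    device->CreateSamplerState(&desc, pointSampler);

    desc.Filter = D3D11_FILTER_MIN_MAG_LINEAR_MIP_POINT;  // bilinear within one mip level
    device->CreateSamplerState(&desc, bilinearSampler);

    desc.Filter        = D3D11_FILTER_ANISOTROPIC;
    desc.MaxAnisotropy = 8;
    device->CreateSamplerState(&desc, anisoSampler);
}
```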