

Krohm

Member Since 27 Aug 2002

#5038471 Writing my own programming language

Posted by Krohm on 02 March 2013 - 12:04 PM

  1. Image as a native data type? Wow. I made it a native class in my system. I'd like to exchange some words on that.
  2. operator+ and operator-, "used to assign pointer": what is that? From the "pointer" description I can't quite tell what it does. Some kind of autocast? It also seems to me your "pointers" are actually references.
  3. I see you have while, but I don't see for. Oddly, I only have for, because I think it's more convenient. What made you take this decision?
  4. I see your functions are like in C. To be honest, I think there's at least one very good reason to use the new func()->rettype syntax, which relates to type resolution. I'd like to hear your opinion!
  5. Some functions in your standard library are fairly advanced (such as those involving drawing). By contrast, you have a "basic" rand(). Do you plan to improve support for random generation? In general, I'd like to hear something on your perspective for the future.
  6. Are arrays a type or not? Can you inquire about their contents, length, etc.?
  7. I also thought using group instead of struct would make sense, but I couldn't quite make it work (I actually only have class). What made you take this decision? I mean, struct is conventional wisdom (a mere naming issue). Do you have support for functions taking a this pointer? (It appears not, as far as I can tell.)
  8. Inheritance. What's the result of:
    group Foo {
        int a;
    }
    
    group Oops : Foo {
        int a;
    }
  9. You have threads? Man, that's outright scary! I mean, there are so many implications I don't even know where to start. Are you sure?
  10. Are image objects "real"? I mean, if I assign an image object to another, will I copy all the pixels? That's going to be... well, like C++ I suppose.
  11. Ok, there's startThread... and no other thread control?
  12. I'm not even sure what's the deal with weak pointers.
  13. You know what? I cannot actually do "Hello world" with per-frame changing color in my language ;)
  14. Are you really sure you want to pass objects by value? It's a common source of grief in C++.
  15. I'm afraid I don't understand the deal with passing by reference and group (on the "other" page).
  16. Don't give people the layout of your image structure. Give them functions to operate on it as opaque data!
  17. "If you use '=' copy one 'file' type to another, it will copy the address (like in the case of 'image' type)." What does that mean?
  18. The way parameters are cast... it scares me quite a bit...
  19. Ok, weakPtr is a cast. How does it work in the case of
    group Parent {
        int x, y;
    }
    
    group Child : Parent {
        int z;
    }
    
    void test(pointer Child p) {
        //body
    }
    
    Parent parent;
    test(weakPtr parent);
    


    I mean, it's a cast that depends on the function parameter type... to be resolved against overloading... ouch. And by the way, why should one be able to downcast a Parent to a Child like this?
    I guess I missed the part of the documentation where you stated groups are always passed by reference. Perhaps that's what you meant a few points above?

Having my own language as well, I am very interested in hearing different opinions!




#5037511 Projectile Class Format

Posted by Krohm on 28 February 2013 - 03:12 AM

I want to set up the system that delivers different types of effects to an enemy from a bullet fired by the player. For instance, it could have a freeze property that slows the enemy down, or a shock property that does added damage to shields, and so on.

...
My concern/question is: is that too much information for each projectile to contain, or is this a somewhat typical setup? If I have dozens or maybe even hundreds of projectiles, each with that much information associated, could that become a dangerous amount of processing to run for every frame?

Is it too much? I don't think so. Is it typical? Sort of; RPGs, for example, often have elemental damage properties. For your initial design, implement whatever gets you there. Iterate if necessary.

Keep in mind you'll need far more design than you probably expect. For example, how do you freeze a generic player-controlled entity? A generic enemy? A generic object in the world?

Stacking effects will be a problem. In certain games, ice cancels fire. In others (such as Torchlight), the various elemental damage types are applied independently.

 

As said, those projectile types are going to be "static" in behaviour (although they might have random values on a per-instance basis). I strongly suggest using a reference type or a pointer. As for the values themselves, a preallocated std::vector will probably be fine (personally, I think all bullets should be born equal).

I don't think the system "by itself" should care about who shot the projectile. But it would surely be handy for another system to have this property, to work out who killed whom. As a side note, you'll need to generalize this into Damage entities if you care about environmental damage.

Bit fields at this stage are premature optimization; they don't look very appealing in a high-level system like this.

 

As a side note, consider lambda functions and std::function. A sketch pulling all of this together follows.
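
Something like this, as a minimal C++ sketch (all type and member names are mine, purely for illustration): instances hold a pointer to one shared, immutable type description, live in a preallocated std::vector, and per-type effects ride along as std::function objects.

    #include <functional>
    #include <vector>

    struct Enemy { float speed, shield, health; };

    // Shared, immutable description of a projectile kind ("all bullets born equal").
    struct ProjectileType {
        float baseDamage;
        float slowFactor;                   // freeze effect: 0 = none, 1 = full stop
        std::function<void(Enemy&)> onHit;  // optional extra per-type effect
    };

    // Each live projectile only references its type; per-instance data stays tiny.
    struct Projectile {
        const ProjectileType* type;         // reference semantics, nothing copied
        float x, y, vx, vy;
    };

    int main() {
        // "shock" does bonus damage to shields via a lambda.
        ProjectileType shock{ 5.0f, 0.0f, [](Enemy& e) { e.shield -= 10.0f; } };

        std::vector<Projectile> bullets;
        bullets.reserve(1024);              // preallocated: hundreds per frame are fine
        bullets.push_back({ &shock, 0, 0, 1, 0 });

        Enemy e{ 1.0f, 50.0f, 100.0f };
        for (const Projectile& p : bullets) {
            e.health -= p.type->baseDamage;
            e.speed  *= 1.0f - p.type->slowFactor;
            if (p.type->onHit) p.type->onHit(e);
        }
    }

The point of the pointer is that a thousand bullets share one ProjectileType; any per-instance randomness would go in the Projectile fields instead.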




#5035714 Jelly Kid (Updated 2013-03-04)

Posted by Krohm on 23 February 2013 - 03:20 AM

Looks very nice. Following the thread.

As for the sword, why rely only on an icon when the sword could be... red? Fizzling with fissures? On fire?

Perhaps it's just me, but fire swords are usually flame-like in shape. This one is even blue; I would never have guessed it's fire.




#5035001 alpha maps?

Posted by Krohm on 21 February 2013 - 08:37 AM

Consider distance-field alpha maps. They give extreme resolution for the same footprint with no extra hardware requirements. You'll have to compute them yourself, though; a brute-force sketch follows.
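
Building one offline is straightforward; here is a brute-force C++ sketch, assuming a 1-bit alpha mask as input (function and parameter names are mine; real tools use faster sweeping algorithms, but this shows the idea):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Brute-force distance field from a 1-bit alpha mask. O(n^2) per texel,
    // so this is offline-tool material. Output is remapped to 0..255 so the
    // shader can alpha-test or smoothstep around 128 and get smooth edges.
    std::vector<uint8_t> makeDistanceField(const std::vector<bool>& mask,
                                           int w, int h, float spread = 8.0f) {
        std::vector<uint8_t> out(w * h);
        for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const bool inside = mask[y * w + x];
            float best = spread;                       // clamp the search radius
            for (int v = 0; v < h; ++v)
            for (int u = 0; u < w; ++u)
                if (mask[v * w + u] != inside) {       // nearest opposite texel
                    float d = std::sqrt(float((u - x) * (u - x) + (v - y) * (v - y)));
                    if (d < best) best = d;
                }
            float sd   = inside ? best : -best;        // signed: + inside, - outside
            float norm = sd / spread * 0.5f + 0.5f;    // remap to 0..1
            out[y * w + x] = uint8_t(norm * 255.0f + 0.5f);
        }
        return out;
    }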




#5031723 My Engine(Toolset)

Posted by Krohm on 13 February 2013 - 01:50 AM

You give them out for free. Without critical mass first, very few people will be interested.




#5031147 Object fading

Posted by Krohm on 11 February 2013 - 01:03 PM

Then, seeing we have the same understanding, I believe we simply have different standards of quality, because I personally believe C2A does not give adequate quality when filling big areas.

I mean, here are a few shots from the Humus C2A demo (with a modified texture).

 

Close-up of the screen-door effect mentioned.

[image: closup.png]

But let's leave that alone, as the OP is not interested.

From distance:

[image: far.png]

Personally I believe the pattern on the middle window is quite apparent.

 

Nvidia thinks it's not sufficient: they turbocharged the concept with Stochastic Transparency.

AMD also thinks it's not sufficient: OIT using per-pixel linked lists.

 

Now, I understand this is worth a try. But saying that C2A could replace blending... I don't think so. It looks fairly different from blending, in my opinion. Therefore, I am not suggesting anyone try it with the intention of replacing blending, not even for ease of implementation.




#5031071 Object fading

Posted by Krohm on 11 February 2013 - 09:46 AM

Maybe I'm just too close-minded, but I still don't understand how alpha-to-coverage fits into the picture.

 

My apologies for not quoting the full post; I would never have guessed that could offend you in any way. I suspect, however, that you're cutting a lot of corners to claim I wrote the same thing as you.




#5030965 Object fading

Posted by Krohm on 11 February 2013 - 02:22 AM

Alpha to coverage is a simple way to get this effect. You'll need at least 4x MSAA for it to look any good, and it still won't be perfect, but it's better than popping.

Alternatively you can use alpha blending, but then you have to worry about sorting and the extra GPU cost.

I don't understand what that even means. Alpha-to-coverage does a very different thing from alpha blending. For a terrain, most of the time alpha = 1, so how does that fix the OP's problem?

 

In a shader-driven world, the only "easy" solution is to provide a way for the shader to figure out the alpha to use. Possibilities include:

  • a uniform for the VS, set on a per-tile basis (not recommended)
  • the VS outputting a per-vertex depth value, replicated in a dedicated interpolator (easy to do), with the PS always taking alpha into consideration by lerping against a min and max range (sketched after this list). Problem: it mandates that blending always be enabled.

In both cases at least the fragment shader will need to be properly authored.
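
For the second option, the per-fragment math is nothing fancy; here it is as C++ for clarity (function and parameter names are mine; in the actual pixel shader it collapses to a subtract and a saturate):

    #include <algorithm>

    // dist comes from the dedicated interpolator the VS wrote out.
    // alpha = 1 up to minDist, fading to 0 at maxDist; blending must be on.
    float fadeAlpha(float dist, float minDist, float maxDist) {
        float t = (dist - minDist) / (maxDist - minDist);
        return 1.0f - std::clamp(t, 0.0f, 1.0f);
    }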




#5028060 Best way to do terrain?

Posted by Krohm on 02 February 2013 - 04:03 AM

I'd explore the possibility of using generic meshes by going through the standard level geometry system, if you have one. This appears to me to be the only solution flexible enough to do what you want to accomplish.

 

Keep in mind we're in 2013. If you're thinking about "wasted vertices", you're overthinking it. Now, we could go to great lengths discussing how rasterizing small triangles causes issues and all sorts of things, but the truth is that vertex transform is rarely a problem in itself.

It is acceptable to use a render mesh with higher detail than the physics mesh. My convex hulls, for example, have roughly 1/4 the geometric complexity.




#5028058 Trying to understand Lighting general.

Posted by Krohm on 02 February 2013 - 03:52 AM

As I understand it, I need 3 different shaders (the same shader compiled with different defines, or one that is branched). In a more standard approach (I guess it's called Forward Rendering), do I need to render each object 3 times with a different shader each time? That seems a bit inefficient to my untrained eyes.

Yes, it is. The main problem with forward techniques is exactly this coupling of lighting complexity to shaders (decoupling them is the main advantage of deferred).
Nobody says you need 3 shaders and three draw calls. If you know you have 1 point, 1 spot and 1 area light (I suppose that's your diffuse), then just write a shader that evaluates those three; a sketch follows.
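
Shown as C++ standing in for the shader body (the vector helpers and all names are mine, and the "area" light is approximated as a plain ambient term), the whole thing is just three evaluations in a row:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3  operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3  operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static Vec3  operator*(Vec3 a, Vec3 b) { return { a.x * b.x, a.y * b.y, a.z * b.z }; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  normalize(Vec3 v)   { return v * (1.0f / std::sqrt(dot(v, v))); }

    struct PointLight { Vec3 pos, color; };
    struct SpotLight  { Vec3 pos, dir, color; float cosCutoff; };

    // One shader, one draw call: all three known lights evaluated in a row.
    Vec3 shade(Vec3 p, Vec3 n, Vec3 albedo,
               const PointLight& pl, const SpotLight& sl, Vec3 ambient) {
        Vec3 c = albedo * ambient;                      // the "area"/diffuse term

        Vec3 lp = normalize(pl.pos - p);                // point light, plain Lambert
        c = c + albedo * pl.color * std::max(dot(n, lp), 0.0f);

        Vec3 ls = normalize(sl.pos - p);                // spot light, Lambert in cone
        if (dot(ls * -1.0f, normalize(sl.dir)) > sl.cosCutoff)
            c = c + albedo * sl.color * std::max(dot(n, ls), 0.0f);

        return c;
    }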

Accumulating more lighting through multipass is not automatic; it's a blend operation that has to be turned on. You might have heard blending is slow. Basic ("dumb" would be a better term) deferred blends even more, so I guess we can all afford it. Or so they say.
What you report as the solution is essentially a multi-pass technique doing the same thing via render-to-texture. It doesn't radically change what's going on conceptually, although the performance will hopefully be different.


#5027149 Any tips on structuring 3d models for great code?

Posted by Krohm on 30 January 2013 - 05:20 AM

When should I use multiple models?  This seems useful for things like weapons, but what about vehicle wheels or destructible pieces - how far should you take this concept?

I would take this as far as required to reach my target.

Does your game require this feature? If not, leave it out. Wrapping your head around things you might need in the future, in some circumstances, for some use case, is not going to take you anywhere.

Your example is ill-chosen.

Weapons come in their own model and get attached to the mesh.

Same applies to destructible pieces (although the models are likely bundled in the same resource file).

So I'd personally provide named attach points / joints / whatever you call them, and allow the model to be assembled from multiple meshes; a sketch follows.
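
A minimal sketch of the idea (all names are mine, and the transform is reduced to a placeholder): a model carries a set of named sockets, and attaching is just pairing another model with one of those transforms.

    #include <map>
    #include <string>
    #include <vector>

    struct Mat4 { float m[16]; };               // placeholder transform type
    struct Mesh { /* vertex/index buffers */ };

    struct Model {
        std::vector<Mesh> meshes;
        std::map<std::string, Mat4> attachPoints;   // authored: "right_hand", ...

        struct Attachment { const Model* model; Mat4 pointTransform; };
        std::map<std::string, Attachment> attached;

        // Hang another model off a named point; it stays its own model.
        void attach(const std::string& point, const Model& other) {
            attached[point] = Attachment{ &other, attachPoints.at(point) };
        }
    };

    int main() {
        Model mech, weapon;
        mech.attachPoints["right_hand"] = Mat4{};   // comes from the DCC tool
        mech.attach("right_hand", weapon);          // weapon bundled or separate
    }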

 

When should I use separate meshes within a single model? Is there a 1:1 relationship between meshes and materials that should be honored? It seems like meshes get sent to the shaders as a unit, although you could possibly break them down into primitives and send those pieces to different shaders...

Define what a mesh is. Everything that goes through draw calls is a mesh. Every draw call is a mesh. What is a model? For years, "models" had a single material and often got rendered in a single drawcall. Then, models started to have more drawcalls, each with a different material.

Basically, nobody breaks a draw call down to the primitive level to issue the pieces to "different shaders". Draw calls are the basic unit of operation for generic usage.

 

Is it acceptable to animate a rigid model (like a mech) using a single mesh with disconnected pieces, assigning all the vertices in each piece a 100% weight to a corresponding bone?

It might work in some situations. I wouldn't be very proud of it and wouldn't consider it good practice (it's against the whole idea), but if it saves the day, I'm all for it!

 

Should you ever create animation channels for meshes (rather than a bone)?

They are sometimes still used.

 

Are there standards for how many texture files should be used with a single mesh or a model?

Some engines (especially the hobby engines you read about on these forums) have a limit. In general you must support a "practically unlimited" number of loaded resources. For my mesh format I have a hard limit of about 2^10 resources, if memory serves.




#5026306 Home-made BSP map editor

Posted by Krohm on 28 January 2013 - 02:40 AM

Doing your own editor (and compiler!) because compilation takes over 4 minutes? You must be joking.
The editor I could understand, but... the compiler? Are you sure? Let me quickly summarize BSP history.
If you want to be able to modify live data, do not use BSP. Use AABB trees or anything else incremental.
But even this would take some effort. Hint brushes and areaportals can speed up computation quite a lot (I think 4x easily), so if you are not using them already, those are your first step.


#5025366 Robust Shader Systems for a Game Engine (Advice&Discussion)

Posted by Krohm on 25 January 2013 - 01:53 AM

how to implement robust shader systems in my engine

I suggest iterative design. I spent quite a few months actually designing my shader system on paper. Wasted effort. You cannot think up a solution for a problem you don't have, much less understand.

 

What do you need for your actual project? Make this work. Nothing else. Then iterate. Hopefully by that time you'll have a better understanding of your needs and perhaps better machinery supporting you.

 

Anyway, the key point is strings (if you want to mess with uniform values) or opaque blobs to load into device registers (D3D9 slang) or uniform buffers (D3D10/GL slang). Those blobs come from the shader itself, the material, or the specific object; a sketch follows. I got quite some mileage out of this.
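
A sketch of the blob idea, assuming a D3D9-style constant-register model (the structure and all names are mine): nobody along the way interprets the data, drawing just copies the three blobs into place, later ones overwriting earlier ones.

    #include <cstring>
    #include <vector>

    struct Blob {
        int firstRegister;              // D3D9 slang: starting constant register
        std::vector<float> data;        // raw values; opaque to this code
    };

    struct DrawItem {
        const Blob* shaderDefaults;     // from the shader itself
        const Blob* material;           // from the material
        const Blob* perObject;          // from the specific object (world matrix, ...)
    };

    // Stand-in for SetVertexShaderConstantF / a uniform-buffer update.
    void uploadConstants(float* deviceRegisters, const Blob& b) {
        std::memcpy(deviceRegisters + b.firstRegister * 4,
                    b.data.data(), b.data.size() * sizeof(float));
    }

    void draw(float* deviceRegisters, const DrawItem& item) {
        // Later blobs overwrite earlier ones, so object data wins over material.
        if (item.shaderDefaults) uploadConstants(deviceRegisters, *item.shaderDefaults);
        if (item.material)       uploadConstants(deviceRegisters, *item.material);
        if (item.perObject)      uploadConstants(deviceRegisters, *item.perObject);
        // ... issue the actual draw call here ...
    }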




#5024634 Is Clustered Forward Shading worth implementing?

Posted by Krohm on 23 January 2013 - 02:23 AM

There is one thing I don't understand.
There's this notion still going around that plain forward can only do 8-10 lights per pass. How? In the past I've had quite some success encoding light data in textures and looping over them on entry-level SM3 hardware; a sketch of what I mean follows below. Perhaps I'm not seeing the whole picture, but in SM4, with the much higher resource limits and the unlimited dynamic instruction count... shouldn't we easily get into the thousands? Of course we'll need a z-only pass first.
So I guess there are additional practical reasons to stay in the 8-10 range.
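
For reference, this is the kind of texture encoding I mean, as a C++ packing sketch (the layout and names are mine): two RGBA32F texels per light, so the shader just loops over the light count and fetches as it goes, bounded by texture size rather than by uniform registers.

    #include <vector>

    // One light = two texels: (pos.xyz, radius) then (color.rgb, pad).
    struct Light { float px, py, pz, radius, r, g, b; };

    std::vector<float> packLights(const std::vector<Light>& lights) {
        std::vector<float> texels;
        texels.reserve(lights.size() * 8);
        for (const Light& l : lights) {
            texels.insert(texels.end(), { l.px, l.py, l.pz, l.radius });
            texels.insert(texels.end(), { l.r,  l.g,  l.b,  0.0f });
        }
        return texels;   // upload as a (2 x lightCount) RGBA32F texture
    }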

 

At the top of page 2, I read about extra pressure and lower execution efficiency. I understand.

But, as much as I love the lighting modularity that comes with deferred, as a DDR3 card owner I still don't understand how the improved processing makes up for the required bandwidth increase. The trend on bandwidth is set; it looks to me like we'll want to spend compute to save bandwidth in the future.




#5022091 Source Control - Perforce

Posted by Krohm on 16 January 2013 - 02:21 AM

After switching to Mercurial, I cannot see why I should go back to P4. I was lured into trying it because it was rumored to be "easier"; I found out the hard way that this was not the case.

I strongly suggest Mercurial+TortoiseHG.





