

phil_t

Member Since 06 Oct 2006
Online Last Active Today, 12:42 PM

#5214377 What physical phenomenon does ambient occlusion approximate?

Posted by phil_t on 03 March 2015 - 11:54 PM


I'm simply wondering how to keep the contribution physically correct and variable depending on the environment.

 

I'm not sure it's realistic to make things like the sky brightness physically correct. Clouds might give off more light than a clear blue sky, but it's going to depend on so many factors. Probably more practical to just have some artistic control that is modulated by the overall sunlight.

 

For the "ground" part of your sky dome, the amount of light given off can be calculated assuming you know the approximate albedo of the ground and the amount of light coming from the sun or top half of the sky dome.

 

I did some experiments to try to improve my ambient lighting which may be of interest:

https://mtnphil.wordpress.com/2014/05/03/global-illumination-improving-my-ambient-light-shader/




#5214369 Game Perfomance

Posted by phil_t on 03 March 2015 - 11:20 PM

Some links for you:

To gain a decent understanding of what might impact graphics performance, read and understand this thoroughly:


"too many textures would slow the perfomance, but not as much as too many triangles in the scene" (something like that)
 
No one will be able to make any general claims like that. It will depend on your situation. Sometimes vertex processing will be your bottleneck, sometimes (more commonly? though I hesitate to say anything like that) texture bandwidth will be your bottleneck. Sometimes something else will be.
 
And of course, your bottleneck might be on the CPU, not be the GPU.
 
So yeah, you can't really answer this question generally:
 


what are all those "things" that slow the perfomance?
 
Like Nypyren said, "everything". You can only answer "what are all those things that are taking the longest in this particular game, in this particular frame" (using a profiler).



#5213795 Dynamic Memory and throwing Exceptions

Posted by phil_t on 01 March 2015 - 08:40 PM


whereas smart_ptr specifies an unknown number of owners.

 

What's smart_ptr? Do you mean shared_ptr?

 

 

To the O.P.: As others have said, use vector. Possibly unique_ptr<int[]> is another option, if the semantics of unique_ptr fit your usage scenario.

 

Also, this is essential reading if you are allocating memory or acquiring any unmanaged resources (file handles, critical sections, etc...) in an environment where exceptions can be thrown:

http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization




#5213706 CRPG Pre-rendered Bacgrounds - Draw depth to Z-Buffer

Posted by phil_t on 01 March 2015 - 01:43 PM

That sounds like a reasonable approach. I'm not familiar with writing shaders in Unity, but a search for "unity shader output depth" returns some threads about it, with problems and solutions.

 

Keep in mind you may incur some performance penalty with this, since it will likely disable hierarchical depth testing for anything drawn after.




#5213562 I think I don't understand dynamic memory

Posted by phil_t on 28 February 2015 - 04:38 PM


yes, but I don't understand exactly what's Dynamic about the _student = new Student("whatever arguments they used"); part

 

You're using runtime logic to allocate space for a Student. I.e., at runtime you control the allocation and deallocation of Student objects and the memory associated with them. You can create and delete as many or as few as you want.

 

If you need control of the number of them you create, or when they are created and destroyed, then you need to use "dynamic memory".

 

Another time you need dynamic memory is when you want an object's lifetime to extend beyond the function it was created in. A function can return a pointer to a dynamically allocated object (which is very different from just returning a copy of the object).




#5213516 I think I don't understand dynamic memory

Posted by phil_t on 28 February 2015 - 12:05 PM


I dind't understand Rip-Off's explaination completely. 

 

 

 


the explicit memory management is done by the implicitly called destructors. So the std::make_shared<> will dynamically allocate the object, and the destructor deallocates it.

 

When any object goes out of scope, its destructor is called. When the main function is exited, sp goes out of scope, so its destructor is called. That shared_ptr destructor ends up deallocating the memory that was allocated when it was constructed. Just like your IntArray class allocates in its constructor and deallocates in its destructor (I'm assuming so; you didn't post your destructor code).

 

So if you had the code:

int main() {
    IntArray blah(10);
    // etc... do some more stuff
}

When the main function exits, blah goes out of scope and its destructor will be called, thus freeing the p it allocated in its constructor.




#5212851 MonoGame and bugs/incomplete features?

Posted by phil_t on 24 February 2015 - 11:27 PM

You should ask this question at the MonoGame forums, or file a bug against them with a simple repro project.

 

I've definitely encountered a few issues with graphics device setup in MonoGame, but was able to work around them (and filed bugs so they got fixed). It's possible the current version of MonoGame fixes whatever issue you're running into (assuming you're running the latest official release). You can also build MonoGame yourself... I had to do this for an audio issue whose fix hadn't been pushed into the official source code yet.

 

fyi, I just tested IsFullScreen with a MonoGame (v3.0) Windows OpenGL project, and it worked fine. I also don't see any issues with GraphicsAdapter.CurrentDisplayMode.




#5212353 How to disable depth write?

Posted by phil_t on 22 February 2015 - 06:11 PM


D3D11_DEPTH_STENCIL_DESC depth_write_enabled_desc;
depth_write_enabled_desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
hr = d3d11Device->CreateDepthStencilState(&depth_write_enabled_desc, &pDepthenabledStencilState);

 

Is this your actual code verbatim? In that snippet you're only setting one value of the D3D11_DEPTH_STENCIL_DESC structure, so the rest will be stack garbage. What's the return value of the call to CreateDepthStencilState?
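For reference, here's a sketch of what a fully-initialized desc might look like if the goal is to disable depth writes while keeping depth tests (the specific field values are assumptions on my part; `d3d11Device` follows your naming):

```cpp
// Zero-initialize the whole struct first so no field is left as stack garbage,
// then set only the fields you care about.
D3D11_DEPTH_STENCIL_DESC desc = {};
desc.DepthEnable    = TRUE;                         // still test against the depth buffer
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // ...but don't write to it
desc.DepthFunc      = D3D11_COMPARISON_LESS;

ID3D11DepthStencilState* pDepthDisabledWriteState = nullptr;
HRESULT hr = d3d11Device->CreateDepthStencilState(&desc, &pDepthDisabledWriteState);
// Always check hr; with a partially-initialized desc this call can fail.
```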

 

If you use the directx debug runtime, is there any interesting spew?




#5211935 Easiest way to pass a 2D array?

Posted by phil_t on 20 February 2015 - 11:23 AM

I would just pass the 1-d vector, along with a width so you can do the math and index it as a 1-d array.

 

Or, wrap your map information in a small class that has a GetAt(int row, int column) method and just pass that. Or you can overload the [] operator to index it like a 2-d array. I just whipped this up (might still have bugs):

class Grid
{
    class Row
    {
        friend Grid;
        Row(int width, int column, std::vector<int> &internalGrid) : internalGrid(internalGrid), column(column), width(width) {}
        int column;
        int width;
        std::vector<int> &internalGrid;

    public:
        int &operator[](int index)
        {
            return internalGrid[column + width * index];
        }
    };

public:
    Grid(int width, int height) : internalGrid(width * height, 0), width(width) {}

    Row operator[](int index)
    {
        return Row(width, index, internalGrid);
    }

private:
    std::vector<int> internalGrid;
    int width;
};

Then:

Grid grid(100, 200);

grid[4][6] = 4; // etc...

findPath(start, end, grid);

Looking at the disassembly for the optimized code (VS 2013), using grid[4][6] is just as fast as directly indexing a raw array, so there shouldn't be any performance impact.




#5211536 Do everything through ECS? Or implement other systems?

Posted by phil_t on 18 February 2015 - 02:49 PM

I've shipped one game with an ECS, and built part of a much larger project with an ECS.

 

In the first case, I'd say roughly half the gameplay mechanics were part of systems (which operate over components or sets of components), and half were part of one-off scripts attached to an entity. Attaching scripts to an entity was kind of an "escape valve" for stuff that doesn't fit nicely within the ECS... the scripts are kind of a black box that is just told to update.

 

As for the buffs and modifiers you talk about, I agree with SotL that this isn't really relevant to the ECS. I don't think it gets any easier or harder or much different whether or not you're using an ECS.

 

To start with, I would just have a "Stats" component that includes: base stats, modifiers, and possibly "final stats". e.g.

struct Stats
{
    int BaseSpeed;
    int BaseAttack;
    int SpeedModifier;
    int AttackModifier;
    int Speed;
    int Attack;
};

A stats system would iterate over these things and ensure that Speed and Attack are kept up-to-date with the result of whatever formula calculates them from base and mod. Other code (such as the combat logic) would get the Stats component for an entity and just look at Speed and Attack. That way the combat code knows nothing about modifiers. And the stat calculation code has nothing to do with combat.

 

Another alternative is maybe the "FinalStats" could be a separate component, and the combat code only cares about that. The stats system would then be responsible for transforming the data from the "Stats" component to the "FinalStats" component for an entity. And if some enemies are very simple and don't have such a thing as base stats and mods, then they only need a "FinalStats" component. The combat logic continues as before (since it only cares about FinalStats), and the stats system would just ignore anything without a "Stats" component.

 

The ECS "pattern" makes it easy to think about decoupling code like this, but you could do essentially the same thing in a more traditional OOP way too.




#5211316 Resource objects in stl containers

Posted by phil_t on 17 February 2015 - 07:13 PM

fyi, in C++11, you can delete the assignment operator and copy constructor for a class (which the compiler would otherwise generate by default). This is a good idea if you don't ever want your class to be copied:

struct MyClass {
	HugeResource* r;
	MyClass() { r=new HugeResource(); }
	~MyClass() { delete r; }
	MyClass(const MyClass &src) = delete;             // Tell compiler to delete this method
	MyClass& operator=(const MyClass &src) = delete;  // Tell compiler to delete this method
};

// Now code like this will generate a compile error, instead of you finding
// problems at runtime:

std::vector<MyClass> list;
list.push_back(MyClass());



#5210553 Entity-Component Confusion

Posted by phil_t on 13 February 2015 - 03:03 PM


[Data] [Data] [Data] [Data] <- [Data appending from various threads]
     |
    \/
[Data] [Empty] [Empty]


And anything that fails, doesn't get added?

 

Not sure what "anything that fails" means, or why threading is relevant. I understand it like this:

 

free indices: [2] [3] [6]

valid indices: [0] [1] [4] [5]

component list: [DATA] [DATA] [empty] [empty] [DATA] [DATA] [empty]

 

Then I need to allocate one of these components for an entity. After, it would look like this:

 

free indices: [3] [6]

valid indices: [0] [1] [2] [4] [5]

component list: [DATA] [DATA] [DATA] [empty] [DATA] [DATA] [empty]

 
Then I delete the two entities that have indices to components 0 and 1 in this list. After, it would look like this:
 

free indices: [0] [1] [3] [6]

valid indices: [2] [4] [5]

component list: [empty] [empty] [DATA] [empty] [DATA] [DATA] [empty]

 



#5210529 Heightfield Normals

Posted by phil_t on 13 February 2015 - 01:17 PM

That's an artifact of interpolating between 3 triangle vertices. You should notice that the artifact is much more prominent on one diagonal than the other, right? Like RobMaddison said, it will become less noticeable when you texture the terrain.

 

Also, if you orient your triangulation so that the boundary between two triangles in a quad is aligned with the slope, then it will become less noticeable. See "additional optimizations" here: https://mtnphil.wordpress.com/2011/09/22/terrain-engine/

 

If you want something even smoother, you could probably precalculate and put your normals into a texture. Then sample from that in the pixel shader based on the xz world position. Then you'll be getting the weighted avg of 4 normals instead of 3. Of course now you incur the cost of an extra sample in your pixel shader, and the memory footprint of the normal texture.




#5210412 Entity-Component Confusion

Posted by phil_t on 12 February 2015 - 11:24 PM


What is a "Bag" this is actually the first time I've heard of this storage method.

 

It's a class in the Artemis framework. Read the source code. The comment says:

 

Collection type a bit like ArrayList but does not preserve the order of its entities, speedwise it is very good, especially suited for games.


#5210386 Entity-Component Confusion

Posted by phil_t on 12 February 2015 - 06:24 PM


So far... from what I can gather from studying Artemis is that the components are stored in maps. Aren't hash tables less efficient for the cache? Given that it's normally scattered around the memory?

 

I just looked at the source code, and the components appear to be stored in arrays.





