
Ryan_001

Member Since 23 Apr 2003
Offline Last Active Today, 08:22 PM

#5297880 Is there any reason to prefer procedural programming over OOP

Posted by on 24 June 2016 - 10:19 AM

Considering the amount of religious wars fought about what OO really means

...

No, you don't know what you are talking about.

 

The opinion I stated was not originally my own.  I really don't care what it's called, OO, procedural, imperative; all these definitions are useless as far as I'm concerned and hinder things more than they help.  It's what my professors and the books we used in school stated (yes I have a degree, no I don't think it matters).  It's a pretty common notion that OO without inheritance isn't 'true' OO.  Whether you agree or disagree, just shutting down dissenting opinions with nothing other than a 'you're wrong, I'm right' isn't constructive.  State why OO doesn't need inheritance, or how OO without inheritance is still OO and not just procedural with structs (and I'm not implying those are the only arguments, or good ones at that).  In computer science the 'why' is always far more important than the 'what'.

 

/sigh...

 

And yet, in true gamedev fashion, another interesting conversation is shut down by the standard 'my way is the right way' argument.  I try... I really do try to get these posts to be more than a simple 'do X not Y' type conversation and to get into why we do what we do.  And yet time and time again I am shut down for simply attempting a real conversation.

 

I will refrain from posting in the future...




#5297727 Is there any reason to prefer procedural programming over OOP

Posted by on 23 June 2016 - 12:35 PM

I disagree. I find that in my modern code inheritance plays a very minor role. Polymorphism is used even less frequently. Runtime polymorphism, that is. Compile time polymorphism happens much more frequently.


Just for kicks, take a look at how much inheritance or polymorphism is used in the C++ standard library. There are the streams, of course, but I would be hard pressed to find another inheritance/polymorphism example quickly. Significant parts of the standard library are intentionally inheritance-unfriendly.

Of course in some domains (for example UI frameworks like Qt) you cannot throw a stone without hitting something polymorphic but that is not true for the general case.

 

Lost a long post... :(

 

Long post made short: I'm sure your code is fantastic, but it's probably not true OO.

 

C++ is a rather poor OO language, lacking virtual constructors and multi-dispatch (amongst other things).  It's a multi-paradigm language, and any good C++ coder (as I'm sure you are, having seen prior posts) leverages multiple paradigms in any non-trivial program.  Not using inheritance in C++ simply means you're not trying to use a hammer where a screwdriver would be better.

 

I think the distinction between paradigms and languages is important, as they serve different purposes.
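
As an aside, here's a minimal sketch of the runtime vs. compile-time polymorphism distinction mentioned above (the types and names are purely illustrative, not from anyone's actual code):

#include <iostream>

// Runtime polymorphism: callers dispatch through a vtable.
struct Shape {
	virtual ~Shape() = default;
	virtual float Area() const = 0;
	};

struct Circle : Shape {
	float r;
	explicit Circle(float r) : r(r) {}
	float Area() const override { return 3.14159f * r * r; }
	};

// Compile-time polymorphism: any type with an Area() member works;
// the call is resolved per instantiation, no inheritance required.
struct Square {
	float s;
	float Area() const { return s * s; }
	};

template<typename T> float TwiceArea(const T& shape) {
	return 2.0f * shape.Area();
	}

int main() {
	Circle c(1.0f);
	Square q{ 2.0f };
	const Shape& s = c;                                        // runtime dispatch
	std::cout << s.Area() << "\n";
	std::cout << TwiceArea(c) << " " << TwiceArea(q) << "\n";  // compile-time dispatch
	}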




#5297710 Is there any reason to prefer procedural programming over OOP

Posted by on 23 June 2016 - 10:12 AM

It really depends on what you consider OO vs. what you consider Procedural.  I've met people who claim that in order to have OOP, you must have Inheritance, Encapsulation, and Polymorphism.  Basically, in their view, a class without any inheritance (subclass or superclass) isn't actually OOP at all.

 

That's a ridiculously strict definition, but if you follow that, OOP is usually pretty garbage, and you should do most of your code procedurally.  On the other hand, if you see OOP, like I do, as globs of packed data called objects being interacted upon by functions that know how to handle those objects, OOP is actually pretty awesome, and you should do very little of your code procedurally.

 

I think as a definition it's not a bad one.  Inheritance is a very critical part of OO, and I would argue that without it you're probably not doing OO.  Likewise, functional programming with explicit state is really not functional programming.

 

But as far as coding goes, whatever works; don't get bent out of shape over 'strict' definitions.  I've seen extensive OO code done in C, have personally written a lot of procedural code in Java, and have done large amounts of pure functional programming in C++ (the fun of meta-programming with templates, a pure functional language wrapped in the worst syntax imaginable).  A definition and a language serve two completely different purposes.  A language is there to actually get a completed program out the door, whereas a definition is primarily there as proof for papers and to teach concepts.
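
For what it's worth, a tiny sketch of what I mean by template meta-programming being pure functional: everything is recursion over immutable values with a base-case specialization, no mutable state anywhere (illustrative only):

#include <iostream>

// Compile-time factorial: no loops, no mutation, just recursion and a base case,
// which is what makes template meta-programming feel like a pure functional language.
template<unsigned N> struct Factorial {
	static const unsigned value = N * Factorial<N - 1>::value;
	};

template<> struct Factorial<0> {
	static const unsigned value = 1;
	};

int main() {
	static_assert(Factorial<5>::value == 120, "evaluated entirely at compile time");
	std::cout << Factorial<6>::value << std::endl;   // 720
	}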




#5295044 Pros/Cons of coding alternatives to std::algorithm?

Posted by on 04 June 2016 - 11:23 PM

Another fun collection is the heap operations. While people are generally more comfortable working with the bigger containers that maintain order, heaps have many useful properties and are used by the priority_queue wrapper. That's useful for anything with priorities, like the typical A* implementation, or any kind of processing involving the first N, nearest N, and similar. Quite a few of the more CS-intensive algorithms rely on the heap data structure.

 

In any event, prefer the built-in functionality if you can make it work.  If not, look for other libraries like Boost or EASTL.  Try not to re-invent or re-implement the wheel.

Every C++ dev should have Boost.  Granted, this was more so a few years ago, less so now that nearly half of Boost has been added to the standard library, but it's still exceptionally useful.
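
For reference, a minimal sketch of the heap algorithms mentioned above, used here to keep the N smallest distances (the names and numbers are just for illustration):

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
	std::vector<float> distances = { 5.f, 1.f, 9.f, 3.f, 7.f, 2.f, 8.f };
	const size_t N = 3;

	// Keep a max-heap of the N smallest values seen so far.
	std::vector<float> nearest(distances.begin(), distances.begin() + N);
	std::make_heap(nearest.begin(), nearest.end());

	for (size_t i = N; i < distances.size(); ++i) {
		if (distances[i] < nearest.front()) {
			std::pop_heap(nearest.begin(), nearest.end());  // move current max to the back
			nearest.back() = distances[i];                  // replace it
			std::push_heap(nearest.begin(), nearest.end()); // restore the heap property
			}
		}

	std::sort_heap(nearest.begin(), nearest.end());         // optional: sorted output
	for (float d : nearest) std::cout << d << " ";          // prints: 1 2 3
	std::cout << std::endl;
	}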




#5285793 Should getters and setters be avoided?

Posted by on 08 April 2016 - 07:47 AM

As usual, you should not take everything too seriously.

 

There is a common behavioural pattern of someone doing something, then someone else noticing that it's actually a clever idea and calling it "best practice". Then someone calls it a "pattern", and soon a witch hunt starts against anyone not applying the pattern. A year later, someone says "considered evil" and the word "antipattern" comes up, and the same witch hunt starts in the opposite direction. And then, another year later, someone says "no longer considered evil".

I can't upvote this enough ;)




#5284872 In terms of engine technology, what ground is left to break?

Posted by on 03 April 2016 - 10:12 AM

The biggest advances will not be in visual rendering (IM very HO) but rather in physics and gameplay/simulation.  Simulating battles with millions of entities, realistic fluid interaction, building destruction, etc...




#5283047 Looking for a *.dds texture loading library in C++ that isn't specific to...

Posted by on 23 March 2016 - 08:43 PM

WIC supports DDS files as well: https://msdn.microsoft.com/en-us/library/windows/desktop/ee719654(v=vs.85).aspx




#5281656 Brace yourself, Shader Model 6.0 is coming

Posted by on 17 March 2016 - 07:27 AM

When they said procedural textures I was thinking of something else, but this does look interesting nonetheless.

 

What I want to see is something I'll call 'sampler shaders'.  Instead of using a normal sampler, you could create and use a 'sampler shader'.  When the GPU gets to the point in a shader where it needs to sample a texture, it would stop (conceptually only) and execute the sampler shader.  The sampler shader would be executed over a range (say a 16x16 or 32x32 block, whatever the driver/hardware thinks is necessary) to produce unfiltered texels, which would be stored in a cache (type, size, policies, etc... would be driver handled).  The shader performing the sampling would then be able to read from the cache, perform whatever filtering is necessary, and continue on.  The sampler shader would have a minimum blocking size (much like a compute shader) and access to on-chip shared memory.  This way complex procedural texture data could be created and used in real time, dynamically adjusting its resolution/quality based on its usage in the scene.  Not only procedural texturing, but also higher quality texture compression (wavelet, VQ, fractal, whatever) would be rather trivial...

 

Alas this seems to be something else... /sigh




#5281601 Template madness - template operator not found

Posted by on 17 March 2016 - 01:45 AM

This works in VS2015:

#include <cstdint>
#include <cstdio>
#include <iostream>
#include <vector>
#include <type_traits>
#include <string>


using namespace std;

template<typename T> class A {
	public:

		// internal B
		template<size_t x> class B {
			T asdf[x];
			};

		// trait detecting A<T>::B<x>; the parameter is named U so it doesn't
		// shadow the enclosing class template parameter T
		template<typename U> struct IsTypeB { 
			static const bool value = false;
			static const size_t size = 0;
			};

		template<size_t x> struct IsTypeB< B<x> > { 
			static const bool value = true;
			static const size_t size = x;
			};

		// internal C
		template<size_t x> class C {
			T asdf[x];
			};

		template<typename U> struct IsTypeC { 
			static const bool value = false;
			static const size_t size = 0;
			};

		template<size_t x> struct IsTypeC< C<x> > { 
			static const bool value = true;
			static const size_t size = x;
			};
	};

//template<typename T, Size x> inline A<T> operator+(T value, const typename A<T>::template B<x>& vec ) {
//	return A<T>();
//	}

// SFINAE: this overload only participates when T1 is some A<T0>::B<x>
template<typename T0, typename T1> 
	auto operator+(T0 v, const T1& x) -> typename std::enable_if< A<T0>::template IsTypeB<T1>::value, A<T0> >::type {
	cout << "size of B<x> = " << A<T0>::template IsTypeB<T1>::size << endl;
	return A<T0>();
	}

// and this one only when T1 is some A<T0>::C<x>
template<typename T0, typename T1> 
	auto operator+(T0 v, const T1& x) -> typename std::enable_if< A<T0>::template IsTypeC<T1>::value, A<T0> >::type {
	cout << "size of C<x> = " << A<T0>::template IsTypeC<T1>::size << endl;
	return A<T0>();
	}

	
// ----- main -----
int main() {

	A<float>::B<2> b;
	A<float>::C<3> c;

	5.0f + b;
	6.0f + c;

	cout << "done" << endl;

	getchar();
	return 0;
	}

As Pink Horror pointed out, you're not going to be able to deduce A<T1>::B<T2> directly, since the nested name is a non-deduced context. But you can 'hoist' the detection into the main class and then use SFINAE to filter the operators as necessary.




#5277033 How fast is hardware-accelerated ray-tracing these days?

Posted by on 19 February 2016 - 04:08 PM

 

You'd have to perform the vertex animation, apply it to the mesh, then take the mesh triangles and build an oct-tree/BVH hierarchy/binary tree/sort into grid cells/whatever?


Like a character? Precompute a static BVH for the character in T-pose. At runtime keep the tree structure but update the bounding boxes.
The animated tree might not be as good as a completely rebuilt tree, but it's still pretty good.
If you have 100 characters you only need to build a top-level tree over those 100 root nodes.
I've been using it for a real-time GI solution I've been working on for many years now.
I don't know of any papers, but I'm sure I'm not the inventor of this simple idea :)

 

Interesting...  You don't find there are too many triangles in your leaf BVH nodes?
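
(For anyone skimming: here's a rough sketch of the refit idea described above, keeping the precomputed tree topology and only recomputing the boxes bottom-up after skinning. The structure names are made up; real engines will differ.)

#include <algorithm>
#include <iostream>
#include <vector>

// Hypothetical node layout; the real structure is engine-specific.
struct AABB { float min[3], max[3]; };
struct BVHNode {
	AABB box;
	int left = -1, right = -1;     // child indices, -1 means leaf
	int firstTri = 0, triCount = 0;
	};

AABB merge(const AABB& a, const AABB& b) {
	AABB r;
	for (int i = 0; i < 3; ++i) {
		r.min[i] = std::min(a.min[i], b.min[i]);
		r.max[i] = std::max(a.max[i], b.max[i]);
		}
	return r;
	}

// Refit: keep the precomputed topology, recompute boxes bottom-up
// from the animated triangle bounds (triBounds computed after skinning).
AABB refit(std::vector<BVHNode>& nodes, const std::vector<AABB>& triBounds, int n = 0) {
	BVHNode& node = nodes[n];
	if (node.left < 0) {                       // leaf: union of its triangles
		node.box = triBounds[node.firstTri];
		for (int i = 1; i < node.triCount; ++i)
			node.box = merge(node.box, triBounds[node.firstTri + i]);
		}
	else {                                     // internal: union of children
		node.box = merge(refit(nodes, triBounds, node.left),
		                 refit(nodes, triBounds, node.right));
		}
	return node.box;
	}

int main() {
	// Two leaves under one root; topology stays fixed, only boxes change per frame.
	std::vector<BVHNode> nodes(3);
	nodes[0].left = 1; nodes[0].right = 2;
	nodes[1].firstTri = 0; nodes[1].triCount = 1;
	nodes[2].firstTri = 1; nodes[2].triCount = 1;

	std::vector<AABB> triBounds = { { {0,0,0}, {1,1,1} }, { {2,2,2}, {3,3,3} } };
	AABB root = refit(nodes, triBounds);
	std::cout << root.max[0] << std::endl;     // prints 3
	}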




#5277024 How fast is hardware-accelerated ray-tracing these days?

Posted by on 19 February 2016 - 03:07 PM

While that's a common opinion in the graphics community, I disagree.

Say you have 10000 dynamic objects: prebuild a tree per object, and at runtime build a tree from only those 10000 root nodes; that's <1 ms on the GPU.

Research projects rebuild the entire tree every frame, which is why they often show similar times for building and tracing.

 

For simple linear transformations that is pretty straightforward, but what do you do about more complex animations?  You'd have to perform the vertex animation, apply it to the mesh, then take the mesh triangles and build an oct-tree/BVH hierarchy/binary tree/sort into grid cells/whatever?  In all the papers I've read this part is very slow even when performed in parallel.  It also undermines your ray tracing time complexity: doing an O(n) prepass before an O(log n) final trace still yields an O(n) algorithm (granted, there's more to algorithms than just big-O time complexity...).

 

Of course, if you have any papers, evidence, or experience to prove me wrong (and don't take my tone the wrong way, I'm not trying to be argumentative; I really enjoy reading the latest up-to-date papers and seeing videos of this stuff), I would LOVE to read/see them :)  I love ray tracing and I have a few ideas of my own I want to try when I get some time.




#5276951 How fast is hardware-accelerated ray-tracing these days?

Posted by on 19 February 2016 - 09:00 AM

My understanding is that the ray tracing isn't really the hard part; it's building the data structure that is the issue.  Whether you're using a BVH, oct-tree, binary tree, grid, or something else, the data structure is essential since it allows you to move your tracing time complexity from O(n) to O(log n).  So for large enough static scenes, ray tracing can actually beat out standard triangle rasterization.  The problem is building that data structure in real time.  Sure, simple linear transformations within certain limitations can be handled relatively quickly, but vertex skinning, dynamic vertex displacement (like animated water), anything stretching or oozing, many particle effects, basically animation in general, is what's really the bottleneck at this point.

 

Really the problem should be restated: it's not how fast GPUs can ray trace these days, it's how fast they can build the ray tracing data structure.




#5275986 instancing with multiple meshes

Posted by on 16 February 2016 - 11:58 AM

One technique I used for drawing a large number of small meshes was to abuse the tessellator. This of course won't work on D3D9-level hardware, but I thought I'd mention it nonetheless.

In the vertex shader I computed all the per-instance data (one vertex per instance). The hull shader computed the number of triangles needed for the given instance and created a quad patch tessellated finely enough to cover all the triangles in the mesh; beyond that it did nothing but pass on the per-instance data. The geometry shader would then transform each triangle produced by the tessellator using the per-instance data passed in, looking up the specific instance's vertex data from an SRV. It did mean duplicating some vertex computations, but it also meant that you got very efficient culling/LOD, that per-instance data only needed to be calculated once, and that there were no intermediary UAVs, stream outs, etc...

Now, I never benchmarked the performance, and after watching the wonderful link posted by Phantom I have a feeling it would probably be slower than other options, but it seemed pretty cool at the time :)


#5273932 Cost of Switching Shaders

Posted by on 02 February 2016 - 02:18 PM

I don't think so. AFAIK, D3D11's command list recording is all done in the user-mode library, and doesn't actually call into the driver until you submit the list.


The documentation does state: "Pre-record a command list before you need to render it (for example, while a level is loading) and efficiently play it back later in your scene. This optimization works well when you need to render something often."  I've also seen it stated (though I don't remember exactly where) that the driver can perform some optimizations on the command list, and that the user-mode library is only used when the driver doesn't support command lists natively.
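
To make that record/playback flow concrete, here's a minimal sketch of deferred-context command-list recording in D3D11 (error handling omitted, and the device/immediate context are assumed to already exist):

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11CommandList> RecordCommandList(ID3D11Device* device)
{
	ComPtr<ID3D11DeviceContext> deferred;
	device->CreateDeferredContext(0, &deferred);

	// ... issue state/draw calls on 'deferred' here, e.g. while a level loads ...

	ComPtr<ID3D11CommandList> commandList;
	deferred->FinishCommandList(FALSE, &commandList);   // bake the recorded calls
	return commandList;
}

void Playback(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
{
	immediate->ExecuteCommandList(commandList, FALSE);  // replay each frame
}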


#5273350 Warning conversion from size_t to int in x64

Posted by on 30 January 2016 - 09:00 AM

-



