
Hodgman

Member Since 14 Feb 2007
Online Last Active Today, 02:07 AM

#5168813 Looking for advice to construct class structure for efficiency

Posted by Hodgman on Today, 12:34 AM

I don't like either of those options -- a model is too high-level to know about an ID3D11DeviceContext, and the Device is too low-level to know about complex things like models.

 

I would choose option #2, but instead of the Device knowing how to draw "models", break that down into its simplest parts -- vertex streams, a stream layout, a shader program, lists of texture bindings, lists of cbuffer bindings, and a "DrawPrimitives" operation. I'd have the device know how to draw something using all of those objects internally, and then I'd have the "Model" class have these as members (i.e. be composed out of these simpler objects).
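As a rough sketch of that composition (all type names here are illustrative stand-ins, not D3D11's, and the Device just records submissions instead of talking to a GPU):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical low-level render types -- the "simplest parts".
struct VertexStream   { uint32_t bufferId = 0; uint32_t stride = 0; };
struct StreamLayout   { uint32_t layoutId = 0; };
struct ShaderProgram  { uint32_t programId = 0; };
struct TextureBinding { uint32_t slot = 0; uint32_t textureId = 0; };
struct CBufferBinding { uint32_t slot = 0; uint32_t bufferId = 0; };

// Everything the device needs for one "DrawPrimitives" operation.
struct DrawCall
{
    std::vector<VertexStream>   streams;
    StreamLayout                layout;
    ShaderProgram               program;
    std::vector<TextureBinding> textures;
    std::vector<CBufferBinding> cbuffers;
    uint32_t                    primitiveCount = 0;
};

class Device
{
public:
    // The device only understands the simple parts, never "models".
    void DrawPrimitives(const DrawCall& dc) { submitted.push_back(dc); }
    std::vector<DrawCall> submitted; // stand-in for real GPU submission
};

// The model is *composed of* the simple objects; it never touches the context.
struct Model
{
    std::vector<VertexStream>   streams;
    StreamLayout                layout;
    ShaderProgram               program;
    std::vector<TextureBinding> textures;
    std::vector<CBufferBinding> cbuffers;
    uint32_t                    primitiveCount = 0;

    void Draw(Device& device) const
    {
        device.DrawPrimitives({streams, layout, program,
                               textures, cbuffers, primitiveCount});
    }
};
```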




#5168774 Pipeline Configuration for Post-Processing Shaders

Posted by Hodgman on Yesterday, 07:07 PM

You still need a vertex shader - the geometry is a quad or triangle that covers your entire screen!

To work without any geometry at all, you'd use a compute shader instead of a pixel shader, and Dispatch instead of Draw.


#5168608 Create Vertex Shader from compiled file

Posted by Hodgman on Yesterday, 04:26 AM

There is an API (D3DCompile) to take text shaders and compile them into binary shaders in memory, which can then be passed to CreateVertexShader... But I just use FXC.exe -- the shader compiler that comes with the SDK.


#5168586 Inter-thread communication

Posted by Hodgman on Yesterday, 01:33 AM

This relies on reads and writes of LONGLONGs between the CPU and RAM being atomic operations.

 

That's the kind of thing where you're writing code that's making explicit assumptions about the CPU architecture that you're running on... This kind of code should only exist within low-level system-specific modules, such as the internal implementation of std::mutex, std::atomic, etc... i.e. code that you know you'll have to rewrite if you want to recompile for a different CPU.

 

P.S. writing to a LONGLONG is not one atomic operation on 32-bit x86 -- the data will be transferred in 2 operations (or 3 if it's unaligned), which means this is exactly as unsafe as your previous code!

 

P.P.S. if you're using the volatile keyword (except in the above mentioned situation - writing CPU-dependent implementations of synchronisation primitives) then you have a bug. That's hyperbole, of course, but seriously, in many code bases the use of volatile is automatically rejected or flagged as a potential bug (because it doesn't do what most people think it does, and in 99% of cases, it's not actually useful in writing multithreaded code). In my entire engine, that keyword gets used once.
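The portable way to share a 64-bit value (or a flag) between threads is std::atomic, not volatile -- a minimal sketch:

```cpp
#include <atomic>
#include <cstdint>

// Portable shared state between threads: std::atomic, not volatile.
// A 64-bit atomic stays indivisible even on 32-bit targets (the library
// falls back to a lock if the CPU can't do a native 64-bit atomic).
std::atomic<int64_t> g_sharedValue{0};
std::atomic<bool>    g_quitRequested{false};

void Publish(int64_t v)
{
    // release: everything written before this store is visible to a
    // thread that observes the new value with an acquire load.
    g_sharedValue.store(v, std::memory_order_release);
}

int64_t Read()
{
    return g_sharedValue.load(std::memory_order_acquire);
}
```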




#5168566 Inter-thread communication

Posted by Hodgman on 22 July 2014 - 10:06 PM

There are two main categories of multi-threaded communication -- shared state and message passing (with the latter being the right default™, though the former usually being the first that's taught...).

 

You're mixing both of them here -- to change the window size you use message passing, but to discover the window size you use shared state. Just pick one!

Either put a lock around the data, or use messages to retrieve the size as well as setting it (you could either send a 'getSize' message to the window containing the object who wants to know, and have the window send a 'setSize' message back to that object, or, have objects register to receive 'onResized' messages).

 

Also, shared state requires synchronization. If some bit of data is going to be shared between multiple threads, it needs to be synchronized. Simply putting a mutex/critical-section around the width/height data is fine. Locking like this is only actually a huge performance penalty if there is contention (many threads trying to use that data at once). Keep in mind that lock-free synchronization generally also performs very poorly in situations of high contention!
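The "mutex around the width/height" option is only a few lines -- a minimal sketch (class and member names are illustrative):

```cpp
#include <mutex>

// Shared window size guarded by a mutex -- the simplest correct option.
// Contention is negligible unless many threads hammer the size at once.
class Window
{
public:
    void SetSize(int w, int h)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_width = w;
        m_height = h;
    }
    void GetSize(int& w, int& h) const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        w = m_width;
        h = m_height;
    }
private:
    mutable std::mutex m_mutex; // mutable: GetSize is logically const
    int m_width = 0, m_height = 0;
};
```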




#5168556 Post process order?

Posted by Hodgman on 22 July 2014 - 08:50 PM

SSR is part of the lighting - so it gets done at the same time as lighting (early).

Eye adaptation takes its input from the final HDR lighting buffer (after all lighting is calculated), but it doesn't change the screen -- it's just an extra input to the tone-mapper.

Lens flare and bloom are the same effect - blurring/distortion/etc in the camera lens, so they happen at the same time.

I would do bloom after DOF, otherwise the results of the bloom will be smudged around by the DOF -- you might notice the silhouette edges of objects forming lines through your bloom... but neither order is physically correct, so try both!

I do color correction, tone-mapping, gamma and vignetting all in the one shader. First I do vignetting, then tone-map from HDR to 8-bit, then do gamma correction, then colour correction using a LUT. I do vignetting on the HDR data, because it's basically a lighting/shadowing effect. You can also do it after tone-mapping, but you get different results (try both!).
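That per-pixel ordering can be sketched as a scalar pipeline (Reinhard and a 2.2 gamma are stand-ins here; a real shader would use its own curves and a 3D LUT for the grade):

```cpp
#include <cmath>

// Order of the combined post shader described above, for one HDR channel.
float VignetteTonemapGamma(float hdr, float vignette /* 0..1 falloff */)
{
    float v   = hdr * vignette;            // 1. vignette on HDR (a lighting effect)
    float ldr = v / (1.0f + v);            // 2. tone-map HDR -> [0,1) (Reinhard stand-in)
    float g   = std::pow(ldr, 1.0f/2.2f);  // 3. gamma correction
    // 4. colour correction via a LUT lookup would go here
    return g;
}
```

Moving the vignette after step 2 instead gives the "vignette on LDR" variant mentioned above -- same code, different order, different look.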

FXAA doesn't work on HDR data, so you have to do it after tone-mapping.

 

The above is for high quality / high system specs.

For older hardware, I do tone-mapping/colour-correction/gamma-correction before DOF -- because it's much faster on old hardware to blur 8-bit data than HDR data!

In this configuration, you've got some more options with FXAA -- it's interesting to do FXAA before DOF, because then you're blurring an anti-aliased image, which gives better results in the DOF'ed areas... However, the DOF itself creates aliasing around edges... which can be fixed up by doing FXAA after DOF. Again, try both!



#5168259 Converting STL heavy static library to shared

Posted by Hodgman on 21 July 2014 - 06:56 PM

Also, some of the methods take a reference to a vector and fill it with objects, am I going to be forced to roll my own data structures for these cases? And same with areas where strings need to be returned and/or passed (some I might be able to get away with converting to C-style strings, but an actual string class would be much better)

These sound like parts of the public API, not the internal implementation?
If so, yes, they will cause problems unless the program that uses the DLL is built with the exact same compiler and the same compiler-settings.
 
If you're using a different compiler, and/or different settings, then the DLL and the program that uses it might have two different implementations of the STL -- so the DLL's idea of how a "std::string" works might differ from what the EXE expects, etc...
 
Also, it's possible that the DLL will use new (inside a string/vector/etc) and the EXE will then use delete. This can cause issues if they are using different heaps / different versions of the STL...

On Windows, you can avoid this problem by making sure that you're using the DLL version of the STL, e.g. http://i.imgur.com/ARYDZcz.png
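The other common fix is to keep STL types out of the exported API entirely: the boundary speaks pointers and lengths, and each side converts to/from its own STL. A sketch (the `GetName` export and its wrapper are hypothetical names for illustration):

```cpp
#include <cstring>
#include <string>
#include <vector>

// A C-style DLL boundary: no std::string/std::vector crosses it, and the
// caller owns all memory, so mismatched STLs/heaps can't cause trouble.
extern "C" int GetName(char* buffer, int bufferSize)
{
    static const char name[] = "example";   // internally, any STL use is fine
    int needed = (int)sizeof(name);         // includes the null terminator
    if (buffer && bufferSize >= needed)
        std::memcpy(buffer, name, (size_t)needed);
    return needed;                          // caller can size a buffer and retry
}

// EXE-side convenience wrapper, rebuilding a std::string with *its own* STL:
std::string GetNameString()
{
    std::vector<char> buf((size_t)GetName(nullptr, 0));
    GetName(buf.data(), (int)buf.size());
    return std::string(buf.data());
}
```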




#5168245 Concern on "paying to enter" a project

Posted by Hodgman on 21 July 2014 - 05:45 PM

Why would you want to mod a game that you don't already play?

 

I'd be suss of any project that wanted staff to pay them to enter... (granted, in this case they're not paying him directly, they're buying the necessary software -- but does he want you to buy him his programming tools as well?)

I'd also be suss of any prospective modder who doesn't own the game that they want to mod.




#5168241 Exception question

Posted by Hodgman on 21 July 2014 - 05:31 PM

Exceptions tend to make code faster (if you're using them for error states and not flow control), safer, and easier to read. Their only downside that I'm aware of is they do increase code size due to the dispatch tables

Increasing code size, with all the extra potential unwinding cases added everywhere, can have a huge performance impact. Even just having exception support enabled in your compiler (regardless of whether you actually use them) makes your code slower.

Libraries targeted at games are generally forced to avoid exceptions entirely, because many game developers disable them for performance reasons (the compilers for consoles have them disabled by default, with a compiler option to enable them if they're really required!).

n.b. my statements are only true for C++'s exception system -- the mechanisms in newer languages are much nicer... Also, C++'s mechanism is much nicer on x86-64 than it is on x86/PPC/ARM/etc...

Writing exception-safe code is just as hard/easy as writing error safe code, no matter your error mechanism.
...  You are correct in that it is hard to write code to enforce the strong or nothrow guarantees.

Yeah - strong exception safety is very hard. You've got to be constantly mindful that any line in your program could be a hidden return statement, and ensure the program is in a valid state at all times (except where you're completely sure the lines in question give you the nothrow guarantee).

When exceptions are disabled (or you give up on the strong guarantee), that massive mental tax goes away. I personally hate the exceptions mental tax, because most IDEs are bad at telling you whether a line of code can potentially throw, so it's not easy to tell at a glance how safe some bit of code is. If throw-specifiers / noexcept weren't broken features, this tax might be lifted somewhat...

If you're using return values to report errors, the alternative mental tax is that you've got to be constantly mindful that functions have return values that you may have to check...
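For reference, the classic way to get the strong guarantee is copy-and-swap: do all the throwing work on a copy, then commit with a nothrow swap. A minimal sketch (the `Config` type is made up for illustration):

```cpp
#include <stdexcept>
#include <vector>

// Copy-and-swap: the object is untouched if anything throws, because the
// only modification to 'values' is a nothrow swap at the very end.
struct Config
{
    std::vector<int> values;

    void ReplaceValues(const std::vector<int>& newValues)
    {
        if (newValues.empty())
            throw std::invalid_argument("empty");
        std::vector<int> copy(newValues); // may throw; 'values' not yet touched
        values.swap(copy);                // nothrow commit
    }
};
```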

 

Usually in games, "expected errors" are extremely rare anyway, so the debate over the correct style only concerns a very small amount of code. If you expect an error to occur though (e.g. you tried to insert a new key-value into a map, when that key was already present) then most people will say that exceptions are the wrong mechanism.

"Unexpected errors" in games are usually just given straight to the crash-handler, with no need to write recovery code (just a need to generate good debugging information to fix the code/data).




#5168223 Patenting an Algorithm?

Posted by Hodgman on 21 July 2014 - 03:35 PM

The worst "X used for Y" that I've run into was when I worked in the gambling industry. A competitor patented "16:9 aspect ratio for a gambling display", so we were legally obliged to install plastic strips that covered at least one column of pixels in our products -- at our own expense, of course...
We had some terrible ones ourselves, such as a computer visualization of a 12-sided die. 12-item reels or spinners -- that's fine -- but only one company can let you gamble with 12-sided dice.

We had 400 staff in R&D; to score high on your performance review, you were expected to lodge 12 patents a year, with no time specifically allocated to the task. Even a run-of-the-mill game programmer or mathematician was supposed to submit one idea from their day-to-day work to the legal department each month.

There was also a task similar to jury duty, where you'd be pulled into a retreat with the legal team and forced to try to find loopholes in their drafts (e.g. "What if they just used a 15.99:9 ratio?") so they could make our patents as broad, vague and watertight as possible...

All the ideas patented are obvious to an expert in the field. Most are obvious to a layman! They shouldn't be granted, but the US patent office is horribly broken.

These patents are not being used to encourage innovation and protect inventors' rights. They're just weapons used by huge businesses to create unfair artificial monopolies and hobble competitors in whatever underhanded manner possible.


#5168067 Prevent Paging

Posted by Hodgman on 20 July 2014 - 11:04 PM

Is there a way to prevent paging?
If I have to go os specific I found MAP_LOCKED for linux probably mac but I havent found a way to do this in windows which is going to be the main target unfortunately.

Yeah, you can pin (that's what the LOCKED flag does) the pages that you don't want swapped out of RAM... but pinning too much memory is very harmful to overall system performance, so this feature is generally used only in moderation, by device drivers. I've not done it on Windows either, but it will be somewhere in here.
 
The alternative is simply to not allocate too much RAM.




#5168060 why c++ is the most widely used language for professional game development?

Posted by Hodgman on 20 July 2014 - 10:16 PM

PS3 / Cell is pretty much a 7-core DSP taped onto a weak PPC core. All previous consoles, bar the original Xbox, were pretty much a collection of DSPs that had to be programmed individually.
It isn't until the current crop of x86 consoles that you can treat them as general purpose computers.

The original Xbox used a Pentium III, DDR RAM, a normal HDD -- all regular consumer parts. [edit] Oops, I read "bar the Xbox" as "even the Xbox"... [/edit]

I guess it depends on how you define "general purpose computer", but the PS3/360 were pretty much regular PCs too, just with PPC CPUs instead of x86 (plus the Cell co-processor, which is similar to the consumer-available Xeon Phi co-processor -- also, you can compile regular, ugly, way-too-virtual C++ code for the Cell and it will just work™, inefficiently!). Nintendo's consoles aren't that different, using general-purpose PPC or ARM processors. Until Apple switched its Macs to x86, PPC CPUs were a viable choice for a desktop CPU as well.
 
For the most part, programming games for them is the same as programming games for PCs -- except on consoles there'll be a few small parts of your engine that do some low-level hardware communication, which on PC is done by your device drivers. On the Microsoft consoles, they're even nice enough to give you analogues of many PC APIs that you're used to, so you can port a lot of Windows code without any changes at all. If you're not working on the guts of the engine, it's basically the same as working on a PC game.

For the most part, the biggest change is just that most of the console OS's don't implement virtual memory (even though they still use regular virtual address spaces), so you've got to be very careful about not running out of RAM.
 

In any case, C++ is fine provided you have a good, modern compiler and a powerful enough CPU to hide the costs of its higher-level features (exceptions, RTTI, virtual inheritance, the STL, etc.) The great thing is that you *can* choose to disable these features to gain performance, at the expense of convenience. It's pretty much the only general purpose language that gives you this amount of flexibility.

And that's why it's popular for game engines. It bridges the divide all the way from C's style of systems/hardware programming almost all the way up to 'modern' languages like Java/C#. Plus, it's fully compatible with C -- practically every other language is also "compatible with C" via some foreign-function interface, but in C++ you can write C code directly inside C++ files, or link C and C++ files together, etc... If you ever need to, it's simple to mix some C99 code into your project.

 

You can write stuff that looks like driver code, and you can also write lambda-based game event systems... You can then also control the way those high level systems work under the hood, optimizing them to be cache friendly or to use SIMD instructions, or multiple cores, etc...
 
However, the "higher level" features have zero cost when compared to equivalent C code.
e.g. comparing something like polymorphism in C++:

	#include <memory>

	int g_state = 0;

	class IFoo
	{
	public:
		virtual void Stuff(int i) = 0;
		virtual ~IFoo() {}
	};
	class Foo : public IFoo
	{
	public:
		Foo() : m(1337) {}
		~Foo() { g_state *= m; }
		int m;
		void Stuff(int i) { g_state += i*m; }
	};

	void Test()
	{
		std::unique_ptr<IFoo> base( new Foo );
		base->Stuff(42);
	}

The equivalent C code is:

	#include <stdlib.h>

	int g_state = 0;

	typedef void (FnStuff)( void*, int );
	typedef void (FnShutdown)( void* );
	typedef struct IFoo_vtable
	{
		FnStuff* pfnStuff;
		FnShutdown* pfnShutdown;
	} IFoo_vtable;
	typedef struct IFoo
	{
		IFoo_vtable* vtable;
	} IFoo;
	void IFoo_Stuff(IFoo* self, int i) { (*self->vtable->pfnStuff)(self, i); }
	void IFoo_Shutdown(IFoo* self) { (*self->vtable->pfnShutdown)(self); }

	typedef struct Foo
	{
		IFoo parent;
		int m;
	} Foo;
	void Foo_Stuff(Foo* self, int i) { g_state += i*self->m; }
	void Foo_Shutdown(Foo* self) { g_state *= self->m; }
	IFoo_vtable Foo_vtable = { (FnStuff*)&Foo_Stuff, (FnShutdown*)&Foo_Shutdown };
	void Foo_Init(Foo* self) { self->parent.vtable = &Foo_vtable; self->m = 1337; }

	void Test()
	{
		Foo* derived = (Foo*)malloc(sizeof(Foo));
		Foo_Init(derived);
		IFoo* base = (IFoo*)derived;
		IFoo_Stuff(base, 42);
		if( base )
		{
			IFoo_Shutdown(base);
			free(base);
		}
	}

The kinds of people who are doing systems programming in C++ should be able to do this translation in their head at all times, to be aware of what they're actually asking the compiler to do for them. Both of the above snippets run at the same speed on my PC. The same goes for vectors/lists/etc -- most of the time there's a performance issue, people are just writing silly C++ code, where the equivalent C code would look absolutely horrible and make their mistakes obvious.
 
If, for whatever reason, you needed to write code using polymorphism, the C code makes the costs involved obvious (and makes a few micro-optimisation opportunities more obvious), but the C++ code is much easier to read, write and maintain - and again, there's no performance difference between them.

 

As well as "shortcut" features like the above, there are plenty of completely free features designed to support decent software-engineering practices -- such as enforcing invariants, detecting errors at compile-time, etc. At my last job we used a template for pointers, which acted like (and had the same cost as) a raw pointer, but in development builds it would alert us to cases where it was used while uninitialized, or where it was leaked / not cleaned up. That kind of template has absolutely zero cost in the shipping build, but improves your engineering practices.




#5167954 Inter-thread communication

Posted by Hodgman on 20 July 2014 - 08:48 AM

	msg = disp.QueryFreeMessage();
	if(msg)
	....
	T* QueryFreeMessage() { return &queue[iWriteMarker]; }

if(msg) will always be true.
 
Dispatch can fail, but the caller doesn't know.
Also, in this case, the caller has already used QueryFreeMessage to get a pointer to a queue-slot, and has written data into that slot, even though the queue is full (overwriting not-yet-consumed data).
You probably want to make QueryFreeMessage return null when the queue is full to solve this, and change the error inside Dispatch into an assertion failure, because it should never happen if the client is using the class correctly.
 
GetMessage increments the read cursor, which lets the other thread know that it's safe to override that slot... but if the write thread does reuse that slot before HandleMessage is called, then you'll have data that's being leaked (never actually getting consumed), and other data that gets consumed twice. To solve that, you'd have to only increment the read cursor after the data has been consumed.
Or, instead of returning T*'s to the user, return a T by value, which has been copied before the cursor is incremented.
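Putting those fixes together, a minimal single-producer/single-consumer ring might look like this (a sketch, not the poster's class: push reports failure when full, and pop copies the value out *before* releasing the slot):

```cpp
#include <atomic>
#include <cstddef>

// Minimal SPSC ring buffer. Capacity is N-1: one slot is sacrificed so that
// "full" and "empty" are distinguishable from the cursors alone.
template<typename T, size_t N>
class SpscQueue
{
public:
    bool TryPush(const T& value)
    {
        size_t w = m_write.load(std::memory_order_relaxed);
        size_t next = (w + 1) % N;
        if (next == m_read.load(std::memory_order_acquire))
            return false;                       // full -- caller must handle it
        m_slots[w] = value;
        m_write.store(next, std::memory_order_release);
        return true;
    }
    bool TryPop(T& out)
    {
        size_t r = m_read.load(std::memory_order_relaxed);
        if (r == m_write.load(std::memory_order_acquire))
            return false;                       // empty
        out = m_slots[r];                       // consume first...
        m_read.store((r + 1) % N, std::memory_order_release); // ...then free the slot
        return true;
    }
private:
    T m_slots[N];
    std::atomic<size_t> m_read{0}, m_write{0};
};
```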
 


Additionally boost has some lock free containers in it, and if I recall correctly intel released a whole library of lock free data structures.

Cool, last time I looked at that, it was just the idea / submission-for-review stage, not actually accepted into boost yet.




#5167946 why c++ is the most widely used language for professional game development?

Posted by Hodgman on 20 July 2014 - 06:31 AM

C gives you access to the "restrict" keyword which can result in a measurable performance improvement on DSPs; it lacks templates that bloat your generated code; ditto for implicit copy constructors and other "helpful" compiler-generated bloat; finally, C99 designated initializers are pretty useful in practice.
 
You *can* emulate the efficiency of C in C++, provided you do not use any C++ feature. However, without templates, exceptions and the STL what's the point of using C++ in the first place? You simply get worse compilation times and lose C ABI compatibility in exchange for... namespaces? Pretty weak.
 
Edit: even worse C++ new/delete cannot take advantage of multiple heaps, unless you write your own allocator (good luck), and tend to fail horribly when called during an interrupt. Once you lose the ability to call "new Foo", then a whole range of C++ constructs become impossible. And since everything has to be a POD, then you can just use C and be done with it.
 
(Yes, this is not your run-of-the-mill, out-of-order, branch-predicting x86_64 environment that will swallow all kinds of inefficiencies without complaint.)

As Sean mentioned above, even though restrict isn't in the C++ spec, all the decent compilers support it anyway in some form. As you know, informing the compiler that two pointers won't alias can produce some great results, so there's a good incentive for compilers to support that C99 feature in C++.

 

(the main compilers used by PC and console game-devs are GCC, MSVC, Clang, and SN Systems', which are all decent these days)
 
The PS3 and 360 CPU's are in-order, PowerPC variants, where excessive loads (caused by aliasing) can ruin performance with load-hit-store stalls - restrict can do wonders when that's occurring in tight loops... The PS3 also has the crazy Cell CPU, which doesn't even have access to RAM directly (it's basically got a massive, software-controlled L1 cache, with batch/async memcpy communication to RAM) -- to write code for it, you need all the regular systems programming features of either C or C++.
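As a sketch of the kind of loop where this matters -- using the `__restrict` extension spelling, since plain `restrict` isn't valid C++ (GCC, Clang and MSVC all accept a variant of it):

```cpp
// The __restrict qualifiers promise the compiler that dst and src never
// alias, so it can keep src[i] in a register across the store to dst[i]
// instead of reloading it -- exactly the load-hit-store case described above.
void Scale(float* __restrict dst, const float* __restrict src, float s, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * s;
}
```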
 
As Sean said, this is GameDev.net, and the topic is why C++ is widely used in professional game development. The fact is that it is widely used. I don't mean to resort to argument by popularity or authority, but seeing that there are hundreds of organizations full of ridiculously talented programming teams using it for these kinds of low-level, embedded, high-performance projects (and producing amazing products), chances are there are some good reasons for it... Maybe when you're working in one of those teams you'll get a chance to gain a new perspective on the language.
I've used it on academic research projects and large corporate projects as well... and those experiences were not the same as in the games industry. It's a very different language on every project, depending on who's leading the project and which subset of the language you're using. It is pretty ridiculous and over-complicated, so every project does basically use a different subset of the features. Actually, even the C projects at those places were pretty horrible.
 
Game-developers generally avoid exceptions and RTTI (the compilers for game consoles often have these features disabled by default, with a command line option to turn them on!!), and often avoid the parts of the STL that deal with memory allocation, because embedded hardware requires much more care in that area.
A few hardware generations ago, we avoided templates, because compiler support was terrible -- one in particular who shall not be named didn't do 'COMDAT folding', which meant that vector<int*> and vector<float*> resulted in identical/duplicated asm routines being included in the executable, instead of one of them being merged/stripped...
 
Since then, though, templates are one of the best reasons to use C++. You seem to be suggesting that templates result in "bloat" that makes your program run slower (which might be true on the above-mentioned compiler *cough* CodeWarrior *cough* if measuring L1 I$ pressure... or if your compiler doesn't know how to inline... or if you're using STL templates with your STL implementation's debug features turned on), but the simple counter-example is C++'s std::sort vs C's qsort.
qsort uses a function pointer to call the comparison function in the inner loop -- on PPC, this results in endless and completely unavoidable branch-misprediction penalties.
std::sort uses a templated functor to call the comparison function in the inner loop, which lets it be inlined at compile time, avoiding the need to branch to an address fetched from memory. The resulting code from the templated sort algorithm is much better optimized than the C function-pointer alternative.
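The comparison side by side (both produce the same result; the difference is that the lambda's call is visible to the inliner while qsort's function pointer is opaque):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdlib>

// C style: the comparator is called through a function pointer every time.
int CompareInts(const void* a, const void* b)
{
    int x = *(const int*)a, y = *(const int*)b;
    return (x > y) - (x < y);
}

void SortC(int* data, size_t count)
{
    std::qsort(data, count, sizeof(int), &CompareInts);
}

// C++ style: the lambda is a unique type, so std::sort<> is instantiated
// with the comparison inlined into the inner loop.
void SortCpp(int* data, size_t count)
{
    std::sort(data, data + count, [](int a, int b) { return a < b; });
}
```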
 
You don't seem to be aware of "placement new". The default new does two things -- it gets memory via something like malloc and also calls the constructor (and delete does something like free and also calls the destructor). Many game engines avoid the default new/delete altogether (and the default malloc/free too!). Instead you can write your own allocator, which takes memory from anywhere (malloc, the stack, a mapped file, whatever), and then use placement new to call the constructor at that address. You then call the destructor manually when freeing that allocation.
Many games that I've worked on make liberal use of stack/linear/mark-and-release allocators. C++ lets you marry its object model with any kind of allocation scheme like this if you care to -- so you're not limited to POD when using alternative allocation schemes.
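A toy version of that marriage -- a linear ("mark and release") allocator plus placement new. This is a sketch: real versions handle per-type alignment and out-of-memory properly, and this one only releases trivially-destructible types:

```cpp
#include <cstddef>
#include <new>

// Linear allocator: Alloc bumps a cursor; Release rewinds it to a mark,
// "freeing" everything allocated after that mark in one step.
class LinearAllocator
{
public:
    LinearAllocator(void* memory, size_t size)
        : m_cursor((char*)memory), m_end((char*)memory + size) {}

    void* Alloc(size_t size)
    {
        size = (size + 15) & ~size_t(15);           // 16-byte align every block
        if (m_cursor + size > m_end) return nullptr; // out of space
        void* p = m_cursor;
        m_cursor += size;
        return p;
    }
    char* Mark() const        { return m_cursor; }
    void  Release(char* mark) { m_cursor = mark; }
private:
    char* m_cursor;
    char* m_end;
};

struct Particle { float x, y, z; };

// Placement new runs *only* the constructor, on memory we chose ourselves.
Particle* NewParticle(LinearAllocator& a, float x, float y, float z)
{
    void* mem = a.Alloc(sizeof(Particle));
    return mem ? new (mem) Particle{x, y, z} : nullptr;
}
```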
 
On that topic, many game console OS's don't even provide malloc / new out of the box. It's often up to you to implement these routines yourself, using the raw OS functions for allocating physical memory ranges, allocating virtual address ranges, choosing page sizes, and binding the physical and virtual allocations together.
Most game engines are also extremely careful with memory management, often banning the use of malloc/new altogether, and forcing you to choose from several alternatives (multiple heaps, temporary stacks, pools, rings, etc), which usually provide amazing debugging capabilities (in development builds).
e.g. Here's one of EA's stack allocators, married with the C++ object model: http://dice.se/wp-content/uploads/scopestacks_public.pdf
 
Back to the STL - as I mentioned, many game-devs avoid the parts that deal with memory allocation (especially if coming from the previous generation of hardware, where total memory was very tight), but even these parts should provide a 1:1 performance with equivalent C code. The catch is that you've got to be experienced enough to know what that C code is. If you're using a fixed-size POD array in C, and vector of non-POD types in C++ that you haven't pre-reserved memory for, then of course you're going to notice a difference because that's apples-to-oranges. This is simply down to the author being decent at C and not experienced enough to write equivalent C++ code.

 

There's also the rest of the STL that has nothing to do with memory management -- such as the algorithms; it's still handy to have sorting, searching, set logic, etc. available out of the box... though people don't really use C or C++ because of their great standard libraries. They use them because they let you talk to OSs and hardware, and because they keep you at a level where you can still make a decent guess about what the resulting asm will look like.

 

On that note, ASM is pretty much never used in game engines. If something is so performance-critical that you'd resort to lovingly hand crafting ASM, usually just using intrinsics from C/C++ is enough (and perhaps actually portable across CPU's). Intrinsics can actually be faster, because they're understood by the optimizing compiler, and thus can be glued into the surrounding high level code better.

 

Regarding C vs C++, C certainly has its merits, so I wouldn't dismiss it... But even if you ditch the STL, RTTI and exceptions completely, you still get:

  • Templates, which let me write code that's simpler, more maintainable and faster -- though they also let you shoot your foot off on both of those fronts...
  • RAII, which formalizes C's error-handling practice and greatly simplifies error-handling code, formalizes C's memory lifetimes by letting you tie heap lifetimes to stack lifetimes, and enables many debugging techniques to boot -- I seriously can't emphasize enough how amazing RAII is.
  • Dynamic dispatch in the rare case that I need it -- and if you really do need it, the asm generated by C++'s virtual is likely far better than the equivalent written by hand in C; on the flipside, it's tempting to overuse this feature.
  • Template metaprogramming, which can be the devil's work, creating unreadable code that explodes your compile times... but also enables amazingly simple binding systems for scripting languages, reflection, etc...
  • Much better support for OOD, which is often overused badly, but is solid as a rock when used properly.
  • Much cleaner math code via operator overloading, which can still be compiled into great SIMD asm.
  • Access specifiers (public/private) to make class invariants more explicit -- I shouldn't have to explain why private is a good thing™.
  • Almost everything in C99 too.
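Since RAII is the one I'd emphasize most, here's it in one screen -- a made-up `Lease`/`Counter` pair standing in for any resource, showing that every exit path (normal return or exception) runs the cleanup:

```cpp
#include <stdexcept>

struct Counter { int open = 0; };   // stand-in for "resources currently held"

// RAII: acquire in the constructor, release in the destructor. The resource's
// lifetime is tied to the stack scope, so no path can leak it.
class Lease
{
public:
    explicit Lease(Counter& c) : m_c(c) { ++m_c.open; }   // acquire
    ~Lease() { --m_c.open; }                              // release, always
    Lease(const Lease&) = delete;                         // no accidental double-release
    Lease& operator=(const Lease&) = delete;
private:
    Counter& m_c;
};

void UseResource(Counter& c, bool fail)
{
    Lease lease(c);                        // acquired here
    if (fail)
        throw std::runtime_error("boom");  // destructor still runs during unwind
}                                          // ...and here on the normal path
```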

The downside of C++ is knowing when to restrain yourself from writing bad code, because the language makes it so easy...
 

I don't really understand the compile-time argument. Surely you guys use a build system that only compiles the source units that changed since the last build, right?

If LTCG is enabled (e.g. in optimized builds), then unfortunately the linker stage can take over a minute, regardless of how much code has actually changed.
Game studios usually have 3 build profiles - full debug, internal development (debugging features, but optimized compiling), and retail (no debugging features, fully optimized compiling and linking).

 

Game engines typically avoid "typical C++ bullshit" that you might find in academic code-bases.

Man... it always depresses me when you use that link, because to this day I still have no idea what's going on.

Should we make a new thread to discuss it? Mike Acton was being deliberately smug and esoteric when he wrote it, I think.




#5167804 Patenting an Algorithm?

Posted by Hodgman on 19 July 2014 - 08:58 AM

You could try and apply for a patent on the method of preparing lemonade / the method of selling a drink via a stand... but hopefully the patent office would reject your application because it's too obvious.

 

Unfortunately, this doesn't happen with software patents. There's millions of really obvious algorithms and data structures that are actually covered by patents -- everything from the linked-list, to sending emoticons over a network, to the 16:9 aspect ratio, to the visualization of a 12-sided dice.

Thankfully most of these patents are held by large corporations who just hoard them as weapons of mass destruction, in a kind of cold war against other corporations. If one of them decides to sue over a patent violation, the target can pull out a thousand of their own patents and launch counter-suits, as a kind of MAD...

 

The actual recipe itself, as it's written on paper is covered by copyright. If I copy your recipe and start up my own stand, there's not much you can do about it (unless you have a patent)... But if I photocopy your recipe and publish it in my own cookbook, then I've committed copyright infringement.

 

Likewise, I can implement any non-patented algorithm as code myself, but I can't just copy and paste someone else's code.





