I played around with this some time ago. Interesting idea, but since it depends quite a bit on a particular GPU memory architecture, this is a nightmare to push to ARB status (read: won't happen).
I am in a similar situation as Hodgman, the performance improvements I got on a quick real-world test were close to zero, which basically stopped any further consideration of an NV-only path right there. It aims to solve a bottleneck which doesn't exist in my engine (I'm heavily shader limited). YMMV.
Can't really say much on the topic, not being Christian nor knowing the Bible and all that, but I think that when a religion promises paradise by requiring the mindless following of some rigid dogma, then something is fundamentally wrong.
If you lead your life according to some basic and fundamental ethical guidelines, like not voluntarily harming others and trying to make this place a better place for all (within your possibilities), then a loving god is gladly going to accept you regardless of your beliefs, whether or not you read some old book or visited temples or churches. It's just basic logical reasoning. If you believe that our world was created by that god, then he must be a quite rational and logically thinking fellow.
I am afraid I won't be able to, as I do not currently have a PhD in graphics programming. Nor do I meet any of your other royal standards.
No need to get defensive. You are planning to write what basically amounts to a significant amount of teaching material targeting beginners. Well written teaching material can be an excellent resource, while badly written material (often due to lack of knowledge) is highly counterproductive. Unfortunately, most online tutorials fall into the latter category. As such, it is absolutely valid to ask about your own expertise on the subject. If someone offered to teach me how to fly an airplane, I would inquire about his license first.
we need an MSDN equivalent for modern OpenGL.
No. Are you under the delusion that OpenGL is so complex that it requires professional experience simply to explain it to beginners?
Yes, absolutely. Significant experience is a prerequisite for teaching any form of advanced concept to a beginner. No need for a PhD, but if you don't know the material inside-out, then you have no business teaching it.
Adding more crap to the pile of crap won't help, and waiting for someone else to do it is lazy. What harm could come from making another website that wouldn't anyway come from bogging down an already unhelpful website?
You mean like NeHe, for example? Spreading incorrect information, even unintentionally, is worse than not spreading any information at all. I am not saying the OP is in this position; that's why I inquired about his experience with OpenGL.
This is your first post on these forums and we don't know your level of expertise on the subject. Can you provide some references? How many years have you been working professionally with OpenGL? Do you have a proven track record in this domain, and have you published anything? Were/are you involved with the ex-ARB, Khronos or one of their major contributors? Are you working on the GL driver development team of a major GPU manufacturer or of an open source driver? Were you directly involved in the development of an OpenGL based rendering backend for a major game, application or middleware?
If the answer to all of these questions is no, then you shouldn't be writing "another modern OpenGL guide".
Sorry for being so blunt, and I applaud your motivation. But the answer to your title question is no, we don't need another such guide. What we do need is a single, authoritative development and learning resource for OpenGL, published and maintained by an officially endorsed and highly qualified organization. We don't need another open wiki, we don't need another gazillion tutorials, we don't need to "beat" NeHe. We need Khronos to finally get their stuff together and release an actual OpenGL SDK and updated human-friendly documentation (i.e. not the extension spec text format). In other words, we need an MSDN equivalent for modern OpenGL.
So if you have some good ideas, then you should try to contribute to the more or less 'official' OpenGL wiki instead, helping to bring it out of its current desolate state.
Thanks for all your answers. I figured out that the driver doesn't support the needed extension.
Linux Intel drivers use Mesa to provide OpenGL support. Mesa, being open source software, is not allowed to use floating point textures, as this is a patented feature. You can compile the newest mesa with a specific flag to get floating point support in the base package, but the Intel drivers still won't support it at this time. And even if they did, it is extremely unlikely that this flag will be set by default anytime soon by any major Linux distribution, due to legal concerns. So you could only use FP textures on machines where the user has recompiled both Mesa and the Intel driver on their own, specifically adding said compile flag manually. Rather unlikely to happen in practice.
In short, don't expect to get floating point support anytime soon under Linux open source drivers. Linux closed source drivers from NVidia and AMD are not affected by this, and will provide full FP support.
But when you start diverging in such theories, consider the "story" of the aether which had all sorts of magical properties, only to be shot down by Einstein.
And to be 'resurrected' in the form of vacuum energy. Sometimes a wild idea may have some truth to it, but in a completely different way than originally thought.
Anyway, I think the article linked by the OP shows some symptoms of, let's call it 'popular creative journalism'. I was very surprised that it mentioned possible gravitational repulsion, which antimatter does most certainly not exhibit. Of course, as expected, the original CERN paper does not mention gravitational repulsion in any way. The main idea, apart from the obvious analysis of long term trapping, is to cool down the antimatter to a level where observations about the validity of the charge, parity and time reversal symmetries can be made.
Visual C++ has actually implemented additional pessimizations when you use the volatile keyword, because people misuse the keyword as you described.
I wouldn't call that "misuse" per se, in the strict sense of the term. People just use the keyword the way it should have been defined by the standard, and as it is defined in C#. The way the current C++ standard defines volatile is complete nonsense and makes it virtually useless in today's MT environments. It should have been redefined ages ago in favour of a more modern use case. Microsoft simply went ahead and completed that logical step on their own, making C++ volatile behave as it should and compatible with C# volatile behaviour.
As a consequence, using volatile in a multithreading context is perfectly safe when using Visual C++ (MSDN link):
This allows volatile objects to be used for memory locks and releases in multithreaded applications.
FWIW, the same seems to apply to gcc on Linux and OSX in practice.
Of course volatile won't provide synchronisation or atomicity. Still, for certain scenarios, the use of simple unsynchronized shared memory between threads is a perfectly valid and much higher-performance approach than going through OS primitives such as semaphores or CVs.
Seems like you could simply cut the brain out, plug it up to the iron lung, connect electrodes to it, and drop it in a sealed vat of saline solution.
Well, the thing is that you cannot just easily connect it. If connecting nerves to other nerves or to electronics were that easy, then a spinal cord injury wouldn't be a big deal either, rather than putting you in a wheelchair for life. Same thing goes for blood vessels. That said, if you don't connect anything at all except for basic blood supply, then it might be possible to keep a brain alive at least for some time (although not very ethical). Head transplants have been successfully performed, although not yet in humans (according to Wikipedia).
If we figure out how to reliably connect nerves in the future, along with some other technological advances, then brain transplants will certainly become an option.
I personally really enjoy working with math guys when doing graphics programming!
While it can't be entirely generalized, from my personal experience CS / SE graduates have a distinct lack of advanced math background. On the most basic level, 3D graphics is really just linear algebra. Thus many people think that they can get along with learning the basics about vectors and matrices. And often even this knowledge is rather superficial. A surprising number of graphics programmers could not even write a matrix inversion algorithm from scratch. But there comes a point where all this is just not enough anymore. When doing advanced graphics programming, especially the simulation of natural / physical phenomena, math is all that counts. As you go deeper into graphics, you will very often come across monstrous equations, where a good understanding of advanced math is required (especially multivariate calculus). But that's actually the most fun part of the job, if you ask me. Nothing is better than finding an elegant and fast solution to a huge differential equation that runs at 50 fps and produces a gorgeous image!
As far as I see it, if your aim is to get into graphics programming and/or research, then an MS in math is perfectly fine. Usually there is no need for an additional CS degree. In fact, a math degree is preferable in such a field. Many companies (NVidia, for example) will hire a math degree over a CS degree for graphics research positions. 80% of your time in graphics programming will be spent with pencil and paper over your equations and diagrams. The remaining 20% will be spent fighting the shader compiler.