Yann L

Member Since 06 Feb 2002
Offline Last Active Mar 30 2012 02:53 PM

#4853468 Deus Ex: Human Revolution?

Posted by Yann L on 24 August 2011 - 06:59 PM

*mumble* regional restrictions *mumble* *mumble* two days *grumble*

That. 22 hours to go.

NEED TO PLAY NOW... Grrrrr...


#4851994 DirectX vs OpenGL ?

Posted by Yann L on 21 August 2011 - 01:54 PM

As for OpenGL vs. Direct3D 9, I kept reliable records.
My records are based off the construction of my engine from the very start, when the engine was nothing more than activating a vertex buffer, an optional index buffer, a shader, and 0 or 1 textures.
Both the Direct3D and OpenGL pipelines were equally this simple, and both fully equipped to eliminate any redundant states that could occur from this primitive system.

My results.

I'm sorry to be so blunt, but your results are bogus from a performance analysis point of view.

First of all, you are comparing benchmarks in frames per second, i.e. in non-linear space. Read this article for an explanation of why this approach is flawed.
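To see why, convert fps to frame time. A minimal sketch (numbers chosen for illustration only):

#include <cstdio>

// fps is the reciprocal of frame time, so equal-looking fps deltas
// correspond to wildly different amounts of real time.
int main()
{
    const double pairs[][2] = { { 4000.0, 3500.0 }, { 60.0, 55.0 } };
    for (const auto& p : pairs)
    {
        double deltaMs = 1000.0 / p[1] - 1000.0 / p[0];
        std::printf("%.0f -> %.0f fps costs %.3f ms per frame\n",
                    p[0], p[1], deltaMs);
    }
    return 0;
}

A 500 fps drop from 4000 fps costs about 0.036 ms per frame; a 5 fps drop from 60 fps costs about 1.5 ms, over forty times as much.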

Second, you are benchmarking at far too high a framerate. As a rough guideline, any benchmark that includes FPS rates above 1000 is useless. The frame time at 1000 fps is 1 ms. No API is designed to operate at this framerate. You will run into all kinds of tiny constant overheads that can affect performance in unpredictable ways, and you don't leave the GPU any time to amortize them. For solid benchmarking results, you must do much heavier work. Not necessarily more complex, just more. Get your frame times up and the constant driver and API overheads out of the measurement. Benchmark your API within the range it is supposed to operate in. And be sure you know what part of the pipeline you're actually benchmarking, which leads us to the next point.
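A sketch of what that means in practice, measuring average frame time over many frames instead of instantaneous fps. renderFrame() is a placeholder for your actual workload, including the buffer swap or a glFinish:

#include <chrono>
#include <cstdio>

void renderFrame(); // placeholder: submits a realistically heavy workload

void benchmark(int frames)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    for (int i = 0; i < frames; ++i)
        renderFrame();
    const auto end = clock::now();

    const double ms =
        std::chrono::duration<double, std::milli>(end - start).count() / frames;
    std::printf("average frame time: %.3f ms\n", ms);
}

Averaging in frame time space over a long run is meaningful; averaging fps values is not.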

Third, benchmarking a graphics subsystem is much more complex than spitting out a single number. What part of the pipeline are you benchmarking? Do you actually know? In your case, you are essentially always benchmarking the API overhead: function calls, internal state management, maybe command buffer transfer. In other words, you are benchmarking the CPU, not even the GPU! Read this document. Although a bit dated (it doesn't represent modern pipelines very well anymore), it gives a good basic overview of how to measure performance in a graphics system. A real engine is going to be bottlenecked on the GPU, so that is what you should actually measure.
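One way to answer that question on the OpenGL side is a timer query (ARB_timer_query, core since GL 3.3), which measures GPU execution time independently of the CPU wall clock. A rough sketch, assuming a valid GL context; drawScene() is a placeholder:

GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED, query);
drawScene(); // the workload under investigation
glEndQuery(GL_TIME_ELAPSED);

// This blocks until the GPU has finished - fine for benchmarking,
// not something you'd do in a shipping frame loop.
GLuint64 elapsedNs = 0;
glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs);
const double gpuMs = elapsedNs / 1.0e6;

If gpuMs stays far below your CPU-side frame time, you are measuring API and driver overhead, not the GPU.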

In conclusion, it is important to learn how to properly conduct meaningful benchmarks before making claims based on flawed data.

As of the latest DirectX API, there is no contest. And unless OpenGL can make a 5-fold improvement on their speed without multi-threaded rendering, there never will be, compared to DirectX 11.

This does not make any sense. Again, read up on bottlenecks. Even if an API reduced its overhead to zero (which is partially possible by removing the API entirely and talking to the hardware directly, as is done on many consoles), the final impact on game performance is often very small. Sometimes it's not even measurable, if the engine bottleneck is on the GPU (which is almost always the case with modern engines). The more work is offloaded to the GPU, the less important API overhead becomes.

The much more important question, which can indeed make a large difference between APIs and drivers, is the quality of the optimizer in each API's native shader compiler.


#4847392 DirectX vs OpenGL ?

Posted by Yann L on 10 August 2011 - 05:29 PM

But take a look at a Hello World Example of both before you make a decision! ;)

A Hello World example is about the worst possible way to judge an API or language.

My experience was that OpenGL (1.X though) was waaaay easier to get up and running and rendering a few triangles, but actually working with it was a pita compared to working with Direct3D (9.0c). It was mostly small things, such as math libraries, texture loading and font rendering IIRC. It was several years ago, but I can imagine some of it being the same due to cross platform and licensing issues.

It's neither of these. It's just that these things are out of the scope of a graphics API. They never were, and never will be, part of OpenGL. Note that D3D11 (and partially D3D10) went the same way, essentially removing all the bloated D3DX stuff. The only really useful remains were moved to XNAMath, which became largely API independent.

Maybe someone with more (recent) OpenGL experience can clarify the following?
* Is there a math library in OpenGL for matrices/vectors/planes/etc?
* Can you load textures in png or jpeg formats without using external libraries such as DevIL?
* Is there any simple way to render a string in OpenGL? I seem to recall there being some function to draw a single char using a supplied bitmap font..?

No, no and no. The first (lack of a math library) may be debatable, but everything else should never be part of a graphics API. Keep in mind that the main focus of a competitive graphics API is not beginner friendliness, but flexibility, performance, scalability, standardization and ease of driver implementation for IHVs. If you are looking for a beginner friendly framework, then both D3D and OpenGL are the wrong choice. A third party graphics or game engine is a much better solution for such scenarios.
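For reference, a sketch of what the third party route typically looks like; GLM and stb_image are just common community choices here, not part of OpenGL, and this assumes the GL headers and a context are already set up:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include "stb_image.h"

void loadAssets()
{
    // Math: build a projection matrix without any fixed function calls.
    glm::mat4 proj = glm::perspective(glm::radians(60.0f),
                                      16.0f / 9.0f, 0.1f, 1000.0f);

    // Image loading: decode a PNG/JPEG into raw RGBA pixels...
    int w, h, channels;
    unsigned char* pixels = stbi_load("texture.png", &w, &h, &channels, 4);

    // ...and hand them to the API yourself.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    stbi_image_free(pixels);
}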


#4837803 Unlawful stuff going on...

Posted by Yann L on 19 July 2011 - 11:41 PM

On a final note - and I'm not accusing you of this - every time I've ever heard someone defend a product's prospects by pointing to brand powers, it's a sign of impending doom.

Global brands are unbelievably valuable. Sometimes a single brand name can make up over 75% of the entire value of a company. Take a look at the Interbrand 2010 top 100 brand list (Interbrand is the world's largest brand analytics company).

The value of the "Microsoft" brand is estimated at about $61 billion. Yes, just the name. The Google brand is just below at about $43bn.

For mass consumer goods (and MS products like Windows fit this category), brand is everything. Products rise and fall with their brands. Second comes price. Quality comes maybe third, probably lower. The average consumer wouldn't have a clue how to judge the quality of an OS. Heck, he probably doesn't even know what an OS is to begin with.

Why did Linux fail on the mass market? Besides the technical issues, which are both over the head of and irrelevant to Joe Average, who just wants to do basic home office work, pirate movies and download porn, Linux has an unbeatable price point. Yet it failed miserably. Why? Because it failed to establish brand recognition. No one knows Linux. Everybody knows Microsoft, Apple or Google. Why are the iPod, iPhone and friends so incredibly successful despite being technically mediocre? Brand.


#4837771 Unlawful stuff going on...

Posted by Yann L on 19 July 2011 - 09:10 PM

How is having Windows own the OS market not a "free market"? If you can bring a team together and write a better OS, I'm sure yours would become a standard...

Because it's not always that simple; it's not a black or white situation. We don't have a truly free market. We have a regulated free market economy, where the amount of regulation varies from one country to another. Truly unregulated markets would lead to complete corpocracies, as every company would strive for full market domination and abuse its position of power to eliminate competition in unfair or illegal ways. In our regulated free market economy, we have antitrust laws to prevent this. Microsoft has certainly violated competition laws on several occasions, but so have many (most? all?) multinational companies to some degree, and not only in the software sector.

Anyone who thinks that a superior product will automatically lead to market domination has no idea how economics works. Brand recognition is much, much more important than product quality. Thinking that you can commercially outperform Microsoft, Apple, Sony, Walmart, ExxonMobil, Vivendi and co. by putting a better product on the market would be entirely delusional, unless you have the same amount of financial resources to invest in marketing. But this is not really the fault of our "free" market; it's an inherent property of our Western style consumer driven societies. Which would lead to an entirely different discussion.


#4837750 Unlawful stuff going on...

Posted by Yann L on 19 July 2011 - 08:22 PM

Windows acts like a standard, except that only one company gets income from it. If you want to benefit from that 90% you have to buy Windows and only Windows. If you decide to go for the other 10%, you will not be able to use many software products, but only some created by companies that decided to cover that 10% too.

Yes. And where is the problem here? You're free to travel using a horse drawn carriage instead of a car. But if you do, don't complain that you can't use the highway.

Windows is indeed a de facto standard. And that is not only good for Microsoft; it is also good for developers and for end users. Developers can target one single, standard, trustworthy platform backed by one of the largest software companies in the world. End users can trust the Windows and Microsoft brands, they get higher quality software certified to run on their OS without confusion (the counter-example would be the 100 billion incompatible Linux distros), and they get the look and feel they are comfortable with.

In fact, I am very happy that an organization like Microsoft owns this standard. They are a reliable, open and developer friendly company. At one end of the spectrum is Apple, with their consumer-centric but extremely closed and developer-unfriendly environments. At the other end is the open source alternative, a complete mess of randomly glued together pieces of code of highly varying quality, dubious licenses and without any kind of commercial accountability. So yep, it feels quite good in the middle with Microsoft.

If we refer strictly to PCs, most games "require Windows". If you want to play a specific game for PC then you must buy Windows. If Microsoft decides to make more money, they will release a new version of Windows, developers will be forced to release products to comply with the new "mood" and the end-user will have to pay the new Microsoft "tax".

Microsoft is the only OS developer that literally bends over backwards to provide full backwards compatibility for basically every technology they have ever released. Want to target Win32, MFC or DX9 while developing on Windows 7? No problem, the same binary will run perfectly on anything from the latest Windows (and even on not yet released future versions!) back to the more than ten year old XP. Try that with OSX or *gasp* Linux...


#4837198 why is C++ not commonly used in low level code?

Posted by Yann L on 18 July 2011 - 10:21 PM

Indeed, I am a shit C++ programmer

And that is exactly the problem. How can you make statements about a language you admittedly have little expertise in? The "benchmark" example you provided earlier is the perfect example of that. You were comparing two pieces of code that had nothing at all in common except for outputting text. That text editor written in C is faster than that protein-folding simulator written in C++; obviously this must mean that C is better...

In fact, this is something one can observe in almost all C vs C++ discussions. The most rabid C defenders (explicitly including Linus Torvalds) actually have very little knowledge of C++, which sometimes leads to almost absurd arguments.

which doesn't take away the fact that C++ is bloaty when compared to C (yes, I provoke on purpose. Prove me wrong :P)

Prove yourself wrong by learning C++ before making factually incorrect statements about it. Counter arguments to your points are trivial common knowledge to every semi-competent C++ programmer.


#4836054 how to debug glsl shader

Posted by Yann L on 16 July 2011 - 11:30 AM

Is transform feedback a possible method?

Yes it is. In fact, that's what glslDevil uses AFAIK.

Another possibility is to channel the VS/GS data under investigation to a special debug fragment shader, which then writes the results into an FP32 target. Sometimes the geometry can be rendered as points in order to avoid problems with the rasterizer's interpolators, or a flat interpolation mode can be selected.
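A minimal example of such a debug shader, here as GLSL embedded in a C++ string; DebugValue stands for whatever vertex shader output you are inspecting, and the render target is assumed to be an RGBA32F attachment:

// Route the value under investigation straight into the FP32 target,
// then read it back on the CPU with glReadPixels.
const char* debugFragmentShader =
    "varying vec3 DebugValue;                  \n"
    "                                          \n"
    "void main()                               \n"
    "{                                         \n"
    "    gl_FragColor = vec4(DebugValue, 1.0); \n"
    "}                                         \n";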


#4835862 Negative Reputation

Posted by Yann L on 15 July 2011 - 07:49 PM

Yet more political correctness and Facebookification for GDNet. Yay. No hate, love and likes for everybody! Group hug! Ugh.

I mean seriously, if you feel that watering down the rating system that much is necessary, then the best thing might be to just remove it entirely. I really don't see any sense in this "Like" stuff, as much as it may currently be the hype thing to do, with all that social media craze around.

Michael brought up the point that the system would not limit consecutive downrates from user A to user B. That's a very valid concern. But how would the modified system prevent user A from consecutively "liking" (God, I hate that term so much) user B, with B being his friend/alter ego? Abuse can go both ways. If fixing the feature is not an option, then shut it down until it can be fixed. Don't replace it with a dumbed down happy version for the Facebook/Twitter/whatever generation.

In the light of all this PC feel-good thing, I sometimes miss the evil Oluseyi from the times before he decided to become a good guy :)

Oh well.

Edit:

If I want facebook, I'll login into facebook and discuss this in the GD.Net facebook fan page.

We have a GD.Net facebook fan page?! Please tell me this is a joke.


#4835703 Why are there so many bad installers out there? and Windows questions on Syml...

Posted by Yann L on 15 July 2011 - 10:29 AM

About Google and Adobe etc. installing their crap just where you want (and installing what they want, too, without asking), this is easily answered: There is a massive misunderstanding from your side. You believe because you paid money for the computer, you are the owner. That's of course fundamentally wrong.
Google, Adobe, and Gigabyte own your computer, and as a user you are too stupid to decide what's good for you anyway. Therefore, they not only have every right of installing programs where they want, and secretly installing services that run at startup and send data over the internet, it is even their civil duty to protect you from yourself.

Sarcasm aside, the fact of the matter is that install location is an irrelevant technical detail that will overwhelm 99% of all non-technical users. It is also a way to phenomenally screw up an install when you don't know what you're doing (keep in mind that we technically literate users are a small minority). Why increase the workload on support lines and make your main user base confused and unhappy just to please maybe 0.1% of power users? Not asking for an install location makes perfect sense for an installer. You wouldn't believe how many people try to install software onto a DVD drive or USB key without even realizing it.

What doesn't make perfect sense is the whole installer framework on the OS side. It should not be the responsibility of the installer to decide on the install location. All of that should be handled transparently by the OS, configurable through policies for power users and/or administrators.

As a sidenote, this problem is much, much worse on Linux. It is almost guaranteed (and even accepted practice) that even the smallest piece of software will scatter its files to more or less fixed locations all around your file system as soon as you do a make install. OSX, on the other hand, takes the completely opposite approach: an application is a sealed, self contained package that can be moved around at will.

Am I not right in thinking that as of Windows 7, Microsoft is now actively encouraging programmers to install applications to the Program Files folder? I believe only applications installed in Program Files can access or modify resources within their own folder; if they are installed to, say, C:\MyApp, then the MyApp folder would be read only, and the user would receive a permission prompt if the application decides to write to this folder.

MS has indeed created strict guidelines on where code and user files should go. That is a good thing. But unfortunately not all developers adhere to these guidelines, and MS needed to provide backwards compatibility. Basically, no one should ever write to Program Files except during install, update or uninstall. In normal operation, this location should be strictly read only. And that is indeed enforced by Windows, if you tell it that your software adheres to the guidelines (using the application manifest). If you don't let it know that you're a good boy, then Windows will run your app as legacy code, virtualizing part of the filesystem. Your code may think it writes to Program Files, but it actually doesn't.
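For reference, the relevant fragment of such a manifest; the presence of a requestedExecutionLevel element is what tells Windows your application follows the rules, and it opts the app out of the legacy virtualization:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- Declaring an execution level disables file/registry
             virtualization for this application. -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>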

The single correct way to install software in my opinion would be to not install it at all. Installing in this case would mean to copy the folder containing the program from the CDROM to some location on your harddisk or unpacking a zip file that you have downloaded.

Shared libraries are in the same folder as the executable from where the OS will load them. Only libraries that are not found there (e.g. the system-internal libraries) are loaded from the system folder. ---> every program has the correct versions of all shared libraries, no tampering with operating system files needed.

That's basically the way OSX does it.


#4826931 Internet Bandwidth

Posted by Yann L on 23 June 2011 - 12:47 PM

I'm with Luckless. I wish all ISPs were controlled by the government and more focused on building the best infrastructure for people.

Since when do governments act in the interest of the people, rather than, say, large lobbying organizations? Like big ISPs, for example. Oops.


#4820204 Debate me about the bible

Posted by Yann L on 06 June 2011 - 12:37 PM

What you're saying makes sense if you equate success with technological advancement and knowledge acquisition. If God's goal is to create a loving relationship with people, I would say it's rather irrelevant, more likely it's contrary to that kind of success. I've seen that from experience during mission trips. Oftentimes the simplest conditions create the happiest people. Secularism has proposed similar ideas, Fight Club comes to mind, "The stuff you own ends up owning you." Sometimes I think if the world went to hell and everyone had to farm, hunt, and watch out for their neighbor to survive, Xanax would be irrelevant.

Antibiotics, i.e. not dying from a simple wound infection, would not be irrelevant. Access to clean water and food wouldn't be. Having a life expectancy over 35 would be quite relevant. Not being enslaved by your neighbor because he happens to belong to a tribe more powerful than yours would also be quite nice. We take all these things for granted, at least in the Western world. It's technological advancement, knowledge acquisition and social evolution that brought us all that, not a personal relationship with a deity. Maybe some technological achievements don't contribute to happiness. You don't really need an iPhone, a flatscreen TV and a sports car to be happy. You could certainly live a very happy life in a pre-industrial civilization. But some parts of technology do absolutely and objectively increase your well-being, especially those related to the medical sector.

The day that people create the technological breakthrough to live forever is the day that suicide rates start to climb exponentially.

I'm not sure artificially induced eternal life would increase suicide rates. It would depend on the type of environment you were living in and how adaptable it was. Given the digital consciousness version, we would probably progressively modify the way we think and the way our minds work in order to fit this new society. Essentially artificially guided evolution. And who says that the eternal life religions promise would not also lead to increased suicide rates (if such a thing were possible there), because some people could not cope with it any longer? If you say that they will be eternally happy because they're close to their god, what is the difference from eternally pumping large amounts of serotonin and dopamine into your brain (or doing the digital equivalent to an AI)? After all, the feeling of happiness is just a biochemical reaction.

I fail to see why this should allow us to exalt ourselves before God, assuming you believe in that.

I'm a pragmatic agnostic, so I don't believe in a god, at least not in the form described by any major religion. I believe that whatever entity created the universe (if any) did not meddle with it after its initial creation. Everything humanity has achieved, we achieved on our own. While this doesn't give us the right to put ourselves above some god, we shouldn't put ourselves under one either. I equate divinity with knowledge, and as always, everything is relative. I think that given 'enough' knowledge, we can become gods ourselves.

People believing in god have experienced something that agnostics have not. Whether that "something" is being talked into it by missionaries or a spiritual revelation, what matters after all is the faith: because faith can get you through life much better than apathy.

How can you say something like this? This holier-than-thou attitude is really off-putting. How can you know that you experienced something that we lack? Maybe it is the other way round? You don't need faith to lead a happy and fulfilling life. And there are many things in life besides apathy and faith in a deity. Believing in yourself, in your own abilities and in those of your loved ones, for example.


#4819605 1 VBO - multiple textures?

Posted by Yann L on 04 June 2011 - 07:18 PM


#extension GL_EXT_texture_array : require

// One texture array bound to a single sampler; the layer to sample
// is selected via a uniform, so one VBO can use many "textures".
uniform sampler2DArray Texture;
uniform float Select;

varying vec2 TexCoord0;

void main()
{
    gl_FragColor = texture2DArray(Texture, vec3(TexCoord0, Select));
}




#4819261 The Transparency Conundrum

Posted by Yann L on 03 June 2011 - 05:37 PM

There are many different approaches to OIT; depth peeling is only one class of algorithms. Just Google for order independent transparency, there are myriad papers out there.

You can peel from the front, from the back, from the front and back at the same time, or even multiple layers in batches. The number of peeled layers can be dynamically adjusted to what is needed for a particular frame, so as not to overpeel.
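A rough sketch of the classic front-to-back variant; all helper names here are placeholders, not a real API:

// Each pass extracts ("peels") exactly one transparency layer by
// discarding fragments at or in front of the previously peeled depth.
for (int layer = 0; layer < maxLayers; ++layer)
{
    bindLayerTarget(layer);               // color + depth for this peel
    if (layer > 0)
        bindPreviousDepth(layer - 1);     // depth map of the last peel

    peelShader.use();
    drawTransparentGeometry();            // one big batch, no CPU sorting
}
compositeLayersBackToFront();             // blend the peeled layers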

On modern DX11 hardware you can do the transparency sorting per-pixel, directly on the GPU, without any multipass peeling.

And multiple passes are not a big deal either. GPUs are really good at that. However, GPUs are really bad at handling lots of small geometry batches submitted via the CPU. And the latter happens easily if you have slightly more complex transparent objects in your scene. In fact, CPU-sorted transparent geometry is the worst case scenario for batching, because you need fine spatial granularity in order to get good sorting results. OIT doesn't suffer from this issue: you can just send all your transparent geometry in big GPU-friendly batches, just as you would (and should) do with opaque geometry.

In conclusion, if the complexity of the transparent geometry in your scene is anywhere above the occasional window glass quad, then OIT is definitely worth a try. On modern GPUs, it is really damn fast. And the quality is in a totally different league from the CPU sorting approach, both in terms of spatial resolution (intersecting transparent faces, transparency-in-transparency, etc.) and in terms of the actual shading you can apply (since you are no longer limited to the old fixed-function blending).

Just go ahead and try it. The basic algorithm is really easy to implement. And if it looks good to you (it will!), then you can use more advanced algorithms to speed it up. I shiver at the thought of having to implement a CPU sorted approach ever again. It feels so primitive compared to OIT, a bit as if you had to write a renderer without a z-buffer.


#4818727 The Transparency Conundrum

Posted by Yann L on 02 June 2011 - 10:48 AM

Graphics hardware has now advanced to a point where OIT (order independent transparency) is often just as fast as, or even faster than, the age-old CPU presorted transparent geometry. Added benefits include pixel perfect transparency in all cases, the availability of much more advanced programmable blending, and the possibility to easily add thickness dependent effects like refraction or fogging.

I switched my engine to OIT some time ago, removing all that distance sorting crap. I never looked back.



