noodleBowl

Intel gives better FPS than NVidia?


I got to the part where I finally started to rewrite my spritebatcher and I noticed something strange.

My Intel HD 4600 was returning more FPS than the dedicated GTX 850M card. I tried a plain window test where the window was 800x600 and I was drawing nothing (the only things happening were glClear and the SDL swap-buffer call).

 

The Intel gives me 4.5K FPS on average, while my NVIDIA card gives me only 1.8K?

I also tried a 3DMark test (I think this uses DirectX) and the NVIDIA significantly outperformed the Intel card, so I'm not sure what's going on.

 

I have tried the newest drivers for the NVIDIA card, and old NVIDIA drivers too, with the same results.

Has this happened to anyone? Anyone have a solution?

 

This is a laptop with Optimus, just in case that matters.

Edited by noodleBowl


FPS is irrelevant once the numbers get higher than a couple of hundred. Display (1000000 / FPS) instead and compare microseconds per frame.

Better yet, display (1000000 / 60) - (1000000 / FPS) to see how many microseconds of headroom you have before missing 60 FPS (use 120 if you target 120 Hz displays). To compare your graphics cards, compute ((1000000 / 60) - (1000000 / FPS1)) - ((1000000 / 60) - (1000000 / FPS2)) for the absolute difference.

Then divide the answer by (1000000 / 60) for the relative difference.
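Plugging the thread's numbers (4.5K FPS on the Intel, 1.8K on the NVIDIA) into these formulas can be sketched in a few lines; this is illustrative Python rather than anything from the original posts:

```python
TARGET_HZ = 60
FRAME_BUDGET_US = 1_000_000 / TARGET_HZ  # ~16667 microseconds per frame at 60 Hz

def frame_time_us(fps):
    """Microseconds spent per frame at a given FPS reading."""
    return 1_000_000 / fps

def headroom_us(fps, target_hz=TARGET_HZ):
    """Microseconds left per frame before missing the target refresh rate."""
    return 1_000_000 / target_hz - frame_time_us(fps)

intel_fps, nvidia_fps = 4500, 1800
diff_us = frame_time_us(nvidia_fps) - frame_time_us(intel_fps)

print(round(frame_time_us(intel_fps)))            # 222 us per frame on the Intel
print(round(frame_time_us(nvidia_fps)))           # 556 us per frame on the NVIDIA
print(round(diff_us))                             # 333 us absolute difference
print(round(100 * diff_us / FRAME_BUDGET_US, 1))  # 2.0 percent of the 60 Hz budget
```

Note that the gap between the two cards is a few hundred microseconds out of a 16.7 ms budget, which is exactly why raw FPS comparisons at these magnitudes are meaningless.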

 

So you have about 333 microseconds difference (556 µs per frame on the NVIDIA vs 222 µs on the Intel), which is roughly 2% of a 60 Hz frame budget, i.e. noise.

So even if your Intel card were a million times faster than your NVIDIA card inside that 2%, but 50% slower at everything else, it would still end up roughly 49% slower overall in reality (as in 3DMark). I assume it's actually worse than 50% slower in reality; this is just an example.

 

You're only measuring noise.

Draw a model with a couple of million vertices (in a properly set up vertex-buffer in GPU memory) and preferably an expensive fragment shader that covers most of the screen, and you will be able to measure graphics performance.

Edited by Erik Rufelt

I'd just continue developing and measure every once in a while; in the end you'll get a more representative picture. When you do, any total frame time under 16.67 ms is fine (60 FPS). At that point, I'd guess the frame time on the Intel will be higher than on the 850M (depending on the complexity of your scene, shaders, etc.).
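"Measure every once in a while" can be as simple as keeping a rolling average of frame times and checking it against the 16.67 ms budget. A minimal sketch, in illustrative Python standing in for whatever language the engine uses (the `FrameTimer` name and the window size are invented):

```python
from collections import deque

class FrameTimer:
    """Rolling average of the last N frame times, in milliseconds."""

    def __init__(self, window=100):
        self.samples = deque(maxlen=window)  # old samples fall off automatically

    def add(self, frame_ms):
        self.samples.append(frame_ms)

    def average_ms(self):
        return sum(self.samples) / len(self.samples)

    def meets_60fps(self):
        # Any average under the 16.67 ms budget keeps 60 FPS.
        return self.average_ms() < 1000 / 60

timer = FrameTimer()
for ms in [14.2, 15.8, 13.9, 16.1]:  # simulated per-frame timings
    timer.add(ms)
print(timer.average_ms(), timer.meets_60fps())  # 15.0 ms average, within budget
```

Averaging over a window of frames also smooths out exactly the kind of noise the earlier replies describe, instead of reacting to a single instantaneous FPS reading.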


This is normal.
 
You essentially have zero workload here, you're just measuring the cost of clearing the framebuffer and swapping buffers, nothing more.
 
On Optimus systems the final output always happens via the Intel card, irrespective of which GPU you have selected. See the NVIDIA Optimus whitepaper at http://www.nvidia.com/object/LO_optimus_whitepapers.html for more info; the relevant section is on page 11.
 
So long as you have essentially zero (or at least incredibly low) graphics workload it should therefore be expected that the Intel would run faster; using the NVIDIA incurs an additional overhead of routing your graphics through two GPUs, two sets of drivers, and all of the extra overhead that goes with that.
 
When the graphics workload starts increasing the opposite becomes the case as the graphics workload is now the dominating factor in your frame times, and therefore using the GPU that handles it faster will show the greatest performance.
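A toy cost model makes the crossover concrete. All numbers below are invented for illustration; the only point is that a larger fixed per-frame overhead (the Optimus copy) is swamped once rendering work dominates:

```python
def frame_time_us(workload, per_unit_us, fixed_overhead_us):
    """Toy model: fixed per-frame overhead plus work proportional to the load."""
    return fixed_overhead_us + workload * per_unit_us

# Invented figures: the NVIDIA path pays an extra Optimus copy every frame,
# but gets through the actual rendering work five times faster.
INTEL = dict(per_unit_us=0.5, fixed_overhead_us=200)
NVIDIA = dict(per_unit_us=0.1, fixed_overhead_us=500)

for workload in (0, 1_000, 100_000):
    intel = frame_time_us(workload, **INTEL)
    nvidia = frame_time_us(workload, **NVIDIA)
    winner = "Intel" if intel < nvidia else "NVIDIA"
    print(f"workload={workload:>6}: Intel {intel:>8.0f} us, "
          f"NVIDIA {nvidia:>8.0f} us -> {winner} wins")
```

At zero workload only the fixed overheads are visible, so the Intel "wins"; as the workload grows, per-unit throughput takes over and the NVIDIA pulls far ahead, matching what the 3DMark result showed.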


Now that I know what is going on at the hardware level it's interesting, but also a little disheartening. That it just does a pass-through at the end instead of a direct display kind of makes me sad.

Now that I know what is going on at the hardware level it's interesting, but also a little disheartening. That it just does a pass-through at the end instead of a direct display kind of makes me sad.

 

It shouldn't make you sad. You're getting over 1,000 FPS on both GPUs under zero workload. Go worry about some real problems.


Now that I know what is going on at the hardware level it's interesting, but also a little disheartening. That it just does a pass-through at the end instead of a direct display kind of makes me sad.


The alternative would be terrible. You'd only be able to use the discrete GPU in 'true' fullscreen mode, or you'd be forced to run your entire desktop on the battery-draining discrete GPU just to get one window drawn with it. I for one enjoy being able to use it in fullscreen-borderless and in windowed modes while letting the majority of my desktop use the lighter and more energy-conscious integrated GPU. I remember the days when laptops with dual GPUs had a physical switch you had to toggle, followed by a reboot, in order to change the active GPU; I want those distant memories to remain distant.

The pass-through might not even explain the performance you're seeing. An analogy that I find applicable to _many_ things in software development is to think of a Formula 1 car and a Kenworth eighteen-wheeler. If you need to move a tiny bit of data (cargo) as quickly as possible, the race car is the obvious choice. If you're trying to move a lot of cargo, the Kenworth will need fewer trips to get the job done, even though a single trip takes longer. And if you really want to muddle the analogy, think about performance-per-watt and toss in a Tesla.
