How to determine requirements


I've recently finished a DirectX application that uses Direct3D, and am now struggling with failures of different sorts in various hardware environments, due (I suspect) to inadequate feature support. I need help figuring out how to do one or more of the following:

1) Determine what the hardware/driver requirements of my application are
2) Simply state requirements to prospective users
3) Report to the user why the application doesn't work in a particular configuration
4) Make the application work in more configurations

Using the increased debug output from DirectX and a debug viewer, I was recently able to determine that DirectX is failing for various reasons, from incompatible texture sizes to unsupported display modes. Is there some level of requirement I can put out there that will just make sure everything offered by DirectX will work in a hardware-accelerated environment (not the reference rasterizer)?

I'm actually not doing anything terribly complicated -- it's just 2D stuff. The only things I can think of that I'm doing that might be considered complicated are real-time (2D) matrix transformations, real-time color channel modulation, alpha translucency in textures, and clipping (scissor rect). I was surprised to find that some environments don't allow textures whose size isn't a power of 2, or that support hardware acceleration in full-screen 640x480x16 but not in any windowed mode or any 24/32-bit color modes. I thought DirectX somehow hid that level of complexity from the application now, but apparently not.

How am I going to find all these little nits, and/or find a list of video cards where I don't have to worry about them so I can say "these are the cards I support"? Or what else can I do?

Time for one of my favourite phrases - To assume is to make an ass out of u and me [lol]

Seriously though, Direct3D does provide a lot of abstraction but it is still very flexible based on the capabilities a given chipset exposes. Think about the abstraction more as "if your hardware can do X then it will expose it to this application as Y" - therefore any hardware that supports the feature will be usable in the same way, but it does NOT state that it MUST support the feature...

I wrote about enumeration in the updated forum FAQ and I suggest you go read it. It's simply a case of writing robust and well-designed code - lots of people skip enumeration, and far too many tutorials gloss over it.

Quote:
Original post by BlueMonk
1) Determine what the hardware/driver requirements of my application are
You'll have to go through every enumeration and try and work out the "top level" ones. There will be lots of optional parts, but you'll find that some are more important than others.

Quote:
Original post by BlueMonk
2) Simply state requirements to prospective users
The holy grail, unfortunately - there's no easy way to do this. Go look at your average retail box and you'll quickly realise the minimum/recommended specs are rarely user friendly. Windows Vista's WinSAT tool goes some way towards helping with this though..

Quote:
Original post by BlueMonk
3) Report to the user why the application doesn't work in a particular configuration
Enumeration will tell you the hardware cannot do what you want it to do, so simply output a message after you've failed an enumeration test. Bear in mind that the user may not understand the finer details - why should they know or care about 2^n texture requirements? [smile]

Quote:
Original post by BlueMonk
4) Make the application work in more configurations
Enumeration will reveal which features are and are not available. It is common to write multiple "paths" for different levels of hardware. A simple case might be an "ideal" and "basic" path - the former being where the hardware is super-cool and supports everything, and the latter where almost all effects are disabled and it runs on the bare minimum features. Add as many paths as you want to bother with - more paths = more support = more maintenance.
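That "multiple paths" idea can be sketched as a simple feature-set check. This is an illustrative sketch only - the feature names below are hypothetical placeholders, not real Direct3D capability flags:

```python
# Choose the most capable render path whose required features are
# all present in the device's reported feature set.
# Feature names here are illustrative, not real D3D caps.
IDEAL_FEATURES = {"nonpow2_textures", "scissor_test", "alpha_blending"}
BASIC_FEATURES = {"alpha_blending"}

def pick_render_path(supported_features):
    """Return the best path the device can run, or 'unsupported'."""
    if IDEAL_FEATURES <= supported_features:   # all ideal features present
        return "ideal"
    if BASIC_FEATURES <= supported_features:   # bare minimum present
        return "basic"
    return "unsupported"
```

Each extra path is another branch in the same test, checked from most to least demanding - which is also why more paths mean more maintenance.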

Quote:
Original post by BlueMonk
Is there some level of requirement I can put out there that will just make sure everything offered by DirectX will work in a hardware accelerated environment (not the reference rasterizer)?
Moot point, but the RefRast is only available with the SDK, so it's never an option for production code.

Hardware can be roughly grouped into 6 categories:
  1. Fixed function - usually old Direct3D6 hardware; TNT2s and ATI Rages, IIRC
  2. Hardware Transform & Light - Direct3D7; GeForce 1, 2 and 4 MX and the earlier Radeons, IIRC
  3. Shader Model 1 - Direct3D8; GeForce 3, 4 and Radeon 7x00 and 8x00
  4. Shader Model 2 - Direct3D9; GeForce 5, Radeon 9x00 and X8xx
  5. Shader Model 3 - Direct3D9; GeForce 6, 7, Radeon X1xxx
  6. Shader Model 4 - Direct3D10; GeForce 8, R600


You can therefore determine where yours fits in and try and go from there. Within each 'bucket' the capabilities will be fairly similar and as a general rule of thumb the next most advanced will be a superset of all its predecessors...
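As a sketch, the bucket test mostly reduces to the highest vertex shader version the device reports, plus whether it has hardware T&L. Treat this as illustrative - real enumeration checks many more caps than this:

```python
def hardware_bucket(vs_major, has_hw_tnl):
    """Map a device's vertex shader major version and hardware T&L
    support onto the rough generational buckets listed above."""
    if vs_major >= 4:
        return "Shader Model 4 (D3D10)"
    if vs_major == 3:
        return "Shader Model 3"
    if vs_major == 2:
        return "Shader Model 2"
    if vs_major == 1:
        return "Shader Model 1"
    # No shader support at all: distinguish HW T&L from pure fixed function.
    return "Hardware T&L" if has_hw_tnl else "Fixed function"
```

Within a bucket you can then assume the roughly common capability set, and treat each later bucket as a superset of the earlier ones.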

Look up the 'Graphics card capabilities' spreadsheet in the SDK - it will give you a good idea of the required and assumed features for each major generation of hardware/API. It's a hugely valuable resource!

Quote:
Original post by BlueMonk
The only things I can think of that I'm doing that might be considered complicated are real-time (2D) matrix transformations, real-time color channel modulation, alpha translucency in textures, and clipping (scissor rect).
None of those should stretch a recent piece of hardware, but if you go back to older IGPs and fixed-function hardware a lot of that could be unsupported. Blending operations aren't guaranteed, for example.

Quote:
Original post by BlueMonk
I was surprised to find that some environments don't allow textures whose size isn't a power of 2
It's only relatively recently that this restriction has been completely removed. It's still an advisable rule to stick to if you can, simply for performance reasons.

Quote:
Original post by BlueMonk
hardware acceleration with full screen 640x480x16 but not in any windowed mode or any 24/32-bit color modes.
These can be dependent on the monitor as well as the GPU. You really REALLY should be enumerating these before your application even attempts to start - it's not difficult [smile]

Designing to be resolution-independent is well worth the effort. I'm using a 1600x1200 20.1" display right now, if I run it at 640x480 it looks fugly - really fugly! I want to run it at 1600x1200 and have your application scale accordingly - this is easy with D3D provided you put the effort in...

hth
Jack

Quote:
Original post by jollyjeffers
Seriously though, Direct3D does provide a lot of abstraction but it is still very flexible based on the capabilities a given chipset exposes. Think about the abstraction more as "if your hardware can do X then it will expose it to this application as Y" - therefore any hardware that supports the feature will be usable in the same way, but it does NOT state that it MUST support the feature...

At least with things like the CLR and HTML there are discrete sets/levels of functionality, so you can talk about a whole set instead of researching a zillion individual features independently. I don't even know what most of the capabilities mean, let alone whether I'm using them (admittedly, I haven't taken a close look at the list yet, but I was overwhelmed by the amount of information it looked like I'd have to sift through in the DX Caps Viewer). If there were some "leveling" of hardware capabilities, it would also be much easier for a user to understand whether their hardware can support it. Otherwise, what sense does it make to a user for me to say, "Your video card must support the creation of textures whose size may not be a power of 2"? They're not going to know. It'd be nicer to say "requires a Level 3 DirectX-compliant driver" and have them look at their video card box or driver info and see immediately whether it will work or not.

The other complicating factor is that my application is not "closed" -- in other words, I can't predict what kinds of textures are going to be used with it, because it's not just a game. It's a game creator. I'd hate to force all the users to be familiar with every detail of their video hardware in order to create a game that's going to work, but I also hate to force everyone's textures to always be a power of 2 in size if they don't have to be.

If this were just about managing the texture size, I could handle it by specifically checking a video card capability and the texture size and making sure they're going to be compatible (whether the game they create will play on another computer complicates things even more, though). But I can't possibly manage to work out a hundred individual capability checks based on all the features it looks like I have to be conscious of. Then go back to the fact that making it work on the user's computer doesn't mean their game will work elsewhere, and it starts to look like I'm going to have to expose a lot of the finer details of DirectX to users who don't necessarily want to deal with them, unless these capabilities I'm talking about aren't half as complicated as I think they are.

Quote:
Original post by jollyjeffers
I wrote about enumeration in the updated forum FAQ and I suggest you go read it. It's simply a case of writing robust and well-designed code - lots of people skip enumeration, and far too many tutorials gloss over it.

Thanks, I'll look at that when I get a chance. I actually did look at the DirectX SDK sample code and saw it doing something similar. And I have something similar, but much simpler, in my own code, which obviously doesn't handle all the cases I may run across (I didn't think modern video hardware would still be so varied... or rather, that the video hardware still in use out there would be so non-modern). I thought maybe that was where I needed to fix things to make them work. But then I realized that after I get past that hurdle there are going to be on the order of a hundred more, and I don't know if I can deal with that. I've also made the mistake of exposing the screen resolution as a design-time property of the generated games, when it really should be a runtime property. I wonder how long it'll take to work that design problem out.

Quote:
Original post by jollyjeffers
You'll have to go through every enumeration and try and work out the "top level" ones. There will be lots of optional parts, but you'll find that some are more important than others.

What do you mean by "top level" ones?

Quote:
Original post by jollyjeffers
Enumeration will tell you the hardware cannot do what you want it to do, so simply output a message after you've failed an enumeration test. Bear in mind that the user may not understand the finer details - why should they know or care about 2^n texture requirements? [smile]

How do I even determine which capabilities my application is using if I can't turn each one off and test if my application works without it? Is there a way to turn them off, or do I just have to intimately understand each one?

Also, my users may want to understand the finer details if it means the difference between making their game work or not work.

Quote:
Original post by jollyjeffers
Hardware can be roughly grouped into 6 categories:
  1. Fixed function - usually old Direct3D6 hardware; TNT2s and ATI Rages, IIRC
  2. Hardware Transform & Light - Direct3D7; GeForce 1, 2 and 4 MX and the earlier Radeons, IIRC
  3. Shader Model 1 - Direct3D8; GeForce 3, 4 and Radeon 7x00 and 8x00
  4. Shader Model 2 - Direct3D9; GeForce 5, Radeon 9x00 and X8xx
  5. Shader Model 3 - Direct3D9; GeForce 6, 7, Radeon X1xxx
  6. Shader Model 4 - Direct3D10; GeForce 8, R600


You can therefore determine where yours fits in and try and go from there. Within each 'bucket' the capabilities will be fairly similar and as a general rule of thumb the next most advanced will be a superset of all its predecessors...

And there are no simple ways to refer to these "buckets"? I just have to come up with a list of every video card in the "Hardware Transform & light" category (and later) if I think that's adequate as a requirement level?

(I'm gonna have to look into shaders someday -- looks like that's where all the excitement is, eh?)

Quote:
Original post by jollyjeffers
Look up the 'Graphics card capabilities' spreadsheet in the SDK - it will give you a good idea of the required and assumed features for each major generation of hardware/API. It's a hugely valuable resource!

Ooh, I'll have to look at that when I get back to my development system and see if I can find it. Is it easy to find?

Quote:
Original post by jollyjeffers
hardware acceleration with full screen 640x480x16 but not in any windowed mode or any 24/32-bit color modes.
These can be dependent on the monitor as well as the GPU. You really REALLY should be enumerating these before your application even attempts to start - it's not difficult [smile]
Yes, I intend to do that. I was just wondering if it was worthwhile if this was going to be only the first of an impossible series of hurdles.

I'm starting to lean towards just improving the error reporting, if there's any way to do that, so the user will know why their game doesn't work and how they might change their project to adapt to their video card. Does the DirectX control panel applet come with DirectX itself or only the SDK? Is there a way to change the debug output level on an end-user-runtime-only installation? (I assume there's some way I can capture that output in my own application rather than relying on a separate debugger... do you happen to know how? I could probably figure it out with a search, but as long as I'm writing... :))

Well, I guess the reply I'm fishing for is "it's not half as complicated as you make it out to be." :) Hopefully things will become clear when I look for that list in the updated FAQ.

You are perhaps having difficulty with requirements because you are going about it backwards. Typically you pick the min-spec machine before you even start development; the decision is usually based on market research.

Then you know the min-spec all the way through development, so you can test on that platform continuously. You then completely ignore anything below min-spec for testing (maybe it'll work, maybe not). Generally, graphics code is happy when you have more features available than when you have fewer.

-me

Quote:
Original post by BlueMonk
I don't even know what most of the capabilities mean, let alone if I'm using them
Get friendly with the documentation; it lists the capabilities alongside the features. It's not intuitive at times, but the information is there if you take the time.

Quote:
Original post by BlueMonk
If there were some "leveling" of hardware capabilities, it would also be much easier for a user to understand if their hardware can support it
...
It'd be nicer to say "requires a Level 3 DirectX compliant driver" and they look at their video card box or driver info and see immediately whether it will work or not.
You're not wrong in the slightest - such a system would be absolutely perfect!

WinSAT, with its numerical ranking, is getting close, but even then I think it's more a measure of performance than of features.

The logic behind the no-caps approach in D3D10 was to eliminate this problem - instead of competition being 2D (features + performance), it becomes 1D (performance), simply to reduce the testing overhead...

Quote:
Original post by BlueMonk
The other complicating factor is that my application is not "closed" -- in other words I can't predict what kind of textures are going to be used with it because it's not just a game.
Surely you're aware of other applications (games or otherwise) rejecting input data because it doesn't match requirements? Know your limits and inform the user accordingly, OR just work around them.

For example, D3DX will by default load textures with 2^n dimensions - it's not hard for you to do the same AND adjust any texture coordinates (etc..) accordingly. More work on your part, but not impossible!
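That workaround can be sketched in a few lines: pad the image up to the next power of two, then scale the texture coordinates so only the original region is sampled. The helper names below are mine, not part of any API:

```python
def next_pow2(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p <<= 1
    return p

def pad_and_uv(width, height):
    """Return the padded texture size plus the maximum U/V that
    covers only the original image inside the padded texture."""
    pw, ph = next_pow2(width), next_pow2(height)
    return (pw, ph), (width / pw, height / ph)
```

So a 640x480 image lands in a 1024x512 texture and is drawn with UVs running from (0, 0) to (0.625, 0.9375) instead of (1, 1); the padding pixels are never sampled.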

Quote:
Original post by BlueMonk
I've also made the mistake of exposing the screen resolution as a design-time property of the generated games, when it really should be a runtime property. I wonder how long it'll take to work that design problem out.
Switch to using projection-space dimensions - the output of the world*view*proj matrices is defined as -1 to +1, which is resolution independent. Work according to that, using an orthographic projection matrix for 2D work (or bypass it with a software/hardware vertex shader). It's not as hard as you might think [wink]
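The mapping being described - from pixel coordinates into the resolution-independent -1..+1 projection space - is just a linear rescale. A minimal sketch (the real ortho matrix and D3D9's half-pixel offset are deliberately omitted):

```python
def pixel_to_clip(x, y, width, height):
    """Map a pixel position to clip space: x grows rightward to +1,
    y grows downward on screen, so it maps to -1 at the bottom."""
    cx = 2.0 * x / width - 1.0
    cy = 1.0 - 2.0 * y / height
    return cx, cy
```

Authoring everything in this space (or in any fixed virtual resolution that you rescale the same way) means the backbuffer can be 640x480 or 1600x1200 and the scene simply scales to fit.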

Quote:
Original post by BlueMonk
Quote:
Original post by jollyjeffers
You'll have to go through every enumeration and try and work out the "top level" ones. There will be lots of optional parts, but you'll find that some are more important than others.

What do you mean by "top level" ones?
A forward reference to the 6 categories mentioned later on. If you can take the 'core' capabilities of those 6 buckets then you can make a pretty good guess at the generation of hardware which supports them. Ultimately you're looking at most of the gaming audience owning a GeForce or a Radeon and the rest owning Intel - if you can put a GeForce or Radeon series number on your requirements (e.g. "Nvidia 6-series or higher") then you're getting close to what you want. Identifying the 6-series isn't too hard..

Quote:
Original post by BlueMonk
How do I even determine which capabilities my application is using if I can't turn each one off and test if my application works without it? Is there a way to turn them off, or do I just have to intimately understand each one?
No easy way - cross-reference against the help files for any configuration that you use in your application. It is very easy to miss some things...

Quote:
Original post by BlueMonk
And there are no simple ways to refer to these "buckets"? I just have to come up with a list of every video card in the "Hardware Transform & light" category (and later) if I think that's adequate as a requirement level?
Try it the other way around. Work out what capabilities are NOT supported by any hardware prior to your determined cut-off point (refer to Palidine's post suggesting you look at it "backwards" [wink]).

Quote:
Original post by BlueMonk
(I'm gonna have to look into shaders someday -- looks like that's where all the excitement is, eh?)
Without a doubt - bear in mind that FF is dead as of D3D10...

Quote:
Original post by BlueMonk
Quote:
Original post by jollyjeffers
Look up the 'Graphics card capabilities' spreadsheet in the SDK - it will give you a good idea of the required and assumed features for each major generation of hardware/API. It's a hugely valuable resource!

Ooh, I'll have to look at that when I get back to my development system and see if I can find it. Is it easy to find?
Cardcaps.xls if you run a search for it.

Quote:
Original post by BlueMonk
Does the DirectX control panel applet come with DirectX itself or only the SDK? Is there a way to change the debug output level on a end-user-runtime-only installation? (I assume there's some way I can capture that output in my own application rather than relying on a separate debugger... do you happen to know how? I could probably figure it out with a search, but as long as I'm writing... :))
Debug runtimes and the control panel applet are only included with the SDK; the end-user will NOT have access to them.

D3D10 allows, via the ID3D10InfoQueue interface, access to the debug output from within your application (an unbelievably cool feature). Under D3D9 it's much more difficult though...

Quote:
Original post by BlueMonk
I guess the reply I'm fishing for is "it's not half as complicated as you make it out to be." :)
Once you get into the right mindset it isn't so complicated (but it is easy to make mistakes). But, prior to D3D10 it really is a bit of a b***h [wink]

hth
Jack

Quote:
Original post by Palidine
You are perhaps having difficulty with requirements because you are kind of going about it backwards. Typically you pick the min-spec machine before you even start development. The decision is based on market research, typically.


Well, I figured that since this is just a 2D engine, the requirements of whatever I ended up with wouldn't be that outlandish, so I'd support whatever it happened to run on and make that the requirement. But now I can't even figure out how to determine and describe what it will run on. I know it runs fine on my desktop and my brother's desktop, and presumably on a lot of other modern hardware. But I think people are trying to run it on laptops, and I know it doesn't work on my laptop from work. I just can't concisely describe the difference between these systems... I don't think it makes sense to make "desktop computer" a hardware requirement... does it?

Quote:
Original post by Palidine
Then you know all the way through development the min-spec so you can test it on that platform continuously. You then completely ignore anything below min-spec for testing (maybe it'll work maybe not). Generally graphics are always happy when you have more features available then when you have less.


The problem is I just do this development in my off hours as a hobby, so I don't have a lot of hardware, expertise or time at my disposal to figure these things out. So I can't formally work out in detail what the software requires and what the hardware supports, and I can't be running this in different environments all the time because I just don't have the resources. That is why I'm releasing an alpha version of this software, so others can tell me where it works and where it doesn't and I can try to find a pattern.

Quote:
Original post by jollyjeffers
The logic behind the no-caps approach to D3D10 was to eliminate this problem - instead of competition being 2D (features+performance) it is set to become 1D (performance) simply to reduce the testing overhead...
Wow, I wish I had time to read up on all this stuff, but I have a full-time job that has nothing to do with graphics, let alone games. I was just getting used to DX9, and now DX10 is out (that's the SDK I downloaded a few days ago, and will probably try upgrading to). I'm not aware of this "no-caps approach" idea/term. But I've liked a couple of things I've seen about DX10 now -- I can supposedly get at the debug output programmatically (thanks for the tip), and apparently they've better formalized how to do installs for applications that rely on DirectX. Although I hope there's a way to deliver my application over the web without having to zip the whole DirectX installer into my own install -- my application will pale in comparison size-wise, so I hope there's a way to do one of those web installs that just downloads what you need.

Quote:
Original post by jollyjeffers
Surely you're aware of other applications (games or otherwise) rejected input data because it doesn't match requirements? Know your limits and inform the user accordingly, OR just work around them.
Certainly, but so few cards/drivers have this limitation now that it seems a shame to impose it on everyone just because a few cards don't support arbitrary sizes. And I can't check the local hardware to determine whether or not to impose the limitation, because the final product will be running on unknown hardware (distributed to other systems). So I guess that just leaves working around them by changing my code to automatically pad the texture up to 2^n at runtime on systems that require it. But if this is only one issue of many, it's going to be very difficult to identify and work around them all.

Quote:
Original post by jollyjeffers
For example, D3DX will by default load textures in 2n dimensions - its not hard for you to do the same AND adjust any texture coordinates (etc..) accordingly. More work on your part, but not impossible!
Wait, I'm using D3DX (for its 2D sprite class, which I use for all my drawing) -- does that mean I shouldn't be getting this error if I'm doing things properly? I have a feeling I'm misunderstanding what you're saying here.

Quote:
Original post by jollyjeffers
Switch to using projection space dimensions - the output of the world*view*proj matrices is defined as -1 to +1 which is resolution independent. Work according to that using an orthogonal projection matrix for 2D work (or bypass it with a software/hardware vertex shader). It's not as hard as you might think [wink]
Yeah, I realized early on that since I'm using D3D for my 2D now, I can probably do all sorts of cool things really easily, like just changing some global matrix (excuse my lack of 3D vocabulary) to affect the whole display -- stretch it, rotate it, or show it as a tilted plane with perspective. I haven't tried that yet... I wonder if my use of ScissorRectangle combined with that will be a problem, though.

Quote:
Original post by jollyjeffers
Ultimately you're looking at most of the gaming audience owning a GeForce or a Radeon and the rest owning Intel - if you can put a GeForce and Radeon series number of your requirements (e.g. "Nvidia 6-series or higher") then you're getting close to what you want. Identifying 6-series isn't too hard..
Maybe I can just add, "You and those to whom you distribute your games must be computer gamers, and own systems reflecting this fact," to the requirements [smile].

Quote:
Original post by jollyjeffers
Quote:
Original post by BlueMonk
(I'm gonna have to look into shaders someday -- looks like that's where all the excitement is, eh?)
Without a doubt - bare in mind that FF is dead as of D3D10...
Once again, my vocabulary is lacking... what is FF?

Thanks for all your helpful comments and suggestions!

Quote:
Original post by BlueMonk
Quote:
Original post by jollyjeffers
The logic behind the no-caps approach to D3D10 was to eliminate this problem - instead of competition being 2D (features+performance) it is set to become 1D (performance) simply to reduce the testing overhead...
Wow, I wish I had time to read up on all this stuff, but I have a full-time job that has nothing to do with graphics, let alone games. I was just getting used to DX9, and now DX10 is out (that's the SDK I downloaded a few days ago, and will probably try upgrading to). I'm not aware of this "no-caps approach" idea/term. But I've liked a couple of things I've seen about DX10 now -- I can supposedly get at the debug output programmatically (thanks for the tip), and apparently they've better formalized how to do installs for applications that rely on DirectX. Although I hope there's a way to deliver my application over the web without having to zip the whole DirectX installer into my own install -- my application will pale in comparison size-wise, so I hope there's a way to do one of those web installs that just downloads what you need.


DirectX 10 is, AFAIK, an essential part of Vista, and doesn't work with any previous version of the OS (because of its model), so you shouldn't have to bundle any part of DirectX with your game/application. In fact, I'm not sure it's possible (e.g. will there be anything like an end-user runtime distro?)

Quote:
Original post by BlueMonk
Quote:
Original post by jollyjeffers
Surely you're aware of other applications (games or otherwise) rejected input data because it doesn't match requirements? Know your limits and inform the user accordingly, OR just work around them.
Certainly, but so few cards/drivers have this limitation now it seems a shame to impose it on everyone just because a few cards don't support arbitrary sizes. And I can't check the local hardware to determine whether or not to impose the limitation because the final product will be running on unknown hardware (distributed to other systems). So I guess that just leaves working around them by changing my code to automatically up the texture to 2n at runtime on systems that require it. But if this is only one issue of many, it's going to be very difficult to identify and work around them all.


Incidentally, that's how developers take care of things. You anticipate compatibility issues (amongst others) and handle them accordingly (=> scalability). How far you wish to go ultimately determines how many people you will reach.

Quote:
Original post by BlueMonk
Quote:
Original post by jollyjeffers
Switch to using projection space dimensions - the output of the world*view*proj matrices is defined as -1 to +1 which is resolution independent. Work according to that using an orthogonal projection matrix for 2D work (or bypass it with a software/hardware vertex shader). It's not as hard as you might think [wink]
Yeah, I realized early on that since I'm using D3D for my 2D now I can probably do all sorts of cool things really easily like just change some global matrix (excuse my lack of 3D vocabulary) to affect the whole display -- stretch it or rotate it or show it as a tilted plane with perspective. I haven't tried that yet... I wonder if my use of ScissorRectangle combined with that will be a problem, though.


I don't know what ScissorRectangle is in this context, but if you mean the scissor test, then no, it won't. Both operate on independent primitives. Unless you mean that the result of a transformation may place the scene outside the scissor rectangle, but that's trivial.

Quote:
Original post by BlueMonk
Quote:
Original post by jollyjeffers
Ultimately you're looking at most of the gaming audience owning a GeForce or a Radeon and the rest owning Intel - if you can put a GeForce and Radeon series number of your requirements (e.g. "Nvidia 6-series or higher") then you're getting close to what you want. Identifying 6-series isn't too hard..
Maybe I can just add, "You and those to whom you distribute your games must be computer gamers, and own systems reflecting this fact," to the requirements [smile].


Now there's an original thought. "Please be warned that if you are for the most part a dedicated office user, your machine may probably skyrocket"

Quote:
Original post by BlueMonk
Quote:
Original post by jollyjeffers
Quote:
Original post by BlueMonk
(I'm gonna have to look into shaders someday -- looks like that's where all the excitement is, eh?)
Without a doubt - bare in mind that FF is dead as of D3D10...
Once again, my vocabulary is lacking... what is FF?

Thanks for all your helpful comments and suggestions!


FF stands for Fixed Function, or the predecessor of programmable hardware (of which shaders are the most prominent feature).

Quote:
Original post by Todo
DirectX 10 is, AFAIK, an essential part of Vista, and doesn't work with any previous version of the OS (because of its model), so you shouldn't have to bundle any part of DirectX with your game/application. In fact, I'm not sure it's possible (e.g. will there be anything like an end-user runtime distro?)
I downloaded the DirectX 10 SDK, and it came with redist components. Also, I don't expect the people using my software to be running Vista. (Not that I'm ruling it out, I just expect the majority to be using Windows XP.)

Quote:
Original post by Todo
Incidentally, that's how developers take care of things. You anticipate compatibility issues (amongst others) and handle them accordingly (=> scalability). How far you wish to go ultimately determines how many people you will amount to.
Yes, I anticipated compatibility issues with different systems supporting different display modes. But DirectX is much more complicated to scale than other things, because there are so many variables to check. So I tried to deal with scalability only at the simplest level (I don't have the resources to go into it too deeply, with this being a project I'm working on alone in my free time), but I'm having unexpected problems (as I'm sure you know, not every issue can be anticipated). So now I'm trying to work out just what requirements/variables there are that I didn't catch. The strange thing is that this is just a new version of a product I've already made (albeit a rewrite in a new language with a new version of DirectX), and I didn't have these kinds of problems with the first version. I think I just went about enumerating display modes wrong, so I'm working on fixing that, and hopefully that will eliminate some of my problems.

Quote:
Original post by Todo
I don't know what ScissorRectangle is in this context, but if you mean the scissor test, then no, it won't. Both operate on independent primitives. Unless you mean that the result of a transformation may place the scene outside the scissor rectangle, but that's trivial.
Well, I use ScissorRectangle to clip some pieces of the output, but depending on what type of output it is, I might want to clip it before the transformation instead of after, and I doubt scissor testing supports that. Maybe I should be using something else for 2D clipping, but this is the best/only thing I found that works.

I think I'm making progress, but I've hit one little snag. I don't fully understand the vertex processing flag specified on the display create statement. I verified that there is hardware support for a particular display mode, and I can create a device in that mode with software vertex processing. It fails when I try to create it with hardware vertex processing or specify 0 for my CreateFlags. But I don't see any obvious flag to indicate whether the device will support hardware vertex processing or not. How can I determine what to pass into the CreateFlags parameter on the Device constructor (I'm using MDX, BTW)? Why doesn't DirectX itself degrade gracefully on this one and automatically perform software vertex processing if hardware vertex processing is unavailable, unless hardware vertex processing is explicitly requested?
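For what it's worth, the usual approach is to test the D3DDEVCAPS_HWTRANSFORMANDLIGHT bit in the device caps and fall back to software vertex processing when it is absent. A sketch of that decision in Python (the poster is using MDX, so this is illustrative logic only; the bit value is the one from the native d3d9caps.h header, to the best of my knowledge):

```python
# D3DDEVCAPS_HWTRANSFORMANDLIGHT bit, per the native d3d9caps.h header
# (treat as an assumption; verify against your SDK's headers).
D3DDEVCAPS_HWTRANSFORMANDLIGHT = 0x00010000

HARDWARE_VP = "HardwareVertexProcessing"
SOFTWARE_VP = "SoftwareVertexProcessing"

def choose_vertex_processing(dev_caps):
    """Pick the device-creation flag: hardware vertex processing if the
    caps bit is set, otherwise fall back to software vertex processing."""
    if dev_caps & D3DDEVCAPS_HWTRANSFORMANDLIGHT:
        return HARDWARE_VP
    return SOFTWARE_VP
```

D3D9 won't degrade this for you automatically because the two modes have different performance and memory-pool implications, so the API makes the choice explicit at device creation.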
