No support for ARB_FP

Hi, I have a GeForce4 Ti 4200. The card does not support ARB_fragment_program, so I would have to use the basic profiles as mentioned in CG_TOOLKIT.pdf. Can anyone suggest good tutorials for writing fragment programs without ARB_fp support?
You're in an even worse position than I was a few months ago.
When I used an NV20 I had to go through NVEmulate (get it from NVIDIA's website). I remember it enables ARB_fp, but I'm not sure about Cg, which relies on different extensions. Testing things on NVEmulate is a real pain. Get used to the extreme slowness you'll see!

Yes, this is going to hurt.

EDIT: fixed image tag.
EDIT: forgot to tell you that the number in the upper-left corner is a real FPS counter. Yes, it's less than 1 frame per second, and I think the screenshot is at full size.

Previously "Krohm"

Quote: Original post by Krohm
You're in an even worse position than I was a few months ago.
When I used an NV20 I had to go through NVEmulate (get it from NVIDIA's website). I remember it enables ARB_fp, but I'm not sure about Cg, which relies on different extensions. Testing things on NVEmulate is a real pain. Get used to the extreme slowness you'll see!

Yes, this is going to hurt.



I think you didn't get my problem. There is pixel-shader functionality available on NV20-class hardware; I think it has something to do with the NV_texture_shader extension. I wanted to know where I could find tutorials for that. A Google search yielded unsatisfactory results :(
I understand.
So you don't want to use ARB_fp or later; you want to use what you've got.

Before ARB_fp, you had to use two extensions (which in fact spawned more).
Programmable texturing is made of two components: fetching the texel value and computing something with it.
The fetch is handled through NV_texture_shader. The NVIDIA developer website has all the information you need on this.
The computation is then handled through register combiners, i.e. NV_register_combiners. There's a minimal sketch of both below.
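
To make that concrete, here is a rough sketch of the pre-ARB_fp path, just reproducing plain GL_MODULATE (texel times primary color) so you can see where each extension fits. I'm assuming the NV_texture_shader / NV_register_combiners entry points and tokens have already been fetched through wglGetProcAddress or an extension loader, and setupFixedFragmentPath is just a name I made up:

#include <GL/gl.h>
#include <GL/glext.h>  /* tokens; NV entry points assumed loaded via an extension loader */

void setupFixedFragmentPath(void)
{
    /* --- Fetch: NV_texture_shader ---------------------------------- */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);
    glEnable(GL_TEXTURE_SHADER_NV);   /* replaces conventional texturing */

    /* --- Combine: NV_register_combiners ----------------------------- */
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    /* General combiner 0, RGB portion: spare0 = texture0 * primary color */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB,     GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

    /* Final combiner computes A*B + (1-A)*C + D, so set A = spare0
       and B = 1 (zero with the unsigned-invert mapping) to pass it through */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                           GL_UNSIGNED_INVERT_NV,   GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);

    glEnable(GL_REGISTER_COMBINERS_NV);
}

Everything interesting happens by adding more general combiner stages and swapping the inputs; the texel fetch side only gets exotic when you use the dependent-lookup and dot-product shader operations from NV_texture_shader.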

I don't know, however, how well these are supported by other vendors, so you may be at a dead end.

So you probably want to use Cg directly in GL. This is effectively The Right Way, but I'm sorry, I can't help you there; I never looked at Cg for GL in detail. I know for sure, however, that there are some articles on this on the NVIDIA developer website. A rough sketch of what the runtime setup looks like is below.
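
Going by the toolkit docs rather than personal experience, the GL-side setup should look roughly like this: cgGLGetLatestProfile picks the best fragment profile the card supports, which on a GeForce4 Ti should be the fp20 basic profile (the one the runtime maps onto NV_texture_shader and NV_register_combiners). The file name "shade.cg" and the entry point "main" are placeholders for your own program:

#include <Cg/cg.h>
#include <Cg/cgGL.h>

static CGcontext ctx;
static CGprogram prog;
static CGprofile fragProfile;

void initCg(void)
{
    ctx = cgCreateContext();

    /* Best fragment profile the card exposes; on NV2x this should be
       the fp20 basic profile, so no ARB_fragment_program is needed */
    fragProfile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(fragProfile);

    prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "shade.cg",
                                   fragProfile, "main", NULL);
    cgGLLoadProgram(prog);
}

void drawWithCg(void)
{
    cgGLEnableProfile(fragProfile);
    cgGLBindProgram(prog);
    /* ... issue geometry here ... */
    cgGLDisableProfile(fragProfile);
}

The nice part of this route is that the same code keeps working when you upgrade: on newer hardware cgGLGetLatestProfile simply returns an ARB_fp-class profile instead.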

Previously "Krohm"

