About harry_x

  1. harry_x

    Making applications use OSS instead of ALSA

    They have to support it... but very few applications actually use ALSA directly; most of them have OSS support (which is in turn emulated by ALSA).
  2. harry_x

    Using EAX on Linux?

    The OSS emu10k1 driver supported loading emu10k1 programs; the ALSA driver doesn't support that, AFAIK.
  3. harry_x

    gl contexts

    You have to access a given GL context from just one thread.
  4. harry_x

    OpenGL Image does not render

    And does it render anything when texturing is turned off? (Try glDisable(GL_TEXTURE_2D);.)
  5. harry_x

    OpenGL Image does not render

    glClearDepth( 0 ); glDepthFunc( GL_LEQUAL ); Hmm, that doesn't make much sense to me. First you set the depth buffer to 0 (to be precise, the value to which the depth buffer will be cleared when glClear is called with GL_DEPTH_BUFFER_BIT), and then set the depth function to GL_LEQUAL? That means only fragments with depth 0 would pass, and IMO that's not what you want (and comparing float values for equality has its problems too).
  6. I would consider writing a generic routine for inverting matrices; it's quite easy using Gaussian elimination.
  7. harry_x

    Self-contained programs :)

    It depends on what is set in LD_LIBRARY_PATH and in ld.so.cache (generated from /etc/ld.so.conf). First LD_LIBRARY_PATH is searched, then the entries in ld.so.cache, then standard locations like /lib, /usr/lib, etc. Some steps are skipped for some programs (setuid/setgid, for example); see man ld.so.
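    A quick sketch of overriding that search order from a shell (the directory name is purely illustrative):

```shell
# LD_LIBRARY_PATH is consulted before ld.so.cache and the default
# directories (/lib, /usr/lib). /tmp/mylibs is an illustrative path.
export LD_LIBRARY_PATH=/tmp/mylibs
echo "$LD_LIBRARY_PATH"
```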
  8. harry_x

    framebuffer copying with alpha?

    I don't see why you render the terrain twice. If you don't do complex lighting calculations based on whether a fragment is in shadow or not, then it is possible to render a gray polygon over the entire screen (just use the identity as the modelview matrix) and modulate the colors. If you do, then maybe shadow mapping would be more effective. But back to your question: AUX buffers are legacy, and they aren't widely supported either. If you want to render to a texture, you should seriously consider implementing FBO (which is quite efficient).
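    A hedged sketch of the EXT_framebuffer_object setup recommended above (assumes a live GL context with the extension's entry points resolved; not runnable standalone, and the texture size is arbitrary):

```c
/* Sketch: render-to-texture via EXT_framebuffer_object. */
GLuint fbo, tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* draw the scene here -- it lands in `tex` */
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); /* back to the window */
```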
  9. Quote: Original post by muhkuh: "Well, from what I heard, Cg only produces good code for nVidia cards. So I'd eventually have to optimize code for all ATI cards, which would eat up the advantage of being able to handle older nVidia cards." When you use the ARB code paths, it emits universal shader code. But shaders in nVidia's fragment shader languages (NV_fragment_program and successors) are shorter and more efficient (and can take advantage of newer chips). Cg, however, doesn't support (hardly a surprise) ATI's shader extensions (which isn't really an issue if you don't develop for PS1.x cards).
  10. Quote: "The other issue is DRM being integrated into the hardware. Imagine buying a new Intel or AMD processor, but it won't run XP or earlier because the machine code is not digitally signed." That doesn't make sense. Windows isn't the only OS being used on x86... And digitally signing machine code at the CPU level doesn't make much sense either. How would it be done? Who would provide the certificates?
  11. harry_x

    OpenGL OpenGL game engine?

    AllegroGL uses OpenGL... but AllegroGL isn't a very good choice... I use it in my engine because of a legacy 2D GUI (which uses Allegro). If I wanted to start a new 3D engine, I would go for SDL. It's portable and very simple.
  12. Shipping a custom Linux boot disk? That's completely unrealistic: there are many cards which Linux doesn't support (and it certainly won't support any cards newer than the boot disk itself)... And I still can't see why OpenGL wouldn't be usable once you install the vendor's driver (which is always needed anyway), which can provide its own opengl32.dll (if it weren't possible just to install an ICD).
  13. harry_x

    Checking GfxCard Caps

    You can normalize vectors on any ARB_fragment_program-capable card. There is no single instruction for normalization, but it can easily be computed via DP3, RSQ and MUL instructions... But you should always let Cg select the best (i.e. highest) code path available for shaders...
  14. harry_x

    Checking GfxCard Caps

    For the max number of texture units, use glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &tex_units); or (on newer cards) glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &tex_image_units); and glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &tex_coord_units);. Newer cards have separate numbers of texcoord/image units, so you should always check for both. The max number of instructions depends not only on the card itself but also on the extensions you choose to use for shaders (which Cg selects for you). The simplest way would be just to try to compile the Cg shader in the current profile and, if it fails, select a lower shader :-) Or you can check which code path Cg has selected and load shaders for it...
  15. harry_x


    You can turn off VBO usage with glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);. Remember that you have to unbind the index VBO (GL_ELEMENT_ARRAY_BUFFER_ARB) separately. For the rendering code itself: are you sure that you pass offsets into the buffers instead of real addresses when using VBOs?
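    A hedged sketch of the offset-versus-address distinction (assumes a live GL context; `vbo` and `vertices` are illustrative names):

```c
/* With a VBO bound, the pointer argument to glVertexPointer is a byte
   offset into the buffer; with no VBO bound, it is a real address. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0); /* offset 0 into VBO */

glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);           /* VBO path off */
glVertexPointer(3, GL_FLOAT, 0, vertices);         /* real client-memory pointer */
```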