harry_x

Member

  • Content Count: 45

Everything posted by harry_x

  1. harry_x

    Making applications use OSS instead of ALSA

    They have to support it... But very few applications actually use ALSA directly; most of them have OSS support (which is in turn emulated by ALSA).
  2. harry_x

    Using EAX on Linux?

    The OSS emu10k1 driver supported loading emu10k1 programs; the ALSA driver doesn't support that AFAIK.
  3. harry_x

    gl contexts

    You have to access a given GL context from just one thread at a time.
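    A rough sketch of what I mean (untested; GLX names assumed, adjust for wgl/agl):

        /* only this thread ever touches the context */
        #include <GL/glx.h>

        void render_thread(Display *dpy, Window win, GLXContext ctx)
        {
            /* bind the context to THIS thread; no other thread may call GL now */
            glXMakeCurrent(dpy, win, ctx);

            /* ... all gl* calls happen here ... */

            /* release it before another thread is allowed to take over */
            glXMakeCurrent(dpy, None, NULL);
        }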
  4. harry_x

    Image does not render

    And does it render anything when texturing is turned off? (try glDisable(GL_TEXTURE_2D);)
  5. harry_x

    Image does not render

    glClearDepth( 0 ); glDepthFunc( GL_LEQUAL ); Hmm, that doesn't make much sense to me - first you set the depth buffer to 0 (to be precise, the value the depth buffer is cleared to when glClear is called with the depth buffer bit) and then use GL_LEQUAL as the depth function? That means only fragments at depth 0 would pass, and IMO that's not what you want (and comparing floating-point values for equality has its problems too).
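    What you usually want is something like this (just a sketch of the common setup):

        glClearDepth(1.0);              /* clear the depth buffer to the far value */
        glDepthFunc(GL_LEQUAL);         /* a fragment passes if it is at least as close as what's stored */
        glEnable(GL_DEPTH_TEST);

        /* each frame: */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);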
  6. I would consider writing a generic routine for inverting matrices; it's quite easy using Gaussian elimination.
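     A rough sketch of what I mean (untested, plain C, fixed scratch buffer assumes n <= 16; call it as invert_matrix(m, inv, 4) for a 4x4):

         /* Invert an n x n matrix (row-major) with Gauss-Jordan elimination.
            Returns 0 on success, -1 if the matrix is (numerically) singular. */
         #include <math.h>
         #include <string.h>

         int invert_matrix(const double *m, double *inv, int n)
         {
             int i, j, k, pivot;
             double t, p, f;
             double a[16 * 16];                 /* scratch copy, assumes n <= 16 */

             memcpy(a, m, n * n * sizeof(double));

             /* start with inv = identity */
             for (i = 0; i < n; i++)
                 for (j = 0; j < n; j++)
                     inv[i*n + j] = (i == j) ? 1.0 : 0.0;

             for (i = 0; i < n; i++) {
                 /* partial pivoting: pick the row with the largest entry in column i */
                 pivot = i;
                 for (k = i + 1; k < n; k++)
                     if (fabs(a[k*n + i]) > fabs(a[pivot*n + i]))
                         pivot = k;
                 if (fabs(a[pivot*n + i]) < 1e-12)
                     return -1;                 /* (numerically) singular */

                 /* swap row i and the pivot row in both matrices */
                 for (j = 0; j < n; j++) {
                     t = a[i*n + j];   a[i*n + j]   = a[pivot*n + j];   a[pivot*n + j]   = t;
                     t = inv[i*n + j]; inv[i*n + j] = inv[pivot*n + j]; inv[pivot*n + j] = t;
                 }

                 /* scale row i so the pivot element becomes 1 */
                 p = a[i*n + i];
                 for (j = 0; j < n; j++) {
                     a[i*n + j]   /= p;
                     inv[i*n + j] /= p;
                 }

                 /* eliminate column i from every other row */
                 for (k = 0; k < n; k++) {
                     if (k == i)
                         continue;
                     f = a[k*n + i];
                     for (j = 0; j < n; j++) {
                         a[k*n + j]   -= f * a[i*n + j];
                         inv[k*n + j] -= f * inv[i*n + j];
                     }
                 }
             }
             return 0;
         }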
  7. harry_x

    Self-contained programs :)

    It depends on what is set in LD_LIBRARY_PATH and in ld.so.cache (generated from /etc/ld.so.conf). First LD_LIBRARY_PATH is searched, then the entries in ld.so.cache, then standard locations like /lib, /usr/lib, etc. Some steps are skipped for some programs (setuid/setgid ones, for example) - see man ld.so
  8. harry_x

    framebuffer copying with alpha?

    I don't see why you render the terrain twice. If you don't do complex lighting calculations based on whether a fragment is in shadow or not, then you can simply render a gray polygon over the entire screen (just use identity as the modelview matrix) and modulate the colors. If you do, then maybe shadow mapping would be more effective. But back to your question: AUX buffers are legacy and they aren't widely supported either. If you want to render to texture, you should seriously consider using FBO (which is quite efficient).
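    Roughly like this (untested sketch of the EXT_framebuffer_object path, assuming the extension entry points are loaded; error handling omitted):

        GLuint fbo, tex;

        /* the texture we will render into */
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* attach it to a framebuffer object */
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, tex, 0);
        if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
            /* fall back to something else */ ;

        /* ... render into the texture ... */

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   /* back to the normal framebuffer */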
  9. Quote: Original post by muhkuh
     Well, from what I heard Cg only produces good code for nVidia cards. So I'd eventually have to optimize the code for all ATI cards, which would eat up the advantage of being able to handle older nVidia cards.

     When you use the ARB code paths, it emits universal shader code. But shaders in nVidia's fragment shader language (nv_fragment_shaderX) are shorter and more efficient (and can take advantage of newer chips). Cg however doesn't support (hardly a surprise) ATI's shader extensions (which isn't really an issue if you don't develop for PS1.x cards).
  10. Quote: The other issue is DRM being integrated in the hardware. Imagine buying a new Intel or AMD processor, but it won't run XP or earlier because the machine code is not digitally signed.

      That doesn't make sense. Windows isn't the only OS being used on x86... And digitally signing machine code at the CPU level doesn't make much sense either. How would it be done? Who would provide the certificates?
  11. harry_x

    OpenGL game engine?

    AllegroGL uses OpenGL... But AllegroGL isn't a very good choice... I use it in my engine because of a legacy 2D GUI (which uses Allegro). If I wanted to start a new 3D engine I would go for SDL - it's portable and very simple.
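    Setting up an OpenGL window with it takes just a few lines (untested sketch, SDL 1.2 API):

        #include <SDL/SDL.h>
        #include <GL/gl.h>

        int main(int argc, char *argv[])
        {
            SDL_Init(SDL_INIT_VIDEO);
            SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

            /* creates the window together with an OpenGL context */
            SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);

            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            SDL_GL_SwapBuffers();

            SDL_Delay(2000);
            SDL_Quit();
            return 0;
        }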
  12. Shipping a custom Linux boot disk? That's completely unrealistic - there are many cards that Linux doesn't support (and it certainly won't support any cards newer than the boot disk itself)... And I can't see why OpenGL wouldn't be usable once you install the vendor's driver (which is always needed anyway), which can provide its own opengl32.dll (if it isn't possible to just install an ICD).
  13. harry_x

    Checking GfxCard Caps

    You can normalize vectors on any ARB_fragment_program capable card. There is no instruction for normalization, but it can easily be computed via the DP4, RSQ and MUL instructions... But you should always let Cg select the best (i.e. highest) codepath available for shaders...
  14. harry_x

    Checking GfxCard Caps

    For the maximum number of texture units use glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &tex_units); or (on newer cards) glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &tex_image_units); and glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &tex_coord_units); Newer cards have separate numbers of texcoord/image units, so you should always check both. The maximum number of instructions depends not only on the card itself but also on the extensions you choose to use for the shaders (which Cg selects for you). The simplest way would be to just try to compile the Cg shader in the current profile and, if it fails, select a lower shader :-) Or you can check which codepath Cg has selected and load shaders for it...
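    Put together, the queries look like this (sketch only; the _ARB enums come from glext.h / the extension headers):

        GLint tex_units, tex_image_units, tex_coord_units;

        /* classic multitexture limit (fixed function) */
        glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &tex_units);

        /* fragment-program era limits - image units and coord sets are separate */
        glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &tex_image_units);
        glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &tex_coord_units);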
  15. harry_x

    VBO's

    You can turn off VBO using glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); Remember that you have to turn off the index VBO (GL_ELEMENT_ARRAY_BUFFER_ARB) separately. For the rendering code itself, are you sure that you pass offsets into the buffers instead of real addresses when using VBO?
  16. harry_x

    framebuffer_object

    NVIDIA's FBO implementation seems to be buggy. On Linux with the latest NVIDIA drivers I ran into problems when creating a stencil renderbuffer attachment - even the examples from the specification fail. But otherwise it seems to be working; I didn't try on Windows however.
  17. It won't work. Perl does some caching of the ppid/pid because of threading issues (for example, LinuxThreads and other threading implementations implement threads via clone() as separate processes - so getpid() doesn't return the same value across threads). The solution would be to use the Linux::Pid module from CPAN (which always returns the true pid/ppid values) and then it should work as expected.
  18. harry_x

    Rendering a mesh using VBO

    VBO works as follows - you bind a buffer and then supply pointers for the vertex/texcoord/normal arrays just as if you weren't using VBO, except that with VBO you supply offsets into the buffer bound to the VBO instead of real addresses. So no, the NULL is there for a purpose... BUFFER_OFFSET(i) gives you the offset of the i-th float value in the buffer bound to the VBO (it's basically the same as #define BUFFER_OFFSET(i) ((GLfloat *)(sizeof(GLfloat)*(i))) - maybe that makes it clearer). And the index buffer is just an array of GLuint specifying the vertex numbers.
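    A rough sketch of the whole thing (untested; assumes the ARB_vertex_buffer_object entry points are loaded, and vertices/indices/num_verts/num_indices are just placeholders):

        /* setup - done once */
        GLuint vbo, ibo;

        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, num_verts * 3 * sizeof(GLfloat),
                        vertices, GL_STATIC_DRAW_ARB);

        glGenBuffersARB(1, &ibo);
        glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);
        glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, num_indices * sizeof(GLuint),
                        indices, GL_STATIC_DRAW_ARB);

        /* drawing - the "pointers" are now offsets into the bound buffers */
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, ibo);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (void *)0);          /* offset 0 into the VBO */
        glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, (void *)0);
        glDisableClientState(GL_VERTEX_ARRAY);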
  19. harry_x

    Rendering a mesh using VBO

    Either reorder the vertices or use glDrawElements and supply OpenGL a buffer with the indices of the vertices to render (in the specified order) - the indices can be stored in a VBO too (bound as GL_ELEMENT_ARRAY_BUFFER_ARB).
  20. If you want to display projected textures then you don't need to do multipass rendering - you can simply apply another texture to all objects and have a vertex shader compute its texture coordinates.
  21. harry_x

    Texture seams issue

    Probably your texture coordinates are larger than 1.0. Just discard everything before the decimal point (i.e. keep only the fractional part).
  22. harry_x

    Valgrind: your experience?

    Definitely the best tool for debugging memory access. It has only one major drawback - it's very, very slow (a few dozen times slower than a regular run). But it's still worth a try (especially when debugging leaks or buffer overflows). And it can be used for debugging race conditions, cache profiling and other useful stuff too.
  23. harry_x

    Blue Square on Red BG

    First of all, you have to clear the color buffer and the depth buffer. Second, you are specifying coordinates like 25 without setting up a projection or modelview matrix, which means these coordinates are way outside the maximum device coordinates (1.0).
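    Something like this should get it on screen (untested sketch; a 640x480 window is assumed):

        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);             /* red background */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        /* map coordinates like 25 into the visible range */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, 640.0, 0.0, 480.0, -1.0, 1.0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glColor3f(0.0f, 0.0f, 1.0f);                       /* blue square */
        glBegin(GL_QUADS);
            glVertex2f(25.0f,  25.0f);
            glVertex2f(125.0f, 25.0f);
            glVertex2f(125.0f, 125.0f);
            glVertex2f(25.0f,  125.0f);
        glEnd();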
  24. harry_x

    Again glReadPixels Problem

    you should always call glFlush before reading any buffers...
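    For example (sketch only; w/h are placeholders for your viewport size, and malloc needs <stdlib.h>):

        unsigned char *pixels = malloc(w * h * 4);

        glFlush();                                  /* make sure rendering has been submitted */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);        /* avoid row-padding surprises */
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);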