About bootstrap

  1. I wrote SIMD/SSE2+ assembly language for my:

     #1: f64mat4x4 times f64mat4x4 multiply (matrix times matrix)
     #2: f64mat4x4 times f64vec4 multiply (matrix times vector-or-position)

     and it was much, much faster than optimized C or even intrinsics (and I mean several to many times faster, not 20% or 40%). I was rather surprised the speedup was this large, but quite happy.

     In #2 especially, carefully designing the algorithm for exactly what I wanted to accomplish made a substantial (but not huge) improvement too. What I mean is, my code needs to transform:

     1: position
     2: zenith-vector AKA "normal-vector"
     3: north-vector AKA "bitangent-vector" AKA "binormal-vector"
     4: east-vector AKA "tangent-vector"

     so I wrote the assembly language to transform all 4 of those elements of each vertex in its routine. In fact, the assembly-language function is passed the number of consecutive vertices to process, and it performs the loop itself. I was surprised to find it took only about 40 nanoseconds per transformation! Not bad, I thought. Obviously the AMD Athlon/Phenom chips have good implementations of SIMD/SSE2+ hardware. And BTW, these measurements were taken transforming on one core, not all 4 cores, so with all 4 cores working, each transformation would consume only 10ns. I'm happy with that (good FPUs, AMD, thanks).

     I'm not willing to make my code open-source (not yet, anyway), but if you want to adopt and modify it (or learn from it) for your own projects, I'm okay with that. You must agree not to give my code (or code derived from my code) to anyone, but they can ask me (for personal, non-commercial purposes) if they wish.

     Also note this: if you are willing to run in 64-bit mode, you get twice as many SIMD/SSE registers to work with, and that can further speed up these routines by 25% or so (my routines, anyway). Also, if you process f32 (single) arrays and vectors instead of f64 (double) arrays and vectors, you can fit more in registers and gain another 20% or so. I perform all my math in f64, however. Good luck.

     To learn SIMD/SSE2+ assembly language is totally worth the effort.
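For readers who want the flavor of this without writing full assembly, here is a minimal SSE2 sketch of the f64mat4x4-times-f64vec4 idea, written with C intrinsics. This is my own illustration, not the poster's code; the function name and the column-major matrix layout are assumptions.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Multiply a column-major 4x4 double matrix by a 4-component double vector.
 * Each vector component is broadcast into an SSE2 register, multiplied by
 * the matching matrix column, and accumulated. Two __m128d registers hold
 * the 4 double results (2 doubles per register). */
static void f64mat4_mul_vec4(const double m[16], const double v[4], double out[4])
{
    __m128d lo = _mm_setzero_pd();  /* accumulates out[0], out[1] */
    __m128d hi = _mm_setzero_pd();  /* accumulates out[2], out[3] */

    for (int c = 0; c < 4; ++c) {
        __m128d s = _mm_set1_pd(v[c]);  /* broadcast v[c] to both lanes */
        lo = _mm_add_pd(lo, _mm_mul_pd(_mm_loadu_pd(m + 4*c),     s));
        hi = _mm_add_pd(hi, _mm_mul_pd(_mm_loadu_pd(m + 4*c + 2), s));
    }

    _mm_storeu_pd(out,     lo);
    _mm_storeu_pd(out + 2, hi);
}
```

The real speed, as the post notes, comes from batching: transforming position, normal, bitangent, and tangent for a whole run of vertices inside one loop, so the matrix columns stay resident in registers across iterations.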
  2. Which Linux distribution?

     I suggest Ubuntu. I was developing my portable 3D game/graphics/simulation engine on Fedora 8 before Fedora 9 and beyond came out, but never could get the newer Fedora releases to work... just endless, endless troubles. Then I switched to Ubuntu out of desperation, and have been able to cope with Ubuntu ever since. It seems the Linux torch has passed from Fedora to Ubuntu, in my mind and in the minds of many others.

     And in case it matters, this being a game/graphics forum: my application, which requires OpenGL v2.10 ~ v3.00 and GLSL v1.20 ~ v1.30, compiles and executes under both the Eclipse IDE and the Code::Blocks IDE (though I just ditched Eclipse, since Code::Blocks is better).
  3. Phase 1 - UP, Phase 2 - ..., Phase 3 - Profit

    Yes, you need to understand that "programming is beside the point". I am not kidding. The first program I ever wrote was an optical design and analysis program - in junior high school. I am not kidding. Yes, it was a pretty sophisticated software application, all things considered ### BUT ### every step of learning to write that program was simply HOW I was solving the individual problems of optical design.

    For example, part of that process is "tracing a [light] ray through a series of optical surfaces". So I needed to figure out how to store the position of the ray at each surface, which meant I needed to learn about "variables" - to store the ray position in variables x, y, z. And I certainly needed to learn about the "data-types" of those variables at that point, because integer positions certainly would not work (unless my units were microns or angstroms). And that lesson unleashed me to create dozens of variables, like k, l, m for the direction the ray was traveling, and r, p, q, n for the "radius of curvature of the current surface", "position of the current surface", "conic shape of each surface", "index of refraction of each surface", etc.

    But wait - my optical system has 10 or 15 or 21 surfaces. I can't store the position of every surface in one variable "p", can I? How do I process a whole series of surfaces without writing the same code 10 or 15 or 21 times... and changing the code every time I changed the optical design? Enter structures (where one structure can hold all the information for one surface), and enter arrays (where one array can hold any number of those structures).

    Now my program can access all information about every attribute of every ray at every surface at every tick (angular position) in every zone at every color at every skewangle (off-axis angle) like this:

        for (skew = 0; skew < skews; skew++) {
          for (color = 0; color < colors; color++) {
            for (zone = 0; zone < zones; zone++) {
              for (tick = 0; tick < ticks; tick++) {
                for (surface = 0; surface < surfaces; surface++) {
                  error = trace_ray_through_surface(surface);
                  x = design[surface].x;
                  y = design[surface].y;
                  o = design[surface].o;
                  if (((x*x) + (y*y)) > (o*o)) { design[surface].vignette = true; }
                }
              }
            }
          }
        }

    And so forth. My point is, the only relevance "programming" had to my brain was: "how am I going to make my computer accomplish this or that aspect of MY PROBLEM?" So the person who said "you need a problem to solve" hit the nail on the head. You don't focus your attention on how to make your arm and leg muscles move when you build a house; you focus on the problem you are trying to solve, and only pay as much attention to your tools as necessary to accomplish your goal. Yes, you need to understand how a hammer and nail work, but you only pay attention to the hammer and nail at the point when you need to make one piece of wood "stick" to another. And then you survey the tools you have available, and ask yourself "what is the best way to accomplish what I want with the tools I have available?" And THAT is how to learn to program.

    PS: In a way, you will have less trouble than me, because you have already filled your brain with "hints" about the tools you have available. I had none, and at every step I had to go find something in a totally lame 50-page reference manual for the programming language, try to understand what all (or some of) those strange words meant, and write tiny 10-line experiment programs to figure out how each feature worked... to see whether I could accomplish what I needed in my program.
    The focus is *always* on what you are trying to implement, NOT the programming language, or any of the endless bogus "frameworks" that everyone is always trying to sell you as a way to save yourself time and effort. My advice about that is: always program at the lowest level you can. Okay, program in C instead of assembly (sometimes), but avoid frameworks at all costs, especially while learning! Otherwise, you'll habituate to some total piece of crap and never be able to accomplish anything without its crutches (and you'll never invent better architectures yourself; you'll just adopt whatever lame scamware/scamwork you bump into on the web).

    Anyway, that's my advice. GET A TOTALLY SPECIFIC PROBLEM TO SOLVE.
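The "structures plus arrays" step described above can be sketched in C. This is my reconstruction for illustration only - the field names are modeled on the variables the post mentions (x, y, o, r, q, n), not taken from the poster's actual program:

```c
#include <stdbool.h>

/* One structure holds everything about one optical surface
 * (hypothetical fields, guessed from the post's variable names). */
typedef struct {
    double x, y;     /* where the traced ray intersects this surface   */
    double o;        /* clear-aperture radius (semi-diameter)          */
    double r;        /* radius of curvature                            */
    double q;        /* conic constant                                 */
    double n;        /* index of refraction after this surface         */
    bool   vignette; /* set when the ray falls outside the aperture    */
} Surface;

/* An array holds any number of surfaces - no more one variable per surface. */
Surface design[21];

/* The vignetting test from the post's loop, factored into a function:
 * the ray is clipped when its intersection lies outside the aperture circle. */
bool ray_vignettes(const Surface *s)
{
    return ((s->x * s->x) + (s->y * s->y)) > (s->o * s->o);
}
```

With this in place, the nested loop in the post simply walks `design[surface]` instead of juggling a separate named variable for every surface.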
  4. Though I have never drawn directly onto the desktop before, other programmers have told me they did. OTOH, that was not with OpenGL, but probably just Windows API functions. I guess this means all screensavers must-be/are windows created by those applications, appearances to the contrary notwithstanding. Either that, or they are all drawn with simple Windows API functions. This is certainly possible. Yes, I agree WS_EX_TOPMOST is appropriate. Thanks for reminding me; I haven't seen (or tested on) a Windows 98 computer in nearly a decade! Somehow, I doubt any Windows 98 computer will support OpenGL 2.1+ and GLSL 1.2+ anyway, but then again, that is just a guess. Unless anyone knows how to do the "desktop" trick... I'll go ahead and create a fictional desktop window with CreateWindowEx(). Thanks for your replies.
  5. Is it impossible to make wglCreateContext() return a valid rendering context for the desktop window? So far, everything I try fails. I try getting the desktop_hdc argument with GetDesktopWindow() followed by GetDC(), and also by calling GetDC with a null/zero argument (which some sources claim returns the HDC for the desktop). I understand that it is possible to create a borderless "popup" type window that covers the entire desktop, but I would prefer to make OpenGL draw directly on the desktop window - unless it simply cannot be done. Anyone know? Thanks.
  6. OpenGL Books

     Good luck with your game. And your decision to keep a journal is excellent! Good move. The best OpenGL books, in my opinion, are:

     #1: OpenGL SuperBible
     #2: OpenGL Shading Language (Randi Rost)
     #3: OpenGL Distilled (for a short, concise, to-the-point reference)
     #4: Real-Time Rendering
     #5: Advanced Graphics Programming Using OpenGL

     #4 has nothing specific to do with OpenGL, but is the best book for games and realtime simulations and such. #5 you can live without, but is pretty good. Also make sure you download the free OpenGL 3.0 and GLSL 1.3 specifications, which are freely downloadable PDF files.
  7. Exactly what determines whether integer variables in my GLSL shader programs are executed as integers by the GPU hardware? Apparently some versions of GLSL let vertex/fragment programs contain integer variables, but GLSL keeps the values in floating point internally (leading to strange behavior sometimes). How do I know whether my GLSL integer variables are being processed as integers in the GPU hardware?

     My engine is passing flag bits in a vertex variable, and the vertex shader passes this value to the fragment shader. Sometimes I get strange results that I traced to wackiness in the interpolation process. For example, all the vertices in the VBO being rendered have a value of 12.0000 in the flags variable, and my code tests the flags like this (until I definitely have real integers to bit-test properly!!!):

        if (flag >= 8.0) {
            flag = flag - 8.0;
            // processing specified by bit #3 set
        }
        if (flag >= 4.0) {
            flag = flag - 4.0;
            // processing specified by bit #2 set
        }
        if (flag >= 2.0) {
            flag = flag - 2.0;
            // processing specified by bit #1 set
        }
        if (flag >= 1.0) {
            flag = flag - 1.0;
            // processing specified by bit #0 set
        }

     However, the behavior of the shader changes randomly at various pixels. That problem vanished when I changed the 8.0 values in the first lines to 7.9999!!! Apparently when the vertex shader outputs a value of 8.0 for all 3 vertices of a triangle, the fragment shader sometimes gets values less than 8.0!!! Yikes! I can't wait to have real integers in my GPU shaders! I assume integers can be passed from vertex shaders to fragment shaders, right? Presumably integers will not exhibit strange (and erroneous) interpolation artifacts like those illustrated in the above code segment.

     Separately, in trying to figure out the above, I noticed that GLSL generates a compile-time error when I put a #version directive in both my vertex and fragment shaders, even when the version number is the same in both (#version 110, #version 120, #version 130). When the directive is in only one shader, it works okay. Isn't the #version directive supposed to be supported in both shaders?
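As an aside for readers hitting the same problem: once real integers are available, the subtract-cascade above collapses to plain bit masking. A minimal C sketch of the equivalent integer test (my own illustration, not from the post):

```c
/* Test whether a given bit is set in an integer flags word.
 * With real integers there is no interpolation round-off:
 * a flags value of 12 (binary 1100) always has bits 3 and 2 set. */
static int has_bit(int flags, int bit)
{
    return (flags >> bit) & 1;
}
```

For what it's worth, when integer vertex-shader outputs did arrive in GLSL 1.30, they had to be declared with the `flat` interpolation qualifier, which sidesteps the interpolation issue entirely: the fragment shader receives the provoking vertex's value unmodified.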
  8. What is so open about OpenGL?

     However, the most important point about OpenGL has nothing to do with "open". The most important point about OpenGL is that you can write an OpenGL program that runs on all three major operating systems (Mac, Linux and Windoze), and potentially others. Though I don't pay attention to this myself, OpenGL can run on all sorts of mobile devices too. The "open" part isn't very important to most OpenGL programmers. But being multiplatform IS.
  9. DirectX and OpenGL

    The anger at OpenGL is actually anger at the OpenGL standards group more than anything. And the anger seems to be mostly because the standards group (called the ARB or Khronos, usually) promised (or led people to believe) OpenGL would be a brand new, totally cleaned up, object-oriented, super-efficient API. That would be great, of course - how could anyone disagree with that? Well, the ARB went silent for a year or more (after leading everyone to believe the next release would blow the socks off everyone in the universe), then released a version that jumped a whole major version number, but contained only incremental improvements on the "old, moldy API" (as those excited about the promises now see it).

    But OpenGL is still better than D3D in many minds, because it is simply better in ways some people care deeply about (though not in other ways they care less about). And if you want to write multi-platform (Windows/Linux/Mac/etc) graphics applications, you have no choice --- OpenGL is the only option. For the record, I switched from D3D to OpenGL and STILL am very glad I did, and have no temptation to switch back. This is true even though I wrote [most of] two commercial game engines (with released games), and had to learn a new way of doing things "all over again". We must let each person make their own judgement about this, but the anger will die down soon, and OpenGL will continue to be one of the two real options for fast, sophisticated 3D. I would still suggest OpenGL to anyone starting graphics, any day of the week.

    DirectX AKA Direct3D supports a very few extra features of the latest graphics cards that "core" OpenGL does not. However, these advanced features are also available in OpenGL, though only through GPU-vendor-specific "extensions" --- which work perfectly well, but have not been officially added to the official "core" OpenGL. The primary "pro" of DirectX/Direct3D is --- you like its way of doing things. If you like the way OpenGL does things better, that "pro" goes to OpenGL. With D3D you can write efficient Vista applications. With OpenGL you can write efficient Vista/XP/2000/... applications. The choice is yours.

    I believe OpenGL is vastly easier to learn for beginners. However, I suppose this might not be true for everyone. I guarantee you will find some programmers who claim D3D is easier for beginners. I find that virtually impossible to believe myself - yet in truth, I believe them (meaning, I believe there are SOME people who just "take" to certain styles that are more difficult for most people).

    Yes, I see the Windows OS dying (and therefore D3D) --- in about 25 years (which is a guess, and too long to worry about). The reason will be that people just get fed up with being trashed by viruses and other problems inherent in Windows, AND because a sufficiently large collection of totally free software to do just-about-everything will obsolete the notion of "paying" for mainstream software.

    I am not familiar with SDL, because I try to develop everything possible myself. But plenty of graphics programmers will advocate it. I cannot answer whether it is better for you, and I doubt anyone else can - unless they know your style, strengths, preferences, etc.

    You do not need the Win32 API to create games. You can do everything necessary through a small-to-large collection of function libraries. And if you might ever care about portability, that is a good policy. If you do call some Win32 API functions, you may call as few as you wish (see WGL).

    I appreciate the effort that people have invested to create tutorial websites, so I hate saying this: I found the books that I bought to be much better than anything I could find on websites. However, having forums like this one to ask questions on can be very helpful (personalized answers!). I hope this helps a little. You will get totally diverse answers, I can almost guarantee you that!
  10. I don't understand some of these comments. Such a vastly overwhelming quantity of the fun, code, work, ideas, results, functionality and so forth is in the code I/WE add on top of the API (which simply *gives us access to the rendering hardware*). Though I understand this is a personal preference that could go either way, I much prefer the existing OpenGL2 approach over D3D - that's part of the reason I switched from D3D to OpenGL. Even so, I think that is totally beside the point. The real juice is what WE do with OUR application.

     Though I happen to ***prefer*** AMD over Intel in CPUs, and nvidia over AMD/ATI in GPUs, and OpenGL over D3D in graphics APIs (and linux over windoze in OSes), guess what? If someone with an Intel CPU and AMD/ATI GPU runs my application on windoze - well, they still get the same results/enjoyment/productivity. So, except for those of us just learning/sampling (and that's fine too), the significance of the API seems way overblown here. Especially as I/we dump more and more work onto our ever-more-multi-core CPUs and ever-more-multi-core GPUs via CUDA (or equivalent), all we need is a graphics API that gives us fast/efficient rendering from the core of our engine. Like most people here, I rarely even [need to] dive down into that part of my engine any more.
  11. I have been so busy working on projects, I am way behind the curve on many issues directly and indirectly related to the future state of fast/realtime 3D OpenGL applications. I should also admit (cuz I forgot to previously) that I switched to OpenGL almost exactly when VBOs and FBOs became practical. So I am truly oblivious to (read: ignorant of) the problems of OpenGL developers with pre-VBO engines/architectures/infrastructure.

     Anyway, I had also not read anything about CUDA for a long, long time - since the information was vague handwaving. Well, I'm almost half way through a quick skim/read of the CUDA programming guide, and what do I begin to notice? It appears CUDA has an extremely clean, convenient and efficient connection to OpenGL buffer objects. In fact, at first glance, the interface to OpenGL looks cleaner/better than the one to D3D (? surprise, surprise ?). But that's not the point. The point is, for those several/many people driven nuts by the lack of geometry-shader support, CUDA seems to provide an excellent (faster, cleaner, more flexible/general/capable) alternative way to generate geometry - by spewing procedurally generated geometry straight into VBOs, FBOs, textures!

     Since I *am* behind a few curves here, I may be missing some gotchas. So please set me straight (Michael Gold or anyone else who is "ahead of the curve"). For example, maybe CUDA setup/breakdown and/or CUDA/OpenGL interoperability has too much overhead. From my brief read/skim, however, that doesn't appear to be the case. This experience has made me *just begin* to seriously grapple with a set of potentially important questions *** for those of us "stuck" with OpenGL *** by inertia, preference, stubbornness, linux-support or platform-independence. The general nature of the question is this.

     Since most (actually ALL) of my applications are realtime and/or will-never-run-fast-enough AND contain physics and other compute-intensive subsystems that will eventually need GPU support, is it actually BETTER to shift everything except the "explicitly graphical" aspects of the process out of OpenGL (or D3D, if I were stuck there) and into CUDA (or OpenCL --- if I infer correctly that OpenCL approximately equals CUDA)? My off-the-top-of-the-head reaction is: well, that does make sense. This does not answer everything people have been complaining about, but I almost wonder if somebody else had similar thoughts, and that led to plans to shift everything non-explicitly-graphical OUT of OpenGL. If nothing else, this makes me less worried that "everyone will abandon OpenGL and eventually no fast/realtime linux/multiplatform support will exist". Anyway, I'm curious what less oblivious OpenGL gurus think of this speculation.
  12. OpenGL 3.0.. I mean 2.2

    Quote: Original post by hikikomori-san
    "I hope Nvidia picks up the torch - these guys are seemingly the most active and most productive people out there! They've accelerated physics and are pioneering GPGPU, they have PerfHUD, FX Composer, two great SDKs with loads of samples, a scene graph SDK, GLExpert, ShaderPerf, the Cg toolkit... I say let these fanatics handle this!"

    I agree. Fact is, if they create something super-efficient (with a low-level appearance or not), people who care can wrap it in a pretty wrapper. We can always hope that nvidia doesn't get fat and lazy.
  13. Quote: Original post by phantom
     Quote: Original post by bootstrap
     "I am staying with OpenGL. I have no choice, because I cannot allow my application to become dependent upon the evil empire. Either people forget how many times they have been shafted by macroshaft, or they haven't been developing software long enough yet to *repeatedly* find out and get angry enough to abandon THEM on principle - and out of self-preservation. I must admit, it took several times to sink into my lame brain."

     "As I said in the other thread; name them. And let's not ignore the constant shafting the ARB has given OpenGL over the last 8 years either; you can't be selective in these things..."

     Just for the record, I answered in the other post. The brief version of the answer is two-fold.

     #1: I did not "ignore" anything the ARB/OpenGL has done over the past 8 years. In fact, my post asked whether we could ditch ARB/OpenGL and write a new API for games/realtime/interactive applications that need high performance. The answer may be "no", because we can't get the information we need (about the low-level hardware interface), but how does asking whether we can "abandon ARB/OpenGL" ignore/forgive their faults?

     #2: You may have [reasonably] misunderstood: the shaftings I mentioned were not meant to be limited to D3D only - they refer to every hunk of software they create, from super-virus Windows/ActiveX to the "registry" (which causes more disasters than books could enumerate) to DirectX/Direct3D --- and everything in between. But I refuse to start writing a book about these things in this forum. I will allow you to love Microsoft and not hassle you about it. You can give me the same deference and freedom to observe, evaluate and mention what I observe and infer. Believe me, every big, greedy, malicious bully has plenty of defenders (out of fear, ignorance and many other mistakes/motives). They don't need you on their side. Or me.
  14. OpenGL 3.0.. I mean 2.2

    Quote: Original post by phantom
    Quote: Original post by bootstrap
    "Okay, now for my questions and comments. Do understand that my focus is NOT past history (since I mostly ignored that), but future history - perhaps including what WE can or cannot make it (instead of what will just be done to us)."

    "Wait.. so we aren't allowed to ignore how MS have 'shafted' you in the past, yet we are allowed to ignore the constant screw ups of the ARB over the last 8 years? You might want to adjust your clothes, your anti-MS bias is already showing...."

    You completely misread the point of my post! Or else I didn't say clearly enough (for you to understand) that I did not pay attention to things like OpenGL politics, ARB promises, etc. They MAY have shafted the OpenGL community, but I explicitly stated I was not in a position to judge that. Also, I openly admitted my personal inclination (or bias) to accept an "unpretty" API so long as it gets the work done. I totally accept that some people want a nice, clean, pretty, object-oriented API, and I find nothing wrong with people holding those personal desires. They are simply less important than other aspects of the OpenGL situation - to me. Perhaps if I had been listening to ARB/Khronos promises for the past two years, and getting myself totally jazzed up about their promises and the prospects of switching to a new, nice, clean, pretty, object-oriented API, then quite possibly I'd be torked off too. Lucky for me, I didn't create that emotional problem for myself, as it turns out.

    You also seem to have completely missed the fact that I was discussing the possibility of ditching OpenGL entirely for the "niche" in question, namely "games". Tell me, how does the prospect of abandoning ARB/Khronos/OpenGL for all realtime/interactive/simulation applications constitute "ignoring" the weaknesses of OpenGL3 or the "screwups" of the ARB? Can you explain that to me? I can ask whether we should abandon OpenGL for realtime/interactive/simulation applications - and you don't even notice. But to even mention horrors by fat-cat bullies... well, that's just unforgivable, now isn't it?

    Quote: Quote:
    "My first observation is, I have been shafted so many times by microsoft and windows that I always call them macroshaft and windoze, no matter how many points people subtract from me in forums (LOTS). I say this only to remind everyone who is sore at ARB/Khronos that they will quite certainly be shafted much worse by the fiends they find themselves dependent upon in the D3D world. I know, first hand, repeatedly. Been there, done that, again and again. Shame on me. But finally learned the lesson after a couple dozen iterations (see #2 above)."

    "I'd like to know how you have been 'shafted' by MS. About the only two major changes MS have done in... well, forever... is D3D10 locked to Vista and DirectSound not being hardware accelerated (and given the quality of the drivers that wasn't a bad thing). MS bend over backwards to help people and maintain compatibility between versions as best they can. So, yeah, how have you been 'shafted'?"

    I refuse to warp this thread into something about those poor defenseless kittens that you love so much. But I do admit you made a semi-reasonable misunderstanding of my comment. My comment was not intended to refer only to abuses in DirectX and/or Direct3D, but to the totality of software they released [that I sampled]. That does include Direct3D, however. Even after I wrote [most/much of the guts/core of] two game engines (with commercially released games) with their software, I got torked enough at D3D to abandon D3D forever and switch to OpenGL - not a trivial investment of time and effort. But I ended up very glad that I did, on balance. What were the specific problems that drove me away from D3D? You know, it's been a few years now, and I forget (and I'm not anxious to recall and regenerate the aggravation it took to drive me away).

    Quote: Quote:
    "To continue the above thought, will someone here who knows more about the bits and pieces of software between the hardware and software inform me (and others) what, IF ANY, lower levels can be accessed by developers? Specifically, what do we have to "work with" if we decide to solve the problem for ARB/Khronos and create our own API *explicitly* designed for "games"?"

    "On Windows you have D3D and OpenGL. In Linux you have OpenGL, or writing directly to the hardware, as you can produce your own drivers; however AFAIK only AMD have released substantial specs and NV are still closing their specs off. And if NV aren't giving them out I don't expect they will give them out to anyone."

    Oh, so you *did* notice I was not giving ARB/Khronos/OpenGL a free pass! I am still far behind others in understanding the consequences of recent happenings with OpenGL3, but my first read through the GLSL document shows it adds most of the features I wanted to find - though I am rather surprised to find such a huge number of deprecations. The significance of that hasn't sunk in yet, perhaps.
  15. WGL_EXT_swap_control

    Quote: Original post by LeadBreakfast
    "Because this is for a commercial software project, and in order to use it I have to get all sorts of red tape cleared in order to add a new external lib to our build environment."

    I'm not sure this gets around your problem, but GLEE is equivalent to GLEW, and you can simply include the glee.h and glee.c files in your project and presto-chango, all the [available] extensions are available. In other words, unlike GLEW, GLEE does not require you to adopt their lib file (if I remember GLEW correctly; I adopted GLEE).