wall

Members
  • Content count: 90

Community Reputation

134 Neutral

About wall

  • Rank: Member
  1. Thanks a lot for clearing that up, guys! I've now got it working. I wasn't aware of the 0-1 range that you pointed out. I'll look into the 8-bit way of doing it, sounds smart. Also, are there graphics cards that support linear filtering on floating point formats? Or are we still waiting for this? Thanks again!
  2. Sorry if I wasn't clear on that. My intermediate render target(s) is A16B16G16R16, which seems to lose the high dynamic range I'm after. When I switch to the floating point version - A16B16G16R16F, the high dynamic range is kept, but I won't get linear texture filtering that way, which I need. Thanks
  3. Hi! I'm using a 64-bit integer (A16B16G16R16) cubemap in my project, and rendering that to the framebuffer works just fine; I can adjust the exposure and everything looks neat. However, when I render to an intermediate render target to do some blooming effects, I seem to lose the high dynamic range. It works if I use a 64-bit floating point (A16B16G16R16F) render target, but not the integer format. Why is that? My graphics card (ATI X700 Mobile) only supports nearest-neighbour filtering with floating point textures, so I really want to use A16B16G16R16 instead. Has anyone else had this problem? Thanks in advance for any thoughts on this.
  4. Not sure if this is what you're really after, but here are a couple of things I've thought about when trying to figure out why a lot of my stuff never gets finished. I'm a hobbyist, so I don't really know what I'm talking about, but...

    One thing I think is important is to establish the overall concept of the game at an early stage: the "feeling" you want to deliver throughout the game. This could be anything, really. I got the idea for the project I'm currently working on from a movie and a song; together they sum up the concept of the whole idea quite well. It's good to have something like this to keep you from losing sight of your initial idea. Go back to your sources of inspiration sometimes during development.

    Another thing that isn't really about code structuring is to restrict the game concept to something manageable. If you're working alone or with a few friends, it easily becomes more work than you originally planned. You don't want to have to cut back on features after months of development because you realize that it won't get finished otherwise; this tends to result in crap. Instead, try to pick out the features that are most important for your game and focus on those first. This way you can polish the important stuff to "perfection" and add the not-so-important-but-cool stuff later if you find the time.
  5. That is one funny commercial though...good actors
  6. Procedural texturing?

    Thanks for your replies, guys! I'll try making some textures and see how they come together. The reason I was wondering about the UV coordinates is that if I have some model where one face has its coordinates in the lower left corner of the map, and a face next to it has coordinates in the upper right corner, with a bunch of other stuff in between, then a procedural texture would look all "chopped up", since the texture itself is continuous, right? But I suppose you just have to make sure the UV mapping is suited for procedural texturing.
  7. I'm working on my little raytracer and just implemented texturing using maps loaded from disk. Now I want to extend that to also support procedural texturing. I'm completely new to this, so I'm not really sure how to approach it. A procedural texture is a *procedure* that creates a color value based on some input value(s), right? So what kind of parameter would you typically use? The geometry's UV coordinates that you use to map images? The 3D coordinate at the ray intersection point? Or something else? I have a feeling it depends on whether you're doing 2D or 3D textures, but say we focus on 2D. Is there a general way, or does the type of input depend on what kind of texture/look you're after? Thanks!
  8. Radiosity in Practice

    I've got no experience with radiosity lighting, just a couple of quick questions about your discussion... Are you doing all calculations in software? Like, as part of a raytracer? If so, how well does this translate to realtime rendering? Are programmable GPUs capable of doing all this in hardware now? And what kind of trade-offs do you have to make in order to get it fast? Thanks, William
  9. A thousand ships

    Looks very good! It's always interesting to follow the progress in your journal, keep it up!
  10. applying textures to model

    Well, if you include texture coordinates when you export the 3DS model, you can just modify your loader to use those coordinates to apply textures. The principle is the same as with a quad: you just pass the coordinates that correspond to each vertex in the model.
  11. Hi! I've just started to wrap my head around HLSL shaders and effect files using DX. I have a few questions though: 1. If I've got a scene where all objects are supposed to be lit using some light function defined in a .fx file, and then I want a few of those objects to be affected by other shaders in some other .fx file, how do I go about achieving this? Do I have to change that other effect to also contain the light computations, and use two different effects depending on whether the object uses that other effect or is just normally lit? Or is there a way to draw objects using effects defined in two different .fx files? Basically, if I have an effect that is used on many objects, how do I combine it with some other effect? 2. How do I typically render a scene where all objects use shaders? Should I sort by effect and draw all objects that use a certain effect, go to the next effect and draw all objects using that, and so on? Or should I sort by object and draw it using the effect that particular object is specified to use? A scene graph is not needed at the moment... 3. Do .x files support effects? I.e. can materials in .x files reference some .fx file, or do I have to specify that manually in the application code? I've understood that "effect instances" can somewhat help with this...? Any insight on these questions is appreciated! Thanks in advance!
  12. Thanks for the responses, I'll look into the boost stuff... How would I go about doing it with stringstream? I've not found a good resource for this... Thanks!
  13. Hi! I'm working on a logging system for my current project and I've got a function to print strings coupled with some variables, like this (C++): void Print(std::string str, ...) { char tmpStr[256]; va_list marker; va_start(marker, str); vsprintf(tmpStr, str.c_str(), marker); va_end(marker); str = tmpStr; //Continue doing stuff with str... } My question is, is there a way to do this without the fixed array size on tmpStr? I can't dynamically allocate it because there's no way to know the length of the formatted string, right? Is there a way to get the arguments into the std::string without using sprintf() and a char array? It's working as it is of course, but I'd rather not have a fixed size array if it's possible; what if the string is longer than the size of the array? I've googled, and all the examples of doing this I've found did it this way.
  14. Camera class

    Thanks for the help guys!
  15. I'm working on a simple camera class for my Direct3D project, using C++. My vector and matrix knowledge isn't what it should be, so I use D3DX to help me out with the math. It doesn't behave the way I want though. Here's some code to show how I've done it. The class stores three data members which are later used to set up the view with D3DXMatrixLookAtRH(): D3DXVECTOR3 pos; D3DXVECTOR3 target; D3DXVECTOR3 up; I've got methods to move and rotate the camera: void CCamera::Move(float speed) { //Create a vector between the camera and the target D3DXVECTOR3 Vec = this->target - this->pos; //Move the position and target along this vector this->pos += Vec*speed; this->target += Vec*speed; } void CCamera::Rotate(float yaw, float pitch, float roll) { //Create a rotation matrix D3DXMATRIXA16 mat; D3DXMatrixRotationYawPitchRoll(&mat, yaw, pitch, roll); //Transform the vectors by this matrix D3DXVec3TransformCoord(&this->pos, &this->pos, &mat); D3DXVec3TransformCoord(&this->target, &this->target, &mat); D3DXVec3TransformCoord(&this->up, &this->up, &mat); } And before each frame is rendered this code is executed to set up the view: D3DXMATRIXA16 view; //Create a matrix from the camera vectors and use it to transform the view D3DXMatrixLookAtRH(&view, &camera.pos, &camera.target, &camera.up); if (this->pD3DDevice->SetTransform(D3DTS_VIEW, &view) != D3D_OK) MessageBox(NULL, "SetTransform() error!", NULL, MB_OK); The Move() method works as I want it to, but not the Rotate() method, which makes the triangle I use for testing rotate around its own axis, and not around the camera... I'm new to both 3D math and Direct3D, so any help is appreciated. One "weird" thing I noticed is that if I also transform the world matrix by the camera matrix, the triangle rotates both around its own axis and around the camera. Only transforming the world matrix yields the same result as when only transforming the view, though, i.e. the triangle rotates around its own axis(?).