TriKri

Members
  • Content count

    61
Community Reputation

124 Neutral

About TriKri

  • Rank
    Member
  1. According to the reference page for glGetUniformLocation, it will return -1 if the variable name does not correspond to an active uniform variable in the shader program, or if the name starts with the reserved prefix "gl_". Apart from that, I do not know; it seems like it should find your variable's location. In some cases the compiler can decide to optimize a variable away because it is not used, and then you will get -1 when you try to find its location, but that doesn't seem to be the case here either. One thing I noticed, though, is that you have written if (location > -1). Are you sure that location can't be negative even if the variable was found, i.e. location < -1? I haven't checked this myself, but just in case, I think if (location != -1) would probably be better to use. I assume you have also made sure that there isn't anything wrong with your shader program before you try to find the variable (that it compiled and linked properly). Maybe you should check glGetError directly after calling glGetUniformLocation, before doing anything else with OpenGL, like rendering.
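     A minimal sketch of the check I mean (my own fragment, not the original poster's code; it assumes a current GL context with the 2.0 entry points loaded, e.g. through GLEW, and "u_color" is a made-up uniform name):

        #include <GL/glew.h>
        #include <cstdio>

        void check_uniform(GLuint program)
        {
            GLint location = glGetUniformLocation(program, "u_color");
            GLenum err = glGetError();            // check right away, before any other GL call
            if (err != GL_NO_ERROR)
                std::printf("glGetUniformLocation set GL error 0x%x\n", err);
            else if (location == -1)
                std::printf("uniform not found (inactive, optimized away, or misspelled)\n");
            else {
                glUseProgram(program);            // the location is only valid for this program
                glUniform4f(location, 1.0f, 0.0f, 0.0f, 1.0f);
            }
        }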
  2. Hi! I want to draw edge lines along edges where the z-coordinate makes a sudden jump. How can I accomplish this? I want to implement it as follows: after the image has been rendered, I look at the z-buffer value of each pixel. I loop through all eight neighbouring pixels and look at their z-buffer values too, and if one differs from the middle value by more than, say, 1 % or so of that value (for the diagonal neighbours this threshold is multiplied by the square root of two), I add one to a counter. For a counter value of 0 the pixel should be left unchanged, for 1 it should be darkened by 1/3, for 2 by 2/3, and for 3 or higher it should be completely black. How do I implement this? I know that I first have to render the entire scene to get the z-buffer I want to use, but how can I then process all the pixels again? And how do I access the z-buffer?
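     One simple but slow CPU-side sketch of the neighbour test (my own illustration, not from the thread; in practice you would do this in a fragment shader with the depth buffer bound as a texture). Note that glReadPixels returns window-space depth in [0,1], not linear eye-space z, so for a strict "1 % of z" test you would first linearize these values:

        #include <GL/gl.h>
        #include <vector>
        #include <cmath>
        #include <algorithm>

        // Counts depth discontinuities around each pixel after the scene has been rendered.
        void darken_depth_edges(int width, int height)
        {
            std::vector<float> depth(width * height);
            glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, &depth[0]);

            for (int y = 1; y < height - 1; ++y)
                for (int x = 1; x < width - 1; ++x) {
                    float d = depth[y * width + x];
                    int count = 0;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx) {
                            if (dx == 0 && dy == 0) continue;
                            float threshold = 0.01f * d * ((dx && dy) ? 1.41421356f : 1.0f);
                            if (std::fabs(depth[(y + dy) * width + (x + dx)] - d) > threshold)
                                ++count;
                        }
                    float darken = std::min(count, 3) / 3.0f;   // 0, 1/3, 2/3 or fully black
                    // ... scale the colour of pixel (x, y) by (1 - darken) ...
                }
        }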
  3. 3D wave PDE

     Quote:Original post by Wasting Time
     For the 3D problem, there are solutions of the form u(x,y,z) = F(n1*x+n2*y+n3*z+c*t) + G(n1*x+n2*y+n3*z-c*t) where (n1,n2,n3) is a vector with length 1.

     That is true, but those are far from all the solutions. To better understand how to obtain all possible solutions (like the thread starter did in his 1D example), one usually considers the Fourier transform of the solution. A Fourier basis function e^(ik·r), where k is the wave vector and r is the position in space, is a solution to the wave equation provided it is multiplied by a time-dependent factor e^(-iwt), where w is an appropriate angular frequency. As it turns out, there are two values of w for each wave number k (= |k|, the length of the wave vector); substituting into the original PDE gives the dispersion relation w^2 = c^2*k^2, or equivalently w = ±ck. Furthermore, since a well-posed differential equation has exactly one solution, it follows that any other solution of the initial PDE can be formed as a linear combination of these basis solutions. So in the 3D case the general solution looks like e^(ik·r)*(A(k)e^(-ickt) + B(k)e^(+ickt)), integrated over all possible wave vectors k, where A and B are arbitrary complex-valued (or real) scalar functions of k. A and B are obtained as linear combinations of the Fourier transforms of the initial conditions U and U_t.

     On the other hand, if this is going to be used for modelling oceans in some kind of simulation where the grid has a uniform spacing, the best thing to do is probably to stick with the original PDE and use either a finite difference method or a Runge-Kutta method of sufficiently high order and precision (to prevent high-frequency waves from blowing up; the low-frequency waves are usually not a problem because their time derivatives of any order are naturally small), since those methods are faster than performing a Fourier transform in each step, and can cope with varying parameters such as the wave speed, for example when a wave enters a medium with lower speed or an ocean wave reaches shallow water (although realistic ocean waves are even more complex and, as it turns out, a lot more challenging than that).
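     As a concrete illustration of the finite-difference approach mentioned above, here is a minimal 1D leapfrog sketch of my own (not code from the thread) for u_tt = c^2*u_xx with fixed ends and made-up grid parameters:

        #include <vector>
        #include <cstdio>
        #include <cmath>

        int main()
        {
            const int    N  = 200;     // number of grid points (made-up value)
            const double c  = 1.0;     // wave speed
            const double dx = 0.01;
            const double dt = 0.005;   // chosen so that c*dt/dx <= 1 (CFL stability condition)
            const double r2 = (c * dt / dx) * (c * dt / dx);

            std::vector<double> prev(N, 0.0), curr(N, 0.0), next(N, 0.0);

            // Initial condition: a small Gaussian bump, initially at rest (prev == curr).
            for (int i = 0; i < N; ++i)
                curr[i] = prev[i] = std::exp(-0.5 * std::pow((i - N / 2) * dx / 0.05, 2));

            for (int step = 0; step < 1000; ++step) {
                // Central differences in both time and space (leapfrog update).
                for (int i = 1; i < N - 1; ++i)
                    next[i] = 2.0 * curr[i] - prev[i]
                            + r2 * (curr[i + 1] - 2.0 * curr[i] + curr[i - 1]);
                // Fixed (u = 0) boundaries; next[0] and next[N-1] stay 0.
                prev.swap(curr);
                curr.swap(next);
            }

            std::printf("u at the midpoint after 1000 steps: %f\n", curr[N / 2]);
            return 0;
        }

     Violating the CFL condition is exactly the kind of thing that makes the high-frequency components blow up, which is why a sufficiently small time step (or a higher-order integrator) is needed.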
  4. Okay, so interruptions it is! :)
  5. Quote:Original post by TriKri Since I think I know the answer to my question (using async_read instead of read), I want to ask: How does an asynchronous function work? Will an interrupt be used which pauses the program flow and calls the handler function in the same thread, or will the asynchronous function open a new thread in which the handler function will later be called? It seems like the former to me after reading this forum post.
  6. Since I think I know the answer to my question (using async_read instead of read), I want to ask: How does an asynchronous function work? Will an interrupt be used which pauses the program flow and calls the handler function in the same thread, or will the asynchronous function open a new thread in which the handler function will later be called?
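     For reference, a minimal Boost.Asio sketch of my own (the device name is a placeholder): the completion handler is invoked from within io_service::run(), i.e. from whichever thread calls run():

        #include <boost/asio.hpp>
        #include <iostream>

        void on_read(const boost::system::error_code& ec, std::size_t n)
        {
            // Runs inside whichever thread called io_service::run().
            if (!ec)
                std::cout << "read " << n << " bytes" << std::endl;
        }

        int main()
        {
            boost::asio::io_service io;
            boost::asio::serial_port port(io, "/dev/ttyUSB0");   // placeholder device name

            char buf[16];
            boost::asio::async_read(port, boost::asio::buffer(buf), on_read);

            io.run();   // blocks here, dispatching completion handlers, until all work is done
            return 0;
        }

     If you want handlers to run in another thread, you do that explicitly by calling run() from that thread.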
  7. How can I check if there is any incoming data at all on the serial port? If there is no incoming data, I don't want the function to block...
  8. Okay, but should I use boost::asio::streambuf to buffer the data? And then how can I extract the data from it so I can parse the message that has been sent? You say that it can only store 16 bytes or so (I guess it uses some kind of cyclic buffer); the rest will be lost. I wonder if we might need a separate thread which only reads incoming messages, in case it can take so long between two calls to the function that some data is sent and lost in between? I'm using Qt as a widget toolkit and I don't know how long it takes for it to process all its events, although the application appears to run very smoothly.
  9. Hi, we are programming a robot in a project at the university, and I thought you might be able to answer a question about Boost.Asio's read function. We are going to communicate from a laptop with the robot, over Bluetooth, via a FireFly connected to the computer's serial port. So far, I think I have handled the outgoing data correctly, using boost::asio::write. However, now I think I need some help with the incoming data (the Boost.Asio documentation is, well ... limited). All messages start with a one-byte header. The header specifies which message type has been sent. Each message type has a known length which I store in an array called message_lengths. I had imagined the code to look something like this:

        void user_interface::check_for_incoming_messages()
        {
            boost::asio::streambuf response;
            boost::system::error_code error;
            std::string s1, s2;

            if (boost::asio::read(port, response, boost::asio::transfer_at_least(0), error))
            {
                convert_streambuf_to_string(response, s1);
                int msg_code = s1[0];
                if (msg_code < 0 || msg_code >= NUM_MESSAGES) {
                    // Handle error, invalid message header
                }
                if (boost::asio::read(port, response,
                        boost::asio::transfer_at_least(message_lengths[msg_code] - s1.length()), error))
                {
                    response >> s2;
                    // Handle the content of s1 and s2
                }
                else if (error != boost::asio::error::eof) {
                    throw boost::system::system_error(error);
                }
            }
            else if (error != boost::asio::error::eof) {
                throw boost::system::system_error(error);
            }
        }

     Right now the code will just ignore a second message, if there happens to be one. And I don't know what convert_streambuf_to_string would look like. I have also just assumed that response is reset each time read is called, but maybe that's wrong. Can someone who knows how to do this please help me? I would be really grateful.
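     For what it's worth, a minimal sketch of what convert_streambuf_to_string could look like, using the std::istream interface that boost::asio::streambuf exposes (my own guess at an implementation; the name and signature are just the ones used in the post above):

        #include <boost/asio.hpp>
        #include <istream>
        #include <iterator>
        #include <string>

        // Drains everything currently stored in the streambuf into a std::string.
        void convert_streambuf_to_string(boost::asio::streambuf& buf, std::string& out)
        {
            std::istream is(&buf);   // asio's streambuf derives from std::streambuf
            out.assign(std::istreambuf_iterator<char>(is),
                       std::istreambuf_iterator<char>());
        }

     Reading through the istream consumes the bytes from the streambuf, so they will not be returned again by a later read.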
  10. Hi! In the documentation for SDL_CreateRGBSurface (http://docs.huihoo.com/sdl/1.2/sdlcreatergbsurface.html), there is an example at the bottom:

        /* Create a 32-bit surface with the bytes of each pixel in R,G,B,A order,
           as expected by OpenGL for textures */
        SDL_Surface *surface;
        Uint32 rmask, gmask, bmask, amask;

        /* SDL interprets each pixel as a 32-bit number, so our masks must depend
           on the endianness (byte order) of the machine */
        #if SDL_BYTEORDER == SDL_BIG_ENDIAN
            rmask = 0xff000000;
            gmask = 0x00ff0000;
            bmask = 0x0000ff00;
            amask = 0x000000ff;
        #else
            rmask = 0x000000ff;
            gmask = 0x0000ff00;
            bmask = 0x00ff0000;
            amask = 0xff000000;
        #endif

        surface = SDL_CreateRGBSurface(SDL_SWSURFACE, width, height, 32,
                                       rmask, gmask, bmask, amask);
        if (surface == NULL) {
            fprintf(stderr, "CreateRGBSurface failed: %s\n", SDL_GetError());
            exit(1);
        }

     On my system the byte order is not SDL_BIG_ENDIAN, so I guess it is little endian. The thing is that when I call SDL_SetVideoMode, I get the masks rmask = 0x00ff0000, gmask = 0x0000ff00, bmask = 0x000000ff, amask = 0x00000000. Wouldn't they rather be rmask = 0x000000ff, gmask = 0x0000ff00, bmask = 0x00ff0000, amask = 0x00000000? I was trying to perform fast buffer manipulations, and therefore stored the masks, shifts and losses of each channel as constants, but my video buffer was clearly giving me trouble! Why does it get these bit masks? Later I discovered that if I create an SDL_Surface from an image loaded from disk, using SDL_DisplayFormatAlpha, I get the masks rmask = 0x00ff0000, gmask = 0x0000ff00, bmask = 0x000000ff, amask = 0xff000000. :S I also get these masks if I generate an image from a font, a text and a color, using SDL_ttf and the function TTF_RenderText_Blended. I also discovered that if I load an image from disk using SDL_image and the function IMG_Load, I get the masks rmask = 0x000000ff, gmask = 0x0000ff00, bmask = 0x00ff0000, amask = 0x00000000. ??? This even differs from the bit masks of the video buffer, which has exactly the same channels enabled! The empty bit mask for the alpha channel is probably because the images I have loaded don't have an alpha channel, because I suppose IMG_Load handles alpha channels, right? This byte order thing is making me kind of confused. Why do images generated in different ways have such different bit masks for their channels? Does it matter at all in what order I choose the channels to be in when I call SDL_CreateRGBSurface (the ones who created the example clearly think so)? And if it does, what difference does it make?
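     For debugging this, a small sketch of my own that prints what a given surface actually uses (here surface stands for any of the surfaces mentioned above):

        SDL_PixelFormat *fmt = surface->format;
        printf("bpp=%d  Rmask=0x%08x  Gmask=0x%08x  Bmask=0x%08x  Amask=0x%08x\n",
               fmt->BitsPerPixel, fmt->Rmask, fmt->Gmask, fmt->Bmask, fmt->Amask);

     Blitting between surfaces whose masks differ still works (SDL converts on the fly), it is just slower, which is why SDL_DisplayFormat and SDL_DisplayFormatAlpha exist.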
  11. Hi! How can you check whether a true color SDL_Surface (not 8 bits per pixel) has an alpha channel or not? SDL has to know this itself when it blits one surface onto another, so it must be possible to check it somehow. For example, an image that is loaded from a png file into an SDL_Surface should afterwards be converted to an SDL_Surface with the same pixel format as the screen buffer, for faster blitting. This can be done with SDL_DisplayFormat for example, but then the alpha channel won't be copied, and the per-surface alpha value will be used instead when blitting the image if it has alpha-blending enabled. If you use SDL_DisplayFormatAlpha instead, the alpha channel will be copied, and then that will be used as alpha instead of the per-surface alpha value. The thing is that blitting an image with an alpha channel and alpha-blending enabled onto another surface with the same settings is kind of buggy: apparently the alpha channel of the destination image is replaced by the alpha channel of the source image on the part that is blitted onto, so a fully opaque image can become transparent or semi-transparent after blitting. That is why I'm writing my own blit function, but then I need to know whether an image has an alpha channel or not, so I know where the alpha value should be taken from.
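     A minimal sketch of the check, based on the public SDL_Surface fields (my own suggestion, not something stated in this thread):

        /* A true-color surface carries a per-pixel alpha channel iff its Amask is non-zero;
           SDL_SRCALPHA in the flags only says that alpha blending is enabled for blits. */
        int has_alpha_channel    = (surface->format->Amask != 0);
        int per_surface_blending = (surface->flags & SDL_SRCALPHA) != 0;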
  12. I think I have found the problem. The function SDL_SetAlpha sets the per-surface alpha value of an image, i.e. the alpha value that is associated with the image itself, not with individual pixels. On the other hand, if an SDL_Surface has an alpha channel, i.e. one alpha value for each pixel, the per-surface alpha value is ignored when blitting, which was what happened in my case. Concerning the images loaded from file, they lost their alpha channels, since I used the function SDL_DisplayFormat to convert them to a format with faster blitting. If I had used SDL_DisplayFormatAlpha instead, as you said (which does copy the alpha channel of a surface, unlike SDL_DisplayFormat, which doesn't create an alpha channel at all), the alpha channel from the loaded-from-file surface would have been copied into the new surface, and then the per-surface alpha value wouldn't have had any impact on the blitting of this image either.
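     To make that concrete, a small sketch of my own of the conversion path described above (the file name is a placeholder and error checks are omitted):

        /* Load a PNG with SDL_image, then convert it for fast blitting while
           keeping its per-pixel alpha channel. */
        SDL_Surface *loaded = IMG_Load("sprite.png");          /* placeholder file name */
        SDL_Surface *fast   = SDL_DisplayFormatAlpha(loaded);  /* keeps the alpha channel */
        /* SDL_DisplayFormat(loaded) would instead drop the alpha channel, and blits
           would fall back to the per-surface alpha set with SDL_SetAlpha. */
        SDL_FreeSurface(loaded);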
  13. Hi, I'm writing a program in SDL, in which SDL_Surfaces are created in three different ways: 1. Loading images using IMG_Load. These images will let you alpha blend them so that they can be transparent or semi-transparent. To load an image with IMG_Load, no flags have to be specified, just the file name. 2. Generating image surfaces using SDL_CreateRGBSurface. These won't let you alpha blend them, but I don't understand why. SDL_SWSURFACE is given as the flags parameter, and the depth is set to 32. 3. Generating image surfaces from a text, a font and a color using TTF_RenderText_Blended. These won't let you alpha blend them either. Why?! To do the alpha blending, SDL_SetAlpha is called with SDL_SRCALPHA given as the flags parameter, so that the surfaces can be made transparent; alpha is typically set to 128. Has anyone had the same problem before and knows how to solve it? That would be great, I'm kind of stuck here.
  14. Best AI programming language?

     alvaro, maybe I misunderstood you; I guess that happens easily on the web. I didn't know you had the experience you have with chess programs either. What I had in mind wasn't the next top-rated chess program, but simply an experiment to see if code that made sense could be generated (only for the evaluation function), and if a computer could "learn" to play chess this way. What motivated me was that I thought I had an especially good method for telling what's good (and should be given a higher value) and what's not, since I had found a source that better matches the "true value" of the position (see my second post in this thread if you don't know what I'm talking about). However, I originally developed that idea for neural networks (to use with some form of backpropagation), since it would be possible to train them this way using supervised learning, and just let the program run by itself while improving. I guess it could be used just for tuning parameters as well. Maybe it's overkill to try to generate code, but since it's the computer's language, I thought it might be what has the greatest potential to achieve something really good. Note that this would NOT be a replacement for min-max or alpha-beta pruning; those algorithms are essential for any good chess program, and they would still be there. That would totally be reinventing the wheel. :) However, now I don't really feel like doing it any more. I feel a little bit discouraged as well. I planned to start the project, but then it took way too long before anything happened, and I just lost interest in it. Hopefully I've given someone else an idea or inspired someone (anyone feel inspired?). For now it's put on ice; maybe I will make an attempt sometime later, in a few years or so. Thank you anyway for the suggestions I've got. -Kristofer [Edited by - TriKri on October 10, 2009 7:59:55 AM]
  15. Best AI programming language?

     Quote:Original post by alvaro And why would you? You are welcome to offer a constructive comment, but right now you are not contributing anything.