
Suen

Members
  • Content count

    90
  • Joined

  • Last visited

Community Reputation

160 Neutral

About Suen

  • Rank
    Member
  1. OpenGL

    Well, changing to this actually solved the problem, thanks. Thanks for the name-change suggestion too; you're right that I shouldn't confuse a position with a vector, my bad.

But honestly I'm still wondering what exactly I'm doing wrong in the original code. I'm merely changing the position of my camera's reference (target) point. To rotate it I switch to a spherical coordinate system, alter the correct element of the coordinate, switch back to the cartesian coordinate system, and then calculate the look/forward vector. The camera position remains the same, but since the camera's reference point has now changed, (cameraTargetPos-cameraPos) should result in a new vector which is rotated by some amount. Am I thinking about this the wrong way?

edit: changed the name of cameraTarget to cameraTargetPos in my first post as suggested.
  2. I recently went back to look at some old OpenGL (3.x+) programs I did and checked my camera class. In my camera class I have a function which calculates the view matrix. The calculations are based on OpenGL's gluLookAt function. Here's what the class looks like (I've omitted several identical and irrelevant functions for less code):

[CODE]
#include "camera.h"

//Create view matrix
glm::mat4 Camera::createViewMatrix()
{
    glm::vec3 lookDirection = glm::normalize(cameraTargetPos); //Look direction of camera
    static glm::vec3 upDirection = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f)); //Up direction of camera in the world, aligned with world's y-axis
    glm::vec3 rightDirection = glm::normalize(glm::cross(lookDirection, upDirection)); //Right direction of camera
    glm::vec3 perpUpDirection = glm::cross(rightDirection, lookDirection); //Re-calculate up direction, basis vectors may not be orthonormal

    static glm::mat4 viewMatrixAxes;

    //Create view matrix, [0] is first column, [1] is second etc.
    viewMatrixAxes[0] = glm::vec4(rightDirection, 0.0f);
    viewMatrixAxes[1] = glm::vec4(perpUpDirection, 0.0f);
    viewMatrixAxes[2] = glm::vec4(-lookDirection, 0.0f);
    viewMatrixAxes = glm::transpose(viewMatrixAxes); //Transpose for inverse

    static glm::mat4 camPosTranslation; //Translate to position of camera
    camPosTranslation[3] = glm::vec4(-cameraPosition, 1.0f);

    viewMatrix = viewMatrixAxes*camPosTranslation;

    return viewMatrix;
}

void Camera::yawRotation(GLfloat radian)
{
    glm::vec3 spherCameraTarget(cartesianToSpherical(cameraTargetPos)); //Convert camera target to spherical coordinates
    spherCameraTarget.y += radian; //Add radian units to camera target (in spherical coordinates)
    cameraTargetPos = sphericalToCartesian(spherCameraTarget); //Convert camera target back to cartesian coordinates
}

void Camera::pitchRotation(GLfloat radian)
{
    glm::vec3 spherCameraTarget(cartesianToSpherical(cameraTargetPos)); //Convert camera target to spherical coordinates
    spherCameraTarget.z += radian; //Add radian units to camera target (in spherical coordinates)
    spherCameraTarget.z = glm::clamp(spherCameraTarget.z, 0.0001f, PI); //Clamp the pitch rotation between [0 PI] radians
    cameraTargetPos = sphericalToCartesian(spherCameraTarget); //Convert camera target back to cartesian coordinates
}

void Camera::moveBackward(GLfloat moveSpeed)
{
    cameraPosition.x += (viewMatrix[0][2]*moveSpeed);
    cameraPosition.y += (viewMatrix[1][2]*moveSpeed);
    cameraPosition.z += (viewMatrix[2][2]*moveSpeed);
}

//Note: Change from standard convention. Z-axis is Y-axis, Y-axis is Z-axis and Z-axis should be negative
//Change this in formula below
glm::vec3 Camera::cartesianToSpherical(glm::vec3 cartesianCoordinate)
{
    GLfloat r = (sqrt(pow(cartesianCoordinate.x, 2) + pow(cartesianCoordinate.y, 2) + pow(cartesianCoordinate.z, 2)));
    GLfloat theta = atan2(-cartesianCoordinate.z, cartesianCoordinate.x);
    GLfloat phi = acos(cartesianCoordinate.y/r);

    glm::vec3 sphericalCoordinate(r, theta, phi);

    return sphericalCoordinate;
}

//Note: See notes for cartesianToSpherical() function
glm::vec3 Camera::sphericalToCartesian(glm::vec3 sphericalCoordinate)
{
    GLfloat theta = sphericalCoordinate.y;
    GLfloat phi = sphericalCoordinate.z;

    glm::vec3 cartesianCoordinate(cos(theta)*sin(phi), cos(phi), -sin(theta)*sin(phi));

    return cartesianCoordinate * sphericalCoordinate.x;
}
[/CODE]

This worked as expected. I could strafe left/right, up/down, forward/backwards, and I could rotate around my up vector 360 degrees and around my right vector +-90 degrees.

However, I realized that according to the OpenGL documentation I'm calculating my view matrix wrong. If you look at the first line of code in the createViewMatrix() function, you can see that my look direction is only a normalized camera look-at point. According to the documentation it should be (cameraTargetPos-cameraPosition). When I corrected this, the entire camera system broke down, both the strafing and the rotation. I fixed the strafing so it now works as it should; here's the new code (note that the view matrix is calculated correctly now and that all strafe methods update the camera look-at point):

[CODE]
//Create view matrix
glm::mat4 Camera::createViewMatrix()
{
    glm::vec3 lookDirection = glm::normalize(cameraTargetPos-cameraPosition); //Look direction of camera
    static glm::vec3 upDirection = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f)); //Up direction of camera in the world, aligned with world's y-axis
    glm::vec3 rightDirection = glm::normalize(glm::cross(lookDirection, upDirection)); //Right direction of camera
    glm::vec3 perpUpDirection = glm::cross(rightDirection, lookDirection); //Re-calculate up direction, basis vectors may not be orthonormal

    static glm::mat4 viewMatrixAxes;

    //Create view matrix, [0] is first column, [1] is second etc.
    viewMatrixAxes[0] = glm::vec4(rightDirection, 0.0f);
    viewMatrixAxes[1] = glm::vec4(perpUpDirection, 0.0f);
    viewMatrixAxes[2] = glm::vec4(-lookDirection, 0.0f);
    viewMatrixAxes = glm::transpose(viewMatrixAxes); //Transpose for inverse

    static glm::mat4 camPosTranslation; //Translate to position of camera
    camPosTranslation[3] = glm::vec4(-cameraPosition, 1.0f);

    viewMatrix = viewMatrixAxes*camPosTranslation;

    return viewMatrix;
}

void Camera::moveBackward(GLfloat moveSpeed)
{
    cameraTargetPos.x += (viewMatrix[0][2]*moveSpeed);
    cameraTargetPos.y += (viewMatrix[1][2]*moveSpeed);
    cameraTargetPos.z += (viewMatrix[2][2]*moveSpeed);

    cameraPosition.x += (viewMatrix[0][2]*moveSpeed);
    cameraPosition.y += (viewMatrix[1][2]*moveSpeed);
    cameraPosition.z += (viewMatrix[2][2]*moveSpeed);
}
[/CODE]

The rest of the functions in the old code (cartesianToSpherical(), yawRotation() etc.) remain unchanged.

The rotation still remains broken. If I'm close enough to my object it works as it should. It's hard to describe how it's broken. If, for example, I move my camera far back enough that the camera look-at point has a positive value (it starts with a negative value at first), one of the spherical coordinates (theta) ends up being negative, so when I rotate around the up vector to the left I end up rotating right instead. Not only that, but I never complete a whole revolution; it's as if I rotate CW 45 degrees and then CCW 45 degrees, and it just keeps going like that. There's some other weird behaviour that goes on as well.

I'm quite certain it has to do with how I go back and forth between cartesian/spherical coordinates and the formulas I use, but I'm lost as to how to solve it. That I got it to work properly with the wrong code is a small miracle in itself. If you want to try to compile and run the code to see the effect yourself, let me know and I'll attach the source code here. Any help is appreciated, thanks!
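For reference, here is a minimal sketch of one way to keep the rotation consistent with the corrected view matrix, assuming the spherical conversion is applied to the look offset (cameraTargetPos - cameraPosition) instead of to the absolute target position. It reuses the helper functions above and is a sketch of the idea, not necessarily the exact fix from the thread:

[CODE]
//Sketch: rotate the offset from the camera to the target rather than the target's absolute position
void Camera::yawRotation(GLfloat radian)
{
    glm::vec3 lookOffset = cameraTargetPos - cameraPosition;              //Vector from camera to target
    glm::vec3 spherOffset = cartesianToSpherical(lookOffset);             //r is now the camera-target distance
    spherOffset.y += radian;                                              //Rotate around the up axis
    cameraTargetPos = cameraPosition + sphericalToCartesian(spherOffset); //Rebuild the target from the rotated offset
}
[/CODE]

Rotating the offset keeps the spherical radius equal to the camera-target distance no matter where the camera sits in the world, which is exactly what goes wrong when the absolute target position is converted instead.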
  3.   I'm more than likely not going to use something like this in a serious project, as there are already great VecX libraries out there that get most of the job done (and I wouldn't be surprised if many of them are SIMD-optimized; even if not, the compiler might do it for you). It's mostly just me playing around, but I have to admit that using Vec2 and Vec3 with SIMD does give me a headache, which might explain why it's not all that useful :P
  4.   Yep, I was just thinking of doing this and am going in that direction now. I will be operating on the Y-values too, but much less in comparison. Might as well change my design from AoS to SoA while it's still possible, as it fits better with the way SIMD works.
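For illustration, a minimal sketch of what the SoA side of that change could look like. The PositionsSoA/addFiveToX names are made up for the example, and CacheAlignedAlloc is assumed to work for plain ints like it does for Vector2 in the earlier snippets. With x and y stored in separate aligned arrays, four x-values land in one register with no wasted lanes and no shuffling:

[CODE]
#include <emmintrin.h> //SSE2 intrinsics

//Hypothetical SoA layout: one aligned array per component instead of an array of Vector2
struct PositionsSoA
{
    int* xs; //e.g. xs = CacheAlignedAlloc<int>(SIZE);
    int* ys; //e.g. ys = CacheAlignedAlloc<int>(SIZE);
};

//Add 5 to every x-value; count is assumed to be a multiple of 4
void addFiveToX(PositionsSoA& p, int count)
{
    __m128i five = _mm_set1_epi32(5);
    for(int i=0; i<count; i+=4)
    {
        __m128i x = _mm_load_si128((__m128i*)&p.xs[i]);              //Four consecutive x-values
        _mm_store_si128((__m128i*)&p.xs[i], _mm_add_epi32(x, five)); //All four lanes do useful work
    }
}
[/CODE]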
  5.   [CODE]
Vector2* ar = CacheAlignedAlloc<Vector2>(SIZE);
//Some values are set for ar here....

__m128i sse2 = _mm_set1_epi32(5);

for(int i=0; i<32; i=i+8)
{
    //Load eight Vector2s, two per 128-bit register, reinterpreted as floats so _MM_TRANSPOSE4_PS can be used
    __m128 v0v1 = _mm_castsi128_ps(_mm_load_si128((__m128i*)&ar[i]));   //{x0, y0, x1, y1}
    __m128 v2v3 = _mm_castsi128_ps(_mm_load_si128((__m128i*)&ar[i+2])); //{x2, y2, x3, y3}
    __m128 v4v5 = _mm_castsi128_ps(_mm_load_si128((__m128i*)&ar[i+4])); //{x4, y4, x5, y5}
    __m128 v6v7 = _mm_castsi128_ps(_mm_load_si128((__m128i*)&ar[i+6])); //{x6, y6, x7, y7}

    //_MM_TRANSPOSE4_PS writes back to its arguments, so they must be __m128 lvalues.
    //Afterwards the rows hold {x0,x2,x4,x6}, {y0,y2,y4,y6}, {x1,x3,x5,x7}, {y1,y3,y5,y7}
    _MM_TRANSPOSE4_PS(v0v1, v2v3, v4v5, v6v7);

    //Integer adds need __m128i, so cast back and forth around each add
    v0v1 = _mm_castsi128_ps(_mm_add_epi32(_mm_castps_si128(v0v1), sse2));
    v2v3 = _mm_castsi128_ps(_mm_add_epi32(_mm_castps_si128(v2v3), sse2));
    v4v5 = _mm_castsi128_ps(_mm_add_epi32(_mm_castps_si128(v4v5), sse2));
    v6v7 = _mm_castsi128_ps(_mm_add_epi32(_mm_castps_si128(v6v7), sse2));

    //Transpose back to the original {x, y, x, y} layout and store
    _MM_TRANSPOSE4_PS(v0v1, v2v3, v4v5, v6v7);
    _mm_store_si128((__m128i*)&ar[i],   _mm_castps_si128(v0v1));
    _mm_store_si128((__m128i*)&ar[i+2], _mm_castps_si128(v2v3));
    _mm_store_si128((__m128i*)&ar[i+4], _mm_castps_si128(v4v5));
    _mm_store_si128((__m128i*)&ar[i+6], _mm_castps_si128(v6v7));
}
[/CODE]

Ended up with this; I haven't checked its performance yet, but it does look like a bad solution :/
  6. I've been playing around with SSE to get a better understanding of it, using SSE2 intrinsics for integer operations. Currently I've done a very common yet simple code example:

[CODE]
Vector2* ar = CacheAlignedAlloc<Vector2>(SIZE); //16-byte (or multiple of 16-byte) aligned array, SIZE is a large value
//Some values are set for ar here....

__m128i sse;
__m128i sse2 = _mm_set_epi32(0,5,0,5);
__m128i result;

for(int i=0; i<SIZE; i=i+2)
{
    sse = _mm_load_si128((__m128i*)&ar[i]);
    result = _mm_add_epi32(sse, sse2);
    _mm_store_si128((__m128i*)&ar[i], result);
}
[/CODE]

Vector2 is a very simple struct:

[CODE]
struct Vector2
{
    int x, y;
};
[/CODE]

The way things work now is that I can load at most two Vector2s into a 128-bit register. This is fine if I were to perform an operation on both values of each Vector2 at the same time. However, if you look at the code above, the y-value of each Vector2 only gets added with zero, so it remains unchanged; thus 64 bits of the 128-bit register are essentially doing nothing. Is there a way to load four x-values from four Vector2s instead, perform operations, and then store the result back again?
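One possible answer, sketched under the same Vector2/ar/SIZE setup as above (names like "five" are illustrative, and this is not necessarily the fastest option): load four Vector2s into two registers, de-interleave the x- and y-values with _mm_shuffle_ps, operate on the x-lane only, and interleave back before storing.

[CODE]
#include <emmintrin.h> //SSE2 intrinsics; shuffles/unpacks are bit-preserving, so reinterpreting ints as floats is safe here

//Assumes SIZE is a multiple of 4 and ar is 16-byte aligned, as in the snippet above
__m128i five = _mm_set1_epi32(5);

for(int i=0; i<SIZE; i+=4)
{
    __m128 a = _mm_castsi128_ps(_mm_load_si128((__m128i*)&ar[i]));   //{x0, y0, x1, y1}
    __m128 b = _mm_castsi128_ps(_mm_load_si128((__m128i*)&ar[i+2])); //{x2, y2, x3, y3}

    __m128 xs = _mm_shuffle_ps(a, b, _MM_SHUFFLE(2,0,2,0)); //{x0, x1, x2, x3}
    __m128 ys = _mm_shuffle_ps(a, b, _MM_SHUFFLE(3,1,3,1)); //{y0, y1, y2, y3}

    xs = _mm_castsi128_ps(_mm_add_epi32(_mm_castps_si128(xs), five)); //Only the x-values are touched

    a = _mm_unpacklo_ps(xs, ys); //{x0, y0, x1, y1}
    b = _mm_unpackhi_ps(xs, ys); //{x2, y2, x3, y3}

    _mm_store_si128((__m128i*)&ar[i],   _mm_castps_si128(a));
    _mm_store_si128((__m128i*)&ar[i+2], _mm_castps_si128(b));
}
[/CODE]

A caveat: pushing integer data through the float shuffle units can incur a domain-crossing penalty on some CPUs, so it is worth profiling this against the plain per-Vector2 version and against an SoA layout, which avoids the shuffling entirely.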
  7. The above was fixed by tweaking some of the values (for now at least). I've hit another problem (unrelated to the topic), but I'm not sure a whole new thread is needed since I've already created this one. Either way I'll give it a try: I'm trying to render triangles. For the moment I just tried rendering one triangle using a geometric solution (inefficient, but it will have to do for now as I'm just experimenting). I create a triangle consisting of three vertices, then perform a ray-triangle intersection by first checking whether the ray intersects the plane the triangle lies in, and then whether the hit point lies within the triangle or not. The code for this is below along with the code that renders my scene. The problem I have is that the z-values seem to be flipped when I translate my triangle. If I transform the triangle from its model space to world space with a simple translation (no orientation involved), I expect +20 on the z-axis to bring the triangle closer to the camera, yet the result is that my triangle is placed further away from the camera. Using -20 brings the triangle closer to the camera. I thought my matrix transformations might be at fault here, so I specified the coordinates of the triangle's vertices in its local space (model space) with different z-values to see if I would get the same result, and I did.

Update: After going through my code again I noticed I was calculating the constant D of the plane equation wrong in my triangle intersection code. It should be D = -(N · V), where N is the normal of the plane and V is any of the triangle's vertices. I forgot to add the minus sign. It works as it should now.
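To make the fix concrete, here is a small hedged sketch of the ray-plane step with the corrected constant, written with GLM like the rest of the code in this thread. v0, v1, v2 stand for the triangle's vertices and r for a ray with origin()/direction() accessors as in the renderScene() code further down; the names are illustrative, not the original code:

[CODE]
//Plane through the triangle: N.P + D = 0, with D = -(N.v0)
glm::vec3 N = glm::normalize(glm::cross(v1 - v0, v2 - v0));
float D = -glm::dot(N, v0); //The minus sign that was missing originally

//Ray: P(t) = O + t*dir. Substituting into the plane equation:
//N.(O + t*dir) + D = 0  =>  t = -(N.O + D) / (N.dir)
float denom = glm::dot(N, r.direction());
if (fabs(denom) > 1e-6f) //Ray not (nearly) parallel to the plane
{
    float t = -(glm::dot(N, r.origin()) + D) / denom;
    if (t > 0.0f)
    {
        glm::vec3 P = r.origin() + t * r.direction();
        //...then test whether P lies inside the triangle (inside-outside test)
    }
}
[/CODE]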
  8. [quote]The FOV used in a game depends primarily on the genre. FPS need large FOVs to be playable, and distortion is expected. The first thing I found on Google describes how Valve does it. If you are making an RTS with an overhead camera (like Warcraft III), you can probably use a smaller FOV. Isometric projection can be seen as the limit of perspective projection where FOV is 0, and even that is acceptable. There is a Wikipedia page on FOV in video games, but it doesn't seem very informative.[/quote]

This sounds interesting and is nothing I had really thought about. I always thought that the only thing the field of view does is work as a scaling factor (a common example would be the perspective projection matrix used in OpenGL etc.). A smaller FOV, from what I understand, would mean that my objects get scaled bigger, which would be something similar to zooming in with a camera. How does this affect the depth perception of the scene?

[quote]Yes that works in practice. Kind of hard with a rasterizer as you need pretty well tessellated geometry, but for a ray tracer it should be pretty easy to set up.[/quote]

This is a proposal that I've seen in a few places which could solve the problem. I believe I read about this in another thread here on GameDev when searching for information about my problem, where the same idea was proposed, but according to people in that thread a curved surface would result in other artifacts in the scene instead. Still, being able to implement a curved surface just to see the difference between it and my current screen would be interesting, except I have no idea where to even start, let alone how I could implement one. I will have to search more about it; relevant links would be appreciated.

I've managed to minimize the effect of the distortion somewhat (well, it's really only minimized for specific settings). The biggest problem I failed to notice was that the camera position was way too close to the scene objects. This would cause the projectors/rays to be spread at a really wide angle for objects close to the edge, which would just increase the distortion the closer the camera got to them. Moving the camera and the view plane further away from the objects did lessen the distortion significantly. Of course the problem now was that I was "zooming" out and so my objects became smaller, but this is where I used the field of view to correct that by decreasing it. Not sure if this is a correct approach, but it did give me less distortion (see image 1).

I think it might as well be good to ask this in this topic instead of creating a new one. If you look at the image rendered by my ray tracer you can see that I've merely done diffuse shading. What is bothering me is how bright the spheres become in the area where they receive the most light. The changes are quite noticeable; basically it looks like I've just added one big white spot on each sphere, and this effect seems to get more visible the smaller my sphere is (which I suppose is expected). I realize much of this effect happens because I've added an ambient component to my calculations, but it seems somewhat severe. I was thinking that accounting for light attenuation might change this, but I'm not entirely sure about that, although I plan to implement it regardless. Interestingly enough, I decided to compare these rendered spheres with a sphere I rendered using OpenGL and GLSL before (see image 2), again with diffuse shading and an ambient component.
The result I received from using OpenGL is what I would expect, as the color there seems to smooth out much better. Of course I do understand that the process of rendering an image differs between the two, but the given results caught my attention. Either way it made me wonder how the same shading method could produce such different results? - Fixed, see below
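Since distance attenuation came up above, here is a minimal sketch of the usual constant/linear/quadratic falloff, reusing the names from the renderScene() code quoted further down (post 11). The kc/kl/kq coefficients are placeholders, not values from the original code:

[CODE]
//Hedged sketch: scale the diffuse term by a distance-based attenuation factor
float dist = glm::length(light1.position() - intersectionPoint);
float kc = 1.0f, kl = 0.045f, kq = 0.0075f;               //Placeholder coefficients, tune per scene
float attenuation = 1.0f / (kc + kl*dist + kq*dist*dist); //Falls off with distance to the light
diffuse = currentObject->diffuse() * LdotN * light1.diffuse() * attenuation;
[/CODE]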
  9. [quote name='Álvaro' timestamp='1355662096' post='5011244']
[quote name='Suen' timestamp='1355631522' post='5011168']
[quote name='Álvaro' timestamp='1355625333' post='5011137']
The effect is a mismatch between the FOV you used to generate the image and the conditions under which you are looking at the image. If you move your eye close enough to the screen, the projection will seem correct. The way I am sitting in front of my laptop right now, the horizontal span of the screen covers an angle of about 24 degrees from where I stand. If your FOV is much larger than that, I will see the distortion.
[/quote]
Thanks for the explanation but I don't quite to understand how to connect everything. Basically you mean to say that this distortion is pretty much dependent on my viewing condition?
[/quote]
Yes, that's correct.

[quote]Wouldn't that mean, if I understood it correct, that the distance between me and the PC screen and the amount of degrees it occupy of my FOV affecs whether I see the image distorted or not?[/quote]
You got it again.

[quote]And a stupid question for that, how would I account for it in my code?[/quote]
You can't. In some sense, it's not your code's problem.

[quote][quote name='Álvaro' timestamp='1355625333' post='5011137']
Exactly what FOV to use is for you to decide. If you are trying to create photo-realistic images, perhaps you should use FOVs that are typical for photography.
[/quote]
What are some commmon FOVs for photography?[/quote]
[url="http://en.wikipedia.org/wiki/Angle_of_view"]This Wikipedia page[/url] can give you some idea.

[quote]And assuming I would use such FOVs, wouldn't the image still have the possibility of appearing distorted to me on my PC screen due to being dependent on my viewing conditions?[/quote]
Yes, and that's a problem shared with photography or movies. But we are so used to looking at photographs and movies that I don't think anyone would complain.
[/quote]

So my conclusion is that if I want to look at the rendered image with minimal distortion on my PC(!) I will need to tweak my parameters in the code until I find some kind of sweet spot (for example, the settings in the second image under my viewing conditions, since the distortion is not that noticeable there)? It does kind of explain why I can't account for it in my code, since viewing conditions vary from one PC setup to another and accounting for all of them would be ridiculous. But how does this work in the game industry then? Whether you are playing on a game console or on your PC, does the developer follow a certain standard to determine the best FOV to use? By standard I mean that it might be (random example here) a standard to sit two meters away from the TV when playing a game, where a TV of size X will cover Y degrees of your FOV.
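For reference, the angle a screen actually subtends for the viewer, which is what the rendering FOV would ideally be matched against, is just 2·atan of half the screen width over the viewing distance. A tiny sketch with made-up numbers:

[CODE]
#include <cmath>
#include <cstdio>

int main()
{
    //Illustrative values: a 0.5 m wide screen viewed from 0.7 m away
    double screenWidth = 0.5, viewDistance = 0.7;
    double horizontalFov = 2.0 * std::atan((screenWidth * 0.5) / viewDistance) * 180.0 / 3.14159265358979;
    std::printf("Screen subtends roughly %.1f degrees horizontally\n", horizontalFov); //~39.3 degrees here
    return 0;
}
[/CODE]

Rendering with a much wider FOV than the screen actually subtends is exactly the mismatch described in the quotes above.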
  10. [quote name='Álvaro' timestamp='1355625333' post='5011137']
The effect is a mismatch between the FOV you used to generate the image and the conditions under which you are looking at the image. If you move your eye close enough to the screen, the projection will seem correct. The way I am sitting in front of my laptop right now, the horizontal span of the screen covers an angle of about 24 degrees from where I stand. If your FOV is much larger than that, I will see the distortion.
[/quote]
Thanks for the explanation, but I don't quite understand how to connect everything. Basically you mean to say that this distortion is pretty much dependent on my viewing conditions? Wouldn't that mean, if I understood it correctly, that the distance between me and the PC screen and the number of degrees it occupies of my FOV affect whether I see the image distorted or not? And a stupid question following from that: how would I account for it in my code?

[quote name='Álvaro' timestamp='1355625333' post='5011137']
Exactly what FOV to use is for you to decide. If you are trying to create photo-realistic images, perhaps you should use FOVs that are typical for photography.
[/quote]
What are some common FOVs for photography? And assuming I would use such FOVs, wouldn't the image still have the possibility of appearing distorted to me on my PC screen due to being dependent on my viewing conditions?
  11. I'm writing a ray tracer, and while I have my simple scene up and running I've encountered what seems to be a common problem. My scene suffers from perspective distortion, and this is of course most noticeable on spheres. Basically, when I create a sphere whose center is not on the z-axis of my camera, the sphere gets elongated in a radial manner. I've attached an image of my scene to show what I mean; the green and blue spheres show the distortion quite well. I do understand that this effect is bound to occur due to the nature of perspective projection. Having a rectangular projection plane will create this distortion, and after seeing a helpful image on the topic I at least have a basic grasp of why it occurs. I believe I even saw the same effect in a real photo when I was searching for more information about it. The thing is that I don't have the slightest clue how to lessen this distortion. This is the code which renders my scene:

[CODE]
const int imageWidth = 600;
const int imageHeight = 600;
float aspectR = imageWidth/(float)imageHeight;
float fieldOfView = 60.0f; // Field of view is 120 degrees, need half of it.

Image testImage("image1.ppm", imageWidth, imageHeight);

int main()
{
    /* building scene here for now... */

    std::cout << "Rendering scene..." << std::endl;
    renderScene(sceneObjects, 5);
    std::cout << "Scene rendered" << std::endl;

    return 0;
}

//Render the scene
void renderScene(Object* objects[], int nrOfObjects)
{
    //Create light and set light properties
    Light light1(glm::vec3(5.0f, 10.0f, 10.0f));
    light1.setAmbient(glm::vec3(0.2f));
    light1.setDiffuse(glm::vec3(1.0f));

    //Create a ray with an origin and direction. Origin will act as the
    //CoP (Center of Projection)
    Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

    //Will hold ray (x,y)-direction in world space
    float dirX, dirY;

    //Will hold the intensity of reflected ambient and diffuse light
    glm::vec3 ambient, diffuse;

    //Loop through each pixel...
    for(int y=0; y<imageHeight; y++)
    {
        for(int x=0; x<imageWidth; x++)
        {
            //Normalized pixel coordinates, remapped to range between [-1, 1].
            //Formula for dirY differs because we want to swap the y-axis so
            //that positive y-values of coordinates are on the upper half of
            //the image plane with respect to the center of the plane.
            dirX = (2.0f*(x+0.5f)/imageWidth)-1.0f;
            dirY = 1.0f-(2.0f*(y+0.5f)/imageHeight);

            //Account for aspect ratio and field of view
            dirX = dirX*aspectR*glm::tan(glm::radians(fieldOfView));
            dirY = dirY*glm::tan(glm::radians(fieldOfView));

            //Set the ray direction in world space. We can calculate the distance
            //from the camera to the image plane by using the FoV and half height
            //of the image plane, tan(fov/2) = a/d => d = a/tan(fov/2)
            //r.setDirection(glm::vec3(dirX, dirY, -1.0f)-r.origin());
            r.setDirection(glm::vec3(dirX, dirY, -1.0f/glm::tan(glm::radians(fieldOfView)))-r.origin());

            //Will hold object with closest intersection
            Object* currentObject = NULL;

            //Will hold solution of ray-object intersection
            float closestHit = std::numeric_limits<float>::infinity(), newHit = std::numeric_limits<float>::infinity();

            //For each object...
            for(int i=0; i<nrOfObjects; i++)
            {
                //If ray intersects object...
                if(objects[i]->intersection(r, newHit))
                {
                    //If intersection is closer than previous intersection
                    if(newHit<closestHit)
                    {
                        //Update closest intersection and corresponding object
                        closestHit = newHit;
                        currentObject = objects[i];
                    }
                }
            }

            //If an object has been intersected...
            if(currentObject != NULL)
            {
                //Get intersection point
                glm::vec3 intersectionPoint = r.origin()+closestHit*r.direction();

                //Get light direction and normal
                glm::vec3 lightDirection = glm::normalize(light1.position()-intersectionPoint);
                glm::vec3 normal = glm::normalize(currentObject->normal(intersectionPoint));

                //Factor affecting reflected diffuse light
                float LdotN = glm::clamp(glm::dot(lightDirection, normal), 0.0f, 1.0f);

                //Get diffuse and ambient color of object
                ambient = currentObject->diffuse()*light1.ambient();
                diffuse = currentObject->diffuse()*LdotN*light1.diffuse();

                //Final color value of pixel
                glm::vec3 RGB = ambient+diffuse;

                //Make sure color values are clamped between 0-255 to avoid artifacts
                RGB = glm::clamp(RGB*255.0f, 0.0f, 255.0f);

                //Set color value to pixel
                testImage.setPixel(x, y, RGB);
            }
            else
            {
                //No intersection, set black color to pixel.
                testImage.setPixel(x, y, glm::vec3(0.0f));
            }
        }
    }

    //Write out the image
    testImage.writeImage();
}
[/CODE]

I do want to mention that I have been playing around a bit with the parameters involved in my code. I tried putting the position of the ray/camera (CoP) further away from the image plane. As expected the scene zooms in, as the field of view effectively becomes smaller. This obviously did not solve anything, so I increased the field of view parameter (to neutralize the zoomed-in effect). Doing so removed the distortion (see second image), but this was just a temporary solution: when I moved the blue sphere further out from the center of the scene it was stretched again with the new settings (see third image). These are the settings used for the three images:

[CODE]
Image 1:
float fieldOfView = 60.0f
Ray r(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f));

Image 2:
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));

Image 3:
float fieldOfView = 85.0f
Ray r(glm::vec3(0.0f, 0.0f, 20.0f), glm::vec3(0.0f));
[/CODE]

Again, I am totally lost as to how to solve this problem. I don't even understand why the new settings I tried removed the distortion (temporarily). I realize some distortion will happen regardless, but this seems extreme; furthermore, the fact that I had to increase my FOV to such a high value gives me a bad feeling and makes me suspect my code might be at fault somewhere. Any help and explanation will be appreciated [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]
  12. [quote name='mrbastard' timestamp='1349987385' post='4989243'] The raw file doesn't contain any information about the dimensions of the image, so photoshop has to try to infer them from the number of pixels in the file. The ppm file [i]does [/i]contain the image's dimensions, so photoshop doesn't have to guess. Look more closely at the code you posted and you'll see for yourself ... [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img] [/quote] Yeah, basically this is what I suspected and said above, thanks. I guess the best thing to do is to switch to .ppm format instead as it seems easy to implement. Thanks again.
  13. Hello. I'm not sure if this is really the right place to ask this, but here goes. I'm trying to write an image to my hard drive. Basically I allocate space for an array big enough to hold my image, insert my image data into it and write it to the hard drive. Since I'm fairly new when it comes to writing code that writes an image to disk, I've decided to go with a very simple approach: I just write out a raw binary file which contains my image data and then use Adobe Photoshop CSS Extended (Ver 12.1) to open it and check that the result is correct. If I want to use a proper image format it requires that I provide information for the image header and more, which I want to avoid doing for the moment (although depending on my problem I might have to switch to another format). Here's the relevant code for writing out the image to my hard drive:

[CODE]
typedef unsigned char byte;

int imageWidth = 400;
int imageHeight = 400;
int imageSize = imageWidth*imageHeight;

std::fstream imageStream;
imageStream.open("image.raw", std::ios::out|std::ios::binary);

if(!imageStream.is_open())
{
    std::cout << "Could not open file" << std::endl;
    exit(1);
}

for(int i=0; i<imageSize; i++)
{
    imageStream.put((byte)imageData[i].r); //Red channel
    imageStream.put((byte)imageData[i].g); //Green channel
    imageStream.put((byte)imageData[i].b); //Blue channel
}

imageStream.close();
[/CODE]

Now this works perfectly fine IF my image width and height are the same value. In other words, I need to have the same number of pixels horizontally as vertically. So if I have a 400x400 image as above, I get the correct image. Photoshop will ask me, before opening the file, to describe the raw file options (dimensions of the image, number of color channels etc.). Normally the default options provided by the application are already correct if my dimensions are equal.

Here's the thing though: if I change my dimensions to be different from each other, say 600x150, then some problems arise. The image still gets written to the hard drive, but when I open it the default raw options in Photoshop suggest that the image is 300x300. If I accept this, my image ends up heavily distorted. If I instead specify the original dimensions (600x150) the image shows the correct result, but only because 300*300 = 600*150 = 90000 (the same total number of pixels), so it's a special case. If I go with something more arbitrary like 250x532, then the default Photoshop raw options consider my image to be 665x600. Changing this back to 250x532 in the Photoshop raw options only produces a result full of rubbish, as 250*532 != 665*600 (the application will warn me that I'm decreasing the size of the image, which it considers to be 665x600).

Now if I change my code to this instead (writing out a .ppm file instead of a .raw file):

[CODE]
imageStream.open("image.ppm", std::ios::out|std::ios::binary);

if(!imageStream.is_open())
{
    std::cout << "Could not open file" << std::endl;
    exit(1);
}

imageStream << "P6\n" << imageWidth << " " << imageHeight << "\n255\n";

for(int i=0; i<imageSize; i++)
{
    imageStream.put((byte)imageData[i].r); //Red channel
    imageStream.put((byte)imageData[i].g); //Green channel
    imageStream.put((byte)imageData[i].b); //Blue channel
}

imageStream.close();
[/CODE]

then everything works regardless of my dimensions. Photoshop won't ask anything; it just opens the file with the correct dimensions and the correct image result.
Now I suspect that the problem here is not the code itself but how Photoshop internally handles .raw files (although why it changes the dimensions of an image with unequal dimensions when I try to open it is beyond me), especially if you compare it with the .ppm format, where information about the image's dimensions is written out before the data, whereas the .raw file provides no information at all. In the case that this is related to Photoshop then this topic probably does not even belong in this forum [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img] But I just want to make sure that it's not my code which is at fault here. If it is my code, then what exactly is wrong and how would I change it? Best regards
  14. Apologies for the late reply. But I think I got help with the stuff I was wondering about, not only do I have a better understanding of cache locality but I also got some great suggestions on how to solve some problems that might be common, thanks lots guys. Very much appreciated
  15. [quote]On most implementations, decreasing the size of a vector won't cause it to reallocate - it will just reduce it's internal size counter to keep track of what's left, and then when you re-add an element, it won't need to reallocate either. However, this behaviour isn't specified by the standard, so using a std::vector may reallocate when used like this. You could use a alternate implementation, like rde::vector so you know what it's going to do, or you could just keep your current system and change to Foo* fooVector = new Foo*[size][/quote]

That's true. I think I for some reason mixed up the re-allocation with the moving of data inside the vector [img]http://public.gamedev.net//public/style_emoticons/default/biggrin.png[/img]. What I really meant to say is that if I remove an element in the middle of my vector I create a gap, which results in all elements following the removed one being moved one step down, which in turn results in the assignment operator (or the copy ctor, I forget which) of the element being called x number of times depending on the number of objects. That's one of the reasons why I went with a swapping method.

[quote]The solution of keeping track of 'active' objects via a list of pointers is a good solution, but for the sake of alternatives: If you've got 32 players, you can use a 32-bit integer as a collection of booleans to tell which ones are active. To then iterate through only the active players, you can use a loop like:
[CODE]
for( u32 mask = activePlayers; mask; mask &= mask-1 )//while bits set, do loop body, then clear least significant bit
{
    int offset = LeastSignificantBit(mask);//implemented via _BitScanForward on MSVC
    players[offset].Update();
}
[/CODE]
Larger collections can use an array of 32-bit masks (or 64-bit masks would be optimal for x64 builds).[/quote]

I'm probably being really stupid here, but how exactly does this code work? The way I understand it, I have my 32 objects, and when all of them are active all bits are supposed to be set. A quick look at _BitScanForward says that it searches for the lowest bit that is set, so on each loop iteration I would find this index and use it to update my player. Do I understand it correctly? Also, I understand you said this is just an alternative, but I was wondering how it would differ in general from having something like this:

[code]
for(int i=0; i<32; i++)
{
    if(object[i].use()) //If object[i] is to be processed...
    {
        object[i].update(); //do stuff with object[i]...
    }
}
[/code]

[quote]You can use this function to test each of your development PCs to see what their cache-line size is. A 32KiB cache split into 128B lines is probably a good guess for a lot of PCs. These days the numbers are probably larger, not smaller. [edit]Actually I just tested my 4core Q6600 from 2007, and each core has two 32KiB L1 caches (instruction/data) with 64B lines, and each pair of cores has a 4MiB L2 cache also with 64B lines.[/edit] N.B. if your 128B vector isn't aligned to a 128B memory address (i.e. if pointer&0x7F != pointer), then it will be split over two cache lines. so iterating through your vector should cause at most 2 cache misses.[/quote]

Very helpful code, thank you [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img] The reason I'm assuming a smaller cache (and smaller cache lines) is that my target hardware is probably worse than your 4core Q6600. I don't have any specifications at hand, but let's say that the hardware I want to test on only has one core.
Perhaps it is still wrong to assume a 32 byte cache line, but that is outside my knowledge. But yes, if the line is 128 bytes the vector would be split over two cache lines, and over even more if the line is 64 or 32 bytes, but I suppose that's still quite good. It also makes me wonder: if I know beforehand how many objects I will have, wouldn't I be better off creating an array as a member variable of my Player class (call it Player for simplicity) that holds the variables that are often updated? To clarify, say I have this:

[code]
struct Player
{
    Vector2 position; //Updated often
    //some other variables...
};
[/code]

and that I create 32 instances of this struct, but most of the time I will only be updating the position variable. Wouldn't I be better off doing something like this, to fit more data that will actually be used into a cache line:

[code]
struct Player
{
    Vector2* array = new Vector2[size]; //Updated often and will hold positions for 32 objects...
    //create arrays for the other variables that are not used as much...
};
[/code]

[quote]A 128B cache-line will 'download' 128B of RAM. Whatever is in that RAM is downloaded. If your object only takes up half the line, then whatever is in the other half-a-line of RAM will now be in the cache.[/quote]

Thanks! I still do have to ask though... if, for example, three foo objects can fit in one cache line, then when reading foo[1] could foo[0] be put in that cache line as well? Or would I only be reading whatever comes after foo[1] (foo[2], foo[3]...)?

[quote]If you know you're going to iterate through this whole array, you could be greedy and just prefetch the whole 5/6 lines in advance. e.g.
for( const char* data = (char*)players, *end = data+sizeof(Player)*size; data<end; data += cacheLineSize )
    Prefetch( data );
...but, adding prefetch commands is really something that you want to do after and during profiling, and you also want to test it carefully to make sure you've not done more harm than good. In your case, you could just sort your vector of player pointers before updating, and now you'll be iterating through your player array in linear order (which hopefully convinces the CPU to prefetch for you automatically).[/quote]

I do agree that I should do profiling first before getting into prefetching. I suppose the same thing could be said about sorting: sorting would decrease possible cache misses, but then it's a question of how much of a penalty I pay for sorting so many times. That might be a bit hard to know without profiling as well. I really appreciate all the help (and explanations) given here; going by what we have so far, the next step seems to be profiling and deciding from that how to change my code based on what's been discussed here. While I'm at it, what's a good profiler to use? I'm using Windows XP and an AMD Athlon X2 Dual Core 4600+.
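As a small aside on the quoted mask loop: mask &= mask-1 clears the lowest set bit, so the loop only ever visits the set bits (the active players). A tiny self-contained trace, assuming MSVC's _BitScanForward from <intrin.h> as mentioned in the quote:

[code]
#include <intrin.h>
#include <cstdio>

typedef unsigned int u32;

int main()
{
    u32 activePlayers = 0x0000000Au; //Binary 1010: players 1 and 3 are active

    for(u32 mask = activePlayers; mask; mask &= mask-1) //Loop while any bit is set, clearing the lowest set bit each time
    {
        unsigned long offset;
        _BitScanForward(&offset, mask);              //Index of the lowest set bit
        std::printf("update player %lu\n", offset);  //Prints 1, then 3
    }
    return 0;
}
[/code]

Compared with the plain loop that tests all 32 objects, the only real difference is that the per-object branch disappears; with just 32 players either version is cheap, so the mask mainly pays off when the collection is large and sparsely active.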