Makers_F

  1. The complexity is based on the total number of nodes you look at, not on the depth (the number of moves). With depth = 0 (no moves), you look at 1 node (the initial state). If 2 moves are possible, at depth 1 you'll look at 2 nodes, at depth 2 at 4, and so on: at depth 30 you will look at 2^30 nodes. Do you know how many nodes you look at in total if you search with max depth 0, then max depth 1, ..., then max depth 29? Let's see: 2^0 + 2^1 + ... + 2^29 = 2^30 - 1. So, if you search FROM depth 0 TO depth 29, you are actually analysing fewer nodes than just looking at depth 30 (exactly 1 node fewer). If you have 4 possible moves, it gets way worse. With your method, you are actually looking at depth 30, then depth 29, and down that way until you no longer find a solution, which might be at something like depth 10. Calling that complexity terrible would be a great compliment: your method would probably fail for grids only a little bigger than the current one. With the one I told you about, you would get a much faster solution if one exists in fewer than 30 moves, and if there isn't one, it would take only slightly longer than your current approach. So, you have two options: you either implement uniform cost, or you implement iterative deepening as I described it (a minimal sketch follows below). State space exploration is a heavily studied class of problems with precise, optimal approaches; I highly suggest you follow them, unless you want your algorithm to be unable to tackle bigger problems (or just be slow with the current ones ;) ) @apatriarca: since he talks about a LIFO, I think he implemented a depth-first search, so it's not guaranteed to be optimal. On this aspect he was right ;)
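    A minimal Java sketch of the iterative deepening scheme described above; the State interface, successors() and isGoal() are hypothetical stand-ins for the original poster's grid representation, not code from the thread:
[CODE]
import java.util.List;

public class IterativeDeepening {

    // Depth-limited DFS: true if a goal is reachable within maxDepth moves.
    static boolean depthLimited(State s, int maxDepth) {
        if (s.isGoal()) return true;
        if (maxDepth == 0) return false;
        for (State next : s.successors()) {
            if (depthLimited(next, maxDepth - 1)) return true;
        }
        return false;
    }

    // Raise the depth limit until a solution is found (or a cap is hit).
    // The first depth at which this succeeds is the optimal move count.
    static int iterativeDeepening(State start, int cap) {
        for (int depth = 0; depth <= cap; depth++) {
            if (depthLimited(start, depth)) return depth;
        }
        return -1; // no solution within cap moves
    }
}

// Hypothetical state abstraction for the grid puzzle discussed above.
interface State {
    boolean isGoal();
    List<State> successors();
}
[/CODE]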
  2. Can you tell us a bit about your background? Anyway, there are a couple of theoretical problems in your post. 1) The correct search strategy you are looking for is Uniform Cost. It is obtained by using a priority queue sorted on the number of moves needed to reach the state, instead of your LIFO (which behaves like a depth-first search); a sketch is below. 1.1) Right now you are using a strange combination with the Iterative Deepening strategy, which is pretty bad complexity-wise. If you don't want to change to uniform cost, you should probably implement a correct iterative deepening: start with a maximum of 1 move, then increase it every time the search fails to find a solution. When it stops, you have found the optimal solution. I still have to read your post thoroughly, but I noticed these on a first read.
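    A minimal sketch of uniform cost search in this setting, reusing the same hypothetical State interface as in the sketch above (and assuming State implements equals/hashCode); the frontier is a priority queue ordered by the number of moves spent to reach each state:
[CODE]
import java.util.Comparator;
import java.util.HashSet;
import java.util.PriorityQueue;
import java.util.Set;

public class UniformCost {

    // A state together with the number of moves spent to reach it.
    record Node(State state, int moves) {}

    // Returns the minimum number of moves to a goal, or -1 if unreachable.
    static int search(State start) {
        PriorityQueue<Node> frontier =
                new PriorityQueue<>(Comparator.comparingInt(Node::moves));
        Set<State> expanded = new HashSet<>();
        frontier.add(new Node(start, 0));
        while (!frontier.isEmpty()) {
            Node n = frontier.poll();
            if (n.state().isGoal()) return n.moves();
            if (!expanded.add(n.state())) continue; // already expanded
            for (State next : n.state().successors()) {
                frontier.add(new Node(next, n.moves() + 1));
            }
        }
        return -1;
    }
}
[/CODE]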
  3. "Pixel-Perfect" Collisions...

    https://github.com/MakersF/CollisionCore This library implements pixel-perfect collision detection for sprites, supporting every affine transformation (translation, rotation, skew, scale). You need to implement IBitmap as an adapter for your image, and use it to generate the pixel-perfect mask. Once you have the pixel-perfect mask, you can check whether two sprites collide by passing their masks and the transformations of the sprites. If you use Java, the Java bindings already provide an IBitmap implementation adapting BufferedImage (a purely illustrative sketch of the adapter idea is below).
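    Purely as an illustration of the adapter idea, an IBitmap-style adapter over BufferedImage might look like the following; the method names here are hypothetical, not CollisionCore's actual API, which is defined in the repository:
[CODE]
import java.awt.image.BufferedImage;

// Hypothetical adapter illustrating the idea; the real IBitmap
// interface is defined in the CollisionCore repository.
public class BufferedImageBitmap {
    private final BufferedImage image;

    public BufferedImageBitmap(BufferedImage image) {
        this.image = image;
    }

    public int getWidth()  { return image.getWidth(); }
    public int getHeight() { return image.getHeight(); }

    // A pixel counts as solid when its alpha channel is non-zero.
    public boolean isSolid(int x, int y) {
        return (image.getRGB(x, y) >>> 24) != 0;
    }
}
[/CODE]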
  4. Grounded Pointers

    After reading this article I installed static code analysis plugins for my personal projects. I had already used Sonar for my thesis project, but hadn't bothered to set up any static code analysis tooling for my personal projects. Even though I (surprisingly) had just 2 minor warnings, the situation in programming is definitely getting better.
  5. Hi everybody. I am facing a problem implementing an "algorithm". I have a list of priority queues (each letter can appear only once per queue and 3 times in total; the priority can be 1, 2 or 3, nothing else), like:

    First  | 1:{X,V,Y} 2:{U}     3:{W}
    Second | 1:{Z,U}   2:{V}     3:{}
    Third  | 1:{}      2:{X}     3:{V}
    Fourth | 1:{}      2:{Y,W,Z} 3:{X}
    Fifth  | 1:{W}     2:{}      3:{U,Y}

    Each queue represents a preference toward a position (for example, X has First as its first preference, Third as its second preference and Fourth as its third preference). But there are some constraints:
    • Each letter can be assigned to only one position.
    • Each position can be assigned no more than N letters (N is an argument of the function). If fewer than N letters have a preference for a position, all of those letters get placed in that position or in a more preferred one.
    • If, in the queue for a position, a letter A has a higher priority than a letter B, and B is assigned to that position, then A must be assigned to that position too, or to a position where A has a higher preference (example: if X is assigned to Fourth, then Y, W, Z are assigned to Fourth too, OR Y, W, Z are placed in positions where they have a preference higher than 2).

    Two possible solutions are (with 5 positions, the preferences listed above, and 2 places for each position):

    First: [X(1), U(2)]  Second: [Z(1), V(2)]  Third: []  Fourth: [Y(2)]  Fifth: [W(1)]

    or

    First: [X(1), V(1)]  Second: [Z(1), U(1)]  Third: []  Fourth: [Y(2)]  Fifth: [W(1)]

    Between the parentheses is the preference that was used to pick that letter. A wrong one is:

    First: [X(1), U(2)]  Second: [Z(1), V(2)]  Third: []  Fourth: [Y(2), W(2)]  Fifth: []

    because W would like to be placed Fifth, and even though there is space there, it has been placed in a less preferable position. Note: it is possible that a letter cannot be assigned (for example, if X, Y and Z have First as their only preference, only 2 of them can be assigned there). I am not sure how to write this algorithm.. do you have some hints? When I try doing it by hand I end up making lots of comparisons, and I fear cyclic dependencies if I implement it like that..
  6. Well, the answer to your title's question is simple: no. If you have no guarantees about how the data is collected and what phenomenon you are observing, there is absolutely no reason why past data should describe a future event or a model. Since you do not have a model, you cannot use any method to infer future events. You may notice a pattern in which, whenever a bird sings, a dolphin in the ocean is jumping; but since these two events are not correlated, if you use that pattern you'll get a wrong (or rather, unreliable) prediction. If you know the domain of the problem, instead, you can start to build some rules (there are already tools to do this: search for "data mining", some programs do exactly what you want, they find patterns in data). TL;DR: without knowledge of the domain you are observing, you can find patterns, but they do not mean anything and you cannot safely use them to infer future events. (Note: it may be that the data you are looking at actually belongs to a specific domain with some patterns, even if you don't know it. In that case the patterns could be used to predict future events, but since you do not know that the data belongs to that domain, you should not use them.) @Alvaro: I'm not really informed about data compression, but I think the goal is to find the most common patterns and "replace" them with shorter identifiers, at least in the old days. It is not about predicting data that is not present. Am I right?
  7. OpenGL 4.4 spec is published

    I think the biggest improvement is the conformance test. From what I hear from those who prefer DX over GL, the drivers sometimes behave differently for different cards (with OpenGL). With this change, all the drivers will (should?) have the same behaviour, making it easier to develop OpenGL programs.
  8. It doesn't matter. It depends on the coordinate system. It can even be the centre. Try to understand the concepts, not the single example! Replace "left bottom" with "top-left", "center", "a point a hundred times the dimension of the sprite away from the sprite" and it will still be correct! The transformation maps any point from the local coordinate system to the world coordinate system.
  9. The position is taken into account by the transformation. The transformation matrix contains the translation, rotation, skew and scale. This means that if you multiply {0,0} (the bottom-left corner of your sprite) by the matrix of this sprite, it gives you the world-space coordinates of the point that touches the bottom-left corner of your sprite. This is valid for every point of the sprite (a small sketch follows below).
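    A small Java sketch of the idea, using java.awt.geom.AffineTransform as a stand-in for whatever matrix class the engine provides; the transform values are made up for illustration:
[CODE]
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class LocalToWorld {
    public static void main(String[] args) {
        // Build the sprite's transformation: translate, then rotate, then scale.
        AffineTransform spriteTransform = new AffineTransform();
        spriteTransform.translate(100.0, 50.0);        // sprite position in the world
        spriteTransform.rotate(Math.toRadians(30.0));  // sprite rotation
        spriteTransform.scale(2.0, 2.0);               // sprite scale

        // {0,0} in local space (here: the sprite's origin corner)...
        Point2D local = new Point2D.Double(0.0, 0.0);
        // ...maps to the corresponding point in world space.
        Point2D world = spriteTransform.transform(local, null);
        System.out.println(world); // the corner's world coordinates

        // Any other local point works the same way, e.g. a point inside the sprite.
        Point2D insideWorld = spriteTransform.transform(
                new Point2D.Double(16.0, 16.0), null);
        System.out.println(insideWorld);
    }
}
[/CODE]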
  10. How To Publish on GameDev.net

    I read the article and searched the forum, but I didn't find any information about how to peer review. I'm interested in it; should I sign my name somewhere to get approval?
  11. I searched for a solution to the same problem a few months ago. I found a fantastic piece of work from Microsoft. It is the fastest pixel-perfect collision idea I have found, and it supports any affine transformation (but you need a transformation matrix): http://xbox.create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel_transformed The standard idea for supporting rotations is to work in the local space of sprite A and bring sprite B into A's local space. Then Microsoft suggests transforming the X and Y versors of A's local space into B's, and simply adding the transformed versors to the currently checked coordinate in B's local space, so that each step costs 2 additions instead of a matrix multiplication (a sketch of this stepping idea follows below). The archive in the link contains a better explanation, but I hope you can get a grasp of what I meant. Here's my implementation; it works really well even on mobile hardware. Obviously you can't abuse pixel-perfect collision: https://github.com/MakersF/AndEngineCollisionsExtension/blob/master/src/com/makersf/andengine/extension/collisions/pixelperfect/PixelPerfectCollisionChecker.java As waterlimon stated, use pixel-perfect collision only if it is the only thing that makes sense (you have sprites that should have holes and you must check against points that can potentially be smaller than those holes, or something extremely irregular).
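    A minimal Java sketch of that per-pixel loop, assuming a hypothetical Mask type with an isSolid(x, y) query and an AffineTransform that maps A's local pixel coordinates into B's; the incremental stepping with transformed unit vectors is the trick from the tutorial:
[CODE]
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class PerPixelCollision {

    // Hypothetical bit mask: isSolid(x, y) is true for opaque pixels.
    interface Mask {
        int width();
        int height();
        boolean isSolid(int x, int y);
    }

    // aToB maps A's local pixel coordinates into B's local pixel coordinates.
    static boolean collide(Mask a, Mask b, AffineTransform aToB) {
        // Transform the unit steps along A's x and y axes into B's space.
        // deltaTransform ignores translation, which is what we want for directions.
        Point2D stepX = aToB.deltaTransform(new Point2D.Double(1, 0), null);
        Point2D stepY = aToB.deltaTransform(new Point2D.Double(0, 1), null);
        // Where A's pixel (0,0) lands in B's space.
        Point2D rowStart = aToB.transform(new Point2D.Double(0, 0), null);

        for (int ay = 0; ay < a.height(); ay++) {
            double bx = rowStart.getX(), by = rowStart.getY();
            for (int ax = 0; ax < a.width(); ax++) {
                int ix = (int) Math.round(bx), iy = (int) Math.round(by);
                if (ix >= 0 && ix < b.width() && iy >= 0 && iy < b.height()
                        && a.isSolid(ax, ay) && b.isSolid(ix, iy)) {
                    return true; // two opaque pixels overlap
                }
                bx += stepX.getX(); by += stepX.getY(); // 2 additions per pixel
            }
            rowStart.setLocation(rowStart.getX() + stepY.getX(),
                                 rowStart.getY() + stepY.getY());
        }
        return false;
    }
}
[/CODE]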
  12. Hello, I'm working on a horizontal side-scroller game where the player shouldn't touch the walls while automatically proceeding. [img]http://www.rockbox.org/wiki/pub/Main/PluginChopper/dump2.PNG[/img] (not actual content, this is from an old game) The player must also catch some objects while proceeding through the maze. I need to pass 8 key vertex coordinates to the vertex shader, and it will take care of interpolating the position of all the remaining vertices. In pseudo-code the generator should be:
[CODE]
update(elapsed_time) {
    space = velocity * elapsed_time;
    for (i = 0; i < 8; i++) {
        verts[i] -= space;
    }
    if (verts[0] < 0) { // out of screen
        shiftLeft(verts);
        verts[7] = generate_new_vert();
    }
    // passed to the fragment shader to correctly repeat the tiled texture
    total_space += space;
    // the actual code will be a bit different, since the X coordinates of the
    // vertices in the array must match those of the vertices on the GPU so the
    // shader can use the correct value for the correct vertex, but it's just a
    // matter of a few subtractions and checks
}
[/CODE]
    What is not clear to me is how I should write the (random) generation of the maze. What I mean is:
    [list]
    [*]should I generate the path the player should follow, and create the maze according to it, so that the player is almost forced to follow it?
    [*]or should I just generate the maze from some random parameters (min-max distance between the maze walls and the obstacles, max player direction-change velocity, ...)?
    [/list]
    Which will be more controllable? Which would be the best choice if I want to:
    [list]
    [*]not force a single route
    [*]guarantee that, whatever route s/he takes, s/he will always be able to proceed (obviously the "correct" way will be easier than the "wrong" way, but the "wrong" one will give more reward)
    [/list]
    Can you maybe point me at an analysis of this type of game and what it takes to make it fun? (I initially started this project as an occasion to learn OpenGL and shaders, so I didn't really think too much about the design of the game, but now that I'm working on it I would like to make at least an interesting game to show to some friends, and maybe a basis for a bigger game [img]http://public.gamedev.net//public/style_emoticons/default/rolleyes.gif[/img] )
  13. I have a mesh composed of numerous vertices (600 ± 300). The vertices are simply laid out on 2 parallel lines, so that every vertex on a line has the same y component as the others (they practically run along the sides of a rectangle orthogonal to the screen). I'm working in a 2D world. I'm trying to position some vertices at specific points I pass to the shader, and make the other vertices interpolate their position from them. I generate this matrix in my code
[CODE]
u_specialCoordinatesLocation = {
    x1, x2, x3, x4,
    y1, y2, y3, y4,
    x5, x6, x7, x8,
    y5, y6, y7, y8 };
[/CODE]
    and pass it to the shader as a uniform (transposing it). This is the code of the shader; below I explain it.
[CODE]
/* upperVertexShader */
/* externally defined:
 * QUALITY_CODE
 * SCREEN_WIDTH
 * SCREEN_HEIGHT
 */
uniform mat4 u_modelViewProjectionMatrix;
uniform vec2 u_playerPosition;
uniform float u_elapsedSeconds;
/* mat4 = {
 *   {x1, x2, x3, x4},
 *   {y1, y2, y3, y4},
 *   {x5, x6, x7, x8},
 *   {y5, y6, y7, y8}} */
uniform mat4 u_specialCoordinatesPosition;

attribute vec4 a_position;
attribute vec2 a_textureCoordinates;

varying vec2 v_textureCoordinates;
varying vec2 v_playerPosition;
varying vec2 v_pointPosition;

const float ERROR_LIMIT = 0.01;
const float MIN_VALUE = -2e6;
const float MAX_VALUE = 2e6;

float linearInterp(float n0, float n1, float a) {
    return ((1.0 - a) * n0) + (a * n1);
}

/* quintic s-curve: 6a^5 - 15a^4 + 10a^3 */
float scurve5(float a) {
    float a3 = a * a * a;
    float a4 = a3 * a;
    float a5 = a4 * a;
    return (6.0 * a5) - (15.0 * a4) + (10.0 * a3);
}

float min4fv(vec4 vector) {
    return min(min(vector.x, vector.y), min(vector.z, vector.w));
}

float max4fv(vec4 vector) {
    return max(max(vector.x, vector.y), max(vector.z, vector.w));
}

/* only returns the value to offset the vertex by if the vertex is one of
 * the ones in the uniform, 0.0 otherwise */
float offsetValue(float Xposition) {
    float offSet = 0.0;
    vec4 tmp_vector4;
    vec4 positionX = vec4(Xposition);
    // first row: 1.0 if positionX is almost equal to the one in the uniform
    tmp_vector4 = 1.0 - step(ERROR_LIMIT, abs(u_specialCoordinatesPosition[0] - positionX));
    tmp_vector4 = tmp_vector4 * u_specialCoordinatesPosition[1];
    offSet += tmp_vector4.x + tmp_vector4.y + tmp_vector4.z + tmp_vector4.w;
    // second row
    tmp_vector4 = 1.0 - step(ERROR_LIMIT, abs(u_specialCoordinatesPosition[2] - positionX));
    tmp_vector4 = tmp_vector4 * u_specialCoordinatesPosition[3];
    offSet += tmp_vector4.x + tmp_vector4.y + tmp_vector4.z + tmp_vector4.w;
    return offSet;
}

vec2 interpolationVertices(float vertexX) {
    /* .x = left distance, .y = right distance */
    vec2 distances;
    /* 0 if at the left of the vertex, else 1 */
    vec4 left_filter;
    /* 1 if at the left of the vertex, else 0 */
    vec4 right_filter;
    vec4 positionX = vec4(vertexX);
    /* first row */
    left_filter = step(positionX + ERROR_LIMIT, u_specialCoordinatesPosition[0]);
    // grants that the vertices at the right of the point don't count,
    // by adding a very negative number only to them
    distances.x = max4fv(u_specialCoordinatesPosition[0] - positionX + left_filter * MIN_VALUE);
    right_filter = 1.0 - step(positionX - ERROR_LIMIT, u_specialCoordinatesPosition[0]);
    // grants that the vertices at the left of the point don't count,
    // by adding a very positive number only to them
    distances.y = min4fv(u_specialCoordinatesPosition[0] - positionX + right_filter * MAX_VALUE);
    /* second row */
    left_filter = step(positionX + ERROR_LIMIT, u_specialCoordinatesPosition[2]);
    distances.x = max(distances.x, max4fv(u_specialCoordinatesPosition[2] - positionX + left_filter * MIN_VALUE));
    right_filter = 1.0 - step(positionX - ERROR_LIMIT, u_specialCoordinatesPosition[2]);
    distances.y = min(distances.y, min4fv(u_specialCoordinatesPosition[2] - positionX + right_filter * MAX_VALUE));
    return distances;
}

float interpolatedValue(float vertexX) {
    vec2 interpVertices = interpolationVertices(vertexX);
    vec2 interpValue = vec2(offsetValue(vertexX + interpVertices.x),
                            offsetValue(vertexX + interpVertices.y));
    float int_value = scurve5(abs((vertexX - interpVertices.x) / (interpVertices.y - interpVertices.x)));
    return linearInterp(interpValue.x, interpValue.y, int_value);
}

void main(void) {
    v_textureCoordinates = a_textureCoordinates;
    v_playerPosition = u_playerPosition;
    v_pointPosition = a_position.xy;
    v_pointPosition.y += interpolatedValue(a_position.x) * (a_position.y) / (a_position.y + 0.0001);
    vec4 temp_position = u_modelViewProjectionMatrix * vec4(v_pointPosition, a_position.z, a_position.w);
    gl_Position = temp_position;
}
[/CODE]
    The y positions of the vertices in the VBO have only 2 different values: 0, or another fixed value (SCREEN_HEIGHT / 3). The vertices that have position.y = 0 shouldn't be moved, so
[CODE]
v_pointPosition.y = interpolatedValue(a_position.x) * (a_position.y) / (a_position.y + 0.0001);
[/CODE]
    equals 0 if a_position.y is 0, and otherwise is almost equal to
[CODE]
v_pointPosition.y = interpolatedValue(a_position.x);
[/CODE]
    The function [CODE]float offsetValue(float Xposition)[/CODE] should return the Ya value from u_specialCoordinatesPosition when passed Xa, and 0 otherwise. The 1.0 - step trick is used to account only for the correct value each time (since only if the distance of the passed vertex from Xa is less than ERROR_LIMIT will the component be set to 1.0 and then be multiplied by the Ya value). The function [CODE]vec2 interpolationVertices(float vertexX)[/CODE] should find the Xa and Xb values of the 2 vertices inside the u_specialCoordinatesPosition matrix that are nearest to the passed x coordinate. This is achieved by taking the max of the distances from the left (which should be negative) without taking into account the values bigger than 0. In fact
[CODE]
left_filter = step(positionX + ERROR_LIMIT, u_specialCoordinatesPosition[0]);
[/CODE]
    sets each component to 0 for each vertex at the left of the passed X position, and to 1.0 otherwise, while
[CODE]
u_specialCoordinatesPosition[0] - positionX + left_filter * MIN_VALUE
[/CODE]
    subtracts a big value from the components whose filter is set to 1, making it impossible for vertices at the right of position X to contribute once this value is passed to max4fv(). The same is done for the right distance, and then the 2 signed distances to those vertices are returned. Finally, [CODE]float interpolatedValue(float vertexX)[/CODE] finds the 2 vertices "bounding" the passed vertex, retrieves their values, interpolates between them, and returns the new position. I think everything should work, but when I test it on my phone the mesh doesn't show up. What am I doing wrong? Is there a better way of doing this? I must admit that my code isn't really straightforward, and I wish there were a more elegant way. What do you suggest? (Rebuilding the VBO on the CPU and streaming it is not a viable option, since I would need to do it every frame for 2 meshes of this kind.) Thanks for the help and the patience.
  14. I have the 4 vertices of the rectangle representing my sprite. Since I want to enable pixel-perfect collision I need to make the fewest per-pixel checks possible, hence the choice of finding the exact bounding box of the intersecting area. In addition to the bit mask of the sprite, I have subdivided the rectangle into regions of 32x32 pixels, and I store whether each region is empty, full, or half-filled (so that if a half-filled region of one sprite maps to an empty region I can skip 1024 checks and say they don't collide, and if it maps to a full region I can again skip 1024 checks and say they collide); see the sketch below. So, having the bounding box axis-aligned to the local axes of the rectangles makes the code checking the square sub-regions far easier and more readable. I'll look into point vs convex polygon, but I'm curious about the Minkowski sum. Can you explain it a bit further?
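    A minimal sketch of the region classification described above, assuming a simple hypothetical boolean mask; each 32x32 block is precomputed as EMPTY, FULL or PARTIAL so that whole blocks can be accepted or rejected without per-pixel work:
[CODE]
public class RegionGrid {
    public enum Region { EMPTY, PARTIAL, FULL }

    static final int BLOCK = 32;

    // mask[y][x] is true for solid pixels; dimensions are assumed to be
    // multiples of 32 here just to keep the sketch short.
    static Region[][] classify(boolean[][] mask) {
        int rows = mask.length / BLOCK;
        int cols = mask[0].length / BLOCK;
        Region[][] grid = new Region[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int solid = 0;
                for (int y = 0; y < BLOCK; y++) {
                    for (int x = 0; x < BLOCK; x++) {
                        if (mask[r * BLOCK + y][c * BLOCK + x]) solid++;
                    }
                }
                if (solid == 0)                  grid[r][c] = Region.EMPTY;
                else if (solid == BLOCK * BLOCK) grid[r][c] = Region.FULL;
                else                             grid[r][c] = Region.PARTIAL;
            }
        }
        return grid;
    }

    // Anything paired with EMPTY can never collide; FULL vs FULL always
    // collides: both cases skip the 1024 per-pixel checks for that block.
}
[/CODE]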
  15. Hello, I'm implementing pixel-perfect collision, and in order to make the check faster I want to cull the unused parts of the 2 colliding sprites. I need to find the red area in the pictures (the rectangles can be scaled, rotated, skewed), ideally the one from the last 2 pictures (where the bounding box is parallel to one of the 2 rectangles), but I'm a little confused. I have the vertices in world space, but I can't figure out a way to correctly test the right intersections (for each way I think it could be done, I can think of a corner case in which it is wrong). Is there some algorithm to perform this check? I wasn't able to find one, despite the fact that it should be a well-studied, well-known subject (at least, I thought 2D collision was a well-known and deeply studied subject..); a sketch of one possible approach is below. Thanks [attachment=10268:intersection.png] bounding box parallel to the world space axes. [attachment=10269:intersection2.png] [attachment=10270:intersection3.png] bounding box parallel to the local X axis of one of the 2 rectangles. PS: I have the transformation matrices of both rectangles.
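    Since both transformation matrices are available, one possible approach (a sketch under that assumption, not necessarily the solution eventually adopted) is to bring B's corners into A's local space, take their axis-aligned bounding box there, and clip it against A's own rectangle; the result is a box parallel to A's local axes, as in the last two pictures:
[CODE]
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;
import java.awt.geom.Point2D;
import java.awt.geom.Rectangle2D;

public class IntersectionBox {

    // Returns the overlap region in A's local space, or null if the boxes
    // are disjoint there. aTransform/bTransform map local to world space.
    static Rectangle2D overlapInALocal(AffineTransform aTransform, Rectangle2D aRect,
                                       AffineTransform bTransform, Rectangle2D bRect)
            throws NoninvertibleTransformException {
        // Matrix taking B's local space into A's local space: inv(A) * B.
        AffineTransform bToA = aTransform.createInverse();
        bToA.concatenate(bTransform);

        // Transform B's 4 corners and take their AABB in A's local space.
        double minX = Double.MAX_VALUE, minY = Double.MAX_VALUE;
        double maxX = -Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
        double[][] corners = {
            {bRect.getMinX(), bRect.getMinY()}, {bRect.getMaxX(), bRect.getMinY()},
            {bRect.getMinX(), bRect.getMaxY()}, {bRect.getMaxX(), bRect.getMaxY()}};
        for (double[] c : corners) {
            Point2D p = bToA.transform(new Point2D.Double(c[0], c[1]), null);
            minX = Math.min(minX, p.getX()); maxX = Math.max(maxX, p.getX());
            minY = Math.min(minY, p.getY()); maxY = Math.max(maxY, p.getY());
        }
        Rectangle2D bBox = new Rectangle2D.Double(minX, minY, maxX - minX, maxY - minY);

        // Clip against A's own rectangle; this AABB is conservative (it may
        // include pixels outside the rotated B), but it never misses overlap.
        Rectangle2D result = aRect.createIntersection(bBox);
        return result.isEmpty() ? null : result;
    }
}
[/CODE]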