Everything posted by haegarr

  1. For terrain, a possible outline to start with:
     1) compute the height field
     2) perhaps clamp heights below sea level to the height of the sea level
     3) use a color ramp
     For clouds, a possible outline to start with:
     1) compute the height field
     2) clamp heights below the "blue sky level" to that level
     3) colorize the sky level blue and blend other heights towards a dark gray, with a blend factor proportional to the relative height
     Nevertheless, I don't understand why colors should be inherited from the corners somehow. I think that, because the entire area is the first square, using inherited colors would always give a much too smooth colorization. There may be a sophisticated way I'm currently not aware of, so if you have a reference on inheriting colors, please tell us...
  2. The diamond-square algorithm is not about coloring but about height: it generates a height-map. I.e., it adds the 3rd (height) component in a semi-random fashion to a given 2D grid. You can, of course, do colorization by mapping height values to colors, but currently I don't see a meaning in "inheriting" color from the corners. So, what exactly is the goal of your attempt? What are the colors you mentioned good for? E.g., if you want to colorize the height-map like a terrain (white for snow on the mountains and such): make a color ramp, take the resulting height of a grid point, normalize the height, and use the normalized height to address a color in the ramp.
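     Such a ramp lookup might be sketched as follows (a minimal C++ sketch; the `Color` type and the ramp entries are illustrative choices of mine, not from any particular engine):

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

struct Color { std::uint8_t r, g, b; };

// A hypothetical terrain ramp: water -> sand -> grass -> rock -> snow.
// The normalized height h in [0,1] addresses the ramp; neighboring
// entries are linearly interpolated to avoid hard banding.
inline Color rampLookup(float h)
{
    static const std::array<Color, 5> ramp = {{
        {  0,   0, 128},   // water
        {240, 220, 130},   // sand
        { 60, 160,  60},   // grass
        {130, 130, 130},   // rock
        {255, 255, 255}    // snow
    }};
    h = std::clamp(h, 0.0f, 1.0f);
    const float scaled = h * (ramp.size() - 1);   // position within the ramp
    const std::size_t i = static_cast<std::size_t>(scaled);
    if (i + 1 >= ramp.size()) return ramp.back();
    const float t = scaled - i;                   // blend factor between entries
    auto lerp = [t](std::uint8_t a, std::uint8_t b) {
        return static_cast<std::uint8_t>(a + t * (b - a));
    };
    return { lerp(ramp[i].r, ramp[i+1].r),
             lerp(ramp[i].g, ramp[i+1].g),
             lerp(ramp[i].b, ramp[i+1].b) };
}
```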
  3. C# Load COLLADA Weigts

    AFAIS: The value of vertex_weight/@count is identical to the count of vertices in the skin mesh. It is also the count of numbers in the vertex_weight/vcount array. For each vertex in the mesh, in the order of the vertices, the value at index n in the vertex_weight/vcount array denotes how many influences act on the vertex with index n. So in your example vertex #0 has 5 influences, vertex #1 has 5 influences, and so on.

    The numbers in the vertex_weight/v array are to be interpreted as pairs. In your example the sequence of pairs is (0,1), (1,2), (2,3), (3,4), (4,5), (0,6), and so on. The first value of each pair denotes the bone (with the exception that a value of -1 denotes the bind shape). The second value of each pair denotes the index into the weights array.

    Beginning with vertex #0, read the first number from vertex_weight/vcount as n, and then read the first n pairs from vertex_weight/v and interpret them as all influences on vertex #0. Then for the next vertex, read the next number from vertex_weight/vcount as n, and then read the next n pairs from vertex_weight/v and interpret them as all influences on that vertex. Continue this for all remaining vertices.
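    The pairing scheme described above might be sketched like this (container and struct names are my own illustrative choices; error handling is omitted):

```cpp
#include <vector>

// One influence on a vertex: a joint index (-1 = bind shape) and a weight.
struct Influence { int joint; float weight; };

// Pair up the vcount and v arrays as described above: vcount[n] influences
// for vertex n, each influence being a (joint index, weight index) pair
// read sequentially from v.
std::vector<std::vector<Influence>> pairInfluences(
    const std::vector<int>& vcount,
    const std::vector<int>& v,
    const std::vector<float>& weights)
{
    std::vector<std::vector<Influence>> result(vcount.size());
    std::size_t cursor = 0;                    // read position in v
    for (std::size_t n = 0; n < vcount.size(); ++n) {
        for (int k = 0; k < vcount[n]; ++k) {
            const int joint = v[cursor++];     // first value of the pair
            const int widx  = v[cursor++];     // second value: index into weights
            result[n].push_back({joint, weights[widx]});
        }
    }
    return result;
}
```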
  4. Algorithm Logic in ECS?

    But collision may change the position and/or velocity of your player entity, wouldn't it? Hence it is settled only after collision.
  5. Algorithm Logic in ECS?

    The terms "manager" as well as "system" are widely used. In general a "system" is a set of components that work together to fulfill a purpose (see software system and also self-contained system). In game development we often speak of sub-systems as parts of the engine, like the graphics sub-system, the animation sub-system, the physics sub-system, ..., so grouping components by functionality. Moreover, we obviously allow (sub-)systems to exist in a more or less shallow hierarchy.

    I'm not aware of an analogous definition for "manager". The term is often used when something is responsible for the lifetime of objects (memory manager or resource manager). Personally I use the term mostly for the front-end object of a facade implementation, because it is in the limelight as seen by the clients and delegates all the work to the hidden objects. E.g. the resource manager is the public interface in front of a resource cache, resource directory, and resource loader.

    "System" w.r.t. ECS means software that handles ECS component data and runs the necessary processes on that data. It is often chosen as an opposite to the distributed component management where the components are strongly coupled to the entities. That is IMHO a different definition from the one given above, so the same term seems to me to be used for different things, and looking at the context is necessary to understand which is meant.

    With the above being my understanding of things, let's talk a bit about your questions. BTW: Here the term "manager" is hard to interpret. What does it mean? A better name would hint at the purpose. Depending on how obstacles are implemented, does it spawn game objects, or is it a procedural map generator, or ...? It needs access to position and velocity. It makes no sense to use values that are not validated, i.e. values that may still be altered due to collision correction or so. Hence it needs to be placed behind "3) Transform Update", if I understand your loop correctly.

    Should it necessarily be placed before "4) Render System"? Well, that again depends on how the obstacles are implemented. If they are game objects, then no (although I would place it before rendering anyway, just because it counts to the logic part of the game loop). If so, you may want to read the thread Question on organization of draw loop where I explain my approach a bit more deeply. But if the obstacles are just static objects in a generated map, then putting the task before rendering allows the obstacles to appear right after creation.

    A game loop should IMO be there to execute systems in the general meaning of the term. That may or may not be a system in the ECS meaning. With respect to the thread I've cited above, any system is allowed to have one or even more GameLoop::Task objects bound to the game loop.

    Again, "manager" is somewhat an empty phrase. Please choose another name or, at least, prepend it with something meaningful. However, whether it could be "a manager" or live in an (ECS-like) system depends on how an obstacle is implemented (I know I repeat myself). See the discussion above.
  6. Then obviously the full scan needs to be replaced by a sparse scan. How many "combinations" of opponents may occur? E.g. an algorithm like this:
     for the current AI agent:
         calculate the (squared) distance to all opponents on the map
         sort the list by distance
         foreach opponent in the list do
             if opponent is known to see the current agent: break
             if opponent is known to not see the current agent: continue
             determine and store visibility(agent, opponent) by casting a ray
             if visible: break
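     A sketch of that scan in C++, assuming the "known to (not) see" knowledge is held in a per-query cache and the actual line-of-sight test is passed in as a callback (all names are illustrative):

```cpp
#include <algorithm>
#include <functional>
#include <optional>
#include <vector>

struct Agent { float x, y; };

// Sparse visibility scan: visit opponents nearest-first, reuse cached
// visibility results, cast a ray only when the answer is unknown, and
// stop at the first (i.e. nearest) visible opponent.
std::optional<std::size_t> nearestVisibleOpponent(
    const Agent& self,
    const std::vector<Agent>& opponents,
    std::vector<std::optional<bool>>& visibilityCache,
    const std::function<bool(const Agent&, const Agent&)>& castRay)
{
    // Sort opponent indices by squared distance (no sqrt needed for ordering).
    std::vector<std::size_t> order(opponents.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    auto sqDist = [&](std::size_t i) {
        const float dx = opponents[i].x - self.x;
        const float dy = opponents[i].y - self.y;
        return dx * dx + dy * dy;
    };
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return sqDist(a) < sqDist(b); });

    for (std::size_t i : order) {
        if (!visibilityCache[i].has_value())          // not known yet: cast a ray
            visibilityCache[i] = castRay(self, opponents[i]);
        if (*visibilityCache[i]) return i;            // first (nearest) hit wins
    }
    return std::nullopt;
}
```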
  7. C++ comparing 2 floats

    At least: You need to use the absolute value of the difference, something like if (fabs(a-f) < epsilon), or else you check whether a is greater than f and close to f, or a is less than f regardless of how close.
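    A minimal sketch of such a comparison (the epsilon value is an arbitrary example; for values of widely varying magnitude a relative epsilon is the usual refinement):

```cpp
#include <cmath>

// Absolute-epsilon comparison as described above. A fixed epsilon only
// suits values of known, moderate magnitude.
inline bool nearlyEqual(float a, float f, float epsilon = 1e-5f)
{
    return std::fabs(a - f) < epsilon;
}
```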
  8. The opening post allows for much freedom, because it specifies no constraints or optimizations that may already be in place. So I throw in some thoughts and hope they help ...
     Static geometry defines a priori known combinations of tiles between which LOS will never work. Hence baking a list of possible target tiles into each tile gives a subset where further calculations are meaningful at all. This may not work well in case the "sight" is very far.
     A variation of the above may be to bake in a list of angle ranges describing directions that are blocked (or unblocked). This may work better in case the "sight" is very far, or different characters have different demands on "sight".
     Pairs of tiles need to be investigated only if the one tile is occupied by the AI agent itself and the other is occupied by a (living) member of a hostile party.
     The effect range may restrict the distance of tiles that can be reached. Looking further than that would be meaningless.
     An order like scanning an inner ring of tiles (around the AI agent of interest) before scanning the next outer ring allows the algorithm to stop at the first hit; you're looking for the "nearest" tile, right?
     The computations are bi-directional. If an uninterrupted LOS is found from agent A to agent B, then it is also valid from agent B to agent A (but may perhaps be qualified with another view distance and/or effect range). The other way around the same: if agent A cannot see agent B, then agent B cannot see agent A.
  9. Converting 3D Points into 2D

    There are some things that make understanding your post problematic: A "center" is a point in space. How should "0,0 to -1,1" be understood in this context? You probably mean "a point in range [-1,+1]x[-1,+1]". Otherwise, you can normalize a point (with the meaning of making its homogeneous coordinate be 1), but that is - again probably - not what you mean, is it? A position can be given in an infinite number of spaces. Because you're asking for a transformation of a position from a specific space into normalized space, it is important to know what the original space is. Your code snippet shows "out.pointlist" without giving any hint in which space the points in that point list are given. Are they given in model local space, or world space, or what? In the end I would expect that perhaps the model's world matrix and essentially the composition of the camera's view and projection matrices are all that is needed to do the job. You already fetch viewProjection by invoking getCameraViewProjectionMatrix() (BTW a function I did not find in Ogre's documentation). What is wrong with that matrix? What's the reason you are not using it?
  10. Well, history shows that the forum frowns upon both walls of text and multiple subsequent posts by the same poster. If in doubt, one can split off a thread, so that an aspect can be discussed in greater detail in a companion thread.

      It seems a bit complicated, but w.r.t. ECS one has the need to synchronize things across sub-system borders anyway. Sending messages in an unorganized way would not work, and notifying listeners in order is just the push data flow variant where my way is the pull data flow variant.

      Regarding reversibility: Yep, I'm trying to detect error conditions early just to avoid hazardous situations. In the given case the aspect of asynchronous resource loading comes into play as an addition. That resource loading means that resources may become available in the future, and of course you have to handle this in a way so that the sub-system can still work somehow.

      That is a well formulated description. If it isn't copyrighted, I'll tend to use it in future posts ...

      Already requesting creation/deletion jobs is a kind of modification. The main occurrences of non-read-only access are when parts of internal components are enabled or disabled. For example, the SpatialServices manages the Placement for the game objects. Those are the global positions and orientations. As such, a Placement may be constrained (by mechanisms like Parenting, Targeting, Aiming, ...). While the execution of the constraint is driven by a task, the enabling is driven by service invocation.

      This ... ... is it. The coarse flow is from input disposal to entity management to simulation stuff to graphical rendering. Things like input catching, resource loading, and sound rendering run concurrently to the game loop.

      However, this ... ... perhaps needs some more discussion. Depending on how fire is simulated, it may or may not be implemented in its own sub-system. Sub-systems should provide generally usable services. If a fireplace is simulated by texture clip swapping, then it is nothing special and will be handled by the generic animation sub-system. If the fire is simulated by e.g. two particle systems, then also a generic sub-system is used. Only if there is a special - say - physically based simulation or a spreading forest fire, then a specific sub-system is useful. Think of duck typing: A game object made from a Model with a placement (of course), a mesh (looking like pieces of firewood), an animated billboard (fire), a particle emitter (with sparkle-like particles), a second particle emitter (with smoke-cloud-like particles) ... gives you a fireplace. Nothing but the look of mesh and textures is specific to fire.
  11. Absolutely, although I would not say that services are read-only per se, but they are mostly read-only. Notice please that the S in ECS stands for "system" (or sub-system in this manner). This makes it distinct from component-based entity implementations without (sub-)systems. The purpose of such systems is to deal with a specific, more-or-less small aspect of an entity, which is given by one or at most a small number of what are called the "components". The (sub-)systems do this as a bulk operation, i.e. they work on the respective aspect for all managed entities in sequence. If this is done, then we have an increment of the total state change done for all entities, and this is the basis for the next subsequent sub-system to stack up its own increment. You're right: This of course works if and only if the sub-systems are run in a defined order. That is the reason for the described structure of the game loop. Well, having a defined order is not bad. A counter-example: When the placement of an entity is updated, running a collision detection immediately is not necessarily okay, because that collision detection may use some other entities with already updated placements and some with not yet updated placements. The result would be somewhat incomplete. You may want to read this Book Excerpt: Game Engine Architecture I'm used to citing at moments like this. However, this high level architectural decision does not prevent e.g. message passing at some lower level. When message passing is beneficial at some point, let it be the tool of choice.
  12. Mostly true, but there are some misunderstandings. The Model is a container class with a list of components. The sum of all the specific types of the components, together with the therein stored parametrization, constitutes an entity (or game object; I'm using both terms mostly equivalently). The Model instance is used only during the entity creation process. It is a static resource, so it will not be altered at any time. Hence I wrote that its role is that of a recipe, because it just allows the set of belonging sub-systems to determine what they have to do when creating or deleting an entity that matches the Model. The entity management sub-system does not know how to deal with particular components. It just knows that other sub-systems need to investigate the Model's components during the creation and deletion process, and that those sub-systems will generate identifiers for the respective inner structures that will result from the components. Each sub-system that itself deals with a component of a Model has some kind of internal structure that is initialized according to the parameters of the component. This inner structure will further be the part that is altered during each run through the game loop. Hence this inner structure is a part of the active state of an entity. Notice that this is a bit different from some other ECS implementations. Here we have a Model and its components, and we have an entity with its - well, so to say - components. There is some semantic coupling between both kinds of components, but that's already all the coupling that exists. So the entity manager just knows how many entities are in the world, how many entities will be created or deleted soon, and which identifiers are attached to them. Even if the Model is remembered, the entity manager has no understanding of what any of its components means.
      EDIT: Well, having so many posts in sequence makes answering complicated. I have the feeling that some answers I've given here are already formulated by yourself in one of the other posts...
  13. The purpose of the entity manager (let's name the sub-system so) is to allocate and deallocate entity identifiers and to manage entity creation and deletion jobs. For this it has 4 job queues:
      * a queue where currently active creation jobs are linked
      * a queue where currently active deletion jobs are linked
      * a queue where currently scheduled creation jobs are linked
      * a queue where currently scheduled deletion jobs are linked
      Whenever a sub-system requests an entity creation or deletion, the entity manager instantiates a corresponding job and enqueues it into the corresponding scheduled jobs queue (a.k.a. it "schedules the job"). Whenever a sub-system asks for the current creation or deletion jobs, it gets access to the corresponding active jobs queue. The entity manager has a task that is to be integrated into the game loop in front of each task of any other sub-system that deals with entity components. When this task's update() is executed, it destroys all jobs enqueued in both active jobs queues. This is because the previous run through the game loop has given all sub-systems the opportunity to perform their part of the creation/deletion of entities, so the currently active jobs are now deprecated. Then the currently scheduled jobs queues are made the new currently active jobs queues, so that at that moment there are no longer any scheduled jobs. Notice that the entity manager itself does not really create or delete entities. It just organizes creation/deletion in a way that the other sub-systems can do creation/deletion w.r.t. their respective participation without ever becoming out-of-order. This is because wherever in the run through the game loop a sub-system decides that an entity should be created or deleted, the actual creation/deletion process is deferred until the beginning of the next run, and the sub-systems get involved in the order of dependency.
      You may have noticed that the above mechanism works if and only if a sub-system's task never cancels its part due to an inability. A typical reason for cancellation would be a missing resource. To overcome this problem, the entity manager actually deals with a fifth list:
      * a list where currently pending creation jobs are linked
      I said earlier that an incoming creation request creates a job in the scheduled creation jobs queue. That is not exactly the case. Instead, the Model instance for which a creation is requested has a kind of BOM attached, i.e. a "bill of resources". Whenever the entity manager is requested for a creation, it first invokes the resource manager (which is the front-end of the resource sub-system) with the said BOM. On return the entity manager is notified whether a) all resources that are listed in the BOM are available; or else b) all resources that are listed and tagged as mandatory are available, but others are not yet; or else c) at least one resource tagged as mandatory is in the load process; or else d) at least one resource tagged as mandatory is finally not available. Then, only if a) or b) is the case, the entity manager enqueues the job into the scheduled creation jobs queue; if c) is true, the job is linked to the pending creation jobs list; and finally, if d) is true, the job is discarded and an error is logged. Jobs in the pending jobs list will eventually become scheduled in one of the following runs through the loop, whenever the resource management notifies that all mandatory resources are finally available.
      So the process of an entity creation is like this:
      * in run N a sub-system requests the creation of an entity
      * the entity manager immediately invokes the resource manager with the BOM
      * in this simple example, the resource manager returns "all resources are available"
      * the entity manager schedules the creation job
      * all subsequent sub-systems will not see the job, because it is not active yet
      * in run N+1 the entity manager's task's update() makes the previously scheduled job become an active job
      * subsequently in run N+1, the tasks of other sub-systems cause their part of creation to happen
      * at the end of run N+1, the entity is completely built and rendered for the first time
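      The double-buffered queue mechanism described above might be sketched like this (heavily stripped down; jobs are plain strings here instead of real job objects, and the pending list is omitted):

```cpp
#include <deque>
#include <string>
#include <utility>

// A sketch of the scheduled/active job queues. Scheduled jobs become
// active at the start of the next run through the game loop; active jobs
// from the previous run are dropped because every sub-system has seen them.
struct EntityManager {
    std::deque<std::string> scheduledCreations, activeCreations;
    std::deque<std::string> scheduledDeletions, activeDeletions;

    void scheduleCreation(const std::string& model) { scheduledCreations.push_back(model); }
    void scheduleDeletion(const std::string& id)    { scheduledDeletions.push_back(id); }

    // The task's update(), placed before every component-handling task.
    void update() {
        activeCreations.clear();                       // last run's jobs are done
        activeDeletions.clear();
        std::swap(activeCreations, scheduledCreations); // scheduled becomes active
        std::swap(activeDeletions, scheduledDeletions);
    }
};
```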
  14. Make tris out of 2d set of points

    Yes, the first iteration does produce only triangles that use an edge (and hence 2 vertices) of the super-triangle. Also the second and (I think) third iterations use such vertices. The third iteration is the first one where a triangle occurs that is uncoupled from super-triangle vertices. That's fine, because you need to have at least 3 vertices in your original set to have at least 1 triangle as output. Look at the picture sequence of Wikipedia's entry on Bowyer-Watson. Notice the red triangles: They are generated as needed but are rubbish in the end. The algorithm removes them in the last step. The blue triangles are all that remain as outcome.
  15. Make tris out of 2d set of points

    Notice that this step is done after all the points are added. So it does not remove the triangle just added, because any addition has been done much earlier in the previous loop. (Look at the indentation level.) The step is just there to get rid of all those wrong triangles that were built with the super-triangle's corners. This is because the corners of the super-triangle are constructed artificially; in other words, those points are not part of the original set of points. Hence none of the edges and none of the triangles that use those points belong to the result.
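    That cleanup step might look like this (a sketch assuming the three artificial super-triangle corners occupy indices 0, 1 and 2 of the point list):

```cpp
#include <algorithm>
#include <array>
#include <vector>

using Triangle = std::array<int, 3>;   // indices into the point list

// The final step of Bowyer-Watson as discussed above: drop every
// triangle that uses one of the three super-triangle corners.
std::vector<Triangle> removeSuperTriangle(std::vector<Triangle> tris)
{
    tris.erase(std::remove_if(tris.begin(), tris.end(),
                   [](const Triangle& t) {
                       // any index < 3 refers to an artificial corner
                       return t[0] < 3 || t[1] < 3 || t[2] < 3;
                   }),
               tris.end());
    return tris;
}
```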
  16. Your OP targets the architectural view, and there is of course not a single valid answer. And yes, diving into details will need further discussion in probably more threads. That said, here is an overview of how I manage the stuff. Several other solutions exist as well. You've already cited a post where the game loop is described as an ordered sequence of updates on sub-systems, with the (graphical) rendering being the last step in the loop. That is still the case. However, the term "sub-system" is a bit like the term "manager": It's often too broad. So, let's say that any sub-system can have tasks and services. Such a task is meant to be invoked from the game loop, i.e. essentially one kind of update(...); I say "one kind" because a sub-system may have more than one update to be called, e.g. the animation sub-system comes to mind. The services, on the other hand, are routines that are provided for use by other sub-systems. Now with tasks being executed one after another in the order of the game loop, we can safely say that any task earlier in the loop is already completed when a later task is executed. Hence the later task can access the results of all earlier tasks by using the appropriate services. This makes a blackboard concept at the given level needless, because we clearly define the execution order instead of letting sub-systems look up for themselves whether they can do something. Regarding data ownership, the tasks define which sub-system owns which data and hence provides which service to allow access. However, chances are that data needs to be duplicated. This may happen e.g. if a third party API (like physics) is used by a sub-system, and the API has its own idea of data organization. When I build the game loop, all services of participating sub-systems are registered, and the registry is passed to the sub-system factories when a new instance is requested. Therefore sub-systems can look for services they have a dependency on, and store the resulting pointer.
      Now coming to the process of instantiation of game objects. First off, because there are several sub-systems involved for every single game object, instantiation is not an ad-hoc process. Remember that a task relies on earlier tasks having done their jobs, so an instantiation has to begin with the next run through the loop, or else some tasks may become out of sync. Therefore, if a sub-system wants a game object to be instantiated, it calls the (let's say) EntityService::scheduleCreation(Model const*) with the model resource as argument. (We need to shortly discuss what "model" means here in a later section.) As the service's routine name suggests, the service just generates a job and enqueues it in a job list. The same proceeding is done when a game object should be deleted, by invoking EntityService::scheduleDeletion(EntityId). Well, the entity management sub-system also has an update task, and hence is integrated into the game loop. Because it manages entities, it must be placed very early in the game loop. The update then removes all active jobs and activates all scheduled jobs. That's all. Later on down the game loop, when a task of another sub-system that manages a component of game objects is updated, it first uses the entity service to get access to all of the active deletion jobs, and handles them according to the purpose of the service (or perhaps sub-system). It then does the same with the creation jobs. Because all tasks it depends on are already up-to-date, the task can safely access data of the other sub-systems at that moment. (Of course, in reality things are a bit more complex when considering features like resource prefetching and asynchronous loading.) Game objects are defined as entities with components, and the belonging data is managed by the sub-systems. So this is a CES - Component Entity System - way of handling game objects.
      A difference exists when this CES is compared to many of the existing ones: I use a model (which is just a container object) as a recipe of how a game object (or "entity") has to be instantiated. That also means that the components are not used directly as they are attached to the Model, but are interpreted by the belonging sub-systems when those instantiate the game object. Hence the internal structure of components may (and often does) differ from the structure seen from the outside. The services then grant read-only access to the internal structure or an image of it. With this description in mind, a Model instance may provide a ShapeComponent with a mesh inside, a ParticleEmitterComponent, of course a PlacementComponent with positioning and orientation in the world, perhaps an AnimationComponent, a GraphicTechniqueComponent, ... and so forth. The different components are "consumed" by the belonging sub-systems. Phew, let me take a break here...
  17. Camera on Vehicle

    No offense, but most of your opening post is very confusing (at least to me). E.g. there is no such thing as a target position within a view matrix. Moreover, "farther viewing" should be done by zooming. And multiplying a position is not meaningful from a mathematical point of view. So it seems to me that some deeper insight may be helpful ...
    Let's mostly ignore that the camera is somewhat special due to its view-defining purpose. Instead, let's think of the camera as an object in the world like any other one. The placement (position and orientation) of the object w.r.t. the world is given as matrix C. Then the relation of a point (or direction) called v in the local space of the camera and its counterpart v' in world space is just
        v' = C * v
    Bringing C onto the other side of the equation like so (where inv(...) means the inverse matrix)
        inv(C) * v' = inv(C) * C * v = v
    defines the other way: expressing a world co-ordinate v' in the local space of C. Especially w.r.t. the camera, C may be called the "camera matrix", and inv(C) may be substituted by V, which is usually called the "view matrix" in the context of cameras, because the view matrix is just the inverse of the camera matrix. (The math itself is universal to spaces, regardless that we apply it to the camera in this case.)
    Attaching the camera to the vehicle means making a geometrical constraint, so that moving the vehicle "automatically" moves the camera, too. This kind of thing is usually called "parenting" or - more technically - "forward kinematics". This means that we define a "local placement" L for the camera, i.e. a spatial relation that expresses the position and orientation of the camera not in the world but w.r.t. the vehicle's world placement W. In other words, L defines how much translation and rotation must be applied to a vector in the camera's local space so that it is given in the vehicle's local space.
    The formula needed to transform from a local space to its parent space is already shown above, where we've used the world space as the parent space. However, an indefinite number of parent spaces can be chained. What we want here is to go from the camera's local space into the vehicle's local space, and from there to the world space. So we have
        v' = L * v
        v'' = W * v'
    or together
        v'' = W * ( L * v ) = ( W * L ) * v
    from which we see that "parenting" just means concatenating the particular transformation matrices. But be aware that matrix multiplication is not commutative, so the order of the matrices is important. In the given example we have a composited matrix W * L for parenting.
    Now the question pops up of how L is built. As a placement it has both a positional and an orientational part. Both can be set to fixed values, meaning that the camera is installed as a static device in the vehicle cockpit. Less strict parenting can be done, too. E.g. the position can be fixed while the orientation is driven by targeting a "look-at" point. Let's investigate this example a bit further. So we define that the placement matrix L is composed from a translational and a rotational part, T and R resp., in the usual order (as you've hopefully noticed, this post uses column vectors):
        L := T * R
    To calculate the look-at vector, i.e. the unit direction vector from the position of the camera to the target point, both positions must be given in the same space, and the resulting look-at vector will be in that space, too. Because L is given in vehicle space, and R (which the look-at vector is a part of) hence also, we are interested in a vehicle-local look-at vector. T is already vehicle-local, but the target point p is presumably given in world space. So we need to transform it first
        inv(W) * p
    and can then compute the difference vector
        d = inv(W) * p - T * 0
    where 0 denotes the origin point vector in homogeneous co-ordinates, i.e. [ 0 0 0 1 ].
From here normalization and matrix building is done as usual, so I neglect that stuff here. HtH
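    The concatenation v'' = ( W * L ) * v can be illustrated with a minimal 4x4 matrix type (row-major storage, column vectors, matching the notation above; any real math library already provides this):

```cpp
#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;   // row-major storage, column vectors

// v' = M * v, with column vectors as used in the post.
Vec4 transform(const Mat4& m, const Vec4& v)
{
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Matrix concatenation: (a * b) applied to v equals a * (b * v).
Mat4 concat(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// A pure translation placement, as a simple example of W or L.
Mat4 translation(float x, float y, float z)
{
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    m[0][3] = x; m[1][3] = y; m[2][3] = z;
    return m;
}
```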
  18. Since the title mentioned DirectX (as I've noticed just now): AFAIK it uses z = 0/+1 as normalized depth range.
  19. The vertices you're interested in are the corners of the view frustum. They are given as the corners where the 6 view clipping planes meet. In normalized device co-ordinates those planes are the 6 sides of the cube with its sides at x = +1/-1, y = +1/-1, and z = +1/-1 or z = 0/+1 (all possible combinations in a vector [x,y,z] give you the 8 corners; notice that some APIs use z = +1/-1 while others use z = 0/+1). On the other hand, in camera space the frustum is limited by the near and far clipping planes and 4 other planes which depend on the projection mode (orthogonal or perspective) and its parametrization (view size and perhaps view angle).
  20. That depends on the co-ordinate space in which the said vertices are given. In general you just have to apply the transformation matrices that change the space relation of the vertices from the given one up to the world space. When chaining transformation matrices the correct order has to be obeyed, of course. Assuming that the vertices are given in camera local space, you have to apply the usually so-called camera matrix (equal to the inverse view matrix), because that matrix describes the relation of the camera local space in world space. If otherwise the vertices are given in normalized device co-ordinate space, you have to apply the inverse projection first before using the camera matrix.
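      As a sketch, given the inverse of the combined view-projection matrix (computing that inverse is left to your math library), unprojecting the 8 NDC cube corners looks like this; note the homogeneous divide by w after the transform:

```cpp
#include <array>
#include <vector>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;   // row-major storage, column vectors

// Unproject the 8 corners of the NDC cube into world space. The z range
// of the cube depends on the API (+1/-1 or 0/+1), hence the parameters.
std::vector<Vec4> frustumCornersWorld(const Mat4& invViewProjection,
                                      float zNear = -1.0f, float zFar = +1.0f)
{
    std::vector<Vec4> corners;
    for (float x : {-1.0f, +1.0f})
        for (float y : {-1.0f, +1.0f})
            for (float z : {zNear, zFar}) {
                Vec4 v{};
                const Vec4 p{x, y, z, 1.0f};
                for (int i = 0; i < 4; ++i)
                    for (int j = 0; j < 4; ++j)
                        v[i] += invViewProjection[i][j] * p[j];
                for (int i = 0; i < 3; ++i) v[i] /= v[3];   // homogeneous divide
                v[3] = 1.0f;
                corners.push_back(v);
            }
    return corners;
}
```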
  21. The while-loop slices totalDeltaTime (the total elapsed time since the last iteration through the game loop) into equally sized (i.e. MAX_DELTA_TIME) fixed timesteps, but only as long as that much (i.e. MAX_DELTA_TIME) duration is still available. Otherwise only the remaining time is used. That std::min ensures exactly this: make deltaTime be MAX_DELTA_TIME as long as it fits, but the remaining time if it doesn't fit. To actually work, a line like "totalDeltaTime -= deltaTime;" needs to be within the while-loop. Also, the condition "totalDeltaTime>0.0" makes no sense without such a decrement.
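      Put together, the loop might look like this (the simulation update is left as a comment; the step counter is only there for illustration):

```cpp
#include <algorithm>

// Slice totalDeltaTime into fixed steps of at most maxDeltaTime.
// The decrement inside the loop is what makes totalDeltaTime shrink
// toward zero; without it the loop would never terminate.
int sliceTime(double totalDeltaTime, double maxDeltaTime)
{
    int steps = 0;
    while (totalDeltaTime > 0.0) {
        const double deltaTime = std::min(totalDeltaTime, maxDeltaTime);
        // ... update the simulation by deltaTime here ...
        totalDeltaTime -= deltaTime;
        ++steps;
    }
    return steps;
}
```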
  22. Unresolved external symbol?

    There is an argument-less constructor in the class gameInventoryMaster within which the method gameInventory::setSingleSlot is invoked. So look out for that. Rather an indirect include, i.e. an include within an included file or an include in an implementation file.
  23. Could anyone explain this math?

    While your thought is true in the common mathematical understanding, it is just a question of definition for computer languages. In GLSL, for example, adding a scalar to a vector is possible; it just builds a vector where all components are set to the scalar and then performs a vector addition. The GLSL specification, chapter "Operators and Expressions", section "Expressions", says as much: when one operand is a scalar and the other is a vector, the operation is applied to each component of the vector, using the scalar for every component.
  24. Code Snippet Help, Please.

    What kind of "3D" does this game show? It reminds me of a method of making racing car games that was popular in the era before GPUs. There is this excellent article that explains the technique in great depth. However, the code snippet alone shows little context, so I may be wrong. If that technique is used here, then the shown code snippet creates the track in advance. That "line.curve = ..." stuff could be the offset that causes the visual bending of the road when rendered. And that "line.sprite ..." stuff looks like placing billboards to decorate the track to its left and right.
  25. Searching the internet for "vector reflection" gives you a ton of hits. Just the first hit for me answers your question. The closest vector on the plane (i.e. perpendicular to the plane's normal) can be found by using the cross product twice and rescaling the result, as is usual when a basis should be computed. The first cross product is used to calculate a vector perpendicular to both the normal and the initially given vector:
          k := n x v
      Then the second cross product, applied to the previous result and the normal, gives a vector in the desired direction:
          m := k x n
      The rescaled variant of this vector is what you're looking for:
          v' := m / ||m|| * ||v||
      That works if and only if v and n are not collinear, of course. This can also be done simply by applying a rotation to the vector. The rotation's axis is given as the cross product of the vector and the normal.
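      The double-cross-product construction can be sketched like this (a minimal sketch with a bare-bones vector type; no check for the collinear case):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

float length(const Vec3& v)
{
    return std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
}

// k = n x v, m = k x n, then m rescaled to the length of v.
// Assumes v and n are not collinear (otherwise m is the zero vector).
Vec3 closestOnPlane(const Vec3& v, const Vec3& n)
{
    const Vec3 k = cross(n, v);
    const Vec3 m = cross(k, n);
    const float s = length(v) / length(m);
    return { m[0]*s, m[1]*s, m[2]*s };
}
```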