
Member Since 10 Oct 2005

Posts I've Made

In Topic: Help creating an interactive story game

26 June 2016 - 02:01 AM

A "motion comic" is a comic-like presentation of graphical content, augmented with basic animations and perhaps sound within the panels. It always has user interaction to advance the story, and it may offer interaction to examine pictorially given details (mimicking the look-at / examine functionality of IF / TA / GA). Such interaction has no impact on the story.


Now, it may have interactions that lead to different story paths and endings (a.k.a. a branching story). This is close to the CYOA approach mentioned by alnite above. Because the narration is done in (partly) animated comic panels, this approach consumes a lot of artists' time and requires great attention to continuity checking. All potential endings are pre-defined, and a particular reader sees a single path through the story.


Another approach is to tell the story in parts, so that one part is published per time unit (e.g. a week). Each part ends at a branching point, and the readership can vote online on how the story branches and hence advances in the next part. This way of influencing the story does not depend directly on the motion-comic medium, but it reduces the overall amount of work drastically. At the same time it obviously has other hurdles.


Graphic adventures (or GA for short), often implemented using the point-&-click interaction style mentioned by Alberth, are a different thing. They and their text variants TA and IF are based on a concept where the player is in a room and can examine and interact with the things therein and finally leave the room through one of the exits. Notice how different the medium is. In a motion comic, the story is narrated in a sequence of panels, each of which can express a dramatic composition through camera settings, scene composition, and such, while a GA does not do such things.


In other words, the artistic expressiveness in a motion comic is much greater, while in a GA the interactive part plays a bigger role. Interestingly enough, the authoring systems available for GAs can also be used to design a motion comic (although I somewhat doubt that all the nice effects seen in motion comics can be achieved with them … but maybe).


Allowing the readership to contribute content to a story-driven motion comic - well, at the moment I cannot really imagine that working well. Can you share your ideas in more detail?

In Topic: render antialiased lines.

12 June 2016 - 02:02 AM

thanks for the explanation, by color space do you mean the alpha values or all of RGBA?

Neither. The alpha channel does not belong to the color space. Strictly speaking, RGB is also not a color space but a color model. But yes, RGB is the term in daily speech that names what I mean. When interpolating colors, you need to do this in a linear color space. If you have sRGB, you first need to convert it to linear RGB (maybe your GPU does this for you on the fly).
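As a small illustration of that conversion (a sketch of my own, not code from this thread; the function names are mine), here is the standard sRGB transfer function applied per channel, with the blend done in linear light:

```python
# Sketch: interpolate two sRGB colors correctly by decoding to linear
# RGB, blending there, and re-encoding. Channel values are in 0..1.

def srgb_to_linear(c):
    """Decode one sRGB channel to linear light (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear channel back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lerp_srgb(a, b, t):
    """Blend two sRGB triples in linear space; return an sRGB triple."""
    return tuple(
        linear_to_srgb(srgb_to_linear(x) * (1.0 - t) + srgb_to_linear(y) * t)
        for x, y in zip(a, b)
    )

# A 50/50 mix of sRGB black and white lands well above 0.5, because
# sRGB 0.5 is much darker than half of the linear light intensity:
mid = lerp_srgb((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5)
```

Blending the raw sRGB values instead would give exactly 0.5 here, which is the visibly-too-dark result the quoted advice warns about.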

In Topic: render antialiased lines.

11 June 2016 - 12:55 AM

from the 1st one, what I cannot understand is the anti-aliasing part. I mean, how exactly is it interpolated between 2 normals?! If I just take its length it doesn't quite work. Can anyone explain, or have an idea about which detail I am missing?


The vertex normals are used to expand the geometry from an infinitesimally thin line to a line with thickness. The latter is represented as a set of triangles that form a closed region. This is done in the vertex shader, because that is the place where vertices can be shifted around. The vertex shader is called on each vertex, and because those are just a few points in space (usually with some additional information), they are far too sparsely distributed to be used for anti-aliasing.


The fragment shader, on the other hand, is called for every pixel (called a fragment in OpenGL) that is covered by the thick line's triangles. Anti-aliasing can be done here, because its spatial resolution is much finer than that of the vertices. To do anti-aliasing, you compute the distance of the pixel to the line's skeleton, i.e. to the original infinitesimally thin line. Leaving corners aside, the distance from a point to a line is defined as the shortest distance from the point to any point on the line. If you think about it, it is the length of a vector that starts somewhere on the line, ends at the point of interest, and is perpendicular to the line. Hence it can be called a normal, too, but its length depends on the pixel's position. So let us name it the difference vector, just to avoid confusion.


Now, for anti-aliasing, you compare the length of the difference vector with half of the line width. Anti-aliasing should happen only if the pixel's distance is close to half of the line width, and it should not happen otherwise. So, as an example with numbers: use the line color if the distance is less than 90% of half the line width, use color interpolation if the distance is between 90% and 110%, and discard the pixel if the distance is greater than 110%.


An interpolation factor can be calculated as

    f := 1.0 - ( distance - 0.9 * half_width ) / ( ( 1.1 - 0.9 ) * half_width )   for   0.9 * half_width <= distance <= 1.1 * half_width

and hence it will be 1.0 at the 90% position and 0.0 at the 110% position, falling off linearly in-between. You can, of course, feed this value into a response curve if you want some other fall-off profile.


Notice that color interpolation should be done in a linear color space, or the results will look odd, but that is another topic ;)



EDIT: To be precise, an OpenGL fragment is meant to be a region on a surface that maps to a pixel.

In Topic: QuadTree: design and implementation question, (n+1)th topic

26 May 2016 - 01:12 AM

What about a 1:1 relation of components and quadtrees? I mean creating a separate quad tree for each sortable component. This solution has the highest memory cost but probably the fastest search. But the extra memory consumption is not a big deal IMHO; it's just a couple of extra MB at most.


A QuadTree is a storage structure with attention to a partial neighborhood relation. By itself it is totally independent of what kind of objects are stored. Any higher level of abstraction can use (its own instance of) a QuadTree to manage its objects. For example, a sub-system that manages the Listener component (meaning simulated hearing) can check for spatial vicinity by utilizing a QuadTree; the Collision sub-system can manage a set of colliders by utilizing a QuadTree (where a collider itself refers to a Collision component); the lighting sub-system manages lights in a QuadTree; and so on. However, in an implementation where separation of concerns is respected, a QuadTree is just a utility used behind the scenes, and there is an instance of QuadTree for each concern. A sub-system provides services through its interface, and whether it uses a QuadTree or anything else internally is irrelevant from a client's point of view. Doing this allows internal implementations to be changed when necessary without impact on the clients.
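A minimal sketch of that separation (all names are mine, and the "QuadTree" here is a deliberately naive stand-in, since the point is the ownership structure, not the spatial index itself):

```python
# Sketch: each sub-system owns its own spatial-index instance as a
# hidden utility and exposes only a domain-level query on its interface.

class QuadTree:
    """Stand-in for a real quadtree: a flat list of (x, y, payload)
    entries with a naive radius query. A real implementation would
    subdivide space, but clients never see the difference."""
    def __init__(self):
        self._items = []

    def insert(self, x, y, payload):
        self._items.append((x, y, payload))

    def query_radius(self, x, y, r):
        return [p for (px, py, p) in self._items
                if (px - x) ** 2 + (py - y) ** 2 <= r * r]

class ListenerSystem:
    """Clients ask "who can hear this sound?"; that a QuadTree is used
    internally is an implementation detail."""
    def __init__(self):
        self._tree = QuadTree()          # this concern's own instance

    def add_listener(self, entity_id, x, y):
        self._tree.insert(x, y, entity_id)

    def audible_to(self, x, y, loudness):
        return self._tree.query_radius(x, y, loudness)

class CollisionSystem:
    """A second concern with its own, independent QuadTree instance."""
    def __init__(self):
        self._tree = QuadTree()

    def add_collider(self, entity_id, x, y):
        self._tree.insert(x, y, entity_id)

    def colliders_near(self, x, y, r):
        return self._tree.query_radius(x, y, r)
```

Swapping the naive list for a real quadtree later changes nothing on the `ListenerSystem` or `CollisionSystem` interfaces, which is exactly the point made above.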

In Topic: Determine what side of rectangle is hit by ball

19 May 2016 - 06:03 AM

Basically, I just copied your formula and tried to convert it to code based on how I understand it (i.e. how I understood the explanation in your 2nd post).

Ugh, that's not the way to go. Much of what was written is there to explain and derive the solution. The way to convert it into code is to pick the formulas usable to solve for a variable (here k' and eventually f'), derive the solution on paper, decide which variables are worth coding separately, and write all of that as code.


The f here is kinda magic. I don't know where it came from?

Lactose! has answered this correctly.


However, I still have a feeling this is a lot more complicated than it probably needs to be.

It is too complicated if the goal is just to decide on which side the collision happened, that's true. However, I think the game mechanics require reflecting the ball, and doing that appropriately requires more information than just the side. The full solution requires the location of the collision in space, and that is what my approach shows.