Entries in this blog

 

Tools for Photosensitive Epilepsy I: Flashing

You can find the original post here. This version has been formatted to fit your scr-- I mean, formatted for GameDev.net.
In 2009, engineers from the University of Wisconsin released a piece of software called PEAT. It has been used in medicine ever since as a tool to diagnose patients with photosensitive epilepsy, a condition in which strobing lights cause seizures, and which currently has no cure. Triggers include lightning, flashing sirens, flickering lights, and similar strobe effects. Other triggers that seem unrelated, but are also very cyclic in nature, include rapidly changing images and even certain stationary stripe and checkerboard patterns. This particular form of epilepsy affects roughly 1 in every 10,000 people, around 3.5 million people worldwide.
.
Despite these numbers, sufferers largely have to take precautions on their own. Nearly every movie has lightning or camera flashes in it, and videos online are often a lot worse. With the exception of UK television broadcasts (which are regulated), there are no real preventative measures in place to combat epileptic events.
.
The next little mini-series of posts covers my development of a sort of real-time epilepsy protector: a program that watches the current output of a computer monitor, detects when epileptic events may be possible, and blocks the offending region of the screen from further flashing. During development, all suggestions are very welcome, as I'm probably going to go about everything in the exact wrong way.
.
Without further ado, let's jump right in.
.
W3C definition
.
The W3C spec for Web Accessibility is the spec that PEAT is implemented from. It defines two ways that an epileptic event can occur: general flashes and red flashes. For the rest of this post I'll only be dealing with general flashes, as I want to get the general case out of the way first.
.
So the actual spec is kinda confusing. It mixes units, defines thresholds for passing instead of failing, and tries to mix approximations and measurements. I'm going to take creative liberties to fix these issues and add new terms for clarity, without losing the meaning of the actual spec.
.
Taking it all apart and extracting the actual meaning, an epileptic event can be triggered when:
There are more than 3 "general flashes" in a 1 second interval

A general flash is defined as:
25% of the pixels in a 0.024 steradian area on the screen, relative to the viewer, all experience the same "frame flash" at the same time

A frame flash is defined per pixel as what happens when:
There is a decrease followed by an increase, or an increase followed by a decrease, in the histogram of luminance for a single pixel. Each increase and decrease satisfies the following:
The minimum luminance in the histogram over the time-span of the increase or decrease is less than 0.8
The amount of increase or decrease of luminance in the histogram must be at least 10% of the maximum luminance in the "range" of the increase or decrease

So an epileptic event can be triggered when many pixels agree that a flash happened at the same time. The W3C spec says "concurrently," so I'm taking that to mean "in the same frame," even though differing by 1 frame probably isn't discernible to the eye.
.
So you could create a function that takes only the histogram of luminance for a single pixel and spits out the frame flashes for that pixel. In fact, all we need to know about the frame flashes of each pixel is when they occur, so that we can determine how many happen concurrently. This is the only thing the program stores about them.
.
There are other details in the W3C spec that I'm ignoring:
Very "fine" transitions, like 1px x 1px checkerboards alternating colors every frame are fine, because the individual transitions are not discernable to the human eye. This is a complicated one to deal with, for many reasons. However, if you ignore it, you fall prey to detecting flashes in videos of light fog moving, where a lot of the transparency comes from stochastic opacity, or from grain filters. I might address this later.
It gives some approximations of the areas on the screen you should be looking at, for a viewing distance of 22-26 inches from a 15-17 inch screen at a resolution of 1024x768. This is far too specific to generalize to any screen, so I'm going to be doing all the calculations on my own (see the sketch after this list).

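As a concrete example of the kind of calculation I mean, here's a quick sketch of converting a 0.024 steradian patch into a pixel count for an arbitrary setup. This isn't from PEAT or the spec; it assumes the small-angle approximation (solid angle = area / distance^2), and the helper name is made up:

#include <cmath>

// omega = A / d^2 for small angles, so A = omega * d^2
// (viewDistInches and ppi are illustrative parameters)
double steradiansToPixels(double steradians, double viewDistInches, double ppi)
{
    double areaInches2 = steradians * viewDistInches * viewDistInches;
    return areaInches2 * ppi * ppi; // square inches -> pixels
}

// e.g. steradiansToPixels(0.024, 24.0, 96.0) is roughly 127,000 pixels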
I'm also imposing my own restrictions on the definitions above. I have no idea if they're correct, but they make the algorithms more approachable (or rather, they make the problem less ambiguous):
The time-span of the increases and decreases of each frame flash cannot overlap between frame flashes in a single pixel
A general flash is counted if 25% or more of a 0.024 steradian viewing area experiences a common frame flash. The spec didn't specify, but this is the only thing that makes sense.
I'm only going to be testing fixed areas of 0.024 steradians, instead of trying to find every area that satisfies the conditions in the spec.

So, just as a review of everything, here's the general pipeline for figuring out if an epileptic event is possible in the past second:
(Sample phase) Create a histogram of luminance over the past second for every sampled pixel on the screen
(Frame Flash phase) For every pixel: create a list of all frame flashes by looking at the peaks of its histogram
(Gather phase) For each frame: try to find a 0.024 steradian area where 25% or more of the pixels experienced a frame flash on that frame. If such an area is found, increment the general flash counter by 1
If the general flash counter is greater than 3, an epileptic event is possible
Since epileptic events are possible when the general flash counter exceeds 3, protective measures are going to be triggered as soon as the counter reaches 3. Also, I don't just want to know whether an epileptic event is possible; what I want to do is block it out.
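Here's a rough sketch of what that pipeline could look like in code. Every name in it (collectFrameFlashes, areaHasQuorum, and so on) is a placeholder, not the actual implementation:

#include <vector>

struct PixelFlashes { std::vector<int> frames; }; // frames where this pixel flashed

// placeholders for the Sample + Frame Flash phases and for the area search
std::vector<PixelFlashes> collectFrameFlashes();
bool areaHasQuorum(const std::vector<PixelFlashes>& flashes, int frame);

bool epilepticEventPossible(int numFrames)
{
    std::vector<PixelFlashes> flashes = collectFrameFlashes();

    int generalFlashCounter = 0;
    for (int frame = 0; frame < numFrames; ++frame) // Gather phase
        if (areaHasQuorum(flashes, frame)) // >= 25% of a 0.024 sr area flashed
            ++generalFlashCounter;

    return generalFlashCounter >= 3; // protection kicks in at this point
}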
.
In order to break everything up into multiple parts, I'll tackle each stage of the algorithm in order of what needs clarification the most. I'll cover the Frame Flash Detection phase here, and the Sample and Gather phases in another post.
.
Frame Flash Detection
.
So what we need to do is, given a histogram of luminance for a single pixel over the course of 1 second, we need to find every frame that contains a frame flash. However, the big problem is that, most of the time, flashes happen over the course of several frames. So, we need to find a way to not only find the flashes but also associate an entire flash with a single number.
Recall the definition of a frame flash, re-written here:
There is a decrease followed by an increase, or an increase followed by a decrease, in the histogram of luminance for a single pixel. Each increase and decrease satisfies the following:
The minimum luminance in the histogram over the time-span of the increase or decrease is less than 0.8
The amount of increase or decrease of luminance in the histogram must be at least 10% of the maximum luminance in the "range" of the increase or decrease

This definition took me 2 or 3 different tries to implement correctly, and I may still be wrong. I blame how cryptic the W3C spec is about this.
.
So this is going to be implemented exactly how it's stated. We are going to keep a record of all of the increases and decreases that each satisfy the above constraints, and then match pairs of adjacent increases and decreases. For an example, consider the histogram of luminance:
.


.
In order to find increases and decreases, we're going to look at every local extremum, compare it to the last sample we recorded (starting with the first sample), and see if the difference fulfills the above constraints. If it does, we record the result and use that sample as the one to compare future samples against. The code goes something like this:

enum transition { INCREASE, DECREASE };

int extrema[NUM_SAMPLES];
transition pairs[NUM_SAMPLES];
int numPairs = 0;

// the histogram is a ring buffer, so everything is indexed off sampleCounter
float lastExtrema = lumHistogram[(sampleCounter + 1) % NUM_SAMPLES];
for (int i = 1; i < NUM_SAMPLES - 1; ++i)
{
    float x_prev = lumHistogram[(sampleCounter + i) % NUM_SAMPLES];
    float x_curr = lumHistogram[(sampleCounter + i + 1) % NUM_SAMPLES];
    float x_next = lumHistogram[(sampleCounter + i + 2) % NUM_SAMPLES];

    float dx1 = x_curr - x_prev;
    float dx2 = x_next - x_curr;

    // only look at local extrema (a sign change in the slope)
    if ((dx1 < 0) == (dx2 > 0) && (dx1 != 0) && (dx2 != 0))
    {
        float dx = x_curr - lastExtrema;

        // the change must be at least 10% of the max luminance in its range
        if (0.1f * fmaxf(x_curr, lastExtrema) > fabsf(dx))
            continue;
        // the darker end of the transition must be below 0.8
        if (lastExtrema > 0.8f && x_curr > 0.8f)
            continue;

        lastExtrema = x_curr;

        extrema[numPairs] = i; // the "frame" this transition lands on
        pairs[numPairs] = (dx1 < 0) ? DECREASE : INCREASE;
        ++numPairs;
    }
}
.
The extrema[numPairs] = i; part is what stores the "frame" of each transition. When it comes time to look for adjacent increases and decreases, you can just pick one of the two "frames" and call that the exact frame of the flash, taking care of that particular problem.
.
Now that we have the frame flashes found, we need to store them in a way that makes the info easy for the next stage, the Gather stage, to use. I'm going to cover that next time, but the general way I'm doing it is by making a giant array of distributions, with each distribution corresponding to a single Gather stage chunk. The relevant part of the array is passed to the function so that each sampled pixel can contribute to it. This makes the Gather stage really simple.
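As a sketch of that layout (the names and the per-second sample count here are assumptions, not the real code):

const int NUM_SAMPLES = 60; // assumed samples per second

// one distribution per Gather-stage chunk: for each frame, how many of the
// chunk's sampled pixels experienced a frame flash on that frame
struct chunkDistribution
{
    int flashesPerFrame[NUM_SAMPLES];
};

// each sampled pixel adds its frame flashes to the chunk it belongs to
void contribute(chunkDistribution& chunk, const int* flashFrames, int numFlashes)
{
    for (int i = 0; i < numFlashes; ++i)
        ++chunk.flashesPerFrame[flashFrames[i]];
}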
.
The nice thing about this method of finding frame flashes is that it doesn't matter if you traverse backwards or forwards through the histogram; as long as you're consistent then it'll work well.
.
Conclusion
.
Next time I'll go over how the Gather and Sample stages work. Until then, I'll be continuously working on the project. There are still some to-do's, even with just the Frame Flash phase. Some of these include:
Multithreading each sampled pixel, or maybe porting to the GPU. I don't want to port to CUDA, because that technology isn't very universal.
Analyzing the pixels adjacent to the current pixel, so as to maybe combat things like grain filters and clouds triggering the detector.
More stability improvements. The program doesn't pick up on a lot of really obvious cases, especially when they happen in small areas, like epilepsy gifs on Google. I feel like if I could sample at a finer granularity, that problem would go away. This ties back into multithreading, because only with the extra throughput could I afford to process that finer granularity.
For the whole program, minimizing to tray and some other polishing touches. Hopefully these will get resolved in some way or another sometime soon. The current project will be hosted as soon as I'm confident that Nopelepsy v0.1.0 is okay to show the world.

Thanks for reading, and stay tuned for next time!

CulDeVu

 

Microfacet Importance Sampling: For Dummies

Despite how important a topic good importance sampling is in the area of global illumination, it's usually left out of all common-English conversations and tutorials about path tracing (with the exception of the Lambertian case, which is a good introduction to importance sampling). If you want to get into microfacet importance sampling, multiple importance sampling, or even, God forbid, Metropolis light transport, you're on your own. And probably for a good reason: at some point in a path tracing project, you WILL have to butt heads with the math. And believe me, the head that the math rears is ugly. Beautiful in design, but large and ugly all the same. I'm going to try to correct that, at least for microfacet importance sampling, and present just enough math to get across what *needs* to be understood to implement everything correctly, with minimal pain.

Choosing the Microfacet Terms

The Cook-Torrance microfacet BRDF for specular reflections comes in the form

$$ f(l, v) = \frac{F(v, h)\, D(h)\, G(l, v, h)}{4 (n \cdot l)(n \cdot v)} $$

where F is the Fresnel term, D is the microfacet distribution term, and G is the microfacet shadowing term. This is great and all, as it allows you to basically stop and swap different terms in as much as you want. Realtime graphics use that property to simplify a BRDF into something pretty but efficient, by mixing and matching models with terms that cancel out. There is danger in this, though. Choosing terms purely because they are less complex, and therefore more efficient, can lead to completely wrong results.

As a case in point, let me show the difference between the implicit Smith G and the Beckmann Smith G. For reference (with the Smith form G(l, v, h) = G1(l) G1(v) in both cases):

$$ G_1^{implicit}(v) = n \cdot v $$

$$ G_1^{beckmann}(v, m) \approx \chi^+\!\left(\frac{v \cdot m}{v \cdot n}\right) \begin{cases} \frac{3.535 a + 2.181 a^2}{1 + 2.276 a + 2.577 a^2} & a < 1.6 \\ 1 & a \ge 1.6 \end{cases}, \quad a = \frac{1}{\alpha_b \tan \theta_v} $$

For future reference, the Beckmann G1 listed is an approximation to the actual Beckmann shadowing term. The alpha_b is a tunable parameter that controls the roughness of the material; I usually just set it to the Cook-Torrance roughness value. These are the results, after importance sampling for 500 samples:

Top: Implicit G1, roughness 0.05, 500 samples of importance sampling
Bottom: Beckmann G1, roughness 0.05, 500 samples of importance sampling

As you can see, the implicit G suffers greatly for its simplicity. While it calculates color noticeably faster, it does so by ignoring the cases that need extra work to represent, like really low roughness values. It approximates the mid-range fairly well, but in doing so butchers the tails. So take this example to heart: for a path tracer, don't be afraid to sacrifice a little speed for correct color, especially because people use path tracers precisely because they're willing to wait for good images.

So. What terms should be chosen? Well, my personal favorite $$ D $$ and $$ G $$ are the Beckmann ones, so this post is going to be based off of those. A more complete description of the *whys* of all of this can be found here: https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf, but as titled, this is the "For Dummies" version, and re-stating everything found there would be a massive waste of time.

The Fresnel term is taken from Cook-Torrance:

$$ F = \frac{1}{2} \frac{(g - c)^2}{(g + c)^2} \left(1 + \frac{(c(g + c) - 1)^2}{(c(g - c) + 1)^2}\right), \quad c = v \cdot h, \quad g = \sqrt{\eta^2 + c^2 - 1} $$

The Distribution term is from Beckmann:

$$ D(m) = \chi^+(m \cdot n)\, \frac{e^{-\tan^2\theta_m / \alpha_b^2}}{\pi\, \alpha_b^2 \cos^4\theta_m} $$

The Geometry term is a little trickier. It involves the erf(x) function, which can be very expensive to calculate. There's an approximation of it, though, that's very good: the rational G1 shown above. The only things left to clarify are the alpha_b term, which is just the roughness of the material, and the chi^+ function, which enforces positivity:

$$ \chi^+(x) = \begin{cases} 1 & x > 0 \\ 0 & x \le 0 \end{cases} $$

Importance Sampling

Importance sampling the microfacet model shown above involves a lot of integrals. So many that I'll probably end up getting LaTeX-pinky from typing all the brackets. So I'm just going to give the short version. Because it'd be damn near impossible to importance sample the entire microfacet model with all of its terms together, the strategy is to importance sample the distribution term to get a microfacet normal, and then transform everything from vectors relative to the normal to vectors relative to the origin.

Given 2 uniform random variables, zeta1 and zeta2, generate a microfacet normal with the angles (relative to the surface normal):

$$ \theta_m = \arctan\!\left(\sqrt{-\alpha_b^2 \ln(1 - \zeta_1)}\right), \quad \phi_m = 2\pi \zeta_2 $$

Note how the phi_m term can spin freely around the normal. This is the power that sampling only the D function has. It follows, then, that by reversing the age-old vector-reflectance equation, you get the outbound (or inbound, or whatever) light ray:

$$ o = 2\,|i \cdot m|\, m - i $$

Now all that's left to do, with the exception of a few caveats that I will get to in a minute, is to define the probability distribution function for this setup:

$$ p(o) = \frac{D(m)\, |m \cdot n|}{4\, |o \cdot m|} $$
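In code, that sampling math boils down to just a few lines. Here's a sketch with my own naming (uniformRand() stands in for whatever RNG you use):

#include <cmath>
#include <cstdlib>

const double PI = 3.14159265358979323846;

double uniformRand() { return rand() / (double)RAND_MAX; } // stand-in RNG

// theta_m = arctan(sqrt(-alpha_b^2 ln(1 - zeta1))), phi_m = 2 pi zeta2
void sampleBeckmann(double alpha_b, double& theta_m, double& phi_m)
{
    double zeta1 = uniformRand(), zeta2 = uniformRand();
    theta_m = atan(sqrt(-alpha_b * alpha_b * log(1.0 - zeta1)));
    phi_m = 2.0 * PI * zeta2; // free to spin around the normal
}

// pdf of the reflected ray: p(o) = D(m) |m.n| / (4 |o.m|)
double reflectedPdf(double D_m, double m_dot_n, double o_dot_m)
{
    return D_m * fabs(m_dot_n) / (4.0 * fabs(o_dot_m));
}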
There! That's all that's needed! If all you want to do is render really shiny metals, at least. However, there's some unfinished business to attend to concerning what to do with the un-reflected light, the amount that the Fresnel term takes away. First, though, let's look at some images.

Intermediate Results

[Above: Sponza atrium with a floor with Cook-Torrance roughness of 0.001, 500 samples per pixel]

What's impressive about the above image is that that isn't a mirror BRDF. It's the microfacet BRDF described above, importance sampled. This amount of image clarity, for a floor with a roughness that small and at only 500 samples, is something that would otherwise only be achieved with a special case in the code for materials like that. With importance sampling, you get it all for free!

Extending it With Lambertian Diffuse

The Cook-Torrance BRDF is really nice and all, but it's only a specular BRDF after all. We need to fuse it with a Lambertian BRDF to finish everything up nicely. The form the BRDF takes after the fuse is:

$$ f(l, v) = \frac{F(v, h)\, D(h)\, G(l, v, h)}{4 (n \cdot l)(n \cdot v)} + (1 - F(v, h))\, \frac{\rho_d}{\pi} $$

This makes a lot of sense too. The Fresnel term describes the ratio between reflected and transmitted signals, and a diffuse reflection is just a really short-range refraction.

Now there's an issue, and it's a big one. The density function generated above describes the distribution of outbound rays based off of the D microfacet distribution. To convert between a microfacet distribution and an outbound ray distribution, it undergoes a change of basis via a Jacobian. That Jacobian is analytically derived from the microfacet BRDF definition itself, so to integrate the diffuse term into it (a term which has no microfacets but still changes the shape of the BRDF drastically) it'd have to be re-calculated. And this is a recurring problem. There are papers describing every way to couple this with that: some of them couple matte diffuse with microfacet specular, some couple transmissive plus specular, some specular with specular. At the end of the day, these are all specialized cases of different material properties coexisting on a surface. Until that's taken care of, the problem of mixing and matching BRDFs will never go away.

I don't know how other path tracers deal with this issue, but I won't stand for it. For my path tracer, I chose to go the way that Arnold does, by using the idea of infinitesimal layers of materials. Each layer of the material can, using the Fresnel function we already have, reflect an amount of light, and then transmit the rest down to the next layer. This happens for every layer until the light hits the final layer, either a diffuse or transmissive one, which uses up the rest of the remaining energy. Thus, the problem of coupling BRDFs is solved by thinking about the problem a little differently.

If you think about it, the above algorithm can be simplified using Russian roulette, and so the final algorithm becomes something like this:

[code=:0]
generate random number r1
for each layer in material:
    determine the new half-vector (or microfacet normal)
    f = Fresnel function using that half-vector
    if (r1 > f):
        r1 -= f
        continue
    calculate outbound ray
    generate BRDF for current layer and inbound and outbound rays
    generate pdf for outbound ray
    sample and accumulate as normal
    break
[/code]
The only thing left to do is get rid of the Fresnel term from the microfacet BRDF and the (1 - F) from in front of the Lambertian term: they won't be of any more use to us there, as that calculation is handled implicitly now. The Fresnel function now serves as a Russian roulette mechanism for switching between layers, and is no longer needed to scale the energy of the microfacet BRDF, as the D and G terms are perfectly capable of doing exactly that on their own.
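Here's the same roulette sketched in C++; the Layer type and the helper functions are made-up names for illustration, not my actual interfaces:

struct LayerSample { dvec3 outRay; dvec3 brdf; double pdf; };

LayerSample sampleLayeredMaterial(const std::vector<Layer>& layers, const dvec3& inRay)
{
    double r1 = uniformRand();
    for (const Layer& layer : layers)
    {
        dvec3 m = sampleHalfVector(layer, inRay); // importance-sampled microfacet normal
        double f = fresnel(layer, inRay, m);
        if (r1 > f) { r1 -= f; continue; } // transmit down to the next layer
        return evaluateLayer(layer, inRay, m); // outbound ray, BRDF, pdf
    }
    return sampleBottomLayer(layers.back(), inRay); // diffuse/transmissive floor
}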
One more pic to show it in action:

[Above: Floor with layered specular and diffuse. 500 samples per pixel]

That concludes my write-up of Microfacet Importance Sampling: For Dummies edition. With luck, hopefully implementing it will be as enjoyable for you as it was for me. Happy rendering, and may your dividends be forever non-zero! :D

CulDeVu

 

Mathematics of Origami: Flat folding

So. Back at this. First things first. You can safely disregard most everything I said last post. All of that was very observation-based, and is actually incorrect in almost every way. Which might seem strange, cause most of it made sense. For clarity and completeness, though, here's a list of why everything last post was wrong:
The idea that a fold is a collection of fold segments in order starts falling apart after you do certain folds, like petal folds. Keeping track of where they are and which way they're going would be way too hard, and you could conceivably connect everything under one fold, since all origami crease diagrams are Hamiltonian.
The idea that folds only reflect is another screwy one. It falls apart when you have more than 4 fold segments touching a vertex.
The fold tree was a nice idea, but constructions using it can only create a subset of all origami, so it's not what I'm looking for

Notable things that I'm keeping un-debunked:
Fold segments are straight lines that represent part of a fold. They come in 2 flavors: mountain and valley. The ends of the segments either end at an edge of the paper or at the end of another fold segment. These places of union are called vertexes and the graph of all fold segments on the paper, including the edges of the paper, is called a fold diagram
All flat folded origami is non-directed and complete
If you count the mountain and valley fold segments coming out of any vertex, the two counts always differ by two (this is known as Maekawa's theorem). I don't really know why it's like that, but it definitely is.

Aight. I think that's enough talking about the past. Math time.
.
Flat Folding
.
First of all: Flat folding. In a physical sense, this is a type of origami that can be folded flat, pressed between the pages of a book. So, for example, a paper crane without the wings.
.
So if you make a flat folded origami, and you take a pin and poke a hole completely through it, through every layer, and open it up, you'll see that there are multiple holes in the paper. In essence, many points on the origami paper map to the same point in 3D space. These points also map perfectly to reflections along the axes of the creases themselves. Here's a picture to illustrate:
.

.
[in this image, the red points all map to the same point in 3D space, and the green points all map to the same point as well]
.
Though that was probably pretty obvious. However, I discovered a couple of facts that are not so obvious, I think. The first is that, for flat folded origami, when you reflect a point around creases like in the above image, one by one, if the path you take ever ends up back in the face that you started with, you'll have the exact same point you started with. In other words, no matter what face you're viewing from, the original point will always be equal to itself, relative to its starting face. The second is that this property is not present in all fold diagrams. See the images below for further explanation.
.

[as you can see in this image, even though the point can take two very different paths back to the original face, both paths still end up at the same point again]
.

[in this example, however, the path that the point took back to the original face did not correspond to the same starting point. Accordingly, you cannot possibly fold a piece of paper along these lines]
.
Well, if only certain fold diagrams have this property, how do we tell which ones do? How do we prove correctness? It's hard to tell just by looking at all the folds without physically folding them yourself, so it's useful to look at a really simple case: paths going clockwise around a point where fold segments intersect. Because reflections around multiple edges rooted at the same point preserve radial distance [see image], you only have to keep track of the angles of the fold segments and the angle of v. In this case, the transformations around each fold segment, applied in order, should result in the same point, i.e. the same angle plus a multiple of 2 * pi.
.

.

.
From that, composing the reflections about creases at angles a_1, a_2, ..., a_n in order gives

$$ \theta' = 2(a_n - a_{n-1} + a_{n-2} - \dots) + (-1)^n\, \theta $$

and for the fold diagram to be valid we need $$ \theta' = \theta + 2\pi k $$. When n is even, the theta terms cancel and this reduces to the alternating-sum condition

$$ a_n - a_{n-1} + a_{n-2} - \dots + a_2 - a_1 = \pi k $$

The case of n being odd can be thrown out because, as you can see, it would only hold for specific angles (specific offsets epsilon from a_1), but the definition above states that it should be true for EVERY angle around the vertex.
.
(as a side note, this is a variant of Kawasaki's Theorem, but since I came up with it independently, along with everything else described here, I'm claiming all credit for my own variant)
.
This is a very nice equation. It allows us to solve for any missing angle if one is needed, and also acts as a really simple validator. You can check each vertex one by one with this equation to check the simple case of correctness-by-rotating-around. However, that is a very simple case. Paths can take any form, but they all need to exhibit the property listed above, not just paths spinning around vertexes.
.
However, each vertex is connected to the others by the very fold segments that we're rotating around. When you transfer a point from one vertex to another by connecting the two via a fold segment, the point doesn't change (it stays in the same face; only the origin changes). Since all fold diagrams are complete anyway, you can connect any vertex to any other by following the path from one to the next, rotating around each vertex as you go. Thus, a fold diagram can be proven flat foldable by simply checking that each vertex conforms to the above theorem.
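Here's a sketch of that per-vertex check; it assumes each vertex just stores its crease angles sorted over [0, 2*PI), like the struct further down:

#include <cmath>
#include <vector>

const double PI = 3.14159265358979323846;

// the alternating sum of the sector angles around an interior vertex must cancel
bool vertexFlatFoldable(const std::vector<double>& angles, double eps = 1e-9)
{
    int n = (int)angles.size();
    if (n % 2 != 0)
        return false; // the odd case was thrown out above

    double alternatingSum = 0;
    for (int i = 0; i < n; ++i)
    {
        double sector = angles[(i + 1) % n] - angles[i];
        if (sector < 0)
            sector += 2 * PI; // the last sector wraps around
        alternatingSum += (i % 2 == 0) ? sector : -sector;
    }
    return std::fabs(alternatingSum) < eps;
}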
.
As a caveat, this doesn't ensure that there are no self-intersections. I don't think there is a way to both achieve interactive framerates and also ensure no self-intersection, but I'll get back to you on that.
.
Yup. So that's that.
.
Implementation
.
So the implementation of this is fairly easy.
.
It's tempting to try to implement a flat folded origami interpreter by representing each fold segment. However, if you travel down this path, it leads to lots of trees connecting faces (which you have to build from the edges) and a lot of searching over spaces, and it's just not fun. Believe me, that was what my first implementation was like.
.
The way I'm now doing everything is by having a soup of vertexes, each of which has a vector of angles, sorted counter-clockwise, of all the edges connecting it to other vertexes. Finding the other side of a fold segment given one end requires a raycast by the fold segment's angle relative to the vertex. For faster use, this can be accelerated using nice structures, or, since the origami only changes rarely in our case, we can just spend the time to bake the connections into each vertex and then go about our merry way.
struct vertex
{
    dvec2 pt;
    vector<double> angles;        // sorted over [0, 2*PI)
    vector<int> bakedConnections; // vertex at the other end of each edge
    int angleToTraverseTree = 0;

    vertex(dvec2 p) { pt = p; }

    void sortAngles() { std::sort(angles.begin(), angles.end()); }

    void removeDuplicates()
    {
        // get rid of duplicates. im not worried about performance here
        for (int a = 1; a < (int)angles.size(); ++a)
        {
            if (angles[a] == angles[a - 1])
            {
                angles.erase(angles.begin() + a);
                --a;
            }
        }
    }
};
vector<vertex> verts;
.
Besides the manual labor of adding, and sometimes iteratively adding, angles to vertexes, there's only 1 more thing that's critical to the implementation: taking this structure and actually folding the paper. In order to do this, you first plot out the route any individual point will take across vertexes (because this only allows rotations and trading off of origins, which preserves the origami property). If you do this and follow each point in the piece of paper to the same vertex and the same face, you'll have the completed origami.
.

void creaseTraversalTree()
{
    vector<int> leftToProcess;

    for (vertex& v : verts)
        v.angleToTraverseTree = 0;

    // vertex 0 is the root; every other vertex still needs a parent
    for (int i = 1; i < (int)verts.size(); ++i)
        leftToProcess.push_back(i);

    while (!leftToProcess.empty())
    {
        vector<int>::iterator leftIter;
        for (leftIter = leftToProcess.begin(); leftIter != leftToProcess.end(); ++leftIter)
        {
            int vId = *leftIter;
            bool found = false;

            // look for an edge connecting this vertex to an already-processed one
            for (int angleId = 0; angleId < (int)verts[vId].bakedConnections.size(); ++angleId)
            {
                int closest = verts[vId].bakedConnections[angleId];
                vector<int>::iterator iter = std::find(leftToProcess.begin(), leftToProcess.end(), closest);
                if (iter == leftToProcess.end())
                {
                    verts[vId].angleToTraverseTree = angleId;
                    leftToProcess.erase(leftIter);
                    found = true;
                    break;
                }
            }
            if (found)
                break;
        }
    }
}
dvec2 fold(dvec2 ptOriginal)
{
    dvec2 pt = ptOriginal;
    int vId = getAnyVertexInFace(pt);

    double angle, rad;
    int angleAt = getBiggestAngleLessThan(vId, pt);

    while (1)
    {
        vertex& curV = verts[vId];

        // express the point as an angle and radius about the current vertex
        {
            dvec2 ptRelV = pt - curV.pt;
            angle = atan2(ptRelV.y, ptRelV.x);
            if (angle < 0)
                angle += 2 * PI;
            rad = length(ptRelV);
        }

        // reflect about every fold segment between here and the traversal-tree edge
        int shiftSign = (curV.angleToTraverseTree > angleAt) ? 1 : -1;
        for (int i = angleAt; i != curV.angleToTraverseTree; i += shiftSign)
        {
            int n = (int)curV.angles.size();
            int correctedI = ((i % n) + n) % n; // safe wrap for negative i
            angle = 2 * curV.angles[correctedI] - angle;
        }
        pt = rad * dvec2(cos(angle), sin(angle)) + curV.pt;

        if (vId == 0)
            break;

        assert(curV.bakedConnections.size() != 0);
        int newVert = curV.bakedConnections[curV.angleToTraverseTree];

        // find the new angleAt to preserve the rotation
        double angleToFind = rotClamp(curV.angles[curV.angleToTraverseTree], PI);
        int i;
        bool found = false;
        for (i = 0; i < (int)verts[newVert].angles.size(); ++i)
        {
            if (verts[newVert].angles[i] == angleToFind)
            {
                found = true;
                break;
            }
        }
        assert(found);

        angleAt = i;
        vId = newVert;
    }

    return pt;
}
.
Conclusion
.
So, here are some screenshots of everything in action:
.

[creases in a piece of paper with the corresponding folded points, generated as described above]
.

[the internal data structure of the above paper]
.

[a finished folded crane, without the folded wings]
.
Also, since I'm not that good at explaining things clearly, here's the link to the GitHub repo for the project. If you're interested in looking at it, it probably explains everything a bit more clearly, if you can get past the messy 1-file codebase.
.
Well, that should be everything! Next step: autocompleting origami, and maybe non-planar folds!
.
:D

CulDeVu

 

Origami and the associated maths

'ey.
.
So this is going to be another one of those write-it-out-so-that-I-can-better-understand-it sort of things. Just a heads up. This IS a journal, after all, and functions as advertisement second.
.
So. Origami and mathematics. We know the two are related because, if you fold a piece of paper the same way twice, you'll get two of the same shape, so there must be rules governing it. And some smart people have tried figuring out those rules. There's a bunch of them, but they all fall short for what I'm wanting to do. So here it goes:
.
Through research in 1992, and again in 2002, a total of 7 axioms for flat folding paper were laid out, detailing exactly which basic operations can be performed with paper, in much the same way that Euclid's axioms of geometry were laid out. This is basically how most of the research I've been able to turn up on the internet goes: interesting constructions that, using the axioms, prove something about something else.
.
What I'm wanting to do, however, is a lot different. I want to be able to take a collection of creases and their associated information and turn it into a model of the finished product. A kind of crease-instructions-to-finished-origami-model converter, if you will. To do this I've tried a couple of methods, like physics-based and relaxation/constraint-based, and they've failed for one reason or another. I've settled on doing something sort of finite-element based: given a point on the original piece of paper and all the folds, where will that point be located in 3D space?
.

(turn up for trackpad art)
.
Unfortunately, there doesn't seem to be any information on the topic, so I guess I'll have to do it. Which is good, because I really wanted an interesting problem to work on. The rest of the entry will be about the stuff I've concluded so far, and the questions I have.
.
Types of folds
.
Folds are what happens when you crease the paper in a straight line, and either fold the other side down or turn it some arbitrary angle around the fold. I'll be using the terms "fold" and "crease" interchangeably.
.
Folds can be created that do one of two things:
start at one edge of a piece of paper and end at another
loop back in on itself

(a fold can also start and stop on the same point on the edge of a piece of paper, but I'll ignore that case and say it's case #1)
.

.
As you can see in the above examples, folds can be segmented into multiple parts going different directions, but are still connected end to end. I'll call these fold segments. They come in 2 flavors:
mountain
valley

These types mean essentially nothing by themselves. They represent a direction to fold relative to your viewing angle. What's important is that they are opposites, and reversing every single fold segment will result in the exact same origami (this is equivalent to turning the piece of paper over). In this journal, all mountain folds are shown as dotted lines in red, and all valley folds are shown as dashed lines in blue, unless I'm trying to point something out.
.
Things you can do with folds
.
Origami is all about layering paper folds, one after another, in order, to create something. The "in order" part is important. You can't create the head and tail of a paper crane (the last steps) before you make all the other folds. However, folds are not always dependent on all of the previous ones. Sometimes folds are made purely to get a reference point for some other fold later on down the line. Some folds cancel out the symmetry of previous folds (more on that later). Further, some steps can be done in parallel, like the 2 wings of a paper plane, because they are independent of each other.
.
When you make a fold, sometimes the mountain fold that you thought you made is actually a bunch of mountain-valley segments. To clear this up, I'll call the starting fold segment the fold seed.
.

(the green fold segment is the fold seed. The purple fold segments were calculated from other information about symmetry and dependency)
.
There are 4 types of fold seeds that I'll support for right now. They are:
mountain
valley
inside-reverse (more on this one later)
outside-reverse (the opposite of inside-reverse)


(from left to right: mountain, valley, inside-reverse, outside-reverse. The green is the seed, and this shows what happens when each fold interacts with a single dependent)
.
I chose these specifically because these are the types of fold seeds that are needed (with the exception of #4) to create a paper crane. I figured that if I create an origami interpreter that can't interpret the quintessential piece of origami, I've failed. The first 2 are really obvious, and the latter two I'm saving for a later time. There are certain issues with them that make them more complicated.
.
You've likely noticed that in all of the pictures you've seen so far there's a lot of reflection about lines going on. I'll call this reflecting about lines "dependency," because of its ordering properties. When a fold segment is dependent on another fold segment, it will reflect about the other line and change from mountain to valley or vice versa. Here's a pic:
.

.
An important thing to note about this dependency is that the line that is being reflected about *has* to be done first. The only exception to the rule is fold segments at 90 degree angles on the paper, which can be done interchangeably (which I think says something very interesting about these types of folds).
.
All of the above info has led me to use a tree to talk about origami folds. The parent links of the tree represent dependency, while the nodes represent fold seeds.
.

.
For mountain and valley fold seeds, this representation encompasses everything that needs to be said about an origami: from parallelism to dependency to ordering to direction.
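As a sketch, the tree I have in mind looks something like this (the names are just for illustration):

#include <vector>

enum seedType { MOUNTAIN, VALLEY, INSIDE_REVERSE, OUTSIDE_REVERSE };

struct foldNode
{
    seedType seed;                   // the fold seed for this step
    std::vector<foldNode*> children; // folds that reflect about (depend on) this one
};

// siblings that aren't dependent on each other can be folded in parallel,
// or in any order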
.
Other fun facts
.
Here are just some neat things that happen to be true, that are essential for a complete implementation of an origami interpreter, but aren't essential for a complete understanding of what's happening. In no particular order:
Everything talked about above has to do with planar folds, folds that create something that ends up lying flat (180 degree folds). Non-planar folds act the exact same way but have an additional constraint: they can have no children in the dependency tree.
For planar folds, at every vertex where fold segments intersect, the number of valley fold segments and the number of mountain ones always differ by 2. For vertexes where fold segments and paper edges intersect, all bets are off. For non-planar fold segments, you can treat them like paper edges for the purpose of counting mountain-valley folds. This is an easy way to tell whether a fold is valid or not.
For mountain and valley fold seeds only, if a fold seed is dependent on a parent, which is dependent on a grandparent, then the fold seed is also dependent on the grandparent. This is not true for inside-reverse or outside-reverse folds.
Folds that do not touch their parent fold in the dependency tree can be turned into 2 separate folds without any issues. For non-planar folds, it gets a bit more complicated. This is the only time self-intersection of paper is a problem, and I think I can safely ignore this case as it's fairly easy to work around. I think.
Nodes on the same layer of the dependency tree that are not dependent on each other can be done in parallel, or in any order.
Each node on the dependency tree must be unique, otherwise it'd be kind of hard to do anything with it or conclude anything from it.

Yup.
.
Well that's about it. That's all I'm really semi-confident about at the moment. There are a bunch of things I still have to look into and figure out before all is said and done though.
.
Tune in next time to find out what any of this has to do with finite element transformations, what the deal is with inside- and outside-reverse fold seeds and why they throw a kink in everything, and how any of this will help creating a fold to origami interpreter.
.
See yah!

CulDeVu

 

Fluid Dynamics update

Oioioioioioioioioioioioi
.
I'm not really going to talk about much math-y stuff this time. I've covered too much ground since my last entry to be able to remember all the pitfalls I ran into. So this time I'm only really going to be talking about the general algorithm, how I deal with the pitfalls of the marker particle method, and the ongoing translation to CUDA. And probably share some of my Mario Maker levels. I'm addicted to stars.
.
So! Let's do this!
.
[media] [/media]
.
Sorry for the video being kinda shaky. OBS is having a hard time capturing details and YouTube thinks everything ever is a camcorder, so it looks a little strange. It gets the idea across, though. Note that there is still no surface tension, which would account for just about everything else strange-looking in the simulation.
.
The General Algorithm
.
So the algorithm for simulating a Eulerian fluid goes as follows:

init particles
init velocity field, type field, etc
for each frame:
    apply external forces to velocity field
    clear type field
    update particle positions and apply to type field
    extrapolate the velocity field
    find particles to re-position
    re-position those particles
    build pressure matrices
    solve the pressure equation
    apply the pressure to the velocity field
    enforce boundary conditions
    advect the velocity field
.
The velocity field is the standard MAC staggered grid, cause why not, and because it's useful for pressure computation and velocity extrapolation. The extrapolation and pressure step are straight out of academic papers so it's standard stuff.
.
The marker particles are the part I don't really know about. I just advect the marker particles through the velocity field like anything else, and wherever they end up defines where the fluid does or doesn't exist. This is pretty much the simplest method of defining the interior vs exterior of a fluid, so it has a lot of pitfalls, but I'll get to those in a minute. The issue is, though, that hardly anyone talks about this method (because of the many, many issues it has); everyone uses something called level sets instead. I've tried implementing level sets several times, and marker particles are just so much simpler in every way.
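As a sketch, the "clear type field / apply particles to type field" steps of the loop above look something like this (the AIR cell type and the particle struct are assumed names):

// clear the type field, then stamp WATER wherever a particle landed
void applyParticlesToTypeField()
{
    for (int i = 0; i < mapW * mapH; ++i)
        if (type[i] != SOLID)
            type[i] = AIR;

    for (const particle& p : particles)
    {
        int x = (int)p.pos.x, y = (int)p.pos.y;
        if (type[y * mapW + x] != SOLID)
            type[y * mapW + x] = WATER;
    }
}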
.
Marker Particles
.
So the biggest pitfall of marker particles is that, due to numerical inaccuracy and a bunch of other issues, they tend to bunch up a lot in certain places. Namely, places where the divergence of the velocity field is still positive even after the incompressibility pressure solve. You'd think that, since the divergence is positive in some places, it'd be negative in others, cancelling out overall, but it's not. The fact that gravity is always pulling down on the fluid makes that untrue, so what ends up happening is a net loss of volume, from full to nearly 0, in a couple of minutes.
.
So, instead of using level sets like a sane person, I decided to force the fluid to conserve volume. The way I did this is pretty straightforward, and it's the "find/re-position particles" part of the above algorithm. Basically,
.
1) iterate over every particle on the grid and see if there are already more than a certain number of particles (max density, if you will) in its grid cell. If there are, I mark the extras to be re-positioned
2) iterate over every grid cell and see if there are fewer than some other number of particles (min density) in it, and if there are, start pulling from the re-position pool and fill in those spots
.
This is rather finicky in practice. For example, if I set minDensity equal to maxDensity, you see a lot of artifacts from there not being enough particles in the re-position pool to accommodate (because of the overall positive divergence I mentioned). A happy medium, I found, was setting maxDensity to anywhere between 2 and 3 times the minDensity. Sure, this DOES lead to volume loss by a factor of whatever multiple you choose, but it's much better than having visually disgusting simulations.
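In sketch form, the two passes look like this (cellParticles and moveParticleToCell are stand-in names, not the real code):

std::vector<int> repositionPool;

void repositionParticles()
{
    // pass 1: cells over maxDensity donate their excess to the pool
    for (int cell = 0; cell < mapW * mapH; ++cell)
        while ((int)cellParticles[cell].size() > maxDensity)
        {
            repositionPool.push_back(cellParticles[cell].back());
            cellParticles[cell].pop_back();
        }

    // pass 2: cells under minDensity pull from the pool until it runs dry
    for (int cell = 0; cell < mapW * mapH; ++cell)
        while ((int)cellParticles[cell].size() < minDensity && !repositionPool.empty())
        {
            int id = repositionPool.back();
            repositionPool.pop_back();
            moveParticleToCell(id, cell);
            cellParticles[cell].push_back(id);
        }
}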
.

.
To be fair, the simulation without re-positioning looks a lot more fluid and water-like. However, conserving volume is too important to be given up. Oh well.
.
CUDA
.
I've been translating small pieces of the simulation to use the GPGPU via CUDA. I've gotten the "update particle" portion completely ported to the GPU, which is just a simple advection and setting type fields. The really neat one is the pressure solver.
.
In order to find the pressure in each cell, long story short, you need to construct and solve a linear system A p = b. Each row corresponds to one water cell: the diagonal entry counts that cell's non-solid neighbors, every neighboring water cell contributes an off-diagonal entry, and the right-hand side is the (scaled) divergence of the velocity field in that cell. On the CPU I used Gauss-Seidel to solve it, but I have no idea how to do that on the GPU. Luckily, there's a library called cusp that implemented everything for me!
struct cuspTriple
{
    int row, col;
    float amount;
};

cusp::array1d<float, cusp::device_memory> pressure(mapW * mapH);

void project()
{
    // right-hand side: (rho / dt) times the negative divergence of each cell
    cusp::array1d<float, cusp::device_memory> b(mapW * mapH);
    {
        float scale = rho / dt;
        for (int y = 0; y < mapH; ++y)
            for (int x = 0; x < mapW; ++x)
                b[y * mapW + x] = -scale * (u->at(x + 1, y) - u->at(x, y) +
                                            v->at(x, y + 1) - v->at(x, y));
    }

    // build the sparse matrix: one row per water cell, the diagonal counts
    // non-solid neighbors, and each neighboring water cell contributes -1
    vector<cuspTriple> data;
    for (int y = 0; y < mapH; ++y)
    {
        for (int x = 0; x < mapW; ++x)
        {
            if (type[y * mapW + x] != WATER)
                continue;

            int idx = y * mapW + x;
            int n = 0;

            if (x > 0 && type[idx - 1] != SOLID)
            {
                if (type[idx - 1] == WATER)
                    data.push_back({ idx, idx - 1, -1 });
                ++n;
            }
            if (y > 0 && type[idx - mapW] != SOLID)
            {
                if (type[idx - mapW] == WATER)
                    data.push_back({ idx, idx - mapW, -1 });
                ++n;
            }
            if (x < mapW - 1 && type[idx + 1] != SOLID)
            {
                if (type[idx + 1] == WATER)
                    data.push_back({ idx, idx + 1, -1 });
                ++n;
            }
            if (y < mapH - 1 && type[idx + mapW] != SOLID)
            {
                if (type[idx + mapW] == WATER)
                    data.push_back({ idx, idx + mapW, -1 });
                ++n;
            }

            data.push_back({ idx, idx, (float)n }); // diagonal entry
        }
    }

    cusp::coo_matrix<int, float, cusp::device_memory> A(mapW * mapH, mapW * mapH, data.size());
    for (int i = 0; i < (int)data.size(); ++i)
    {
        A.row_indices[i]    = data[i].row;
        A.column_indices[i] = data[i].col;
        A.values[i]         = data[i].amount;
    }

    // preconditioned conjugate gradient: at most 600 iterations, 1% tolerance
    cusp::default_monitor<float> monitor(b, 600, 0.01, 0);
    cusp::precond::diagonal<float, cusp::device_memory> M(A);
    cusp::krylov::cg(A, pressure, b, monitor, M);
}

void applyPressure()
{
    float scale = dt / rho;
    for (int y = 0; y < mapH; ++y)
    {
        for (int x = 0; x < mapW; ++x)
        {
            if (type[y * mapW + x] != WATER)
                continue;

            float p = pressure[y * mapW + x];
            u->at(x, y)     -= scale * p;
            u->at(x + 1, y) += scale * p;
            v->at(x, y)     -= scale * p;
            v->at(x, y + 1) += scale * p;
        }
    }
}
.
This is the version of the GPU solver I have right now. It has a ton of theoretical std::vector overhead (I don't know how well the CUDA compiler manages to optimize that out), so the next thing I'm going to test is whether or not over-approximating the number of non-zero elements in the sparse matrix will still be efficient.
.
Also, the GPU version is around 3-4 times faster than the CPU version! Now, on the other hand, it's only 3 or 4 times faster than the CPU version. That's with copying data back and forth from the GPU, so I give it a little bit of a break, but not much. I'm extremely confident that I could easily squeeze much more efficiency out of it, so that will have to be something I'll deal with at a later date, after I port a bit more to the GPU.
.
In short, kill me now.
.
Conclusion
.
Could someone please give me stars in Super Mario Maker? I'm addicted and I need all you have. One of my levels' ID is 04B7-0000-0069-DB69 and from there you should be able to access the 3 levels I've uploaded. They have .
Anyway, thanks for reading!

CulDeVu

 

Fluid Dynamics: Advection

Heyy all!
.
First off, congrats to all the WoA participants. I didn't have the time to do it this year, but I'll try again for it the next go around ;)
.
So, in my last little let-me-figure-everything-out-by-writing-stuff-down, I got some stuff wrong, and I got some stuff misleading. Minor things, mostly:
The last little bit I said about density. Scratch all of that, I'm just an idiot.
Some of the stuff I said about the actual equation was misleading. No big-deal items were incorrect, though.

So fluid simulation is a large topic, even though it's governed by only a handful of really intuitive equations. So I'm going to break everything up into pieces. This time, I'm going to be talking about the simplest piece of the equation: advection.
.
Two ways to represent a fluid
.
There are many ways to represent a fluid, but they all fall into one of two categories, or are a hybrid of the two. I discussed them last time, so here they are:
Lagrangian: fluids are made up of a ton of particles/molecules that interact with one another. This is probably the most intuitive way to visualize a fluid
Eulerian: fluids are partitioned up into a bunch of little cells that have properties like temperature, velocity, etc. This is probably the most GPU efficient way to visualize a fluid.

There's a really nice connection between the two as well:
.
$$ \frac{Dq}{Dt} = \frac{\partial q}{\partial t} + (u \cdot \nabla) q $$
.
The term on the left side is called the Lagrangian derivative (along with a bunch of other things), and the right side looks very much like part of the original Navier Stokes equation that was reviewed last entry.
.
So what does it mean? Well, imagine a 1D fluid, with a temperature T(x) at each particle, with a constant speed at each particle, and a thermometer placed in the middle at some point:
.

.
The big thing to notice here is that, even IF the individual particles don't change their own temperatures, the thermometer will still feel a change in temperature of the fluid over time. This is because, as different parts of the fluid move, they carry the quantities such as temperature, velocity, etc with them. You can write the situation that was just described as such:
.
$$ \frac{DT}{Dt} = \frac{\partial T}{\partial t} + u\, \frac{\partial T}{\partial x} $$
.
The first term, DT/Dt, is 0, because the individual particles don't change temperature (hence why it's called the Lagrangian derivative). Rearranging terms gets you:
.
$$ \frac{\partial T}{\partial t} = -u\, \frac{\partial T}{\partial x} $$
.
, which tells you precisely how the thermometer is affected by the flow of the fluid. For example, if the fluid moves at u = 2 units/s and the temperature drops by 0.5 degrees per unit along x, the thermometer reads a rise of -(2)(-0.5) = 1 degree per second.
.
We'll take this idea and apply it to the Navier Stokes equation, and drop the viscosity term while we're at it (it's rather annoying):
.
$$ \frac{Du}{Dt} = \frac{\partial u}{\partial t} + (u \cdot \nabla) u = -\frac{1}{\rho} \nabla p + g $$
.
This is a much cleaner and simpler form, and it also encompasses 2 different ways to view a fluid: as a group of particles, and as a bunch of thermometers (or really any tool for taking data, like speedometers or barometers) monitoring the current state of the fluid.
.
Advection
.
The problem I'm going to talk about here is an even simpler form of the above equation:
.
$$ \frac{Dq}{Dt} = 0, \quad \text{i.e.} \quad \frac{\partial q}{\partial t} = -(u \cdot \nabla) q $$
.
This is a pure advection equation. It says that some quantity in the fluid is carried along by the fluid itself, and isn't affected by anything else (like pressure or gravity or stuff). We'll look at some of the simplest ways to solve this relationship.
.
To demonstrate the different techniques, I'll be squirting ink into the velocity field and letting the fluid advect it around, because that's the simplest way to show the differences between the methods.
.

[a comparison of different techniques. My bet is that Eulerian is going places]
.
It should be abundantly clear that Eulerian advection performs far worse than the semi-Lagrangian versions. This is due to the nature of the problem itself: the Eulerian viewpoint involves a gradient operator, which is problematic, as we will see. The semi-Lagrangian techniques are based on interpreting the field q as a bunch of particles that we can advect simply by moving them.
.
Eulerian Advection
.
Eulerian advection involves directly solving the equation:
.
$$ \frac{\partial q}{\partial t} = -(u \cdot \nabla) q $$
.
Here's the code snippet for naively solving this thing:

template <typename T>
void advectEulerian(T q[cellW][cellH], T q_next[cellW][cellH])
{
    for (int y = 0; y < cellH; ++y)
    {
        for (int x = 0; x < cellW; ++x)
        {
            // skip the border so the central differences stay in bounds
            if (x < 1 || x > cellW - 2 || y < 1 || y > cellH - 2)
                continue;

            q_next[x][y] = q[x][y];

            // central differences for the gradient of q
            T dq_dx = (q[x + 1][y] - q[x - 1][y]) / 2.f;
            T dq_dy = (q[x][y + 1] - q[x][y - 1]) / 2.f;

            // dq/dt = -(u . del) q
            T u_dot_del_q = vel[x][y].x * dq_dx + vel[x][y].y * dq_dy;
            T dq_dt = -u_dot_del_q;

            q_next[x][y] += dq_dt * dt;
        }
    }
}
There's nothing fancy here. It's literally evaluating the gradient and then storing the result in a new buffer. There ARE better ways to do this (or so I've been told): instead of aligning the velocity components to the grid centers, align them to the edges of the cells. This avoids the rippling you see, because there's no more null space.
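For reference, here's a sketch of that edge-aligned (staggered, or "MAC") layout; the arrays and helper here are illustrative, not code from my simulator:

struct vec2f { float x, y; };

float u[cellW + 1][cellH]; // x-velocity lives on vertical cell faces
float v[cellW][cellH + 1]; // y-velocity lives on horizontal cell faces

// the velocity at a cell center is the average of its two opposing faces
vec2f velocityAtCenter(int x, int y)
{
    vec2f out;
    out.x = 0.5f * (u[x][y] + u[x + 1][y]);
    out.y = 0.5f * (v[x][y] + v[x][y + 1]);
    return out;
}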
.
Here's an animation of q being ink squirted into the liquid. I don't have a video for q being the velocity buffer, though, because it blows up in about 3 seconds.
[media][/media]
.
Forward Semi-Lagrangian
.
This technique and the next one are, again, exactly what they sound like. You treat the cell like a particle in the Lagrangian sense, and advect it forward by the current velocity, not caring at all about how everything is changing around it. Here's the code snippet for that (it assumes q_next was cleared to T() beforehand):

template <typename T>
void advectForwardLagrangian(T q[cellW][cellH], T q_next[cellW][cellH])
{
    for (int y = 0; y < cellH; ++y)
    {
        for (int x = 0; x < cellW; ++x)
        {
            // push this cell's contents forward along the velocity field
            float xf = x + vel[x][y].x * dt;
            float yf = y + vel[x][y].y * dt;

            int xt = (int)xf;
            int yt = (int)yf;
            if (xt < 0 || xt > cellW - 2 || yt < 0 || yt > cellH - 2)
                continue;

            // bilinear weights for the four destination cells
            float fx = xf - xt;
            float fy = yf - yt;
            float ll = (1 - fx) * (1 - fy);
            float hl = fx * (1 - fy);
            float lh = (1 - fx) * fy;
            float hh = fx * fy;

            q_next[xt][yt]         += ll * q[x][y];
            q_next[xt + 1][yt]     += hl * q[x][y];
            q_next[xt][yt + 1]     += lh * q[x][y];
            q_next[xt + 1][yt + 1] += hh * q[x][y];
        }
    }
}
Again, really simple. The contents of q in each cell are pushed forward by the cell's velocity and bilinearly distributed among the four destination cells. As you can see, this looks a LOT smoother. Unfortunately, this method suffers from something called numerical diffusion, which I might talk about at a later date. Here is an animation of that forward semi-Lagrangian action:
.
[media][/media]
.
Backwards Semi-Lagrangian
.
So it turns out that, in theory, forward semi-Lagrangian is unstable if Δt > Δx / max(vel). I personally haven't had any issues, but whatever, I'm not going to start doubting math now. Fortunately there's a simple solution: just find which particle WILL BE at each grid position after advection, and trace backwards instead. This also has the added bonus of being ridiculously simple to do on the GPU, so that's a plus as well. Here's a code sample:

template <typename T>
void advectBackwardsLagrangian(T q[cellW][cellH], T q_next[cellW][cellH])
{
    for (int y = 0; y < cellH; ++y)
    {
        for (int x = 0; x < cellW; ++x)
        {
            // trace backwards: find the point that lands on this cell
            float xf = x - vel[x][y].x * dt;
            float yf = y - vel[x][y].y * dt;

            int xt = (int)xf;
            int yt = (int)yf;
            if (xt < 0 || xt > cellW - 2 || yt < 0 || yt > cellH - 2)
            {
                q_next[x][y] = T();
                continue;
            }

            // bilinear weights for the four source cells
            float fx = xf - xt;
            float fy = yf - yt;
            float ll = (1 - fx) * (1 - fy);
            float hl = fx * (1 - fy);
            float lh = (1 - fx) * fy;
            float hh = fx * fy;

            q_next[x][y]  = ll * q[xt][yt];
            q_next[x][y] += hl * q[xt + 1][yt];
            q_next[x][y] += lh * q[xt][yt + 1];
            q_next[x][y] += hh * q[xt + 1][yt + 1];
        }
    }
}
This looks very much like the forward semi-Lagrangian version. The only differences are that there's a minus where a plus used to be, and that you're now writing to the current cell instead of 4 different cells. Looking at the animations, there seems to be just about no difference, in this case, from forward semi-Lagrangian. The only strange things happen near the edges, where backward semi-Lagrangian predicts incorrectly. Here's an animation:
.
[media][/media]
.
Yup
.
Well, that's that. That pretty much covers advection. Normally, you don't just advect ink or temperature; you also advect the velocity of the fluid itself. This is a little confusing to visualize, but you can think of it as the fluid carrying itself along itself, if that makes sense. I didn't show it this time because it looks unimpressive when the fluid isn't incompressible.
.
Welp, I hope those vids are pretty, and that I can actually get some cooler stuff done. Well, that's all for now, this one is getting lengthy as it is. Oh well
.
Thanks for reading!

CulDeVu

 

Trying my hand at fluid simulation

Aight, let's roll.
.
So fluid simulation is one of those things I've been meaning to get around to for a long time now. I've made simple particle simulations before but nothing that could simulate an actual fluid. Well, here's my opportunity!
.
The biggest issue I've had over the last couple of days is the amount of information on the subject. There are metric tons of scientific, mathematical, and computational papers out there on the science/math/simulation of fluids of all kinds. And if you're looking for a smooth introduction, almost every beginner's resource I've found says something a little different. For example:
GPUGems: a little daunting. I'm sure it's fantastic but holy hell just walk me through this stuff slowly please
Stam's GDC03 paper: completely discounts pressure terms, and forces the fluid to be "incompressible." Also, apparently the algos in the paper are copyrighted, so there's that. Also, this is the only source I've seen that talks about a Navier-Stokes equation for the density of a fluid field as well as one for a velocity field.
Cowboy Programming: bases itself off of Stam's paper, but makes modifications. Also points out a lot of issues with Stam's method
Wikipedia page for the Navier-Stokes equations: Holy sweet mother of Jesus, make the equations stop.

Okay. So let me see if I can make sense of everything. I'll write everything down as I understand it, and hope that I don't butcher everything in the process. I'm going to use this as a sort of documentation for myself, or as a way to kind of re-explain everything to myself and hope that it makes sense.
.
Let's get to it!
.
Compressibility of Fluids
.
So there are 2 ways to represent a fluid: as a compressible or an incompressible flow.
.
A compressible fluid is one where changes in pressure or temperature result in a change of density. All real-world fluids are compressible to some extent, but some don't exhibit it that much. Fluids that react under a certain threshold to changes in pressure or temperature (like liquid water between freezing and boiling) can be approximated as an incompressible fluid. Essentially, the density of the fluid doesn't change as it moves through the fluid:
.
$$ \frac{D\rho}{Dt} = 0 $$
.
The Navier-Stokes Equations
.
The Navier-Stokes equations were formulated by 2 guys whose last names were Navier and Stokes (who'd have guessed it?). A bunch of other hotshot names like Maxwell and Poisson had a part in it too, from what I understand, but nobody cares, least of all me.
.
Now without further ado, here's the Navier-Stokes equation for incompressible flow:
.
$$ \frac{\partial u}{\partial t} + (u \cdot \nabla) u - \nu \nabla^2 u = -\frac{1}{\rho} \nabla p + g $$
.
where u is the velocity field, p is the pressure field, rho is the uniform density, and g denotes all external forces, like gravity or magnetism. So, a little explanation:
.
That leftmost differential is really what we're interested in.
.
The next term to the right is the advection term (or convection, depending on who's talking). In the same way that the velocity field moves particles and densities through a fluid, the velocity also transports itself over the velocity field. This is called self-advection and is how water currents work.
.
The next term is the diffusion term. The ν in it is the kinematic viscosity, and it's a constant. This term talks about a fluid's aversion to movement. Fluids like molasses are extremely viscous, and as a result don't slosh around as much as, say, water. This is because the velocity field softens out or "diffuses" over the entire fluid at a much faster rate, which is why it's called the diffusion term.
.
The next term, the first one on the other side of the equals sign, is the pressure term. This is the term that talks about how areas with higher density in a fluid tend to push away and go somewhere where there's less pressure. This is the term that allows us to make simulations of water, because without it the entire fluid would lie in the bottom of the cup and ignore the pressure. Okay, that's a lie, the incompressibility constraint should take care of that, but that's the gist of it anyways.
.
There's also another constraint placed on the equation above:
.
$$ \nabla \cdot u = 0 $$
.
This is basically saying that the amount of flow entering a region is equal to the amount of flow leaving it. This is one of those things that's necessary for an incompressible fluid. Notice I said "amount of flow entering" instead of "amount of fluid." Imagine a water particle on the surface of a liquid. On one side it's being pressed on by a ton of air, and on the other side it's being pressed on by an even greater density of water. However, despite the amount of fluid contesting for the spot, the above constraint says that as long as the flows are opposite and opposing, the fluid particle will keep doing what it's doing.
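To make that concrete, here's the discrete version of the divergence on a simple cell-centered grid (a sketch; vel is a hypothetical 2D velocity array):

// del . u at cell (x, y), via central differences
float divergence(int x, int y)
{
    float du_dx = (vel[x + 1][y].x - vel[x - 1][y].x) / 2.f;
    float dv_dy = (vel[x][y + 1].y - vel[x][y - 1].y) / 2.f;
    return du_dx + dv_dy; // incompressibility wants this to be zero everywhere
}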
.
So yeah. Pretty simple all around. I've seen a lot of variations of this thing, though. For example, Stam's GDC03 paper completely chucks the pressure term out the window, and also gives a formula for the flow of density through a fluid. A lot of other places use the compressible form of the Navier-Stokes equation, and some are just like "eh, Gaussian blurs and pressure look good to me."
.
Simulating a Fluid
.
So there's 2 really simple ways to simulate a fluid. Well, by "simple" I mean that they're ideas that arise naturally from the definition of the Navier-Stokes equation. When you simulate something, you should always try to layout data in a way that makes the most sense and reflects the nature of the system you're trying to simulate. In this case, 2 main ideas pop out instantly:
Fluid simulation on a discrete grid: this is where you try to approximate a liquid with a bunch of velocity cells in a 2D or 3D array. This reflects the del and Laplace operators that you see in the equation very well, because it's easy to calculate those on a discrete field
Fluid simulation with a bunch of particles: this is where you have a bunch of particles that all behave and interact in such a way that the result ends up looking right. This is how fluids work in the real world (fluids aren't actually smooth fields like the mathematics suggests), just at an incredibly small scale. This type reflects the du/dt term in the equation well, because it's easy to give a particle velocity and then integrate its position

These are really your options. I mean, you *could* be all edgy and store a pressure field, but that would be strange, and it's also not as easy to advect/convect a pressure field. You could store all sorts of things, but these are the main 2 ways.
.
I'm going to go with the first method (mainly because I tried the 2nd option first and it barely worked; it also requires a lot of N-body-simulation-style optimizations to get working fast, because it has a lot of distance functions flying everywhere).
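.
To make the grid option concrete, here's roughly what one advection step looks like. This is a minimal sketch in the spirit of Stam's GDC03 solver, not code from my project; the grid size and layout are assumptions:

#include <algorithm>

// Semi-Lagrangian advection on an N x N grid (a sketch, not production code).
// vel is the velocity field; src is the per-cell quantity being advected
// (to self-advect, pass a component of the velocity itself). dst must not alias src.
const int N = 64;
struct Vec2 { float x, y; };

int idx(int x, int y) { return y * N + x; }

void advect(const Vec2* vel, const float* src, float* dst, float dt)
{
    for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x)
    {
        // trace the particle that lands on this cell backwards through the velocity field
        float px = std::clamp(x - dt * vel[idx(x, y)].x, 0.5f, N - 1.5f);
        float py = std::clamp(y - dt * vel[idx(x, y)].y, 0.5f, N - 1.5f);

        // bilinearly interpolate the 4 cells around the traced-back position
        int x0 = (int)px, y0 = (int)py;
        float fx = px - x0, fy = py - y0;
        float top = src[idx(x0, y0    )] * (1 - fx) + src[idx(x0 + 1, y0    )] * fx;
        float bot = src[idx(x0, y0 + 1)] * (1 - fx) + src[idx(x0 + 1, y0 + 1)] * fx;
        dst[idx(x, y)] = top * (1 - fy) + bot * fy;
    }
}

The backward trace is the whole trick: instead of pushing values forward (which leaks mass between cells), each cell asks "what value would have flowed into me?", which stays stable at much larger time steps.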
.
So there are 2 things you need if you're going in this direction: a velocity grid, and something to figure out the pressure for the pressure gradient portion of the equation. This is where funky things start happening.
.
Even though we're modeling an incompressible fluid, where density doesn't change, we're still going to store a density field and operate on it as if it did. I'm not entirely sure of the reason for this. There are a couple of possibilities I'm considering:
Theory 1: we're treating the Navier-Stokes equation as something that tries to turn an incorrect fluid into a more correct fluid. The pressure term and diffusion term might be self-correcting in a way. Pure speculation, but it's a guess.
Theory 2: The Ideal Gas Law talks about the relationship between pressure and temperature and density and a bunch of other things that don't really matter. Basically, it says that if volume and temperature and all of that other unimportant stuff remain constant, then the pressure of the fluid varies directly with its density. So I'm thinking that *maybe* what people are storing as a density field is actually a pressure field and nobody's telling me. Again, just a guess (there's a sketch of what that would mean right after this list).

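If Theory 2 holds, the translation from the stored "density" to the pressure the gradient term wants would be a single linear scale. A tiny sketch; the constant k and the rest density are tuning values I made up, not numbers from any paper:

// If the stored "density" field is secretly pressure-like (Theory 2), then at
// constant volume and temperature the ideal-gas relationship collapses to a
// linear map. k and restDensity are invented tuning constants.
float pressureFromDensity(float density, float k, float restDensity)
{
    return k * (density - restDensity);
}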
.
Blah-Blah-Blah-LOOK PRETTY PICTURES
.
So this thing is getting really lengthy, but I think I've cleared a lot up for myself by writing this. Strange how that works.
.
I'll go into gruesome details about how exactly to get past theory and implement everything, don't worry. My end goal is to eventually be able to simulate a fluid something like this:
.

[ image: CulDeVinci makes a surprise return! ]
.
Right now, though, it's looking something awful. As of right now, gravity is kinda buggy (I'm thinking that instead of propagating velocities I should propagate momentums, but imma have to try that out), and advection blows up after about 10 seconds of simulation or so, so the only things happening are fluid propagation and fluid pressure. Here's a demonstration of what a zero-gravity non-cohesive fluid might look like:
.
[media][/media]
.
Yeah, I have to admit, it looks pretty funky. Oh well. I'll write another entry when I have something that actually works and doesn't look *completely* stupid.
.
Until then, see yah, and thanks for reading!

CulDeVu

 

Failures in GI-land

Hello all! It's been quite a while.

So I've been working on life/jobs/family/etc the past couple months. A couple weeks ago, however, I decided to implement a type of arbitrary BRDF sampling based on factoring large matrices, found here: http://gfx.cs.princeton.edu/proj/brdf/brdf.pdf

The idea is to convert the BRDF into a series of matrix representations in such a way that

ρ(ω_i, ω_o) ≈ Σ_l F_l(ω_o) · u_l(φ_h) · v_l(θ_h)

, where F_l, u_l, and v_l are matrices that depend on outgoing direction, half angle azimuth, and half angle zenith about the surface normal, respectively. Now that the functions for half angle azimuth and zenith are separated out, it's trivial to construct a probability density function based on each. From then on, it's the whole uniform random number -> CDF -> random ray thing like we're used to. The PDF is defined by

p(ω_i) ∝ u_l(φ_h) · v_l(θ_h) · (dω_h / dω_i)

, where that last funky term is the Jacobian between the half angle representation and normal paths (it works out to 1/(4|ω_h · ω_o|)).
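
For reference, the "uniform random number -> CDF" step is just a discrete inversion of the tabulated factor. Here's a minimal sketch; the tabulation and the names are mine, not from the paper's code:

#include <vector>
#include <algorithm>

// Build a discrete CDF from a tabulated, non-negative 1D factor
// (e.g. one column of u_l or v_l).
std::vector<float> buildCdf(const std::vector<float>& f)
{
    std::vector<float> cdf(f.size() + 1, 0.0f);
    for (size_t i = 0; i < f.size(); ++i)
        cdf[i + 1] = cdf[i] + f[i];
    float total = cdf.back();
    for (float& c : cdf)
        c /= total; // normalize so the last entry is exactly 1
    return cdf;
}

// Map a uniform random xi in [0, 1) to a coordinate in [0, 1), interpolating
// linearly inside the chosen bin so samples aren't quantized to bin edges.
float sampleCdf(const std::vector<float>& cdf, float xi)
{
    auto it = std::upper_bound(cdf.begin(), cdf.end(), xi);
    size_t bin = size_t(it - cdf.begin()) - 1;
    float t = (xi - cdf[bin]) / (cdf[bin + 1] - cdf[bin]);
    return (bin + t) / float(cdf.size() - 1);
}

Sample the azimuth from the u factor's CDF, the zenith from the matching v column, reconstruct the half angle ω_h from the two, and reflect ω_o about it to get the incoming ray.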

Pretty Pictures and Opinions
.

[Left: factored BRDF representation, Right: Path traced reference, 200 samples each]
.
Okay, enough math. On to my personal thoughts. (Just as a warning: I've checked and re-checked my math a hundred times, so I don't think anything is wrong. If it is, though, be sure to sufficiently embarrass me in the comments.)
.
So. The big issue is that, while the representation is extremely good at importance sampling shiny materials, it does so at the expense of accurately representing more Lambertian (diffuse) behavior. The effect can be seen when only one bounce is traversed:
.

[Left: 256 u terms, 128 v terms, Middle: path traced reference, Right: 512 u terms, 256 v terms, all images rendered with 200 samples]
(btw, the blue ball is the only thing in the picture that the new material is applied to. In case you couldn't tell.)
.
This issue arises because of the representation's discrete nature. In order to get a good enough representation of the BRDF to sample it in this manner, you have to throw more and more samples at the BRDF. A pure mirror works very well in this representation, while diffuse materials work less well, because the representation prefers jagged BRDFs whose lobes it can pick out and sample. Otherwise, it has to make approximations as to what the CDF is, in the same way that a Euler integrator has issues because its regular sampling doesn't capture the essence of the function. If that makes sense. A graph of what this looks like might be something like
.

.
I could show some close-ups like in normal graphics papers, but this journal software is really kicking my ass. I'd like to point out my solution of inserting periods between every paragraph in order to keep them from slamming into each other every time I press the edit button.
.
Welp. Hope you liked it. The factored representation is still fantastic if you want to store measured BRDF data, but it doesn't suit my needs for an actual path tracer. Oh well. I'm bound to find some other snazzy graphics paper to try to implement soon.
.
See yah!

CulDeVu

 

Down with the FastCall22 reign!

So FastCall22 has figured out how to kick himself in the gdchat. Well, he hath prowess over us no more!

Here's how to go about it:

Step 1:
Pull up the developer tools that show you incoming and outgoing packets. On Chrome, you hit F12 and click Network, and Firefox has a similar thingy.

Step 2:
Post a single thing. The object of this is to see the session's unique id and verification key for you. This changes every time you enter the room. I'm pretty sure that you can get that value off of your entrance, but oh well.

Anyways, do that, and you'll send a POST packet, followed by a GET packet with your exact message in it. The GET one is the one you want. You can quickly filter through the packets to find the right one by looking in the Preview tab under Network on Chrome.

Step 3:
Scroll down to the bottom of the packet header where it talks about the Query String Parameters. Those are the things in the url with all the &'s after the .php. Yeah, you need those.



Step 4:
You need something that will send arbitrary packets to places. Postman for Chrome is a good choice. I'm sure there's something similar for Firefox. And maybe for Opera. Maybe AOL too, if you're thusly inclined.

Step 5:
Fill in the packet data.
You want to paste this into the field that says "Enter request URL here"

http://server05.ips-chat-service.com/moderate.php?room=FILL_ME_IN&user=FILL_ME_IN&access_key=FILL_ME_IN

The URL params that say "FILL_ME_IN" need to be filled in with the info from step 3.

Down under form-data, there's 2 fields that need to be made.
against: YOUR_USER_VALUE
_: [leave this value blank. I don't know why it's here, or if it's even necessary, but do it anyways]



Step 6:
Send. You should get a response that looks like:
1
If you get something different, you did something wrong.

Step ?:
Profit.



Let the GDchatting commence!

CulDeVu

 

Small update with spaceships

Well, I didn't get much done this week, admittedly.

I really dislike this part of the school semester. I'm not that used to the whole college thing just yet, so it kinda happens out of nowhere. Oh well

So what I HAVE gotten done this week is a bunch of codebase-better-ing. You know, getting rid of some of the old lone classes everywhere and starting to build some semblance of structure in the codebase.

One problem I had was that, apparently, if you're flying at like 400000 m/s and you turn 1 degree, if the speed is applied to the current orientation, then you kind of go flying a very far way away from your destination. So I started fixing the issue once and for all: thrusters!


[ ms paint with a trackpad mouse = CulDeVinci ]

The thrusters are kinda realistic. They apply a force to the craft, which both pushes the craft forward and also applies some rotational velocity. Sure, all rotational axes have the same moment of inertia. Sure, the world in the background is just an impostor at the far plane. Sure, I don't really know where to put the thrusters that control roll. Oh well.
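
The "kinda realistic" part is just the standard rigid body rule: a force applied away from the center of mass contributes both linear acceleration and torque. A tiny sketch of that, with the same single scalar moment of inertia admitted to above (the struct and the numbers are made up):

#include <glm/glm.hpp>

struct Craft
{
    glm::vec3 vel;    // linear velocity
    glm::vec3 angVel; // angular velocity about the center of mass
    float mass = 1000.0f;
    float momentOfInertia = 500.0f; // one scalar shared by every axis
};

// Apply a thruster force for dt seconds. offset is the thruster's position
// relative to the center of mass, so cross(offset, force) is the torque.
void applyThruster(Craft& c, const glm::vec3& force, const glm::vec3& offset, float dt)
{
    c.vel    += (force / c.mass) * dt;                                // F = ma
    c.angVel += (glm::cross(offset, force) / c.momentOfInertia) * dt; // torque = r x F
}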



Thanks for reading!

CulDeVu

 

Planetary Biomes

Oi there once more!

Not many images or videos this time around, unfortunately. It's mostly about updates about what I'm up to and what I'm working on.

So, first off, the non-euclidean stretchy game thingy. Turns out that some people in my art class had taken an interest in it when I was working on it in class, and they offered to help offload some of the content generation, like making art, animations, level design, etc. This is actually exactly what I need in order to continue working on it, so you'll be seeing some gosh-darn pretty images in the future, after finals are over! Look forward to it!

In the meantime though, as I always need something to procrastinate on homework with, I've been playing around with a space flying simulation, one where you can get really close to the ground!




It looks a little rough right now, I know. Atmospheric scattering and physically correct lighting have taken a back seat to getting the game mechanics up and working. As you can see, though, there are biomes on the planet that try to correspond to how an actual planet's biomes work. With the exception of icecaps and rivers and stuff that have to have a little more processing thrown at them, the biomes are roughly there. I need to work on smoothing them out and getting them to be believable colors, but I'll work on that when I start doing close-up stuff.

So, down to the nitty gritty technical details of everything.

Biomes 'n Stuff

The game world is represented in a sort of fixed-point-esque system using doubles, in the hope of expanding it to more than just a single solar system. However, everything drawn on the GPU is done using floats, particularly because I don't want to have to require GLSL version 400. So for the planet and sun that you see right now, everything is done using raytraced impostors.
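
For the doubles-to-floats handoff, the usual trick (and this is me sketching the general idea, not necessarily exactly what my code does) is to do the camera subtraction in double precision on the CPU, so the GPU only ever sees small camera-relative offsets that survive the cast to float:

#include <glm/glm.hpp>

// World positions live in doubles; the GPU only ever sees positions relative
// to the camera, which are small enough to be safely truncated to floats.
glm::vec3 toCameraRelative(const glm::dvec3& worldPos, const glm::dvec3& cameraPos)
{
    glm::dvec3 rel = worldPos - cameraPos; // full double-precision subtraction
    return glm::vec3(rel);                 // safe to hand to a float-only shader
}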

So how do the biomes work? Well, back in Freshman year of high school I took biology, and I remember learning about a chart that determines all of the different biomes we see on earth by plotting them on a graph of average rainfall vs average temperature. Here's a picture of it:



So, that's pretty easy. For the average temperature, I took inspiration from the earth: http://upload.wikimedia.org/wikipedia/commons/a/aa/Annual_Average_Temperature_Map.jpg . I approximate it by a dot product, offset by some value noise:

// takes normalized position rel sphere
// returns a value on [0, 1] representing the range [-15, 30] celsius
float getTemperature(vec3 p)
{
    vec3 randTempSeed = p + vec3(12, 11, 10);
    vec3 upVec = vec3(0, 1, 0);
    float temp = dot(upVec, p);
    if (temp ...
    // (the rest of this function, and the combined biome snippet that followed
    // it, were eaten by the forum software)
}

The average rainfall on the earth doesn't look like it has any discernible pattern, so I just threw some value noise on it and called it a day.
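
Since the combined biome snippet is lost, here's a sketch of the idea it implemented: the chart reduces biome selection to a 2D lookup on (temperature, rainfall). The thresholds and the biome set below are my own guesses, written in plain C++ so it survives the GLM trick described in the next section:

// A guess at the biome lookup: classify by temperature and rainfall, both
// normalized to [0, 1], following the rainfall-vs-temperature biome chart.
// Every threshold here is invented for illustration.
enum Biome { Tundra, Desert, Grassland, DeciduousForest, Rainforest };

Biome getBiome(float temperature, float rainfall)
{
    if (temperature < 0.2f) return Tundra;          // too cold for much of anything
    if (rainfall < 0.2f)    return Desert;          // warm but dry
    if (rainfall < 0.5f)    return Grassland;
    if (temperature < 0.7f) return DeciduousForest; // wet and temperate
    return Rainforest;                              // wet and hot
}
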
Translating it to C++

So, now with a basic rudimentary system in place for generating biomes for GLSL impostors, I need to translate it over to C++ so that I can build a mesh for when you get up close.

The issue is that I'll probably end up changing things, like finding ways to make the biomes more natural (like having more rainfall around ocean edges, etc), and I'll have to translate every single change I make to C++ as well.

The solution I came up with is to use the power of GLM plus the power of simple parsing, to get the C++ compiler to compile GLSL.

Here's how it works:

My shaders get run through a very simple parser that looks for #include statements and then starts recursively copying and pasting those files into the body of the shader (I call these files .glh). This serves a double purpose. Firstly, it makes a large GLSL codebase much cleaner and more plug-and-play, which helps with prototyping. Secondly, the separate header files can, if GLM is willing, also be parsed by the C++ compiler, as long as they contain just functions and their definitions.
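
The parser really can be as dumb as it sounds. Here's a minimal sketch of the recursive expansion; the file handling and the lack of error checking are my own shortcuts:

#include <fstream>
#include <sstream>
#include <string>

// Recursively expand #include "file.glh" lines by pasting the included file's
// contents in place. No include guards or cycle detection; a real version
// would want both, plus error handling for missing or malformed lines.
std::string expandIncludes(const std::string& path)
{
    std::ifstream in(path);
    std::ostringstream out;
    std::string line;
    while (std::getline(in, line))
    {
        const std::string tag = "#include";
        if (line.compare(0, tag.size(), tag) == 0)
        {
            size_t a = line.find('"') + 1;
            size_t b = line.find('"', a);
            out << expandIncludes(line.substr(a, b - a)) << '\n';
        }
        else
        {
            out << line << '\n';
        }
    }
    return out.str();
}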

This way, I can import all of my biome creation code into C++ by saying something like:

using glm::max;
using glm::min;
using glm::clamp;
using glm::normalize;
typedef uint32_t uint;
#include "planetImpostor.glh"
and BOOM! I've got the biome determination code transferred.

Here are some more boring screens:




Welp, I hope that interested you guys and everything! I'll get back to you guys in a week and catch you up with everything if finals don't get too rough.

Thanks for reading!

CulDeVu

 

CulDeVu vs game development

Oi thar e'rybody!

So, I'm really bad at making video games.

I really like making prototypes and exploring new and cool mechanics.

These things don't go together very well.

See, it's not that I loose motivation. I'd be more than happy to work on it if making any sort of artwork weren't such a gargantuan bottleneck. Artwork is one of those things that it takes way too many hours of my life to make anything that I would ever be proud of. And games are one of those things that require a lot of art.

I can level design. I can mechanics design. I can't do sound design, and I can't do artwork. And it's kinda ironic, because those are the two things that players (at least my friends) notice first about a game. And it's hard for me to work on something that I'm not proud of.

So, after kinda slacking off for a week or so, I've decided that I'll put the non-euclidean prototype away for a while. If anybody wants to pull it back out of its grave to make something of it, PM me or something. My personal opinion is that its mechanic deserves to be placed in a game that's been showered in love by the developers, and that's something I can't personally provide, with my schedule and talents.

So!

Now to wonder what I'm going to be doing next, because I can't just sit around doing nothing.

I've always wanted to make a space combat game. I used to pour way too many hours into the Rogue Squadron games and into Tie Fighter back in the day. I don't know what it was, but I loved flying though asteroid belts, swerving around space rocks and chasing enemies and blowing up things in zero-g.

And you know what? They don't look that content-heavy! You know, besides space rocks and planets and forests and other spacecraft and stuff. But other than that, there's no carefully hand-crafted anything in them usually!

What do you guys think? Does anybody else find space combat games fun?

As always, thanks for reading.

CulDeVu

 

End of Spring Break update

'eyyyy!

So indiedb is an interesting place, and it has a really weird ranking system. I went from rank 1778/24950 to 4911/24950 in about 24 hours, during all of which I was never even in the catalog of games. I think the ranking system is based on both time and site visitors, or something similar. But whatever, I'm cool with it if it helps me out.

In terms of social networking, as of right now, I have:
indiedb account: http://www.indiedb.com/games/contorted
twitter page that I hardly ever use, but should: https://twitter.com/culdevu
and I have this GameDev journal.

That should be enough. Right?

... right?

...

Anyways, I haven't really been working on the actual game the past couple days. I've fixed up the physics engine and decked it out with friction, with hopefully the last update I'll ever make to it.

For the most part, I've been pulling my old GPU pathtracer back out, trying to fix it up in preparation for some voxel tracing, mainly because I've started getting jealous of Migi's screenshots. I found some old bugs in the specular lighting, and I made the pathtracing loop more iterative and more readable. To this day, though, I still have no idea how you're supposed to make random numbers on the GPU properly. The method I use currently is a slightly modified version of a code snippet from http://byteblacksmith.com/improvements-to-the-canonical-one-liner-glsl-rand-for-opengl-es-2-0/ :

uniform vec2 rand_1;
uniform vec2 rand_2;

...

float randShift_helper = 0.0;

float rand(vec2 co)
{
    co += rand_1;
    float c = 43758.5453;
    float dt = dot(co.xy, rand_2) + randShift_helper;
    float sn = mod(dt, 3.14);
    float ret = fract(cos(sn) * c);
    randShift_helper = ret;
    return ret;
}
As you can see, it's the familiar canonical GLSL rand function that you see everywhere, but got a couple constants switched out for my own uniforms (randomly generated every frame), and a splattering of ideas taken from normal LCRNG.

Generating rays from this thing, though, makes images that converge to things like


I'm thinking it has something to do with the fact that, although the individual paths are random, each successive path bounce generates a ray that's deterministic given the parent ray (because of the LCG part).
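
For what it's worth, a common way around exactly this problem (not what I'm using above, just an alternative I've seen recommended) is a stateless integer hash seeded per pixel, per frame, and per bounce, so successive bounces are decorrelated by construction. A sketch of the classic Wang hash in C++; it ports line-for-line to GLSL uints, and the seed-mixing multipliers are arbitrary:

#include <cstdint>

// Wang hash: a cheap integer hash usable as a stateless per-sample RNG.
uint32_t wangHash(uint32_t seed)
{
    seed = (seed ^ 61u) ^ (seed >> 16);
    seed *= 9u;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2du;
    seed = seed ^ (seed >> 15);
    return seed;
}

// Fold pixel index, frame number, and bounce depth into the seed so no two
// samples share a random stream. The multipliers are arbitrary odd constants.
float randomFloat(uint32_t pixel, uint32_t frame, uint32_t bounce)
{
    uint32_t h = wangHash(pixel * 9781u + frame * 6271u + bounce * 26699u);
    return h * (1.0f / 4294967296.0f); // map to [0, 1)
}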


ugh... GLSL, y u do this 2 me?

Anyways, gotta go. Have tons of homework due in Discrete in a couple days. That can get to be kinda important.

Thanks for reading and stuff!

CulDeVu

 

Half-week Update

Greetings, all!

Spring break! Woo-hoo!

That means that I'm using the time that I should be spending catching up on my school-work instead working on the not-so-much-of-a-prototype-anymore thing!

Firstly, I will say that sitting down in front of Illustrator or Photoshop is, for me at least, a humbling experience every time. I'm terrible at art-ing, but I can't afford an actual artist. So... that sucks. But I've been trying!

It's around this time in the little projects I've worked on where I start feeling kinda down. Because I know that nobody looking at the game will care about anything other than the visuals, and I know that I'm not really good at making good visuals. With that being said, the only way to get any sort of notice from anyone other than my roommate is to have a large amount of content already finished. So at this point it's pretty much a rush to get as much of the game finished, or at least blocked out, in as little time as humanly possible.

I've been making a tree village (the central hub of the game), for a proof-of-artwork concept, you know, making sure everything flows and works together in practice, before I rush into the rest of the game.

Things that I've gotten done in the past couple days include:
- avoided adding much-needed friction to the physics engine
- the first couple pieces of artwork
- the beginning stages of sprite sheet support (god, I hate Flash)
- better distortion culling (although you can see it mess up a lot)
- game states, like menus and start screens, etc
- parallax scrolling (can't see it in the video, took it out to work on the set design)
- a bunch of other stuff

I feel pretty productive, to say the least. Time to binge-watch all the Walking Dead episodes I've missed

Here's a little video, you know the drill. It shows off the artwork and how it interacts with the warping mechanic. Right at the 2-minute mark I start showing how abysmally the framerate drops when distorting in different scenarios.

Also please note that the leaf roofs and the tree are probably the best pieces of artwork you'll see me do. I'll try to take advantage of their blocky, jagged nature and pass it off as an art style or something.

[media] [/media]



Thanks for reading, like always! Pretty soon I'm going to start posting to #screenshotsaturday and being more active on #gamedev and #indiegamedev, once I get something that looks good.

CulDeVu

 

Moving slowly

Hola and all that jazz.

So I've decided that I'm officially friendzoning OpenGL.

I really want to like OpenGL. I really do. However, after everything we've been through, after both of us have seen each other's glaring faults come to light, it's just not the same. As hard as it is, I'll never be able to see OpenGL as more than a friend. A friend that I go to when I want to prototype something up in a jiffy, or when I don't really want to hassle around with D3D11. I understand OpenGL, and we've been through a lot...

OpenGL, I hate to say it, but...

...

I have a crush on the new kid in town, Vulkan.

[/tears]

(Actually, I'd personally just want a graphics library that gives me the power to send data to VRAM and access it via a shader. Anything else is bonus. Allow me to write my own numbers to my buffers; allow me full control of the resources I've allocated in VRAM. Let me incur any sort of cache misses or other nasty things. Allow me to write and interpret my own z-buffers, to write and read from VRAM in the same way that you can from RAM in a multithreaded CPU program. I understand it's dangerous, but at least give me the option! No more complicated binding and setting render targets, please, just give me the opportunity to read and write to VRAM!)

(Although, I'm probably just talking out my ass here. I know very little about GPU architecture compared to a lot of other people who roam these forums. I just get frustrated because I feel like there are a lot of barriers holding me back from doing interesting things easier than normal. I hate having to tell the graphics driver that I'm allocating a depth buffer, and that I want the RGB data to be full floating point, blah blah blah. I would just really like to be able to write and read my own data at will. However wrong I may be, this is 'MURICA goddammit! I can be ignorant and loud as much as I want!)

In other news, or rather lack thereof, I haven't really been working on my project for the last week or so. I've been trying to see if Java could ever be a decent platform to test rendering algorithms on (hint: it's not), because after my cone tracing experiment fell apart I wanted to see if I could improve my sampling patterns in a path tracer. It turns out the JVM JIT optimizer isn't quite a match for good old -O3.

*Hopefully* I'll get back to work on the non-euclidean thing. Maybe. I have some ideas as to how to increase my hype about the project, but at the moment, with midterms and everything happening, it just doesn't seem appealing right now :(

Aight, well, ramble to you guys next week.

Thanks for reading!

CulDeVu

 

Playing with new content generation

Hey all!

So I've been working on fixing an annoying little bug in the physics engine that sometimes makes you get stuck on corners. The fix is kinda hack-ish, but hey, can't do anything about it. I've also got to implement some sort of friction in the game. It feels like you're playing on an ice level, but I really don't like the idea of implementing friction inside of the warping part of the physics engine. The physics engine is iterative, so friction will be really, really hack-y.

I've also been working on stuff like controller support and game state stacks. You know, stuff that people don't really care about/boring stuff. I've also been working on the visuals a little, and getting a new content generation pipeline up and running.

So I've gone and changed how maps work in the game I'm working on. Instead of using tiles to create the game world, I use SVG files, and separate out the collidable geometry from what's drawn. So far I'm liking it, and it gives me a lot of creative license while requiring little artistic ability, which is something I need.

The only issue with it, so far, is that if I ever want to add any sort of dynamic lighting, you know, to breathe some life into the world, SVGs are definitely not the way to go. And who wouldn't want to see a 2D platformer with cone tracing and dynamic global illumination?

Other than that, it's coming along nicely. I still have no art skills to speak of, but at least I *can* art a little like this.

First piece of art was made this morning:



And these are pics after distortion:





As you can see with the distorted pictures, there are still issues with the distortions messing up a little, but that can be solved with a higher triangle subdivision count.
Thanks for reading!

EDIT: Almost forgot, I was the most 1337 GD.net member ever a couple days ago! @et1337, I'm coming for you!

CulDeVu

 

Switcheroo to Vector graphics

Greetings!

First off, let me just say something about SVG files. In many ways, it's a fantastic format, especially in the way that it's designed to be easily parsed. Unfortunately, SVG files are really bad at anything other than static art. Which is what the format was made for, I guess, but that'll make animating hard in the future. There *are* tools for animating vector art out there, you know, to rasterize it into spritesheets, but I'm going to have to figure out which of them is actually good.

The reason I'm switching everything up is because I actually have no art skills to speak of. However, I took a few graphic design classes in high school, so I know Illustrator pretty well. Given a drawing or some sort of reference, I can be sort of competent, which isn't something I can say about pixel art or photoshop art. I'm also a college kid, so I'm too poor to actually hire an artist. I have some friends majoring in art, though, so I might guilt trip them into helping out

The biggest change since last time has been just the codebase. The rendering and physics used to be synced up, but now they're modular. I also added a lot of support for subdividing triangles, triangulation, etc. You know, pretty basic, run of the mill vector art stuff. The switch was worth it though.

Here's a gif of the current triangle subdivision. It's really bad and kind of inefficient in the way it sometimes creates long triangle slivers, but I'm working on a triangle relaxation method I can use. Nothing Delaunay-correct, just something to stop the slivers from happening (there's a sketch of the idea just below the gif).

(if it doesn't animate, click on it. That usually makes it work)
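
The relaxation I have in mind is basically Laplacian smoothing: nudge each interior vertex toward the average of its neighbors so that skinny triangles fatten up. A sketch of one iteration; the mesh representation here is made up, not from my codebase:

#include <glm/glm.hpp>
#include <vector>

// One iteration of Laplacian smoothing: move each interior vertex partway
// toward the centroid of its neighbors. Boundary vertices stay pinned so the
// outline of the art doesn't change.
void relaxVertices(std::vector<glm::vec2>& verts,
                   const std::vector<std::vector<int>>& neighbors,
                   const std::vector<bool>& onBoundary,
                   float amount)
{
    std::vector<glm::vec2> next = verts;
    for (size_t i = 0; i < verts.size(); ++i)
    {
        if (onBoundary[i] || neighbors[i].empty())
            continue;
        glm::vec2 centroid(0.0f);
        for (int n : neighbors[i])
            centroid += verts[n];
        centroid /= float(neighbors[i].size());
        next[i] = glm::mix(verts[i], centroid, amount); // amount in (0, 1]
    }
    verts = next;
}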

Other than that, yeah, that's all. I'm working on getting a decent-looking level (or at least its collision geometry) up for a demo next week. Or I might decide to do my homework instead. Depends on my mood, really.

See yah next time!

CulDeVu

 

Getting a little pushy

??????????!

My art studio class has been kicking my ass with deadlines, so I haven't had much time to do development stuff this week. I've been trying to figure out how to get Blender to render normal maps so I can get an art pipeline up and running.

So my roommate was like "so what if you could push in your game too?" So I used it as an excuse to sort of clean up the codebase and add some pretty tweening as a result of the GameDev thread about Juicing up Your Game, before I tackle moving platforms in warping space, which sounds god-awful.

As a result of adding a pushing mechanic, I found another glitch in the physics engine. Which is a shame, because I thought I would never have to open up that file ever again. It has something to do with the outer edges of things, but I haven't quite figured it out yet.

So here's a little video of me playing around with the mechanics in a little demo world. Bugs start at 1:35, where you can see the weird edge glitch. Other than that, though, there's not much going on. Sorry I left the debug HUD up for like half the video; I've gotten kind of used to it, so it didn't occur to me that it was up too long.



Thanks for reading!

CulDeVu

 

CulDeVu vs Photon Mapping

Heyy all!

Sorry for not posting for a couple weeks, but it's the beginning of the semester again, and somebody's got to represent the freshman class at parties

So the graphics professor at school here was talking to me and he was like "you know what would be a cool idea?" Turns out, it WAS a cool idea.

So! I present to you:

HBR: Human Based Rendering
(the photon mapping of the next generation)

The general idea is to either get the user that's looking at a screen to plot the photons themselves, or somehow crowd-source rendering an environment by manually tracing out light paths.

It's a novel technique, you see. Instead of you offloading the work to the GPU, the GPU is offloading the work to you. The former has been explored many times, but the other side of the symmetry has not. While HBR is not a silver bullet, we believe that, if used in conjunction with more conventional GPU usage, HBR will become a powerful tool in a graphics programmer's toolbox.



(all screenshots above were rendered with real humans!)

This has many advantages over normal rendering techniques. A few:

Aesthetically Accurate over Physically Accurate: the end result of a render will look like how the user thinks the environment should look. This weight on personal stylistic choice builds the illusion that they're crafting the environment themselves. This is why we believe that HBR, the act of manually placing and tracing photons, will increase immersion in games.

GPU Empathy: as previously stated, instead of you offloading the work to the GPU, the GPU is offloading work to you. When you have to manually render each frame yourself, you start being a little more okay with the thought of awful looking graphics.

Cheap for Static Geometry: this just comes with photon mapping. A pre-computation is all you need to cull offscreen photons, and BOOM! You're realtime.



Jesting aside, it would actually be really cool to have a crowdsource-rendered Cornell box, where people come to place photons and render the scene themselves, and see how far off of the ground truth it is.

Welp, that's all for now. I promise, I'll be back next week with some more non-euclidean fun. I can't report on anything at the moment, though, because I have a project due in a few hours that I really should get to working on.

See yah!

CulDeVu

 

Updates on top of updates

'Ello there!

Just kidding. Not really that many updates. Just a few minor things, honestly, while I'm working on getting the first piece of artwork done.

So I read something swiftcoder said:

"I keep a tally of all the bugs I have fixed with a single line code change. Over time it's reached a fairly horrifying number."

That sounds like a fantastic idea!

I'll start my own count with this one from the physics engine. There was this bug where, if you jumped on a small distortion on the wall, you could clip through it, and sometimes you could clip through the wall. So I started rummaging through the codebase and eventually honed in on this little piece of code:

// only segment the parts that need it
if (lastWasWarped) {
    if (distortedPt0_temp.sub(wsPt0).lengthsqr() ...
    // (the rest of this snippet was eaten by the forum software)
This was the piece of the physics code that tries to figure out if the geometry is distorted or if the geometry can possibly be batched into straight lines of some sort. This was done in order to mitigate getting stuck on corners everywhere when simply walking over normal terrain.


As Jonathan Blow said in his recent programming language videos, a large portion of bugs in programs come from not reasoning through state changes correctly. Couldn't be truer here:

// only segment the parts that need it
if (lastWasWarped) {
    if (distortedPt0_temp.sub(wsPt0).lengthsqr() ...
    // (truncated the same way in the original; per the next paragraph, the fix
    // was a single added else clause)
So that one line, that single else clause, stopped the clipping issues. Now, as far as I can tell, with the current system I've got going on, the collision system is impenetrable! At least until I find some other bug

The bug fix also allowed me to do cool things, like scale walls, like in this video:


But yeah. Nothing much else happened this week. If anything absolutely crazy happens, I'll let you guys know next week.

See yah!

EDIT: actually, the journal's view counter has been acting weird ever since New Year's. It always reads 0 views for me, even though I can see members visiting the journal and upvoting stuff. Is a fix coming soon? I kinda like to know how many people actually read this stuff, like on RSS feeds and the like. It's a sort of selfish request, but it would really help to know. Thanks!

CulDeVu

 

I can't art

Oi tharrr maties!

So I've been working more on grappling hooks and getting lua scripting to work. You know, gameplay specific stuff. The part of the game development where the interesting stuff stops really happening all the time. Cause let's be real, level design is kinda boring and tedious

So, as part of my thing I'm trying to do, writing a journal entry at least once a week, I'm going to give the world a taste of my awful art skillz

I've started really getting vibes from the mechanics recently as they've started maturing. The mechanics, they speak to you. They sort of tell a story, and I'm going to try to capture that. The biggest source of fun in my playing around has been warping space and moving objects, so I'm going to go full-on metallic, mechanical levels with lots of moving parts and switches and puzzles and stuff (already in the works!) and dark lighting. Imma see how the first concepts turn out, but I've got to replace my current default tilemaps before I can really decide.

First little bit of concept art I've made of the main character (with a bit of parabolic grappling hook collision math in the upper corner there):


CulDeVu

 

Grappling Hooks in Non-Euclidean-Land

Oi there again!

I don't know how accepted it is to post 2 entries here so close together (like, 15 hours or so apart). I assume it's like double posting, but don't worry, I'll be quick.

I read last night Aardvajk's blog post about the grappling hook idea going under,
https://www.gamedev.net/blog/515/entry-2260648-project-fail/

, and I started thinking about how grappling hooks tend to work really well in 2D but not in 3D and-- wait a minute... my prototype is in 2D!

So I said "what the hell" and whipped up a quick little demo into the non-euclidean prototype. A natural opposite of the grappling hook, which involves pulling yourself closer to other objects in most games like a hookshot, would be to pull the other geometry to you instead. It's basically a restricted form of the exact same mechanics I've been playing with so far, except it makes a whole lot more sense conceptually. Tell me what you think of it!

Here's a quick little video of me playing around with it. Sorry about the odd resolution, I'm trying to get OBS to work well on my new laptop.
[media] [/media]

Shoutouts to Aardvajk for grappling hooks, and for prototyping ideas publicly for other people like me to steal

CulDeVu

 

Bret Victor on Humane Representations

Hello all!

I haven't been as active the past week on the project as I should have, but luckily for me, Bret Victor released a new talk the other day that shows what he's been working on.

In case you don't know about Bret Victor, he's a man that I have a massive amount of respect for. He's most well known for his talk on "Inventing on Principle," but I know him best (and resonate most deeply with him) through his "Kill Math" initiative (http://worrydream.com/KillMath/). In it, he says that mathematics, as it currently stands, is unsuitable for the types of problems we face on a day to day basis. The mathematics that we all learn and are taught in school is the type that was most useful for people a couple thousand years ago, but not necessarily for today.

So he's been coming up with ideas and prototypes the last couple of years to explore different ways of looking at problems, different ways of viewing programming, different ways to approach math. I may not agree with a lot of things he says, but the thing that impresses me so much is that he's trying. He's starting the conversation about how to bring things like mathematics and programming down to a more approachable level by completely reinventing it, or trying to view it in a different light, sort of how Paul Lockhart was trying to get people to approach mathematics.

Actually, on second thought, I'm really bad at explaining what it is that he talks about. Just read the Kill Math link above. You'll get what I'm trying to say.

Anyways, he released a talk the other day about his latest line of work (no demos, unfortunately) about how the mediums that we work with keep us in a cage. His biggest point was that "intellectual work" today means staring at this little glowing rectangle and not really utilizing any of your senses besides sight. He says that it's analogous to working with music by only interacting with sheet music, without any sort of audio. And this, he says, is inhumane, and he wants to fix it.

Here's the video:
http://vimeo.com/115154289

Unfortunately, the talk is a little on the boring side after the halfway mark, but he's gained enough of my respect to get my full undivided attention when he talks.

The non-euclidean project

The project I've been writing about for the last month or so doesn't really have anything to show for itself in terms of progress. One major game-breaking bug was fixed (and it took a while, too!). Instead of trying to pre-predict how the geometry of the levels was going to transform, I decided to distort the geometry first, and then do a second pass over the warped geometry to try to figure out the optimal way to warp it. And BOOM! No more getting caught on corners, and no more edge bugs!

It always humors me just how little code it takes to fix things sometimes. Those 10 lines or so took a long time to write, and they were very finicky to wrinkle the details and bugs out of. But when all's said and done, it was 10 lines that solved a massive bug. And that's always a happy feeling!

Happy New Year, everyone! See you guys around next year!

CulDeVu

 

Update on Non-Euclidean physics

Oi there!

Sorry, I haven't really been working on the non-euclidean game thingy as much as I have the last couple of months/weeks/whatever. Finals are now over, though, and that completes my first semester of college!

In other, less important news, I just reached the 1000 rep point club! I got there with slightly dishonorable means, like grinding rep in the Screenshot Saturday showdown for the last couple weeks, but I'm there!
https://www.gamedev.net/sm/member/175326-culdevu

So onto some real talk.

The non-euclidean prototype/thing is having some major issues with the physics. Here's a screen (again, sorry for clip art textures, I'm trying to iron out some other issues before I tackle artwork)



See, the way everything works is kinda hack-y, because it's hard to predict at runtime exactly how the geometry in the levels will deform, given a set of distortions. And even if I could, collision normals would still be an issue, because they wouldn't be preserved. So what I do right now is segment all edges into little line segments in the physics code, warp them separately, and then apply the physics to that.
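
Roughly, the segmentation step looks like this. This is a sketch of the idea, assuming some warp function that displaces points; none of these names come from the real codebase:

#include <glm/glm.hpp>
#include <vector>

// Chop an edge into small sub-segments and warp each sample point separately,
// so collision runs against the distorted shape instead of the original edge.
std::vector<glm::vec2> segmentAndWarp(glm::vec2 a, glm::vec2 b, int pieces,
                                      glm::vec2 (*warp)(glm::vec2))
{
    std::vector<glm::vec2> pts;
    pts.reserve(pieces + 1);
    for (int i = 0; i <= pieces; ++i)
    {
        float t = i / float(pieces);
        pts.push_back(warp(glm::mix(a, b, t))); // warp each sample individually
    }
    return pts; // consecutive points form the warped collision segments
}

Every consecutive pair of points becomes its own tiny collision segment, which is exactly where the corner-catching comes from: the seams between sub-segments act like corners.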

This leads to MASSIVE amounts of edge bugs of all sorts. Getting stuck on corners is the worst. Better physics culling mitigates it a lot, but it won't go away. It wouldn't be that big of a deal, but it affects jumping near vertical walls. SAT doesn't really play nicely when it comes to collectively deciding how much to displace an object after a collision. And it's too bad, because a Minkowski difference isn't what I need right now.

And the big issue is that I can't see a way to fix it. I mean, in a normal physics platformer, you can assume that stage geometry isn't going to change, so you can batch and simplify the hell out of collision objects. However, in this prototype, nearly every type of wedge-star-pentadecahedragon is possible, so taking shortcuts and cutting corners to fix edge bugs isn't acceptable, and would probably break the game in many ways.

Oh well. I'll have to think on it some more. Maybe I could try to pre-predict which vertices will be distorted, and only segment those. That would get rid of almost all the edge and corner bugs.

Or, I could always ditch this project and work on, say, a space combat game. They look fun to make!

But I'm probably not going to.

No video this time, unfortunately. I'm in the middle of installing a lot of software onto my new laptop right now, so I'll probably have a video next time!

Cheers!

CulDeVu

 

CulDeVu vs Global Illumination

Hellooo there!

Final exams are a humbling experience. During them, professors manage to hone in on every moment you ever felt stupid or unable to grasp a concept, and then make you relive it with a frightening 40% weighting. I'm particularly terrified of my university physics course, which is ironic, considering I've spent the last month or so making a non-euclidean physics engine. Damn professors, with their fancy "analytic solutions." :P

So people in the GameDev journals have been posting beautiful screens of their real-time global illumination demos. Migi's got his voxel cone tracing thing he's doing, eppo's got whatever voodoo he's been up to lately. What you guys didn't know was that CulDeVu has been making a global illumination demo of his own! It's pretty much the latest-tech state-of-the-art stuff happening over here!

Some screens!





So by now you're probably thinking something along the lines of "CulDeVu, how exactly do you make such beautiful images of spheres and cubes?!" It's a company secret, but here's a quick little FAQ, if you were wanting to buy into the business.

How fast does it render?
Usually 8-10 hours for a good render, 30 minutes - 1 hour for a pretty decent one.

Does it support .FBX files?
Don't get your hopes up! It doesn't even load .obj files!

Well, does it do subsurface scattering?
Nope.

Well what about volumetr--
Nope.

Does it even use the GPU?
Pfft. In your dreams.

Well, can it even do--
Jesus Christ, what do you think this is, RenderMan? Jeez...

CulDeVu
