
Way Walker

Members

  • Content count: 3076
  • Joined
  • Last visited

Community Reputation: 745 Good

About Way Walker

  • Rank: Contributor
  1. This is essentially what frameworks like .NET or Java do (although they're far from the first to do this). They specify a virtual machine, which is essentially "a fixed OS", and even "fixed hardware" in the sense that the bytecode/instruction set is fixed. The idea, as suggested, is that you don't need to worry about changes in hardware or OS because you're targeting a virtual, as opposed to real, OS/hardware combination that doesn't change. Actually, anything above assembly can get you a lot of the way there if you use cross-platform libraries (assuming they're maintained, but the same is true of the virtual machine or any "reusable OS"). Even C is defined in terms of an "abstract machine". The only problem is that users aren't typically set up to compile code, but languages like Python and Ruby get around this by handling compilation transparently.
  2. The only real difference between the two methods is whether you use an adaptive or a constant timestep. In both cases you're essentially doing 6N individual integrations for the given timestep; the only difference is whether you change the timestep between steps. Using a constant timestep is pretty much standard practice in molecular dynamics, where many researchers simply use velocity Verlet. Check out [url="http://lammps.sandia.gov/"]LAMMPS[/url] for a popular parallel code that does this. The only problem is that you need to "guess" a timestep that will provide sufficient accuracy throughout the simulation. This isn't a big problem in molecular dynamics, where the main issue is convergence and the maximum timestep for convergence won't vary much during a given simulation. For an adaptive method, I've had good luck with [url="https://computation.llnl.gov/casc/sundials/main.html"]SUNDIALS[/url]. All the work is done through an NVECTOR interface, which makes parallelization easy enough. It provides a basic parallel implementation of the NVECTOR interface, but you'll probably want to write your own that works with how you'd like to partition your data.
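To make the constant-timestep case concrete, here's a rough sketch of a single velocity Verlet step in C (just an illustration; the 1D array layout and the force() callback are placeholders of my own, not anything from LAMMPS or SUNDIALS):
[code]
#include <stddef.h>

/* One velocity Verlet step (kick-drift-kick) with a constant timestep dt.
   x, v, f are per-particle arrays; force() recomputes f from x. */
void vv_step(double *x, double *v, double *f, const double *m, size_t n,
             double dt, void (*force)(const double *x, double *f, size_t n))
{
    for (size_t i = 0; i < n; i++) {
        v[i] += 0.5 * dt * f[i] / m[i];  /* half kick with the old forces */
        x[i] += dt * v[i];               /* drift */
    }
    force(x, f, n);                      /* forces at the new positions */
    for (size_t i = 0; i < n; i++)
        v[i] += 0.5 * dt * f[i] / m[i];  /* second half kick */
}
[/code]
Call it in a loop with a fixed dt; an adaptive integrator differs mainly in choosing a new dt (and possibly a new formula) between steps.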
  3. [quote name='Sirisian' timestamp='1295859309' post='4763797'] Oh and one of my big gripes is malloc and free are annoying as hell to use with all their casting clutter.[/quote] If you need to cast, then you're trying to compile a C program as C++. My favorite idiom is:
[code]
Type *var = malloc(sizeof *var);
free(var);
[/code]
This is less brittle, more readable, and the compiler will tell you if there's no prototype for malloc(). I don't know how relevant that last part is anymore. Maybe it could cause a problem if sizeof(int) isn't the same as sizeof(void*)? But I still agree that there's not much use for C in game programming. The only place I could see using it would be as a lighter choice than C++ to drop down to from a higher-level language, whether for speed or for gluing bits together. In those cases, I see myself using mostly the C subset of C++, and C++ isn't as good at being C as C is.
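A small self-contained sketch of that idiom (the struct and the sizes are just for illustration):
[code]
#include <stdlib.h>

struct particle { double x, y, z; };

int main(void)
{
    /* No cast, and the type name isn't repeated: sizeof *p follows the pointee. */
    struct particle *p = malloc(sizeof *p);
    double *samples = malloc(1000 * sizeof *samples);

    if (!p || !samples) {
        free(p);
        free(samples);
        return 1;
    }

    /* ... use p and samples ... */

    free(samples);
    free(p);
    return 0;
}
[/code]
If struct particle changes, or p changes type, the malloc() lines don't need to be touched.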
  4. [quote name='godmodder' timestamp='1295644239' post='4762642'] Look, all I wanted to express in response to this thread was that Ubuntu may not be the ultimate operating system when it comes to some everyday tasks that I feel are important. Of course that's just an opinion, but I've seen so many cases of people giving a presentation on Windows and on some Linux distro (mostly Ubuntu). And most of the time the Linux people need way more time to set things up. It's not a good impression when your employer, for example, sees you struggling with your system.[/quote] I agree that Linux distros aren't always the best for some everyday tasks. I'm always running into minor issues with Skype, and if you want to rip to FLAC you have to jump through an extra hoop to be able to seek to the middle of a track (at least if you use Amarok and k3b). However, I really don't get this presentation issue. Is that something that's been fixed since XP? I haven't seen anyone try it with Vista or 7, but in XP there's no end of issues, the most common being movies only playing on the laptop's screen and that dialog asking every few minutes if now would be a good time to restart your computer to install updates. On the Linux side, I plug it into the VGA port on my netbook and things just work. At least, that was the case in Kubuntu. I haven't tried it with openSUSE yet.
  5. I switched to Ubuntu from Fedora for KDE support and greater stability. With Fedora, the attitude toward KDE is that they are a Gnome distribution, so you're lucky they provide the packages at all, let alone that the packages work. They aren't much better with supported packages, and things breaking is almost a fact of life with Fedora. KDE support is a bit better with Kubuntu being an official derivative, but it's still a derivative, so it's not supported to nearly the extent the Gnome desktop is. Also, Ubuntu's focus on usability means that I've had good luck even with backports enabled. valderman pretty much covered why I moved away from Ubuntu and currently use openSUSE. The Ubuntu forums try to be helpful (no RTFM attitude) but often seem like the blind leading the blind. Also, manual configuration is very nice in openSUSE with YaST, where Ubuntu makes you jump through hoops. For example, there's a PPA that gets recommended regularly for the aoTuV updates to the Ogg Vorbis libraries, but to use it you'd have to uninstall the standard libraries (which means you'll have to do something about the dependencies), install the version from the PPA, and lock that version, meaning you'll have to manually track updates to the libraries. Unfortunately, aoTuV isn't available in the OBS for openSUSE, but, if it were, you'd just select the version you want in YaST, which changes the vendor so it tracks the version in that repository.
  6. [quote name='Extrarius' timestamp='1294678656' post='4756705'] The reason I'd guess people are leaving is exactly because they are here to enjoy technical discussions with competent developers. The new design looks largely targeted to draw in younger, less experienced people that will contribute more questions than answers and (based on my experience with other social networking sites) more soliloquies, snark, and sass with a significant shortage of serious science and other interesting-to-me topics. If the community makeup switches so such demographics match that description, it will not have the same kinds of content that such people are interested in.[/quote] If all the competent developers leave because they're afraid of the possibility that all the competent developers will leave or be drowned out, then it becomes a self-fulfilling prophecy. As for snark: I wonder how many of those leaving would nod to the wisdom that gameplay is more important than graphics. [quote]Also, it makes my eyes hurt in a way that feels very similar to a migraine. I'm not sure how GDNet has always managed that, considering that most sites on the internet used black-on-white, but somehow only this site has ever continually caused pain. I could easily see why this alone would cause people to leave, at least until a less painful theme is available. [/quote] For me it's about wrapping my brain around the layout so I don't have to think about the layout. For the moment, I can't just scan the page; I have to think about the layout before I can start thinking about the content. However, it's a well-organized layout, so it won't be too hard. Also related, my eyes still try to jump to where the information used to be even though I know it's somewhere else, which is causing a strain similar to trying to bring the foreground into focus in a 3D movie. It'll just take a little time. (For me, at least.)
  7. Quote:Original post by way2lazy2care I'd really consider getting a smartphone with decent media capabilities if I were you. For the price of a good MP3 player you get a good MP3 player and phone in one device. I like keeping my media player separate from my phone so that each is on its own battery. I suppose I could just carry an extra phone battery, but then I'm still carrying two things and my MP3 player isn't much larger than the battery. Quote:Original post by AndreTheGiant in summary I'm looking for: - a SIMPLE mp3 player - no screen (ok maybe a cheapo LED, but definitely no fancy LCD) - flash memory (no spinning disk) - able to plug in to my auxiliary port in my car - simple interface - forward, backward, play, pause, volume. - as large as possible. Could live with 4 GB, 8 is better I've been pleased with my Sansa Fuze, and I have a friend who bought a Sansa Clip and seems to like it. Sansa has since updated them to the Fuze+ and Clip+. Either would, in my opinion, fit every point above except that the Fuze has a "fancy" screen and the Clip is a bit simpler. I think the old Clip maxed out at 4GB, but the Clip+ has an 8GB model and the Fuze can go to 16GB. Also, both can be expanded with a microSD card that, unlike on the Creative Zen, is integrated with the rest of the library. My only complaint about the Fuze is that it's starting to get a little flaky after 3 years. Sometimes the screen is corrupted when I turn it on, and something with the microSD card has come loose so it refreshes the media a lot.
  8. Quote:Original post by Fenrisulvur I'm not exactly sure which terms and notation you're taking as "common", here, or in what sense. Are you rejecting structure like metric spaces and definitions like that of the circle I gave in favour of some unstated common-sense interpretation, or are you familiar with such structural abstractions and objecting to having it all rehashed? More the latter. For example, this says the same thing in three ways: "the set of ordered pairs of real numbers, in other words ". What seems strange to me is that I believe the first would reach the largest audience, since the Cartesian product is likely to be introduced later in any math program. It's maybe justified by the use of R*R in the mapping notation to make the connection more immediate, but functional notation with implicit (co)domain and range would've worked as well. Quote: Quote:Original post by Way Walker (you even got fed up with it and just said you would be using "vector space axioms", which usually goes without saying) Oh, no it doesn't. That line would've been torn apart in a more formal setting, I hadn't defined any structure other than a metric at that point. Do you have any idea how many vector spaces can be constructed on R*R? How many are isomorphic? Or, more to the point, how many are isomorphic to the Euclidean space? Quote: I maintain that an answer to the troll's paradox is going to have to be structural, it's going to have to illuminate criteria which govern whether a sequence of bounding shapes do or do not converge to give the circumference of the circle, and we're probably going to need a formal definition of what a circumference actually is. I agree, but the simplest definition would be to integrate the length along the curve (see the sketch at the end of this post). Quote:Original post by JoeCooper I don't think we could possibly have a procedure that can validate any and all possible shapes you could throw at it, for all manner of procedures, with my philosophy-student level math. Maybe true, maybe not. I was hoping something would come out of the discussion. [smile] Quote: As I understood the specifications originally, all I felt I needed to do was to show that this particular gizmo isn't approximating a circle. Actually, the problem is that it is approximating a circle (defined as the set of points), but not all of its properties are good approximations of the properties of a circle. In particular the perimeter, but also, for example, the first derivative. The n-gon method also doesn't provide good approximations of all the properties of a circle, like the second derivative. Quote: I don't mean this to sound hostile in any way, but you seem confident that you have a superior handle on the maths to everyone else - why not pitch a fitness test? Doesn't seem hostile, and I almost certainly don't have a superior handle on the math. For one thing, I know very little about fractals. I've been trying to think of a good fitness test, but I'm having trouble finding a necessary condition. One problem is that the limit of the derivative is not just incorrect, but undefined. Quote:Original post by Eelco 'the derivatives'? Only the zeroth and first, actually. That still leaves the nagging question as to why it is only those two that matter. On a more general level than nilkn's answer, the different derivatives have different meanings. Having a continuous first derivative makes a function smooth, so, since the lack of smoothness is part of the problem, it wouldn't be a surprise if the first derivative is important.
Likewise, if curvature isn't an issue, then the second derivative probably isn't important. Quote: Emphasis mine. The second derivative and upwards are all zero for a polygon, so they do not converge at all. Technically, they do converge, just not to the same value as the derivatives of the figure the polygons converge to. I wonder if they have to converge to the same value, or if convergence is enough (e.g. it just needs to be smooth). Of course, I'm not sure that you can have the zeroth derivative converge to the right value and the first derivative converge without the first derivative converging to the right value.
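For reference, by "integrate the length along the curve" I mean the standard arc length; a sketch in LaTeX (nothing specific to this thread):
[code]
L = \int_a^b \sqrt{x'(t)^2 + y'(t)^2}\, dt
% e.g. x(t) = r\cos t,\ y(t) = r\sin t,\ t \in [0, 2\pi] gives L = 2\pi r
[/code]
This is also part of why the first derivative shows up at all: the integrand only involves the tangent, not the curvature.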
  9. Quote:Original post by JoeCooper Quote:everything after made my head explode ... I can't add anything so I better bail I'll write a little more just to answer, but I'm exhausted now, let's stop firing please. Sorry for playing the devil's advocate. I stopped halfway through Fenrisulvur's dense math, too. Fenrisulvur: For someone who accuses others of writing like philosophy students, your math looks like a philosopher's or a new student's. Reading it is slow going because you spend so much time defining common terms and notation (you even got fed up with it and just said you would be using "vector space axioms", which usually goes without saying). It would also help if you didn't use mathematical notation to cram so much onto a single line. It's like complaints that Perl can look like line noise. (I often get complaints about being too heavy on the math, so this is a bit of the pot calling the kettle black. [smile]) Quote: Quote:What's the "deviation"? Is it the (perpendicular) distance from the circle to the point Yessir; for a given vertex, d = r^2 - x^2 + y^2 I was thinking d = |r - sqrt(x^2 + y^2)|, but it doesn't really affect anything I said. Quote: Quote:take the circle circumscribing the regular n-gon. The metric is always infinite but, again, it produces the correct result. Wrong; it's actually worse. The above deviation gives 0 for any individual vertex (ideal!), but since there's an infinite number of them, the sum of them would be 0 * infinity, which is undefined, so the test cannot be conducted. I think this is the upper vs. lower bound thing again. I was going by the upper bound (i.e. the n-gon circumscribes the circle), so the deviation would be a positive number. For the lower bound, just inscribe the circle within the n-gon instead of circumscribing the n-gon.
  10. Quote:Original post by JoeCooper Quote:But why does this metric mean anything? It means something because knowing that these extreme-hypotenuse-vertices always alternate with the valid vertices, this metric shows whether or not the shape smooths into a circle as n approaches infinity. Since at the end we take the sum of the distances from each vertex to the next, I take the sum of the deviation. Maybe I misunderstood the metric you proposed. What's the "deviation"? Is it the (perpendicular) distance from the circle to the point, or the distance from the point of contact to the point of furthest deviation? I was assuming the former (and will for the rest of this post). However, if it's the latter, then it tells us, at best, nothing more than we already knew: the deviation does not decrease. In particular, it doesn't tell us whether or not this value is correct. Quote: Quote:The same n isn't really comparable between the two, but the way they behave as n increases is comparable. The comparison doesn't really matter. In fact, nevermind the regular polygon method. It doesn't matter how they compare; only whether or not the procedure in question does as it says on the tin. It matters because it gives us insight into why the value isn't decreasing. What it shows is that the proposed metric depends just as much on the number of worst points as it does on how those points converge. The thing is, there's no reason the number of worst points should matter at all. Also, it's useful to consider the regular polygon method since any proposed metric should reject the excavation method but not reject the regular polygon method (or the three methods I suggested in my previous post). Quote: Since we know there are valid points, and between two valid points are extreme-hypotenuse-points, we can assess if these extremes go away as n approaches infinity. Since the value goes up instead of down, we see that it does not, thus failing at its own goal, failing to approximate a circle's shape and therefore giving us no reason to accept its perimeter as an approximation of a circle's. But I gave three examples that fail the given metric but still approximate the circle's shape in the sense that the limiting curve is a circle and the limiting perimeter is the circumference of that circle. One has no smoothing out to do since it's already a circle. I gave two examples that have the same performance as the excavation method but produce the correct perimeter. So, the metric fails at its own goal: rejecting models that don't produce the correct perimeter. Quote: The fact that its area does is interesting, but that doesn't mean anything to its perimeter. Both the perimeter and area stem from the geometry. Therefore we should not consider its area. The reason I keep mentioning the enclosed area is that it's what you get when you iron out some of the issues with the proposed metric.
  11. Quote:Original post by Antheus Quote:Original post by Way Walker but the most basic definition of pi is a relation between the perimeter and diameter. The geometric definition is either that of circumference or that of area. Quote:It's maybe more robust to derive it from the area, Except that in this case the curve does not define Pi, if my assumption about difference between perimeter of curve and area holds. I think I maybe misunderstood what you were saying. There's nothing wrong with deriving pi from the perimeter of a circle; the problem comes in assuming that, if the limit of a sequence of curves is a circle, then the limit of the perimeters of that sequence will be the same as the perimeter of the circle. However, you can still (my confusion is perhaps from the word "must"?) derive pi from the limit of the areas of that sequence (since the area enclosed between the circle and the curve must go to 0, so the areas become identical in the limit).
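As a quick sanity check on that, here's the same point in symbols for the unit-diameter circle from the original construction (a sketch):
[code]
\text{Unit-diameter circle } (r = \tfrac{1}{2}) \text{ inside the unit square:}\\
A_n \to \pi r^2 = \tfrac{\pi}{4} \quad\Rightarrow\quad \pi = 4 \lim_{n\to\infty} A_n \quad\text{(correct)}\\
L_n = 4 \text{ for every } n, \quad\text{but circumference} = \pi \cdot 1 = \pi \quad\text{(so } \lim L_n \neq \text{circumference)}
[/code]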
  12. Quote:Original post by JoeCooper First, singling out the worst points. I don't necessarily do that, actually. If I add in the valid vertices' deviation, all I do in either analysis is add 0 (and halve it, I think. I did this earlier and now I can't remember exactly.) It's easy enough to create a construction that exploits the fact that it singles out the worst points. For example, let f(n) be this metric for the nth iteration of the square excavation. Now, create a star whose inner vertices are along the limiting circle (radius r_∞), whose outer vertices are along the circle running through the vertices of the circumscribing regular n-gon (radius r_n), and which has ceil(f(n)/(r_n - r_∞)) points. This creates a concave figure where the given metric will always be greater than or equal to f(n) but which produces the correct result. Modifying this slightly, you could make the metric behave like any function that doesn't go to infinity. You could also create a sequence that is the limiting circle at all points except for the worst points of the excavation sequence. This is a discontinuous figure with metric f(n) that also produces the correct result. For a third example, take the circle circumscribing the regular n-gon. The metric is always infinite but, again, it produces the correct result. Quote: Secondly, the iteration # means nothing outside of a given procedure. It's as meaningful as comparing an O(n) algorithm to an O(2n) algorithm. The same n isn't really comparable between the two, but the way they behave as n increases is comparable. Quote: Quote:Perhaps the area enclosed between the two would be a better metric I disagree; the area doesn't predict the shape's perimeter. As I said, in this case, the enclosed area is just the weighted deviation of all points normalized by the arc length. It's a generalization of the metric you proposed that takes care of the fact that it singles out the worst points (by taking the weighted contribution of all points) and the fact that arcs of the worst deviation would make it infinite (by normalizing by arc length). Quote: Quote:It's not so simple as showing that it is inferior. What we need is to show that it's inferior in a meaningful way. This procedure iterates over a square, cutting parts away, so that the vertices are closer and closer to a circle. My analysis is to examine their hypotenuse, and the deviation from a circle, and see if it actually converges on 0 or not. It does not, so it is not an approximation of a circle, and that makes it inappropriate in the only meaningful way; if it isn't a circle, then any resemblance between its properties and those of a circle is purely coincidental. But why does this metric mean anything? Another metric is how close the limiting figure is to a circle, which it passes. You could also consider whether the enclosed area goes to zero, which it also passes. I gave three examples of false negatives by the metric you proposed, so it's not a necessary condition. It may be sufficient, but it cannot reject an approximation.
  13. Quote:Original post by JoeCooper If so, in a regular polygon (the method this spoofs that - I think - we're trying to differentiate it from), every specified vertex lies on a circle. Actually, that would produce a lower bound. The upper bound is found by placing the midpoints of the edges, not the vertices, of a regular polygon on the circle. I think your analysis wouldn't be changed too much, but I wouldn't be surprised if the equivalent lower-bound "spoof" would, like the Koch snowflake, have an infinite perimeter (so infinity < pi < 4 [smile]). Quote: Thus we can quantify that squares are an inferior approximation of a circle compared to a regular polygon. It's not so simple as showing that it is inferior. What we need is to show that it's inferior in a meaningful way. You can approximate the volume of a cone as a stack of cylinders or as a stack of truncated cones. You can quantify how much better the latter is, but both will produce the same result in the limit as the number of sections in the stack approaches infinity (see the sketch at the end of this post). One thing that bothers me about this analysis is that it singles out the worst points. If we just look at the best points, the number of points with zero deviation increases linearly for the regular polygons but exponentially for the square excavation. By this metric, the square excavation is quantifiably better. Perhaps the area enclosed between the two would be a better metric (since it's the sum total deviation over all points normalized by the arc length), but that would seem to be a better measure of how quickly the approximations converge and not how well they converge. As I said before, I think the best criterion will be related to the convergence of the tangents. See nilkn's post for a good explanation of that issue. Quote:Original post by Antheus I'd say that the problem here comes from implication that value of Pi follows directly from perimeter, whereas even when using the curve it must be derived from area. It's maybe more robust to derive it from the area, but the most basic definition of pi is a relation between the perimeter and diameter.
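To make the cone example concrete, here's the stack-of-cylinders version worked out (a sketch; slicing from the apex into n slabs of thickness h/n):
[code]
V_n = \sum_{k=1}^{n} \pi \left(\frac{kr}{n}\right)^{2} \frac{h}{n}
    = \frac{\pi r^2 h}{n^3} \cdot \frac{n(n+1)(2n+1)}{6}
    \;\longrightarrow\; \frac{\pi r^2 h}{3}
[/code]
The truncated cones converge faster, but both stacks reach the exact volume in the limit, which is why "quantifiably better" on its own doesn't settle which limit to trust.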
  14. Quote:Original post by JoeCooper Quote:the construction in the OP still seems to fulfill the usual definition of a circle: all the points in a plane that are a given distance away from a given point Maybe that's not the case. Again, if you just zoom in, it's stair-steps, it's not a circle. The shape on your screen isn't really a circle either, given that it's also painted onto such a grid. Sorry, I wasn't precise enough in what I was saying. It is a circle if we interpret "repeat to infinity" as "take the limit as this sequence is iterated to infinity" or, if we number the steps and consider the nth step, "take the limit as n approaches infinity". Quote: We do have preconceived notions, assumptions and specifications, which is perfectly OK and in fact mandatory. If we can't agree on what a circle is and what Pi is supposed to do, then we might as well skip the diagrams, make up numbers and call it a day. This isn't a very good way to proceed. You still haven't explained why the one is correct and the other isn't. That is, why the procedure that yields 3.141... is correct and the one that yields 4 isn't. The only explanation you have is that you knew beforehand that 3.141... is the correct answer, but by what procedure did you come up with that number? How did you decide that that procedure yielded the correct answer? This is important because in math and science you can often get two different values for an unknown quantity and you need to determine which procedure is correct. As for why the original construction "should" produce pi: intuitively, if curve A converges to curve B, then the length of curve A should converge to the length of curve B. So, if we know what length curve A converges to and curve B is a circle of known diameter, then we should be able to calculate pi from its primary definition: the ratio of a circle's perimeter to its diameter. In the original construction, B is a circle with a diameter of 1 and A is a curve that converges to a circle and whose length converges to 4. The problem is that the "intuitive" part there is wrong. Quote:Original post by BlueSalamander "If we had a sequence of approximations whose direction of travel converged correctly, the length would converge correctly too." So, an obvious necessary condition is that the curve itself converges, but that's apparently not sufficient. Given that the curve converges, is a converging gradient sufficient?
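Here's why I'd expect a converging gradient to be enough, at least in the nicest case (a sketch, assuming the curves and their derivatives converge uniformly on a parameter interval [a, b]):
[code]
L(\gamma_n) = \int_a^b \lVert \gamma_n'(t) \rVert \, dt
% \gamma_n' \to \gamma' uniformly  =>  \lVert \gamma_n' \rVert \to \lVert \gamma' \rVert uniformly
% =>  L(\gamma_n) \to \int_a^b \lVert \gamma'(t) \rVert \, dt = L(\gamma)
% The staircase fails exactly here: its tangents only take the values (\pm 1, 0) and (0, \pm 1),
% so \gamma_n' never converges to the circle's tangent even though \gamma_n converges to the circle.
[/code]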
  15. Quote:Original post by JoeCooper Quote:Original post by Way Walker But these criticisms also apply to circumscribing regular polygons That was the exact previous thing I went into. Quote:Also, because "I said so" isn't a satisfying mathematical explanation. Excuse me? I said that if you try to calculate it through the area, you get a radically different figure, and while using the regular polygon method also isn't perfect, the difference is dramatically smaller to the point of being useful. I didn't mean to single out your comment, since it was a comment on the whole discussion (including the linked thread), which is why I included another post from there as well. A lot of the explanations of why it doesn't work are no more insightful than "I said so". For example, in the linked discussion the explanations are, "you cannot interchange limits and lengths," and, "Suppose that X(n) is a sequence of objects that have a meaningful limit X. If all of the X(n) have a property P, then [...] most people will accept that the limit must have P without thinking about it." The first isn't entirely true because you can if you have the right limiting sequence (e.g. regular polygons in this case), and the second gives no reason as to why it doesn't work in this case while there are still "numerous theorems in maths that follow this pattern." Basically, it doesn't work because they said so. Students (and others) new to a particular area of math sometimes get the right answer by doing something wrong or even completely irrelevant. Their answer is quantifiably correct (there being literally 0 difference between their answer and the correct answer), but there's not necessarily any reason it should be, or that it will be anywhere near correct in other cases. Why do regular polygons produce a better result? Or is it just chance, like a coder who makes a working program by randomly copy-and-pasting code from the internet? Or is it just that it was on the internet so it must be true? I think it's related to the fact that the limiting set really is a circle, so its circumference really is pi, and the derivative of the limiting set exists everywhere, but the limit of the derivatives is undefined everywhere. Maybe the full explanation requires a deeper knowledge of fractals than one can give in a single post to a technical but still general audience? And you're excused. [smile]