CodeDemon

Members
  • Content count

    202
  • Joined

  • Last visited

Community Reputation

363 Neutral

About CodeDemon

  • Rank
    Member
  1. Why XML is all the rage now?

      S-expressions are just as powerful, yet more terse. Naughty Dog uses them in the Uncharted Engine for similar things.
  2. Why XML is all the rage now?

      Quite the array of language projects you have there! I too am fond of S-expressions over XML, and I've used them for data and DSLs in a number of projects. You can't beat the terseness and expressive power, and it's not hard to roll your own parser to handle them. I share many of the opinions from: http://c2.com/cgi/wiki?XmlIsaPoorCopyOfEssExpressions

      As for my own projects, I've built a custom R6RS parser in C++ and done some interesting things with it. For specifying data as maps/sets/vectors, I added support for special forms that yield new data-structure semantics, added Clojure-like syntactic sugar to the lexer/parser so that braces and square brackets can be used to define such data structures, and added a quick tree-rewriting pass to the data compiler to convert the internal list AST representation into the appropriate container type.

      For simple data, sometimes I just go with simple key-value text files if I can get away with it (less is more! strtok_r does the job well enough), and I've recently been experimenting with parsing-expression-grammar generators to quickly create parser combinators for custom DSLs that generate more complex data or code as s-expressions or C++.

      A shame that many of the "big iron" game studios still use XML for a lot of things, although I've managed to convince a number of people that it's time to move on. I dread the days when I'm tasked with working on anything touching the stuff. In short, if you're still using XML, you're needlessly wading through an endless swamp of pain, suffering, and obtuse complexity. Things can be better.
  3. John Carmack a racist?

    [quote name='cowsarenotevil' timestamp='1337415383' post='4941376'] Let me be clear; I have nothing against you. I think we probably just ultimately have different value systems. There are certain things that I want to see exist in the future, and there are certain things that you want to see exist in the future, and they might turn out to be mutually exclusive (in fact I'd say that's fairly certain). And that's fine. [/quote] This is not an argument about fundamental value systems. This is an argument about certain historical, societal, and biological truths. Objective facts aren't a projection of one's value system; they're either true or they're not. Attack my actual statements and my evidence with logic. Don't fallaciously attack my character, even if that wasn't your intention. Hiding ad hominem or strawman arguments behind the illusion of apology and compromise doesn't make them acceptable. [quote name='cowsarenotevil' timestamp='1337415383' post='4941376'] There are certain things you're saying that I don't think are true, factually, but even if you changed your mind about all of those things I wouldn't expect or want you to change your fundamental value system, so I'm willing to not worry about those things if you are. [/quote] I have provided what I believe to be factual evidence to support my claims. Perhaps you haven't yet had the time or the motivation to fully go through what I have cited. If you don't want to argue about it, or to research it on your own time, then that's your loss.
  4. John Carmack a racist?

    [quote name='cowsarenotevil' timestamp='1337407149' post='4941360'] I don't mean scary as uncivil, just as some pretty odd sort-of-paranoid beliefs that I would have thought was a bit farther from the mainstream. [/quote] There was a time when the idea that the Earth was flat instead of spherical was mainstream. Yes, there is the stigma of conspiracy and paranoia, but it's all in one's head. Judge the evidence or the argument on its own merits. [quote name='cowsarenotevil' timestamp='1337407149' post='4941360'] I guess I just don't spend that much time worrying about protecting "White European civilization" because it's not really in danger and who cares anyway? [/quote] At the very least, it's in the same type of danger as the [url="http://www.japantoday.com/category/national/view/japan-faces-extinction-in-1000-years"]Japanese civilization[/url]. With a global average fertility rate of around ~1.6 and comprising only around 8% of the World's population, White Europeans will be extinct within a matter of centuries and displaced far before then unless current trends change. Keep in mind that Japan keeps its borders much more closed to immigration than the West. Of course, things can and will change in the future, but we don't live in the future--we're responsible for the future. [media]http://www.youtube.com/watch?v=jxUD8E-qbyI[/media] [quote name='cowsarenotevil' timestamp='1337407149' post='4941360'] I see using skin color as an important indicator as, if not evil, at least very lazy and ultimately not beneficial to either party. [/quote] Is it simply just skin color? And why do you think it is evil to protect something? 
[url="http://www.edge.org/conversation/rethinking-out-of-africa"]http://www.edge.org/...g-out-of-africa[/url] [url="http://www.nature.com/news/special-issue-peopling-the-planet-1.10561"]http://www.nature.co...-planet-1.10561[/url] [url="http://www.pnas.org/content/early/2011/08/29/1109300108"]http://www.pnas.org/...8/29/1109300108[/url] [url="http://www.ncbi.nlm.nih.gov/pubmed/20448178?dopt=Abstract&holding=npg"]http://www.ncbi.nlm....act&holding=npg[/url] [url="http://www.ncbi.nlm.nih.gov/pubmed/21179161?dopt=Abstract&holding=npg"]http://www.ncbi.nlm....act&holding=npg[/url] [url="http://www.medical-hypotheses.com/article/S0306-9877%2809%2900537-4/abstract"]http://www.medical-h...0537-4/abstract[/url] [url="http://www.nature.com/mp/journal/v16/n10/full/mp201185a.html"]http://www.nature.co.../mp201185a.html[/url] [url="http://www.nature.com/ng/journal/vaop/ncurrent/full/ng.2250.html"]http://www.nature.co...ll/ng.2250.html[/url] [url="http://www.nature.com/ng/journal/vaop/ncurrent/full/ng.2237.html"]http://www.nature.co...ll/ng.2237.html[/url] [url="http://www.sciencemag.org/content/309/5741/1717.abstract"]http://www.sciencema...1/1717.abstract[/url]
  5. John Carmack a racist?

    [quote name='A Brain in a Vat' timestamp='1337389299' post='4941326'] I hope this isn't the last we hear from you on the topic. I don't give a shit about political correctness, but ignorance, myopia, elitism, and prejudice all rub me the wrong way. [/quote] I couldn't help but notice that you say you don't care about Political Correctness, and yet you unwittingly play your role in the narrative. Perhaps it's time to unplug yourself from the matrix, wake up, and realize you're being played as nothing more than a pawn in a game of conquest that has been going on for over a century. [url="http://www.youtube.com/watch?v=q6c_dinY3fM"]http://www.youtube.com/watch?v=q6c_dinY3fM[/url] [url="http://www.nationalreview.com/articles/299918/censored-race-war-thomas-sowell"]http://www.nationalreview.com/articles/299918/censored-race-war-thomas-sowell[/url] Oh wait, that last editorial, like Derbyshire's, was also pulled days after being published. Gee, I wonder why. Here's the Google cache version. [url="http://webcache.googleusercontent.com/search?q=cache:Ikz6UnQKS8IJ:www.nationalreview.com/articles/299918/censored-race-war-thomas-sowell+&cd=1&hl=en&ct=clnk"]http://webcache.googleusercontent.com/search?q=cache:Ikz6UnQKS8IJ:www.nationalreview.com/articles/299918/censored-race-war-thomas-sowell+&cd=1&hl=en&ct=clnk[/url]
  6. John Carmack a racist?

    [quote name='A Brain in a Vat' timestamp='1337389299' post='4941326'] Regarding the article you actually were referring to, it's just as ignorant, if not as blatantly racist. [/quote] You do realize that Leon Trotsky, a key Bolshevik and leader of the Red Army during the genesis of the USSR, was the one who coined the word 'racist' and preconfigured the ideology of anti-racism so as to specifically destroy White European civilization, making it amenable to communist revolution (Source: [url="http://www.marxists.org/archive/trotsky/1930/hrr/index.htm"]http://www.marxists....0/hrr/index.htm[/url]). Anti-racism, like every Marxist movement, will ultimately fail in the same way that, say, literal Creationism has failed: it's not grounded in reality. History will venerate John Carmack and everyone else for their absence of apology, for there is no reason to apologize for the imposition of a bankrupt ideology.
  7. Which Country Should I Move To?

    And for anyone who thought I was being over-dramatic: like clockwork, it's happening. The EU is finished. http://www.cnbc.com/id/45609228 British Prime Minister David Cameron announces that Britain will never join the Euro, and will not sign a new European Union treaty. This is the beginning of the end.
  8. Which Country Should I Move To?

    http://en.wikipedia.org/wiki/Patagonia I wouldn't trust Canada; if shit hits the fan, the authorities here won't have any qualms about deporting you to America. http://www.cbc.ca/news/politics/story/2011/12/06/weston-border-deal-exit.html Also, the EU is dangerously close (a matter of days) to breaking up, which will lead to civil disorder and socio-economic breakdown. http://www.bbc.co.uk/news/business-16082755 http://www.thisismoney.co.uk/money/markets/article-2071800/Tesco-plans-collapse-eurozone.html?ito=feeds-newsxml http://blogs.telegraph.co.uk/news/jameskirkup/100122774/eurozone-crisis-summit-what-david-cameron-will-say/ http://www.guardian.co.uk/world/2011/dec/08/treaty-changes-on-eu-summit-agenda
  9. Machine Learning in Graphics

    You can apply machine learning to anything that requires optimizing a set of parameters or functions controlling the transformation of data. Generally, it's used in cases where searching the entire space of solutions has non-trivial complexity and you simply want to find a good-enough solution that isn't obvious. I can think of at least one area within real-time computer graphics that might bear fruit. Occlusion culling often uses general-purpose hard-coded heuristics to determine "good" objects or surfaces for use as occluders, and may rely on input from content designers or programmers to flag which objects should be used for occlusion or to generate acceptable bounding volumes or primitives to use when rendering into the occlusion buffer. Using machine learning, you could have the system do all of the hard work for you: determining good heuristics custom-tailored for a specific scene, level, or sub-region/sector within a level; generating better occlusion geometry for static (occluder planes) or dynamic (optimal LOD for an occluder mesh) datasets; choosing how long to wait before reconsidering an object as an occluder when implementing temporal coherence; and so on. There's probably an endless number of things you could optimize for. A starting point would probably involve simulating the contribution of various objects as occluders across the different regions of a level as a preprocessing step, generating feature vectors that capture the camera position & orientation, a measure of each object's contribution to the occlusion buffer, and perhaps some other information, and then performing clustering analysis of the feature vectors, optimizing for the least amount of overdraw and the least cost of performing the occlusion culling pass. Note that your feature vectors will be very large, as they have to account for every possible occluder in a given level data set, and so they will be n-dimensional where n is probably in the thousands or tens of thousands.
You could probably implement a lot of this to run on a GPU using DirectCompute or OpenCL. Alternatively, you could record a number of play-throughs and use the playback for your simulation and analysis phase, combining the results. Think of it as profile-guided optimization for occlusion culling. I'm not sure how much of this is already done in COTS occlusion systems like Umbra, but I seem to recall the original paper on it used simple heuristics and statistical geometric methods to generate occluder sets and the like.
  10. [quote name='Hodgman' timestamp='1295491988' post='4761629'] A lot of traditional material on "multi-threading" focuses on the practice of having shared memory between threads, synchronised via mutexes/semaphores/etc... However, this idea of "[i]what can I put onto a thread?[/i]" is very outdated. The question is now, "[i]how do I write game code that runs on a modern (multicore, or even NUMA) CPU?[/i]". To make a game (engine) that seamlessly scales to multi-core CPUs, you need to be writing things at a completely different level than threads. This is important: To write multi-threaded code, you shouldn't be dealing with threads. Threads are used at the lowest level of your multi-core framework, but the end user of that framework ([i]i.e. the game programmer[/i]) shouldn't even have "thread" in their vocabulary. You ([i]the low level framework author[/i]) will use threads to implement a higher level model, such as flow-based programming, or functional programming, or the actor model, or a message-passing interface, or anything where the concept of threads/mutexes isn't required. Things like DMA transfer alignment, or number of hardware threads, or cache atomicity should all be transparent at the game programming level. The game programmer should just be able to write functions, which your system will execute safely in a multicore environment automagically. [/quote] You've got the right idea, but it depends on what your role is. If you're the lead technical guy on a project, you should still be highly knowledgeable about everything down to the lowest level. You still need to deal with threads to some degree, and you need to be aware of the memory model of your target architecture and programming environment. If you try to develop high-level abstractions that hide threads, you won't be able to take advantage of things like thread affinity to ensure that your thread schedulers are NUMA aware.
Many modern multi-core single-CPU systems are somewhat non-uniform in their memory architecture if you consider the separate L1 cache for each core. If you try to build abstractions that hide the memory model, you may end up with solutions that do not take advantage of cache coherency. Memory management is intrinsically tied to your threading model when it comes to performance and scalability. In languages with manual memory management, unfortunately, the only way to keep things NUMA aware is to also do manual thread scheduling. And you can't use a single type of thread pool / task scheduler for all problems in a general manner and expect to get optimal performance in each case. There's far more than one way to implement a scheduler, and each has different tradeoffs and benefits. Herein lies the problem with general-purpose frameworks and language abstractions. Frameworks like Intel Threading Building Blocks and Microsoft's Parallel Patterns Library are good starting points, but people need to understand that those libraries just give you a hammer, a hand saw, and a screwdriver; when you need a cordless reciprocating saw with a blade capable of cutting through metal pipe, you're out of luck. It's quite possible to out-do these libraries. I know what you were really getting at: people shouldn't be sticking to the traditional paradigm of trying to shoehorn in a few extra threads and throwing blocking synchronization primitives around shared data, because that just does not scale at all. You've got to jump in at the deep end and think about concurrency on a whole new level. And you're right that it should be as easy as possible for those working at the highest level writing game logic, game components, and whatnot. But you shouldn't deceive yourself that it's possible to hide everything and generalize concurrency for all use cases.
You may no longer be shoe-horning in a few extra threads, sure; instead you're just trying to force everything into a small handful of concurrency abstractions, and that's not always much better.
  11. [quote name='nfries88' timestamp='1295385965' post='4760898'] The time consumed by a function being virtual is usually nothing compared to the time it consumes just to execute. [/quote] That really depends. On modern CPUs, where instruction parallelism has been maximized quite thoroughly, it's possible to execute a fairly complex function in far less time than it takes for a cache miss to resolve. If you have a wide variety of game objects and components with different concrete types and vtables, and you're doing a lot of different things in the components, accessing different types of data, it's quite possible for the relevant section of the vtable for a particular concrete game component to be thrown out of the L1/L2/L3 cache during each iteration of the update loop. Cache misses galore. You're right about not optimizing, though, if it's not really a major concern. Most hobbyist projects on modern desktop platforms won't even get close to hitting the hardware performance wall. If you're developing for smaller mobile devices, it's a lot easier to hit, though.
  12. It's true that templates offer a form of compile-time polymorphism, but ultimately, in the context of a single run-time call site (your update loop), you still need to map compile-time bindings into run-time bindings. So no, you aren't going to find an easy way to eliminate the virtual call there while maintaining your existing design. That leaves changing your design. If you want to get rid of virtuals in your GO/scene graph system, you should divide your system in two: a flat game object/component sub-system, which may still use virtuals for updating, and a hierarchical scene graph where you can pull the node-update and render calls out of the node classes into a heavier-grained node manager/scene-world class and optimize everything in a data-oriented fashion. Your game objects and components keep a reference to the scene nodes they're interested in. You can even eliminate the virtual calls from the game object sub-system, just by making your update code more general and/or by hard-coding update calls to different class types in your main update loop. This is sometimes done to maximize performance, at the cost of increased maintenance. EDIT: Just want to clarify that what you essentially want to do is reduce the virtual update calls (or calls into your scripting engine) to just those scene nodes that actually need game logic. If you have an independent physics sub-system and you want to give a scene node some physics, you just need to associate a rigid body object within the physics sub-system with the scene node, for example. The physics sub-system can be written in such a way as to reduce or eliminate virtual calls as well. No need to use a general-purpose polymorphic game object here if you design things right.
  13. [quote name='Atomical' timestamp='1295367068' post='4760746'] [quote name='rip-off' timestamp='1295361267' post='4760711'] reply [/quote] Do you have any tips for the synchronisation issues you mentioned? Take for example synchronizing rendering and game logic, set that I just have these two threads. [/quote] When you find the word "synchronisation" in the context of multi-threading, you should be suspicious of it, because what it really means is anti-threading. You should do your best to make the data that each thread is working on as independent as possible from other threads. Or rather, you should do your best to eliminate write-sharing, as that's what really kills performance. It's alright to use critical sections/monitors/mutexes and other heavyweight blocking synchronization primitives on cold paths, but in hot paths that are executed a lot, you may want to use different techniques that offer weaker synchronization guarantees. But you should usually start out using locks, make sure your code works, and then refactor hot areas of your code to use non-blocking algorithms, at least until you get some experience in concurrent programming. Non-blocking message-passing queues/stacks are ideal for notifying other threads of particular events in an independent manner. Multi-version concurrency control, read-copy-update, and software transactional memory mechanisms are often better than reader-writer locks for data where you want shared reading across threads while synchronizing the writes. Use asynchronous file and network I/O for reading in data and processing it in stages. Design intelligent thread pool schedulers to balance and distribute your workload. Try to use non-blocking thread-local memory allocators for data internal to a particular thread, and use a scalable thread-safe memory allocator for your general-purpose allocations.
  14. Pulse Code Modulation questions

    [quote name='godsenddeath' timestamp='1295311931' post='4760464'] I've been working on some audio stuff, loading wav's from disk, recording and playing, etc.... I understand PCM and how it's sampled, but what I don't understand is what the samples actually mean. How is the sound represented by the values? Higher values, higher volume? How does timbre, pitch, etc... affect the values (and more importantly, vice-versa). Also any suggestions on synchronizing multiple audio sources would be greatly appreciated, but I won't ask for too much . Thanks. [/quote] Each sample represents an amplitude relative to some scale over time (it's generally linear with PCM, from completely quiet to max volume, but it gets mapped to a logarithmic decibel scale by the digital-to-analog conversion hardware when output to a speaker). That's it. You're probably wondering where the magic is: how can extremely rich sounds and audio be reconstructed from this? The magic is in our ears and brains. Our brains are very good at integrating differences in the amplitude of mechanical vibrations (sound) over various time domains picked up by our inner ear, demodulating them into different frequency domains and mapping them to the unique sensations of different types of sound that we experience. It's really all in our heads. Timbre and pitch are simply mathematical properties that can be isolated and examined from a series of amplitude samples. If you plot any sound sample, say represented by a function [i][b]f[/b][/i], on a Cartesian graph, with amplitude being the Y axis and time being the X axis, it's possible to represent the plot of the function [b][i]f[/i][/b] by an infinite sum of sinusoidal waves. In the bounded and discrete case, if you have a sound with [i][b]n[/b][/i] samples, you can represent that sound precisely with [i][b]n/2[/b][/i] sinusoidal waves, where each wave represents a different frequency of a particular amplitude and phase.
Mapping a function into a set of sinusoidal waves is known as transforming the function into the frequency domain. This is usually done with the Discrete Fourier Transform or the Fast Fourier Transform. You can change the pitch of a sound by shifting the frequency of each sinusoidal wave when you reconstruct the digital samples during the inverse Fourier transform, which is essentially the same process as the forward transform. Also, what do you mean by synchronizing multiple audio sources?
  15. Intel c++ compiler and directx?

    [quote name='joegallagher' timestamp='1295039456' post='4759011'] Is there any way I could get DirectX running in Intel's C++ compiler??? [/quote] Uhh. It should work fine with DirectX out of the box. It's compatible with the Windows SDK headers and libraries. If you have Microsoft Visual Studio installed, it'll install a VS add-in allowing you to use the Intel compilers from within the IDE. It's a drop-in replacement for MSVC++. At least the commercial Windows version is. The non-commercial community version for Linux is, of course, more of a drop-in replacement for GCC.