Raghar

Members
  • Content count

    1888
  • Joined

  • Last visited

Community Reputation

96 Neutral

About Raghar

  • Rank
    Contributor
  1. Quote:Original post by johnstanp
     Quote:Original post by Raghar
     Bad. An integrator used in game development should be oversensitive, or exact. Clamping the rise in energy is simple; guessing the orientation of a null vector isn't.
     Who has ever talked about what an integrator used in game development should be?
     This is a game development forum, and he talked about an integrator borrowed from Game Programming Gems 4; in addition he talked about writing a physics engine. Bad energy clamping and energy loss are common problems with physics engines.
     Quote:Numerical integrators never give exact results: if the solution of a system of equations is analytically unknown, I don't know how an exact solution can be provided... So remember: numerical integration only gives approximate values (when compared to analytical solutions), and this is why we talk about the behaviour of the "error". We don't even talk about the inevitable error related to the approximation of real numbers.
     How do you know the analytical solution isn't only an approximation, and that the result of the numerical integration isn't mathematically exact? The person who derived the analytic solution could have mistaken a discrete problem for a continuous function. Anyway, I used the words oversensitive, or exact to mean: >= exact. Applying a bias to the results would cause the integrator to either overestimate the result or produce the exact result (when min(noise) + bias is the correct result). The result + bias is then clamped to a value that is safe for the engine.
     Quote:Original post by Splinter of Chaos
     Also is common sense: floating point operations are not exact. They too are approximations.
     They are an exact implementation of the standard. An exact result is mathematically inaccurate when the standard requires that inaccuracy because of physical restrictions. For the rest of the topic, see above.
     Quote:Compilers don't even attempt to optimize floating point equations due to the possibility of altering their inaccuracy. I'd say 4.8999951 when the goal was 4.9 is pretty good!
     Actually they did try to optimize floating point operations, but programmers' complaints forced them to decrease the aggressiveness of those optimizations. AFAIK PCSX2 doesn't work well when compiled with the most aggressive profile.
     Quote:However, since we're on the topic of accuracy, my implementation of RK4 integration prints this: [code]Position: 4.9[/code]
     So let's get back to the topic. An integration with position = 0, velocity -10, acceleration 20 resulted in this:
     Quote:
     step 1/4
     integrator0 pos 2.5 v 10.0
     integrator1 pos 0.0 v 10.0
     integrator2 pos 0.0 v 10.0
     step 1/10
     integrator0 pos 0.9999999999999998 v 10.0
     integrator1 pos 0.0 v 10.0
     integrator2 pos 6.661338147750939E-16 v 10.000000000000005
     step about 1/32
     integrator0 pos 0.3125 v 10.0
     integrator1 pos 0.0 v 10.0
     integrator2 pos -3.969047313034935E-15 v 10.0
     The first integrator is a first-order integrator, which perfectly satisfies what I said about integrators by definition. Its accuracy is poor enough that the floating point noise doesn't matter. The second integrator is second order, and solves the situation perfectly. The third integrator is higher order, and its deviation from the exact solution is caused by its internal behaviour: it has quite a lot of constants which are themselves only approximations. Obviously, because the problem is perfectly solvable by a second-order integrator, both the second and third integrators solved it easily. The 4.704 result looks quite bad. It nearly looks like a first-order integrator...
     re: agan0617
     Quote:
     MultiplyAccumulate(m_StateT1.m_Position, dt, m_StateT1.m_Velocity);
     MultiplyAccumulate(m_StateT1.m_Position, kHalf * dt * dt, m_AccelPrev);
     MultiplyAccumulate(m_StateT1.m_Velocity, dt, m_AccelPrev);
     Try changing the code to this (the highlighted change is the dt factor in the third call); it might fare better. (A minimal sketch of the first-order vs second-order difference follows below.)
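     For reference, here is a minimal, self-contained Java sketch of the point above. It is my own illustration, not code from the thread, and the class and variable names are made up. Under constant acceleration a first-order step drifts, while a second-order (velocity-Verlet-style) step reproduces the analytic result exactly.
     [code]
// Hypothetical illustration: first-order (explicit Euler) vs second-order
// (velocity-Verlet-style) integration under constant acceleration.
public class IntegratorDemo {
    public static void main(String[] args) {
        double a = 20.0, v0 = 10.0, x0 = 0.0, T = 1.0;   // toy values
        for (int steps : new int[]{4, 10, 32}) {
            double dt = T / steps;
            double xE = x0, vE = v0;   // first-order state
            double xV = x0, vV = v0;   // second-order state
            for (int i = 0; i < steps; i++) {
                xE += vE * dt;                       // position from old velocity only
                vE += a * dt;
                xV += vV * dt + 0.5 * a * dt * dt;   // includes the 0.5*a*dt^2 term
                vV += a * dt;
            }
            double exact = x0 + v0 * T + 0.5 * a * T * T;
            System.out.printf("dt=1/%d  first-order=%.15f  second-order=%.15f  exact=%.1f%n",
                              steps, xE, xV, exact);
        }
    }
}
     [/code]
     With these toy values the first-order result falls short of the exact 20.0 by a*T*T/(2*steps), i.e. 2.5, 1.0 and 0.3125, while the second-order result matches it up to floating point rounding.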
  2. What compiler do you guys use for java?

    javac.exe and Eclipse. (Though Eclipse is poor when someone needs to quickly create a class and write a few lines of code to test something.) DevFred: test-driven design doesn't have partially compilable code. It has only fully compilable code where the important parts are commented out.
  3. Quote:Original post by johnstanp
     The algorithm (or integration scheme) is correct, or at least seems to be, since I obtained the following values: position = 4.8999951
     Bad. An integrator used in game development should be oversensitive, or exact. Clamping the rise in energy is simple; guessing the orientation of a null vector isn't.
     Quote:And for dt = 4.0e-2 (25 frames per second)
     The rate of the game engine (or integrator) should be independent of the rate of the 3D engine. In other words, the rules are the same for everyone, no matter how fast the GFX card is. (A fixed-timestep loop, sketched below, is one way to get that.)
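     A common way to decouple the simulation rate from the rendering rate is a fixed-timestep loop with a time accumulator. The sketch below is my own illustration of that pattern (the class name and the toy state are made up), not code from the thread.
     [code]
// Minimal fixed-timestep loop sketch: physics always advances in constant DT
// steps, so results do not depend on how fast the renderer runs.
public class FixedStepLoop {
    static final double DT = 1.0 / 100.0;    // physics step, independent of FPS

    public static void main(String[] args) throws InterruptedException {
        double accumulator = 0.0;
        long previous = System.nanoTime();
        double x = 0.0, v = 10.0, a = 20.0;  // toy state

        while (x < 100.0) {                  // stand-in for the real game loop condition
            long now = System.nanoTime();
            accumulator += (now - previous) / 1e9;
            previous = now;

            while (accumulator >= DT) {      // consume elapsed real time in fixed slices
                v += a * DT;
                x += v * DT;
                accumulator -= DT;
            }
            // a renderer would interpolate and draw here; we just simulate a frame
            Thread.sleep(5);
        }
        System.out.println("final x = " + x);
    }
}
     [/code]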
  4. Multithreading considerations

    Quote:Original post by VanillaSnake21
    Actually I planned multithreading from the beginning. The reason I didn't write my framework with it in mind was because I wanted to understand the difficulties of transforming a non-multithreaded app into a multithreaded one.
    The difficulty is that it requires a new application design. It's much better to write the application multithreaded from the start, and to assume the majority of people are using a multicore CPU anyway. (Considering the cost of an E1400 or of AMD CPUs, there is no reason why they shouldn't be.) Well-written multithreaded applications scale well on low-core-count CPUs anyway, so why worry. Programmers just shouldn't abuse the CPU too much, and should use only what the application needs. (See the pool-sizing sketch below.)
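    As a sketch of "use only what the application needs": a task pool sized to the available cores uses every core on a multicore machine and degrades gracefully on a low-core one. This is my own minimal illustration, assuming a task-based design, not code from the thread.
    [code]
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// A pool sized to the machine's core count: all cores are used on a multicore
// CPU, and the same code still runs fine with a single worker on one core.
public class CoreScaledPool {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (int i = 0; i < 16; i++) {
            final int task = i;
            pool.submit(() -> System.out.println(
                "task " + task + " on " + Thread.currentThread().getName()));
        }

        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
    [/code]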
  5. Algorithms vs loops

    Quote:Original post by godmodder
    This is an issue I regularly bump into when coding larger projects: should I prefer (STL) algorithms or manual loops for very small code fragments?
    Algorithms? For loops are part of an algorithm; you can't separate loops from certain algorithms (you can only hide them under the carpet). Don't mistake programming for math, the skills required are vastly different. (Remember what Dijkstra said...) "for(i32..." is explicit, some recursive abomination isn't. As long as you're the only one reading the code it doesn't matter too much, but you'd learn a bad coding style. BTW, functors are not guaranteed to be available on every OS.
    Quote:Original post by samoth
    One thing it does for you is that it gives the compiler optimisation opportunities that don't exist in other code. It may end up producing the same code, or better code; however it will never produce worse code than anything you could do manually (with for() or while() or whatever). Another thing it does for you is that if you have a decent compiler, then you can use one commandline switch or one #pragma, and the compiler will parallelize for_each without you having to worry about threads, synchronisation, or anything of that matter. It will just work, only faster.
    It would be scary. Imagine a game which finally guarantees that the highest spike stays within specifications, and anything which wouldn't finish under 30 ms would be either interrupted or dereferenced. That applies to Ex(n). Now imagine a sudden Ex(n + 1) or higher appears out of nowhere and messes things up. GFX threads tend to be high priority, and with multicore CPUs they can run fairly independently; the problem is that a derived thread inherits the priority of the original thread.
    Quote:Original post by Jan Wassenberg
    Quote:Whether it's "legal" by the standard... no idea, but why would it not be legal?
    Of course it's "illegal": parallel for_each breaks the guarantee that elements are visited in order. As with OpenMP, you have to explicitly ask for parallel mode behavior and thereby promise that your functors have no side effects (and that no exceptions will leak out).
    In-order execution is just a suggestion, as long as the elements are properly visited on average. People who would like their for loop to access all elements in order should specify this with the volatile keyword. ~_^ (The same ordering caveat shows up elsewhere too; see the sketch below.)
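    The in-order-visitation point can be illustrated in Java as well. This is an analogous example of my own, not the STL/OpenMP mechanism discussed above: a parallel pipeline gives up the encounter-order guarantee for forEach, and you have to ask for it back explicitly with forEachOrdered.
    [code]
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Parallel execution drops the in-order visitation guarantee unless the
// ordered variant is requested explicitly.
public class ParallelOrdering {
    public static void main(String[] args) {
        List<Integer> data = IntStream.rangeClosed(1, 10).boxed()
                                      .collect(Collectors.toList());

        System.out.println("parallel forEach (order not guaranteed):");
        data.parallelStream().forEach(i -> System.out.print(i + " "));

        System.out.println("\nparallel forEachOrdered (encounter order kept):");
        data.parallelStream().forEachOrdered(i -> System.out.print(i + " "));
        System.out.println();
    }
}
    [/code]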
  6. Programming in the world of tomorrow.

    Quote:Original post by outRider
    I would argue that ILP has tapped block-level parallelism very well, and that threads should be utilized at a higher level, i.e. relatively longer-running "tasks", because even though you can use thread pools to eliminate excessive thread creation, context switch overhead will swallow up any gains from parallelizing small blocks of code; it makes more sense to deal with small blocks of code at the ILP level.
    The main problem with multithreading isn't thread overhead, it's synchronization. The latency increase required by parallelism isn't always acceptable. (The sketch below shows the kind of contention that eats the gains.)
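    A rough sketch of the synchronization point, with hypothetical names and toy iteration counts: the same total work done through one contended shared counter versus per-thread accumulation with a single combine step at the end. The absolute timings are not meaningful; the contrast is.
    [code]
import java.util.concurrent.atomic.AtomicLong;

// Fine-grained synchronization on a shared counter versus thread-local work
// combined once at the end. Contention, not thread creation, is the cost.
public class ContentionSketch {
    static final int THREADS = Runtime.getRuntime().availableProcessors();
    static final long ITERS = 10_000_000L;

    public static void main(String[] args) throws InterruptedException {
        AtomicLong shared = new AtomicLong();

        long t0 = System.nanoTime();
        runThreads(() -> {
            for (long i = 0; i < ITERS; i++) shared.incrementAndGet();   // contended
        });
        System.out.printf("shared counter:    %d ms%n", (System.nanoTime() - t0) / 1_000_000);

        long t1 = System.nanoTime();
        runThreads(() -> {
            long local = 0;
            for (long i = 0; i < ITERS; i++) local++;                    // no contention
            shared.addAndGet(local);                                     // one sync at the end
        });
        System.out.printf("per-thread counts: %d ms%n", (System.nanoTime() - t1) / 1_000_000);
    }

    static void runThreads(Runnable body) throws InterruptedException {
        Thread[] ts = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) { ts[i] = new Thread(body); ts[i].start(); }
        for (Thread t : ts) t.join();
    }
}
    [/code]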
  7. Poverty. Everything that used to be obtainable cheaply or free of charge would cost money, with bad consequences for society.
  8. Programming in the world of tomorrow.

    Quote:Original post by ChaosEngine
    The problem is that people think sequentially. Our conscious mind is inherently single-threaded. When asked to describe any given procedure, the vast majority of people will break it into sequential steps. Even developers do this.
    Well, CPUs do things sequentially. A CPU pipeline goes like this: the CPU fetches the instruction located at the address held in the RIP register, decodes the instruction and splits it into its own internal code, then reorders that internal code and places it into a pipeline for out-of-order (or in-order) execution. Basically the CPU is always doing sequential work. CPUs can do one or more units of work per core; GFX cards can do 512+ units of work per core because of their wider instruction pipeline.
  9. Programming in the world of tomorrow.

    Quote:Original post by Drigovas
    Modern programming languages excel at describing sequential processing, but so far are very immature with respect to constructs that allow for the description
    A program design and its source code are sometimes different things. I, for example, use a lot of parallel design. (Though the parallelization often isn't worth the effort, even for optimal code. People who can't create optimal code intuitively have plenty of ways to screw themselves up without knowing about it until release.) Parallelization begins on the design board (which might be wrong), but design from the programmer's side is important as well. I wouldn't trust a SW architect who isn't able, and willing, to program his own design. (BTW, a too-rigid specification often kills the design when the SW is supposed to be asynchronously parallel. These things often have unforeseen consequences, and it's better to have the ability to redesign things now than to spend two weeks talking about and testing possible outcomes.)
    Quote:of parallel processing. A number of research languages are attempting to remedy this, but are in general not so successful, and most of them vary in quality from poor to unusable.
    Yes, a programming language created by a researcher without long experience in the field, who hasn't worked on about a hundred freeware projects in his free time, has a high probability of sucking. Do you remember pre-C languages?
    Quote:There is however a clear trend in terms of architecture away from faster sequential processing and more toward parallel processing, and multi-core machines will only become more common. Thus there is clearly a need to develop language constructs that better allow for expression of parallelism.
    A thousand monkeys wouldn't be better anyway.
    Quote:So the question is, in 2020 when you go to write a program on your thousand-core computer, what will the language that you use be like?
    Java 8.0. Game developers are very conservative; quite a lot of people are still using C++. And you'd probably see me screaming murder about OpenGL 5.0 and its drivers and all the features that haven't been implemented yet. ~_^
  10. Universal Warfare

    I see two problems. 1. An unsuspecting player would be hit by a missile and scream murder. 2. It's an MMO, and the market is overcrowded. (Add to that the economic downturn...)
  11. Why do devs not want customer feedback?

    Quote:Original post by loufoque
    Games are made for profit. They do not try to make the game as good or perfect as possible: they just aim at making more money. It makes more sense financially, for example, to work on adding more content than fixing the existing one, unless the problems are critical enough. All the stuff that has been said about bug reporting producing too much whining is nonsense; a lot of major free software has no problem managing such systems. It however requires quite some openness, and it also requires caring about issues.
    Look at the GameSpot PS2 section for 2007-2008; you'd see a lot of bad games. Quick and dirty, made just for profit. Boring, and even software pirates wouldn't touch them with a ten-foot pole. A game needs to know not only what to implement, but also what not to implement. Customer feature feedback should be put through the QA department. QA may have addressed the issue already, and should be able to provide some analysis. The worst thing that could happen is a developer addressing the issue directly: at best it would break his schedule, at worst it could delay the fixing of a serious bug. A lot of free software is extraordinarily crap (that's hyperbole, of course). The few pieces that are not are either made by one developer, or made by people who managed to keep unnecessary distractions far away from their working space.
  12. Dualshock3 and wiimote....for pc?

    Quote:Original post by dkx187
    Can you shoot some examples of impostor peripherals for the wiimote? Because I can't find any for the PC.
    It's probably too soon for quality USB 3D pointer devices. Even quality USB/PS2 DualShock 2-compatible pads were somewhat late. You might like to mail Saitech or Logitech, and AFAIK the Polish manufacturers weren't half bad either. They might need a small push, because the wiimote is still a niche market.
  13. Quote:Original post by Codeka
      iiNet have said they're going to fight the case, so let's hope they win! It'd be like suing the postal service for delivering CDs containing pirated songs or something.
      Actually it's more like visiting a library and reading one book with your girlfriend. The library provided an environment which allowed multiple users to freely read a book. No money paid. Multiple users are using one license at once. They are able to read the book comfortably because of the politeness of others. Of course there is a difference: the original work can't be damaged. I kind of wonder what they'd do with users of WoW; all of them are using BitTorrent. In fact I like using BT for distributing my own work too.
  14. Quote:Original post by samoth
      Well, seeing that "Trusted Execution" is a standard feature of all Intel CPUs beginning with and including the Wolfdale series and seeing that most new computers running Vista implement TPM and DRM in a more or less severe form already... sorry, what was the question again?
      TXT is disabled in the following Wolfdale CPUs:
      E5200 // out of manufacturing cycle
      E5300
      E5400
      E7200 // out of manufacturing cycle
      E7300
      E7400
      It's definitely not implemented on the E1400 and the E2xxx series, and it's for sure not implemented on the E4xxx series. Considering that quite a lot of CPUs are E7xxx and E5xxx, TXT is somewhat rare, which is great for users. Otherwise a slow virus with a long dormant stage before its killer impact would vaporise quite a lot of data by the same mechanism as copy protection.
  15. Quote:Original post by riruilo
      Hi friends! How do I declare a queue of vectors (of bytes, fixed size) in Java? I want every element to be a byte array of 64 elements. Something like this (this does not work):
      Queue<byte[64]> m_queue = new LinkedList<byte[64]>();
      Must I use another class? For instance:
      Queue<Vector64Byte> m_queue = new LinkedList<Vector64Byte>();
      I wouldn't like to do that.
      I suspect you mean an array, not a vector. If you meant a vector located in a 64D space with only 8 bits of precision per axis, you should create a class for it. If you meant an array: arrays are not vectors, and each array must be allocated with new (and each array carries Object overhead). ... add(new byte[64]); ... should do the trick; a self-contained sketch follows below. Also, don't prefix your variables: it's not C, it's Java, and it was bad even in C. If you desperately need a name decorator for some real reason, use postfix decoration.
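      A minimal, self-contained sketch of the suggestion above (the class name and the 64-byte constant are mine): the element type is plain byte[], and the fixed length is chosen when each array is allocated, not in the generic type parameter.
      [code]
import java.util.LinkedList;
import java.util.Queue;

// A queue whose elements are 64-byte arrays.
public class ByteArrayQueue {
    private static final int CHUNK = 64;

    public static void main(String[] args) {
        Queue<byte[]> queue = new LinkedList<byte[]>();

        queue.add(new byte[CHUNK]);          // each element is its own 64-byte array
        queue.add(new byte[CHUNK]);

        byte[] head = queue.poll();          // take the oldest element
        System.out.println("dequeued " + head.length + " bytes, "
                           + queue.size() + " element(s) left");
    }
}
      [/code]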