medv4380

Members
  • Content count

    24
  • Joined

  • Last visited

Community Reputation

98 Neutral

About medv4380

  • Rank
    Member
  1. A Phaser or CyclicBarrier may help with what you are trying to do. What I've done myself is create 1 display thread for the OpenGL context and N-1 logic threads. On my quad core that leaves me with 4 active threads running in parallel. The logic threads add their tasks for the next display frame and then swap their queue with another queue at the end of the cycle/phase, so the display thread is always showing the frame the logic threads previously worked on. However, to do it right you have to double up on any shared memory variables, since you don't want the logic threads changing the previous frame's display variables. You're going to come to a point where you'll have to decide whether you want to conserve memory or spend extra memory to increase processor utilization.
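A minimal sketch of the double-buffered Phaser loop described above, assuming one display thread plus N-1 logic threads all registered with the same Phaser. FrameLoopSketch, GameState, and every other name here are hypothetical stand-ins, not code from the engine being discussed.
[CODE]
import java.util.concurrent.Phaser;

public class FrameLoopSketch {

    static class GameState {
        float playerX, playerY;   // every shared per-frame variable exists once per buffer
    }

    volatile GameState read = new GameState();   // frame the display thread draws
    volatile GameState write = new GameState();  // frame the logic threads are building

    final Phaser frameBarrier;

    FrameLoopSketch(int logicThreads) {
        // +1 party for the display thread; onAdvance runs exactly once per phase, after
        // every registered thread has arrived, so it is a safe single place to swap buffers.
        frameBarrier = new Phaser(logicThreads + 1) {
            @Override
            protected boolean onAdvance(int phase, int registeredParties) {
                GameState tmp = read;
                read = write;
                write = tmp;
                return false;   // keep the phaser running
            }
        };
    }

    // Body of one logic thread: read only from 'read', write only into 'write', so the
    // previous frame's display variables are never touched.
    void logicLoop() {
        while (true) {
            write.playerX = read.playerX + 1.0f;
            frameBarrier.arriveAndAwaitAdvance();   // end of this cycle/phase
        }
    }

    // Body of the display thread: always renders the frame the logic threads finished last.
    void displayLoop() {
        while (true) {
            render(read);                           // issue GL calls from this thread only
            frameBarrier.arriveAndAwaitAdvance();
        }
    }

    void render(GameState frame) { /* draw using this frame's data */ }
}
[/CODE]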
  2. Yuukan, I think you have it close. Your method matches a valid example: [url="http://stackoverflow.com/questions/1641580/what-is-data-oriented-design"]http://stackoverflow.com/questions/1641580/what-is-data-oriented-design[/url] How you structure the data will depend on what you're doing, and turch does have a point: since X and Y are part of the Position, it would be good to keep them together. You may also want to consider an additional layer of abstraction, because with your current approach you'd end up with nearly identical structs for orcs, elves, and anything else.
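A hedged sketch of that extra layer of abstraction: one shared, data-oriented position table used by orcs, elves, and any other entity type, instead of a near-identical struct per type. PositionTable and its members are hypothetical names, not from the thread being answered.
[CODE]
public class PositionTable {
    // x and y stay together per entity, interleaved as [x0, y0, x1, y1, ...],
    // which keeps the hot loop walking contiguous memory.
    private final float[] xy;

    public PositionTable(int capacity) {
        xy = new float[capacity * 2];
    }

    public void set(int entity, float x, float y) {
        xy[entity * 2] = x;
        xy[entity * 2 + 1] = y;
    }

    // Entity types (orc, elf, ...) just hold index ranges into this table,
    // so the same update loop serves all of them.
    public void translate(int from, int to, float dx, float dy) {
        for (int i = from; i < to; i++) {
            xy[i * 2] += dx;
            xy[i * 2 + 1] += dy;
        }
    }
}
[/CODE]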
  3. This should satisfy your benchmark source request, and it should give you an ample number of languages to do additional comparisons. [url="http://shootout.alioth.debian.org/u32/java.php"]http://shootout.alio...rg/u32/java.php[/url] First, Java 7 has vastly improved over Java 6, but that's 5 years of updates delivered in 1 version. Java 7 beats C++ and C in the K-Nucleotide benchmark by 4 to 5 seconds. [url="http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=java&lang2=gcc"]http://shootout.alio...=java&lang2=gcc[/url] Java 7 beats C++ in the fasta benchmark by ~1 second but loses to C by 0.1 seconds. [url="http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=java&lang2=gpp"]http://shootout.alio...=java&lang2=gpp[/url] C flat out wins or nearly ties C++ in everything except the K-Nucleotide test. [url="http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=gcc&lang2=gpp"]http://shootout.alio...g=gcc&lang2=gpp[/url] C, when written properly, beats C++ hands down; the drawback of C is really that some more complex tasks are a pain to write without objects.
Java's main drawback is usually memory, but it's not as bad as most people think. Sure, in some cases, like the binary-trees test, Java ends up taking nearly 5x the amount of memory C does, and even on small tests it still has to load a 10-15 MB virtual machine. However, in some tests, like the reverse-complement test, it takes nearly the same amount of memory as C does. Java also attracted every programmer who was incapable of managing memory properly in C and C++, so there are a lot of bad memory structures out there in Java, because at least it doesn't do the delayed-crash thing C and C++ do when you touch memory the wrong way. Just because Java has a garbage collector doesn't mean you can ignore memory, but that's what a lot of Java programmers do.
  4. [quote name='alnite' timestamp='1327441512' post='4905894'] If Windows doesn't ship with a JRE, then that's Windows fault, which is obvious since they are promoting .NET. [/quote] No, Microsoft is forbidden from bundling a Java Virtual Machine with Windows due to being caught being evil. [url="http://en.wikipedia.org/wiki/Microsoft_Java_Virtual_Machine"]http://en.wikipedia.org/wiki/Microsoft_Java_Virtual_Machine[/url]
  5. C is best if you're going for speed. C++ has many of the same slowdowns as Java does now; in fact, today Java 7 is about as fast as C++.
Java is in limbo in terms of support. The buyout of Sun by Oracle resulted in several years of Java 6 and left Java 7's updates out in the cold. Nothing like having to wait 5 years for JSR updates that were ready to go to be integrated. If Java 8 is released on time then that drawback is over, but if I have to wait 5 years again I will be upset.
Java tends to have outdated APIs that people insist on trying to use in games, and APIs that weren't tailored for games. For example, key listeners that were intended for Swing GUIs don't work well in games. Java Sound has been neglected for years, and because of legal issues the JMF was abandoned along with MP3 support. You'll need to get a real game API for Java like LWJGL; otherwise you're asking Java to do things in a way it wasn't designed for.
C and C++ are easier to port to consoles. Xbox, Wii, and PS3 do not have a real JVM supported for games; the PS3 has a slimmed-down JVM for Blu-ray functionality only.
  6. Either you need to manually do the port forwarding on your firewall/router like hplus0603 suggests, or you need to add a UPnP NAT API so that it can do it automatically.
  7. That's one of several issues in Java. In truth the fix is working as intended. This is the line you need to edit in RepeatingKeyEventsFixer.java to try to get the code to behave the way you want:
[CODE]
public static final int RELEASED_LAG_MILLIS = 5;
[/CODE]
It is using a timer trick to ignore the key repeats. The way the code is written, it waits 5 milliseconds by default before it reverts back to the undesirable behavior.
There are other "bugs" that you'll eventually run into using KeyAdapters and KeyListeners; one annoying one is when multiple keys are being pressed. These aren't really bugs in truth. They behave this way because the OS is really sending these events. It makes the events worthless to listen to, but that's how it goes. In C and C++ the key repeat is turned off for games since it ruins the experience. Oracle really should look into integrating some of the more useful features needed for games, but until then you might want to consider using the LWJGL or something similar. With the LWJGL you can write something that looks at the keyboard state instead, and if you still need to know whether the key was just pressed or just released you could write something like this:
[CODE]
boolean keyUp = false;

private void checkInput() {
    if (Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
        GameThread.isRunning = false;
    }
    if (Keyboard.isKeyDown(Keyboard.KEY_UP) && keyUp == false) {
        System.out.println("up Key Pressed");
        keyUp = true;
    }
    if (keyUp == true && !Keyboard.isKeyDown(Keyboard.KEY_UP)) {
        System.out.println("up Key Released");
        keyUp = false;
    }
}
[/CODE]
I believe you can even turn off the repeat events if you wanted to, but basically, if you want to do what you're doing correctly in Java, you'd have to implement some native code or use an existing library that already has the native code.
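For reference, a hedged sketch of the "turn off the repeat events" idea, assuming LWJGL 2's org.lwjgl.input.Keyboard API (enableRepeatEvents, next, getEventKey, getEventKeyState) and an already-created Display and Keyboard; it reads press/release transitions from the event buffer instead of polling isKeyDown. The class name KeyEvents is just for illustration.
[CODE]
import org.lwjgl.input.Keyboard;

public final class KeyEvents {

    public static void init() {
        // Drop the OS-generated key repeats so each physical press shows up once.
        Keyboard.enableRepeatEvents(false);
    }

    // Call once per frame to drain the buffered key events.
    public static void poll() {
        while (Keyboard.next()) {
            int key = Keyboard.getEventKey();
            boolean pressed = Keyboard.getEventKeyState(); // true = pressed, false = released
            if (key == Keyboard.KEY_UP) {
                System.out.println(pressed ? "up Key Pressed" : "up Key Released");
            }
        }
    }
}
[/CODE]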
  8. I'm currently building a little pet-project game engine so I can feel out the LWJGL, and doing what I can to parallelize it properly. Right now it's divided into 1 display thread and 3 logic threads on my quad core (it's actually set up to always use 1 display thread and N cores - 1 logic threads). Under heavy load, if I add 1 more logic thread to try to soak up the leftover time on the core the display thread is using, it messes with the Phaser just enough to drop my fps while only marginally increasing my utilization. Under heavy load I get about 3 cores working solid and just marginal usage on the core running the display.
1 Logic, 1 Display = 12.5 fps
2 Logic, 1 Display = 25 fps
3 Logic, 1 Display = 36 fps
4 Logic, 1 Display = 33 fps
The display thread cannot move between threads, so I'm not using a thread pool (if I did I'd just lose the display context). That's fine, since all it is doing is sending the GL commands to the graphics card. It runs in parallel with the logic threads, and I get around needing to sync/lock the memory by creating two sets of memory (one read and one write) that swap at the end of each cycle. The threads use a Phaser to keep any individual thread from getting too far ahead of the others, so none of them swaps the memory early and starts writing on the others' reads. Basically, instead of writing x += y; I write Write.x = Read.x + Read.y; Currently it appears to beat the single-threaded version, so I think it is going in the right direction, but I'm keeping a single-threaded test around just to make sure for now.
I now want to add a texture loader. Because GL commands can only be run on the Display thread, and the Display cannot be accessed from the logic threads without losing the display context, I am largely confined to running many of the commands on the display thread. This is fine, since right now the display thread doesn't do much but draw objects to the screen anyway. However, loading a texture requires that I discover I need a texture, read the image in from the hard drive, construct a texture object, and then push it out to the graphics card through GL commands. I figure the logic threads will do the discovery part, either as an on-demand feature or at the beginning of a level by queuing up load requests. I think I should then try a low-priority worker thread that periodically looks at a ConcurrentLinkedQueue to see if there are any load requests. It will then read in the image file and send a task request to the display thread through another ConcurrentLinkedQueue. The display thread would then finish up and load the texture with the remaining GL commands and binding. What I'm hoping is that the low-priority worker queue doesn't interfere in the same way adding a 4th logic thread does on my quad core. I only need 1 worker, since the optimal IO usage with most hard drives is 1 file being read en masse. There might be some side effects on the display, like texture pop-in if it needs more textures while a level is loading, or it could just show the loading screen while there are any pending textures. I figure I'd use my Logic 0 thread to control whether the texture worker queue is even running, since there would be no reason for it to exist as a thread unless there is a reason to expect incoming work, and Logic 0 is where I've been putting control processes. Does a parallel texture loader like this already exist, and if not, why not?
I'd rather not reinvent one if one already exists, and if there is a known pitfall to this method it would be good to know. I might have to sacrifice 1 logic thread to get this to work right, and I'd rather not if it can be avoided, but if I do it's not too big of a loss. It might seem like a lot on my quad, but it should be a minimal loss for an 8-core Bulldozer.
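A hedged sketch of the loader being described: logic threads enqueue texture names, one low-priority IO worker reads the image bytes, and the display thread drains the finished requests and issues the GL upload. ParallelTextureLoader, LoadedImage, readImage, and uploadToGl are hypothetical stand-ins for the real LWJGL calls.
[CODE]
import java.util.concurrent.ConcurrentLinkedQueue;

public class ParallelTextureLoader {

    static final class LoadedImage {
        final String name;
        final byte[] pixels;
        LoadedImage(String name, byte[] pixels) { this.name = name; this.pixels = pixels; }
    }

    private final ConcurrentLinkedQueue<String> loadRequests = new ConcurrentLinkedQueue<>();
    private final ConcurrentLinkedQueue<LoadedImage> glUploads = new ConcurrentLinkedQueue<>();

    // Called from any logic thread when it discovers a texture it will need.
    public void request(String textureName) {
        loadRequests.offer(textureName);
    }

    // Body of the single low-priority IO worker: one file read at a time.
    public void ioWorkerLoop() {
        Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
        while (true) {
            String name = loadRequests.poll();
            if (name == null) {
                try { Thread.sleep(5); } catch (InterruptedException e) { return; }
                continue;
            }
            glUploads.offer(new LoadedImage(name, readImage(name)));
        }
    }

    // Called once per frame on the display thread, which owns the GL context.
    public void pumpUploads() {
        LoadedImage img;
        while ((img = glUploads.poll()) != null) {
            uploadToGl(img);   // glGenTextures / glTexImage2D / binding would go here
        }
    }

    private byte[] readImage(String name) { return new byte[0]; }   // placeholder IO
    private void uploadToGl(LoadedImage img) { /* placeholder GL upload */ }
}
[/CODE]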
  9. [quote name='jonbonazza' timestamp='1326912271' post='4904033'] Actually, there are two Timer classes in the JDK. You and him are using two different classes. The code the OP isusing is correct for the Timer class he is using. With that said, he already posted this in another thread in this forum, at which I answered his question. [/quote] No, he is not using his Timer class correctly, and he should read the Javadoc. He imported
[CODE]
import java.util.Timer;
[/CODE]
and this is the one he is using: [url="http://docs.oracle.com/javase/1.4.2/docs/api/java/util/Timer.html"]http://docs.oracle.c...util/Timer.html[/url] He cannot use it in that way. You are thinking of javax.swing.Timer, which he might want to use but is not using: [url="http://docs.oracle.com/javase/7/docs/api/javax/swing/Timer.html"]http://docs.oracle.c...wing/Timer.html[/url] He will get a compile error with his code because of an invalid constructor call.
[quote name='CryoGenesis' timestamp='1326919674' post='4904073'] You did not need the unnecessary sub class. but if you did want to use it then you should have put this in the Timer constructor: time = new Timer(5,new TimerListener()); putting the this keyword means that nothing will be rendered due to the fact that in your actionPerformed method in the Board class was empty. Hope it helped Gen. [/quote] You should actually avoid using 'this' in constructors, because it is one of the conditions that lets the object reference escape before it is fully initialized, so it is best to get out of that bad habit. A subclass is a good way to avoid it, and a factory is better still, but that's well beyond what he's doing. However, you're right that he used the subclass incorrectly.
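A hedged side-by-side sketch of the two classes being confused here: javax.swing.Timer does take an (int, ActionListener) constructor, while java.util.Timer takes a TimerTask via schedule(). The class name TimerKinds is just for illustration.
[CODE]
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class TimerKinds {
    public static void main(String[] args) {
        // javax.swing.Timer: delay in milliseconds plus an ActionListener callback.
        javax.swing.Timer swingTimer = new javax.swing.Timer(5, new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                System.out.println("swing timer fired");
            }
        });
        swingTimer.setRepeats(false);
        swingTimer.start();

        // java.util.Timer: schedule a TimerTask after a delay; no ActionListener involved.
        final java.util.Timer utilTimer = new java.util.Timer();
        utilTimer.schedule(new java.util.TimerTask() {
            @Override
            public void run() {
                System.out.println("util timer fired");
                utilTimer.cancel();   // let the (non-daemon) timer thread exit afterwards
            }
        }, 5);
    }
}
[/CODE]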
  10. You're initializing the timer incorrectly. Sun put a lot of time into making good Javadocs, and you should use them when you're having a problem with an object/API. For Java 7, use [url="http://docs.oracle.com/javase/7/docs/api/java/util/Timer.html"]http://docs.oracle.com/javase/7/docs/api/java/util/Timer.html[/url] For 1.4, use [url="http://docs.oracle.com/javase/1.4.2/docs/api/java/util/Timer.html"]http://docs.oracle.com/javase/1.4.2/docs/api/java/util/Timer.html[/url]
Timer should just be initialized with Timer(), or Timer(boolean) if you want it to run as a daemon and you're using one of the older versions of Java. Newer ones allow you to name the thread that the timer will use, but that's usually irrelevant unless you're using a profiler to monitor the app as it runs. From what your code is doing, you're trying to initialize with an int (5) and with a JPanel, which isn't in the list of valid constructors, which is why reading the API documentation is important. I assume you were trying to make a timer that would launch a JPanel after 5 milliseconds. To launch any frame or panel you have to either set it to visible or add it to an existing visible panel.
You're also using "this" in the constructor; NetBeans, and I believe even Eclipse, should give you a warning/tip not to do that. The reason is that your code can leak because the object isn't fully initialized yet. You'll see a lot of code use 'this' in the constructor, but it can cause problems, so it is best to just avoid doing so entirely.
Here is some code that should be similar enough to your own that you can borrow some of the concepts from it. Using a subclass can help avoid using the 'this' keyword, and you even had a DELAY variable in your own code, but you didn't use it.
[CODE]
import java.util.Timer;
import java.util.TimerTask;
import javax.swing.JFrame;

public class Board {

    public static void main(String args[]) {
        Board b = new Board();
    }

    Timer time;
    JFrame mainFrame;
    int DELAY = 5000;

    public Board() {
        mainFrame = new JFrame("me");
        mainFrame.setSize(320, 240);
        mainFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        time = new Timer();
        time.schedule(new myTimer(), DELAY);
    }

    class myTimer extends TimerTask {
        @Override
        public void run() {
            System.out.println("Delayed Hello World");
            System.out.println("I will show you my Frame");
            mainFrame.setVisible(true);
        }
    }
}
[/CODE]
  11. It sounds like you have multiple feeder threads feeding 1 worker thread, if I'm understanding what you're trying to do, or you have one feeder thread feeding several worker threads. From what it sounds like, you've got it to execute the tasks but can't get them to execute "in order". If you have one feeder thread, that thread will have to stamp an order ID onto each object, then the worker thread will have to iterate over the tasks each time to find the task with the smallest ID and execute that, and you can't care about what tasks have and haven't been executed on the other workers. If you have multiple feeders and 1 worker, then you'll have to assign each feeder an ID and they will also have to stamp each task with an ID; then your worker will have to use the feeder ID and the task ID to figure out which one comes first. This will be overly complex and there won't be any payoff for doing it. You'd be better off figuring out how to partition the tasks into sizes equal to the number of available processors, or finding logically independent blocks of code that can be executed in parallel and then using them in a thread pool or in a Fork/Join framework. If you really need to execute such small tasks then you may have to look at CUDA or OpenCL, but they will need more than 3 tasks handed to them at any 1 time to be efficient. You're probably going to find out that a single-threaded version of what you're doing is going to be the fastest.
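A hedged sketch of the "stamp an order ID on each task" idea for the one-feeder, one-worker case: the feeder stamps every task with an increasing sequence number and the worker always runs the pending task with the smallest stamp. OrderedWorker and StampedTask are hypothetical names.
[CODE]
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class OrderedWorker {

    static final class StampedTask implements Comparable<StampedTask> {
        final long id;
        final Runnable work;
        StampedTask(long id, Runnable work) { this.id = id; this.work = work; }
        @Override
        public int compareTo(StampedTask other) { return Long.compare(id, other.id); }
    }

    private final AtomicLong nextId = new AtomicLong();
    private final PriorityBlockingQueue<StampedTask> queue = new PriorityBlockingQueue<>();

    // Called by the feeder thread: each task gets the next sequence number.
    public void submit(Runnable work) {
        queue.put(new StampedTask(nextId.getAndIncrement(), work));
    }

    // Body of the single worker thread: always runs the smallest pending ID.
    public void workerLoop() throws InterruptedException {
        while (true) {
            queue.take().work.run();
        }
    }
}
[/CODE]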
  12. I view both Adam_42's method and Madhed's method as valid, but the first option would allow for object recycling to save the time of allocating another IMatrix4x4. However, recycling objects can be dangerous if it wasn't anticipated that the object being passed might already have data assigned to something other than the defaults. Object recycling can save time, but only if you're dealing with a lot of objects; on a small number of objects the recycling logic could easily eat into any performance gains.
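A hedged sketch of the object-recycling idea, using a hypothetical Matrix4x4 class rather than the IMatrix4x4 from the thread. The reset() call is the guard against the pitfall mentioned: a recycled object must be returned to its defaults before reuse.
[CODE]
import java.util.ArrayDeque;
import java.util.Arrays;

public class MatrixPool {

    static final class Matrix4x4 {
        final float[] m = new float[16];
        void reset() {
            Arrays.fill(m, 0f);
            m[0] = m[5] = m[10] = m[15] = 1f;   // back to identity
        }
    }

    private final ArrayDeque<Matrix4x4> free = new ArrayDeque<>();

    // Hand out a recycled matrix if one is available, otherwise allocate.
    public Matrix4x4 obtain() {
        Matrix4x4 matrix = free.poll();
        return matrix != null ? matrix : new Matrix4x4();
    }

    // Return a matrix to the pool; never hand back stale data.
    public void recycle(Matrix4x4 matrix) {
        matrix.reset();
        free.push(matrix);
    }
}
[/CODE]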
  13. The problem is the tasks being too small. The ConcurrentQueue with a thread pool works just fine if the tasks are large enough. When tasks are too small you end up with too much blocking overhead. However, if you didn't use it you'd either A) end up in a busy-wait state and waste the CPU's time, or B) take too much time passing large objects between processors and waste wall time. The only time segmenting an application down into very small tasks pays off is when you can offload them to a many-core GPU. The many-core method still wastes and burns CPU, but it beats wall time through the overwhelming use of a large number of cores. The problem you have is with most current multicores, and the only real way around it is to use large, coarse tasks on your MPU. You'll have the same problem with 2, 4, and 6 cores from AMD and Intel. An APU is a possible solution, but that's really just taking a GPU and putting it on the MPU, and it still requires many of the same tricks to use it. The only current viable options for multithreading frameworks are limited to thread pools, Fork/Join, message passing (a la Erlang), and fine-grained processes on a GPU/APU. All the existing frameworks out there are usually an extension of those; for example, the Grand Central Dispatch that Apple uses is just a very fast thread pool.
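A hedged sketch of the "use large, coarse tasks" advice: instead of submitting one tiny task per item, each submitted task covers a whole slice of the work, so the queue and scheduling overhead is paid once per slice instead of once per item. CoarseTasks and the doubling "work" inside the loop are illustrative only.
[CODE]
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CoarseTasks {

    public static void process(final float[] data, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int slice = (data.length + threads - 1) / threads;   // one coarse chunk per thread

        for (int t = 0; t < threads; t++) {
            final int from = t * slice;
            final int to = Math.min(data.length, from + slice);
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    // The whole slice is one unit of work handed to the pool's queue.
                    for (int i = from; i < to; i++) {
                        data[i] = data[i] * 2f + 1f;   // stand-in for the real per-item work
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
[/CODE]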
  14. [quote name='irreversible' timestamp='1326721868' post='4903223'] - what's with passing everything by reference? So many times I've found myself going over code that passes everything by reference and I've never seen an actual benefit to it. Unless you have an object on the heap (which you do more and more seldom as your code grows) or an already dereferenced object (which would have required a validity check at some point anyway), then you've gained nothing compared to having to reference an object that's on the heap (which costs nothing) and conversely not having to dereference an object on the stack (which doesn't mean you don't need to check the pointer for validity). In both cases, at the end of the day, it boils down to zero loss or zero gain either way. To illustrate, to me the following are functionally equivalent. However, the first one is shorter and more clear: [/quote] Passing by reference does have its benefits. You save on the memory footprint that passing by value would add, and that could add up if it's a frequently called function. Making the references constant serves two purposes. Some programmers are under the assumption that this allows the compiler to optimize their code; some compilers might, but most do not. The real reason is that if you assume the value will not change and you're wrong about that assumption, a smart compiler will hopefully throw an error saying that the constant value changed. However, passing by reference has the obvious risk that the API you're using might actually change the value when you weren't expecting it to. I personally prefer how Java handled pass by reference versus pass by value: it established a set of rules that cannot be changed. This forced programmers to do something that they should have done in the first place, which is to just be consistent. As long as you and your team are playing by the same rules you're fine. You only really start to have problems with coding conventions when people start playing by different rules.
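A hedged sketch of the fixed Java rule referred to above: object references are themselves passed by value, so reassigning a parameter never affects the caller, while mutating the object it points to does. PassingRules and Position are illustrative names only.
[CODE]
public class PassingRules {

    static final class Position {
        float x;
    }

    static void mutate(Position p) {
        p.x = 42f;              // visible to the caller: the same object is mutated
    }

    static void reassign(Position p) {
        p = new Position();     // invisible to the caller: only the local copy of the reference changes
        p.x = 7f;
    }

    public static void main(String[] args) {
        Position pos = new Position();
        mutate(pos);
        System.out.println(pos.x);   // prints 42.0
        reassign(pos);
        System.out.println(pos.x);   // still prints 42.0
    }
}
[/CODE]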
  15. The code you posted has several ConcurrentQueues in it:
[CODE]
ConcurrentQueue<Action<object>> queue;
[/CODE]
This is the one causing the blocking, because you're doing this after each task:
[CODE]
while (queue.TryDequeue(out method))
[/CODE]
What you need is larger, coarser tasks. What you have is too small to be used on multiple threads, even if it's only one read thread and one write thread. You can attempt to use a non-blocking queue to implement message passing, but then you have to put both threads into a loop that periodically checks the queue. Unless you have a system with a predetermined wait time, you'll end up in a busy-wait state when there isn't any work for the threads to do, which will eat a lot of clock cycles. That will lead you to build a synchronization method so the threads can sleep when there isn't any work and be woken up by the other thread; the moment you do that, you bring blocking back in and you won't be able to fully use even 1 processor when you do have work. Which is why a ConcurrentQueue already blocks: it is the most efficient way to do it as long as you have large tasks. The larger the tasks that go into it, the closer you'll be able to meet the speed expectation of Amdahl's Law.
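A hedged sketch of the coarsening idea (written as a Java analog, since the thread above is C#): the worker blocks once for the first task, then drains whatever else is already queued and runs the whole batch as one coarse unit, so the per-item dequeue overhead is amortized. BatchingWorker is an illustrative name.
[CODE]
import java.util.ArrayDeque;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchingWorker {

    private final LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    // Called by the producing thread.
    public void submit(Runnable task) {
        queue.offer(task);
    }

    // Body of the worker thread.
    public void workerLoop() throws InterruptedException {
        ArrayDeque<Runnable> batch = new ArrayDeque<>();
        while (true) {
            batch.add(queue.take());   // block once, for the first task
            queue.drainTo(batch);      // then grab everything else without blocking
            for (Runnable task : batch) {
                task.run();            // one coarse unit of work per wake-up
            }
            batch.clear();
        }
    }
}
[/CODE]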