About MatthewJackson

  1. Questions of efficiency

    Well yes, I knew I likely misunderstood the GPU thing. I actually only talked about it because I thought it was something that readers would likely know and understand. In small projects I do load the entire data set into RAM, but larger projects are such that it is impossible to expect all of the data to remain in memory. This is one of those situations where I've attempted to map an idea onto a system that is completely incompatible with it, but I think, if you look past the GPU stuff, my perspective was clear.
  2. Questions of efficiency

    I considered posting this in one of the technical sections, but, since this seems to be a subject driven by opinions and habits more than technical constraints, I think a nice friendly discussion is more in order. I was initially taught programming by an assembly programmer who had spent his career writing code for 70's-era mainframes, then transitioned into early-80's desktop computers through the Commodore 64 and eventually the 8088 IBM PC. Thus, I was taught to use system resources as efficiently as possible and have maintained those habits throughout my programming experience. However, as time goes by I read more and more often a growing sentiment that "memory is cheap" and "the processor is idle most of the time anyway." I know that, for the most part, these statements are true: my cellphone has more memory and a faster processor than several of my first computers working in parallel, and the desktop I'm using to type this post (6-core 3.2 GHz, 4 GB RAM, twin 1 GB ATI 5k GPUs) has the processing power of a huge section of the US government's mainframes of the 70's. However, these resources are not always cheap, even in this modern era. Take Google, for instance. The Wikipedia page says, "Google receives several hundred million queries each day through its various services." If we take 'several' to mean 'more than two,' then we can safely say that Google likely transmits 300 million individual pages every 24 hours; thus every extra character in their pages eats about 300 MB of bandwidth a day, multiplied by two (once for the server and once for the clients), and I invite you to take a look at their source. Using Firebug, the front page's body tag has 19 superfluous characters by my count (thinking in HTML5, so I'm counting the quotes as well as 3 characters for some of the hex colors), and that's without being able to see the tabs and line feeds.
Moreover, a quick glance at the rest of the page's code makes me estimate that well over 200 characters are being uselessly transmitted throughout the document, which brings the estimated lower end (unreasonably low, but easily quantifiable) of the wasted resources to 60 GB a day, without taking into consideration the repetitive nature of search results. Even with this waste, Google is very fast because they have paid huge amounts for network connections and server hardware. A portion of what they have paid can be said to merely compensate for poor resource management, and keep in mind that they only foot half the bill. So, my point is that, in cases where a thing is being done a great many times, poor resource consideration can still, in this age of cheap memory and idle processors, put a drain on systems. Most situations are not as clear-cut as with websites. For instance, I read somewhere that current video cards handle models under a 1500-triangle count inefficiently, so an 800-triangle model places only slightly more load on the GPU than a 400-triangle model. Taking this as possibly true, it does not mean that the total load on the system is equally negligible. That thinking fails to take into account the read time from disk to memory, as well as from memory to the processing unit, not to mention the large number of other data blocks that likely need to be in memory at the same time. Models with variable levels of detail seem to solve some of these problems by running a calculation that represents how visible the rendered object needs to be, then passing only a predeclared simplified version, thus increasing the size of the object in memory while passing only a fraction of that data block to the GPU. That is the resource exchange: more of the artist's time, plus an increase in required hard-drive storage and RAM, so that a frame can render more rapidly.
(As an aside, I have wondered what would happen to the system if the different levels of detail were actually stored in different files, thus decreasing the RAM footprint and transferring the load onto the drive bus.) So, my questions to the community are: (1) when you are developing (writing compilable code or script code, creating images, modeling, or anything else), do you think about system load; (2) do you favor loading one resource over another; and (3) how do you decide which resource to place load on? My answers are: (1) yes; (2) I tend to keep RAM clean while loading the disk and processor (not to mention my own time); (3) it usually has to do with what I placed load on last.
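The back-of-the-envelope arithmetic above can be checked in a few lines. The 300 million pages and 200 wasted characters are the post's rough estimates, not measured figures:

```python
# Rough figures from the post above (estimates, not measurements).
pages_per_day = 300_000_000   # "several hundred million" taken as 300M
wasted_chars = 200            # superfluous characters per page, ~1 byte each

server_side_bytes = pages_per_day * wasted_chars
print(f"{server_side_bytes / 1e9:.0f} GB wasted per day (server side)")
# Doubling it to count the clients' half of the bill gives 120 GB.
```

This prints "60 GB wasted per day (server side)", matching the 60 GB lower-end estimate in the post.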
  3. Java vs C# (Mono)

    [quote name='LizardGamer' timestamp='1311488914' post='4839500'] [quote name='ApochPiQ' timestamp='1311488447' post='4839498'] "Cross platform" is way too vague. [i]Which[/i] platforms are you trying to target? [/quote] Mainly Linux, Windows, Mac, Android [/quote] As I understand it, all of these platforms commonly have Java compatibility, but, with the exception of Windows, they will require that additional components be installed (namely Mono) in order to run .NET bytecode (I know it's not actually emulation). I researched something similar, though for Windows and OS X only and with a few other parameters, and concluded that C# is incapable of cross-platform compatibility that is transparent to the user. In my opinion, that makes C# appropriate only in special circumstances.
  4. HTML5, JS and CSS3. Unity question.

    [quote name='wolfscaptain' timestamp='1311350045' post='4838975'] HTML5 adds some new tags, that is all.[/quote] Although that is technically accurate, I do not feel it gives an accurate impression. I experimented with HTML5 by recreating a few pages that I had done in HTML 4.01 and was able to eliminate over half of the images used on the page without any real noticeable visual change. In my opinion, that is a huge benefit. Of course, these alterations are not compatible with IE in any way. The advantage of HTML5, at this time, depends mostly on what exactly you are trying to do. In answer to the actual question at hand, I believe it is advantageous to learn the new standard, since it abbreviates some development while also including basically all of the old standard.
  5. The big benefit I can see right away (having never done this before) is that the distance method should be adaptable to space simulations, where you are already using distance everywhere for gravitational calculations. I assume you could also use the same method for a collision style that causes rapid deceleration rather than abrupt collision (think bubbles). Also, a brief glance around on the subject seems to indicate that distance-based methods are very useful for soft-body objects.
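A minimal sketch of what a distance-based test with a soft "bubble" response might look like. The function names and the linear-stiffness model are my own assumptions for illustration, not anything from the thread:

```python
import math

def overlap(p1, r1, p2, r2):
    """Penetration depth between two spheres; > 0 means contact."""
    return (r1 + r2) - math.dist(p1, p2)

def soft_push(p1, r1, p2, r2, stiffness=1.0):
    """Bubble-style response: a repulsive force that grows with overlap,
    decelerating bodies gradually instead of stopping them abruptly."""
    depth = overlap(p1, r1, p2, r2)
    if depth <= 0:
        return (0.0, 0.0, 0.0)          # not touching: no force
    d = math.dist(p1, p2) or 1e-9       # guard against coincident centers
    normal = tuple((a - b) / d for a, b in zip(p1, p2))
    return tuple(stiffness * depth * n for n in normal)

# Two unit spheres whose centers are 1.5 apart overlap by 0.5,
# so the first sphere is pushed away along +x.
print(soft_push((1.5, 0.0, 0.0), 1.0, (0.0, 0.0, 0.0), 1.0))
```

Since the same center-to-center distance already drives gravity in a space simulation, the two calculations can share their work.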
  6. Help ..from everyone

    Unless I have totally missed something profound, OpenGL and DirectDraw (DirectX) are not engines in the sense that I believe you mean. I believe that you want a complete game engine that can handle rendering, physics, interface, and sound so that you don't have to worry about doing that yourself. If that is the case, I would recommend Unity3d for two reasons: (1) it is free; and (2) it uses C#. Basically, the programming learning curve should be abbreviated by the fact that you already know C#, so the inclusion of 'game' logic should be fairly quick and easy once you have a direction in mind. The online [url=""]class[/url] reference is an excellent place to start once you have the engine downloaded and installed. There are also a great number of tutorials that can help you get started. As for your project in particular, you would likely be well served taking on a partner to do the modeling. I'm not exactly sure how detailed you intend to get with the anatomy, but, from what I remember of a year of general anatomy and two years of specialized anatomy, you have your work cut out for you. The skeleton alone would take me a month to model with any scientific accuracy, and I would certainly need a complete skeleton as a reference model. Additionally, assuming that neither you nor whoever does the modeling is an expert on anatomy, you would be well served collaborating with an advanced biology student who has retained their books.
  7. why is C++ not commonly used in low level code?

    First, let's get our terms correct. There is only one "low level" language: assembly. The reason it isn't widely used in application development should be clear from simply looking at some example code. I am a fan of assembly, but I realize that it is impractical for application development. C is a mid-level language that is capable of compiling to as near the size and execution speed of an assembly executable as is possible without low-level coding. On a system level, the object orientation of C++ is just a naming trick, but it does allow more abstraction in source. More abstraction means more reliance on libraries and the language than on the compiler. This means that some amount of the binary included in the executable is completely unnecessary. Now, before people get their hackles up over this, let me explain: if a function included in a program was written to handle dozens of different situations and you only ever use that function for a single situation, you have unused code and a procedure that is broader in scope and size than you actually need. The modern idea is that memory is cheap and processors are fast. Linux does not, by and large, abide by this idea. Thus I refer to the posts above mine by Katie.
  8. I need ideas!

    An idea that I've been kicking around for a while, but haven't set aside any time to develop, may be of interest to you. It's multimedia centric, but shouldn't require any real multimedia programming. The problem: every media player has a media library, but they all seem to be designed around a single type of media, so any other type of media is sorted in a manner that seems inappropriate. Additionally, each multimedia application seems good at playing a limited set of file formats while either not being able to decode others or simply doing so poorly. My example is the FLAC format with Winamp. Winamp is great with MP3 and OGG, but the version I have does not support FLAC. Video files are even more scattered in which programs do well with them, making one want to install VLC and Media Player Classic for video while having audio spread between two or three other applications. This means that adding a few recordings from Librivox necessitates updating at least two applications' media libraries, and even then they do not keep track of which files I've already listened to (really irritating when you have episodic files that are named by number alone: I have to guess where I left off and listen a bit to tell if I'm in the right place). The proposed solution: a pure media library application. It does not play the media; it issues a launch command to the application that it has been configured to use for that file type. Allow division of the library by media type (music, movies, non-movie video, texts, non-music audio, and so forth) as well as by type of content, so that multiple files of the same thing (think audiobook chapters) can be condensed into a single expandable entry (a logical folder without being an actual system folder). Support a configurable sort specification so a user can override 'the' in the sort algorithm. Keep track of which files have been played and when.
Also, because it would be too easy to do so, have different 'profiles' that only serve to keep track of file play history (not individual media libraries). If I were to do it, I would write it as a pseudo web application so I could take advantage of CSS, thus adding support for themes and layout modification without actually writing that functionality into the program. This approach also allows easy multi-platform support. Anyway, it's just an idea that I may or may not ever get around to myself, but I would really love to see it...
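The launch-command core of such a library could be sketched as follows. The extension-to-player table and the function names are hypothetical; a real version would read them from user configuration:

```python
from pathlib import Path

# Hypothetical user configuration: the library never decodes media,
# it only remembers which external player handles each file type.
PLAYERS = {
    ".flac": ["vlc"],
    ".mp3":  ["winamp"],
    ".mkv":  ["mpc-hc"],
}

play_history = {}  # path -> number of times launched

def resolve(path):
    """Build the launch command for a file, or raise if unconfigured."""
    player = PLAYERS.get(Path(path).suffix.lower())
    if player is None:
        raise ValueError(f"no player configured for {path}")
    return player + [path]

def record(path):
    """Track play counts so episodic files can show where you left off."""
    play_history[path] = play_history.get(path, 0) + 1

# Handing resolve(path) to subprocess.Popen would then start the
# configured player without blocking the library itself.
print(resolve("chapter_07.flac"))
```

The point of the split is that `resolve` holds all the per-format knowledge, so adding a format is one dictionary entry rather than new decoding code.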
  9. How long would a zombie apocalypse last?

    A good book to read on this subject that takes reality into account is World War Z. I recommend it highly.
  10. Create 2D model in 3ds max

    Serapth is correct. You cannot natively export a 3ds model to a vector format. I am sure that someone more motivated than myself could write an extension to do this, but I cannot conceive why anyone would want to. Traditionally, when a 3D application is used to create 2D images, the final product is a render of a 3D environment. So, one might model and animate a person walking, then place that person in a preset render room with a camera at a specific angle and render to a series of still images that are then put into a game environment and scaled like any other graphic. This can have aliasing problems unless your images are large, which can cause loading problems depending on your implementation, and when all the different walk cycles and other animations are included, your size and memory usage will likely exceed that of simply using the full 3D model. If at all possible, you want the vector-based approach mentioned by Serapth.
  11. Recommendation

    I cannot offer anything other than some insight from my experience. I would not say that what worked for me is the 'best,' but I would say that it worked well for me. The real question is: how do you learn? If you learn by looking at complete code samples and detailed step-by-step instructions, then I cannot offer you any advice on a free tutorial. However, if you know some other imperative programming language, as opposed to a functional language, and learn by establishing a logic and researching how to implement that logic, then I would point you at the MSDN online reference. It's actually packed full of information and fairly easy to navigate once you learn some of the terms. Now, because I've started arguments before, I would like to specify that C, C++, C#, Java, and almost every commonly known language are not functional languages. This does not mean that they do not function; it means that they operate by utilizing what is known as a 'side effect' to alter the program's state through variable manipulation. A functional language, like Haskell and a few others, works entirely off formula results, very similarly to a C++ static function that does not change global variables. So, that's my disclaimer about the above 'functional' term. Where am I going with this? If you don't have experience with C# or Java, then you may not realize, as I did not for a long time, that classic procedural logic does not work on a macro scale, since all 'methods' (code blocks that do something) must exist inside an object and cannot naturally communicate with any data outside that object instance. I had a really hard time with this concept and still find it easier to code in C++ because of it. If you are coming from that sort of background, you will certainly want to understand 'event handlers' from an early stage. I really wish someone had pointed me at them earlier.
  12. I have a question.

    It seems that I went with the trend of: maybe. In my opinion, it depends on the scale of the game as well as the development team making it. If you have a character artist at the ready from the beginning, are able to maintain the same quality of graphics throughout the entire project, and can keep the triangle count per scene within acceptable limits, then go for the models. However, if your plan is to slap models in at the last moment, I would recommend against it, since you will likely have an incongruity of graphic style between assets.
  13. Blender 2.5 tutorials?

    Over a year ago, I became interested in the same thing, and I am here to say a big, resounding no. It's not that one cannot piece together what they need to know by sifting through the hundreds, if not thousands, of tutorials that cover every version of Blender since 2.3; the truth is that there is no freely available, complete tutorial on intermediate modeling or animation. Now, I say "freely available" because I don't know whether the same is true for paid content, but my intuition is that that market has the same deficit. My best advice to you, and to anyone looking to learn modeling, is that, whether you use Blender or any other tool, you should search out instruction that is application independent. The idea here is that modeling is modeling no matter what application you choose. Yes, Blender does have some quirks that make some methods easier while others are made more difficult, but if the instructional media focuses on techniques, rather than a sequence of hot-bar selections and built-in shortcuts, then you should do just fine. Unfortunately, it has been a while since I looked at any of that, so I cannot provide specific websites. I can say that Andrew Price from [url=""]Blender Guru[/url] put together a rather nice list of keyboard shortcuts; you can find it in the "Freebies" section of his website. Aside from that, many of his tutorials will help a great deal with learning the Blender methodology, even though he focuses on final touch-up and rendering effects. Also, I believe this question would be better answered in a Blender forum.
  14. What graphics is this game using?

    Just to add to the above post, it looks to me as if a great deal of the detail may be achieved through heavy use of transparent overlays. This method adds little rocks or other detail on top of whatever is behind it, giving the illusion of a more detailed terrain system. If you look at [url=""]this[/url] image, it shows the size of the image tiles used as well as some of the subtle simplicities of their method.
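For what it's worth, the overlay trick comes down to ordinary "over" compositing. A minimal per-channel sketch with made-up colors:

```python
def blend(dst, src, alpha):
    """out = src*alpha + dst*(1 - alpha) per channel: a partly
    transparent detail tile lets the terrain behind show through."""
    return tuple(s * alpha + d * (1 - alpha) for s, d in zip(src, dst))

ground = (90, 70, 40)     # base terrain color (RGB)
rock   = (120, 120, 120)  # overlay detail tile
print(blend(ground, rock, 0.5))  # (105.0, 95.0, 80.0)
```

Because the base terrain keeps contributing to the final pixel, one small set of overlay tiles can dress up many different backgrounds.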
  15. Directx Meshes from Blender Models

    Although I have not done anything like what you are doing, I have worked a great deal with Blender and with trying to import its models into existing game systems. From my experience, I would recommend using the Collada exporter in Blender, then either writing an interpreter in your engine to translate the Collada data or writing your engine to use the raw Collada data. The reasoning for this is flexibility. Learning how to write an exporter for Blender may be a great exercise, but it absolutely ties your engine to Blender. That may not be a great thing to do if you think you'll ever use the engine for anything other than a personal hobby. Additionally, since Collada is simply standardized XML for digital shapes and animation, it should be easier to implement than any other format that I am aware of Blender supporting. It also seems to support everything, where most other model formats have specifications on how models must be made and animated.
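Because Collada is plain XML, even a standard library parser can pull vertex data out of it. The fragment below is a trimmed, namespace-free stand-in (real exports use the COLLADA namespace and indexed sources), so treat it as the shape of the approach rather than a working importer:

```python
import xml.etree.ElementTree as ET

# Trimmed, namespace-free stand-in for a Collada geometry source.
doc = """<COLLADA><library_geometries><geometry>
  <source id="positions">
    <float_array count="9">0 0 0 1 0 0 0 1 0</float_array>
  </source>
</geometry></library_geometries></COLLADA>"""

root = ET.fromstring(doc)
values = [float(v) for v in root.find(".//float_array").text.split()]
# Group the flat list into (x, y, z) vertex positions.
vertices = [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]
print(vertices)  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```

An engine-side interpreter would do essentially this, then resolve the index arrays into triangles, which is the part that keeps the engine independent of Blender.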