Everything posted by Jarrod1937

1. This reference guide has now been proofread by Stimarco (Sean Timarco Baggaley). Please give your thanks to him. The guide should now be far easier to read and understand than previous revisions; enjoy! Note: the normal mapping tutorial has been temporarily moved and will be added back as its own topic, to help separate the two for clarity.

Update 30-JUL-2008: Proofreading and editing by Sean Timarco Baggaley. Many typos and grammar errors fixed; some rephrasing and changes to titles.
Update 21-APR-2008: Edited and expanded existing areas and attempted to improve clarity. Added a new section titled "Texture related articles" [Renamed: "Further Reading", moved to end. *STB] for further reading. Will soon be moving on to the modeling portion of the reference.
Update 05-MAR-2008: Edited existing areas and attempted to improve clarity. Added a hand-created normal map tutorial and extended the normal map info. If anyone has any corrections, please contact me.

3D Graphics Primer 1: Textures
by Jarrod

This is a quick reference for artists who are starting out. The first topic revolves around textures and the many things a beginning artist needs to understand. I am primarily a 3D artist, so my focus will be on 3D art; however, some of this information is applicable to 2D artists as well.

Textures

What is a texture? By classical definition, a texture is the visual and especially tactile quality of a surface (Dictionary.com). Since current games lack the ability to convey tactile sensations, a texture in game terms simply refers to the visual quality of a surface, with an implied tactile quality. That is, a rock texture should give the impression of the surface of a rock and, depending on the type, a rough or smooth feel. We see these kinds of surfaces in real life and feel them in real life, so when we see the texture we know what it feels like without needing to touch it. But a lot more goes into making a convincing texture beyond this simple tactile quality. As you read on, you will learn that texturing for games is a complex topic, with many elements involved in creating textures for realtime rendering. We will look at:

Texture File Types & Formats
Texture Image Size
Texture File Size
Texture Types
Tiling Textures
Texture Bit Depth
Making Normal Maps (brief overview only)
Pixel Density and Fill Rate Limitation
Mipmaps
Further Reading

Further Reading

Creating and using textures is such a big subject that covering it entirely within this one primer is simply not sensible. All I can sensibly achieve here is a skim over the surface, so here are some links to further reading matter.

Beautiful Yet Friendly - written by Guillaume Provost, hosted by Adam Bromell. This is a very interesting article that goes into some depth about basic optimizations and the thought process behind designing and modeling a level. It goes into the technical side of things to give you a real understanding of what is going on in the background. You can use the information in this article to learn how to build models that use fewer resources -- polygons, textures, etc. -- for the same results. This is why it is the first article I am linking to: it is imperative to understand the topics it discusses. If you need any extra explanation after reading it, you can PM me and I am more than happy to help.
However, parts of this article go outside the texture realm and into the mesh side of things, so keep that in mind if you're focusing on learning textures at the moment.

UVW Mapping Tutorial - by Waylon Brinck. This is about the best tutorial I have found for a topic that gives all 3D artists a headache: unwrapping your three-dimensional object into a two-dimensional plane for 2D painting. It is the process by which all 3D artists place a texture on a mesh (model). NOTE: while this tutorial is very good and will help you learn the process, UVW mapping/unwrapping is just one of those things you must practice and experiment with for a while before you truly understand it.

Poop In My Mouth's Tutorials - by Ben Mathis. Probably the only professional I know who has such a scary website name, but don't be afraid! I swear there is nothing terrible beyond that link. He has a ton of excellent tutorials, short and long, covering both the modeling and texturing processes, ranging from normal mapping to UVW unwrapping. You may want to read this reference first before delving into some of his tutorials.

Texture File Types & Formats

In the computer world, textures are really nothing more than image files applied to a model. Because of this, a variety of common computer image formats can be used, including .TGA, .DDS, .BMP, and even .JPG (or .JPEG). Almost any digital image format can be used, but some things must be taken into consideration. In the modern world of gaming, which relies heavily on shaders, formats like .JPG are rarely used. This is because .JPG, and others like it, are lossy formats: data in the image file is actually thrown away to make the file smaller. This process can result in compression artifacts. The problem is that these artifacts will interfere with shaders, because shaders rely on having all the data contained within the image intact. Because of this, lossless formats are used -- formats like .DDS (if a lossless option is chosen), .BMP, and .TGA.

However, there is also a technique called S3TC (also known as "DXT") compression. This compression scheme was developed by S3 for its Savage 3D graphics cards, with the benefit of keeping a texture compressed within video memory, whereas non-S3TC-compressed textures are not. This results in a 1:8 or greater compression ratio and can allow either more textures to be used in a scene, or a higher resolution texture without using more memory. S3TC compression can be made to work with any format, but is most commonly associated with the .DDS format. Just like .JPG and other lossy formats, any texture using S3TC will suffer compression artifacts, and as such it is not suitable for normal maps (which we'll discuss a little later on). Even with S3TC, it is common to author the texture in a lossless format and then apply S3TC only when appropriate. This gives an artist lossless textures where they are needed -- e.g. for normal maps -- while still allowing compression on textures that can benefit from it, such as diffuse textures.

Texture Image Size

The engineers who design computers and their component parts like us to feed data to their hardware in chunks that have dimensions defined as powers of two (e.g. 16 pixels, 32 pixels, 64 pixels, and so on).
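To make the power-of-two rule concrete, here is a tiny illustrative Python check (the helper names are my own, not from any engine or tool):

```python
def is_power_of_two(n: int) -> bool:
    """True for 1, 2, 4, 8, 16, ... (exactly one bit set)."""
    return n > 0 and (n & (n - 1)) == 0

def valid_texture_dims(width: int, height: int) -> bool:
    """Each dimension must be a power of two on its own; the texture
    does not have to be square (512x32 is perfectly fine)."""
    return is_power_of_two(width) and is_power_of_two(height)

print(valid_texture_dims(1024, 1024))  # True
print(valid_texture_dims(1024, 768))   # False -- 768 is not a power of two
```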
While it is possible to have a texture that is not a power of two, it is generally a good idea to stick to power-of-two sizes for compatibility reasons (especially if you're targeting older hardware). That is, if you're creating a texture for a game, you want to use image dimensions that are powers of two: for example 32x32, 16x32, 2048x1024, 1024x1024, 512x512, or 512x32. Say you have a mesh/model you're UV unwrapping for a game: you must work within dimensions that are a power of two. Powers of two include 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and so on.

What if you want to use images that aren't powers of two? In such cases, you can often use uneven pixel ratios. This means you can create your texture at, say, 1024x768 and then save it as 1024x1024. When you apply the texture to your mesh, you can stretch it back out in proportion. It is best to aim for a 1:1 pixel ratio and create the texture at a power-of-two size from the start, but stretching is one method of getting around this if needed. Please refer to the "Pixel Density and Fill Rate Limitation" section for more in-depth information on how to choose your image size.

Texture File Size

File size is important for a number of reasons. The file size is the actual amount of memory (permanent or temporary) that the texture requires. For an obvious example, an uncompressed .BMP could be 6 MB; this is the space it requires on a hard drive and within video and/or system RAM. Using compression we can squeeze the same image into a file size of, say, 400 KB, or possibly even smaller. Compression like that used by .JPG and similar formats only helps on permanent storage media, such as hard drives. That is to say, when the image is stored in video card memory it must be uncompressed, so you only get a benefit when storing the texture on its permanent medium, not in memory during use.

Enter S3TC

The key benefit of the S3TC compression system is that it compresses the texture on hard drives, discs, and other media, while also staying compressed within video card memory. But why should you care what size it is within video memory? Video cards have onboard memory called, unsurprisingly enough, video memory, for storing data that needs to be accessed quickly. This memory is limited, so some consideration on the artist's part is required. The good news is that video card manufacturers are constantly adding more of this video memory -- known as Video RAM (or VRAM for short). Where once 64 MB was considered good, we can now find video cards with up to 1 GB of RAM. The more textures you have, the more RAM will be used; the actual amount is based primarily on the file size of each texture. Other data takes up video memory too, such as the model data itself, but the majority is used for textures. For this reason, it is a good idea both to plan your video memory usage and to test how much you're using once in the engine. It is also advisable to set a minimum hardware configuration that you want your game to run on; if you do, you should always make sure your video memory usage does not exceed the minimum target hardware's amount. Another advantage of in-memory compression like S3TC is that it can increase available bandwidth.
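As an aside, the memory arithmetic above is easy to sketch. This is a rough back-of-the-envelope estimate, assuming 4 bytes per pixel for uncompressed 32-bit RGBA and the usual DXT block sizes; exact figures vary by format and driver:

```python
MiB = 1024 * 1024

def texture_bytes(width, height, bytes_per_pixel):
    """Approximate in-memory footprint of a single texture level."""
    return width * height * bytes_per_pixel

# 1024x1024 uncompressed 32-bit RGBA: 4 bytes per pixel
print(texture_bytes(1024, 1024, 4) / MiB)    # 4.0 MiB
# The same texture as DXT1 (S3TC): 4 bits = 0.5 bytes per pixel, roughly 8:1
print(texture_bytes(1024, 1024, 0.5) / MiB)  # 0.5 MiB
# DXT5 (S3TC with a full alpha channel): 1 byte per pixel, roughly 4:1
print(texture_bytes(1024, 1024, 1) / MiB)    # 1.0 MiB
```

Run over a whole texture list, a sketch like this gives a quick sanity check of whether you fit inside your target VRAM budget.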
If you know your engine on your target hardware may be swapping textures back and forth frequently (something that should be avoided if possible, but is a technique used on consoles), then you may want to consider having the textures compressed and then decompress them on the fly. That is to say, you have the textures compressed, and then when they're required, they're transported and then decompressed as they're added to video memory. This results in less data having to be shunted across the graphics card's bus (transport line) to and from the computer's main memory, resulting in less bandwidth utilization, but with the added penalty of a few wasted processing clocks. Texture Types Now we're going to discuss such things are diffuse, normal, specular, parallax and cube mapping. Aside from the diffuse map, these mapping types are common in what have become known as 'next-gen' engines, where the artist is given more control over how the model is rendered by use of control maps for shaders. Diffuse maps are textures which simply provide the color and any baked-in details. Games before shaders simply used diffuse textures. (You could say this is the 'default' texture type.) For example, when texturing a rock, the diffuse map of the rock would be just the color and data of that rock. Diffuse textures can be painted by hand or made from photographs, or a mixture of both. However, in any modern engine, most lighting and shadow detail is preferred to not be 'baked' (i.e. pre-drawn directly into the texture) into the diffuse map, but instead to have just the color data in the diffuse and use other control maps to recreate such things as shadows, specular reflections, refraction effects and so on. This results in a more dynamic look for the texture overall, helping its believability once in-game. 'Baking' such details will hinder the game engine's ability to produce "dynamic" results, which will cause the end result to look unrealistic. There is sometimes an exception to this rule: if you're providing "general" shading/lighting like ambient occlusion maps baked (merged) into the diffuse, then it is ok. These types of additions into the diffuse are general enough that they won't hinder the dynamic effect of running in a realtime engine, while achieving a higher level of realism. Another point to remember is that while textures sourced from photographs tend to work very well in 3D environment work, it is often frowned upon to use such 'photosourced textures' for humans. Human diffuse textures are usually hand-painted. Normal maps are used to provide lighting detail for dynamic lighting, however this is involved in an even more important role, as we will discuss shortly. Normal maps get their name from the fact that they recreate normals on a mesh using a texture. A 'normal' is a point (actually a vector) extending from a triangle on a mesh. This tells the game engine how much light the triangle should receive from a particular light source -- the engine simply compares the angle of the normal with the position and angle of the light itself and thus calculates how the light strikes the triangle. Without a normal map, the game engine can only use the data available in the mesh itself; a triangle would only have three normals for the engine to use -- one at each point -- regardless of how big that triangle is on the screen, resulting in a flat look. A normal map, on the other hand, creates normals right across a mesh's surface. 
The result is the capability to generate a normal map from a 2 million poly mesh and have this lighting detail recreated on a 200 poly mesh. This can allow an artist to recreate a high-poly mesh with relatively low polygons in comparison. Such tricks are associated with 'next-gen' engines, and are heavily used in screenshots of the Unreal 3 Engine, where you can see large amounts of detail yet able to run in realtime due to the actual amount of polys used. Normal maps use the different color channels (red, green, and blue) for storing gray scale data of lighting information at different angles, however there is no set standard for how these channels can be interpreted and as such sometimes require a channel to be flipped for correct function in an engine. Normal maps can be generated from high-poly meshes, or they can be image generated, that is, being generated from a gray scale map. However, high-poly generated normal maps tend to be preferred as they tend to have more depth. This is for various reasons, but the main one is that it is difficult for most to paint a grayscale map that is equal to the quality you'll receive straight from a model. Also, you will often find that the image generators tend to miss details, requiring the artist to edit the normal map by hand later. However, it is not impossible to receive near equal results using both methods, each have their own requirements, it is up to you on which to use in each situation (Please refer to the tutorial section for extended information on normal maps.) Specular maps are simple in creation. They are easy to paint by hand, because the map is simply a gray scale map that defines the specular level for any point on a mesh. However, in some engines and in 3D modeling applications this map can be full-color so the brightness defines the specular level and color defines the color of the specular highlights. This gives the artist finer control to produce more lifelike textures because the specific specular attribute of certain materials can be more defined and closer to reality. In this way you can give a plastic material a more plastic-like specular reflection, or simulate a stone's particular specular look. Parallax mapping picks up where regular normal mapping fails. We create normals on a model by use of a normal map. A parallax map does the same thing as a normal map except it samples a grayscale map within the alpha channel. Parallax mapping works by using the angles recorded in the tangent space normal maps along with the heightmap to calculate which way to displace the texture coordinates. It uses the grayscale map to define how much the texture should be extruded outwards, and uses the angle information recorded in the tangent normal map to determine the angle to offset the texture. Doing things this way a parallax map can then recreate the extrusion of a normal map, but without the flattening that visible in ordinary normal mapping due to lack of data at different perspectives. It mainly gets its name from how the effect is created by the parallax effect. The end results are cleaner and deeper extrusions with less flattening that occurs in normal mapping because the texture coordinates are offset with your perspective. Parallax mapping is usually more costly than normal mapping. Parallax also has the limitation of not being used on deformable meshes because of the tendency for the texture to "swim" due to the way textures are offset. 
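Before moving on to cube maps, here is a minimal sketch of the image-generated normal maps mentioned above: deriving a tangent-space normal map from a grayscale heightmap using finite differences. It assumes numpy and a height array scaled to 0..1, and as noted above, some engines expect the green channel flipped:

```python
import numpy as np

def height_to_normal_map(height, strength=2.0):
    """height: HxW float array in 0..1. Returns an HxWx3 uint8 normal map."""
    # Per-pixel slope in x and y (finite differences between neighbours).
    dy, dx = np.gradient(height.astype(float))
    dx, dy = dx * strength, dy * strength
    # Assemble normals (-dx, -dy, 1) and normalise them to unit length.
    nz = np.ones_like(height)
    length = np.sqrt(dx * dx + dy * dy + nz * nz)
    nx, ny, nz = -dx / length, -dy / length, nz / length
    # Remap from -1..1 into 0..255, so a flat area encodes near (128, 128, 255).
    rgb = np.stack([nx, ny, nz], axis=-1) * 0.5 + 0.5
    return (rgb * 255).astype(np.uint8)
```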
Cube mapping uses a texture that is not unlike an unfolded box, and it acts just like a skybox when used. Basically, the texture is laid out like an unfolded box, which is folded back together when it is used. This allows us to create a three-dimensional reflection of sorts. The result is very realistic precomputed reflections. For example, you would use this technique to create a shiny metal ball; the cube map would provide the reflection data. (In some engines, you can tell the engine itself to generate a cube map from a specific point in the scene and apply the result to a model. This is how we get shiny, reflective cars in racing games, for example: the engine is constantly taking cube map 'photos' for each car to ensure it reflects its surroundings accurately.)

Tiling Textures

Now, while you can have unique, specific textures made for specific models, a common way to save both time and video memory is to use tiling textures. These are textures which can be fitted together like floor or wall tiles, producing a single, seamless texture/image. The benefit is that you can texture an entire floor in an entire building using a single tiling texture, which saves the artist's time, and saves video memory because fewer textures are needed. A tiling texture is achieved by having the left and right sides of a texture blend into each other, and the same for the top and bottom. Such blending can be achieved with a specialist program, a plugin, or by simply offsetting the texture a little to the left and down, cleaning up the seams, and then offsetting back to the original position. After you create your first tiling texture and test it, you're bound to see that each texture produces a 'tiling artifact' which shows how and where the texture is tiled. Such artifacts can be reduced by avoiding high-contrast detail, avoiding unique detail (such as a single rock on a sand texture), and by tiling the texture less.

Texture Bit Depth

A texture is just an image file. This means most of the theory you're familiar with when it comes to working with images also applies to textures. One such concept is bit depth. Here you will see numbers such as 8, 16, 24, and 32 bits. These each correspond to the amount of color data that is stored for each image. How do we get the number 24 for an image file? The number 24 refers to how many bits it contains. That is, you have three channels -- red, green, and blue -- each of which is simply a grayscale image; added together, they produce a full-color image. So black in the red channel means "no red" at that point, while white in the red channel means "lots of red". The same applies to blue and green. When these are combined, they produce a full-color image, a mix of red, green, and blue (if using the RGB color model). The bits come in because they define how many levels of gray each channel has: 8 bits per channel, over 3 channels, is 24 bits total, and 8 bits gives you 256 levels of gray. Combining the idea that 8 bits gives you 256 levels of gray with the fact that each channel is simply a grayscale image, where each level of gray defines a level within that color, we can see that a 24-bit image gives us 16,777,216 different colors to play with. That is, 8 bits x 3 channels = 24 bits; 8 bits = 256 grayscale levels; so 256 x 256 x 256 = 16,777,216 colors.
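The channel arithmetic above is worth a quick worked check (illustrative only):

```python
def color_count(bits_per_channel, channels=3):
    """Distinct colours = (levels per channel) raised to the channel count."""
    levels = 2 ** bits_per_channel      # 8 bits -> 256 grey levels per channel
    return levels ** channels

print(color_count(8))  # 16777216 -- the 16.7 million colours of a 24-bit image
print(color_count(5))  # 32768    -- 5 bits per channel, 15-bit "high colour"
```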
This knowledge comes in useful when at certain times it is easier to edit the RGB channels individually, with a correct understanding you can then delve deeper into editing your textures. However, with the increase in shader use, you'll often see a 32 bit image/texture file. These are image files which contain 4 channels, each of 8 bits: 4 x 8 = 32. This allows a developer to use the 4th channel to carry a unique control map or extra data needed for shaders. Since each channel is gray scale, a 32 bit image is ideal to carry a full color texture along with an extra map within it. Depending on your engine you may see a full color texture with the extra 4th channel being used to hold a gray scale map for transparency (more commonly known as an "alpha channel"), specular control map, or a gray scale map along with a normal map in the other channels to be used for parallax mapping. As you paint your textures you may start to realize that you're probably not using all of the colors available to you in a 24 bit image. And you're probably right, this is why artists can at times use a lower bit depth texture to achieve the same or near the same look with a lesser memory footprint. There are certain cases where you will more than likely need a 24 bit image however: If your image/texture contains frequent gradations in the color or shading, then a 24 bit image is required. However, if your image/texture contains solid colors, little detail, little or no shading, and so on, you can probably get away with a 16, 8, or perhaps even a 4 bit texture. Often, this type of memory conservation technique is best done when the artist is able to choose/make his own color pallette. This is where you hand pick the colors that will be saved with your image, instead of letting the computer automatically choose. By using a careful eye you have the possibility to choose more optimal colors which will fit your texture better. Basically, in a way, all you're doing is throwing out what would be considered useless colors which are being stored in the texture but not being used. Making Normal Maps There are two methods for creating normal maps: Using a detailed mesh/model.Creating a normal map from an image. The first method is part of a common workflow that nearly all modelers who support normal map generation use. For generating a normal map from a model you can either generate it out to a straight texture, or if you're generating your normal map from a high-poly mesh, it is common to then model the low poly mesh around your high-poly mesh. (Some artists have prefer for modeling the low-poly version first while others like to do the high then the low, in the end there is no perfect way, its just preference.) For example: you have a high-poly rock, you will then model/build a low poly mesh around the high, then UVW unwrap it, and generate the normal map from your high-poly version. Virtual "rays" will be cast from the high- to the low-poly model -- a technique known as "projecting". This allows for a better application of your high-poly mesh normal map to your low-poly mesh since you're projecting the detail from the high to the low. However, some applications will switch the requirements and have your low poly mesh be inside your high, and others allow the rays for generating the normal map to be cast both ways. So refer to your application tutorials for how to do this as it may vary. Creating a normal map from an image. This method can be quicker than the more common method described above. 
For this, all you need is an edited version of your diffuse map. The key to good-looking image-based normal maps is to edit out any unneeded information for your grayscale diffuse texture. If your diffuse has any baked-in specular, shadows, or any information that does not define depth, this needs to be removed from the image. Also, anything that is extra, like strips of paint on a concrete texture, that too should be edited out. This is because, just like bump maps, and displacement maps, the colour of the pixels defines depth, with lighter pixels being "higher" than darker pixels. So, if you have specular (will turn white when made gray), it will be interpreted as a bump in your normal map, you don't want this if the specular in fact lays on a flat surface. The same applies to shadows and everything else mentioned: it will all interfere with the normal map generation process. You simply want to only have various shades of gray represent various depths, any other data in the texture will not produce the correct results. For generating normal maps you can always use the Nvidia Plugin, however it takes a lot of tweaking to get a good looking normal map. As such, I recommend Crazy Bump!. Crazy Bump will produce some very good normal maps if the given texture it is generating it from is good. Combining the two methods. It is common, even if you're generating a normal map from a 3d high-poly mesh to then generate an image generated normal map and overlay it over the high-poly generated one. This is done by generating one from your diffuse map, filling the resulting normal map's blue channel with 128 neutral gray, and then overlaying this over your high-poly generated one. This is done to add in those small details that only the image can generate. This way you get the high frequency detail along with the nice and cleanly generated mid-to-low frequency detail from your high-poly generated normal map. Pixel Density and Fill Rate Limitation Let's say you have a coin that you just finished UVW unwrapping, it will indeed be very small once in-game, however you decide it would be fine to use a 1024x1024 texture. What is wrong with the above situation? Firstly, you shouldn't need to UVW unwrap a coin! Furthermore, you should not be applying the 1024x1024 texture! Not only is this wasteful of video memory, but it will result in uneven pixel density and will increase your fill rate on that model for no reason. A good rule of thumb is to only use the amount of resources that would make sense based on how much screen space an object will take up. A building will take up more of the screen than a coin, so it needs a higher resolution texture and more polygons. A coin takes up less screen space and therefore needs fewer polys and a lower resolution texture to obtain a similar pixel density. So, what is pixel density? It is the density of each pixel from a texture on a mesh. For example, take the UVW unwrapping tutorial linked to in the "Texture Image Size" section: There you will see a checkered pattern, this is not only used to make sure the texture is applied right, but to also keep track of pixel density. If you increased the pixel density, you would see the checkered pattern get more dense; if you decrease the density, the checkered pattern would be less dense, with fewer squares showing. Maintaining a consistent pixel density in a game helps all of the art fit together. How would you feel if your high pixel density character walks up to a wall with a significantly lower pixel density? 
Your eyes would be able to compare the two and see that the wall looks like crap compared to the character, however would this same thing happen if the character were near the same pixel density of the wall? Probably not -- such things only become apparent (within reason) to the end user if they have something to compare it to. If you keep a consistent pixel density throughout all of the game's assets, you will see all of it fits together better. It is important to note that there is one other reason for this, but we'll come to it in a moment. First, we need to look at two related problems that can arise: transform(ation) limited and fill-rate limited modeling. A transform-limited model will have less pixel density per polygon than a fill-rate limited model where it has a higher pixel density per polygon. The theory is that a model takes longer on either processing the polys, or processing the actual pixel-dense surface. Knowing this, we can see that our coin, with very few polys will have a giant pixel density per polygon, resulting in a fill rate limited mesh. However, it does not need to be fill rate limited if we lower the texture resolution, resulting in a lower pixel density. The point is that your mesh will be held back when rendering based on which process takes longer: transform or fill rate. If your mesh is fill rate limited then you can speed up its processing by decreasing its pixel density, and its speed will increase until you reach transform limitation, in which your mesh is now taking longer to render based on the amount of polygons it contains. In the latter case, you would then speed up the processing of the model by decreasing the amount of polygons the model contains. That is, until you decrease the polygon count to the point where you're now fill rate limited once again! As you can see, it's a balancing act. The trick is to maximize the speed of the transform and fill rate processing (minimize the impact of both as much as you can), to get the best possible processing speed for your mesh. That said, being fill rate limited can sometimes be a good thing. The majority of "next-gen" games are fill rate limited primarily because of their use of texture/shading techniques. So, if you can't possibly get any lower on the fill rate limitation and you're still fill rate limited, then you have a little bit of wiggle room to work around where you can actually introduce more polygons with no performance hit. However, you should always try to cut down on fill rate limitations when possible because of general performance concerns Some methods revolve around splitting up a single polygon into multiple polygons on a single mesh (like a wall for example). This works by then decreasing the pixel density and processing (shaders) for the single polygon by splitting the work into multiple polygons. There are other methods for dealing with fill rate limitation, but mainly it is as simple as keeping your pixel density at a reasonable level. MipMaps It is fitting that after we discuss pixel density and fill rate limitation that we discuss a thing called Mipmapping. Mipmaps (or mip maps) are a collection of precomputed lower resolution image copies for a texture contained in the texture. Let's say you have a 1024x1024 texture. If you generate mipmaps for your texture, it will contain the original 1024x1024 texture, but it will also contain a 512x512, 256x256, 128x128, 64x64, 32x32, 16x16, 8x8, 4x4, 2x2 version of the same texture, (exactly how many levels there are is up to you). 
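The chain of sizes falls straight out of repeated halving, and the memory cost of carrying the whole chain is easy to estimate. A small sketch, assuming an uncompressed 4-bytes-per-pixel texture and a chain that runs all the way down to 1x1 (you can stop earlier, as noted above):

```python
def mip_chain(width, height):
    """Resolutions of every mipmap level, halving until 1x1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

chain = mip_chain(1024, 1024)
print(chain[:4])  # [(1024, 1024), (512, 512), (256, 256), (128, 128)]

# Whole chain vs. the base level alone, at 4 bytes per pixel:
base  = 1024 * 1024 * 4
total = sum(w * h * 4 for w, h in chain)
print(total / base)  # ~1.33 -- the extra levels cost roughly a third more memory
```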
These smaller mipmaps (textures) are then used in sequence, one after the other, according to the model's distance from the camera in the scene. If your model uses a 1024x1024 texture up close, it may be using a 256x256 texture when further away, and an even smaller mipmap level when it's way off in the distance. This is done for several reasons:

The further away you are from your mesh, the less polygonal and texture detail is needed. All displays have a fixed display resolution, and it is physically impossible for the player to make out the detail of a 1024x1024 texture and a 6,000-polygon mesh when the model takes up only 20 pixels on screen. The further we are from the mesh, the fewer polygons and the lower the texture resolution we need to render it.

Because of the fill rate limitation described above, it is beneficial to use mipmaps, as less texture detail must be processed for distant meshes. This results in a less fill-rate-heavy scene, because only the closer models receive larger textures, whereas more distant models receive smaller textures.

Texture filtering. What happens when the player views a 1024x1024 texture at such a distance that only 40 pixels are given to render the mesh on screen? You get noise and aliasing artifacts! Without mipmaps, any texture too large for the display resolution will only produce unneeded and unwanted noise. Instead of filtering out this noise, mipmapping uses a lower resolution texture at different distances, which results in less noise.

It is important to note that while mipmaps will increase performance overall, you're actually increasing your texture memory usage, because the mipmaps are loaded into memory along with the whole texture. However, it is possible to have a system where the user or developer can select the highest mipmap level they want, and the levels above this limit will not be loaded into memory (as in the system we're using now), while the mipmaps at or below this limit are still loaded. It is widely agreed that the benefits of mipmaps vastly outweigh the small memory hit.

NOTE: Mesh detail can also affect performance, so an equivalent method is used for mesh detail, known as LOD -- "Level Of Detail". Multiple versions of the mesh itself are stored at different levels of detail, and the less-detailed mesh is rendered when the model is a long way away. Like mipmaps, a mesh can have as many levels of detail as you feel it requires.

The image below is of a level I created which makes use of most of what we've discussed. It uses S3TC-compressed textures, normal mapping, diffuse mapping, specular mapping, and tiled textures.

An example of a level making use of most of the features discussed.

[Edited by - Jarrod1937 on August 17, 2008 5:47:42 PM]
2. Here is another article/guide I wrote for my site. This one covers some general and advanced methods of tweaking for performance. I hope you find it helpful. Whether or not this gets stickied, though, is up to the moderator.

Improving Your Computer's Performance

In this article I'll be writing about ways to make the best use of your system. Before we go any further, I must give the standard disclaimer: I cannot be held liable for any event that is derived from the use of this article, nor for any damages or loss of data. If you're unsure of anything below, do not attempt it, or thoroughly back up your data beforehand.

First, a few items of interest need to be covered. These will help you understand what is technically occurring in the background, so you can understand the "why," which will allow you to make the best choices later.

Memory Management and Virtual Memory

There are two types of memory managers; we'll cover the OS one for now. The Operating System (OS) has its own form of memory management that works in coordination with the hardware Memory Management Unit (MMU). From the OS's view, there are plentiful amounts of contiguous (non-fragmented) memory. This is done through the use of virtual memory, which uses a virtual memory address space. Memory addresses are basically hexadecimal/binary numbered pointers to sections/chunks of memory; in most modern OSes each address references a byte of memory. These virtual memory addresses are then combined into larger units called pages. Pages are converted to real memory addresses (addresses backed by "real" memory, a.k.a. RAM) by use of a lookup table called a page table. However, this is only for data that needs to be stored in RAM. If the data is needed/requested by the CPU (the main processor of the computer), a hardware unit called the Memory Management Unit (MMU) translates this for the CPU on the fly.

Now, by the nature of virtual memory, there will generally be an overflow, as there are more virtual memory addresses available than real memory addresses from physical memory that they can be mapped/translated to. The result is the creation of the "paging" file, also known as the swap, page, or virtual memory file. This file is in essence a reserved chunk of hard drive space that can hold "paged out" virtual memory, where the pages to page out are usually chosen by algorithms such as LRU (Least Recently Used). Now, you may be asking, "What is the point of this?" This arrangement facilitates the creation of programs and the running of all software. If you mapped programs directly to RAM you would run into all kinds of problems, problems which would increase the complexity of programming software. With virtual memory you don't have to worry that you'll allocate too much RAM and cause a system crash. Instead you simply allocate a large amount of virtual memory, which can be held in the paging file and paged in once it is needed. Basically, the system gives developers, OS and application alike, more headroom.

So, what does this teach us? 1.) The page file is in essence an extension of your memory subsystem. 2.) Its performance is important to your memory subsystem's performance. 3.) If an object is taking up a lot of physical memory and another object needs more physical memory, the least recently used pages will be paged out to make room. It should be noted, though, that not all data contained in the page file actually needs to be assigned an address space.
Data can, in fact, reside there with no addressing and be assigned addresses later, once some are free. This can allow a very large allocation to occur for a program, one that exceeds even the total virtual address space.

Clusters

Back in the early days of computing, an OS would reference specific blocks on a hard drive, as all hard drive manufacturers used a set number of blocks (blocks are intersections of a track with a sector) for all cylinders/tracks. This made it easy to reference each block, because it could be predicted how many sectors would be in a cylinder/track. However, this technique is inefficient, as the data density varies as you get closer to the inner cylinders of a platter; using the old method would waste storage space on the platter. Now we use zone bit recording, where the number of blocks varies with the track/cylinder location: more on the outside tracks and fewer on the inner tracks. With this system, a large overhead is created if you wish to deal directly with blocks.

Because of this change, a modern OS defines an abstraction called "clusters." Clusters are in a way a higher-level unit of storage space, similar to the pages discussed above, and are basically groups of contiguous blocks. The operating system views everything as clusters, which are groups of bits/bytes/KiB. The key is to understand that the OS only sees clusters. With this you can define the "cluster size," which is the minimum chunk of space any file can use. If a file is under your cluster size, it will still take up the entire cluster. If a file is over the cluster size, it will be spread across multiple clusters, and a table of sorts is used to keep track of which clusters hold which data for which files. To give an example, say you have a cluster size of 4 KiB and you save a 2 KiB file to the drive: this will result in a total of 4 KiB being written, as the minimum allocation is dictated by the cluster size. Now say you still have a 4 KiB cluster size, but you're saving a 16 KiB file: this file will be spread across 4 different clusters, each 4 KiB in size. So, if the OS only sees clusters, how does it write to and read from a drive? The controller takes over that job. The controller for the storage device translates the clusters into specific sectors on the drive.

So, what does this teach us? 1.) The smallest space a file can take up is defined by the cluster size. 2.) A file over the cluster size will be split into multiple clusters. 3.) The smaller the cluster size and the larger the file, the more clusters make up that file. 4.) If a file takes up multiple clusters, a table is used to locate each cluster that contains part of that file.

File Systems

In the Windows world there are two main types of file system (FS) you can use: FAT and NTFS, with subsets/subversions of each. FAT, being the simpler of the two, contains the bare basics one might expect of a FS. A file system's main function is to provide a base for writing and reading data from a storage device. As discussed above, most modern file systems are built on the use of clusters, which makes the file system's main job facilitating reads and writes from clusters; however, a FS may also make use of other data in order to provide advanced features like security/permissions and per-file metadata (dates and such).
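Before looking at FAT and NTFS specifically, the cluster arithmetic from the previous section is easy to sanity-check in code (a toy sketch, not how any real file system driver works):

```python
import math

def clusters_needed(file_size, cluster_size):
    """A file always occupies whole clusters, so round up."""
    return max(1, math.ceil(file_size / cluster_size))

def slack_space(file_size, cluster_size):
    """Bytes allocated to the file but never actually used."""
    return clusters_needed(file_size, cluster_size) * cluster_size - file_size

KiB = 1024
print(clusters_needed(2 * KiB, 4 * KiB))   # 1 -- a 2 KiB file still costs 4 KiB on disk
print(slack_space(2 * KiB, 4 * KiB))       # 2048 bytes of wasted space
print(clusters_needed(16 * KiB, 4 * KiB))  # 4 -- exactly the example above
```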
FAT stands for File Allocation Table, the table which creates a "map" of sorts defining which clusters are used for what. If a file is larger than the cluster size, the first cluster is recorded with the file's directory entry along with a pointer; this pointer directs the OS to a specific row in the FAT that identifies a cluster holding part of the file, and that row may in turn point to another row. The chain is followed until a row with an end marker states that this is the end of the file. This method works quite well; however, the table is a bit inefficient with larger files.

NTFS has the MFT (Master File Table). The MFT does the same job as the FAT, but also contains attribute metadata for each file, and is in a relational form for greater efficiency. However, NTFS has more to offer than just the MFT; it also provides features like file-level compression that can be decompressed on the fly by the OS. It is important to note, though, that even though NTFS is a superior file system, it has greater overhead than FAT, and the benefits only overcome the overhead as volumes get larger. This means that, in effect, you will get better performance with a 2 GB volume formatted with FAT than with NTFS. The performance benefit gained from FAT's lower overhead starts to drop off around the 2 GB mark, beyond which NTFS is the better choice. This is also because FAT's required cluster size increases as the volume size increases, and after a certain point (around the 2 GB mark) it becomes too taxing.

Fragmentation

So, what happens when you save 20 different 4 KiB files, labeled 1 through 20, onto the hard drive, delete files 10 and 15, then save a single 10 KiB file to the drive? First, you allocated and used 80 KiB of space on the drive; then you deleted two files, leaving 8 KiB of free space in the middle of that allocated space. After that, you saved a 10 KiB file to the drive. The result is that the 10 KiB file will use that 8 KiB gap in the middle for most of itself, then skip to the end, where it reaches more free space, to store its remaining 2 KiB. This means the file is using non-contiguous allocations of clusters. This is called fragmentation: a file is split into multiple chunks as a result of being written to the first pieces of free space the file system finds, to avoid wasting space. Since hard drives have mechanical latencies, which get worse the more the drive has to move around to read data (excluding solid state drives), the drive becomes slower as the same file becomes more fragmented. And, as you may figure, this slows down overall system performance.

Registry

Windows is in a way a massive application that supports running other software within it. When you create something like an OS, you must also create an efficient means of storing settings and data for both the OS and its applications. This is where the registry comes in. The registry is a collection of database files: System, Software, SAM, UserDiff, Security, Default and NTuser.dat. These files hold all the settings for the OS's operations and applications. To open and edit the registry you can go to Start -> Run -> type in regedit. We'll discuss some changes below.

System Setup

OK, time for setup guidelines we can follow using this information.
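As a quick aside before the setup guidelines, the FAT pointer-chain described above can be sketched as follows (a toy model with made-up cluster numbers; a real FAT uses reserved marker values rather than -1):

```python
END_OF_FILE = -1                      # stand-in for FAT's real end-of-chain marker

# Toy FAT: key = cluster number, value = next cluster in the file (or the end marker).
fat = {5: 9, 9: 12, 12: END_OF_FILE}

def file_clusters(first_cluster, fat):
    """Follow the chain of pointers until the end-of-file marker is reached."""
    chain = [first_cluster]
    while fat[chain[-1]] != END_OF_FILE:
        chain.append(fat[chain[-1]])
    return chain

print(file_clusters(5, fat))  # [5, 9, 12] -- the clusters holding the file, in order
```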
Paging File

If you right-click on "My Computer" and go to "Properties", then "Advanced", then under Performance click on "Settings", then go to the "Advanced" tab and under Virtual Memory click on "Change", you'll be able to set the size of the page file we discussed earlier. Now, here is where we get into myth-land. There is a lot of talk around the web that you can simply disable the page file. While this is certainly possible, there is no performance to be gained from it, and more than likely performance to be lost. As discussed, items that are least recently used will be paged out of physical RAM and into the page file. The result is that there is now more physical RAM free for actual use by the OS. By disabling the paging file, all you're doing is forcing every single piece of data that was loaded into RAM to stay there, even if it has not been used in quite some time, or else forcing that data to be removed from RAM completely. This causes two things to happen: 1.) You increase the amount of physical RAM you're using for the same task. 2.) If more RAM is needed, the OS will completely discard data from RAM instead of paging it out to the page file. This results in a page fault being raised if the discarded data is needed again and not found in RAM, in which case the data must be re-transferred from the drive. If you have a page file, instead of the data being removed completely from the virtual memory space, it is paged out to the page file, and if the data is requested it is pulled from the page file back into RAM. As you can see, this gives the same effect in the end, yet without a page file you're losing its primary purpose and usefulness. The purpose of the page file is to act as a working space for programs. If you disable it you'll run into many more page faults and higher RAM usage, as more "useless" and least recently used items are stuck in RAM. This leaves less free RAM, which leaves less room for the OS's file cache (which also uses RAM), and so hurts performance elsewhere -- not to mention shrinking the room programs have to work in. All in all, the page file is a needed element of a modern OS and should not be disabled, no matter how much RAM you have.

Now the question is, how do you set it? The page file needs to be at least as large as your largest memory requirement. To find this, sit and use your computer as heavily as you ever will in your environment, then watch the "Peak Commit Charge" in the Task Manager under the "Performance" tab. This is the maximum value you should set your page file to. This value may be a bit excessive, but there is no harm in setting the page file too large, only too small. After you have this value, convert it into megabytes (it is commonly shown in KiB within the Task Manager). Then set the page file to your value, using the same number for both the Max and the Min. This prevents CPU utilization from a dynamically resizing page file and, more importantly, it prevents the page file itself from becoming fragmented -- split into multiple chunks as it grows and shrinks -- hurting overall performance. Setting the same Max and Min value will prevent this fragmentation from occurring. This is also the reason why you don't want to create a page file "after the fact": the page file, if placed on the system drive, should be the first thing you create after installing your OS.
If you create the page file later on, on a drive whose storage is not 100% perfectly packed, the page file will become fragmented right off the bat. However, this is still fixable if you use the "page defrag" app mentioned below in the "Defragment" section.

Next, we can consider placing the page file on a separate drive from the main system drive. Placing the page file on a less-accessed drive will help increase performance at times of concurrent system drive and page file access. It is not a good idea to place the page file on a separate partition on the same drive. Because drives use circular platters for storing data, the linear speed at which the surface passes under the head varies: faster on the outside and slower toward the inner parts of the platter. Because of this, data is stored from the outside in. If you create your main system partition with a specific size, it will be created from the outside in. If you then create a partition for your page file, it will be placed after the system partition, so you'll inevitably be creating it further toward the inner region of the platter, hindering its throughput compared to placing it within the system partition itself. So instead, keep it in the same partition, or on a separate drive. Now, if you do have a second drive, you may remember what I said earlier: FAT is the king at around 2 GB or under. So, if you do have a second drive, make a partition of 2 GB or under (the first partition on the drive) and, instead of using NTFS, use FAT; the decrease in overhead will increase its performance. You may also want to consider placing the page file on a RAID setup. This can provide superior performance; you can look into setting up RAID with my RAID guide. Just be sure, if you do decide to go this route, not to place the page file on a RAID level that carries a write penalty (i.e. 3, 5, 6), as this will give unpredictable (usually not good) performance.

Defragment!

Now, this may be the simplest thing you can do: defragment your hard drive. To do this within Windows, go to Start -> All Programs -> Accessories -> System Tools -> Disk Defragmenter. However, even though that tool is satisfactory, it does not do too great a job. I would really recommend tools like PerfectDisk and Diskeeper, which are excellent retail tools that do a better job than the one built into Windows. It is worth noting that Windows' own disk defragmenter does not defragment important files such as the MFT and others that sit outside the reach of a normal defrag pass, but Diskeeper and PerfectDisk do. Even better, there is also a free tool from Sysinternals that defragments all important system files, excluding the MFT but including the page file. You can get it here; it will require you to restart your computer, as access to the page file is impossible while Windows is running.

Cluster Sizes

As mentioned earlier, the cluster size determines the minimum space a file can take up, and files larger than the cluster size are split into multiple clusters, the number of clusters being clusters = [file size] / [cluster size]. This creates an interesting dilemma: what should the cluster size be set to? If you choose a large cluster size, you'll end up wasting a lot of disk space if many of your files are smaller than the cluster size. But if you have a lot of files over that size, you'll be reducing the number of clusters they must be split into. What this means is: with a small cluster size, more files will be split into multiple clusters.
But your storage efficiency will be better, although your system will also become fragmented more easily. A large cluster size will reduce the number of clusters used per file, which helps reduce future fragmentation. The choice of which to go for is yours, as it depends on your environment and usage.

Compression

Most people will probably not consider NTFS's compression a performance booster. However, file/volume compression within NTFS can compress a file and decompress it on the fly. When NTFS decompresses the file, it must first load the file off the drive and then decompress it. So, what does this mean? Well, which would be faster: pulling a 180 MiB file off a mechanically latent hard drive, or pulling a 20 MiB file off the drive? Hard drives are the main bottleneck in transfer/throughput performance. When you use compression, it is like compressing a large file on a server, having the client download the smaller file, and then uncompressing it: you can transfer the same amount of data in less time. There is one catch, though: processor performance. The dynamic compression and decompression of NTFS will increase CPU utilization. However, this option is almost a viable solution nowadays with dual or even quad core processors, where one core can compress and decompress the files on the fly while you're busy working on the other(s), all the while achieving technically better throughput. It is, at the very least, worth trying out.

Update: After further experimentation I have found that NTFS's compression does induce overhead while writing. While benchmarking, I found that write speeds decreased considerably. I also found that combined reading and writing was especially slow. For example, copying a file from one directory to another on the same drive will cause the drive to read the data, write the data, read the data, write the data, and so on until done. Doing this, speed decreased even further, which makes me believe that NTFS's compression algorithm was not designed with performance in mind. Looking at the benchmarks, it appears almost as if it reads the data, uncompresses it, recompresses it, and then writes it. If the algorithm were designed more intelligently, it should simply copy the compressed data from one part of the drive to another, with no need for uncompressing, but this does not seem to be the case. Because of this new data, I would recommend considering NTFS compression only if your workload is purely read-oriented; any other purpose would incur too much overhead.

Disable 8.3 File Name Creation

Back in the days of DOS, and the use of FAT12 and FAT16, there was a requirement to use a special format for file names. This is known as the 8.3 file name format, i.e. 8 characters, a period, and 3 characters to identify the file -- for example, 12345678.123 or qwertyui.zip. These days there is an extension system in place that allows up to 255 characters in a file name, but for compatibility purposes these long file names are truncated to the 8.3 format and written alongside the long file name. So, if a file ever needs to be read by an older OS or written to an older FAT file system, it can be. However, if you know for a fact you'll never be dealing with DOS (or its 16-bit applications), or a FAT file system below FAT32, you can disable this name creation.
There are a multitude of methods you can use for this, but the simplest is: 1.) Go to Start, then Run, and type in regedit. 2.) Navigate to HKLM -> System -> CurrentControlSet -> Control -> FileSystem. 3.) Change NtfsDisable8dot3NameCreation to 1. You'll then have to restart your computer for the change to take effect. Alternatively, I have made a quick .reg file that will automate this for you. Simply download the file (very small download) and double-click it; a prompt will pop up asking if you want to add the entry. Say yes, then restart.

Disable Last Access Update in NTFS

Each time you view/list a directory or list of directories, the "Last Access" time stamp is updated for each. If you wish to speed up the listing of directories, you can disable this. To do so, follow these instructions: 1.) Go to Start, then Run, and type in regedit. 2.) Navigate to HKLM -> System -> CurrentControlSet -> Control -> FileSystem. 3.) Change NtfsDisableLastAccessUpdate to 1. If you do not have this entry (some systems may not), you can add it by right-clicking -> New -> REG_DWORD -> name it NtfsDisableLastAccessUpdate -> set its value to 1. You'll then have to restart your computer for the change to take effect. Alternatively, I have made a quick .reg file that will automate this for you. Simply download the file (very small download) and double-click it; a prompt will pop up asking if you want to add the entry. Say yes, then restart.

Startup Items

When you install software or hardware, or even just use your browser, startup entries will be added to your system. These entries define drivers that need to be loaded, background services, and, most useless of all, taskbar/system tray items (items which load into the bottom right side of the taskbar). Most of the time these items are unnecessary and can be taken away. This will both reduce the idle RAM usage you may experience and decrease the amount of time it takes your computer to start. Now, there are, as usual, a few ways you can go about this. The first is to use a built-in utility called msconfig. To launch it: 1.) Go to Start, then Run, and type in msconfig (this utility does not exist in Windows 2000; if you have Windows 2000, use the other tool I'm about to mention). From here you can go to the Startup tab; in most cases (meaning nearly 100% of them) you can go straight through and disable all of them, unless you have some preference for one or two of your programs to load at startup (e.g. AIM and such). After you're done, click Apply and restart. You can also go to the Services tab, check "Hide all Microsoft services", and look through those. Normally you can remove all of these entries too. However, some programs, like 3ds Max and Photoshop (or most Adobe products, for that matter), usually have a licensing service running in the background (Adobe's is called the "Adobe LM Service"), so do your research before you remove any entry whose purpose is not obvious. You may also leave the hide checkbox unchecked and disable services you know you don't need, but again, do your research before disabling any you're not sure about. As an example, in most cases you can disable the Indexing Service, which simply makes your files faster to search through, but at the cost of random performance drops as the service periodically indexes the files.
One service that some people recommend disabling is the System Restore service. I would strongly advise against this, as it backs up important system files and your registry; if you run into a problem, you can use System Restore to restore those settings and files. You can even use the "Disable all" button, but if you do, your computer will still start -- just not much will function. :-)
If you have Windows 2000 (which does not have msconfig), or you simply want a more advanced tool, I would recommend giving Sysinternals Autoruns a try. I must warn you, though: this is an advanced-user tool, and you can render your system inoperable with a single click of a button. However, no other tool I have run across gives the user as much control. It covers everything from services, browser helper objects (BHOs), and drivers to regular startup items and much more.

Drivers
The simplest "tweak" you can do to your system is to update your device drivers, including your chipset and video card drivers. Keeping your drivers up to date will not only give a possible performance increase, but may also resolve any conflicts and/or technical problems you have been having.

Windows XP Visual Effects
These days the impact of the visual effects is pretty minimal. However, if you want, you can get rid of them anyway. To do this, go to Start -> Control Panel -> make sure you're in Classic View, then click on System -> go to the Advanced tab -> under Performance click Settings -> select "Adjust for best performance." This will remove ALL visual effects, including themes. Alternatively, you can use msconfig, or go to Start -> Control Panel -> (under the classic control panel) Administrative Tools -> Services, disable the "Themes" service, and then restart your computer. However, if you prefer the Windows XP look over the Windows 95 look, you can click "Adjust for best performance" but then go to the very bottom of the list and check "Use visual styles on windows and buttons." This removes all of the visual effects but still keeps the Windows XP theme.

Well, that is about all I have for now for major performance tweaks; I hope you have found them helpful. Be sure to check back periodically, as I am sure to add to this page. [Edited by - Jarrod1937 on July 18, 2008 9:18:22 PM]
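Addendum: for those comfortable with a little scripting, here is a minimal sketch of the two registry tweaks above (disabling 8.3 name creation and Last Access updates) using Python's standard winreg module instead of a downloaded .reg file. This is my own illustration, not the .reg file mentioned above; it assumes you run it from an elevated (administrator) prompt on Windows, and a restart is still required afterwards.

    import winreg

    # Both NTFS tweaks discussed above live under the same key.
    FS_KEY = r"SYSTEM\CurrentControlSet\Control\FileSystem"

    def set_fs_dword(name, value):
        # Needs administrator rights; creates the value if it does not exist yet.
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, FS_KEY, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

    set_fs_dword("NtfsDisable8dot3NameCreation", 1)   # stop generating DOS-style 8.3 names
    set_fs_dword("NtfsDisableLastAccessUpdate", 1)    # stop updating the Last Access time stamp
    print("Done -- restart the machine for the changes to take effect.")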
  3. I wrote this article for my site; I decided to do so since I recently upgraded my RAID setup from a RAID 0 to a RAID 10 and figured I could show the concepts and ideas behind the process. Hopefully I have achieved my goal of making the concepts clear and concise. Enjoy!

Setting Up Raid
Before we go any further, I must give the standard disclaimer: I cannot be held liable for any event that results from the use of this article, nor for any damage or loss of data. If you're unsure of anything below, do not attempt it, or thoroughly back up your data beforehand.

What Is Raid?
RAID stands for Redundant Array of Independent/Inexpensive Disks. You take two or more "physical" disks and form one or more "logical" disks from them. The intent is either to increase performance by utilizing two or more drives at once, or to provide some level of redundancy for the protection of data. As we will discuss below, some RAID levels offer both performance and redundancy.

Terms
Stripe Width - The stripe width defines the number of parallel commands that can be executed at once against a striped RAID. Basically, if you have 2 drives, your stripe width is 2; that is, you can send 2 requests that will be carried out immediately by two different drives at once. The more drives you add, the larger your stripe width can be, and the more data you can read/write in parallel. In this way you can see why a striped array of four 30 GB drives will have better transfer performance than one made of two 60 GB drives.

Stripe Size - The stripe size defines the size of each chunk (strip) written to a single drive before moving on to the next drive; in effect, it sets the minimum size a file must be before it gets split across drives. If you write a 4 KiB (binary kilobyte) file to a striped array with a 32 KiB stripe size, that file will only be written to the first drive and will only take up 4 KiB. However, if you have 4 drives in a striped array with a 32 KiB stripe size, then writing a 128 KiB file to the array will result in the file being evenly split into 32 KiB chunks across all drives in the array. Choosing the stripe size is an important factor in the performance of a striped array, and the choice varies drastically depending on the use of the array. If you choose a smaller stripe size, you split files into more chunks/blocks/strips across more drives. This has the effect of increasing your throughput/transfer performance, but positioning performance (access timing) will decrease, as all of the drives will already be busy handling a data request (since the file must be accessed by all drives). If you increase the stripe size, you decrease the number of drives each file is split across, but this increases positioning performance, as most controllers will then allow a drive that is not in use to fulfill another request while the original request is being completed. Generally there is no rule of thumb for setting the stripe size, as it depends on many different factors, including the array's use and the drives being used.

Parity - Parity data is error-correction data calculated from the actual data being stored. An example of this is the single parity bit added to serial connection transmissions, where 7 bits of actual data are transmitted, the next bit is the parity bit, and then a stop bit (or bits) follows. The parity bit is calculated before the data is sent; after the data is transmitted and received, the parity bit is recalculated. If it matches the parity bit that was transmitted, the data is accepted; if not, the data is resent.
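To make the striping and parity ideas concrete, here is a small Python sketch of my own (not part of the original article) that deals a file's bytes out in stripe-size chunks across four pretend drives and computes a simple XOR parity block, which is the same basic idea RAID 3/4/5 apply on a much larger scale. The drive count and stripe size are just the example values used above.

    import os

    # Minimal striping + XOR parity sketch (illustrative only, not a real RAID driver).
    STRIPE_SIZE = 32 * 1024   # 32 KiB, as in the example above
    NUM_DRIVES  = 4

    def stripe(data, num_drives=NUM_DRIVES, stripe_size=STRIPE_SIZE):
        # Deal fixed-size chunks out to the drives round-robin.
        drives = [bytearray() for _ in range(num_drives)]
        for i in range(0, len(data), stripe_size):
            drives[(i // stripe_size) % num_drives] += data[i:i + stripe_size]
        return drives

    def xor_parity(blocks):
        # XOR equal-sized blocks together; any one lost block can be rebuilt
        # by XOR-ing the parity with the surviving blocks.
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for j, b in enumerate(block):
                parity[j] ^= b
        return parity

    data = os.urandom(128 * 1024)        # a 128 KiB "file"
    drives = stripe(data)                # 32 KiB lands on each of the 4 drives
    parity = xor_parity(drives)

    # Pretend drive 2 died: rebuild its data from the parity and the survivors.
    rebuilt = xor_parity([parity] + [d for i, d in enumerate(drives) if i != 2])
    assert rebuilt == drives[2]

That assertion passing is the whole trick behind parity-based RAID: lose any single block and the rest of the stripe plus the parity can recreate it.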
The same type of parity information is calculated in RAID setups, just on a larger scale (except for RAID 2, which does bit-level striping and bit-level error correction; RAID 2 is no longer used).

Raid Controller - A RAID controller is the controller that handles all RAID operations and protocols. There are three types:

Software - All RAID operations, including the logic and calculations, are done purely in software. An operating system usually has support for the less intensive RAID levels (0, 1). This generally provides "OK" performance at the cost of higher CPU utilization, since the entire process is done in software. As you can probably guess, true software RAID does not use any dedicated hardware.

Hardware - All RAID operations, including the logic and protocols, are handled by a dedicated processor and BIOS on the hardware controller. The controller may also have an onboard buffer/cache, and (in more expensive controllers) possibly a battery backup for it, to keep data from corrupting when using write-back cache mode. A hardware RAID controller has to interface with the computer in some way; you'll see RAID controllers for PCI, PCI-X, and most recently PCI-E, though this list is not exhaustive and there may be controllers for other, more obscure interfaces. It is generally a good idea to research the actual throughput of these interfaces before choosing a RAID controller.

Pseudo-Hardware Controller - Here you have a hardware controller that has its own buffer and BIOS but lacks a dedicated processor. All of the protocol and logic of the RAID controller is contained on the card, but the controller uses your CPU in the absence of its own. This generally leads to lower CPU utilization than pure software RAID, and performance is better than plain software RAID, but CPU use is still higher than with fully dedicated hardware RAID. Keep in mind that the cheaper you go, the cheaper the controller itself will be. While this may be obvious, cheaper controllers offer fewer advanced performance-improving features and may reach their bottleneck sooner: depending on how the controller is designed, either the controller itself or its interface will eventually become the bottleneck, and cheaper controllers tend to hit that point sooner.

Drive Interface Choice
For a drive to connect to the computer it needs to use a certain interface, the most common choices being SCSI, SAS (basically SCSI over a serial connection), SATA, and IDE. SCSI/SAS drives tend to be a bit more expensive, but they generally offer the best performance due to the maturity of the interface protocols and their wide use in the server and mass storage industry. Next we have IDE, which was created to be cheap: IDE came into the computer industry when a cheap, economical interface was needed to lower the cost of drives and their controllers, and IDE filled that void. Because it was cheaper than other options its popularity grew, even though it was never meant to be a performance interface. The original IDE design used the processor to drive the interface (a mode called PIO); there are now DMA modes that allow for better performance. But there is still the lingering design problem of single device access per channel: if you have two devices on a single IDE ribbon cable, only one of those devices can communicate with the computer at a time.
This makes IDE poorly suited for RAID, as the benefits of RAID come from simultaneous access to multiple devices at once. So, I would recommend using only SCSI, its cousin SAS, or SATA for RAID. SATA is the successor to IDE and is a serial interface. SATA does offer simultaneous access to multiple devices and offers better throughput than IDE. SATA is also generally cheaper than SCSI while offering only slightly lower performance. Lesson: use only SCSI, SAS, or SATA for RAID setups.

RAID Levels
There are many different RAID levels, including some proprietary ones, that we could discuss. However, for this article I will concentrate on the most common implementations.

Raid 0
Here you'll find a RAID level that actually is not redundant at all. RAID 0 is the simplest of all the levels: two or more physical drives are combined into one or more logical drives, with the recorded data being "striped" between the drives. The intent here is absolute performance with no regard for data safety or redundancy. With RAID 0 you take two or more drives, combine them, and data is striped between them based on the chosen stripe size. This has the effect of spreading out the workload of both writing and reading data to the logical drive, which is composed of multiple physical drives. However, as stated above, how the striping affects performance (i.e. access times versus throughput) depends on the stripe size used: striping in RAID 0 helps throughput with smaller stripe sizes, and helps access times (with a smart controller) if a larger stripe size is used. RAID 0 is generally not used if you have any data you wish to keep safe from loss. Since all data is simply split and striped across multiple drives, if a single drive dies in the array, all data is lost: data cannot be retrieved if it is in pieces across two or more drives, so losing one drive effectively deletes a chunk out of each and every file that was striped, corrupting the files. However, out of all the RAID levels, RAID 0 has the best overall combination of read and write performance.

Raid 1
RAID 1 is another simple RAID level, but it is the first that offers redundancy. Here you have two disks that "mirror" each other. That is to say, bit for bit, all data written to drive A is also written to drive B. This creates a write speed penalty, since all data written to the array must be written to both drives. However, when you read data back, you get an improvement: under most controllers, when you request data it will be read from drive A, and while drive A is busy, drive B can service another request that comes in the meantime. Thus you get the throughput of a single drive but an increase in access time performance, as the two drives can service two different read requests at once. In a mirror, one drive of the pair can die and the RAID can still function: drive A could die and drive B would take over, or vice versa. One backs up the other in case of failure, so one drive death can be tolerated before any risk of data loss.

Raid 3
RAID 3 offers byte-level striping combined with a dedicated parity drive. Here the data is striped, similar to RAID 0, but with parity data calculated and stored on a single dedicated drive. This setup requires a minimum of 3 drives.
With RAID 3, read performance is quite similar to RAID 0, with a slight hit from the byte-level striping. However, write performance suffers considerably due to the overhead of calculating parity data and the fact that the single dedicated parity drive becomes a bottleneck, since it must be accessed every time new data is written to the array. RAID 3 can have a single drive die before any data loss is incurred. RAID 3, 4, and 5 have an n-1 drive capacity: the usable space is the combined space of all the drives minus one drive's worth, which goes to parity.

Raid 4
RAID 4 improves upon RAID 3 by doing away with byte-level striping and instead doing block-level striping and parity calculation. However, writes still suffer from the parity calculation, and RAID 4 still uses a dedicated parity drive, which remains a bottleneck. RAID 4 can have a single drive die before any data loss.

Raid 5
RAID 5 improves on this by doing the same as RAID 4, except that it disperses the parity data across all the drives instead of using a single dedicated parity drive. This removes the dedicated drive bottleneck. However, writes still suffer from the extra overhead of parity having to be calculated and written along with the data. RAID 5 can have a single drive die without data loss.

Raid 6
RAID 6 is the same as RAID 5 but with an extra drive added to the minimum requirement; dual parity sets are calculated and written. This allows a maximum of two drives to die without data loss. As you might guess, write performance is even worse than RAID 5, as there are now two parity sets to be written per write request.

Nested Raid
You can also nest different RAID levels, meaning you can mix or combine two RAID levels together. The most common are:

Raid 10 (1+0)
RAID 10 requires a minimum of 4 drives. It separates the four drives into 2 pairs of 2; the drives within each pair mirror each other, forming two logical drives. These logical drives are then combined in a RAID 0, forming one logical drive. Here you can have two drives die before any data loss: one drive can die in each mirrored pair. Probabilistically this is more redundant than RAID 01 (0+1). RAID 10 is excellent in both read and write performance, and it is considered one of the better RAID levels as it offers the same level of redundancy as the parity levels but without any parity calculations. The only downside is a 50% usable capacity: out of all the storage space combined from all the drives, only half is usable. However, with the prices of high-capacity drives getting cheaper, this is almost a non-issue.

Raid 01 (0+1)
RAID 01 (0+1) also requires a minimum of 4 drives. It separates the four drives into 2 pairs of 2; each pair is striped (RAID 0), forming two logical drives. These logical drives are then combined in a RAID 1 (mirror), forming one logical drive. Here two drives can die before data loss only if both failures land in the same striped pair, which is why RAID 10 is the statistically safer choice. As with RAID 10, it halves the usable drive space.

Setting Up a Software Raid - Using Windows
Software RAID might be the solution you're looking for, and you may even be able to do it right now with just your operating system. Windows supports software RAID 0 and RAID 1. To set up a software RAID in Windows XP, follow the steps below:
1.) Right-click on "My Computer" and go to "Manage."
2.) Under "Storage," click on "Disk Management."
3.) Right-click on the small square next to each of your newly installed, unpartitioned drives (unless you're doing RAID 1, in which case only one needs to be unpartitioned) and select "Convert to Dynamic Disk" for each. Please keep in mind that converting your disks to dynamic disks may cause compatibility issues with some software, like Acronis. And once your disk is dynamic, it is risky to convert it back to basic, so if you're doing so, back up your data first.
4.) Right-click the unpartitioned space of any one of the drives and click "New Volume." From here you can choose which RAID type you want; unless you have Windows Server, RAID 1 will be grayed out. This is a limitation Microsoft put into the OS, for reasons unknown. However, if you're not afraid of hex editors, you can attempt to edit the needed files into being the server versions. This method has been tested to work by me, but because it is a legal gray area I cannot provide the files myself.
5.) After you have selected the RAID level you want, click "Next," add the drives you want in the RAID, click "Next," and follow the menu from there, which lets you choose the standard format options. And you're done! However, as stated above, you will generally get poorer performance with software RAID, along with extra CPU utilization. And with RAID 1 you will not be able to boot into Windows from the mirrored volume without a boot disk, because Windows' software RAID does not write the needed boot information, like the MBR, to the second drive (Windows' software mirror only works at the partition level). The upside of software RAID is hardware independence, which makes RAID migration easier.

Setting Up Hardware Raid
Start by researching what kind of RAID level you're looking for; this will depend on your budget and intended use. After you have decided, see what interface you can use. Does your motherboard have a spare PCI/PCI-E/PCI-X slot? Does the interface you have available meet the bandwidth/throughput you predict your RAID setup will need? After you figure this out, look online at your choices. I'd recommend shopping at Newegg.com (if in the USA) or Tigerdirect.com, who tend to have good prices. Be careful which controller you get, and avoid the cheaper ones if you're looking for performance; generally $50 and above will give results. After you select your controller, start looking around for drives. I recommend the Raptor series for their excellent access times. However, the Raptors, and especially the Velociraptor, may be too expensive for some. If you can't afford those, I would look into the Samsung Spinpoint series, whose high platter data density gives excellent transfer performance for a 7,200 RPM drive -- though I would avoid the 1 terabyte models for now, as they appear to have quality control issues that make for a lot of DOAs. The only downside is that their access times are those of any 7,200 RPM drive. Since the array's access times largely follow those of the individual drives, it is important to watch the access times of the drives you buy. As said earlier, depending on your RAID setup, some access time issues may be slightly negated by the controller's operation (another reason to go for a quality controller). For my setup I used an Adaptec 1430SA PCI-E 4x RAID controller, which supports RAID 0, 1, and 10 (bought on Newegg for $104). For my drives, I used four 150 GB, 10,000 RPM Raptor X's.
The power supply is an Enermax Liberty 500 watt modular unit, the processor is a Q6600 quad core, there are 4 GB of DDR2-800 RAM at 5-5-5-12, and the video card is an 8800 GTS. For this article, everything is at stock speeds. I installed the controller card into a spare PCI-E slot I had available, then plugged the 4 SATA cables into the controller and connected them to my 4 Raptor X's. As you may notice, I plugged 2 drives into one power cable chain and the other two into another chain. Spreading the power across two chains helps handle the initial spike of power required when all the drives spin up; if you don't spread the power out, you may end up with a few drives failing to spin up, or dropping out during testing/use.

To set up the RAID with everything plugged in, turn on your computer; after it passes the initial POST it will ask you to press Ctrl+[a letter chosen by the manufacturer] -- in my case Ctrl+A (for Adaptec). You'll have to refer to your controller manual for the specifics of the menus, but in general most RAID controller BIOSes will let you select which drives are to be part of the array, then ask what array type you want, then the stripe size. After you're done setting up the RAID, exit the BIOS and restart the computer. After you restart, you will need to install the drivers for the RAID before you can use it. In my case I am installing the RAID under an existing Windows install that is on a separate drive, so when I started up my computer, Windows saw the RAID controller, and I simply pointed the install at the drivers on the disc (or downloaded from the manufacturer's site). You should do the same. However, if you want to install Windows on this RAID, you will need to start with a fresh install. To do this, create a floppy disk that contains the drivers, back up any data you want saved, pop in the Windows install disk, and press "F6" at the beginning of the Windows install; from there the Windows installer will load the drivers and you are set.

We are almost there, but before we can use the RAID we need to initialize the disk. If you installed Windows on the RAID array in the step above, you can skip this step; you only need to initialize the disk if you added the array to an existing Windows install. To do this, right-click on the "My Computer" icon and select "Manage," then click on "Disk Management" under "Storage." You may get a prompt from the manager; ignore it and close it. You should then see your new RAID disk, unpartitioned and uninitialized. Right-click on the square to the left of the unpartitioned space and click "Initialize Disk." Now the disk should be active. From here you just need to right-click on the unpartitioned space, click "New Volume," and follow the steps. Now your hardware RAID should be up and running!

Testing
I decided to set up my 4 Raptors in a 4-drive RAID 0 for testing, after which I set them up in RAID 10 for more permanent use (i.e. redundancy). I tested the drives in various configurations using different stripe sizes to illustrate the principles discussed above. Here you can see our base test: the performance of your average 7,200 RPM SATA drive. Its access time by itself is over 13 ms, and its average throughput is only 52.2 MB/s.
This is about the performance most people get with only 1 drive doing one task; performance suffers even further if data is being requested from multiple sources. Here are the drives in RAID 0. This RAID 0 setup has a 32 KiB stripe size, and I get a very nice and even (too even) transfer rate of 202.5 MB/s average. The access time is reported as 8.1 ms, roughly double the 4.2 ms a single Raptor X reports. Without any further testing we can already see that our RAID array is considerably faster than the generic 7,200 RPM baseline drive, in both sustained throughput and access time. But can we push performance further?

Here is the RAID 0 with a 16 KiB stripe size. The access time is much the same as before, but look at the throughput: a maximum of 326.2 MB/s and an average sustained throughput of 279.6 MB/s! That is quite an improvement. As discussed above, lower stripe sizes force each file to be split across more of the drives, giving a large increase in throughput when reading and writing, since all the drives are being used collectively.

Next we have our RAID 10 results. Just for fun I decided to test the RAID 10 array starting out with a 64 KiB stripe size. The results are a bit hard to judge, being so sporadic: while I achieved a nice maximum value, the average is a little low, dragged down by random dips. This is a good lesson in why you should never go by maximum transfer rate alone; it is very important to look at the overall average transfer rate. So I changed the stripe size, and this is the final setup I have settled on: the RAID 10 with a 16 KiB stripe size, as with the RAID 0. Here we see a calmer, more predictable curve, giving a higher average and minimum.

Hopefully, after reading this you can see the benefits of RAID arrays. Just by utilizing this technology, even without spending a big chunk of change, you can achieve considerable performance improvements from your hard drives -- which matters, since the hard drive is one of the largest bottlenecks in a computer and affects much more than just load times. From here there is nothing more to do other than go have fun with your RAID! Soon I'll be writing an article on how best to optimize your hard drive, which will let you get even more out of your array. Until then, enjoy! [Edited by - Jarrod1937 on July 11, 2008 11:08:27 PM]
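One way to see why the stripe size mattered so much in these tests (my own illustration, not something taken from the benchmarks): for a given request size, a smaller stripe spreads the work over more spindles, while a larger stripe leaves more drives free to service other requests. A quick sketch, using the 4-drive array above:

    import math

    NUM_DRIVES = 4

    def drives_touched(request_kib, stripe_kib, num_drives=NUM_DRIVES):
        # How many drives a single read/write of this size has to involve.
        return min(num_drives, math.ceil(request_kib / stripe_kib))

    for stripe in (16, 32, 64):
        for request in (16, 64, 128, 512):
            print(f"{request:>4} KiB request, {stripe:>2} KiB stripe -> "
                  f"{drives_touched(request, stripe)} drive(s) busy")

With a 16 KiB stripe, even a modest 64 KiB request keeps all four drives streaming (good for throughput); with a 64 KiB stripe, the same request occupies only one drive, leaving the others free to seek for someone else.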
  4. IDE Performance

    Check your IDE cable. The older cables were 40-conductor cables; when the IDE standard was updated, the count was increased to 80 conductors (the extra wires are just additional grounds to limit crosstalk). Using a 40-conductor cable on a faster drive forces it to fall back to slower transfer modes, so it will behave very slowly. The other possibility is that the drive is dying. Does HD Tune turn up any bad blocks with its error scanner? Does the SMART report look OK?
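    If you would rather check SMART from a script than from HD Tune, something like the following works; it assumes the smartctl tool from the smartmontools package is installed (my suggestion, not something mentioned above), and the device path will differ from system to system.

    import subprocess

    # Assumes smartmontools is installed; adjust the device path for your system
    # (smartctl has its own device naming, e.g. /dev/sda or /dev/pd0 on Windows).
    DEVICE = "/dev/sda"   # placeholder example path

    result = subprocess.run(["smartctl", "-H", "-A", DEVICE],
                            capture_output=True, text=True)
    print(result.stdout)

    # Quick-and-dirty red flag scan: reallocated or pending sectors above zero
    # usually mean the drive is on its way out.
    for line in result.stdout.splitlines():
        if "Reallocated_Sector_Ct" in line or "Current_Pending_Sector" in line:
            print("Check this attribute:", line.strip())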
  5. Revenues for PC Games

    Quote:Original post by RivieraKid can you pirate games on steam? Yep, you can download the game files, install steam, then overwrite the steam files with a cracked version. This version bypasses the online check.
  6. Any Advice on Drawing Waterfalls?

    I would take a photo of some water spray, add motion blur to it, and then elongate your particles. That should give some good results. Waterfalls look the way they do because of motion blur, so if you're not going to add any to your rendering, you need to bake it in beforehand, e.g. in Photoshop. Though, personally, I would model the water and use a simple pixel shader with a scrolling overlay of two normal maps moving at slightly different speeds, then use a single mist particle source at the base.
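    As a rough illustration of the scrolling-normal-maps idea (a CPU toy sketch of my own, not production shader code), the core of it is just sampling two tiling normal maps with UV offsets that advance at slightly different speeds, then blending and renormalizing:

    import numpy as np

    # Placeholder noise stands in for two tiling water normal maps; in a real
    # game this logic would live in a pixel shader.
    H, W = 64, 64
    normal_a = np.random.rand(H, W, 3) * 2.0 - 1.0
    normal_b = np.random.rand(H, W, 3) * 2.0 - 1.0

    def sample(tex, u_offset, v_offset):
        # Wrap-around sampling = a tiling texture with scrolled UVs.
        return np.roll(tex, shift=(int(v_offset * H) % H, int(u_offset * W) % W), axis=(0, 1))

    def water_normals(time_s, speed_a=0.05, speed_b=0.08):
        a = sample(normal_a, time_s * speed_a, time_s * speed_a * 0.5)
        b = sample(normal_b, time_s * speed_b, 0.0)
        blended = a + b                                    # combine the two layers
        return blended / np.linalg.norm(blended, axis=2, keepdims=True)

    n = water_normals(time_s=3.0)
    print(n.shape)   # (64, 64, 3): per-pixel unit normals ready for lighting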
  7. The actual signal traveling through speaker wires is low voltage and relatively high current, meaning it would take quite a lot to inject interference into it. If you do have any interference, it is likely on the amplification end, so you may need to shield your amp. Although, even considering that possibility, it would seem odd for the extremely high frequency carrier wave of a cell phone to cause any audible interference, as it is way, way out of the range of speakers -- cell phones are usually 824 MHz and up. And even if that were possible, proper grounding of the amp should prevent it, so make sure your amp is properly grounded. With all of this taken into consideration, I must ask: are you sure the phone is the source of this interference?
  8. Fringed pixels in complex image (Photoshop)

    I assume he means if you're using a quick mask or the alpha channel. Using Brightness/Contrast on the 8-bit grayscale channel, you can push the in-between pixels to either fully transparent or 100% opacity. However, while I use this technique myself, you need to clean it up at the end or you'll get some jaggy results. As for CS4, I cannot help you there; I only have CS3.
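    The same idea can be scripted outside Photoshop; here is a minimal sketch using the Pillow library, with made-up file names, that crushes an alpha channel so every pixel is either fully transparent or fully opaque:

    from PIL import Image

    # Hard-threshold the alpha channel -- the scripted equivalent of crushing
    # the mask with Brightness/Contrast. File names are placeholders.
    img = Image.open("sprite.png").convert("RGBA")
    alpha = img.getchannel("A")

    # Anything 50%+ opaque becomes solid; everything else becomes transparent.
    hard_alpha = alpha.point(lambda v: 255 if v >= 128 else 0)
    img.putalpha(hard_alpha)
    img.save("sprite_hard_edges.png")

    As noted above, you will still want to clean up the resulting jaggies by hand.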
  9. Fringed pixels in complex image (Photoshop)

    The only way to remove them from existing work is the manual method: either by duplicating the layers as you have been doing, or by limiting your selection to only the 100% opacity pixels and deleting all the other pixels. However, to prevent this from happening in the future, you should turn off anti-aliasing for the eraser and brush tools. Google "photoshop turn off anti-aliasing."
  10. New machine, should I use Vista 64?

    I too use Vista 64 and could not be happier. However, even though I may get yelled at for doing so, I enabled the true admin account and completely did away with UAC. In its place I use TeaTimer to monitor all registry edits made by any program. To me it's just as safe, and it's a lot more compatible with older apps. So far I have only had problems with one app: DisplayMate, a test signal generator program for video calibration... but I'm sure that is because I'm using an old version. I've had no trouble playing games or developing them (from an artist's standpoint).
  11. Fast hard drives

    Quote:Original post by davepermen I don't see an actual reason for raptors at all. an ordinary disk is close to the same performance. no one needs a raptor or an ssd right now. and if one wants more performance, ssd's are a bigger step than the raptors. First you say that you should not focus purely on throughput and instead more on access/seek times. Then you say that a Raptor (or any 10,000 RPM drive) is not that big of a leap over regular drives... even though its seek time is generally half that of any 7,200 RPM drive. The larger drives, like the terabyte ones, offer nice space along with high throughput because of their high data density, but they are generally even worse in access times. Now, while it is true that SSDs are a bigger step up than higher-RPM drives, the whole point is the price per gig. For less than the price of two RAID 0'd SSDs like those mentioned earlier, you can get four Raptors, get almost twice the throughput, and end up with 600 GB of space. I would love to have a bunch of nice SSDs myself, but they just are not there yet. However, as I stated before, they do have their place in laptops right now: it's hard to compete with the physical size of most SSDs, which easily fit into a laptop.
  12. Fast hard drives

    Quote:Original post by davepermen well, the prices for ssd's are in movement. and the latency difference of x80 from a raptor to an ssd is a feelable and great difference. getting a 64gb ssd for 389$ is not that much of an investment for the highend and definitely enough for a lot of apps installed (not so for games yet :( but there, a cheaper mlc disk would be enough, as only fast read is important)). sure, they're more expensive than raptors. on the other hand, they're really much faster in usage. raw MB/s is not the most important measure. the latency helps much much more. currently my tiny tablet notebook is much faster than my quadcore i'm writing on right now. this only thanks to a 1.8" ssd, which doesn't even run at max performance. numbers here: http://www.davepermen.net/SSDs.aspx there are much faster harddisks. still my notebook is much quicker to boot, to start apps, to do anything where snappiness is important. getting a fast ssd into an existing notebook boosts the notebooks performance much more than getting a new notebook, and you use much less money. i look at them from that point of view. they are hella expensive as an item on their own. but instead of buying a new pc/laptop, they're cheap. Well, it really comes down to what you're doing. However, generally, if you graph out the performance increase from both throughput and access times, you'll see a point where one matters more than the other. Because of this, the faster access times may only be noticeable for a certain range of file sizes and queue depths. So it is not correct to say access times matter more than throughput, or vice versa; both are equally important. "currently my tiny tablet notebook is much faster than my quadcore i'm writing on right now. this only thanks to a 1.8" ssd, which doesn't even run at max performance." Well, if your only qualitative measurement of speed is loading... and loading small files at that. Access time is the time it takes to find the file and start delivering it; after that, it is throughput's job to get the data across as fast as possible. Faster access times, like those of SSDs, while nice, are only truly noticeable if your load is A.) lots of tiny files, or B.) a server environment with a large queue depth. Because of this, I recommend most people not go the SSD route at the moment and instead use 10,000 RPM drives in some sort of RAID config. SSDs are not worth their price per gig when you consider that most people will be using them in single-user environments with an average file size load, where the importance of throughput starts to win out over the importance of access times. Of course, the performance cutoff for ever-lower access times, even in a single-user environment, is still lower than that of even a 15,000 RPM drive; the point is that it still is not worth the SSDs' price per gig. Although I suppose this is a matter of opinion: if you have the money, then why not go with SSDs, I guess. Now, a server or notebook environment is a bit different, and SSDs may be right for those uses; they are especially nice for high-queue-load database servers where high IOPS matters. [Edited by - Jarrod1937 on November 6, 2008 1:41:55 PM]
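    A back-of-the-envelope way to see the crossover I am describing (my own sketch, with illustrative numbers rather than real benchmarks): total read time is roughly access time plus size divided by throughput, so tiny files are dominated by access time and large files by throughput.

    # Illustrative figures only -- not benchmarks of any particular drive.
    drives = {
        "7,200 RPM disk":  {"access_ms": 13.0, "mbps": 60.0},
        "10,000 RPM RAID": {"access_ms": 8.0,  "mbps": 280.0},
        "SSD":             {"access_ms": 0.1,  "mbps": 200.0},
    }

    def read_time_ms(size_mb, access_ms, mbps):
        # Total time = time to find the data + time to stream it off the drive.
        return access_ms + (size_mb / mbps) * 1000.0

    for size_mb in (0.004, 0.1, 1, 10, 100):
        row = ", ".join(f"{name}: {read_time_ms(size_mb, **d):8.2f} ms"
                        for name, d in drives.items())
        print(f"{size_mb:>7} MB -> {row}")

    Run it and you will see the SSD column win by a mile on the 4 KB row, while the striped 10,000 RPM array pulls ahead once the files get into the tens of megabytes, which is exactly the crossover argument above.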
  13. Fast hard drives

    Quote:Original post by BeanDog Quote:Original post by Jarrod1937 Quote:Original post by davepermen dude, just get some ssd's and raid0 them up to any performance you want :) see here i'm currently waiting for two mtron 3500 64gb for a raid0.. 200mb read/write, 0.1ms latency. <800$ investment. I really feel SSD's are not there yet. Their price per gig is still terrible. If you want good performance with the risk of RAID 0, you can get 4 150 gig 10,000 rpm Raptor X's and raid 0 them. I was able to achieve around 340 MB/s Max throughput with excellent access times. And their price is less than the SSD's in your example and you get a lot more storage space. Remember that the risk of RAID 0 is much less for modern SSDs than mechanical drives. SSDs have no moving parts to speak of, and with wear leveling maturing, SSDs fail very slowly and (more importantly) predictably. That's not entirely true. Because of the arrangement of their "clusters," if one bit/cell goes bad, the entire block is cut off from access. If you're running a striped array with no parity/mirroring, that can potentially corrupt a few bits from a lot of files. Other failure modes and vulnerabilities exist for SSDs too, such as greater susceptibility to ESD and similar electronic failures. The majority of hard drive deaths are actually not from damage to the moving parts but from similar electronic damage and corruption (a damaged onboard controller, corrupted firmware, a bad head, etc.). They're both about equal in the risk factor area, IMO.
  14. Fast hard drives

    Quote:Original post by davepermen dude, just get some ssd's and raid0 them up to any performance you want :) see here i'm currently waiting for two mtron 3500 64gb for a raid0.. 200mb read/write, 0.1ms latency. <800$ investment. I really feel SSDs are not there yet; their price per gig is still terrible. If you want good performance and can accept the risk of RAID 0, you can get four 150 GB, 10,000 RPM Raptor X's and RAID 0 them. I was able to achieve around 340 MB/s max throughput with excellent access times. Their total price is less than the SSDs in your example, and you get a lot more storage space.
  15. Fast hard drives

    Quote:Original post by oliii Yeah, SAS. He is getting 8 gig memory and some funky top of the line multicore processor. They are fast, but I don't know his budget. The good thing is that can be added as an upgrade, but it's bloody expensive. This is for his work, so I would expect his budget would be almost 'unlimited'. SAS are fast, but I'm wondering how far is the next generation, it's not a huge leap from the SATA2. The deal with SAS is that even though it, too, is simply a serial data interface like SATA, SAS drives can be linked together, just like the old SCSI. On top of that, the SAS protocol is built on the old SCSI protocol and so is already more mature and faster than the SATA protocol. Depending on your use, SAS may be the better choice, especially if your workload has a large amount of random read/write requests, since SAS's TCQ has been shown to outperform SATA's NCQ at high queue depths.
  16. Rendering - with, 3DS Max 9

    Quote:Original post by Instigator Still unresolved. I increased my page file to 8Gb's and yet the program becomes unresponsive as soon as I start the radiosity task. It seems like there's a bug in 3DS Max.. If anyone else gets this tutorial to work with 3ds max please let me know! Thanks. Well, first things first: how much actual physical RAM do you have? Page file size can help considerably, but not if your system is dying for more RAM. Secondly, watch your settings with radiosity; it can be an extremely RAM-hungry render calculation.
  17. Are we becoming too advanced?

    Quote:Original post by Chris Reynolds "In all seriousness, we are becoming far too advanced for our own good. We're curing diseases, transplanting vital organs, creating vaccines.. and if/when we socialize healthcare, these will be available to just about anyone. We seem to be fighting natural selection. At what point do we decide to let nature control our population? I know these seem like radical ideas, but we can all agree that we have an often ignored population problem in the world. It may not seem evident in the United States yet, but ~50 years down the line we're going to have twice as many people on this earth and much more than an energy crisis at hand. And with medical and social advances, this rate becomes exponential." At some point we will outgrow our own natural resources, right? Will our human compassion to save lives ultimately become a problem? If you've ever studied sociology, you can see that the stats show, as we progress technologically, the birthrate decreases. There are many factors as to why this is the case, but the point is, the whole overpopulation of the earth was a scare of the 1970's based off of old data. "I know these seem like radical ideas" Don't be mistaken, you're far from the first to carry such thoughts.
  18. RAID - Information and Tutorial

    Quote:Original post by hplus0603 Quote:This create a write speed penalty for all data being written to the array, since it must be written to both drives Not necessarily. First, if you have spare controller bandwidth, because it's writing to two drives at the same time, it can write to both drives in parallel. This is a good reason to put the two drives on different channels. Second, write-behind RAID 1 controllers will buffer up the additional writes, and then flush them out once there's a lull in activity, thus if you're doing something other than full-on video capture, you may never see the slow-down, even if you put both drives on the same channel. Personally, I find that RAID 1 is the easiest to set up, and the best trade-off of performance and reliability for me. Hard drives do go bad after a few years, and in the last five years, I've replaced 4 failed drives in 2 different systems, without losing a single bit of data, all because of RAID 1. I do, however, still have a remote back-up that gets taken once weekly, in case the entire computer burns out, is stolen, etc. And, finally, a spelling nit: it's called "striping" as in "the stars and stripes." "Stripping" is something else, usually found in bars where they've sealed the windows and serve cheap, crappy beer. You're correct actually, though it depends on how intelligent the controller is. I'll make the corrections later.
  19. Fast hard drives

    Quote:Original post by daviangel Raptors normally get a 5.9 in Vista rating from what I have seen. At least I know my 150GB and 75GB ones do. They are noticably faster than most normal 7,500 except for the newer 1 and 1.5TB drives. Don't just consider max throughput; access times are quite important.
  20. Fast hard drives

    Vista seems to give you a 5.9 if the drive does 30 MB/s or more; I've yet to see any computer I've put together get anything less for the drive rating, even with slower drives. The original poster's speed works out to an average of about 48 MB/s, if anyone was wondering: 1.65 GiB x 1024 = 1689.6 MiB, and 1689.6 MiB / 35 seconds is roughly 48.3 MB/s. That actually seems extremely slow for a RAID 0. To the OP, what's your stripe size? Personally, my RAID 10 does 164 MB/s peak with an average of 140 MB/s, and a seek of around 4 ms (total access time 8.2 ms). Edit: You may also want to check out my sticky on RAID.
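    For anyone who wants to redo that arithmetic, it is just data moved divided by time taken:

    # Quick check of the figure above: average throughput = data moved / time taken.
    size_gib = 1.65
    seconds = 35.0
    mib_moved = size_gib * 1024                 # 1689.6 MiB
    print(f"{mib_moved / seconds:.1f} MB/s")    # ~48.3 MB/s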
  21. Rendering - with, 3DS Max 9

    Check your page file size.
  22. Trying to define left

    Quote:Original post by Moonshoe Left or right? Using the sentence above, "left" is on the left side, and "right" is on the right side. The evil of the challenge is that your solution is technically a "mental image," since there is no true placement or direction of words if you actually say "left or right" out loud. Even if you insisted you could write it down (ignoring that the result is still a mental image/picture), not all cultures write left to right; some write right to left, and some top to bottom.
  23. Trying to define left

    Quote:Original post by Oberon_Command Quote:Original post by Jarrod1937 Can we use math? Left is the direction a vector is pointing if it starts from your face while you're facing forward and is rotated a positive 90 degrees. Which direction is positive? My coordinate system could be different from yours. Regarding the OP: left is the side of the road the British drive on, and right is the side North Americans drive on. :P Well, nothing about the question stated that we're explaining it to an alien from another world. The world has a standard for which direction is positive in the Cartesian coordinate system. Otherwise, as already stated, if we can't give any basis for context, then it is a useless thought experiment.
  24. Trying to define left

    Can we use math? Left is the direction a vector points if it starts out pointing forward from your face and is then rotated a positive 90 degrees about your up axis.
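    If we are allowed code as well as math, here is a tiny sketch of that definition; the coordinate convention (right-handed, z up, x forward) is my own choice, which is exactly the ambiguity the rest of the thread pokes at.

    import numpy as np

    # "Left" as a rotation: take the forward vector and rotate it +90 degrees
    # about the up axis (right-handed coordinates, z up, x forward).
    def rotate_about_z(v, degrees):
        t = np.radians(degrees)
        rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                       [np.sin(t),  np.cos(t), 0.0],
                       [0.0,        0.0,       1.0]])
        return rz @ v

    forward = np.array([1.0, 0.0, 0.0])
    left = rotate_about_z(forward, +90)
    print(np.round(left, 3))   # [0. 1. 0.] -- the +y axis plays the role of "left"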
  25. Quote:Original post by wodinoneeye Quote:Original post by Jarrod1937 You could very easily overdraw current from the power supply. Best case scenerio you'll end up with two unstable motherboards, as its difficult to balance two independent systems running off of the same power source. That and splicing will make the power supply attempt to deliver the same rock solid voltage across now half as much resistance, doing calculations, that equals double the current drawn when both are drawing their max instantaneously. Isnt a similar case when the disk drive/cd starts up (maybe the Caps on the Mobos these days -- but then the other Mobos caps help make up the difference.) I suppose I could get away with starting all of them simultaneously (assuming crash&burn isnt part of the SOP ...) I could still have sufficient overcapacity allowance to prevent most of the effect (the dinky power supplies ar still at a premium price these days and the cheap ~300W tend to be bulk, one larger unit could easily handle 4 Mobos for the price of 2 standard power supplies. No, it's not similar; you'll notice the ATX motherboard power connections are really running off their own lines. As for the capacitors on motherboards, they are usually there as noise filters, typically for higher-frequency noise; they will not help much with voltage drops. I suppose you can try it and see what happens. When running both, if they even start up, you can use an oscilloscope and see how much the voltage is dropping (use an oscilloscope rather than a voltmeter, as a voltmeter may not show the true nature of the fluctuations).
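    For what it's worth, the "half the resistance, double the current" point is just Ohm's law; a tiny sketch with made-up numbers:

    # Ohm's law sanity check for the "half the resistance, double the current" claim.
    # The numbers are illustrative, not measurements of any real motherboard.
    voltage = 12.0                 # volts on the rail
    resistance_one_board = 6.0     # ohms, one board's effective load (made up)

    i_one = voltage / resistance_one_board           # 2 A for a single board
    i_two = voltage / (resistance_one_board / 2.0)   # two identical loads in parallel -> 4 A
    print(i_one, i_two)   # the current doubles, so the supply must source twice as much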