
smitty1276

Members
  • Content count

    2387
  • Joined

  • Last visited

Community Reputation

560 Good

About smitty1276

  • Rank
    GDNet+
  1. (reposted/moved from OpenGL forum... more appropriate here, I think)

     Hi everyone, I'm ultimately trying to do something similar to the Johnny Lee head-tracking demo: I want to use the center of the actual screen as the world origin, and I want my "camera" eye point to correspond to the user's eye position in the real world (in the physical display's coordinate system). Initially, I am just trying to set up a more "traditional" on-axis frustum using the eye position and screen dimensions. I am using OpenGL, so this is a right-handed coordinate system looking down the negative Z axis. I set up my projection matrix with a frustum like this:

         left   = -SCREEN_WIDTH_IN_MM  / 2.f;
         right  =  SCREEN_WIDTH_IN_MM  / 2.f;
         bottom = -SCREEN_HEIGHT_IN_MM / 2.f;
         top    =  SCREEN_HEIGHT_IN_MM / 2.f;
         near   = eyePos.Z - 0.01;
         far    = eyePos.Z + 1000.f;

     My view matrix just translates by -eyePos. My understanding was that this would produce a view frustum such that anything sitting at z = 0.0 (basically, a hair behind the near clip plane) would be displayed in the same position as in an ortho view. However, if I move my eyePos backward, away from the screen, the geometry appears smaller, as though it is moving back from the screen. Where is my thinking wrong on this? As the eye moves away from the screen, shouldn't the narrowing frustum exactly offset the greater distance, so that the geometry takes up the same space on the screen?
  2. Hi everyone, I'm ultimately trying to do something similar to the Johnny Lee head-tracking demo: I want to use the center of the actual screen as the world origin, and I want my "camera" eye point to correspond to the user's eye position in the real world (in the physical display's coordinate system). Initially, I am just trying to set up a more "traditional" on-axis frustum using the eye position and screen dimensions. I set up my projection matrix with a frustum like this:

         left   = -SCREEN_WIDTH_IN_MM  / 2.f;
         right  =  SCREEN_WIDTH_IN_MM  / 2.f;
         bottom = -SCREEN_HEIGHT_IN_MM / 2.f;
         top    =  SCREEN_HEIGHT_IN_MM / 2.f;
         near   = eyePos.Z - 0.01;
         far    = eyePos.Z + 1000.f;

     My view matrix just translates by -eyePos. My understanding was that this would produce a view frustum such that anything sitting at z = 0.0 (basically, a hair behind the near clip plane) would be displayed in the same position as in an ortho view. However, if I move my eyePos backward, away from the screen, the geometry appears smaller, as though it is moving back from the screen. Where is my thinking wrong on this? As the eye moves away from the screen, shouldn't the narrowing frustum exactly offset the greater distance, so that the geometry takes up the same space on the screen? EDIT: How do I delete posts on the "new" GDnet?
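     For reference, the usual way to anchor a frustum to a fixed screen rectangle (as in head-tracked, off-axis projections) is to express the screen bounds *at the near plane*, scaling them by near/eyeZ, since that is what glFrustum expects. The sketch below assumes a screen rectangle centered at the origin in the z = 0 plane and an eye at positive z; all names are hypothetical:

```cpp
#include <cassert>
#include <cmath>

struct Frustum { double l, r, b, t, n, f; };

// Screen rectangle of half-size (halfW, halfH) centered at the origin in the
// z = 0 plane; eye at (eyeX, eyeY, eyeZ), eyeZ > 0. glFrustum takes the
// left/right/bottom/top bounds at the near plane, so the screen rectangle is
// scaled down by nearDist / eyeZ before being handed over.
Frustum screenFrustum(double eyeX, double eyeY, double eyeZ,
                      double halfW, double halfH, double nearDist)
{
    const double s = nearDist / eyeZ;   // project screen rect onto the near plane
    return { (-halfW - eyeX) * s, ( halfW - eyeX) * s,
             (-halfH - eyeY) * s, ( halfH - eyeY) * s,
             nearDist, eyeZ + 1000.0 };
}

// NDC x-coordinate of an eye-space point (xe, ze), ze < 0, under this frustum.
double ndcX(const Frustum& fr, double xe, double ze)
{
    const double xn = fr.n * xe / -ze;                  // x at the near plane
    return (2.0 * xn - (fr.r + fr.l)) / (fr.r - fr.l);  // map [l, r] to [-1, 1]
}
```

     With this construction, a point on the right edge of the screen (x = halfW at z = 0) lands at NDC x = 1 regardless of how far back the eye sits, which is exactly the invariance the post expects.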
  3. I have a native window context and some pixmap contexts that are "shared" with it. Is it possible to share shaders between contexts or is that sort of thing only limited to textures, buffers, etc.?
  4. Unfortunately, this implementation is on a particular device that I can't check at the moment... I'll definitely go try it later. I was hoping maybe there was some obvious thing that I was overlooking. Thanks for looking at it. EXPLANATION: I just realized that this wouldn't make much sense without explanation... I have an email with someone asking me why something isn't working on this device. I didn't just now run it myself and get the link error.
  5. Setting aside the quality of the code, etc., can anyone explain to me how this vertex shader code would fail to link with this error?

         "Error: uniform variables in vertex shader do not fit in 251 vectors."

     The excerpt below contains all of the declarations at the top of the source. I'm only counting 120-130 vectors in the worst possible case (depending on how the implementation stores them, I guess). What am I missing?

         // lights
         const int maxLights = 8;

         // attributes
         attribute vec4 inPosition;
         attribute vec4 inDiffuse;
         attribute vec2 inTextureCoordinate;
         attribute vec3 inNormal;

         // uniforms
         uniform mat4 ModelViewProjectionMatrix;
         uniform mat4 NormalMatrix;
         uniform mat4 TextureMatrix;

         // the light uniform
         uniform float lighting_enable;

         struct LightSourceParams
         {
             float light_enable;
             vec4  light_ambient;
             vec4  light_diffuse;
             vec4  light_specular;
             vec4  light_position;
             vec4  light_spotDirection;
             float light_spotExponent;
             float light_spotCutoff;
             float light_spotCosCutoff;        // user must define
             float light_constantAttenuation;
             float light_linearAttenuation;
             float light_quardraticAttenuation;
             vec4  light_halfVector;           // user must define ... halfVector = Eye - Light (not precision)
         };
         uniform LightSourceParams LightSource[maxLights];

         // material uniforms
         uniform vec4  material_emmision;
         uniform vec4  material_ambient;
         uniform vec4  material_diffuse;
         uniform vec4  material_specular;
         uniform float material_shininess;

         // varyings
         varying vec4 v_texcoord0;
         varying vec4 v_outcolor;
  6. OK, this makes sense. Bearhugger, that was a good link. Thanks for the help, everyone.
  7. Also, I just realized that the classes that are inherited (in my example, A, B, and Meh... and a third one I didn't use in my example) are all pure virtual interfaces. They have no data defined at all. Shouldn't it work in that case?
  8. OK, this is what I was afraid of. So, without RTTI/dynamic_cast, this isn't something that is possible?
  9. Can anyone shed any light on this? Here is an approximation of what I have:

         class A {};
         class B : public A {};
         class Meh {};
         class C : public B, Meh {};

         Foo::Foo(A* a)
         {
             m_pAObject = a;
             m_pCObject = reinterpret_cast<C*>(a);
         }

         void C::SomeMethod()
         {
             this->DoStuff(); // <-- "this" and m_pCObject above don't match
         }

     C::C is only invoked once, but when I examine these in the debugger, the pointer assigned to m_pCObject and the this pointer while tracing in C are two different things; specifically, m_pCObject is 64 greater than the this pointer inside C's methods. The object's methods are only invoked as an instance of C from other places, so it hasn't been an issue. I added some methods to C so that Foo could notify it of a state change, and it is behaving as though it is pointing to a different, uninitialized instance. Is there some behavior around the casting here that I'm not aware of? (I'm not able to use dynamic_cast, but I know that the A object will always be a C object.)
  10. Yes, sorry, I just typed the example too quickly... I actually seed it with a prime number. Turns out, this is basically the djb2 hash function. Thanks for the link!
  11. (This may be better placed under Math and Physics, but I put it here in case it doesn't get too deep into the math weeds... we'll see how it goes.) I have some objects that have two arrays of values associated with them. To calculate a unique value for these objects I am doing something like this:

          UINT32 hash1 = 0;
          for (int i = 0; i < list1Size; i++)
              hash1 = hash1 * PRIME_NUMBER + list1[i];

          UINT32 hash2 = 0;
          for (int i = 0; i < list2Size; i++)
              hash2 = hash2 * PRIME_NUMBER + list2[i];

          UINT64 objectHash = ((UINT64)hash1 << 32) + hash2;

      Obviously, not very sophisticated, but it seems to work well, and from what little I've found it seems to be a favored approach; people seem to think it produces well-mixed results. It seemed like combining two values in the upper/lower 32 bits of a UINT64 would make collisions even less likely. Questions: 1) Is there a name for this technique (meaning the "val * PRIME + a[i]" approach)? I'd like some solid answers for people who want to know how well distributed the outputs are. Something better than "it's this thing I found on the internet" would be great! :-) 2) Is there a better technique that is not significantly slower?
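      For reference, this loop has the same shape as djb2 (Bernstein's hash, named in the follow-up post), which seeds with 5381 and multiplies by 33. One subtlety when packing the two halves: `+` binds tighter than `<<` in C/C++, so the shift must be parenthesized. A sketch with hypothetical names:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Bernstein-style polynomial hash: h = h * prime + value, wrapping mod 2^32.
uint32_t polyHash(const std::vector<uint32_t>& values,
                  uint32_t seed = 5381, uint32_t prime = 33)
{
    uint32_t h = seed;
    for (uint32_t v : values)
        h = h * prime + v;      // unsigned overflow wraps, which is intended
    return h;
}

// Pack two 32-bit hashes into one 64-bit value. The parentheses matter:
// "hash1 << 32 + hash2" would parse as "hash1 << (32 + hash2)".
uint64_t combineHashes(uint32_t hi, uint32_t lo)
{
    return (static_cast<uint64_t>(hi) << 32) | lo;
}
```

      Usage would look like combineHashes(polyHash(list1), polyHash(list2)), giving the 64-bit object hash described above.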
  12. I may be wrong, but I think I recall hearing that Silverlight 4 supports UDP.
  13. OK, I was concerned with cache performance and compiler optimization suffering and that sort of thing. If it's just UGLY, I can live with it. ;-) Thanks
  14. This will be a performance nightmare, right? Note, this is NOT my code, but I don't want to refactor things if I am worrying about nothing and this can somehow be optimized by the compiler, or if the cost is actually small. It's also a very large system, and this is a pattern; in reality the "wrapper" contains multiple duplicate interfaces and they do lots of things. It isn't easy for me to just change something and "try it", hence the question here. In a nutshell I have:

      1. An interface A
      2. An interface B which has the same methods as A
      3. A class C that implements interface B
      4. A "wrapper" that implements A, but whose methods merely invoke the same methods on an instance of B, which is a member
      5. An app that has stored "A* a = instanceofWrapper" and calls methods on that A* object very often

      So you end up with: (A*)::method --> resolves to wrapper::method --> (B*)::method --> resolves to C::method

      For every call there are two layers of virtual indirection, which seems like it would be a performance killer... am I wrong? These methods are called many, many times per frame (at least hundreds) in a graphics app. How badly could this realistically affect performance?

          // ----- First interface -----
          class ISysFoo
          {
          public:
              virtual TYPE SomeThingFooDoes() = 0;
          };

          // ----- Second interface with the same methods -----
          class IFoo
          {
          public:
              virtual TYPE SomeThingFooDoes() = 0;
          };

          // ----- Implementation of the second interface -----
          class Foo : public IFoo
          {
          public:
              TYPE SomeThingFooDoes() { /* do stuff */ }
          };

          // ----- Wrapper implements the FIRST interface;
          // ----- it contains and calls into an instance of the SECOND interface
          class FooWrapper : public ISysFoo
          {
          protected:
              IFoo *pFoo;
          public:
              TYPE SomeThingFooDoes() { return pFoo->SomeThingFooDoes(); }
          };

          // ----- Do lots of different things with the implementation of
          // ----- the FIRST interface, via the wrapper
          void main()
          {
              ISysFoo *pSysFoo = someFooWrapper;
              for (int i = 0; i < someBigNumberOfThings; i++)
              {
                  pSysFoo->SomeThingFooDoes();
              }
          }