Community Reputation

221 Neutral

About schupf

  • Rank
    Advanced Member
  1. Hello! I am writing a puzzle game for iOS and Android that has multiple levels (maybe around 20). Currently I describe all my levels in one .xml file. Since each level is relatively simple (shapes at certain positions), the file is not too big. I also plan to offer level packages (let's say 30 levels for 40 cents) via an in-app shop. Since this is my first app, I have a lot of questions about a file layout suitable for my game. Should I ship one big file that contains ALL possible levels, with the available ones (depending on the player's purchases) controlled only by code? I.e., my game package always contains a config file with all 110 levels, and the first 20 are playable; when the player purchases the first level package, my code unlocks the first 50 levels, and so on. Or should I only ship a level config file containing the first 20 levels, and whenever the player purchases a package, download a separate level file (containing the 30 purchased levels) to the device? (Is this even possible in the Apple and Android app stores?) Thanks for any advice!
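For illustration, a per-package level file along the lines the post describes might look something like this (all element and attribute names here are hypothetical, not from the actual game):

```xml
<!-- levels_pack1.xml: one self-contained file per purchasable package -->
<levelPack id="pack1">
  <level id="21">
    <shape type="circle" x="120" y="80" radius="16"/>
    <shape type="rect"   x="40"  y="200" w="64" h="32"/>
  </level>
  <level id="22">
    <shape type="triangle" x="300" y="150" size="24"/>
  </level>
  <!-- ... remaining levels of the package ... -->
</levelPack>
```

Keeping one file per package works for either delivery model: the files can all ship in the app bundle with code gating access, or each file can be downloaded after purchase.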
  2. schupf

    Rendering a texture

    Thanks RobTheBloke for your detailed help! You pushed me in the right direction :) The problem was: when I uploaded the images with glTexImage2D, I had no current OpenGL context. I find it very strange that glGetError() did not report any error, though... One last question: how is it possible NOT to have a valid context? I mean, after I create a context and "activate" it by calling wglMakeCurrent(), isn't the context current from then on? In other words, I thought that after calling wglMakeCurrent() I could use OpenGL calls in any function I want. Did I miss something?
  3. Hello! I am very new to OpenGL and have a basic problem: I want to render 2 quads, each with its own texture. First I have an init function which loads the image, generates the texture ID and uploads the texture:

    void initTex(TexObj& texObj, const char* imageFilename) // Images are .xpm files
    {
        QImage image = QImage(imageFilename);
        texObj.w = image.width();
        texObj.h = image.height();
        QImage glImage = QGLWidget::convertToGLFormat(image);
        char* data = static_cast<char*>( calloc(texObj.w * texObj.h, 4) );
        memcpy(data, glImage.bits(), texObj.w * texObj.h * 4);
        texObj.data = data;
        glGenTextures(1, &texObj.texID);
        glBindTexture(GL_TEXTURE_2D, texObj.texID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texObj.w, texObj.h, 0, GL_RGBA, GL_UNSIGNED_BYTE, texObj.data);
    }

    initTex() is called twice (for 2 different textures). To render the 2 quads I use this function:

    static void draw(TexObj& tex, float scale)
    {
        glEnable(GL_CULL_FACE);
        glEnable(GL_TEXTURE_2D);
        glFrontFace(GL_CCW);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex.texID);
        // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex.w, tex.h, 0, GL_RGBA, GL_UNSIGNED_BYTE, tex.data);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glTranslated(0.0, 0.0, 0.0);
        glScalef(scale, scale, 1.0f);
        glPolygonMode(GL_FRONT, GL_FILL);
        glBegin(GL_POLYGON);
        glTexCoord2f(0.0, 0.0);  glVertex2d(0.0, 1.0);
        glTexCoord2f(0.0, -1.0); glVertex2d(0.0, 0.0);
        glTexCoord2f(1.0, -1.0); glVertex2d(1.0, 0.0);
        glTexCoord2f(1.0, 0.0);  glVertex2d(1.0, 1.0);
        glEnd();
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_CULL_FACE);
    }

    The problem is: when I run this code, the quads are completely white! The strange thing: when I uncomment the commented-out glTexImage2D line, the quads are textured! It seems as if the texture I upload in initTex() disappears and I have to re-upload it every frame. Does anyone have an idea what could cause this? Thanks!
  4. I agree, OSG's code is a mess. But it's the lack of documentation that really makes working with OSG such a pain in the ass...
  5. Thanks to all of you guys! I have chosen the custom key struct with operator<(). The extended-string approach would also have worked, but the struct is more flexible. @rip-off: I think your operator<() contains a bug. Assume these values for a and b: {52, 30, "Foo"} and {50, 40, "Foo"}. Your width has the highest priority, so operator<(a, b) should return false. But your operator< will return true, since a.width < b.width is false but a.height < b.height evaluates to true. My struct looks like this (I hope it's correct ;)):

    struct TextureCacheID
    {
        std::string textureFilename;
        uint width, height;

        bool operator<(const TextureCacheID& rhs) const
        {
            return (textureFilename < rhs.textureFilename) ||
                   ((textureFilename == rhs.textureFilename) && (width < rhs.width)) ||
                   ((textureFilename == rhs.textureFilename) && (width == rhs.width) && (height < rhs.height));
        }
    };
  6. Hi, in my engine I have a ResourceManager class that loads resources (textures, shaders, materials etc.) from the hard drive and caches them. I.e.:

    ShaderPtr p1 = manager->loadShader("Color.psh");
    ...
    ShaderPtr p2 = manager->loadShader("Color.psh");

    When shader Color.psh is loaded for the first time, it is read from the hard drive, a ShaderPtr is stored in a map, and the pointer is returned. When Color.psh is requested a second time, the shader is simply returned from the cache map instead of being loaded again. This works fine, but now I have a problem with textures. I have to pass the size of the texture to the load function, because internally I use D3DXCreateTextureFromFileEx() to load the texture, and if I do not explicitly specify the real size, DirectX will round up to the next power-of-2 size, which is something I do not want. So my loading looks like this:

    TexturePtr t1 = manager->loadTexture("Grass.png", 500, 200); // A
    ...
    TexturePtr t2 = manager->loadTexture("Grass.png", 300, 300); // B

    In line A the manager really loads Grass.png from disk and creates a texture of size 500x200. In line B the manager detects that Grass.png is already loaded and just returns a reference to the already created 500x200 texture. Thus t2 now holds a texture object of size 500x200, even though I wanted the texture loaded at 300x300! (In other words: another instance with another size.) Do you guys have an idea how I could solve this problem, i.e. keep a caching system while also supporting resources loaded with different parameters? Thanks!
  7. schupf

    UI components: Scaling

    @frob: Could you please elaborate on your first approach a bit more? Do you mean I decide how to scale a component based on the smaller dimension of the window?
  8. Hello, I have made a little UI system to render UI elements (buttons, combo boxes etc.) in my DirectX 9 game. Now I struggle a bit with the scaling of my components. To make my problem clearer I made a sketch: [sharedmedia=core:attachments:12412] Let's say I start my game windowed at a resolution of 2000x1000. The red rectangle is my UI component. Now if I resize the window down to 1000x500, I guess it would be desirable to scale down the UI component proportionally. So my algorithm could be something like this: after a window resize, scale all UI components proportionally, first horizontally and then vertically. But then I have a problem when the user resizes the window in only one dimension (bottom image). If I applied my algorithm, the UI component would be distorted, which is clearly something I do not want. So how could I implement a solid scaling mechanism that scales my UI elements when the resolution changes? Thanks!
  9. Hello, I have written a C++ application that compiles on Windows and Linux. Now I have also written some .py Python scripts that shall be shipped with my application. My questions: 1) If I want to call a function from one of my Python scripts from C++, is all I have to do to include <Python.h> and link against libPython.so (on Linux) and python27.lib (on Windows)? Anything else? 2) There is one thing about Python on Windows I don't understand. I have installed Python 2.7 on my Windows machine, and in the install directory there are "include", "libs" and "DLLs" directories. In the "libs" directory is a python27.lib, and I guess I have to link against that one. But why is there no python27.dll in the DLLs directory? I guess this .lib is an import lib, so there HAS to be a corresponding .dll, but I can't find it. Does anyone know where the Python DLL is? 3) If I want to ship my C++ application (that calls Python code) along with the .py files: is all I need to add to my distribution the Python .dll/.so and the Python interpreter? Anything else? Thanks for any help!
  10. When I use a texture atlas for multiple depth maps, I assume the part of the atlas specific to a certain light is selected by setting the viewport? @AgentC: I do not know the maximum number of registers available from VS to PS (I use Shader Model 3), but I do not think this will be a problem. My shaders are relatively simple: just position, normal and texture coordinates, plus one additional texture coordinate for each depth map. I also just want to implement standard shadow mapping. But do you think my idea with the fixed sampler registers (i.e. s10 for depth map 1, s11 for depth map 2 etc.) is good? Another question just came to my mind: let's say I have 3 lights and I pass that information to my HLSL shader via a constant: float u_numLights; Now I could write a for loop in the pixel shader to iterate over all depth maps and test whether the pixel is in shadow. But my problem is: the samplers aren't an array, so I can't just use an array expression like this: for(int i = 0; i < u_numLights; i++) { depth = tex2D(sampler[10 + i], texCoords); } I have to address each sampler by its full name, like depthMap1, depthMap2 etc., which makes it impossible to use a loop. Or am I missing something?
  11. schupf

    DX9: How many Sampler Units?

    Thanks. Where did you find this information?
  12. Hello, in my (DirectX 9 based) render engine I render objects, and each object has a material. A material is basically just a pixel and a vertex shader, a set of textures and render states. So let's say I render an object with a material that uses 3 textures; I then call setTexture 3 times: device->setTexture(0, tex0); device->setTexture(1, tex1); device->setTexture(2, tex2); This works great, but now I want to add shadow mapping to my engine. The number of shadow-casting lights should be flexible. Since I need one shadow texture for each shadow-casting light, and since ALL material shaders need access to the shadow maps, I guess the best (only) way to do this is to start the shadow map textures at a specific sampler register. For example: let's say I have 4 lights and Light0 and Light2 cast shadows. The shadow map (aka depth map) textures start at sampler register s10; then the depth map for Light0 is in sampler s10 and the one for Light2 in sampler s11. Is this a good idea or rather crappy? Thanks for any feedback!
  13. Hello, I am using DX9 and Shader Model 3.0. In my Pixel Shader files I declare textures like this: sampler myTexture : register(s0); sampler myTexture2 : register(s1); Now I wonder how many sampler registers s# are available? I searched a lot but could not find a number. Thanks for help!
  14. What the... it seems like this font costs 35 dollars. So I have to pay 35 dollars just to support this font on Windows XP? This sucks :( Isn't there a way to get the font for free?
  15. Hello, I want to use the font Calibri Bold in my game, which targets Windows XP, Vista and 7. On Vista and 7 the font is already included, but on XP it is not. Does anyone know what I have to do to distribute this font with my application? Thanks for any help!