Search the Community

Showing results for tags 'Textures'.



Found 24 results

  1. Dynamic resource reloading

    Making editors is a pain. There are thousands of items on my list I'd rather do than this - yet I made myself a promise to drag at least one full-featured editor tool over the finish line. There are a few reasons for that. I believe I have a quite useful engine; it has been my pet project all these years, it went through many transformations and stages, and a solid tool is something of a goal I'd like to reach with it, to make it something better than "just a framework". I'm a very patient person, and I believe a hard-working one too. Throughout the years my goal has been to make a game on my own engine (note: I've made games with other engines, and I've used my engine for multiple non-game projects so far - it eventually branched into a full-featured commercial project in the past few years). I've made a few attempts but was mostly stopped by the lack of such a tool, one that would allow me to build scenes and levels in an easy way. And the most important reason... I consider tools one of the hardest parts of making any larger project, so it is something of a challenge for me.

    Anyway, so much for motivation. The tool is progressing well - it can already be used to assemble a scene, and various entities (like lights or materials) can have their properties (components) modified, with a full undo/redo system of course. And so the next big part was ahead of me: asset loading and dynamic reloading. So here are the results: the engine editor and texture editor before my work on the texture; then I worked on the texture; and then I used my highly professional programmer-art skills to modify it! All credits for the GameDev.net logo go to its author!

    Yes, it's working. The whole system needs a bit of cleanup, but in short, this is how it works:
    • All textures are managed by a Manager<Texture> class instance, which is defined in the Editor class.
    • A thread waits for changes on the hard drive with ReadDirectoryChangesW.
    • Upon a change in the directory (or its subdirectories), a DirectoryTree class instance is notified. It updates the view in the bottom left (which is just a directory-file structure for the watched directory and subdirectories), and for modified or new files it creates or reloads records in the Manager<Texture> instance (at the Editor level).
    • The trick is that reloading the records can only be done while they're not in use, so some clever synchronization is needed. A minimal sketch of the watcher thread follows below.

    I might write out some interesting information, or even a short article, on this. Implementing it was quite a pain, but it's finally done. Now a short cleanup - and on to the next item on my editor todo list! Thanks for reading & see you around!
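
    A sketch of such a watcher thread, assuming one dedicated thread per watched root and a hypothetical OnFileChanged hook - an illustration of the approach, not the engine's actual code:

        // Minimal directory-watcher thread built on ReadDirectoryChangesW.
        // OnFileChanged is a hypothetical hook that would mark the matching
        // Manager<Texture> record dirty; the actual reload happens later,
        // once the texture is no longer in use.
        #include <windows.h>
        #include <string>
        #include <vector>

        void OnFileChanged(const std::wstring& relativePath); // hypothetical

        void WatchDirectory(const wchar_t* root)
        {
            HANDLE dir = CreateFileW(
                root, FILE_LIST_DIRECTORY,
                FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                nullptr, OPEN_EXISTING,
                FILE_FLAG_BACKUP_SEMANTICS,   // required to open a directory
                nullptr);
            if (dir == INVALID_HANDLE_VALUE)
                return;

            std::vector<BYTE> buffer(64 * 1024);
            DWORD bytes = 0;

            // Blocking loop; a real editor would use overlapped I/O so the
            // thread can be cancelled cleanly on shutdown.
            while (ReadDirectoryChangesW(
                       dir, buffer.data(), (DWORD)buffer.size(),
                       TRUE,  // watch subdirectories too
                       FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                       &bytes, nullptr, nullptr))
            {
                BYTE* p = buffer.data();
                for (;;)
                {
                    auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(p);
                    std::wstring file(info->FileName,
                                      info->FileNameLength / sizeof(WCHAR));
                    OnFileChanged(file);
                    if (info->NextEntryOffset == 0)
                        break;
                    p += info->NextEntryOffset;
                }
            }
            CloseHandle(dir);
        }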
  2. I want to change the sampling behaviour to SampleLevel(coord, ddx(coord.y).xx, ddy(coord.y).xx). I was just wondering if that's possible without explicit shader code, e.g. via some flags?
  3. So I've recently started learning some GLSL, and now I'm toying with a POM shader. I'm trying to optimize it, and I notice that it starts having issues at high texture sizes, especially with self-shadowing. Now I know POM is expensive either way, but would pulling the heightmap out of the normal map's alpha channel and into its own 8-bit texture make all those dozens of texture fetches cheaper? Or is everything in the cache aligned to 32 bits anyway? I haven't implemented texture compression yet; I think that would help? But regardless, should there be a performance boost from decoupling the heightmap? I could also keep it at a lower resolution than the normal map if that would improve performance. Any help is much appreciated; please keep in mind I'm somewhat of a newbie. Thanks!
  4. As part of a video project I'm working on, I have to pass an ID3D11Texture2D decoded by CUDA from one D3D11Device to the other, which handles rendering. I managed to achieve the goal, but it looks like I'm leaking textures. The workflow looks as follows.

    Sending side (decoder):

        ID3D11Texture2D* pD3D11OutTexture;
        if (!createOutputTexture(pD3D11OutTexture))
            return false;

        IDXGIResource1* pRsrc = nullptr;
        pD3D11OutTexture->QueryInterface(__uuidof(IDXGIResource1),
                                         reinterpret_cast<void**>(&pRsrc));
        auto hr = pRsrc->CreateSharedHandle(
            nullptr,
            DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE,
            nullptr,
            &frameData->shared_handle);
        pRsrc->Release();

    Receiving side (renderer):

        ID3D11Texture2D* pTex = nullptr;
        hres = m_pD3D11Device->OpenSharedResource1(
            frameData->shared_handle,
            __uuidof(ID3D11Texture2D),
            reinterpret_cast<void**>(&pTex));
        DrawFrame(pTex);
        pTex->Release();
        CloseHandle(frameData->shared_handle);

    I'm somewhat puzzled by the inner workings of this workflow, namely:
    • What happens when I create a shared handle? Does this allow me to release the texture?
    • What happens when I call OpenSharedResource1? Does it create a separate texture - that is, do I have to release both textures after rendering?
    Appreciate your help!
  5. Please reply to me about this.
  6. I'm working in an old application (DX9-based) where I don't have access to the C code, but I can write any (model 3.0) HLSL shaders I want. I'm trying to mess with some cube mapping concepts. I've gotten to the point where I'm rendering a cube map of the scene to a cross cube that I can plug directly into ATI cubemapgen for filtering, which is already easier than trying to make one in Blender, so I'm pretty happy so far. But I would like to do my own filtering and lookups for two purposes: one, to effortlessly render directly to a sphere map (which is the out-of-the-box environment mapping for the renderer I'm using), and two, to try out dynamic cube mapping so I can play with something approaching real-time reflections. Also, eventually, I'd like to do realish-time angular Gaussian blur on the cube map so that I can get a good feel for how to map specular roughness values to Gaussian-blurred environment miplevels. It's hard to get a feel for that when it requires processing through several independent, slow applications.

    Unfortunately, the math to do lookups and filtering is challenging, and I can't find anybody else online doing the same thing. It seems to me that I'm going to need a world-vector-to-cube-cross-UV function for the lookup, then a cube-cross-UV-to-world-vector function for the filtering (so I can point-sample four or more adjacent texels, then interpolate on the basis of angular distance rather than UV distance).

    First, I'm wondering if there's any kind of matrix that I can use here to transform a vector to the cube-cross map, rather than doing a bunch of conditionals on the basis of which cube face I want to read. This seems like maybe it would be possible? But I'm not really sure; it's kind of a weird transformation. Right now, my cube cross is a 3:4 portrait, going top/front/bottom/back from top to bottom, because that's what cubemapgen wants to see. I suppose I could make another texture from it with a different orientation, if that would mean I could skip a bunch of conditionals on every lookup.

    Second, it seems like once I have the face, I could just use something like my rendering matrix for that face to transform a vector to UV space, but I'm not sure that I could use the inverse of that matrix to get a vector from an arbitrary cube texel for filtering, because it involves a projection matrix - I know those are kind of special, but I'm still wrapping my head around a lot of these concepts. I'm not even sure I could build the inverse very easily; I can grab an inverseProj from the engine, but I'm writing to projM._11_22 to set the FOV to 90, and I'm not sure how that would affect the inverse.

    Really interested in any kind of discussion of the techniques involved, as well as any free resources. I'd like to solve the problem, but it's much more important to me to use the problem as a way to learn more.
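
    For reference, face selection is inherently a largest-absolute-component test, so the lookup is usually written as a small branch rather than a single matrix. A minimal C++ sketch of a world-vector-to-cube-cross-UV function for the 3:4 layout described above - untested, and the per-face axis signs are assumptions that may need flipping to match cubemapgen's conventions:

        // Maps a world-space direction to a UV in a 3-wide x 4-tall
        // vertical-cross cube map (top/front/bottom/back down the middle
        // column, left/right flanking the front face).
        #include <cmath>

        struct Float2 { float u, v; };

        Float2 DirToCrossUV(float x, float y, float z)
        {
            float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
            // Face-local coordinates in [-1,1], plus the face's cell in
            // the 3x4 grid of the cross.
            float fu, fv; int cellX, cellY;
            if (ay >= ax && ay >= az) {          // +Y (top) or -Y (bottom)
                fu = x / ay; fv = (y > 0 ? z : -z) / ay;
                cellX = 1; cellY = (y > 0) ? 0 : 2;
            } else if (az >= ax) {               // +Z (front) or -Z (back)
                fu = (z > 0 ? x : -x) / az; fv = -y / az;
                cellX = 1; cellY = (z > 0) ? 1 : 3;
            } else {                             // +X (right) or -X (left)
                fu = (x > 0 ? -z : z) / ax; fv = -y / ax;
                cellX = (x > 0) ? 2 : 0; cellY = 1;
            }
            // Map the face-local [-1,1] range into this cell's UV rectangle.
            Float2 uv;
            uv.u = (cellX + 0.5f * (fu + 1.0f)) / 3.0f;
            uv.v = (cellY + 0.5f * (fv + 1.0f)) / 4.0f;
            return uv;
        }

    The inverse (cube-cross UV to world vector) follows the same structure in reverse: pick the cell from floor(u*3), floor(v*4), reconstruct the face-local coordinates, and assemble the direction from the face's basis vectors - no projection-matrix inverse needed.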
  7. Easily get 3D models

    Hello! I want to share a new tool: http://modelshub.org. This tool allows you to convert 3D models to OBJ format from p3d.in and other sites. If it succeeds, you get a zip archive containing the geometry, a materials library, and the textures.
  8. Hi, I'm interested in practicing by creating a texture pre-loading system, so that the game I'm developing in VB6 using DirectX 8 gets better performance when loading the graphics for a map. Can you point me to topics on the subject? What do I need to know? Thank you!
  9. Hey guys, I hope for help on the following problem - hopefully an easy one for you, but I am an absolute beginner... I want to simulate a lidar in a closed-source D3D9 game (ArmA 2-based, I think), meaning I want to get the pixel positions in view space. For that I intercept the D3D9 calls (via a proxy DLL) and retrieve the handle to the depth buffer texture (R32F), which I have identified. I've attached an example of the contents of this depth buffer (just StretchRect'ed to the back buffer) and the corresponding rendered image.

    Now what I am trying to do is transform the depth buffer values to view space, but I keep failing at this. I am trying to do it as described in this Stack Overflow post. (The only change I made after I saw that it does not work is to delete the first line 'z = depth*2.0 - 1.0' and instead use the depth value directly, because as far as I know the projection matrix (from MSDN: https://msdn.microsoft.com/de-de/library/windows/desktop/bb147302(v=vs.85).aspx, Q = Zf/(Zf-Zn)) is already adjusted to output a depth in [0,1] - and thus should also be directly usable in inverted form for the transformation from clip space to view space.) I should mention that I can query the game API for the view frustum values (left + right angle, top + bottom angle, near and far plane), but as you can see, a large part of the depth buffer is black.

    So here are my questions:
    1) What I don't quite understand is that apparently the whole depth buffer value interval is inverted, meaning far objects are dark (a color of (1,1,1,1) is white). Are there any other projection matrices commonly used together with D3D9 that have this behavior and which I could try?
    2) Is the approach shown in the mentioned Stack Overflow post a valid one? In particular, I'd really like to know whether the division by w after applying the inverse projection matrix is correct - I thought this division is needed to get from clip space to normalized device coordinates. But why is it necessary here?
    3) Is there any other approach I could use to get the pixel positions in view space?
    Thanks in advance!
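
    For reference, with the MSDN-style projection quoted above, the stored depth can be inverted analytically instead of going through a full inverse projection matrix. A sketch under stated assumptions (conventional, non-reversed depth in [0,1]; the input names are hypothetical):

        // Stored depth: d = Q - Q*Zn/z_view, with Q = Zf/(Zf - Zn)
        // (see the MSDN link above). Solving for z_view:
        //   z_view = Q*Zn / (Q - d) = Zn*Zf / (Zf - d*(Zf - Zn)).
        // Check: d = 0 -> z_view = Zn; d = 1 -> z_view = Zf.
        float ViewZFromDepth(float d, float zn, float zf)
        {
            return zn * zf / (zf - d * (zf - zn));
        }

        // Full view-space position: scale by the frustum half-extents.
        // tanHalfFovX/Y can be derived from the game's frustum angles;
        // ndcX/ndcY are the pixel's NDC coordinates in [-1,1].
        void ViewPosFromDepth(float ndcX, float ndcY, float d,
                              float zn, float zf,
                              float tanHalfFovX, float tanHalfFovY,
                              float out[3])
        {
            float z = ViewZFromDepth(d, zn, zf);
            out[0] = ndcX * tanHalfFovX * z;
            out[1] = ndcY * tanHalfFovY * z;
            out[2] = z;
        }

    This also explains the w-division in the Stack Overflow approach: multiplying an NDC point by the inverse projection yields a homogeneous point, and dividing by its w undoes the perspective divide that the forward projection performed.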
  10. I've built some simple normal maps out of meshes and a custom HLSL shader that writes their normals to the screen. While I've only used this for creating tiling normal maps, where I control the orientation of the mesh used to generate the normals, I don't see why I couldn't do this for a full-model normal map: placing the models in screen space based on their UVs rather than world-space coordinates, writing the normals of the low-poly model to one image, the high-poly to another, and the vector necessary to transform the normals of the first into the second onto a third image. With the tiling normal maps I've made, I haven't seen any artifacts or weirdness. All it takes is one or two models, a relatively simple shader, and a single frame of compute time. But when I visit modelling sites, baking normals sounds like a major headache, involving the creation of a cage and a lengthy bake process. It sounds like the modelling packages are using some kind of raycasting algorithm. There must be a reason not to do things the way I've been doing them. Can anyone explain the problems with creating normal maps via shader?
  11. This is my independent project, which I'm creating alone. In this video I want to show how I draw and create the level with the second boss in the game. Enjoy watching - I hope it won't leave you indifferent.
  12. Hi everyone! Just wanted to let everyone know that ShaderMap 4 (SM4) has been released and is now free for non-commercial use. You can learn more at the new website: https://shadermap.com Thanks for checking it out! Neil
  13. I wrote my own OBJ parser, and it also calculates tangent space for normal mapping, but the calculation seems to be wrong. Any good suggestions? I can't upload my pics, so I'm linking my question: https://gamedev.stackexchange.com/questions/147199/how-to-debug-calculating-tangent-space I've also uploaded my code here: ObjLoader.cpp ObjLoader.h
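
    For comparison, the standard per-triangle tangent computation (Lengyel's method) looks roughly like the sketch below, with placeholder Vec2/Vec3 types. Per-vertex tangents are then accumulated over all triangles sharing the vertex and orthonormalized against the vertex normal:

        // Per-triangle tangent from positions p0..p2 and UVs w0..w2.
        struct Vec2 { float x, y; };
        struct Vec3 { float x, y, z; };

        Vec3 TriangleTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                             const Vec2& w0, const Vec2& w1, const Vec2& w2)
        {
            // Edge vectors in position space...
            float x1 = p1.x - p0.x, y1 = p1.y - p0.y, z1 = p1.z - p0.z;
            float x2 = p2.x - p0.x, y2 = p2.y - p0.y, z2 = p2.z - p0.z;
            // ...and in UV space.
            float s1 = w1.x - w0.x, t1 = w1.y - w0.y;
            float s2 = w2.x - w0.x, t2 = w2.y - w0.y;

            float denom = s1 * t2 - s2 * t1;   // zero for degenerate UVs
            float r = (denom != 0.0f) ? 1.0f / denom : 0.0f;
            return { (t2 * x1 - t1 * x2) * r,
                     (t2 * y1 - t1 * y2) * r,
                     (t2 * z1 - t1 * z2) * r };
        }

    A common source of bugs is the UV winding: if denom comes out negative (mirrored UVs), the bitangent needs its handedness flipped, which is usually stored in the tangent's w component.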
  14. Multipacker is an Unreal Engine 4 editor plugin for manipulating atlas textures and channels (plus, experimentally, multiple masks inside the same channel) inside Unreal Engine. It is a great help for mobile projects, allowing large savings in texture memory. The plugin is intended to be simple and at the same time powerful. It is on Gumroad now, with a 7.50 discount using the offer code "initialfeedback": https://gumroad.com/l/cYyEo It will be updated ASAP with new features. More info: https://drive.google.com/drive/folders/0B63pISMLaAAgcHc2Y1BBcXV1c0k?usp=sharing Daily progress updates are on my Twitter: https://twitter.com/turbocheke

    What's done now (version 0.2):
    • Get a number of opacity masks from a texture atlas:
      • Set 1 opacity mask on an RGB/RGBA channel.
      • Set 3 opacity masks on an RGB/RGBA channel (allowing 9 opacity masks on RGB, and 12 on RGBA).
    • One or more texture inputs.
    • Input by specific channel (RGB, Red, Green, Blue, Alpha, RGBA).

    What will be in future releases:
    • 0.25: Save the texture atlas.
    • 0.3: Save a texture database for faster icon management; Blueprint functions to manipulate textures with the database; a base Blueprint to generate button icons (pressed, normal) and different kinds of procedural usage of the icons.
    • 0.4: SDF from a texture mask; save SDFs on atlases and RGBA channels.
    • 0.5: Hot-reload textures based on the auto-import functionality of the Unreal Engine editor.
  15. How to learn to draw game graphics

    So, drawing. How do you learn to draw game graphics - and how do you learn to draw at all, when you think you've hit your ceiling? Okay, let's discuss it. To begin with, I'm not a brilliant artist, but I persistently develop this skill in myself, drawing every day and going through tons of paper. By the way, I advise you to do the same - it's good advice. As with any endeavor you start, you don't need to be a super genius, have super equipment, or immediately invest huge amounts of money in your development. No, my dear friend: start small, and everything else will come with time.

    So, to begin with, choose the drawing style that you want and just practice in it. Just don't paint in all styles at once. First determine what you like and understand what styles there are. Watching how games and their styles develop, I can say with confidence that pixel art, cartoonish, and comic graphics are now in fashion. Who needs hyperrealistic humanoids and strict canons now? They're a huge hassle to create for games, and clichéd besides. Don't be afraid to draw a fist bigger than the head, or legs the size of a joystick. The fashion for beautifully rendered, realistic drawings is gradually disappearing. Minimalism, simplicity, and broken proportions are what's current now. Look at recent games: they have become simpler in style and no less beautiful (Overwatch, Dota 2, Pixel Piracy, etc.).

    I will also say that I am a big fan of vector graphics and how it looks - to say nothing of China and the eastern countries, which can't live without it. This is not a joke: Chinese developers really do style their games as bright cartoon graphics with a touch of anime. In the east this is popular. Take the games of Klei Entertainment. Pay attention to the style. You see? It's the same across all their games, and that's cool. This is the most important thing - to find your own style. I'm sure the director will never fire that artist. Look at other industries too: advertising, television, and so on. Simple graphics are used everywhere, because they are easy to perceive. People are very lazy now and quickly get used to everything, so vector graphics are seeing good growth.

    By the way, to draw graphics for games you both do and don't need a graphics tablet. Look, the point is that you can draw vector art and pixel art with your mouse. Believe me, that will be enough for you. The tablet becomes relevant when you are drawing something more detailed; most often it is used for detailing and texturing an object in Photoshop.

    I once read Christopher Hart's book about drawing comics. You could likewise read a book on your chosen style, and also redraw different pictures you like. Why? Over time, the hand and the brain memorize the outlines and images, and it will be easier for you to come up with something new in the future, and to draw the pictures already in your head.

    Well, I'll probably finish here, but this is not my last article. I will be happy if my experience is useful to you. By the way, you can find out more about game development and everything related to it on my YouTube channel. With you was a Ukrainian indie game developer - Flatingo. Good luck to you.
  16. Hello, I have been working on SH irradiance map rendering, and I have been using a GLSL pixel shader to render SH irradiance into 2D irradiance maps for my static objects. I already have it working with 9 3D textures so far, one for each of the first 9 SH functions. In my GLSL shader, I have to send in 9 SH coefficient 3D textures that use RGBA8 as the pixel format - RGB for the red, green, and blue coefficients, and A for checking whether the voxel is in use (for the 3D texture solidification shader, to prevent bleeding). My problem is, I want to knock this number of textures down to something like 4 or 5; getting even lower would be a godsend. This is because I eventually plan on adding more SH coefficient 3D textures for other parts of the game map (such as inside rooms, as opposed to outside) to circumvent irradiance-probe bleeding between rooms separated by walls, and I don't want to reach the 32-texture limit too soon. Also, I figure it would be a LOT faster. Is there a way I could, say, store 2 sets of SH coefficients for 2 SH functions inside a texture with RGBA16 pixels? If so, how would I extract them inside GLSL? Let me know if you have any suggestions ^^.
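
    For what it's worth, one packing scheme that fits this description: RGBA16 gives 16 bits per channel, so each channel can hold two 8-bit coefficients. A sketch of the pack and the unpack math in C++, with the GLSL equivalent in the comment. Note the assumptions: UNORM16 texels, and nearest-neighbour fetches only - hardware filtering would blend the packed bits and corrupt both values:

        #include <cstdint>
        #include <cmath>

        // a goes in the high byte, b in the low byte of one 16-bit channel.
        uint16_t Pack2x8(uint8_t a, uint8_t b)
        {
            return (uint16_t)((a << 8) | b);
        }

        // Unpack from a normalized sample v in [0,1] (what a UNORM16 fetch
        // returns). The same math works in GLSL:
        //   float raw = v * 65535.0;
        //   float a   = floor(raw / 256.0) / 255.0;
        //   float b   = mod(raw, 256.0) / 255.0;
        void Unpack2x8(float v, float& a, float& b)
        {
            float raw = std::round(v * 65535.0f);
            a = std::floor(raw / 256.0f) / 255.0f;
            b = std::fmod(raw, 256.0f) / 255.0f;
        }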
  17. Does anyone know of a tool that converts PNGs into an SDF (signed distance field) texture? I've seen a few tools for fonts, but nothing specifically for single sprite textures.
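
    In case it helps while looking for a tool, a brute-force SDF bake is short enough to write by hand. A minimal sketch, assuming the input is the PNG's alpha channel thresholded to a binary mask; it is O((w·h)²) overall, so only practical for small sprites:

        #include <algorithm>
        #include <cmath>
        #include <vector>

        // Signed distance per pixel: positive inside the shape, negative
        // outside; magnitude is the distance to the nearest boundary pixel.
        std::vector<float> BuildSDF(const std::vector<bool>& inside,
                                    int w, int h)
        {
            std::vector<float> sdf(w * h);
            for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
            {
                bool self = inside[y * w + x];
                float best = 1e30f;
                // Squared distance to the nearest opposite-state pixel.
                for (int v = 0; v < h; ++v)
                for (int u = 0; u < w; ++u)
                    if (inside[v * w + u] != self)
                    {
                        float dx = float(u - x), dy = float(v - y);
                        best = std::min(best, dx * dx + dy * dy);
                    }
                float d = std::sqrt(best);
                sdf[y * w + x] = self ? d : -d;
            }
            return sdf;
        }

    The result is typically rescaled and biased into an 8-bit texture (0.5 = the contour) before saving.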
  18. Post Vice

    • Albedo: 1024 px
    • Tangent normal map: 1024 px
    • Glossiness map: 1024 px
    • Specular map: 1024 px

    More specs and screenshots here.
  19. I am trying to implement sprite clips, i.e. rendering a portion of a sprite sheet onto the screen. For now, I am at least trying to get one sprite to display on the screen. Controller support and adding the other sprites when changing the character direction will come later. Here is what I have so far. First, the class declaration:

        class World1
        {
        public:
            //other stuff

            //screen resolution
            int window_width = 0;
            int window_height = 0;

            //handles player
            void renderPlayer(SDL_Renderer*&);

            //other stuff

        private:
            //player graphic and rectangle structs/variables/etc.
            SDL_Texture* pSpriteSheet = nullptr;
            SDL_Rect pSpriteClips[3];
            SDL_Rect* pSprite;
            SDL_Rect pBase;
            SDL_RendererFlip sFlip = SDL_FLIP_NONE;

            //other stuff
        };

    Then, the constructor:

        World1::World1(int SCREEN_WIDTH, int SCREEN_HEIGHT, SDL_Renderer*& renderer)
        {
            window_width = SCREEN_WIDTH;
            window_height = SCREEN_HEIGHT;

            *pSpriteSheet = IMG_LoadTexture(renderer, "male_base-test-anim.gif");
            if (pSpriteSheet == nullptr)
            {
                cout << "Unable to load player Sprite sheet.";
            }

            //facing down sprite
            pSpriteClips[0].x = 14;
            pSpriteClips[0].y = 12;
            pSpriteClips[0].w = 145;
            pSpriteClips[0].h = 320;

            //more code...

            //player sprite
            pBase.x = 0;
            pBase.y = 0;
            pBase.w = pSpriteClips[0].w;
            pBase.h = pSpriteClips[0].h;
        }

    Then, the rendering:

        void World1::renderPlayer(SDL_Renderer*& renderer)
        {
            //render
            SDL_RenderClear(renderer); //clears screen
            SDL_RenderCopyEx(renderer, pSpriteSheet, pSprite, &pBase, 0.0, NULL, sFlip);
            SDL_RenderPresent(renderer); //puts image on screen
        }

    So far, the program does not build, and I get the following error:

        C:\Users\Kevin\Documents\Codeblocks\KnightQuest\worlds.cpp|12|error: invalid use of incomplete type 'SDL_Texture {aka struct SDL_Texture}'

    Am I mistaken to assume that C++ will automatically assign a memory address to the SDL_Rect* when I assign a value to it via the dereference operator? Or perhaps there is another issue?
  20. Hello guys! Totally new here, as well as to graphics programming in general. I'm making a 2D fighting game engine using XNA/MonoGame, and oddly enough what's tripping me up is how I can let the user define their lifebars. Obviously the simplest (old-school) method would be to just use a bounding rect to cut off the display region of the current sprite/animation to only the "remaining life" region. Done. But what if they want to define some other, non-rectangular shape for the end of the lifebar, such as a rhombus or a pill shape? So I figure, why not have them make a mask sprite to define what the end of the lifebar should look like? That way they can have whatever end shape they want at any point in the lifebar, whether it's a static sprite or an animation. It will also provide another useful tool, familiar to designers, that they can use anywhere else in the engine. Basically, I want the user to be able to insert a lifebar sprite and a mask sprite, and end up with the masked result at the end of the lifebar (after aligning the mask with the end of the lifebar).

    The algorithm itself is simple: take the sprite and multiply it by the color (or a single color component, to simplify things) of the overlapping mask-sprite texel. But the problem is this: the mask texture and the lifebar texture may have different dimensions. In the HLSL I've worked with so far, texture coordinates run from 0.0 to 1.0 in the X or Y direction. So in my head, that mask sprite is going to be treated as the same size as the lifebar texture, which means it would create a totally different image as the life decreases! I'm using HLSL, but I'm unsure how to approach this problem. Is there any way to check image dimensions so I can tell it to use the mask texture as-is, and not clip anything outside of the mask texture?
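
    One common way to handle the dimension mismatch is to pass both texture sizes in as shader parameters and remap the UVs yourself. A sketch of the arithmetic in C++ - all names are hypothetical, and in the real engine this math would live in the HLSL pixel shader, with the sizes set as effect parameters:

        // Remaps a lifebar UV into mask-texture UV space so the mask keeps
        // its own pixel size instead of being stretched over the lifebar.
        struct Float2 { float x, y; };

        Float2 LifebarToMaskUV(Float2 lifebarUV,
                               Float2 lifebarSizePx,  // e.g. {512, 64}
                               Float2 maskSizePx,     // e.g. {64, 64}
                               float  maskAnchorPx)   // current end-of-life x
        {
            // Convert the normalized lifebar UV to pixels, shift by the
            // anchor, then renormalize against the mask's own dimensions.
            Float2 uv;
            uv.x = (lifebarUV.x * lifebarSizePx.x - maskAnchorPx) / maskSizePx.x;
            uv.y = (lifebarUV.y * lifebarSizePx.y) / maskSizePx.y;
            return uv;
        }

    Coordinates that land outside [0,1] are then outside the mask, and the shader can treat them as fully opaque (before the anchor) or fully masked (past it) instead of sampling.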
  21. Hi, not sure if I'm posting in the right place - if not, please forgive me... For a game project I am working on, I would like to implement a 2D starfield as a background. I don't want to deal with static tiles, since I plan to slowly animate the starfield. So I am trying to figure out how to generate a random starfield for the entire map. I feel that a uniform distribution of the stars will not do the trick. Instead I would like something similar to the screenshot below, taken from the game Star Wars: Empire at War (all credits to Lucasfilm, Disney, and so on...). Does anyone have an idea for a distribution that could produce such a starfield? Any insight would be appreciated.
  22. I'm copying mipmaps of a BC3-compressed texture region to a new (and bigger) BC3-compressed texture with ID3D11DeviceContext::CopySubresourceRegion. Unfortunately the new texture contains incorrect mipmaps when the width or height of a mip level is unaligned to the block size, which is 4 in the case of BC3. I think this has to do with the virtual and physical size of a mip level for block-compressed textures: https://msdn.microsoft.com/en-us/library/windows/desktop/bb694531(v=vs.85).aspx#Virtual_Size (that page also shows a warning about this case). I don't know how to account for the physical memory size, or whether that's even possible when using ID3D11DeviceContext::CopySubresourceRegion. Is it possible, and if so, how?
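
    For reference, a common workaround is to keep every source-box coordinate block-aligned, rounding the right/bottom edges of partial mip levels up to the physical (block-padded) size. An untested sketch; mipWidth and mipHeight here are the virtual sizes of the source mip level:

        #include <d3d11.h>

        // Builds a block-aligned D3D11_BOX for CopySubresourceRegion on a
        // BC-compressed texture. For BC formats every box coordinate must
        // be a multiple of 4; the last partial block is rounded UP so the
        // copy covers the physical size even when the virtual mip size
        // isn't a multiple of 4.
        D3D11_BOX MakeBC3CopyBox(UINT mipWidth, UINT mipHeight)
        {
            D3D11_BOX box = {};
            box.left   = 0;                      // must stay 4-aligned
            box.top    = 0;
            box.front  = 0;
            box.right  = (mipWidth  + 3) & ~3u;  // round up to block multiple
            box.bottom = (mipHeight + 3) & ~3u;
            box.back   = 1;
            return box;
        }

    The destination offsets passed to CopySubresourceRegion need the same 4-alignment, since the copy operates on whole 4x4 blocks of compressed data.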
  23. We use OpenGL ES 2.0 for our game. On the iPhone 6 series, IOKit shows a very large swapped size (100 MB for just one resource, with 0 resident size and 0 dirty size), which leads to a crash. With the same settings there is almost no swapped size on the iPhone 5 and 7 series (the attached images are iPhone 5, 6, and 7, in that order). Our textures are mostly 1024x1024 and 512x512, plus one 2048x1024. Any ideas?
  24. We just finished texturing the entrance hall of the apartment. The apartment is the first location in the game, the place from which the main character, Charly Clearwater, will start his journey. Furthermore, we set up the lighting in the apartment. For our next video, which will be published this week on YouTube, we played God and created a moon. The video is supposed to show our progress in texturing and illuminating the apartment.