Showing results for tags '2D' in content posted in Graphics and GPU Programming.



Found 82 results

  1. Hello everyone. I have been trying to make a font renderer that uses FreeType for the past couple of days, but I'm currently stuck on getting the UV texture values when rendering. Here is the current code I use:

```cpp
struct SVertex
{
    SVector4f pos;
    Color32   col;
    SVector2f tex;
}; // D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1

float CDisplayFont::DrawTextA( const char * szText, int textCount, const SVector2f& pos, const SColor & color, const SRectf * pClipRect )
{
    SVector2f tmp = pos;
    IRender * pRender = g_pCore->GetGraphics()->GetRender();
    for ( int i = 0; i < textCount; ++i )
    {
        Codepoint_t cp = static_cast< Codepoint_t >( szText[ i ] );
        if ( GlyphInfo_t * info = GetGlyphInfo( cp ) )
        {
            if ( szText[ i ] != ' ' )
            {
                float sx = tmp.x + info->offsetX * m_fScaleHoriz;
                float sy = tmp.y - ( info->height - info->offsetY ) * m_fScaleVert;
                float w  = info->width  * m_fScaleHoriz;
                float h  = info->height * m_fScaleVert;

                // column (u) and row (v) number -- here's where I'm stuck
                float u = 0;
                float v = 0;

                SVertex vtx[ ] =
                {
                    { sx,     sy + h, 0.0f, 1.f, color, u, v },
                    { sx,     sy,     0.0f, 1.f, color, u, v },
                    { sx + w, sy + h, 0.0f, 1.f, color, u, v },
                    { sx + w, sy,     0.0f, 1.f, color, u, v },
                    { sx + w, sy + h, 0.0f, 1.f, color, u, v },
                    { sx,     sy,     0.0f, 1.f, color, u, v }
                };

                // arguments -> ( rl, vtx data, vtx count, topology, texture (IDirect3DTexture9) )
                pRender->PushVertices( NULL, vtx, 6, D3DPT_TRIANGLELIST, info->texture->GetInternalPtr() );
            }
            tmp.x += ( float )( info->advance >> 6 ) * m_fScaleHoriz;
        }
    }
    return tmp.x;
}
```

This obviously prints nothing, because I am stuck on how exactly I should be getting the correct u and v coords. I have confirmed that I am getting the correct texture by saving the IDirect3DTexture9 to a file, getting stuff like this: https://imgur.com/a/Lwl2Xws I appreciate any advice/pointers in the right direction, thank you.
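Since the code above binds info->texture per glyph, here is a hedged sketch of the two usual answers (the names and atlas fields below are illustrative, not from the post):

```cpp
// Case A: one texture per glyph (which the per-glyph info->texture bind above
// suggests): the quad covers the whole texture, so the four corners simply get
// the unit-square UVs (0,0), (1,0), (0,1), (1,1), each matching its corner.
// Case B: a glyph atlas: divide the glyph's pixel rectangle by the atlas size.
// AtlasUV and its parameters are hypothetical names for illustration.
struct UV { float u, v; };

UV AtlasUV(int pixelX, int pixelY, int atlasW, int atlasH)
{
    return { (float)pixelX / (float)atlasW,
             (float)pixelY / (float)atlasH };
}
```

Note that under D3D9's texel addressing you may additionally need the classic half-texel offset to avoid sampling bleed between glyphs.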
  2. Hi there. We are a small indie team developing a casual mobile game, and we are trying to implement the following two effects:

1. Ice to Normal Blocks: https://youtu.be/q71RaKHn-iw?t=1227 Here a nice particle effect happens when all the ice blocks are turned into normal blocks. We want to do this.
2. Aiming Dots: https://youtu.be/q71RaKHn-iw?t=127 We want to implement these aiming dots for a ball in our game.

Does anyone know: 1. How to implement these two effects? We have a good programmer; any link to an online reference will be enough. 2. Is there an asset at the Unity Asset Store that could help us? Thanks. Any suggestions appreciated.
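For the aiming dots, one common approach (a sketch under assumed mechanics, not taken from the linked game) is to sample points at equal spacing along the predicted path, reflecting the direction whenever the path crosses a side wall:

```cpp
#include <vector>
#include <cmath>

struct Vec2 { float x, y; };

// Hedged sketch: march `count` dots along `dir` from `start`, `spacing`
// apart, mirroring the horizontal direction at the walls x = 0 and x = width.
// Each returned point is where one aiming dot would be drawn.
std::vector<Vec2> AimingDots(Vec2 start, Vec2 dir, float spacing, int count, float width)
{
    std::vector<Vec2> dots;
    Vec2 p = start;
    for (int i = 0; i < count; ++i)
    {
        p.x += dir.x * spacing;
        p.y += dir.y * spacing;
        if (p.x < 0.0f)  { p.x = -p.x;                dir.x = -dir.x; } // bounce off left wall
        if (p.x > width) { p.x = 2.0f * width - p.x;  dir.x = -dir.x; } // bounce off right wall
        dots.push_back(p);
    }
    return dots;
}
```

In Unity each point would typically be rendered as a small sprite or via a LineRenderer with a dotted material.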
  3. Simple problem compared to what is usually posted around here, but I'm trying to draw a rotated textured rectangle in my sprite batch:

```cpp
if (sprt->tex != tex)
{
    flush();
    sprt->tex = tex;
}
glm::vec2 oldHsize = hsize;
hsize.x = oldHsize.x * cos(angle) - oldHsize.y * sin(angle);
hsize.y = oldHsize.x * sin(angle) + oldHsize.y * cos(angle);
data.push_back({{pos.x - hsize.x, pos.y - hsize.y}, {0, 1}});
data.push_back({{pos.x + hsize.x, pos.y - hsize.y}, {1, 1}});
data.push_back({{pos.x - hsize.x, pos.y + hsize.y}, {0, 0}});
data.push_back({{pos.x + hsize.x, pos.y - hsize.y}, {1, 1}});
data.push_back({{pos.x - hsize.x, pos.y + hsize.y}, {0, 0}});
data.push_back({{pos.x + hsize.x, pos.y + hsize.y}, {1, 0}});
```

hsize is half the size of the sprite being drawn, but it seems to draw it completely incorrectly. If I slowly increase the value of angle, here's what it looks like (in the attachments). 2019-01-01_18-07-35.mp4
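A hedged sketch of the likely fix (an answer sketch, not the poster's code): rotating `hsize` once only rotates the (+x, +y) corner offset, but after rotation the other three corners are no longer simple sign flips of that vector. Each corner offset has to be rotated independently:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Standard 2D rotation of a single vector by `angle` radians.
Vec2 Rotate(Vec2 v, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { v.x * c - v.y * s,
             v.x * s + v.y * c };
}

// The quad's four corner offsets before rotation are (+-hx, +-hy); rotate
// each one separately, then add `pos`:
//   pos + Rotate({-hx, -hy}, a), pos + Rotate({+hx, -hy}, a),
//   pos + Rotate({-hx, +hy}, a), pos + Rotate({+hx, +hy}, a)
```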
  4. I'm trying to set the correct world positions in 4 vertices, and for reasons I need to do the calculations with matrices on the CPU before I send the positions to my GLSL shader. I decide the positions somewhat like this:

```cpp
mat4 m_proj;

void Submit(renderable, vec3 pos, vec3 origin, vec3 scale)
{
    vec3 _size = renderable.size;

    vec3 _pos = pos + origin + vec3(0, _size.y, 0);
    mat4 _model = mat4::CreateScale(scale) * mat4::Translate(_pos);
    mat4 _pm = m_proj * _model;
    _pos = _pm.Transform(_pos);
    vertex1.pos = _pos;

    _pos = pos + origin + vec3(_size.x, _size.y, 0);
    _model = mat4::CreateScale(scale) * mat4::Translate(_pos);
    _pm = m_proj * _model;
    _pos = _pm.Transform(_pos);
    vertex2.pos = _pos;

    _pos = pos + origin + vec3(_size.x, 0, 0);
    _model = mat4::CreateScale(scale) * mat4::Translate(_pos);
    _pm = m_proj * _model;
    _pos = _pm.Transform(_pos);
    vertex3.pos = _pos;

    _pos = pos + origin;
    _model = mat4::CreateScale(scale) * mat4::Translate(_pos);
    _pm = m_proj * _model;
    _pos = _pm.Transform(_pos);
    vertex4.pos = _pos;

    // Code to submit...
}
```

And here are all the used mat4 functions:

```cpp
union
{
    float elements[16];
    vec4  rows[4];
};

mat4 Multiply(mat4 other) // This is used in the * operator
{
    float _data[16];
    for (int _row = 0; _row < 4; _row++)
    {
        for (int _col = 0; _col < 4; _col++)
        {
            float _sum = 0.0f;
            for (int e = 0; e < 4; e++)
            {
                _sum += elements[e + _row * 4] * other.elements[_col + e * 4];
            }
            _data[_col + _row * 4] = _sum;
        }
    }
    memcpy(elements, _data, 4 * 4 * sizeof(float));
    return *this;
}

mat4 CreateScale(vec3 scale)
{
    mat4 _result(1.0f); // Identity matrix
    _result.elements[0 + 0 * 4] = scale.x;
    _result.elements[1 + 1 * 4] = scale.y;
    _result.elements[2 + 2 * 4] = scale.z;
    return _result;
}

mat4 Translate(vec3 translation)
{
    mat4 _result(1.f);
    _result.elements[0 + 3 * 4] = translation.x;
    _result.elements[1 + 3 * 4] = translation.y;
    _result.elements[2 + 3 * 4] = translation.z;
    return _result;
}

vec3 Transform(vec3 vec)
{
    return vec3(
        rows[0].x * vec.x + rows[0].y * vec.y + rows[0].z * vec.z + rows[0].w,
        rows[1].x * vec.x + rows[1].y * vec.y + rows[1].z * vec.z + rows[1].w,
        rows[2].x * vec.x + rows[2].y * vec.y + rows[2].z * vec.z + rows[2].w );
}
```

m_proj is just an orthographic matrix, which I know is fine. My question is really just whether there is something wrong with what I'm doing here, as I don't get the result I want.
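One likely issue worth checking (an answer sketch, not the poster's code): baking the corner position into the model matrix via Translate(_pos) and then ALSO passing _pos into Transform() applies the translation twice. The point fed to Transform should be the local-space corner, with the placement living only in the matrix. A minimal demonstration, assuming a row-major layout with the translation in the last column as in the posted Translate():

```cpp
// Minimal 4x4 sketch showing the double-translation: Translate(p).Transform(p)
// yields 2*p, while transforming the local origin yields the intended p.
struct V3 { float x, y, z; };

struct M4
{
    // Row-major identity; translation stored in the last column.
    float e[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 };

    static M4 Translate(V3 t)
    {
        M4 m;
        m.e[3] = t.x; m.e[7] = t.y; m.e[11] = t.z;
        return m;
    }

    V3 Transform(V3 v) const
    {
        return { e[0]*v.x + e[1]*v.y + e[2]*v.z  + e[3],
                 e[4]*v.x + e[5]*v.y + e[6]*v.z  + e[7],
                 e[8]*v.x + e[9]*v.y + e[10]*v.z + e[11] };
    }
};
```

With this, `M4::Translate({5,0,0}).Transform({5,0,0})` gives x = 10 (translated twice), whereas `Transform({0,0,0})` gives the intended x = 5. The usual fix is one model matrix per quad, built from pos/origin/scale, applied to the four local corner offsets.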
  5. I have written the following pixel shader for point lighting in a 2D platformer. A normal map and a specularity map are generated using Sprite DLight. colorMap is the original sprite, normalMap.rgb is the normal and normalMap.a is the specularity from the specularity map. lightPosition has a z coordinate to position it further away from the scene. Is this correct for 2D specular lighting?

```hlsl
float4 PointLightShader(VertexToPixel PSIn) : COLOR0
{
    float4 colorMap = tex2D(ColorMapSampler, PSIn.TexCoord);
    clip(colorMap.a - 0.001);

    float3 pixelPosition = float3(PSIn.WorldPos, 0);
    float3 lightDirection = lightPosition - pixelPosition;
    float coneAttenuation = saturate(1.0f - length(lightDirection) / lightDecay);

    float4 normalColor = tex2D(NormalMapSampler, PSIn.TexCoord);
    float3 normal = normalColor.rgb;
    float3 normalTangent = (2.0f * normal) - 1.0f;

    float3 lightDirNorm = normalize(lightDirection);
    float3 halfVec = float3(0, 0, 1);
    float amount = max(dot(normalTangent, lightDirNorm), 0);
    float3 reflect = normalize(2 * amount * normalTangent - lightDirNorm);
    float specular = min(pow(saturate(dot(reflect, halfVec)), 10), amount);

    return (colorMap * coneAttenuation * lightColor * lightStrength)
         + (colorMap * specular * coneAttenuation * normalColor.a);
}
```
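One detail worth double-checking (an observation, not a confirmed bug in the shader above): the variable named halfVec is actually the view vector. In Blinn-Phong the half vector is normalize(L + V), not V itself; dotting the reflection vector with V is classic Phong, and mixing the two changes the highlight shape. The math, sketched on the CPU with a fixed 2D view direction V = (0, 0, 1):

```cpp
#include <cmath>

struct V3 { float x, y, z; };

V3 Normalize(V3 v)
{
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / l, v.y / l, v.z / l };
}

// Blinn-Phong half vector for a normalized light direction L and the
// screen-facing view vector used in flat 2D lighting.
V3 HalfVector(V3 L)
{
    V3 V{ 0, 0, 1 };
    return Normalize({ L.x + V.x, L.y + V.y, L.z + V.z });
}
```

Either form can look fine for sprites; the point is just to use one model consistently (dot(N, H) with the half vector, or dot(R, V) with the reflection vector).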
  6. In MonoGame I'm writing a shader for lighting and shadows in a 2D platformer. A shadow will be drawn for each character for each light that hits that character. Because shadows from different lights can overlap, the shadows are drawn to a texture where each pixel is a bitfield in which each bit tells you whether the pixel is in shadow from a given light. The lighting shader then, for each light, only applies light if the bit for that light is not set at the given pixel. In order not to make, for example, 40 draw calls to draw 40 shadows if 40 lights overlapped a character, I batch shadows together into a VertexBuffer with data specifying which light created the given shadow. In the shadow shader, it samples the render target I am drawing the shadows to and sets its own bit. My problem is that the changes from the previous shadows in the same batch aren't applied to the render target until after the draw call has completed. This results in the bitfield getting overwritten by shadows from other lights. If I could somehow sample the back buffer, this wouldn't be a problem. Is there any way I can fix this without making a draw call for each shadow?
  7. Hi, I am trying to code a lightmapper for my 3D engine, and as such I implemented a TexturePacker/Atlas. Although it's working OK, I find it oddly inefficient: it mostly packs everything to the left and leaves a lot of free space to the right. Here is the code; can you spot anything wrong, or suggest another way to do it? I have a test WinForms app if required, I could upload it. Cheers.

```csharp
using System.Collections.Generic;

namespace Vivid3D.Util.Texture
{
    public class TexTree
    {
        public static List<TreeLeaf> Leafs = new List<TreeLeaf>();
        public TreeLeaf Root { get; set; }
        public Rect RC;

        public TexTree(int w, int h)
        {
            RC = new Rect(0, 0, w, h);
            //Root = new TreeLeaf(new Rect(0, 0, w, h));
        }

        public TreeLeaf Insert(int w, int h, int id = -1)
        {
            //return Root.Insert(w, h);
            if (Root == null)
            {
                Root = new TreeLeaf(new Rect(0, 0, w, h), id) { Used = true };
                Root.Child[0] = new TreeLeaf(new Rect(0, h, RC.W, RC.H - h));
                Root.Child[1] = new TreeLeaf(new Rect(w, 0, RC.W - w, h));
            }
            else
            {
                return Root.Insert(w, h);
            }
            return Root;
        }
    }

    public class TreeLeaf
    {
        public TreeLeaf[] Child = new TreeLeaf[2];
        public Rect RC = new Rect();
        public int TexID = 0;
        public bool Used = false;

        public TreeLeaf(Rect s, int id = -1)
        {
            RC = s;
            TexID = -1;
            Child[0] = Child[1] = null;
            TexTree.Leafs.Add(this);
        }

        public TreeLeaf Insert(int w, int h)
        {
            if (Used)
            {
                TreeLeaf rn = Child[1].Insert(w, h);
                if (rn != null)
                {
                    return rn;
                }
                rn = Child[0].Insert(w, h);
                if (rn != null)
                {
                    return rn;
                }
            }
            else
            {
                if (w < RC.W && h < RC.H)
                {
                    Used = true;
                    Child[0] = new TreeLeaf(new Rect(RC.X, RC.Y + h, RC.W, RC.H - h));
                    Child[1] = new TreeLeaf(new Rect(RC.X + w, RC.Y, RC.W - w, h));
                    RC.W = w;
                    RC.H = h;
                    return this;
                }
                else
                {
                    return null;
                }
            }
            return null;
        }

        public bool Fits(int w, int h)
        {
            return w < RC.W && h < RC.H;
        }
    }

    public class Rect
    {
        public float X, Y, W, H;

        public Rect()
        {
            X = Y = W = H = 0;
        }

        public Rect(float x, float y, float w, float h)
        {
            X = x;
            Y = y;
            W = w;
            H = h;
        }
    }
}
```
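One technique worth trying with binary-tree packers like the one above (a general packing heuristic, not specific to this code): insert the rectangles in decreasing size order rather than arrival order. Sorting by the longer side (or by height) first tends to pack dramatically tighter, because large rects placed late are what forces the tree to open big empty regions. A language-agnostic sketch of the sort:

```cpp
#include <algorithm>
#include <vector>

// Hedged sketch: sort rects biggest-first by their longer side before
// feeding them to TexTree.Insert. `id` just tracks the original rect.
struct RectWH { int w, h, id; };

void SortForPacking(std::vector<RectWH>& rects)
{
    std::sort(rects.begin(), rects.end(), [](const RectWH& a, const RectWH& b) {
        return std::max(a.w, a.h) > std::max(b.w, b.h); // biggest first
    });
}
```

Also note that the root in TexTree.Insert is created with the first rect's w and h instead of the atlas size in RC, which may be worth a second look.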
  8. I'm creating a 2D game engine using Vulkan, and I've been looking at how to draw different textures (each GameObject can contain its own texture, which can differ from the others). In OpenGL you call glBindTexture; in Vulkan I have seen people say that you can create a descriptor for each texture and call vkCmdBindDescriptorSets for each, but I have read that doing this has a high cost. The way I'm doing it is to use only one descriptor for the sampler2D and use a VkDescriptorImageInfo vector, where I add a VkDescriptorImageInfo for each texture and assign the vector to pImageInfo:

```cpp
VkWriteDescriptorSet samplerDescriptorSet;
samplerDescriptorSet.pNext = NULL;
samplerDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
samplerDescriptorSet.dstSet = descriptorSets[i];
samplerDescriptorSet.dstBinding = 1;
samplerDescriptorSet.dstArrayElement = 0;
samplerDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
samplerDescriptorSet.descriptorCount = static_cast<uint32_t>(samplerDescriptors.size());
samplerDescriptorSet.pImageInfo = samplerDescriptors.data(); // samplerDescriptors is the vector
```

Using this, I can skip creating and binding a descriptor for each texture, but now I need an array of samplers in the fragment shader. I can't use sampler2DArray because the textures have different sizes, so I decided to use an array of sampler2Ds (sampler2D textures[n]). The problem with this is that I don't want to set a max number of textures to use. I found a way to do it dynamically using:

```glsl
#extension GL_EXT_nonuniform_qualifier : enable
layout(binding = 1) uniform sampler2D texSampler[];
```

I have never used this before and don't know whether it is efficient or not. Anyway, there is still a problem with this: now I need to set the descriptor count when I create the descriptor layout, and again I don't want to set a max number:

```cpp
VkDescriptorSetLayoutBinding samplerLayoutBinding = {};
samplerLayoutBinding.binding = 1;
samplerLayoutBinding.descriptorCount = 999999; // <<<< HERE
samplerLayoutBinding.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
samplerLayoutBinding.pImmutableSamplers = nullptr;
samplerLayoutBinding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
```

Having said that, how can I solve this? Or what is the correct way to do this efficiently? If you need more information, just ask. Thanks in advance!
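The GL_EXT_nonuniform_qualifier shader side is part of the descriptor-indexing feature (VK_EXT_descriptor_indexing, promoted to core in Vulkan 1.2), and the same feature addresses the layout-count problem: a binding can be declared with a large upper bound but marked variable-count, so the real count is supplied at descriptor-set allocation time. A hedged config sketch; the structure and flag names below are from that API, but verify them against your SDK headers and query the device's maxDescriptorSetSampledImages limit instead of a magic number:

```cpp
// Layout creation: mark the binding variable-count and partially bound.
VkDescriptorBindingFlags bindingFlags =
    VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT |
    VK_DESCRIPTOR_BINDING_PARTIALLY_BOUND_BIT;

VkDescriptorSetLayoutBindingFlagsCreateInfo flagsInfo = {};
flagsInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO;
flagsInfo.bindingCount = 1;
flagsInfo.pBindingFlags = &bindingFlags;
// Chain flagsInfo into VkDescriptorSetLayoutCreateInfo::pNext, with the
// binding's descriptorCount set to the device-supported maximum.

// Allocation: pass the actual texture count in use.
uint32_t actualCount = static_cast<uint32_t>(samplerDescriptors.size());
VkDescriptorSetVariableDescriptorCountAllocateInfo countInfo = {};
countInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_VARIABLE_DESCRIPTOR_COUNT_ALLOCATE_INFO;
countInfo.descriptorSetCount = 1;
countInfo.pDescriptorCounts = &actualCount;
// Chain countInfo into VkDescriptorSetAllocateInfo::pNext.
```

The feature also has to be enabled at device creation (descriptorIndexing and the related feature bits), and not every implementation supports it, so check the physical-device features first.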
  9. Programmer One

    2D Android Sprite Caching

    I'm currently writing a 2D game engine from scratch for Android. The first iteration of the engine is just going to use the Android Canvas view for drawing. At some point I want to support OpenGL ES, but not until I finish this first project (which is a very simple game based on this engine). Right now I'm dealing with sprites, and I've encountered a design challenge that I'm not entirely sure which direction to take.

    For the sprite bitmaps, I've decided to go down the sprite atlas route (as opposed to individual image files). I'm using TexturePacker and I've written a custom JSON exporter. I didn't really want to limit myself too much, so I decided I'd support sprite rotation and trimming in order to save as much space as I can in the atlas. I backed off from supporting polygon trimming for now. If you're unfamiliar with TexturePacker, it's essentially a tool that lets you import individual sprite frames, organize them into folders, and then have the application generate a sprite map and a corresponding coordinate data file. It supports trimming any blank (alpha) space around the sprite images in order to pack them closer together, and rotating an image if that makes it fit better.

    What I'm trying to figure out now is how to deal with loading the sprite image data. Currently I'm at the point where I can deserialize the JSON map data into "sprite frame" objects, which contain information about each frame. My format allows grouping of sprite frames in order to organize frames that correspond to the same animation. In essence, the sprite frame object has:

    • The original (untrimmed) size of the sprite image.
    • The original position of the sprite image within its bounding box.
    • The rect of where the image is in the sprite atlas.
    • A flag indicating if it has been trimmed.
    • A flag indicating if it has been rotated (CW).

    This gives me all the information I need to draw the image onto the Canvas. If I didn't support all the other fancy features I want (packed rotation, trimming) and pre-transformation (i.e. mirroring a sprite so I can reuse it for things like changing the walking animation without having to pack in more sprites), then drawing the image from the sprite atlas onto the canvas would be as simple as Canvas.drawBitmap([Source Bitmap], [Destination Rect], [Source Rect]). But since the image I'd be drawing MIGHT have been rotated, trimmed or otherwise transformed, I can't just blit it onto the Canvas. I'd first need to apply some transformations in order to "undo" the changes that were done during packing. This means I would need to either:

    • Slice out the child image from the sprite atlas into a new bitmap and apply the "unpacking" transformations (i.e. rotate back, realign, etc.), or
    • Apply a transformation to the Canvas itself. (I don't think I want to go down this road, since I've read that transforming the Canvas tends to be rather slow.)

    So I'm probably left with having to create smaller bitmaps from the sprite atlas and then keep those in memory for as long as I need them. For a single sprite character, I'd be looking at around 36 sprite frames (9 different animations, each with 4 frames). What I'm concerned about is memory consumption. So now I'm thinking either:

    • I read in all the sprite bitmaps from the sprite atlas and shove them into an LRU cache. All the sprite image data is then in memory, ready to go for whatever animation sequence and frame I want. Once I'm done with the atlas, I dispose of it and just work with what I have in memory. I can perform this caching when I load levels and then clear items from the cache that I no longer need.
    • Or I just keep the sprite atlas, blit directly from it onto the canvas, and get rid of the fancy packing features so that I don't have to process any transformations. The only problem with this approach is that I would also have to shelve shearing and rotation on the sprite object itself.

    TL;DR: Am I being overly memory-conscientious, or is having a couple of frames of sprite data in memory not a super big deal?
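For the memory question, a back-of-envelope calculation helps (the 128x128 frame size below is an assumed example, not from the post): an ARGB_8888 bitmap costs width * height * 4 bytes, so a full character's worth of cached frames is usually small:

```cpp
#include <cstdint>

// Bytes consumed by `frames` decoded bitmaps of w x h pixels at
// `bytesPerPixel` (4 for ARGB_8888 on Android).
uint64_t CacheBytes(uint64_t frames, uint64_t w, uint64_t h, uint64_t bytesPerPixel)
{
    return frames * w * h * bytesPerPixel;
}
```

CacheBytes(36, 128, 128, 4) works out to 2,359,296 bytes, roughly 2.25 MB, which is modest next to a typical Android heap; the cost only becomes interesting for many characters or much larger frames.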
  10. How can I get the scaling of a texture when drawn on screen? In other words: how can I get the number of texture elements (texels) a pixel on the screen takes up? I.e. if a texture is 100x100 pixels in size and it only takes up 20x20 pixels on the monitor screen, then I want to calculate 5.0 as the value. I don't need anything complex, since it's a 2D scene with an orthographic camera setup. I'm trying to do manual texture sampling in my fragment shader. It's a Cg program inside a Unity project, so if there is a built-in way to get/calculate this, let me know. I feel like there are two ways: calculate it using viewport and camera information, or calculate it using world-to-screen-space transformations. Is there a better way? Which one should I implement, and how?
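A hedged note on the usual in-shader answer: screen-space derivatives give this directly, since fwidth(uv) is how much the UV changes per screen pixel, so texelsPerPixel = textureSizeInTexels * fwidth(uv) (Cg/HLSL expose fwidth, or ddx/ddy for the per-axis version, and this works for rotated sprites too). The same quantity computed from the orthographic setup is just a ratio:

```cpp
// Texels covered by one screen pixel along one axis: texture size in texels
// divided by the sprite's on-screen size in pixels.
float TexelsPerPixel(float textureSizeTexels, float onScreenSizePixels)
{
    return textureSizeTexels / onScreenSizePixels; // e.g. 100 texels over 20 px = 5.0
}
```

The derivative route is usually preferable in a fragment shader because it needs no camera information passed in.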
  11. Can anyone explain how to procedurally generate a lake-floor (sand) texture like the one in the screenshot below?
  12. Hi all, I'm adrift on this one and going round in circles after 2 months! I'm looking to develop an app that presents multimedia layouts to end consumers. The app will allow editing of these final layouts, so it must be able to use techniques to draw grids and boxes around elements. In short, it needs to be able to draw: images, text, video, and simple geometry (rectangles, lines, etc.). Preferably, I would also like: image and text effects such as drop-shadow, blur and outline; PDF display; and GIF display. My preferred language is VB.NET, but I could always move to C# if absolutely necessary (gotta make that painful journey one day!). I've tried all sorts: SharpDX, SlimDX, MonoGame, Gorgon, SkiaSharp, DeltaEngine, SFML, Veldrid. All fall short or get too low-level too quickly! The closest I've gotten is via SharpDX, but I must admit I'm lost with all the technicalities, and I got stuck at applying effects to bitmaps (my code-monkery led to my populating the backbuffer via RenderTarget, but image effects appear to leverage RenderContext, which I have no idea how to integrate into my RenderTarget approach; see HERE). Of course, the irony is that I can achieve all of the above via GDI+ and WinForms, but naturally this is incredibly slow. It does feel like I'm getting closest by using low-level DirectX approaches, but the main problem is that I just can't get my head around the concepts (fine individually, but I get stuck at how they all bind together!). So, two requests: 1. Any ideas or guidance? (Libraries that I may have missed; answers to the points I'm getting stuck at; DirectX guidance.) I have the feeling that once some core code is written around the above, the rest should be straightforward. 2. Does anyone know of anyone providing coding services around SharpDX? Thanks all.
  13. Hello there, I have tried following various OpenGL tutorials and I'm now at a point where I can render multiple 2D sprites with textures. For that I have a sprite class.

Header:

```cpp
#ifndef SPRITE_H
#define SPRITE_H

#include <GL/glew.h>
#include "Shader.h"
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include "Texture.h"
#include <stb_image.h>
#include "Camera.h"

class Sprite
{
public:
    Sprite(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture);
    ~Sprite();
    void draw(Camera &camera);
    void setPosition(float x, float y, float z);
    void move(float x, float y, float z);
    void setTexture(Texture *texture);
    Texture getTexture();
    float x, y, width, height;
private:
    void init(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture);
    GLuint VBO = 0, VAO = 0, EBO = 0;
    GLint transformShaderLocation, viewShaderLocation, projectionShaderLocation;
    Shader* shader;
    glm::mat4 transform, projection, view;
    Texture *texture;
};

#endif
```

Code:

```cpp
#include "Sprite.h"

Sprite::Sprite(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture)
{
    init(x, y, width, height, shader, texture);
}

void Sprite::init(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture)
{
    this->shader = &shader;
    this->x = x;
    this->y = y;
    this->width = width;
    this->height = height;

    GLfloat vertices[] = {
         width / 2,  height / 2, 0.0f, /* Top Right    */ 1.0f, 1.0f,
         width / 2, -height / 2, 0.0f, /* Bottom Right */ 1.0f, 0.0f,
        -width / 2, -height / 2, 0.0f, /* Bottom Left  */ 0.0f, 0.0f,
        -width / 2,  height / 2, 0.0f, /* Top Left     */ 0.0f, 1.0f
    };
    GLuint indices[] = {
        0, 1, 3, // 1
        1, 2, 3  // 2
    };

    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    glGenBuffers(1, &EBO);

    glBindVertexArray(VAO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // Position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    // TexCoord
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);

    transformShaderLocation = glGetUniformLocation(shader.program, "transform");
    viewShaderLocation = glGetUniformLocation(shader.program, "view");
    projectionShaderLocation = glGetUniformLocation(shader.program, "projection");

    transform = glm::translate(transform, glm::vec3(x, y, 0));
    this->texture = texture;
}

Sprite::~Sprite()
{
    // DELETE BUFFERS
    glDeleteBuffers(1, &VBO);
    glDeleteBuffers(1, &EBO);
    glDeleteBuffers(1, &VAO);
    delete texture;
}

void Sprite::draw(Camera &camera)
{
    shader->Use();
    glBindTexture(GL_TEXTURE_2D, texture->texture);
    view = camera.getView();
    projection = camera.getProjection();
    // Pass to shaders
    glUniformMatrix4fv(transformShaderLocation, 1, GL_FALSE, glm::value_ptr(transform));
    glUniformMatrix4fv(viewShaderLocation, 1, GL_FALSE, glm::value_ptr(view));
    // Note: currently we set the projection matrix each frame, but since the
    // projection matrix rarely changes it's often best practice to set it
    // outside the main loop only once.
    glUniformMatrix4fv(projectionShaderLocation, 1, GL_FALSE, glm::value_ptr(projection));

    glBindVertexArray(VAO);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
}

void Sprite::setPosition(float x, float y, float z)
{
    // Z?
    transform = glm::translate(transform, glm::vec3(x - this->x, y - this->y, z));
    this->x = x;
    this->y = y;
}

void Sprite::move(float x, float y, float z)
{
    transform = glm::translate(transform, glm::vec3(x, y, z));
    this->x += x;
    this->y += y;
}

void Sprite::setTexture(Texture *texture)
{
    delete this->texture;
    this->texture = texture;
}

Texture Sprite::getTexture()
{
    return *texture;
}
```

When I want to draw something, I create an instance of the sprite class with its own Texture and use sprite->draw(); in the draw loop for each sprite. This works perfectly fine. To improve performance, I now want to create a spritebatch. As far as I understand, it puts all the sprites together so it can send them all at once to the GPU. I had no clue how to get started, so I just created a spritebatch class which puts all the vertices and indices into one object every time draw() is called, and only actually draws when flush() is called.

Here's the header file:

```cpp
#ifndef SPRITEBATCH_H
#define SPRITEBATCH_H

#include <glm/glm.hpp>
#include "Texture.h"
#include <GL/glew.h>
#include "Camera.h"
#include "Shader.h"
#include <vector>

class SpriteBatch
{
public:
    SpriteBatch(Shader& shader, Camera &camera);
    ~SpriteBatch();
    void draw(Texture *texture, GLfloat x, GLfloat y, GLfloat width, GLfloat height);
    void flush();
private:
    GLfloat vertices[800];
    GLuint indices[800];
    int index{ 0 };
    int indicesIndex{ 0 };
    GLuint VBO = 0, VAO = 0, EBO = 0;
    GLint transformShaderLocation, viewShaderLocation, projectionShaderLocation;
    Shader *shader;
    Camera *camera;
    std::vector<Texture*>* textures;
    glm::mat4 transform, projection, view;
};

#endif
```

And the class, with some comments added:

```cpp
#include "SpriteBatch.h"

SpriteBatch::SpriteBatch(Shader& shader, Camera &camera)
{
    this->shader = &shader;
    this->camera = &camera;
    textures = new std::vector<Texture*>();
}

SpriteBatch::~SpriteBatch()
{
    glDeleteBuffers(1, &VBO);
    glDeleteBuffers(1, &EBO);
    glDeleteBuffers(1, &VAO);
    //delete texture;
}

void SpriteBatch::draw(Texture *texture, GLfloat x, GLfloat y, GLfloat width, GLfloat height)
{
    textures->push_back(texture);

    vertices[index]      =  width / 2;
    vertices[index + 1]  =  height / 2;
    vertices[index + 2]  = 0.0f;
    vertices[index + 3]  = 1.0f;
    vertices[index + 4]  = 1.0f;

    vertices[index + 5]  =  width / 2;
    vertices[index + 6]  = -height / 2;
    vertices[index + 7]  = 0.0f;
    vertices[index + 8]  = 1.0f;
    vertices[index + 9]  = 0.0f;

    vertices[index + 10] = -width / 2;
    vertices[index + 11] = -height / 2;
    vertices[index + 12] = 0.0f;
    vertices[index + 13] = 0.0f;
    vertices[index + 14] = 0.0f;

    vertices[index + 15] = -width / 2;
    vertices[index + 16] =  height / 2;
    vertices[index + 17] = 0.0f;
    vertices[index + 18] = 0.0f;
    vertices[index + 19] = 1.0f;

    index += 20;

    indices[indicesIndex]     = 0;
    indices[indicesIndex + 1] = 1;
    indices[indicesIndex + 2] = 3;
    indices[indicesIndex + 3] = 1;
    indices[indicesIndex + 4] = 2;
    indices[indicesIndex + 5] = 3;
    indicesIndex += 6;
}

void SpriteBatch::flush()
{
    if (index == 0) return; // Ensures that there are sprites added

    // Debug information. This works perfectly
    int spritesInBatch = index / 20;
    std::cout << spritesInBatch << " I : " << index << std::endl;
    int drawn = 0;

    // Create buffers
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    glGenBuffers(1, &EBO);

    glBindVertexArray(VAO);

    // Bind vertices
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    // Bind indices
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // Position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
    glEnableVertexAttribArray(0);
    // TexCoord
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(1);

    // VAO
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);

    // Shader locations
    transformShaderLocation = glGetUniformLocation(shader->program, "transform");
    viewShaderLocation = glGetUniformLocation(shader->program, "view");
    projectionShaderLocation = glGetUniformLocation(shader->program, "projection");

    // Draw
    // So this sets the texture for each sprite and draws it afterwards with
    // the right texture. At least that's how it should work.
    for (int i = 0; i < spritesInBatch; i++)
    {
        Texture *tex = textures->at(i);
        shader->Use();
        glBindTexture(GL_TEXTURE_2D, tex->texture); // ?
        view = camera->getView();
        projection = camera->getProjection();
        // Pass them to the shaders
        glUniformMatrix4fv(transformShaderLocation, 1, GL_FALSE, glm::value_ptr(transform));
        glUniformMatrix4fv(viewShaderLocation, 1, GL_FALSE, glm::value_ptr(view));
        glUniformMatrix4fv(projectionShaderLocation, 1, GL_FALSE, glm::value_ptr(projection));

        // Draw VAO
        glBindVertexArray(VAO);
        glDrawElements(GL_TRIANGLES, indicesIndex, GL_UNSIGNED_INT, 0);
        glBindVertexArray(0);
    }

    // Sets index to 0 to welcome new sprites
    index = 0;
}
```

It also puts the textures into a list. The code to draw two sprites is this:

```cpp
spriteBatch->draw(&_sprite1->getTexture(), _sprite1->x, _sprite1->y, _sprite1->width, _sprite1->height);
spriteBatch->draw(&_sprite1->getTexture(), _sprite1->x + 10, _sprite1->y + 10, _sprite1->width * 2, _sprite1->height);
spriteBatch->flush();
```

but I only get one small black rectangle in the bottom left corner. It works perfectly when I draw the sprites without the spritebatch:

```cpp
_sprite1->draw(*camera);
_sprite2->draw(*camera);
```

I think I messed up in the flush() method, but I have no clue how to implement this. I'd be grateful if someone could help me with it. Thank you!
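A hedged sketch of the two most likely batching fixes (an answer sketch, not the poster's code). First, every call to draw() writes the same local corners, so all quads land on top of each other; the per-sprite x/y must be baked into the vertex positions (e.g. x + width/2), because a single shared `transform` uniform can no longer place sprites individually. Second, every quad reuses indices 0..3, so only the first quad's vertices are ever referenced; quad i must index from the base offset 4 * i:

```cpp
#include <vector>
#include <cstdint>

// Index generation for a batch of quads: 4 vertices and 6 indices per quad,
// with quad i's indices offset by its vertex base 4 * i.
std::vector<uint32_t> QuadIndices(uint32_t quadCount)
{
    std::vector<uint32_t> idx;
    idx.reserve(quadCount * 6);
    for (uint32_t i = 0; i < quadCount; ++i)
    {
        uint32_t base = 4 * i;
        idx.insert(idx.end(), { base + 0, base + 1, base + 3,
                                base + 1, base + 2, base + 3 });
    }
    return idx;
}
```

Two smaller points: flush() regenerates the VAO/VBO/EBO every call (create once, update with glBufferSubData), and the per-texture loop draws the whole index buffer each iteration instead of just that sprite's 6 indices.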
  14. I hope this is the correct place to post this, but I have been having some issues. I am new to this site and to Python/Pygame, so please humor me. My game is supposed to have a four-way split screen so that four players can compete against each other at once. The way I went about creating this is by making an "action_surface" that acts as the main canvas and four subsurfaces that are positioned into four quadrants. As I create each subsurface object, I store it in a list so that each "camera" (aka subsurface) can be easily accessed. Here is the code for that:

Camera.py

```python
import pygame, random

class Camera:
    cameras = []

    def __init__(self, screen, action_surface, screen_size):
        self.screen = screen
        self.surface = action_surface.subsurface(
            pygame.Rect(0, 0, screen_size[0] / 2, screen_size[1] / 2))
        Camera.cameras.append(self.surface)
        cam_count = len(Camera.cameras)
        print(cam_count)
        print(Camera.cameras)

    @staticmethod
    def update(screen, screen_size):
        for s in range(0, 100):
            pygame.draw.rect(Camera.cameras[1], (0, 255, 255), pygame.Rect(s * 100, s * 100, 10, 10))
            pygame.draw.rect(Camera.cameras[3], (0, 0, 255), pygame.Rect(s * 110, s * 110, 10, 10))
        if len(Camera.cameras) > 0:
            for i in range(0, len(Camera.cameras)):
                if i % 2 == 0:
                    screen.blit(Camera.cameras[i],
                                (0, (i / 2) * (screen_size[1] / (len(Camera.cameras) / 2))))
                else:
                    screen.blit(Camera.cameras[i],
                                ((screen_size[0] / 2),
                                 ((i / 2) - 0.5) * (screen_size[1] / (len(Camera.cameras) / 2))))
```

In theory, what this should allow me to do is access each camera by its index and draw to them individually. Unfortunately, this is not what happens. In reality, when I select any of the cameras and attempt to draw art, all the cameras (surfaces) are updated with a duplicate copy of that art! As you might be able to tell, this is not the desired effect I am going for. I am hoping that somebody with more experience than me can give some insight on how to fix this problem. Thanks.
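A hedged sketch of the likely cause (an answer sketch): a pygame subsurface shares its parent's pixels, and every Camera above is created over the SAME region (0, 0, w/2, h/2) of action_surface, so all four cameras alias the same pixels and any draw shows up in all of them. Each camera needs its own quadrant rect; the language-agnostic math for camera i in a 2x2 split:

```cpp
// Quadrant layout for a 2x2 split screen:
//   i = 0 -> top-left, 1 -> top-right, 2 -> bottom-left, 3 -> bottom-right.
struct Quad { int x, y, w, h; };

Quad QuadrantRect(int i, int screenW, int screenH)
{
    int halfW = screenW / 2, halfH = screenH / 2;
    return { (i % 2) * halfW, (i / 2) * halfH, halfW, halfH };
}
```

In the Python code this would mean passing the camera's index in and calling action_surface.subsurface with that camera's own rect instead of the fixed (0, 0, ...) one.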
  15. I have the following setup in a GUI library I'm making (for fun). Ignore the colors and positions being integers; it's just to serve the purpose of illustration. Code:

    // Example program
    #include <iostream>
    #include <string>

    class Rect {
    public:
        virtual void Draw(const int position) const = 0;
    };

    class ColorRect : public Rect {
    public:
        virtual void Draw(const int position) const override { /* does a draw */ }
        int color;
    };

    class TextureRect : public Rect {
    public:
        virtual void Draw(const int position) const override { /* does a textured draw */ }
        int texture;
    };

    class Widget {
    public:
        Rect* background;
        int position;
        virtual void Draw() const { background->Draw(position); }
        virtual void Update() { /* might do stuff */ }
    };

    class Button : public Widget {
    public:
        virtual void Draw() const override {
            Widget::Draw();
            border->Draw(position);
        }
        Rect* border;
    };

    int main() {
        ColorRect redRect;
        redRect.color = 1;
        ColorRect blueRect;
        blueRect.color = 2;
        TextureRect textRect;
        textRect.texture = 1;

        Button b1;
        b1.border = &blueRect;
        b1.background = &redRect;
        b1.position = 1;

        Button b2;
        b2.border = &blueRect;
        b2.background = &textRect;
        b2.position = 2;

        while (true) {
            b1.Update();
            b2.Update();
            b1.Draw();
            b2.Draw();
        }
    }

I think it should be obvious that the Rect class is a flyweight used by multiple different objects at the same time. What I want to do now, for example, is have b1 animate its border Rect. This animation should not affect the other Widgets that use the same Rect, i.e. each widget's animation needs to be paused and started independently and have a different "current frame". I'm hesitant to create an "AnimatedRect" class or something, because then the only way to handle separate animation is for that class to hold a map of pointers to the Widgets that are using it. In addition, I would also want to animate, for example, the position.
Which means, ideally, the interface used to animate positions (which are specific to the object) is the same as the one used to animate the texture or color (which is currently shared among objects). Whatever holds the animation state would need a few things like frame_rate and probably time_since_updated. I just can't work out where to put it, or what to reorganise to fit it in.
  16. Hey guys, what's up? I've been building a 2D game with SDL, and I'd really like it to have that retro console/arcade look and feel. This is mostly because those are the kinds of games I play, and have since I was a kid, but also partly because 8-bit sprites or (S)NES sprite sheets are about as good as I get in terms of graphic design (lol). But those old games had really small sprites, 8x8 at the least or 64x64 at the most. And of course, those old consoles had a surprisingly small resolution of 256x240. Another important note: I'm really only targeting PCs for now; I know SDL can be ported to pretty much anything, but I only have my PC to test on. I'm using Windows 10 x64 and Visual Studio 2017, if that helps. So I started out using "fake" fullscreen (passing SDL_WINDOW_FULLSCREEN_DESKTOP into the window creation function). But this turned out to be a problem because SDL's drawing functions are pixel based. That's good, IMO, because it makes the math simpler and keeps the images crisper. But you can't get half a pixel, so scaling to fit the player's desktop exactly is literally impossible in some cases. The end result is that I had to draw black bars across the top and bottom, not just on the left and right. If possible I would like to avoid that, so I switched to "real" fullscreen (SDL_WINDOW_FULLSCREEN). The lowest resolution I could get was 640x480, which is perfect (all image widths and heights can be multiplied by two to fill in the space). But is 640x480 still supported on modern monitors? The fact that it worked on my laptop is all well and good, but I'm afraid it would be a problem for others. I remember using Game Maker (before it was a "studio") back in the early 2000s, and that resolution was not an issue back then. But is it still supported in 2018? My laptop is relatively new (about 2 years old), a Dell running at 1366x768 normally. If the answer is "no", what resolution(s) should I be using for this type of game? Thanks & happy Labor Day weekend!
  17. Hello. I'm trying to understand how I can convert an array of bitmaps to a single YUV file. I want to do this to record from a webcam on Windows and Android, and then use ffmpeg to convert the YUV file to a video. Is that possible? I chose a single file because I want to buffer as much as possible in an in-memory stream before writing it to disk, for performance. I found this: http://embedav.blogspot.com/2013/06/convert-rgb-to-yuv420-planar-format-in.html but I don't understand how to use it for an array of bitmaps. Also, does a YUV file have any header? Many thanks. PS: I have to use Java or C#, but I understand C++ as well; I'd like something simple, so no big C++ libraries. Is it possible to store a large quantity of frame data in a single file and then turn that file into a video with ffmpeg? That is my real question.
  18. Hello, I am currently reading Procedural Content Generation with C++. It starts you off with a basic template using SFML and slowly builds on it. I have noticed a bug that is really bothering me now. I thought it might have been an error on my part; unfortunately, it seems to be in the source code in general. I searched Google for the source code on GitHub to see if someone had fixed the problem, but I only found people who literally copied the final example's source code, which isn't helpful because I own that already. What happens is that when I am stationary and press either down or right, there is a quick red flicker. If I am already moving and press down or right, there is no problem. I've tried messing around with the texture manager; they wrote it in a way I have not come across yet, and I'm wondering if this is the issue or if I'm wasting my time. I would like to keep the book's code, but I might just replace the whole thing.

    // Gets a texture from the texture manager from an ID.
    sf::Texture& TextureManager::GetTexture(const int textureID)
    {
        auto it = m_textures.begin();
        auto found = false;
        while (it != m_textures.end())
        {
            if (it->second.first == textureID)
            {
                return *it->second.second;
            }
            else
            {
                ++it;
            }
        }
    }

I can't recall exactly what I did, but switching between the modern for loop and the traditional way gave me different results. With one of the loops, I got a "white" flicker when I went left; I changed it back and the white is now gone. So I believe it has something to do with this. The original code from the book did this:

    for (auto it = m_textures.begin(); it != m_textures.end(); ++it)
    {
        if (it->second.first == textureID)
        {
            return *it->second.second;
        }
    }

When I changed the for loop to the modern iteration, the white flicker appeared. I have no idea what that means, since I thought they were the same thing. I hope this is enough information to give an idea of the problem.
Below I posted a link to the source code on someone's GitHub that is basically the same from the basic-template standpoint. If anyone can get me through this, I would greatly appreciate it. https://github.com/utilForever/ProceduralContentGeneration Thanks, have a great day!
  19. Hi all, I have been spending so much time trying to replicate a basic effect similar to these: Glowing line, Tron lines, or More tron lines. I've tried blurring using the shrink, horizontal and vertical blur passes, and expand technique, but the results of my implementation are poor. I simply want my custom, non-textured 2D polygons to have a glow around them, in a size and color I can define. For example, I want to draw a blue rectangle using 2 triangles and have a glow around the shape. I am not sure how best to achieve this or what technique to use. I am prototyping an idea, so performance is not an issue; I just want to get the pixels properly on the screen and I just can't figure out how! It seems this effect has been done to death by now and should be easy, but I can't wrap my head around it; I'm not good at shaders at all, I'm afraid. Are the Rastertek blur or glow tutorials the way to go? I'm using DirectX 11. Any tips or suggestions would be greatly appreciated!
  20. Here is another example and request to improve the quality of a special rare card with Shadero. The shine effect uses the object's rotation, and no script is added. The holographic effect uses world-space coordinates to change the color plasma of the holographic texture. A mask is also applied, and you can easily add a soft holographic overlay over it. Hope you like it. shadero180.mp4
  21. The orb includes:
  • multiple texture changes
  • multiple wave distortions
  • a bubble effect
  • smoke-like distortion
  • twist distortion filters
  • HDR color support
  • a fisheye effect
Hope you like it. Shadero_Sprite_2D_Shader_Editor_Diablo_Like_Orb.mp4
  22. Hi guys, I'm having a problem rendering with DWrite, and I don't understand why; can you help me figure it out? As you can see in the image below, if you look carefully you'll notice that the top of R8 is cut (missing 1 row of pixels), the bottom of R11 is cut as well, the 4 in R14 is rendered weird compared to the 4 in R4, and so on; if you look closely you'll spot more yourself. I can't figure out why 😕 Under the image I'll also leave the code, in case I'm doing something wrong like with type conversion or similar. Any help is much appreciated.

    #include "GBAEmulator_PCH.h"
    #include "Disassembler.h"
    #include "GBAEmulator.h"

    Disassembler::Disassembler(LONG width, LONG height, HINSTANCE hInstance, GBAEmulator* emuInstance) :
        D2DWindowBase(width, height, hInstance, emuInstance),
        m_background(0.156f, 0.087f, 0.16f, 1.f),
        m_textFormat{ nullptr }
    {
        //Init Window
        std::string className = "Disassembler";
        std::string windowName = "Disassembler";

        WNDCLASSEX clientClass{};
        clientClass.cbSize = sizeof(WNDCLASSEX);
        clientClass.style = CS_HREDRAW | CS_VREDRAW;
        clientClass.lpfnWndProc = GBAEmulator::DisassemblerWinProc;
        clientClass.hInstance = m_hInstance;
        //clientClass.hIcon =; TODO: Add Icon
        clientClass.hCursor = LoadCursor(m_hInstance, IDC_ARROW);
        clientClass.hbrBackground = (HBRUSH)(COLOR_BACKGROUND + 1);
        clientClass.lpszClassName = className.c_str();
        //clientClass.hIconSm =; TODO: Add Icon

        DWORD windowStyle = WS_VISIBLE | WS_CAPTION | WS_MINIMIZEBOX | WS_TABSTOP | WS_SYSMENU;

        m_isValid = InitWindow(windowName, clientClass, windowStyle, false);

        //Init DWrite
        if (m_isValid)
            m_isValid = InitDWrite();

        std::vector<std::wstring> tempEntries{
            L"PC: ", L"R0: ", L"R1: ", L"R2: ", L"R3: ", L"R4: ", L"R5: ",
            L"R6: ", L"R7: ", L"R8: ", L"R9: ", L"R10: ", L"R11: ", L"R12: ",
            L"R13: ", L"R14: ", L"R15: ", L"R16: "
        };

        std::wstring value = L"-UNDEFINED-";
        FLOAT left{}, top{}, right{ 300.f }, bottom{ 50.f };

        for (auto& s : tempEntries)
        {
            m_entries.emplace_back(TextEntry{ s, value, D2D1_RECT_F{ left, top, right, bottom } });
            top += 30.f;
            bottom += 30.f;
        }
    }

    bool Disassembler::InitDWrite()
    {
        //Set Text Format
        HRESULT hr;
        hr = m_DWriteFactory->CreateTextFormat(
            L"consolas",
            NULL,
            DWRITE_FONT_WEIGHT_NORMAL,
            DWRITE_FONT_STYLE_NORMAL,
            DWRITE_FONT_STRETCH_NORMAL,
            22.f,
            L"en-US",
            &m_textFormat
        );

        if (FAILED(hr))
        {
            MessageBox(NULL, "Failed to create TextFormat", "Error", MB_OK);
            return false;
        }

        //Set Colors
        m_renderTarget->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::SkyBlue), &m_fillBrush1);
        m_renderTarget->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::Crimson), &m_fillBrush2);

        return true;
    }

    Disassembler::~Disassembler()
    {
        DestroyWindow(m_hwnd);

        if (m_textFormat) m_textFormat->Release();
        if (m_fillBrush1) m_fillBrush1->Release();
        if (m_fillBrush2) m_fillBrush2->Release();
    }

    void Disassembler::Updade(float deltaTime)
    {
    }

    void Disassembler::Draw()
    {
        m_renderTarget->BeginDraw();
        m_renderTarget->Clear(m_background);

        for (auto& entry : m_entries)
        {
            DrawEntryWithShadow(entry);
        }

        m_renderTarget->EndDraw();
    }

    void Disassembler::DrawEntryWithShadow(const TextEntry& entry)
    {
        //shadow offset
        D2D1_RECT_F shadowPos = entry.position;
        shadowPos.top += 1.05f;
        shadowPos.left -= 0.95f;

        //draw text
        DrawEntry(entry.text, shadowPos, m_fillBrush2);
        DrawEntry(entry.text, entry.position, m_fillBrush1);

        D2D1_RECT_F valuePos = entry.position;
        FLOAT valueOffset = 50.f;
        valuePos.left += valueOffset;
        valuePos.right += valueOffset;
        shadowPos.left += valueOffset;
        shadowPos.right += valueOffset;

        //draw value
        DrawEntry(entry.value, shadowPos, m_fillBrush2);
        DrawEntry(entry.value, valuePos, m_fillBrush1);
    }

    void Disassembler::DrawEntry(const std::wstring& text, const D2D1_RECT_F& pos, ID2D1SolidColorBrush* brush)
    {
        m_renderTarget->DrawTextA(
            text.c_str(),
            static_cast<UINT>(text.size()),
            m_textFormat,
            pos,
            brush,
            D2D1_DRAW_TEXT_OPTIONS_NONE
        );
    }
  23. If someone could assist me through this, I would be really grateful. I'm using SharpDX/C#/WinForms, but I think this applies to DirectX in general. I'm very new to graphics programming and I'm really just trying to do something as simple as displaying a rectangle on the screen. Here is my issue. I have the below code:

    var desc = new SwapChainDescription()
    {
        BufferCount = 1,
        ModeDescription = new ModeDescription(1024, 768, new Rational(60, 1), Format.R8G8B8A8_UNorm),
        IsWindowed = false,
        OutputHandle = form.Handle,
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput
    };

I'm not sure the window is loading in fullscreen. To actually make it go fullscreen I have to set the form's property: this.WindowState = FormWindowState.Maximized; but that only seems like using C# code to maximize the form. For instance, if I don't set the form to maximize, the form loads at its original size even when IsWindowed is set to false. I recall with DirectX 7 programming, when I set fullscreen you could actually see what looked like a display resolution change. I'm not seeing that. It pretty much looks like the form is loaded at the same size as the screen, not the value I provide in ModeDescription. This is not what I want, because I want to set the display to 1024x768 to avoid stretching my graphics on widescreens. Can someone help me make sense of this, please?
  24. I am making a 2D game at home and I am confused about which resolution to use. Could anyone please help me choose a resolution?
  25. I was thinking about how to render multiple objects: sprites, truck models, plane models, boat models, etc. And I'm not too sure about this process. Let's say I have a vector of Model objects:

    class Model
    {
        Matrix4 modelMat;
        VertexData vertices;
        Texture texture;
        Shader shader;
    };

Since each model has its own model matrix, as all models should, does this mean I now need one draw call per model? Each model that needs to be drawn could change the MVP matrix used by the bound vertex shader, meaning I have to keep updating/mapping the constant buffer my MVP matrix is stored in, which is used by the vertex shader. Am I thinking about all of this wrong? Isn't this horribly inefficient?