Showing results for tags '2D' in content posted in Graphics and GPU Programming.

Found 76 results

  1. I'm working on my first Android game and I have a few questions. I need to scale the graphics for different screen sizes/resolutions. I'm working in 16:9 and plan to use letterboxing to maintain this aspect ratio. Everything is fine for standard screen resolutions: going from 320x180 to 640x360, one pixel just becomes four. I'm a little confused, though, as to what happens when you letterbox on a screen with an unusual resolution. Say, just for example, my original graphics are 160x90. Then to fit the device I stretch everything by 1.1 and end up with a final resolution of 176x99. It's still 16:9, but now everything is a mess. If I had a sprite that used to be at x=33, y=33, its new location would now be x=36.3, y=36.3. Would I just drop the 0.3 of a pixel, round down and accept that it's no longer in its exact position? Secondly, what exactly happens when you stretch images by an amount like 1.1? How does it decide what pixels to add to the image to make it bigger?
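On the second question: a non-integer stretch like 1.1x is normally left to the renderer's texture filtering (nearest-neighbour duplicates some rows/columns, bilinear blends neighbouring pixels), which is why pixel art usually sticks to whole-number scales. A minimal sketch of the integer-scale-plus-letterbox idea, with illustrative names and a hypothetical 320x180 virtual canvas:

    // Minimal sketch: pick the largest integer scale that fits the screen,
    // then letterbox (centre) the scaled virtual canvas. Names are illustrative.
    #include <algorithm>
    #include <cstdio>

    struct Viewport { int x, y, w, h, scale; };

    Viewport fitVirtualCanvas(int screenW, int screenH, int virtualW = 320, int virtualH = 180)
    {
        // An integer scale keeps every virtual pixel an exact NxN block of screen
        // pixels, so a sprite at (33, 33) never lands on a fractional position.
        int scale = std::max(1, std::min(screenW / virtualW, screenH / virtualH));
        Viewport vp;
        vp.scale = scale;
        vp.w = virtualW * scale;
        vp.h = virtualH * scale;
        vp.x = (screenW - vp.w) / 2;   // horizontal bars (pillarbox) if any
        vp.y = (screenH - vp.h) / 2;   // vertical bars (letterbox) if any
        return vp;
    }

    int main()
    {
        Viewport vp = fitVirtualCanvas(1920, 1080);    // exact 6x fit, no bars
        std::printf("scale %d, offset (%d,%d)\n", vp.scale, vp.x, vp.y);
        vp = fitVirtualCanvas(1280, 800);              // 16:10 screen -> 4x plus bars
        std::printf("scale %d, offset (%d,%d)\n", vp.scale, vp.x, vp.y);
        return 0;
    }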
  2. Hi guys, I got this code from StackOverflow to convert RGB (camera image) to RGBA:

     public static unsafe void RGB2RGBA_FastConvert4(int pixelCount, byte[] rgbData, byte[] rgbaData)
     {
         if ((pixelCount & 3) != 0) throw new ArgumentException();
         fixed (byte* rgbP = &rgbData[0], rgbaP = &rgbaData[0])
         {
             FastConvert4Loop(pixelCount, rgbP, rgbaP);
         }
     }

     static unsafe void FastConvert4Loop(long pixelCount, byte* rgbP, byte* rgbaP)
     {
         for (long i = 0, offsetRgb = 0; i < pixelCount; i += 4, offsetRgb += 12)
         {
             uint c1 = *(uint*)(rgbP + offsetRgb);
             uint c2 = *(uint*)(rgbP + offsetRgb + 3);
             uint c3 = *(uint*)(rgbP + offsetRgb + 6);
             uint c4 = *(uint*)(rgbP + offsetRgb + 9);
             ((uint*)rgbaP)[i] = c1 | 0xff000000;
             ((uint*)rgbaP)[i + 1] = c2 | 0xff000000;
             ((uint*)rgbaP)[i + 2] = c3 | 0xff000000;
             ((uint*)rgbaP)[i + 3] = c4 | 0xff000000;
         }
     }

     And I convert RGB to RGBA like this:

     byte[] rgb = new byte[800 * 600 * 3];
     byte[] rgba = new byte[800 * 600 * 4];
     RGB2RGBA_FastConvert4(800 * 600, rgb, rgba);

     But I need to convert RGB to ARGB too, so can anybody please help me? Thanks
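For the ARGB part, the only difference from the RGBA routine is where the alpha byte lands relative to R, G and B. A minimal per-pixel sketch, written in C++ here purely for illustration (the same byte shuffle applies in the C# version above); note that "ARGB" below means the byte order A, R, G, B in memory, so it is worth checking which convention the consuming API actually expects:

    // Minimal sketch: convert tightly packed 24-bit RGB to 32-bit ARGB,
    // with the alpha byte first in memory and a constant alpha of 255.
    #include <cstddef>
    #include <cstdint>

    void RgbToArgb(const uint8_t* rgb, uint8_t* argb, size_t pixelCount)
    {
        for (size_t i = 0; i < pixelCount; ++i)
        {
            argb[i * 4 + 0] = 0xFF;            // A
            argb[i * 4 + 1] = rgb[i * 3 + 0];  // R
            argb[i * 4 + 2] = rgb[i * 3 + 1];  // G
            argb[i * 4 + 3] = rgb[i * 3 + 2];  // B
        }
    }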
  3. Greetings. In the past, the first time I got into shadows, I made simple hard shadows in Game Maker Studio calculated on the CPU side, with a single light source and only squares in the map. However, I get that calculating shadows each frame (not each step in a fixed timestep with a dynamic FPS loop) is way more costly than a draw function should be. The better way to do it would be to make the GPU do all that math, which should be exactly what the GPU is meant for. However, I'm having issues, lots of issues. I'm using SFML, and I know how to load and apply a shader in SFML; that's super easy. What I don't know is… what shader? I found exactly what I need on Shadertoy, but now what? How do I "put it" in my C++ project and make it work? What is CPU side and what should be in the shader? What do I pass to that shader? A list of light coordinates, polygons? I'm kinda lost, I've always programmed only on the CPU side...
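For what the split usually looks like in SFML: the shader file holds the per-pixel shadow math, and the CPU side just loads it and feeds it uniforms each frame. A Shadertoy shader can't be used verbatim; its mainImage entry point and iResolution/iTime/iChannel inputs have to be rewritten as ordinary GLSL uniforms. A minimal sketch, assuming SFML 2.4+ and a hypothetical fragment shader "shadow.frag" that declares lightPos and resolution uniforms; occluder geometry would be passed the same way, e.g. as a uniform array or packed into a texture:

    // Minimal sketch: CPU side only uploads per-frame data as uniforms,
    // the fragment shader does the actual shadow math for every pixel.
    #include <SFML/Graphics.hpp>

    int main()
    {
        sf::RenderWindow window(sf::VideoMode(800, 600), "Shadow test");

        sf::Shader shader;
        if (!shader.loadFromFile("shadow.frag", sf::Shader::Fragment))
            return 1;                               // shader failed to load/compile

        sf::RectangleShape fullscreenQuad(sf::Vector2f(800.f, 600.f));

        while (window.isOpen())
        {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed) window.close();

            // CPU side: hand the shader whatever it needs this frame.
            sf::Vector2i mouse = sf::Mouse::getPosition(window);
            shader.setUniform("lightPos", sf::Glsl::Vec2(float(mouse.x), float(mouse.y)));
            shader.setUniform("resolution", sf::Glsl::Vec2(800.f, 600.f));

            window.clear();
            window.draw(fullscreenQuad, &shader);   // shader runs for every covered pixel
            window.display();
        }
        return 0;
    }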
  4. Hello everyone, I was just wondering if there's a program out there that's able to take an arbitrary sprite sheet (perhaps not divided evenly) and generate a text file of clipped rectangles for each individual sprite. If not, does anyone know the best way to take an arbitrary sprite sheet and rip out the clipped rectangles without having to do it manually? Thanks so much, Mike
  5. Hello, I have been reading and trying learnopengl.com and it's fun. However, when trying to use it in a game, I have a few questions that I'd like to discuss. This topic is about how to do a map with tiles in OpenGL. Think a horizontal shooter with a tiled background, or a top-view game. It's all in 2D, with a few layers on top of each other. As an example (to make the discussion more concrete), assume my tiles are 100x100 pixels, which is also the size of the tile when displayed on the screen, and I have a map of 300x200 tiles. For simplicity, let's assume a pixel has size 1x1 in OpenGL coordinates (it's all scalable a lot, many options there). The first solution I could think of is to upload the entire map of 30000x20000 units, with 2 triangles every 100x100, with triangles pointing to the texture that should be displayed at that point in the map (i.e. I share the images of the tiles across the map). Advantage: it's simple. Disadvantage: it has to skip most of the tiles, since the total map size is larger than the size of the display. In OpenGL you just translate to the correct position to display that part of the map. The second solution I could think of is to create the above map, but only just a little larger than the display size. In other words, it is only the displayed portion with a bit extra to enable moving around smoothly. To move further, I guess you have to re-upload the triangle data with different texture indices once in a while, or you put the tile information in another texture which is indexed relative to e.g. the top-left corner of the displayed part (that is, every triangle first queries the tile texture to know the tile type, and uses that to access the correct tile image for display in the triangle). Advantage: it's fewer triangles for the GPU (few skipped triangles, if any). Disadvantage: it's more complicated to implement. Conceptually, both look like they should work. What I don't know is which one is preferable. Likely the actual map size is a factor here, so a larger map size means that at some point the second solution would be better (I think?). But where is that point, roughly? Also, are there better options to display a tiled map in 2D?
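A third option worth weighing against the two above: keep the map purely as data on the CPU and each frame compute which tiles are visible, submitting only those quads (for 100-pixel tiles on a 1280x720 screen that is roughly 13x8 tiles, a trivial amount of geometry to rebuild and re-upload). A minimal sketch of the visible-range computation only, with illustrative names:

    // Minimal sketch: find which tiles of a 300x200 map are visible for a
    // given camera position, so only those (plus a 1-tile border) get drawn.
    #include <algorithm>
    #include <cstdio>

    constexpr int kTileSize = 100;              // pixels per tile, from the example
    constexpr int kMapW = 300, kMapH = 200;

    struct TileRange { int x0, y0, x1, y1; };   // inclusive tile indices

    TileRange visibleTiles(float camX, float camY, int screenW, int screenH)
    {
        TileRange r;
        r.x0 = std::max(0, int(camX) / kTileSize);
        r.y0 = std::max(0, int(camY) / kTileSize);
        r.x1 = std::min(kMapW - 1, int(camX + screenW) / kTileSize + 1);
        r.y1 = std::min(kMapH - 1, int(camY + screenH) / kTileSize + 1);
        return r;
    }

    int main()
    {
        TileRange r = visibleTiles(12345.f, 6789.f, 1280, 720);
        std::printf("draw tiles x %d..%d, y %d..%d (%d quads)\n",
                    r.x0, r.x1, r.y0, r.y1,
                    (r.x1 - r.x0 + 1) * (r.y1 - r.y0 + 1));
        // Each visible tile becomes two triangles whose UVs are picked from its
        // tile type; that small vertex buffer can be rebuilt every frame.
        return 0;
    }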
  6. Hi guys, I created a Bitmap1 and filled it with RawColor4(1f, 0.0f, 0.0f, 0.5f) using this code:

     D2D1.Bitmap1 img = new D2D1.Bitmap1(_graphics.D2D1Context5,
         new SharpDX.Size2(640, 480),
         new BitmapProperties1()
         {
             PixelFormat = new D2D1.PixelFormat(Format.R8G8B8A8_UNorm, D2D1.AlphaMode.Premultiplied),
             DpiX = 96,
             DpiY = 96,
             BitmapOptions = BitmapOptions.Target
         });
     _graphics.D2D1Context5.Target = img;
     _graphics.D2D1Context5.BeginDraw();
     _graphics.D2D1Context5.AntialiasMode = AntialiasMode.Aliased;
     // RawColor4 with Red = 1f; G = 0.0f; B = 0.0f; Alpha = 0.5f;
     SolidColorBrush br = new SolidColorBrush(_graphics.D2D1Context5,
         new SharpDX.Mathematics.Interop.RawColor4(1f, 0.0f, 0.0f, 0.5f));
     _graphics.D2D1Context5.FillRoundedRectangle(new RoundedRectangle()
     {
         RadiusX = 20,
         RadiusY = 20,
         Rect = new SharpDX.Mathematics.Interop.RawRectangleF(10, 10, 630, 470)
     }, br);
     br.Dispose();
     _graphics.D2D1Context5.EndDraw();

     Then I use this function to get a pixel value from the img above:

     private static Color4 GetPixel(Bitmap1 created_with_BitmapOption_Target, int x, int y)
     {
         var img1 = new D2D1.Bitmap1(_graphics.D2D1Context5,
             new SharpDX.Size2(created_with_BitmapOption_Target.PixelSize.Width,
                               created_with_BitmapOption_Target.PixelSize.Height),
             new BitmapProperties1()
             {
                 PixelFormat = new D2D1.PixelFormat(Format.R8G8B8A8_UNorm, D2D1.AlphaMode.Premultiplied),
                 DpiX = 96,
                 DpiY = 96,
                 BitmapOptions = BitmapOptions.CannotDraw | BitmapOptions.CpuRead
             });
         img1.CopyFromBitmap(created_with_BitmapOption_Target);
         var map = img1.Map(MapOptions.Read);
         var size = created_with_BitmapOption_Target.PixelSize.Width * created_with_BitmapOption_Target.PixelSize.Height * 4;
         byte[] bytes = new byte[size];
         Marshal.Copy(map.DataPointer, bytes, 0, size);
         img1.Unmap();
         img1.Dispose();
         var position = (y * created_with_BitmapOption_Target.PixelSize.Width + x) * 4;
         return new Color4(bytes[position], bytes[position + 1], bytes[position + 2], bytes[position + 3]);
     }

     Then I get the pixel value:

     Color4 c4val = GetPixel(img, 50, 50);

     I get: Alpha = 127; Red = 127; Green = 0; Blue = 0. This is not the same as the color I filled img with (Red = 1f; G = 0.0f; B = 0.0f; Alpha = 0.5f). Can anybody help me find where I went wrong in the code? Thank you so much in advance, HoaHong
  7. Hey everyone. I have issues with displaying bones in a Scene.

     OS X 10.14.5, Unity 2019.1.10f1. Installed packages: 2D Animation, 2D IK, 2D Pixel Perfect, 2D PSD Importer, 2D SpriteShape.

     I follow the instructions, as in the videos (Video 1, Video 2). Process:
     1. Copy the PSB file to the Assets directory
     2. Open it in the Sprite Editor
     3. Add bones
     4. Create Auto Geometry
     5. Click Apply

     And here I have issues: when dragging the PSB into the Scene, the bones are not displayed in the Scene. They are displayed randomly when I click on the Scene. Because of this, I can't edit the animation. Can you tell me what's the matter? How can I solve this? How can I make the bones visible? In the video I show what the problem is (image for example). Also, in the Hierarchy the blue man icon has a white sheet over it. What does that mean?
  8. I'm trying to compile the July07 source code from GDMag (link here). I'm using Visual Studio 2008 and the Microsoft DirectX SDK (June 2010). I managed to compile and run different DirectX projects, but this one is giving the following errors:

     1>scatter.obj : error LNK2001: unresolved external symbol _c_dfDIJoystick2
     1>scatter.obj : error LNK2019: unresolved external symbol _DirectInput8Create@20 referenced in function "long __cdecl InitDirectInput(struct HWND__ *)" (?InitDirectInput@@YAJPAUHWND__@@@Z)
     1>scatter.obj : error LNK2001: unresolved external symbol _IID_IDirectInput8A

     Any help on this will be greatly appreciated. Regards
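A guess based on the symbol names rather than on the project itself: all three unresolved symbols are DirectInput ones, which usually means the DirectInput libraries from the SDK are not on the linker's input list. One way to pull them in from a source file (adding them under Linker > Input in the project settings works equally well):

    // Likely fix, not taken from the original project: link the DirectInput
    // import library and the DirectX GUID library.
    #pragma comment(lib, "dinput8.lib")   // _DirectInput8Create@20, _c_dfDIJoystick2
    #pragma comment(lib, "dxguid.lib")    // _IID_IDirectInput8A and other GUIDs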
  9. Hi everyone! I'm working on a tutorial / work portfolio that will try to cover not just a single topic ("animating a 3D model", "lighting", "collision detection"), but a whole Asteroids-like game using C++ and OpenGL. My goal is to try to cover all aspects of this game and how all the parts are connected to each other in a comprehensive and efficient way. The game is pretty close to being finished and sits on GitLab, but my site covering it is still in progress. I figured it would make sense to let others look at it and give me feedback as I work on the tutorial. Is there anyone who would be interested in helping out by reading through it, checking for mistakes or just giving general feedback? I would of course mention anyone who gives me feedback in a credits part of the tutorial (please put a credits section in your feedback (name, Twitter account, website, whatever you want) if you want to get credited). The work-in-progress site sits at http://165.22.53.246 All and any help would be very appreciated!
  10. Howdy, so I've nearly finished my image library for loading and converting textures, but I'm wondering about a more efficient way to get surface data. These are the 2 storage methods that come to mind:

      std::vector<std::vector<Byte> > mData;
      Byte* mData;

      Vector pros:
      - Can access any surface/mip easily:

        Uint32 Image::GetSurfaceIndex(Uint32 face, Uint32 mipLevel) const
        {
            return (face * (mFileInfo.mArraySize - 1)) + mipLevel;
        }

      - No manual memory clean-up etc.

      Vector cons:
      - Cannot load blob data in a single call; must run a loop such as:

        for (size_t f = 0; f < mFileInfo.mArraySize; f++)
        {
            Uint32 w = mFileInfo.mWidth;
            Uint32 h = mFileInfo.mHeight;
            for (size_t i = 0; i < mFileInfo.mMipCount; i++)
            {
                numBytes = GetSurfaceByteCount(w, h, mFileInfo.mSurfaceFormat);
                std::vector<Byte> cpyData = std::vector<Byte>(data, data + numBytes);
                mData[GetSurfaceIndex(f, i)] = cpyData;
            }
            if (src + numBytes > end)
            {
                Reset();
                return;
            }
            src += numBytes;
            w = SF::Max(w >> 1, 1);
            h = SF::Max(h >> 1, 1);
        }

      Byte array pros:
      - Loading is simple:

        mData = new Byte[mFileInfo.mByteCount];
        memcpy(mData, ptr, mFileInfo.mByteCount);

      Byte array cons:
      - Getting a pointer to the start of a surface becomes inefficient, especially with mips, as it requires a loop:

        Byte* Image::GetPixelPtr(Uint32 face, Uint32 mipLevel, Uint32 x, Uint32 y)
        {
            if (face > mFileInfo.mArraySize) { return nullptr; }
            if (mipLevel > mFileInfo.mMipCount) { return nullptr; }
            int w = mFileInfo.mWidth;
            int h = mFileInfo.mHeight;
            int mipBytes = 0;
            for (int i = 1; i < mipLevel; i++)
            {
                w = SF::Max(w >> 1, 1);
                h = SF::Max(h >> 1, 1);
                mipBytes += GetSurfacebytes(w, h, mFileInfo.mSurfaceFormat);
            }
            return &mData[GetSurfaceOffset(face) + mipBytes + ((y * w) + x) * mPixelByteSize];
        }

      Note that GetSurfaceOffset() calculates the offset to the start of a surface, including the offset of the previous surfaces' mips. Does anyone know of a better way to store surface arrays with their mips in a single blob that's efficient to access? Thanks
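One pattern that keeps the single-blob load and still makes per-surface access O(1) is to precompute an offset table once, right after the header is parsed. A minimal sketch with hypothetical names (a real version would size surfaces correctly for block-compressed formats):

    // Minimal sketch: one contiguous blob plus a per-surface offset table built
    // once at load time, so the blob can be filled with a single memcpy and any
    // (face, mip) surface is found without a per-call mip walk.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct ImageBlob
    {
        std::vector<uint8_t> data;      // the whole file's pixel data, loaded in one go
        std::vector<size_t>  offsets;   // offsets[face * mipCount + mip] -> surface start
        uint32_t width = 0, height = 0, mipCount = 1, arraySize = 1, bytesPerPixel = 4;

        // Hypothetical uncompressed size; block-compressed formats need their own rule.
        size_t surfaceBytes(uint32_t w, uint32_t h) const { return size_t(w) * h * bytesPerPixel; }

        void buildOffsets()
        {
            offsets.resize(size_t(arraySize) * mipCount);
            size_t offset = 0;
            for (uint32_t f = 0; f < arraySize; ++f)
            {
                uint32_t w = width, h = height;
                for (uint32_t m = 0; m < mipCount; ++m)
                {
                    offsets[size_t(f) * mipCount + m] = offset;
                    offset += surfaceBytes(w, h);
                    w = (w > 1) ? w >> 1 : 1;
                    h = (h > 1) ? h >> 1 : 1;
                }
            }
            data.resize(offset);        // total blob size falls out of the same pass
        }

        // O(1) access to any surface.
        uint8_t* surface(uint32_t face, uint32_t mip)
        {
            return data.data() + offsets[size_t(face) * mipCount + mip];
        }
    };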
  11. Howdy, I have been upgrading texture support in my framework and opted to use stb_image instead of individual libs (except for libtiff). However, I have no idea how to figure out the color space. I want to use a linear rendering pipeline, and thus all images should be in linear color space and only gamma corrected at the end during tone mapping (HDR), I'd imagine. Has anyone used stb_image? Is there a way of telling what the input image is encoded as, so that I can run my GammaToLinear function on the gamma-corrected images? Thanks,
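As far as I know, stb_image has no color-space query: the usual convention is to treat 8-bit results as sRGB (or upload them as *_SRGB texture formats and let the GPU do the conversion), while .hdr files loaded as floats are already linear. A minimal sketch of the CPU-side conversion, with hypothetical names:

    // Minimal sketch: load an 8-bit image with stb_image, assume it is sRGB,
    // and convert to linear floats. (Exactly one .cpp in the project must
    // #define STB_IMAGE_IMPLEMENTATION before including stb_image.h.)
    #include <cmath>
    #include <vector>
    #include "stb_image.h"

    static float SrgbToLinear(float c)
    {
        // Exact sRGB transfer function; pow(c, 2.2f) is a common approximation.
        return (c <= 0.04045f) ? c / 12.92f : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }

    std::vector<float> LoadLinearRgba(const char* path, int& w, int& h)
    {
        int comp = 0;
        unsigned char* pixels = stbi_load(path, &w, &h, &comp, 4);   // force RGBA
        if (!pixels) return {};

        std::vector<float> linear(size_t(w) * size_t(h) * 4);
        for (size_t i = 0; i < linear.size(); ++i)
        {
            float c = pixels[i] / 255.0f;
            // Alpha is coverage, not color, so it stays linear.
            linear[i] = (i % 4 == 3) ? c : SrgbToLinear(c);
        }
        stbi_image_free(pixels);
        return linear;
    }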
  12. Firstly, a disclaimer: my query covers both graphics AND bits of artificial intelligence; if it's in the wrong forum, shift it by all means. I am a graphic designer and I have a real layman's interest in the recent developments in AI and how it is being used to enhance images. It sounds sad, but as a sort of hobby, I constantly muse over different methods of calculating how to upscale images based on samples of other pixels etc. I have heaps of jottings/notes of formulas that I have devised that I would love to put into practise by writing little experimental programs to implement them. By extension, I am also fascinated by deep learning and neural networks and how they can be trained to develop image enhancement algorithms. Here's the problem: I have no idea about programming/coding (but I am trying to learn). I've been struggling along with C# as part of also learning to make games in Unity, but I'm barely at the 'Hello World' stage and would have no idea how to write a program that would handle images (for example, how do C# programs break down and handle the individual pixels in a bitmap, and how are these individual pixels edited etc.). I would really appreciate pointers on the best way to start learning something like this in my spare time. Sorry if this is really vague; I can clarify anything if need be.
  13. Hello - I recently started designing a tile-based game, but I'm not exactly sure what would be the best method for storing my tiles in the map that will be displayed. I understand that most maps are a simple 2D array with each dimension representing the different axes (x, y), but my game has massive quantities of tiles that need to be displayed at once (I don't have an exact number, but I know it will be over 60x30, which is 1800 at once). I'm fairly certain that the aforementioned construct will be too simple and costly for my needs. I'm also looking to delve into simple procedural generation in the future, meaning I want my method of storage to be easily modified/populated. The best analogy of a game I could give for reference is Terraria, since it has both procedural generation and a bunch of tiles. To simplify, my question is basically: what would be an efficient way to store a map with large amounts of tiles (in the thousands), while still allowing the construct to be populated relatively easily? Thanks for any input/help.
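For scale, thousands of tiles is still small for a flat array: even a Terraria-sized world (roughly 8400x2400 tiles) is only about 20 million entries, so a couple of bytes per tile remains very manageable. A minimal sketch of the usual flat 1D layout indexed as y * width + x, with hypothetical types (chunking the array would be a later refinement for streaming or regenerating regions):

    // Minimal sketch: a contiguous, cache-friendly tile map that procedural
    // generation can populate by simply walking the array.
    #include <cstdint>
    #include <vector>

    struct Tile
    {
        uint16_t type = 0;      // index into a tile-definition table
        uint8_t  flags = 0;     // e.g. collision, background layer, etc.
    };

    class TileMap
    {
    public:
        TileMap(int width, int height)
            : mWidth(width), mHeight(height), mTiles(size_t(width) * height) {}

        Tile&       at(int x, int y)       { return mTiles[size_t(y) * mWidth + x]; }
        const Tile& at(int x, int y) const { return mTiles[size_t(y) * mWidth + x]; }

        int width()  const { return mWidth; }
        int height() const { return mHeight; }

    private:
        int mWidth, mHeight;
        std::vector<Tile> mTiles;
    };

    int main()
    {
        TileMap map(300, 200);                      // allocate the whole map up front
        for (int y = 0; y < map.height(); ++y)      // trivial "generation" pass
            for (int x = 0; x < map.width(); ++x)
                map.at(x, y).type = (y > 100) ? 1 : 0;   // e.g. ground below some height
        return 0;
    }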
  14. I'm quite a noob when it comes to pixel manipulation shaders, so you will probably understand my fascination with this tool, even though it will look simple to most of you: https://kronbits.itch.io/pixatool I've been trying to create something similar inside Urho3D for the purpose of giving a retro look to my particles, but I had no luck with the palettization. Does anyone have any sample shaders that change a texture to match a palette? Is it easier to do on a per-texture basis, or with a full-screen shader?
  15. Hello there, I have tried following various OpenGL tutorials and I'm now at a point where I can render multiple 2D sprites with textures. For that I have a sprite class. Header:

      #ifndef SPRITE_H
      #define SPRITE_H

      #include <GL/glew.h>
      #include "Shader.h"
      #include <glm/glm.hpp>
      #include <glm/gtc/matrix_transform.hpp>
      #include <glm/gtc/type_ptr.hpp>
      #include "Texture.h"
      #include <stb_image.h>
      #include "Camera.h"

      class Sprite
      {
      public:
          Sprite(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture);
          ~Sprite();
          void draw(Camera &camera);
          void setPosition(float x, float y, float z);
          void move(float x, float y, float z);
          void setTexture(Texture *texture);
          Texture getTexture();
          float x, y, width, height;
      private:
          void init(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture);
          GLuint VBO = 0, VAO = 0, EBO = 0;
          GLint transformShaderLocation, viewShaderLocation, projectionShaderLocation;
          Shader* shader;
          glm::mat4 transform, projection, view;
          Texture *texture;
      };
      #endif

      Code:

      #include "Sprite.h"

      Sprite::Sprite(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture)
      {
          init(x, y, width, height, shader, texture);
      }

      void Sprite::init(GLfloat x, GLfloat y, GLfloat width, GLfloat height, Shader& shader, Texture *texture)
      {
          this->shader = &shader;
          this->x = x;
          this->y = y;
          this->width = width;
          this->height = height;

          GLfloat vertices[] = {
              width / 2,  height / 2, 0.0f, /* Top Right    */ 1.0f, 1.0f,
              width / 2, -height / 2, 0.0f, /* Bottom Right */ 1.0f, 0.0f,
             -width / 2, -height / 2, 0.0f, /* Bottom Left  */ 0.0f, 0.0f,
             -width / 2,  height / 2, 0.0f, /* Top Left     */ 0.0f, 1.0f
          };
          GLuint indices[] = {
              0, 1, 3, // 1
              1, 2, 3  // 2
          };

          glGenVertexArrays(1, &VAO);
          glGenBuffers(1, &VBO);
          glGenBuffers(1, &EBO);

          glBindVertexArray(VAO);
          glBindBuffer(GL_ARRAY_BUFFER, VBO);
          glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

          // Position
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
          glEnableVertexAttribArray(0);
          // TexCoord
          glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
          glEnableVertexAttribArray(1);

          glBindBuffer(GL_ARRAY_BUFFER, 0);
          glBindVertexArray(0);

          transformShaderLocation = glGetUniformLocation(shader.program, "transform");
          viewShaderLocation = glGetUniformLocation(shader.program, "view");
          projectionShaderLocation = glGetUniformLocation(shader.program, "projection");

          transform = glm::translate(transform, glm::vec3(x, y, 0));
          this->texture = texture;
      }

      Sprite::~Sprite()
      {
          // DELETE BUFFERS
          glDeleteBuffers(1, &VBO);
          glDeleteBuffers(1, &EBO);
          glDeleteBuffers(1, &VAO);
          delete texture;
      }

      void Sprite::draw(Camera &camera)
      {
          shader->Use();
          glBindTexture(GL_TEXTURE_2D, texture->texture);
          view = camera.getView();
          projection = camera.getProjection();
          // Pass to shaders
          glUniformMatrix4fv(transformShaderLocation, 1, GL_FALSE, glm::value_ptr(transform));
          glUniformMatrix4fv(viewShaderLocation, 1, GL_FALSE, glm::value_ptr(view));
          // Note: currently we set the projection matrix each frame, but since the projection matrix rarely changes it's often best practice to set it outside the main loop only once.
          glUniformMatrix4fv(projectionShaderLocation, 1, GL_FALSE, glm::value_ptr(projection));

          glBindVertexArray(VAO);
          glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
          glBindVertexArray(0);
      }

      void Sprite::setPosition(float x, float y, float z)
      {
          // Z?
          transform = glm::translate(transform, glm::vec3(x - this->x, y - this->y, z));
          this->x = x;
          this->y = y;
      }

      void Sprite::move(float x, float y, float z)
      {
          transform = glm::translate(transform, glm::vec3(x, y, z));
          this->x += x;
          this->y += y;
      }

      void Sprite::setTexture(Texture *texture)
      {
          delete this->texture;
          this->texture = texture;
      }

      Texture Sprite::getTexture()
      {
          return *texture;
      }

      When I want to draw something, I create an instance of the sprite class with its own Texture and use sprite->draw(); in the draw loop for each sprite to draw it. This works perfectly fine. To improve the performance, I now want to create a spritebatch. As far as I understand, it puts all the sprites together so it can send them all at once to the GPU. I had no clue how to get started, so I just created a spritebatch class which puts all the vertices and indices into one object every time draw() is called, and actually only draws when flush() is called. Here's the header file:

      #ifndef SPRITEBATCH_H
      #define SPRITEBATCH_H

      #include <glm/glm.hpp>
      #include "Texture.h"
      #include <GL/glew.h>
      #include "Camera.h"
      #include "Shader.h"
      #include <vector>

      class SpriteBatch
      {
      public:
          SpriteBatch(Shader& shader, Camera &camera);
          ~SpriteBatch();
          void draw(Texture *texture, GLfloat x, GLfloat y, GLfloat width, GLfloat height);
          void flush();
      private:
          GLfloat vertices[800];
          GLuint indices[800];
          int index{ 0 };
          int indicesIndex{ 0 };
          GLuint VBO = 0, VAO = 0, EBO = 0;
          GLint transformShaderLocation, viewShaderLocation, projectionShaderLocation;
          Shader *shader;
          Camera *camera;
          std::vector<Texture*>* textures;
          glm::mat4 transform, projection, view;
      };
      #endif

      And the class. I added some comments here:

      #include "SpriteBatch.h"

      SpriteBatch::SpriteBatch(Shader& shader, Camera &camera)
      {
          this->shader = &shader;
          this->camera = &camera;
          textures = new std::vector<Texture*>();
      }

      SpriteBatch::~SpriteBatch()
      {
          glDeleteBuffers(1, &VBO);
          glDeleteBuffers(1, &EBO);
          glDeleteBuffers(1, &VAO);
          //delete texture;
      }

      void SpriteBatch::draw(Texture *texture, GLfloat x, GLfloat y, GLfloat width, GLfloat height)
      {
          textures->push_back(texture);

          vertices[index] = width / 2;
          vertices[index + 1] = height / 2;
          vertices[index + 2] = 0.0f;
          vertices[index + 3] = 1.0f;
          vertices[index + 4] = 1.0f;

          vertices[index + 5] = width / 2;
          vertices[index + 6] = -height / 2;
          vertices[index + 7] = 0.0f;
          vertices[index + 8] = 1.0f;
          vertices[index + 9] = 0.0f;

          vertices[index + 10] = -width / 2;
          vertices[index + 11] = -height / 2;
          vertices[index + 12] = 0.0f;
          vertices[index + 13] = 0.0f;
          vertices[index + 14] = 0.0f;

          vertices[index + 15] = -width / 2;
          vertices[index + 16] = height / 2;
          vertices[index + 17] = 0.0f;
          vertices[index + 18] = 0.0f;
          vertices[index + 19] = 1.0f;

          index += 20;

          indices[indicesIndex] = 0;
          indices[indicesIndex + 1] = 1;
          indices[indicesIndex + 2] = 3;
          indices[indicesIndex + 3] = 1;
          indices[indicesIndex + 4] = 2;
          indices[indicesIndex + 5] = 3;
          indicesIndex += 6;
      }

      void SpriteBatch::flush()
      {
          if (index == 0) return; // Ensures that there are sprites added

          // Debug information. This works perfectly
          int spritesInBatch = index / 20;
          std::cout << spritesInBatch << " I : " << index << std::endl;
          int drawn = 0;

          // Create buffers
          glGenVertexArrays(1, &VAO);
          glGenBuffers(1, &VBO);
          glGenBuffers(1, &EBO);

          glBindVertexArray(VAO);
          // Bind vertices
          glBindBuffer(GL_ARRAY_BUFFER, VBO);
          glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
          // Bind indices
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
          glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

          // Position
          glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)0);
          glEnableVertexAttribArray(0);
          // TexCoord
          glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
          glEnableVertexAttribArray(1);

          // VAO
          glBindBuffer(GL_ARRAY_BUFFER, 0);
          glBindVertexArray(0);

          // Shader locations
          transformShaderLocation = glGetUniformLocation(shader->program, "transform");
          viewShaderLocation = glGetUniformLocation(shader->program, "view");
          projectionShaderLocation = glGetUniformLocation(shader->program, "projection");

          // Draw
          // So this sets the texture for each sprite and draws it afterwards with the right texture. At least that's how it should work.
          for (int i = 0; i < spritesInBatch; i++)
          {
              Texture *tex = textures->at(i);
              shader->Use();
              glBindTexture(GL_TEXTURE_2D, tex->texture); // ?
              view = camera->getView();
              projection = camera->getProjection();
              // Pass them to the shaders
              glUniformMatrix4fv(transformShaderLocation, 1, GL_FALSE, glm::value_ptr(transform));
              glUniformMatrix4fv(viewShaderLocation, 1, GL_FALSE, glm::value_ptr(view));
              glUniformMatrix4fv(projectionShaderLocation, 1, GL_FALSE, glm::value_ptr(projection));
              // Draw VAO
              glBindVertexArray(VAO);
              glDrawElements(GL_TRIANGLES, indicesIndex, GL_UNSIGNED_INT, 0);
              glBindVertexArray(0);
          }

          // Sets index to 0 to welcome new sprites
          index = 0;
      }

      It also puts the textures into a list. The code to draw two sprites is this:

      spriteBatch->draw(&_sprite1->getTexture(), _sprite1->x, _sprite1->y, _sprite1->width, _sprite1->height);
      spriteBatch->draw(&_sprite1->getTexture(), _sprite1->x + 10, _sprite1->y + 10, _sprite1->width * 2, _sprite1->height);
      spriteBatch->flush();

      but I only get one small black rectangle in the bottom left corner. It works perfectly when I draw the sprites without the spritebatch:

      _sprite1->draw(*camera);
      _sprite2->draw(*camera);

      I think I messed up in the flush() method, but I have no clue how to implement this. I'd be grateful if someone can help me with it. Thank you!
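A few things stand out in the draw()/flush() pair above: the batched vertices never use the x/y passed to draw(), so every quad is built around the origin; the index list repeats 0,1,3,1,2,3 for every sprite instead of being offset by 4 per quad; and the per-texture loop draws all indicesIndex indices on every iteration. A minimal sketch of the usual fix for the first two points, with illustrative types rather than a drop-in patch for the class above:

    // Minimal sketch of the batching idea: bake positions into the vertices so
    // one world/transform matrix serves the whole batch, and offset each
    // sprite's indices so they point at that sprite's own four vertices.
    #include <cstdint>
    #include <vector>

    struct BatchVertex { float x, y, z, u, v; };

    struct BatchData
    {
        std::vector<BatchVertex> vertices;
        std::vector<uint32_t>    indices;
    };

    void addSprite(BatchData& batch, float x, float y, float width, float height)
    {
        const uint32_t base = uint32_t(batch.vertices.size());   // first vertex of this sprite

        batch.vertices.push_back({ x + width / 2, y + height / 2, 0.0f, 1.0f, 1.0f }); // top right
        batch.vertices.push_back({ x + width / 2, y - height / 2, 0.0f, 1.0f, 0.0f }); // bottom right
        batch.vertices.push_back({ x - width / 2, y - height / 2, 0.0f, 0.0f, 0.0f }); // bottom left
        batch.vertices.push_back({ x - width / 2, y + height / 2, 0.0f, 0.0f, 1.0f }); // top left

        const uint32_t quad[6] = { 0, 1, 3, 1, 2, 3 };
        for (uint32_t i : quad)
            batch.indices.push_back(base + i);   // offset into this sprite's vertices
    }

The batch would then be uploaded once per flush (GL_DYNAMIC_DRAW or GL_STREAM_DRAW rather than GL_STATIC_DRAW) and drawn with one glDrawElements per texture group, instead of one full-batch draw per sprite.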
  16. I'm having a spot of bother trying to create bitmaps with an appropriate palette for programs written with SGDK (Sega Genesis Development Kit). At first I tried MSPaint, as it has a 16-colour bitmap save feature, but it doesn't seem to generate a 16-colour palette for the bitmap. I tried GIMP, but aside from "create an index", I'm not having much luck there either; I'm wondering if that's just my lack of experience with that graphics package. Come to think of it, it's not every day one tries to make images for a 30-year-old games console with a palettized colour system, so tutorials are very slim indeed. There is a tiles tutorial for SGDK which provides a moon image, and that loads in fine, but if I try to make an image from scratch I notice the palette entries are missing, and the bitmap data seems to have the first row or two of pixels chopped off... so I am assuming the bitmap file is missing the palette data. Just wondering how that moon image was made and in what package... If push comes to shove I suppose I could write my own image program to create such palettized bitmaps, but it seems a bit extreme with all these 2D image editing programs to choose from. I would have asked on SpritesMind, but have been unsuccessful in registering, which is a shame because it seems a very friendly and helpful place. Cheers.