

Community Reputation

176 Neutral

About VanillaSnake21

  • Rank
    Advanced Member


  1. VanillaSnake21

    When drawing a bitmap, pixels come out misaligned.

    Yes, I think that was the issue. I forgot to change this: my old code was RGB-only, and then I moved to RGBA. However, now the letters come out with different colors, but it's probably something in the way I'm encoding the RGBA value, so I'm going to rework this whole code section. Thanks a bunch, I appreciate you spotting that!
  2. I'm working on a software rasterizer and I'm trying to get the text rendering to work correctly. I'm producing the text as a collection of bitmaps, one bitmap for each letter. The bitmaps are generated by the FreeType library, which then passes them to my software blitter that draws them onto my main texture. The issue is that when the text gets small, the letters look garbled: some pixels come out missing and it just looks blurry and unreadable. However, when I move the letters around by a few pixels they render correctly, sharp and crisp. I'm thinking it's got something to do with pixel alignment, but I'm not sure exactly how my code is failing to align them.

This is the overall picture of the project:

  • A main texture the size of the screen -> this is my main drawing surface; I just access it through an array like video_memory[x_coord + y_coord * pitch]
  • Various small bitmaps, textures, and plots -> all get drawn on the main texture through the same array access method

This is an excerpt relevant to this particular issue:

```cpp
//I get the bitmap generated by the FreeType library
FT_Bitmap bmp = slot->bitmap;

//I convert the bitmap into my own bitmap format that the engine can handle
//Essentially just copies the buffer, width, and height
BitmapFile* file = new BitmapFile(bmp.width, bmp.rows, bmp.buffer);

//I then draw the bitmap on my main texture
DrawBitmapWithClipping(video_mem, lpitch32, bitmap, 102, 100, NULL);

//this is the breakdown of the method
void DrawBitmapWithClipping(DWORD* dest, int destLPitch32, BitmapFile* source, int destPosX, int destPosY, RECT* sourceRegion)
{
    //... other code ...

    //handles 8-bit bitmaps (this is what FreeType is outputting)
    else if (byteCount == 1)
    {
        UCHAR* sourceStartMem = source->GetData() + (((offsetX + reg_offsetX) * byteCount) + ((offsetY + reg_offsetY) * image_width * byteCount));
        UCHAR* destStartMem = (UCHAR*)(dest) + (x1 + y1 * (destLPitch32 << 2));

        int numColumns = (x2 - x1) + 1;
        int numRows = (y2 - y1) + 1;

        for (int row = 0; row < numRows - 1; row++)
        {
            for (int column = 0; column < numColumns; column++)
            {
                UCHAR pixel[4];
                pixel[0] = sourceStartMem[column];

                if (pixel[0] != 0)
                {
                    destStartMem[column * 3] = pixel[0];
                    destStartMem[column * 3 + 1] = pixel[0];
                    destStartMem[column * 3 + 2] = pixel[0];
                    destStartMem[column * 3 + 3] = 255;
                }
            }

            destStartMem += destLPitch32 << 2;
            sourceStartMem += image_width;
        }
    }
}
```

I'd like to know how this misalignment happens and what I can do to fix it. Thanks.

The images included: exmpl1 shows the properly aligned letters (the code for it is in the exmpl1_ image); exmpl2 shows the misaligned letters (the code for it is in exmpl2_). As you can see, I just changed the x-position by 1 pixel and they get misaligned.
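For comparison, here is a hedged sketch (not the poster's actual blitter, and all names are illustrative) of an 8-bit-coverage-to-RGBA blit where every destination offset is derived from a single 4-bytes-per-pixel stride. Mixing 3-byte (RGB) and 4-byte (RGBA) addressing in the same loop shifts each written pixel and shears the glyph, which matches the RGB-to-RGBA mix-up the poster later confirmed:

```cpp
#include <cstdint>

// Sketch: copy an 8-bit FreeType coverage bitmap into a 32-bit RGBA surface.
// Every destination address uses a consistent 4-bytes-per-pixel stride.
void BlitGlyph8ToRGBA(uint32_t* dest, int destPitchPixels,
                      const uint8_t* src, int srcWidth, int srcHeight,
                      int destX, int destY)
{
    for (int row = 0; row < srcHeight; ++row)
    {
        // address the destination row in whole 32-bit pixels, then per byte
        uint8_t* destRow = reinterpret_cast<uint8_t*>(
            dest + (destY + row) * destPitchPixels + destX);
        const uint8_t* srcRow = src + row * srcWidth;

        for (int col = 0; col < srcWidth; ++col)
        {
            uint8_t coverage = srcRow[col];
            if (coverage != 0)
            {
                destRow[col * 4 + 0] = coverage; // channel 0
                destRow[col * 4 + 1] = coverage; // channel 1
                destRow[col * 4 + 2] = coverage; // channel 2
                destRow[col * 4 + 3] = 255;      // alpha
            }
        }
    }
}
```

The channel order (BGRA vs RGBA) depends on the surface format; the point is only that the per-pixel stride and the x offset use the same multiplier.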
  3. I wasn't sure whether to post this in the Game Design section, but that one seems to lean towards actual gameplay design, whereas this topic is more about programming decisions. In any case, mods can move it. I just want to say that this is the first in a series of questions I've planned out, oriented towards using proper OOP. A while back I came across a post on here that made me rethink my ways of writing OO code. I can't quote the individual, but he said something along the lines of "it's not OO programming if you're just throwing related functions and data in a class and calling it a day". It struck a chord with me, and I got to thinking how most of the code I had was just that: things were Objects for no reason. For example, in my initial iterations of frameworks I had an Application class, a WindowsInterface class, and a Utility "class" that just had all static functions (because it was convenient to just write Utility::TransformVector or something like that). Everything was a class just because. About a year ago I decided to step back from that and write code without any classes at all, just to see the difference. I rewrote my entire framework in that style and have really never looked back since. I no longer have to worry about class instances, about what seem like useless restrictions like having to pass hidden "this" pointers in window parameters, or about managing objects that didn't even seem like objects to begin with.

So on to my question. I'm now reading Code Complete 2 (after having read the first edition years back), and with everything I've learned I'm tempted to give OOP another go, with a renewed mindset and a renewed appreciation for constraint. However, that also got me thinking about whether or not all things conform well to OOP. Maybe things like general low-level frameworks or systems programming are inherently anti-OOP? Maybe I'm just trying to push for something that's not really needed at this point?

The reason I came to that conclusion is that I'm re-reading some design chapters in CC2 right now, and he speaks of designing software at a high level first, thinking of it like a house: designating subsystems, then designating modules within the subsystems, then designing classes and planning their interactions, and only then moving on to the functional execution of class methods. As I sat down to rewrite my code, I realized that it's really difficult to even begin. I can't specify subsystem interaction, for example, and as per the book I have to restrict subsystem interaction, because "it's chaos if every subsystem can access every other subsystem". Well, that's the way I have it right now: I have SoftwareRenderer, UserInterface, Resources, Windows, Application, and GameEngine subsystems. I see no reason to restrict SoftwareRenderer to any one of them, and as of right now, since I'm coding in C, all a subsystem has to do is include "SoftwareRasterizer.h" and it's good to go with making calls. It's flexible and convenient.

So besides subsystem interaction, I'm also having difficulty breaking things down into meaningful classes. I blame it on the fact that a framework is by definition a low-level system, so the objects can't really be straightforward common-sense abstractions; they must be EventListeners and FrameDescriptors, which are abstractions nonetheless, but at a much less intuitive level. Despite being confident that I don't really need OOP to get the job done, it nags me that it's so difficult for me to find an OO solution to a framework. Does that mean I don't understand what a framework requires, if I can't easily delineate the objects required to create it? Should I still push to get there? Or are some things just not as suited for OOP as others? Thanks.
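For reference, the header-as-module style described above can be sketched roughly like this (a hedged illustration; the names are made up, not the poster's actual API). The header declares only the public surface, and file-local state plays the role of private members:

```cpp
#include <cstdint>

// What SoftwareRasterizer.h would declare: the public surface of the module.
namespace SR
{
    void Initialize(int width, int height);
    bool InBounds(int x, int y);
}

// What SoftwareRasterizer.cpp would keep private: state and helpers at file
// scope (static / anonymous namespace). This mirrors the "m"-prefix
// convention from the post; nothing here leaks into other translation units.
namespace
{
    int sWidth = 0;
    int sHeight = 0;
}

void SR::Initialize(int width, int height)
{
    sWidth = width;
    sHeight = height;
}

bool SR::InBounds(int x, int y)
{
    return x >= 0 && y >= 0 && x < sWidth && y < sHeight;
}
```

This gives roughly the same encapsulation a class would, with the module itself acting as a singleton object.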
  4. VanillaSnake21

    Pointer becomes invalid for unknown reason

    That did it. Thanks, jpetrie. I totally forgot about the difference in bit layouts.
  5. I've restructured some of my code to use namespaces and started getting problems in a module that was previously working correctly. The one in question is a DebugWindow: I give it a pointer to a variable that I want to monitor/change, and its job is to display that variable in a separate window, along with + and - buttons to increment/decrement it. These are the relevant portions.

WindowManager.h:

```cpp
namespace WindowManager
{
    /* WindowManager functions snipped */

    namespace DebugWindow
    {
        void AddView(double* vard, std::wstring desc, double increment);
        void AddView(std::wstring* vars, std::wstring desc);
        void CreateDebugWindow(int width, int height, int x, int y);
    }
}
```

Application.cpp is the main app; it calls the above functions to set a watch on the variables I need to see in real time:

```cpp
void ApplicationInitialization()
{
    //create the main window
    UINT windowID = SR::WindowManager::CreateNewWindow(LocalWindowsSettings);

    //initialize the rasterizer
    InitializeSoftwareRasterizer(SR::WindowManager::GetWindow(windowID));

    //create the debug window
    SR::WindowManager::DebugWindow::CreateDebugWindow(400, LocalWindowsSettings.clientHeight,
        LocalWindowsSettings.clientPosition.x + LocalWindowsSettings.clientWidth,
        LocalWindowsSettings.clientPosition.y);

    //display some debug info
    SR::WindowManager::DebugWindow::AddView((double*)&gMouseX, TEXT("Mouse X"), 1);
    SR::WindowManager::DebugWindow::AddView((double*)&gMouseY, TEXT("Mouse Y"), 1);
}
```

The variables gMouseX and gMouseY are globals in my application; they are updated inside the app's WndProc in WM_MOUSEMOVE, like so:

```cpp
case WM_MOUSEMOVE:
{
    gMouseX = GET_X_LPARAM(lParam);
    gMouseY = GET_Y_LPARAM(lParam);
    /* .... */
} break;
```

Now, inside the AddView() function that I'm calling to set the watch on the variable:

```cpp
void AddView(double* vard, std::wstring desc, double increment)
{
    _var v;
    v.vard = vard;      // used when the variable is a number
    v.vars = nullptr;   // used when the variable is a string (in this case it's not)
    v.desc = desc;
    v.increment = increment;

    mAddVariable(v);
}
```

_var is just a structure I use to pass the variable definition and annotation around inside the module; it's defined as such:

```cpp
struct _var
{
    double* vard;          //use when the variable is a number
    double increment;      //value to increment/decrement in live-view
    std::wstring* vars;    //use when the variable is a string
    std::wstring desc;     //description to be displayed next to the variable
    int minusControlID;
    int plusControlID;
    HWND viewControlEdit;  //WinAPI windows associated with the display: a text edit and two buttons, (P) for plus and (M) for minus
    HWND viewControlBtnM;
    HWND viewControlBtnP;
};
```

So after I call AddView, it formats this structure and passes it on to mAddVariable(_var); here it is:

```cpp
void mAddVariable(_var variable)
{
    //destroy and recreate a timer
    KillTimer(mDebugOutWindow, 1);
    SetTimer(mDebugOutWindow, 1, 10, (TIMERPROC)NULL);

    //convert the variable into a readable string if it's a number
    std::wstring varString;
    if (variable.vard)
        varString = std::to_wstring(*variable.vard);
    else
        varString = *variable.vars;

    //create all the controls
    variable.viewControlEdit = CreateWindow(/*...*/);  //text field control

    variable.minusControlID = (mVariables.size() - 1) * 2 + 1;
    variable.viewControlBtnM = CreateWindow(/*...*/);  //minus button control

    variable.plusControlID = (mVariables.size() - 1) * 2 + 2;
    variable.viewControlBtnP = CreateWindow(/*...*/);  //plus button control

    mVariables.push_back(variable);
}
```

I then update the variable using a timer inside the DebugWindow message proc:

```cpp
case WM_TIMER:
{
    switch (wParam)
    {
        case 1: // 1 is the id of the timer
        {
            for (_var v : mVariables)
            {
                SetWindowText(v.viewControlEdit, std::to_wstring(*v.vard).c_str());
            }
        } break;

        default:
            break;
    }
} break;
```

When I examine mVariables, their vard* is something like 1.48237482E-33#DEN. Why does this happen?

Also of note: I'm programming in a C-like fashion, without using any objects at all. The module consists of a .h and a .cpp file; whatever I expose in the .h is public, and if a function only appears in the .cpp it's private. So even though I precede some functions with an m prefix, that doesn't mean it's a member of a class, just that it's not exposed in the header file and is only visible within this module. Thanks.
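A denormal value like the one shown is consistent with the bit-layout mismatch the poster acknowledged in the earlier reply: AddView stores a double*, but the cast (double*)&gMouseX makes the timer read an int's 4-byte pattern (plus adjacent memory) as an 8-byte IEEE-754 double. A minimal, well-defined demonstration of that effect (using memcpy rather than the undefined pointer cast; the function name is illustrative):

```cpp
#include <cstdint>
#include <cstring>

// Reinterpret a small int's bit pattern as a double. A 32-bit value such as
// a mouse coordinate, zero-extended into 8 bytes, has an all-zero exponent
// field, so as an IEEE-754 double it decodes to a tiny denormal (what the
// MSVC debugger flags with the #DEN suffix), not the integer's value.
double IntBitsAsDouble(int value)
{
    uint64_t bits = static_cast<uint64_t>(static_cast<uint32_t>(value));
    double d;
    std::memcpy(&d, &bits, sizeof d); // well-defined, unlike *(double*)&value
    return d;
}
```

The fix is to make the watched variables actual doubles, or to give AddView an int* overload, rather than casting the pointer.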
  6. VanillaSnake21

    How to correctly scale a set of bezier curves?

    Oh, so I should contain all the curves in a region and just map it from 0 to 1. Right, that makes sense. Come to think of it, every font editor has set-size glyphs; I'm not sure why I thought I needed arbitrary sizes inside the editor. Thanks. Also @JoeJ, I didn't consider the baseline and letter metrics, thanks.
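The fixed-region idea can be sketched like this (a hedged example; EmSquare and both function names are made up): clicks inside a fixed on-screen editing square are converted to normalized glyph space immediately, so the stored control vertices never need a separate Normalize pass.

```cpp
struct Vec2 { double x, y; };

// A fixed square editing region on screen. Every mouse click is mapped
// straight into [0,1] glyph space on input, and glyph-space points are
// mapped back to pixels for display.
struct EmSquare
{
    double left, top, size; // screen-space square, in pixels

    Vec2 ToGlyphSpace(double mouseX, double mouseY) const
    {
        // flip y so glyph space is y-up, like typical font coordinates
        return { (mouseX - left) / size, 1.0 - (mouseY - top) / size };
    }

    Vec2 ToScreen(Vec2 g) const
    {
        return { left + g.x * size, top + (1.0 - g.y) * size };
    }
};
```

With this, the editor only ever stores normalized coordinates, and the screen position is derived on the fly when drawing.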
  7. VanillaSnake21

    How to correctly scale a set of bezier curves?

    It's not easy to explain or even draw or demonstrate, but I'll try again; bear with me. When I'm making a font in my editor, I do so by plotting single control vertices. For every click I create a single control vertex; after I plot four of them, they make a cubic Bézier curve. Now, how do I store the actual positions of the control vertices? Right now I just store them as the exact mouse coordinates where I clicked on the screen, so CV1(100px, 100px), CV2(300px, 300px), and so on until CV4, which completes a curve. These are all in screen space. Now I add, let's say, a few more curves, which form a letter, so all these curves are being manipulated in pixel coordinates. If I want to actually use these letters and scale them to any font size, I can't use these screen coordinates anymore; I have to fit the letter into some scalable space, like 0 to 1, so I have to convert all the vertex coordinates into that space. Right now I'm doing that manually: I have a button in my editor called Normalize, and once I'm happy with the letter I've formed, I click Normalize and it transforms all the vertices into normalized 0-to-1 space. My question was whether I can avoid doing the normalization manually and work in some space that is normalized from the get-go. As in, when I plot a point with the mouse, I wouldn't store the location of the mouse as the vertex coordinate, but would right away transform the mouse coordinate into a normalized space. I hope that clears up what my intentions with the question were. It's not really a problem, as everything works just fine as of now; I just wanted to know if there is a more elegant way of doing this.
  8. VanillaSnake21

    How to correctly scale a set of bezier curves?

    I mean, suggesting that I dig through a mature open-source library's code to see how it performs one specific action is a bit of overkill, imo. If there are some docs you can point me to that deal with this issue, that's another thing.
  9. VanillaSnake21

    How to correctly scale a set of bezier curves?

    Because I can't normalize until I get the final shape of the letter. Let's say the letter A takes three curves: / -- \ . If I just renormalize after I add the second curve, the structure shifts to renormalized units, meaning it shifts to the center of the canonical box as I have it now. So I have to manually renormalize once I finalize the letter. That's how I have it now; it's a bit tedious, and I was looking for a way to maybe use alternate coordinate systems for a more elegant implementation. But not every piece of code has to be perfect, I guess; I'll just have to settle on this for now.
  10. VanillaSnake21

    How to correctly scale a set of bezier curves?

    It's my own framework; I'm not willing to use anything but the most low-level libraries, as I'm not even using a graphics API. My question was how to represent the spline correctly internally so it could both be used in letter glyphs and be modified in the editor. I've settled on having a duplicate structure at this point: I have one representation for a spline while I'm dragging its vertices around in the editor, and another, normalized representation for when it's rendered. I was just looking for a single elegant implementation in this question.
  11. I've got a working implementation of a cubic (four-control-point) and linear Bézier curve font generator, but I'm not sure how to transition into actually making text. As of right now I create my font by clicking and dragging control vertices on the screen; once I have a few curves aligned, I designate them as a letter and save the font. But I'm not sure what coordinate system to use to make sure that I can scale the existing curves to any size. I'm thinking of having the letter sit in a canonical box spanning -1 to 1 in both x and y, but then how do I renormalize the curves and still have the ability to plot points directly on screen? As of right now the control vertices are in viewport space of [0 to screen dimension], so when I plot a point I just take the client mouse coordinates. But if I choose to project the final letter into -1 to 1 space, I can only do so once I've drawn all the curves for that letter, as I need the bounding box of all the curves. So what is the right way to approach this? This is probably a bit convoluted; the point of the question is how I transition from font editor to actual font. Do I have to unproject the curves when I open them in the font editor, duplicate them as working copies, and only bake the final normalized letter into the font when I'm done editing it, or how else would I do it at runtime?
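On the scaling half of the question: once the control vertices live in a normalized canonical space, rendering at any font size is just a per-point affine map, so the stored curves never need to change. A hedged sketch (the function names and the y-down screen convention are assumptions, not code from this thread):

```cpp
struct Vec2 { double x, y; };

// Cubic Bézier (four control points) evaluated in the standard Bernstein
// form; the control points are assumed already normalized to glyph space.
Vec2 CubicBezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, double t)
{
    double u = 1.0 - t;
    double b0 = u * u * u;
    double b1 = 3.0 * u * u * t;
    double b2 = 3.0 * u * t * t;
    double b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}

// Drawing a normalized glyph point at any font size is a per-point affine
// map (scale plus pen origin); y is flipped for a y-down screen.
Vec2 ToScreen(Vec2 g, double fontSizePx, Vec2 penOrigin)
{
    return { penOrigin.x + g.x * fontSizePx,
             penOrigin.y - g.y * fontSizePx };
}
```

The editor can apply the same map in reverse to go from mouse coordinates back into glyph space, which avoids keeping a duplicate screen-space copy of the curves.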
  12. I don't need real-time rendering; I should have mentioned that initially. I need smooth animations, and real-time rendering is obviously not happening with a 120 ms cycle. I don't think this is something unusual. I need, for example, a graphic to play in the top right corner on game load, something like a snake eating its tail. I don't have to render it out in real time; I can render it at game load and then play it back at any fps when needed. My mistake was that I was in fact trying to render in real time by using the time delta.
  13. Replying to two earlier posts. The first said:

"The conceptual process looks like this:

```cpp
while (!done)
{
    CurrentTime = GetTheCurrentTime();
    ElapsedTime = CurrentTime - PreviousTime;
    PreviousTime = CurrentTime;

    Update(ElapsedTime);
    Render(ElapsedTime);
}
```

You get the current time at the top of the loop and subtract from it whatever the time was at the top of the last iteration of the loop. That's your elapsed time. You're measuring the time it took to go from the top of the loop, through all the instructions in the body, and back to the top. There's no need to concern yourself with trivial details like the cycle count of a jump instruction."

The second said:

"This sounds like a problem that you should fix. Updating animations (and game logic) via delta time is correct, but using a fixed timestep to do that is not going to solve the problem where it takes too long to render. What makes your game run like a slideshow is the fact that it takes 120 ms to render stuff. That's like... 8 frames per second. If you subtract out the time spent rendering, your animations will make smaller adjustments between any two presented frames, but they will still only render at 8 FPS, and when you eventually fix the renderer or switch to a real one, all of your assumed times will be wrong."

I understand the loop; the way I had it didn't include the Update and Render functions on purpose, because I thought that wasn't what I needed. I was in a way right, because in my case I don't really need Render and Update timing. What I was asking is why I can't see the delta of the jump instruction reflected by QPC. But in any case it's not important, I suppose.

As for "but they will still only render at 8 FPS": no, they will render at whatever fps I instruct. 8 FPS would be the real-time render; the buffered animation can be played at any fps (up to the capture limit) after the fact.

Edit: I mean playback, playback at any fps I need.
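The prerender-then-playback approach the poster describes can be sketched as follows; Frame, PrerenderAnimation-style naming, and the clamp behavior are illustrative assumptions, not code from the thread:

```cpp
#include <cstdint>
#include <vector>

// One prerendered animation frame: a full pixel buffer produced once at load
// time by the slow (~120 ms/frame) software renderer.
struct Frame
{
    std::vector<uint32_t> pixels;
};

// Map wall-clock playback time to a stored frame index at any playback rate.
// Rendering cost no longer matters at playback time, since presenting a
// stored frame is just a buffer copy.
int FrameForTime(double elapsedSeconds, double playbackFps, int frameCount)
{
    int index = static_cast<int>(elapsedSeconds * playbackFps);
    if (index >= frameCount)
        index = frameCount - 1; // hold the last frame when the clip ends
    return index;
}
```

Each loop iteration then just calls FrameForTime with the elapsed time since the animation started and blits the selected Frame.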
  14. So, in other words, you're saying it may help, but it's not an ideal approach? Then what do you suggest I do, if not this?
