VanillaSnake21

Members
  • Content count

    805
  • Joined

  • Last visited

Community Reputation

175 Neutral

About VanillaSnake21

  • Rank
    Advanced Member

Social

  • Twitter
    TymAfterDark
  • Github
    TimAkgayev
  • Steam
    VanillaSnake
  1. C++ Pointer becomes invalid for unknown reason

    That did it. Thanks jpetrie. I totally forgot about the difference in bit layouts
  2. I've restructured some of my code to use namespaces and started getting problems in a module that was working correctly previously. The one in question is a DebugWindow: I give it a pointer to a variable that I want to monitor/change, and its job is to display that variable in a separate window along with some + and - buttons to increment/decrement the variable. These are the relevant portions:

    WindowManager.h

        namespace WindowManager
        {
            /* WindowManager functions snipped */

            namespace DebugWindow
            {
                void AddView(double* vard, std::wstring desc, double increment);
                void AddView(std::wstring* vars, std::wstring desc);
                void CreateDebugWindow(int width, int height, int x, int y);
            }
        }

    Application.cpp is the main app; it calls the above functions to set a watch on the variables I need to see in real time:

        void ApplicationInitialization()
        {
            //create the main window
            UINT windowID = SR::WindowManager::CreateNewWindow(LocalWindowsSettings);

            //initialize the rasterizer
            InitializeSoftwareRasterizer(SR::WindowManager::GetWindow(windowID));

            //create the debug window
            SR::WindowManager::DebugWindow::CreateDebugWindow(400, LocalWindowsSettings.clientHeight, LocalWindowsSettings.clientPosition.x + LocalWindowsSettings.clientWidth, LocalWindowsSettings.clientPosition.y);

            //display some debug info
            SR::WindowManager::DebugWindow::AddView((double*)&gMouseX, TEXT("Mouse X"), 1);
            SR::WindowManager::DebugWindow::AddView((double*)&gMouseY, TEXT("Mouse Y"), 1);
        }

    The variables gMouseX and gMouseY are globals in my application; they are updated inside the app's WndProc, in the WM_MOUSEMOVE handler, like so:

        case WM_MOUSEMOVE:
        {
            gMouseX = GET_X_LPARAM(lParam);
            gMouseY = GET_Y_LPARAM(lParam);
            /* .... */
        } break;

    Now, the AddView() function that I'm calling to set the watch on the variable:

        void AddView(double* vard, std::wstring desc, double increment)
        {
            _var v;
            v.vard = vard;       // used when the variable is a number
            v.vars = nullptr;    // used when the variable is a string (in this case it's not)
            v.desc = desc;
            v.increment = increment;

            mAddVariable(v);
        }

    _var is just a structure I use to pass the variable definition and annotation around inside the module. It's defined as:

        struct _var
        {
            double* vard;          //use when the variable is a number
            double increment;      //value to increment/decrement in live-view
            std::wstring* vars;    //use when the variable is a string
            std::wstring desc;     //description to be displayed next to the variable
            int minusControlID;
            int plusControlID;
            HWND viewControlEdit;  //WinAPI windows associated with the display: a text edit and two buttons, (P) for plus and (M) for minus
            HWND viewControlBtnM;
            HWND viewControlBtnP;
        };

    So after I call AddView it fills in this structure and passes it on to mAddVariable(_var), here it is:

        void mAddVariable(_var variable)
        {
            //destroy and recreate a timer
            KillTimer(mDebugOutWindow, 1);
            SetTimer(mDebugOutWindow, 1, 10, (TIMERPROC)NULL);

            //convert the variable into a readable string if it's a number
            std::wstring varString;
            if (variable.vard)
                varString = std::to_wstring(*variable.vard);
            else
                varString = *variable.vars;

            //create all the controls
            variable.viewControlEdit = CreateWindow(/*...*/);  //text field control

            variable.minusControlID = (mVariables.size() - 1) * 2 + 1;
            variable.viewControlBtnM = CreateWindow(/*...*/);  //minus button control

            variable.plusControlID = (mVariables.size() - 1) * 2 + 2;
            variable.viewControlBtnP = CreateWindow(/*...*/);  //plus button control

            mVariables.push_back(variable);
        }

    I then update the variable using a timer inside the DebugWindow's message proc:

        case WM_TIMER:
        {
            switch (wParam)
            {
                case 1: // 1 is the id of the timer
                {
                    for (_var v : mVariables)
                    {
                        SetWindowText(v.viewControlEdit, std::to_wstring(*v.vard).c_str());
                    }
                } break;

                default:
                    break;
            }
        }; break;

    When I examine mVariables, their vard* is something like 1.48237482E-33#DEN. Why does this happen?

    Also to note: I'm programming in a C-like fashion, without using any objects at all. The module consists of a .h and a .cpp file; whatever I expose in the .h is public, and a function only declared in the .cpp is private. So even though I precede some functions with an m prefix, that doesn't mean they are members of a class, only that they aren't exposed in the header file and are visible only within this module. Thanks.
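    For reference, a minimal sketch of the bit-layout issue that the reply in item 1 points at, assuming gMouseX/gMouseY are declared as int (which is what GET_X_LPARAM/GET_Y_LPARAM return); the fix shown is only one possible option:

        #include <cstdio>

        int main()
        {
            // Assumed to mirror the globals in the post.
            int gMouseX = 250;

            // Wrong: the cast doesn't convert the value, it reinterprets the int's
            // 32-bit pattern as (part of) a 64-bit double, so dereferencing yields
            // garbage such as a denormal (the #DEN the debugger shows) and also reads
            // past the int's storage. This is what AddView((double*)&gMouseX, ...) ends
            // up watching.
            double* watched = (double*)&gMouseX;
            std::printf("reinterpreted bits: %g\n", *watched);   // garbage / undefined behaviour

            // One possible fix: make the watched variable an actual double, so the
            // pointer handed to AddView() points at an object with a matching bit layout.
            double gMouseXd = 250.0;
            double* ok = &gMouseXd;
            std::printf("proper double: %g\n", *ok);             // prints 250

            return 0;
        }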
  3. C++ How to correctly scale a set of bezier curves?

    Oh, so I should contain all the curves in a region and just map it from 0 to 1. Right, that makes sense. I guess, come to think of it, every font editor has set-size glyphs; not sure why I thought I needed arbitrary sizes inside the editor. Thanks. Also @JoeJ, I didn't consider the baseline and letter metrics, thanks.
  4. C++ How to correctly scale a set of bezier curves?

    It's not easy to explain or even draw or demonstrate, so bear with me while I try again. When I'm making a font in my editor I do so by plotting single control vertices: every click creates one control vertex, and after I plot 4 of them they form a bezier curve. Now, how do I store the actual positions of the control vertices? Right now I just store the exact mouse coordinates where I clicked on the screen, so CV1(100px, 100px), CV2(300px, 300px), and so on until CV4, which completes a curve. These are all in screen space. Then I add a few more curves, say, which together form a letter, so all these curves are being manipulated in pixel coordinates. Now if I want to actually use these letters and scale them to any font size, I can't use those screen coordinates anymore; I have to fit the letter into some scalable space like 0 to 1, which means converting all the vertex coordinates into that space. Right now I'm doing that manually: I have a button in my editor called Normalize, and once I'm happy with the letter I've formed, I click Normalize and it transforms all the vertices into normalized 0-to-1 space.

    My question was whether I can avoid doing the normalization manually and work in a space that is normalized from the get-go. That is, when I plot a point with the mouse, I wouldn't store the mouse location as the vertex coordinate, but would transform the mouse coordinate into a normalized space right away. I hope that clears up what my intentions with the question were. It's not really a problem, as everything works just fine as of now; I just wanted to know if there is a more elegant way of doing this.
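    For what it's worth, a minimal sketch of that plot-time normalization, assuming the editor draws a fixed em-box rectangle on screen that every glyph is authored inside (the Rect/emBox names here are illustrative, not from the post):

        struct Point { double x, y; };

        // Hypothetical fixed em-box that the editor draws on screen; every glyph is
        // authored inside this rectangle, so its size never changes per letter.
        struct Rect { double left, top, width, height; };

        // Convert a mouse click (client/pixel coordinates) into the glyph's
        // normalized [0,1] space at plot time, instead of normalizing afterwards.
        Point MouseToGlyphSpace(int mouseX, int mouseY, const Rect& emBox)
        {
            Point p;
            p.x = (mouseX - emBox.left) / emBox.width;
            p.y = (mouseY - emBox.top)  / emBox.height;
            return p;
        }

        // To draw in the editor, map back the other way.
        Point GlyphToScreen(const Point& p, const Rect& emBox)
        {
            return { emBox.left + p.x * emBox.width,
                     emBox.top  + p.y * emBox.height };
        }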
  5. C++ How to correctly scale a set of bezier curves?

    I mean, suggesting that I dig through a mature open source library's code to see how it does one specific thing is a bit of overkill imo. If there are some docs you can point me to that deal with this issue, that's another thing.
  6. C++ How to correctly scale a set of bezier curves?

    Because I can't normalize until I get the final shape of the letter. Let's say the letter A: it takes 3 curves, / -- \ . If I just renormalize after I add the second curve, the structure shifts to renormalized units, meaning it shifts to the center of the canonical box as I have it now, so I have to manually renormalize once I finalize the letter. That's how I have it now; it's a bit tedious, and I was looking for a way to maybe use alternate coordinate systems for a more elegant implementation. But not every piece of code has to be perfect, I guess; I'll just have to settle on this for now.
  7. C++ How to correctly scale a set of bezier curves?

    It's my own framework; I'm not willing to use anything but the most low-level libraries, as I'm not even using a graphics API. My question was how to represent the spline correctly internally so it could both be used in letter glyphs and be modified in the editor. I've settled on having a duplicate structure at this point: one representation for a spline while I'm dragging its vertices around in the editor, and another, normalized representation for when it's rendered. I was just looking for a single elegant implementation in this question.
  8. I've got a working implementation of a 4d and 1d bezier curve font generator, but I'm not sure how to transition into actually making text. Right now I create my font by clicking and dragging control vertices on the screen; once I have a few curves aligned I designate them as a letter and save the font. But I'm not sure what coordinate system to use to make sure that I can scale the existing curves to any size. I'm thinking of having the letter sit in a canonical box from -1 to 1 in both x and y, but then how do I renormalize the curves and still keep the ability to plot points directly on screen? As of right now the control vertices are in viewport space, [0 to screen dimension], so when I plot a point I just take the client mouse coordinates. But if I choose to project the final letter into -1 to 1 space, I can only do so once I've drawn all the curves for that letter, since I need the bounding box of all the curves.

    So what is the right way to approach this? This is probably a bit convoluted; the point of the question is how I transition from font editor to actual font. Do I have to unproject the curves when I open them in the font editor, duplicate them as working copies, and only bake the final normalized letter into the font when I'm done editing it, or how else would I do it at runtime?
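    As an aside, a minimal sketch of what the bounding-box normalization step could look like once a letter is finalized (the NormalizeLetter name and the flat list of control vertices are assumptions for illustration; it maps into [0,1] rather than [-1,1], but the idea is the same):

        #include <algorithm>
        #include <vector>
        #include <limits>

        struct Point { double x, y; };

        // Take every control vertex of every curve in the letter, find their common
        // bounding box, and remap them into [0,1] x [0,1]. The curves keep their shape
        // because a bezier curve under an affine map is the curve of the mapped
        // control points.
        void NormalizeLetter(std::vector<Point>& controlVertices)
        {
            if (controlVertices.empty())
                return;

            double minX = std::numeric_limits<double>::max();
            double minY = std::numeric_limits<double>::max();
            double maxX = std::numeric_limits<double>::lowest();
            double maxY = std::numeric_limits<double>::lowest();

            for (const Point& p : controlVertices)
            {
                minX = std::min(minX, p.x);  maxX = std::max(maxX, p.x);
                minY = std::min(minY, p.y);  maxY = std::max(maxY, p.y);
            }

            // Uniform scale keeps the letter's aspect ratio; a per-axis scale would
            // stretch it to fill the box instead.
            const double scale = std::max(maxX - minX, maxY - minY);
            if (scale <= 0.0)
                return;

            for (Point& p : controlVertices)
            {
                p.x = (p.x - minX) / scale;
                p.y = (p.y - minY) / scale;
            }
        }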
  9. I don't need real-time rendering, I should have mentioned that initially; I need smooth animations, and real-time rendering is obviously not happening with a 120 ms cycle. I don't think this is something unusual. I need, for example, a graphic to play in the top right corner on game load, something like a snake eating its tail. I don't have to render it out in real time; I can render it at game load and then play it back at any fps when needed. My mistake was that I was in fact trying to render in real time by using the time delta.
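    A minimal sketch of that pre-render-then-play-back idea (the Frame type and the RasterizeFrame/PresentFrame hooks are placeholders for the framework's own functions, not actual APIs):

        #include <chrono>
        #include <cstdint>
        #include <vector>

        // Placeholder frame storage: whatever the software rasterizer writes into.
        struct Frame { std::vector<std::uint32_t> pixels; };

        // Pre-render the whole animation once, e.g. during game load.
        std::vector<Frame> PreRenderAnimation(int frameCount)
        {
            std::vector<Frame> frames(frameCount);
            for (int i = 0; i < frameCount; ++i)
            {
                // RasterizeFrame(i, frames[i]);   // the slow ~120 ms step happens here, once
            }
            return frames;
        }

        // Then play it back later at any frame rate, independent of the render cost.
        void Playback(const std::vector<Frame>& frames, double fps)
        {
            using clock = std::chrono::steady_clock;
            const auto frameTime = std::chrono::duration<double>(1.0 / fps);
            auto next = clock::now();

            for (const Frame& frame : frames)
            {
                (void)frame;   // PresentFrame(frame);  a blit, cheap compared to rasterizing
                next += std::chrono::duration_cast<clock::duration>(frameTime);
                while (clock::now() < next) { /* simple busy-wait pacing, enough for a sketch */ }
            }
        }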
  10. Quote:

        The conceptual process looks like this:

            while (!done)
            {
                CurrentTime = GetTheCurrentTime();
                ElapsedTime = CurrentTime - PreviousTime;
                PreviousTime = CurrentTime;

                Update(ElapsedTime);
                Render(ElapsedTime);
            }

        You get the current time at the top of the loop and subtract from it whatever the time was at the top of the last iteration of the loop. That's your elapsed time. You're measuring the time it took to go from the top of the loop, through all the instructions in the body, and back to the top. There's no need to concern yourself with trivial details like the cycle count of a jump instruction.

    Quote:

        This sounds like a problem that you should fix. Updating animations (and game logic) via delta time is correct. But using a fixed timestep to do that is not going to solve the problem where it takes too long to render. What makes your game run like a slideshow is the fact that it takes 120 ms to render stuff. That's like... 8 frames per second. If you subtract out the time spent rendering, your animations will make smaller adjustments between any two presented frames. But they will still only render at 8 FPS, and when you eventually fix the renderer or switch to a real one, all of your assumed times will be wrong.

    I understand the loop; the way I had it didn't include the Update and Render functions on purpose, because I thought that wasn't what I needed. I was in a way right, because in my case I don't really need the Render and Update timing. What I was asking is why I can't see the delta of the jump instruction reflected by QPC. But in any case it's not important, I suppose.

    <But they will still only render at 8 FPS> No, they will render at whatever fps I instruct. 8 FPS would be the real-time render; the buffered animation can be played at any fps (up to the capture limit) after the fact.

    Edit: I mean playback, playback at any fps I need.
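    On the QPC question specifically, a minimal sketch of how the counter is usually read and converted (standard WinAPI calls; the main() scaffolding is just for illustration). The counter typically ticks at a few MHz to ~10 MHz, so one tick is on the order of 100 ns to 1 us, far longer than a single jump instruction, which is why the jump never shows up in the delta:

        #include <windows.h>
        #include <cstdio>

        int main()
        {
            LARGE_INTEGER freq, start, end;
            QueryPerformanceFrequency(&freq);   // ticks per second

            QueryPerformanceCounter(&start);
            // ... work being timed ...
            QueryPerformanceCounter(&end);

            // Ticks divided by frequency gives seconds; scale to milliseconds.
            const double elapsedMs =
                1000.0 * static_cast<double>(end.QuadPart - start.QuadPart) /
                static_cast<double>(freq.QuadPart);

            std::printf("elapsed: %.3f ms\n", elapsedMs);
            return 0;
        }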
  11. So in other words you're saying it may help but it's not an ideal way? So what do you suggest I do then if not this?
  12. My animations are based on delta time, akin to <scalar*dir.x*delta_time, scalar*dir.y*delta_time>, so because my render function takes more time than an acceptable refresh, it completely destroys the look of the animation. What I think I have to do is untie my animations from real time (the time delta). But you're saying a fixed timestep is not the way to go? So what should I do?

    Edit: but is there really no way to avoid including my raster function in the timing loop? I don't even understand how a high-res timer captures the time to get from the bottom of the loop to the top; you're saying that the jmp instruction takes less than one tick? And QPC can see one clock tick, right?
  13. But this will completely destroy my animations then. By the time I finish rendering, the elapsed time delta will be so huge that it's just going to be like watching a bad stop-motion animation. I guess I'm using the wrong approach to begin with. I need to cancel out the render time delta, right? So just using a fixed timestep would do it?
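    For reference, the fixed-timestep pattern being discussed usually looks roughly like the sketch below (Update/Render here are placeholder stubs, not the framework's functions). It keeps the simulation stepping in constant increments no matter how long rendering takes, although, as noted above, it does not make a 120 ms render any faster:

        #include <chrono>

        void Update(double /*dtSeconds*/) { /* advance animations/game state by one fixed step */ }
        void Render(double /*alpha*/)     { /* software-rasterize and present the frame */ }

        void RunLoop(bool& done)
        {
            using clock = std::chrono::steady_clock;

            const double dt = 1.0 / 60.0;   // fixed simulation step (60 Hz)
            double accumulator = 0.0;
            auto previous = clock::now();

            while (!done)
            {
                const auto current = clock::now();
                accumulator += std::chrono::duration<double>(current - previous).count();
                previous = current;

                // Step the simulation in fixed increments, regardless of how long the
                // (possibly slow) render below took last frame.
                while (accumulator >= dt)
                {
                    Update(dt);
                    accumulator -= dt;
                }

                // Optionally interpolate between the last two states for smoothness.
                Render(accumulator / dt);
            }
        }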
  14. No, I'm doing software rasterizing; the GPU is only involved in the backbone. My code had the SetLastTime at the end of one of the branches of the loop, not at the end of the loop itself, which is why I assumed the timing wouldn't always be circular. But again, why am I timing my raster function and including that in the delta time? The raster operations take about 120 ms, so my delta time gets bloated out of control. I always thought the point was to time only the interval between calls to the raster function, not the raster call itself.