C++ How to correctly scale a set of Bézier curves?

I've got a working implementation of a 4d and 1d Bézier curve font generator, but I'm not sure how to transition into actually making text. Right now I create my font by clicking and dragging control vertices on the screen; once I have a few curves aligned, I designate them as a letter and save the font. But I'm not sure what coordinate system to use so that I can scale the existing curves to any size. I'm thinking of having each letter sit in a canonical box spanning -1 to 1 in both x and y, but then how do I renormalize the curves and still keep the ability to plot points directly on screen? At the moment the control vertices are in viewport space, [0, screen dimension], so when I plot a point I just take the client mouse coordinates. But if I choose to project the final letter into -1 to 1 space, I can only do so once I've drawn all the curves for that letter, since I need the bounding box of all of them. So what is the right way to approach this?

 

This is probably a bit convoluted; the point of the question is how to transition from the font editor to an actual font. Do I have to unproject the curves when I open them in the font editor, duplicate them as working copies, and only bake the final normalized letter into the font when I'm done editing it? Or how else would this be done at runtime?
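As a concrete sketch of the "bake on finalize" step described above (the `Vec2` type, function name, and uniform-scale choice are illustrative assumptions, not the poster's actual code), fitting a finished glyph's control vertices into the canonical -1 to 1 box could look like:

```cpp
#include <vector>
#include <algorithm>

struct Vec2 { float x, y; };

// Fit a whole glyph's control vertices into the canonical [-1, 1] box.
// A uniform scale is used so the glyph keeps its aspect ratio.
std::vector<Vec2> normalizeGlyph(const std::vector<Vec2>& cvs)
{
    float minX = cvs[0].x, maxX = cvs[0].x;
    float minY = cvs[0].y, maxY = cvs[0].y;
    for (const Vec2& p : cvs) {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    // Center of the bounding box and half of its larger dimension.
    float cx = 0.5f * (minX + maxX), cy = 0.5f * (minY + maxY);
    float half = 0.5f * std::max(maxX - minX, maxY - minY);
    if (half == 0.0f) half = 1.0f;   // degenerate glyph: avoid divide by zero

    std::vector<Vec2> out;
    out.reserve(cvs.size());
    for (const Vec2& p : cvs)
        out.push_back({ (p.x - cx) / half, (p.y - cy) / half });
    return out;
}
```

Because the bounding box is only known once all curves exist, this pass necessarily runs at finalize time, which is the crux of the question.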

Edited by VanillaSnake21

I'm not sure what you're asking for, but in any case you seem to be reinventing things that have common standards, e.g. TrueType and PostScript, so you should find some answers by looking at their specs.

I wonder why you don't use those standards, as there are font editors and also libraries to render those fonts with graphics APIs, probably all for free.

One main problem is the horizontal distance between letters. AFAIK this is solved by default left/right bounds, but there are also individual settings for custom pairs of letters, e.g. "LA" should be as close together as possible, while "IL" should have some space between them. (I know this from bitmap fonts typical for games, like http://www.angelcode.com/products/bmfont/ ; vector fonts probably do the same.)

It's my own framework; I'm not willing to use anything but the most low-level libraries, as I'm not even using a graphics API. My question was how to represent the spline correctly internally so it can both be used in letter glyphs and modified in the editor. I've settled on a duplicate structure at this point: one representation for a spline while I'm dragging its vertices around in the editor, and another, normalized representation for when it's rendered. I was just looking for a single elegant implementation in this question.

Why can't your editor just use normalized coordinates, too? After any change to the spline's control points, compute the correct normalization factor to return everything to the unit square, and apply it/redraw.
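A minimal sketch of that suggestion (names and the [0, 1] unit-square convention are illustrative assumptions): after every control-vertex drag, translate and uniformly scale the whole set back into the unit square, then redraw from those coordinates.

```cpp
#include <vector>
#include <algorithm>

struct Vec2 { float x, y; };

// Hypothetical post-edit hook: after any control-vertex drag, snap the
// whole set back into the unit square [0, 1] x [0, 1]; the editor then
// redraws from these normalized coordinates.
void renormalizeToUnitSquare(std::vector<Vec2>& cvs)
{
    if (cvs.empty()) return;
    float minX = cvs[0].x, maxX = cvs[0].x;
    float minY = cvs[0].y, maxY = cvs[0].y;
    for (const Vec2& p : cvs) {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    // Uniform scale keeps the glyph's aspect ratio intact.
    float extent = std::max(maxX - minX, maxY - minY);
    if (extent == 0.0f) extent = 1.0f;   // single point: avoid divide by zero
    for (Vec2& p : cvs) {
        p.x = (p.x - minX) / extent;
        p.y = (p.y - minY) / extent;
    }
}
```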

57 minutes ago, ApochPiQ said:

Why can't your editor just use normalized coordinates, too? After any change to the spline's control points, compute the correct normalization factor to return everything to the unit square, and apply it/redraw.

 

Because I can't normalize until I have the final shape of the letter. Say the letter A takes three curves: / -- \. If I renormalize right after adding the second curve, the structure shifts to renormalized units, meaning it moves to the center of the canonical box as I have it set up now. So I have to renormalize manually once I finalize the letter. That's how I have it now; it's a bit tedious, and I was looking for a way to maybe use alternate coordinate systems for a more elegant implementation. But not every piece of code has to be perfect, I guess; I'll just have to settle on this for now.

7 hours ago, VanillaSnake21 said:

It's my own framework, I'm not willing to use anything but the most low level libraries

JoeJ was not saying to adopt the standards or adopt libraries that implement the standard, he was suggesting to just look at how they solved the same problem as a source of inspiration.

9 hours ago, Alberth said:

JoeJ was not saying to adopt the standards or adopt libraries that implement the standard, he was suggesting to just look at how they solved the same problem as a source of inspiration.

I mean, suggesting to dig through a mature open-source library's code to see how it handles one specific action is a bit of overkill, IMO. If there are some docs you can point me to that deal with this issue, that's another thing.


Maybe looking at the example rendering implementations linked on the AngelCode webpage helps. (They're probably all OpenGL, but it would be the same for a software renderer.) The per-letter data is pretty self-explanatory.

But I don't know if this would help with your problem; I still have no idea what the problem is exactly, and it's probably the same for others. I think you should provide screenshots from your tool and/or some illustration to visualize the problem.

7 hours ago, JoeJ said:

Maybe looking at the example rendering implementations linked on the AngelCode webpage helps. (They're probably all OpenGL, but it would be the same for a software renderer.) The per-letter data is pretty self-explanatory.

But I don't know if this would help with your problem; I still have no idea what the problem is exactly, and it's probably the same for others. I think you should provide screenshots from your tool and/or some illustration to visualize the problem.

 

It's not easy to explain or even draw or demonstrate, but I'll try again; bear with me. When I'm making a font in my editor, I do so by plotting single control vertices. Every click creates one control vertex, and after I plot four of them they form a cubic (four-point) Bézier curve.

Now, how do I store the actual positions of the control vertices? Right now I store them as the exact mouse coordinates where I clicked on the screen, so CV1(100px, 100px), CV2(300px, 300px), and so on up to CV4, which completes a curve. These are all in screen space. Then I add a few more curves which, say, form a letter, so all these curves are being manipulated in pixel coordinates.

Now if I want to actually use these letters and scale them to any font size, I can't use these screen coordinates anymore; I have to fit the letter into some scalable space like 0 to 1, which means converting all the vertex coordinates into that space. Right now I'm doing that manually: I have a button in my editor called Normalize, and once I'm happy with the letter I've formed, I click it and it transforms all the vertices into normalized 0-to-1 space.

My question was whether I can avoid doing the normalization manually and instead work in a space that is normalized from the get-go. That is, when I plot a point with the mouse, I wouldn't store the mouse location as the vertex coordinate, but would transform the mouse coordinate into a normalized space right away. I hope that clears up the intent of the question. It's not really a problem, as everything works fine as of now; I just wanted to know if there is a more elegant way of doing this.
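The "normalized from the get-go" idea described above amounts to an inverse viewport transform applied at click time. A hypothetical sketch (function names and the [0, 1] convention are assumptions):

```cpp
struct Vec2 { float x, y; };

// Illustrative two-way viewport mapping: control vertices are stored in
// normalized [0, 1] space from the moment they are plotted, and converted
// back to pixels only for display. screenW/screenH are the viewport size.
Vec2 screenToNormalized(Vec2 mouse, float screenW, float screenH)
{
    return { mouse.x / screenW, mouse.y / screenH };
}

Vec2 normalizedToScreen(Vec2 p, float screenW, float screenH)
{
    return { p.x * screenW, p.y * screenH };
}
```

Note this normalizes against the viewport, not the letter's own bounding box, so a per-glyph fit would still be needed unless the glyph region is fixed in advance.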


There can be no magic solution.

You want arbitrarily sized splines that are normalized. No math can predict how big you will make your splines; to normalize, you must by definition know how big the spline is.

The compromise is to draw boundaries ahead of time and force all characters in your font to adhere to predefined dimensions. If you know a glyph's size ahead of time, you can draw its splines in normalized space.

But you can't have arbitrary glyph dimensions and also eliminate the need for explicit normalization steps.
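Under that compromise, a click can be stored in glyph space immediately, relative to the predefined box, with no later normalization pass. An illustrative sketch (all names are assumptions):

```cpp
struct Vec2 { float x, y; };
struct Box  { float x, y, w, h; };   // the predefined glyph box, in pixels

// With a fixed box drawn in the editor, a click inside it maps straight
// into normalized glyph space; the bounding box is known up front.
Vec2 toGlyphSpace(Vec2 click, const Box& box)
{
    return { (click.x - box.x) / box.w, (click.y - box.y) / box.h };
}
```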


Yeah, but the bounding rect (or normalization square, whatever we call it) needs to be part of the editing process anyway. To be more clear, the user needs to specify the left and right bounds and the baseline manually. The baseline ensures letters like 'm' and 'n' sit at the same bottom height. A letter like 'g' descends below the baseline, and an algorithm can't derive the baseline just by calculating bounds. A letter like 'o' also extends a tiny bit lower (and higher) than 'm' or 'n', so that it doesn't appear smaller than other letters to the eye. The user should set those bounds and be able to tweak them later until happy with the result. So it's like any other form of content creation: you keep the original data and convert it to a game asset any time the artist makes a change.
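Those per-glyph metrics could be stored alongside the curves in something like the following (field names are illustrative, not from any particular format, though real formats like TrueType keep similar per-glyph advance and side-bearing values):

```cpp
#include <vector>

// User-set side bearings and baseline, stored next to each glyph's curves.
struct GlyphMetrics {
    float leftBearing;    // gap before the outline, in normalized glyph units
    float rightBearing;   // gap after the outline
    float advance;        // total pen movement to the next glyph
    float baseline;       // y of the baseline inside the glyph box
};

// Pen-advance layout: each glyph moves the pen by its advance scaled by the
// font size; kerning pairs (e.g. "LA") would subtract an extra amount here.
float lineWidth(const std::vector<GlyphMetrics>& glyphs, float fontSize)
{
    float x = 0.0f;
    for (const GlyphMetrics& g : glyphs)
        x += g.advance * fontSize;
    return x;
}
```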


Oh, so I should contain all the curves in a region and just map it from 0 to 1. Right, that makes sense. Come to think of it, every font editor has fixed-size glyph boxes; not sure why I thought I needed arbitrary sizes inside the editor. Thanks. Also @JoeJ, I hadn't considered the baseline and letter metrics; thanks.


