m3mb3rsh1p

Members
  • Content count: 45
  • Joined
  • Last visited

Community Reputation

440 Neutral

About m3mb3rsh1p

  • Rank: Member
  1. I think the "IT" powers that be failed to establish a standard for online etiquette, and the result is conflict whenever opinions differ in what should be friendly discourse. In my opinion it's this era's most socially destructive force. C++ invariably seems to draw "expert" opinion even for simple questions: links and references to the standard, quotes from prominent language experts, clever language magic... I think it's been shown that enum bit flags are "normal / conventional". Surely most of us have seen them? Language debate is interesting but also distracting... it's also part of why "C++ is hard."
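     For context, here's a minimal sketch of the conventional unscoped-enum bit-flag pattern I have in mind (the flag names are made up purely for illustration):

         #include <cstdio>

         // Hypothetical flags, just to show the usual bit-flag layout.
         enum TextureFlags : unsigned
         {
             tex_mipmaps = 1u << 0,
             tex_clamp   = 1u << 1,
             tex_srgb    = 1u << 2
         };

         int main()
         {
             // Unscoped enumerators convert to unsigned, so | and & just work.
             unsigned flags = tex_mipmaps | tex_srgb;
             if(flags & tex_srgb)
                 std::printf("flags: 0x%X (sRGB requested)\n", flags);
             return 0;
         }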
  2. Are you trying to find a way to write new enums that will compile as both C and C++, or are you modifying existing C code to compile with a C++ compiler? The following method works for both and has type-checking in C++. Perhaps it will require fewer modifications to existing C code than adding operators to C++ code:

         #include <stdio.h>

         typedef enum { flag0, flag1, flag2, flag3 = 4, flag4 = 8 } test_flags;

         void config( test_flags iflags[4] )
         {
             unsigned uflags[] = { iflags[0], iflags[1], iflags[2], iflags[3] };
             printf( "0x%X\n", uflags[0] | uflags[1] | uflags[2] | uflags[3] );
         }

         int main( int argc, const char *args[] )
         {
             test_flags my_flags[] = { flag1, flag2, flag3, flag4 };
             unsigned badflags[] = { 30, 40, 50, 32 };

             config( badflags );  // not accepted by C++
             config( my_flags );
             return 0;
         }
  3. I've found the following pattern to be effective:

         #include <stdio.h>

         constexpr const unsigned flag[] = { 0, 1, 2, 4, 8, 16, 32, 64, 128, 256 };

         struct fpl
         {
             enum init_flag { init_none, init_window, init_opengl };

             fpl( const unsigned &iflags ) : init_flags{ iflags } {}

             unsigned init_flags;
         };

         int main( int argc, const char *args[] )
         {
             fpl my_fpl{ flag[ fpl::init_window ] | flag[ fpl::init_opengl ] };
             printf( "my_fpl flags: 0x%X\n", my_fpl.init_flags );
             return 0;
         }
  4. Help with toolchain/DC Load - Flags

    Hello. Congratulations on getting your Dreamcast development environment set up. From the README file included with the dcload sources, you can upload using the following pattern (after booting your Dreamcast with the burned loader disc):

        dc-tool -t [device] -b [baud rate] -x [executable]

    e.g. Linux:

        dc-tool -t /dev/usb/tts/0 -b 1500000 -x <sh-executable>

    Cygwin:

        dc-tool -t COM4 -b 500000 -x <sh-executable>

    A default device and baud rate would have been compiled in according to Makefile.cfg, so you could just type dc-tool -x [executable] if the defaults match your system. The dcemulation programming community is dedicated to Dreamcast development, so you can supplement general programming info here with DC-specific info on their forums at http://dcemulation.org/phpBB/viewforum.php?f=29 All the best.
  5. Is it okay that you are dereferencing "this" in your constructor and then calling loadModel() on your possibly incomplete object?
  6. Congratulations on the achievement and thanks for sharing. I must say, however, that "A first taste of Stanza" provided on the home page could be made more palatable. Would you consider a simpler, real-world example? Your story about the origins of the language would be well served by an example such as a calculator or guessing game. This would help to show what Stanza makes easier.
  7. Why is my struct destructor not called?

    @Bitmaster... I said pointers must be used because they are the language facility for managing object... "lifetime," for lack of a better word. Sorry you caught my post before I edited it to reflect my concerns about turning to "advanced concepts", not placement new specifically. The OP asked "Why am I not getting Destruct A [500]?!" The answer, in my opinion, should have been something like "Because you need to call delete a on an A*, or exit the scope, which does not happen automatically when you assign." The assignment was addressed early, but I reproach the jump to topics about optimization, the stack, the free store, dynamic allocation and placement new, all to avoid the most basic of C++ concepts: pointers and new+delete.

    A user-defined type need not allocate any memory to have a non-trivial destructor, but the "basic" way of taking control of an object's lifetime away from automatic scoping is to use a pointer. Surely it is folly to suggest that pointers+new+delete = dynamic allocation = free store = slow = much typing = avoid, avoid, avoid? The OP wanted to destroy dynamically (i.e. before end of scope), so why are we trying to concoct a static allocation?

    The OP clearly hit a gotcha in expecting the destructor to be called within the scope of the function. Pointers are necessary, irrespective of the language facility, allocator, container or custom class used to implement RAII, reduce typing, avoid arrow dereferencing, avoid the free store, etc. C/C++ programmers MUST know that they are responsible for their object's lifetime from the moment they declare a variable. This is the C/C++ way.

    Automatic initialization+destruction and support structures exist to allow the programmer to delegate that responsibility, not be relieved of it: "Thou shalt assume responsibility for this object's lifetime, on my behalf, from point A (e.g. initialization) to point B (e.g. end of scope or reset())." Placement new, manual destructor calls, and all the other expert "implementations" of handling object lifetime must not change "Good Practice" in using variables/pointers/objects/references, especially in main() or any other user-level function.

    My example was meant to demonstrate the awareness a programmer should have of object lifetime, with particular respect to end of scope, which is why I showed the unique_ptr example as one way of delegating responsibility and avoiding manual new+delete. "...an emphatic no" to the use of the free store is a bit extreme. If one prefers to dabble in stack-only allocation, then one is free to place "new" there, even for unique_ptr.

    It is certainly interesting to explore the advanced concepts related to object allocation, but all that "expert" knowledge belongs in "encapsulation." "General Programming" should be a place to exemplify good coding practices, more so than "For Beginners", IMO. Bjarne Stroustrup and most of the "top brass" regularly bemoan the tendency of experts to turn basic questions into in-depth analyses of advanced language concepts instead of demonstrating the virtues and wonders of simplicity.

    "The Pointer" is basic C/C++. It is what clearly distinguishes these languages from Java, C# and others. The years have afforded us facilities to delegate object lifetime management, but it remains our greatest advantage that we can say, and aspire to guarantee, that a particular scope behaves as it appears (as long as we don't go fighting the language with "expert" tricks).
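    To make that scope/delete point concrete, here is a minimal sketch (the struct here is a stand-in I wrote for illustration, not the OP's actual code):

        #include <cstdio>

        struct A
        {
            int val = 0;
            ~A() { std::printf("Destruct A [%d]\n", val); }
        };

        int main()
        {
            A *a = new A;      // the programmer now owns this object's lifetime
            a->val = 500;
            delete a;          // "Destruct A [500]" prints here, not when something is assigned

            A b;               // automatic object
            b.val = 300;
            return 0;          // "Destruct A [300]" prints only when the scope ends
        }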
  8. Why is my struct destructor not called?

    Hmm, the replies seemed to jump straight to advanced concepts, skipping the basic new + delete, which is how C++ works at the basic level:

        A *a = new A;
        a->val = 500;
        delete a;

        a = new A();
        printf( "Finally %d\n", a->val );
        a->val = 300;
        delete a;
        return 0;

    The recommended practice these days is to use the standard library, but the point to remember is that pointers MUST be used if object destruction is needed before exiting scope.

        #include <cstdio>
        #include <memory>

        struct A { int val; };

        int main()
        {
            std::unique_ptr<A> a{ new A };
            a->val = 500;

            a.reset(new A());   // the old A is destroyed here
            std::printf("Finally %d\n", a->val);

            a->val = 300;
        }

    I believe manual scoping might work as well in some cases, but here it seems you need "a" for the entire scope...

        int main()
        {
            A a;
            a.val = 500;

            {
                A b;
                b.val = 300;
                a.val = b.val;
                printf("Destroying b...\n");
            }
        }
  9. Thanks for the amazing trivia, frob. I didn't know that the x86 microcode was so sophisticated. If there's one take-away from this, it should be that programmers are wiser to name such oft-used utilities. It makes for easier mental parsing and future maintenance. I'm pretty sure a SWAP macro would have been possible when that textbook was written. Other examples that I sometimes find unnecessarily complicated:
     - "shift left when multiplying by two": if one doesn't trust the compiler to make this one optimization, then just write a template mult() or even just a mult2() function.
     - recursion of incomplete concepts such as factorial. Even mathematicians use "if" and "where".
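     For instance, a minimal sketch of the kind of named helper I mean (mult2 is a made-up name, not anything from the book):

         #include <cstdio>

         // Named multiply-by-two helper; the compiler is free to emit a shift if it likes.
         template <typename T>
         constexpr T mult2(T value)
         {
             return value * 2;
         }

         int main()
         {
             static_assert(mult2(21) == 42, "reads as arithmetic, optimizes like a shift");
             std::printf("%d %u\n", mult2(21), mult2(8u));
             return 0;
         }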
  10. casting double* to float*

    @nfries88 I was exploring double to float conversion using casts to see how accuracy would be lost due to multiplication by factors of 10 (naive, I know). I have often read that double and float can represent sooo many decimal points and huuuuge whole numbers, so I wasn't expecting to see my naive version of an error pop up after 5 loops or less.

    It was a sobering realisation that even floating point limits can be easily exceeded by multiplying seemingly small quantities, especially since these types are used in the oft-mentioned "critical loops", "engine cores" and "numeric libraries." I started appreciating things like media compression artifacts, 3D graphics clipping, and statistical / mechanical errors...

    I also wanted to see whether the compiler performs numeric conversion, in contrast to the truncation of integer types. I'm not familiar with the binary representation of IEEE floating point, but I had the sense that even simple truncation would not be as error-prone as short(int). I know some processors provide double to float instructions, so I was also hoping / checking that a double would truncate to (whole).(fraction) and not just read the bits that happen to occupy the lower 32 bits (whose meaning is unfamiliar to me).

    Before frob mentioned the 6 digits of precision, the "errors" were surprising because of the presumption I held about the level of precision. The final cast was interesting to me because I assumed the "error" was due to float inaccuracy propagation versus double, and that casting the more accurate double would yield a more accurate float.

    I just had this naive impression that IEEE float dealt with the int-overflowing multiply-by-10 by incrementing an exponent such that it could handle much larger values; e.g. 10 to the 10 "felt" possible with little error. On the fraction part, I "knew" shifting the decimal point would favour double, but a clear "error" in the fraction part when doing float(double) was a shock.

    From this mix of results, I'll be striving to keep numeric calculations within limits. I've noticed that scaling down, e.g. by 10 or 100, works against loss of accuracy, i.e. it seems better to work using fractions when aiming for accuracy, by my naive observations.
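    To put rough numbers on the precision limits frob mentioned, here's a minimal sketch (the value 16777217 is chosen only because it is the first whole number a float cannot represent exactly):

        #include <cstdio>
        #include <limits>

        int main()
        {
            // float guarantees only about 6 decimal digits, double about 15.
            std::printf("float digits10: %d, double digits10: %d\n",
                        std::numeric_limits<float>::digits10,
                        std::numeric_limits<double>::digits10);

            double d = 16777217.0;              // 2^24 + 1: exact as a double, not as a float
            float  f = static_cast<float>(d);   // numeric conversion, rounds to 16777216
            std::printf("double: %.1f\nfloat : %.1f\n", d, f);
            return 0;
        }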
  11. casting double* to float*

    After a bit of playing around, I have come to appreciate the importance of structuring mathematical operations to ensure that intermediate values and end values stay within numeric limits. The following test function may not be practical, but I would previously have performed calculations using the naive assumption that "my values will remain within limits because I'm only multiplying by 100 and my tolerance is 0.01". Notice how the final float(double) discards the fraction.

        #include <cstdio>

        void test_fp_accuracy(int set_size = 100, float tolerance = 0.01f)
        {
            float rounding = 5.0f;
            float error = 0.0f;
            bool inaccurate = false;
            double dtotal = 0.0;
            double dresult = 0.0;
            float ftotal = 0.0f;
            float fresult = 0.0f;

            /*
             * simulate a possible fp loop, e.g.
             * add totals and multiply by the number of items in the set;
             * we use counter for the whole part and tolerance*rounding for the fraction
             */
            float random_value = 0.0f;

            for(int counter = 1; counter < set_size; ++counter)
            {
                random_value = counter + (rounding * tolerance);
                // built-in static_cast<T>(counter) and
                dresult = dtotal + random_value;
                fresult = ftotal + random_value;
                error = dresult - fresult;

                if(error > tolerance)
                {
                    inaccurate = true;
                    std::printf("ERROR!: Intolerable Inaccuracy after %d loops\n", counter);
                    break;
                }

                dtotal = (dresult * set_size);
                ftotal = (fresult * set_size);
                /*
                 * next time the total is used, it may or may not be accurate;
                 * the project mathematician must restructure this math
                 */
            }

            std::printf("dresult: %f\nfresult: %f\nfloat(dresult): %f\n",
                        dresult, fresult, float(dresult));

            if(inaccurate) {
                // Correct double_to_float()
            } else {
                std::printf("set looped through without inaccuracy\n");
            }
        }

    output:

        ERROR!: Intolerable Inaccuracy after 5 loops
        dresult: 107080905.233487
        fresult: 107080904.000000
        float(dresult): 107080904.000000
  12. casting double* to float*

    I think it's best to distinguish between data size conversion and numeric conversion. If a C-style cast is a size conversion, then it's conceptually incorrect to recommend using it to convert between float and double where the programmer is thinking in numeric / computational terms. The danger of doing this is exemplified by:

        #include <cstdio>
        #include <limits>

        int main()
        {
            int value = std::numeric_limits<short>::max() + 1;
            short val_1 = short(value);
            char val_2 = char(value);
            std::printf("val_1: %d\nval_2: %d", val_1, val_2);
        }

    output:

        val_1: -32768
        val_2: 0
  13. no good way to prevent these errors?

    Perhaps an inexperienced opinion will offer some insight. These seem to be the same issues I face trying to read code from "real" projects by "real" coders.

    - For the if statement: I find that new_location and current_location are often declared at the top of a long function, or as parameters, maybe even as class variables. I find it easier to understand code where their declaration immediately precedes use/modification, even if it means repetition:

          /* lots of initialization math GL */
          T new_location = in_new;
          T old_location = in_old;
          if( out_of_bounds(new_location) ) {}

    - Regarding the typos: I find named operations on objects to be clearer than direct manipulation of member variables. As an inexperienced coder, or even one unfamiliar with someone else's work, it is sometimes difficult to understand what a math operation or a basic assignment does. Naming the operation helps a lot here:

          basic_copy(A, B);
          deep_copy(A, B);
          v_target = { A.x, B.y, B.z };
          B = v_target;

    Regardless of the frequency or simplicity of the operation, I find such naming to be more useful than comments. Any time an object's member variables are manipulated directly outside their associated methods/operators, I find that my mind enters a detective state.
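    As a rough, hypothetical illustration of what I mean by naming the operation (the vec3 type and copy_position are placeholders I made up):

        #include <cstdio>

        struct vec3 { float x, y, z; };

        // One named, documented operation instead of member-by-member fiddling at the call site.
        void copy_position(vec3 &dst, const vec3 &src)
        {
            dst = src;
        }

        int main()
        {
            vec3 player{ 1.0f, 2.0f, 3.0f };
            vec3 camera{ 0.0f, 0.0f, 0.0f };

            copy_position(camera, player);   // reads as a sentence
            // versus: camera.x = player.x; camera.y = player.y; camera.z = player.z;

            std::printf("camera: %.1f %.1f %.1f\n", camera.x, camera.y, camera.z);
            return 0;
        }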
  14. make c++ more like glsl

    The library linked above provides an exact match, so the following is but a humble exercise. The prior discussions in the thread below also seemed to avoid the "naive" approach, so I thought I'd give it the 'ol newbie try... http://www.gamedev.net/topic/650045-vector-swizzling-in-c/

        #include <cstddef>
        #include <iostream>
        #include <limits>

        enum axis { x = 0, y = 1, z = 2 };

        template< typename T >
        class vec3
        {
        public:
            vec3() {}

            vec3(T in_x, T in_y, T in_z)
            {
                _values[0] = in_x;
                _values[1] = in_y;
                _values[2] = in_z;
            }

            vec3(const vec3 &in) { *this = in; }

            vec3& operator=(const vec3 &in)
            {
                if(this != &in)
                {
                    _values[0] = in._values[0];
                    _values[1] = in._values[1];
                    _values[2] = in._values[2];
                }
                return *this;
            }

            T& operator[](axis in_a);
            const T& operator[](axis in_a) const;

            vec3< T > operator()(axis in_0, axis in_1, axis in_2) const;

        private:
            T _values[3] { std::numeric_limits< T >::quiet_NaN(),
                           std::numeric_limits< T >::quiet_NaN(),
                           std::numeric_limits< T >::quiet_NaN() };
        };

        template< typename T >
        T& vec3< T >::operator[](axis in_a)
        {
            return _values[static_cast< size_t >(in_a)];
        }

        template< typename T >
        const T& vec3< T >::operator[](axis in_a) const
        {
            return _values[static_cast< size_t >(in_a)];
        }

        template< typename T >
        vec3< T > vec3< T >::operator()(axis in_0, axis in_1, axis in_2) const
        {
            return vec3< T >( _values[in_0], _values[in_1], _values[in_2] );
        }

        template< typename T >
        void vec3_print(const char *in_name, const vec3< T > &in_v)
        {
            std::cout << in_name << "{ " << in_v[x] << ", " << in_v[y] << ", " << in_v[z] << " }\n";
        }

        int main()
        {
            typedef vec3< float > vec3f;

            vec3f v1{ 1.1f, 1.2f, 1.3f };
            vec3f v2{ 2.1f, 2.2f, 2.3f };
            vec3f v3;

            vec3_print("v1", v1);
            vec3_print("v2", v2);
            vec3_print("v3", v3);

            v1[x] = v2[y];
            v2[z] = v1[y];
            std::cout << "> v1[x] = v2[y]; v2[z] = v1[y];\n";
            vec3_print("v1", v1);
            vec3_print("v2", v2);

            v3[y] = v2[z];
            std::cout << "> v3[y] = v2[z];\n";
            vec3_print("v3", v3);

            vec3_print("> v3(y, z, x)\n", v3(y, z, x));
            vec3_print("v3", v3);

            v3 = v2(z, y, x);
            std::cout << "> v3 = v2(z, y, x);\n";
            vec3_print("v3", v3);
            vec3_print("v2", v2);

            return 0;
        }
  15. What is a tight loop?

    This negative attitude is detrimental to the goal of producing good technology, especially when one asks for the "right way" to accomplish a task. People do and should care about efficiency and correctness even if the component is invisible to the user; it transfers good practices to the parts that are visible. There are numerous examples of systems that "work" but have performance quirks because someone didn't care or thought the part would be invisible. It is best to use technology correctly and efficiently, because it only feels limitless in the moment. A fast but callously developed app on one device will be sluggish on another, or even on the same device when other resource-hungry apps are active... Seriously, please care.