irreversible

Member

  • Content Count: 1950
  • Joined
  • Last visited

Community Reputation: 2871 Excellent

2 Followers

About irreversible

  • Rank: Crossbones+

Personal Information

  • Role: Programmer
  • Interests: Programming


  1. Hey - this works! Like I mentioned in my original post, though, I'd like to understand why this is the case. It would seem that an operator defined in the same namespace as the call site (in this case the global namespace) should, regardless of placement, be preferred by a call from that namespace. PS - I'm also not quite sure why I shouldn't put these specific overloads in the global namespace, since they get exposed via an early using namespace logging; anyway.
  2. Nope - no warning or anything. The wrong operator just gets called blindly. As for it being a compiler bug - that's possible, although it would be nice to get some external verification, since I only have VS set up on my rig. I can't recall whether this was also the case in VS2013... As for the code - there isn't much else to show, really. This is pretty much the minimal example, but I might write up a compilable test program (along the lines of the sketch below) when I get back to my computer.
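    A minimal sketch of such a test program (the app namespace, the struct bodies and the printf payloads are stand-ins, not the original code). It also hints at why the resolution can differ: argument-dependent lookup only searches the namespaces associated with the operand types (here logging and math), so the global overload can only be found by ordinary unqualified lookup.

    #include <cstdio>

    namespace math {
        struct vec3 {
            float x, y, z;
            operator float*() { return &x; } // the implicit conversion from the original post
        };
    }

    namespace logging {
        struct log {
            // friend operator: visible to ADL because logging is an
            // associated namespace of logging::log
            friend log& operator<<(log& l, void* v) {
                std::printf("void* overload: %p\n", v);
                return l;
            }
        };
    }

    // global overload: ADL never finds this one (neither logging::log nor
    // math::vec3 lives in the global namespace); only ordinary unqualified
    // lookup does, and only if it's declared before the call
    logging::log& operator<<(logging::log& l, const math::vec3& v) {
        std::printf("vec3 overload: (%f, %f, %f)\n", v.x, v.y, v.z);
        return l;
    }

    namespace app {
        void f(logging::log& l, math::vec3& v) {
            // a conforming compiler should pick the exact-match vec3
            // overload here; if the void* one fires, that's the bug
            l << v;
        }
    }

    int main() {
        logging::log l;
        math::vec3 v{ 1.0f, 2.0f, 3.0f };
        app::f(l, v);
        return 0;
    }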
  3. The void* operator is a friend function of the logging class - it doesn't need exposure to the global namespace. I personally love structuring my code with namespaces. It's a way to filter everything.
  4. I'm guessing the problem might be tied to my overall design, but I'd like to know why this behavior happens. Some background: I have a logger class, which implements a number of << operators for common types.

    namespace logging {
        class log {
        public:
            ...
            _FRIEND _INLINE logging::log& operator << (IN logging::log& l, IN void* v) {
                LOG_APPEND_SPRINTF_MSG(32, "%p", v);
                return l;
            }
            ...
        };
    }

    There's a reason I singled out the overload for void*: my vector class provides an automatic cast operator to float*. In order to output a vector, however, I provide a special case for the << operator in the global namespace:

    _INLINE logging::log& operator<<(logging::log& log, const math::vec3& v) {
        log << '(';
        log.append_float(v.x);
        log << ", ";
        log.append_float(v.y);
        log << ", ";
        log.append_float(v.z);
        log << ')';
        logging::dispatch_log_message(log);
        return log;
    }

    Now, if I log a vec3 from a function that lies in the global namespace, the proper << operator gets called. However, when logging from a function that resides inside any namespace, Visual Studio first casts the vector to float* and then forwards that to the void* operator. For now, the only solution I know of apart from rewriting my logging class is a bad one: define the << operator for vec3 in the respective namespace (and this has to be done for every namespace that wants to log a vector type).

    namespace mynamespace {
        logging::log& operator<<(logging::log& log, const math::vec3& v) ...
    }

    void mynamespace::foo(math::vec3& v) {
        log << v << endl;
    }

    This is extremely annoying. What prompts this behavior/operator resolution? More importantly, can this be solved without expelling the base type operators from the logging class? (Frankly, I haven't even tried; for now I'm also using certain internal functions, so I would need to restructure the entire logging class to boot. Besides, I'd like to know if there's a simpler solution out of sheer principle.)
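    For reference, the conventional fix here (a sketch, untested against the actual code above) is to declare the vec3 overload in the namespace of one of its operands - math being the natural home - so that argument-dependent lookup finds it from any calling namespace:

    namespace math {
        // ADL finds this from any namespace, because math is an
        // associated namespace of vec3
        _INLINE logging::log& operator<<(logging::log& log, const vec3& v) {
            // ...same body as the global version above...
        }
    }

    If the suggestion that worked in this thread was along these lines, the exact-match overload then becomes visible via ADL everywhere and beats the float* -> void* conversion path during overload resolution, with no per-namespace copies needed.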
  5. irreversible

    clCreateFromGLBuffer crash

    if(clCreateFromGLBuffer == NULL) ...

    How? I can't see it in your code. Try stepping through your program and check if anything looks out of the ordinary. Like I said, unless you've malformed one or more of your calls, the culprit is most likely somewhere else in your program. Just to have a point of reference, try running your program on a different computer with a different hardware setup. It's unlikely, but not impossible, that your driver installation is corrupted etc. What version is the library you use to initialize OpenCL? (Update to the latest one.) This might hurt your ego, but if something crashes, then 99.999% of the time it's because of your code. Except when it isn't...
  6. irreversible

    clCreateFromGLBuffer crash

    Is clCreateFromGLBuffer a valid function pointer? Eg have you tested it for null? Other than that, I've no idea what your code is doing, but there are two primary reasons why it can crash like this:

    - The simpler reason has to do with your function arguments. Is the context valid? You're not running any checks at or around clCreateContext (or any other CL API function, for that matter) - how can you be sure it succeeds? Add an error check after each call (better yet, wrap each call in a checker macro that you can later define to empty - see the sketch below).

    - The more complex reason has to do with the rest of your code. Create a test project. Remove any and all outside factors (multi-threading, prior code not related to the call, etc). If you've checked everything outlined above and it still crashes, then you're looking at a more serious problem. Unless you've borked your arguments somewhere, the most likely culprit here is a buffer overflow (eg writing out of bounds) anywhere else in your program, which goes undetected but thrashes the pointer to clCreateFromGLBuffer. Catching this can be hard. However, for starters, if your IDE supports it, you could try setting a hardware/data breakpoint on clCreateFromGLBuffer after it's been loaded and letting your program run to see if the address is modified. If it is, you'll hopefully know where the problem is.

    PS - the code you posted doesn't necessarily reproduce the issue for someone else, because it's not a full program.

    PPS - you're asking for a solution, but your code doesn't seem to be doing any validation/error checking of its own. Without knowing anything about CL, I'm going to go out on a limb and say either your call to clCreateContext is malformed or you have a memory write problem literally anywhere else in your code.
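    A minimal sketch of such a checker macro (CL_CHECK is a made-up name; most CL API calls return a cl_int status directly, while the clCreate* family reports through an errcode_ret out-parameter instead):

    #include <CL/cl.h>
    #include <cstdio>
    #include <cstdlib>

    // wrap any expression that yields a cl_int status; redefine it to
    // expand to just (call) once the code is trusted
    #define CL_CHECK(call)                                                  \
        do {                                                                \
            cl_int status_ = (call);                                        \
            if (status_ != CL_SUCCESS) {                                    \
                std::fprintf(stderr, "%s failed with error %d at %s:%d\n",  \
                             #call, (int)status_, __FILE__, __LINE__);      \
                std::abort();                                               \
            }                                                               \
        } while (0)

    // usage:
    //   CL_CHECK(clFinish(queue));
    //
    //   cl_int err = CL_SUCCESS;
    //   cl_mem mem = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);
    //   CL_CHECK(err); // creation calls report through errcode_ret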
  7. irreversible

    Smooth normals and Tessellation

    You should have access to "node" vertex normals, right (eg "pixel normals" on your heightmap that you've precomputed)? If not, you can sample the heightmap in the vertex shader and calculate the normals in real time. Assuming quads, interpolating them in the tessellator should be as simple as bilerping the node normals. You can pass this information to the GPU as adjacency data and access it in the geometry shader*. Remember that since you're dealing with normals, the resulting vectors need to be renormalized after interpolation (see the sketch below). * I don't have any experience with D3D, but searching for "vertex array adjacency" turns up this and this.
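    The interpolation step itself, as a plain C++ sketch (a hypothetical minimal vec3; in practice this would live in the domain/tessellation evaluation shader):

    #include <cmath>

    struct vec3 { float x, y, z; };

    static vec3 add(vec3 a, vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static vec3 mul(vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

    static vec3 normalize(vec3 v) {
        const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return mul(v, 1.0f / len);
    }

    // bilerp the four corner ("node") normals of a quad patch at the
    // parametric coordinates (u, v), then renormalize - the interpolated
    // vector is generally shorter than unit length
    static vec3 patch_normal(vec3 n00, vec3 n10, vec3 n01, vec3 n11, float u, float v) {
        const vec3 bottom = add(mul(n00, 1.0f - u), mul(n10, u));
        const vec3 top    = add(mul(n01, 1.0f - u), mul(n11, u));
        return normalize(add(mul(bottom, 1.0f - v), mul(top, v)));
    }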
  8. irreversible

    Compile time type name formatting

    Hey - that's pretty neat. I became so focused on splicing the name up at compile time that I never considered filling it in at runtime :D. I like your approach a lot - thanks!
  9. I'm looking into CTTI, which provides some nifty compile-time type info resolution. However, it does so in a marginally inconvenient way (and by marginally I really do mean marginally): it provides the class name in the form of a pointer and a length, as opposed to a NULL-terminated copy. I figured this wouldn't be too difficult to change. In particular, my idea was to take the pointer to the type name and the length that CTTI provides, copy the contents into a statically stored constexpr std::array, and store a pointer into said array within the type info container. Now, I'm neither completely foreign to nor a genius at template metaprogramming, but the fact is that after a few hours I'm staring at an "internal error" and am somewhat out of ideas. First, the issue at hand. nameof.hpp contains the apply() function, which I've modified in the following way:

    namespace ctti {
        namespace detail {
            // this sub-detail snippet is mine...
            namespace detail {
                // the idea is to convert a pointer/string into a std::array
                template<std::size_t... Is>
                constexpr auto make_elements(const char* src, std::index_sequence<Is...>)
                    -> std::array<char, sizeof...(Is)> {
                    return std::array<char, sizeof...(Is)> {{ src[Is]... }};
                }
            }

            template<typename T, typename = void>
            struct nameof_impl {
                static constexpr ctti::detail::cstring apply() {
                    // get the name as ctti currently does it...
                    static constexpr const ctti::detail::cstring cst =
                        ctti::detail::filter_typename_prefix(
                            ctti::pretty_function::type<T>().pad(
                                CTTI_TYPE_PRETTY_FUNCTION_LEFT,
                                CTTI_TYPE_PRETTY_FUNCTION_RIGHT));

                    // the following is code I added (all of the involved functions
                    // are constexpr, so there should be no problem with compile
                    // time evaluation)

                    // get the length of the type name
                    static constexpr const std::size_t N = cst.length();

                    // copy the substring into an array and store it as a static
                    // member within this function for any type T
                    static constexpr const std::array<char, N> arr =
                        detail::make_elements(cst.begin(), std::make_index_sequence<N>());

                    // get the pointer to the name
                    static constexpr const char* arrData = arr.data();

                    // construct a new ctti string that contains a pointer to the
                    // to-be NULL-terminated name
                    return ctti::detail::cstring(cst.begin(), cst.length(), arrData);
                }
            };
        }
    }

    Note that I haven't gotten to NULL-terminating the array yet. I'm not entirely sure how to do it, but first things first. The problem arises when using arr (eg accessing the data pointer), which causes Visual Studio to presumably not optimize it out and spit out a C1001 (internal compiler error). What am I doing wrong in my code? Is this a problem with the compiler, or am I doing something inherently illegal?

    NOTE: I've modified ctti::detail::cstring to contain the additional pointer (not shown here).

    NOTE 2: Visual Studio currently chokes when using more than one template parameter pack, so in case you want to use the library, there's a modification that needs to be done in meta.hpp:

    NOTE 3: Additionally, by default CTTI will produce bad type information if you're using it with classes (as opposed to structs). To fix this, the following change needs to be made:
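    For what it's worth, the null-termination step should be doable with the same index-sequence trick by making the array one element longer. Also, a suspicion regarding the "inherently illegal" part: C++14 forbids variables of static storage duration inside a constexpr function body, which the modified apply() has several of - that alone could explain why the compiler gives up. A standalone sketch, independent of CTTI's internals (raw and N stand in for the pointer/length pair that CTTI provides):

    #include <array>
    #include <cstddef>
    #include <cstdio>
    #include <utility>

    // copy sizeof...(Is) characters from src and append a '\0'
    template<std::size_t... Is>
    constexpr std::array<char, sizeof...(Is) + 1>
    make_terminated(const char* src, std::index_sequence<Is...>) {
        return {{ src[Is]..., '\0' }};
    }

    // stand-in for the (pointer, length) pair CTTI hands out
    constexpr char raw[] = { 'v', 'e', 'c', '3' }; // note: not terminated
    constexpr std::size_t N = sizeof(raw);

    // class-scope static instead of a static local inside a constexpr
    // function, which sidesteps the restriction mentioned above
    struct name_storage {
        static constexpr std::array<char, N + 1> value =
            make_terminated(raw, std::make_index_sequence<N>{});
    };
    constexpr std::array<char, N + 1> name_storage::value; // odr definition (pre-C++17)

    int main() {
        std::printf("%s\n", name_storage::value.data()); // prints "vec3"
        return 0;
    }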
  10. Drawing lines is hard. That being said, drawing caps/miter joints in and of themselves is a matter of some trigonometry (see the sketch below), but properly texturing the line in 3D gets a bit nastier, as you'll either have to up the tessellation quite a bit in corners, resort to projection, or perform some sort of fancy triplanar texturing to avoid distortion.
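    A 2D sketch of the basic miter-joint trigonometry (vec2 and its helpers are hypothetical; n1 and n2 are the unit normals of the two segments meeting at the joint):

    #include <cmath>

    struct vec2 { float x, y; };

    static vec2  add(vec2 a, vec2 b) { return { a.x + b.x, a.y + b.y }; }
    static vec2  mul(vec2 a, float s) { return { a.x * s, a.y * s }; }
    static float dot(vec2 a, vec2 b) { return a.x * b.x + a.y * b.y; }

    static vec2 normalize(vec2 v) { return mul(v, 1.0f / std::sqrt(dot(v, v))); }

    // offset from the shared vertex to the outer miter corner: the miter
    // direction halves the angle between the segment normals, and its
    // length grows as the joint gets sharper
    static vec2 miter_offset(vec2 n1, vec2 n2, float half_thickness) {
        const vec2  m   = normalize(add(n1, n2));
        const float len = half_thickness / dot(m, n1); // diverges near 180° turns - cap it or fall back to a bevel
        return mul(m, len);
    }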
  11. This. Another approach I use to track, say, temporary globals is by marking them with an empty preprocessor define:

    #define _GLOBAL

    _GLOBAL static int32 myTempGlobalVar = 0;

    This makes it easy to track down all the globals (which can accumulate over time) at a later time without having to keep tabs on them.
  12. Have you tried something as primitive as cleaning your project (eg rebuilding the intellisense database) or renaming the variable?
  13. Thanks for sharing your thoughts! Here's where I'm at mentally at the moment:

    1) I can't prevent the user from making a raw local copy anyway;
    2) a PPoolObject-type proxy seems like a good compromise, but...
    3) I'm leaning toward compiling it to an encoded/checked index in debug mode, but a raw pointer wrapper in release mode (a sketch of this is below). If some smart hat decides to dereference it in a loop, it either gets optimized or becomes an unnecessary bottleneck.

    Here are my concerns:

    1) the indirection runs the risk of thrashing the cache, although I haven't written a single line of code so far, so that's just speculation;
    2) I'm not entirely sure how to go about locking in the proxy. Technically PPoolObject should lock the pool every time its value is read, which seems like it could add up fast;
    3) if I don't lock, then the proxy is as unsafe as a raw pointer in the first place, so it kind of defeats at least part of the idea;
    4) in a way this seems like a hack. The real answer here seems to stem from a grander design paradigm. If I manage to enforce a strict destruction cycle, then I feel like trusting the programmer should be fine. Maybe I'm too naive, though...
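    A sketch of what such a proxy might look like (PPoolObject is the name floated above; the pool layout, the generation counters and the _DEBUG switch are assumptions, not existing code):

    #include <cassert>
    #include <cstdint>
    #include <vector>

    // hypothetical pool: object slots plus per-slot generation counters,
    // so recycling a slot invalidates any outstanding handles to it
    template<typename T>
    struct Pool {
        std::vector<T>        slots;
        std::vector<uint32_t> generations;
    };

    template<typename T>
    class PPoolObject {
    public:
    #ifdef _DEBUG
        // debug build: a checked (pool, index, generation) triple
        PPoolObject(Pool<T>& pool, uint32_t index)
            : pool_(&pool), index_(index), generation_(pool.generations[index]) {}

        T* get() const {
            // a recycled slot trips this assert instead of silently
            // handing back stale memory
            assert(pool_->generations[index_] == generation_);
            return &pool_->slots[index_];
        }

    private:
        Pool<T>* pool_;
        uint32_t index_;
        uint32_t generation_;
    #else
        // release build: a raw pointer wrapper with zero overhead
        PPoolObject(Pool<T>& pool, uint32_t index) : ptr_(&pool.slots[index]) {}
        T* get() const { return ptr_; }

    private:
        T* ptr_;
    #endif
    };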
  14. When recycling direct pointers into a pool of allocated objects whose lifetimes are controlled by well-defined periods (eg session, permanent or temporary ("user")), are there any additional clever security measures I can employ to make it more likely that local copies of these pointers are not used outside the object's life cycle? That is, I'm not able to ensure anything as soon as I emit a raw pointer in the first place, so it's not like I want to prevent the user from being able to segfault the program if they're reckless or go out of their way to do so, but I would still prefer some sort of mental barrier that ensures the programmer is aware of the pointer's lifetime. These raw references are not to be given out in bulk; rather, they're likely limited to something like 1-5 instances. I do not want to make them smart pointers, as the pool must be free-able regardless of any dangling references.

    Two options I can think of are:

    1) add a layer of indirection and, instead of providing raw pointers directly, hand out internally managed weak-pointer-style wrappers. These could be set to null when a pool is freed or an object is recycled, but would in no way prevent the programmer from making local copies anyway;
    2) force the programmer to use specific context-sensitive calls to retrieve the pointer in the first place that spell out "caveat emptor". Eg something like GetSessionPointer(), GetPermanentPointer() and GetUserPointer().

    Cleanup of pool data is performed when a session is terminated (eg when a level/chunk is unloaded), the program closes, or the user manually decides to free up temporary memory. A callback is invoked to notify the programmer when this occurs. In the past I've opted for individual allocations, but there are a few classes of objects that I wish to manage in bulk (mostly related to general speed improvements, serialization of level data, etc). Any thoughts on how to add additional security on top of this? What's the best approach in a production environment?
  15. One workaround would be to Watch the variable as a bounded array: "d3d_byte_code, 30" displays the first 30 bytes. TBH I've never seen it hang like this while debugging, so start by adding the array to the watch list and see if it still lags. Hover-viewing large arrays seems pointless anyway, as you have to click on the lens icon to view the entire contents. Furthermore, a large byte array (especially something akin to what your array's name seems to suggest) seems like a strange thing to debug by hovering over it in the first place. So, if VS is freezing when you do, wouldn't the easiest solution be to just... not hover over it?