About irreversible


  1. irreversible

    Smooth normals and Tessellation

    You should have access to "node" vertex normals, right (eg "pixel normals" on your heightmap, which you've precomputed)? If not, you might sample the heightmap in the vertex shader and calculate the normals in real time. Assuming quads, interpolating them in the tessellator should be as simple as bilerping the node normals. You can pass this information to the GPU as adjacency data and access it in the geometry shader*. Remember that since you're dealing with normals, the resulting vectors need to be renormalized after interpolation. * I don't have any experience with D3D, but searching for "vertex array adjacency" turns up this and this.
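    A minimal CPU-side sketch of the interpolation step described above (the struct and helper names are hypothetical, just to illustrate the math):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)     { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, float s)  { return { v.x * s, v.y * s, v.z * s }; }

// Renormalize after interpolation - lerping unit vectors shortens them.
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return scale(v, 1.0f / len);
}

// Bilinearly interpolate the four corner ("node") normals of a quad
// at parametric coordinates (u, v), then renormalize the result.
static Vec3 bilerpNormal(Vec3 n00, Vec3 n10, Vec3 n01, Vec3 n11,
                         float u, float v) {
    Vec3 bottom = add(scale(n00, 1.0f - u), scale(n10, u));
    Vec3 top    = add(scale(n01, 1.0f - u), scale(n11, u));
    return normalize(add(scale(bottom, 1.0f - v), scale(top, v)));
}
```

    The same math maps directly onto a domain shader, where (u, v) is the tessellation coordinate.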
  2. irreversible

    Compile time type name formatting

    Hey - that's pretty neat. I became so focused on splicing the name up at compile time that I never considered filling it in at runtime :D. I like your approach a lot - thanks!
  3. I'm looking into CTTI, which provides some nifty compile-time type info resolution. However, it does so in a marginally inconvenient way (and by marginally I really do mean marginally): it provides the class name in the form of a pointer and a length, as opposed to a NULL-terminated copy. I figured this wouldn't be too difficult to change. In particular, my idea was to take the pointer to the type name and the length that CTTI provides, copy the contents into a statically stored constexpr std::array, and store a pointer into said array within the type info container. Now, I'm neither completely foreign to nor a genius at template metaprogramming, but the fact is that after a few hours I'm staring at an "internal error" and am somewhat out of ideas.

    First, the issue at hand. nameof.hpp contains the apply() function, which I've modified in the following way:

```cpp
namespace ctti {
namespace detail {

// this sub-detail snippet is mine...
namespace detail {
    // the idea is to convert a pointer/string into a std::array
    template<std::size_t... Is>
    constexpr auto make_elements(const char* src, std::index_sequence<Is...>)
        -> std::array<char, sizeof...(Is)>
    {
        return std::array<char, sizeof...(Is)> {{ src[Is]... }};
    }
}

template<typename T, typename = void>
struct nameof_impl
{
    static constexpr ctti::detail::cstring apply()
    {
        // get the name as ctti currently does it...
        static constexpr const ctti::detail::cstring cst =
            ctti::detail::filter_typename_prefix(
                ctti::pretty_function::type<T>().pad(
                    CTTI_TYPE_PRETTY_FUNCTION_LEFT,
                    CTTI_TYPE_PRETTY_FUNCTION_RIGHT));

        // the following is code I added (all of the involved functions are
        // constexpr, so there should be no problem with compile time evaluation)

        // get the length of the type name
        static constexpr const std::size_t N = cst.length();

        // copy the substring into an array and store it as a static member
        // within this function for any type T
        static constexpr const std::array<char, N> arr =
            detail::make_elements(cst.begin(), std::make_index_sequence<N>());

        // get the pointer to the name
        static constexpr const char* const arrData = arr.data();

        // construct a new ctti string that contains a pointer to the
        // to-be NULL-terminated name
        return ctti::detail::cstring(cst.begin(), cst.length(), arrData);
    }
};

}
}
```

    Note that I haven't gotten around to NULL-terminating the array yet. Not entirely sure how to do it, but first things first. The problem arises when using arr (eg accessing the data pointer), which presumably causes Visual Studio to fail to optimize it and spit out a C1001 (internal compiler error). What am I doing wrong in my code? Is this a problem with the compiler, or am I doing something inherently illegal?

    NOTE: I've modified ctti::detail::cstring to contain the additional pointer (not shown here).

    NOTE 2: Visual Studio currently chokes when using more than one template parameter pack, so in case you want to use the library, there's a modification that needs to be done in meta.hpp:

    NOTE 3: Additionally, by default CTTI will produce bad type information if you're using it with classes (as opposed to structs). To fix this, the following change needs to be made:
  4. Drawing lines is hard. That being said, drawing caps/miter joints in and of themselves is a matter of some trigonometry, but properly texturing the line in 3D gets a bit nastier, as you'll either have to up the tessellation quite a bit in corners, resort to projection, or perform some sort of fancy triplanar texturing to avoid distortion.
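    The miter trigonometry mentioned above boils down to a couple of normalizations and a dot product; a 2D sketch (helper names are hypothetical):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

static Vec2 normalize(Vec2 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

// Given the directions of two consecutive line segments, compute the miter
// direction (perpendicular to the joint's bisector) and the distance by which
// the joint vertex must be pushed out so the stroke keeps a constant width.
static Vec2 miterDirection(Vec2 dirIn, Vec2 dirOut,
                           float halfWidth, float* outLength) {
    Vec2 tangent = normalize({ dirIn.x + dirOut.x, dirIn.y + dirOut.y });
    Vec2 miter   = { -tangent.y, tangent.x };   // perpendicular to the bisector
    Vec2 normal  = { -dirIn.y, dirIn.x };       // perpendicular to first segment
    float cosA   = miter.x * normal.x + miter.y * normal.y;
    *outLength   = halfWidth / cosA;            // blows up at very sharp angles
    return miter;
}
```

    Note the division: at near-180-degree turns the miter length explodes, which is exactly why renderers clamp it or fall back to bevel joints.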
  5. This. Another approach I use to track, say, temporary globals is to mark them with an empty preprocessor define:

```cpp
#define _GLOBAL

_GLOBAL static int32 myTempGlobalVar = 0;
```

    This makes it easy to track down all the globals (which can accumulate over time) at a later time, without having to keep tabs on them.
  6. Have you tried something as primitive as cleaning your project (eg rebuilding the intellisense database) or renaming the variable?
  7. Thanks for sharing your thoughts! Here's where I'm mentally at at the moment:

    1) I can't prevent the user from making a raw local copy anyway
    2) a PPoolObject-type proxy seems like a good compromise, but...
    3) I'm leaning toward compiling it to an encoded/checked index in debug mode, but a raw pointer wrapper in release mode. If some smart hat decides to dereference it in a loop, it either gets optimized or becomes an unnecessary bottleneck

    Here are my concerns:

    1) the indirection runs a risk of thrashing the cache, although I haven't written a single line of code so far, so that's just speculation
    2) I'm not entirely sure how to go about locking in the proxy. Technically PPoolObject should lock the pool every time its value is read, which seems like it could add up fast
    3) if I don't lock, then the proxy is as unsafe as a raw pointer in the first place, so it kind of defeats at least part of the idea
    4) in a way this seems like a hack. The real answer here seems to stem from a grander design paradigm. If I manage to enforce a strict destruction cycle, then I feel like trusting the programmer should be fine. Maybe I'm too naive, though...
  8. When recycling direct pointers into a pool of allocated objects whose lifetimes are controlled by well-defined periods (eg session, permanent or temporary ("user")), are there any additional clever security measures I can employ to make it more likely that local copies of these pointers are not used outside the object's life cycle? That is, I'm not able to ensure anything as soon as I emit a raw pointer in the first place, so it's not like I want to prevent the user from being able to segfault the program if they're reckless or go out of their way to do so, but I would still prefer some sort of mental barrier that ensures the programmer is aware of the pointer's lifetime. These raw references are not to be given out in bulk, but are rather likely limited to something like 1-5 instances. I do not want to make them smart pointers, as the pool must be free-able regardless of any dangling references. Two options I can think of are:

    1) add a layer of indirection and, instead of providing raw pointers directly, hand out internally managed weak-pointer-style wrappers. These could be set to null when a pool is freed or an object is recycled, but would in no way prevent the programmer from making local copies anyway
    2) force the programmer to use specific context-sensitive calls to retrieve the pointer in the first place that spell out "caveat emptor". Eg something like GetSessionPointer(), GetPermanentPointer() and GetUserPointer().

    Cleanup of pool data is performed when a session is terminated (eg when a level/chunk is unloaded), the program closes, or the user manually decides to free up temporary memory. A callback is invoked to notify the programmer when this occurs. In the past I've opted to use individual allocations, but there are a few classes of objects that I wish to manage in bulk (these are mostly related to general speed improvements, serialization of level data, etc). Any thoughts on how to add additional security on top of this? What's the best approach in a production environment?
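    A common way to sketch option 1 is a generation-counted handle: each slot carries a counter that is bumped on recycle, so a stale local copy detects that its object is gone instead of dangling. A minimal sketch with hypothetical names (not production code; no locking shown):

```cpp
#include <cstdint>
#include <vector>

struct Slot {
    int32_t  value      = 0;
    uint32_t generation = 0;   // bumped every time the slot is recycled
    bool     alive      = false;
};

// A weak, non-owning handle: it never keeps the object alive and never
// prevents the pool from being freed - it just knows when it has gone stale.
struct Handle {
    uint32_t index      = 0;
    uint32_t generation = 0;
};

class Pool {
public:
    Handle allocate(int32_t value) {
        // For simplicity this sketch always appends; a real pool would
        // reuse dead slots and hand out their current generation.
        slots_.push_back({ value, 0, true });
        return { static_cast<uint32_t>(slots_.size() - 1), 0 };
    }

    void recycle(Handle h) {
        Slot& s = slots_[h.index];
        s.alive = false;
        ++s.generation;            // invalidates every outstanding handle
    }

    // Returns nullptr for stale handles instead of a dangling pointer.
    int32_t* resolve(Handle h) {
        Slot& s = slots_[h.index];
        if (!s.alive || s.generation != h.generation)
            return nullptr;
        return &s.value;
    }

private:
    std::vector<Slot> slots_;
};
```

    In release builds the handle could degrade to a raw pointer wrapper, as discussed above, keeping the generation check as a debug-only safety net.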
  9. One workaround would be to Watch the variable as a bounded array: "d3d_byte_code, 30" displays the first 30 bytes. TBH I've never seen it hang like this while debugging, so start by adding the array to the watch list and see if it still lags. Hover viewing large arrays seems pointless anyway, as you have to click on the lens icon to view the entire contents. Furthermore, a large byte array (especially something akin to what your array's name seems to suggest) seems like a strange thing to debug by hovering over it in the first place. So, if VS is freezing when you do, wouldn't the easiest solution be to just... not hover over it?
  10. irreversible

    Visual Studio 2017 usability issues

    The ironic and also kind of funny thing here is a blog post by MS a few years ago, explaining why certain C++ features were not implemented (I believe it had to do with something in the standard library). I can't find the topic right off the bat but, as is customary, the reasoning was there and it did make sense if you looked at it from a certain angle. The post was inundated with backlash and negative responses along the lines of "gcc has had these for years now" and "it's a paid product", which were valid criticisms. From a certain angle. The compiler team (or at least someone from there) tried pretty actively to respond and calm people down, but you can guess how well that went. I mean, MS's STL didn't even implement emplace_back() properly back then (I'm kinda assuming it does now). I remember reading this and trying to put myself in both shoes. I was biased, of course, because the lack of conformance to the standard was becoming increasingly frustrating, and I think I was doing research to make sure it was worthwhile upgrading to 2015 (which I finally decided it wasn't). In any case, the entire "discussion" suddenly took on a completely new look when it dawned on people that something like one single person was responsible for implementing the STL back end and he flat out couldn't figure out how to make things work. Which is when I formed a completely new view of the VS dev team:

    - C++ is secondary, if not tertiary or quaternary, on their list of priorities
    - the C++ compiler and likely tools development teams appear to be under-staffed and/or focused on more than one thing. I don't want to bring the word "incompetence" to the table, but some of the bugs and hacks (I'm looking at you, hint files and blatantly wrong underlining), which the team routinely dances around, when not flat out ignoring them, on the blog/forums, suggest there is some degree of, shall we say, uncoordination at play here
    - as far as I'm concerned, a run-of-the-mill text editor should consume around 1-3% CPU when implemented correctly. That's what Chrome is consuming as I type this (and it does dictionary lookups as I type). The fact that VS jumps to 20-40% while I'm typing a damn string is a disgrace and a testament to the absolute irrelevance of hardware. I'm on a laptop, and typing in the IDE drains my battery 3x faster than watching an HD video on Youtube. Let that sink in for a moment. Software was an order of magnitude faster 15 years ago - you still needed to redraw a similar number of pixels every time a key was pressed, but the computer simply didn't have the oomph it has today, so things needed to be handled differently. By writing optimized software. Just recall what it was like to write a Word/Excel document in 2003 and what a pile of broken and unoptimized bloatware Office 365 has turned into.
  11. irreversible

    Visual Studio 2017 usability issues

    Good to know I haven't screwed up my settings or something. On the flipside, too bad they've apparently opted to implement a completely new intellisense and broken so many basic things while doing so. Here are a few further observations:

    - occasionally, the IDE starts chewing up CPU like crazy, getting to 30-40% on my 4/4 core. Restarting fixes this
    - the enum auto-complete issue seems to come and go. It did work for an hour and then stopped working again
    - rebuilding the database seems to help occasionally. It's a pain, though

    Ironically, just as I started to respond to this thread, I got a UX questionnaire when starting up VS in the background. It was surprisingly thorough, but then hard froze the IDE just as I was about to click "Done". As for navigation (and this is not an endorsement) - I've found VAX to be well worth the money they're asking. One of the best things it has is a Ctrl+click-to-navigate feature - if you click on a function name, it lists all declarations and definitions and you can go to whichever one you need. AFAIR it does need to be manually enabled, though. I wish they made their custom intellisense about 2-3x faster and added a bunch of features the built-in one actually does have. Right now it just feels awkward and unnatural. As for other IDEs - ironically, MS's own free Visual Studio Code is actually a fair bit faster, although it does lack many of the features I've grown accustomed to. It's fairly good for GLSL and stuff, though.
  12. irreversible

    Visual Studio 2017 usability issues

    I recently upgraded from 2013 Pro to 2017 Community and I've been putting the IDE through its paces. After a couple of weeks, here are the top things that still confuse me. I'm wondering if there might be something I'm missing:

    - like everyone else, I don't understand the sudden need for the green squigglies. I've seen MS suggest adding problematic macro declarations to hint.cpp over and over again, but that 1) didn't use to be necessary, 2) is a blatant hack and 3) doesn't work
    - occasionally, when writing a new variable declaration or a function definition, I get a sporadic "Object reference not set" error in the form of a modal popup. Usually this happens when I type in a type, a variable name and then press tab (which I do often due to the way I format my code). The error persists until I press OK/Cancel (eg Enter or Escape), then briefly multitask out of the IDE and back again. I used to get this error in 2013, but only occasionally, when code peek failed to display something. I can't really find information on this particular flavor of the error on the web
    - despite being eligible for the free Community edition, I bought myself into the new ecosystem by renewing Visual Assist X. Initially this caused massive slowdown issues in the editor, to the point where the screen would be redrawn 2-3 times per second while scrolling quickly. I alleviated this by disabling VAX's own intellisense (which is apparently really slow and feels incomplete) and disabling most extensions I was using in 2013. I'm mentioning this because it leads me to my next issue:
    - auto-complete (eg the built-in parser) flat out fails in many cases. The most notable case happens when listing enum members, which simply does not work. Now, I don't use vanilla enum syntax, but rather have my own self-unrolling macros that recursively expand enum declarations using FOR_EACH. This, however, was not an issue in 2013
    - what baffles me even more is the fact that auto-complete no longer suggests function names and/or signatures when overloading a function in a derived class. It simply doesn't suggest anything
    - build times seem comparable to 2013, but link times can be uneven and abysmal, even when changes are small and incremental

    Given that the IDE has been out for a while and my whole system is as up to date as it gets, I'm left to wonder if at least some of these issues can be mitigated in some way. Considering that it took me 10+ hours just to find the right config to get the IDE to be responsive without sacrificing too many things I'm accustomed to, it's entirely possible I've missed a checkbox or somehow managed to misconfigure either VS or VAX. Are you having a similar user experience? Any ideas or suggestions?
  13. irreversible

    Stretched result on sobelx filter

    Your output looks like a multiple of the original in some form - the basic structure is there, but the image appears to be offset and banded for whatever reason. Are you running your filter in non-RGB mode (eg not per channel)? You really should post some code - not only the filter, but more importantly the calling loop and how you output the resulting image. Have you tried a pass-through filter - eg are you getting exactly the same output as the input when no filter is applied? There are so many links in the chain here that could be wrong that I wouldn't be surprised if your filter itself was working properly. Oh, and one more thing - have you accounted for a possible alpha channel?
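    The pass-through check suggested above can reuse the same convolution loop with an identity kernel; a single-channel sketch (hypothetical names, edge pixels simply copied through to keep it short):

```cpp
#include <vector>

// Apply a 3x3 kernel to a single-channel image; border pixels are
// passed through unchanged for brevity.
static std::vector<float> convolve3x3(const std::vector<float>& img,
                                      int w, int h, const float k[9]) {
    std::vector<float> out = img;
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            float sum = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    sum += img[(y + ky) * w + (x + kx)]
                         * k[(ky + 1) * 3 + (kx + 1)];
            out[y * w + x] = sum;
        }
    return out;
}

// Identity kernel: the output must match the input exactly.
static const float kIdentity[9] = { 0, 0, 0, 0, 1, 0, 0, 0, 0 };
// Horizontal Sobel: a uniform image must produce zero response inside.
static const float kSobelX[9]   = { -1, 0, 1, -2, 0, 2, -1, 0, 1 };
```

    If the identity kernel already distorts the image, the bug is in the loop or the output stage, not in the Sobel weights.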
  14. irreversible

    Player to player collision in MMOs

    The way collision works in most networked titles, including MMOs, is two-fold:

    1) the client-side code performs its own collision tests to provide immediate feedback to the end user, removing lag from the equation
    2) to avoid any third-party intervention and potentially mitigate player-to-player lag, the server always performs its own collision checks and syncs its state to all players periodically

    You cannot defer this check to any single client, because doing so is prone to both cheating and technical issues, like rounding errors specific to different systems, which can easily desync the game state. Keep your gameplay code on the server, always. Emulate it on the client to make the local user experience more immediate and responsive, but always, always validate it against the server state.
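    The two-fold structure can be sketched roughly as follows (a toy 1D example with hypothetical names, just to show the prediction/validation split):

```cpp
#include <cmath>

// Client side: predict movement immediately so input feels responsive,
// running a local collision test against a wall at position wallAt.
static float clientPredict(float pos, float input, float wallAt) {
    float next = pos + input;
    return (next < wallAt) ? next : wallAt;
}

// Server side: run the same authoritative check. The server never trusts
// the position the client reports - only its input.
static float serverSimulate(float pos, float input, float wallAt) {
    float next = pos + input;
    return (next < wallAt) ? next : wallAt;
}

// On receiving the periodic server sync, the client snaps to the
// authoritative state if its prediction diverged.
static float reconcile(float predicted, float authoritative) {
    return (std::fabs(predicted - authoritative) > 0.001f) ? authoritative
                                                           : predicted;
}
```

    When both sides run the same deterministic rules, reconciliation is usually a no-op; when a client cheats or drifts, the server state wins.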
  15. irreversible

    Buying a decent last gen laptop in the EU?

    I agree that the Lenovo example isn't strictly generalizable, but the root problem is there. Reviews rarely cover even the most basic flaws that Joe Average may encounter in the mass market - as opposed to preview samples, which are likely to be hand-picked to begin with. Coil whine is a handy example - this isn't something I've read about in reviews. It's one of those things you probably don't even know to look for prior to purchase. It doesn't help that not all laptops, even from the same SKU, are made equal, so one specimen might have it bad while another might be free of it. Compound this across generations of the same model, though, and yeah, I would say it's a pretty gross QC issue, or worse yet - sheer negligence. The bottom line is: for this kind of money, one shouldn't feel like they're playing the lottery.