suliman

C++ Avoiding "similar conversion" errors?

Recommended Posts

Hi!

I have two simple functions like this:

bool within(int var, int min, int max){
    if (var >= min && var <= max)
        return true;
    else
        return false;
}

bool within(float var, float min, float max){
    if (var >= min && var <= max)
        return true;
    else
        return false;
}

Calling it like this gives me "error C2666: 'within' : 2 overloads have similar conversions":

float foo = 0.4;

if(within(foo, 0, 1))
   bar();

I know I can make the constants float, like within(foo, 0.0f, 1.0f), but I remember I didn't have to do this before, and I use this ALL over my projects, which now suddenly have loads of compilation errors.

Can I write this in some way so that it just accepts (float, something, something) and uses the second function (the one that works with floats)?

Thanks!
Erik

My gut says to fix the compilation errors. Int <-> float coercion is evil and you should not rely on it. More strongly, you should be scared shitless of it and hate the possibility that it might bite you someday - because it just did.

You can simplify your function to:

template <typename T>
bool within (T value, T min, T max) {
  return (value >= min) && (value <= max);
}

But that doesn't actually make the mixed-type call go through either: template argument deduction will see conflicting types for T (float from foo, int from the literals) and fail, so it's more of a side issue.
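As a rough illustration (this example is mine, not from the post above), the single-parameter template still rejects the mixed call, just with a different diagnostic:

float foo = 0.4f;

// within(foo, 0, 1);        // error: T deduced as both float and int
within(foo, 0.0f, 1.0f);     // OK: T = float
within<float>(foo, 0, 1);    // OK: T given explicitly, the int literals convert to float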

The root problem is that you are passing a float and two ints to a function that is defined to take either three floats or three ints. The conversions the language allows between float and int make this call ambiguous. The correct solution is, again, to stop passing ints when you want to check a float. The easiest change is to suffix your literals with f, as you noted. The harder and arguably messier change is to cast explicitly.
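For concreteness, a minimal sketch of both fixes against the original overloads (the cast form here is just my illustration):

float foo = 0.4f;

if (within(foo, 0.0f, 1.0f))           // float literals: picks the float overload
    bar();

if (within(foo, float(0), float(1)))   // explicit casts: same effect, more noise
    bar();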

15 minutes ago, suliman said:

within(foo, 0, 1)

You've got arguments of type (float, int, int) there. The compiler needs to do some type conversion to make that work.

The problem is that there is more than one valid conversion, and there is no rule the compiler can use to pick one over the other.

Even if you go with a template version you're still mixing types: the (var >= min && var <= max) expression compares the first parameter against the second and against the third, so the types need compatible comparison operators. The rules for the relational operators (greater than, less than) were tightened a little in C++11 and C++14, but the fundamental issue is the same in older compilers.

 

There are a series of implicit conversions that the compiler can do automatically, but in this case it is ambiguous: Is it supposed to turn the float into an int, or turn the int into a float? Conversions in either direction can potentially lose information. Neither one is inherently right or wrong as far as the compiler is concerned. The compiler has a long list of potential conversions and two of them are both valid.

Exactly what happens with that mixed call has historically depended on the compiler: it might quietly convert to int, it might quietly convert to float, and it might or might not warn about doing so.

31 minutes ago, suliman said:

but I remember I didn't have to do this before, and I use this ALL over my projects, which now suddenly have loads of compilation errors.

That's an implementation decision. Historically most compilers would warn by default, and different warning levels could enable or disable the diagnostic.

As for your old code: older versions of Visual C++ were more lax about many conversions, silently choosing conversions that could lose precision or picking between overloads that were otherwise equal in precedence. In particular, VC++ releases before 2008 were far looser about several of the standard conversion rules. The code was technically ambiguous, but the compiler made its own choice about which conversion to take and accepted it without even a warning. As the compiler team pushed for better standards compliance in VC++ 2005 and VC++ 2008, many of these errors started popping up in existing code bases.

The diagnostic is correct: pass all three arguments as the same type. If that means using floating-point constants like 0.0f or 1.0f, then do it, just as in the 16-bit world people would write constants like 0L or 1L, and today many coding standards encourage 0ULL, 1LL, and similar.

Make sure your parameters are the correct type so the compiler doesn't have to guess.
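A small, assumed sketch of matching literal types to the overload you want (these particular calls and the bigLimit variable are my own examples):

float foo = 0.4f;

within(foo, 0.0f, 1.0f);   // all float: unambiguously the float overload
within(7, 0, 10);          // all int: unambiguously the int overload

unsigned long long bigLimit = 1ULL << 40;   // suffixes also make wider integer literals explicit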

 

Thanks!

Is there any way to let the first parameter "decide" the type of the others? In this case, make the first one the "master" and cast the others accordingly? I mean, I can write "float foo = 2;" where the 2 is an integer (according to the logic that causes the problem in my code above), but the compiler doesn't complain about needing to cast it to a float on that line. Can I use some similar technique in the code above?

You could always use something like this.

template <typename T, typename R>
bool within(T value, R min, R max)
{
    return (min <= value) && (value <= max);
}

Since pretty much anything could be passed here, if you find yourself comparing, say, complex numbers to strings, you can add some static_asserts and type traits to turn those mistakes into compile-time errors.
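A minimal sketch of that idea, assuming C++11 and <type_traits> (restricting to arithmetic types is my own choice of constraint, not something from the post above):

#include <type_traits>

template <typename T, typename R>
bool within(T value, R min, R max)
{
    static_assert(std::is_arithmetic<T>::value && std::is_arithmetic<R>::value,
                  "within() expects arithmetic types");
    return (min <= value) && (value <= max);
}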

Frankly, you're better off making type conversions explicit for the reader and just using the right constants, or casting, at the call site.

That still suffers the same underlying problem. The comparison (a <= b) for (float <= int) still has to convert one of the operands, and the usual arithmetic conversions decide which (the rules are well defined, but may not be the ones you are expecting): the int is converted to float, which can lose precision for large values. The compiler will do the implicit conversion the rules call for, which may not be the one you believe to be correct. It is still better to pick the correct types explicitly rather than rely on an implicit conversion.
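A hedged illustration of why that implicit conversion can surprise you (the specific values are my example, not from the post): a float cannot represent every large int exactly, so a mixed comparison can treat different values as equal.

int   bigInt  = 16777217;        // 2^24 + 1, not exactly representable as a float
float asFloat = 16777216.0f;     // 2^24

bool same = (asFloat == bigInt); // true: bigInt converts to float and typically rounds to 2^24
bool le   = (asFloat >= bigInt); // also true, for the same reason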

I suppose yet another option is to provide those mixed overloads explicitly, but that seems even worse to me:

bool within(float v, int a, int b)   { return within(v, (float)a, (float)b); }
bool within(float v, int a, float b) { ... }
bool within(float v, float a, int b) { ... }

Explicit is better than implicit in cases like this.  If you want the float version pass floats. If you want the int version pass ints. Don't mix and match with the hope that implicit conversions will convert the way you want.

Thanks for clearing that up. On a side note, is the template parameter only "available" to the function/braces block that immediately follows it?

template <typename T>
bool within(T var, T min, T max){
	return (var >= min && var <= max);
}
      
template <typename P>
bool withinTest(T var, T min, T max){
	return (var >= min && var <= max);
}                                   

If I write it like this, the second function is not aware of "T" and will not compile.

 

The wording is a little rough, but I think the answer is yes.

The template portion is part of the total signature. People commonly write it on its own line, but that is just for readability. You could put every word on a new line, or squish the code down to a single line of text; the language doesn't treat it any differently.

The first template is:  template <typename T> bool within (T var, T min, T max) {...}

This defines a single template, basically a cookie cutter, with T as the placeholder. When a call to the function is encountered, the candidate types for "T" are tried as replacements for the symbol "T", the best match is found, and a new function is automatically generated with "T" replaced by that type.

That is the entire definition of the template, it stops at the end of the function body.

The second template is: template <typename P> bool withinTest(T var, T min, T max){...} 

This defines a second template, again a cookie cutter, but with P as the placeholder. When the function is encountered, candidate types are tried as replacements for the symbol "P". Unfortunately the parameter list uses "T", and the compiler cannot find any type called "T", so compilation fails.

Again, that is the entire definition, it stops at the end of the function body.
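A minimal sketch of the fix implied above (assuming the second function was meant to mirror the first): each template declares its own parameter name and has to use that name consistently within its own definition.

template <typename T>
bool within(T var, T min, T max) {
    return (var >= min && var <= max);
}

template <typename P>                   // P is only visible inside this template
bool withinTest(P var, P min, P max) {  // so the declaration must use P, not T
    return (var >= min && var <= max);
}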

A class template is: template <typename A> class foo {...} 

This defines a class template, again a cookie cutter, with A as the placeholder. When an instance is created, the candidate types for "A" are tried as replacements for the symbol "A", and so on. It stops at the end of the class body.

 

Both class templates and function templates start with the word "template", then some type parameters inside a < > block, and then the rest of the class or function, which is basically a search-and-replace on those symbols.

I think of it more like a cookie cutter rather than actual code. You make the cookie cutter in the right shape. The cookie maker tries it on all the options, perhaps trying it on gingerbread cookies, sugar cookies, oatmeal cookies, vanilla cookies, wafer cookies, and every other cookie it has ever learned about, eventually finding the version that works best. On rare occasions you will expect there to be one match, perhaps apple-cinnamon flat cookies, only to discover the choice was a chocolate pistachio sea salt cookie you never thought was an option.

Similarly, template code takes a long time to compile because the compiler has to consider all the options (and sometimes "ALL the options" means hundreds of thousands of types, references to types, pointers to types, typedefs of types, parameter packs, enumeration types, deduced types, ...), and considering all of them takes time. In rare cases the best option the compiler finds is not the one you would have expected, which may be a harmless surprise or an actual defect.


Also somewhat critically, something many people forget is that templates are not actual code. One excellent writeup is on cppreference: "A template by itself is not a type, or a function, or any other entity. No code is generated from a source file that contains only template definitions. In order for any code to appear, a template must be instantiated: the template arguments must be determined so that the compiler can generate an actual function or an actual class."  The template tells the compiler how to generate a function with the proper types.  This subtlety helps explain many of the quirky behaviors templates seem to have.
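A small sketch of what instantiation looks like in practice (the explicit instantiation line and the calls are my own illustration, not from the cppreference quote):

template <typename T>
bool within(T var, T min, T max) { return var >= min && var <= max; }

// Nothing is generated for the template above until something fixes T:

template bool within<double>(double, double, double);   // explicit instantiation

int main() {
    within(3, 1, 5);             // implicit instantiation: generates within<int>
    within(0.5f, 0.0f, 1.0f);    // generates within<float>
    return 0;
}

Each distinct T the compiler sees produces its own generated function, which ties back to the compile-time cost mentioned earlier.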

