
King Mir

Member Since 11 Jun 2006
Offline Last Active Jun 08 2015 06:56 AM

#5196006 NULL vs nullptr

Posted by King Mir on 02 December 2014 - 09:23 PM

so the best thing to do is just stick with existing conventions of the rest of the codebase (which in my case is literal 0).


The best choice for your code base may indeed be to follow its old conventions, but that doesn't mean it's a good idea to recommend that everyone working on a new project follow the conventions from back when your project started.


nullptr is superior to 0 because it prevents silent errors.


IMO standard library writers should have used something like nullptr as the expansion of NULL even before C++11. It would only have required a little compiler help to make code like pow(NULL) a warning.
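
For illustration, a minimal sketch (hypothetical overloads, not from the thread) of the kind of silent error nullptr prevents: with a literal 0, overload resolution quietly picks the integer overload instead of the pointer one.

#include <iostream>

void report(int id)        { std::cout << "int overload: " << id << '\n'; }
void report(const char* s) { std::cout << "pointer overload\n"; }

int main() {
    report(0);        // silently calls report(int), even if a pointer was intended
    // report(NULL);  // depending on how NULL is defined, calls report(int) or is ambiguous --
                      // either way, never the pointer overload
    report(nullptr);  // unambiguously calls report(const char*)
}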

#5190732 Why don't you use GCC on windows?

Posted by King Mir on 02 November 2014 - 08:44 AM


How come everyone uses VS's compiler on Windows and then GCC when they port the same code to Linux?

Wouldn't it make sense to use GCC on Windows as well?

Most of the IDEs for Linux are available on Windows as well.


Sure it makes sense. That's why I do it. The main GCC port to Windows is known as MinGW. Cygwin also provides GCC on Windows, but I don't have much experience with it.


GCC depends on a Linux-like environment, so MinGW (and Cygwin) include not just GCC but also a small set of GNU-related libraries and tools that GCC requires.


MinGW and Cygwin aren't officially part of the GNU project, so MinGW is usually a few months behind the latest GCC release (they have to wait for a GCC version to be officially released before porting it, which takes a little time). Development is continuous and consistent, though, so I'm able to use the latest C++ features with MinGW.


Qt Creator is my IDE of choice, and its installer will bundle MinGW if you ask it to.


To be clear, Cygwin itself isn't a port of GCC to Windows. Cygwin is a POSIX API layer that lets programs written for a POSIX (Linux, Unix) environment run on Windows, provided the user has Cygwin installed. You do have to compile for Cygwin; Linux binaries don't run under it. But you generally don't need to wait for things to be specially ported to Cygwin if the previous version already worked there, so in theory you should be able to use a pre-release version of GCC. This is partly because GCC officially supports Cygwin as a target. (Many Linux applications just work on Cygwin, but a compiler needs special attention.)


You can also use clang with Cygwin.


But because binaries compiled under Cygwin need the Cygwin runtime, it's not a practical way to build applications for Windows, especially non-GPL-licensed applications. It is a way to write applications for Linux on your Windows box.

#5187904 How to remove GET variables from URL in php

Posted by King Mir on 18 October 2014 - 06:11 PM

You could put the data in custom HTTP headers instead.

#5184960 Should i prioritize CPU or Memory usage?

Posted by King Mir on 04 October 2014 - 09:31 AM

I know this is a rather ambiguous question, and the answer will depend highly on the software at hand, but generally speaking, if I have a situation where I could prioritize the CPU or the memory, which one should I choose, or which one do you generally choose?


For example: if I create code that will load certain areas of the map around the player, I have two options:

a) The size of each area can be bigger, therefore storing more data in memory, but I wouldn't have to load each area as often.

b) The size of each area can be smaller, therefore I would store a lot less data in memory, but I would have to reload different areas much more often.


What do you generally do in these instances?

You almost always have plenty of memory, but accessing a lot of memory will make your program slow. So it's not really a trade-off between memory and CPU speed; it's a question of which option makes for a faster program.


Your example isn't a trade-off between memory and CPU at all. It's a trade-off between up-front loading and delayed loading. That is, with smaller map chunks you can load the initial map faster, but you have to load data more often. You wouldn't want to offload data too often, even for small maps, because that's wasted effort; having extra memory allocated but not accessed won't slow anything down. You may need to offload map data to disk so you don't run out of memory, but generally you can just do that during a save. There's also the question of how big a map chunk needs to be to offset the fixed cost of loading anything at all.


For actual cases of CPU processing versus using more memory, doing more calculations often wins out. Main (physical) memory is about 100 times slower than an arithmetic operation or a first-level cache access. Furthermore, in a multithreaded program on a multicore machine, accesses to main memory from different cores contend with each other rather than running fully in parallel.
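
To make that trade-off concrete, here's a minimal sketch with hypothetical numbers and functions, not taken from the OP's game: recomputing a value costs a few arithmetic operations and no memory traffic, while a lookup table is only a win if it stays in cache.

#include <cstddef>
#include <vector>

// Option A: recompute each time -- a handful of ALU/FPU operations, no memory traffic.
inline float brightness_computed(float distance) {
    return 1.0f / (1.0f + distance * distance);
}

// Option B: precompute into a table -- fast while the table stays in cache,
// but a random index into a large table can cost a trip to main memory.
struct BrightnessTable {
    std::vector<float> values;
    float step;
    BrightnessTable(float maxDistance, std::size_t entries)
        : values(entries), step(maxDistance / entries) {
        for (std::size_t i = 0; i < entries; ++i)
            values[i] = brightness_computed(i * step);
    }
    float operator()(float distance) const {
        std::size_t i = static_cast<std::size_t>(distance / step);
        return values[i < values.size() ? i : values.size() - 1];
    }
};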

#5183410 c++ Performance boost needed

Posted by King Mir on 27 September 2014 - 10:12 PM


I think a better way to achieve this memory layout would be a unique_ptr<array<array<int, X>,Y>>.


Unless I'm missing something, I don't see a need for the unique pointer. It sounds like the OP builds the table once and then only uses it for lookups. If that's the case, the lifetime of the table can almost certainly be determined by its location on the stack. Also, std::array<int, X*Y> is still valid -- you'll have to calculate the index like ApochPiQ showed, but you'll still get whatever static or runtime checks std::array offers. You won't get them on each dimension individually, but you'll get them on the whole of the std::array, and that ought to at least help prevent most bugs (most indexing bugs go off the reservation entirely, especially with a non-square array; it's pretty rare to see an indexing bug that manages to stay inside its dimensionality yet never steps outside the total bounds).


The point of the unique pointer in my example is to move the large array from the stack to the heap; you generally don't want large arrays eating up your stack space. I also wanted to show the same layout ApochPiQ suggested, but with more safety.


Yes, you could use array<int, X*Y>, but why would you?
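
To make that concrete, here's a minimal sketch of what I mean; X and Y are placeholder sizes, not values from the thread. The storage is one contiguous heap block with the same layout as int[Y][X], but with 2D indexing and optional range checks.

#include <array>
#include <cstddef>
#include <memory>

constexpr std::size_t X = 1024, Y = 1024;        // hypothetical dimensions
using Table = std::array<std::array<int, X>, Y>;

int main() {
    auto table = std::make_unique<Table>();      // heap allocation; value-initialized to zero
    (*table)[3][5] = 42;                         // row 3, column 5; same layout as int[Y][X]
    int v = table->at(3).at(5);                  // at() adds range checking if you want it
    return v == 42 ? 0 : 1;
}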

#5168588 Inter-thread communication

Posted by King Mir on 23 July 2014 - 01:37 AM

Volatile is not for thread synchronization; it's for I/O. It's both too stringent and too weak for synchronization. It does not guarantee indivisibility, so it can split reads and writes into two operations. It excessively limits the compiler's ability to move code around the volatile read or write, but then doesn't stop the CPU from doing the same reordering (because the CPU knows it's not really doing I/O). Atomic variables are the correct way to share data like that between threads.
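
For illustration, a minimal sketch of sharing data through std::atomic instead of volatile, with hypothetical flag/result names: the atomic gives both indivisible reads/writes and the ordering guarantees that volatile lacks.

#include <atomic>
#include <thread>

std::atomic<bool> done{false};
std::atomic<int>  result{0};

void worker() {
    result.store(42, std::memory_order_relaxed);
    done.store(true, std::memory_order_release);   // publishes result to the reader
}

int main() {
    std::thread t(worker);
    while (!done.load(std::memory_order_acquire)) { /* spin (or sleep/wait) */ }
    int r = result.load(std::memory_order_relaxed); // guaranteed to observe 42
    t.join();
    return r == 42 ? 0 : 1;
}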

#5168568 Converting STL heavy static library to shared

Posted by King Mir on 22 July 2014 - 10:08 PM

As long as your shared library doesn't expose STL containers as part of its interface, there's no problem. So yes, you do want to roll your own types for passing data in and out.


Then you have to make sure your library has a stable ABI itself while still being amenable to bug fixes and expansion. This is mainly done by using the PImpl pattern. The details of what you can change while maintaining ABI compatibility differ between MSVC and GCC.
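
For illustration, a minimal PImpl sketch with a hypothetical Widget class (not the OP's library): the public header exposes only a pointer to an incomplete Impl type, so data members can change without changing the class layout seen by client code.

// widget.h -- shipped with the shared library
#include <memory>
class Widget {
public:
    Widget();
    ~Widget();                 // defined in the .cpp, where Impl is complete
    void frobnicate();
private:
    struct Impl;               // incomplete here
    std::unique_ptr<Impl> impl;
};

// widget.cpp -- private to the library
struct Widget::Impl {
    int counter = 0;           // members can be added or changed without breaking the ABI
};
Widget::Widget() : impl(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
void Widget::frobnicate() { ++impl->counter; }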

#5168558 Inter-thread communication

Posted by King Mir on 22 July 2014 - 09:27 PM

The broader picture I'm actually somewhat more concerned about is communication between the input and update (render) threads; at the end of the day I don't think there's a way to make it lock-free. Or rather, I'm failing to grasp how to effectively go about synchronization.


Consider the two calls below. The following example is simplistic, but it highlights the problem rather effectively: how can I guarantee that GetSize(), when called from an arbitrary thread, will actually return the correct values? In fact, while the SetSize()/GetSize() example is trivial, for any application that has several threads depending on the window's current width and height, this seems like a potential cluster-bork.


In other words - I think I've been far too concerned about making the updates non-concurrent, but what is instead much more worrisome is how the data can be reliably read.

//public, called by any thread
void Window::SetSize(int w, int h) {
    //directly calls DoSetSize() if called by the consumer thread, otherwise posts the message to the queue
    DispatchWindowMessage(this, w, h);
}

//called by any thread
void Window::GetSize(int &w, int &h) {
    //simple read, assumes both values are up-to-date
    w = width;
    h = height;
}

//only accessible to the consumer thread
void Window::DoSetSize(int w, int h) {
    //can set these without a lock, but what if GetSize() is called at the same time from another thread?
    width = w;
    height = h;
}

A simple solution would be to leave this for the application to worry about, but I'm more interested in how this kind of synchronization is achieved in something more critical, like an operating system.


In this limited case you could just make the width and height a single 64-bit atomic variable. That will ensure that the size is always what a single call to DoSetSize() asked for. But necessarily, calling GetSize() cannot guarantee, without external synchronization, that the size will remain what it returned even one instruction later.
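
A minimal sketch of that idea, with hypothetical member names (a real Window class would still need the message-queue plumbing from the quote): width and height are packed into one trivially copyable struct inside a single atomic, so GetSize() always sees a consistent pair.

#include <atomic>

class Window {
    struct Size { int w, h; };               // two ints -> 8 bytes on typical platforms
    std::atomic<Size> size{Size{0, 0}};
public:
    void DoSetSize(int w, int h) {           // consumer thread only
        size.store(Size{w, h}, std::memory_order_release);
    }
    void GetSize(int &w, int &h) const {     // any thread
        Size s = size.load(std::memory_order_acquire);
        w = s.w;
        h = s.h;
    }
};
// std::atomic<Size>::is_lock_free() is worth checking on the target platform;
// an 8-byte trivially copyable struct is typically lock-free on 64-bit hardware.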

#5163275 Vector Efficiency question

Posted by King Mir on 27 June 2014 - 12:53 PM

If you're iterating over all members, a vector is as fast as it gets. If you also want to be able to look up by id, a sorted vector is pretty fast too.
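
For illustration, a minimal sketch of the sorted-vector lookup with a hypothetical Entity type (not from the thread): contiguous storage for fast iteration, plus an O(log n) binary search for lookups by id.

#include <algorithm>
#include <vector>

struct Entity { int id; float x, y; };

// 'entities' is assumed to be kept sorted by id.
Entity* findById(std::vector<Entity>& entities, int id) {
    auto it = std::lower_bound(entities.begin(), entities.end(), id,
                               [](const Entity& e, int key) { return e.id < key; });
    return (it != entities.end() && it->id == id) ? &*it : nullptr;
}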

#5162610 Visual Programming

Posted by King Mir on 24 June 2014 - 12:53 PM

In order for me to be interested in this, I'd want to be able to convert back and forth between text and graphics. If you could make a visualizer for the control flow of a piece of code, and allow me to modify the code in graphical form, then it could be a useful tool.

#5162604 Should I break_the_build?

Posted by King Mir on 24 June 2014 - 12:33 PM

Refactor means internal changes only. The interface will remain unchanged.


That's still too broad. If you change a back-end algorithm, that's not a refactor. A refactor only changes structure.


I'd say the archetypal refactor is moving common code to a common interface, like an inheritance hierarchy. You're not adding new functionality, but you're changing the code to make reuse easier in the future. You wouldn't do this for public-facing code, because that would change the API and ABI, so I suppose the observation that a refactor is internal-only is correct.
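
For illustration, a minimal sketch with hypothetical classes of the kind of refactor I mean: behaviour is unchanged, but duplicated code moves into a shared base.

// Before: Player::tick() and Monster::tick() each applied velocity themselves.
// After: the duplicated code lives in a common base class.
struct Movable {
    float x = 0, vx = 0;
    void applyVelocity(float dt) { x += vx * dt; }   // the formerly duplicated code
};
struct Player  : Movable { void tick(float dt) { applyVelocity(dt); /* plus input handling */ } };
struct Monster : Movable { void tick(float dt) { applyVelocity(dt); /* plus AI */ } };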

#5160041 Why does this not compile?

Posted by King Mir on 12 June 2014 - 08:27 AM

Or use "Test t{}". That gives uniformity to how variables are declared and initialized.

#5148479 C++ std::move() vs std::memcpy()

Posted by King Mir on 20 April 2014 - 10:43 PM

In addition to SiCrane's example, another thing that can complicate moving is a pointer into the object itself. For example, a small-vector type might have a pointer that points within the object when there are few elements, and to allocated memory when there are many. That pointer would need to be updated to point within the moved-to object.
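
For illustration, a minimal sketch of that internal-pointer problem, using a hypothetical SmallVector rather than any real library type: while the data lives in the inline buffer, the pointer points into *this* object, so a byte-wise move would leave it pointing at the old object's storage.

#include <cstddef>

class SmallVector {
    int  inlineBuf[8];
    int* data = inlineBuf;     // points into this object while the vector is small
    std::size_t count = 0;
public:
    SmallVector() = default;
    SmallVector(SmallVector&& other) noexcept {
        if (other.data == other.inlineBuf) {   // small: copy the elements across
            for (std::size_t i = 0; i < other.count; ++i) inlineBuf[i] = other.inlineBuf[i];
            data = inlineBuf;                  // must point at *our* buffer, not other's
        } else {                               // large: steal the heap block
            data = other.data;
            other.data = other.inlineBuf;
        }
        count = other.count;
        other.count = 0;
    }
    // (push_back, destructor for the heap case, etc. omitted from this sketch)
};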


I understand that std::move(), std::memcpy(), and normal copying all copy bytes around.  But in the context of C++ a move is different than a copy.  So yes I am talking about std::memcpy(), but I'm talking about move semantics not copy semantics.  POD types and std::is_trivially_copyable refer to copy semantics.  For example a class like:
struct Object { 
   char* data; 
   Object () : data(new char[10]) {}
   virtual ~Object () { delete[] data; }
};

is not trivially copyable (you'd have two objects pointing to the same data on the heap), but... is it trivially moveable?  There are no exceptions thrown.  The pointer will move properly with a std::memcpy(), as there will still be only one owner.  As long as the src isn't destructed we don't have a data leak.  Does the hidden v-table pointer get copied properly?  Will something else get mangled?


I hope that makes my question clearer.

One of the things that a move constructor needs to do is leave the source object in a destructible state. So just a memmove (or memcpy) wouldn't be enough here.
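
As a minimal sketch, here's what a move constructor for the Object above might do beyond copying the bytes (an illustration, not the only valid implementation): the source's pointer is nulled out so its destructor stays safe to run.

struct Object {
    char* data;
    Object() : data(new char[10]) {}
    Object(Object&& other) noexcept : data(other.data) {
        other.data = nullptr;                 // leave the source destructible
    }
    virtual ~Object() { delete[] data; }      // delete[] nullptr is a no-op
};
// A raw memcpy of the bytes would leave both objects' data pointing at the same
// allocation, so destroying both would be a double delete.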

#5148362 Costs of (Re)Designing a Programming Language

Posted by King Mir on 20 April 2014 - 09:46 AM

For a proprietary language like Java, Oracle could easily release a document that outlines the change, plus update its own libraries accordingly. Not counting the development involved in deciding what changes to make, it's pretty cheap.


C and C++ are governed by international standards. To change those you need consensus in their respective ISO committees. As I understand it, there's a small fee for Oracle to send a voting member to ISO, but there are no other costs. The tricky part is convincing everyone else on the committee that the change is a good idea.


And some languages don't have an owner or a standard at all. Oracle could release a compiler/interpreter that implements an expanded form of such a language, but it would have no way to force that expansion on other vendors. It could try to get ANSI or another national body to publish a standard, and there are fees for doing that.

#5147524 c++ oo syntax behind the scenes behavior

Posted by King Mir on 16 April 2014 - 08:43 PM

If a variable is global, or if a variable is a static member variable of a class (which is just a global variable in the class's namespace), then you can't count on the order or timing of when they are constructed. The only thing you can depend on is that they'll be constructed before int main() is entered... but other than that, the order is compiler-specific.

You can't count on the initialization order of globals defined in different translation units. Globals defined in the same translation unit are well defined to be initialized in the order they are defined. Not that that helps much.
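
A minimal sketch with hypothetical globals; the second file is shown as comments, since the point is precisely that it's a separate translation unit.

// a.cpp
int makeValue() { return 41; }
int first  = makeValue();   // same translation unit: initialized before 'second' below
int second = first + 1;     // well defined: 42

// b.cpp (a separate file)
// extern int first;
// int other = first + 1;   // NOT safe: 'first' may not be initialized yet, because the
//                          // order between a.cpp and b.cpp is unspecified.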