About jorgander

Community Reputation: 182 Neutral
  1. Thanks for your comment! The actual use case is a bit more involved than I describe, but not much different from the examples I used; it will be for rasterizing averaged values of weather data over a geographic area. The weather data comes in pre-defined grids, where each point of the grid is defined by latitude/longitude and a data value, and the grid may or may not be in the same projection as what is rendered (i.e. the weather data grid may be Polar Stereographic, while the rendered grid may be Lambert Conformal). As such, the internal structure should be agnostic. Another layer of complication is that the weather data also comes in one or more forecasts, where there is a grid for each forecasted unit of time, X number of times. For example, 72 grids spaced an hour apart, containing wave height data for the Caribbean Sea. To keep the implementation simple, each BSP contains only one type of weather data, at one point in time. Since the time is the same for all points in a BSP, it is not stored for every point, but once for the whole BSP. I mention it because even though it is not stored with each point, it is taken into consideration as another "dimension" when finding nearby points. The end result looks something like this:

    float target_time = ...;   // render time
    float max_distance = ...;  // points outside this distance are not considered
    int N = ...;               // number of nearby points to consider

    foreach (pixel)
    {
        float target_geo[3] = ...; // unit vector (n-vector) of pixel

        // [0] = distance, [1] = value
        float distance_and_value[N][2] = ...; // initialize to max_distance

        // check BSPs
        foreach (BSP close enough to target_time)
        {
            // modifies the distance_and_value parameter as smaller distances are found;
            // keeps the array sorted based on distance
            BSP.findClosest(target_geo, target_time, &distance_and_value);
        }

        float average = ...; // calculate using distance_and_value, stopping when >= max_distance
        pixel = ...;         // calculate using average
    }

But I didn't mention all of this because it is tangential to the problem of selecting a partition for each BSP tree branch. (A sketch of the averaging step is included at the end of this post.)

"how many points are we talking about" - It varies from grid to grid. Some are as small as 350x150, and others as large as 4400x2200. It could be larger as more meteorological data is added. For now I'm only using freely available data from NOAA, but I could eventually use proprietary data from other sources.

"what is the distance (relative to the sphere) over which to make a query for nearest neighbours" - This is a parameter passed to the query function, and may differ based on the application. For most cases, the end result should just look "pretty" and show smooth gradients, whereas accuracy is preferred for "serious" uses such as maritime navigation.

"what are the access patterns (are the queries random, or are you, for instance, looking for the closest points to a moving object, where you can reuse the results of a previous search)?" - See the code snippet above.

"The best scheme may depend on how fast adding a point to the data structure needs to be, versus the speed of the queries." - You are right, it may be, although not for my current use case. Regardless of which is best, I would certainly implement faster BSP insertion if I knew how.

That makes sense for rendering multiple images of the same geographic area, but not for the case where the area may be panned or zoomed, as the list would have to be re-built. Also, how would the list of nearby neighbors be efficiently built without some partitioning scheme to begin with?
I think this is also not effective, for the same reasons. Pre-calculating the angular distance between data points is not necessary, as that is never needed (we only ever need the distance between a data point and another arbitrary point), and, as in the previous paragraph, a pre-calculated angular distance between a data point and an arbitrary point would only ever be used once.
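For completeness, here is a minimal sketch of the inverse-distance-weighted averaging step referenced in the pseudocode above. It assumes the distance_and_value layout shown there (column 0 = distance, column 1 = value, sorted ascending, unused slots left at max_distance); the function name AverageIDW and the epsilon handling are illustrative, not taken from my actual code.

    #include <cstddef>

    // Inverse-distance-weighted average of up to N nearby points.
    // distance_and_value[i][0] = distance to the i-th nearest point (sorted ascending)
    // distance_and_value[i][1] = data value of that point
    // Entries whose distance is still >= max_distance are unused slots.
    float AverageIDW(const float distance_and_value[][2], std::size_t N, float max_distance)
    {
        const float epsilon = 1e-6f; // treat near-zero distances as "exactly on a data point"
        float weighted_sum = 0.0f;
        float weight_total = 0.0f;

        for (std::size_t i = 0; i < N; ++i)
        {
            const float distance = distance_and_value[i][0];
            const float value    = distance_and_value[i][1];

            if (distance >= max_distance)
                break; // the array is sorted, so the remaining slots are unused

            if (distance < epsilon)
                return value; // the query point coincides with a data point

            const float weight = 1.0f / distance; // inverse-distance weight
            weighted_sum += weight * value;
            weight_total += weight;
        }

        return (weight_total > 0.0f) ? weighted_sum / weight_total : 0.0f;
    }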
  2. Introduction

The impetus for this research topic is a data structure that efficiently stores and queries scattered point data. These points may originally be neatly arranged on a grid or randomly scattered across the surface of a sphere (e.g. the earth); the data structure makes no assumption in that regard. It could have member functions such as add(point) or add(points, amount). The points themselves contain three scalar values: the longitude, the latitude, and the data value itself. This data value could be geographical data such as land height, or weather data such as air temperature, but again, the data structure is agnostic about the data other than treating it as a scalar value.

Data Structure

The purpose of the data structure is to efficiently and accurately query for the point(s) nearby another point, in order to compute an average value at that point. Suppose a requirement is to create an image representing air temperature (or rainfall, cloud cover, whatever the scalar value of each point represents): this data structure could be queried for each point (i.e. pixel) in the image. Once nearby points are found, the algorithm that averages their scalar values is outside the scope of this post, but a good candidate is the inverse distance weighted algorithm.

Within the data structure, the data points are stored not in spherical coordinates, but as n-vectors. The reason is that the sampling algorithm - finding the N closest data points - is more efficient and accurate using vector algebra. While a full treatment is outside the scope of this post, to summarize: it is easier to determine how close two points on a sphere are when the points are represented by unit vectors than by latitude and longitude. For example, two points separated by the same difference in longitude are farther apart at the equator than they are closer to the poles. Even if we are only concerned with relative distances, and do not need to compute the square root in the distance formula, latitude/longitude is still less accurate for this reason. The international date line, where longitude "resets", presents a problem that would have to be dealt with, as does either pole. In short, vectors are more stable, accurate, and efficient.

I've always been a fan of binary space partitions: each branch node of a BSP has two areas, one below and one above (or one left and one right, or one in and one out, however you like to think of it). To go along with storing points as vectors, each branch node partitions its space using a vector as well, which in this case forms the normal of a plane that bisects the unit sphere. All data points in a node (or in its children) lie either above or below this plane (points exactly on the plane can be allocated to either side as appropriate). Points below the plane are placed in the first child of the node, and points above it in the second child. We call this plane normal the split axis of the node.

Querying the structure for the closest N points then becomes trivial. For any branch node, compute the dot product of the query point and the split axis of the node. If it is negative (the point is below the split plane), recursively query the first child node; if positive (the point is above the split plane), query the second child. For a leaf node, compute the dot product of the query point with each data point contained in the node, keeping a sorted list of the closest N points.

The one caveat is that in branch nodes, after recursing into one child, it may be necessary to recurse into the other child as well, if the farthest of the points found so far is farther away than the split plane - there may still be closer points on the other side. But this check is trivial, as we are only comparing dot products. No expensive computations are needed to find the N closest points - just a bunch of dot products. Since dot products of unit vectors range from -1 to 1 (-1 being farthest apart and 1 being coincident), two points are closer the higher their dot product is. Once the list of N points is found, the actual distances can be calculated if necessary, by taking the angles from the already-computed dot products; such an angle multiplied by the radius of the earth gives an exact distance. For averaging, these extra calculations are not necessary. (A sketch of the query routine is included at the end of this post.)

As a final note, the data structure lends itself well to graphics hardware. In my particular case, rendering an image using such a data structure may take several minutes on the CPU, but only a fraction of a second on the GPU.

Problem

The problem - common to any space-partitioning tree - is how to initially build the data structure. Again, the points are not assumed to be arranged in any specific way; as far as we are concerned, they are a "point soup". They can be specified one at a time - addPoint(point) - or all at once - addPoints(points) - or both. Specifically, how can the split axis of any branch be determined efficiently while providing an even split (the same or similar number of points on either side)? The problem is unusual here because the points are not arranged on a two-dimensional surface or in three-dimensional space, but lie on a unit sphere.

My current solution to the problem is not terribly elegant. For each pair of data points, compute the axis that splits them evenly. This is done by computing the point between the two (subtract one from the other) and normalizing it, then crossing this with the cross product of the two points; the result is the normal of the plane that evenly splits the two points. This normal is then tested against every other point in the node to get 1) the number of points on either side of the plane and 2) the distances of the points to the plane. A good split plane is one that 1) splits the points evenly, 2) has the same or similar distances on either side, and 3) has large distances on either side. Since this test is performed for every pair of data points, a little big-O analysis shows that determining the split plane becomes prohibitive for nodes containing a large number of points. However, I have not been able to find a better solution, and the advantages of the data structure still outweigh this expense. In my particular case, the time spent initially creating the tree is worth the time saved during queries. I should mention that if the data points are known ahead of time, it is faster to specify them all at once, so the tree re-builds itself once, rather than one at a time, which may cause the tree to re-build itself multiple times.
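To make the query described above concrete, here is a minimal sketch of the recursive search, written against a simplified node layout. The type and member names are illustrative rather than taken from my implementation, and the pruning test on the far child is just one reasonable way of doing the "compare dot products" check mentioned above. "Closeness" is tracked as the dot product with the query vector, so higher is closer.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    inline float Dot(const Vec3 & a, const Vec3 & b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    struct DataPoint
    {
        Vec3  nvector; // unit vector (n-vector) on the sphere
        float value;   // the scalar data value
    };

    struct Node
    {
        // Branch node: splitAxis is the plane normal, below/above are the children.
        // Leaf node: both children are null and 'points' holds the data.
        Vec3                   splitAxis;
        Node *                 below = nullptr;
        Node *                 above = nullptr;
        std::vector<DataPoint> points;
    };

    struct Candidate
    {
        float dot;   // dot product with the query vector (higher = closer)
        float value; // data value of the candidate point
    };

    // Keep the N best candidates, closest (highest dot product) first.
    void Keep(std::vector<Candidate> & best, std::size_t N, const Candidate & c)
    {
        best.push_back(c);
        std::sort(best.begin(), best.end(),
                  [](const Candidate & a, const Candidate & b) { return a.dot > b.dot; });
        if (best.size() > N)
            best.pop_back();
    }

    // Recursively collect the N data points closest to the unit query vector.
    void FindClosest(const Node * node, const Vec3 & query, std::size_t N,
                     std::vector<Candidate> & best)
    {
        if (node->below == nullptr && node->above == nullptr)
        {
            for (const DataPoint & p : node->points)
                Keep(best, N, Candidate{ Dot(query, p.nvector), p.value });
            return;
        }

        const float side = Dot(query, node->splitAxis);
        const Node * nearChild = (side < 0.0f) ? node->below : node->above;
        const Node * farChild  = (side < 0.0f) ? node->above : node->below;

        FindClosest(nearChild, query, N, best);

        // The best possible dot product between the query and any point on the far
        // side of the split plane is sqrt(1 - side*side), so only descend into the
        // far child if the worst candidate kept so far is farther away than that
        // (or if we have not yet found N points).
        const float farLimit = std::sqrt(1.0f - side * side);
        if (best.size() < N || best.back().dot < farLimit)
            FindClosest(farChild, query, N, best);
    }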
  3. Efficient thread synchronization

    Thanks for the info! I knew there must be something I was missing. I've read the article you linked, as well as other related articles, and will adjust my code to account for it. However, my "real" job has taken priority for now, and I will update this when I am able to.
  4. Efficient thread synchronization

    *See the EDIT below* The impetus for this research topic is the necessity for threads to exchange data in an efficient manner, namely between the network and main threads. Within the context of networking, we want data to be quickly available to, and dispensable from, the main thread, and that means calling the corresponding system calls (e.g. sendto/recvfrom) as often as possible. In addition, there is necessarily some translation done between native and network formats, as well as lookups and other operations that could be off-loaded to the network thread if possible. However, this introduces a problem: as data arrives and is picked up by the network thread, or as logic in the main thread requires data to be sent out to the network, how do the separate threads share this data? If we protect it with a mutex, we introduce an expensive system call that causes delays and is generally unacceptable in a tight loop. Whatever solution we implement should not involve system calls or any significant pause in execution.

If our shared data is simple enough to be just one variable - just one *instance* of something, if you will - then nothing fancy is needed. As an example, if the network thread needs to notify the main thread that a player has sent a QUIT message, and that this player should be removed from all operations and deleted, a "player * quitting" member can be added, which is updated as necessary. The main thread watches this member and takes appropriate action when it sees that it is non-null. So, the course of action for both threads is:

Network thread:
1. QUIT message received from a player
2. update member "quitting" to refer to this player

Main thread:
1. check member "quitting"
2. if non-null, delete all associated resources and update member "quitting" to null

But there is a problem with this: what if another QUIT message is received by the network thread before the main thread has processed the first one? The network thread may update member "quitting" before the main thread has seen the first value. To address this, we can add a safety check in step 2 of the network thread - only update member "quitting" if it is null. The reasoning is that after the network thread has updated this member, it should not update it again until the main thread has processed it, and the network thread can tell whether that has happened by whether or not the member is null.

And this introduces yet another problem, as you can probably guess: what should the network thread do with a second QUIT message that it receives before the main thread has finished processing the first one? It can simply queue this second message, and any others it receives, in a buffer. Each loop iteration, it checks whether member "quitting" is null, and if so, assigns it one of the items in the buffer (also removing that item from the buffer).

This solution works and is great in that it allows each thread to produce or consume data without waiting for the other. If data arrives in the producer while the consumer is busy with previously shared data, it is simply queued, and the producer continues execution. The only cost incurred is one conditional in each thread, plus a buffer in the producer thread for queueing purposes. The drawback of this approach is that only one item can be shared at a time. While this may be fine for QUIT messages - it would be uncommon for many players to quit in the few milliseconds each loop iteration takes - it would not be efficient for messages that are transferred more often.
For this approach, two buffers can be used that swap their contents. Instead of explaining all the details, I will simply post the code I've written to support this:

    #include <cstddef>
    #include <vector>

    //
    // Thread synchronization w/out semaphores
    //
    // the data producer can always insert data, and should call Commit() once during its event loop
    // (the Commit() call will only commit produced data if the consumer has processed existing data)
    //
    // the data consumer will process data if any has been committed by the producer
    //
    template <typename T>
    class Queue
    {
    private:
        std::vector<T> m_clsIn;
        std::vector<T> m_clsOut;
        bool m_blnData;
        bool m_blnProcessed;

    public:
        Queue() : m_blnData(false), m_blnProcessed(true) {}

        void Add(T clsData)
        {
            //ASSERT(thread == producer)
            m_clsIn.push_back(clsData);
        }

        void Commit()
        {
            //ASSERT(thread == producer)
            if ( m_blnProcessed && m_clsIn.size() > 0 )
            {
                m_clsIn.swap(m_clsOut);
                m_blnProcessed = false;
                m_blnData = true;
            }
        }

        template <typename Functor>
        void Process(Functor & refFunctor)
        {
            //ASSERT(thread == consumer)
            if ( m_blnData )
            {
                for (size_t i = 0; i < m_clsOut.size(); ++i)
                    refFunctor(m_clsOut[i]);
                m_clsOut.clear();
                m_blnData = false;
                m_blnProcessed = true;
            }
        }
    };

EDIT: After reading the article linked in this thread, here is a revised implementation; it uses a ring of nodes instead of two swapped buffers:

    #include <atomic>
    #include <cstddef>

    template <typename T>
    class Queue
    {
    private:
        struct Node : public T
        {
            Node * m_ptrNext;

            Node(Node * ptrNext) : m_ptrNext(ptrNext) {}
            ~Node() { delete m_ptrNext; } // chained deletion; stops at a null link
        };

        Node * m_ptrActive;   // next node the consumer will read
        Node * m_ptrInactive; // next node the producer will write

    public:
        Queue()
        {
            // start with a single node that points at itself (an empty ring)
            m_ptrActive = m_ptrInactive = new Node(NULL);
            m_ptrActive->m_ptrNext = m_ptrActive;
        }

        ~Queue()
        {
            //ASSERT(no concurrent Put()/Get() calls)
            // break the ring and let the chained node deletion clean up
            Node * ptrNode = m_ptrInactive->m_ptrNext;
            m_ptrInactive->m_ptrNext = NULL;
            delete ptrNode;
        }

        template <typename Functor>
        void Put(Functor functor)
        {
            //ASSERT(thread == producer)
            //
            // Check if the queue is full, in which case we must add a node
            //
            if ( m_ptrInactive->m_ptrNext == m_ptrActive )
            {
                //
                // Between the above conditional and the following statement, the consumer can process
                // data and move the 'active' pointer, making the following node addition unnecessary.
                // However, there is no way to avoid it except with a mutex lock or other expensive
                // operation, and the extra node will get used anyway in the next call to Put()
                //
                // In the unlikely case that the 'active' pointer is advanced between the preceding
                // conditional and the Node c'tor below, we use 'inactive->next' as a parameter instead
                // of 'active'
                //
                m_ptrInactive->m_ptrNext = new Node(m_ptrInactive->m_ptrNext);
            }

            functor(static_cast<T &>(*m_ptrInactive));
            std::atomic_signal_fence(std::memory_order_release);
            m_ptrInactive = m_ptrInactive->m_ptrNext;
        }

        template <typename Functor>
        void Get(Functor refFunctor)
        {
            //ASSERT(thread == consumer)
            //
            // There is no [good] reason to fence here, as it is optimal to let the CPU handle memory as it
            // naturally does. The producer thread may not see our memory updates right away and subsequently
            // add extra nodes when it doesn't have to, but any extra nodes will be used anyway in subsequent
            // calls to Put().
            //
            while ( m_ptrActive != m_ptrInactive )
            {
                refFunctor(static_cast<T &>(*m_ptrActive));
                m_ptrActive = m_ptrActive->m_ptrNext;
            }
        }

        //
        // For accurate results, call this from the producer thread
        //
        size_t SizeAll()
        {
            // Count the whole ring
            size_t count = 1;
            for (Node * ptrNode = m_ptrActive->m_ptrNext; ptrNode != m_ptrActive; ptrNode = ptrNode->m_ptrNext)
                ++count;
            return count;
        }

        //
        // Results may be inaccurate regardless of which thread calls this
        //
        size_t SizeActive()
        {
            // Count from active -> inactive
            size_t count = 0;
            for (Node * ptrNode = m_ptrActive; ptrNode != m_ptrInactive; ptrNode = ptrNode->m_ptrNext)
                ++count;
            return count;
        }
    };

The calls to std::atomic_signal_fence(...) will have to be rewritten for compilers that do not support C++11. In my code I do not directly call these, but instead call a library that mimics the code in the article linked by Josh. I changed them here as I do not want to include my entire library. An assumption made is that a Queue object will not be destroyed without coordination between threads (e.g. one waits for the other to finish).
In other words, the destructor will clobber concurrent calls to Put()/Get(). A caveat is that T objects are not destroyed until the Queue object is destroyed. This is by design, since my particular implementation benefits from it. However, it can be trivially changed by:

- in Node: adding a char object[sizeof(T)] member and removing the inheritance from T
- in Put(): calling new (node->object) T() and passing the functor a cast of node->object to T
- in Get(): passing the functor a cast of node->object to T, and explicitly calling the T destructor on it afterwards

(the exact syntax may not be correct, but you get the idea)
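For anyone who wants to see that modification spelled out, here is a minimal sketch of the idea in isolation. The Node here is deliberately simplified (no ring, no synchronization) and the names are illustrative, so treat it as an outline of the technique rather than a drop-in patch for the queue above.

    #include <new>

    // Simplified node that owns raw storage for a T instead of inheriting from T.
    // The T object is constructed only when a slot is produced into, and destroyed
    // as soon as the consumer has processed it; the storage itself is reused.
    template <typename T>
    struct Node
    {
        alignas(T) unsigned char object[sizeof(T)]; // raw storage for one T
        Node * m_ptrNext;

        explicit Node(Node * ptrNext) : m_ptrNext(ptrNext) {}
    };

    // Producer side: construct the T in-place, then hand it to the functor.
    template <typename T, typename Functor>
    void Produce(Node<T> * ptrNode, Functor functor)
    {
        T * ptrObject = new (ptrNode->object) T(); // placement new into the node's storage
        functor(*ptrObject);
    }

    // Consumer side: process the T, then destroy it explicitly.
    template <typename T, typename Functor>
    void Consume(Node<T> * ptrNode, Functor functor)
    {
        T * ptrObject = reinterpret_cast<T *>(ptrNode->object);
        functor(*ptrObject);
        ptrObject->~T(); // explicit destructor call; the raw storage remains for reuse
    }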
  5. I've been under the impression for the longest time that a template class cannot have virtual methods, and that MS VStudio allowed it like it allows other illegal/bad practices. I'm not really sure where I picked it up, but a bit of research has shed some light on what I was wrong about:

    A member function template shall not be virtual. Example:

        template <class T> struct AA
        {
            template <class C> virtual void g(C); // error
            virtual void f();                     // OK
        };

This section of the function pointer tutorials also shows something I thought was illegal. Thanks for the replies
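To make the distinction explicit, here is a small self-contained example (the class names are made up for illustration): a class template may declare and override ordinary virtual functions; it is only a member function template that cannot be virtual.

    #include <iostream>

    template <typename T>
    struct Container
    {
        virtual ~Container() {}

        // OK: an ordinary virtual function inside a class template.
        virtual void describe() const
        {
            std::cout << "generic container\n";
        }

        // Not allowed (left commented out): a member function *template* cannot be virtual.
        // template <typename U>
        // virtual void convertTo(U &) const;
    };

    template <typename T>
    struct Vector : Container<T>
    {
        // Overriding the virtual function in a derived class template is fine too.
        virtual void describe() const
        {
            std::cout << "vector container\n";
        }
    };

    int main()
    {
        Vector<int> v;
        Container<int> & c = v;
        c.describe(); // prints "vector container" via the virtual call
        return 0;
    }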
  6. ... that overrides virtual functions in its parent? I know that this is illegal:

    class foo
    {
    public:
        virtual void dostuff();
    };

    template <typename T>
    class bar : public foo
    {
    public:
        virtual void dostuff();
    };

but is this illegal:

    class foo
    {
    public:
        virtual void dostuff();
    };

    template <typename T>
    class bar
    {
    private:
        class inner : public foo
        {
            T _t;
            inner(T) { ... } // and possibly other members/functions using T
            virtual void dostuff();
        };

        inner _foo;

    public:
        ...
    };

I know that MS VStudio lets you do it, but it's not legal for a template class to derive from a class that has virtual methods. However, can a template class contain a non-template class that does so, as shown above?
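For reference, here is a compilable, self-contained version of the nested-class arrangement being asked about (the names are placeholders). With the compilers I have tried since, this form is accepted, which lines up with what I later found out about member function templates and virtual (see the post above).

    // Base class with a virtual interface.
    class foo
    {
    public:
        virtual ~foo() {}
        virtual void dostuff() {}
    };

    // Class template whose nested, non-template class derives from foo
    // and overrides its virtual function.
    template <typename T>
    class bar
    {
    private:
        class inner : public foo
        {
        public:
            explicit inner(T t) : _t(t) {}
            virtual void dostuff() {} // overrides foo::dostuff
        private:
            T _t;
        };

        inner _foo;

    public:
        explicit bar(T t) : _foo(t) {}
    };

    int main()
    {
        bar<int> b(42); // instantiating bar<int> also instantiates bar<int>::inner
        return 0;
    }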
  7. Upgrading to VC++ Express 2010 does not fix it, so I'm assuming they fixed it only in the paid version. Unfortunately I'll have to go with composition for now.
  8. I did try that, and I got the same error as in my second post. I also removed the forward declaration and tried it both ways - same errors. Interestingly enough, changing the relationship from "is-a" to "has-a" compiles without error:

    template <typename T, typename GlobalTraits>
    class Global
    {
    private:
        Registry::GlobalInternal internal;
        ...
    };

Anyone know why? I would think the same access rules apply. Perhaps it has something to do with when the compiler is loading the class - with the "has-a" relationship the compiler has already read the class declaration and checked access rules, but with "is-a" it is still reading the class declaration (i.e. the class name and its bases) when it encounters the base class name.
  9. Quote: Original post by alvaro

    template <typename T, typename GlobalTraits> friend class Script::Global;

    Is that what you were looking for?

I originally thought so, but upon adding "class" I am presented with this compiler error:

    1>d:\development\cpp\assware\common\include\asw\script.h(315) : error C2248: 'ASW::Script::Registry::GlobalInternal' : cannot access private class declared in class 'ASW::Script::Registry'
    1>        d:\development\cpp\assware\common\include\asw\script.h(271) : see declaration of 'ASW::Script::Registry::GlobalInternal'
    1>        d:\development\cpp\assware\common\include\asw\script.h(237) : see declaration of 'ASW::Script::Registry'
    ...

The error leads me to believe that the compiler is not matching the friend declaration with the prior forward declaration, although I'm not sure of this.
  10. I don't see why what I'm trying to do shouldn't be possible, but for some reason I can't get VC++ 2008 Express to like it. Here's the basic idea:

    namespace Script
    {
        template <typename T, typename GlobalTraits> class Global;

        class Registry
        {
        private:
            class GlobalInternal
            {
                ...
            };

            template <typename T, typename GlobalTraits> friend Script::Global;
        };

        template <typename T, typename GlobalTraits>
        class Global : private Registry::GlobalInternal
        {
            ...
        };
    }

With the above code I get this error:

    1>d:\development\cpp\assware\common\include\asw\script.h(291) : error C2955: 'ASW::Script::Global' : use of class template requires template argument list
    1>        d:\development\cpp\assware\common\include\asw\script.h(234) : see declaration of 'ASW::Script::Global'

Does anyone know the correct syntax? In case anyone asks, Registry::GlobalInternal is supposed to be inaccessible from any class that isn't supposed to have access to it, but still separate from Global so that source code that uses it can be compiled on its own (i.e. without using a template class). If anyone knows of a cleaner way to accomplish that, I would like to know of it.
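For what it's worth, the stripped-down arrangement below is what I understand the standard-conforming form of the friend declaration to look like: unqualified, and with the class keyword, since Registry and Global live in the same namespace. Whether a particular compiler accepts it is another question - the posts above describe the trouble I had with VC++ 2008/2010 - so treat this as a sketch rather than a guaranteed fix.

    namespace Script
    {
        // Forward declaration of the class template so Registry can name it.
        template <typename T, typename GlobalTraits> class Global;

        class Registry
        {
        private:
            class GlobalInternal
            {
                // implementation details hidden from everything except Global
            };

            // Befriend every specialization of Script::Global.
            template <typename T, typename GlobalTraits> friend class Global;
        };

        // Private inheritance from the nested private class is allowed
        // because of the friend declaration above.
        template <typename T, typename GlobalTraits>
        class Global : private Registry::GlobalInternal
        {
        };
    }

    int main()
    {
        Script::Global<int, int> g; // instantiating forces the access check
        return 0;
    }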
  11. That compiles alright, thanks for the suggestion. I'll have to test and make sure the correct version is called.
  12. I have these two string conversion functions:

    template <typename T>
    const T * UTIL::Convert(const T *, const size_t, size_t &);

    template <typename T, typename U>
    const T * UTIL::Convert(const U *, const size_t, size_t &);

The first simply returns its parameter, since the types are the same and no conversion is necessary. The second performs a conversion using wcstombs, mbstowcs, or some other method. This line of code

    const UTIL::Char * ptrText = UTIL::Convert<UTIL::Char>(p_ptrText, p_intLength, uintLength);

produces this error:

    1>c:\development\assware\common\src\gui\factory.cpp(220) : error C2668: 'UTIL::Convert' : ambiguous call to overloaded function
    1>        c:\development\util\include\util\impl\traits.h(47): could be 'const T *UTIL::Convert<UTIL::Char,XML_Char>(const U *,const size_t,size_t &)'
    1>        with
    1>        [
    1>            T=UTIL::Char,
    1>            U=XML_Char
    1>        ]
    1>        c:\development\util\include\util\impl\traits.h(14): or 'const T *UTIL::Convert<UTIL::Char>(const T *,const size_t,size_t &)'
    1>        with
    1>        [
    1>            T=UTIL::Char
    1>        ]
    1>        while trying to match the argument list '(const XML_Char *, int, size_t)'

I'm wondering how to achieve the effect of automatically calling the correct Convert method using templates. That is to say, the compiler should know when the types are the same and call the first method.
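One way to get that effect is to expose a single Convert function template and push the same-type/different-type decision into a helper class template that is partially specialized. This is only a sketch: the Converter helper and its Apply function are names I made up here, and the cross-type conversion body is intentionally left as a declaration.

    #include <cstddef>

    namespace UTIL {

    // Helper chosen on whether the destination and source character types match.
    // Primary template: types differ, so a real conversion is required.
    template <typename T, typename U>
    struct Converter
    {
        // Declaration only; the wcstombs/mbstowcs-based conversion would be
        // defined elsewhere (using it without a definition is a link error).
        static const T * Apply(const U * source, std::size_t length, std::size_t & outLength);
    };

    // Partial specialization: same type on both sides, nothing to convert.
    template <typename T>
    struct Converter<T, T>
    {
        static const T * Apply(const T * source, std::size_t length, std::size_t & outLength)
        {
            outLength = length;
            return source;
        }
    };

    // Single public entry point; since there is only one function template,
    // there is no overload set to become ambiguous.
    template <typename T, typename U>
    const T * Convert(const U * source, std::size_t length, std::size_t & outLength)
    {
        return Converter<T, U>::Apply(source, length, outLength);
    }

    } // namespace UTIL

With this, the call UTIL::Convert<UTIL::Char>(p_ptrText, p_intLength, uintLength) deduces U from the argument and is never ambiguous, and when U turns out to be the same type as T the specialization simply returns the pointer unchanged.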
  13. Found the cause: the directories are set up differently between the two PCs I use. For example, the expat library on my home PC is at "C:\Development\libraries\expat-2.0.1\lib", while on my work PC it's at "D:\Development\libraries\expat-2.0.1\lib". Recently I was moving as many project-specific settings as possible into the project properties instead of having them in the source code - for example, removing "#pragma comment(lib, 'opengl32.lib')" from the source and adding "opengl32.lib" to Configuration Properties -> Linker -> Input -> Additional Dependencies. Well, wouldn't you know it, on my work PC I had put "D:\Development\libraries\expat-2.0.1\lib" in C/C++ -> Additional Include Directories and removed the same path from the global VStudio include directories, since I don't use it for any other project. Since this path isn't the same on my home PC ('D:' at home is my DVD drive, while at work it's another hard drive), I'll assume it was taking extra time to search that directory for every include file. I have moved the include directories out of the project settings and back into the VStudio list, and the files are taking <2 seconds to compile now. Thanks for the replies and helpful ideas - I probably wouldn't have figured it out if I hadn't sat down and gone over every last detail in order to post here.
  14. Two minutes to just compile one file by right clicking on it in VStudio and selecting "Compile". This happens with every file in the project. I created a new console project with "#include <string>" and an empty main(), but no string objects declared, and it compiled instantly. So I'm pretty sure it's just this one project, I just have no idea what the cause is. I've looked into precompiled headers, although I'm not using them for this project (yet). As I said I use two different machines to work on the project (home and work) - the slow compile occurs on the machine with more RAM, faster processor, faster hard drives, and almost no other running processes. While the slower PC isn't instant, it doesn't take anywhere near two minutes and has a crapload of browsers, other IDEs, and documents open.
  15. It's not just <string>, but any STL header. After a bit of research and testing, I found that commenting out all the "#include <X>" lines made it compile instantly like it used to, where X is one of 'list', 'limits', 'string', 'sstream', and so on. Of course, the compilation failed since I had only commented out the include directives and the rest of my code still contained references to the STL classes. But the point is that it failed instantly. Interestingly enough, I uncommented one of the STL include directives, and the compilation time shot up again. It still failed since my code needs all of them to compile correctly, but it took a long time to fail. To summarize:

This takes a full two minutes to successfully compile:

    #include <list>
    #include <limits>
    #include <string>
    #include <sstream>

    #define XML_STATIC
    #define XML_UNICODE_WCHAR_T
    #include <expat.h>
    ...

This fails instantly:

    //#include <list>
    //#include <limits>
    //#include <string>
    //#include <sstream>

    #define XML_STATIC
    #define XML_UNICODE_WCHAR_T
    #include <expat.h>
    ...

This takes ~13 seconds to fail:

    //#include <list>
    //#include <limits>
    #include <string>
    //#include <sstream>

    #define XML_STATIC
    #define XML_UNICODE_WCHAR_T
    #include <expat.h>
    ...

I'm using Visual C++ 2008 Express Edition. I work on the project from different locations, and this only happens at one of them.