
Zakwayda
Member

Content Count: 12931
Days Won: 9 (last won the day on October 17)
Community Reputation: 2356 (Excellent)
Rank: GDNet+
Interests: Programming

Python + Pygame Threading Sample
Zakwayda replied to Gomta777's topic in For Beginners's Forum
This echoes some things already said, but it seems unlikely to me that multithreading would be necessary or advantageous for something like Tetris, or that a multithreaded implementation would be simpler than a single-threaded implementation. Can you elaborate on how you think multithreading would help here? 
OBB-OBB detected 'intersected' but it's not
Zakwayda replied to isu diss's topic in Math and Physics
There are different ways to perform the per-axis test once you've computed the axis. For boxes (both oriented and axis-aligned), for example, there are shortcuts for computing the projection of the box onto an axis that don't require processing each vertex individually. You can compute the full projections and test those for intersection, or in some cases you can compute the projected radii and test the sum of the radii against the projected distance between the object centers. In the code you posted, perhaps the most straightforward (although not necessarily most efficient) way to test against arbitrary axes would be to compute the projection of each shape by projecting all of its vertices onto the axis, and then testing the projections for intersection. 
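To make that last suggestion concrete, here's a minimal sketch in Python. All function names here are my own invention, and vertices are assumed to be plain (x, y, z) tuples:

```python
def project(vertices, axis):
    """Project a set of vertices onto an axis; return the (min, max)
    interval of the projection."""
    dots = [vx * axis[0] + vy * axis[1] + vz * axis[2]
            for (vx, vy, vz) in vertices]
    return min(dots), max(dots)

def intervals_overlap(a, b):
    """True if 1-D intervals a=(min, max) and b=(min, max) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def separated_on_axis(verts_a, verts_b, axis):
    """True if this axis separates the two vertex sets, i.e. their
    projections onto the axis don't overlap."""
    return not intervals_overlap(project(verts_a, axis),
                                 project(verts_b, axis))
```

If any candidate axis reports separation, the shapes don't intersect; if no axis does, they do. This brute-force version works for arbitrary convex shapes, at the cost of touching every vertex per axis.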
I suggest finding a configuration for which the algorithm returns a false negative, and then testing the algorithm in isolation using that configuration as input. If you have a debugger available, you can step through in the debugger. And/or, you can add log messages to track the algorithm's progress. The algorithm itself isn't particularly intuitive, so even under that kind of examination it may not be obvious what's going wrong. In case you get stuck, here are some more suggestions:

- Post your current code and the input for which the algorithm fails so we can take a look.
- There are a lot of opportunities for typos or copy-paste errors there, so you could double-check your code again against the reference you're using to look for obvious mistakes.
- I don't know much about Python, and maybe you're inlining all the operations for performance reasons. Otherwise though, writing functions for common operations like vector dot product, addition, etc. will make your code cleaner, easier to read, and less bug-prone. (Again though, I understand that you may be inlining on purpose.)

Also, on the topic of performance, I notice you're doing a lot of squaring via exponentiation. I don't know what the performance characteristics of the exponentiation operator are in Python (maybe simple cases are optimized), but in some languages/environments, simply multiplying a value by itself can be faster. Anyway, something to consider if you're encountering performance issues. (There's some discussion of the cost of the exponentiation operator here.)
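As an illustration of the helper-function suggestion, small utilities like the following (hypothetical names, 2-D tuple vectors assumed) can keep the inlined math readable, and use multiplication rather than the ** operator for squaring:

```python
def dot(a, b):
    """Dot product of two 2-D vectors given as (x, y) tuples."""
    return a[0] * b[0] + a[1] * b[1]

def sub(a, b):
    """Component-wise difference a - b."""
    return (a[0] - b[0], a[1] - b[1])

def length_squared(v):
    """Squared length; note v[0] * v[0] rather than v[0] ** 2."""
    return v[0] * v[0] + v[1] * v[1]

def distance_squared(a, b):
    """Squared distance between two points."""
    return length_squared(sub(a, b))
```

With helpers like these, an expression such as `(x1 - x0)**2 + (y1 - y0)**2` scattered through the algorithm becomes a single `distance_squared(p0, p1)` call, which is also much harder to mistype.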

Can you clarify what you mean by particles 'going through the algorithm'? I wouldn't just assume it's due to floating-point error unless you have a specific reason to think that's the culprit. If the people who suggested it was floating-point error also told you why/how floating-point error might be an issue, that would be worth considering. But absent that information I'm not sure the suggestion that it's 'floating-point error' is particularly useful. Can you clarify where the problem in your image is? I can see that two capsules are intersecting. Is the problem that that intersection (or others like it) isn't being detected by the algorithm?

Linear interpolation of two vector arrays with different lengths
Zakwayda replied to C3D_'s topic in Math and Physics
I think only the mods can answer that question. I'd think any continuation of this discussion would belong here (I'd imagine future readers would appreciate having all the information and discussion in one place), but that's just my guess as a user of the forums. 
Linear interpolation of two vector arrays with different lengths
Zakwayda replied to C3D_'s topic in Math and Physics
Great - I hope that conditional evaluates to true. The resolution in the 3rd dimension would be the same as that of the original curve - that is, one value per point. Currently you do have data for each point - the absolute and delta times. What we'd need for each point, though, is information about the change in speed between the segment preceding the point and the segment following it (this is all assuming my idea makes sense and would actually work). In my previous post I included some ideas about how this value could be computed. In any case, I think it's an interesting problem, and if you decide you want to pursue this idea, I can try to elaborate on what I mean. But if you're reasonably happy with the results you have now, there may be little point in pursuing more complex solutions. As we've established, the current solution generally only loses timing information along relatively straight parts of the path, which may not be something the user is likely to notice or be concerned about. 
Linear interpolation of two vector arrays with different lengths
Zakwayda replied to C3D_'s topic in Math and Physics
Did you just make that? If so, nice. If in that example the red points lie in a plane and the blue part is perpendicular to that plane, then yeah, that's close to what I mean. The only difference is that in my example the time information is reduced to a single value for each point, so the blue part would be angular rather than curved and would only have vertices where there are red points. (Also, it wouldn't be 'time spent' exactly, but rather information about changes in speed.) RDP would then be run on the vertices of the blue part, which, theoretically at least, would account for both the curve geometry and changes in speed. That said, if that's the case, it sounds like maybe your problem is essentially solved and the extra complexity may not be needed. 
Linear interpolation of two vector arrays with different lengths
Zakwayda replied to C3D_'s topic in Math and Physics
It certainly looks just fine, and very natural. If you were to put the original and the 'replay' side by side instead of in sequence, one might be able to spot inaccuracies with respect to speed, but shown in succession there certainly don't seem to be any problems. Of course, as long as the path is sufficiently 'curvy', your algorithms are probably going to do fine with the timing. A problem case would be more like the one I described earlier, where the user draws a more or less straight line, but with varying speed. As for your video though, seems good to me 🙂 
Linear interpolation of two vector arrays with different lengths
Zakwayda replied to C3D_'s topic in Math and Physics
I thought a little more about the problem of losing relevant data with RDP. I'll discuss the speed issue here, but I think the same concept can be applied to other parameters, like pressure. The following is speculative and off the top of my head, and there may be errors (conceptual or otherwise).

Each segment in the original curve has a speed, which can be computed as (p0 and p1 are the endpoints of the segment):

    distance(p0, p1) / (p1.absolute_time - p0.absolute_time)

Every point but the first and last can then be given a 'speed delta' value that is the absolute value of the difference between the speeds for the segments preceding and following the point. The greater this value, the more 'important' this point is (irrespective of other factors) in terms of capturing information about the original drawing speed. Here's a concrete example:

    speed delta = abs(5 - 5) = 0
                 |
                 v
    *------------*------------*
       speed=5       speed=5

In this case you could remove the middle point without losing any information regarding speed. However, in this case:

    speed delta = abs(9 - 1) = 8
                 |
                 v
    *------------*------------*
       speed=1       speed=9

Losing the middle point would lose valuable information, turning a period of slow movement followed by a period of quick movement into a single, uniform average speed. This seems to be the problem you're describing, where your RDP implementation is losing this kind of information.

What I'm wondering is if the dimension in which the RDP algorithm is run could be expanded to include other parameters. So that we don't have to think beyond 3d, let's just stick to one extra parameter, speed. Imagine the geometric curve lying in the xy plane. Now, map the speed deltas to the z axis in 3d, so you have a 3d curve. If RDP is then run in 3d, incorporating this new information, the speed deltas can help prevent points that represent important speed information from being lost.

One factor here is scale. If the speed deltas are much smaller or larger in general than the geometric deltas, that could throw things off, giving the speed deltas too little or too much weight. A possible solution would be to include a scaling/weighting factor for the speed deltas (and any other additional parameters you wish to include) so that the contribution is appropriately proportional.

In summary, although I'm not sure I've worked out all the details here, I think it still may be possible to get the results you want by modifying the smoothing algorithm(s) (and given the possible expense of post-processing, that may be the only practical option). 
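A minimal Python sketch of the speed-delta idea, assuming points are (x, y, t) tuples where t is the absolute time at each point (the function names and the weight parameter are my own invention, and this is as speculative as the idea itself):

```python
import math

def speed_deltas(points):
    """points: list of (x, y, t) tuples, t = absolute time at the point.
    Returns one 'speed delta' per point: the absolute change in segment
    speed across that point. The endpoints get 0 by convention."""
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        speeds.append(dist / (t1 - t0))
    deltas = [0.0]
    for s0, s1 in zip(speeds, speeds[1:]):
        deltas.append(abs(s1 - s0))
    deltas.append(0.0)
    return deltas

def lift_to_3d(points, weight=1.0):
    """Map each (x, y, t) point to (x, y, weight * speed_delta), so that a
    3-D RDP pass can 'see' speed changes as curvature. The weight is the
    scaling factor discussed above."""
    return [(x, y, weight * d)
            for (x, y, _), d in zip(points, speed_deltas(points))]
```

For a straight line drawn slowly and then quickly, the lifted curve bends sharply at the speed change, so a 3-D RDP pass would be less likely to discard that point.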
Sorry, '< or <=' was probably a little confusing. What I meant is that you can use either < or <=, as you prefer. In other words, it's your choice which to use. Conceptually, whether you choose < or <= depends on whether you want to count 'just touching' as an intersection (<= will count 'just touching', < won't). Both in theory and in practice, the 'just touching' case is probably unlikely to occur (except in contrived cases), so which comparison you use may not matter much. As for squared distance, in this context comparing squared distances is equivalent to comparing distances, so you can save some math (in particular a square root) by comparing squared distances instead. Untested pseudocode, but here's how you might perform the test once you've computed the pair of closest points:

    return squaredDistance(closestPoint0, closestPoint1) <= square(2 * radius)
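In actual Python, that final comparison might look like the following sketch (the closest-point computation itself is assumed to have been done elsewhere, and the names are hypothetical):

```python
def squared_distance(p, q):
    """Squared distance between two points given as same-length tuples."""
    return sum((a - b) * (a - b) for a, b in zip(p, q))

def capsules_intersect(closest0, closest1, radius):
    """Given the closest points between the two capsules' axis segments
    (computed elsewhere), test for intersection without a square root.
    Uses <= so 'just touching' counts as intersecting; use < if it
    shouldn't."""
    return squared_distance(closest0, closest1) <= (2 * radius) ** 2
```

Note that both sides of the comparison are squared quantities, which is what makes the square-root-free test equivalent to comparing the actual distance against 2 * radius.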

I'm not exactly sure what you mean, but that doesn't sound correct to me. The test under discussion (implemented correctly) should detect any and all intersections. If that sounds wrong to you, maybe you could elaborate (I may just be misunderstanding what you said).

OBB-OBB detected 'intersected' but it's not
Zakwayda replied to isu diss's topic in Math and Physics
This exact issue is discussed in this thread. It goes into considerable detail, and even provides a nicely illustrated example of a failure case. 
Linear interpolation of two vector arrays with different lengths
Zakwayda replied to C3D_'s topic in Math and Physics
Aha - we finally found a way in which a post-processing step may fare better than doing the interpolation as part of the smoothing. However, all may not be lost. Here:

    RamerDouglasPeucker(PLin, 3.0, PLOut);

You have the opportunity to specify a tolerance for the RDP algorithm ('3.0' in the above example). You might try some different tolerance values to see if you can get the RDP step to retain more of the original data.

There may be a conceptual problem though. You could draw a roughly straight line of points with varying speed (or pressures or whatever), and RDP is probably simply going to discard that data more or less irrespective of the tolerance value, because RDP only considers geometry and doesn't care about the ancillary data. So in a perfect world, it seems you'd want something like this: only simplify parts of the curve that are relatively linear and for which the ancillary data deltas are fairly consistent. I have a vague idea about revising RDP to treat the ancillary data as additional spatial dimensions, and computing the distances in that space, meaning that significant changes in delta values would register as 'curviness' and cause that part of the curve not to be simplified. But I'm speculating (I'd need to think about it some more and test it out myself to have any confidence in that idea).

It also might be worth considering that if your last smoothing step adds in new points where RDP previously removed points, there might be little point in running RDP at all. Another question to ask is whether the smoothed paths are actually in some way more to your liking than the originals. Are all the smoothing steps actually necessary? Could you just use the original points? Or maybe just run corner-cutting but not the other steps? The post-processing step is tempting, but the complexity (and therefore execution time) when you have hundreds or thousands of points seems like an obstacle. 
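For reference, here's a minimal 2-D recursive sketch of RDP in Python (not the implementation being discussed, and the names are mine). It may make it easier to see why only geometry influences which points survive: the keep/discard decision depends solely on each point's perpendicular distance from the chord, compared against the tolerance:

```python
import math

def rdp(points, tolerance):
    """Minimal recursive Ramer-Douglas-Peucker on a list of (x, y) tuples.
    A larger tolerance discards more points."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    chord = math.hypot(x1 - x0, y1 - y0)
    best_i, best_d = 0, 0.0
    # Find the interior point farthest from the chord between the endpoints.
    for i, (px, py) in enumerate(points[1:-1], start=1):
        if chord == 0.0:
            d = math.hypot(px - x0, py - y0)
        else:
            d = abs((x1 - x0) * (y0 - py) - (x0 - px) * (y1 - y0)) / chord
        if d > best_d:
            best_i, best_d = i, d
    if best_d > tolerance:
        # Keep the farthest point and recurse on both halves.
        left = rdp(points[:best_i + 1], tolerance)
        right = rdp(points[best_i:], tolerance)
        return left[:-1] + right
    # Everything between the endpoints is within tolerance; drop it all.
    return [points[0], points[-1]]
```

Any ancillary data (time, pressure) riding along with a discarded point is discarded with it, regardless of the tolerance, which is exactly the problem described above.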
My intuition is that the smoothing algorithms could be further revised to preserve more of the original ancillary data when the deltas are irregular, but I admit that that would be a less straightforward change to make than what I've been suggesting so far. 
Too late to edit my previous post, but that code seems to test for whether the capsules are just touching, whereas I'm guessing you want to know if they're intersecting at all. For that, you only need to check that the distance is < or <= twice the radius (or you can do a squared-distance check and save some math). It looks like the source you're using as reference already addresses numerical problems. If there's some other issue you're trying to address via an epsilon, maybe you could clarify what it is.

Irrespective of whether the rest of the code is correct (it may or may not be), I'm not sure how this is supposed to work:

    diff = abs(close_d - (2*S_RADIUS))
    if (diff <= epsilon):
        return True
    else:
        return False

If the distance is less than twice the radius, then 'close_d - (2*S_RADIUS)' will be some negative value, and 'abs(close_d - (2*S_RADIUS))' will be some positive value that's likely > epsilon, therefore the function will (incorrectly) return false. Am I missing something there? (I may be.)
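For what it's worth, a corrected sketch of the test being discussed (the function and parameter names are my own, and the radius is passed in rather than read from a global) drops the abs() and epsilon entirely:

```python
def intersects(close_d, radius):
    """The capsules intersect (or just touch) when the distance between
    the closest points on their axis segments is at most twice the
    radius. No abs() or epsilon is needed; use < instead of <= if
    'just touching' shouldn't count."""
    return close_d <= 2 * radius
```

This returns True for any overlap, not just the 'just touching' case the original abs()/epsilon version was (presumably unintentionally) limited to.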
