
#5279856 How C++ Programs Are Compiled (A Brief Look)

Posted by Bacterius on 06 March 2016 - 12:51 PM

that _is_ the reason. it has to be able to load anywhere, so it can co-exist in ram with other apps. wouldn't work too well if all apps expected to be loaded at some specific address. what if you tried to run two of them at once?


Replace "apps" with "dynamic libraries" and it'll be correct. Each application already has its own address space all for itself, and maps segments of dynamically loaded libraries into its address space.

#5278429 Vulkan command pools and buffer questions

Posted by Bacterius on 27 February 2016 - 06:42 AM

If I can't call vkBeginCommandBuffer(), how would a command buffer ever be used? Isn't it a requirement to call vkBeginCommandBuffer() prior to recording any commands in the buffer? Are command buffers from a pool without the bit set already implicitly in a 'recording' state, so that vkBeginCommandBuffer() isn't required? Or is vkBeginCommandBuffer() only callable once, and can't be used a second time to reset the command buffer? (It seems to imply the latter in 5.3 but doesn't really outright state it.)


You can call it once, while the command buffer is still in the "initial state", and it only becomes executable after some commands have been recorded into it for the first time; see the first few paragraphs of "Chapter 5. Command Buffers".
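As a rough illustration, the lifecycle looks something like this. This is a non-runnable sketch: `cmd`, `device` and `pool` are assumed to already exist, and all error handling is omitted.

```cpp
// Sketch only: assumes `cmd` is a VkCommandBuffer allocated from a pool
// created with VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT.
VkCommandBufferBeginInfo beginInfo = {};
beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;

vkBeginCommandBuffer(cmd, &beginInfo);   // initial -> recording state
// ... vkCmd* calls record work here ...
vkEndCommandBuffer(cmd);                 // recording -> executable state

// Recording into `cmd` again requires a reset first, either per-buffer
// (which needs the pool flag above)...
vkResetCommandBuffer(cmd, 0);
// ...or wholesale for every buffer in the pool:
// vkResetCommandPool(device, pool, 0);
```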


- When vkResetCommandPool() is called are the command buffers just reset or are they destroyed and have to be recreated?


I'm not sure but I think it only resets it, but it may release internally-used host memory depending on the flags passed (that memory would presumably be allocated back when recording a new batch of commands in that command buffer).


Also experimenting with Vulkan, so take my comments with a grain of salt :)

#5275258 What theory explains this?

Posted by Bacterius on 11 February 2016 - 05:44 AM

Even over that short a time span, is it a fact that there is no observable acceleration before the object reaches its speed? How big would that force have to be? And it doesn't seem to be emitted by matter adjusting during the collapse; too little energy to overcome the energy contributed by the contact, right?


There is an acceleration, it's just very large and very brief: the objects only touch for a very small amount of time and a large kinetic energy transfer is carried out over that interval. The acceleration can be huge (on the order of several thousand g's) but it isn't infinite.

#5275217 Clinical studies on overlooking stupid bugs

Posted by Bacterius on 10 February 2016 - 10:25 PM

It's not really specific to code or programmers. Have you never re-read a sentence you wrote several times without spotting a repeated article like "the the" or a missing word? Then you get someone else to read it and they spot it instantly, because they didn't write it so they don't "optimize away" the act of actually reading the sentence like you do because you already (believe you) know what's there and they don't.


One thing that I find helps is to add a generous amount of whitespace in your expressions, to keep your lines short (not necessarily in terms of characters but in terms of information content), and to not just cram everything into a tiny line full of symbols. For the rest, let the compiler check the syntax for you, and make sure that it can catch accesses to undefined variables at compile time, or at least at runtime with some kind of error. That should make sure that the syntax you write is approximately the syntax you had in mind.


I don't know about others but it's not rare for me to start coding something up, try and compile it 20-30 minutes later and spend a few seconds fixing a dozen little syntax errors or typos that I didn't catch (or didn't bother to) in my new code.

#5275091 Note to self

Posted by Bacterius on 10 February 2016 - 12:58 AM

Fortunately we had good QA at a previous job, but once I was using SHDeleteFile and apparently instead of deleting a specific directory, I was deleting EVERY directory!




QA noticed it during install once Windows started throwing up errors.  Apparently not all critical files are locked at all times, but when it needs them and they aren't there then it's not a good thing.  XD


Good thing QA "noticed" it :D

#5272402 C++ exceptions

Posted by Bacterius on 23 January 2016 - 03:56 PM

Also, queue push, regardless of the prototype, can fail. It can fail if it fails to allocate memory, and it could potentially have a ring buffer implementation that would run out of queue space sooner.


Not necessarily. If the queue uses a fixed-size circular buffer then pushing an item into it can never fail. In many situations old queue items can simply be discarded to make room for new ones.


Using operations that cannot fail whenever possible and making use of transactional memory patterns for destructive operations that could fail is a good way to increase your software's reliability. But that may be overkill for many applications such as games; after all, in a game most of the time you can just abort on error, as long as the player's save file is not corrupted and not too out of date, no harm done.


Although I agree that just checking the prototype isn't enough by itself; you want to have a good read of the library's documentation and maybe peruse its source code a bit to see how it handles errors. When you know for sure that functions that return void legitimately cannot fail, then you are good and can just refer to the prototype for quick reference. Of course, if the library is written by baboons that use longjmp to escape to god knows where on error, do you really want to use it in your software that's supposed to be reliable? :)

#5272071 Risks Of Using Computer As Webhost?

Posted by Bacterius on 20 January 2016 - 05:52 PM

Honestly it's not worth it. You can get away with hosting private, low-availability or low-bandwidth services locally, but for anything more serious, such as a public website or a game server, you will never meet an acceptable uptime. Someone might flood your residential line (super easy) or, worse, if you have a data cap and a shitty ISP, you may find overnight that you've gone 300% over, been charged $600 in overage fees, and had your subscription suspended.


You can get a shared VPS for as little as a couple of dollars a month and a dedicated server for $30/mo or less. The difference is that they sit in a data center connected to a fat network pipe, have better uptime, and you can actually use them for almost anything you want (for many ISPs, hosting commercial servers, torrenting hubs or even game servers is against the terms of service).


Also don't forget that if you host a server at home that is separate from your own desktop/laptop/whatever (which is probably a good idea) then you also need to pay for the electricity to run it; and you may find that comes out about as (or more) expensive than just renting hosting... having an appliance running 24/7 is actually pretty costly these days even if it doesn't draw much!

#5270811 Weird access violation when copying data

Posted by Bacterius on 13 January 2016 - 02:24 AM

Imagine if the entire forum contained "deleted post" in every post of every thread. Please don't remove your questions; it helps no one and simply means people in the future are led to this thread by its title and find... nothing :( If you find your solution, please share it for future visitors!

#5270499 Render to a texture

Posted by Bacterius on 11 January 2016 - 01:07 AM

I believe the swapchain is only used when you commit the backbuffer's contents to your display device using the Present() method. If you are just rendering to a texture, then you just don't present, after drawing you can then retrieve the contents of your texture and do whatever you want with it.


I am pretty sure if you are only doing offscreen rendering (for instance some command line conversion tool) then you don't even need a swapchain at all.

#5270228 One function for several structs in a void**

Posted by Bacterius on 09 January 2016 - 12:34 AM

*ppVerts is of type void*, so you need to cast it to your array's type (PosCol* in this case).

#5269266 Criticism of C++

Posted by Bacterius on 04 January 2016 - 03:48 PM

Which is just the issue... pointer arithmetic that lands outside the bounds of an array is undefined behavior. I'm not going to argue that this makes 99% of all programs ill-formed (with haphazard results) because every pointer arithmetic bears a result that is outside the bounds of some array (and given the fact that there is argv, there is always at least one other array).


... what? Not some array, the array that is involved in the pointer arithmetic expression! And while that specific part of the standard seems arbitrary in light of the modern, unified, fully byte-addressable memory model of today's architectures, it makes more sense in the context of segmented memory architectures. There, C still only gives you, say, a single int* pointer type, but you could have two int arrays in two different memory segments, and it's just not possible to meaningfully subtract the two array pointers, or add an integer to one array to somehow reach the other. With this in mind it makes sense not to let distinct arrays interact in any way (not that most code does this anyway).


EDIT: I think I see your misunderstanding now; the standard states that a pointer not otherwise part of an array may be treated as a one-element array in the context of pointer arithmetic (it is actually very clear on that point)


I agree some aspects of undefined behaviour can seem punishing in that a meaning could have been assigned to the operation that everyone would have been happy with and it would have made life much simpler. But, these things were decided upon a long time ago, and in many cases there were historical reasons for why the standard is written a certain way.

#5268988 How long does the C/C++ preprocessor keep going?

Posted by Bacterius on 03 January 2016 - 08:12 AM

The algorithm followed by the preprocessor is actually quite elaborate and involves stacked contexts to correctly handle nested and/or recursive macros (for instance if a token is expanded using some macro, then that macro will not be considered in expanding the resulting token). Probably the most direct way to fully understand the system, if you want to, is to read the docs and maybe even the internal docs.


But in most cases the preprocessor can be thought of as a process that does a single pass over the input stream, keeps track of all preprocessor macros encountered so far, and tries to expand each token according to these macros. If a token is encountered before the macro that's supposed to handle it is defined, the token will be left untouched.


Be aware that despite its apparent simplicity, advanced use of the preprocessor is typically not portable; things will work on a particular compiler and will break horribly on another because it doesn't expand your tokens correctly or somesuch. If your preprocessor definitions are growing out of control, consider reviewing your design to see if all these macros are needed, and then think about using a more specialized preprocessing system like m4 or similar.

#5267590 Multiplication support in GLM

Posted by Bacterius on 22 December 2015 - 10:28 PM

It doesn't look like anything geometrically, it's just component-wise multiplication. One typical use is in blending, where you might multiply colors together channel-wise. Not all vec3's are geometric, sometimes they are (ab?)used as a bag of three related values.

#5267102 What will change with HDR monitor ?

Posted by Bacterius on 19 December 2015 - 11:12 PM

Since the original question was "What will change with HDR monitor?": if the monitors truly are HDR (like Hodgman said, "HDR" is quite overused by marketing, so what "HDR monitor" really means depends), then what will change for sure is your electricity bill. A monitor that can show a picture with a dynamic range around 4-10 times higher is bound to consume much more electricity when the "shiny stuff" lights up the pixels.


Funny how the "staying green" slogan is also important.


If the monitors had a high enough dynamic range then our GPUs might in contrast use less power, as they won't need to do things like tonemapping, bloom, lens flares and so on; the monitor and our eyes will do it automatically, so we may actually save electricity in the long run! :D (although we may lose overall due to people accidentally setting fire to their homes and losing their eyesight after displaying a rendered sun on their monitor :/ )

#5264540 PRNG Question

Posted by Bacterius on 01 December 2015 - 11:28 PM

Bacterius, so you're implying I should use a cryptographic algorithm to create my random numbers?


I'm saying that it's one surefire way to do it (that also provides a bunch more advantages when done right, such as an extremely small memory footprint, the ability to seed, zero issues with poorly distributed seeds, good parallelization properties, optimal quality of output, and effectively infinite period). Sure, they may be slightly slower than a highly optimized dedicated generator, but:


 - random number generation is almost never the bottleneck in software as almost all generators are completely CPU-bound (and if it is, there are ways around that, and if you seriously need every last clock cycle for some embedded or HPC task then you are definitely an outlier and already know the tradeoff between output quality and the amount of work you'll need to put in to actually achieve that)

 - on the other hand, random number quality is actually a much bigger deal, and you avoid plenty of subtle issues with the way "dedicated" generators expect to be used, e.g. "weak seeds" or the all-too-common "yeah, the first few bytes kinda don't really look random, just skip them" defect


So, yes, it's a heavy-duty solution, but it's a flexible one that I've come to rely on and that has never once failed me. Why settle for anything less when the cost is so low, and the benefits so great? I have nothing against things like the Mersenne twister or xorshift, they are nice and noteworthy algorithms in their own right, I just find them inferior in almost every way to the above when it comes to practical matters. Necessary reading if you are interested in knowing more.