Ravyne

Members
  • Content count

    4884
  • Joined

  • Last visited

Community Reputation

14300 Excellent

About Ravyne

  • Rank
    Contributor
  1. I--and most people, I think--view GDDs as a living document that's anything but sacrosanct; they're not meant to set things in stone and then be followed verbatim, or for what's written in them to go forever unchallenged and unchanged. I think the answer to what should be recorded is to record what seems important, when it's important, to the level of detail that's necessary to keep the team operating together -- not what's inconsequential, not sooner, and not in more detail than is useful and--more importantly--can be clearly seen. Over-planning carries risks just like under-planning -- working in perfect lock-step towards the wrong goal can waste just as much time, energy, and morale as working haphazardly towards unclear ones. Resist the urge to codify your speculations as anything but that -- "We don't know the answer yet, but we think this is an important question" is a perfectly fine thing to write down, and in fact far better than pretending to know.
  2. Compile time "polymorphism"

    It's a useful trick to use template specialization to unify the identifiers so that you can hand unrelated types to another template. It's especially useful when you don't control the types yourself, or can't change them to unify the names directly. If you do own the types and have the freedom to alter the signatures, it's usually better to do that directly. The C++ STL does this, more or less: strings are a distinct type, but they're also a container in the same sense as std::vector -- you can use them with the standard algorithms, for instance. Surprisingly few people seem to know that.
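As a minimal sketch of the idea (the names LegacyBuffer and SizeOf here are hypothetical, purely for illustration): a trait template gives unrelated types a unified interface, and a specialization adapts the type you can't change.

```cpp
#include <cstddef>
#include <string>

// A type we don't control, with its own spelling of "size".
struct LegacyBuffer { const char* data; std::size_t len; };

// Primary template assumes a standard-style .size() member...
template <typename T>
struct SizeOf {
    static std::size_t get(const T& t) { return t.size(); }
};

// ...and a specialization adapts the legacy type to the same name.
template <>
struct SizeOf<LegacyBuffer> {
    static std::size_t get(const LegacyBuffer& b) { return b.len; }
};

// Generic code, written once against the unified interface.
template <typename T>
bool is_empty(const T& t) { return SizeOf<T>::get(t) == 0; }

int main() {
    std::string s = "hello";
    LegacyBuffer b{nullptr, 0};
    return (!is_empty(s) && is_empty(b)) ? 0 : 1;  // exercises both paths
}
```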
  3. C++ Magazine

    Dr. Dobb's print run ended in 2009, but the articles are still available on the website. IIRC, I stumbled upon a zip file of the entire archive once, but I might be confusing it with something else. I'm not aware of any print magazines for C++ specifically; the few print magazines still around are either broader in scope or have a specific industry focus. isocpp.org is a pretty good aggregator for C++ goings-on, though.
  4. They're 64-bit CPUs, though some of the least expensive devices might only have 2GB of non-upgradable RAM and come with a 32-bit OS (Microsoft basically gives Windows away for some kinds of low-spec devices). Performance-wise, you're looking at a CPU core that's probably less than half the speed of an i-series CPU, between lower clocks and lower IPC, that probably doesn't support AVX, and that has a lot less cache. You're looking at single-channel memory. You're looking at an integrated GPU that's only 12 or 18 shader lanes wide (compared to 24, 48, or 72 on the i-series), clocked about half as fast, with no EDRAM, and having to share that single memory channel with running programs. It might work for you, but it's not going to be a premium -- maybe not even comfortable -- experience. You might be better off saving for a lower-end, non-Atom machine -- even one that's last year's model, or a gently-used/refurbished machine, might be a better value if cost is a factor.
  5. Keep in mind that a big part of the reason C++ can seem old-hat is that the majority of resources are pre-C++11 -- there's just a lot of material and code out there that reflects an earlier state of the language. Things are getting much better though, and if you know where to look, you can find current-day and even forward-looking resources. C++14 made significant strides in language ergonomics, and C++17 and the surrounding technical specifications are bringing some long-overdue libraries into the standard ecosystem. I do like Rust a lot as well, and I've been meaning to play with it more myself. It's got possibly the most amazing community, especially considering its modest size; the tooling is great, the error messages are awesome -- there's really a lot to like. The challenge with Rust, I think, is that it demands quite a lot from people as they approach it -- more understanding, and possibly more patience, than a lot of other languages.
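To give a quick, rough illustration of those ergonomic strides -- each marked line below notes the standard that introduced the feature it uses:

```cpp
#include <algorithm>
#include <memory>
#include <vector>

int main() {
    std::vector<int> values{3, 1, 4, 1, 5};           // brace-initialization (C++11)

    // Pre-C++11, this sort needed a named comparator or function pointer,
    // and the loop below needed an explicitly spelled-out iterator type.
    std::sort(values.begin(), values.end(),
              [](int a, int b) { return a < b; });    // lambdas (C++11)

    int sum = 0;
    for (auto v : values) sum += v;                   // range-for + auto (C++11)

    auto answer = std::make_unique<int>(sum);         // std::make_unique (C++14)
    return *answer == 14 ? 0 : 1;
}
```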
  6. Display resolution matters?

    You can do some low-level bit-fiddling/shift-arithmetic more readily if your width is a power of two -- it doesn't matter so much what your height is. Whether you gain a measurable advantage depends on other things. For example, if you don't have a proper display controller or useful DMA and you're blasting pixels to the display in real-time, then your CPU probably doesn't have enough downtime to do what we'd think of as traditional software rasterization -- the hard-real-time constraint becomes the limiting factor, along with any memory/device access latencies in the pixel loop. It's also hard to make blanket statements about embedded devices. Some displays might have embedded buffers, some might have serial addressing, some might have planar video memory, some might have a line-stride that's different from their width in pixels, or you might be attempting to use an 8-bit microcontroller with a display that's wider than 256 pixels, or the display wants 9-bit color, or... you get the idea. In general, powers of two are nice in some modestly useful ways, but the opportunity and meaningful ability to leverage those nice things are pretty situational. For a computer, a power-of-two number is kind of like a prime or square number is for a mathematician -- it's an ordinary number with an extra property that might sometimes give way to a helpful observation or shortcut, but more often than not, doesn't. If the inner kernel of what you're doing can exploit one such trick well, you can see considerable speedups -- or none at all.
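As a minimal sketch of the classic tricks a power-of-two width enables (the 256-pixel stride here is just a hypothetical example):

```cpp
#include <cstdint>

constexpr unsigned WIDTH_SHIFT = 8;                 // log2 of a hypothetical 256-pixel stride
constexpr unsigned WIDTH       = 1u << WIDTH_SHIFT; // 256

// Linear framebuffer addressing: offset = y * width + x.
// With a power-of-two stride, the multiply reduces to a shift...
inline std::uint32_t pixel_offset(std::uint32_t x, std::uint32_t y) {
    return (y << WIDTH_SHIFT) + x;                  // vs. y * 320 + x on, say, a 320-wide mode
}

// ...and wrapping a coordinate reduces to a mask instead of a modulo.
inline std::uint32_t wrap_x(std::uint32_t x) {
    return x & (WIDTH - 1u);                        // vs. x % 320
}
```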
  7. Simplicity vs. Complexity

    Complexity is sometimes necessary to achieve a design; it's unnecessary complexity that's to be avoided. Complexity always carries a kind of psychic weight, and if that weight is greater than the payoff, it's a net negative. You could say that good design is, in the end, another facet of the "make it as simple as possible, but no simpler" axiom.
  8. Also getting the PS4 pro? what to expect

    [quote name="dpadam450" post="5318603" timestamp="1478573756"]Yea its not upscaled 1080p. I work on Killing Floor 2 and its 3200x1800 effectively. So its not native 1800p but it looks pretty freaking good for a $400 complete system. I spent $400 on my 1070 alone to do full 4k native, so for console gamers, it is pretty legit. Also, buy Killing Floor 2 please.[/quote] 1080p upscaled on the original PS4 is what I said, but yes, 4K looks pretty great regardless of whether it's checkerboard or upscaling something like 3200x1800.
  9. getting a new PC for DX development

    I think 6-8 cores will be the norm for gaming builds in 2-4 years -- they're already available, and not even hideously expensive now, and it's shaping up that Zen performs on par with at least Haswell (pessimistically), and more likely Broadwell/Skylake, on a core-for-core, clock-for-clock basis. Assuming AMD delivers on clock-speed (we've already seen 3.0GHz demoed on an engineering sample, which would be a bit low, but there's no telling if they're just holding back), it's looking quite likely that they're prepared to deliver competitive 8-core CPUs. They'll all but certainly take their usual tactic of undercutting Intel, who'll respond by restructuring their current price-sheets immediately, and likely by moving 6- and 8-core designs down to more mainstream segments ASAP. This will happen unless AMD fails to deliver a compelling value, but even if clockspeeds disappoint, it only takes the right price to make those 8-core Zens appealing, given the baseline IPC we have evidence for. If we were a few months further along, I'd have recommended waiting to see what Zen delivers, but at some point you just have to stop looking over the horizon or you'll never buy a computer again.
  10. Also getting the PS4 pro? what to expect

    IIRC, Sony is mandating that all games continue to support the original PS4, and essentially mandating that all new games support the Pro's extended capabilities in some way. For most games that will mean 1080p upscaled approaching 60fps on the original and 4K upscaled approaching 60fps on the extra-crispy Pro, at minimum, with many games probably also providing a setting for true 1080p at a solid 60fps with higher fidelity on the Pro hardware -- a lot of the market doesn't have 4K yet, so that power might as well be put to good use. For me, with my very nice, very low-latency 1080p television, I'm looking forward to the PS4 Pro and Xbox Scorpio giving me a really good 1080p experience, and that's worth the cost of upgrading. Neither machine is really capable of 4K/60 without rendering tricks anyhow; it'll be another proper generation before that happens.
  11. getting a new PC for DX development

    On threading, most modern games typically expect about 6 threads on 6 modestly-speedy cores, or 4 to 8 threads on 4 significantly-speedy cores. The current crop of consoles is the former, with the console giving developers 6 or 7 full cores out of the 8 in the machine -- these are Jaguar-architecture cores in both Microsoft's and Sony's boxes, running at around 1.6GHz (2.1GHz in the PS4 Pro). A modern gaming PC typically has 4 cores and 4 or 8 threads, with much higher clock-rates and better IPC in general. That being said, modern games typically employ some sort of job queue to leverage all those resources, and don't really lock particular workload-threads to particular cores, aside from maybe a few well-known, long-lived ones -- e.g. rendering, UI/input, and networking might be statically-allocated threads. Most times what they do is structure some piece of work as an independent unit (say, computing collisions for a particular body) and stick it into a priority queue where a job-manager will find it and assign it to an available thread -- the main thread of the engine gets the result asynchronously at a later time, going about its business (usually, making more jobs) until results are in. This scales really well in general, but it was a really crucial abstraction during the PS3/Xbox 360 generation, where the architectures were so different (3c/6t with extended VMX SIMD + GPU compute on the 360; 1c/2t + 7 SPEs (Cell) on the PS3).
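For a rough idea of the shape of such a system, here's a minimal, simplified sketch -- a plain FIFO rather than a true priority queue, and the JobQueue name is just for illustration:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Worker threads pull independent units of work (e.g. "compute collisions
// for body N") as they become available; results are reported back
// asynchronously by the jobs themselves.
class JobQueue {
public:
    explicit JobQueue(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { run(); });
    }
    ~JobQueue() {  // drains remaining jobs, then joins the workers
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> threads_;
    bool done_ = false;
};

int main() {
    JobQueue jobs(4);  // e.g. one worker per available core
    for (int i = 0; i < 16; ++i)
        jobs.submit([i] { /* independent unit of work for item i */ });
}  // destructor waits for queued work to finish
```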
  12. Changes to GDNet+ and More

    These seem like great tiers to me, and the included ad impressions make it a steal if you're going to use them. The yearly prices are quite fair regardless, and the ad-lite/ad-free stand-alone subs are nice for people who want to support the site directly but wouldn't use the Pro features -- maybe they too should get some other forum badge in recognition of that. Speaking of badges -- will crossbones go away entirely, or just no longer confer the benefits of the old Plus membership? It'd be cool to keep it around just as a marker of people who make regular contributions in the form of articles, journals, or significant reputation gains during some window of time.
  13. getting a new PC for DX development

    When I moved from quad-core (8-thread) to hex-core (12-thread) i7s, my build times improved linearly. For day-to-day productivity as a professional developer, the difference was absolutely worth the two hundred bucks or so it cost to go to hex-core and X99. The K-series (overclockable) parts have allowed me, more than once, to hit target framerates when running demos of our work that had not reached proper optimization stages. It's up to you whether these things matter for a personal machine, but it would be absurd to cut these corners in a professional environment.
I do NOT recommend going to SLI/Crossfire configurations except in extreme cases requiring multiple top-of-the-line GPUs. They are much more trouble than they're worth.
On X99/hex-core, also consider Xeon v4 -- most X99 motherboards are compatible (but check your manufacturer's page and note the BIOS version required), and there are a couple of benefits: ECC, support for server/workstation/enterprise features, and a full complement of 40 PCIe lanes all the way down the range (even the quad-cores). You give up 200MHz on the base clockspeed (comparing a Broadwell i7 hex-core to a same-priced Broadwell Xeon v4 hex-core) but *gain* 200MHz on boost clocks -- which is probably a more beneficial arrangement. You're looking at a ~620 USD processor in any case, but if money is tight you can go quad-core Xeon v4 for under 300 USD and get all those benefits for about the same price as a consumer i7 -- you give up some clockspeed (~400MHz), but gain quad-channel memory and a couple megs of cache over the consumer part.
This X99/socket-2011 strategy has another potential side-benefit: in 3 years or so a *ton* of server equipment is going to start going off-lease and being retired, and you'll be able to pick up a high core-count Xeon (and probably tons of RAM) relatively cheap, maybe extending the life of the machine another few years. There are 8-core, 2.6GHz, dual-socket-capable Xeon v2s going for about $65 each right now all over eBay. People are using them to build out 16-core, dual-socket machines with 64GB of ECC for ~600 USD; lots of YouTubers are using them for video capture, rendering, and transcoding. I considered building one myself, but my workloads don't really scale out that wide.
On video cards, definitely prefer a faster single card -- AMD actually does a really good job scaling traditional Crossfire workloads, but the future of traditional SLI/Crossfire is a bit murky since both D3D12 and Vulkan move that functionality out of the driver and put it into the hands of devs to figure out. Don't feel like you have to chase the top end, though -- the diminishing returns are really pronounced. Think about what resolution and settings you want to play at, then choose a card that can deliver 60FPS under those conditions. 1060s and 480s are good to about 2560x1600 or (stretching a bit) 3440x1440 ultrawide -- though I'd probably go 1070 for 3440x1440. If you wanted to go all the way to 4K, only the newest Titan/1080 Ti really hits 60FPS consistently; the vanilla 1080 falls just short unless you dial settings back a bit.
As a developer, I'd only go multi-GPU in my own system if I planned to support it, but if you're using D3D11 or OpenGL, where the driver provides it, you really should make sure you test for it in any case.
Also keep in mind that the new APIs allow you to mix GPUs of different capabilities and vendors now -- for example, you could have the integrated GPU do simple tasks like rendering the skybox and UI, and running full-screen effect shaders. It doesn't sound like much, but these low-utilization tasks can sometimes stall the next frame on a single GPU (or at least occupy GPU hardware while not utilizing it as much as game rendering would) -- done well, the end result is picking up a handful of frames per second, with generally better frame-rate consistency.
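For context, here's a minimal sketch of enumerating every adapter DXGI exposes -- integrated and discrete, any vendor -- which is the starting point for that kind of explicit multi-GPU work (Windows-only, MSVC):

```cpp
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // Walk the adapter list; under D3D12/Vulkan it's the application's job
    // to decide which device gets which workload.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, adapter.ReleaseAndGetAddressOf()) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // Description is a wide string; DedicatedVideoMemory is in bytes.
        wprintf(L"Adapter %u: %s (%zu MB dedicated VRAM)\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
    return 0;
}
```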
  14. getting a new PC for DX development

    There's no denying the sheer speed of a RamDisk, but it's awfully costly -- $800 is the lowest-priced 4x32GB DDR4 ECC kit I see on Newegg, and most of the kits are half-again or twice as expensive. And it really only addresses that initial load latency, and only then if the transformations you're doing are actually bottlenecked by a fast NVMe PCIe SSD (you can get a half-terabyte drive with 3.2GB/s reads for around $350) -- I would think most studios would be thrilled to death if the tools in their build pipeline could consume and process assets at that rate. RamDisks have their place, but I'd want to know I was getting a real benefit before going through the trouble, and it certainly doesn't negate the need for fast non-volatile storage backing it. I probably would use a RamDisk on something like a build or continuous-integration server for staging the build, but more for the sake of not putting all that wear-and-tear on the SSD than for any performance improvement.
  15. getting a new PC for DX development

    It's honestly hard to go wrong these days as far as capability goes -- even Intel Broadwell and Skylake integrated graphics support D3D12 (in fact, I think they're technically the only ones who support every optional feature -- neither AMD's nor NVidia's best do). You could even do well with a moderately higher-end laptop without breaking the bank. So what you're really weighing is performance and creature comforts vs. price.
You really should consider a single larger monitor or dual smaller monitors. For a single, I'd do something like an ultra-wide 3440x1440 -- plenty of screen real-estate without taking up so much desk real-estate, a good size for 2 or even 3 applications/text buffers side-by-side, and at 34 inches the DPI is low enough not to invoke Windows' UI-scaling gremlins. If you want something a little smaller, you can do 2560x1440 (16:9) or 2560x1600 (16:10) at 27" or 30". Dual monitors are even better, IMO: you can have the game running fullscreen on one and a debugger, profiler, or other tools open on the other -- while I'm writing code, I tend to use the second screen for research/design docs/notes.
Do get a good-quality M.2 SSD of a decent size for your usage, though -- make sure it's a PCIe 3.0 x4 NVMe drive and that your motherboard choice supports the full PCIe 3.0 x4 bandwidth -- these drives can do sequential reads north of 2GB/s (the very newest north of 3GB/s), and writes at about half that rate. Beware the cheaper SATA-protocol M.2 drives, though -- they're limited to around the same 500MB/s read and ~400MB/s write speeds of their off-board SSD counterparts.
For bulk storage, I'd suggest getting a *pair* of drives and mirroring them in case one drive dies -- it's fairly rare for a drive to die these days, but it's common enough not to leave to chance, since the consequences of a lost drive can be devastating. If you're on Windows, I heartily recommend Windows Storage Spaces -- it takes over whole drives and virtualizes their combined storage. Create a virtual drive on that pool and set it to mirrored, and the bits will be written to both drives; set it to striped, and half the bits will be written to each disk, increasing read/write speeds. You can even create virtual disks that are larger than the sum of the drives in the pool, and when you get close to overflowing what you've got, Windows will prompt you to add more or larger drives and safely shuffle things around. On Linux, there are various soft-RAID solutions that do similar things. Hardware RAID is more trouble than it's worth, IMO -- the marginal throughput advantages don't outweigh the additional administration burdens and potential downfalls. For example, moving a Storage Spaces pool to an entirely different PC is stupid simple, but hardware RAID isn't always so, because you might have a different/incompatible RAID controller -- what if your RAID controller card dies?