
irreversible

Member
  • Content count

    1930
  • Joined

  • Last visited

Community Reputation

2864 Excellent

2 Followers

About irreversible

  • Rank
    Crossbones+

Personal Information

  • Interests
    Programming
  1. Buying a decent last gen laptop in the EU?

    I agree that the Lenovo example isn't strictly generalizable, but the root problem is there. Reviews rarely cover even the most basic flaws that Joe Average may encounter with mass-market units, as opposed to preview samples, which are likely to be hand-picked to begin with. Coil whine is a handy example - it isn't something I've ever seen covered in reviews, and it's one of those things you probably don't even know to look for prior to purchase. It doesn't help that not all laptops, even from the same SKU, are made equal, so one specimen might have it bad while another might be free of it. Compound this across generations of the same model, though, and yeah, I would say it's a pretty gross QC issue, or worse yet - sheer negligence. The bottom line is - for this kind of money one shouldn't feel like they're playing the lottery.
  2. My old laptop's body is falling apart around the power connector (thanks again, Acer!) and I'm approaching the critical point where I'm pretty sure I will either short the entire motherboard or lose the connector altogether. I also want an upgrade, and this time I figured I'd pay for it. So I started doing research and this is where I'm at.

     Wants: 15 inch, i7 8550U or better (hence the last-gen cutoff), at least one Thunderbolt port, 16 GB RAM, 1 TB storage or easy access so I can upgrade it myself, battery life of 6+ hours when coding (and compiling regularly). The price point is ideally less than €2000, but I'm willing to go as high as €2700 if the laptop is worth it. Nice to haves: touch, good to excellent sRGB coverage, weight less than 1.5 kg.

     The XPS 15 2-in-1 looks nice on paper, but is off the table due to apparent thermal throttling and coil whine issues. The long-rumored HP Spectre x360 2-in-1 with RX Vega graphics was supposed to launch on March 16, but so far I've seen absolutely nothing related to it. Previous models have consistently had the same issues as the XPS, though, and HP's quality control appears to be horrendous to boot. The LG Gram 15 would actually be GREAT (with one possible caveat), but of course it's limited strictly to the US market. Oh, and it's out of stock, because of course it is. Lenovo's X1 Yoga and X1 Carbon both look fantastic, but I can only find one i5 model of the Carbon in the EU. The high-end Yoga is readily available in the US. The P52s looks pretty decent as well and is available, but I can't find any reviews for it. Like - none. Which seems to suggest it's a localization issue. The Surface Book 2 looks nice, I guess. Just one thing, though: the price tag. Oh, and Apple is off the table. Because Apple. As for Acer - I've had 3 Acers now and they've all literally fallen apart (the screen came off, a 5 cm column of pixels on the left stopped working, the body started falling apart around the power connector - in that order). Each lasted pretty much exactly two years.

     tl;dr: What is going on here? Why is the US getting all the goodies while there seems to be no way to (officially) buy any of this stuff in the EU, AU or presumably elsewhere (I've no idea about the Asian market)? I would actually be fine with an incremental release, but I can't even find the older models of the Gram for sale in the EU, and there seems to be zero indication as to when any of the above-mentioned models might become available around here. Like - daheck... So yeah - is there something obvious that I'm missing, or has this always been the case?

     Here are a few less related technical questions in case anyone feels like chiming into the conversation:
     - is 4K on a 15 inch worth anything at all productivity-wise, or is it just there to demolish battery life?
     - is something as trivial as a cooling pad effective against thermal throttling?
     - why is quality control so horribly bad in what purport to be high-end laptops/ultrabooks? The Lenovo link above has four 5/5-star reviews and two 1/5-star reviews, because for 20% of the people, you know, the device's screen came apart in a couple of days. Searching for problems with the XPS or the Spectre turns up a similar ratio.
  3. I already have the code down that deals with signature extraction in very much the same way. I ended up having a little bit of trouble getting it to work with const member functions, though. Anyway, here's the solution I came up with yesterday. I haven't tested it thoroughly, but it does seem to work. A short rundown of what the code does:
     - First, remove const from member functions. I don't know why, but return type deduction does not work when const is present and reports the decltype to be an incomplete type. Perhaps someone can fill me in here? (A possible explanation is sketched after the snippet below.)
     - Next, use really simple manual return type deduction.
     - As far as I can tell, there is no way to deduce the current class at compile time without previously noting it down somewhere. This is fine in my case, as I'm writing a kind of plugin system which requires the user to implement a default interface with something like IMPLEMENT_EXTENSION(...classnamehere...) in the main body of the class anyway. Nevertheless, the solution here is to simply typedef the class somewhere near the top of the class declaration.

     Here's a short test snippet that works in VS2013. It automatically forwards any call to the base class, but it's trivial to make it do anything.

     using int32 = int;  // assuming a project-wide integer typedef

     // return type deduction from a member function
     template<class T> struct return_type;

     template<class C, class R, class... Args>
     struct return_type<R(C::*)(Args...)> { using type = R; };

     // const removal from a member function
     template <typename T> struct function_remove_const;

     template <typename R, typename C, typename... Args>
     struct function_remove_const<R(C::*)(Args...)> { using type = R(C::*)(Args...); };

     template <typename R, typename C, typename... Args>
     struct function_remove_const<R(C::*)(Args...) const> { using type = R(C::*)(Args...); };

     // just to hide the local clutter
     #define FORWARD_EVENT(_fn, ...) \
         using _fn##_FUNTYPE = function_remove_const<decltype(&CLASS_TYPE::_fn)>::type; \
         using _fn##_RETTYPE = return_type<_fn##_FUNTYPE>::type; \
         return _fn##_RETTYPE(this->TESTBASE::_fn(__VA_ARGS__));

     class TESTBASE {
     public:
         virtual void voidfn() const { }
         virtual bool boolfn(int32 arg) const { return arg == 1; }
     };

     class TESTCLASS : public TESTBASE {
     public:
         // need this in FORWARD_EVENT()
         using CLASS_TYPE = TESTCLASS;

         void voidfn() const override {
             // returning void is not an issue
             FORWARD_EVENT(voidfn);
         }

         bool boolfn(int32 arg) const override {
             FORWARD_EVENT(boolfn, arg);
         }
     };

     I found nothing of this sort on the web, so hopefully someone finds this useful.
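     A minimal sketch of what is likely going on with the const issue above: return_type only specializes plain R(C::*)(Args...), so a pointer to a const member function matches no specialization and the primary template stays incomplete. A second, const-qualified specialization (hypothetical names below) would let the deduction work without stripping const first:

     #include <type_traits>

     template<class T> struct return_type2;                      // primary template, deliberately undefined

     template<class C, class R, class... Args>
     struct return_type2<R(C::*)(Args...)> { using type = R; };

     // the extra specialization that handles const member functions directly
     template<class C, class R, class... Args>
     struct return_type2<R(C::*)(Args...) const> { using type = R; };

     struct S { int get() const { return 42; } };

     static_assert(std::is_same<return_type2<decltype(&S::get)>::type, int>::value,
                   "return type deduced straight from a const member function");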
  4. That thread deals with RTTI. I need to accomplish this at compile time.
  5. I need to implement really simple event forwarding and I want to avoid two things: 1) having the user type out the name of the current class (or any class, for that matter) and 2) having to worry about the return value, which may be void. The forwarding itself will be wrapped into something like:

     bool/int/void MyCustomHandler::OnMouseEvent(int e, int x, int y) {
         FORWARD_EVENT(OnMouseEvent, e, x, y); // calls some default implementation for this event
     }

     I want FORWARD_EVENT to work everywhere as long as it is given the name of the function and its arguments. Of the two problems listed above I have (2) under control, and I can deduce the return type given a full function signature. But I can't figure out if there's a way to automatically deduce the signature itself. Basically, I want the compile-time equivalent of this:

     using t2 = decltype((*this).OnMouseEvent);

     I've butted heads with member functions before, so this has bugged me in the past. So far, as best I can tell, there's no real way to accomplish this. I hope I'm wrong (one possible direction is sketched below).
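     One possible direction, sketched under the assumption of a C++11 compiler (Base, Handler and OnMouseEvent are placeholder names): inside a non-static member function, decltype(*this) refers to the enclosing class, so the class - and with it the member function pointer type and return type - can be deduced without spelling out its name:

     #include <type_traits>

     struct Base {
         virtual int OnMouseEvent(int e, int x, int y) { return e + x + y; }
     };

     struct Handler : Base {
         int OnMouseEvent(int e, int x, int y) override {
             // decltype(*this) is Handler& here, so the enclosing class can be
             // recovered without naming it (add remove_cv inside const members)
             using Self = std::remove_reference<decltype(*this)>::type;
             using Fn   = decltype(&Self::OnMouseEvent);      // int (Handler::*)(int, int, int)
             static_assert(std::is_same<Self, Handler>::value, "current class deduced");
             static_assert(std::is_same<Fn, int (Handler::*)(int, int, int)>::value, "signature deduced");
             return Base::OnMouseEvent(e, x, y);
         }
     };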
  6. C++ Get HWND of another application

    Read the reply in that thread. It outlines the method and mentions the fact that you may not be dealing with a single main window. Once you can list the windows that belong to a process, figure out how to do that for a different process. The first logical step here would be to swap the current process out for whatever process you need. Again, first reply. Note that there may be more than one instance of a single executable running, so you probably need to list all processes called notepad.exe, open each one and list all of its windows, doing your best to figure out which one is the main window (a sketch of the enumeration step follows below). This may be trivial for notepad.exe, but not so much for something messier, like Gimp. This will only give you a valid result if the user is editing an unnamed and likely unsaved document, and only if there is one instance of Notepad running.
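    A minimal sketch of the enumeration step, assuming the target process ID has already been found (e.g. by walking the process list with Toolhelp32); FindWindowsData, EnumProc and WindowsForProcess are made-up names, and picking the actual main window out of the candidates is still heuristic, as noted above:

    #include <windows.h>
    #include <vector>

    struct FindWindowsData {
        DWORD pid;                    // process we are looking for
        std::vector<HWND> windows;    // candidate top-level windows
    };

    static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lParam) {
        FindWindowsData* data = reinterpret_cast<FindWindowsData*>(lParam);
        DWORD windowPid = 0;
        GetWindowThreadProcessId(hwnd, &windowPid);
        // keep visible, unowned windows that belong to the target process
        if (windowPid == data->pid && IsWindowVisible(hwnd) && GetWindow(hwnd, GW_OWNER) == NULL)
            data->windows.push_back(hwnd);
        return TRUE;                  // continue enumeration
    }

    std::vector<HWND> WindowsForProcess(DWORD pid) {
        FindWindowsData data;
        data.pid = pid;
        EnumWindows(EnumProc, reinterpret_cast<LPARAM>(&data));
        return data.windows;
    }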
  7. C++ Get HWND of another application

    First result.
  8. 3rd person camera on "rails"

    Based on your description I'm assuming you're not having trouble with setting up the view matrix so the camera looks at the player. It's not really clear, though, whether your problem is how to place the camera in the scene so it isn't obscured by objects, or whether you're simply concerned with the transition from one angle to another. The former is a fairly complex problem and likely requires either manual camera placement or allowing the camera to see through geometry, as for instance in Divinity: Original Sin. The latter kinda depends on the context. Alone In The Dark simply jumps from one angle to another - the effect is jarring and highly effective at briefly disorienting the player. If your locations are more tightly knit together, you might consider a fast transition and obscure it with something like motion blur or lens distortion (you're likely already interpolating the camera's position and orientation - e.g. with a quaternion - anyway). Or, if you want to move really slowly and cinematically, you'll need to set up your camera movement manually - e.g. have it follow a spline and stop at specific locations based on where the player is (a sketch of the spline part follows below).
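    A minimal sketch of the spline-following idea, assuming hand-placed control points and a bare-bones Vec3; all names are placeholders and the easing/orientation handling is left out:

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    static Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

    // Uniform Catmull-Rom point between p1 and p2; p0 and p3 are the neighbouring
    // control points on the rail, t runs from 0 to 1 over the segment.
    Vec3 CatmullRom(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, float t) {
        float t2 = t * t, t3 = t2 * t;
        Vec3 r = scale(p1, 2.0f);
        r = add(r, scale(add(scale(p0, -1.0f), p2), t));
        r = add(r, scale(add(add(scale(p0, 2.0f), scale(p1, -5.0f)),
                             add(scale(p2, 4.0f), scale(p3, -1.0f))), t2));
        r = add(r, scale(add(add(scale(p0, -1.0f), scale(p1, 3.0f)),
                             add(scale(p2, -3.0f), p3)), t3));
        return scale(r, 0.5f);
    }

    // Sample the rail at a normalized player-progress value in [0, 1]; the look-at
    // target stays on the player, only the camera position follows the rail.
    // Assumes at least 4 control points.
    Vec3 CameraOnRail(const std::vector<Vec3>& points, float progress) {
        size_t segments = points.size() - 3;
        float f = progress * segments;
        size_t i = static_cast<size_t>(f);
        if (i >= segments) i = segments - 1;    // clamp at the last segment
        return CatmullRom(points[i], points[i + 1], points[i + 2], points[i + 3], f - i);
    }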
  9. This is actually the problem - yes, the encoding is identical, but all blocks are stored as 4x4 pixels, which are encoded using the original field order. In order to conform a regular DDS texture to GL, the order of individual scanlines (or UVs, as was pointed out) needs to be flipped vertically. Which is to say, after a texture is encoded, the flip also needs to be performed within each block, or you'll end up with a texture in which each four-scanline horizontal slice is flipped vertically. Simply flipping the rows of blocks would effectively flip the order of the blocks, but not the individual scanlines. To fix that, pixels in a block can be swizzled during loading. For example, for the explicit alpha rows of a DXT3 block it looks something like this:

     // (C) 2009 Cruden BV
     static void FlipDXT3BlockFull(unsigned char* block)
     {
         // swap the four 16-bit explicit alpha rows: row 0 <-> row 3, row 1 <-> row 2
         unsigned char tmp = block[0];
         block[0] = block[6];
         block[6] = tmp;
         tmp = block[1];
         block[1] = block[7];
         block[7] = tmp;
         tmp = block[2];
         block[2] = block[4];
         block[4] = tmp;
         tmp = block[3];
         block[3] = block[5];
         block[5] = tmp;
     }

     The problem is that while flipping encoded blocks is fairly easy for BC versions 1-5 (a BC1 example follows below), the process is not as straightforward for BC 6/7 (and likely also ASTC), which AFAIK necessitates flipping the source texture and then re-encoding it. Encoding a large BC7 texture can take on the order of minutes, so as far as I can tell the only realistic solution is to perform this during cooking. This isn't something I'm just throwing out there, but rather something I'm currently dealing with in my own code.

     As far as flipping the V coordinate goes, I'm still not sure how that would work in all tiled cases (see below). Suppose you have a splatted surface or some sort of terrain and your V coordinate runs from 0.2 to 18.6, or some other similarly arbitrary tiling. The only way to flip that would be to know the UV bounds, which in itself can be cumbersome if not outright difficult in a shader. Now, what if the texture coordinates are animated?
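     For comparison, a minimal sketch of the same per-block flip for BC1 (DXT1); FlipBC1Block is a made-up name derived from the block layout rather than taken from any particular library:

     // A BC1 block is 8 bytes: two 16-bit colour endpoints followed by four bytes
     // of 2-bit indices, one byte per row of texels. A vertical flip inside the
     // block just reverses the order of those four row bytes.
     static void FlipBC1Block(unsigned char* block)
     {
         unsigned char tmp = block[4];
         block[4] = block[7];
         block[7] = tmp;
         tmp = block[5];
         block[5] = block[6];
         block[6] = tmp;
     }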
  10. This may work in trivial cases, but not for geometry that uses texture tiling.
  11. I meant the vertical UV coordinate difference between D3D and OpenGL. Unless I'm uninformed and D3D allows setting (0, 0) to the bottom left corner as in GL, you need to flip your textures vertically. For block-compressed data this means not flipping scanlines across the entire texture, but rather within each block. I'm unaware of a way to accomplish this for BC 6/7 post-compression - it might be possible, but it seems easier to just re-compress, which is too expensive. This isn't an issue when targeting a single API, but unless I'm missing something, it seems like a problem when trying to support both.

      Hm - I'll give it a shot. The decompression can reasonably be performed once at first run, so that doesn't seem like too much of an issue.

      This makes sense. So, ship at max resolution, but during loading simply feed the GPU data from a different mip offset (a quick sketch of the offset math follows below). I have to admit I was overthinking it.
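      A minimal sketch of that offset math, assuming tightly packed mips and standard 4x4 BC blocks; BcMipSize and BcMipOffset are made-up names:

      #include <algorithm>
      #include <cstddef>

      // Size in bytes of one mip level of a block-compressed texture.
      // blockBytes is 8 for BC1/BC4 and 16 for BC2/BC3/BC5/BC6H/BC7.
      size_t BcMipSize(size_t width, size_t height, size_t blockBytes) {
          size_t blocksX = std::max<size_t>(1, (width + 3) / 4);
          size_t blocksY = std::max<size_t>(1, (height + 3) / 4);
          return blocksX * blocksY * blockBytes;
      }

      // Byte offset of mip `firstMip` inside a tightly packed chain starting at mip 0,
      // i.e. where to start reading when skipping the largest mips on lower settings.
      size_t BcMipOffset(size_t width, size_t height, size_t blockBytes, unsigned firstMip) {
          size_t offset = 0;
          for (unsigned i = 0; i < firstMip; ++i) {
              offset += BcMipSize(width, height, blockBytes);
              width  = std::max<size_t>(1, width / 2);
              height = std::max<size_t>(1, height / 2);
          }
          return offset;
      }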
  12. Things are pretty straightforward if you only target a single tier (which, let's be honest, is my case), but I've still been pondering how to go about basic scalability. Assumption: all textures are encoded with BC (DDS) 1/(3)/5/6/7, or in the future ASTC once that reaches mainstream. The target is the PC market.

      Building BC-compressed textures is trivial for 1/3/5, but becomes a strictly offline process for versions 6 and 7. Moreover, while cooking textures for a single API (D3D or OpenGL/Vulkan in this case) is a fixed process, switching between the two requires swizzling blocks in the encoded texture (a sketch of that follows below). Again, this is fairly trivial for 1/3/5, but I'm not really aware of any publicly available implementation of how to do it for 6 and 7. In practice this means that the texture needs to be read and re-encoded for whichever API it wasn't cooked for. I'm assuming (sic!) this is also true for ASTC. The same problem applies to resolution - scaling BC 1/3/5 textures down on the user's machine probably entails a fairly short preprocessing step during first run or installation, but re-encoding a couple of hundred or more BC 6/7 textures will probably end with the game getting uninstalled before it is even run.

      So here are the options I can think of:
      - target only one API and don't care about supporting both
      - target both APIs and ship all high quality textures for either platform (or, you know, figure out how to swizzle BC 6/7 blocks); reorder blocks for BC 1/3/5 for your non-preferred API
      - ship textures in 2-3 sizes (e.g. 4K, 2K, 1K), turning a blind eye to space requirements
      - don't use texture compression for high quality textures

      Any thoughts on a semi-automatic "best case" implementation?
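      A minimal sketch of that block swizzle for the easy cases (BC 1/3/5), assuming a tightly packed mip level and a per-block scanline flip like the one a few posts up; FlipBlockCompressedImage and its parameters are made-up names:

      #include <algorithm>
      #include <cstddef>

      // Flip one mip level of a block-compressed image vertically: reverse the rows
      // of 4x4 blocks, then run a per-block scanline flip over every block so their
      // contents match the new row order.
      void FlipBlockCompressedImage(unsigned char* data,
                                    size_t blocksWide, size_t blocksHigh,
                                    size_t blockBytes,                 // 8 for BC1/BC4, 16 for BC2/BC3/BC5
                                    void (*flipBlock)(unsigned char*)) // e.g. the BC1 flip sketched earlier
      {
          const size_t rowBytes = blocksWide * blockBytes;
          for (size_t y = 0; y < blocksHigh / 2; ++y) {
              unsigned char* top    = data + y * rowBytes;
              unsigned char* bottom = data + (blocksHigh - 1 - y) * rowBytes;
              std::swap_ranges(top, top + rowBytes, bottom);           // swap whole block rows
          }
          for (size_t i = 0; i < blocksWide * blocksHigh; ++i)
              flipBlock(data + i * blockBytes);                        // flip scanlines inside each block
      }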
  13. What could be causing this crash (?)

    Hehe - thanks, but it's not that. My laptop lacks the extra proprietary-looking SATA connector and I'm holding out for a substantial upgrade when the Ryzen versions of the new LG and/or Samsung models hopefully come out next year :). PS - the DLL is cached into memory, at which point it is initialized. By re-linking, the previous file gets overwritten, causing it to be reinitialized when the program is run again after a build. It's just strange to see it stall the main application like that.
  14. What could be causing this crash (?)

    Welp. I just had a walk outside and realized it's probably just ispc_texcomp.dll caching stuff when it gets initialized. The cache gets invalidated when the module is overwritten. Yes. Hard drive space.
  15. I'm adapting the Intel ISPC Texture Compressor into my workflow and I'm experiencing a strange issue. Basically, what is happening is that the first time after building the solution, the executable crashes. Except that it doesn't. The following does NOT seem to happen on consecutive runs, but invariably occurs every time I relink:

      1) the binary seems to take a fair bit longer to start up (10 or even 15+ seconds). This happens for every instance of the binary (e.g. the delay occurs again when I copy the exe to a new location). Note that no external files are being referenced other than ispc_texcomp.dll.
      2) it then throws the "This program has stopped responding" error for about another 10 seconds. I can close this without stopping execution, and it's barely enough time to start up a debug session, which seems to get me nowhere. The notification then goes away and the program executes normally, as if it had been stuck in a tight loop or was run a second time.
      3) steps 1 and 2 seem to happen before WinMain is called. Which is to say, I can't even step into anything in the debugger before the end of the stall.

      Now, the number one thing here is that the code is DX-heavy, which is quite foreign to me. If I had to guess, I'd surmise the problem has to do with some kind of (shader) caching or whatnot by the DX API itself. Except that this seems to (so far) only happen prior to the first run and before any actual program code is called. The main reason I'm making a fuss about this is that I'm unsure about DX versions and might well be linking against bad modules. Also, the problem could potentially be exacerbated once I start using the tool for batch conversions. I'm using the June 2010 SDK libraries in VS2013, which I fed manually into the project. Can someone maybe download it and see if they get similar behavior, or suggest what might be causing this?