
Ravyne

Member Since 26 Feb 2007

#5306232 Tax Deductions

Posted by Ravyne on 16 August 2016 - 03:32 PM

You really need to consult your own tax professional now, and should have done so before paying anyone out of your own pocket. I would imagine that -- depending on the formal structure of your company (e.g. DBA, sole proprietorship, partnership, LLC, or Corp) or lack thereof -- how you put money into the "company" from your own pocket can have wildly different tax implications, just as how you take money *out* of your company does, both for you as an individual and for your company.

 

In particular, I'm 95% certain that putting your personal wages into a "company" wouldn't adjust your taxable income unless, potentially, it was somehow structured as an investment. I certainly would not expect your employer to adjust for it.




#5306068 Is it pathetic to never get actually helpful answers here?

Posted by Ravyne on 15 August 2016 - 08:46 PM

I would say pretty universally that good-quality questions attract good-quality answers -- often excellent answers, given the level of expertise around here. If you find yourself gathering low-quality answers, follow-up questions, or even aggression, the best way to remedy that situation is to ask better questions. Also, I've seen a handful of people over the years who were upset not because their questions weren't being answered, but because they weren't getting the answer they wanted or expected -- if this is you, stop it.

 

Part of the art of answering a question here is reading between the lines and getting to the heart of the issue, and sometimes the problem that needs to be addressed is not the one you've asked about. No one wants to give you the right answer to the wrong problem, but without context it's easy to come to the wrong conclusion about what your problem really is. Again, providing more context in your question is key to getting useful answers.

 

Likewise, there's an art to asking questions as well. Tell us the what, where, why, and how of the problem in your question. What are you trying to do? What's wrong with the results you see (and what did you expect to see)? What are the surrounding conditions/assumptions that restrict your solution? Where are you doing this--platforms, libraries, programming languages? Why is this problem important? How have you tried to solve it already?

 

Keep your question focused -- it's easy to drift into giving too many details or irrelevant ones.




#5305666 Preparing for a 'XBox one' code stream

Posted by Ravyne on 13 August 2016 - 02:43 PM

I would say that your perception is off with regard to market share if you're talking about new-style apps using the Windows 8.1 SDK -- Windows 8.0/8.1 combined account for only 11% of OS market share according to the Steam hardware survey. Windows 10 currently accounts for 46% (with only 3% of those 32-bit) and Windows 7 accounts for 36% (with 20% being 32-bit). In other words, you don't gain much addressable market by setting Windows 8 as your minimum bar -- it seems to me that if you're going to go through the trouble, you'd want to go back to Windows 7, not 8 -- and by not fully embracing Windows 10 UWP over Windows 8, it costs you the ability to reach Xbox (assuming you were accepted into ID@Xbox), which is one hardware configuration with a 20+ million install base.

 

That said, those are the July survey numbers, and the Windows free upgrade offer expired at the end of July, which probably prompted a final push for people to upgrade (anecdotally reinforced by the panic on my Facebook feed). I'm really interested to see August's survey; I expect a larger than usual decline for both Windows 8 and Windows 7, and I would expect Windows 10 to reach perhaps as high as 60%.

 

After that, Windows 7 will remain a relevant, if ever-shrinking, market for probably another year or two. Windows 8 will be even less relevant than it already is.




#5305573 Preparing for a 'XBox one' code stream

Posted by Ravyne on 12 August 2016 - 08:11 PM

You can use the Windows 10 SDK to make UWP apps, which work across PC/Xbox, but as an "app", not a native game. That's probably closer to XNA / XBLIG on the Xbox 360.

 

The Xbox uses Direct3D 11.x and Direct3D 12.x, so you'd still have to do some small amount of porting.

 

If you mean to take the 'I don't have a AAA publisher' route, you have only two paths to get software running on other people's Xbox Ones: the open "UWP Apps on Xbox One" program, and the ID@Xbox program.

 

Ultimately, the direction things are going is that Xbox becomes, more or less, just another Windows form factor. Still, the two programs I mentioned are different, and have different intentions behind them.

 

Currently, the UWP on Xbox program that's completely open to anyone is for apps, not games -- this is a policy decision, not a technical one. As a developer you can write a game using the tools made available, and you can deploy and test on your own Xbox. You cannot, however, put that same application on the Store where other people can buy it. This could change, but it's not how it is today. Furthermore, the apps you create using these tools don't have full access to the hardware (all 'apps' on Xbox work this way, e.g. Netflix): you're limited to 1GB of RAM (and you have to be able to enter suspension quickly, using only 128MB), your app shares 2-4 CPU cores with the OS and other apps, you have the vanilla DirectX 11 API but only at feature level 10_0 (e.g. no async), and you get only 45% of the GPU when your app is in the foreground.
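
 

To make that suspension limit concrete, here's a minimal sketch of UWP suspension handling in C# (the Suspending event and deferral are standard UWP app-model APIs; ReleaseRenderTargets is a hypothetical helper standing in for whatever your app must do to fit the suspended memory budget quickly):

    using Windows.ApplicationModel;
    using Windows.UI.Xaml;

    sealed partial class App : Application
    {
        public App()
        {
            this.Suspending += OnSuspending;
        }

        private void OnSuspending(object sender, SuspendingEventArgs e)
        {
            // Take a deferral so cleanup can finish before the OS freezes the app;
            // this has to complete quickly, per the limits described above.
            var deferral = e.SuspendingOperation.GetDeferral();
            ReleaseRenderTargets();   // hypothetical: drop GPU/large allocations
            deferral.Complete();
        }

        private void ReleaseRenderTargets() { /* free big buffers, trim caches */ }
    }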

 

For games, the ID@Xbox program is what you want, but it's not open to everyone. If approved, you get the same level of access to the hardware and to Xbox Live that AAA games get.

 

 

As far as setting yourself up for an easy transition, you've gotten good advice. Basically: create a Windows 10 UWP game; stick to the new APIs as much as you're able (not the legacy/Win32 ones); assume until further notice that you'll have vanilla D3D11 (not D3D12 or 11.x); and assume that libraries that work under Windows 10 UWP will work under Xbox UWP (or that support will be added quickly once things open up).

 

Design-wise, the Xbox is not just a PC with a controller. Do design for first-class controller input (look at 7 Days to Die to see what happens when you don't), but that doesn't just mean gameplay input: UI best practices are completely different for gamepad vs. keyboard/mouse, and you design differently for a 40" screen at 6-10 feet than you do for a 24" screen at 18-30 inches.

 

Here are some links:

Games and DirectX (UWP), especially the Windows 10 game development guide and Game technologies for UWP apps.

ID@Xbox program page or check out what indies are making.

UWP on XBox One




#5304983 Ide For Linux

Posted by Ravyne on 09 August 2016 - 04:05 PM

For the full story: Evil Mode: Or, How I learned to stop worrying and love Emacs

 

The short version is that while Vim is a remarkable contribution to free software, and is awesome for many people, the quality of the code itself is not so great, it's unlikely that Vim proper will ever be able to support asynchronous processes (one of the motivating factors behind NeoVim), and there is effectively no continuity of ownership once its originator and sole maintainer, Bram, passes away. My reasons for moving away have less to do with anything Vim currently lacks (other than async processes); I just see it as inevitable that once Bram is no longer around, it's effectively the end of the line for evolution of the Vim codebase. There's a lot of spaghetti inside, and a lot of not-so-great code, and Bram is basically the only one who understands it properly. In an interview he was once asked "What can we do to keep Vim around?" and he quipped "Keep me alive."

 

As I said, I prefer Vim's approach to the keyboard and modality -- but I can get that through Evil Mode under Emacs, for the most part. Then you have the fact that the torch has already been passed to the community for Emacs, its abundance of modes, its support for glyphs/images (no more Vim font-hacks) -- the ecosystem is equally good, if not better, and there's no specter of reaching the end of the line.

 

Plus Org-Mode -- praise be to Org-Mode, it's truly awesome. Access to Org-Mode alone is reason enough to make the switch to Emacs, IMO.




#5304825 Using Other Cultural Traditions In A Fantasy Game?

Posted by Ravyne on 08 August 2016 - 11:40 PM

Sure -- and to be clear, I don't mean my line of questioning as an attack on your motives, but more as a prompt for examining and challenging your own motivations (for you, and for anyone who might come along asking this same vein of question). Likewise, I'm not in a position to promote or defend your intended use because I'm not a follower of the Hindu faith or culturally Indian -- I'm, in fact, a white American like you.

 

I think it's entirely possible to present a work of fantasy that borrows from history and traditions that are not your own without misappropriating them. And honestly, much of contemporary humanity is not much less disconnected from 'their own' traditions than from others' -- it's not like being a white Catholic from the States makes you an expert in Catholicism or early Vatican politics. I think the way to think about it is that the more directly you attempt to crib from any source, the greater the risk of getting it wrong, and potentially the greater the harm you do by getting it wrong. It's also more difficult to gauge that risk or potential for harm the further you're removed from the things you're borrowing. For those reasons, extra caution and care when dealing with such things are your due diligence, IMO.

 

I think as designers and storytellers we also need to keep sight of the fact that once we put something out into the world, we lose control of it. We created it, but it's not ours anymore. What we meant to put out into the world doesn't matter then, only how it was received. If some people were offended by it, then it was offensive -- maybe only to one single person, but no amount of our own good intentions erases or excuses how they experienced it. And still some people might think it's a beautiful thing -- maybe lots of people, maybe people very much like the one person it offended -- all of that can happen simultaneously, and it doesn't make one perception any better or more correct than another. This is true of any axis of criticism -- we do our best to put our visions out there, and in the end each of us is ultimately responsible for how well we can continue to sleep at night.




#5304801 Headless Server Side Software Renderer

Posted by Ravyne on 08 August 2016 - 08:59 PM

Sean makes a very salient point -- it's not free to do this server-side. The CPU costs are probably pretty minimal, and can be amortized across users, but bandwidth costs money, and even small updates (assuming you chunk your map and program your clients to deal with it) add up. It's not always worthwhile to try to please everybody -- you're going to undertake a lot of work to please a few people who will make up your largest operational costs per active user. Are the margins wide enough to support them? Is the opportunity cost of implementing and maintaining that support, now and in the future, more important than all the other work you could be doing?

 

From a business perspective you're often better off trying to capture more people who are like your core users (e.g. those with WebGL) than capturing entirely new segments of users (especially already-marginal and ever-shrinking segments like those without WebGL). It's just a form of economies of scale: you may want to expand your definition of who a core user is after a time, but only when you believe it'll be easier/cheaper to capture new people from the expanded definition than to continue capturing from the original.




#5304776 Adequate Windows Operating System Testing

Posted by Ravyne on 08 August 2016 - 05:57 PM

For games, XP has some relevance left if you're targeting internet cafes in China and nearby regions (and maybe some others), but even that is giving way to Windows 7 or newer -- XP's share is already small, and only going to get smaller. Keep in mind that supporting XP means supporting Direct3D 9 and other legacy APIs, which is an entirely different level of expense than supporting the modern APIs you're already using on down-level operating systems that can run them.

 

Also ask yourself if a PC that meets the hardware requirements of your game is likely to be running XP to begin with -- the number of people running XP on hardware suitable for Windows 7 is pretty small.

 

I'd say that Windows 10 is now a pretty safe bet -- since Windows 10 was a strict improvement over Windows 8/8.1, there was no reason not to take the free upgrade that Microsoft offered. Add to that the fact that the app-store model for Windows 8/8.1 is now effectively deprecated, and I'd personally be entirely comfortable assuming Windows 10. If I felt compelled to support down-level operating systems I'd support Windows 7 in addition to 10; there's no compelling technical or market-share argument for supporting Windows 8 or 8.1.




#5304772 Headless Server Side Software Renderer

Posted by Ravyne on 08 August 2016 - 05:37 PM

Look at SwiftShader on GitHub. It's the software renderer used by Chrome when hardware acceleration is not available, and it implements very modern software rendering techniques. It's got an OpenGL ES interface already, which should be quite similar to any WebGL rendering you already have. I don't know what dependencies it takes on, but it should do the trick.

 

Before being purchased by Google, it was owned by TransGaming, which sold it as a solution for games that wanted a fast software fallback when users lacked sufficient 3D acceleration, and for helping Windows games port to Linux or run better under Wine (part of the SwiftShader distribution was a shared library that implemented the Direct3D 9 interface exactly, entirely in software).




#5304769 Rendering 3D Over 2D

Posted by Ravyne on 08 August 2016 - 05:22 PM

I don't know about MonoGame, but XNA (from which it descends) had an overloaded GraphicsDevice.Clear method that took flags and other parameters corresponding to the depth buffer.

 

Anticipating another potential issue -- whether you want to clear between 2D and 3D rendering at all depends on whether each scene's depth is described in terms of independent Z-spaces. For example, if your 2D elements are entirely behind the 3D elements, then your current approach works (but it's not optimal*) -- this would be the case if you are using the depth buffer for Z-ordering during the 2D phase and for normal depth buffering during the 3D phase. Alternatively, if you mean for your 2D and 3D elements to be intermixed (some 2D behind 3D, some 2D in front of 3D), then you want to *not* clear the depth buffer between the two phases (only before each frame is rendered) and pre-bake scene-appropriate depth values into your 2D assets.
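
 

As a rough sketch of that first case in XNA-style C# (the ClearOptions overload is XNA 4.0's, which MonoGame mirrors -- check your version; DrawBackground2D and Draw3DScene are hypothetical helpers):

    // 2D pass: sprites use the depth buffer to Z-order among themselves.
    spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend,
                      null, DepthStencilState.Default, null);
    DrawBackground2D();
    spriteBatch.End();

    // Reset only the depth buffer; the color buffer (with the 2D art) survives.
    GraphicsDevice.Clear(ClearOptions.DepthBuffer, Color.Transparent, 1.0f, 0);

    // 3D pass: normal depth buffering in its own, fresh Z-space.
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    Draw3DScene();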

 

*On optimization -- if your 3D assets are entirely in front of your 2D elements, then it's often better to render your 3D things first while writing the depth buffer, and then draw your 2D things afterwards with depth values that you know are larger than any of your 3D depth values. This skips drawing 2D pixels that would ultimately be covered by 3D objects anyway. If those 2D pixels are expensive, or if much of the screen gets covered by 3D objects, this can save your GPU a lot of work.

 

If you undertake this optimization, be aware that transparency has to be considered. You would draw fully-opaque 3D stuff first from nearest objects to furthest objects, then 2D stuff (still with larger depth than anything in the 3D scene) in your usual order, then return to draw 3D stuff that's partially transparent from furthest objects to nearest objects.
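
 

Put together, a hedged sketch of that draw order (the helper names are hypothetical, and mapping sprite depth behind the 3D scene depends on your projection setup):

    // 1) Opaque 3D, sorted nearest-to-furthest, writing the depth buffer.
    DrawOpaque3D(frontToBack: true);

    // 2) 2D sprites at depths known to lie behind everything from step 1,
    //    so covered 2D pixels fail the depth test and are skipped.
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                      null, DepthStencilState.Default, null);
    Draw2DAtFarDepth();
    spriteBatch.End();

    // 3) Transparent 3D, sorted furthest-to-nearest, so blending composes.
    DrawTransparent3D(backToFront: true);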




#5304765 Math Book For Game Programming

Posted by Ravyne on 08 August 2016 - 05:01 PM

I happen to like Mathematics for 3D Game Programming and Computer Graphics myself. It's probably the most approachable book I've come across on the topic, though it doesn't go to the depth that more serious academic texts do, nor does it touch much on 2D at all.

 

The lack of 2D isn't as bad as it sounds -- much of what's called "2D graphics" in an academic sense is stuff like line drawing and other do-it-yourself rasterization -- and what's left of classical 2D techniques usually has a 3D extension (e.g. transformations), is supplanted by 3D techniques (e.g. painter's algorithm vs. depth buffer), or is basic algebra. For 2D games, rendering happens almost exclusively through 3D APIs today, so you're probably better served by understanding the 3D stuff that makes your 2D game go. That's not to say that the classic 2D stuff never comes up -- it sometimes does -- but it's not the most practical place to start.
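
 

For a taste of the 3D stuff that makes a 2D game go, here's a small sketch using XNA/MonoGame types (scale, angle, and position are assumed sprite parameters) -- a 2D rotation is just a 3D rotation about the Z axis:

    // Build a sprite's world transform out of ordinary 3D matrices.
    Matrix world = Matrix.CreateScale(scale)
                 * Matrix.CreateRotationZ(angle)
                 * Matrix.CreateTranslation(position.X, position.Y, 0f);

    // XNA 4.0's full Begin overload accepts the matrix directly.
    spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, world);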




#5304759 Rendering 3D Over 2D

Posted by Ravyne on 08 August 2016 - 04:39 PM

Does the 2D-rendering work when the 3D-rendering is commented out?

 

As it is, with 2D and 3D enabled, what do you see? Do you see the 2D scene only, or the 3D scene only, or something else?




#5304736 Ide For Linux

Posted by Ravyne on 08 August 2016 - 02:46 PM

Personally, I've been developing on Linux for over 20 years and never felt the need to be limited by an IDE, but to each their own I guess.

 

 

I love this: "limited" by an IDE. I'd be interested to hear what you consider the limiting factors of an IDE?

 

They can be heavy:

  • Features you aren't using consume system resources, potentially making your system less responsive or making it unreasonable to run on older hardware.
  • Sometimes 'background' IDE processes can block/stall the editor. You can't always configure these away.
  • Until recently, a "typical" Visual Studio installation was ~20GB (they've done a lot of installer work recently to get certain "workloads" down to under a couple GB). A typical Emacs/Vim-based development environment is maybe a couple hundred MB with compilers and libraries; a minimal one can probably fit in under 100MB.
  • For remote work, a console-based development environment over SSH is pretty tractable, even over a slower internet connection -- IDE? nope.

They can be too integrated:

  • Some IDEs don't expose interfaces to modify or replace all IDE features, or it's difficult to modify such features for lack of documentation or compounding design flaws. The design mindset of IDEs tends to be 'monolithic-but-extensible', while an Emacs/Vim-style development environment is necessarily a composition of inter-operable parts; the former is more 'benevolent dictator' while the latter is more 'grassroots democratic' -- it's exactly what you make it to be.
  • The ways they're built to work might not fit your preferred workflow.
  • Generally, the entirety of the source code is free and of a manageable volume for Emacs/Vim-based workflows, including all the plugins you're using. This is true of many (but not all) IDEs, and there is an order of magnitude more code to them when it is.

They can be slower than you:

  • IDE GUIs as input devices are fundamentally limited in ways that keyboard-centric interfaces are not -- they're better for some things, certainly for discoverability, but not for speed of interfacing. In general, a keyboard-based Emacs/Vim+console environment has a higher ceiling and a steeper learning curve (though I don't think the IDE learning curve is any better in the end -- GUIs just allow you to stab around in the dark until you find what you're looking for).



#5304280 Git Usage: Multiple Versions And Platform Builds

Posted by Ravyne on 05 August 2016 - 06:33 PM

When you say multiple versions, you mean that you want to checkpoint a particular release so that you can get back to it, correct? For example, you want to be able to get back to version 1.0.0 to fix a critical bug and release it as 1.0.1 -- but in the meantime you want to be free to continue working on new features to be released in version 1.1.0?

 

What you probably want for that are called tags -- a tag is just a point in your project history that you mark with a friendly name like "version 1.0.0". You can continue working towards 1.1.0 in 'master' (or on your development branch, depending on the workflow you've adopted), but you can always go back to "version 1.0.0" and branch off bug fixes from there. If the bug fix needs to be applied to both version 1.0.0 (because you have customers using the software already that need the fix) and version 1.1.0, then you have some different options depending on what else has gone on in your 1.1.0 code, but that's beyond the scope of what you're asking (just know you can make fixes in one version and selectively pull them into another). Anyway, if the fix only applies to version 1.0.0, then you just maintain a separate development branch for that version, parallel to the version you're actively developing -- IMO, you want to tag proper releases 'version 1.0.0' and keep another tag that tracks the most-current release of version 1.0, something like 'version 1.0.current'.
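
 

A sketch of that flow with stock git commands (the version numbers and branch name are illustrative):

    git tag -a v1.0.0 -m "Release 1.0.0"   # checkpoint the release
    # ...continue feature work on master toward 1.1.0...

    git checkout -b fix-1.0.1 v1.0.0       # branch off the old release
    # ...commit the bug fix...
    git tag -a v1.0.1 -m "Release 1.0.1"

    git checkout master
    git cherry-pick <fix-commit>           # optionally pull the fix into 1.1.0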

 

If you're going to start seriously tracking multiple versions, do yourself a favor and adopt something like Semantic Versioning -- if not semantic versioning itself, then take a look at the thought behind it and come up with a versioning scheme that will be similarly consistent and works with your circumstances. (It looks to me like semantic versioning is defined relative to APIs/web services, but the reasoning behind it transfers mostly laterally to applications themselves -- after all, what is a closed application if not an internal-facing API?)
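
 

To illustrate the scheme semver.org describes: starting from 1.4.2, a backwards-compatible bug fix bumps the patch number (1.4.3), a backwards-compatible feature bumps the minor number (1.5.0), and a breaking change bumps the major number (2.0.0).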

 

Regarding multiple platform targets, you definitely don't want a separate repository for each platform -- that would be a nightmare to maintain. It might sound difficult to "provide file_windows and file_linux, but that would end up in ugly checking for what kind of operating system I'm currently using while compiling/building" -- but that's exactly what build systems like Make are for. A tool like Make basically lets you define different targets (which might include different sets of files and directories, or define different pre-processor symbols to direct conditional compilation) and then builds the target you ask for. Configuring Make is done with a kind of mini programming language all its own -- it's something more to learn, but it's a good skill to have.

 

What I usually do is have separate folders for each platform's specific code -- 'win32', 'linux', 'macos' -- and then I define a Make variable that gets set to the correct platform based on the target. So instead of telling Make to build "win32/windowing.cpp" directly, my Makefile defines the release target to effectively say "build $(PLATFORM)/windowing.cpp". The release target doesn't get built directly; it only gets run by (or after) a platform-specific target that sets the $(PLATFORM) variable correctly. Depending on how your code is structured your approach may need to be more elaborate, but the basic idea is the same -- it more or less works as-is if you have a clearly defined abstraction layer that all platforms share. A minimal sketch follows.
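
 

For illustration, a minimal GNU Make sketch of that pattern (target and file names are hypothetical, and a real Makefile would also handle objects, flags, and dependencies):

    # Platform targets set PLATFORM, then build the shared 'release' target.
    # In GNU Make, target-specific variables propagate to prerequisites.
    linux: PLATFORM := linux
    win32: PLATFORM := win32
    macos: PLATFORM := macos
    linux win32 macos: release

    release:
    	$(CXX) main.cpp game.cpp $(PLATFORM)/windowing.cpp -o game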




#5304262 Ide For Linux

Posted by Ravyne on 05 August 2016 - 04:41 PM

Give Visual Studio Code a look, together with the first-party C++ extension -- it's a lightweight code environment that sits partway between an IDE and simpler programmers' editors.

 

Its IntelliSense isn't as good as Visual Studio's (but it's as good as CLion's), and the debugger is pretty slick, too. VS Code itself supports Windows, Linux, and Mac; the C++ extension supports GCC on all of those platforms, but Windows is (surprisingly) the least mature (then again, you have VS Community for free on Windows). Last I looked, CMake support was a work in progress, but should be coming along quickly.

 

Agreed with previous posters that Eclipse is quite heavy, though it's the closest in feel to Visual Studio.

 

I like the Emacs+CMake+GDB+terminal route myself (using Evil Mode (Vim emulation) because Vim's keyboard commands are better). Vim is fine, works, and is the better human interface (IMO), but Emacs is better software. NeoVim is also an improvement over Vim proper, if Emacs doesn't float your boat.





