1. Past hour
2. ## Rev-sharing platform: see Crowdsourcer.io

You also realize GameDev.net's administration put the advertisement up, right? I'm fairly certain they would check something out before promoting it.
3. ## GameSounds.xyz reference

Actually, as stated on the site, each library (.zip file) comes with a LICENSE file inside it stating the licensing requirements for that particular library. You'll have to read over these files for each library you want to use from the site, as the requirements can differ between libraries.
4. ## What technology for a game-server

If you are for some reason fond of PHP, but prefer static typing, there is always Hack. It's basically a Facebook-developed PHP variant with type declarations (plus some other goodies).
5. ## GameSounds.xyz reference

A licence is just the right to use the track, and the terms under which it is used. That website doesn't specify the licence clearly so it's not possible to give you a definitive answer on what you have to do in order to comply.
6. ## GameSounds.xyz reference

Hello, in our game we want to use sounds from https://gamesounds.xyz/. I mean that sounds in the game would be a mix of those from GameSounds and our own recordings. For example, the sound of an explosion would be a mix of a burning effect from GameSounds and a 'boom' made by our sound designer. I know that all sounds on this site are free for commercial use, but what about licences? Would it be enough to mention in the credits that some of the SFX were made with sounds from GameSounds.xyz? Is there another form of reference needed? Regards, Crane
7. ## [Programmer] Want team for game jams or small time games

Hi all, I'm a programmer skilled in C/C++, Java, C# and more; I've been programming for about 10 years now as a hobby. I am currently working with a team making games, but that's a more serious position. I'm looking to form a team for making games with no real deadlines, or to work with on game jams as they come up. I am lacking in artistry and music/sound creation, so I'm mostly looking for someone with those skills. The most recent game I helped publish was "Atlas Sentry" for iOS and Android. If you have any questions or want to talk further, contact me on Discord: One Punch Panda#8709
8. Today

10. ## Unity in Action 2nd edition released!

My bestselling and highly recommended Unity book has been fully revised! Unity in Action, Second Edition teaches you to write and deploy games with the Unity game development platform. You'll master the Unity toolset from the ground up, adding the skills you need to go from application coder to game developer. Foreword by Jesse Schell, author of The Art of Game Design. Don't take my word for it being good; look at the sky-high ratings on GoodReads. You can order the ebook directly from the publisher's site, or order the book on Amazon to get both the physical book and a coupon to download the ebook!
11. ## What technology for a game-server

Oh, sorry, it probably makes more sense if you have the full context. Users can rate downloadable levels, expressing how much they like/enjoy them, so it does not really matter what they rate as long as it is within the given minimum and maximum : ) If it is outside those boundaries, the request will be rejected, which makes sense since input should be validated as far as possible. I'm still unsure whether sending files (images, etc.) over HTTPS is easy and reliable? Ah! My issue with PHP is that it is not statically typed, so I might stick with another solution anyway : ) But thanks for this insight, one never knows when that will help me!
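The server-side boundary check described here is cheap to sketch. A minimal illustration in Java — the 1-5 range and all class/method names are hypothetical, not from the poster's project:

```java
public class RatingValidator {
    // Hypothetical bounds; substitute whatever min/max the level system uses.
    static final int MIN_RATING = 1;
    static final int MAX_RATING = 5;

    // Reject anything outside [MIN_RATING, MAX_RATING] before it is stored,
    // regardless of what the client claims.
    static boolean isValidRating(int rating) {
        return rating >= MIN_RATING && rating <= MAX_RATING;
    }

    public static void main(String[] args) {
        System.out.println(isValidRating(3));  // within range
        System.out.println(isValidRating(99)); // out of range, request rejected
    }
}
```

The point is simply that the range check lives on the server, so a client sending "the rating of this is 99" is rejected no matter what the UI allowed.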
12. ## Rev-sharing platform: see Crowdsourcer.io

Hey Xaviersythe, I'm the creator of the site. I'm sorry you think it sounds sketchy. Are there any particular concerns you've got, or a specific element of the platform that doesn't add up? Building the site has been a massive labour of love, and it's something I made so people have a better chance of making a game (or whatever) without having to spend money, quit their day job, etc. I appreciate this model is a little different to how people currently work (though it's not a mile off the open source methodology), but I genuinely believe the model can work and benefit developers. Cheers, Mike
13. ## Java Breakout - Brick removal

So I am working on a Java Swing Breakout game and am on the last task to complete, which is detecting collision with a brick and then deleting it from the array so it cannot be seen on the screen. I have created a for loop which is somewhat working; however, the ball bounces off the bat/paddle and goes straight through the first few rows of bricks, and then only starts to detect collisions around the 6th/7th row. Here is the loop I am working on:

```java
public void runAsSeparateThread()
{
    final float S = 3; // Units to move (Speed)
    try
    {
        GameObj ball;              // Ball in game
        GameObj bat;               // Bat
        ArrayList<GameObj> bricks; // Bricks
        synchronized ( Model.class )   // Make thread safe
        {
            ball   = getBall();
            bat    = getBat();
            bricks = getBricks();
        }
        while (runGame)
        {
            synchronized ( Model.class )   // Make thread safe
            {
                float x = ball.getX();     // Current x,y position
                float y = ball.getY();
                // Deal with possible edge of board hit
                if (x >= W - B - BALL_SIZE) ball.changeDirectionX();
                if (x <= 0 + B)             ball.changeDirectionX();
                if (y >= H - B - BALL_SIZE) // Bottom
                {
                    ball.changeDirectionY();
                    addToScore( HIT_BOTTOM );
                }
                if (y <= 0 + M) ball.changeDirectionY();
                // As only a hit on the bat/ball is detected it is
                // assumed to be on the top or bottom of the object.
                // A hit on the left or right of the object
                // has an interesting effect
                boolean hit = false;
                // *[3]******************************************************[3]*
                // * Fill in code to check if a visible brick has been hit      *
                // * The ball has no effect on an invisible brick               *
                // **************************************************************
                for ( int i = 0; i <= 60; i++ )
                {
                    GameObj brick1 = bricks.get(i);
                    if ( brick1.hitBy(ball) )
                    {
                        bricks.remove(i);
                        //hit = true;
                        ball.changeDirectionY();
                        //ball.changeDirectionX();
                        addToScore(50);
                    }
                }
                if (hit) ball.changeDirectionY();
                if ( ball.hitBy(bat) ) ball.changeDirectionY();
            }
            modelChanged();               // Model changed: refresh screen
            Thread.sleep( fast ? 2 : 20 );
            ball.moveX(S);
            ball.moveY(S);
        }
    }
    catch (Exception e)
    {
        Debug.error("Model.runAsSeparateThread - Error\n%s", e.getMessage());
    }
}
```

I need to be able to break each brick individually and have the ball rebound; the for loop over the bricks above is what I have so far.
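For reference, the usual culprit in a loop like this is removing elements while iterating forward: when `bricks.remove(i)` runs, the next brick slides down into slot `i` and gets skipped, and the hard-coded bound of 60 will eventually run past the shrunken list. A sketch of a safer sweep, iterating backwards and bounding by `bricks.size()` — it uses a minimal stand-in `Brick` class for illustration, where the real code would call `brick1.hitBy(ball)` on `GameObj` (or mark bricks invisible instead of removing them, as the assignment comment suggests):

```java
import java.util.ArrayList;

public class BrickSweep {
    // Minimal stand-in for the post's GameObj; only what the loop needs.
    static class Brick {
        boolean hit;
        Brick(boolean hit) { this.hit = hit; }
        boolean hitBy() { return hit; } // stands in for brick1.hitBy(ball)
    }

    // Iterate backwards so a removal never shifts an unvisited element,
    // and bound the loop by the list's actual size instead of 60.
    static int sweep(ArrayList<Brick> bricks) {
        int score = 0;
        for (int i = bricks.size() - 1; i >= 0; i--) {
            if (bricks.get(i).hitBy()) {
                bricks.remove(i);   // safe: only indices < i remain to visit
                score += 50;        // addToScore(50) in the original
            }
        }
        return score;
    }

    public static void main(String[] args) {
        ArrayList<Brick> bricks = new ArrayList<>();
        bricks.add(new Brick(true));
        bricks.add(new Brick(true));  // adjacent hits: a forward loop would skip one
        bricks.add(new Brick(false));
        System.out.println(sweep(bricks) + " " + bricks.size()); // 100 1
    }
}
```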
14. ## IPv6 Multicast not working

Yeah, 4 times a second seems high :-) Another option to reduce broadcast is for the client to broadcast a "solicit" that servers answer to, rather than the server blindly broadcasting. Clients can also look for other clients soliciting, if servers also answer using broadcast, and hold off on soliciting themselves if they've seen a request in the last (500 + random(0, 1000)) milliseconds. This can be effectively instant when the client joins, because it can send its first solicit right away, and usually, active servers will immediately answer. mDNS is pretty good. If you want a drop-in solution that works, use that. Only go to your own broadcast if you find that you need faster response, or for some other reason need custom functionality.
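The suppression rule above — hold off if another client's solicit was seen in the last (500 + random(0, 1000)) milliseconds — can be sketched like this. All names are illustrative, not from any networking library:

```java
import java.util.Random;

public class SolicitSuppression {
    private final Random rng = new Random();
    // Initialized far in the past so the first solicit goes out immediately.
    private long lastSeenSolicitMs = -1_000_000;

    // Record that we observed another client's solicit on the wire.
    void sawSolicit(long nowMs) { lastSeenSolicitMs = nowMs; }

    // Solicit only if no one else has solicited within the hold-off window;
    // the random jitter keeps clients from bursting in lockstep.
    boolean shouldSolicit(long nowMs) {
        long holdOff = 500 + rng.nextInt(1000);
        return nowMs - lastSeenSolicitMs > holdOff;
    }

    public static void main(String[] args) {
        SolicitSuppression s = new SolicitSuppression();
        System.out.println(s.shouldSolicit(System.currentTimeMillis()));
    }
}
```

A client would call `sawSolicit` whenever it receives a solicit broadcast from a peer, and check `shouldSolicit` before sending its own; a new client that has seen nothing solicits right away, matching the "effectively instant on join" behaviour described.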
15. ## What technology for a game-server

Using HTTPS everywhere is the right thing to do, so no argument from me there! PHP needs to run hosted inside a server like Apache or Nginx, so you'd set up HTTPS with Apache or Nginx. There exist scripts that will automatically renew Let's Encrypt certificates for those servers, and they're pretty easy to set up. I don't know what "rating" means for you, so I have no idea what your security model should be. Is it OK if someone says "the rating of this is 99"? As much as they can?
16. ## First time writer here.

Yes. I did that with Beyond Two Souls. Wanted to see all the endings.
17. ## Vulkan Writes from compute shader not visible

Missed this. I don't use plain memory barriers at all - maybe removing them helps. I guess you already use GLSL for initial tests, not HLSL? You could look at this for a messy template of using simple graphics, imgui and compute: https://github.com/JoeJGit/Vulkan-Async-Compute-Test I guess it throws tons of validation errors meanwhile and contains bugs, but maybe it helps you spot something you've forgotten.
18. ## Which of these 2 Colleges/Degrees

I do not think the name makes that much of a difference. If you think you will enjoy Northeastern more, then you absolutely MUST go that way. Especially if the distance, cost, and other factors in your decision grid back it up.
19. ## Rev-sharing platform: see Crowdsourcer.io

Sounds very sketchy. No thanks.
20. ## Vulkan Writes from compute shader not visible

Thanks, but as I said, I am issuing both memory barriers and buffer memory barriers. I also double-checked and compared that code with mine, and can't see any big differences in the Vulkan calls...
21. ## First time writer here.

Some parts the same, yes, but it really depends on how frequently they show up. You said it's a short game, so if you're seeing the exact same stuff 25% of the time, that's a lot. Small variants to show that something you did had an effect on the story (like Tony Li mentioned above) would go a long way in keeping the user engaged. If it's a challenge of coming up with different storylines, maybe consider intertwining some from different points of view. For example, in one playthrough, you could get in an argument and shoot someone - then grab a hostage and go on the run, deciding whether or not to explore a relationship with the hostage. In another, your best friend was shot and the girlfriend you were about to break up with was taken hostage. Same story, so there would be plenty of writing overlap - but shown from different points of view. That would cut down on the amount needed, and also make for some compelling replayability. If it's just a matter of writing a ton of content being a daunting task, I'd just say that the more you write, the better it will turn out, and you'll be very happy in the end with any extra effort put in.

24. ## Vulkan Writes from compute shader not visible

Hi, I am having problems with all of my compute shaders in Vulkan. They are not writing to resources, even though there are no problems in the debug layer, every descriptor seems correctly bound in the graphics debugger, and the shaders definitely take time to execute. I understand that this is probably a bug in my implementation, which is a bit complex, trying to emulate a DX11-style rendering API, but maybe I'm missing something trivial in my logic here? Currently I am doing these:

- Set descriptors, such as VK_DESCRIPTOR_TYPE_STORAGE_BUFFER for a read-write structured buffer (which is a non-formatted buffer)
- Bind descriptor table / validate correctness by debug layer
- Dispatch on the graphics/compute queue, the same one that is feeding graphics rendering commands
- Insert a memory barrier with both stage masks as VK_PIPELINE_STAGE_ALL_COMMANDS_BIT and srcAccessMask VK_ACCESS_SHADER_WRITE_BIT to dstAccessMask VK_ACCESS_SHADER_READ_BIT
- Also insert a buffer memory barrier just for the storage buffer I wanted to write

My application behaves as if the buffers are empty, and the Nsight debugger also shows empty buffers (seems like everything is initialized to 0). I also tried the most trivial shader, writing a value of 1 to the first element of a uint buffer. Am I missing something trivial here? What could be another way to debug this further?
25. ## volunteers for music and graphic design.

Graphics has been filled; only looking for music now.
26. ## Strategy Gniarf

Hey! I wanted a better visualisation of the Gniarfs moving, so I made this: https://www.youtube.com/watch?v=8keBQRer4Vs Gniarfs flying to the tiles they are taking - any feedback? Thanks for reading!
27. ## OpenGL Removing one "instance" in a sequence of instanced objects

The order of instanced objects shouldn't be all that important, so you can just do a swap-delete (swap the item to be deleted with the last item, and then render one less item next frame). Since you don't care about the removed item, that boils down to overwriting the removed item with the last item in the buffer, and reducing the instance count by one.
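In code, the swap-delete boils down to one overwrite and a decrement. A language-neutral sketch in Java over a plain array of per-instance data — in OpenGL you would apply the same overwrite to the instance buffer (e.g. via glBufferSubData) and pass the reduced count to glDrawArraysInstanced:

```java
public class SwapDelete {
    float[] instanceData; // one float per instance here, for simplicity
    int instanceCount;

    SwapDelete(float[] data) {
        instanceData = data;
        instanceCount = data.length;
    }

    // Remove instance i by overwriting it with the last live instance
    // and shrinking the count; order of the survivors is not preserved.
    void removeInstance(int i) {
        instanceCount--;
        instanceData[i] = instanceData[instanceCount];
    }

    public static void main(String[] args) {
        SwapDelete s = new SwapDelete(new float[] {10f, 20f, 30f, 40f});
        s.removeInstance(1);                 // 20 is gone; 40 now lives at index 1
        System.out.println(s.instanceCount); // 3
    }
}
```

Because only one element is touched and nothing shifts, the update is O(1) per removal, which is why the unordered-instances assumption matters.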

## Recent Blogs

1. Last month, I made a pretty simple dungeon generator algorithm. It's an organic brute-force algorithm, in the sense that the rooms and corridors aren't carved into a grid and that it stops when an area doesn't fit in the graph. Here's the algorithm:

1. Start from the center (0, 0) in 2D
2. Generate a room
3. Choose a side to extend to
4. Attach a corridor to that side
5. If it doesn't fit, stop the generation
6. Attach a room at the end of the corridor
7. If it doesn't fit, stop the generation
8. Repeat steps 3 to 7 until enough rooms are generated

It allowed us to test out our pathfinding algorithm (A* & string pulling). Here are some pictures of the output in 2D and 3D:
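The generation loop described above can be sketched roughly like this. This is an illustrative skeleton only — the fixed room/corridor sizes, the Rect class, and the fits test are stand-ins for the author's actual structures:

```java
import java.util.ArrayList;
import java.util.Random;

public class DungeonSketch {
    // Axis-aligned rectangle; stands in for both rooms and corridors.
    static class Rect {
        final int x, y, w, h;
        Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
        boolean overlaps(Rect o) {
            return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
        }
    }

    static boolean fits(ArrayList<Rect> placed, Rect r) {
        for (Rect p : placed) if (p.overlaps(r)) return false;
        return true;
    }

    // Start with a room at the center, then keep attaching a corridor and a
    // room to a random side of the current room until something doesn't fit.
    static ArrayList<Rect> generate(int maxRooms, long seed) {
        final int ROOM = 6, CORR = 3;                // illustrative sizes
        Random rng = new Random(seed);
        ArrayList<Rect> placed = new ArrayList<>();
        Rect current = new Rect(0, 0, ROOM, ROOM);   // room at the center
        placed.add(current);
        int rooms = 1;
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (rooms < maxRooms) {
            int side = rng.nextInt(4);               // choose a side to extend to
            Rect corridor = new Rect(
                current.x + (dx[side] > 0 ? ROOM : dx[side] < 0 ? -CORR : 2),
                current.y + (dy[side] > 0 ? ROOM : dy[side] < 0 ? -CORR : 2),
                dx[side] != 0 ? CORR : 2,
                dy[side] != 0 ? CORR : 2);
            Rect next = new Rect(
                current.x + dx[side] * (ROOM + CORR),
                current.y + dy[side] * (ROOM + CORR),
                ROOM, ROOM);
            // if the corridor or the room doesn't fit, stop the generation
            if (!fits(placed, corridor) || !fits(placed, next)) break;
            placed.add(corridor);
            placed.add(next);
            current = next;
            rooms++;
        }
        return placed;
    }

    public static void main(String[] args) {
        System.out.println(generate(10, 42L).size() + " rects placed");
    }
}
```

The "organic" quality comes from the early `break`: a run ends as soon as the random walk collides with what was already placed, so different seeds give dungeons of different sizes.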
2. Just powering through and coding my heart out. Nothing to show yet, but things are progressing behind the scenes.
Working towards that Minimum Viable Product with a few days to go til showtime! Pretty much every feature from the last demo needs to be remade and bug free using the new development tools I have. The next demo has to be ready by Saturday morning, or I'm going to have to introduce countless strangers and fellow devs to my buggy pre-alpha from December!

Can't let that happen. It's crunch time. Check out the devlog. (nearly caught up to the most recent DevBlog! Been sharing one a day as to not be too spammy about it :p)
3. Welcome to #GlitchJam! What is #GlitchJam? It's a game jam hosted by long-time Corona developers Glitch Games. Glitch Games ran their first #GlitchJams early in their past; after some time off, and with modern game jam management tools, they are ready to resume this awesome Corona-based game jam. "At Glitch we love jams, be they game or strawberry, and we're incredibly excited to be able to run #GlitchJam for all Corona developers," said Graham Ransom of Glitch Games. #GlitchJam will be held every 6 months and will always follow the same format: 72 hours long, open to all Corona developers, a theme announced upon commencement, judging by both Graham and Simon Pearce of Glitch Games plus a pair of guest judges that change every jam, and prizes for the top three submissions. Each jam will have some great prizes from our sponsors, including Corona Labs. The next #GlitchJam is scheduled for May 4, 2018. To learn more, please visit the jam page, and if you have any questions you can reach out to Graham and Simon on the forums or on Twitter.
5. I don't have a lot of time this week, regrettably.  As mentioned previously, this is when my real day job reaches its crescendo, so I'm going to throw together something brief for this week and maybe post something more detailed later.

Thanks for bearing with me!

Game-Guru News:
Work continues, with people trying to find errors and problems in the latest public preview.  Nothing really stands out - just testing, testing and more testing!

What's New in The Store:
This stuff:

I really like TMG's diner pack - I'm interested in some nice PBR screenshots.  Gtox is continuing to make animals and Ken Charles Long is making new and improved buildings.  His previous work was good but since he's upgraded his tools it's really come a long way!

Third Party Tools:
Nada.  Sorry can't dig too deep.

Free Stuff:
Sorry guys, this one is going to have to wait.  It usually takes an inordinate amount of time to do and I can't afford that right now.

Random Acts of Creativity: A few minor updates on the forums here:
https://forum.game-guru.com/thread/219549 - an update to Tarkus1971's urban exploration game
https://forum.game-guru.com/thread/219588 - an interesting, fairly well done horror game

But of particular note, I missed the update screenshots to Wolf's shooter Acythian which look PHENOMENAL!

That is some top shelf stuff, visually.  Nicely done Wolf!
In My Own Works:
Work continues on my book though it's taking a while.  I'm hoping to get a few thousand words pounded out this week as I am going to have a lot of hours doing nothing otherwise.

7. Hi everybody, My name is Olivier Girardot, and I am a music composer and a sound designer. Here is my new sound effect video, where I explain what the Shepard Tone is, which is a very cool effect, often used in films.   And don't forget to visit my Sound Effect shop: http://www.ogsoundfx.com (Get 120 MB of free professional sound effects by subscribing to the newsletter)
8. Hello All,

There have been quite a few updates to the project, so I figured it's best to compress them all here to let you all know how we have progressed.

Fox is here: Fox is a guide, a friend, and an all-round great guy. He is a central character to the gameplay. We've started with a basic dialogue tree, but we will be expanding this out as the game develops. The range of topics can vary!

Player Levelling: We've tweaked and improved the player levelling. In case you missed it, meditating/playing daily will activate a streak system, which will level up your character. Why are streaks so important? Because consistency is crucial to get the full benefits of meditation. This is why streaks are at the heart of our game.
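A daily-streak mechanic of the kind described can be sketched as follows. The reset rule, the one-level-per-7-days threshold, and all names are my own illustration, not taken from the project:

```java
public class StreakTracker {
    int streakDays = 0;
    long lastPlayedDay = Long.MIN_VALUE; // day index, e.g. epoch day

    // Call once per session with the current day index: consecutive days
    // extend the streak, a missed day resets it, same-day repeats are ignored.
    void recordSession(long day) {
        if (day == lastPlayedDay) return;          // already counted today
        streakDays = (day == lastPlayedDay + 1) ? streakDays + 1 : 1;
        lastPlayedDay = day;
    }

    // Hypothetical levelling rule: one extra level per full 7-day streak.
    int level() { return 1 + streakDays / 7; }

    public static void main(String[] args) {
        StreakTracker t = new StreakTracker();
        for (long d = 0; d < 14; d++) t.recordSession(d);
        System.out.println("streak=" + t.streakDays + " level=" + t.level());
    }
}
```

Keying on a day index rather than timestamps keeps "played today" unambiguous across session lengths, which is what makes a consistency mechanic like this easy to reason about.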

9. I've been considering creating a game from start to finish entirely solo, and posting all the progress on the blog here as I work my way through. I really enjoyed working on the GameDev.Net Challenge and thought I could really push myself to make my own title within a certain period of time. I will still be doing the GameDev.Net Challenges as they pop up, as I really enjoy the idea of timed challenges. Most of my programming experience has been at a lower level (engines, toolkits, world builders, etc.) and I really would like to push myself to work on some of my own ideas in terms of full games. I intend on doing all of the programming, graphics, and audio myself for this challenge, and will post all the progress on a regular basis as I develop! On the programming side I will use C++ and SFML. For graphics I will be using a combination of 2D and 3D applications (Photoshop, Blender, etc.). The sound and music will be created using FL Studio and assorted VSTs. I will have to pull out my MIDI controllers and blow some dust off those knobs and keys! The game itself will be a Collectible Card Game. I was working on one prior to this and have always wanted to go out and make my own. I have already come up with the concept, basic story, and how the cards will work together, but as the game grows changes will be made, as I'm developing this organically. My next blog entry will be titled: (Name of Game) Dev Blog ##. I look forward to this challenge, and cannot wait to get started! I should have a blog post ready to go within 5 days, as I want to actually post content in each entry. Thanks for reading.
10. In this Blog entry I would like to introduce our main character and some of the basic gameplay mechanics. The player character of Warriorb was created in a magical accident. It involved a dead girl, a mighty spirit, a failed resurrection attempt and a group of kids playing football in the neighborhood, but let’s not go too deep into the details yet.
Long story short, our unlikely protagonist is a spirit who ended up imprisoned in a ball. This weird body has a few unique skills – and also some limitations one has to overcome using all the creativity to be found in a tiny little ball. The biggest problem is being a bit low on limbs – there are only two of them, so wise and creative usage is a must. Normally the limbs function like legs. In certain situations, however, hands can come in more… handy. And there is the hybrid mode – in some situations one limb can function as a hand while the other works as a leg. If you ever tried to do two different coordinated moves with your two hands at the same time, you must have an idea of how hard it is to walk on one hand while aiming with the other. Not every situation requires limb involvement, however – as the game is a platformer, there will be plenty of situations to use more ball-like abilities, like rolling... … and bouncing. There are other ancient, unfortunate spirits trapped in somewhat more appropriate bodies out there – but most of them have gone mad during their thousands of years spent in imprisonment. The only way to help them is to end their suffering. That's all for now. See you next time!

## Latest News

1. The Oculus Developer Blog was updated on Friday with a new post from Chris Pruett entitled "Everything You Need to Know to Develop for Oculus Go". The post details Oculus' recommendations for developing Oculus Go apps and some of the new tools and technical features they've built into the device. Topics include:

- Gear VR Compatible
- Recommended Developer Environment
- Fixed Foveated Rendering
- Dynamic Throttling
- 72 Hz Mode
- Performance Tuning
- Submitting Apps and Updates for Oculus Go

Read the full blog post here.
2. LunarG has released new Vulkan SDKs for Windows, Linux, and macOS based on the 1.1.73 header. The new SDK includes:

- New extensions: VK_ANDROID_external_memory_android_hardware_buffer, VK_EXT_descriptor_indexing, VK_AMD_shader_core_properties, VK_NV_shader_subgroup_partitioned
- Many bug fixes, increased validation coverage and accuracy improvements, and feature additions

Developers can download the SDK from LunarXchange at https://vulkan.lunarg.com/sdk/home.
3. Submissions are now open for IndieCade 2018. The first submission deadline is May 28, with submission fees starting at $75 USD. Learn more about submission requirements, fees, and events at https://www.indiecade.com/. IndieCade supports independent game development and organizes a series of international events showcasing the future of independent games. It encourages, publicizes, and cultivates innovation and artistry in interactive media, helping to create a public perception of games as rich, diverse, artistic, and culturally significant. IndieCade's events and related production and publication programs are designed to bring visibility to and facilitate the production of new works within the emerging independent game movement.
4. Spectral rendering enters the computer graphics world. After a period without many feature improvements, Dual Heights Software have now released a new major version of their water texture generator software, Caustics Generator. The new version includes full-spectrum color rendering, allowing the simulation of spectral refraction, which produces prismatic coloring effects and thus brings caustics simulation to a new level. The spectral functionality is available in the Pro version on Windows, Mac, and Linux. Read more about the Caustics Generator at https://www.dualheights.se/caustics/
5. In the latest CG Garage podcast, Chris Nochols interviews Sebastien Deguy, CEO of Allegorithmic. In the podcast Sebastien discusses the history of Allegorithmic in a refreshing, non-corporate way. Recorded at the Vertex Conference in London, Sebastien tells Chris how a single mistake led to the founding of this fast-growing and innovative company, which is branching out into visual effects and architecture, after rising to a state where 95% of AAA games use a Substance product for texturing.
He also talks about the little tricks Allegorithmic uses behind the scenes, the future of the software, and even how we can understand the universe through mathematics. Check out the podcast at https://www.chaosgroup.com/blog/sebastien-deguy-allegorithmic.
6. The International Game Developers Association (IGDA) and Plan C Ventures, LLC today announced the 2018 IGDA Game Leadership Summit will be held in Austin, TX on Thursday – Friday, Sept. 13 – 14. Featuring a diverse slate of executives who exemplify the best leadership practices in the industry, the global summit is the first-ever event focused on helping individual leaders at all levels within the industry maximize business performance and professional success as they navigate the rapidly changing interactive entertainment field. The announcement follows the release of IGDA's highly publicized 2017 Developer Satisfaction Survey, citing diversity and job stability concerns across the industry. Informed by the survey's results, the IGDA is developing a robust agenda designed to help leaders strengthen their individual leadership abilities and keep talent engaged in long-term careers. Summit Tracks:

1. Management Excellence: Managers, executives, business owners and entrepreneurs will learn competencies essential to running smart, effective game businesses, at scales from small team to thousand-plus person studio. From planning ahead for success to navigating risk and positioning for growth, topics include Building Sustainable Teams, What to Consider Before Starting a Studio, and Building Strong Partnerships.
2. Professional Leadership: Game professionals at all stages of their careers and across all disciplines who seek to better manage their careers and acquire leadership skills will learn to improve their decision making, planning and problem solving. Topics include What I've Learned in 25 Years in Game Development, Negotiating Fundamentals, and Adapting to Change in an Industry Driven by Change.
"Strong, broad-based business management and professional leadership skills are the cornerstones of a healthy career and an essential foundation of high performance for companies in the interactive entertainment industry," said Jen MacLean, executive director, IGDA. "With the re-launch of IGDA's mission to support and empower game developers around the world in achieving fulfilling and sustainable careers, we're excited to partner with Plan C Ventures to bring this one-of-a-kind program to our members and the worldwide game-development community." (See GameDev.net's interview with Jen MacLean here.) Registration is now open. Early bird registration before July 13 is $125 for IGDA members and $200 for non-members. The IGDA Game Leadership Summit is guided by an expert advisory board. While most of the speakers and program content will be hand-selected by the board, a call for speakers is also now open. This comprehensive effort will ensure a diverse field of innovation and experience for attendees. As a rolling call, speaker submissions are reviewed upon receipt, with the first deadline on April 30, 2018.
7. Hello developers, my name is Sergio Ordonez from SOSFactory.com. I just wanted to let you know about the release of my Photoshop-based avatar creator: a premium, high-resolution, high-quality, full-body cartoon avatar creator for male and female characters, fully customizable from the skin color and hairstyles to the clothing. Aimed at game and app developers, and suitable for personal use too. With the SOSFactory avatar system you can create unlimited characters; those are just a small selection. Already available: 20 out of 40 sets at my website: http://www.sosfactory.com/premium-stock-avatar-creator/ Feedback and critique is more than welcome! Thanks, Sergio
8. Following a successful beta, Allegorithmic last week announced the official release of Substance Plugin for 3ds Max, now available for free.
First announced in November, Substance Plugin for 3ds Max brings the professional material creation toolset to artists and designers, with over 20 new updates and a direct link to Substance Source. Designed for visualization experts, the new plugin launches with support for V-Ray and Corona, the AEC industry's leading renderers, as well as the latest versions of Octane and Arnold. The new support comes with automated workflows, which send material data to the user's renderer of choice at the push of a button. Substance menus have also been added to the design and default layouts of 3ds Max, ensuring a smooth, uninterrupted workflow for artists and designers. "This plugin was very much a collaborative effort between Allegorithmic and our community of beta testers," said Sébastien Deguy, founder and CEO of Allegorithmic. "Thanks to their efforts, 3ds Max users can finally participate in the full benefits of the Substance ecosystem, creating and editing photorealistic materials in the most intuitive way possible." Featuring a simplified design and tools that just work, Substance Plugin for 3ds Max streamlines the entire process without sacrificing quality. All materials can also now be sent to a user's 3ds Max library, including anything drawn from Substance Source. With over 1,000 materials to choose from, artists and designers can use this tool to do their work faster, dropping readymade 8K materials into their projects outright, or editing for a little extra flavor. In the coming year, additional functionality will continue to be added at no cost. Current plans include network/cloud rendering, support for additional third-party renderers and animated Substance support. Additional features will be announced soon. A complete list of changes can be found here. Pricing/Availability: Substance Plugin for 3ds Max 2018 is available now for Windows users for free.
Substance Source files can be accessed through a Substance subscription; Substance Indie costs $19.90/month, while Pro plans cost $99.90/month.
9. Last week VIVE announced the VIVE SRWorks SDK, allowing developers to access the stereo front-facing cameras on the VIVE Pro. Developers will now be able to perform 3D perception and depth sensing with the stereo RGB sensors. The SDK includes plugins for Unity and Unreal. VIVE also included a few videos worth checking out. Learn more at http://developer.vive.com/resources.
10. Epic Games has announced the Unreal Engine #ue4jam for Spring 2018. Once a quarter, Epic hosts the #ue4jam to give developers a chance to create new projects, sharpen their skills, and compete for prizes. Prizes include an Intel Optane SSD 900P, a Houdini Indie license from SideFX, and more. Learn more from the Unreal Engine blog here.

## Recent Articles and Tutorials

1. As DMarket platform development continues, we would like to share a few case studies regarding the newest functionality on the platform. With these case studies we would like to illuminate our development process, user requirements gathering and analysis, and much more. The first case study we're going to share is "DMarket Wallet Development": how, when and why we decided to implement functionality which improved virtual items and DMarket Coins collection and transfer. DMarket cares about every user, no matter how big or small the user group is. And that's why we recently updated our virtual item purchase rules, bringing a brand new "DMarket Wallet" feature to our users.
So let’s take a retrospective look and find out what challenges were brought to the DMarket team within this feature and how these challenges were met. DMarket and Blockchain Virtual Items Trading Rules Within the first major release of the DMarket platform, we provided you with a wide range of possibilities and options, assuring Steam account connection within user profile, confirmation of account and device ownership via email for enhanced security, DMarket Coins, and DMarket Tokens exchanging, transactions with intermediaries on blockchain within our very own Blockchain system called “Blockchain Explorer”. And well, regarding Blockchain... While it has totally proved itself as a working solution, we were having some issues with malefactors, as many of you may already know. DMarket specialists conducted an investigation, which resulted in a perfect solution: we found out that a few users created bots to buy our Founder’s Mark, a limited special edition memorabilia to commemorate the launch of the platform, for lower prices and then sell them at higher prices. Sure thing, there was no chance left for regular users. A month ago we fixed the issue, to our great relief. We received real feedback from the community, a real proof-of-concept. The whole DMarket ecosystem turned out to be truly resilient, proving all our detractors wrong. And while we’ve got proof, we also studied how users feel about platform UX since blockchain requires additional efforts when buying or selling an item. With our first release of the Demo platform, we let users sign transactions with a private key from their wallet. In terms of user experience, that practice wasn’t too good. Just think about it: you should enter the private key each time you want to buy or sell something. Every transaction required a lot of actions from the user’s side, which is unacceptable for a great and user-friendly product like ours. 
That's why we moved away from that approach and created a single unified “wallet” on the DMarket side: we store all the DMarket Coins and virtual items, and users can buy or sell with a few clicks instead of the previous lengthy process. In other words, every user received a public key serving as a destination address, while private keys were held on the DMarket side to avoid signing each transaction. This improved usability, and most of our users were satisfied with the update. But not all of them...

You Can't Make Everyone Happy... Can You?

By removing the transaction-signing requirement we made most of our users happy. But among a large number of happy people, you can always find some who worry about a wallet for which they hold only the public key. Sure, DMarket is a trusted company, but some people can't even trust themselves sometimes. So what were we going to do? Ignore them? Roll back to the previous way of buying virtual items and coins? No! We went another way.

Within the briefest timeline, the DMarket team delivered a completely new feature in Blockchain Explorer: wallet creation. With it, you can create a wallet in two clicks, getting both private and public keys and thereby ensuring the safety of your items and coins. Essentially, we separated marketplace wallets from wallets on our blockchain, keeping the streamlined UX while giving the small group of concerned users the option to keep everything in a separate wallet. You can shop on DMarket without the additional effort of signing every transaction, and at the same time you are free to transfer all your goods to your own wallet whenever you feel the need. Isn't that cool?

Outcome

After implementing the separate DMarket wallet creation feature, we killed two birds with one stone and satisfied everyone.
It wasn't easy, though, since we had a very limited amount of time. So if you need it, try it. Moreover, creating a DMarket wallet within Blockchain Explorer lets you manage your wallet even on mobile devices: along with downloading your private and public keys, you get a 12-word mnemonic phrase to restore your wallet on any mobile device, from smartphone to tablet. But that's another story, one about the DMarket Wallet application, which has recently become available for Android users on Google Play. Stay tuned for more case studies, and don't forget to check out our website and gain firsthand experience with in-game item trading!

2. I got into a conversation a while ago with some fellow game artists, and the prospect of signing bonuses was brought up. Out of the group, I was the only one who had negotiated any sort of sign-on bonus or payment above and beyond base compensation. My goal with this article (and possibly others) is to inform and motivate other artists to work on this aspect of their “portfolio” and start treating their career as a business.

What is a Sign-On Bonus?

Quite simply, a sign-on bonus is a sum of money offered to a prospective candidate to get them to join. It is quite common in other industries but rarely seen in games outside the executive level. Unfortunately, conversations about artist employment usually stop at base compensation, quite literally leaving money on the table.

Why Ask for a Sign-On Bonus?

There are many reasons to ask for a sign-on bonus. In my experience, it has been to compensate for some delta between how much I need and how much the company is offering. For example, a company has offered a candidate a position paying $50k/year. However, research indicates that the candidate requires $60k/year to keep in line with their personal financial requirements and long-term goals.
Instead of turning down the offer wholesale, they may ask for a $10k sign-on bonus with actionable terms to partially bridge the gap. Whatever the reason, the ask needs to be reasonable. Would you like a $100k sign-on bonus? Of course! Should you ask for it? Probably not. A sign-on bonus is a tool to reduce risk, not a tool to help you buy a shiny new sports car.

Aspects to Consider

Before asking for a large sum of money, there are some aspects of sign-on bonus negotiations the candidate needs to keep in mind:

- The more experience you have, the more leverage you have to negotiate.
- You must have confidence in your role as an employee.
- You must have done your research. This includes knowing your personal financial goals and how the prospective offer changes, influences, or diminishes them.

To the first point: the more experience one has, the better. If the candidate is a junior employee (roughly defined as having less than 3 years of industry experience) or looking for their first job in the industry, it is highly unlikely that a company will entertain a conversation about sign-on bonuses. Getting into the industry is highly competitive, and there is very little motivation for a company to pay a sign-on bonus to one candidate when there are dozens (or in some cases hundreds) of other candidates who will jump at the first offer.

Additionally, the candidate must have confidence that they will succeed in the desired role. They have to know that they can handle the day-to-day responsibilities as well as any extra demands that may come up during production. The company needs to be convinced of their ability to be a team player and, as a result, be willing to put a little extra money down to hire them. In other words, the candidate needs to reduce the company's risk in hiring them enough that an extra payment or two is negligible.
And finally, they must know where they sit financially and where they want to be in the short, mid, and long term. Having this information at hand is essential to the negotiation process.

The Role Risk Plays in Employment

The interviewing process is tricky for all parties involved, and it revolves around the idea of risk. Is this candidate low-risk or high-risk? The risk level depends on a number of factors: portfolio quality, experience, soft skills, etc. Were you late for the interview? Your risk to the company just went up. Did you bring additional portfolio materials that were not online? Your risk just went down, and you became more hireable. If a candidate has an offer in hand, the company sees enough potential to get a return on their investment with as little risk as possible. At this point, the company is confident in the candidate's ability as an employee (i.e., low risk) and is willing to give them money in return for that ability.

Asking for the Sign-On Bonus

So what now? The candidate has gone through the interview process, and the company has offered them a position and base compensation. Unfortunately, the offer falls below expectations. Here is where knowledge and research of the position and personal financial goals come in. The candidate has to know their thresholds and limits. If they ask for $60k/year and the company is offering $50k, how do they ask for the bonus? Once again, it comes down to risk. Here is the point to remember: risk is not one-sided. The candidate takes on risk by changing companies as well, and has to leverage the sign-on bonus as a way to reduce risk for both parties. Here is the important part: a sign-on bonus reduces the company's risk because the company is not committing to an increased salary, and bonus payouts can be staggered and have terms attached to them.
The sign-on bonus reduces the candidate's risk because it bridges the gap between the offered compensation and their personal financial requirements. If the sign-on bonus is reasonable and the company has the finances (explained further below), it is a win-win for both parties and, hopefully, the beginning of a profitable business relationship.

A Bit about Finances

First off, I am not a business accountant, nor have I managed finances for a business. I am sure it is much more complicated than my example below, with many more considerations to take into account. In my experience, however, base compensation (i.e., salary) generally falls into a different line-item category on the financial books than a bonus payout. When companies determine how many open spots they have, it is usually done by department, with per-department salary caps. For a simplified example, suppose an environment department's total salary cap is $500k/year. They have 9 artists being paid $50k/year, leaving $50k/year for the 10th member of the team. Remember the example earlier asking for $60k/year? The company cannot offer that salary because it breaks the departmental cap. However, since bonuses typically do not count against departmental caps, the company can pull from a different pool of money without increasing its risk by committing to a higher salary.

Sweetening the Deal

Coming right out of the gate and asking for an upfront payment might be too aggressive a play (i.e., high risk for the company). One way around this is to attach terms to the bonus. What does this mean? Take the situation above: a candidate has an offer for $50k/year but would like a bit more. If, through the course of discussing compensation, they get the sense that $10k is too high, they can offer to break up the payments based on terms.
For example, a counterpoint to the initial base compensation offer could look like this:

- $50k/year salary
- $5k bonus payout #1 after 30 days of successful employment
- $5k bonus payout #2 after 365 days (or any length of time) of successful employment

In this example, the candidate is guaranteed $55k/year for 2 years. Factoring in a standard 3% cost-of-living raise, the first 3 years of employment look like this:

- Year 0-1 = $55,000 ($50,000 + $5,000 payout #1)
- Year 1-2 = $56,500 (($50,000 x 1.03) + $5,000 payout #2)
- Year 2-3 = $53,045 ($51,500 x 1.03)

It might not be the $60k/year they had in mind, but it is a good compromise that keeps both parties comfortable.

If the Company Says Yes

Great news! The company said yes! What now? Personally, I always request at least a full 24 hours to crunch the final numbers. In the past, I've requested up to a week for full consideration. Even if you know you will say yes, doing due diligence with your finances one last time is always good practice. Plug the numbers into a spreadsheet, look at your bills and expenses again, and review the whole offer (base compensation, bonus, time off/sick leave, medical/dental/vision, etc.). Discuss the offer with your significant other as well. You will see the offer in a different light when you wake up, so make sure you are not rushing into a situation you will regret.

If the Company Says No

If the company says no, you have a difficult decision to make. Request time to review the offer and crunch the numbers. If it is a lateral move (same position, different company), you have to ask whether the switch is worth it. Only due diligence will offer that insight, and you have to give yourself enough time to let those insights arrive. You might find yourself accepting the new position for other, non-financial reasons (which could be a whole separate article!).
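The three-year figures above are easy to verify. This is a minimal sketch with the example's hypothetical numbers ($50k base, two $5k payouts, a 3% raise); it is an illustration, not financial advice:

```python
# Reproduce the three-year compensation figures from the example above.
# Assumed inputs: $50k base salary, two $5k bonus payouts, 3% annual raise.
base = 50_000
bonus = 5_000
raise_rate = 1.03

year1 = base + bonus                        # payout #1 lands in year one
year2 = round(base * raise_rate) + bonus    # one raise, then payout #2
year3 = round(base * raise_rate ** 2)       # raises only from here on

print(year1, year2, year3)
```

Running it prints 55000, 56500, and 53045, matching the bullet list; the year-three dip is simply the bonus payouts ending.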
Conclusion/Final Thoughts

When it comes to negotiating during the interview process, it is very easy to take what you can get and run. You might fear that by asking for more you will disqualify yourself from the position. Keep in mind that the offer has already been extended to you, and a company will not rescind it simply because you came back with a counterpoint. Negotiations are expected at this stage, and by putting forth a creative compromise, your first impression is that of someone who conducts themselves professionally. Also keep in mind that negotiations do not always go well. Countless factors influence whether or not someone gets a sign-on bonus; sometimes it comes down to being in the right place at the right time. Just make sure you do your due diligence and are ready when the opportunity presents itself. Hope this helps!

6. This reference guide has now been proofread by @stimarco (Sean Timarco Baggaley). Please give your thanks to him. The guide should now be far easier to read and understand than previous revisions. Enjoy! Note: the normal mapping tutorial has been temporarily moved, and will be added back as its own topic to keep the two separate for clarity. If anyone has any corrections, please contact me.

3D Graphics Primer 1: Textures

This is a quick reference for artists who are starting out. The first topic revolves around textures and the many things a beginning artist needs to understand. I am primarily a 3D artist, so my focus will be on 3D art; however, some of this information is applicable to 2D artists as well.

Textures

What is a texture?

By classical definition, a texture is the visual and especially tactile quality of a surface (Dictionary.com).

Since current games lack the ability to convey tactile sensations, a texture in game terms simply refers to the visual quality of a surface, with an implied tactile quality. That is, a rock texture should give the impression of the surface of a rock and, depending on the type, a rough or smooth feel. We see and feel these kinds of surfaces in real life, so when we see the texture, past experience tells us what it feels like without our needing to touch it. But a lot more goes into making a convincing texture beyond simple tactile quality.

As you will learn as you read on, textures in games are a complex topic, with many elements involved in creating them for realtime rendering.

We will look at:

- Texture File Types & Formats
- Texture Image Size
- Texture File Size
- Texture Types
- Tiling Textures
- Texture Bit Depth
- Making Normal Maps (brief overview only)
- Pixel Density and Fill Rate Limitation
- Mipmaps
- Further Reading

Further Reading

Creating and using textures is such a big subject that covering it entirely within this one primer is simply not sensible. All I can sensibly achieve here is a skim over the surface, so here are some links to further reading matter.

Beautiful Yet Friendly - Written by Guillaume Provost, hosted by Adam Bromell. This is a very interesting article that goes into some depth about basic optimizations and the thought process when designing and modeling a level. It goes into the technical side of things to truly give you an understanding on what is going on in the background. You can use the information in this article to find out how to build models that use fewer resources -- polygons, textures, etc. -- for the same results.
This is why it is the first article I am linking to: it is imperative to understand the topics it discusses. If you need any extra explanation after reading it, you can PM me and I will be more than happy to help. However, parts of the article go beyond textures and into the mesh side of things, so keep that in mind if you're focusing on learning textures at the moment.

UVW Mapping Tutorial - by Waylon Brinck. This is about the best tutorial I have found for a topic that gives all 3D artists a headache: unwrapping your three-dimensional object into a two-dimensional plane for 2D painting. It is the process by which all 3D artists place a texture on a mesh (model). NOTE: while this tutorial is very good and will help you in learning the process, UVW mapping/unwrapping is just one of those things you must practice and experiment with for a while before you truly understand it.

Poop In My Mouth's Tutorials - By Ben Mathis. Probably the only professional I know with such a scary website name, but don't be afraid! I swear there is nothing terrible beyond that link. He has a ton of excellent tutorials, short and long, covering both the modeling and texturing processes, ranging from normal mapping to UVW unwrapping. You may want to read this reference first before delving into some of his tutorials.

Texture File Types & Formats

In the computer world, textures are really nothing more than image files applied to a model. Because of this, a variety of common computer image formats can be used, including .TGA, .DDS, .BMP, and even .JPG (or .JPEG). Almost any digital image format can be used, but some things must be taken into consideration:

In the modern world of gaming, which relies heavily on shaders, formats like .JPG are rarely used. This is because .JPG and others like it are lossy formats, where data in the image file is actually thrown away to make the file smaller. This process can result in compression artifacts. The problem is that these artifacts interfere with shaders, because shaders rely on having all the data in the image intact. Because of this, lossless formats are used: formats like .DDS (if the lossless option is chosen), .BMP, and .TGA.

However, there is such a thing as S3TC (also known as "DXT") compression. This compression technique was developed for the Savage 3D graphics cards, with the benefit that a texture stays compressed within video memory, whereas non-S3TC-compressed textures do not. This yields an 8:1 or greater compression ratio, which allows either more textures in a scene or a higher-resolution texture without using more memory. S3TC compression can be made to work with any format, but it is most commonly associated with the .DDS format.

Just like .JPG and other lossy formats, any texture using S3TC will suffer compression artifacts, and as such it is not suitable for normal maps (which we'll discuss a little later on).

Even with S3TC available, it is common to author textures in a lossless format and then apply S3TC only where appropriate. This gives the artist lossless textures where they are needed (e.g., for normal maps) while providing a method of compression for textures that can benefit from S3TC, such as diffuse textures.

Texture Image Size

The engineers who design computers and their component parts like us to feed data to the hardware in chunks whose dimensions are powers of two (e.g., 16 pixels, 32 pixels, 64 pixels, and so on). While it is possible to have a texture that is not a power of two, it is generally a good idea to stick to power-of-two sizes for compatibility reasons, especially if you're targeting older hardware. That is, if you're creating a texture for a game, you want image dimensions that are powers of two: 32x32, 16x32, 2048x1024, 1024x1024, 512x512, 512x32, etc. If, for example, you are UV unwrapping a mesh/model for a game, you must work within dimensions that are powers of two.

Powers of two include: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and so on.
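A dimension can be checked for this property with the classic bit trick: a positive integer is a power of two exactly when it has a single set bit. A minimal sketch (the function name is my own):

```python
def is_power_of_two(n: int) -> bool:
    """A positive integer is a power of two iff it has exactly one set bit,
    so n & (n - 1) clears that bit and leaves zero."""
    return n > 0 and (n & (n - 1)) == 0

# Typical texture dimensions pass; arbitrary ones do not.
print(is_power_of_two(1024))  # True
print(is_power_of_two(768))   # False
```

Checking both width and height this way is a cheap asset-pipeline sanity test before a texture ever reaches the engine.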

What if you want to use images that aren't powers of two? In such cases, you can often use uneven pixel ratios. This means you can create your texture at, say, 1024x768, and then save it as 1024x1024. When you apply the texture to your mesh, you can stretch it back out in proportion. It is best to aim for a 1:1 pixel ratio and create the texture at a power of two from the start, but stretching is one method of getting around this if needed.

Please refer to the "Pixel Density and Fill Rate Limitation" section for more in-depth information on how to choose your image size.

Texture File Size

File size is important for a number of reasons. The file size is the actual amount of memory (permanent or temporary) that the texture requires. For an obvious example, an uncompressed .BMP might be 6 MB; this is the space it requires on a hard drive and within video and/or system RAM. Using compression we can squeeze the same image into, say, 400 KB or even less. But compression like that used by .JPG and similar formats only helps on permanent storage media, such as hard drives. When the image is loaded into video card memory it must be uncompressed, so you only benefit when storing the texture on its permanent medium, not in memory during use.

Enter S3TC

The key benefit of the S3TC compression system is that it compresses the texture on hard drives, discs, and other media, while also staying compressed within video card memory.

But why should you care what size it is within the video memory?

Video cards have onboard memory called, unsurprisingly enough, video memory, for storing data that needs to be accessed quickly. This memory is limited, so the artist must take it into consideration. The good news is that video card manufacturers are constantly adding more of this memory, known as Video RAM (or VRAM for short). Where 64 MB was once considered good, we can now find video cards with up to 1 GB of VRAM.

The more textures you have, the more RAM is used; the actual amount is based primarily on the file size of each texture. Other data takes up video memory too, such as the model data itself, but the majority is used by textures. For this reason, it is a good idea both to plan your video memory usage and to test how much you're actually using once in the engine. It is also advisable to define a minimum hardware configuration for your game, and to make sure your video memory usage never exceeds the minimum target hardware's VRAM.
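That planning step can be as simple as summing per-texture footprints against the minimum-spec budget. A trivial sketch with hypothetical numbers (the function name and the 64 MiB budget are my own, for illustration):

```python
def fits_in_budget(texture_sizes_bytes, budget_bytes):
    """Sum per-texture footprints and compare against the share of
    minimum-spec VRAM you are willing to spend on textures."""
    return sum(texture_sizes_bytes) <= budget_bytes

MiB = 1024 * 1024
scene_textures = [4 * MiB, 4 * MiB, 1 * MiB, 512 * 1024]  # hypothetical scene
print(fits_in_budget(scene_textures, 64 * MiB))  # True
```

In practice the engine's own statistics are the ground truth, but a spreadsheet-style check like this catches gross overruns before anything is authored.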

Another advantage of in-memory compression like S3TC is that it can reduce bandwidth usage. If you know your engine on your target hardware may be swapping textures back and forth frequently (something to avoid if possible, but a technique used on consoles), you may want to keep the textures compressed and decompress them on the fly: the textures stay compressed, and when required they are transferred and then decompressed as they are added to video memory. This means less data has to be shunted across the graphics card's bus (transport line) to and from the computer's main memory, reducing bandwidth utilization at the cost of a few processing clocks.

Texture Types

Now we're going to discuss diffuse, normal, specular, parallax, and cube mapping. Aside from the diffuse map, these mapping types are common in what have become known as 'next-gen' engines, where the artist is given more control over how the model is rendered by means of control maps for shaders.

Diffuse maps are textures which simply provide the color and any baked-in details. Games before shaders used only diffuse textures (you could say this is the 'default' texture type). For example, when texturing a rock, the diffuse map would be just the color and detail of that rock. Diffuse textures can be painted by hand, made from photographs, or a mixture of both.

However, in any modern engine, it is preferable not to 'bake' (i.e., pre-draw directly into the texture) most lighting and shadow detail into the diffuse map, but instead to keep just the color data in the diffuse and use other control maps to recreate shadows, specular reflections, refraction effects, and so on. This results in a more dynamic look for the texture overall, helping its believability in-game. Baking such details in hinders the engine's ability to produce dynamic results, making the end result look unrealistic.

There is an occasional exception to this rule: baking 'general' shading or lighting, such as an ambient occlusion map, into the diffuse is fine. Additions like these are general enough that they won't hinder the dynamic lighting of a realtime engine, while achieving a higher level of realism.

Another point to remember is that while textures sourced from photographs tend to work very well in 3D environment work, it is often frowned upon to use such 'photosourced textures' for humans. Human diffuse textures are usually hand-painted.

Normal maps are used to provide lighting detail for dynamic lighting, and as we will discuss shortly, they play an even more important role. Normal maps get their name from the fact that they recreate surface normals using a texture. A 'normal' is a vector extending from a triangle on a mesh. It tells the game engine how much light the triangle should receive from a particular light source: the engine compares the direction of the normal with the position and angle of the light, and thus calculates how the light strikes the triangle. Without a normal map, the engine can only use the data available in the mesh itself; a triangle would have only three normals for the engine to use, one at each vertex, regardless of how big that triangle is on screen, resulting in a flat look. A normal map, on the other hand, supplies normals right across a mesh's surface. The result is the ability to generate a normal map from a 2-million-poly mesh and have its lighting detail recreated on a 200-poly mesh. This allows an artist to recreate a high-poly mesh with comparatively few polygons.
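The comparison the engine makes between a normal and a light direction is just a dot product, clamped at zero: this is the standard Lambertian diffuse term, sketched below with plain tuples (both vectors are assumed to be unit length).

```python
def lambert(normal, light_dir):
    """Diffuse intensity from a surface normal and a direction toward the
    light: the clamped cosine of the angle between the two unit vectors."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    dot = nx * lx + ny * ly + nz * lz
    return max(0.0, dot)  # surfaces facing away receive no light

# A surface facing the light is fully lit; a perpendicular one gets nothing.
print(lambert((0, 0, 1), (0, 0, 1)))  # 1.0
print(lambert((0, 0, 1), (1, 0, 0)))  # 0.0
```

A normal map simply lets this calculation run per pixel, with the normal fetched from the texture instead of interpolated from the triangle's three vertices.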

Such tricks are associated with 'next-gen' engines and are used heavily in screenshots of the Unreal 3 Engine, where you can see large amounts of detail that still runs in realtime thanks to the modest polygon counts actually used. Normal maps use the different color channels (red, green, and blue) to store the direction components as grayscale data; however, there is no set standard for how these channels are interpreted, so a channel sometimes needs to be flipped to work correctly in a given engine.

Normal maps can be generated from high-poly meshes, or they can be image-generated, that is, generated from a grayscale map. High-poly-generated normal maps tend to be preferred, as they tend to have more depth. The main reason is that it is difficult to paint a grayscale map matching the quality you'll get straight from a model; you will also often find that image generators miss details, requiring the artist to edit the normal map by hand afterwards. Still, near-equal results are possible with both methods; each has its own requirements, and it is up to you which to use in each situation.

(Please refer to the tutorial section for extended information on normal maps.)

Specular maps are simple to create and easy to paint by hand, because the map is just a grayscale image defining the specular level at each point on a mesh. However, in some engines and 3D modeling applications this map can be full-color, so that brightness defines the specular level and color defines the color of the specular highlights. This gives the artist finer control to produce more lifelike textures, because the specific specular response of a material can be made more defined and closer to reality. In this way you can give a plastic material a more plastic-like specular reflection, or simulate a stone's particular specular look.

Parallax mapping picks up where regular normal mapping fails. A parallax map does the same thing as a normal map, except that it also samples a grayscale height map stored in the alpha channel. Parallax mapping uses the angles recorded in the tangent-space normal map together with the height map to calculate how far to displace the texture coordinates: the grayscale map defines how much the surface should appear extruded, and the view angle determines the direction of the offset. Done this way, a parallax map can recreate the extrusion a normal map suggests, but without the flattening visible in ordinary normal mapping, which stems from the lack of data at different viewing angles. The technique gets its name from the parallax effect it reproduces. The end results are cleaner, deeper extrusions, because the texture coordinates shift with your perspective.
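The coordinate shift at the heart of the technique can be sketched in a few lines. This is the simplest "offset mapping" variant (shifting UVs by the tangent-space view direction scaled by the sampled height); real shaders refine it further, and the function name and scale factor here are my own:

```python
def parallax_offset(u, v, view_dir, height, scale=0.05):
    """Shift texture coordinates toward the viewer in proportion to the
    height sampled from the grayscale map (stored in the alpha channel).
    view_dir is the tangent-space view vector (vx, vy, vz), unit length."""
    vx, vy, vz = view_dir
    u2 = u + vx * height * scale
    v2 = v + vy * height * scale
    return u2, v2

# A tall texel viewed at a glancing angle is shifted; viewed head-on
# (view_dir = (0, 0, 1)) the coordinates do not move at all.
print(parallax_offset(0.5, 0.5, (0.7, 0.0, 0.7), height=1.0))
```

Because the offset depends on the view direction, the apparent relief changes as the camera moves, which is exactly what flat normal mapping cannot do.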

Parallax mapping is usually more costly than normal mapping. Parallax also has the limitation of not being used on deformable meshes because of the tendency for the texture to "swim" due to the way textures are offset.

Cube mapping uses a texture that is not unlike an unfolded box, and it acts just like a skybox when used: the texture is laid out like an unfolded box and is conceptually folded back together at runtime. This lets us create a three-dimensional reflection of sorts; the result is very realistic precomputed reflection. For example, you would use this technique to create a shiny metal ball, with the cube map providing the reflection data. (In some engines, you can tell the engine itself to generate a cube map from a specific point in the scene and apply the result to a model. This is how we get shiny, reflective cars in racing games: the engine is constantly taking cube map 'photos' for each car to ensure it reflects its surroundings accurately.)

Tiling Textures

While you can have unique textures made for specific models, a common way to save both time and video memory is to use tiling textures. These are textures that fit together like floor or wall tiles, producing a single, seamless image. The benefit is that you can texture an entire floor in an entire building with a single tiling texture, which saves both the artist's time and video memory, since fewer textures are needed.

A tiling texture is achieved by having the left and right sides of the texture blend into each other, and likewise the top and bottom. Such blending can be achieved with a specialist program or plugin, or by simply offsetting the texture a little to the left and down, cleaning up the seams, and then offsetting back to the original position.
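The offset step relies on wrap-around: texels pushed off one edge reappear on the opposite edge, moving the seams into the middle of the image where they can be painted over. A minimal sketch on a texture represented as a 2D list of texel values (the function name is my own; image editors do the same thing with an "offset" filter):

```python
def wrap_offset(texture, dx, dy):
    """Shift a texture (2D list of texels) right by dx and down by dy with
    wrap-around, the same trick used to expose a tiling texture's seams."""
    h = len(texture)
    w = len(texture[0])
    return [[texture[(y - dy) % h][(x - dx) % w] for x in range(w)]
            for y in range(h)]

tex = [[0, 1],
       [2, 3]]
print(wrap_offset(tex, 1, 1))  # the former edges now meet in the middle
```

After cleaning up the now-visible seams, applying the same offset in reverse restores the original alignment.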

After you create and test your first tiling texture, you're bound to see that it produces a 'tiling artifact' that reveals how and where the texture repeats. Such artifacts can be reduced by avoiding high-contrast detail and unique details (such as a single rock on a sand texture), and by tiling the texture less.

Texture Bit Depth

A texture is just an image file, which means most of the theory you're familiar with when working with images also applies to textures. One such area is bit depth. Here you will see numbers such as 8, 16, 24, and 32 bits, each corresponding to the amount of color data stored for each pixel.

How do we get the number 24 for an image file? The number refers to how many bits each pixel contains. You have three channels, red, green, and blue, each of which is simply a grayscale image; added together, they produce a full-color image. Black in the red channel means "no red" at that point, while white in the red channel means "lots of red"; the same applies to green and blue. Combined, they produce a full-color mix of red, green, and blue (the RGB color model). The bits come in because they define how many levels of gray each channel has: 8 bits per channel over 3 channels is 24 bits total.

8 bits gives you 256 levels of gray. Since each channel is simply grayscale, and a level of gray defines the intensity of that channel's color, a 24-bit image gives us 16,777,216 different colors to play with. That is, 8 bits x 3 channels = 24 bits; 8 bits = 256 grayscale levels; and 256 x 256 x 256 = 16,777,216 colors. This knowledge comes in useful when it is easier to edit the RGB channels individually; with a correct understanding you can then delve deeper into editing your textures.
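The channel arithmetic is worth making explicit, since the channels multiply rather than add:

```python
bits_per_channel = 8
levels = 2 ** bits_per_channel   # 256 shades of gray per channel
colors = levels ** 3             # three channels combine multiplicatively
print(levels, colors)            # 256 16777216
```

The same reasoning gives the color counts for lower bit depths: a 16-bit image (say 5-6-5 bits across the channels) yields 2^16 = 65,536 colors.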

However, with the increase in shader use, you'll often see a 32 bit image/texture file. These are image files which contain 4 channels of 8 bits each: 4 x 8 = 32. This allows a developer to use the 4th channel to carry a unique control map or extra data needed for shaders. Since each channel is gray scale, a 32 bit image is ideal for carrying a full color texture along with an extra map inside it. Depending on your engine, you may see a full color texture whose 4th channel holds a gray scale map for transparency (more commonly known as an "alpha channel"), a specular control map, or a gray scale height map alongside a normal map in the other channels, to be used for parallax mapping.
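To make the "4 channels of 8 bits" idea concrete, here is a sketch of how one 32-bit pixel packs its four channels with bit shifts (function names are illustrative, not from any particular engine):

```python
# Packing four 8-bit channels (R, G, B, plus an extra map such as alpha or
# a specular mask) into one 32-bit value, as a 32-bit texture stores per pixel.
def pack_rgba(r: int, g: int, b: int, a: int) -> int:
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(pixel: int):
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

p = pack_rgba(255, 128, 0, 200)   # orange pixel, extra-channel value 200
print(unpack_rgba(p))             # (255, 128, 0, 200)
```

Each channel occupies its own byte, which is why engines can reinterpret the 4th byte freely as alpha, specular, or height data.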

As you paint your textures you may start to realize that you're probably not using all of the colors available in a 24-bit image, and you're probably right. This is why artists can at times use a lower bit depth texture to achieve the same, or nearly the same, look with a smaller memory footprint. There are certain cases where you will more than likely need a 24-bit image, however: if your image/texture contains frequent gradations in color or shading, then a 24-bit image is required. But if your image/texture contains solid colors, little detail, little or no shading, and so on, you can probably get away with a 16, 8, or perhaps even a 4 bit texture.

Often, this type of memory conservation technique is best done when the artist is able to choose or make their own color palette. This is where you hand-pick the colors that will be saved with your image, instead of letting the computer choose automatically. With a careful eye you can choose more optimal colors which fit your texture better. Basically, all you're doing is throwing out what would be considered useless colors which are being stored in the texture but not being used.

## Making Normal Maps

There are two methods for creating normal maps:

1. Using a detailed mesh/model.
2. Creating a normal map from an image.
The first method is part of a common workflow supported by nearly all modelers that offer normal map generation. You can either generate the normal map straight out to a texture, or, if you're generating it from a high-poly mesh, it is common to then model the low-poly mesh around your high-poly mesh. (Some artists prefer modeling the low-poly version first while others like to do the high then the low; in the end there is no perfect way, it's just preference.)
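The second method, generating a normal map from an image, essentially treats a grayscale image as a height field and derives surface slopes from it. A minimal sketch using finite differences (the helper name and `strength` tuning parameter are assumptions for illustration, not any tool's actual API):

```python
import math

# Sketch: derive normal-map colors from a grayscale height image.
# `height` is a 2D list of floats in [0, 1]; `strength` scales bumpiness.
def height_to_normal(height, strength=2.0):
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the image edges
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            # Remap from [-1, 1] to 8-bit [0, 255]
            row.append(tuple(int((c / length * 0.5 + 0.5) * 255)
                             for c in (nx, ny, nz)))
        normals.append(row)
    return normals

flat = height_to_normal([[0.5] * 4 for _ in range(4)])
print(flat[0][0])  # a flat height field yields (127, 127, 255), the familiar blue
```

This is why flat regions of a normal map look uniformly light blue: zero slope maps to the midpoint in the red and green channels and full value in blue.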

For generating normal maps you can always use the Nvidia plugin; however, it takes a lot of tweaking to get a good-looking normal map. As such, I recommend CrazyBump, which will produce some very good normal maps if the texture it generates from is good.

## Combining the Two Methods

It is common, even if you're generating a normal map from a 3D high-poly mesh, to also generate an image-based normal map and overlay it over the high-poly generated one. This is done by generating one from your diffuse map, filling the resulting normal map's blue channel with neutral gray (128), and then overlaying this over your high-poly generated map. This adds in those small details that only the image can provide. This way you get the high-frequency detail along with the cleanly generated mid-to-low-frequency detail from your high-poly normal map.

## Pixel Density and Fill Rate Limitation

Let's say you have a coin that you just finished UVW unwrapping. It will be very small once in-game, yet you decide it would be fine to use a 1024x1024 texture. What is wrong with this situation? Firstly, you shouldn't need to UVW unwrap a coin! Furthermore, you should not be applying a 1024x1024 texture: not only is this wasteful of video memory, but it will result in uneven pixel density and will increase your fill rate cost on that model for no reason. A good rule of thumb is to use only the amount of resources that makes sense based on how much screen space an object will take up. A building will take up more of the screen than a coin, so it needs a higher resolution texture and more polygons. A coin takes up less screen space and therefore needs fewer polys and a lower resolution texture to obtain a similar pixel density.

So, what is pixel density? It is the density of a texture's pixels (texels) across the surface of a mesh. For example, take the UVW unwrapping tutorial linked in the "Texture Image Size" section: there you will see a checkered pattern, which is used not only to make sure the texture is applied correctly, but also to keep track of pixel density. If you increase the pixel density, the checkered pattern gets denser; if you decrease it, the pattern becomes less dense, with fewer squares showing.

Maintaining a consistent pixel density in a game helps all of the art fit together. How would you feel if your high-pixel-density character walked up to a wall with a significantly lower pixel density? Your eyes would compare the two and see that the wall looks like crap next to the character. Would the same thing happen if the character were at nearly the same pixel density as the wall? Probably not -- such things only become apparent (within reason) to the end user when they have something to compare against. If you keep a consistent pixel density throughout all of the game's assets, everything will fit together better.
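A rough way to quantify this mismatch is texels per world unit: the texture's resolution divided by the world-space size its UVs cover. The object sizes and resolutions below are hypothetical:

```python
# Texel density as texels per world unit: how many texture pixels cover
# one unit of world-space surface.
def texel_density(texture_px: int, surface_units: float) -> float:
    return texture_px / surface_units

wall = texel_density(1024, 4.0)       # 4-unit-wide wall with a 1024 texture
character = texel_density(2048, 2.0)  # 2-unit-tall character with a 2048 texture
print(wall, character)  # 256.0 vs 1024.0 -- a 4x mismatch the eye will notice
```

Keeping this ratio roughly equal across assets is what "consistent pixel density" means in practice.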

It is important to note that there is one other reason for this, but we'll come to it in a moment. First, we need to look at two related problems that can arise: transform-limited and fill-rate-limited models. A transform-limited model has a lower pixel density per polygon, while a fill-rate-limited model has a higher pixel density per polygon. The idea is that rendering a model takes longer on one of two stages: processing the polygons, or processing the pixel-dense surface. Knowing this, we can see that our coin, with very few polys, will have a giant pixel density per polygon, resulting in a fill-rate-limited mesh. However, it does not need to be fill rate limited: if we lower the texture resolution, we lower the pixel density.

The point is that your mesh will be held back in rendering by whichever process takes longer: transform or fill rate. If your mesh is fill rate limited, you can speed up its processing by decreasing its pixel density, and its speed will increase until you reach transform limitation, at which point your mesh takes longer to render based on the number of polygons it contains. In the latter case, you would speed up the model's processing by decreasing its polygon count -- that is, until you decrease the polygon count to the point where you're fill rate limited once again! As you can see, it's a balancing act. The trick is to minimize the impact of both transform and fill rate processing as much as you can, to get the best possible processing speed for your mesh.

That said, being fill rate limited can sometimes be a good thing. The majority of "next-gen" games are fill rate limited primarily because of their texture/shading techniques. So, if you can't possibly get the fill rate cost any lower and you're still fill rate limited, then you have a little wiggle room where you can actually introduce more polygons with no performance hit. However, you should always try to cut down on fill rate limitations when possible, for general performance reasons.

Some methods revolve around splitting a single polygon into multiple polygons on a single mesh (a wall, for example). This decreases the pixel density and shader processing per polygon by splitting the work across multiple polygons. There are other methods for dealing with fill rate limitation, but mainly it is as simple as keeping your pixel density at a reasonable level.

## MipMaps

It is fitting that after discussing pixel density and fill rate limitation we discuss mipmapping. Mipmaps (or mip maps) are a collection of precomputed lower-resolution copies of a texture, stored within the texture itself. Let's say you have a 1024x1024 texture. If you generate mipmaps for it, it will contain the original 1024x1024 texture, but it will also contain a 512x512, 256x256, 128x128, 64x64, 32x32, 16x16, 8x8, 4x4, and 2x2 version of the same texture (exactly how many levels there are is up to you). These smaller mipmaps are then used in sequence, one after the other, according to the model's distance from the camera in the scene. If your model uses the 1024x1024 texture up close, it may use the 256x256 level further away, and an even smaller mipmap level when it's way off in the distance. This is done for several reasons:

- Distance. The further away you are from your mesh, the less polygonal and texture detail is needed. All displays have a fixed resolution, and it is physically impossible for the player to decipher the detail of a 1024x1024 texture and a 6,000-polygon mesh when the model takes up only 20 pixels on screen. The further away we are from the mesh, the fewer polygons and the lower the texture resolution we need to render it.
- Fill rate. Because of the fill rate limitation described above, it is beneficial to use mipmaps as less texture detail must be processed for distant meshes. This results in a less fill-rate-heavy scene, because only the closer models receive larger textures, whereas more distant models receive smaller ones.
- Texture filtering. What happens when the player views a 1024x1024 texture at such a distance that only 40 pixels are available to render the mesh on screen? You get noise and aliasing artifacts! Without mipmaps, any texture too large for the display resolution only produces unneeded and unwanted noise. Instead of filtering out this noise, mipmapping uses a lower-resolution texture at greater distances, which results in less noise.

It is important to note that while mipmaps will increase performance overall, they actually increase texture memory usage: the mipmaps and the whole texture are loaded into memory. However, it is possible to have a system where the user or developer can select the highest mipmap level they want, so that levels above this limit are not loaded into memory (as in the system we're using now), while levels at or below the limit still are. It is widely agreed that the benefits of mipmaps vastly outweigh the small memory hit.

NOTE: Mesh detail can also affect performance; the equivalent method for mesh detail is known as LOD -- "Level Of Detail". Multiple versions of the mesh itself are stored at different levels of detail, and the less-detailed mesh is rendered when it's a long way away. Like mipmaps, a mesh can have any number of levels of detail you feel it requires.
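The mip chain and its memory cost are easy to compute: each level halves the resolution, and the total texel count converges to 4/3 of the base level, which is where the commonly cited "about one third extra memory" figure comes from. A quick sketch:

```python
# Mipmap chain for a square texture: each level halves the resolution
# until it reaches 1x1.
def mip_chain(size: int):
    levels = []
    while size >= 1:
        levels.append(size)
        size //= 2
    return levels

chain = mip_chain(1024)
print(chain)  # [1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1]

base = 1024 * 1024
total = sum(s * s for s in chain)
print(total / base)  # ~1.333 -> roughly a third more memory than the base level
```

Capping the highest loaded level (as described above) simply means dropping the first entries of this chain, which removes most of the memory cost since the largest level dominates the sum.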

The image below is of a level I created which makes use of most of what we've discussed: S3TC-compressed textures, normal mapping, diffuse mapping, specular mapping, and tiled textures.

## GameDev Contractors

1. Hi. Our team handles all kinds of graphics outsourcing. We work with any budget and are open to cooperation with indie developers. If you want to get high-quality graphics for your games, you are welcome. We specialize in:
- GUI design
- 2D characters with animations
- 2D locations
Our portfolio can be found at: https://kalitrom.artstation.com Contact us: kalitrom@gmail.com
2. Experienced freelance developer of over 15 years. I have worked on various games using engines such as Unity and Unreal, with previous releases published on Steam and the app stores. Recent projects include games developed on the Ethereum blockchain and a 3D music software: http://store.steampowered.com/app/673730/Aurrery/ https://github.com/markfrance Available to develop games using a wide range of technologies. info@blockchain-games.io
3. Hi. I'm Ziz. I want to Build Your Thing. I'm experienced with building systems and mechanics using Unity's C# scripting tools, ranging from fast 2D action games to complex number-crunchy backend things for RPGs. I'm a bit of a generalist; I can even do shader development in a pinch, and I have a solid base of other programming skillsets that might be of use to you - I've worked in Python, Ruby, C, C++, Javascript, and a bit of assembly hacking just 'cause. Hey, look, here's a screenshot of a cool cel shader thingy, and here are some jiffs with lots of nifty 2D effects, complex hand-rolled player physics, and whatnot. Also, I did all the pixel art there, so I guess that's another hat I can don if need be, huh. I have a few things up on my Github in varying stages of completion, if you wanna see some code. Contact me if you've got a proposal; I'm game for whatever.
4. Hey there! Jannik - that's me. I've been a freelance web developer since March 2016 and was a regular, employed web developer before that - about five years of total experience. What can I help you with? Basically anything web development related. I bring a vast amount of experience working with PHP, MySQL, PostgreSQL, JavaScript, jQuery, CSS(3), HTML(5), and Dojo, amongst others. You can find a full list on my website. I worked for a company called InnoGames (and many more) for quite a few years (one of the major browser/mobile game developers in Germany and Europe) - and I'm always working on my own browser and client game projects, too. How much do I charge? It's a tough question. I usually charge about 33-35 EUR per hour (net) for remote work during prime time (Monday-Friday, 9-to-5). However, I'm aware that there are lots of decent projects and start-ups out there on a budget - so I'm offering fixed prices and (much) lower rates for non-priority work I can do in the evenings and on weekends. General rule: it's negotiable - and I'm more willing to find a good rate for both of us if the project sounds interesting. Anyway, should you be interested - don't shy away. I'll be available to answer your requests and find a good deal.
You can reach me by sending a mail to hello@janniklorenzen.de Thanks!
5. Hello, Serial Lab Studios is a small audio production company with a focus on video game sound. We provide the following services:
- Custom Music Composition
- Sound Design
- Voice Production
- Sonic Branding
- Spatial Audio for VR/AR
- Play Testing
- Audio Implementation (FMOD, Wwise, Unity, Unreal)
Our full portfolio can be found at: https://www.seriallab.com We do our best to accommodate every budget. Our sound can be heard on over 100 video game titles across all major consoles and handhelds. Contact us at sounddesign@seriallab.com for more information. Thank you for your time; we look forward to hearing from you. Best, Gina Z & Spencer Bambrick, Serial Lab Studios