
[Image: hero.jpg]

For a long time I had been delaying finding a solution to feet etc. interpenetrating the terrain in my game. Finally I asked for suggestions here, and came to the conclusion that Inverse Kinematics (IK) was probably the best solution.

https://www.gamedev.net/forums/topic/694967-animating-characters-on-sloping-ground/

There seem to be quite a few 'ready built' solutions for Unity and Unreal, but I'm doing this from scratch so I had to figure it out myself. I will detail here my first foray into getting IK working; some more steps remain to make it into a working solution.

Inverse Kinematics - how is it done?

The two main techniques for IK seem to be either an iterative approach such as CCD or FABRIK, or an analytical approach where you directly calculate the solution. After some research, CCD and FABRIK looked pretty simple, and I will probably implement one of these later. However, for a simple 2 bone chain such as a leg, I decided that the analytical solution would probably do the job, and possibly be more efficient to calculate.

The idea is that based on some school maths, we can calculate the change in angle of the knee joint in order for the foot to reach a required destination.

The formula I used was based on the 'law of cosines':
https://en.wikipedia.org/wiki/Law_of_cosines

I will not detail it here, but it is easy enough to look up.
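Out of interest, here is roughly what the analytical solution boils down to, as a minimal C++ sketch (the function name and the clamping are mine for illustration, not lifted from my actual code). Given the two bone lengths and the distance from the hip to the desired foot position, the law of cosines gives the knee angle directly:

#include <cmath>

// c^2 = a^2 + b^2 - 2ab*cos(C)  =>  C = acos((a^2 + b^2 - c^2) / (2ab))
float CalcKneeAngle(float upperLen, float lowerLen, float hipToFootDist)
{
    float a = upperLen, b = lowerLen, c = hipToFootDist;
    float cosKnee = (a * a + b * b - c * c) / (2.0f * a * b);

    // clamp so an unreachable target gives a straight or fully bent leg,
    // rather than acos() returning NaN
    if (cosKnee > 1.0f) cosKnee = 1.0f;
    if (cosKnee < -1.0f) cosKnee = -1.0f;

    return acosf(cosKnee); // interior angle at the knee, in radians
}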

For the foot itself I used a different system: I calculated the normal of the ground under the foot during collision detection, then matched the orientation of the foot to the ground.
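The foot alignment amounts to finding the shortest-arc rotation taking the foot's up axis onto the ground normal. A minimal sketch, with stand-in maths types rather than my actual library:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

// Shortest-arc rotation taking unit vector 'up' onto unit vector 'normal'.
// (Degenerate when the vectors are exactly opposite - handle separately.)
Quat RotationBetween(const Vec3 &up, const Vec3 &normal)
{
    Quat q;
    // axis = up cross normal
    q.x = up.y * normal.z - up.z * normal.y;
    q.y = up.z * normal.x - up.x * normal.z;
    q.z = up.x * normal.y - up.y * normal.x;
    // w = 1 + up dot normal
    q.w = 1.0f + (up.x * normal.x + up.y * normal.y + up.z * normal.z);

    float mag = sqrtf(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
    q.x /= mag; q.y /= mag; q.z /= mag; q.w /= mag;
    return q;
}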

[Image: leg.jpg]

My test case was to turn off the animation and just have animals in an idle pose, and get the IK system to try to match the feet to the ground as I move them around. The end effect is like ice skating over the terrain. First I attempted to get it working with the main hero character.

Implementing

The biggest hurdle was not understanding IK itself, but in implementing it within an existing skeletal animation system. At first I considered changing the positions of the bones in local space (relative to the joint), but then realised it would be better to calculate the IK in world space (actually model space in my case), then somehow interpolate between the local space animation rotations and the world space IK solution.

I was quite successful in getting it working until I came to blending between the animation solution and the IK solution. The problems I was having seemed to be stemming from my animation system concatenating transforms using matrices, rather than quaternions and translates. As a result, I was ending up trying to decompose a matrix to a quaternion in order to perform blends to and from IK.

This seemed a bit ridiculous, and I had always been meaning to see whether I could run the animation system entirely on quaternion / translate pairs rather than matrices, which would clearly make things much easier for IK. So I went about converting the animation system. I wasn't even absolutely sure it would work, but after some fiddling, yay! It was working.

I now do all the animation blending / concatenation / IK as quaternions & translates, then only as a final stage convert the quaternion/translate pairs to matrices, for faster skinning.

This made it far easier in particular to rotate the foot to match the terrain.
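To give an idea of what the quaternion / translate pipeline looks like, here is a sketch of the concatenation step (again with stand-in types; my real maths library differs). The child translate is rotated by the parent rotation and added on, which is the quaternion equivalent of a matrix multiply:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; }; // unit quaternion

Quat Mul(const Quat &a, const Quat &b)
{
    return { a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
             a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z };
}

Vec3 Rotate(const Quat &q, const Vec3 &v)
{
    // v' = q * (v, 0) * conjugate(q)
    Quat p  = { v.x, v.y, v.z, 0.0f };
    Quat qc = { -q.x, -q.y, -q.z, q.w };
    Quat r  = Mul(Mul(q, p), qc);
    return { r.x, r.y, r.z };
}

struct QTPair { Quat rot; Vec3 trans; };

QTPair Concat(const QTPair &parent, const QTPair &child)
{
    Vec3 t = Rotate(parent.rot, child.trans);
    return { Mul(parent.rot, child.rot),
             { parent.trans.x + t.x, parent.trans.y + t.y, parent.trans.z + t.z } };
}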

[Image: monkey.jpg]

Another snag I found was that Blender seemed to be exporting some bones with an 'extra' rotation, i.e. with an identity local rotation the skin doesn't always point along the bone axis. I did some tests with an ultra simple 3 bone rig, trying to figure out what was causing this (perhaps I had set up my rig wrong?) but no joy. It is kind of hard to explain, and I'm sure there is a good reason for it. But I had to compensate for this in my foot rotation code.

Making it generic

To run the IK on legs, I set up each animal with a number of legs, the foot bone ID, the number of bones in the chain etc. Thus I could reuse the same IK routines for different animals, just changing these IK chain lists. I also had to change the polarity of the IK angles in some animals .. maybe because some legs work 'back to front' (look at the anatomy of e.g. a horse's rear leg).
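The per-animal setup is nothing fancy; something along these lines (the field names are illustrative, not my actual structures):

struct IKChain
{
    int footBoneID;       // bone at the end of the chain
    int numBonesInChain;  // e.g. 2 for a simple upper / lower leg
    float polarity;       // +1 or -1, for legs that bend 'back to front'
};

struct AnimalIKConfig
{
    int numLegs;
    IKChain legs[8]; // enough for the biggest creature so far
};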

The IK now appears to be working on most of the animals I have tested. This basic solution simply bends the knees when the ground level is higher than the foot placed by the animation. This works passably with 2 legged creatures, but it is clear that with 4 legged creatures such as the elephant I will also have to rotate the back / pelvis to match the terrain gradient, and perhaps adjust the leg angles correspondingly to line up with gravity.

At the moment the elephant looks like it is sliding in snow down hills. :)

[Image: elephant.jpg]

Blending

Blending the IK solution with the animation is kind of tricky to get looking perfect. Clearly, when the foot from the animation is at ground level or below, the IK solution should be blended in fully. At a small height above the ground I gradually blend back from the IK into the animation. This 'kind of' works, but doesn't look as good as the original animation; I'm sure I will tweak it.
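The blend weight itself is simple enough; conceptually something like this (the blend zone height is a tweakable constant I made up for illustration):

// returns 1.0 for full IK, 0.0 for pure animation
float CalcIKBlendWeight(float footHeightAboveGround)
{
    const float blendZone = 0.25f; // height over which to fade out the IK

    if (footHeightAboveGround <= 0.0f)
        return 1.0f; // foot at or below ground .. fully IK
    if (footHeightAboveGround >= blendZone)
        return 0.0f; // well clear of the ground .. pure animation

    return 1.0f - (footHeightAboveGround / blendZone); // linear falloff
}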

Another issue is that when one leg is on an 'overhang', you can end up with a situation where the fully outstretched leg cannot reach the ground. I have seen that others offset the skeleton downwards in these cases, which I will experiment with. Of course this means that the other leg may have a knee bent further than physically possible. So there are limits to what can be achieved without rotating the animal's pelvis / back.
 
Anyway, this is just a description of the trials I had, hopefully helpful to those who haven't done IK, and maybe it will generate some tips from those of you who have already solved these problems. :)

This will be a short technical one for anyone else facing the same problem. I can't pretend to have a clue what I was doing here, only the procedure I followed in the hope it will help others, I found little information online on this subject.

I am writing an Android game and want to put in gamepad support for analogue controllers. This has proved incredibly difficult, because the Android Studio emulator has no built in support for trying out gamepad functionality. So I bought a Tronsmart Mars G02 wireless gamepad (it comes with a USB wireless dongle). It also supports Bluetooth.

The problem I faced was that the gamepad worked fine on my Android TV box, but wasn't working under Linux Mint, let alone in the emulator, and wasn't working via Bluetooth on my tablet or phone. Ideally I needed it working in the emulator to be able to debug (as the Android TV box was too far away).

Here is how I solved it: firstly the problem of getting the gamepad working and seen under Linux, and then the separate problem of getting it seen under the Android emulator (this may work under Windows too).

Under Linux

Unfortunately I couldn't get the Bluetooth working, as I didn't have up to date Bluetooth, and none of my devices were seeing the gamepad. I plugged in the USB wireless dongle, but no joy.

It turns out the way to find out what is going on with USB devices is to use the command:

lsusb

This gives a list of attached devices, along with a vendor ID and device ID (in the form 20bc:5500).

It was identifying my dongle as an Xbox 360 controller. Yay! That was something at least, so I installed an Xbox 360 gamepad driver, following this guide:

https://unixblogger.com/2016/05/31/how-to-get-your-xbox-360-wireless-controller-working-under-your-linux-box/

sudo apt-get install xboxdrv

sudo xboxdrv --detach-kernel-driver

It still didn't seem to do anything, but I needed to test whether it worked, so I installed a joystick test app, 'jstest-gtk', using apt-get.

The xbox gamepad showed up but didn't respond.

Then I realised I had read in the gamepad manual that I might have to switch the controller mode for PC from D-input to X-input. I did this and it appeared as a PS3 controller (with a different USB ID), and it was working in the jstest app!! :)

Under Android Emulator

The next stage was to get it working in the emulator. I gather the emulator used with Android Studio is QEMU, and I found this article:

https://stackoverflow.com/questions/7875061/connect-usb-device-to-android-emulator

I followed the instructions here, basically:

Navigate to the emulator directory in the Android SDK.

Then to run it from command line:

./emulator -avd YOUR_VM -qemu -usb -usbdevice host:1234:abcd

where 1234:abcd is your USB vendor and product ID from the lsusb command.

This doesn't work straight off; you need to give it a udev rule to be able to talk to the USB port. I think this gives it permission, but I'm not sure.

http://reactivated.net/writing_udev_rules.html

Navigate to the /etc/udev/rules.d folder.

You will need to create a file in there with your rules, and you will need root privileges to do it (choose to open the folder as root in Nemo, or use the appropriate method for your OS).

I created a file called '10-local.rules' following the article.

In this I inserted the udev rule suggested in the stackoverflow article:

SUBSYSTEM!="usb", GOTO="end_skip_usb"
ATTRS{idVendor}=="2563", ATTRS{idProduct}=="0575", TAG+="uaccess"
LABEL="end_skip_usb"
SUBSYSTEM!="usb", GOTO="end_skip_usb"
ATTRS{idVendor}=="20bc", ATTRS{idProduct}=="5500", TAG+="uaccess"
LABEL="end_skip_usb"

Note that I actually put in two sets of rules, because the USB vendor ID seemed to change once I had the emulator running; it originally gave me an UNKNOWN USB DEVICE error or some such in the emulator, so watch out for the USB ID changing. I suspect only the latter rule was needed in the end.

To get the udev rules 'refreshed', I unplugged and replugged the USB dongle. This may be necessary.

Once all this was done, and the emulator was 'cold booted' (you may need to wipe the data first for it to work), the emulator started, connected to the USB gamepad, and it worked! :)

This whole procedure was a bit daunting for me as a Linux newbie, but if at first you don't succeed, keep trying and googling. Because the USB device is simply passed through to the emulator, the first step of getting it recognised by Linux itself may not be necessary, I'm not sure. And a modified version of the technique may work for getting a gamepad working under Windows.

[Image: stats.png]

In the last few weeks I've been focusing on getting the Android build of my jungle game working and tested. Last time I did this I was working from Windows, but now that I've totally migrated to Linux I wasn't sure how easily everything would go. In the end, it turns out that support for Linux is great; in fact it was easier than getting things up and running on Windows, with no special drivers needed.

Android Studio, and particularly the emulators, definitely seem better than last time, with x86 emulators running at near native speed, and much quicker APK uploads to the emulators (although uploads to real devices are still slow; I gather I can improve this by updating them to a higher Android version, but then they are less good for testing).

The devices I have at home are an old Cat B15 phone (800x480, with a GPU that seems to date from 2006(!)), a Nexus 7 2012 tablet, and finally an Amlogic S905X TV media player (2017). Funnily enough the TV box has been the most involved to get working.

CPU issues

My first issue to contend with was a 'SIGBUS illegal alignment' error when running on the phone. After tracking it down, it turns out this particular ARM CPU is very picky about the alignment of data. It is usually good practice to keep structures well aligned, but x86 is very forgiving, and I use quite a few structs #pragma packed to 1 byte, particularly in serialization. Some padding in the structures sorted this.
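For illustration, here is the kind of thing that bites you (a made up struct, not from my code). With 1 byte packing, a 32 bit member can land on an odd address, and some ARM CPUs raise SIGBUS on the misaligned load where x86 silently tolerates it:

#include <cstdint>

#pragma pack(push, 1)
struct PackedRecord
{
    uint8_t  type;
    uint32_t value;  // offset 1 .. misaligned, SIGBUS on picky ARM CPUs
};

struct PaddedRecord
{
    uint8_t  type;
    uint8_t  pad[3]; // explicit padding keeps 'value' 4 byte aligned
    uint32_t value;  // offset 4 .. safe
};
#pragma pack(pop)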

Next I spent many hours trying to figure out a strange bug whereby the object lighting worked fine on the emulators, but looked wrong on the device. I had a suspicion it was a signed / unsigned issue in the values for diffuse light in a shader input, but I couldn't see anything wrong with the code. Almost unbelievably, when I tracked it down, it turned out there wasn't anything wrong with the code. The problem was that on the x86 compiler, a 'char' defaults to signed, but on the ARM compiler, 'char' defaults to unsigned!!

This is an interesting choice (apparently on ARM chips unsigned may be faster), but it goes against the usual convention for short, int etc. It was easy enough to fix by flipping a compiler switch. I guess I should really be using explicit signed / unsigned types. It has always struck me as somewhat weird that C is so vague with the built-in types, in number of bits and sign, given that changing these usually gives bugs.
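The gotcha in miniature (with GCC / Clang, the -fsigned-char switch forces the x86 behaviour; explicit int8_t / uint8_t types avoid the issue entirely):

#include <cstdio>

int main()
{
    char c = 0xFF;          // -1 as signed char, 255 as unsigned char
    printf("%d\n", (int)c); // -1 with x86 defaults, 255 with ARM defaults
    return 0;
}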

GPU issues

The biggest problem I tend to have with OpenGL ES devices is the 'precision' specifiers in shaders. You can fill them in however you want on the desktop, but desktop GL just ignores them and uses high precision. However, different devices have different capabilities for lowp, mediump and highp, in both vertex and fragment shaders.

What would be really helpful is if the guys making the emulators / OpenGL ES on the desktop could allow it to emulate the lower precisions, letting us debug precision on the desktop. Alas no, I couldn't figure out a way to get this to work. It may be impossible using hardware OpenGL ES, but the emulator can also use SwiftShader, so maybe they could implement this?

My biggest problem was that my worst performing device for precision was actually my newest, the TV box. It is built for super fast decoding of video at high resolution, but the fragment shaders are a minimal 10 bit precision affair, and the fill rate is poor for a 1080P device. This was coupled with the problem that I couldn't connect it to the desktop over USB for debugging; I was literally compiling an APK, putting it on a USB stick (or Dropbox), taking it to the bedroom, installing, running. This is not ideal, and I will look into either seeing if ADB will run over my LAN, or getting another low precision device for testing.

I won't go into detail on the precision issues, I wrote more on this on a post here:
https://www.gamedev.net/forums/topic/694188-debugging-precision-issues-in-opengl-es-2

As a quick summary, 10 bits of precision in the fragment shader can lead to sampling error in any maths done there, especially in texture coordinate maths. I was able to fix some of my problems by moving the texture coordinate calculations to the vertex shader, which has more precision. Then, it turns out that my TV box (and presumably many such chipsets) supports an extra high precision path in the fragment shader, *as long as you don't touch the input data*. This allows them to do accurate UV coords on large texture maps, because they don't go through the 10 bit precision path.

Menus

[Image: menus_small.png]

I've written a rudimentary menu system for the game, with tickboxes, sliders and listboxes. This has enabled me to put in a bunch of debugging features I can turn on and off on devices, to try and find out what affects performance, without recompiling. Another trick from my console days is that I have put in some simple graphical performance bars. I record the last 60 frames into a circular buffer and store things like the frame duration, and when certain game tasks took place. In my case the big issue is when a 'scroll' event takes place, as I render horizontal and vertical tiles of the landscape as you move about it.
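The recording side is just a circular buffer; a sketch of the idea (names are illustrative, not my actual code):

struct FrameRecord
{
    float durationMs;
    bool  scrolled;       // a scroll event happened this frame
    bool  groundScrolled; // a ground scroll happened this frame
};

class PerfBars
{
public:
    void RecordFrame(float durationMs, bool scrolled, bool groundScrolled)
    {
        m_Frames[m_Head] = { durationMs, scrolled, groundScrolled };
        m_Head = (m_Head + 1) % NUM_FRAMES; // wrap, overwriting the oldest
    }
    // drawing walks the buffer, scaling a red bar by durationMs and adding
    // blue / green markers for the scroll events
private:
    static const int NUM_FRAMES = 60;
    FrameRecord m_Frames[NUM_FRAMES] = {};
    int m_Head = 0;
};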

In the diagram the blue bar is where a scroll happens, a green bar is where the ground scroll happens, and the red is the frame duration. It doesn't show much on the desktop as the GPU is fast, but on the slow devices I often get a dropped frame on the scrolls, so I am trying to reduce this.

[Image: bars.png]

I can turn on and off various aspects of the scrolling / rendering to track down what causes performance issues. Certainly PCF shadows are a big ask on mobiles, as is the ground (terrain) shader.

On my first incarnation of the game I pre-rendered everything (graphics + shadows) out to a massive texture at loadup and just scrolled through it as you moved. This is great for performance, but unfortunately uses a shedload of memory if you want big maps. And phones don't have lots of memory.

So a lot of technical effort has gone into writing the scrolling system, which redraws the background in horizontal and vertical tiles as you move about. This is much more tricky with an angled landscape than with a top-down 90 degree view, and even more tricky when you have to render shadow maps as you move.

Having identified the shadow map pass as a bottleneck, I did some quick calculations for my max map size (approx 16384x16384) and decided that I could probably get away with pre-rendering the shadow map to a 2048x2048 texture. Alright, it isn't very high resolution, but it beats turning shadows off completely.

This is working fine, and avoids a lot of ugly issues from scrolling the shadow map. To render out the shadow map, I render a bunch of 256x256 tiles and copy them into the final shadowmap.

[Image: shadows.png]

This fixed some of the slowness, then I realised I could go a step further. Much of the PCF shadow slowdown was from rendering the landscape shadows. The buildings and objects are much rarer, so I figured I could pre-render a low-res landscape shadow texture and use this when scrolling, then only need to do expensive PCF / simple shadows on the static objects and dynamic objects.

This worked a treat, and incidentally solved at a stroke the precision issues I was having with the shadow shader on the 10 bit hardware.

Joysticks

As well as supporting touchscreens and keyboards, I want to support gamepads, so I bought a Bluetooth / wireless gamepad for Xmas. It works great with the TV box via the wireless dongle; unfortunately, the Bluetooth doesn't seem to work with my old phone and tablet, or my desktop. So it has been very difficult / impossible to debug getting the analog joystick working.

And, in an oversight(?) for the emulator, there doesn't seem to be an option for emulating a gamepad. I can get a D-pad, but I don't think it is analog. So after some stabs in the dark with the docs, I am still facing gamepad focus issues, and will have to wait until I have a suitable device to debug this.

That's all for now folks! :)

Just a little progress video. As well as getting the scripting working a bit more, I've been placing villages and naming map areas. The area names are generated by stringing together randomly chosen syllables.
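The name generation really is as simple as it sounds; something along these lines (the syllable list here is made up for illustration):

#include <cctype>
#include <cstdlib>
#include <string>

std::string GenerateAreaName()
{
    static const char *syllables[] =
        { "ka", "ru", "mbo", "ti", "wa", "ngu", "la", "zi" };
    const int numSyllables = sizeof(syllables) / sizeof(syllables[0]);

    int count = 2 + rand() % 3; // 2 to 4 syllables
    std::string name;
    for (int n = 0; n < count; n++)
        name += syllables[rand() % numSyllables];

    name[0] = (char)toupper(name[0]);
    return name;
}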

Morphing

For variation, the natives now use realtime bones like the player, and there is a morphing system so you can have fat / thin / muscular natives etc. (I have only created fat and thin so far to test, but it works fine).
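In essence the morphing is just blending per-vertex deltas on top of the base mesh before skinning; a simplified sketch of the idea (my actual data layout differs):

struct MorphVertex { float x, y, z; };

// targetDelta stores each vertex's offset from the base mesh
void ApplyMorph(const MorphVertex *base, const MorphVertex *targetDelta,
                float weight, MorphVertex *out, int numVerts)
{
    for (int n = 0; n < numVerts; n++)
    {
        out[n].x = base[n].x + targetDelta[n].x * weight;
        out[n].y = base[n].y + targetDelta[n].y * weight;
        out[n].z = base[n].z + targetDelta[n].z * weight;
    }
}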

[Image: morphs.jpg]

UV Maps

As well as the morphing variation, each native has a UV map, so it can use different textures for different UV islands (parts of the body). This will allow e.g. wearing different clothing, different faces, jewellery etc. At the moment I've just put some red, green, blue and white over the different areas as a placeholder until I create more textures.

[Image: uvmaps.jpg]

The conversations are all random from script, just as a test .. choosing random animals and people to make random snippets of talk. I will probably make this more purposeful, giving each villager a name and relations so they can further the plot.

Next

Next up I am putting in attachments so they can carry spears etc., and the player can carry a sword. I may also be able to have a canoe as an attachment off the root node, so they can canoe on the lakes. I will also add female natives. I could do them as morphs, but I think it will be easier for texturing etc. to have a separate female model.

Sept 2017 Update

[Image: screeny.png]

Lots of improvements since moving my development to Linux. All the software has been working great; just as a recap, I am mainly using Qt Creator, Blender, Geany, Gimp, and Audacity.

Shared Textures

One of the big changes I made was to the export toolchain for static 3d models - buildings etc. Originally each model had its own texture (which got put in a texture atlas). This allows things like baked ambient occlusion, but was not very efficient with texture memory once you had lots of objects, and made it more difficult to get a uniform look or easily change the texture 'palette'.

[Image: buildings.png]

So I improved my Blender python export script to handle multiple models in the same scene, and to detect shared textures. A second converter reads in the txt export and builds the binary model file to be used in game - it builds texture atlases, modifies texture coords and packs the data into a game friendly format.

[Image: exporter.png]

This has worked very well and I've modelled a number of new buildings, boxes, barrels etc.

Map Generation

I've finally improved the map generator. Initially I had been using just simple random positions for map objects, which usually resulted in a mess, with buildings on top of each other etc. I have replaced this with a physics-based spacing system similar to the one I used in the old version of the game.

Another snag was that buildings would often get placed on sloping ground. This looks bad in game, because you get half the building underground and half suspended in thin air. To improve this I added a flattening stage to the terrain, where the area around objects in a 'flattening group' gets flattened. This leads to problems of its own, like terrain 'cliffs', which are smoothed afterwards. This is still being tweaked.

Actor Movement

I've made some improvements to the physics - the movement speed takes account of the gradient of the land now so you no longer get fast moves up and down cliffs. The handling of elevation and altitude is also now more sensible, making the jumping and flying better.

The physics is all currently cylinder based, and you can jump (and sit) on top of other objects and actors, and pass below them. It is still being tweaked .. and I need to come up with a solution for non-cylindrical buildings - perhaps some solution made with 1 or 2 orientated bounding boxes.

The movement controllers for yaw, pitch and roll are also improved, now storing a velocity for each, which helps prevent yo-yoing artefacts. The bats look much better with some roll, and they also pitch in comical ways; I might have to turn that down lol.

Animation Notes

The animation toolchain now supports notes in the animation, to mark the frames where footsteps, attack noises etc should be.

Here is a video of everything working so far:

What is next?

Next on my todo list is modelling more animals, and adding realtime bones animation for the main player. I figured bones animation was probably too expensive on mobiles for the animals, so I pre-render their bones animation out to vertex tweening animation. The same is currently true of the player, but if possible I want to use bones in realtime, so I can get responsive animation blending and more flexibility. The downside is I will probably have to write 3 codepaths: software skinning, and 2 versions of the skinning shader, as I have read articles suggesting that not all OpenGL ES 2.0 hardware supports a standard way of skinning.

 

For an introduction to my reasons for migrating from Windows to Linux, see my previous blog post. Here I will try to stick to my experience as a Linux beginner, and hopefully inspire other developers to try it out.

Installing Linux

The first stage, of course, in migrating to Linux is either to install it on your PC, or try a 'live' version off e.g. a USB stick (or even try it in a virtual machine?). I can't say too much here, because I got my new PC with Linux Mint pre-installed, and there should be plenty of guides on Google. I went for Mint because I had briefly tried Ubuntu a few years ago, and I liked the look of Mint and fancied a change. I knew it was based on Debian like Ubuntu, so there should be lots of software.

[Image: Screenshot from 2017-07-24 19-30-11]

My first stage after unplugging my Windows machine was just to take baby steps to familiarize myself with it, without running away in fright. After plugging in my network cable, I was away with the Firefox browser. But after a few minutes I decided to install Chrome, as I am a fan and used it on Windows (going with familiar!! safe space!!). This entailed installing software.

Installing Software

On Windows, the process of installing software usually involves downloading an installer package from the internet and running it, and hoping it likes your Windows version / hardware / dependencies. You can do this on Linux too (particularly for cutting edge versions of software), but there is also a far easier way: a 'package manager'. The package manager is actually the precursor to all the various 'app stores' that have become popular on Android and iOS, but the idea is simple: you have a searchable database of lots of software you can install, usually simply with a click. It also has the magic advantage of a very good system for automatically working out the dependencies required by any software, and installing those for you too in the background, or for finding conflicts (on the rare occasions I have had conflicts, it has been because I was trying to do something nonsensical!).

[Image: Screenshot from 2017-07-24 19-31-30]

I don't know whether it is my new machine or Linux, but the process of installation (and removal) is orders of magnitude faster than Windows. It honestly only takes a couple of seconds for most of these installations. Anyway, suffice to say I was very quickly running Chrome, installing my favourite plugins, and visiting my favourite websites.

Accessing Windows Hard Disks

The next stage was to get some of my data across from my old Windows PC. This is where things get slightly interesting. Predictably enough, Linux uses a different filesystem to Windows ('ext4' on my machine), whereas my Windows external hard disk was formatted as NTFS. As is Microsoft's way (to discourage competitors, no doubt), NTFS is not public domain. The clever Linux devs have presumably reverse engineered much of NTFS, because you can mount and read from an NTFS disk. However, I am erring on the side of caution and not writing to NTFS for now, because from previous experience of exFAT on Android, it is possible that an incorrect write can bork the filesystem, and hence lose a LOT of work. My solution for now was to copy my working source code etc. from the NTFS hard disk to my ext4 Linux SSD. Long term, I intend to convert all my NTFS external hard drives to ext4. It would also presumably be useful if Windows could read from ext4 drives, but I don't know how easy that is as yet.

Great! I had some data on my new machine. I tried some movies and they worked great in the built-in player, and in VLC (which I installed). Image files loaded fine in the built-in viewer and in GIMP, which is sort of like the Linux Photoshop. I've used GIMP a little on Windows, and am hoping it can take over a lot of the Photoshop duties.

Blender

For 3d models I've been using Blender on Windows, and as luck would have it, this open source software is available and runs very nicely on Linux. It was installed and loading my game models in no time.

[Image: Screenshot from 2017-07-24 16-16-52]

For development, this just left an IDE and a compiler for C++ (my language of choice). Linux has a very handy standard compiler which is easy to install (g++ / gcc). This is where I might mention 'the terminal'.

The Terminal

Although the name Windows has become synonymous with the Windows GUI, it is important to realise that an operating system doesn't have to be irrevocably intertwined with a GUI. In Linux, the operating system can use several different GUIs, depending on which flavour you prefer - or none at all, if for example you are running a server. The way to talk to the operating system below the level of the GUI is a command line interface called 'the terminal'. There used to be one commonly used in Windows too, the DOS prompt, but it is rarely used now. In contrast, on Linux the terminal is still very useful for a number of operations. Unfortunately it can be a little scary for beginners, but that fear is largely unjustified.

To get the terminal up I just press Alt-T. You can list what is in your current directory by typing 'ls'. You can navigate up a directory with 'cd ..'. And you can navigate into a directory with 'cd MyFolder'. It will also auto-complete the folder / filename if you press tab.

[Image: Screenshot from 2017-07-24 19-38-43]

From the terminal you can do a lot of the stuff you would otherwise do from the graphical file manager (the excellent 'nemo' is built into Linux Mint), such as copying, deleting and moving files. You can also manually tell it to install packages, just as the package manager would, with the command 'apt-get'. To install software you need admin privileges (this is handy, as it prevents malware from doing anything naughty without you typing in the admin password). To get admin you type 'sudo' before the command:

sudo apt-get install build-essential

This tells it to run apt-get as admin (sudo), and install (or remove) the package called 'build-essential', which contains the compiler and other build tools.

IDE

Unless you fancy yourself as a hardcore compile-from-the-terminal-from-the-getgo type of guy, you will also probably want an IDE for development. As I use C++, there are several to choose from, such as Eclipse, Code::Blocks, KDevelop, CodeLite etc. I went for Qt Creator, as I have used it on Windows (again, familiarity!! baby steps!!).

Once Qt Creator was installed, it was fairly easy to tell it to create a hello world app and test it. It worked great! :)

This is where things got slightly more interesting. My current project is an Android game. I had been maintaining both a PC build on Windows and the Android build, with the platform specific stuff isolated into things like creating a window, setting up OpenGL, input, and low level sound.

[Image: Screenshot from 2017-07-24 19-40-59]

OpenGL ES

Where things got slightly confusing is that, because I am developing for Android, I am using OpenGL ES 2.0 rather than the desktop version of OpenGL. On Windows I had been using the ARM Mali OpenGL ES Emulator, which emulates OpenGL ES by outputting a bunch of normal OpenGL calls for each ES call. I was anticipating having to use something similar on Linux, so I attempted to install the Mali emulator there, but had little joy.

I was getting conflicts with the existing OpenGL libraries used by SDL (which I intended to use for the platform specific stuff). Finally, after investigation, I realised that my assumptions were wrong: Linux actually directly supports OpenGL ES AS WELL as desktop OpenGL, through the open source Mesa drivers. I eventually got a 'hello world' OpenGL ES program working, and was convinced I now had the necessary libraries to start work.

64 Bit Conversion

The next stumbling block was a biggie. For historical reasons, all my libraries and game code were 32 bit. I had been developing with the idea that a lot of Android devices were 32 bit, and I was hoping the 64 bit devices would run the 32 bit code (I hadn't really tested this out lol). So I had previously been compiling a 32 bit Windows version and a 32 bit Android version. And it soon became clear that my Linux setup was compiling to 64 bit by default.

No problem, I thought, I should be able to cross compile. With some quick research I managed to get 32 bit versions of the libraries; however, I had no joy with a 32 bit version of OpenGL. It refused to install, and being a Linux beginner I was stuck. I did a little research, but found no simple path, and realised that maybe it was time to convert my code to 64 bit. Or rather, to have my code run as both 32 bit and 64 bit.

I had been (rather unjustifiably) dreading this, as I have a lot of library code written over quite a few years. As it happened, aside from some changes to my template library, the biggest problem was in the use of 32 bit 'fixup' pointers in flat binary file formats. I have been using this technique for a long time now, as it greatly speeds up file loading and also helps prevent memory fragmentation.

Fixup Pointers

Essentially the idea with a 'fixup' pointer is you store into the file an 'offset' from a fixed point in the file to a resource, often the start, because there is no point in saving a real pointer to a file as it points to a (changeable) memory location. Then you can load the entire binary file as one big block, and on loading 'fixup' the offset pointer to a real pointer by adding e.g. the offset to the memory location of the start of the file in memory.

This works great when the offsets are 32 bit and your pointers are 32 bit. But when you move to 64 bit, your offsets are fine (as long as the file is smaller than 4gb), but there is not enough room to store a 64 bit pointer. So you have a choice: you can either do some pointer arithmetic on the fly, or change your file formats to use 64 bit offsets / pointers.

After a trial with the first method, I have eventually settled on going with 64 bit in the file, even if it uses a little more space. Of course the disadvantage is that it has meant I have needed to re-export all my assets. So at the same time as converting my libraries to 64 bit, the game code, I also needed to convert my exporters to 64 bit, and re-export all the assets (models, sprites, sound etc).
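For the curious, here is a sketch of roughly what a 64 bit friendly fixup looks like (illustrative, not my actual file format code). The union means the same 8 bytes hold the offset on disk and a real pointer once loaded, on both 32 and 64 bit builds:

#include <cstdint>

template <typename T> union FixupPtr
{
    uint64_t offset; // as stored in the file: offset from the file start
    T       *ptr;    // after fixup: a usable pointer

    void Fixup(void *fileStart)
    {
        ptr = (T *)((uint8_t *)fileStart + offset);
    }
};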

This has been a big, frustrating job, particularly because you are coding 'blind'. Normally when you program, you change a little bit, recompile, run and test. But with a conversion like this, I had to convert *everything* before I could test any of it.

Success!

Doing the conversion has been demoralizing, I won't lie. But I have been so impressed with the operating system that I was determined to make it work. And finally, bit by bit, I got the exporters working, re-exported everything, then debugged the game. I got some crazy graphical errors - errors in the shaders that the OpenGL ES implementation didn't like (that's a whole other story!) - but finally got it displaying the graphics, then did an SDL version of the sound this afternoon, which is working great.

[Image: Screenshot from 2017-07-24 16-10-27]

One thing I will say is that I should have been using SDL before; it is really simple, and takes out the eccentricities of the setup code on different platforms (Windows in particular is very messy).

So to summarize, I now have (nearly) everything working, compiling and running on Linux. I still have to install Android Studio and try debugging an Android hardware device over USB, but I'm very hopeful that will work. Even if it doesn't, it's not a show stopper, as I can always use a second PC. I am gradually becoming more familiar with Linux every day, and am even feeling I might get tempted to learn Qt so I can do some nice 'native' looking apps.

[Image: 'just been dumped' logo]

I've been developing on Microsoft Windows for a long time, since around 1992/93, when I got my first PC. I used various other platforms before that, but I've pretty much stayed with Windows, not because it is a technical marvel (it's not), but based on the idea that, as the most popular OS, it should be easy to get programs running on other people's machines. Coupled with this (and no doubt because of this) there are also loads of good development tools, which had made it the 'default' choice for me.

[Image: Windows 3.11 workspace]

Don't get me wrong, I have certainly admired certain aspects of the various Apple OSes over the years (especially when they embraced BSD), but been put off by having to relearn the 'backwards' way of doing everything, and rightly or wrongly the suspicion of a 'control freak' walled garden approach, where you are not in control of the computer, Apple are. And don't get me started on my experiences of having to use iTunes to do something as simple as transfer a file over usb from a Mac to an i-something. And the obvious bias towards monetizing every aspect of the experience.

In contrast I sometimes feel that Windows is *overly* open, exposing too much to developers, allowing them to too easily 'hijack' your PC and take over its resources for their own purposes at startup, as well as a series of insecure 'technologies' that seem more appropriate for malware authors than legit developers. It seems to be designed so that the OS will run slower and slower the more apps you install, until you give up and re-install windows.

[Image: 2012-04-30_132903.png]

Along this line comes the other unpleasant thing I found with Windows: a lot of the software would rely on some other flavour of the month technology being installed as a dependency. Want to use a text editor? No, first you need to spend half a day installing the latest huge bloated .NET runtime, only to find it probably breaks some other app. And for something that is meant to be backward compatible, certain software companies (particularly Microsoft themselves) seem to go above and beyond the call of duty in making their software incompatible with anything but the latest builds of the OS.

And so we come to my personal last straw .. I spent some time last year evaluating different IDEs, preparing projects, converting code etc., until I finally settled on using Visual Studio 2017, which was in the final release candidate stages at the time. The first version worked great until it expired. Then I tried the updater, which failed miserably at installing the next version, so I had to manually tweak things until it installed. Finally, I came back from holiday 3 weeks ago to find that the 'final final' RC had expired, and I was required to install the release version. Unfortunately, the installer refused to work on my system. Somewhere between the RC and the release, they had managed to screw up the installer (of all things??). So I was left unable to do any work until I had it resolved.

I spent several days backing up my PC and trying to update it, but even with the Windows updates, no joy with the installer. I resigned myself to a choice: either buy a new hard disk and install Windows 10, or buy a new PC. Given that I didn't want to risk losing my old work, I went for a new PC, even though my old one was perfectly adequate.

[Image: C3LH-H110T rear view]

£650 or so later, I had ordered a fanless Kaby Lake system. During the order I had a choice of OS to put on it. I had originally planned to put Windows on it, but thought what the hell, I should have another play with Linux. One of the options was Linux Mint, and I could be sure the hardware would all work, so it should be easy.

While I waited a few days for the build, I did some research into Windows 10. Unfortunately, I became more and more disillusioned the more I read. While I'm sure the OS has technically got better over the years, I've heard only disturbing things (from The Register etc.) about the roadmap Microsoft is taking with Windows.

One of the things I hate about Windows is the need for updates, and the way you are left to pray during the process that they don't break some other bit of software. So usually I turn automatic updates off, and carefully select manually any that are really required. Not so with Windows 10! As (allegedly) the 'last' version of Windows, it will now automatically update itself, forever, whether you like it or not. It's nice to know that if you are a business, you have the very real possibility of waking up one morning to find Microsoft have borked your work, and there's absolutely nothing you can do about it. This is clearly a showstopper for many people: imagine having a meeting to show clients the next day and finding your PC has been remotely broken by some well meaning folks who I'm sure have your best interests at heart, and not theirs.

[Image: Windows-Update-4.png]

But it doesn't end there. No, now the operating system is designed to take your personal info, searches, work etc. and send it (without your permission) to the Microsoft central command mothership. Simple, you would think: turn it off. Except that, apparently, you can't turn it off. So you think you will block the MS servers in your firewall. No dice; the OS apparently ignores these rules, because slurping your private data is too important. And even if you think you've worked a way around this, you only have to leave the PC till the next morning for the next AUTOMATIC update to circumvent your attempt to circumvent the data slurping. Honestly, there must be laws against this kind of thing.

All this made me realise I had to seriously think about moving off Windows as a development platform in the long term, and that time may just be NOW!

Several of my old dev colleagues had by now moved to other platforms, notably Apple. I admit I have an irrational phobia of all Apple products, so the only choice for me was to investigate Linux. I only had a *very* basic grounding in Unix (having done some Pascal on Unix machines at uni), and had played with Linux on my Asus Eee netbook many moons ago. So my experiences, in the next blog post, should be useful for anyone who is an absolute beginner like me.

Suffice to say, it has been a very difficult slog learning the basics and converting my code, but I have *finally* got my libraries and game code working, and I am now a convert. The whole Linux experience seems light years ahead of windows. I may still end up having to install windows in a VirtualBox machine, but I haven't had a need as yet.

Next blog post will be my migration experience...

Time for an update to show how I'm getting on. A lot of what I've been doing is copy-pasting and reworking code from the old version of the game, so I'm progressing more rapidly than would otherwise be the case.


Some of the things I've added:


Hills

Now, instead of just random test heights, the landscape is made up of distinct hills, which raise the surface from their centre. I've had to compromise a bit with the heights available (bottom of the lakes to top of the hills), because it affects a lot of aspects of the scrolling renderer and I don't want to go above hardware limits for the render target size.


Water

Just using my old dodgy shader from the old version, I'm currently drawing a big quad at the water surface level. This may be changed to a rough polygonal shape around the lakes, to save on fillrate. It has to read the custom depth buffer, so it must be moderately expensive even when there is no drawing taking place.


Particles

I've added a very simple particle system, for things like fire, blood, splashes etc. You can place particle systems on the map, and it will intelligently turn them on / off as needed as you move around. It is currently using point sprites, so the particles flick out of view as the point centre moves off the screen. This may not happen on the OpenGL ES version, I haven't tried it yet, but if it is still a problem I'll either switch to quads or try a workaround (I did read a suggestion of changing the viewport and using glScissor). I'm also considering using something similar for things like butterflies.


Animation

A minor tweak: I use the distance travelled to determine how far to advance the animation, instead of just time, so the footsteps are a better match to the ground instead of sliding so much.
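In essence (with illustrative names), instead of advancing the walk cycle by delta time, you advance it by the fraction of a stride covered this frame:

// animTime is the cycle position in 0..1; strideLength is how far the
// character travels over one full walk cycle
float AdvanceWalkCycle(float animTime, float distanceMoved, float strideLength)
{
    animTime += distanceMoved / strideLength;
    while (animTime >= 1.0f)
        animTime -= 1.0f; // wrap back into 0..1
    return animTime;
}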


Jumping

Added support for altitude and gravity. This is mainly used for the player, but will also be used for flying creatures. The bats are not yet implemented, and are at a fixed height for now. However, it works with the collision detection, so e.g. bats can fly over animals and plants, and the player can jump over low obstacles. :D


Scripting

The very basics of the Lua scripting are working again. I need to do more work on attaching it to characters when I deal with level loading. You can use the scripting to drive subtitles for the game and speech bubbles on characters, play sounds and animations, move characters etc. :ph34r:


Sound

Finally, I've got the sound working again. This was initially mostly a copy-paste affair, but I added support for looping sounds, for ambience around the maps: insect noises, water rippling, fire etc. I also improved it to use basic positional audio, where each sound has a location, the listener has a location, and it smoothly interpolates the sounds in stereo as they move about relative to the listener. There is also reverb / echo and dynamic compression. I haven't tried the music tracker yet, but I intend to change that considerably.
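The stereo positioning is nothing sophisticated; conceptually something like this (a 2d sketch with made up falloff constants, no HRTF or anything fancy):

#include <cmath>

void CalcStereoGains(float soundX, float soundY,
                     float listenerX, float listenerY,
                     float &leftGain, float &rightGain)
{
    float dx = soundX - listenerX;
    float dy = soundY - listenerY;
    float dist = sqrtf(dx * dx + dy * dy);

    float atten = 1.0f / (1.0f + dist * 0.1f); // simple distance falloff
    float pan = dx / (fabsf(dx) + 10.0f);      // -1 (left) .. +1 (right)

    leftGain  = atten * (1.0f - pan) * 0.5f;
    rightGain = atten * (1.0f + pan) * 0.5f;
}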


As usual any suggestions / comments are most welcome. :)

I've been working hard the last week .. firstly, I put all the plants so far in the same Blender file, so I could render all the frames out in an automated fashion rather than manually. It was becoming ridiculous that every time I wanted to tweak the lighting / depth export I'd have to redo all the files manually. The new process is much easier: I have them all lined up on a stage and move it along as each plant is rendered. It takes about 45 mins, but I can go off and make a cuppa.

Here they are all lined up in Blender:

[Image: backgrounds.png]

And here's how I've been making the plants, using ngplant. I gather there is a sapling plugin for doing this in Blender, but it seems a bit more time consuming to use than ngplant. I just put in a few branching parameters, load in some leaf textures, export it as an .obj, then import into Blender and fix up the materials. It could be a little quicker with the materials, but it's not bad; I can do a new plant in circa 30 mins.

[Image: ngplant.png]

The depth buffer render was a bit tricky to set up in Blender; I used the composite node editor, then everything is run through an external tool to downsize the renders and batch them up into spritesheets.

Once the new plants were working, I finally had a go at getting shadows working again. A couple of weeks off from the renderer is really bad for remembering how it works, but I managed to get the depth write shaders writing a shadow map, and using that to do PCF shadows as each row / column comes into view in the scrolling system. It's a little complex, because the view angle for the shadow render is different to the main scrolling system, but it seems to work (just about!). I may end up having 2 long thin frame buffers for the shadow map, for horizontal and vertical columns and rows (to save on fill rate), but it's a trade off, as I'd have to submit all the geometry twice. For now I'm just using a roughly 360x360 shadow map and blurring it loads, as I don't really need sharp shadows.

And here it is all put together (just random map):

[Image: z_interact.png]

The big tech feature that I'm pleased to be getting working for this game is that the entire background jungle and terrain is pre-rendered, with a scrolling system, so it takes very little CPU / GPU / battery at runtime, and can thus support large numbers of background models and vegetation. As well as storing the colour channel for the background, it also has to store and process my own custom Z buffer for the background, so that models of characters and animals will correctly interact depth-wise with the background. :ph34r:

As in my previous version of the game, it is quite easy to get 3d models, and the terrain, storing their z into a custom z buffer. The billboards were also simply rendered to the z buffer, as rectangles standing up on the terrain surface. This works when a player is behind a tree, for example, or in front, but there is no fine grained interaction with the tree, because it is essentially a flat plane as far as the game is concerned. :(

So I have currently been experimenting with rendering the trees from Blender with a depth channel, loading this into the game, and using it to offset the z in the sprite shader. Firstly I completely rewrote my sprite rendering library and export tool, as it needed a refresh, and it needed to support depth (and possibly other channels, such as normals?).

I now have it beginning to work in the game. In the top pic you can see the player interacting with a 2d billboard with depth - the backpack is in front of the tree trunk, but the head is hidden by the leaves in the foreground. :D I still have some jiggling to do with the maths - the z buffer may not be linear, and I need to take account of the fact that the z render from Blender is at a 45 degree angle, whereas the billboard is currently drawn straight up relative to the ground, rather than the screen. So I may have to fudge the z render, or change the billboard up-orientation to the screen.

[Image: rain_tree1_D.png]

The other difficulty I've been having is exporting the z buffer from Blender. :unsure: As the trees themselves use billboards for the leaves, the z output comes out with lots of squares instead of leaf shapes. I'm also experimenting with exporting a mist layer to see if this solves it, but there are transparency issues. Really it would be nice if Blender would do alpha testing for the z render, but I'm not sure it supports it (any help on this issue would be welcome!)

Other Work

Aside from this, I have been doing a lot of behind the scenes work since my last journal entry. I have got keyboard input working on Android, and can even control the player with my TV remote (!) on my Android media player. I intend to get a Bluetooth gamepad and get it working with analog controllers too. I personally find it annoying that too many Android games seem to rely on you playing with a touchscreen, and have no support for other inputs like keyboards, gamepads and mice on other devices. It really isn't difficult to add keyboard input at a minimum. :rolleyes:

I also rejigged the OpenGL creation code to support restoring after lost contexts, in order to support resuming. This was less hassle than I was fearing, and seems to work fine. Unlike PCs, Android seems to have been built around the idea that you never 'exit' apps; you just switch them to the background, and the OS deletes them if it needs to free up space. I can see why in theory this is the optimum arrangement for a device, but it does seem to assume that developers will properly clean up after themselves when their app is in the background, and pause their processing. And there is a famous saying about assumption... :lol:

Android Build

Scrolling

The rewrite is going well .. it was quite tricky to get the scrolling landscape to work with terrain elevation. In the end I had to have not one but two scrolling systems working in parallel (which requires some cleverness to stop them stepping on each other's toes): one to do the terrain ground texture, and one for drawing the actual terrain (and other models and billboards). I did try drawing it all with the second system, but the texture filtering on the ground didn't look good.

[Image: jungle_pc.jpg]

There is still loads not working - no PCF shadows or proper lighting yet, and the terrain is just completely random for testing. But the scrolling landscape can be far bigger than before; I have tested it fine with around 16384x16384 pixel map sizes and lots of vegetation, much better than the 2048x2048 of the old version.


Android

I should say I had a reasonable idea of the main stuff I needed to get working for the Android build, as I had an Android build of the previous version of the game. For a start, I ran all the OpenGL stuff through the Mali OpenGL ES 2.0 emulator on the PC version, so it was 'roughly' right. Then, once the Android build was compiling and running on a test device, I could do the many tweaks to the shaders necessary to get it working.

There is also a small Java layer to call the native C++ code of the game; this just does stuff like set up the OpenGL window and pass in touchscreen presses and updates. I mostly just followed tutorials for this, as I'm not a Java guy.


Android Studio

I found Android Studio even slower and more bloated (and memory hungry - a gig or so) than last time I used it, but at least it was fairly painless to set up compared to the older versions. Adb found my devices okay, and the debugger actually seemed to report some info this time(!). :o

I can't believe anyone actually seriously develops on something like Android Studio though (far too slow); it makes far more sense to develop on PC and keep a build running for other platforms (which is what we used to do with consoles). :lol: Incidentally, I preferred to use a real device, because Android Studio plus the googley emulators just made my poor 4gb PC crawl / lock up.

[Image: jungle_tv.jpg]


Precision and Power of Two Textures


Anyway, there was much tweaking to be done. I found that the PC Mali OpenGL ES emulator ignored the precision specifiers in shaders, whereas on actual devices my medium and low precision variables were screwing up shaders, depending on the device. There was also the dreaded power of 2 requirement, which I wasn't abiding by for my scrolling textures and a few others. I got around this by setting the wrap mode to CLAMP and not using mipmaps, which seems to get them working (I'm not using a variable viewing distance for characters, so mipmaps aren't quite so necessary).

It all seemed to run very well in the end. I had no trouble with frame spikes, and it ran towards 1000fps on my low power PC at 800x480 (so much so that I had to fix my timing code for this lol). I now have a sleep in both the PC version and the Android version, and it runs happily at 60fps on Android without using much CPU or GPU.

I initially got it working on my Nexus 7 (2012) tablet, then my Cat B15 phone (pretty low end 800x480), then my Android media player (A95X).

[Image: jungle_phone.jpg]


Still to go

I still have to put in support for keyboards and gamepads, as things like TVs have no touchscreen!! :D I may have to buy a Bluetooth gamepad for this. And I'll probably do something hugely simplified for the physics, as I can't precalculate it with large maps; I'll probably just have bouncy spheres around the trees etc. and calculate it in realtime. I've increased the scale of the models, and will probably re-export the sprites bigger, as everything looks so god-d** small on these high res devices.

I still have to work out what an Android 'lifecycle' is, and what I need to do to support resuming - lost OpenGL contexts, maybe...

Oh, and I have to figure out how to maintain an assets file on the device, instead of packaging it in the APK every time I debug.

Back to the Jungle

I decided to get back to work on my (extremely long running, "Duke Nukem Forever" style :lol: ) jungle game, after a 7 month gap. As is always the nature of returning to something after a long gap, I found myself trying to remember how the thing worked (it normally takes a while to get back into a codebase). With fresh eyes I realised that the codebase had accumulated a lot of mess from the various major changes, and decided it was time for a rewrite, using the old code as a reference. I should be able to just copy-paste some bits in, like the sound, scripting and text etc.

Rewrite

I have mixed feelings about rewrites. While refactoring things is often a great idea, there are schools of thought that rewrites can be bad because you neglect all the testing and bug fixing that has gone into the original. But there are lots of benefits when you want to make major changes. :)

The main major change I wanted to make (aside from simplifying how the codebase worked) was to go back to a scrolling engine. Let me explain: when I first began, I used a totally 2d scrolling engine.

Original 2D version:

Then for various reasons I decided to go 3d, and it seemed easier to limit the map to a certain size (say 2048x2048) so everything could be pre-rendered on level load.

3D version:

This worked, but I became dissatisfied because the resolutions of mobile devices (I'm targeting Android, perhaps iOS) were higher than I had originally envisaged, and the maps weren't big enough for high res devices. Once I started raising the map sizes, the memory use became prohibitive for small phones (storing the pre-rendered colour, depth buffer, shadow buffer etc). :o

So I wanted to have a go at going back to a scrolling engine, but this time with the 3d 45 degree view I had moved to.

New Version

Anyway here is some indication of how far I've got. The main rewriting of the actual game code was fun, because it was nice and simple and I had a much clearer idea of what I wanted it to do than when I wrote the first versions. The scrolling graphics engine I knew would be the biggest hurdle.

With scrolling techniques, the idea is essentially to draw the borders of the map as you move onto them, rather than redrawing the whole screen window each frame. While modern PCs are pretty powerful, mobile phones don't always have heaps of horsepower, and I like to minimize CPU use where possible to save on battery. :D

My terrain is, as before, split into 128x128 tiles, at a 45 degree view angle (this view angle is fairly easy to change, but it seems a good standard). My first thought was to render the terrain directly into the scroll 'window' texture with texturing applied. However, this resulted in all kinds of aliasing nastiness, so it was clear that for best results I would need not 1 but 2 scrolling systems: 1 for the terrain texture, and 1 for the terrain + models.
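To give a flavour of the bookkeeping, here is a minimal sketch of a wrapped scroll window in one axis (names hypothetical, and it assumes you never scroll more than a whole window in one update). World tile x always lives at texture column (x mod window width), so the existing contents never need moving; only newly exposed tiles are redrawn:

struct ScrollWindow
{
    static const int WINDOW_TILES = 16;
    int m_iFirstTile; // leftmost world tile currently in the window

    void Scroll(int iNewFirstTile)
    {
        if (iNewFirstTile > m_iFirstTile)
        {
            // moving right : redraw tiles newly exposed on the right
            for (int t = m_iFirstTile + WINDOW_TILES; t < iNewFirstTile + WINDOW_TILES; t++)
                RedrawTile(t);
        }
        else
        {
            // moving left : redraw tiles newly exposed on the left
            for (int t = iNewFirstTile; t < m_iFirstTile; t++)
                RedrawTile(t);
        }
        m_iFirstTile = iNewFirstTile;
    }

    void RedrawTile(int iWorldTile)
    {
        // wrap the world tile into the scroll texture
        int iTexColumn = ((iWorldTile % WINDOW_TILES) + WINDOW_TILES) % WINDOW_TILES;
        // ... render world tile iWorldTile into texture column iTexColumn ...
    }
};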

I'd ideally like to get rid of the billboarded trees, but I haven't managed to get 3d tree models to look as good yet, so they may well return, perhaps with a depth (and even a normal map?) channel. Of course that will limit the viewing perspective, but so does using a scrolling engine. Although I may have a separate indoor perspective for the planned 'nude mud wrestling' scenes.

Progress so far

So far I've just been working on the terrain texture scrolling; the terrain itself (with no elevation as yet) is just drawn in full, and that will be the next stage.

Here you can see it in action. Aside from not having done the initial filling, it seems to be working bug free. It is a little bit fiddly to calculate the number of border tiles needed to avoid 'flipping' being visible on the game screen, but it seems to be working now.

Here I have reduced the scroll window size so you can see the flipping / redraw happening as you move through the map:

I might try to get it to redraw subsections of the tiles to avoid CPU spikes (a call to glViewport may be all that is required). Next I'll see if I can get the terrain itself rendering with a scrolling system. That is all so far! I'll try to do updates as I add back in the features. :)

Some of you may have been following the development of my little 3d texturing program, which I began in June last year and have been working on in my spare time. 3 months ago I had a diffuse painting only version, but decided to try adding support for PBR (physically based rendering) authoring with normal mapping, specular and metal channels. This would be quite a learning experience, having never worked with PBR!

Here is a video showing the new rendering:

You can download it here:
https://github.com/lawnjelly/3dpaint/

Conversion

It turned out to be quite a bit of work; I had to rip out the guts and completely change how the program worked. The old version used old school OpenGL 1.0 (runs anywhere TM), but it was clear I'd need to write shaders for PBR, so first I had to convert the setup code to use GLEW so I could get OpenGL 3.0 running.

Then I had to learn how PBR shaders are meant to work. I still have no idea. :lol: I posted a topic on the gamedev forum and got some input, looked through the Unreal shaders for inspiration, and finally came up with some kind of mish mash that looks 'alright for now' to my eye. Purists would no doubt tell me off, as mine is not so much 'physically based' as 'physically incorrect'. I should really call it a 'physically incorrect shading system' or somesuch.

Channel Authoring

Having never used any of the high end texturing programs like Mari and Substance Painter (I've only seen videos), I had to come up with some kind of sensible way of authoring the height (bump), specular and metal channels. Clearly it would be nice to be able to draw them, but also for speed to be able to derive them somehow from the colours of the brushes (because reference photos don't come with these extra channels).

My current compromise allows two ways of authoring the extra channels:


  1. Deriving them procedurally from the layer RGB (diffuse / albedo)
  2. As part of the procedural noise effects available to each layer

raptor1.jpg

The bump channel is summed into the final composite and converted to a normal map as you edit. In addition, it was clear that most people using normal maps would create their own through xnormal or similar, so I allow you to import a normal map, which is blended with the internally calculated heightfield.
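The height-to-normal conversion itself is standard central differences on the heightfield. A minimal sketch (in practice you would only reconvert the changed areas as you edit, but the maths is the same):

#include <cmath>

// Convert a heightfield to a packed RGB normal map using central
// differences. 'strength' scales the apparent bump depth.
void HeightToNormal(const float *pHeight, unsigned char *pNormalRGB,
                    int w, int h, float strength)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            // neighbour coords, clamped at the edges
            int x0 = (x > 0) ? x - 1 : x;
            int x1 = (x < w - 1) ? x + 1 : x;
            int y0 = (y > 0) ? y - 1 : y;
            int y1 = (y < h - 1) ? y + 1 : y;

            float dx = (pHeight[y * w + x1] - pHeight[y * w + x0]) * strength;
            float dy = (pHeight[y1 * w + x] - pHeight[y0 * w + x]) * strength;

            // surface normal is (-dx, -dy, 1), normalized
            float len = sqrtf(dx * dx + dy * dy + 1.0f);
            unsigned char *p = &pNormalRGB[(y * w + x) * 3];

            // pack from -1..1 into 0..255
            p[0] = (unsigned char)((-dx / len * 0.5f + 0.5f) * 255.0f);
            p[1] = (unsigned char)((-dy / len * 0.5f + 0.5f) * 255.0f);
            p[2] = (unsigned char)((1.0f / len * 0.5f + 0.5f) * 255.0f);
        }
}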

Memory Use

The next big change: with the extra channels, memory use was becoming a concern, particularly at higher texture resolutions. A 4096x4096 layer is 16 million pixels, each of which may have to store RGBA, 3 masks, bump, spec and metal. With a few layers, that's a lot of memory.

My solution was similar to the one I used when writing my photoshop-style app. Instead of storing every layer uncompressed, only the currently selected layer is stored uncompressed. All the other layers are stored in a compressed format, and are decompressed and compressed on the fly as you select layers. It also means the layers must render from the compressed format into the composite (final) image. With some cunning programming this is done fast, and redraw only occurs in areas that contain image data, which speeds things up overall. It is a win-win situation.
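The shape of the scheme is something like this (a minimal sketch using zlib for illustration; my actual compression format is custom, and also supports rendering straight from the compressed data):

#include <vector>
#include <zlib.h>

// Only the selected layer holds raw pixels; the rest live compressed.
class Layer
{
public:
    void Select() // decompress into m_Pixels for editing
    {
        m_Pixels.resize(m_rawSize);
        uLongf destLen = (uLongf)m_Pixels.size();
        uncompress(m_Pixels.data(), &destLen,
                   m_Compressed.data(), (uLong)m_Compressed.size());
    }

    void Deselect() // compress and throw away the raw copy
    {
        m_rawSize = m_Pixels.size();
        uLongf destLen = compressBound((uLong)m_rawSize);
        m_Compressed.resize(destLen);
        compress(m_Compressed.data(), &destLen,
                 m_Pixels.data(), (uLong)m_rawSize);
        m_Compressed.resize(destLen);
        std::vector<Bytef>().swap(m_Pixels); // free the raw memory
    }

private:
    std::vector<Bytef> m_Pixels;     // valid only while selected
    std::vector<Bytef> m_Compressed; // valid while deselected
    size_t m_rawSize = 0;
};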

Other

The zoning system has been improved to give smoother boundaries between zones, and there have been a vast number of improvements in many other areas.

Of course, as this is a first alpha release, I am expecting there to be problems running on some machines (particularly because of the change to OpenGL 3.0 and shaders, which can be finicky about compiling, despite being fine in the OpenGL reference compiler). I only have 1 other machine at home to test on, and I already found shaders that linked on my work machine but would not link on the test machine, and had to fix this. I anticipate there may be other problems to iron out; normally I do a flurry of changes to fix compatibility issues in the first couple of weeks.

But at this stage it is 'good enough', so I will release it. There are still obvious things to me which I need to improve - the bleedout techniques at the edge of UV islands are next. And I want to do some tutorial videos.

drum1.jpg

Normal Mapping

Just a quick one to show that I am working hard on a big update to 3D Paint. Although I myself currently only needed diffuse painting and rendering, it became clear it would be more useful to others if I could integrate some multi-channel painting. So since the last release I have switched to a later OpenGL version and reworked the main viewport to use shaders. I was very pleased to find I could still use the old school OpenGL code alongside (I'm sure they will phase this out, but for now it is more convenient). :D

Being a bit of a maths dunce, it took me a while to fix the bugs in my shaders and work out my tangents from my bi-tangents, but touch wood it now seems to be mostly working, and looks alright to my eye. :blink: At the moment I have a simple Blinn-Phong shader, and you can move the light around and change the diffuse / specular contributions.
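For reference, the maths boils down to something like this (a CPU-side sketch of standard Blinn-Phong, not my actual shader; the surface normal n is assumed normalized):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v)
{
    float l = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{ v.x / l, v.y / l, v.z / l };
}
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Blinn-Phong: diffuse = N.L, specular = (N.H)^shininess, where H is the
// half vector between the light and view directions.
float BlinnPhong(Vec3 n, Vec3 toLight, Vec3 toEye,
                 float kDiffuse, float kSpecular, float shininess)
{
    Vec3 l = Normalize(toLight);
    Vec3 v = Normalize(toEye);
    Vec3 h = Normalize(Vec3{ l.x + v.x, l.y + v.y, l.z + v.z });

    float diff = fmaxf(Dot(n, l), 0.0f);
    float spec = (diff > 0.0f) ? powf(fmaxf(Dot(n, h), 0.0f), shininess) : 0.0f;
    return kDiffuse * diff + kSpecular * spec;
}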

My first thought was to hijack the alpha mask for each layer, and at present you can use this to affect a bump/displacement map, which is converted to a normal map for rendering. The normal map is 16 bit to help prevent banding; I may convert the alpha masks to 16 bit also. You can of course also use the procedural masking effects to affect the normal map. This is all far quicker than building high poly geometry in a modeller for a lot of normal mapping effects, which is good, because I am lazy.

Of course I recognise that the process for most people will be to build a high poly model for normal mapping, then bake this to the low poly model in something like xnormal. I support this by allowing you to load an external normal map, which then can be combined as desired with the normal map created by 3D paint. :lol:

normalshark_zpsyt0ngxox.jpg

In this example you can see a simple procedural layer combining with a pre-baked normal map.

Next I will be adding support for the specular (gloss) map, and perhaps a roughness map and another channel such as mirroring.

Update


Being a nOOb to PBR etc, I'm now facing the issue that there is no standardized shader model in games / vfx. :blink: Unreal seems to have a metallic channel; other systems may use other channels in their shaders, with a different workflow.

I'm not sure of the best solution for this. Ultimately I might have to make the shader model selectable, with the choice changing which channels are available. But to start with I might just go for something similar-ish to Unreal. I have no idea whether my shader will output anything resembling what you would see in Unreal though... I might have to do some research and see whether there are any standard shaders I can use for a WYSIWYG appearance.

This past week or two I've been adding a kind of procedural texturing to my paint app. One thing I'd noticed is that just using the standard brushes gave a rather smooth, non-organic blend between layers, and I thought I could do better.

The Finished Result - Alpha and Colours
stipple2_zpsl0fhikgt.png

My plan was to use something akin to Perlin noise to modify the alpha, to get a blend where instead of simply fading out the alpha, there was a gradual decrease in 'full blend' noisy areas. So at joins I could have a decrease in density, rather than a decrease in intensity.

As I wanted the noise to look good in 3d, rather than use Perlin noise I used the equivalent OpenSimplex 3d noise. This wasn't super difficult to get working, and it made the blends look much better. For variation I also wanted another type of noise, so after doing some reading I wrote a quick 3d Worley noise implementation.
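Worley noise is delightfully simple: scatter one pseudorandom feature point per unit cube, and the noise value at a point is the distance to the nearest feature point. A quick sketch (not my exact implementation; the integer hash in particular is just for illustration):

#include <cmath>
#include <cstdint>

// pseudorandom 0..1 value for a cell, from a simple integer hash
static float CellRand(int32_t x, int32_t y, int32_t z, uint32_t seed)
{
    uint32_t h = (uint32_t)x * 73856093u ^ (uint32_t)y * 19349663u
               ^ (uint32_t)z * 83492791u ^ seed;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xFFFFFF) / (float)0xFFFFFF;
}

// F1 Worley noise : distance to the nearest feature point
float Worley3(float px, float py, float pz)
{
    int32_t cx = (int32_t)floorf(px);
    int32_t cy = (int32_t)floorf(py);
    int32_t cz = (int32_t)floorf(pz);

    float minDistSqr = 1e30f;

    // the nearest feature point must be in this cell or one of the 26 neighbours
    for (int32_t z = cz - 1; z <= cz + 1; z++)
        for (int32_t y = cy - 1; y <= cy + 1; y++)
            for (int32_t x = cx - 1; x <= cx + 1; x++)
            {
                float dx = (x + CellRand(x, y, z, 0)) - px;
                float dy = (y + CellRand(x, y, z, 1)) - py;
                float dz = (z + CellRand(x, y, z, 2)) - pz;
                float d = dx * dx + dy * dy + dz * dz;
                if (d < minDistSqr) minDistSqr = d;
            }

    return sqrtf(minDistSqr);
}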

Procedural Alpha Only
stipple_alpha_1_zpsft1yvlew.png

Next, as I'd already done 'the hard stuff', I wondered what the results would be like if I went the whole hog and used the noise functions to generate colour, instead of just alpha. My colour experiments met with mixed success... it worked, but the colours looked rather boring and lifeless, and it was clear it would take a lot of tweaking to get a good effect. So instead of fully creating the colours, I multiplied the noise colours by the original layer colour channels. This produced much nicer results.

Boring procedural colours (ok for some things like lichen maybe?)
stipple_simple_zps5ghpjkjb.png

Alongside this, I had to figure out a good way of building this system into the GUI, to make it tweakable, but not so much so as to frighten users. The most powerful approach would have been to implement a node based custom graph, similar to the Cycles renderer in blender, where you can feed the outputs of one node into the inputs of others. However, this seemed like overkill, so I went for a simpler linear stack of 4 noise layers.

I experimented with a few GUI layouts, but the current one seems user friendly enough. I have a little preview window, which is faster to recalculate as you change parameters, because regenerating the whole of a 4096x4096 texture for previews is a little too slow.

It has also taken a bit of playing with the blending modes and maths to get something that looks right. I will probably put in some options for users to change blend modes per noise layer.

Once you have a procedural stack that looks good for your layer, one of the best ways to use it is to blat a source texture all over the layer, then use a layer mask to determine how much shows through in each area of the model.

I will tweak this a bit more and hopefully have it soon in the latest release.

Addendum

I have just compiled the latest release, with the procedural stuff. Please try it! :)
https://github.com/lawnjelly/3dpaint

Spatial interpolation of scattered data is a vital technique in graphics programming, games, medical and scientific applications, mining, weather forecasting etc. Some of the most widely used techniques are:


  • Inverse distance weighting
  • Natural neighbour interpolation
  • Kriging

Here I will discuss a cunning optimization of discrete natural neighbour interpolation (the method itself was introduced by Sibson, 1981).

Natural neighbour interpolation produces a pleasing result, and by its nature deals with a few potential 'gotchas' in spatial interpolation. However, in reference form it is very slow, hence the interest in finding faster methods and variations.

tests_zpss1od5nct.png

Natural neighbour interpolation is a simple extension of the idea of Voronoi tessellation (aka Dirichlet tessellation). A Voronoi tessellation can be constructed by simply assigning each cell on a grid to the nearest data point. It can also be calculated geometrically.

As it is, Voronoi tessellation provides missing values between data points, but the boundaries between tiles are sharp. In order to blend between them, Sibson suggested inserting each point to be tested as a new data point, constructing the new Voronoi tessellation, and mapping out which areas are 'stolen' from the neighbours. The influence of each neighbour is then weighted by the size of the area stolen from it.

This can be done geometrically, or by discrete methods (i.e. using a grid). The discrete method becomes more efficient with large numbers of data points; however, it can be very slow with sparse data points. It should also be borne in mind that it is an approximation, with the accuracy depending on the size of the tessellated areas, hence on the density of the points and the size of the grid.

The obvious algorithm is something like the following:

1. Construct a Voronoi tessellation of the data points on a grid, marking which data point 'owns' each cell.
2. For each cell to be interpolated, flood outward and add the influence of each cell 'stolen' from a neighbouring data point.
3. Once all cells that are closer to the test cell are found, divide the total influence by the number of stolen cells to get the interpolated value.

This works, but is very slow.

Park et al (2006) introduced a new way of calculating the solution, using the brute force processing power of GPUs. Essentially they realised that instead of calculating the 'gather' for each cell, the problem could be turned on its head by calculating the 'scatter' from each cell, which will always be a circle (or sphere, as this works in 3d and higher dimensions) with a radius equal to the distance to the closest data point. This is best explained by reference to the Park et al paper rather than giving a full explanation here.

These simple circles of scatter are much easier to calculate than the irregular Voronoi tiles, and can be accelerated by rendering the scatter onto a grid using the GPU, accumulating the influence at each cell.
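On the CPU, the scatter approach boils down to something like this minimal 2d sketch (assuming the owner's value and the distance to the owner have already been precalculated per cell from the Voronoi tessellation):

#include <cmath>
#include <vector>

// Discrete Sibson 'scatter' (after Park et al, 2006). Each cell scatters
// its owner's value to every cell within its nearest-site distance; the
// interpolated value at a cell is then its accumulated sum / count.
void DiscreteSibsonScatter(const std::vector<float> &ownerValue, // per cell: value of owning data point
                           const std::vector<float> &ownerDist,  // per cell: distance to that data point
                           std::vector<float> &result, int w, int h)
{
    std::vector<float> sum(w * h, 0.0f);
    std::vector<int> count(w * h, 0);

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float r = ownerDist[y * w + x];
            float v = ownerValue[y * w + x];
            int ir = (int)r;

            for (int dy = -ir; dy <= ir; dy++)
            {
                int yy = y + dy;
                if (yy < 0 || yy >= h) continue;

                // half-width of the circle on this line
                int xo = (int)sqrtf(r * r - (float)(dy * dy));
                for (int dx = -xo; dx <= xo; dx++)
                {
                    int xx = x + dx;
                    if (xx < 0 || xx >= w) continue;
                    sum[yy * w + xx] += v;
                    count[yy * w + xx]++;
                }
            }
        }

    result.resize(w * h);
    for (int i = 0; i < w * h; i++)
        result[i] = count[i] ? sum[i] / count[i] : 0.0f;
}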

For my libraries I was interested in ways of speeding up the process without using the GPU.

Bulk Spheres

I tried several approaches (gather and scatter). In the end my favourite was a simple modification of Park et al's scatter technique. While the GPU works well at rendering circular areas, I wanted to minimize the monkey work the CPU was doing when following the same approach. I realised that when rendering the scatter 'spheres' from neighbouring cells, by far the majority of each sphere would already have been covered by the previous scatter. Was there some way of combining these 'renders' so I did more work at the same time?

As it turned out, there was a way I came up with. First I grouped cells into a coarser grid overlaying the first. The number of cells to consolidate into one new cell can be varied in the code; I would use say (in 2d) 9 cells, 16 cells, 25 cells etc.

Then, as a preprocess for each coarse grid cell, I precalculate the radius of a 'bulk sphere' (centred on the centre of the coarse grid cell) that lies within the area covered by each scatter sphere (from the cells making up the coarse cell). The bulk sphere is always smaller than the actual scatter spheres, but provided all the cells in the coarse cell have the same owner (data point), I can calculate all their effects in one pass, essentially doing 9 / 16 / 25 times the work in one go!
spheres1_zps9cjdcozz.png

However, there is one complication: we must now deal with the small areas of each scatter sphere that are not drawn by the bulk sphere. But this is possible. The process is thus:

1) Create a coarse grid from the main grid
2) Calculate bulk spheres from the scatter spheres
3) Apply the bulk spheres to the accumulating grid result
4) Apply the 'leftovers' for each of the main grid cells

How to calculate the leftovers becomes simpler when we look at a simple optimization for rendering each y line of the spheres.

Instead of naively calculating the distance to the sphere centre for each cell when rendering (to decide whether a cell is part of the sphere or not), we can calculate the starting and ending x coords for each y line, using some handy maths:

x_offset_sqr = radius_sqr - (y * y)

Then a whole line can be rendered in one go, without any testing per cell.
spheres2_zpsquaosefu.png
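In code, rendering a sphere line by line looks something like this (a minimal sketch; accumulation into a single float grid is assumed):

#include <cmath>

// Fill a circle into the grid one line at a time. The x extent of each
// line comes from x_offset = sqrt(radius^2 - y^2), so no per-cell
// distance test is needed. With a bulk sphere, only the two 'leftover'
// segments (xstart -> bulk xstart, and bulk xend -> xend) would be filled.
void RenderSphere(float *pGrid, int w, int h,
                  int cx, int cy, float radius, float value)
{
    float radius_sqr = radius * radius;
    int ir = (int)radius;

    for (int y = -ir; y <= ir; y++)
    {
        int row = cy + y;
        if (row < 0 || row >= h) continue;

        int x_offset = (int)sqrtf(radius_sqr - (float)(y * y));
        int xstart = cx - x_offset; if (xstart < 0) xstart = 0;
        int xend   = cx + x_offset; if (xend >= w) xend = w - 1;

        for (int x = xstart; x <= xend; x++)
            pGrid[row * w + x] += value;
    }
}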

This same technique can be used to find the xstart and xend of the bulk sphere as well! So instead of rendering the whole line, the process becomes:

• Render the line start segment (xstart -> bulk sphere x start)
• Render the line end segment (bulk sphere x end -> xend)

Using this technique the results are identical to the reference implementation, but can be significantly faster: 2 to several times faster, depending on the input data. This is potentially useful in many scenarios, as it is essentially a 'free' speedup. I have not investigated whether it could be used to speed up the GPU implementation.

Finally, I would like to introduce a modification to the natural neighbour scheme.

Voronoi tessellations and natural neighbourhoods are used in part because they are conceptually simple. They may also give a good approximation of natural processes where substances move at approximately fixed speed, such as diffusion during organism development. However, because of their apparent simplicity, there is a danger of being biased towards their use when other methods may be more appropriate. The main problem with natural neighbour interpolation is that it is slow to calculate. Rather than only finding faster ways to calculate it, we should also be putting effort into finding other sensible, more computationally efficient ways of interpolating data.

Here I would like to suggest a simple modification which produces approximately similar results, but runs orders of magnitude faster.

First, an observation. The speed of discrete natural neighbour interpolation is directly related to the density of the data points. Rendering either scatter or gather, the amount of time needed for each cell rises with the square of the distance between points (or the cube in 3d). This leads to pathologically long runtimes in sparse areas of the grid.

Incremental Natural Neighbour Interpolation

Instead, I suggest a better interpolator should be able to incrementally add calculated interpolated points TO THE INPUT DATA, and use those calculated points to help calculate the later points. If instead of calculating points in order across the grid, we first calculate points IN FREE SPACE, we significantly increase the overall density as we go, and thus the speed of calculation INCREASES as the grid is calculated.

When I tried this method I found two things. Firstly, it works. It runs like sh*t off a shovel. Secondly, it reveals a 'flaw' in natural neighbour interpolation: when you add 'in between' calculated points, the final result is NOT the same as the reference implementation. The best solution, to my mind, would be to come up with a measure that produces the same result when using 'inbetweeners' as a reference implementation does without them. I will leave this as an exercise for readers.

Having said that, it is possible to get reasonable results using this incremental method, certainly suitable for realtime applications and rapid visualization where absolute accuracy is not paramount.

In my first implementation of the incremental method, I calculated points at random across the grid. This works, but gives slightly differing results each time, depending on the random seed. Looking for a more consistent method, I next tried a regular grid pattern. This works too, but the end result has the 'look' of a grid.
voronoi_zpsaeow0ngr.png
My final implementation was something that to me makes 'sense' spatially: each cell to be interpolated in turn (and added to the input points) is the cell that is FURTHEST from all the data points. This is also (coincidentally) the point that will give the greatest speedup to later added points, so after the first few added points there is a massive speedup. The only downside is that some extra housekeeping data is needed to make sure I am choosing the furthest cell each time.

The results also look visually pleasing, although the best number of points to add this way may vary according to the grid size and data.

The diagram shows some of my results. The absolute times are not vitally important, but the relative differences between the methods are indicative. The incremental technique is easily the fastest, and has not been heavily optimized.

I would add that I am not a mathematician nor an expert in this field, but hopefully some of these ideas will be of interest to those working in it, and will spur further work.

References

1. Sibson, R. (1981). "A brief description of natural neighbor interpolation (Chapter 2)". In V. Barnett. Interpreting Multivariate Data. Chichester: John Wiley. pp. 21-36.
2. Park, S. W., Linsen, L., Kreylos, O., Owens, J. D., and Hamann, B. (2006). "Discrete Sibson interpolation". IEEE Transactions on Visualization and Computer Graphics 12, 2 (Mar./Apr.), 243-253.

Finally I got around to making a preview video of my little paint app. Screenshots are ok, but a video helps show how the app works a lot better. As with many projects, getting the main functionality working was quite quick, but the various redos (when I realise there is a better way of doing something) and the polishing take 90% of the time. I will do a first alpha release as soon as I can.



The classical 'app dilemma'

I've had to think carefully about where I want to go with the app, and I suspect many of us will have faced the same choice. The big danger I find with most apps is to get carried away without there being a market. One big difference between games and apps, is that with games (like movies), you can (almost) never have too many games. No matter who you are competing against, there will always be people interested in trying a new game. With apps, that isn't true. People invest time in learning an app, and if there is a better one for doing the same job, they will tend to go with the better one (if they can afford it).

I am realistic that there are several other great programs out there already for doing 3d texturing (Mari, Mudbox, 3d Coat to name a few), so in order to compete in that market I would have to dedicate a lot of time, and / or make a lot of effort to differentiate and make a niche. And even though I'm making a free app, effectively my 3d paint would be competing against pirate versions of the above.

So for now my attitude is to release early, move onto something else, and if there is interest I will spend more time on it.

I have achieved most of what I set out to do:

  1. Fast creation of diffuse texture maps (primarily for games) using a layering system similar to photoshop
  2. Easy projection of faces / body parts from photographic reference images

This was born out of frustration with the workflow of 3d texturing in blender, which didn't seem well designed for dealing with layers.

I can see many possible obvious things to add if I do continue with it:

  1. Support for multiple channels : Specular, Normal maps etc
  2. Move OpenGL support to more recent version for shaders
  3. More liquify options for matching reference images to geometry
  4. Layer groups, layer effects and adjustment layers
  5. More automation for common tasks (adding scratches etc)
  6. Support for bezier curves
  7. More brush options and brush tools

But the best thing for me may be to step back once it is released (aside from bug fixing), and assess whether there is really a need to develop it further, and if not move back to other projects.

Addendum
Finally I released a test version! :)
https://github.com/lawnjelly/3dpaint

C++ IDEs - a rant

As regularly happens, I found myself wrestling to shoehorn SSE code into a codebase compiled with Visual C++ 6 (from circa 1998), and realised I should be checking out the more modern IDEs.

I can already hear the *gasps* from the readers. Let me explain. I am one of the users of the earlier IDEs who has been most unimpressed by the later offerings of visual studio. I used VC5, and then when VC6 came out, it was fantastic, it offered code completion, woohoo! I used VC6 at home and at work, until at work we reluctantly decided to move over to 'Visual Studio .NET', sometime around 2003. I think it may have been because the xbox support stuff was moving to only support the newer visual studio, or perhaps a third party library, I can't remember.

Visual Studio 6

vc6_zpsmeh36pcu.png

I can't remember all the problems we faced, but the main annoyance for me (and I've tried the various 'attempts' at improving visual studio over the years) has been how slow and unresponsive the thing is. I've constantly been amazed at how inefficient it is; I don't think I could make it that unresponsive if I tried.

On the other hand, the compiler itself has been vastly improved over the years, so I've been left with a kind of 'damned if you do, damned if you don't' choice over whether to 'upgrade', because the horrible IDE is forced upon you. Although it may be possible to shoehorn a later compiler into the VC6 IDE, unfortunately you lose the debugger, because it does not seem to be compatible with the later output files.

What do I need?

And so, while I've reluctantly used the later visual studios for various work projects, for fun stuff I've mostly stuck with visual studio 6. Aside from some bugs with class view .ncb files getting corrupted every now and then (which can be sorted by deleting the .ncb and letting it recreate), it does pretty much what I want from an IDE:


  1. Fast classview with folders for working with large projects
  2. Fast and responsive to typing, moving around the codebase, compilation, debugging

I'm a big fan of SSE and things like OpenMP, but I've mostly managed to get by compiling these into DLLs with the later compiler, then calling them from the main module. And while I appreciate that the support for things like templates is better in the newer compiler, I've been willing to forego this for a quicker development environment.

Two or three years ago I did a quick test of the current IDEs of choice on windows, and found them lacking. I tried Code::Blocks, QT Creator, and Visual Studio 2013. The best of the bunch I found was QT Creator, which very nearly ended up as my new 'go to' IDE, and I used it in a couple of projects. But I ended up back in VC6.

Anyway this past week I have been giving CodeLite, Code Blocks, QT Creator and Visual Studio 2013 another try and here are some of my findings. I am obviously not very experienced in any of them, and would love to hear that my criticisms are unfounded, and there is a way around the problems.

CodeLite

codelite-small_zpsu8bzg37o.png

A lite IDE is exactly what I'm after: no bells and whistles, just something that runs fast. Unfortunately, after a bit of trialling, and despite it saying it 'works with major compilers' (well, yes it does), it only seems to support GDB, so I couldn't get it to debug code from cl.exe (the microsoft compiler). That ruled that one out.

Code::Blocks

codeblocks_zpsdcpo0zje.png

Despite looking a little 'technical' and harsh, I found it had lots of options and potential for configuring different compilers, and I set about converting my 3d Paint app to compile in it. Unfortunately, while I managed to get the project to compile and run, it quickly became apparent that the debugging support was awful. It didn't even seem to display local variables. Don't get me wrong, I'm sure it is an achievement for the developers, but I need something that works. So I wrote off that day's conversion as a waste of time.

QT Creator

qt_zpsicgxqdqz.png

Next up was pretty much my favourite. I had tried QT Creator before and pretty much love the design philosophy; it is what I would make myself if I had the time. It is fast, responsive, and doesn't show you 'too many' superfluous options, while still allowing you to tune under the hood. It works easily with the cl.exe compiler, and supports CDB debugging of the microsoft output code.

Again I spent some time converting. This time the actual code changes were more involved. I should explain that I'm a massive user of 'classview'. I understand that a lot of people (most perhaps) rely more on a solution explorer and organising by file, but I have always preferred to navigate with classview as it better fits the object orientated paradigm.

One unfortunate feature missing from the QT Creator classview was the ability to organise the classes from different libraries into folders. I believe this is what prevented me from fully converting over last time I tried it. But I did figure out that it would correctly group classes into folders when using namespaces, so I went about changing my library code to namespace everything up.

All was proceeding beautifully until I had it all finished, and sadly found that the classview was not very good at showing all the classes in the projects. Some would be missing. They would appear when I actually opened the files in the solution view, so it was 'half usable', but no good for navigating within the project.

One bonus of QT creator is that it is open source, and I did have a brief look at the source for the classview. It was all QT stuff (which I am not familiar with) and I was not really feeling brave enough to try and fix it, not knowing even the basics of how QT creator works. This is something I may revisit. Or perhaps the guy who wrote it (denis mingulov I think?) will improve it, but I think he developed it in 2010.

VS2013

So, after discussion in the forum here, I decided to give VS2013 another try and see if I could cure the slowdowns. Last time I had managed to establish that a lot of the problems were due to the background parsing of the codebase (to provide intellisense and classview), but I never managed to get it to a usable state.

VS has grown to become one of the big daddies of sprawling, code bloated messes. I remember back in 98, the common thought was that most MS software sucked somewhat, but that visual studio was 'really good' and finally 'something they got right'. Well, unfortunately they didn't learn from their success. In typical 'design by committee' fashion they seem to have tacked on c#, various other languages and flavour-of-the-month technologies, which many c++ programmers aren't the least bit interested in. I'm sure there is a way of turning off all that 'stuff', but I'd rather not have any of it installed in the first place.

visstudio_zpsvreit3i6.png

VS now seems to suffer from the 'too many options' problem. They clearly have so many developers working on it (independently and working against each other rather than together) that you can see multiple tools doing 'the same thing' rather than having some kind of coherent vision. Nowhere is the 'too many options' approach exemplified better than in the 'theme editor'.

The 'theme editor'. What were they thinking? Are they on crack?

http://i53.photobucket.com/albums/g55/lawnjelly/GameDev/theme_zpsxmabvxvh.png

The theme editor seems to have like 1000 or more options for customizing the colours of every conceivable gui element. To quote a wise woman 'ain't nobody got time for that'. I found myself spending 20 mins trying to find the option to change the window background colour, before giving up in frustration.

And now to the real problem. After googling and finding out that turning off 'graphics hardware acceleration' speeds up the IDE (you need hardware acceleration for a text editor? tell me more...), it looks like the main culprit for the awesome slowness in the new visual studios is the 'awesome' intellisense system.

Code Browsing Databases

It would seem that if you want to make an intelligent IDE that provides code browsing, classviews, intellisense etc, you have a choice: you can either rely on symbolic information produced during a proper compile, or you can run a 'mini-compile' that parses the source files separately and tries to maintain some kind of browse database. On cursory inspection, it seems all the IDEs have gone with the second option. This is great in that it can provide quick updates to the database without you having to ask for a compile, but it does have the potential to let the IDE get into a tizzy about constantly re-parsing the source files, as we shall see.

What seems to cause all the problems with re-parsing in c++ is that the result of a compile can depend on the particular #defines etc that are set at the time a file is brought in. That is, one header can produce a myriad of different compilation results, even within the same codebase, depending on how and where it is included. This is both a powerful feature for developers and a tricky problem for compiler / IDE writers. It would be much easier for compilation speed if headers were compiled once and only once. Instead, you get the situation where some users will vocally demand a '100% accurate' browse database, which means these headers have to keep getting compiled and recompiled 'just in case'. Instead of giving us the option which would be ideal: 'just compile it once and have a stab at it; if you get it wrong, no biggie'.

So now, as you navigate around your code tweaking bits here and there, visual studio is *constantly* recompiling, loading files, and checking things (just in case you edited outside visual studio). Unless you turn the options off, merely waving the mouse over the source code makes this happen, as it tries to produce those 'pretty' tooltips for you.

With all this 'background' (*cough*) parsing, I don't know about you, but it makes my IDE slow and unresponsive. Sometimes it just *hangs* while it figures something out. I keep telling myself: an IDE is just a glorified text editor. How did they get it so wrong?

I used 'processmonitor' to try to pin down what was happening; it shows you all the file accesses etc. It is really frightening. Anyway, my current best solution is to turn off all the automatic updating of the browse database, and call 'rescan solution' manually every now and then. My browse database doesn't have to be PERFECT every millisecond, I just want to be able to navigate!!

Of course, the fact this browse database is held as some kind of SQL database on disk doesn't help. Haven't you heard of using RAM, MS? You know, it's cheap stuff, and faster than disks? And you don't have to use SQL if you have a halfway sensible access scheme?

Future

I'm sticking with VS2013 for a little while, but once I've done the first release of my current project, I may have a look at either writing or modding an IDE, to get something that works properly. The simplest solution would probably be a modification of QT Creator, but writing a whole IDE is not out of the question either. Scintilla seems to be a very good open source 'source editor' which you can integrate into your own programs (it is used in the excellent notepad++, which I use as an external editor). It looks fairly easy to rig up an environment with a list of files compiled / linked with cl.exe, and displayed / edited with Scintilla.

The only 'hard parts' seem to be parsing a code database, and integrating a debugger such as CDB. Myself, I would be content with either reading a code database from .pdb files after compilation, or else doing a *very* lazy parse to give a rough classview and intellisense. I suspect CDB integration may be the tricky part, seeing how much difficulty the Code::Blocks developers seem to be having with it.

Addendum 31/8/16

Well, I came back from spending a few days away (without internet) to find there had been lots of interest in this topic (only a quick blog post, not even an article)! I never thought anyone would actually read it :lol:, but clearly it struck a nerve, such is the power of the interwebs. It was very quickly written, with no doubt many errors, and slightly provocative I'll admit, the result of too many nights banging my head on my keyboard in frustration. I'm glad it got a little bit of discussion going though. We programmers are totally dependent on our tools for our productivity, so if there's anything we should 'program right', or try to improve, it should be the tools that form the foundation of everything else we make. :)

Firstly a small caveat for my earlier waffle, I'd like to emphasise that I am but a lowly worm who can only speak for my own experiences, and the things *I* currently look for in an IDE, which is tied to how I currently work, and my small subset of knowledge of c++ and compilers. These considerations may be completely different with a big team, larger projects etc. So I would expect it to be difficult / impossible to provide a solution that is perfect for *all* users, who may have totally different priorities, and so my whining should be taken with a grain of salt. That said, a bit of constructive criticism isn't something we should be afraid of. I've done a lot of awful work in my time, and after dusting off my ego, criticism really helped pin down what could be improved so I could make it better.

I would also like to say that ALL the IDEs I reviewed here are GREAT in their own way and are massive achievements for the developers, and all are 'so close' to being my perfect choice, and my ranting is only because having an IDE at '98%' is sooo frustrating. I know that with that extra push they can be made even better for us users and make our lives much easier. And one of the many great things about us developer-type doods, is we do listen! :)

Visual Studio 15 Preview 4

http://i53.photobucket.com/albums/g55/lawnjelly/GameDev/vs15_zps6kvkhowc.png

Well on the advice of mluparu I have been trying out the preview of Visual Studio 15, as they had identified the symbol database as a problem and have been working hard on improving it. It looks like 10 months ago they rolled out a fix changing to a new database engine which appears to be making much better use of RAM:

https://blogs.msdn.microsoft.com/vcblog/2015/11/11/new-improved-and-faster-database-engine/

All I can say is, WOW, they seem to have fixed it!! :D I eat my words and take it all back! It is now fast and responsive for me with the classview. So much so that I have already uninstalled VS2013. Obviously I need to do a lot more testing etc, but this is the first version for a *long* time where I'm hoping they may have got the formula right again. Very excited! :lol:

Just a little report for anyone who is following the progress of my little 3d painting app from my previous post. I'm trying not to get too carried away with spending too long on this, and am planning to do a release sooner rather than later, even if it still has lots of improvements to be made. I've already got most of the features I was aiming for working, and it can produce some nice results. It is a bit rough around the edges though, and the user interface needs improving.

As you can see, I've fleshed out the support for layers a bit more. I had to spend a bit of time getting the scroll bars working properly for the treeview control (as it is my own GUI, I'm continuously finding things that need improving). The layers now support an alpha mask, and painting to the alpha mask. So there are actually 2 alpha channels possible per layer: one in the RGBA surface, and one in the mask. This is similar to how photoshop works. So you can draw with e.g. a stippled alpha brush, then later modify which areas are visible using the mask, without affecting the stippling.

moni_combined_zpshmeq6rfl.png

One thing I may change is that the brush alpha is stored as part of the brush RGBA. You can load the brush alpha mask separately, but as a result of being part of the brush RGBA, its mapping will always be the same as the brush RGB. This is just a compromise: it is slightly less powerful, but it avoids having to worry about 2 sets of brush mappings. I may change my mind though, as separate mappings could potentially make brushes look less 'tiley' in some cases.

Aside from the layers, I have developed a 'zone' system, so you can mark out zones of polys to use as masks for drawing. So for instance, you could mask out a shirt, or belt, from the geometry of the model, then be sure that your painting wouldn't extend outside the mask. It is easy to paint the zones in a wireframe mode.

moni4_zpsjcukv7lr.png

One aspect of the zones I am not happy with is that they show aliasing 'jaggy' artefacts at the edges of polys. This is because a poly is either within the zone, or not. Blender has this same feature for its texture painting and suffers from the same jaggies. However, I will probably fix this with some kind of anti-aliasing.

Talking of artefacts, I have totally solved some in a cool way, and noticed a potential problem with the texture filtering. One problem I was having was artefacts on seams in the UV map:

cameron2_zpsonbfxdxo.png

I had a simple solution before: I would simply allow the brush to draw texels outside a triangle if they were not 'owned' by any triangle. However, I have come up with a better solution, which also speeds up determining whether a texel is within a triangle. I simply precalculate an 'ID map' for the texture, where each texel contains the ID of the owning triangle, or 0 if not owned. Then while painting, instead of doing a triangle intersection test, I just check whether the ID of the triangle being drawn matches the ID map for that texel.

facemap_zpsqpdpyy2m.png
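Building the ID map is just a software rasterization of the triangles in UV space. A minimal sketch, using 2d edge functions for the inside test (names hypothetical; my real version is fancier about texels shared between triangles):

#include <algorithm>
#include <cstdint>
#include <vector>

static float EdgeFunc(float ax, float ay, float bx, float by, float px, float py)
{
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

// Write (triangle index + 1) into every texel each triangle owns; 0 = unowned.
// pUVs holds 3 UV pairs per triangle, already scaled to texel space.
void BuildIDMap(const float *pUVs, int numTris,
                std::vector<uint32_t> &idMap, int w, int h)
{
    idMap.assign(w * h, 0);

    for (int t = 0; t < numTris; t++)
    {
        const float *uv = &pUVs[t * 6];

        // bounding box of the triangle, clamped to the texture
        int x0 = std::max(0, (int)std::min({ uv[0], uv[2], uv[4] }));
        int x1 = std::min(w - 1, (int)std::max({ uv[0], uv[2], uv[4] }) + 1);
        int y0 = std::max(0, (int)std::min({ uv[1], uv[3], uv[5] }));
        int y1 = std::min(h - 1, (int)std::max({ uv[1], uv[3], uv[5] }) + 1);

        for (int y = y0; y <= y1; y++)
            for (int x = x0; x <= x1; x++)
            {
                float px = x + 0.5f, py = y + 0.5f;
                float e0 = EdgeFunc(uv[0], uv[1], uv[2], uv[3], px, py);
                float e1 = EdgeFunc(uv[2], uv[3], uv[4], uv[5], px, py);
                float e2 = EdgeFunc(uv[4], uv[5], uv[0], uv[1], px, py);

                // inside if all edge functions agree in sign (either winding)
                if ((e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                    (e0 <= 0 && e1 <= 0 && e2 <= 0))
                    idMap[y * w + x] = t + 1;
            }
    }
    // while painting : a texel belongs to triangle t iff idMap value == t + 1
}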

Fixing the seams issue was then quite easy. I run a 'bleedout' algorithm, which iteratively pushes the triangle IDs outward on the ID map into unused space. Empty space thus gets owned by the nearest triangle, and hence drawn into, and the artefacts are removed. These would particularly be a problem with mipmapping in a game, for example, if they weren't fixed.
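The bleedout itself is a simple iterative dilation of the ID map. A sketch (4-neighbour version; a few passes are enough to cover the filtering footprint):

#include <cstdint>
#include <vector>

// Each pass, every unowned texel adjacent to an owned one copies that
// neighbour's ID, pushing triangle ownership outward into empty space.
void Bleedout(std::vector<uint32_t> &idMap, int w, int h, int numPasses)
{
    for (int pass = 0; pass < numPasses; pass++)
    {
        std::vector<uint32_t> src = idMap; // read from a copy, write in place

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (src[y * w + x]) continue; // already owned

                if (x > 0 && src[y * w + x - 1])
                    idMap[y * w + x] = src[y * w + x - 1];
                else if (x < w - 1 && src[y * w + x + 1])
                    idMap[y * w + x] = src[y * w + x + 1];
                else if (y > 0 && src[(y - 1) * w + x])
                    idMap[y * w + x] = src[(y - 1) * w + x];
                else if (y < h - 1 && src[(y + 1) * w + x])
                    idMap[y * w + x] = src[(y + 1) * w + x];
            }
    }
}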

The potential problem I have noticed with texture filtering may take a bit more solving. My texture filtering code is rather basic at the moment, and just does a linear interpolation between the 4 nearest texels. This works when the brush scale is around the final texture scale; however, it will look rubbish under minification. So I may have to implement some kind of mipmapping / anisotropic filtering for drawing to the mesh.

Aside from that, the biggest change has been that instead of relying on drawing manually onto the mesh, I've added a new 'project layer' feature, whereby you can automatically draw onto every poly in the model once you have lined up your brush reference image. This is why the alpha mask was necessary: with every poly covered, you then want to be able to show / hide areas on the layer non-destructively. It works like a charm, and is much easier than manual painting for the main areas of a model.

The alternative I looked at was 'locking' the mapping of the brush to the mesh once it was lined up, so you could rotate the model but still draw with the same brush mapping. The approach I went with was simpler, and more powerful. The only cost is that it fills the entire layer texture, which means the save files are bigger.

And that brings me finally to loading and saving. I only just got this working today, as I figured it would be easy. It was quite easy, except that I wanted to store save files in .zip format, so that users could themselves get access to the layers, for editing in photoshop. Keeping a save file in a single zip also keeps it self contained and easier for users to keep track of. And it protects against me accidentally breaking import from earlier versions: the zip file contains .pngs for the layers, so it can be used to reconstruct the project if all else fails.

cameron4_zpslmbqdbf2.png

I am working on the assumption that the .pngs are lossless, and will store RGB information even where the alpha is zero. I will have to do some testing to check this... I suspect it depends on the implementation of the PNG saving code, and the settings. I could alternatively use another format, but there's no need to reinvent the wheel, and uncompressed save files would be prohibitively huge. I gather TIFF files can store layers; however, I know little about them, and they are probably Adobe patented. I generally love PNGs, the only issue being that they are rather slow to compress / decompress, which could be annoying when saving the project often.

Frustrated with the clunky support for 3d painting in blender, I've been looking for alternatives. I couldn't get Mari and some others to even run on my lowly PC (intel HD graphics 3000! :lol: ). So in typical 'do everything yourself' style :rolleyes: , I decided to knock something up myself. After all I'm not after anything groundbreaking, just something simple that makes asset creation quicker for my jungle game (small polycounts, small texture sizes).

overlay_zpscaxrsjph.png

It has proved easier than I thought so far. Firstly, one thing that helps in quickly building little apps is I have previously written my own cross platform GUI system. The other is that I tend to write as much code as possible in libraries, so that I can reuse it in multiple projects. This was a benefit because I had already written a photoshop-like app, and could reuse some of the code. :ph34r:

It was fairly easy to come up with a simple mesh format, and to parse meshes and UVs from wavefront .obj files. At this stage I'm not really interested in being able to model and create UV maps; blender has that covered. My thinking is: make the app do one thing, and do it well.

Rendering the model inside the GUI was fairly easy, I have support for 3d sub-windows. And I am using OpenGL 1, just the simple old school, as I have no need for shaders etc. In fact there is not even any lighting yet, just flat shading (easier to see textures).

I was keen to implement a layering system, as that is how photoshop works, and I like it; it is more powerful than trying to do everything on one layer. So I mostly reused the GUI component from my photoshop app, and implemented a simpler system (no groups or adjustment layers) that only supports RGBA 32 bit. I know from experience that supporting multiple bit depths is a nightmare. Handily, I could also reuse my SSE3 layer blending code. This makes the whole thing faster, as blending is one of the bottlenecks.

The real key to the app was being able to project from drawing on the 3d model to the UV coordinates of the 2d layers. There are several ways of doing this. If I needed to support high poly models I'd probably look at doing this with hardware, but to keep things simple I opted for software methods. I did implement opengl colour ID picking, but didn't need it in the end.

test2_zpsvsnziyyj.png

The way I did the projection (probably not the finest way lol :P ) was, as well as doing the hardware transform with opengl, to keep track of the matrices and do a software transform of the mesh too. Not every frame though, only when there was the potential to draw on it, for instance when releasing a mouse button after a rotate. So the normal interaction (rotate etc) is fast, and the slowness of the software transform is hidden, as it only happens when needed.

With screen space triangles available, it was possible to use a projection method (similar to shadow mapping) to map a screen space brush to the actual uv coordinates of each triangle being drawn. I then draw the brush onto the UV space layer, update the layers to form the final texture, and finally upload the changed area to OpenGL for drawing the frame.
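The core of the mapping is just barycentric coordinates: for each texel a triangle owns, find its barycentric weights in UV space, use the same weights to interpolate the triangle's screen space corners, and sample the brush at the resulting position. A minimal sketch (ignoring perspective correction for simplicity; names hypothetical):

struct Vec2 { float x, y; };

static bool Barycentric(Vec2 a, Vec2 b, Vec2 c, Vec2 p,
                        float &u, float &v, float &w)
{
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    if (d == 0.0f) return false; // degenerate triangle
    u = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    v = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    w = 1.0f - u - v;
    return true;
}

// Map a texel at UV position 'texel' to screen space, given the triangle's
// UV corners uv[3] and its (software transformed) screen corners scr[3].
bool TexelToScreen(const Vec2 uv[3], const Vec2 scr[3], Vec2 texel, Vec2 &screen)
{
    float u, v, w;
    if (!Barycentric(uv[0], uv[1], uv[2], texel, u, v, w))
        return false;
    screen.x = u * scr[0].x + v * scr[1].x + w * scr[2].x;
    screen.y = u * scr[0].y + v * scr[1].y + w * scr[2].y;
    return true; // the brush / reference image is then sampled at 'screen'
}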

Aside from this there was the matter of hidden surface removal. That is why I briefly looked at OpenGL colour picking (rendering each tri with the ID encoded as a colour, then reading back the picked pixels). Instead I decided to use software methods, as I'm only using low poly models. :wink:

During the software transform, I batch up the triangles into a screen space grid for acceleration. Then identify all possible occluding triangles for each triangle. Then during the draw to the UV layer, for each texel I perform an occlusion test. Sounds slow, but it works pretty fast. Might not work so well for high poly models, but that doesn't matter for me.

There have also been a few other issues to solve, like the need to draw slightly outside triangles to prevent 'bleedover' artefacts on edges. But this is mostly working now:

Here are edge artefacts:

cameron2_zpsonbfxdxo.png

The next stage was the real reason for the app, the ability to draw from reference images directly onto the 3d model. I do this by the brush being 2 things :

An alpha mask (circular brush)
A source texture (may be a reference image)

The source image is mapped across the screen, and can be transparently overlaid on the model by holding down shift. Then when you line up the 3d model with the source image and draw, you project the reference image onto it perfectly, voila! :)

I say perfectly, but there are obviously some things which need addressing. First, the aspect ratio must match (this is adjustable), as must the scale. You can't yet rotate the texture, but you will be able to (or rotate the model to suit). The next cool feature is that I want to be able to warp the source image to match the model, so if, for example, an eye or ear is in the wrong place, you can adjust it. I am doing this with a liquify feature, which I am just getting working.

That is it so far. There's no adjustment of brushes yet, or saving and loading layers, but all that should be pretty simple.

I'm also planning to have both a masking channel for layers, and poly masking, where you can mark out poly zones and only have the layer applied to that zone. And also some stuff to make the layers / brushes blend together better : layer effects like drop shadow, and things like a healing brush (which I wrote already for photoshop app).

As a first go at a journal entry, just a simple topic:
For a feature in my little 3d painting app, I wanted to be able to warp source images to make them fit better onto 3d models. Never having done image warping before, I quickly came to what I presume is the basis for how most people do it...

For a given source image, I created a correspondingly sized 2d array of 2d float vectors, which I'll call a 'vector map':

grid_zpsjkuagamp.png

Each vector represents a displacement (in pixel space) to find the source pixel for each destination pixel in the resulting image.

So the process to create the final image is as follows:


  1. Loop through all the destination pixels.
  2. For each xy, look up the corresponding vector in the vector map.
  3. Add the vector to xy, to get a (float) source xy.
  4. Look up the source pixel from the source image (using texture filtering if desired).
  5. Copy the source pixel to the xy destination.
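
And the warp itself is then just a loop. A minimal sketch (nearest pixel lookup only, for clarity; a filtered lookup would go in its place):

#include <vector>

struct Vec2 { float x, y; };

// Apply the vector map : each destination pixel fetches the source pixel
// displaced by its vector.
void ApplyVectorMap(const std::vector<unsigned int> &src,
                    std::vector<unsigned int> &dst,
                    const std::vector<Vec2> &vecMap, int w, int h)
{
    dst.resize(w * h);

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            const Vec2 &d = vecMap[y * w + x];

            // displaced source position, rounded to the nearest pixel
            int sx = (int)(x + d.x + 0.5f);
            int sy = (int)(y + d.y + 0.5f);

            // clamp to the image bounds
            if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
            if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;

            dst[y * w + x] = src[sy * w + sx];
        }
}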



Here is an example, using a simple circle brush, with vector direction and magnitude depending on distance from the centre:


liquify_zpsyan9p0ss.png

And here is a more subtle brush just pushing in the direction of mouse drag:
liquify2_zpsrb5unkrb.jpg

It also occurs to me that it is fairly easy to extend this vector map to use systems such as a guidance mesh, or anchor and pull points. These could perhaps be done faster by other more direct methods, but as this is not CPU critical in my case it would fit the bill, and is very flexible.
