mikiex

Members
  • Content count

    280
Community Reputation

261 Neutral

About mikiex

  • Rank
    Member
  1. Unity 5 or Unreal engine 4

    If there is one thing you can guarantee, it's that someone who uses Unity will recommend Unity and someone who uses Unreal will recommend Unreal. :) First decide what game you are going to make, then base the decision on that.   It is worth looking at games that have been built with each engine, but don't just look at the graphics... don't be blinded by the pretty shiny things too much! Counting how many games of the type you want to make have been made with an engine is a very, very shallow way to judge it - but often the stats don't lie.
  2. Reducing Unity game file size

    Have you looked into the difference in runtime memory when using PNG assets vs. 32-bit textures? When I last looked into using PNGs, I'm sure there was talk of them ending up using more memory at runtime. For you this probably wouldn't be an issue, as download size is more important, but with a more memory-hungry game it could be.
  3. The Programming Primer

    The problem is you did not understand what the example was about. It was not about adding 5+3; the paragraph heading explains that you are writing a function, and the example function's job is to add two numbers. I would agree the function could be more complicated to show the benefit of writing functions.   With any learning, it will take time to understand everything. A few tips:
    1. Find a programmer to chat with and discuss the concepts of programming.
    2. Learn by example: follow some tutorials, then make some of your own changes and see what happens.
    3. Find something simple you want to program and do it.
    4. Don't procrastinate; go for it.
  4. 8-bit or 16-bit?

      There is no confusion; terms like "8-bit" used to describe artwork etc. have become separated from any technical meaning.
  5. 8-bit or 16-bit?

        So, to be clear, does 8/16-bit refer to color depth (256 colors vs. highcolor) or word size (NES vs. SNES)?
    I've been thinking about this recently... when people say "look at this 8-bit style gfx", I don't think they mean bit depth; I think they classify based on the console's or computer's CPU word size.   On top of that, most people say "8-bit" for nearly anything pixelated that looks like it has a limited palette. So even if it is more like 16-bit, someone will still say 8-bit... some of that might even be that they have forgotten the limitations of 8-bit machines.
  6. sRGB

      But technically, gamma decoding is tone mapping as well.
  7. sRGB

      It's not a "texture state", it's a texture format. When you use it, you're saying "the data in this texture should be interpreted as containing colors in the sRGB color space", which effectively causes the hardware to apply an sRGB->linear conversion when you sample from the texture.   It was a "sampler state" pre-DX10.
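    To make that hardware conversion concrete, here is a minimal Python sketch of the piecewise sRGB transfer function (IEC 61966-2-1) that sampling an sRGB-format texture effectively applies; the function names are just illustrative, not from any API:

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear light,
    using the piecewise sRGB transfer function (IEC 61966-2-1)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse: encode a linear-light value back to sRGB."""
    if c <= 0.0031308:
        return c * 12.92
    return 1.055 * c ** (1.0 / 2.4) - 0.055

# sRGB mid-grey decodes to roughly 0.21 in linear light, not 0.5,
# which is why skipping the conversion makes lighting math wrong:
print(round(srgb_to_linear(0.5), 3))
```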
  8. In mobile games, large companies are spending millions on acquiring users. If your game isn't very different, getting it up the charts is very hard work.
  9. sRGB

    If you do the conversion in the shader instead, it costs shader instructions.
  10. How to mipmap procedural shaders?

    A not-so-generic approach, but one that works for lines, would be to measure the ddx/ddy derivatives, much like the classic anti-aliased checkerboard shader.
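    As a sketch of that idea (my addition, not from the original reply): a CPU-side analogue of a derivative-based anti-aliased line pattern, where `fw` stands in for what fwidth(u) would return in a shader — the screen-space footprint of one pixel in pattern coordinates:

```python
def smoothstep(e0, e1, x):
    """Hermite interpolation clamped to [0, 1], as in HLSL/GLSL."""
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * (3.0 - 2.0 * t)

def line_pattern(u, fw):
    """Anti-aliased periodic line: u is the pattern coordinate,
    fw is the pixel footprint in u units (shader: fwidth(u))."""
    d = abs((u % 1.0) - 0.5)   # distance to the nearest line centre
    half_width = 0.1           # line half-width in pattern units
    # Widen the transition band by the pixel footprint so the line
    # fades out smoothly instead of shimmering when it gets thinner
    # than a pixel at a distance (a cheap stand-in for mipmapping).
    return 1.0 - smoothstep(half_width - fw, half_width + fw, d)
```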
  11. Well, there is no reason to be blending the target's alpha. SrcBlendAlpha, DestBlendAlpha? I'm not saying that's the problem, but I thought I would point it out. These are normally set to one or zero.
  12. To further what Mhagain is saying: you might also want to consider doing the distortion in screen space or world space.   You could input a float3 constant of the world position you want at the centre of the effect into the shader. In the shader, use the world position of the pixel (you can work this out in the vertex shader and output it to the pixel shader). Now you can do some sort of effect in world space.   For instance (ugly pseudocode! - some things may be the wrong way around!), in the pixel shader:

    worldGradient = distance(centreOfEffect, worldPixelPos); // a float gradient in world space
    ripple = sin((worldGradient * freq) + (time * speed));   // sine wave moving out from the centre of your effect over time (input time as a constant)
    ripple = ripple * worldGradient;                         // scale the effect with distance
    // Now you have a greyscale ripple in world space; combine it into the
    // texture coordinates of the texture you want to distort:
    colour = tex2D(texSampler, texCoord.xy + ripple);

    freq is whatever looks good; it could be multiplied in some way by the distance to make the ripples get larger as they go out.
  13. The brushes using these blend modes in Photoshop work the same as the layer blend modes, but using the brush colour instead. They can be quite destructive, so you might be better off implementing them as layers. Search Google for: equation dodge burn.   You should find plenty of info; the layer blend modes have been documented many times.
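    For reference, the commonly documented dodge/burn layer equations can be sketched in Python like this (channel values in [0, 1]; exact clamping behaviour varies between implementations, so treat this as one plausible variant):

```python
def color_dodge(base, blend):
    """Colour dodge: brightens base by blend; blend = 1 blows out to white."""
    if blend >= 1.0:
        return 1.0
    return min(1.0, base / (1.0 - blend))

def color_burn(base, blend):
    """Colour burn: darkens base by blend; blend = 0 crushes to black."""
    if blend <= 0.0:
        return 0.0
    return max(0.0, 1.0 - (1.0 - base) / blend)
```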
  14. Artefacts using big model

      As close as possible would be your eyeball touching the surface - a near clip of 0 :) I don't know anything about log depth buffers other than that the concept exists; DX11 would certainly make things like this possible, I'm sure. Consider, though, that not many games in the past have bothered to come up with a physically correct solution. Yes, it's been a matter of contention, but on every project I've worked on we have managed to work around this issue by faking it.
  15. Artefacts using big model

      It's worth pointing out that the depth buffer is normally non-linear; changing the near clip makes a huge difference compared to moving the far clip. So you want to push the near plane as far forward as possible, but this depends on how close your camera will get to objects in the scene. The far plane does matter, but getting the near plane right is most important. Also remember: you say 3 km, but you mean 3k of Unity units, as scale is arbitrary.
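    A quick sketch of why the near plane dominates (my addition, assuming a standard non-reversed D3D-style perspective projection): compute the post-projection depth of two distant surfaces and see how much buffer range separates them as the near plane moves.

```python
def ndc_depth(z, near, far):
    """Post-projection depth in [0, 1] for a standard (non-reversed)
    D3D-style perspective projection."""
    return (far * (z - near)) / (z * (far - near))

far = 3000.0
# Depth-buffer separation between two surfaces at 2990 and 3000 units:
for near in (0.1, 1.0, 10.0):
    spread = ndc_depth(3000.0, near, far) - ndc_depth(2990.0, near, far)
    print(f"near={near:>5}: depth spread = {spread:.2e}")
```

    The separation grows roughly in proportion to the near distance: pushing the near plane from 0.1 to 10 gives those two surfaces about 100x more depth-buffer range to resolve, which is why z-fighting on big scenes is usually fixed at the near plane, not the far.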