


#5255959 What's the best system on which to learn ASM?

Posted by AgentC on 07 October 2015 - 01:40 AM

The 68k has a very clean instruction set: you have a number of data registers, which all work the same, plus separate address registers. There are convenient instructions for math, including integer multiplication and division.


If your eventual goal is the Megadrive, and given your previous C/C++ experience, it doesn't seem like a stretch to go directly for the 68k.


However, there may be some difficulty in setting up a development toolchain so you can compile and run Megadrive programs, and you would also be learning the hardware features at the same time (e.g. which addresses you need to poke to get something to show up on the screen). There are reverse-engineered / leaked resources for this, but they are not as abundant as for retro computers. When an 8/16-bit console boots up and starts executing your program, it typically starts from almost nothing; a computer, on the other hand, typically has the screen already displaying some sensible data (like text), and has ROM operating system routines to help you.


Therefore, for the quickest, most hassle-free introduction to the retro/asm programming mindset, with minimal setup and immediately visible effects, I'd recommend the C64 as well. For example, with the VICE emulator you can break into the built-in debugger/monitor and write & run simple asm programs directly; no toolchain setup needed. The C64's CPU instruction set is extremely limited, though: you have three primary registers (A, X, Y) which are all used differently, and you can forget about more complex operations like multiplication - they don't exist and must be written manually using bit-shifting arithmetic.
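To illustrate, here's a minimal C++ sketch of the shift-and-add approach such a multiplication routine would take; on the 6502 you would hand-write the same loop with shift, rotate and add instructions (this is just an illustration of the idea, not 6502 code):

```cpp
#include <cstdint>

// Shift-and-add multiplication of two 8-bit values into a 16-bit result,
// the same routine you would hand-code on a CPU without a MUL instruction.
uint16_t ShiftAddMultiply(uint8_t a, uint8_t b)
{
    uint16_t result = 0;
    uint16_t addend = a;
    while (b)
    {
        if (b & 1)          // lowest bit of the multiplier set -> add the shifted multiplicand
            result += addend;
        addend <<= 1;       // multiplicand * 2 (ASL on the 6502)
        b >>= 1;            // move on to the next bit of the multiplier (LSR)
    }
    return result;
}
```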


That said, if you don't feel overwhelmed by the prospect of learning the hardware and having to set up its state from scratch, you'll waste less time by going directly for your target platform.

#5250567 Extending Unity editor and new UI

Posted by AgentC on 04 September 2015 - 08:15 AM

If you want to use the GameObject / Component based new UI for tool functionality, then the tools need to actually run inside your Unity application. In that case you could look into any Unity new UI tutorials, but you are limited by what the Unity runtime API allows you to do.


However, if you want to extend the editor itself and create new windows or inspectors for it, you still need to use the old-style UI functions, as per Unity's documentation: http://docs.unity3d.com/Manual/editor-EditorWindows.html

#5247988 Reducing CPU usage in Enet thread

Posted by AgentC on 21 August 2015 - 01:14 AM

When I used to use Enet, I did just as hplus0603 suggested: ran & updated Enet in the main thread as part of the rest of the frame loop.


If I was also rendering, the rendering would naturally limit the framerate & CPU usage, especially if vsync was used.


In the case of a headless server, the main loop can sleep when it doesn't have simulation work to do, to avoid spinning at 100% CPU. Getting the sleep accurate may be a problem depending on the OS.
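As a rough illustration, here's a minimal sketch of such a loop (port, peer count and timeout are arbitrary examples, error handling and the actual simulation step are omitted). Note that enet_host_service() with a non-zero timeout already blocks while waiting for traffic, which is one way to avoid the spin:

```cpp
#include <enet/enet.h>

int main()
{
    if (enet_initialize() != 0)
        return 1;

    ENetAddress address;
    address.host = ENET_HOST_ANY;
    address.port = 12345;                               // example port
    ENetHost* server = enet_host_create(&address, 32, 2, 0, 0);
    if (!server)
        return 1;

    bool running = true;
    while (running)
    {
        // Block up to 15 ms waiting for network events; this is what keeps
        // the loop from spinning when there is nothing to do.
        ENetEvent event;
        if (enet_host_service(server, &event, 15) > 0)
        {
            do
            {
                switch (event.type)
                {
                case ENET_EVENT_TYPE_CONNECT:    /* handle new peer */ break;
                case ENET_EVENT_TYPE_RECEIVE:
                    /* handle packet */
                    enet_packet_destroy(event.packet);
                    break;
                case ENET_EVENT_TYPE_DISCONNECT: /* handle disconnect */ break;
                default: break;
                }
            }
            // Drain any further queued events without waiting.
            while (enet_host_service(server, &event, 0) > 0);
        }

        // RunSimulationStep();  // placeholder for the fixed-timestep update
    }

    enet_host_destroy(server);
    enet_deinitialize();
    return 0;
}
```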

#5240042 Unity or C++?

Posted by AgentC on 13 July 2015 - 04:06 AM

Note that in the context of Unity, C++ is used for native plugins that extend Unity's functionality, for example implementing custom movie recording. Using it for gameplay purposes will be cumbersome, as you'll have to write all the data marshalling (for example the positions of scene objects) between C++ and Mono yourself; Unity doesn't come with built-in C++ scene access.
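To make the marshalling point concrete, here's a hypothetical sketch of the native side of such a plugin (the function name is made up). The C# side would have to copy the scene data, such as object positions, into a plain array and pass it across via a [DllImport] declaration; that copying is exactly the marshalling you end up writing yourself:

```cpp
#include <cstddef>

extern "C"
{
#if defined(_WIN32)
    __declspec(dllexport)
#endif
    // positions: packed x,y,z triplets copied out of the Unity scene by C# code.
    // The plugin cannot reach into the scene directly; it only sees this array.
    void ProcessPositions(float* positions, int count)
    {
        for (int i = 0; i < count; ++i)
        {
            float* p = positions + i * 3;
            // ... do native-side work on p[0], p[1], p[2] ...
            (void)p;
        }
    }
}
```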

#5235453 Is OpenGL enough or should I also support DirectX?

Posted by AgentC on 18 June 2015 - 06:12 AM

If you're fine with fairly old rendering techniques (no constant buffers, just shader uniforms, no VAOs) I wouldn't call OpenGL 2.0 or 2.1 bad. It has a fairly straightforward mapping to OpenGL ES 2.0, meaning your rendering code won't differ that much between desktop & mobile, and because it's older, it's more likely to work with older drivers. You'll just need to be strict about obeying even the "stupid" parts of the specs: for example, if you use multiple render targets with an FBO, make sure they all have the same color format.
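For example, a GL 2.x-era MRT setup (using the EXT_framebuffer_object entry points as loaded by GLEW; a sketch, not production code) might look like this, with both attachments deliberately using the same GL_RGBA8 format:

```cpp
#include <GL/glew.h>

// Create a color texture with the shared GL_RGBA8 format.
GLuint CreateColorTexture(int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}

// Build an FBO with two color attachments of identical format.
GLuint CreateMRTFramebuffer(int width, int height)
{
    GLuint fbo = 0;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, CreateColorTexture(width, height), 0);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                              GL_TEXTURE_2D, CreateColorTexture(width, height), 0);

    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
    glDrawBuffers(2, buffers);   // core in GL 2.0

    // A format mismatch between attachments can show up here as an incomplete FBO.
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;
    return fbo;
}
```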

#5235439 Is OpenGL enough or should I also support DirectX?

Posted by AgentC on 18 June 2015 - 05:01 AM

On Windows, do you want to support machines that probably never had a GPU driver update installed, and whose users don't even know how to update drivers? If the answer is yes, you pretty much need to default to DirectX on Windows.

#5235262 rendering system design

Posted by AgentC on 17 June 2015 - 04:28 AM

Or, if the materials need different shaders, build instance queues/buckets on the CPU before rendering, so that the queue key is a combination of the mesh pointer and the material pointer.


E.g. queue 1 contains instances of mesh A with material A; these will be rendered with one instanced draw call.

Queue 2 contains instances of mesh A with material B; another draw call for those.



I'd only put a "default material" pointer in the actual mesh resource, and allow the mesh scene nodes (instances) to override it.
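A minimal sketch of that bucketing, with placeholder Mesh / Material / Transform types (not any particular engine's API), could look like this:

```cpp
#include <map>
#include <utility>
#include <vector>

struct Mesh;
struct Material;
struct Transform { float matrix[16]; };

// Queue key: the (mesh, material) pair the post describes.
using QueueKey = std::pair<const Mesh*, const Material*>;

std::map<QueueKey, std::vector<Transform>> BuildInstanceQueues(
    const std::vector<std::pair<QueueKey, Transform>>& sceneInstances)
{
    std::map<QueueKey, std::vector<Transform>> queues;
    for (const auto& instance : sceneInstances)
        queues[instance.first].push_back(instance.second);   // one bucket per mesh+material
    return queues;
}

// Rendering would then walk the map and issue one instanced draw call per bucket:
// for (const auto& [key, transforms] : queues)
//     DrawInstanced(*key.first, *key.second, transforms);
```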

#5233191 glGenBuffers is null in GLEW?

Posted by AgentC on 06 June 2015 - 12:38 PM

On Windows, SDL doesn't fail even if it doesn't get the requested OpenGL version (in this case 2.1); instead it initializes the highest version it can. The "GDI Generic" renderer name indicates that the computer doesn't have an up-to-date Nvidia graphics driver installed, so it falls back to the default Windows OpenGL driver, which supports only OpenGL 1.1.
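One way to catch this at runtime is to print and check the renderer string right after the SDL GL context has been created; a small sketch:

```cpp
#include <SDL_opengl.h>
#include <cstdio>
#include <cstring>

// Call only after the SDL OpenGL context has been created and made current.
// Returns true if we landed on the Microsoft software fallback (GL 1.1 only).
bool IsSoftwareRenderer()
{
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    std::printf("GL_RENDERER: %s\nGL_VERSION: %s\n",
                renderer ? renderer : "null", version ? version : "null");
    return renderer && std::strcmp(renderer, "GDI Generic") == 0;
}
```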

#5220737 Game Engines without "Editors"?

Posted by AgentC on 01 April 2015 - 11:01 AM

Urho3D actually gained D3D11, OpenGL 3.2 and WebGL (Emscripten) rendering recently; the site documentation was just lagging behind. However, it doesn't yet expose some of the new API-exclusive features like tessellation or stream-out, so if you require those you're indeed wise to choose another engine.

#5219557 Link shader code to program

Posted by AgentC on 27 March 2015 - 04:00 AM

That is an inside joke from my workplace, where we used to talk about "boolean farms": a class that accumulates, over the course of development, a large number of booleans to control its state in obscure ways. If each combination of booleans indicates a specific state, a state enum could be more appropriate. As for the "manager" classes, it's somewhat a matter of taste, but usually "manager" tells very little about what the class is actually doing. For example, if it's loading or caching resources, then Loader or Cache in the class name could be more descriptive.
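As a trivial illustration (class and state names are made up), the refactoring amounts to this:

```cpp
// Replacing a "boolean farm" with a single state enum when the booleans really
// encode one state machine.
class ResourceCache
{
    // Before: bool loading_, loaded_, failed_, reloadQueued_; ...
    enum class State { Unloaded, Loading, Loaded, Failed, ReloadQueued };
    State state_ = State::Unloaded;
};
```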

#5219436 Link shader code to program

Posted by AgentC on 26 March 2015 - 02:16 PM

Maybe this will explain it better; read particularly the beginning and the section "inbuilt compilation defines": http://urho3d.github.io/documentation/1.32/_shaders.html


This is from the engine I've been working on for a couple of years. Using this approach you'd build, for example, a diffuse shader or a diffuse-normalmapped shader as a data file on disk, and the engine would load it, then proceed to compile it possibly several times with different compile defines as needed. The engine would tell it in what lighting conditions it will be used, and what kind of geometry (for example static / skinned / instanced) it will be fed. The engine would typically maintain an in-memory structure, like a hash map, of the permutations (e.g. "shadowed directional light + skinning") it has already compiled for that particular shader.
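A stripped-down sketch of such a permutation cache (not the actual Urho3D code; ShaderVariation and CompileShader here are placeholders) could look like this:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

struct ShaderVariation { std::string defines; /* compiled program handle etc. */ };

// Placeholder: a real implementation would invoke the GLSL/HLSL compiler here
// with the given defines prepended to the shader source.
std::shared_ptr<ShaderVariation> CompileShader(const std::string& sourceFile,
                                               const std::string& defines)
{
    (void)sourceFile;
    return std::make_shared<ShaderVariation>(ShaderVariation{defines});
}

class Shader
{
public:
    explicit Shader(std::string sourceFile) : sourceFile_(std::move(sourceFile)) {}

    // Return the permutation for this set of defines (e.g. "DIRLIGHT SHADOW SKINNED"),
    // compiling and caching it on first use.
    std::shared_ptr<ShaderVariation> GetVariation(const std::string& defines)
    {
        auto it = variations_.find(defines);
        if (it != variations_.end())
            return it->second;
        auto variation = CompileShader(sourceFile_, defines);
        variations_[defines] = variation;
        return variation;
    }

private:
    std::string sourceFile_;
    std::unordered_map<std::string, std::shared_ptr<ShaderVariation>> variations_;
};
```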


Naturally, compilation defines are not the whole story; in this kind of system you also need a convention for uniforms, for example that the view-projection matrix will be called "ViewProjMatrix", so that the engine knows to look for that uniform and set it according to the current camera when rendering.
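In OpenGL terms (just one concrete illustration; the same idea applies to D3D constant buffers), the convention boils down to something like this sketch, with plain GL calls and error checking omitted:

```cpp
#include <GL/glew.h>

struct CameraUniforms
{
    GLint viewProjMatrix = -1;
};

// After linking, look up the conventionally named uniform once and remember it.
CameraUniforms FindCameraUniforms(GLuint program)
{
    CameraUniforms uniforms;
    uniforms.viewProjMatrix = glGetUniformLocation(program, "ViewProjMatrix");
    return uniforms;
}

// Each frame, set it from the current camera's view-projection matrix.
void SetCameraUniforms(const CameraUniforms& uniforms, const float viewProj[16])
{
    if (uniforms.viewProjMatrix != -1)
        glUniformMatrix4fv(uniforms.viewProjMatrix, 1, GL_FALSE, viewProj);
}
```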


Note: I don't insist this is the best way to do things, just the one I'm used to.


#5218437 Link shader code to program

Posted by AgentC on 23 March 2015 - 06:22 AM

Is this an acceptable approach for big engines as well? It seems a bit "unprofessional" to guess that there will always be N attributes and X samplers in the shader program, and that all those pointers are set in a model loading system which isn't directly connected to the shader code.


Some possible approaches would be:


- The engine just takes in model and shader data and it doesn't care what attribute or sampler index is which. The user of the engine is responsible for loading data that makes sense. In this case the model data format could have a vertex declaration table, where you specify the format & semantic for each vertex element, and these get bound to the attributes in order.


- The engine specifies a convention, for example "attribute index 0 is always position and it's always a float vector3" or "texture unit 0 is always diffuse map". In this case the model data doesn't need to contain a full vertex declaration, but it's enough to have e.g. a bitmask indicating "has positions" or "has normals".
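As a sketch of the second approach (the attribute indices and mask values here are made-up examples, not from any specific engine):

```cpp
#include <cstdint>

// Convention: these attribute indices and formats are fixed across all shaders.
enum VertexAttribute : unsigned
{
    ATTR_POSITION = 0,   // always float3
    ATTR_NORMAL   = 1,   // always float3
    ATTR_TEXCOORD = 2    // always float2
};

// The model data only needs to store a mask of which elements are present.
enum ElementMask : uint32_t
{
    MASK_POSITION = 1u << 0,
    MASK_NORMAL   = 1u << 1,
    MASK_TEXCOORD = 1u << 2
};

// Thanks to the convention, vertex size can be derived from the mask alone.
unsigned VertexSize(uint32_t mask)
{
    unsigned size = 0;
    if (mask & MASK_POSITION) size += 3 * sizeof(float);
    if (mask & MASK_NORMAL)   size += 3 * sizeof(float);
    if (mask & MASK_TEXCOORD) size += 2 * sizeof(float);
    return size;
}
```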

#5216844 How to limit FPS (without vsync)?

Posted by AgentC on 16 March 2015 - 08:22 AM

Graphics APIs do a thing called "render-ahead", where depending on the CPU time taken to submit the draw call data to the API, and other per-frame processing, the CPU may run ahead of the GPU, meaning that after you submit draw calls for frame x, the GPU is only beginning to draw frame x - 2, for example. This results in worse apparent input lag, because the visible results of input, like camera movement, reach the screen delayed. Vsync makes the situation worse: the maximum number of frames to buffer is fixed (typically 3), but with vsync enabled there's more time between frames.


There are ways to combat the render-ahead: on D3D9 (and I guess OpenGL as well) you can manually issue a GPU query and wait on it, effectively keeping the CPU->GPU pipe flushed. This will worsen performance, though. On D3D11 you can use the API call IDXGIDevice1::SetMaximumFrameLatency().
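On D3D11 that call looks roughly like this (a sketch; error handling trimmed, and the device is assumed to be already created):

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Limit how many frames the CPU is allowed to queue ahead of the GPU.
void LimitFrameLatency(ID3D11Device* device, UINT maxFrames)
{
    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice))))
    {
        // 1 = at most one queued frame; trades some throughput for lower input lag.
        dxgiDevice->SetMaximumFrameLatency(maxFrames);
        dxgiDevice->Release();
    }
}
```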

#5208019 Light-weight render queues?

Posted by AgentC on 01 February 2015 - 05:15 AM

Is it a sound idea to do view frustum culling for all 6 faces of a point light? For example, my RenderablePointLight has a collection of meshes for each face.


Is this about a shadow-casting point light which renders a shadow map for each face?


If your culling code has to walk the entire scene, or a hierarchical acceleration structure (such as a quadtree or octree), it will likely be faster to do one spherical culling query first to get all the objects associated with any face of the point light, then test those against the individual face frustums. Profiling will reveal whether that's the case.
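In rough C++ terms the two-stage query could look like this (Octree, Sphere, Frustum and Drawable are placeholder types with stubbed-out tests, not a specific engine's API):

```cpp
#include <vector>

struct Drawable { /* bounding volume etc. */ };
struct Sphere   { float center[3]; float radius; };

struct Frustum
{
    // Placeholder; a real test checks the drawable's bounds against six planes.
    bool Intersects(const Drawable&) const { return true; }
};

struct Octree
{
    // Placeholder; a real query walks the hierarchy once and returns
    // everything touching the sphere.
    std::vector<Drawable*> QuerySphere(const Sphere&) const { return {}; }
};

std::vector<std::vector<Drawable*>> CollectShadowCasters(
    const Octree& octree, const Sphere& lightSphere, const Frustum faceFrustums[6])
{
    // Stage 1: a single spherical query covering the whole point light.
    std::vector<Drawable*> candidates = octree.QuerySphere(lightSphere);

    // Stage 2: cheap per-face frustum tests on the already reduced candidate list.
    std::vector<std::vector<Drawable*>> perFace(6);
    for (int face = 0; face < 6; ++face)
        for (Drawable* drawable : candidates)
            if (faceFrustums[face].Intersects(*drawable))
                perFace[face].push_back(drawable);
    return perFace;
}
```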


If it's not a shadow-casting light, you shouldn't need to bother with faces, but just do a single spherical culling query to find the lit objects.

#5203292 OpenGL and OO approaches

Posted by AgentC on 10 January 2015 - 10:30 AM

I see the utility, I just don't see the point in stopping the abstraction there, at such a trivial level, if I'm building it. If somebody else has already done it for me, that's fine.
But I don't really want to constantly have to bother with the minutiae of buffers/textures/shaders/programs/whatever on an individual basis, so rather than wrap them in "IVertexBuffer" with "D3DVertexBuffer" and "OpenGLVertexBuffer" implementations all over, I'd put them in a more abstract representation. Perhaps a thing that deals with them together as a submittable entry for the case of mesh-like data, or doesn't bother me with the individual shaders and program object that make up a material, or whatever.
I don't think that the boundary between game logic and render logic needs to be fraught with all of the intricacies of graphics programming (especially given the horrible things I've seen some gameplay programmers do with a rendering API with such a granular surface area), so I'd rather go all the way out to that level of abstraction than bother with the individual API objects. This can also help in the (admittedly few, these days) scenarios where there are not simple 1:1 correspondences between API objects.


IMO, the usefulness of the low-level abstraction depends on whether you want to support multiple rendering APIs, and on whether your game is going to use effects where advanced programmers need efficient access to low-level constructs like vertex buffers. If the answer to both is yes, you probably save the most engineering effort by porting just a minimal low-level abstraction (texture, shader, buffer, graphics context) to each rendering API, and allowing the programmers to use that low-level abstraction where necessary. By all means there should also be a higher-level abstraction (mesh, material) built on top of the low-level one.


For example, I've been working on a D3D11 / OpenGL 3+ renderer, and the only places where the low-level API differences "leak" to the higher level are the generation of the camera projection matrix, and the vertical flipping of the camera projection when rendering to a texture on OpenGL, so that both APIs can address the rendered texture in the same way. I consider this fairly successful.
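The flip itself amounts to negating the projection row that produces clip-space Y; a small sketch (Matrix4 here is a placeholder row-major type, not the engine's actual math class):

```cpp
struct Matrix4 { float m[4][4]; };

// Flip the projection vertically when rendering to a texture on OpenGL, so the
// result can be sampled with the same texture coordinates as on D3D11.
Matrix4 AdjustProjection(Matrix4 projection, bool openGL, bool renderingToTexture)
{
    if (openGL && renderingToTexture)
    {
        // Negate the row that produces clip-space Y to flip the image vertically.
        for (int i = 0; i < 4; ++i)
            projection.m[1][i] = -projection.m[1][i];
    }
    return projection;
}
```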