dave j

  1. CRT shader

    I wrote the crt-pi shader so might be able to offer some advice. Firstly, all the RetroArch CRT shaders, including CRT-Royale, are written by hobbyists. The Super Win the Game and CRT-Royale shaders mentioned in your post implement a lot of features and are correspondingly complicated, but you can still get good results from a simpler shader. The real time sinks in shader writing are doing complicated things or getting simple shaders to run fast on limited hardware (e.g. Raspberry Pi). If you write a simple shader assuming a more capable GPU it should be a lot easier. Some shaders go to great lengths to emulate what happens in the CRT tube to produce a theoretically accurate result, but what really matters is what the end result looks like. Cheap hacks that look good are fine.

    A list of things you'll need to consider:
      • Knowing where the pixel you're rendering is in relation to the centre of the source pixel is crucial for getting blending between pixels right.
      • There are two approaches to blending between pixels: a) sample multiple source pixels and blend them in the shader, or b) tweak the texture coordinates you use to sample the source texture and let the hardware's bilinear filtering do the blending. The former allows more complicated filtering; the latter is faster.
      • NTSC colour artefacts are difficult to implement properly. RetroArch shaders use a separate pass, and I believe the (S)NES Classic Minis use the CPU for this and the GPU only for upscaling. You might want to leave this for a subsequent version.
      • Old PCs and games consoles don't have a 1:1 pixel aspect ratio when displayed on a 4:3 screen - this might not be an issue for your game.
      • If you scale up by an integer multiple of the vertical resolution you won't have to worry about blending between two source lines. That blending can be difficult if you want even scan lines, and if you implement curvature you can't have an integer scale factor across the whole line, so getting even curved scan lines is harder still.
      • Horizontal blending of pixels can be as simple or as complicated as you want. Some RetroArch shaders use Lanczos filtering; crt-pi uses linear filtering and relies on the shadow mask and bloom emulation to disguise the fact.
      • For shadow mask emulation, Trinitron-style aperture grilles are easiest to implement: you just tint every 3rd pixel along a line red, green or blue and don't have to bother about your vertical position. Other masks, such as slot masks, are more complicated (you need to take vertical position into account), but using them lets you avoid implementing scan lines, since slot masks largely hide the fact that you haven't got any.
      • Scan lines bloom (get wider the brighter they are). Ideally you would do this for each colour channel separately, with a formula that expands them by the correct amount; crt-pi just multiplies all the colour channels by 1.5.

    Decide what features you really have to have for your first implementation - you can always add more later. For crt-pi, I was limited by the slow GPU and the fact that I wanted to scale to 1080 pixels vertically whilst still maintaining 60 FPS, so I aimed for something reminiscent of a 14" CRT fed RGB via a SCART socket (which is what I was used to back in the day).
  2. I haven't got any suggestions for solutions to the precision issue, but if your PC graphics card supports 16-bit floats you can use them for testing: 16-bit floats have 10 bits of mantissa, which is the same precision supported by the Mali 400 series GPU in your TV box. You have to define your fragment shader variables as float16_t (or the equivalent vector types).
  3. WebGL ES 3.0

    WebGL 2.0 is based on OpenGL ES 3.0, not Vulkan. You can find the WebGL 2.0 draft specification here.
  4. You should have a union of structures, not a union of unions. The latter will overlay all the variables in the same storage.
  5. Kylotan's suggestion of using unions is the way to go. You can see an example of how to use them for the sort of thing you are wanting to do by looking at SDL's event handling - specifically SDL_events.h. Look at the SDL_Event union typedef at line 525.
  6. It is pretty much calculating the shear and scaling of the corner points. It might help to think of what happens to the sprite's bounding box when it's transformed. Below is the sprite transformation animation from the blog with the bounding box of a sprite shown. The steps it shows are:
    1. Isometric view on the isometric grid.
    2. Move the bottom right point to match its distance (i.e. the perspective-distorted grid).
    3. Scale the height to match the height of the right edge at that distance.
    4. Move the bottom left point to match its distance.
    5. Scale the height of the left edge to match that distance.
    Note the top and bottom edges are not parallel in step 5. This matches the following image from the blog. Calculating the position of the bottom right point and the height of the right edge is relatively easy. Calculating the bottom left point will be difficult, since you really want to match the position of a point some way above it to a position on the ground; it might be easier to calculate the top left point and the left edge height instead. (Adjust as appropriate for right-facing walls.) Whilst it might be difficult to implement all this in a shader, I wouldn't bother trying, at least until you've got it working on the CPU. Remember, this game was released before GPUs had shaders, and CPUs were much slower at the time too. You shouldn't have performance problems doing the maths on the CPU and passing all the quads to the GPU for rendering.
  7. The page on SNES Mode7 he links to early on gives a clue in the section on transformations: They alter the scaling of the sprites depending on how far away from the view point each row appears on screen. Think about drawing the ground tile diamonds. The scale at the nearest point is going to be larger than the scale at the farthest point. Scaled in this way, the quad you use to draw your tile will look more like a trapezium rather than a rectangle. The good news is, if you are using 3D hardware to draw your sprites, you only need to work out the scaling at the corners and the hardware will take care of the rest. The bit about sprite orientation for walls seems to be to allow different hacks depending on whether the left or right of a sprite is nearer the viewer.
  8. It's worthwhile understanding the differences between immediate-mode (usually desktop) and tile-based (usually mobile) GPUs and how they affect performance. Even if you are only targeting desktop systems, it's still useful, because desktop GPUs are beginning to implement some tiling features, and techniques that work well on tile-based GPUs may be able to take advantage of Vulkan's render passes.
  9. Miguel Cepero's Procedural World blog has lots of discussion about the technology behind his Voxel Farm product. It might not go into enough detail for you but will provide a jumping off point for further research.
  10. It's the amount of illumination emitted by a light source and is something that you define for each light. It's explained in section 5.2.
  11. GNU ownership, Software - an everywhere epidemic

    I don't get people's hatred of GPL. If you don't like the terms of the licence it's simple - don't use the software. Same as it is with any other software. Comparing it to BSD and other more permissive licences just smacks of whining "They let me use their software as I want without paying, why don't you?". The people making such comments never seem to complain that proprietary software doesn't allow that either.
  12. GNU ownership, Software - an everywhere epidemic

    The proportion of developers producing closed source software is far greater than that producing open source software so it would be strange if that was not the case. Even so, as others have mentioned, open source is all over the place even in commercial products as middleware and embedded stuff.
  13. My first Arduino nice

    Pis are the spiritual successor to the BBC Micro, which had lots of I/O ports, not the Spectrum. You're not the target market for a Pi. Think of a 12 year old kid who wants to try programming and has been inspired by seeing some devices people have built with sensors. Her non-technical parents won't let her plug a circuit she's made into a computer costing several $100s[1] in case she breaks it, but persuading them to let her try it with one costing a few $10s is much more likely. The fact that so many of them have been bought by middle aged men who used Beebs at school just means that the Raspberry Pi Foundation has had more money to spend on its educational initiatives. ;) Pis were originally envisaged to have Python as their main programming language, and most of the learning resources created by the Foundation are geared towards that language, with Scratch as an easier introduction for younger kids. For a programming novice, hardware programming on a Pi is easier than on an Arduino because of the development environments available. You can even do GPIO programming in Scratch now too - which has to be the most novice friendly way available. [1] Even if it provided access to GPIO ports - good luck finding a desktop or laptop that even does that.
  14. It's a bad idea. Think what you will have to do if you want multiple sizes of your model in your scene. You'll have to either:
    a. update every vertex of your model and draw from CPU memory, or upload it to the GPU every time you want to use it - both of which are slow; or
    b. keep several copies of your model stored in GPU memory - which isn't inherently slow, but you might prefer to use that memory for something else. (You could just prescale the x, y, z components in this case as well.)
    It's far better to pass a scale factor into your shader as a separate variable.
  15. Sorry. I didn't realize you wanted to use an already existing device. Your best bet will be to pick a suitable device and see if people have already figured out how to interface with it, its message formats, etc. Look for ones that have Linux support - you should be able to look at the driver source code to see how they work.