dave j


  1. People here might be interested in the Raspberry Pi Foundation's new games magazine: Wireframe: a new games magazine with a difference. The content will probably be focussed more at the beginner end of things, with some more advanced material - similar to the mix of general computing content in their The MagPi magazine. Like The MagPi, PDF copies will be downloadable for free.
  2. Working on open source projects is not meaningless. Contributing to a well known open source project, particularly one where submissions pass a rigorous review process before inclusion, is a more reliable way of demonstrating to potential employers that you are worth employing than a failed attempt at a work-for-equity project. Even if a work-for-equity project is successful, employers still wouldn't know whether it's really a hacked together, unmaintainable mess underneath - something they could easily check with an open source project. (That's code focussed; for art assets you could consider things like whether they reuse existing textures or fit within triangle budgets.) Contributing to open source projects also has the very big advantage that you don't have to commit to working 50-60 hours a week and living off ramen for however long the project takes.
  3. dave j

    Assembly language?

    Like Alberth, I skipped 16 bit x86 and went for ARM on an Archimedes, although I'd done 6502 on a Beeb before. If the OP is willing to try ARM rather than x86, a Raspberry Pi would be a good solution. There are lots of tutorials on assembler on the Pi available and the hardware means you can just grab a pointer to the screen buffer and start writing to it.
  4. Flash reads can have a latency of up to 7 wait states, but the flash memory is 128 bits wide and there is an eight-line cache for flash data reads (it has to be enabled programmatically). 128 bits is 8 x 16 bit pixels, so any hit from flash latency will be shared across several pixels. I doubt it will be significant.
  5. It depends on what he's going to be doing. The board mentioned contains 2 Mbytes of flash on the MCU - which, even with code, should be enough for a 16 bit console style game, for instance.
  6. Use 8 bit colour mode: the built in display controller supports 8 bits per pixel with a palette, so you could halve the memory requirements by switching to 8 bit mode if you can limit your sprites to 256 colours from a single palette. Use paletted sprites: you could stick with a 16 bits per pixel screen and store the sprites with fewer colours but with a palette for each sprite. This would cost a palette lookup per pixel but would mean you needed fewer bits per pixel. You'd still get the full range of colours overall, but each sprite would be limited. Importantly, this scheme is supported by the hardware 2D accelerator I mentioned in my earlier post, with both 8 and 4 bits per pixel.
  7. Picking a board with a built in screen was a good idea - things get much more awkward, and slower, if you have to support the screen 'manually'. You'll definitely want to look at the reference manual for the STM32F429. The STM32Cube libraries have lots of example code you can get ideas from. Interestingly, that microcontroller has a hardware 2D accelerator. You probably want to do things in software to start with though. If you look at how the 2D engine works, you can write your software implementation so it can be easily changed to work with the hardware later.
  8. dave j

    CRT shader

    I wrote the crt-pi shader so might be able to offer some advice. Firstly, all the Retroarch CRT shaders, including CRT-Royale, are written by hobbyists. The Super Win the Game and CRT-Royale shaders mentioned in your post implement a lot of features and are correspondingly complicated, but you can still get good results by implementing a simpler shader. The real time sinks when writing shaders are doing complicated things or getting simple shaders working fast on limited hardware (e.g. Raspberry Pi). If you write a simple shader assuming a more capable GPU it should be a lot easier. Some shaders go to great lengths to emulate what happens in the CRT tube to produce a theoretically accurate result, but what really matters is what the end result looks like. Cheap hacks that look good are fine.

    A list of things you'll need to consider:

    - Knowing where the pixel you're rendering is in relation to the centre of the source pixel is crucial for getting blending between pixels right.
    - There are two approaches to blending between pixels: a) sample multiple source pixels and blend them in the shader; b) tweak the texture coordinates you use to sample the source texture and let the hardware's bilinear filtering do the blending. The former allows more complicated filtering; the latter is faster.
    - NTSC colour artefacts are difficult to implement properly. Retroarch shaders use a separate pass, and I believe the (S)NES Classic Minis use the CPU for this and just use the GPU for upscaling. You might want to leave this for a subsequent version.
    - Old PCs and games consoles don't have a 1:1 pixel aspect ratio when displayed on a 4:3 screen - this might not be an issue for your game.
    - If you scale up by an integer multiple of the vertical resolution you won't have to worry about blending between two source lines. Blending between two source lines can be difficult if you want to have even scan lines.
    - If you implement curvature you can't have an integer multiple scale factor across the whole line, and getting even curved scan lines is even harder to achieve.
    - Horizontal blending of pixels can be as simple or complicated as you want it. Some Retroarch shaders use Lanczos filtering; crt-pi uses linear filtering and relies on the shadow mask and bloom emulation to disguise the fact.
    - For shadow mask emulation, Trinitron-style aperture grilles are easiest to implement: you just tint every 3rd pixel along a line red, green or blue, without having to bother about your vertical position. Other mask types are more complicated (you need to take vertical position into account) but you can avoid having to implement scan lines if you use them. (Slot masks largely hide the fact you haven't got scan lines.)
    - Scan lines bloom (get wider the brighter they are). Ideally you should do this for each colour channel separately and use some complicated formula so that they expand by the correct amount; crt-pi just multiplies all the colour channels by 1.5.

    Decide what features you really have to have for your first implementation - you can always add more later. For crt-pi, I was limited by the slow GPU and the fact that I wanted to scale to 1080 pixels vertically whilst still maintaining 60 FPS, so I aimed for something reminiscent of a 14" CRT fed RGB via a SCART socket (which is what I was used to back in the day).
  9. I haven't got any suggestions for solutions to the precision issue but if your PC graphics card supports 16 bit floats you can use that for testing as 16 bit floats have 10 bits for the mantissa - which is the same precision supported by the Mali 400 series GPU in your TV box. You have to define your fragment shader variables as float16_t (or equivalent vectored version).
  10. dave j

    WebGL ES 3.0

    WebGL 2.0 is based on OpenGL ES 3.0, not Vulkan. You can find the WebGL 2.0 draft specification here.
  11. You should have a union of structures, not a union of unions. The latter will put all the variables in the same space.
  12. Kylotan's suggestion of using unions is the way to go. You can see an example of how to use them for the sort of thing you are wanting to do by looking at SDL's event handling - specifically SDL_events.h. Look at the SDL_Event union typedef at line 525.
  13. It is pretty much calculating the shear and scaling of the corner points. It might help to think of what happens to the sprite's bounding box when it's transformed. Below is the sprite transformation animation from the blog with the bounding box of a sprite shown. The steps it shows are:
    1. Isometric view on isometric grid.
    2. Move bottom right point to match distance (i.e. perspective distorted grid).
    3. Scale height to match height of right edge at that distance.
    4. Move bottom left point to match distance.
    5. Scale height of left edge to match that distance.
    Note the top and bottom edges are not parallel in step 5. This matches the following image from the blog. Calculating the position of the bottom right point and the height of the right edge are relatively easy. Calculating the bottom left point will be difficult since you really want to match the position of a point some way above it to a position on the ground. It might be easier to calculate the top left point and left edge height instead. (Adjust as appropriate for right facing walls.) Whilst it might be difficult to implement all this in a shader, I wouldn't bother trying, at least until you've got it working on the CPU. Remember, this game was released before GPUs had shaders, and CPUs were much slower at the time too. You shouldn't have performance problems doing the maths on the CPU and passing all the quads to the GPU for rendering.
  14. The page on SNES Mode7 he links to early on gives a clue in the section on transformations: They alter the scaling of the sprites depending on how far away from the view point each row appears on screen. Think about drawing the ground tile diamonds. The scale at the nearest point is going to be larger than the scale at the farthest point. Scaled in this way, the quad you use to draw your tile will look more like a trapezium rather than a rectangle. The good news is, if you are using 3D hardware to draw your sprites, you only need to work out the scaling at the corners and the hardware will take care of the rest. The bit about sprite orientation for walls seems to be to allow different hacks depending on whether the left or right of a sprite is nearer the viewer.
  15. It's worthwhile understanding the differences between immediate mode (usually desktop) and tile based (usually mobile) GPUs and how they affect performance. Even if you are only targeting desktop systems, it's still useful because desktop GPUs are beginning to implement some tiling features, and techniques that work well on tile based GPUs may be able to take advantage of Vulkan's renderpasses.