dave j

Member
  • Content Count: 438
  • Joined
  • Last visited

Community Reputation: 690 Good

About dave j

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming

  1. Remember, OpenGL errors are sticky - that GL_INVALID_OPERATION may not be from your call to glBindVertexBuffer. The glGetError documentation notes that once an error flag is set it is not cleared until glGetError is called. Check the value returned by glGetError immediately before you call glBindVertexBuffer. You might need to sprinkle calls to glGetError about your code to identify which function is actually generating the error (see the sketch below).
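    A minimal sketch of that kind of error draining - my illustration, not part of the original post - assuming a C program using GLEW (any GL loader would do) with a current GL context:

        #include <GL/glew.h>
        #include <stdio.h>

        /* Drain every pending error and report where we were when we found it,
         * so a stale error from an earlier call isn't blamed on the next one. */
        static void drain_gl_errors(const char *where)
        {
            GLenum err;
            while ((err = glGetError()) != GL_NO_ERROR)
                fprintf(stderr, "GL error 0x%04X noticed at %s\n", err, where);
        }

        /* Usage:
         *   drain_gl_errors("before glBindVertexBuffer");
         *   glBindVertexBuffer(0, vbo, 0, stride);
         *   drain_gl_errors("after glBindVertexBuffer");
         */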
  2. It's an array constructor, so the contents of the () are just the elements of the array. The [ptn] at the end indexes one of those elements, which is why q is an int.
  3. Look at issue 9: if you have both EXT_texture_sRGB and EXT_texture_compression_s3tc you should have support for compressed sRGB textures. Looking in glext.h, it looks like GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT is the one you want (see the sketch below).
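    A minimal sketch of checking for those extensions and uploading pre-compressed DXT5 data with that format - my illustration, not from the original post; it assumes GLEW and that dxt5_data/data_size describe a valid DXT5 image:

        #include <GL/glew.h>

        GLuint upload_srgb_dxt5(int width, int height,
                                const void *dxt5_data, GLsizei data_size)
        {
            GLuint tex = 0;

            /* Both extensions need to be present for compressed sRGB textures. */
            if (!GLEW_EXT_texture_sRGB || !GLEW_EXT_texture_compression_s3tc)
                return 0;

            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                   GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT,
                                   width, height, 0, data_size, dxt5_data);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            return tex;
        }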
  4. dave j

    Programming game using Python

    The Raspberry Pi people have produced a free ebook called Make Games with Python which uses pygame and doesn't use anything Raspberry Pi specific. It starts off with very basic stuff and progresses to a simple space invaders type game but since you did describe yourself as a total beginner you might find it useful. They have lots of other stuff that also includes some game related content but you'd need to search through for it - there's a risk you might get sidetracked into wanting to do hardware projects too. ;)
  5. Memory faults causing different behaviour on different runs are to be expected in all but the most trivial of applications. When you write past the end of an array you write over whatever was after it in memory. If the array is on the stack then it will usually be the same object that gets overwritten every time. If it is on the heap it may be an unused bit of heap, in which case it may not cause a problem, or it may be another allocated object, which will then be corrupted. In the latter case you might overwrite it with the value that was already there, with an invalid value that causes another part of the program to crash, or with a valid value that simply wasn't the one that was there before - which can be really hard to debug. You may even overwrite an internal heap data structure, in which case everything might appear OK until you next try to allocate or free heap memory. Beyond that, there are techniques such as address space layout randomization that can cause further variations in behaviour resulting from memory corruption.
     As others have said, learn how to identify where the problem is occurring. Even just running your application under a debugger - so that it stops at the line of code where the problem occurred, with all your data structures available to check that they contain what you expect - is a useful start. The sketch below shows the kind of out-of-bounds write being described.
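    A minimal sketch of such a bug - my illustration, not from the original post. Writing one element past the end of buf is undefined behaviour; on many compilers it silently corrupts whatever happens to sit next to the array, which is why the symptoms vary between builds and runs:

        #include <stdio.h>

        int main(void)
        {
            int sentinel = 42;          /* an unrelated variable that may sit next to buf */
            int buf[4] = {0, 1, 2, 3};

            for (int i = 0; i <= 4; i++)    /* bug: should be i < 4 */
                buf[i] = i * 10;

            /* Depending on the stack layout this may print 40 instead of 42,
             * crash, or appear to work - exactly the variability described above. */
            printf("sentinel = %d\n", sentinel);
            return 0;
        }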
  6. People here might be interested in the Raspberry Pi Foundation's new games magazine: Wireframe: a new games magazine with a difference. The content will probably be focussed more towards the beginner end of things, with some more advanced material - similar to the mix of more general computing content in their The MagPi magazine. Like The MagPi, PDF copies will be downloadable for free.
  7. Working on open source projects is not meaningless. Contributing to a well known open source project, particularly one where submissions pass a rigorous review process before inclusion, is a more reliable way of demonstrating to potential employers that you are worth employing than a failed work for equity project. Even if a work for equity project is successful, they still wouldn't know whether it's really a hacked together, unmaintainable mess underneath - something they could easily check with an open source project. (That's code focussed; for art assets you could consider things like whether the work reuses existing textures or fits within triangle budgets.) Contributing to open source projects also has the very big advantage that you don't have to commit to working 50-60 hours a week and living off ramen for however long the project takes.
  8. dave j

    Assembly language?

    Like Alberth, I skipped 16 bit x86 and went for ARM on an Archimedes, although I'd done 6502 on a Beeb before. If the OP is willing to try ARM rather than x86, a Raspberry Pi would be a good solution. There are lots of tutorials available on assembler on the Pi, and the hardware means you can just grab a pointer to the screen buffer and start writing to it (the sketch below shows the same idea in C).
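    A minimal sketch of "grab a pointer to the screen buffer and write to it" - my illustration in C rather than ARM assembler, assuming a Linux framebuffer console exposed as /dev/fb0 on the Pi:

        #include <fcntl.h>
        #include <linux/fb.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/fb0", O_RDWR);
            if (fd < 0)
                return 1;

            /* Ask the kernel how big the screen buffer is. */
            struct fb_var_screeninfo vinfo;
            if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0)
                return 1;
            size_t size = (size_t)vinfo.yres_virtual * vinfo.xres_virtual
                          * vinfo.bits_per_pixel / 8;

            /* Map it and scribble on it directly. */
            uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
            if (fb == MAP_FAILED)
                return 1;
            for (size_t i = 0; i < size; i++)
                fb[i] = 0x80;           /* fill the screen with mid grey */

            munmap(fb, size);
            close(fd);
            return 0;
        }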
  9. Flash reads can have a latency of up to 7 wait states, but the flash memory is 128 bits wide and there is an eight line cache for flash data reads (it has to be enabled programmatically). 128 bits is 8 x 16 bit pixels, so any hit from flash latency will be shared across several pixels. I doubt it will be significant.
  10. It depends on what he's going to be doing. The board mentioned has 2 Mbytes of flash on the MCU - which, even with code, should be enough for a 16 bit console style game, for instance.
  11. Use 8 bit colour mode: the built in display controller supports 8 bits per pixel with a palette, so you could halve the memory requirements by switching to 8 bit mode if you can limit your sprites to 256 colours from a single palette.
      Use paletted sprites: you could stick with a 16 bits per pixel screen and store the sprites with fewer colours but with a palette for each sprite. This would cost a palette lookup per pixel but would mean you needed fewer bits per pixel. You'd still get the full range of colours overall, but each individual sprite would be limited. Importantly, this scheme is supported by the hardware 2D accelerator I mentioned in my earlier post, with both 8 and 4 bits per pixel (a software version is sketched below).
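    A minimal sketch of the per-sprite palette idea in software - my illustration, not from the original post. It assumes 8 bit indices per sprite pixel, an RGB565 framebuffer, and (purely for the example) that index 0 means transparent:

        #include <stdint.h>

        /* fb_stride and w are in pixels; palette holds RGB565 colours. */
        void blit_paletted_sprite(uint16_t *fb, int fb_stride,
                                  const uint8_t *sprite, int w, int h,
                                  const uint16_t palette[256],
                                  int dst_x, int dst_y)
        {
            for (int y = 0; y < h; y++) {
                uint16_t *dst = fb + (dst_y + y) * fb_stride + dst_x;
                const uint8_t *src = sprite + y * w;
                for (int x = 0; x < w; x++) {
                    uint8_t index = src[x];
                    if (index != 0)                 /* assumed colour key */
                        dst[x] = palette[index];    /* one palette lookup per pixel */
                }
            }
        }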
  12. Picking a board with a built in screen was a good idea - things get much more awkward, and slower, if you have to support the screen 'manually'. You'll definitely want to look at the reference manual for the STM32F429. The STM32Cube libraries have lots of example code you can get ideas from. Interestingly, that microcontroller has a hardware 2D accelerator. You probably want to do things in software to start with though. If you look at how the 2D engine works, you can write your software implementation so it can easily be switched over to the hardware later (one way of structuring that is sketched below).
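    One way of keeping that option open, sketched as an illustration (mine, not from the post): hide the drawing operations behind a small table of function pointers so the software blitter can later be replaced by routines that drive the 2D accelerator. The hardware path is deliberately left out here:

        #include <stdint.h>

        typedef struct {
            void (*fill_rect)(uint16_t *fb, int stride,
                              int x, int y, int w, int h, uint16_t colour);
        } blitter_ops;

        /* Plain software implementation - works anywhere. */
        static void sw_fill_rect(uint16_t *fb, int stride,
                                 int x, int y, int w, int h, uint16_t colour)
        {
            for (int row = 0; row < h; row++) {
                uint16_t *dst = fb + (y + row) * stride + x;
                for (int col = 0; col < w; col++)
                    dst[col] = colour;
            }
        }

        static const blitter_ops software_blitter = { sw_fill_rect };

        /* Later, a second blitter_ops can point at functions that program the
         * 2D accelerator instead; game code only ever calls ops->fill_rect(). */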
  13. dave j

    CRT shader

    I wrote the crt-pi shader so might be able to offer some advice. Firstly, all the Retroarch CRT shaders, including CRT-Royale, are written by hobbyists. The Super Win the Game and CRT-Royale shaders mentioned in your post implement a lot of features and are correspondingly complicated, but you can still get good results by implementing a simpler shader. The real time sinks when writing shaders are doing complicated things or getting simple shaders working fast on limited hardware (e.g. Raspberry Pi). If you write a simple shader assuming a more capable GPU it should be a lot easier. Some shaders go to great lengths to emulate what happens in the CRT tube to produce a theoretically accurate result, but what really matters is what the end result looks like. Cheap hacks that look good are fine.

    A list of things you'll need to consider:

    - Knowing where the pixel you're rendering is in relation to the centre of the source pixel is crucial for getting blending between pixels right.
    - There are two approaches to blending between pixels: a) sample multiple source pixels and blend them in the shader; b) tweak the texture coordinates you use to sample the source texture and use the hardware's bilinear filtering to do the blending. The former allows you to do more complicated filtering, the latter is faster.
    - NTSC colour artefacts are difficult to implement properly. Retroarch shaders use a separate pass, and I believe the (S)NES Classic Minis use the CPU for this and just use the GPU for upscaling. You might want to leave this for a subsequent version.
    - Old PCs and games consoles don't have a 1:1 pixel aspect ratio when displayed on a 4:3 screen - this might not be an issue for your game.
    - If you scale up by an integer multiple of the vertical resolution you won't have to worry about blending between two source lines. Blending between two source lines can be difficult if you want to have even scan lines. If you implement curvature you can't have an integer multiple scale factor across the whole line, and getting even curved scan lines is even harder to achieve.
    - Horizontal blending of pixels can be as simple or as complicated as you want it. Some Retroarch shaders use Lanczos filtering. crt-pi uses linear filtering and relies on the shadow mask and bloom emulation to disguise the fact.
    - For shadow mask emulation, Trinitron-style aperture grilles are easiest to implement: you just tint every 3rd pixel along a line red, green or blue and don't have to bother about your vertical position. Other shadow masks are more complicated (you need to take vertical position into account) but you can avoid having to implement scan lines if you use them. (Slot masks largely hide the fact you haven't got scan lines.)
    - Scan lines bloom (get wider the brighter they are). Ideally you should do this for each colour channel separately and use some complicated formula so that they expand by the correct amount. crt-pi just multiplies all the colour channels by 1.5.

    Decide what features you really have to have for your first implementation - you can always add more later. For crt-pi, I was limited by the slow GPU and the fact that I wanted to scale to 1080 pixels vertically whilst still maintaining 60 FPS, so I aimed for something reminiscent of a 14" CRT fed RGB via a SCART socket (which is what I was used to back in the day). A rough sketch of two of these effects follows this post.
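    A rough CPU sketch of two of the effects described above (aperture grille tint plus scan lines with the crude multiply-by-1.5 bloom) - my illustration, not crt-pi itself; in a real implementation this per-pixel maths would live in a fragment shader:

        #include <stdint.h>

        static uint8_t clamp_u8(float v) { return v > 255.0f ? 255 : (uint8_t)v; }

        /* img is an upscaled RGB888 image, width x height pixels; scale is the
         * integer vertical scale factor, so each source scan line covers
         * `scale` output rows. */
        void apply_crt_look(uint8_t *img, int width, int height, int scale)
        {
            for (int y = 0; y < height; y++) {
                /* 1.0 at the centre of a scan line, falling towards 0 at its edges. */
                float phase = (float)(y % scale) / (float)scale;
                float dist  = phase < 0.5f ? phase : 1.0f - phase;
                float scan  = 1.0f - 2.0f * dist;

                for (int x = 0; x < width; x++) {
                    uint8_t *p = img + (y * width + x) * 3;

                    /* Aperture grille: emphasise one colour channel per column. */
                    float tint[3] = { 0.6f, 0.6f, 0.6f };
                    tint[x % 3] = 1.0f;

                    for (int c = 0; c < 3; c++) {
                        /* Scan line weight plus crude bloom: brighten by 1.5 and
                         * clamp, so bright areas fill in the gaps more. */
                        float v = p[c] * tint[c] * scan * 1.5f;
                        p[c] = clamp_u8(v);
                    }
                }
            }
        }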
  14. I haven't got any suggestions for solving the precision issue, but if your PC graphics card supports 16 bit floats you can use them for testing: 16 bit floats have 10 bits of mantissa, which is the same precision supported by the Mali 400 series GPU in your TV box. You have to define your fragment shader variables as float16_t (or the equivalent vector types).
  15. dave j

    WebGL ES 3.0

    WebGL 2.0 is based on OpenGL ES 3.0, not Vulkan. You can find the WebGL 2.0 draft specification here.