dave j

Community Reputation

682 Good

  1. I haven't got any suggestions for solving the precision issue, but if your PC graphics card supports 16-bit floats you can use them for testing, as 16-bit floats have 10 bits of mantissa - which is the same precision supported by the Mali 400 series GPU in your TV box. You have to declare your fragment shader variables as float16_t (or the equivalent vectored types).
  2. WebGL ES 3.0

    WebGL 2.0 is based on OpenGL ES 3.0, not Vulkan. You can find the WebGL 2.0 draft specification here.
  3. You should have a union of structures, not a union of unions. The latter will put all the variables in the same space.
  4. Kylotan's suggestion of using unions is the way to go. You can see an example of how to use them for the sort of thing you want to do by looking at SDL's event handling - specifically SDL_events.h. Look at the SDL_Event union typedef at line 525.
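
    A minimal sketch of the pattern SDL uses - the event types and struct names here are made up for illustration, not SDL's actual definitions:

    ```c
    #include <stdio.h>

    /* Hypothetical event types, modelled on SDL's approach. */
    typedef enum { EVENT_KEY, EVENT_MOUSE } EventType;

    typedef struct { EventType type; int keycode; } KeyEvent;
    typedef struct { EventType type; int x, y; } MouseEvent;

    /* Union of structures: every member struct starts with the same
       EventType field, so `type` can always be read to find out which
       member is actually stored. */
    typedef union {
        EventType  type;
        KeyEvent   key;
        MouseEvent mouse;
    } Event;

    int main(void) {
        Event e;
        e.mouse = (MouseEvent){ EVENT_MOUSE, 10, 20 };

        switch (e.type) {   /* dispatch on the common tag */
        case EVENT_KEY:
            printf("key %d\n", e.key.keycode);
            break;
        case EVENT_MOUSE:
            printf("mouse %d,%d\n", e.mouse.x, e.mouse.y);
            break;
        }
        return 0;
    }
    ```

    Only the members you overlay share storage; the common first field is what makes the dispatch safe.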
  5. It is pretty much calculating the shear and scaling of the corner points. It might help to think of what happens to the sprite's bounding box when it's transformed. Below is the sprite transformation animation from the blog with the bounding box of a sprite shown. The steps it shows are:

    1. Isometric view on the isometric grid.
    2. Move the bottom right point to match its distance (i.e. the perspective distorted grid).
    3. Scale the height to match the height of the right edge at that distance.
    4. Move the bottom left point to match its distance.
    5. Scale the height of the left edge to match that distance.

    Note the top and bottom edges are not parallel in step 5. This matches the following image from the blog. Calculating the position of the bottom right point and the height of the right edge is relatively easy. Calculating the bottom left point will be difficult since you really want to match the position of a point some way above it to a position on the ground. It might be easier to calculate the top left point and left edge height instead. (Adjust as appropriate for right facing walls.) Whilst it might be difficult to implement all this in a shader, I wouldn't bother trying, at least until you've got it working on the CPU. Remember, this game was released before GPUs had shaders, and CPUs were much slower at the time too. You shouldn't have performance problems doing the maths on the CPU and passing all the quads to the GPU for rendering.
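
    The per-edge scaling in steps 3 and 5 can be sketched like this, assuming a simple hypothetical perspective function focal / (focal + depth) - the blog's actual projection may well differ:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical perspective scale: points farther from the viewer
       shrink. `focal` is an assumed tuning constant. */
    static float depth_scale(float depth, float focal) {
        return focal / (focal + depth);
    }

    /* Given the depths of a wall sprite's bottom-left and bottom-right
       corners, compute the scale applied to each vertical edge.  The
       two scales differ, which is why the top and bottom edges of the
       drawn quad are not parallel. */
    static void edge_scales(float depth_left, float depth_right, float focal,
                            float *scale_left, float *scale_right) {
        *scale_left  = depth_scale(depth_left,  focal);
        *scale_right = depth_scale(depth_right, focal);
    }

    int main(void) {
        float sl, sr;
        /* right edge nearer the viewer than the left edge */
        edge_scales(8.0f, 4.0f, 16.0f, &sl, &sr);
        printf("right %.2f left %.2f\n", sr, sl);
        assert(sr > sl);   /* nearer edge is drawn taller */
        return 0;
    }
    ```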
  6. The page on SNES Mode 7 he links to early on gives a clue in the section on transformations: they alter the scaling of the sprites depending on how far away from the view point each row appears on screen. Think about drawing the ground tile diamonds. The scale at the nearest point is going to be larger than the scale at the farthest point. Scaled in this way, the quad you use to draw your tile will look more like a trapezium than a rectangle. The good news is, if you are using 3D hardware to draw your sprites, you only need to work out the scaling at the corners and the hardware will take care of the rest. The bit about sprite orientation for walls seems to be there to allow different hacks depending on whether the left or right edge of a sprite is nearer the viewer.
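
    The row scaling idea can be sketched like this, with an assumed scale function k / depth standing in for whatever projection the game actually uses:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical Mode 7-style row scale: rows closer to the viewer
       get a larger scale. `k` is an assumed constant. */
    static float row_scale(float depth, float k) {
        return k / depth;
    }

    int main(void) {
        float k = 100.0f;
        /* a 64-unit-wide ground tile whose near edge is at depth 50
           and whose far edge is at depth 100 */
        float near_w = 64.0f * row_scale(50.0f,  k);
        float far_w  = 64.0f * row_scale(100.0f, k);

        printf("near %.0f far %.0f\n", near_w, far_w);
        /* the quad is wider at the near edge: a trapezium, not a
           rectangle - the hardware interpolates everything between
           the corners */
        assert(near_w > far_w);
        return 0;
    }
    ```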
  7. It's worthwhile understanding the differences between immediate mode (usually desktop) and tile based (usually mobile) GPUs and how they affect performance. Even if you are only targeting desktop systems it's still useful, because desktop GPUs are beginning to implement some tiling features, and techniques that work well on tile based GPUs may be able to take advantage of Vulkan's render passes.
  8. Miguel Cepero's Procedural World blog has lots of discussion about the technology behind his Voxel Farm product. It might not go into enough detail for you but will provide a jumping off point for further research.
  9. It's the amount of illumination emitted by a light source and is something that you define for each light. It's explained in section 5.2.
  10. GNU ownership, Software - an everywhere epidemic

    I don't get people's hatred of GPL. If you don't like the terms of the licence it's simple - don't use the software. Same as it is with any other software. Comparing it to BSD and other more permissive licences just smacks of whining "They let me use their software as I want without paying, why don't you?". The people making such comments never seem to complain that proprietary software doesn't allow that either.
  11. GNU ownership, Software - an everywhere epidemic

    The proportion of developers producing closed source software is far greater than that producing open source software so it would be strange if that was not the case. Even so, as others have mentioned, open source is all over the place even in commercial products as middleware and embedded stuff.
  12. My first Arduino nice

    Pis are the spiritual successor to the BBC Micro, which had lots of I/O ports, not the Spectrum. You're not the target market for a Pi. Think of a 12 year old kid who wants to try programming and has been inspired by seeing some devices people have built with sensors. Her not technically aware parents won't let her plug a circuit she's made into a computer costing several $100s[1] in case she breaks it, but persuading them to let her try it with one costing a few $10s is much more likely. The fact that so many of them have been bought by middle aged men who used Beebs at school just means that the Raspberry Pi Foundation had more money to spend on its educational initiatives. ;) Pis were originally envisaged to have Python as their main programming language, and most of the learning resources created by the Foundation are geared towards that language, with Scratch as an easier introduction for younger kids. For a programming novice, hardware programming on a Pi is easier than on an Arduino because of the development environments available. You can even do GPIO programming in Scratch now too - which has to be the most novice friendly way available. [1] Even then it would need to provide access to GPIO ports - if you can find a desktop or laptop that even does that.
  13. It's a bad idea. Think what you will have to do if you want multiple sizes of your model in your scene. You'll have to either: a. update every vertex of your model and draw from CPU memory or upload it to the GPU every time you want to use it - both of which are slow; or b. keep several copies of your model stored in GPU memory - which isn't inherently slow, but you might prefer to use the GPU memory for something else. (You could just prescale the x, y, z components in this case as well.) It's far better to pass a scale factor into your shader using a separate variable.
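
    What the shader-side scale amounts to can be sketched in plain C. In real OpenGL code the factor would be passed in as a uniform (e.g. via glUniform1f) and the multiply would happen in the vertex shader; the point is that the one vertex buffer is never touched or re-uploaded:

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* One copy of the model's vertex data, "uploaded" once. */
    static const float model[] = { 0.0f, 1.0f, 2.0f };

    /* Stand-in for what the vertex shader does with a scale uniform:
       the stored vertex data is read-only. */
    static void transform(const float *in, float *out, size_t n, float scale) {
        for (size_t i = 0; i < n; ++i)
            out[i] = in[i] * scale;
    }

    int main(void) {
        float small[3], large[3];
        transform(model, small, 3, 0.5f);  /* same buffer, half size   */
        transform(model, large, 3, 2.0f);  /* same buffer, double size */
        printf("small %.1f large %.1f\n", small[2], large[2]);
        assert(small[2] == 1.0f && large[2] == 4.0f);
        return 0;
    }
    ```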
  14. Sorry. I didn't realize you wanted to use an already existing device. Your best bet will be to pick a suitable device and see if people have already figured out how to interface to it, its message formats, etc. Look for ones that have support in Linux - you should be able to look at the driver source code to see how they work.
  15. Your best bet is likely using an Arduino or other microcontroller board to interface with various combinations of accelerometers/gyroscopes/magnetometers. You can get chips that contain various combinations of these sensors on little circuit boards that are easy to connect to an Arduino. By using multiple, different sensors you can improve the accuracy. Oversampling, i.e. sampling more often than you need to, and filtering the values can smooth the output. Details of connecting position sensors to Arduinos here.
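
    The filtering step might look something like this exponential moving average - one common choice among many; `alpha` is an assumed tuning value:

    ```c
    #include <stdio.h>

    /* Smoothing step for oversampled sensor readings: an exponential
       moving average with weight `alpha` in (0, 1].  Smaller alpha is
       smoother but slower to respond to real changes. */
    static float ema(float prev, float sample, float alpha) {
        return prev + alpha * (sample - prev);
    }

    int main(void) {
        /* noisy readings around a true value of about 10 */
        float samples[] = { 9.0f, 11.5f, 8.5f, 12.0f, 9.5f, 10.5f };
        float filtered = samples[0];
        for (int i = 1; i < 6; ++i)
            filtered = ema(filtered, samples[i], 0.2f);

        /* the filtered value sits near the true value despite the noise */
        printf("%.2f\n", filtered);
        return 0;
    }
    ```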