I am tinkering with my own basic raytracing code and got the idea that mixing
raytracing with rasterization would be a good idea.
Though there is a problem of coordinating the raytracing of a figure (I mainly
use spheres and triangles) with the rasterization of the same figure - coordinating
them so that raytracing an object and rasterizing it give pixel-area-coherent results
(both should cover exactly the same pixel area).
Is it hard to rasterize a projection (of a triangle or sphere) that gives exactly the same "pixel area" as raytracing the figure? Could someone tell me what this projection formula would be?
(When raytracing, the rays I cast come from a simple geometric approach:
for example, I assume my eye is 1 meter away from the screen, and the screen is 40 cm wide
and 30 cm tall, so that gives me a coordinate for each pixel - for example, the upper-left would be just (-0.20, -0.15, +1.0), and the ray direction is the normalized (-0.20, -0.15, +1.0).)
What kind of rasterization projection would be ideally coherent with those rays?
tnx for answers
PS. Projecting single vertices is probably easy (the way I do it in my rasterizer is stretching x/y by the proportion of object-to-eye distance versus screen-to-eye distance, a very basic projection) - but do the triangle edges stay linear when raytracing a triangle? (I have no triangles in my raytracing code so I cannot check it - but I do have spheres, and I clearly see
that the spheres are not round here, especially if I set the screen wide and the eye close to it; then they come out more like rugby balls or something like that.)