Particle (star) rendering not working.

Need more info:

What are you up to? Drawing constellation lines between stars?

How does the data come? As positions (in what coordinate system: Cartesian, terrestrial lat/lon, declination/azimuth, other?) and colours (4-channel float clamped to 0..1, it seems)? It may be an idea to project them onto a cube map.

Are the positions sorted by constellation, in the order in which the lines are to be drawn? (That would make it easy: memcpy them to a buffer, set a special index as the restart marker, and draw the lines with primitive restart; see the sketch below.) If they are not presorted, you must order them manually, because GL_LINES just connects the vertices in the order they come in.
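
A minimal sketch of that primitive-restart idea, assuming one indexed GL_LINE_STRIP per constellation, an already bound VAO/VBO with the star vertices, and a made-up index list; the sentinel value and variable names are illustrative only:

// Hypothetical index list: two constellation strips separated by a restart index.
const GLushort RESTART_INDEX = 0xFFFF;
std::vector<GLushort> constellationIndices = {
    0, 1, 2, 3,        // first constellation strip
    RESTART_INDEX,
    4, 5, 6            // second constellation strip
};

GLuint ebo;
glGenBuffers(1, &ebo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             constellationIndices.size() * sizeof(GLushort),
             constellationIndices.data(), GL_STATIC_DRAW);

// One draw call renders all strips; the sentinel index starts a new strip.
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(RESTART_INDEX);
glDrawElements(GL_LINE_STRIP, (GLsizei)constellationIndices.size(),
               GL_UNSIGNED_SHORT, nullptr);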

Your buffer is dynamic. Is that necessary? If so, you must make sure that everything is written and the buffer flushed before the draw call (see the documentation of glBufferData() and glBufferSubData()). Or map, copy, unmap and flush, then hand the buffer back to OpenGL manually.
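
A sketch of the two usual update paths for such a dynamic buffer, assuming a VBO named vbo that already has storage and the buffer/nStars/starVertex names used later in this thread:

// Path 1: overwrite the used range without mapping.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, nStars * sizeof(starVertex), buffer);

// Path 2: map, copy, unmap; the unmap must happen before the draw call.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (ptr != nullptr) {
    memcpy(ptr, buffer, nStars * sizeof(starVertex));
    glUnmapBuffer(GL_ARRAY_BUFFER);
}

glDrawArrays(GL_POINTS, 0, nStars);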

There is a mixture of your own mat/vec classes and glm::value_ptr. Does that go together well?

 

Sorry, I can't dive deeper into it right now because I am off in a different corner of space myself.

 

 

2 hours ago, Green_Baron said:

Need more info:

What are you up to? Drawing constellation lines between stars?

How does the data come? As positions (in what coordinate system: Cartesian, terrestrial lat/lon, declination/azimuth, other?) and colours (4-channel float clamped to 0..1, it seems)? It may be an idea to project them onto a cube map.

It uses star positions as 3D coordinates (XYZ) in kilometers from the origin (the Sun). I can move through the stars and see what happens to the constellation lines during interstellar travel. Yes, the colours (4-channel floats) use 0..1. For star points it uses GL_POINTS with GL_PROGRAM_POINT_SIZE for varying sizes. For constellation lines it uses GL_LINES. Each vertex is packed data: position, colour and point size for stars, and position and colour for constellation lines (see the layout sketch below).
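
For reference, a sketch of what that interleaved layout looks like, matching the attribute pointers in the code further down (vec3 position, vec4 colour, one float for point size, tightly packed as 8 floats); the field names here are illustrative, the real struct uses the project's vec3f_t and Color types:

// 8 floats (32 bytes) per star vertex, tightly packed.
struct starVertexSketch
{
    float position[3];   // XYZ relative to the eye
    float color[4];      // RGBA in 0..1
    float size;          // point size, used via gl_PointSize
};
static_assert(sizeof(starVertexSketch) == 8 * sizeof(float),
              "attribute offsets assume a tightly packed layout");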

Yes, I draw constellation lines between stars.

Example screenshots are linked below. I am still figuring out what causes the corruption while rendering multiple objects.

https://photos.app.goo.gl/VSHXbEva2rDMg9et6

I'd very much like to help, but this all makes no sense to me yet. Here are my ideas; maybe someone else can add more.

If you're using a star catalogue, there must be a description of the dataset, together with the reference coordinate system and a timestamp for which the data is valid. You will have to convert the data, which probably comes as some form of degrees on a sphere called right ascension and declination, into something usable for projection in a Cartesian system like OpenGL's. That's not magic, but it involves a little trigonometry (see the sketch below). When you have such a dataset, self-brewed or obtained from a planetarium software or some such (please show us a line describing a star's data), you can plan how to store it in a buffer object and draw it.
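
A small sketch of that conversion, assuming right ascension and declination already in radians and a parallax in milliarcseconds (as in catalogues like XHIP); the axis convention is a plain equatorial frame and may need swapping for a particular renderer:

#include <cmath>

struct Vec3d { double x, y, z; };

// Right ascension / declination (radians) plus parallax (mas)
// to Cartesian coordinates in parsecs, Sun at the origin.
Vec3d equatorialToCartesian(double ra, double dec, double parallaxMas)
{
    double distPc = 1000.0 / parallaxMas;   // distance [pc] = 1 / parallax [arcsec]
    return {
        distPc * std::cos(dec) * std::cos(ra),
        distPc * std::cos(dec) * std::sin(ra),
        distPc * std::sin(dec)
    };
}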

For the constellations, you need the order in which the stars will be drawn. That involves some of them being visited more than once, because, for example, good old Orion is not a single line strip but has several limbs, the belt, the helmet and its bow raised to chase Taurus (see the index sketch below).
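
If the lines are drawn as GL_LINES instead of strips, revisiting a star just means repeating its index; a tiny sketch with hypothetical star indices, assuming an element buffer lineEbo already exists:

// Star 2 is shared by three segments, so its index appears three times.
GLushort lineIndices[] = {
    0, 2,    // segment: star 0 - star 2
    1, 2,    // segment: star 1 - star 2
    2, 3     // segment: star 2 - star 3
};
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, lineEbo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(lineIndices), lineIndices, GL_STATIC_DRAW);
glDrawElements(GL_LINES, 6, GL_UNSIGNED_SHORT, nullptr);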

With a buffer at hand, you can draw things, but not at their real-world positions in km, because that's impossible; our PCs can't model real space distances in km (yet? ever?). You will have to project them onto a skybox, a unit sphere or something similar.
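
One way to read the unit-sphere suggestion, as a sketch only: keep just the direction to each star and place it on a sphere of fixed radius around the camera, so the numbers stay comfortably within float range. It reuses the Vec3d sketch above; spos and cpos are the star and camera positions as they appear later in the thread.

// Keep only the direction, drop the true distance (fine for a sky overview).
Vec3d dir = { spos.x - cpos.x, spos.y - cpos.y, spos.z - cpos.z };
double len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);

const double skyRadius = 1000.0;   // arbitrary radius, well inside float range
float x = (float)(dir.x / len * skyRadius);
float y = (float)(dir.y / len * skyRadius);
float z = (float)(dir.z / len * skyRadius);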

First, I think, exact knowledge of the dataset would be nice.

Sorry if that doesn't meet your expectations ...

 

Edit: speaking of constellations, be aware that these are 2D projections of our optical perception from Earth. Orion's stars are actually between 200 and 2000 light years away, and the same goes for the other constellations. So when you move through them, even with a correct perspective and everything, they will be a mess to behold ...

I am using the XHIP database package (Extended Hipparcos Compilation): http://cdsarc.u-strasbg.fr/viz-bin/cat/V/137D

I extracted some fields from it, such as RA, DEC, PLX and CI (colour index), and converted RA, DEC and PLX to XYZ coordinates in parsec units. I also have the HYG database (http://www.astronexus.com/hyg). Both are designed for 3D star-map coordinates. Many astronomy/space simulators (Celestia, Orbiter, Space Engine and Gaia Sky) use them in km units or similar. I now have the Gaia DR2 database as well, but it is not implemented yet.

I plan to render millions of stars (particles) on the GPU. The origin is our solar system (the Sol system). However, I am still using my old routines to render stars, shown below. For the new routine, I plan to write a shader program that calculates apparent magnitude from the camera position, for brightness and star size (much faster than doing it on the CPU; a rough sketch of such a shader follows). I calculate star positions relative to the eye in double precision, for both stars and constellation lines, then convert them to 32-bit floats for rendering.
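
A rough sketch of what such a vertex shader could look like, kept here as a GLSL string inside the C++ source. The uniform names, the absolute-magnitude attribute and the size formula are assumptions for illustration, not the poster's actual "star" shader (which currently passes a precomputed point size as attribute 2).

// GLSL vertex shader sketch: derive brightness and point size on the GPU
// from the camera-relative distance (apparent magnitude m = M + 5*log10(d) - 5).
static const char *starVertexShaderSketch = R"GLSL(
#version 330 core
layout (location = 0) in vec3  inPosition;  // position relative to the eye, in parsecs
layout (location = 1) in vec4  inColor;
layout (location = 2) in float inAbsMag;    // absolute magnitude (assumed attribute)

uniform mat4  mvp;
uniform float faintestMag;                  // faintest visible apparent magnitude
uniform float baseSize;                     // base point size in pixels

out vec4 starColor;

void main()
{
    float distPc = length(inPosition);
    float appMag = inAbsMag + 5.0 * (log(distPc) / log(10.0)) - 5.0;
    float alpha  = clamp(faintestMag - appMag, 0.0, 1.0);

    starColor    = vec4(inColor.rgb, inColor.a * alpha);
    gl_PointSize = baseSize * alpha + 1.0;  // brighter stars get bigger points
    gl_Position  = mvp * vec4(inPosition, 1.0);
}
)GLSL";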

I first implemented those routines with legacy OpenGL function calls, and they worked fine, displaying stars and constellation lines along with the Earth rendering, but they could not display varying-size points, so I have to use a shader program to get varying-size points.

 


StarVertex::StarVertex(Scene &scene, int maxStars)
: scene(scene),
  ctx(*scene.getContext()),
  prm(*scene.getParameter()),
  type(useNotUsed),
  maxStars(maxStars),
  nStars(0), cStars(0),
  flagStarted(false)
{
	buffer = new starVertex[maxStars];
}

StarVertex::~StarVertex()
{
	finish();
	if (buffer != nullptr)
		delete []buffer;
}

void StarVertex::startSprites()
{
	if (pgm == nullptr) {
		ShaderManager &smgr = scene.getShaderManager();

		pgm = smgr.createShader("star");

	    vbuf = new VertexBuffer(ctx, 1);
	   	vbuf->createBuffer(VertexBuffer::VBO, 1);
	}

	pgm->use();
	vbuf->bind();

	// Interleaved layout: vec3 position, vec4 color, float point size (8 floats per vertex).
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(starVertex), (void *)0);
	glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(starVertex), (void *)(3 * sizeof(float)));
	glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, sizeof(starVertex), (void *)(7 * sizeof(float)));
	glEnableVertexAttribArray(0);
	glEnableVertexAttribArray(1);
	glEnableVertexAttribArray(2);

//	cout << "starVertex size: " << sizeof(starVertex) << endl;
//	cout << "  vec3f_t size:  " << sizeof(vec3f_t) << endl;
//	cout << "  Color size:    " << sizeof(Color) << endl;

	glEnable(GL_PROGRAM_POINT_SIZE);

	mat4f_t mvp = mat4f_t (prm.dmProj * prm.dmView * mat4d_t(1.0));

	// glGetUniformLocation() returns a signed GLint (-1 if "mvp" is not found).
	GLint mvpLoc = glGetUniformLocation(pgm->getID(), "mvp");
	glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));

	nStars = 0;
	cStars = 0;
	type = useSprites;
	flagStarted = true;
}

void StarVertex::render()
{
	if (nStars == 0)
		return;

	vbuf->assign(VertexBuffer::VBO, buffer, nStars*sizeof(starVertex));

	// Now rendering stars
//	if (txImage != nullptr)
//		txImage->bind();
	glDrawArrays(GL_POINTS, 0, nStars);
	cStars += nStars;
	nStars  = 0;
}

void StarVertex::finish()
{

	render();

	flagStarted = false;

//	cout << "Total " << cStars << " rendered stars." << endl;
	cStars = 0;

	switch (type) {
	case useSprites:
		glDisableVertexAttribArray(0);
		glDisableVertexAttribArray(1);
		glDisableVertexAttribArray(2);
		vbuf->unbind();

		glUseProgram(0);
		glDisable(GL_PROGRAM_POINT_SIZE);
		break;
	case usePoints:
	default:
		break;
	}
	type = useNotUsed;
}

void StarVertex::addStar(const vec3d_t &pos, const Color &color, double size)
{
	if (nStars == maxStars)
		render();

	buffer[nStars].posStar = pos;
	buffer[nStars].color = color;
	buffer[nStars].size = size;

	nStars++;
}

// ****************************************************************

void StarRenderer::process(const CelestialStar& star, double dist, double appMag) const
{
	vec3d_t spos, rpos;
	double  srad;
	double  rdist;
	double  objSize;
	double  discSize;
	double  discScale;
	double  alpha, ptSize;
	Color   color;

	// Calculate relative position between star and camera positions.
	spos  = star.getPosition(0) * KM_PER_PC;
	rpos  = spos - cpos;
	rdist = glm::length(rpos);

	// Calculate apparent size of star in view field
	srad    = star.getRadius();
//	objSize = (srad / (dist * KM_PER_PC)) / pxSize;
	objSize = srad / (dist * pxSize * KM_PER_PC);

	alpha  = faintestMag - appMag;
	discSize = baseSize;
	if (alpha > 1.0) {
		discScale = min(pow(2.0, 0.3 * (saturationMag - appMag)), 100.0);
		discSize *= discScale;
		alpha = 1.0;
	} else if (alpha < 0.0)
		alpha = 0.0;

	color  = starColors->lookup(star.getTemperature());
	color.setAlpha(alpha);

	// Finally, now display star
	starBuffer->addStar(rpos, color, discSize);
}

 

I am beginning to understand the magnitude of what you're doing. Awesome!

On the buffer problem you mentioned I can only hint, because I haven't seen any code doing it: you must map the buffer to obtain a pointer to its data, update the data, and unmap it again to hand it back to OpenGL. Then the draw call is submitted. The functions are glMapBuffer(), glBufferSubData() and glUnmapBuffer(), plus some interesting related API calls for different purposes, like ranged buffer access and the modern ...Named... versions. Alternatively, there are flags to tell OpenGL to update the buffers automatically each time something is written, but that can affect performance.
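
As a sketch of the ranged-access variant mentioned here, assuming a dynamic VBO that is completely rewritten each frame; invalidating the old contents hints to the driver that it need not preserve or synchronise them:

glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, nStars * sizeof(starVertex),
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
if (ptr != nullptr) {
    memcpy(ptr, buffer, nStars * sizeof(starVertex));
    glUnmapBuffer(GL_ARRAY_BUFFER);   // hand the data back to OpenGL before drawing
}
glDrawArrays(GL_POINTS, 0, nStars);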

If the data comes in the right order for the purpose it should work as it has done before.

But I also see precision as a possible cause of the messed-up drawing: double precision covers a range of a few hundred parsecs at 1 km resolution (true?). With the conversion routines we used for terrain rendering there is a little loss compared to full CPU double precision, so it will rather be a bit less than that, maybe a hundred parsecs? Anyway, not enough to faithfully draw many of the constellations, even relative to eye. Actually, RTE is only useful when subtracting the camera position really shrinks the numbers. But if the camera itself sits in the centre of a very large volume to be drawn, the relative-position subtraction won't reduce the numbers by much. Also, it doesn't reduce the distances between the objects, which in themselves exceed float precision.
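
To make the float-precision point concrete, a small sketch of the relative-to-eye step described in this thread: the subtraction is done in double and only the result is narrowed to float. A float carries roughly 7 significant decimal digits, so for a star 10 pc away (about 3e14 km) the representable step is already on the order of 1e7 km.

// Camera-relative position in double, then narrowed to float for the GPU.
// spos and cpos are the star and camera positions in km (as in StarRenderer::process).
Vec3d rel = { spos.x - cpos.x, spos.y - cpos.y, spos.z - cpos.z };
float relX = (float)rel.x;   // ~7 significant digits survive the cast
float relY = (float)rel.y;
float relZ = (float)rel.z;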

And of course, one is limited by the precision of the dataset, so there is probably not that much to gain here with RTE.

Maybe you can take a lot of complexity out of it by simply rendering the overview at a resolution of a parsec or 1/100 pc (just enough to stay within float range), and performing a scene change when it comes to a specific solar system. We want precision when we are close to things.

Sorry for not being more helpful ...

Good news! I just rewrote the star and constellation line routines. I accidentally discovered a bug in both routines when I moved the vertex-attribute-pointer calls to after glUnmapBuffer, just before glDrawArrays: setting the vertex attribute pointers before loading the data was causing the buffer corruption. All corruption-related problems went away, and now everything renders successfully at once.

https://photos.app.goo.gl/PpfVWMfhkqyfoA7J9

Again, I am not sure I understand. If you don't change the buffer layout (the order of the data) between mappings, there is no need to change the vertex attribute descriptions ...

No, I did not change the buffer layout. I originally placed the attribute-pointer calls before loading the data into the buffer. I moved them to just before glDrawArrays (after loading the data into the buffer), and all corruption-related problems went away.

I suspect (but am not sure) that the problem lies elsewhere and may still pop up at some point.

In principle, one can create a vertex array and define its vertex attributes before any vertex or element buffer exists, and separately from them. When drawing, the array must have a buffer bound to feed the pipeline with data. But the array description and the actual buffer are, to my understanding, separate things. One can leave an array bound (so that the description of the data fed into the pipeline stays the same) and switch the buffers associated with that array (Edit: I think that was nonsense; buffers are bound to a target, they are not "associated with an array"). That way, for example, one buffer can be filled by the application while another is being drawn from. The fact that there is a 1-to-1 relation between array and buffer in all the tutorials is just coincidence. One can even draw without an array and with only a vertex buffer bound; it will be drawn in the order the vertex data comes in.
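
For reference, a sketch of the order that ended up working in this thread, written with raw GL calls instead of the VertexBuffer wrapper. The key detail is that glVertexAttribPointer() records the buffer currently bound to GL_ARRAY_BUFFER (plus stride and offset) into the vertex array state, so the right VBO must be bound at that moment.

// Bind the buffer and upload this frame's vertices first ...
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, nStars * sizeof(starVertex), buffer, GL_DYNAMIC_DRAW);

// ... then describe the attributes against the bound buffer ...
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(starVertex), (void *)0);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(starVertex), (void *)(3 * sizeof(float)));
glVertexAttribPointer(2, 1, GL_FLOAT, GL_FALSE, sizeof(starVertex), (void *)(7 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);

// ... and finally draw.
glDrawArrays(GL_POINTS, 0, nStars);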

See here for example.

It might be good to keep an eye on it.
