
# EVIL_ENT

Member Since 14 Aug 2010
Last Active: May 23 2016 09:11 AM

### #5293015 How to round 2 Straight lines

Posted by on 23 May 2016 - 03:33 AM

I wrote you some code for that problem using quadratic Bézier curves: http://no.duckdns.org/curve/index.html

```html
<!DOCTYPE html>
<html><body>
<canvas id="canvas" width="800" height="800" style="border:1px solid #000000;"></canvas>
<script>

var ax = 0;
var ay = 100;
var bx = 500;
var by = 200;
var cx = 300;
var cy = 800;

/* Distance from the corner b at which the rounding starts
   (should not exceed the length of either line segment) */
var r = 100;

var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");

function drawLine(ax, ay, bx, by){
    context.beginPath();
    context.moveTo(ax, ay);
    context.lineTo(bx, by);
    context.stroke();
}

function drawCircle(x, y, r){
    context.beginPath();
    context.arc(x, y, r, 0, 2*Math.PI);
    context.stroke();
}

function drawQuadraticBezierCurve(ax, ay, bx, by, cx, cy){
    context.beginPath();
    context.moveTo(ax, ay);

    /* Evaluate B(t) = (1-t)^2*a + 2*(1-t)*t*b + t^2*c at n points */
    var n = 100;
    for (var i = 1; i <= n; i++){
        var t = i/n;
        var s = 1 - t;

        var x = ax*s*s + 2*bx*s*t + cx*t*t;
        var y = ay*s*s + 2*by*s*t + cy*t*t;

        context.lineTo(x, y);
    }

    context.stroke();
}

function redraw(){
    context.clearRect(0, 0, canvas.width, canvas.height);

    var bax = bx - ax;
    var bay = by - ay;

    var bcx = bx - cx;
    var bcy = by - cy;

    var ba = Math.sqrt(bax*bax + bay*bay);
    var bc = Math.sqrt(bcx*bcx + bcy*bcy);

    /* Shrink both direction vectors to length r, so the curve
       starts and ends r pixels away from the corner b */
    bax *= r/ba;
    bay *= r/ba;
    bcx *= r/bc;
    bcy *= r/bc;

    context.strokeStyle = "black";
    drawLine(ax, ay, bx - bax, by - bay);
    drawLine(bx - bcx, by - bcy, cx, cy);
    context.strokeStyle = "green";
    drawQuadraticBezierCurve(bx - bax, by - bay, bx, by, bx - bcx, by - bcy);
}

redraw();

window.onmousemove = function(e){
    /* Drag the corner point b with the mouse */
    var rect = canvas.getBoundingClientRect();
    bx = e.clientX - rect.left;
    by = e.clientY - rect.top;
    redraw();
}

</script>
</body></html>
```

Just copy & paste it into a *.txt file, rename it to *.html, and open it in your browser.

### #5287191 Performance on Android

Posted by on 16 April 2016 - 09:35 AM

The Samsung Galaxy Tab A seems to have a Qualcomm Adreno 306 GPU. In my experience, pixel fillrate is often the limiting factor. According to https://gfxbench.com/device.jsp?D=Qualcomm+msm8916_32+%28Adreno+306%2C+development+board%29 it is about 458 Mtexels/s, or roughly 7.6 Mtexels per frame at 60 fps. At a resolution of 1024x768 it should therefore be possible to touch each pixel about 9 times per frame with a simple fragment shader that does a single texture lookup. But those are just theoretical numbers, so better benchmark it yourself to make sure they aren't off by too much.

### #5287189 What is more expensive in float?

Posted by on 16 April 2016 - 09:23 AM

(I am assuming you mean trigonometric functions instead of goniometric functions)

Anyway, the correct answer is: profile your code and test it yourself. My guess would be that the square root plus one trigonometric function might be slightly faster, but I don't know which CPU you are using, so YMMV.

<insert obligatory rant about premature optimization here, blablabla>

### #5282016 Rendering Sub-pixel Positioned Glyphs

Posted by on 19 March 2016 - 10:15 AM

Font files often contain hand-drawn bitmap characters for small font sizes and vector characters for big font sizes. Libraries like FreeType will give you integer offsets and kerning values for those.

Having multiple sizes of a font in a texture atlas will make it a little bit bigger, especially for huge fonts, which can be drawn with signed distance fields instead: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf

### #5275346 request HLSL support for sqrt() for integers

Posted by on 11 February 2016 - 04:44 PM

> IEEE compliance doesn't guarantee determinism across different GPUs

No, IEEE compliance does in fact guarantee "determinism" across different GPUs. See

ftp://ftp.openwatcom.org/pub/devel/docs/ieee-754.pdf

Section 4. Rounding:

> [sqrt] shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then rounded that result according to one of the modes in [Section 5]

Someone else with similar interpretation:

http://stackoverflow.com/a/4319151

> there is no "small error that doesn't matter" in simulations where determinism is required. One ulp of difference is enough for two simulations to diverge greatly over time

To quote myself:

> I assume that what you need this for is not directly related to graphics, since if it was, a small error would most likely not matter.

As you can see, I was talking about graphics, not simulation, where e.g. one ulp of color difference "most likely" will not matter.

### #5273640 request HLSL support for sqrt() for integers

Posted by on 01 February 2016 - 06:58 AM

Since you want the same results for sqrt on different GPUs, I assume that what you need this for is not directly related to graphics, since if it was, a small error would most likely not matter.

If that is the case, you should consider OpenCL or CUDA where you can enforce IEEE 754 floating point compliance on recent GPUs.

### #5081540 GLSurfaceView lag/delay

Posted by on 29 July 2013 - 02:02 PM

I noticed that even the default CyanogenMod UI has some delay, so it is either a hardware issue or an OS issue.

Further, I found two more threads which appear to be about the same problem, with no solution though:

http://stackoverflow.com/questions/8173284/android-touch-input-delay-problems-for-a-timing-critical-game-is-there-a-delay

http://stackoverflow.com/questions/16660485/android-moving-a-view-with-ontouch-has-definite-lag

It is mentioned that there should be less lag on HTC devices, and I read somewhere else that iPhones should have less lag, too.

EDIT:

Seems like not even the developers are sure what the reason is.

Anyway, unless we hack into Android, we can't change it.

### #5081283 GLSurfaceView lag/delay

Posted by on 28 July 2013 - 03:20 PM

I noticed the same thing (input lag when using GLSurfaceView) on a Samsung Galaxy S2 with Android 2.3.3 or something around that weird version number.

If you use the Canvas element for painting instead, you get less input lag, so it is possible that the other apps you checked which didn't have input lag were not using OpenGL.

I suspect it's either double buffering or maybe I caught the touch events at the wrong place.

Anyway, I still have to check whether this is still the case on non-stock Android ROMs.

### #5069109 How to interact with the five million dots

Posted by on 12 June 2013 - 06:18 AM

If dots can't overlap and have discrete positions, your editor could be a paint program. Store pixels (aka dots) in a big array (or a hashmap if they are sparse).

Otherwise a quadtree is a good idea as 0r0d pointed out.

If you have a uniform distribution of dots a grid with a fixed size might be faster and easier to implement though.

As you can see, it's possible to get down from O(n) to O(log(n)) or even O(1) depending on your constraints, so it is essential to mention as many of them as possible.

### #5063030 Separation of concerns in game architecture

Posted by on 19 May 2013 - 11:10 AM

If tile and sprite should be one thing, it might become bothersome to animate a SpriteTile moving smoothly from one location to another (in case that would ever happen).

Anyway, these are very specific implementation details which would require a more specific description of the game.

### #5062019 How do I replicate the league of legends login screen?

Posted by on 15 May 2013 - 05:53 AM

They are drawn, see this for example:

Don't know how it is animated, though.

### #5061307 Beginner wanting to learn how to program CCG

Posted by on 12 May 2013 - 11:49 AM

I'll save a few people the need to google:

"ccg games" = "collectible card game games"

### #5055680 Sorted particles and SIMD

Posted by on 22 April 2013 - 02:26 AM

My code appears to be roughly twice as fast as std::sort for big n in my (probably flawed) tests:

http://ideone.com/5aBW1x

```cpp
#include <stdlib.h>
#include <stdint.h>

#include <time.h>
#include <assert.h>

#include <algorithm>
#include <iostream>

/* One counting-sort pass over an 8-bit digit of the keys */
void radix_sort(uint32_t *in, uint32_t *out, int shift, int n){
    int index[256] = {};
    for (int i=0; i<n; i++) index[(in[i]>>shift)&0xFF]++;
    for (int i=0, sum=0; i<256; i++) sum += index[i], index[i] = sum - index[i];
    for (int i=0; i<n; i++) out[index[(in[i]>>shift)&0xFF]++] = in[i];
}

void check(int n){
    uint32_t *data = new uint32_t[n];
    uint32_t *temp = new uint32_t[n];
    uint32_t *same = new uint32_t[n];
    for (int i=0; i<n; i++) same[i] = data[i] = (rand()<<16)|rand(); // Note: rand() might not produce enough randomness

    clock_t t_radix = clock();

    /* Sort 32-bit keys in four 8-bit passes, ping-ponging between buffers */
    radix_sort(data, temp,  0, n);
    radix_sort(temp, data,  8, n);
    radix_sort(data, temp, 16, n);
    radix_sort(temp, data, 24, n);

    t_radix = clock() - t_radix;

    clock_t t_sort = clock();

    std::sort(same, same+n);

    t_sort = clock() - t_sort;

    std::cout << "n: " << n << std::endl;
    std::cout << "  radix_sort: " << t_radix << std::endl;
    std::cout << "  std::sort:  " << t_sort  << std::endl;
    std::cout << std::endl;

    for (int i=0; i<n; i++) assert(same[i] == data[i]);

    delete[] data;
    delete[] temp;
    delete[] same;
}

int main(){
    for (int i=0; i<30; i++) check(1<<i);
    return 0;
}
```

### #5048319 Drawing infinite grid

Posted by on 30 March 2013 - 08:41 AM

You don't have to draw an infinite grid to make it look like one:

```c
#include <GL/glfw.h>

int main(){
    int x, y, nx, ny, mx, my, lx, ly;
    int dx = 0;
    int dy = 0;
    int w = 512;
    int h = 512;
    int cell_w = 32;
    int cell_h = 32;

    glfwInit();
    glfwOpenWindow(w, h, 8, 8, 8, 8, 0, 0, GLFW_WINDOW);

    glfwGetMousePos(&lx, &ly);

    while (!glfwGetKey(GLFW_KEY_ESC)){
        glClear(GL_COLOR_BUFFER_BIT);

        /* Make OpenGL cover the full window size */
        glfwGetWindowSize(&w, &h);
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity(); /* reset, otherwise glOrtho accumulates every frame */
        glOrtho(0, w, h, 0, -1, 1);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity(); /* same for the translation below */

        /* Handle mouse input */
        glfwGetMousePos(&mx, &my);
        if (glfwGetMouseButton(GLFW_MOUSE_BUTTON_LEFT)){
            /* Offset the grid by the distance the mouse moved */
            dx += mx - lx;
            dy += my - ly;
            /* Use the %-operator to jump back */
            dx %= cell_w;
            dy %= cell_h;

            float r = 10.0f;
            glRectf(mx-r, my-r, mx+r, my+r);
        }
        lx = mx;
        ly = my;

        glTranslatef(dx, dy, 0.0f);

        /* Draw a grid which is a little bigger than the screen */
        nx = w/cell_w + 2;
        ny = h/cell_h + 2;
        glBegin(GL_LINES);
        for (x=0; x<nx; x++){
            glVertex2f(x*cell_w,  -cell_h);
            glVertex2f(x*cell_w, h+cell_h);
        }
        for (y=0; y<ny; y++){
            glVertex2f( -cell_w, y*cell_h);
            glVertex2f(w+cell_w, y*cell_h);
        }
        glEnd();

        glfwSwapBuffers();
    }
    glfwTerminate();
    return 0;
}
```

This is deprecated OpenGL. With a fragment shader this would be much easier (but much harder to set up).

### #5013614 Feasibility of writing android apps purely through the NDK

Posted by on 23 December 2012 - 02:53 AM

> Also I heard that the speed difference between NDK and Java code really isn't very noticeable (if at all) because the Java code gets recompiled to native instructions anyway (it isn't interpreted), so I'm not sure if that could really be considered an advantage. I suppose NDK still wins if you really don't like Java, though =P

I wrote a very simple space shooter and got performance problems as soon as there were around 30 ships moving at once; then I wrote the same thing again in native code and could suddenly manage 10 times more ships without any structural changes to the code.

That was on API level 8 though, and I have heard that Dalvik has become faster since then and that FloatBuffers have been fixed (there was a problem where wrapping a FloatBuffer around a float array would call floatBuffer.put on every element instead of doing something smarter).

There were also some missing functions, so one could not write efficient code and had to fall back to JNI wrappers anyway.

In my opinion the Google guys are a little too fond of their Dalvik VM. In practice native code is a lot faster and should be preferred if there are even remotely computationally expensive tasks to do.

It will also give you more battery life.

I have written a Windows/Linux framework for developing and debugging, so I don't have to mess with Eclipse/Android and can just copy the files over for the final release.
