OpenGL Purchasing a Tegra 4



I wasn't sure where to put this topic, but it is OpenGL-motivated...

 

With the advances in today's mobile SoCs, I'd like to develop a low-cost arcade cabinet. The Tegra 4 looks like it provides all the system resources I would ever need, and I'd like to install a lightweight Linux distro on it (like Arch Linux, or something of the sort). The problem is, I don't see any place on NVIDIA's website where I could order just the SoC. Would I have to sign some sort of mass-production contract, like device manufacturers such as Asus or Samsung do, to get hold of these chips? Are hobbyists simply unable to purchase them?
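
As an aside: whichever board this ends up on, a quick sanity check is to query the GL strings at runtime to see what the installed driver actually exposes. A minimal sketch in C, assuming a GL 2.0+ (or GLES 2.0) context has already been created and made current by the windowing layer (EGL, GLFW, SDL, etc.):

#include <stdio.h>
#include <GL/gl.h>

/* Print what the installed driver reports. glGetString is available in
   every desktop GL and GLES 2.0 context, so the same check works on an
   SoC's GLES stack as well as on a desktop Linux box. */
void print_gl_info(void)
{
    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));
    printf("GLSL:     %s\n", (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
}

On a Tegra board the renderer string identifies the Tegra GPU; on a desktop it names the integrated or discrete GPU instead.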


A bare SoC wouldn't do a lot for you, so I don't think you're really looking to buy just the chip -- it wouldn't come with a motherboard or anything.

 

I'm guessing you want a development board. nVidia might make or sell some models, but it looks like they mainly partner with others to manufacture them. If you're planning to build quite a few low-cost arcade cabinets as a business or something, definitely talk to the different players and tell them what your needs are. If this is just a one-off, you're probably better off going with something aimed at consumers. I'm not sure whether the Linux-on-a-stick products would work, but they offer options under $100 for a fully functional system.


You'd need to design, or have designed, an application-specific motherboard for it -- the kinds of higher-end SoCs you find in phones and tablets, like the Tegra chips, are sufficiently complex that a hobbyist has no hope of designing such a board. If you're not doing it professionally already, DIY is not an option.

 

However, if you had the motherboard sorted, you'd source your parts from somewhere like Digikey (for prototyping / low-count production) or direct from the manufacturer for high-volume products. Sometimes SoC designers offer samples for prototyping purposes, but you usually have to have an established relationship and the capacity to deliver on your designs.

 

Chips that are less complex -- simpler ARM-based microcontrollers -- are within the reach of a dedicated hobbyist to design around and have produced, but those will generally be far less capable devices.

 

If you look around, you can find pre-built evaluation and prototyping boards built around chips of various capability, from very small to very large (comparable to Tegra 3/4 -- possibly even Tegra 4-based boards). You could in theory develop a design around one of these boards; the trick is finding one that offers the interfaces you need, and then developing the software and outside hardware interfaces necessary. If the prototype was successful, you could order a batch of boards for limited production, or, for high-volume production, work with someone to adapt the board's reference design to your needs and reduce cost by eliminating unused features.

 

Another option would be to base the machine on a low-cost commodity platform like an ITX PC motherboard and an inexpensive, low-power x86 processor, with either integrated or discrete graphics. There are likewise commercially available peripherals for interfacing commodity PCs with standard JAMMA arcade cabinets. PC processors with integrated graphics get looked down on, and aren't as sexy as saying "Tegra 4 inside", but even Intel's latest integrated GPU is far more capable than the Tegra 4. AMD's brand-new-as-of-today Kabini APUs are even more capable.


Embedded hardware for an arcade cabinet seems a bit odd. Arcade cabinets traditionally contained very high-end (for their time) computer hardware; the equivalent today would probably be a desktop computer with quad-SLI GTX Titans in it. If you are going for inexpensive and low-spec, then something like an AMD APU would probably make way more sense than an SoC that is intended for tablets.

Edited by Chris_F


This confirms my suspicions. I'm starting to think that buying a $300 USD computer with only onboard graphics would be a good way to go. I'm not looking for a powerhouse to develop my arcade games on -- just something that can push 2D graphics in HD well enough. Memory also seems really easy to come by, as just about every configuration ships with 4GB or more. Not sure if the hard drive would get used for any sort of virtual memory, but what I get will probably be way more than I'll ever need, lol.
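
For scale, "pushing 2D in HD" mostly comes down to filling a 1920x1080 target with textured quads, which any current integrated GPU handles easily. Here's a minimal sketch of the core of such a pass -- the orthographic projection that maps pixel coordinates to clip space -- in C; the program handle `prog` and the uniform name "u_proj" in the usage comment are hypothetical, not from this thread:

#include <GL/gl.h>

/* Build a column-major 4x4 matrix that maps (0,0)..(w,h), with a
   top-left origin, onto OpenGL clip space, so sprites can be drawn
   in pixel units. */
static void ortho_2d(float m[16], float w, float h)
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / w;   /* x: [0,w] -> [-1,1] */
    m[5]  = -2.0f / h;   /* y: [0,h] -> [1,-1] (y-down) */
    m[10] = -1.0f;       /* z passes through at depth 0 */
    m[12] = -1.0f;
    m[13] =  1.0f;
    m[15] =  1.0f;
}

/* Usage (GL 2.0+): rebuild on resize, then draw textured quads.
   float m[16];
   ortho_2d(m, 1920.0f, 1080.0f);
   glUniformMatrix4fv(glGetUniformLocation(prog, "u_proj"), 1, GL_FALSE, m); */

Even a screen's worth of alpha-blended sprites at 1080p is a light load for onboard graphics.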

 

I wouldn't go as far as something like the Titan; decent-quality graphics hardware is pretty cheap nowadays. As much as I like writing graphics code, I want to focus more on fun, casual gameplay and party games. Indie developers on mobile platforms and Nintendo seem to be doing well with these strategies -- well, the 3DS is. I think the Wii U has a shot at making a comeback, too; I have a PS4 now, and I don't play it because there aren't any games for it.

Edited by Vincent_M


I really can't recommend AMD's new Kaveri APUs enough for your use case. A suitable motherboard ($50-$80), processor ($120-$170), 8GB of DDR3-2133 ($80), a small 32-64GB SSD ($50-$75), and a good-quality 300W PSU ($30) make a solid hardware platform that falls into the $300-$400 range.

 

The new APUs run well enough with the closed-source AMD drivers under Linux with a bit of fiddling, and that will only get better. There's also support for the TrueAudio DSP for advanced audio processing, and (soon) for AMD's Mantle API, which is a more console-like programming paradigm that frees up a lot of CPU overhead compared to Direct3D or OpenGL, letting you get a bit more out of the GPU itself more easily.



