
The app I'm developing displays a large-ish (about 1M vertices), static 2D image. Due to memory limitations I have been refilling the VBOs with new data every time the user scrolls or zooms, giving the impression that the entire image "exists", even though it doesn't.

[image]

There are two problems with this approach: 1) although the responsiveness is "good enough", it would be better if I could make the scrolling and zooming faster and less choppy; 2) I've been sticking to the 64k-vertex limit of a single draw call, which puts a hard cap on how much of the image can be shown at a time. It would be nice to see more of the image, or even all of it at once. The performance at the moment is, again, good enough because we are at the prototype stage and have set up the data to work within these limitations, but to get to the product level we will have to remove them.
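For reference, the 64k figure comes from 16-bit (`GL_UNSIGNED_SHORT`) indices, which can address at most 2^16 vertices per indexed draw call. A minimal sketch of the chunk arithmetic this implies (class and method names are invented for illustration, not from my actual code):

```java
// Sketch: GL_UNSIGNED_SHORT indices can address at most 2^16 vertices
// per glDrawElements call, so a large image must be split into chunks.
public class ChunkPlanner {
    static final int MAX_VERTICES_PER_CHUNK = 1 << 16; // 65536

    // Number of chunks needed to hold totalVertices (ceiling division).
    static int chunkCount(int totalVertices) {
        return (totalVertices + MAX_VERTICES_PER_CHUNK - 1) / MAX_VERTICES_PER_CHUNK;
    }
}
```

For the 1M-vertex prototype this gives 16 chunks; at 60M vertices it would be 916, which is why culling and level of detail come up below.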

Recently I discovered that by using the "android:largeHeap" option I can get 256 MB of heap space on a Motorola Xoom, which means that I can store the entire image in VBOs. In my ideal world I would simply pass the OpenGL engine the VBOs and either tell it that the camera has moved, or use glScale/glTranslate to zoom/scroll.
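To make the zoom/scroll idea concrete, here is a sketch of what calling glScalef and then glTranslatef does to a world-space point (the helper name is hypothetical; this just mirrors the fixed-function matrix order p' = S · T · p):

```java
// Sketch: effect of glScalef(zoom, zoom, 1) followed by
// glTranslatef(-camX, -camY, 0) on a 2D world-space point.
// Translation is applied to the point first, then the scale.
public class Camera2D {
    static float[] worldToView(float x, float y,
                               float zoom, float camX, float camY) {
        return new float[]{ zoom * (x - camX), zoom * (y - camY) };
    }
}
```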

My questions are these: am I on the right track? Should I always "draw" all of the chunks and let OpenGL figure out which will actually be seen, or figure out which chunks are visible myself? Is there any difference between using something like gluLookAt and glScale/glTranslate?

I don't care about aspect ratio distortion (the image is mathematically generated, not a photo), it is much wider than it is high, and in the future the number of vertices could get much, much larger (e.g. 60M). Thanks for your time.

[image]

  • "see all of it at the same time" I hope that rendering 60M vertices @30fps on a mobile phone is not a strong requirement, btw. You'll have to reduce the 60M, either by limiting the minimum zoom or by providing graceful degradation (level of detail) Commented May 19, 2011 at 15:17
  • @Calvin1602 No, that is not a requirement, thank goodness. :-) Commented May 19, 2011 at 16:07
  • Hopefully it was obvious that the second image has a typo. The chunk on the right should have been labeled as "Chunk n". Commented May 19, 2011 at 16:08

1 Answer


Never let OpenGL figure out by itself what's on screen. It won't. All vertices will be transformed, and all those that aren't on screen will be clipped; but you have better knowledge of your scene than OpenGL does.

Using a huge 256 MB VBO will make you render the whole scene each time, and transform ALL vertices each time, which isn't good for performance.

Make a number of small VBOs (e.g. only a 3x3 grid of them should be visible at any moment), and display only those that are visible. Optionally pre-fill future VBOs based on movement extrapolation...
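A sketch of that culling step, assuming equally sized chunks laid out on a grid (chunk size, names, and the per-axis formulation are all illustrative):

```java
// Sketch: given a viewport interval [viewMin, viewMax] along one axis
// and a row of equally sized chunks starting at 0, return the inclusive
// range of chunk indices that intersect the viewport.
// Run once per axis to cull a 2D grid of chunk VBOs.
public class ChunkCuller {
    static int[] visibleRange(float viewMin, float viewMax,
                              float chunkSize, int chunkCount) {
        int first = Math.max(0, (int) Math.floor(viewMin / chunkSize));
        int last  = Math.min(chunkCount - 1, (int) Math.floor(viewMax / chunkSize));
        return new int[]{ first, last };
    }
}
```

Only the chunks in the returned range need to be bound and drawn; the rest never touch the GPU that frame.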

There is no difference between gluLookAt and glTranslate/glScale: both just compute matrices.
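To see why, here is a sketch of the gluLookAt matrix construction for a 2D camera sitting above the plane at (ex, ey, 1), looking at (ex, ey, 0) with the default up vector. The helper is hypothetical, but for this configuration the rotation part collapses to the identity and the result is exactly the matrix glTranslatef(-ex, -ey, -1) would produce:

```java
// Sketch of gluLookAt's matrix construction (column-major, as OpenGL
// stores matrices): forward, side and up basis vectors, then the
// eye translation folded into the last column.
public class LookAtDemo {
    static float[] lookAt(float ex, float ey, float ez,
                          float cx, float cy, float cz,
                          float ux, float uy, float uz) {
        float fx = cx - ex, fy = cy - ey, fz = cz - ez;      // forward
        float fl = (float) Math.sqrt(fx * fx + fy * fy + fz * fz);
        fx /= fl; fy /= fl; fz /= fl;
        float sx = fy * uz - fz * uy;                        // side = f x up
        float sy = fz * ux - fx * uz;
        float sz = fx * uy - fy * ux;
        float sl = (float) Math.sqrt(sx * sx + sy * sy + sz * sz);
        sx /= sl; sy /= sl; sz /= sl;
        float nux = sy * fz - sz * fy;                       // up' = s x f
        float nuy = sz * fx - sx * fz;
        float nuz = sx * fy - sy * fx;
        return new float[]{
            sx, nux, -fx, 0,
            sy, nuy, -fy, 0,
            sz, nuz, -fz, 0,
            -(sx * ex + sy * ey + sz * ez),
            -(nux * ex + nuy * ey + nuz * ez),
            (fx * ex + fy * ey + fz * ez),
            1
        };
    }
}
```

For eye (3, 5, 1) the upper-left 3x3 comes out as the identity and the translation column is (-3, -5, -1): the same modelview matrix either call path produces.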

By the way, if your image is static, can't you precompute it (à la Google Maps)? Similarly, does your data offer some way to be "reduced" when zoomed out? E.g. for a point cloud, only display 1 out of N points...
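The "1 out of N" idea can be sketched on an interleaved (x, y) vertex array; the layout and names here are assumptions for illustration:

```java
// Sketch: keep every Nth point of an interleaved (x, y) vertex array --
// a crude level-of-detail reduction for zoomed-out views.
public class PointLod {
    static float[] decimate(float[] xy, int stride) {
        int points = xy.length / 2;
        int kept = (points + stride - 1) / stride;  // ceil(points / stride)
        float[] out = new float[kept * 2];
        for (int i = 0, j = 0; i < points; i += stride, j += 2) {
            out[j] = xy[2 * i];
            out[j + 1] = xy[2 * i + 1];
        }
        return out;
    }
}
```

A real implementation would pick the stride from the zoom level so the on-screen point density stays roughly constant.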


2 Comments

Thanks for your answer. What do you mean by "precompute it"? I can create a set of VBOs, as you suggest, but it sounds like you mean something beyond that. Perhaps you mean textures? I forgot to mention in the question that these aren't textures, just colorized vertices, and I need OpenGL to do the smoothing/interpolation.
Yes, I mean a real texture. You can render the vertices once into a texture ("RTT"), and display only the texture (-> 1 quad -> 4 vertices instead of many). Btw, when zooming out, you can use the previous texture to render the new one, so it may be possible to draw 60M vertices in your particular case.
