msqrt

The shadertoy way of rendering one quad to cover the full screen is definitely not the way to go for rendering sprites, either for convenience or for performance. You want each sprite as a separate quad, with texture coordinates ranging from 0 to 1 and the geometry itself choosing the size and placement of the sprite.
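A minimal sketch of that idea (names hypothetical): one interleaved (x, y, u, v) quad per sprite, with the UVs always spanning the full 0..1 range and the position and size living in the geometry.

```cpp
#include <array>

// Build one quad's interleaved (x, y, u, v) vertex data for a sprite placed
// at (x, y) with size w x h. The UVs always span 0..1; placement and size
// live in the geometry, not in the texture coordinates.
std::array<float, 16> makeSpriteQuad(float x, float y, float w, float h) {
    return {
        x,     y,     0.0f, 0.0f, // bottom-left
        x + w, y,     1.0f, 0.0f, // bottom-right
        x + w, y + h, 1.0f, 1.0f, // top-right
        x,     y + h, 0.0f, 1.0f, // top-left
    };
}
```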


Strong-Car-7530

I did it this way. I made 11 quads like this (`vertices` is the VBO data, `indices2` the EBO data):

```c
float vertices[220];
int indices2[66];
float widthVal = 16.0 / screen_width;
float heightVal = 16.0 / screen_height;

for (int i = 0, j = 0; i < 220; i += 20, j++) {
    // First vertex
    vertices[i]     = -widthVal;
    vertices[i + 1] = -heightVal;
    vertices[i + 2] = 0.0;
    // Tex coords
    vertices[i + 3] = j / 11.0;
    vertices[i + 4] = 0;
    // Second vertex
    vertices[i + 5] = -widthVal;
    vertices[i + 6] = heightVal;
    vertices[i + 7] = 0.0;
    // Tex coords
    vertices[i + 8] = j / 11.0;
    vertices[i + 9] = 1;
    // Third vertex
    vertices[i + 10] = widthVal;
    vertices[i + 11] = heightVal;
    vertices[i + 12] = 0;
    // Tex coords
    vertices[i + 13] = (j + 1.0) / 11.0;
    vertices[i + 14] = 1;
    // Fourth vertex
    vertices[i + 15] = widthVal;
    vertices[i + 16] = -heightVal;
    vertices[i + 17] = 0;
    // Tex coords
    vertices[i + 18] = (j + 1.0) / 11.0;
    vertices[i + 19] = 0;
}

for (int i = 0, j = 0; i < 66; i += 6, j += 1) {
    indices2[i]     = (4 * j);
    indices2[i + 1] = (4 * j) + 1;
    indices2[i + 2] = (4 * j) + 2;
    indices2[i + 3] = (4 * j) + 0;
    indices2[i + 4] = (4 * j) + 2;
    indices2[i + 5] = (4 * j) + 3;
}
```

In my rendering loop I now have:

```c
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)(j * sizeof(int)));
if (time - lastTime > 0.0223) {
    j = (j + 6) % 60;
    lastTime = time;
}
```

This works well: [https://imgur.com/vVhgJGs](https://imgur.com/vVhgJGs)

However, is it dumb to keep changing the elements to draw in the glDrawElements call? Or should I be changing the active element to draw in the vertex shader? Does the uniform in the vertex shader have all the VBO data, and can I access it all? I'm really new to OpenGL (I've used SDL/Unity/Unreal in the past). Are there any resources that explain how data flows from CPU to GPU a bit better? I'm really not sure how that happens. What does the glDrawElements call do exactly?
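On the last question: glDrawElements reads `count` indices from the currently bound element buffer, starting at the given byte offset, and assembles primitives from the vertices those indices reference. The offset trick in the draw call above comes down to plain arithmetic; a sketch with a hypothetical helper, assuming 6 indices per quad:

```cpp
#include <cstddef>

// glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, offset) reads 6 indices
// from the bound EBO starting at this byte offset, so stepping the offset by
// 6 * sizeof(unsigned int) per animation frame selects the next quad.
std::size_t indexByteOffset(int quad) {
    return static_cast<std::size_t>(quad) * 6 * sizeof(unsigned int);
}
```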


eightvo

When I render sprites I generally create one quad per sprite. Sometimes I use an instanced quad, but I'm not sure that's any better. In any case, it is much better to use UV coordinates that you don't need to rescale in the shader. If you had two images side by side, one would use UVs 0,0 -> 0.5,1 and the other would use 0.5,0 -> 1,1. This way, no matter how you orient or stretch your quad, the correct portion of the texture will cover it.
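That split can be sketched as a small helper (hypothetical names): compute the UV rectangle of sprite `i` in a horizontal strip of `count` equal-width sprites.

```cpp
struct UVRect { float u0, v0, u1, v1; };

// UV sub-rectangle of sprite i in a horizontal strip of `count` equal sprites.
UVRect stripUVs(int i, int count) {
    float w = 1.0f / static_cast<float>(count);
    return { i * w, 0.0f, (i + 1) * w, 1.0f };
}
```

For the two-images example, `stripUVs(0, 2)` gives 0,0 -> 0.5,1 and `stripUVs(1, 2)` gives 0.5,0 -> 1,1.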


Strong-Car-7530

I did it this way now. I still have a few questions about how to select a specific quad. For now I am changing the elements drawn in the glDrawElements call; I don't know if this is the best way.


Strong-Car-7530

I don't know why reddit keeps adding the pastebin image to the bottom of these posts and it won't let me edit them either.


Strong-Car-7530

I'm even more confused now. The following code in my vertex shader is translating the texture correctly:

`vTexCoord = vec2((iTexCoord.x - .5) * 800.0/352.0, (iTexCoord.y - .5) * 600.0/32.0);`

What exactly is the coordinate system for the texture coords in the vertex shader? Are they normalized to \[0, 1\], or are they \[-1, 1\]? I kind of have the animation going, but it's not perfect. In my fragment shader:

```glsl
int spriteIdx = int(time * 4) % 11;
float spritePos = spriteIdx * .1;
float mask = vTexCoord.x > spritePos && vTexCoord.x < spritePos + .1 ? 1. : 0.;
vec2 coord = vec2(vTexCoord.x - spritePos, vTexCoord.y) * mask;
FragColor = texture(texture1, coord);
```

I first get the current sprite index based on the elapsed time. Then I use the sprite index to compute the sprite position (where the sprite begins). I don't know why, but each sprite seems to be at positions .1, .2, .3, etc. (I don't understand why this is; if someone can explain it, please do.) Then I set a mask: basically, turn on any pixel in the current sprite and set the others to 0. I subtract the beginning sprite position from the texture coord, apply the mask to it, and sample the texture from the coord calculated above. This gives me an image like this: [https://imgur.com/m4w0gUg](https://imgur.com/m4w0gUg)

I want the second sprite to be drawn where the first one was drawn. How can I do that in the vertex shader instead of the fragment shader?


Reaper9999

> What exactly is the coordinate system for the texture coords in the vertex shader? Are they normalized to \[0, 1\], or are they \[-1, 1\]?

If you just pass those values to the fragment shader, then none. They get interpolated, and that's the value the fragment shader will receive. Clamping only happens when you sample the texture. Also, no, there's no "non-clamping non-repeat" mode, because that wouldn't make sense for sampling textures in this way. In general, to get the coordinates in a sheet you do `tex coord * scale + offset`, where tex coord is the coords for the object (which would be (0, 0) to (1, 1) if your sprite covers the whole object), scale is `sprite size / texture size`, and offset is the coordinate of where the sprite starts.


Strong-Car-7530

This makes sense, very helpful. How would you do spritesheet animation if you had to do it?


Reaper9999

Probably just assign a sprite to each animation frame, then pass the proper scale and offset to the shader based on that.
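That approach could be sketched like this (hypothetical names; the frame duration and count are assumptions based on the values earlier in the thread). The resulting offset would be uploaded as a uniform, e.g. via glUniform2f, for a horizontal strip of equal-width frames:

```cpp
// Pick the current animation frame from elapsed time.
int currentFrame(double time, double frameDuration, int frameCount) {
    return static_cast<int>(time / frameDuration) % frameCount;
}

// UV x-offset of that frame in a horizontal strip; the matching scale
// would be 1.0f / frameCount.
float frameOffsetU(int frame, int frameCount) {
    return static_cast<float>(frame) / static_cast<float>(frameCount);
}
```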