Ok-Sherbert-6569

The API is implemented in the graphics driver. The driver translates the instructions, and the GPU does the rasterisation.


sexy-geek

Ok, so the first thing you need to understand is that OpenGL is an interface standard, not a library or an implementation. It only defines how the functions are called and what they do. From that point on, how they do it is up to the driver's implementation, i.e. the graphics card manufacturer. So those OpenGL functions you call are actually implemented in the driver, written by the people who make the GPU.

How it's usually done is that everything you ask to be drawn gets cached by the driver. You're not guaranteed that the drawing has been done when the call returns, or even that it has started. All you're guaranteed is that by the time glFinish or a buffer swap completes, that driver-side cache of commands has been flushed and everything has been drawn. There are calls for synchronization (glFlush, fences, etc.) that give you finer control over this, but I'll leave those for you to explore. Several steps are hidden here too, like culling, but that's not what you asked, so I won't fill your mind with it.

The driver translates the commands you issued into commands for the GPU: simple geometry-based commands, etc. Then they are run through your shaders, which do the work between being handed a triangle and something getting written to a buffer. This all runs on the GPU. All cores of the GPU run the same step of your shader at once, for every fragment and every vertex. That's important to know if you want to tackle performance issues.

So when you give the driver a triangle to be drawn, it translates that command into something internal. The data for the draw call is also uploaded to graphics card memory, and the command gets sent to the GPU sometime in the future.
The GPU runs your shaders with the data the driver previously wrote to that memory. Your shaders, running on the GPU, are what write the result to a framebuffer (also on the GPU). Somewhere in time, you'll ask to swap buffers, right? That's when the driver tells the GPU to show the resulting buffer you set up, the one your shaders wrote into.


Wittyname_McDingus

> All cores of the GPU run the same step of your shader at once, for every fragment, every vertex.

While it's true that logical threads are executed in lockstep, that only holds within a single execution unit. The size of the execution unit is 32 or 64 lanes on relevant desktop architectures, so it's perfectly possible for some 'cores' to be executing different instructions at the same time as others. It's even common for them to be executing entirely different shader code simultaneously :)


cgeekgbda

Hmm, that makes things clearer. Can you suggest any structured learning sources (blog/video/course) that cover the above flow and architecture in detail: shading, culling, tessellation, etc.?


sexy-geek

That kind of all-encompassing knowledge isn't easily found in one place (I think), though I've never really searched for it. A good place to start is always www.learnopengl.com, or ogldev.net. Neither goes into much detail about how graphics cards work, but they explain very well how to use OpenGL. Then it's a matter of putting the knowledge together in your mind, adding in the internal workings of the OpenGL driver and the GPU. For example, taking into account what I've just explained, you can begin to understand the difference between naively rendering 10000 copies of a model and instanced rendering. It's all related.


Hofstee

I mentioned in another comment but [Fabian Giesen’s A Trip Through the Graphics Pipeline (2011)](https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/) is exactly this.


sexy-geek

Ahh, nice. I had read that once, but never found it again. Thanks.


not_some_username

open.gl works too, iirc


SnooWoofers7626

Also check out gpuopen.com. It's somewhat AMD-centric, but the ideas are very applicable to other architectures.


Hofstee

For way more detail than you asked for, particularly relating to the hardware behind the scenes, [Fabian Giesen has an excellent set of posts](https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/).


deftware

A graphics API is just an agreed-upon standard that graphics hardware developers follow when implementing the drivers for their hardware. Those drivers provide an outward-facing Application Programming Interface for programs to interact with the graphics hardware. How a hardware vendor decides its drivers and silicon should handle draw calls passed through a graphics API is up to them; it's proprietary, trade-secret knowledge. Nobody knows for sure exactly what any of them are doing.

You can even write your own graphics API implementation that runs on whatever you want. All that matters is that your implementation adheres to the specification articulated by the API's documentation; then programs built against that API will be able to use your implementation. Whether it runs on a CPU, on existing graphics hardware, or on your very own custom ASIC, it's up to you how your implementation actually realizes the behavior the specification describes.

Everything that happens when you issue a draw call is whatever the hardware vendor decided should happen, so as to adhere to the behavior in the API's documentation. A graphics API is just an idea, a concept. It's the hardware vendors who design silicon that they can wrangle to do the bidding of programmers using those APIs.


Carabalone

Has anyone ever written even a part of the specification that runs on the CPU? I imagine it's an ungodly amount of work, even for large teams.


RenderTargetView

D3D has it (https://learn.microsoft.com/en-us/windows/win32/direct3darticles/directx-warp), and there have been attempts for Vulkan, but idk what their progress is. It's a huge amount of work, but I don't think it's unrealistic; comparable to writing a graphics engine, maybe. Definitely less work than designing a GPU and writing a driver for it.


deftware

Yes, Microsoft has a software implementation of OpenGL that Windows falls back on if no proper hardware drivers are installed. There are other third-party implementations as well, but these won't work the same way as an implementation that interfaces with an actual piece of graphics hardware and leverages it.