So I haven't used compute shaders, though I remembered Godot having them and double-checked. Interestingly, they are direct GLSL, which makes me wonder if they only work in OpenGL contexts. Which would be... weird, because Godot 4.3 shipped easy DirectX output support. I'm sort of tempted to test out making a compute shader, compiling to DX, and seeing if it works.
Edit: Doing more digging, according to the end of this forum thread they get compiled to SPIR-V and then to whatever backend is needed, be it GLSL, HLSL, etc. https://forum.godotengine.org/t/compute-shaders-in-godot/461...
I was referencing the historical motivations that led to where we are today. Yes, I was referring in part to the SDL_Render family of APIs. These were insufficient to support things like Nuklear and Dear ImGui, which are reasonable use cases for the simple 2D games that SDL hoped to help with by introducing the SDL_Render APIs in SDL 2.0 in the first place.
https://www.patreon.com/posts/58563886 A short excerpt:
> The next logical thing people were already clamoring for back then was shader support. Basically, if you can provide both batching (i.e. triangles) and shaders, you can cover a surprising amount of use cases, including many beyond 2D.
So fast-forwarding to today, you're right. Glancing at the commit, the GPU API has 80 functions. It is full-featured beyond its original 2D roots. I haven't followed the development enough to know where they are drawing the lines now, like whether raytracing and mesh shaders would be on their roadmap, or whether those would be a bridge too far.
The more the merrier, if you ask me. Eventually one will win, but we need more experimentation in this space. The existing GPU APIs are too hard to use and/or vendor-specific.
The API breaks in SDL2 were sorely needed, if you ask me. SDL1 painted itself into a corner in a few places, e.g. simultaneous use of multiple displays/windows.
WebGPU would be a lot more useful if it hadn't gone with such a needlessly different shader language syntax; that makes it much harder to share a single shader source between the C++ side and WebGPU.
It exists, but IMO it's not a good choice.
First of all, it doesn't support RenderGeometry or RenderGeometryRaw, which are necessary for high-performance 2D rendering (absent the new GPU API). I doubt it will support any of the GPU API at this rate, as the geometry rendering is a much simpler API. Maybe both will land all at once, though. To that point, the relevant issue hasn't seen much activity: https://github.com/Rust-SDL2/rust-sdl2/issues/1180

Secondly, the abstractions chosen by rust-sdl2 are quite different from those of SDL2 itself. There seems to have been an aggressive attempt by the Rust library authors to make something more Rust-friendly, which maybe has made it more approachable for people who don't know SDL2 already, but IMO it has made it less approachable for people who do know SDL2. The crate gets plenty of downloads, so maybe it's just me.
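For context (my own illustration, not from the thread), this is roughly what the missing call looks like on the C side in SDL2: one SDL_RenderGeometry call that batches arbitrary colored/textured triangles.

```c
/* Rough sketch of SDL2's SDL_RenderGeometry, the call the Rust bindings
   lack: draw one colored triangle with no texture and no index buffer.
   Assumes "renderer" was created elsewhere with SDL_CreateRenderer(). */
#include <SDL.h>

static void draw_triangle(SDL_Renderer *renderer)
{
    const SDL_Vertex verts[3] = {
        { .position = { 100.0f,  50.0f }, .color = { 255,   0,   0, 255 } },
        { .position = {  50.0f, 150.0f }, .color = {   0, 255,   0, 255 } },
        { .position = { 150.0f, 150.0f }, .color = {   0,   0, 255, 255 } },
    };

    /* NULL texture = solid colors; NULL/0 indices = use vertices in order. */
    SDL_RenderGeometry(renderer, NULL, verts, 3, NULL, 0);
}
```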
SDL is for everyone. I use it for a terminal emulator because it's easier to write something cross-platform in SDL than it is to use platform-native widget APIs.
The reason WebGPU took so long was that they decided to write their own shading language instead of using SPIR-V. SDL didn't make that mistake: you bring your own shader compilers and translation tools.

There is a sister project for a cross-platform shading language [1] and another for translating existing ones between each other [2], but they get done when they get done, and the rest of the API doesn't have to wait for them. WebGPU was made by a committee of vendors and language-lawyers (standards-lawyers?) with politics and bureaucracy, and it shows. SDL_GPU is made by game developers who value pragmatism above all (and are often looked down upon from the ivory tower because of that).

[1]: https://github.com/libsdl-org/SDL_shader_tools
[2]: https://github.com/flibitijibibo/SDL_gpu_shadercross
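To illustrate the "bring your own shader compiler" point, here is a sketch of my own (struct and field names as I understand the current SDL3 headers): the GPU API only accepts precompiled bytecode, and the offline step (glslc, DXC, SDL_gpu_shadercross, etc.) happens entirely outside SDL. The file path is a placeholder.

```c
/* Minimal sketch: feed a SPIR-V blob, compiled offline, to the GPU API. */
#include <SDL3/SDL.h>

static SDL_GPUShader *load_vertex_shader(SDL_GPUDevice *device)
{
    size_t code_size = 0;
    /* "triangle.vert.spv" is a hypothetical file produced by an offline compiler. */
    void *code = SDL_LoadFile("triangle.vert.spv", &code_size);
    if (!code)
        return NULL;

    SDL_GPUShaderCreateInfo info = {
        .code = code,
        .code_size = code_size,
        .entrypoint = "main",
        .format = SDL_GPU_SHADERFORMAT_SPIRV, /* tell SDL which bytecode this is */
        .stage = SDL_GPU_SHADERSTAGE_VERTEX,
    };
    SDL_GPUShader *shader = SDL_CreateGPUShader(device, &info);
    SDL_free(code);
    return shader;
}
```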
Yeah, legal strikes again. Unfortunately, SPIR-V was never going to be an option for WebGPU, because Apple refuses to use any Khronos projects due to a confidential legal dispute between them.[0] If WebGPU had used SPIR-V, it just wouldn't have been available in Safari.

See also: not supporting Vulkan or OpenXR at all, using USD instead of glTF for AR content even though it's less well suited for the task, etc. (Well, they probably don't mind that it helps maintain the walled garden either... there's more than one reason for everything.)

[0]: https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...
I might try this out. I have found SDL to be high-quality software: it compiles fast, compiles easily on multiple platforms, and always works. So I have some hopes for this new API.
AFAICT, if you don't want to use it then you don't have to, just like you didn't have to use SDL_render in SDL2. That is what maintainer Ryan Gordon pitched[0][1], at least.

[0]: https://github.com/libsdl-org/SDL_shader_tools/blob/main/doc... (though the approach that ended up getting merged was an initially-competing one implemented by the FNA folks instead, and they seem to have made some different decisions from what was outlined in that markdown doc.)
Counter-intuitively, when you actually start caring about performance (it's easy to write "working" Vulkan code, but hard to write efficient Vulkan code that competes with DX11 driver magic).
True, now that I think back, all it had was a blit function, and nowadays that's not a graphics system. (But back in the old days, I was impressed that it handled alpha blending for me! Fancy!)
> is this possible or is the backend selected e.g. based on the OS?

Selected in a reasonable order by default, but can be overridden. There are three ways to do so:

- Set the SDL_HINT_GPU_DRIVER hint with SDL_SetHint() [1].
- Pass a non-NULL name to SDL_CreateGPUDevice() [2].
- Set the SDL_PROP_GPU_DEVICE_CREATE_NAME_STRING property when calling SDL_CreateGPUDeviceWithProperties() [3].

The name can be one of "D3D11", "D3D12", "Metal" or "Vulkan" (case-insensitive). Setting the driver name for NDA platforms would presumably work as well, but I don't see why you would do that. The second method is just a convenient, albeit limited, wrapper for the third, so that the user does not have to create and destroy their own properties object. The global hint takes precedence over the individual properties.

[1] https://wiki.libsdl.org/SDL3/SDL_HINT_GPU_DRIVER
[2] https://wiki.libsdl.org/SDL3/SDL_CreateGPUDevice
[3] https://wiki.libsdl.org/SDL3/SDL_CreateGPUDeviceWithProperti...
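A small sketch of the first two methods, my own illustration with abbreviated error handling (flag and function names as I understand the SDL3 headers):

```c
/* Force the Vulkan backend via the global hint and/or the device name. */
#include <SDL3/SDL.h>

int main(void)
{
    if (!SDL_Init(SDL_INIT_VIDEO))
        return 1;

    /* Method 1: global hint, consulted during backend selection. */
    SDL_SetHint(SDL_HINT_GPU_DRIVER, "vulkan");

    /* Method 2: name passed directly to SDL_CreateGPUDevice().
       The first argument declares which compiled shader formats you can supply. */
    SDL_GPUDevice *device = SDL_CreateGPUDevice(SDL_GPU_SHADERFORMAT_SPIRV,
                                                true,       /* debug mode */
                                                "vulkan");  /* or NULL for auto */
    if (!device) {
        SDL_Log("SDL_CreateGPUDevice failed: %s", SDL_GetError());
        return 1;
    }

    SDL_Log("Using GPU driver: %s", SDL_GetGPUDeviceDriver(device));

    SDL_DestroyGPUDevice(device);
    SDL_Quit();
    return 0;
}
```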
If control flow statements don't require parentheses to be parseable, doesn't that mean that it is the parentheses that are completely unnecessary?
As far as I understand: the new GPU API is notable because it should let you write graphics code & shaders once and have it all work cross-platform (including on consoles) with minimal hassle - and previously that required Unity or Unreal, or your own custom solution.
WebGPU/WGSL is a similar "cross-platform graphics stack" effort, but as far as I know nobody has written console backends for it. (Meanwhile, the SDL3 GPU API currently doesn't seem to support WebGPU as a backend.)
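For a sense of what "write once, run on every backend" looks like in practice, here is a minimal per-frame sketch of my own against the SDL3 GPU API (function and struct names as I understand the current headers; pipelines and shaders are omitted). It assumes the window was previously claimed with SDL_ClaimWindowForGPUDevice().

```c
/* Acquire the swapchain texture and clear it; the same code runs on
   whichever backend (Vulkan, D3D12, Metal, ...) the device was created with. */
#include <SDL3/SDL.h>

static void draw_frame(SDL_GPUDevice *device, SDL_Window *window)
{
    SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);
    if (!cmd)
        return;

    SDL_GPUTexture *swapchain = NULL;
    if (!SDL_AcquireGPUSwapchainTexture(cmd, window, &swapchain, NULL, NULL) ||
        !swapchain) {
        SDL_SubmitGPUCommandBuffer(cmd); /* still submit the empty buffer */
        return;
    }

    SDL_GPUColorTargetInfo target = {
        .texture = swapchain,
        .clear_color = { 0.1f, 0.2f, 0.3f, 1.0f },
        .load_op = SDL_GPU_LOADOP_CLEAR,
        .store_op = SDL_GPU_STOREOP_STORE,
    };
    SDL_GPURenderPass *pass = SDL_BeginGPURenderPass(cmd, &target, 1, NULL);
    /* Draw calls would go here. */
    SDL_EndGPURenderPass(pass);

    SDL_SubmitGPUCommandBuffer(cmd);
}
```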