(comments)

Original link: https://news.ycombinator.com/item?id=41396260

Recently, the Simple DirectMedia Layer (SDL) 2D application programming interface (API) has been considered limited because it originated in the era of rectangular sprite blitting and early hardware design. Today there are several higher-level graphics processing unit (GPU) options such as Vulkan, Metal, and Direct3D, and because of OpenGL's fragmentation and uneven platform support, the SDL developers found it hard to keep focusing on it. A notable example of the SDL 2D API's limits is the lightweight Nuklear 2D immediate-mode GUI library, which had no adequate way to batch-submit vertices for efficient processing. By introducing a simple batching API to SDL 2D, its capabilities were significantly expanded to meet modern performance requirements while staying simple; developers who need full 3D functionality, however, are still advised to choose a dedicated GPU API. The new SDL GPU API now offers a broad suite of more than 80 functions, covering use cases from basic 2D rendering to complex 3D scenes, although further development and future plans for advanced features such as ray tracing and mesh shaders remain uncertain. There is also ongoing debate about whether SDL3 still uses integers for coordinates or switches to floating point like other modern engines, which has led some users to turn to alternatives such as WebGPU for better decoupling and native floating-point coordinates. Although SDL provides a versatile API covering many aspects of game development, its new GPU API has drawn some criticism for overlooking basics needed for efficient 2D rendering and for diverging considerably from existing SDL2 conventions.


Original text


SDL3 is still in preview, but the new GPU API is now merged into the main branch while SDL3 maintainers apply some final tweaks.

As far as I understand: the new GPU API is notable because it should allow writing graphics code & shaders once and have it all work cross-platform (including on consoles) with minimal hassle - and previously that required Unity or Unreal, or your own custom solution.

WebGPU/WGSL is a similar "cross-platform graphics stack" effort but as far as I know nobody has written console backends for it. (Meanwhile the SDL3 GPU API currently doesn't seem to support WebGPU as a backend.)



Compared to libraries like bgfx and sokol at least, I think there are two key differences.

1) SDL_gpu is a pure C library, heavily focused on extreme portability and no dependencies. And somehow it's also an order of magnitude less code than the other options. Or at least this is a difference from bgfx, maybe not so much sokol_gfx.

2) The SDL_gpu approach is a bit lower level. It exposes primitives like command buffers directly to your application (so you can more easily reason about multi-threading), and your software allocates transfer buffers, fills them with data, and kicks off a transfer to GPU memory explicitly rather than this happening behind the scenes. It also spawns no threads - it only takes action in response to function calls. It does take care of hard things such as getting barriers right, and provides the GPU memory allocator, so it is still substantially easier to use than something like Vulkan. But in SDL_gpu it is extremely obvious to see the data movements between CPU and GPU (and memory copies within the CPU), and to observe the asynchronous nature of the GPU work. I suspect the end result of this will be that people write far more efficient renderers on top of SDL_gpu than they would have on other APIs.
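
To make that concrete, here is a rough sketch of the explicit upload flow described above, based on my reading of the SDL3 preview headers; the exact struct fields and release semantics (I believe destruction is deferred while a resource is referenced by a submitted command buffer) may still shift before release:

    /* Hedged sketch: upload vertex data through the SDL3 GPU API's explicit
       transfer-buffer flow. Names follow the preview headers. */
    #include <SDL3/SDL.h>

    static bool upload_vertices(SDL_GPUDevice *device, SDL_GPUBuffer *vertex_buffer,
                                const void *data, Uint32 size)
    {
        /* 1. Allocate a CPU-visible transfer buffer and fill it ourselves. */
        SDL_GPUTransferBufferCreateInfo tb_info = {
            .usage = SDL_GPU_TRANSFERBUFFERUSAGE_UPLOAD,
            .size = size
        };
        SDL_GPUTransferBuffer *staging = SDL_CreateGPUTransferBuffer(device, &tb_info);
        void *mapped = SDL_MapGPUTransferBuffer(device, staging, false);
        SDL_memcpy(mapped, data, size);
        SDL_UnmapGPUTransferBuffer(device, staging);

        /* 2. Record the CPU->GPU copy explicitly in a command buffer. */
        SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);
        SDL_GPUCopyPass *copy = SDL_BeginGPUCopyPass(cmd);
        SDL_GPUTransferBufferLocation src = { .transfer_buffer = staging, .offset = 0 };
        SDL_GPUBufferRegion dst = { .buffer = vertex_buffer, .offset = 0, .size = size };
        SDL_UploadToGPUBuffer(copy, &src, &dst, false);
        SDL_EndGPUCopyPass(copy);

        /* 3. Kick it off; the copy runs asynchronously on the GPU timeline. */
        bool ok = SDL_SubmitGPUCommandBuffer(cmd);
        SDL_ReleaseGPUTransferBuffer(device, staging); /* destruction deferred while in flight */
        return ok;
    }

Nothing here happens implicitly: the map/copy, the copy pass, and the submit are all visible in application code, which is the point being made above.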



Worth noting Godot also has cross-platform shaders. Its GDShader language is based heavily on the OpenGL shading language (though not a 1:1 copy) and gets compiled for the target platform. For PS5 and Xbox you have to work with a 3rd party (someone released the Nintendo build for anyone who's signed the Nintendo NDA).



So I haven't used compute shaders, though I remembered Godot having them and double-checked. Interestingly, they are direct GLSL, which makes me wonder if they only work in OpenGL contexts. Which would be... weird, because Godot 4.3 shipped easy DirectX output support. I'm sort of tempted to test out making a compute shader and compiling to DX to see if it works.

Edit: Doing more digging, according to the end of this forum thread they get compiled to SPIR-V and then to whatever backend is needed, be it GLSL, HLSL, etc.

https://forum.godotengine.org/t/compute-shaders-in-godot/461...



The old SDL 2D API was not powerful enough. It was conceived in the rectangle sprite blitting days, when video hardware was designed very differently and had drastically different performance characteristics. If you wanted anything more, OpenGL used to be 'the best practice'. But today the landscape is contested between Vulkan, Metal, and Direct3D, and hardware is centered around batching and shaders. Trying to target OpenGL is more difficult because OpenGL fragmented into GL vs. GLES and platform support for OpenGL varies (e.g. Apple stopped updating GL after 4.1).

A good example demonstrating where the old SDL 2D API is too limited is with the 2D immediate mode GUI library, Nuklear. It has a few simple API stubs to fill in so it can be adapted to work with any graphics system. But for performance, it wants to batch submit all the vertices (triangle strip). But SDL's old API didn't support anything like that.

The reluctance was that the SDL maintainers didn't want to create a monster and couldn't decide where to draw the line, so the line was held at the old 2D API. Then a few years ago, a user successfully changed the maintainers' minds after writing a demonstration showing how much could be achieved by just adding a simple batching API to SDL 2D. That shifted the mindset and led to this current effort. I have not closely followed the development, but I think it still aims to be a simple API, and you will still be encouraged to pick a full blown 3D API if you go beyond 2D needs. But you no longer should need to go to one of the other APIs to do 2D things in modern ways on modern hardware.
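
For reference, that batching addition landed in SDL 2.0.18 as SDL_RenderGeometry. A minimal sketch (hypothetical helper, real SDL2 calls) of pushing one textured quad through it:

    /* One sprite as two triangles; in practice you append many of these to the
       same vertex/index arrays and submit the whole batch in a single call. */
    #include <SDL2/SDL.h>

    void draw_sprite_batched(SDL_Renderer *renderer, SDL_Texture *atlas,
                             SDL_FRect dst, SDL_Color tint)
    {
        SDL_Vertex v[4] = {
            { { dst.x,         dst.y         }, tint, { 0.0f, 0.0f } },
            { { dst.x + dst.w, dst.y         }, tint, { 1.0f, 0.0f } },
            { { dst.x + dst.w, dst.y + dst.h }, tint, { 1.0f, 1.0f } },
            { { dst.x,         dst.y + dst.h }, tint, { 0.0f, 1.0f } }
        };
        int indices[6] = { 0, 1, 2, 0, 2, 3 };
        SDL_RenderGeometry(renderer, atlas, v, 4, indices, 6);
    }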



I was messing around a bit with SDL2 and either I was doing something wrong or it was just plain slow. My machine is plenty fast, but even just blitting a few dozen PNGs around a screen 60 times a second was pushing its limits. I freely admit I may have been doing something wrong, but I was surprised at just how inefficient it was at a task that we used to do without too much trouble on 1 MHz CPUs.

Maybe SDL_RenderCopy is the wrong API to use to blit things from a sprite sheet onto a display? The docs didn't give any warning if this is the case.



How recent a version were you using? Plenty of games and graphical apps use SDL2 under the hood, and rendering rects from a spritesheet is trivial. Recent versions use the geometry API for rendering rects, so it should be able to handle tons of sprites without much effort.



I'm using SDL2 2.30.0. The main loop is pretty simple, it does a few SDL_RenderFillRects to create areas, then several SDL_RenderCopy where the source is a SDL_Texture created from a SDL_Surface using SDL_CreateTextureFromSurface that was loaded from files at boot. A final call to SDL_RenderPresent finishes it off. They do include an alpha channel however.

I was expecting the sprite blitting to be trivial, but it is surprisingly slow. The sprites are quite small, only a few hundred pixels total. I have a theory that it is copying the pixels over the X11 channel each time instead of loading the sprite sheets onto the server once and copying regions using XCopyArea to tell the server to do its own blitting.



This should be plenty fast. SDL_RenderCopy generally should be doing things the 'right' way on any video card made in roughly the last 15 years (basically binding a texture in GPU RAM to a quad).

You probably need to do some debugging/profiling to find where your problem is. Make sure you aren't creating SDL_Textures (or loading SDL_Surfaces) inside your main game play loop. You also may want to check what backend the SDL_Renderer is utilizing (e.g. OpenGL, Direct3D, Vulkan, Metal, software). If you are on software, that is likely your problem. Try forcing it to something hardware accelerated.
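
A quick way to check that (a sketch with a hypothetical helper name, using standard SDL2 calls):

    #include <SDL2/SDL.h>

    SDL_Renderer *create_accelerated_renderer(SDL_Window *window)
    {
        /* Optionally request a specific backend before creating the renderer;
           valid names include "opengl", "direct3d", "metal", "software". */
        SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");

        SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

        SDL_RendererInfo info;
        if (renderer && SDL_GetRendererInfo(renderer, &info) == 0) {
            /* If this prints "software", that's likely the performance problem. */
            SDL_Log("renderer backend: %s", info.name);
        }
        return renderer;
    }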

Also, I vaguely recall there was a legacy flag on SDL_Surfaces called "hardware" or "SDL_HWSURFACE" or "SDL_HWACCEL" or something. Don't set that. It was for very legacy hardware from like 25 years ago and is slow on everything now.



I think you are getting confused between SDL_Render and SDL_GPU. SDL_Render is the old accelerated API that was only suitable for 2D games (or very primitive looking 3D ones). SDL_GPU is a fully-featured wrapper around modern 3D APIs (well, the rasteriser and compute parts anyway, no raytracing or mesh shaders there yet).



I was referencing the historical motivations that led to where we are today. Yes, I was referring in part to the SDL_Render family of APIs. These were insufficient to support things like Nuklear and Dear ImGui, which are reasonable use cases for the kind of simple 2D game that SDL hoped to help with when it introduced the SDL_Render APIs in SDL 2.0 in the first place.

https://www.patreon.com/posts/58563886

Short excerpt:

    One day, a valid argument was made that basic 2D triangles are pretty powerful in themselves for not much more code, and it notably makes wiring the excellent Dear Imgui library to an SDL app nice and clean. Even here I was ready to push back but the always-amazing Sylvain Becker showed up not just with a full implementation but also with the software rendering additions and I could fight no longer. In it went.
    The next logical thing people were already clamoring for back then was shader support. Basically, if you can provide both batching (i.e. triangles) and shaders, you can cover a surprising amount of use cases, including many beyond 2D.

So fast forwarding to today, you're right. Glancing at the commit, the GPU API has 80 functions. It is full-featured beyond its original 2D roots. I haven't followed the development enough to know where they are drawing the lines now, like would raytracing and mesh shaders be on their roadmap, or would those be a bridge too far.



> where they are drawing the lines now

From what I understand, they are only going to support features that are widely supported and standardised. Thus, even bindless didn't make the cut. Raytracing, mesh shaders, work-graphs, etc. almost certainly won't make it until SDL4 10 years from now; but I am not part of the development team, so don't quote me.



Does SDL3 still use integers for coordinates? I got annoyed enough by coordinates not being floating point in SDL2 that I started learning WebGPU, instead. This was even though the game I was working on was 2D.

The issue is, if you want complete decoupling (in the sense of orthogonality) among all four of:

- screen (window) size & resolution (especially if game doesn't control)

- sprite/tile image quantization into pixels (scaling, resolution)

- sprite display position, with or without subpixel accuracy

- and physics engine that uses floating point natively (BulletPhysics)

then to achieve this with integer drawing coordinates requires carefully calculating ratios while understanding where you do and do not want to drop the fractional part. Even then you can still run into problems such as accidentally having a gap (a one pixel wide blank column) between every 10th and 11th level tile because your zoom factor has a tenth of a pixel of overflow, or jaggy movement with wiggly sprites when the player is moving at a shallow diagonal at the same time as the NPC sprites are at different floating point or subpixel integer coords.
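
As a hypothetical worked example of that tile-gap case (made-up numbers, not from any particular engine): with 32 px tiles at a zoom of 1.1, truncating both the scaled tile width and each tile's position to integers leaves a blank column roughly every ten tiles:

    #include <stdio.h>

    int main(void)
    {
        const float zoom = 1.1f;
        const int tile_w = (int)(32 * zoom);        /* 35 px after truncation */
        for (int i = 8; i <= 10; i++) {
            int x = (int)(i * 32 * zoom);           /* truncated screen position */
            printf("tile %2d covers x = %d .. %d\n", i, x, x + tile_w - 1);
        }
        /* tile  8 covers x = 281 .. 315
           tile  9 covers x = 316 .. 350
           tile 10 covers x = 352 .. 386  -> column 351 is a one-pixel gap */
        return 0;
    }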

A lot of these problems could be (are) because I think of things from bottom up (even as my list above is ordered) where a physics engine, based on floating point math, is the source of Truth, and everything above each layer is just a viewport abstracting something from the layer beneath. I get the impression SDL was written by and for people with the opposite point of view, that the pixels are important and primary.

And all (most) of these have solutions in terms of pre-scaling, tracking remainders, etc. but I have also written an (unfinished) 3D engine and didn't have to do any of that because 3D graphics is floating point native. After getting the 2D engine 90% done with SDL2 (leaving 90% more to go, as we all know), I had a sort of WTF am I even doing moment looking at the pile of work-arounds for a problem that shouldn't exist.

And I say shouldn't exist because I know the final output is actually using floating point in the hardware and the driver; the SDL1/2 API is just applying this fiction that it's integers. (Neither simple, nor direct.) It gets steam coming out my ears knowing I'm being forced to do something stupid to maintain someone else's fiction, so as nice as SDL otherwise is, I ultimately decided to just bite the bullet and learn to program WebGPU directly.



>Does SDL3 still use integers for coordinates?

No, they added float versions for most functions and I think they plan on deprecating the int API in the future. The only exception I can think of offhand is still needing an integer rect to set a viewport.
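
For what it's worth, a small sketch of what the float path looks like in the SDL3 preview (function names per the current SDL3 wiki; the viewport really is the odd one out):

    #include <SDL3/SDL.h>

    void draw_sprite(SDL_Renderer *renderer, SDL_Texture *sheet,
                     float world_x, float world_y, float scale)
    {
        /* Source and destination rects are floats, so subpixel positions coming
           straight from a floating-point physics engine need no rounding here. */
        SDL_FRect src = { 0.0f, 0.0f, 16.0f, 16.0f };
        SDL_FRect dst = { world_x, world_y, 16.0f * scale, 16.0f * scale };
        SDL_RenderTexture(renderer, sheet, &src, &dst);

        /* The viewport, as noted above, still takes an integer SDL_Rect:
           SDL_Rect viewport = { 0, 0, 640, 360 };
           SDL_SetRenderViewport(renderer, &viewport); */
    }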



SDL provides various "backend agnostic" APIs for a variety of needs, including window creation, input (with a gamepad abstraction), audio, system stuff (e.g. threads), etc., so that programs written against SDL can work on a variety of systems. If linked dynamically (or with the "static linking but with a dynamic override" mechanism that lets a statically linked build use a newer dynamic version of the library), they can also pick up newer/better stuff, which is sometimes needed: e.g. some older games using old versions of SDL 1.x need the DLL/.so replaced with a newer version to work on new OSes, especially on Linux.

Exposing a modern (in the sense of how self-proclaimed modern APIs like Vulkan, D3D12 and Metal work) GPU API that lets applications written against it work with various backends (D3D11, D3D12, Vulkan, Metal, whatever Switch and PS5 use, etc) fits perfectly with what SDL already does for every other aspect of making a game/game engine/framework/etc.

As for whether it was "needed": it was needed as much as any other of SDL's "subsystems". Strictly speaking, not really, as you could use some other library (but that could be said of SDL itself). But from the perspective of what SDL wants to provide (an API to target so you won't have to target each underlying API separately), it was needed for the sake of completeness. Previously OpenGL filled this role if you wanted 3D graphics, but that was when OpenGL was practically universally available on the platforms SDL itself officially supported - nowadays this is not the case.



While WebGPU isn't a bad API, it also isn't exactly the '3D API to end all 3D APIs'.

WebGPU has a couple of design decisions which were necessary to support Vulkan on mobile devices, which make it a very rigid API and even (desktop) Vulkan is moving away from that rigid programming model, while WebGPU won't be able to adapt so quickly because it still needs to support outdated mobile GPUs across all operating systems.



One important point I haven't seen mentioned yet is that SDL is the defacto minimal compatibility layer on Linux for writing a windowed 'game-y' application if you don't want to directly wrestle with X11, Wayland, GTK or KDE.



Yeah - getting an OpenGL (and presumably same for Vulkan) context is surprisingly annoying if you don't have a library to help you. It also works quite differently on X11, Wayland, or directly on kernel APIs. Many games that don't otherwise use SDL2 (such as ones ported from other platforms, i.e. most games) use it just for that.



The more the merrier if you ask me. Eventually one will win but we need more experimentation in this space. The existing GPU APIs are too hard to use and/or vendor-specific.



Writing bits of Vulkan or D3D12 really isn't that bad if you're working within an engine which does most of the setup for you, which is nearly always the case for practical work. If you're doing everything yourself from scratch, you're probably either a hobbyist tinkering or a well-compensated expert working for a AAA game developer.



Yes and no. SDL 2.x is not backwards compatible with SDL 1.x (and that was an annoyance of mine) but at some point someone wrote an SDL 1.x implementation on top of SDL 2.x that got official blessing, so at least games using SDL 1.x can be made to use SDL 2.x "under the hood" be it in source code form or binary-only form.

Though you can't take an SDL 1.x game and convert it piecemeal to SDL 2.x as the APIs are not backwards compatible, it is an all-or-nothing change.



The API breaks in SDL2 were sorely needed, if you asked me. SDL1 painted itself into a corner in a few places, e.g. simultaneous use of multiple displays/windows.



I don't think they were needed, but I value not breaking existing programs and code more than some abstract and often highly subjective form of code purity.

The compatibility layer that was introduced a few years later did solve the "SDL1 apps running under SDL2 under the hood (though with some regressions)" compatibility issue. It somewhat solved the "compile existing code that uses SDL1 against SDL2" issue (depending on your language and SDL bindings; I had to compile the real SDL 1.2 library because Free Pascal's bindings didn't work with sdl12-compat). But it did not solve the "update existing code to use the new features without rewriting everything" compatibility issue (there was even a user in the PR or on Twitter asking about future plans for compatibility because he had spent years updating his code from SDL 1.2 to SDL 2.0 and didn't want to repeat the process again - FWIW the answer was that it probably won't be any less than 10 years before a new major backwards-incompatible version).



WebGPU would be a lot more useful if it hadn't gone with such a needlessly different shader language syntax; it makes it much harder to share any single source between the C++ side and the shaders.



It exists, but IMO it's not a good choice.

First of all, it doesn't support RenderGeometry or RenderGeometryRaw, which are necessary for high-performance 2D rendering (absent the new GPU API). I doubt it will support any of the GPU API at this rate, as the geometry rendering is a much simpler API. Maybe both will land all at once, though. To wit, the relevant issue hasn't seen much activity: https://github.com/Rust-SDL2/rust-sdl2/issues/1180

Secondly, the abstractions chosen by rust-sdl2 are quite different from those of SDL2 itself. There seems to have been an aggressive attempt by the Rust library authors to make something more Rust-friendly, which maybe has made it more approachable for people who don't know SDL2 already, but it has IMO made it less approachable for people who do know SDL2. The crate gets plenty of downloads, so maybe it's just me.



SDL is for everyone. I use it for a terminal emulator because it’s easier to write something cross platform in SDL than it is to use platform native widgets APIs.



Can the SDL terminal emulator handle up-arrow history and /slash commands, and cool CLI things like Textual and IPython's prompt_toolkit readline (.inputrc) alternative, which supports multi-line editing, argument tab completion, and syntax highlighting, in a game and/or on a PC?



I think you're confusing the roles of terminal emulator and shell. The emulator mainly hosts the window for a text-based application: print to the screen, send input, implement escape sequences, offer scrollback, handle OS copy-paste, etc. The features you mentioned would be implemented by the hosted application, such as a shell (which they've also implemented separately).



It's exciting to see how this all shakes out. Hopefully we end up with more options for building custom game engines and apps.

I've been going down the Vulkan rabbit hole. It's been fun/enlightening to learn, but the nature of Vulkan makes progress feel slow. I think if SDL3 had been available when I started, I would have happily gone that route and would have more to show for the amount of time I've invested.



How did they manage to pull this off so quickly? Given how long WebGPU native has been in development and still isn't finalized, you would think the SDL GPU API would take even longer because it supports more platforms.



The reason WebGPU took so long was that they decided to write their own shading language instead of using SPIR-V. SDL didn't make that mistake: you bring your own shader compilers and translation tools.

There is a sister project for a cross-platform shading language [1] and another for translating existing ones between each other [2] , but they get done when they get done, and the rest of the API doesn't have to wait for them.

WebGPU was made by a committee of vendors and language-lawyers (standards-lawyers?) with politics and bureaucracy, and it shows. SDL_GPU is made by game developers who value pragmatism above all (and often are looked down upon from the ivory tower because of that).

[1]: https://github.com/libsdl-org/SDL_shader_tools [2]: https://github.com/flibitijibibo/SDL_gpu_shadercross



Yeah, legal strikes again. Unfortunately SPIR-V was never going to be an option for WebGPU, because Apple refuses to use any Khronos projects due to a confidential legal dispute between them.[0] If WebGPU used SPIR-V, it just wouldn't be available in Safari.

See also: Not supporting Vulkan or OpenXR at all, using USD instead of glTF for AR content even though it's less well suited for the task, etc. (Well, they probably don't mind that it helps maintain the walled garden either... There's more than one reason for everything)

0: https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...



# Attendance

## Khronos

Neil Trevett

## Apple

Dean Jackson, Myles C. Maxfield, Robin Morisset, Maciej Stachowiak, Saam Barati

## Google

Austin Eng, Corentin Wallez, Dan Sinclair, David Neto, James Darpinian, Kai Ninomiya, Ken Russell, Shrek Shao, Ryan Harrison

## Intel

Yunchao He

## Mozilla

Dzmitry Malyshau

## W3C

François Daoust, Dominique Hazael-Massieux

## Timo de Kort [sic?]

———

I get that Apple/Google have significantly more resources than most organizations on the planet but if these demographics are representative of other (web) standards committees that’s depressing.



I don't think that's accurate. Creating a shading language is obviously a huge effort, but there were already years of effort put into WebGPU as well as implementations/games building on top of the work-in-progress specification before the shading language decision was made (implementations at the time accepted SPIR-V).



The PoC was made in 2016, the work started in 2017, but the first spec draft was released on 18 May 2021. [1] This first draft already contained references to WGSL. There is no reference to SPIR-V.

Why did it take this long to release the first draft? Compare it to the SDL_GPU timeline: start to finish in six months. Well, because the yak shaving on WGSL had already begun, and was eating up all the time.

[1]: https://www.w3.org/TR/2021/WD-webgpu-20210518/



Scaffolding wasn’t a problem at all. Both used SPIRV-Cross for shader conversions at the time and focused on implementing the rest of the API. The shading language barely matters to the rest of the implementation. You can still use SPIR-V with wgpu on its Vulkan backend today for example.



Also tbf, the WebGPU peeps did a lot of investigations for what is the actual set of common and web-safe features across D3D, Vulkan and Metal, and all those investigation results are in the open.

In that sense the WebGPU project is an extremely valuable resource for other wrapper APIs, and saves those other APIs a ton of time.



Yeah. SDL went the path of "wrap native APIs". WebGPU went the path of "exactly what level of floating point precision can we guarantee across all APIs" along with "how do we prevent absolutely all invalid behavior at runtime, e.g. out of bounds accesses in shaders, non-dynamically uniform control flow at invalid times, indirect draws that bypass the given limits, preventing too-large shaders that would kill shader compilers, etc".

WebGPU spends a _lot_ of time investigating buggy driver behavior and trying to make things spec-conformant across a lot of disparate and frankly janky platforms. There's a big difference between writing an RHI, and writing a _spec_.



The core contributors of the SDL3 GPU project have experience with two cross-platform (PC + consoles) GPU abstraction layers, FNA3D and Refresh, which provided a lot of knowledge and existing open source code to use as a springboard to assemble this quickly with high quality.



I might try this out. SDL I have found to be high quality software - compiles fast, compiles easily on multiple platforms, always works. So I have some hopes for this new API.



I’ve never used this library before, but I’m very interested to see some examples of its cross-platform GPU compute abilities, if I understand from the link thread that they are now available. Does anyone have a suggestion on where to get started?



Huge fan of SDL generally.

When I went looking for a cross-platform gaming library, SDL and its API struck the right balance for me. I just wanted a C (++) library I could call to create windows and graphical contexts — a fast sprite rendering framework. I didn't need a whole IDE or a bloated library, didn't want to learn a new language, etc.



This is a separate thing with the same name. Although both share some common ideas. The grimfang4/sdl-gpu is a separate library used with SDL, while the new SDL GPU API is directly part of SDL. grimfang4/sdl-gpu is much older and works with today's SDL 2.

The grimfang4/sdl-gpu was one good way to take advantage of modern GPUs in a simple way and work around the holes/limitations of the old SDL 2D API. The new SDL 3 GPU API will likely make things like grimfang4/sdl-gpu redundant.



Feels like SDL3 suffers from the second-system effect. (SDL2 was just SDL1 with explicit window handles, so SDL3 is the second system, not the third.) SDL1/2 is a thin layer that wraps the platform-specific boilerplate of opening a window and handling input events, so you can get to the OpenGL rendering stuff that you actually wanted to write.



If you only want to support Windows/Linux/Android, then sure, you can definitely argue that the SDL GPU API is bloat.

But if you want to support Apple's operating systems then you're stuck with OpenGL 4.1 (officially deprecated by Apple 5 years ago) - so no modern GPU features like compute shaders.

You can go the Vulkan route and use MoltenVK for Apple systems, but Vulkan is quite a step up in complexity from OpenGL ("1000 lines of code for a triangle" as people like to say). The goal for SDL3's GPU API is to give you a more approachable (but still plenty flexible) alternative to that.

And similar story for consoles, presumably.

Apparently lots of people asked for "SDL_render but can you add shader support that works for all platforms", so that's the origin story.

SDL3 does also add a higher level audio API - I don't know much about its merits.



Ah, I managed to dig up the original announcement post[0]; relevant snippet:

> But this is terrible advice in 2021, because OpenGL, for all intents and purposes, is a deprecated API. It still works, it's still got some reasonably modern features, but even if you add up the 22 years Microsoft spent trying to kill it with Apple's seven-or-maybe-twenty, it doesn't change the fact that the brains behind OpenGL would rather you migrate to Vulkan, which is also terrible advice.

> It seems bonkers to tell people "write these three lines of code to make a window, and then 2000 more to clear it," but that's the migration funnel--and meat grinder--that SDL users are eventually going to get shoved into, and that's unacceptable to me.

[0]: https://www.patreon.com/posts/new-project-top-58563886



But why does the GPU API need to be in mainline SDL? Couldn't it be a separate project like SDL_net, SDL_mixer, SDL_image, and SDL_ttf? I would think that as a separate project "SDL_gpu" could be versioned independently, evolve independently, and not be obligated to support every platform SDL itself supports. In fact if "SDL_gpu" only required a windowing context, then it could presumably integrate with SDL2 and non-SDL applications!



AFAICT, if you don't want to use it then you don't have to - just like you didn't have to use SDL_render in SDL2. That is what was pitched by maintainer Ryan Gordon[0][1] at least.

[0]: https://github.com/libsdl-org/SDL_shader_tools/blob/main/doc... , though the approach that ended up getting merged was an initially-competing approach implemented by FNA folks instead and they seem to have made some different decisions than what was outlined in that markdown doc.



While using SDL for drawing is optional (and seldom done if you're doing 3D) I would like to add that its drawing API is useful to have out-of-the-box so that new/basic users can get stuff on screen right away without having to write their own high-level graphics engine first.



Slightly off-topic, but where's the complexity of Vulkan (the 1000 lines) coming from? My memory tells me that most of the misery is from the window system integration, and that the rest is pretty pleasant.



Counter-intuitively, when you actually start caring about performance (easy to write "working" Vulkan code, hard to write efficient Vulkan code that competes with DX11 driver magic)



SDL2 was not "just SDL1 with explicit window handles". There were a variety of changes and new features all over the API, including (like SDL3) major changes to the graphics subsystem (SDL1 used software rendering, SDL2 added hardware acceleration).

Also, SDL2 has evolved considerably since 2.0.0, and SDL3 continues that evolution while allowing API-breaking changes. SDL3 is not a from-scratch re-write and as an SDL user I dont anticipate migrating from SDL2 to SDL3 will be that difficult.

[edit] And SDL1/2 was never so "thin" that it didn't have its own high-level graphics system, which is useful to have out-of-the-box so that new/basic users can get stuff on screen right away.

[edit2] As ahefner points out, SDL1 was pretty "thin" by modern standards, but it still gave you enough to draw basic stuff on screen without writing your own pixel math, which was pretty helpful back in the 90's.



True, now that I think back, all it had was a blit function, and nowadays that's not a graphics system. (But back in the old days, I was impressed that it handled alpha blending for me! Fancy!)



I don't run GL games anymore on elf/linux. And it has been a while. Most cross-platform game engines have a vulkan backend now.

Very small teams are able to show games running the latest UE5.x engine on native elf/linux, vulkan ("vein", "shmaragon" something).

But the steam client... is still 32bits and x11/GL hard dependent...

I still plan to code my own wayland compositor once the steam client is ELF64 and does proper wayland->x11/vulkan->CPU fallbacks. It will feel weird to have a clean 64bits system.



Thanks for the clarification. From the sparse documentation of SDL_GPU it was somewhat difficult to understand which parts are part of the SDL 3 merge, and which parts are something else.

I did find an example of using the GPU API, but I didn't see any mention of selecting a backend (Vk, etc.) in the example - is this possible or is the backend selected e.g. based on the OS?



> is this possible or is the backend selected e.g. based on the OS?

Selected in a reasonable order by default, but can be overridden.

There are three ways to do so:

- Set the SDL_HINT_GPU_DRIVER hint with SDL_SetHint() [1].

- Pass a non-NULL name to SDL_CreateGPUDevice() [2].

- Set the SDL_PROP_GPU_DEVICE_CREATE_NAME_STRING property when calling SDL_CreateGPUDeviceWithProperties() [3].

The name can be one of "D3D11", "D3D12", "Metal" or "Vulkan" (case-insensitive). Setting the driver name for NDA platforms would presumably work as well, but I don't see why you would do that.

The second method is just a convenient, albeit limited, wrapper for the third, so that the user does not have to create and destroy their own properties object.

The global hint takes precedence over the individual properties.

[1] https://wiki.libsdl.org/SDL3/SDL_HINT_GPU_DRIVER

[2] https://wiki.libsdl.org/SDL3/SDL_CreateGPUDevice

[3] https://wiki.libsdl.org/SDL3/SDL_CreateGPUDeviceWithProperti...
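
A short sketch of the first two methods (signatures as of the SDL3 preview; check the wiki pages above before relying on them):

    #include <SDL3/SDL.h>

    SDL_GPUDevice *create_device_preferring_vulkan(void)
    {
        /* Method 1: global hint, consulted by the creation call below. */
        SDL_SetHint(SDL_HINT_GPU_DRIVER, "vulkan");

        /* Method 2: name the driver directly (pass NULL to let SDL choose).
           The first argument declares which shader formats the app can supply. */
        return SDL_CreateGPUDevice(SDL_GPU_SHADERFORMAT_SPIRV,
                                   true,      /* enable debug mode */
                                   "vulkan");
    }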



Deviating from conventions to avoid footguns is so misguided. I've been writing C family languages for like 15 years and never once accidentally written an if (foo); whatever;

The convention itself IS the thing that stops you from fucking that up. It's the kind of thing you do once 2 days into a 30 year career and never again.

I still think it's dumb in JavaScript, where you could be using the language on day 2 of learning programming. But in a GPU shader language that would be almost impossible to understand with no programming experience anyway? It's actually insane.

Having said that everything else about this project looks pretty good, so I guess they can get a pass lol.



If control flow statements don't require parentheses to be parseable, doesn't that mean that it is the parentheses that are completely unnecessary?

联系我们 contact @ memedata.com