| there’s already a PEP to address the GIL. Python is going through its JVM optimization phase. It’s so popular and ubiquitous that improvements to the GIL and things like it are inevitable |
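For anyone following along, here is a minimal sketch of the kind of CPU-bound threading the GIL serializes; the `sys._is_gil_enabled()` check assumes a CPython 3.13+ build (it is skipped on older interpreters), and the loop sizes are arbitrary:

```python
import sys
import threading
import time

def cpu_bound(n: int) -> int:
    # Pure-Python busy loop; with the GIL, only one thread makes progress at a time.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # Available on CPython 3.13+; reports whether the GIL is currently active.
    if hasattr(sys, "_is_gil_enabled"):
        print("GIL enabled:", sys._is_gil_enabled())

    start = time.perf_counter()
    threads = [threading.Thread(target=cpu_bound, args=(5_000_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Under the GIL the four threads run effectively serially; in a
    # free-threaded (PEP 703) build they can use four cores.
    print(f"elapsed: {time.perf_counter() - start:.2f}s")
```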
| I would like to warn people away from Taichi if possible. At least as of 1.7.0 there were bugs in the code that made it very difficult to work with. |
| Yeah this discussion is pretty interesting. I was wondering what was happening with development as well. Taichi is cool tech, but to be honest, it seems like they lacked direction on how to monetize it. For example, they tried this "Taitopia" thing (https://taitopia.design/), which has already been EOL'ed.
IMO if they had focused on ML from the beginning, similar to Mojo, they would be in a better place. |
| the problem with the GPGPU space is that everything except CUDA is so fractally broken that everyone eventually converges on the NVIDIA stuff that actually works.
Yes, the heterogeneous compute frameworks are largely broken, except for OneAPI, which does work, but only on CUDA. SPIR-V works best on CUDA. OpenCL works best on CUDA. Even once you get past the topline "does it even attempt to support that", you'll find that AMD's runtimes are broken too. Their OpenCL runtime is buggy and has a bunch of paper features which don't work, plus a bunch of AMD-specific behavior and bugs that aren't spec-compliant. So basically you have to have an AMD-specific codepath anyway to handle the bugs. Same for SPIR-V: the biggest thing working against it is that AMD's Vulkan Compute support is incomplete and buggy too.

https://devtalk.blender.org/t/was-gpu-support-just-outright-...

https://render.otoy.com/forum/viewtopic.php?f=7&t=75411 ("As of right now, the Vulkan drivers on AMD and Intel are not mature enough to compile (much less ship) Octane for Vulkan")

If you are going to all that effort anyway, why are you (a) targeting AMD at all, and (b) not just using CUDA in the first place? So everyone writes more CUDA and nothing gets done. Cue some new whippersnapper who thinks they're going to cure all of AMD's software problems in a month; they bash into the brick wall, write a blog post, become an angry forum commenter, rinse and repeat. And now you have another abandoned cross-platform project that basically only ever supported NVIDIA anyway.

Intel, bless their heart, is actually trying, and their stuff largely does just work, supposedly, although I'm trying to get their Linux runtime up and running on a Serpent Canyon NUC with an A770m and am having a hell of a time. But supposedly it does work, especially on Windows (and I may just have to knuckle under and use Windows, or put a PCIe card in a server PC). They just don't have the market share to make it stick.

AMD is stuck in this perpetual cycle of expecting anyone but themselves to write the software, not even providing enough infrastructure to get people to the starting line, and then surprised-pikachu nothing works, and surprised-pikachu they never get any adoption. Why has nvidia done this!?!? /s

The other big exception is Metal, which both works and has an actual userbase. The reason Cycles and Octane have Metal support is that Apple contributes the code, and that's really what needs to happen (I think that's what Intel is doing - there's just a lot of work to come from zero). But of course Metal is Apple-only, so ideally you would have a layer that goes over the top... |
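As a concrete illustration of the "AMD-specific codepath anyway" point, this is the kind of vendor sniffing that cross-platform OpenCL code ends up doing; the branch contents here are hypothetical placeholders, not actual workarounds for the bugs described above:

```python
import pyopencl as cl

# Enumerate platforms/devices and branch on vendor strings -- the kind of
# special-casing the comment above describes.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        vendor = device.vendor.lower()
        if "advanced micro devices" in vendor or "amd" in vendor:
            # Hypothetical conservative path to dodge runtime quirks.
            use_conservative_kernels = True
        elif "nvidia" in vendor:
            use_conservative_kernels = False
        else:
            use_conservative_kernels = True
        print(device.name, "->", "conservative" if use_conservative_kernels else "fast")
```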
| I really wish Python would stop being the go-to language for GPU orchestration or machine learning; having worked with it again recently for some proofs of concept, it's been a massive pain in the ass. |
| > Warp is designed for spatial computing
What does this mean? I've mainly heard the term "spatial computing" in the context of the Vision Pro release. It doesn't seem like this was intended for AR/VR. |
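For context, Warp's "spatial computing" refers to simulation-style workloads over particles, meshes, and volumes rather than AR/VR. A minimal sketch of a Warp kernel along those lines, where the particle count, the explicit-Euler gravity step, and the device string are illustrative assumptions rather than anything from the thread:

```python
import warp as wp

wp.init()

@wp.kernel
def integrate(positions: wp.array(dtype=wp.vec3),
              velocities: wp.array(dtype=wp.vec3),
              dt: float):
    # One thread per particle: explicit Euler step under gravity.
    tid = wp.tid()
    velocities[tid] = velocities[tid] + wp.vec3(0.0, -9.8, 0.0) * dt
    positions[tid] = positions[tid] + velocities[tid] * dt

n = 1024
positions = wp.zeros(n, dtype=wp.vec3, device="cuda")
velocities = wp.zeros(n, dtype=wp.vec3, device="cuda")

# Warp JIT-compiles the kernel to native code and launches it on the GPU.
wp.launch(integrate, dim=n, inputs=[positions, velocities, 1.0 / 60.0], device="cuda")
wp.synchronize()
```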
| That's the part I don't get. When you are developing AI, how much code are you really running on GPUs? How bad would it be to write it for something else if you could get 10% more compute per dollar? |
| I think you answered your own question there. Python is very accessible, very popular, and already widely used for GPU-based things like machine learning. |
| >GPU support requires a CUDA-capable NVIDIA GPU and driver (minimum GeForce GTX 9xx).
Very tactful of NVIDIA. I have a lovely AMD GPU and this library is worthless for it. |
| Does it really matter for performance? I see Python in these kinds of setups as an orchestrator of computing APIs/engines. For example, from Python you instruct the engine to compute the following list, etc. No heavy computing happens in Python, so performance is not much of an issue. |
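A quick sketch of that orchestration argument: Python issues a single call and the O(n^3) work runs in the underlying compiled BLAS library, so interpreter overhead barely registers (the matrix size here is arbitrary):

```python
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Python only dispatches this call; the heavy loop executes in the BLAS
# library, not in the interpreter.
start = time.perf_counter()
c = a @ b
print(f"matmul: {time.perf_counter() - start:.3f}s")
```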