Original link: https://news.ycombinator.com/item?id=44036829
Here's a summary of the Hacker News thread on "Memory Consistency Models: A Tutorial":
The article on memory consistency models was well received as a helpful primer, particularly as a refresher for readers already familiar with the topic. Commenters nonetheless acknowledged the complexity of the subject, emphasizing that mastering concepts like fences, compiler reordering, cache coherence, and mutexes requires extensive expertise.
One user argued that imperative languages fundamentally erred in exposing shared mutable memory, suggesting that JavaScript's single-threaded approach offers a less error-prone concurrency model. Another user linked to the Rustonomicon's section on atomics as a complementary resource. One commenter clarified that relaxed memory behavior stems primarily from pipelined and out-of-order execution, not from caches. A tool suite for simulating and running memory model litmus tests was also shared.
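To make the relaxed-ordering point concrete, here is a minimal sketch in Rust (chosen since the Rustonomicon was linked; it is not code from the thread) of the classic store-buffering litmus test. Under sequential consistency the outcome r1 == 0 && r2 == 0 is impossible, but relaxed atomics permit it. Spawning fresh threads each iteration keeps the race window small, so you may see few or no weak outcomes this way; dedicated litmus-test tools like the ones shared in the thread use much tighter synchronization.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Store-buffering (SB) litmus test: thread 1 stores to X then loads Y,
// thread 2 stores to Y then loads X. Sequential consistency forbids
// both loads returning 0, but Relaxed ordering allows it.
fn main() {
    let mut weak_outcomes = 0;
    for _ in 0..10_000 {
        let x = Arc::new(AtomicUsize::new(0));
        let y = Arc::new(AtomicUsize::new(0));

        let (x1, y1) = (Arc::clone(&x), Arc::clone(&y));
        let t1 = thread::spawn(move || {
            x1.store(1, Ordering::Relaxed);
            y1.load(Ordering::Relaxed) // r1
        });

        let (x2, y2) = (Arc::clone(&x), Arc::clone(&y));
        let t2 = thread::spawn(move || {
            y2.store(1, Ordering::Relaxed);
            x2.load(Ordering::Relaxed) // r2
        });

        let r1 = t1.join().unwrap();
        let r2 = t2.join().unwrap();
        if r1 == 0 && r2 == 0 {
            weak_outcomes += 1;
        }
    }
    println!("r1 == 0 && r2 == 0 observed {weak_outcomes} times");
}
```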
One commenter expressed disappointment, finding the article too basic and desiring a more in-depth exploration of CPU and programming language memory models.
My high-level take is that we mostly got concurrency wrong for imperative languages (probably because they were developed before parallel execution and all these optimizations were a thing). Exposing shared mutable memory access to application developers should have been a no-go from the start.
So, even if parallelism is a Wild West, some form of concurrency is a must-have, and ironically the language that caused the least pain was JS, because its designers chose to keep business logic single-threaded. Even with JS's perf issues, you rarely see the lack of parallel business logic cited as a bottleneck. And web workers (the escape hatch) are quite uncommon in practice, which imo validates that the tradeoff was well worth it.
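As a rough sketch of the message-passing style this comment is pointing at (the Msg enum and its Add/Report variants below are invented for illustration): one owner thread holds the mutable state and other threads send it messages, much like postMessage between web workers, rather than mutating shared memory directly.

```rust
use std::sync::mpsc;
use std::thread;

// The only thread that ever mutates the counter is the "owner" thread;
// everyone else communicates with it by message, not by shared memory.
enum Msg {
    Add(i64),
    Report(mpsc::Sender<i64>),
}

fn main() {
    let (tx, rx) = mpsc::channel::<Msg>();

    // Owner thread: processes messages until all senders are dropped.
    let owner = thread::spawn(move || {
        let mut counter = 0i64;
        for msg in rx {
            match msg {
                Msg::Add(n) => counter += n,
                Msg::Report(reply) => {
                    let _ = reply.send(counter);
                }
            }
        }
    });

    // "Worker" threads send Add messages instead of touching the counter.
    let workers: Vec<_> = (0..4)
        .map(|_| {
            let tx = tx.clone();
            thread::spawn(move || {
                for _ in 0..1_000 {
                    tx.send(Msg::Add(1)).unwrap();
                }
            })
        })
        .collect();

    for w in workers {
        w.join().unwrap();
    }

    // Ask the owner for the final value over a reply channel.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Msg::Report(reply_tx)).unwrap();
    println!("final count: {}", reply_rx.recv().unwrap());

    drop(tx); // dropping the last sender ends the owner's receive loop
    owner.join().unwrap();
}
```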