(Comments)

Original link: https://news.ycombinator.com/item?id=44009185

A Hacker News thread discusses a paper comparing parallel functional array languages. Commenter devlovstad found that a course using Futhark and CUDA helped in understanding JAX. yubblegum asks for feedback on the Chapel language, mentioned in the paper's related work; bradcray (admittedly biased toward Chapel) notes that its applicability now extends beyond high-performance computing, even to desktop environments.

The discussion turns to functional purity. munchler questions whether these languages are purely functional; several replies clarify APL's behavior (arrays are values and are not modified in place) and Futhark's "in-place updates" (semantically pure). zfnmxt confirms that Futhark, SaC, and Accelerate have purely functional semantics.

teleforce claims that all the languages depend on BLAS libraries, which is incorrect for Futhark, Accelerate, and APL. joe_the_user points out that APL predates BLAS. DrNosferatu compares MATLAB with APL, drawing disagreement from beagle3, who argues the two differ enormously.

Related Articles
  • Comparing Parallel Functional Array Languages: Programming and Performance 2025-05-18
  • (Comments) 2025-05-12
  • (Comments) 2025-04-14
  • 2025-05-19
  • (Comments) 2025-05-18

  • Original Article
    Comparing Parallel Functional Array Languages: Programming and Performance (arxiv.org)
    87 points by vok 1 day ago | 21 comments

    I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA. While I have not used any of these languages since, I have used JAX[1] quite a lot, where the learnings from this course have been quite helpful. Many people will end up writing code for GPUs through different levels of abstraction, but those who are able to reason about the semantics through functional primitives might have an easier time understanding what's happening under the hood.
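
    For a flavor of what that means in practice, here is a toy JAX sketch (my own illustration, not course material): you write a pure per-element function and lift it over the data with functional primitives like vmap, then let jit compile the whole pipeline for the accelerator.

          import jax
          import jax.numpy as jnp

          # A pure per-row function: no loops, no mutation.
          def normalize(row):
              return (row - jnp.mean(row)) / jnp.std(row)

          # vmap maps it over the leading (batch) axis; jit fuses and compiles.
          batched_normalize = jax.jit(jax.vmap(normalize))

          x = jnp.arange(12.0).reshape(3, 4)
          y = batched_normalize(x)  # each row normalized independently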


    > I took a course on massively parallel programming taught by one of the authors of this paper that extensively used Futhark and CUDA.

    PMPH? :)



    I think the intended footnote was accidentally left out. Were you talking about this Python library?

    https://docs.jax.dev/en/latest/index.html



    There's a JAX for AI/ML too

    https://github.com/jax-ml/jax

    but yeah no idea which the OP meant



    Chapel got a mention in the 'Related Work' section. I looked at it a few years ago and found it compelling (but I don't do HPC, so it was just window shopping). What's the HN feedback on Chapel?

    https://chapel-lang.org/



    @yubblegum: I'm unfairly biased towards Chapel (positively), so I won't try to characterize HN's opinion on it. But I did want to note that while Chapel's original and main reason for being is HPC, now that everyone lives in a parallel-computing world, users also benefit from using Chapel in desktop environments where they want to do multicore and/or GPU programming. One such example is covered in this interview with an atmospheric science researcher for whom it has replaced Python as his go-to desktop language: https://chapel-lang.org/blog/posts/7qs-dias/


    If you scroll down on the Chapel-lang website, there seems to be a lot of activity happening with this language. There is even going to be a ChapelCon 2025.

    https://chapel-lang.org/blog/posts/chapelcon25-announcement/



    Chapel and Lustre (a parallel, distributed file system) from Cray were funded by DARPA’s High Productivity Computing Systems program. This work, along with Fortress from Sun, was developed explicitly to enable and ‘simplify’ the programming of distributed “supercomputers”. The work and artifacts, along with the published documentation and research, are of particularly high quality.

    Even if you aren’t involved in HPC I’d say the concepts transfer or provide a great basis for parallel and distributed idioms and methodologies that can be adapted to existing languages or used in development of new languages.

    TL;DR - Chapel is cool and if you are interested in the general subject matter (despite a different focus) Fortress, which is discontinued, should also be checked out.



    Are these languages pure in the functional sense? E.g. Do they allow/encourage mutation? My understanding is that APL permits mutable state and side effects, but maybe they are rarely used in practice? If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.


    > My understanding is that APL permits mutable state and side effects ... If you're modifying the contents of an array in-place, I don't think it's reasonable to consider that functional.

          a←'hello'
          a[1]←'c'
    
    This does _not_ modify the array in-place. It's actually the same as:

          a←'hello'
          a←'c'@1⊢a
    
    which is more obviously functional. It is easy to convince yourself of this:

          a←'hello'
          b←a
          b[1]←'j'
          a,b
    
    returns 'hellojello' and not 'jellojello'.


    APL arrays are values in the same sense as value types in any functional language. You don't explicitly modify arrays in-place; if they happen to have a refcount of 1 operations may happen in-place as an optimization, but not in a manner which observably alters program behavior.
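
    A JAX analogy may help if APL notation is unfamiliar (my comparison, not the commenter's): JAX arrays are likewise immutable values, and an "update" is an expression that yields a new array, so aliases are never observably affected.

          import jax.numpy as jnp

          a = jnp.array([1, 2, 3])
          b = a                # b names the same immutable value as a
          b = b.at[1].set(9)   # builds a NEW array; nothing mutates in place
          print(a)             # [1 2 3], unchanged, as in the APL example
          print(b)             # [1 9 3]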


    Futhark, SaC, and Accelerate have purely functional semantics. Futhark has something called "in-place updates" that operationally mutate the given array, but semantically they work as if a new array is created (and are statically guaranteed to work this way by the type system).
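
    JAX's indexed updates are a rough analogue of this, if that helps (my comparison, not from the paper): x.at[i].set(v) semantically constructs a new array, but under jit the compiler is free to reuse the old buffer when the original value is no longer live.

          import jax
          import jax.numpy as jnp

          @jax.jit
          def bump(x):
              # Semantically: a fresh array equal to x with index 0 replaced.
              # Operationally: XLA may write into x's buffer, since x is dead
              # after this line, so the optimization is never observable.
              return x.at[0].set(42.0)

          print(bump(jnp.zeros(4)))  # [42.  0.  0.  0.]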


    Accelerate is a Haskell library/eDSL.


    I wasn’t expecting to personally know two of the authors, but having Accelerate included makes sense.


    Matlab supposedly is “portable APL”.


    The man who invented MATLAB, Cleve Moler, said: [I’ve] always seen MATLAB as “portable APL”. [1]

    …why the downvoting?

    [1] - https://computinged.wordpress.com/2012/06/14/matlab-and-apl-...



    I didn't downvote, but ... as someone who used both, this statement seems nonsensical.

    APL is mathematical notation that is also executable. It is all about putting a mathematical algorithm in a succinct, terse way.

    MATLAB is a clunky Fortran-like language that does simple 2D matrix stuff reasonably tersely (though not remotely as tersely as APL), and does everything else horribly awkwardly and verbosely.

    Modern MATLAB might be comparable to 1960s APL, but original MATLAB most certainly was not, and even modern MATLAB isn't comparable to modern APL (and its successors such as BQN and K).



    Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS, for performance.

    The D language has excellent support for functional and array features, with parallelism built in. On top of that, though not widely known, it has a high-performance native BLAS-style library with ergonomics and intuitiveness similar to Python [1].

    [1] Numeric age for D: Mir GLAS is faster than OpenBLAS and Eigen (2016):

    http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...



    > Notice that all the languages mentioned depend on an external BLAS library, for example OpenBLAS, for performance.

    That's incorrect. Futhark doesn't even have linear algebra primitives---everything has to be done in terms of map/reduce/etc: https://github.com/diku-dk/linalg/blob/master/lib/github.com...
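
    For intuition, here is what that style looks like in a JAX sketch (JAX rather than Futhark, purely for illustration): a matrix product built from nothing but map- and reduce-style primitives, with no BLAS call anywhere.

          import jax
          import jax.numpy as jnp

          def dot(u, v):
              return jnp.sum(u * v)  # reduce (+) over an elementwise map (*)

          def matmul(A, B):
              # map the dot product over the rows of A and the columns of B
              rows = jax.vmap(lambda r: jax.vmap(lambda c: dot(r, c))(B.T))
              return rows(A)

          A = jnp.ones((2, 3))
          B = jnp.ones((3, 4))
          print(matmul(A, B))  # a 2x4 matrix of threes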



    The same holds for Accelerate, and I'm fairly sure also for SaC and APL. DaCe even gets a special mention in section 10.5 of the paper, which states that it specifically _does_ use BLAS bindings.


    "Notice that all the all the languages mentioned depends on the external BLAS library". I didn't notice this 'cause I don't think it's true. For example, it highly implausible that APL[1] would depend on BLAS[2] considering APL predates BLAS by 5-10 years ("developed in the sixties" versus "between 1971 and 1973"). I don't think Futhark uses BLAS either but in modern stupidity, this currently two hour old parent has taken over Google results so it's hard to find references.

    [1] https://en.wikipedia.org/wiki/APL_(programming_language)

    [2] https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogra...
