(Comments)

Original link: https://news.ycombinator.com/item?id=43651576

This Hacker News thread discusses the merits of programming in Lisp, prompted by an article titled "Why I Program in Lisp." Commenters debate the language's readability, particularly the parentheses, and the mental adjustment its syntax requires. Some find Lisp hard to read because of its nested structure, while others argue the syntax mirrors the logical structure of the code. The discussion extends to functional programming, with participants examining the separation of I/O from computation and how purity can be achieved. Others bring up alternatives such as Common Lisp, Clojure, Haskell, and Ruby and their respective strengths. The thread also covers Lisp's advantages, such as its expressiveness, customization through macros, and interactive development. Finally, some commenters address why Lisp is less popular in professional settings and whether new LLMs will affect the language's adoption.

Related articles
  • Why I Program in Lisp 2025-04-11
  • (Comments) 2024-04-22
  • (Comments) 2024-08-28
  • (Comments) 2025-04-09
  • (Comments) 2024-09-13

  • Original
    Why I Program in Lisp (funcall.blogspot.com)
    221 points by ska80 14 hours ago | 197 comments

    Terry Pratchett has a quote in one of his books (in fact I think it's a running gag, and appeared in multiple books):

      Five exclamation marks, a sure sign of an insane mind
    
    That's what I think about five closing parentheses too... But tbh I am also jealous, because I can't program in lisp at all


    Good article. Funnily enough, the throwaway line "I don't see parentheses anymore" is my greatest deterrent with Lisp. It's not the parens persay, it's the fact that I'm used to reading top to bottom and left to right. Lisp, without something like the Clojure macro ->, means that I am reading from right to left, bottom to top - from inside out.

    If I programmed enough in Lisp, I think my brain would adjust to this, but it's almost like I can't fully appreciate the language because it reads in the "wrong order".
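
    For what it's worth, a Clojure-style -> is only a few lines of macro away in Common Lisp. A minimal sketch (ignoring the edge cases a real threading library would handle):

        (defmacro -> (x &rest forms)
          ;; thread X as the first argument through successive FORMS
          (reduce (lambda (acc form)
                    (if (listp form)
                        (list* (first form) acc (rest form))
                        (list form acc)))
                  forms :initial-value x))

        ;; (-> x (bar) (quux 2) foo) expands to (foo (quux (bar x) 2))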



    > It's not the parens persay, it's the fact that I'm used to reading top to bottom and left to right. Lisp, without something like the Clojure macro ->, means that I am reading from right to left, bottom to top - from inside out.

    I’m not certain how true that really is. This:

        foo(bar(x), quux(y), z);
    
    looks pretty much identical to:

        (foo (bar x) (quux y) z)
    
    And of course if you want to assign them all to variables:

        int bar_x = bar(x);
        char quux_y = quux(y);
        
        return foo(bar_x, quux_y, z);
    
    is pretty much the same as:

        (let ((bar-x (bar x))
              (quux-y (quux y)))
          (foo bar-x quux-y z))
    
    FWIW, ‘per se’ comes from the Latin for ‘by itself.’


    The lisp is harder to read, for me. The first double paren is confusing.

        (let (bar-x (bar x))
             (quux-y (quux y)))
        (foo bar-x quux-y z)
    
    Why is the second set of parens necessary?

    The nesting makes sense to an interpreter, I'm sure, but it doesn't make sense to me.

    Is each top-level set of parens a 'statement' that executes? Or does everything have to be embedded in a single list?

    This is all semantics, but for my python-addled brain these are the things I get stuck on.



    I am not a Lisp expert by any stretch, but let's clarify a few things:

    1. Just for the sake of other readers, we agree that the code you quoted does not compile, right?

    2. `let` is analogous to a scope in other languages (an extra set of {} in C), I like using it to keep my variables in the local scope.

    3. `let` is structured much like other function calls. Here the first argument is a list of assignments, hence the first double parenthesis (you can declare without assigning, in which case the double parenthesis disappears since it's a list of variables, or `(variable value)` pairs).

    4. The rest of the `let` arguments can be seen as the body of the scope, you can put any number of statements there. Usually these are function calls, so (func args) and it is parenthesis time again.

    I get that the parentheses can get confusing, especially at first. One adjusts quickly though, and using proper indentation helps.

    I mostly know lisp through Guix, and... SKILL, which is a proprietary derivative from Cadence. They added a few things like inline math, SI suffixes (I like that one), and... C "calling convention", which I just find weird: the compiler interprets foo(something) as (foo something). As I understand it, this just moves the opening parenthesis before the preceding word prior to evaluation, if there is no space before it.

    I don't particularly like it, as that messes with my C instincts, especially when it comes to spotting the scope. I find the syntax more convoluted with it, so harder to parse (not everything is a function, so parenthesis placement becomes arbitrary):

        let( (bar-x(bar(x))
             quux-y(quux(y)))
        foo(bar-x quux-y z)
        )


    > Why is the second set of parens necessary?

    it distinguishes the bindings from the body.

    strictly speaking there's a more direct translation using `setq` which is more analogous to variable assignment in C/Python than the `let` binding, but `let` is idiomatic in lisps and closures in C/Python aren't really distinguished from functions.



    You’re right!

        (let (bar-x quux-y)
          (setq bar-x (bar x)
                quux-y (quux y))
          (foo bar-x quux-y z))
    
    I just wouldn’t normally write it that way.


    The let construct in Common Lisp and Scheme supports imperative programming, meaning that you have this:

      (let variable-bindings statement1 statement2 ... statementN)
    
    If statementN is reached and evaluates to completion, then its value(s) will be the result value(s) of let.

    The variable-bindings occupy one argument position in let. This argument position has to be a list, so we can have multiple variables:

      (let (...) ...)
    
    Within the list we have about two design choices: just interleave the variables and their initializing expressions:

      (let (var1 value1
            var2 value2
            var3 value3)
        ...)
    
    
    Or pair them together:

      (let ((var1 value1)
            (var2 value2)
            (var3 value3))
        ...)
    
    There is some value in pairing them together in that if something is missing, you know what. Like where is the error here?

      (let (a b c d e) ...)
    
    we can't tell at a glance which variable is missing its initializer.

    Another aspect to this is that Common Lisp allows a variable binding to be expressed in three ways:

      var
      (var)
      (var init-form)
    
    For instance

      (let (i j k (l) (m 9)) ...)
    
    binds i, j, k and l to an initial value of nil, and m to 9.

    Interleaved vars and initforms would make initforms mandatory. Which is not a bad thing.

    Now suppose we have a form of let which evaluates only one expression (let variable-bindings expr), which is mandatory. Then there is no ambiguity; we know that the last item is the expr, and everything before that is variables. We can contemplate the following syntax:

      (let a 2 b 3 (+ a b)) -> 5
    
    This is doable with a macro. If you would prefer to write your Lisp code like this, you can have that today and never look back. (Just don't call it let; pick another name like le!)
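
    For instance, a minimal sketch of such a macro (called le here, as suggested):

      (defmacro le (&rest args)
        ;; every item but the last forms the interleaved var/value pairs;
        ;; the last item is the single body expression
        (let ((expr (car (last args)))
              (bindings (loop for (var val) on (butlast args) by #'cddr
                              collect (list var val))))
          `(let ,bindings ,expr)))

      ;; (le a 2 b 3 (+ a b)) expands to (let ((a 2) (b 3)) (+ a b)) -> 5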

    If I have to work with your code, I will grok that instantly and not have any problems.

    In the wild, I've seen a let1 macro which binds one variable:

      (let1 var init-form statement1 statement2 ... statementn)
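
    That one is a one-liner; a minimal sketch:

      (defmacro let1 (var init-form &body body)
        ;; bind a single variable, then evaluate BODY in its scope
        `(let ((,var ,init-form)) ,@body))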


    The code is written the same way it is logically structured. `let` takes 1+ arguments: a set of symbol bindings to values, and 0 or more additional statements which can use those symbols. In the example you are replying to, `bar-x` and `quux-y` are symbols whose values are set to the result of `(bar x)` and `(quux y)`. After the binding statement, additional statements can follow. If the bindings aren't kept together in a `[]` or `()` you can't tell them apart from the code within the `let`.


    The tragedy of Lisp is that postfix-esque method notation just plain looks better, especially for people with the expectation of reading left-to-right.

        let bar_x = x.bar()
        let quux_y = y.quux()
        return (bar_x, quux_y, z).foo()


    Looks better is subjective, but it has its advantages both for actual autocomplete - as soon as I hit the dot key my IDE can tell me the useful operations for the object - and also for "mental autocomplete" - I know exactly where to look to find useful operations on the particular object because they're organized "underneath" it in the conceptual hierarchy. In Lisps (or other languages/codebases that are structured in a non-OOP-ish way) this is often a pain point for me, especially when I'm first trying to make my way into some code/library.

    As a bit of a digression:

    The ML languages, as with most things, get this (mostly) right, in that by convention types are encapsulated in modules that know how to operate on them - although I can't help but think there ought to be more than convention enforcing that, at the language level.

    There is the problem that it's unclear - if you can Frobnicate a Foo and a Baz together to make a Bar, is that an operation on Foos, on Bazes, or on Bars? Or maybe you want a separate Frobnicator to do it? (Pure) OOP languages force you to make an arbitrary choice, Lisp and co. just kind of shrug, the ML languages let you take your pick, for better or worse.



    It's not really subjective because people have had the opportunity to program in the nested 'read from the inside out' style of lisp for 50 years and almost no one does it.


    I think the cost of Lisp machines was the determining factor. Had it been ported to more operating systems earlier, history could be different right now.


    De gustibus non disputandum est, I personally find the C++/Java/Rust/... style postfix notation (foo.bar()) to be appalling.


    TXR Lisp has this notation, combined with Lisp parenthesis placement.

    Rather than obj.f(a, b), we have obj.(f a b).

      1> (defstruct dog ()
           (:method bark (self) (put-line "Woof!")))
      #
      2> (let ((d (new dog)))
           d.(bark))
      Woof!
      t
    
    The dot notation is more restricted than in mainstream languages, and has a strict correspondence to underlying Lisp syntax, with read-print consistency.

      3> '(qref a b c (d) e f)
      a.b.c.(d).e.f
    
    Cannot have a number in there; that won't go to dot notation:

      4> '(qref a b 3 (d) e f)
      (qref a b 3 (d)
        e f)
    
    Chains of dot method calls work, by the way:

      1> (defstruct circular ()
           val
           (:method next (self) self))
      #
      2> (new circular val 42)
      #S(circular val 42)
      3> *2.(next).(next).(next).(next).val
      42
    
    There must not be whitespace around the dot, though; you simply cannot split this across lines. In other words:

       *2.(next)
       .(next) ;; nope!
       .(next) ;; what did I say?
    
    The "null safe" dot is .? The following check obj for nil; if so, they yield nil rather than trying to access the object or call a method:

      obj.?slot
      obj.?(method arg ...)


    And what about when `bar` takes several inputs? Postfix seems like an ugly hack that hyper-fixates on functions of a single argument to the detriment of everything else.


    I think it really depends, in Common Lisp for example I don't think that's the case:

      (progn
        (do-something)
        (do-something-else)
        (do-a-third-thing))
    
    The only case where it's a bit different and took some time for me to adjust was that adding bindings adds an indent level.

      (let ((a 12)
            (b 14))
        (do-something a)
        (do-something-else b)
        (setf b (do-third-thing a b)))
    
    It's still mostly top-bottom, left to right. Clojure is quite a bit different, but it's not a property of lisps itself I'd say. I have a hard time coming up with examples usually so I'm open to examples of being wrong here.


    Your example isn't a very functional code style though so I don't know that I'd consider it to be idiomatic. Generally code written in a functional style ends up indented many layers deep. Below is a quick (and quite tame) example from one of the introductory guides for Racket. My code often ends up much deeper. Consider what it would look like if one of the cond branches contained a nested cond.

      (define (start request)
        (define a-blog
          (cond [(can-parse-post? (request-bindings request))
                 (cons (parse-post (request-bindings request))
                       BLOG)]
                [else
                 BLOG]))
        (render-blog-page a-blog request))
    
    https://docs.racket-lang.org/continue/index.html


    Common Lisp, which is what I use, is not really a functionally oriented language. I'd say the above is okay in CL.


    I must have missed that memo. Sure it's remarkably flexible and simultaneously accommodates other approaches, but most of the code I see in the wild leans fairly heavily into a functional style. I posted a CL link in an adjacent comment.

    Here's an example that mixes in a decent amount of procedural code that I'd consider idiomatic. https://github.com/ghollisjr/cl-ana/blob/master/hdf-table/hd...



    > reading from right to left, bottom to top - from inside out

    I don't understand why you think this. Can you give an example?





    per se


    This is the most elementary hurdle a lisp programmer will face. You do indeed become adjusted to it quite quickly. I wouldn’t let this deter you from exploring something like Clojure more deeply.


    Whenever I hear someone talking about purely functional programming, no side effects, I wonder what kind of programs they are writing. Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display. And that's where the most complications come from, these devices you are communicating with have quirks that you need to deal with. Purely functional programming is very nice in theory, but how far can you actually get away with it?


    The idea of pure functional programming is that you can really go quite far if you think of your program as a pure function f(input) -> outputs with a messy impure thing that calls f and does the necessary I/O before/after that.

    Batch programs are easy to fit in this model generally. A compiler is pretty clearly a pure function f(program source code) -> list of instructions, with just a very thin layer to read/write the input/output to files.

    Web servers can often fit this model well too: a web server is an f(request, database snapshot) -> (response, database update). Making that work well is going to be gnarly in the impure side of things, but it's going to be quite doable for a lot of basic CRUD servers--probably every web server I've ever written (which is a lot of tiny stuff, to be fair) could be done purely functional without much issue.

    Display also can be made work: it's f(input event, state) -> (display frame, new state). Building the display frame here is something like an immediate mode GUI, where instead of mutating the state of widgets, you're building the entire widget tree from scratch each time.

    In many cases, the limitations of purely functional isn't that somebody somewhere has to do I/O, but rather the impracticality of faking immutability if the state is too complicated.
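
    To make the shape concrete, here is a minimal Common Lisp sketch of that split (all names invented for illustration):

      ;; pure core: all the interesting logic, no I/O, trivially testable
      (defun respond (request state)
        (values (format nil "echo: ~a" request)  ; the "response"
                (cons request state)))           ; the new "state"

      ;; impure shell: the only place that touches the outside world
      (defun serve (requests)
        (let ((state '()))
          (dolist (request requests)
            (multiple-value-bind (response new-state)
                (respond request state)
              (format t "~a~%" response)         ; I/O happens only here
              (setf state new-state)))))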



    I guess my point is that you actually have to write the impure code somehow and it's hard; the external world has a tendency to fail, needs to be retried, coordinated with other things. You have to fake all these issues. In your web server examples, if you need a cache layer for a certain part of the data, you really can't have one without encoding it into the state management tooling. And at this point you are writing a lot of non-functional code in order to glue it together with pure functions and maybe do some simple transformation in the middle. Is it worth it?

    I have respect for OCaml, but that's mostly because it allows you to write mutable code fairly easily.

    Roc codifies the world vs core split, but I'm skeptical how much of the world logic can be actually reused across multiple instances of FP applications.



    There's a spectrum of FP languages, with Haskell near the "pure" end, where it truly becomes a pain to do things like I/O, and Clojure at the more pragmatic end, where not only is it accepted that you'll need to do non-functional things but specific facilities are provided to help you do them well and in a way that can be readily brought into the functional parts of the language.

    (I'm biased though as I am immersed in Clojure and have never coded in Haskell. But the creator of Clojure has gone out of his way to praise Haskell a bunch and openly admits where he looked at or borrowed ideas from it.)



    Think of it like other features:

    * Encapsulation? What's the point of having it if it's perfectly sealed off from the world? Just dead-code eliminate it.

    * Private? It's not really private if I can Get() to it. I want access to that variable, so why hide it from myself? Private adds nothing because I can just choose not to use that variable.

    * Const? A constant variable is an oxymoron. All the programs I write change variables. If I want a variable to remain the same, I just won't update it.

    Of course I don't believe in any of the framings above, but it's how arguments against FP typically sound.

    Anyway, the above features are small potatoes compared to the big hammer that is functional purity: you (and the compiler) will know and agree upon whether the same input will yield the same output.

    Where am I using it right now?

    I'm doing some record linkage - matching old transactions with new transactions, where some details may have shifted. I say "shifted", but what really happened was that upstream decided to mutate its data in-place. If they'd had an FPer on the team, they would not have mutated shared state, and I wouldn't even need to do this work. But I digress.

    Now I'm trying out Dijkstra's algorithm, to most efficiently match pairs of transactions. It's a search algorithm, which tries out different alternatives, so it can never mutate things in-place - mutating inside one alternative will silently break another alternative. I'm in C#, and was pleasantly surprised that ImmutableList etc actually exist. But I wish I didn't have to be so vigilant. I really miss Haskell doing that part of my carefulness for me.



    >I'm in C#, and was pleasantly surprised that ImmutableList etc actually exist.

    C# has introduced many functional concepts. Records, pattern matching, lambda functions, LINQ.

    The only thing I am missing, which will come later, is discriminated unions.

    Of course, F# is better suited for the job if you want a mostly functional workflow.



    I don't want functional-flavoured programming, I want functional programming.

    Back when I was more into pushing Haskell on my team (10+ years ago), I pitched the idea something like:

      You get: the knowledge that your function's output will only depend on its input.
    
      You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
    
    Those higher-order functions are a tough sell for programmers who only ever want to do things the way they've always done them.
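
    In Lisp terms, since that's the thread's topic, the trade looks something like this (a toy sketch):

      ;; sum of the squares of the even numbers: no loop, no index in sight
      (reduce #'+ (mapcar (lambda (x) (* x x))
                          (remove-if-not #'evenp '(1 2 3 4 5 6))))
      ;; => 56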

    But 5 years after that, in Java-land everyone was using maps, folds and filters like crazy (Or in C# land, Selects and Wheres and SelectManys etc,) with some half-thought-out bullshit reasoning like "it's functional, so it must be good!"

    So we paid the price, but didn't get the reward.



    Using map, fold etc. is not the hard part of functional programming. The hard part is managing effects (via monads, monad transformers, or effects). Trying to convert a procedural inner mutating algorithm to say Haskell is challenging.


    > The hard part is managing effects

    You can say that again!

    Right now I'm working in C#, so I wished my C# managed effects, but it doesn't. It's all left to the programmer.



    I don't know, stacking monads is a comparable level of pain to me.


    One struggle I’ve had with wrapping my head around using FP and lisp like languages for a “real world” system is handling something like logging. Ideally that’s handled outside of the function that might be doing a data transformation, but how do you build a log message that outputs information about old and new values without contamination of your “pure” transducer?

    You could I guess have a “before” step that iterates your data stream and logs all the before values, and then an “after” step that iterates after and logs all the after and get something like:

      (->> (map log-before data)
           (map transform-data)
           (map log-after-data))

    But doesn’t that cause you to iterate your data 2x more times than you “need” to and also split your logging into 2x as many statements (and thus 2x as much IO)



    So, do you mean like you have some big array, and you want to do something like this? (Below is not a real programming language.)

      for i in 0 to arr.len() {
          new_val = f(arr[i]);
          log("Changing {arr[i]} to {new_val}.\n");
          arr[i] = new_val;
      }
    
    I haven't used Haskell in a long time, but here's a kind of pure way you might do it in that language, which I got after tinkering in the GHCi REPL for a bit. In Haskell, since you want to separate IO from pure logic as much as possible, functions that would do logging return instead a tuple of the log to print at the end, and the pure value. But because that's annoying and would require rewriting a lot of code manipulating tuples, there's a monad called the Writer monad which does it for you, and you extract it at the end with the `runWriter` function, which gives you back the tuple after you're done doing the computation you want to log.

    You shouldn't use Text or String as the log type, because using the Writer involves appending a lot of strings, which is really inefficient. You should use a Text Builder, because it's efficient to append Builder types together, and because they become Text at the end, which is the string type you're supposed to use for Unicode text in Haskell.

    So, this is it:

      import qualified Data.Text.Lazy as T
      import qualified Data.Text.Lazy.Builder as B
      import qualified Data.Text.Lazy.IO as TIO
      import Control.Monad.Writer
      
      mapWithLog :: (Traversable t, Show a, Show b) => (a -> b) -> t a -> Writer B.Builder (t b)
      mapWithLog f = mapM helper
        where
          helper x = do 
            let x' = f x
            tell (make x <> B.fromString " becomes " <> make x' <> B.fromString ". ")
            pure x'
          make x = B.fromString (show x)
    
      theActualIOFunction list = do
        let (newList, logBuilder) = runWriter (mapWithLog negate list)
        let log = B.toLazyText logBuilder
        TIO.putStrLn log
        -- do something with the new list...
    
    So "theActualIOFunction [1,2,3]" would print:

      1 becomes -1. 2 becomes -2. 3 becomes -3.
    
    And then it does something with the new list, which has been negated now.


    > You get: the knowledge that your function's output will only depend on its input.

    > You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.

    You're my type of guy. And literally none of my coworkers in the last 10 years were your type of guy. When they read this, they don't look at it in awe, but in horror. For them, functions should be allowed to have side effects, and for loops are a basic thing they see no good reason to abandon.



    Statistically, most of one's coworkers will never have looked at a functional language, let alone used one to write actual code, so it is understandable they don't get it. What makes me sad is the apparent unwillingness to learn such a thing and the sticking with "everything must be OOP" even in situations where it would be (with a little practice and knowledge of functional languages) simple to make it purely functional and make testing and parallelization trivial.


    > Statistically, most of one's coworkers will never have looked at a functional language, let alone used one to write actual code, so it is understandable they don't get it.

    I'm not against functional languages. My point was that if you want to encourage others to try it, those two are not what you want to lead with.



    > I don't want functional-flavoured programming, I want functional programming.

    > you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.

    You mean what C# literally does everywhere because Enumerable is the premier weapon of choice in the language, and has a huge amount of exactly what you want: https://learn.microsoft.com/en-us/dotnet/api/system.linq.enu...

    (well, with the only exception of foreach, which for some odd reason is still a loop).

    > But 5 years after that

    Since .NET 3.5, 18 years ago: https://learn.microsoft.com/en-us/dotnet/api/system.linq.enu...

    > So we paid the price, but didn't get the reward.

    Who is "we", what was the price, and what was the imagined reward?



    > Who is "we", what was the price, and what was the imagined reward?

    Slow down and re-read.

    >> You get: the knowledge that your function's output will only depend on its input.

    >> You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.



    Those starred rhetorical questions initially looked to me like a critique of Lisp! Because that's how Lisp (particularly Common Lisp) works. All those things are softish. You can see unexported symbols even if you're not supposed to use them. There is no actual privacy unless you do something special like unintern then recreate a symbol.


    Not even the most fanatical functional programming zealots would claim that programs can be 100% functional. By definition, a program requires inputs and outputs, otherwise there is literally no reason to run it.

    Functional programming simply says: separate the IO from the computation.

    > Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.

    Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transform inputs -> outputs. And that can be entirely functional.



    > Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transform inputs -> outputs. And that can be entirely functional.

    My work needs pseudorandom numbers throughout the big middle, for example, drawing samples from probability distributions and running randomized algorithms. That's pretty messy in a FP setting, particularly when the PRNGs get generated within deeply nested libraries.



    At what point does this get messy?


    When deeply nested libraries generate PRNGs, all that layering becomes impure and must be treated like any other stateful or IO code. In Haskell, that typically means living with a monad transformer or effect system managing the whole stack, and relatively little pure code remains.

    The messiness gets worse when libraries use different conventions to manage their PRNG statefulness. This is a non-issue in most languages but a mess in a 100% pure setting.



    >Not even the most fanatical functional programming zealots would claim that programs can be 100% functional. By definition, a program requires inputs and outputs, otherwise there is literally no reason to run it.

    So a program is a function that transforms the input to the output.



    > separate the IO from the computation.

    Can you please elaborate on this point? I read it as this web page (https://wiki.c2.com/?SeparateIoFromCalculation) describes, but I fail to see why it is a functional programming concept.



    > but I fail to see why it is a functional programming concept.

    "Functional programming" means that you primarily use functions (not C functions, but mathematical pure functions) to solve your problems.

    This means you won't do IO in your computation because you can't do that. It also means you won't modify data, because you can't do that either. Also you might have access to first class functions, and can pass them around as values.

    If you do procedural programming in C++ but your functions don't do IO or modify non-local values, then congrats, you're doing functional programming.
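
    A minimal illustration of the distinction, in Lisp since that's the topic here:

      ;; pure: the output depends only on the inputs, nothing else is touched
      (defun scale (x factor)
        (* x factor))

      ;; impure: reads and writes a global, so the same call can yield
      ;; different results over time
      (defvar *counter* 0)
      (defun bump ()
        (incf *counter*))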



    Thanks. I now see why it makes sense to me. I work in DE so in most of our cases we do streaming (IO) without any transformation (computation), and then we do transformation in a total different pipeline. We never transform anything we consumed, always keep the original copy, even if it's bad.


    > I fail to see why it is a functional programming concept.

    Excellent! You will encounter 0 friction in using an FP then.

    To the extent that programmers find friction using Haskell, it's usually because their computations unintentionally update the state of the world, and the compiler tells them off for it.



    Think about this: if a function calls another function that produces a side effect, both functions become impure (non-functional). Simply separating them isn't enough. That's the difference when thinking of it in functional terms

    Normally what functional programmers will do is pull their state and side effects up as high as they can so that most of their program is functional



    Having functions which do nothing but computation is core functional programming. I/O should be delegated to the edges of your program, where it is necessary.


    >separate the IO from the computation.

    What about managing state? I think that is an important part and it's easy to mess up.



    Each step calculates the next state and returns it. You can then compose those state calculators. If you need to save the state, that's IO, and you have a bit of code specifically for it.
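
    A minimal sketch of that composition in Common Lisp (names invented for illustration):

      ;; each step is a pure function state -> state; composing them is a fold
      (defun compose-steps (&rest steps)
        (lambda (state)
          (reduce (lambda (s step) (funcall step s))
                  steps :initial-value state)))

      (funcall (compose-steps (lambda (s) (+ s 1))   ; step 1: increment
                              (lambda (s) (* s 2)))  ; step 2: double
               10)
      ;; => 22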


    It takes a bit of discipline, but generally all state additions should be scoped to the current context. Meaning, when you enter a subcontext, the state becomes input and is treated as holy, and when you leave to the parent context, only the result matters.

    But that particular context has become impure and is decried as such in the documentation, so that carefulness is increased when interacting with it.



    > The interesting part is what you actually do in the middle to transform inputs -> outputs.

    Can you actually name something? The only thing I can come up with is working with interesting algorithms or datastructures, but that kind of fundamental work is very rare in my experience. Even if you do, it's quite often a very small part of the entire project.



    A whole web app. The IO is generally user-facing network connections (request and response), IPC and RPC (databases, other services), and file interaction. Anything else is logic. An FP program is a collection of pipes, and the IO are the endpoints. With FP the blob of data passes cleanly from one section to another, while in imperative code some of it sticks. In OOP, there are a lot of blobs that fling stuff at each other and in the process create more blobs.


    A general "web app"'s germane parts are:

    - The part that receives the connection

    - The part that sends back a response

    - Interacting with other unspecified systems through IPC, RPC or whatever (databases mainly)

    The shit in between, calculating a derivative or setting up a fancy data structure of some kind or something, is interesting but how much of that do we actually do as programmers? I'm not being obtuse - intentionally anyway - I'm actually curious what interesting things functional programmers do because I'm not seeing much of it.

    Edit: my point is, you say "Anything else is logic." to which I respond "What's left?"



    > calculating a derivative or setting up a fancy data structure of some kind or something, is interesting but how much of that do we actually do as programmers?

    A LOT, depending on the domain. There are many R&D and HPC labs throughout the US in which programmers work directly with specialists in the hard sciences. A significant percentage of their work is akin to "calculating a derivative".



    There's lots left!

    "When a customer in our East Coast location makes this purchase then we apply this rate, blah blah blah".

    "When someone with >X karma visits HN they get downvote buttons on comments, blah blah blah".



    Yes! In most projects, those requirements are stretched across technicalities like IO. But you can pull them back to the core of your project. It takes effort, but the end result is a pleasure to work with. It can be done with FP, OOP, LP,…


    > Even if you do, it's quite often a very small part of the entire project.

    So your projects are only moving bits from one place to another? I've literally never seen that in 20 years of programming professionally. Even network systems that are seen as "dumb pipes" need to parse and interpret packet headers, apply validation rules, maintain BGP routing tables, add their own headers etc.

    Surely the program calculates something, otherwise why would you need to run the program at all if the output is just a copy of the input?



    Yes and I notice you still did not provide an interesting example. Surely parsing packets is not an interesting example of functional programming's powers?

    What interesting things do you do as a programmer, really?



    > parse and interpret packet headers, apply validation rules, maintain BGP routing tables, add their own headers etc.

    That's a few more than zero. I don't do network programming, that was just an example to show how even the quintessential IO-heavy application requires non-trivial calculations internally.



    Fair enough. It's just that in my experience the "cool bits" are quickly done and then we get bogged down in endless layers of inter-systems communication (HTTP, RPC, file systems, caches). I often see FP people saying stuff like "it's not 100% pure, of course there are some isolated side-effects" and I'm thinking.. my brother, I live inside side-effects. The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.

    But of course this heavily depends on the domain you are working in. Some people work in simulation or physics or whatever and that's where the interesting bits begin. (Even then I'm thinking "programming" is not the interesting bit, it's the physics)



    > The days I can have even a few pure functions are few and far between. I'm honestly curious what percentage of your code bases can be this pure.

    A big part of it, I'm sure, but it requires some work. Pushing the side effects to the edge requires some abstractions to not directly mess with the original mutable state.

    You are, in fact, designing a state diagram from something that was evolving continuously along a single dimension: time. The transitions of the state diagram are the code, and the nodes are the inputs and outputs of that code. Then it becomes clear that IO only matters when storing and loading those nodes. Because those nodes are finite and well defined, the non-FP code for dealing with them becomes simpler to write.



    It's a matter of framing. Think of any of the following:

    - Refreshing daily "points" in some mobile app (handling the clock running backward, network connectivity lapses, ...)

    - Deciding whether to send an marketing e-mail (have you been unsubscribed, how recently did you send one, have you sent the same one, should you fail open or closed, is this person receptive to marketing, ...)

    - How do you represent a person's name and transform it into the things your system needs (different name fields, capitalization rules, max characters, what if you try to put it on an envelope and it doesn't fit, ...)

    - Authorization logic (it's not enough to "just use a framework" no matter your programming style; you'll still have important business logic about who can access what when and how the whole thing works together)

    And so on. Everything you're doing is mapping inputs to outputs, and it's important that you at least get it kind of close to correct. Some people think functional programming helps with that.
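
    For instance, the e-mail decision above boils down to a pure mapping from facts to a verdict (rule and names hypothetical):

      ;; pure business rule: same facts in, same verdict out, trivially testable
      (defun send-marketing-email-p (subscribed-p days-since-last already-sent-p)
        (and subscribed-p
             (not already-sent-p)
             (>= days-since-last 7)))  ; hypothetical 7-day cooldown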



    When I see this list all I can think of is how all these things are just generic, abstract rules and have nothing to do with programming. This, of course, is my problem. I have a strange mental model of things.

    I can't shake off the feeling we should be defining some clean sort of "business algebra" that can be used to describe these kind of notions in a proper closed form and can then be used to derive or generate the actual code in whatever paradigm you need. What we call code feels like a distraction.

    I am wrong and strange. But thanks for the list, it's helpful and I see FP's points.



    You're maybe strange (probably not, when restricted to people interested in code), but wrongness hasn't been proven yet.

    I'd push back, slightly, in that you need to encode those abstract rules _somehow_, and in any modern parlance that "somehow" would be a programming language, even if it looks very different from what we're used to.

    From the FP side of things, they'd tend to agree with you. The point is that these really are generic, abstract rules, and we should _just_ encode the rules and not the other state mutations and whatnot that also gets bundled in.

    That implicitly assumes a certain rule representation though -- one which takes in data and outputs data. It's perfectly possible, in theory, to describe constraints instead. Looking at the example of daily scheduling in the presence of the clock running backward; you can define that in terms of inputs and outputs, or you can say that the desired result satisfies (a) never less than the wall clock, (b) never decreases, (c) is the minimal such solution. Whether that's right or not is another story (it probably isn't, by itself -- lots of mobile games have bugs like that allowing you to skip ads or payment forever), but it's an interesting avenue for exploration given that those rules can be understood completely orthogonally and are the business rules we _actually_ care about, whereas the FP, OOP, and imperative versions must be holistically analyzed to ensure they satisfy business rules which are never actually written down in code.



    I agree.

    Especially when reading Rust or C++.

    That's code I would prefer to have generated for me as needed in many cases, I'm generally not that interested in manually filling in all the details.

    Whatever it is, it hasn't been created yet.



    You can name almost anything (these are general-purpose languages, after all), but I'll just throw a couple of things out there:

    1. A compiler. The actual algorithms and datastructures might not be all that interesting (or they might be if you're really interested in that sort of thing), but the kinds of transformations you're doing from stage to stage are sophisticated.

    2. An analytics pipeline. If you're working in the Spark/Scala world, you're writing high-level functional code that represents the transformation of data from input to output, and the framework is compiling it into a distributed program that loads your data across a cluster of nodes, executes the necessary transformations, and assembles the results. In this case there is a ton of stateful I/O involved, all interleaved with your code, but the framework abstracts it away from you.



    Thanks, especially two is very interesting. Admittedly the framework itself is actually the interesting part, and that's what I meant by this work being "rare" (I mean, how many people work on those kinds of frameworks full-time? It's not zero, but..)

    I think what I engaged with is the notion that most programming "has some side-effects" ("it's not 100% pure"), but much of what I see is like 95% side-effects with some cool, interesting bits stuffed in between the endless layers of communication (without which the "interesting" stuff won't be worth shit).

    I feel FP is very, very cool if you got yourself isolated in one of those interesting layers but I feel that's a rare place to be.



    Any useful program has side-effects. IMHO the point is to isolate the part of the code that has the side-effects as much as possible, and keep the rest purely functional. That makes it easier to debug, test, and create good abstractions. Long term it is a very good approach.


    It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but you in a way have that in pretty much all programming languages (including C!).

    But most functions in Common Lisp do mutate things, there is an extensive OO system and the most hideous macros like LOOP.

    I certainly never felt constrained writing Common Lisp.

    That said, there are pretty effective patterns for dealing with IO that allow you to stay in a mostly functional / compositional flow (dare I say monads? but that sounds way more clever than it is in practice).



    > It's always hard to parse whether people mean functional programming when bringing up Lisp. Common Lisp certainly is anything but a functional language. Sure, you have first-class functions, but you in a way have that in pretty much all programming languages (including C!).

    It's less about what the language "allows" you to do and more about how the ecosystem and libraries "encourage" you to do.



    > Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.

    Erlang is strictly (?) a functional language, and the reason why it was invented was to do network-y stuff in the telco space. So I'm not sure why I/O and functional programming would be opposed to each other like you imply.



    > Erlang is strictly (?) a functional language,

    First and foremost Erlang is a pragmatic programming language :)



    This is discussing Common Lisp which is not even a mostly-functional language, and far from purely functional.


    He says Lisp, rather than Common Lisp. Sure, given the context he's writing in now, maybe he means Common Lisp, but Joe Marshall was a Lisp programmer before Common Lisp existed, so he may not mean Common Lisp specifically.


    Somehow Haskell and friends shifted the discussion around functional programming to pure vs non-pure! I am pretty sure it started with functions as first-class objects being the differentiator in Schemes, Lisps and ML-family languages. Thus "functional", but that's just a guess.


    > Somehow Haskell and friends shifted the discussion around functional programming to pure vs non-pure

    In direct response every other language in the mid 2010s saying, "Look, we're functional too, we can pass functions to other functions, see?"

      foo.bar()
         .map(x => fireTheMissiles())
         .collect();
    
    C's had that forever:

      void qsort(void *base, size_t nmemb, size_t size,
                 int (*compar)(const void *, const void *))


    In a way this is true.

    A function pointer is already half way there. What it lacks is lexical environment capture.

    And things that are possible to do with closures never stop amazing me.

    Anyways, functional programming is not about purity. It is something that came from academia, with 2 major language families: ML-likes and Lisp-likes, each focusing on certain key features.

    And purity is not even the key feature of MLs in general.



    Closures bring me joy.


    They are one of those language features that, having learned them, it's a little hard to flip my brain around into the world I knew before I learned them.

    If I think hard, I can sort of remember how I used to do things before I worked almost exclusively in languages that natively support closures ("Let's see... I create a state object, and it copies or retains reference to all the relevant variables... and for convenience I put my function pointer in there too usually... But I still need rules for disposing the state when I'm done with it..." It's so much nicer when the language handles all of that bookkeeping for you and auto-generates those state constructs).



    No, functions aren't first class in C. When you use a function in an expression it undergoes function-to-pointer conversion and "decays" to a pointer to the function. You can only call, store, etc. function pointers, not functions. Function pointers are first class. Functions are not, as you can't create them at runtime.

    A functional programming language is one with first class functions.
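
    In a Lisp, by contrast, functions are values created at runtime that capture their lexical environment:

      ;; MAKE-ADDER returns a fresh function object each time it is called
      (defun make-adder (n)
        (lambda (x) (+ x n)))  ; the closure captures N

      (funcall (make-adder 3) 4)  ; => 7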



    What is the impact on the user of having first class functions vs first class function pointers?

    Last I checked when you implement lambda in lisp it's also a pointer to the lambda internally.



    I wrote a recursive descent parser in Lisp for a YAML replacement language[1]. It wasn't difficult. Lisp makes it easy to write I/O, but also easy to separate logic from I/O. This made it easy for me to write unit tests without mocking.

    I also wrote a toy resource scheduler at an HTTP endpoint in Haskell[2]. Writing I/O in Haskell was a learning curve but was ultimately fine. Keeping logic separate from I/O was the easy thing to do.

    1: https://github.com/djha-skin/nrdl

    2: https://github.com/djha-skin/lighthouse



    Most of the code in most programs is not the part that is doing the I/O. It's doing stuff on a set of values to transform them. It gets values from somewhere, does stuff using those values, and then outputs some values. The complicated part is not the transfer of the final byte sequence to whatever I/O interface they go to, the core behavior of the program is the stuff that happens before that.


    As others have said, a pure program is a useless program. The only place stuff like that has in this world is as a proof assistant.

    What I will add is look up how the GHC runtime works, and the STGM. You may find it extremely interesting. I didn't "get" functional programming until I found out about how exotic efficient execution of functional programs ends up being.



    It's about minimizing and isolating state and side effects, not eliminating them completely

    Functional core, imperative shell is a common pattern. Keep the side effects on the outside. Instead of doing side effects directly, just return a data structure that can be used to enact the side effect
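
    A minimal sketch of "return a data structure instead of doing the side effect" (names invented):

      ;; pure: decides WHAT to do and returns a description of the effect
      (defun welcome-email (user)
        (list :to (getf user :email)
              :subject "Welcome!"
              :body (format nil "Hi ~a" (getf user :name))))

      ;; impure shell: the only code that enacts effects
      (defun enact! (effect)
        ;; stand-in for a real mail-sending call
        (format t "sending mail to ~a~%" (getf effect :to)))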



    "Purely Functional Programming", I guess mostly Haskell/Purescript.

    So this only really mean:

    Purely Functional Programming by default.

    In most programming languages you can write

    "hello " + readLine()

    And this would intermix pure function (string concatenation) and impure effect (asking the user to write some text). And this would work perfectly.

    By doing so, the order of evaluation becomes essential.

    With pure functional programming (by default), you must explicitly separate the part of your program doing I/O from the part doing only pure computation. And this is enforced using a type system focusing on I/O. Hence the difference between Haskell's default `IO` and OCaml, which does not need it, for example.

    in Haskell you are forced by the type system to write something like:

        do 
          name <- getLine
          let s = "Hello " <> name <> "!"
          putStrLn s
    
    you cannot mix the `getLine` directly in the middle of the concatenation operation.

    But while this is a very different style of programming, I/O is just more explicit, and it "costs" more, because writing code with I/O is not as elegant and easy to manipulate as pure code. Thus it naturally induces a way of coding that really makes you conscious about the part of your program that needs IO and the part that you could do with only pure functions.

    In practice, ... yep, you end up working in a "specific to your application domain" Monad that looks a lot like the IO Monad, but will most often contain IO.

    Another option is to use a free monad for your entire program, which lets you write in your own domain language and control its evaluation (either using IO or another system that simulates IO but is not really IO, typically for testing purposes).



    The point of functional, "sans I/O" style is to separate the definition of I/O from the rest of your logic. You're still doing I/O, but what sorts of I/O you're doing has a clear self-contained definition within your program. https://sans-io.readthedocs.io/how-to-sans-io.html


    There is no reason you can't use side effects in pure functional programming. You just need to provide the appropriate description of the side effect to avoid caching and force a particular evaluation order. If you have linear types, you do it by passing around opaque tokens. I'm not entirely sure how IO works in Haskell, but I think the implementation is similar. Even C compilers use a system like that internally.


    The boundary between the program and the rest of the system allows I/O of course. What FP does is "virtualize" I/O by representing it as data (thus it can be passed around). Then at some point these changes get "committed" to the outside. Representing I/O separately from how it is carried out allows a lot of things to be done, such as cancelling (ctrl+z) operations.


    OpenSCAD is such a good school of functional programming. There is no "time" or flow of execution. Or variables, scopes and environments. You are not constructing a program, but a static model which has desired properties in space and time.


    Everyone writes real programs that have side effects. Functional programming is no different. But the side effects happen in specific ways or places, rather than all over the place.


    There are ways to handle side effects with pure functions only (it’s kind of cheating, because the actual side effects are performed by the non-pure runtime/framework that’s abstracted away, while the pure user code just defines when to perform them and how to respond to them). It’s possible, but it gets very awkward very fast. I wouldn’t use FP for any part of the code that deals with IO.


    The pragmatic approach is to see that FP's key point is statelessness and use that in your code (written in more mainstream languages) when appropriate.


    > Whenever I hear someone talking about purely functional programming, no side effects, I wonder what kind of programs they are writing

    Where have you ever heard anyone talk about side-effect free programs, outside of academic exercises? The linked post certainly isn't about 100% side-effect/state free code.

    Usually, people talk about minimizing side-effects as much as possible, but since we build programs to do something, sometimes connected to the real world, it's basically impossible to build a program that is both useful and 100% side-effect free, as you wouldn't be able to print anything to the screen, or communicate with other programs.

    And minimizing side-effects (and minimizing state overall) has a real impact on how easy it is to reason about the program. Being really careful about where you mutate things leads to most of the code being very explicit about what it's doing, and code only affects data that is close to where the code itself is, compared to intertwined state mutation, where things anywhere in the codebase can affect state anywhere.



    I had the same question until I understood one key pattern of pure functional programming. Not sure it has a name but here goes.

    There is world, and there is a model of the world - your program. The point of the program, and all functions, is to interact with the model. This part, data structures and all, is pure.

    The world interacts with the model through an IO layer, as in haskell.

    Purity is just an enforcement of this separation.



    I think it's the "imperative shell, functional core" pattern. The shell provides the world, the core acts on it, then the shell commits it at various intervals.

    Functional React follows this pattern. The issue is when the programmer thinks the world is some kind of stable state that you can store results in. It's not; the whole point is to be created anew and restart the whole computation flow. The escape hatches are the hooks, and each has a specific usage and pattern to follow to survive world recreation. Which is why you should be careful with them, as they are effectively the world for subcomponents. So when you add to the world with hooks, interactions with the addition should stay at the same level.



    I never believed any FP evangelist ever since I realized I can't even write quicksort with it *.

    (* Yes, you can technically write it procedurally like a good C programmer, sure.)



    I always read experienced developers praising Lisps, but why is it so rare in production applications?


    JavaScript and Python have adopted almost every feature that differentiated Lisp from other languages. So in comparison Lisp is just more academic, esoteric, and advanced.


    Looking for a nice, solid, well-documented library to do something is difficult for most stuff. There are some real gems out there, but usually you end up having to roll your own thing. And Lisp generally encourages rolling your own thing.


    People smart enough to read and write it are rare.


    Is it about intelligence or just not being used to/having time for learning a different paradigm?

    I personally have used LISP a lot. It was a little rough at first, but I got it. Despite having used a lot of languages, it felt like learning programming again.

    I don't think there's something special about me that allowed me to grok it. And if that were the case, that's a horrible quality in a language. They're not supposed to be difficult to use.



    In my case, my boss won't let me.


    I agree with some statements OP makes but not others. Ultimately, I write in lisp because it's fun to write in Lisp due to its expressive power, ease of refactoring, and the Lisp Discord[1].

    > Lisp is easier to remember,

    I don't feel this way. I'm always consulting the HyperSpec or googling the function names. It's the same as any other dynamically typed language, such as Python, this way to me.

    > has fewer limitations and hoops you have to jump through,

    Lisp as a language has incredibly powerful features found nowhere else, but there are plenty of hoops. The CLOS truly feels like a superpower. That said, there is a huge dearth of libraries. So in that sense, there's usually lots of hoops to jump through to write an app. It's just I like jumping through them because I like writing code as a hobby. So fewer limitations, more hoops (supporting libraries I feel the need to write).

    > has lower “friction” between my thoughts and my program,

    Unfortunately I often think in Python or Bash because those are my day job languages, so there's often friction between how I think and what I need to write. Also AI is allegedly bad at lisp due to reduced training corpus. Copilot works, sorta.

    > is easily customizable,

    Yup, that's its defining feature. Easy to add to the language with macros. This can be very bad, but also very good, depending on its use. It can be very worth it both to implementer and user to add to the language as part of a library if documented well and done right, or it can make code hard to read or use. It must be used with care.

    > and, frankly, more fun.

    This is the true reason I actually use Lisp. I don't know why. I think it's because it's really fun to write it. There are no limitations. It's super expressive. The article goes into the substitution principle, and this makes it easy to refactor. It just feels good having a REPL that makes it easy to try new ideas and a syntax that makes refactoring a piece of cake. The Lisp Discord[1] has some of the best programmers on the planet in it, all easy to talk to, with many channels spanning a wide range of programming interests. It just feels good to do lisp.

    1: https://discord.gg/HsxkkvQ



    As much as I sympathize with this post and similar ones, and as much I personally like functional thinking, LISP environments are not nearly as advanced anymore as they used to be.

    Which Common LISP or Scheme environment (that runs on, say, Ubuntu Linux on a typical machine from today) gets even close to the LISP machines of the past, for example? And which could compete with IntelliJ IDEA, PyCharm, or Visual Studio Code?

    https://ssw.jku.at/General/Staff/PF/genera-screenshots.html



    Common Lisp can compete with Python no problem, that's what matters to me. You get:

    - truly interactive development (never wait for something to restart, resume bugs from any stack frame after you fixed them),

    - self-contained binaries (easy deployment, my web app with all the dependencies, HTML and CSS is ±35MB)

    - useful compile-time warnings and errors, a keystroke away; for Haskell-level type checking see Coalton (so, better than Python),

    - fast programs compiled to machine code,

    - no GIL

    - connect to, inspect or update running programs (Slime/Swank),

    - good debugging tools (interactive debugger, trace, stepper, watcher (on some impls)…)

    - stable language and libraries (although the implementations improve),

    - CLOS and MOP,

    - etc

    - good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE), Jetbrains (SLT), Jupyter kernel, Lem, and more: https://lispcookbook.github.io/cl-cookbook/editor-support.ht...

    What we might not get:

    - advanced refactoring tools, though we also need them less, thanks to the REPL and language features (macros, multiple return values…).

    ---

    For a lisp machine of yesterday running on Ubuntu or the browser: https://interlisp.org/



    Had a PalmPilot taped to a modem that did our auth. Lisp made the glue code feel like play. No types barking, no ceremony—just `(lambda (x) (tinker x))`. We didn’t debug, we conversed. Swapped thoughts with the REPL like it was an old friend.


    Though these are minor complaints, there are a couple of things I'd like to change about a Lisp language.

    One is the implicit function calls. For example, you'll usually see calls like this: `(+ 1 2)`, which translates to 1 + 2, but I would find it clearer if it were `(+(1,2))`, where you have a certain explicitness to it.

    It doesn't stop me from using Lisp languages (Racket is fun, and I've been investigating Clojure), but it took way too long for my brain to grok the implicit function calls.

    My other complaint is how the character `'` can have an overloaded meaning, though I'm not entirely sure whether this is implementation-dependent or not.



    It's not really implicit, though: the first element of a list that is evaluated is always a function, so (FUN 1 2) is an explicit function call. The problem is that it doesn't look like C-like languages, not that it's implicit.

    In theory ' just means QUOTE; it should not be overloaded (although I've mostly done Common Lisp, so I have no idea whether that changes in other implementations). Can you show an example of the overloaded meaning?



    There's an example I saw where `'` was used as a way to denote a symbol, but I can't find that specific example. It wasn't SBCL; I believe it may have been Clojure. It's possible I'm misremembering.

    That said, since I work in C-like languages during the day, I suppose my minor complaint has to do with ease of transition: it always takes me a minute to get reacquainted with Lisp syntax and to read Lisp code any time I work with it.

    It's really a minor complaint, and one I probably wouldn't have if I worked with a Lisp language all day.



    That's correct, but it's not that denoting a symbol is a different function of '; it's the same function: turn code into data. That's all symbols are (in CL, at least).

    For example, in a quoted list you don't need to quote the symbols, because they are already inside a quoted expression!

    '(hello hola)

    ' really just says "do not evaluate what's next, treat it as data"
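
    A minimal REPL sketch of that idea (any standard Common Lisp should behave this way):

      (+ 1 2)           ; => 3, the list is evaluated as a function call
      '(+ 1 2)          ; => (+ 1 2), the very same list, returned as data
      (first '(+ 1 2))  ; => +, which once quoted is just a symbol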



    To be a bit more precise, every time you have a name in Common Lisp, that name is already a symbol, but it will get evaluated: if it's the first item of an evaluated list it is looked up in the function namespace; anywhere else it is looked up in the variable namespace. What quote does is simply ask the compiler not to evaluate it and to treat it as data.
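
    A small sketch of the two namespaces (standard Common Lisp; LIST is used here only because it conveniently names a built-in function):

      ;; LIST names a built-in function, and the LET below also binds
      ;; it as a lexical variable; position decides which lookup happens.
      (let ((list '(1 2 3)))  ; variable namespace
        (list list list))     ; head position uses the function namespace
      ;; => ((1 2 3) (1 2 3))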


    Symbols are quoted representations of identifiers so it still does the same thing.


    First person to ask for more parentheses in Lisp.


    It's not implicit in this case; it's explicit: + is the function you're calling. And there's power in having mathematical operations be functions that you can manipulate and compose like all other functions, instead of some special case of infix, implicit (to me, yeah) function calling, like 1 + 2, where it's no longer similar to other functions.
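
    For instance (a quick sketch in standard Common Lisp), because + is an ordinary function you can pass it around like any other:

      (reduce #'+ '(1 2 3 4))       ; => 10, fold a list with +
      (mapcar #'+ '(1 2) '(10 20))  ; => (11 22), element-wise addition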


    R works exactly as you describe. You can type `+`(1, 2) and get 3, because in R everything that happens is a function call, even if a few binary functions get special sugar so you can type 1 + 2 for them as well. The user can of course make their own of these by wrapping the name in percent signs, for example: `%plus%` <- function(a, b) { `+`(a, b) }. A few computer-algebra-system languages provide even more expressivity, like Yacas and FriCAS. The latter even has a type system.


    Similar in Nim as well.


    In Standard ML, you can do either 1+2 or op+(1, 2).


    How is it implicit? The open parenthesis is before the function name rather than after, but the function isn’t called without both parentheses.

    If you want to use commas, you can in Lisp dialects I’m familiar with—they’re optional because they’re treated as whitespace, but nothing is stopping you if you find them more readable!



    , is typically “unquote.” Clojure is the only “mainstream” Lisp that allows , as whitespace; it has meaning in CL and Scheme.


    Ancient LISP (caps deliberate) in fact had optional commas that were treated as whitespace. You can see this in the Lisp 1 Programmer's Manual (dated 1960).

    This practice quickly disappeared, though. (I don't have an exact timeline for this.)



    My mistake!

    That’s what I get for not double checking… well… basically anything I think during my first cup of coffee.



    Surely you mean (+ (, 1 2))

    ;)



    First time I saw (+ 1 2), I thought it was a typo. Spent an hour trying to “fix” it into (1 + 2). My professor let me. Then he pointed at the REPL and said, “That’s not math—it’s music.” Never forgot that. The '? That’s the silent note.


    It's due to Polish notation[0] as far as I understand it. This is how that notation for mathematics works.

    I suppose my suggestion would break those semantics.

    [0]: https://en.wikipedia.org/wiki/Polish_notation



    Aye, Polish notation sure. But what he gave me wasn’t a lecture, it was a spell.

    Syntax mattered less than rhythm. Parens weren’t fences, they were measures. The REPL didn’t care if I understood. It played anyway.



    func(a, b) is basically the same as (func a b). You're just moving the parens around. '+' is extra 'strange' because in most languages it isn't used like other functions: imagine if you had to write +(1, 2) in every C-like.


    [flagged]



    Wat’s the sound of meeting something older than you thought possible.

    Lisp listened. Modem sang. PalmPilot sweated. We talked, not debugged.



    > No types barking

    No thanks



    The reason I switched from Scheme to Common Lisp was because I could say...

      (defun foo (x)
        (declare (type (Integer 0 100) x))
        (* x
           (get-some-value-from-somewhere-else)))
    
    And then do a (describe 'foo) in the REPL to get Lisp to tell me that it wants an integer from 0 to 100.


    Common Lisp supports gradual typing and will (from my experience) do a much better job of analyzing code and pointing out errors than your typical scripting language.
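
    A minimal sketch of the sort of thing meant here, assuming SBCL (implementations differ in how much type inference they perform; the names are made up for illustration):

      ;; Promise the compiler a signature with DECLAIM...
      (declaim (ftype (function (fixnum) integer) double-it))
      (defun double-it (x)
        (* 2 x))

      ;; ...and SBCL warns at compile time about this call site,
      ;; since a string is not a fixnum:
      (defun caller ()
        (double-it "oops"))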


    This is the first article I’ve ever read that made me want to go learn Lisp.


    Watch Rich Hickey's early Clojure videos and be blown away.


    Got any specific suggestion?


    "Simple Made Easy" is pretty popular, there is a transcription with slides:

    https://github.com/matthiasn/talk-transcripts/blob/master/Hi...



    I like "Clojure, Made Simple" even more.

    https://www.youtube.com/watch?v=028LZLUB24s

    Someone helpfully pulled out this chunk, which is a good illustration of why data is better than functions, a key driver of Clojure's design.

    https://www.youtube.com/watch?v=aSEQfqNYNAc



    It's tangentially relevant, but I've enjoyed this one, about hammock driven programming.

    https://www.youtube.com/watch?v=f84n5oFoZBc



    LispWorks has a free edition with lots of examples. Look into PAIP by Peter Norvig.


    Even putting the common lisp aside, PAIP is my favourite book about programming in general, by FAR. Norvig's programming style is so clear and expressive, the book touches on more "pedestrian" parts of programming: building tools / performance / debugging, but also walks you through a serious set of algorithms that are actually practical and that I use regularly (and they shape your thinking): search, pattern matching, to some extent unification, building interpreters and compilers, manipulating code as data.

    It's also extremely fun, you go from building Eliza to a full pattern matcher to a planning agent to a prolog compiler.



    Paul Graham's On Lisp is also a powerful argument to try the language, even if some of the stuff it presents is totally bonkers. :-D

    https://www.paulgraham.com/onlisp.html



    what are the bonkers parts? (just curious)


    Next time you see a HN post on a lisp-centric topic, click into the comments. I'll bet you a nickel that they'll be happier than most. Instead of phrases like "dumpster fire" they're using words like "joyful".

    That's why I keep rekindling my learn-lisp effort. It feels like I'm just scratching the surface re: the fun that can be had.



    Never been happier since building an ERP system in pure Lisp and PostgreSQL.


    > Other general purpose languages are more popular and ultimately can do everything that Lisp can (if Church and Turing are correct).

    I find these kinds of comments extremely odd, and I very much support Lisp and Lisp-likes (I'm a particular fan of Clojure). I can only see the parenthetical qualifier as a strange bias, throwing some kind of doubt onto other languages, which is unwarranted considering that Lisp at its base is usually implemented in those "other general purpose languages".

    If you can implement lisp in a particular language then that particular language can de facto do (at least!) everything lisp can do.



    Isn’t this just a cheeky joke? I.e. “if Einstein is right about this whole theory of relativity thing”


    This is conflating slightly different things, though? One is that you can build a program that does the same thing. The other is that you can do the same things with the language.

    There are special forms in LISP, but that is a far cry from the amount of magic that can only be done in the compiler or at runtime for many languages out there.



    Brainfuck is also Turing complete but that isn’t an argument that it’s a good replacement for LISP or any other language.


    That has a name: Turing tarpit.


    There are several Lisp implementations (including fully-fledged operating systems) which are implemented in Lisp top to bottom.


    Common Lisp at its base is usually written in Common Lisp.


    I'm sure you are aware there is ultimately a chicken and egg problem here. Even given the case you presented, it doesn't invalidate the point that if it can implement lisp it must be able to do everything lisp can do. In fact given lisp's simplicity, I'd be hard pressed to call a language that couldn't implement lisp "general purpose".


    "You're a very clever man, Mr. James, and that's a very good question," replied the little old lady, "but I have an answer to it. And it's this: The first turtle stands on the back of a second, far larger, turtle, who stands directly under him."

    "But what does this second turtle stand on?" persisted James patiently.

    To this, the little old lady crowed triumphantly,

    "It's no use, Mr. James—it's turtles all the way down."



    > I'm sure you are aware there is ultimately a chicken and egg problem here.

    You should learn more about compilers. There is a really cool idea waiting for you.



    Yes, but sometimes doing the things Lisp can do in another language as easily and flexibly as they are done in Lisp has, as a first step, implementing Lisp in the target language.

    For a famous example, see Clasp: https://github.com/clasp-developers/clasp



    One doesn't have to invoke Turing or Church to show all languages can do the same things.

    Any code that runs on a computer (using the von Neumann architecture) boils down to just a few basic operations: Read/write data, arithmetic (add/subtract/etc.), logic (and/or/not/etc.), bit-shifting, branches and jumps. The rest is basically syntactic sugar or macros.

    If your preferred programming language is a pre-compiled type-safe object oriented monster with polymorphic message passing via multi-process co-routines, or high-level interpreted purely functional archetype of computing perfection with just two reserved keywords, or even just COBOL, it's all going to break down eventually to the ops above.



    You can trivially devise a language that doesn't, though? Let's say I have a language that can return 0 and only 0. It cannot reproduce lisp.


    Sometimes, when people say one language can't do what another does, they aren't talking about outputs. Nobody is arguing that lisp programs can do arithmetic and others can't, they're arguing that there are ergonomics to lisp you can't approach in other languages.

    But even so

    > it's all going to break down eventually to the ops above.

    That's not true either. Different runtimes will break down into a completely different version of the above. C is going to boil down to a different set of instructions than Ruby. That would make Ruby incapable of doing some tasks, even with a JIT. And writing performance sensitive parts in C only proves the point.

    "Any language can do anything" is something we tell juniors who have decision paralysis on what to learn. That's good for them, but it's not true. I'm not going to tell a client we're going to target a microcontroller with PHP, even if someone has technically done it.



    I believe ‘twas a joke.


    > Lisp's dreaded Cambridge Polish notation is uniform and universal. I don't have to remember whether a form takes curly braces or square brackets or what the operator precedency is or some weird punctuated syntax that was invented for no good reason. It is (operator operands ...) for everything. Nothing to remember. I basically stopped noticing the parenthesis 40 years ago. I can indent how I please.

    Well, that might be true for Scheme, but not for CL. There are endless forms for loops. I will never remember all of them, or even a fraction of them. Going through Guy Steele's CL book, I tend to think that I have a hard time remembering most of the forms, functions, and their signatures.
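
    To illustrate the point, here are three of the many standard ways to sum a list (a quick sketch):

      (loop for x in '(1 2 3) sum x)   ; the LOOP macro => 6
      (reduce #'+ '(1 2 3))            ; a higher-order function => 6
      (let ((acc 0))                   ; plain DOLIST with an accumulator
        (dolist (x '(1 2 3) acc)
          (incf acc x)))               ; => 6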



    I’m not really familiar with Lisp, but from glancing at this article it seems like all of these are really good arguments for programming in Ruby (my language of choice). Easily predictable syntax, simple substitution between variables and method calls, dynamic typing that provides ad hoc polymorphism… these are all prominent features of Ruby that are much clunkier in Python, JavaScript, or really any other commonly used language that I can think of.

    Lisp is on my list of languages to learn someday, but I’ve already tried to pick up Haskell, and while I did enjoy it and have nothing but respect for the language, I ultimately abandoned it because it was just too time-consuming for me to use on a day-to-day basis. Although I definitely got something out of learning to program in a purely functional language, and in fact feel like learning Haskell made me a much better Ruby programmer.



    I have about 6 years of ruby experience and if you're saying that ruby has "easily predictable syntax"...

    You really should try lisp. I liked clojure a lot coming from ruby because it has a lot of nice ergonomics other lisps lack. I think you'd get a lot out of it.



    Ruby and LISP have a lot of overlap. For my money, LISP is a little more predictable because the polymorphic nature of the language itself is always in your face; you know that you're always staring at a list, and you have no idea without context whether that list is being evaluated at runtime, is literal, or is the body of a macro.

    Ruby has all those features but (to my personal taste) makes it less obvious that things are that wild.

    (But in both languages I get to play the game "Where the hell is this function or variable defined?" way more often than I want to. There are some advantages to languages that have a strict rule about modular encapsulation and requiring almost everything into the current context... With Rails, in particular, I find it hard to understand other people's code because I never know if a given symbol was defined in another file in the codebase, defined in a library, or magicked into being by doing string transformations on a data source... In C++, I have to rely on grep a lot to find definitions, but in Ruby on Rails not even grep is likely to find me the answer I want! Common LISP is similarly flexible with a lot of common library functions that magick new symbols into existence, but the codebases I work on in LISP aren't as large as the Ruby on Rails codebases I touch, so it bites me less).



    Go back to the source, use Smalltalk in a nice environment like VisualWorks and get all that built in :-)


    Common Lisp has compilers that produce fast code.


    On this topic: my absolute favorite Common Lisp special operator is `the`. Usage: `(the value-type form)`, as in `(the integer (do-a-bunch-of-math))`.

    At first glance, it looks like your strong-typing tool. And it can be. You can build a static analyzer that will match, as best it can, the type of the form to the value-type and throw errors if they don't match. It can also be a runtime check; the runtime is allowed to treat `the` as an assert and throw an error if there's a mismatch.

    But what the spec actually says is that it's a special operator that does nothing but return the evaluation of the form if the form's value is of the right type and the behavior is undefined otherwise. So relative to, say, C++, it's a clean entrypoint for undefined behavior; it signals to a Lisp compiler or interpreter "Hey, the programmer allows you to do shenanigans here to make the code go faster. Throw away the runtime type identifier, re-represent the data in a faster bit-pattern, slam this value into functions without runtime checks, assume particular optimizations will succeed, paint it red to make it go fasta... Just, go nuts."
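
    A small sketch of how that plays out (behavior is implementation-dependent; on SBCL, SAFETY 0 makes the compiler trust THE without a runtime check, while higher SAFETY settings verify it):

      (defun sum-fast (a b)
        (declare (optimize (speed 3) (safety 0)))
        ;; With safety 0, THE is trusted: unchecked fixnum arithmetic,
        ;; and undefined behavior if a or b is not actually a fixnum.
        (the fixnum (+ (the fixnum a) (the fixnum b))))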



    If you want something similar to Ruby but more functional, try Elixir. The similarities are superficial but might be enough to ease you in.

    Haskell is weird. You can express well-defined problems with relative ease and clarity, but performance can be kind of wonky and there's a lot more ceremony than in your typical Lisp or Scheme or close relative of those. F# can give you a more lispish experience with a threshold about as low as Haskell's, but it comes with close ties to an atrocious corporation, and, similar to Clojure, it's not exactly a first-class citizen in its environment.

    Building stuff in Lisp-likes typically doesn't entail juggling two programming languages, the primary one and a second one for programming the type system; in that way they're convenient in a way similar to Ruby. I like the parens and how they explicitly delimit portions that quite closely relate to the AST step in compilation (or whatever the interpreter sees); it helps with moving things around and molding the code quickly.



    I've never programmed in a Lisp, but I'd love to learn; it feels like one of those languages, like Perl, that are just good to know. I do have a job where getting better with SKILL would be useful.


    The most impressive thing, to me, about LISP is how the very, very small distance between the abstract syntax tree and the textual representation of the program allows for some very powerful extensions to the language with relatively little change.

    Take default values for function arguments. In most languages, that's a careful consideration of the nuances of the parser, how the various symbols nest and prioritize, whether a given symbol might have been co-opted for another purpose... In LISP, it's "You know how you can have a list of symbols that are the arguments for the function? Some of those symbols can be lists now, and if they are, the first element is the symbolic argument name and the second element is a default value."
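
    Concretely, a minimal sketch of that lambda-list convention in Common Lisp:

      ;; &optional parameters may be written as (name default) pairs:
      (defun greet (name &optional (greeting "Hello"))
        (format nil "~a, ~a!" greeting name))

      (greet "World")       ; => "Hello, World!"
      (greet "World" "Hi")  ; => "Hi, World!"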



    Programming is about coordination between tasks. Prove me wrong.


    It doesn't have to be wrong to be irrelevant.

    Many things can be viewed as coordination problems. All of life can be viewed as being about coordination between tasks.

    But I want to engage in good faith and assume you have some way of making this productive. What angle are you going for?



    Hm, so your point is that life is programming?


    Isn't that obvious? :)

    I do wonder what happens if one of the tasks to be coordinated is "programming"?



    that's metaprogramming :D


    No. It's deep philosophy on software design.


    > It's less of a big deal these days, but properly working lambda expressions were only available in Lisp until recently.

    I think Haskell and ML have had lambda expressions since around 1990.



    Recent, compared to Lisp


    The number of programmers in the workforce who started before Standard ML (1983) is tiny, and this argument would be relevant only to them.


    The author of the referenced post is one of them, though.


    The word “properly” is not only working hard here, but perhaps pointing to deeper concepts.

    In particular, it implies a coherent design around scope and extent. And, much more indirectly, it points to time. EVAL-WHEN has finally made a bit of a stir outside Lisp.



    Does this imply that lambda expressions in Haskell and ML don't have a "coherent design around scope and extent"? This is quite a claim, to be honest....


    "properly working lambda expressions were only available in Lisp until recently."

    until -> since



    "properly working lambda expressions were available only in lisp until recently."


    > "properly working lambda expressions were only available in Lisp until recently."

    > until -> since

    I think "only since recently" is not standard English, but, even if it were, I think it would change the intended meaning to say that they were not available in Lisp until recently, the opposite of what was intended. I find it clearer to move the "only": "were available only in Lisp until recently."



    Personally, I'd probably move "until recently" to the front: "Until recently, properly working lambda expressions were only available in Lisp."


    Sorry, I did indeed misunderstand the sentence due to its phrasing. I cannot edit my comment for some reason.

    I like your fix the most.



    It's perfectly good and idiomatic English, but it's an ambiguous formation and your suggested edit does clarify it.


    I agree that "properly working lambda expressions were only available in Lisp until recently" is perfectly idiomatic, but easily misunderstood, English. I believe that the suggested fix "properly working lambda expressions were only available in Lisp since recently," which is what I was responding to, is not idiomatic. Claims about what is and isn't idiomatic aren't really subject to definitive proof either way, but it doesn't matter, because the suggester now agrees that it is not what was meant (https://news.ycombinator.com/item?id=43653723).


    To be clear, the construction I’m endorsing is: "were available only in Lisp until recently", which is the construction that my editors typically proposed for similarly ambiguous deployments of "only". The ambiguity in the original placement is that it could be interpreted as only available as opposed to available and also something else. My editors always wanted it to be clear exactly what the "only" constrains.


    Given that code is mostly written by LLMs now (or will be soon), isn't it better to just use the best language that fits these requirements:

    - LLM well trained on it.
    - Easy for a human team to review.
    - Meets performance requirements.

    Prob not lisp?



    how is that anywhere close to a given?


    LLM content is trained on popularity. If we use that as a metric, there will never be any improvements or changes again.





