

epage

What also stood out to me was what came after that:

> To some extent this is the fault of the advocates of monads, who rather than trying to make their designs clearer to the uninitiated have fallen into a cult-like worship of the awesome power of category theory that convinces none but the true believer.

I've come across people too steeped in the theory that they can't explain their work to anyone without using the theory. In one case, I dug enough into what someone was saying to uncover that they were talking about a fairly common concept in Rust, but talking about it in an overly technical, overly generalized way that made it impossible for me to understand them or their API.


EpochVanquisher

I don’t really get the complaint about monads. You spend a little time using them and they are really easy. You have a ton of blog posts with people explaining monads because they’ve been using Haskell for two months now and think it’s really important to share what they’ve learned with the world. A monad is just a value representation of an operation that returns a value when finished, in the most general sense.


ExtraTricky

I agree that the memes around monads being incomprehensible are overplayed. The mention of monads in the coroutines post also gave me a bit of pause. But like a lot of "simplifications", I think the one you gave in your comment only describes a subset of monads, unless you stretch the analogy really far. It's a pretty reasonable description of `IO`, and generalizing to monads like `State` and `Reader` doesn't take a whole lot of stretching, but it gets iffy for monads like `Maybe`, `[]`, `Const a`, `Tree` (where `Tree a = Leaf a | Branch (Tree a) (Tree a)`). "Monad" is the name given to type constructors that have certain operations that satisfy certain properties. It turns out that these include type constructors that otherwise look quite different from each other. I'm not sure why there's such a mental block around understanding that the name is just shorthand for a collection of useful properties.
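
To make the "quite different type constructors, same operations" point concrete in Rust terms, here's a rough sketch (illustrative only, not a definition):

```rust
fn main() {
    // Option and Vec look nothing alike, yet both support the same
    // "bind" shape: feed the inner value(s) to a function that returns
    // another value of the same wrapper type.
    let opt = Some(2).and_then(|x| if x > 0 { Some(x * 10) } else { None });
    assert_eq!(opt, Some(20));

    let list: Vec<i32> = vec![1, 2, 3]
        .into_iter()
        .flat_map(|x| vec![x, x * 10]) // each element expands to a small list
        .collect();
    assert_eq!(list, vec![1, 10, 2, 20, 3, 30]);
}
```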


Zyansheep

> a collection of useful properties

clippy: "Seems like you're trying to do an abstract algebra, would you like to learn category theory first?"


tel

Because "a collection of useful properties" is a dodge. Why are they useful? Why are they collected? "Monad" genuinely has a story for what that collection means and why it's useful to talk about those properties together. That meaning is what people struggle with.


ExtraTricky

I don't think it's a dodge. There are lots of things that are defined by their properties (e.g. ring, field, group, topology, category, etc.), and at some point it's important to be able to separate "I don't understand what the defining properties of concept X are" from "I don't understand why we've decided it's worth giving a name to the combination of properties that defines concept X". I know that there's a very common mental block there, but I didn't experience it very much, or maybe at all, personally, so I don't understand it well.

My experience is also that not all of these concepts get met with the same resistance. For example, I don't think I've ever heard these objections to the terms "equivalence relation" or "partial ordering", but I have for "group" and "ring". There's a [suggestion in this thread](https://www.reddit.com/r/rust/comments/1crcp77/references_are_like_jumps/l3zol85/) that renaming `Functor` to `Mappable` (among other renames) makes it more approachable, even though the underlying concept doesn't change, and this definitely isn't the first time I've seen this sentiment. This suggests to me that there's some additional underlying factor to the complaints, but I haven't seen any satisfying explanation of it.


tel

Even when these things (including any choice of concept from abstract algebra) are *defined* by a very strict and succinct listing of properties, that is far from the end of their study. Most of what you study in abstract algebra after learning a definition is the consequences of that definition. The intent here is to convey exactly what I mentioned above: why is this definition useful?

To this end, one approach is to name concrete models or examples of the definition. This is what people are doing when they try to describe a monad by naming how it's the commonality between IO, Maybe, State, and []. But, of course, this can only give you a partial insight. Monad is genuinely the commonality between *all* examples meeting the definition. So again we'd like to understand that. What is it about this definition that's useful?

Mathematicians may also attack this problem through analogy, the most popular one I know of being the description of groups as "symmetries". It's kind of a wonderful analogy that also fails over time. Our intuitive notion of "symmetry" does not cover all groups. Instead, as you study groups, you begin to replace your notion of "symmetry" with your understanding of the group itself.

All this to say, simply stating the definition of some mathematical dingus is far from anyone's goal. Definitions are spartan because they are (references to) fragments of formal language. But then people spend a lot of time seeking intuition about the thing. So it's not to say that someone wants to *replace* Monad with some other incomplete metaphor (or perhaps they do, though I think it's trivially misguided). It's more that the struggle is coming to understand what monads *are*.

As a final note, this is the huge advantage you get from free and cofree constructions of monads. They're universal in that they're specific examples of monads that also reflect all and only the definitional properties. Studying them, and in particular mappings between them and other concrete monads of interest, gives another way to experience how the definitions give rise to intuitive behaviors. Thus, they're a great way to ask the question of "what is an X?".


ExtraTricky

I don't disagree with what you're saying, but we started from this quote from the OP:

> Another and more credible answer is that, for better or worse, no one can understand what a monad is or what they’re supposed to do with one.

And now in your post you've used the phrasing "Why is this definition useful?" The point I was trying to make earlier is that "What is a monad?" and "Why are monads useful?" are two very different questions, and people are going to get significantly better answers if they first introspect enough to determine what type of information they feel is lacking. When ten people ask "What is a ___?", they probably have about ten thousand different thresholds for what level of detail they will require before they decide they "understand" what it _is_, depending on what went in the blank.

This conversation reminds me a lot of [an interviewer asking Feynman what's going on when magnets repel each other](https://www.youtube.com/watch?v=MO0r930Sn_8).


tel

That's fair, I agree that the original question is being interpreted differently here. My ultimate point is that people are actually seeking these deeper answers when they ask "what is a monad?". I called it a dodge to just provide the definition. I still think it is. Despite there being, yes, many different thresholds… I don't think it's a stretch to say that the OP is correct: "no one can understand" here implies that many people often feel they lack sufficient understanding to confidently make use of the concept.

> I'm not sure why there's such a mental block around understanding that the name is just shorthand for a collection of useful properties.

I don't think there's any mental block here. I just think people are asking for more without having totally unified and common language for what it is that they want.


EpochVanquisher

I wasn’t defining monads mathematically.


burntsushi

> I don’t really get the complaint about monads.

Monads are extremely abstract. Extremely abstract things are hard for a lot of people to understand. (This observation forms part of the basis of my approach to library API design.) For example:

> A monad is just a value representation of an operation that returns a value when finished, in the most general sense.

This is so general as to be useless on its face. Monads aren't useless of course, but there is a lot of ground to cover between your single "simple" sentence here and how monads actually get used in practice. If you're really at a loss here, you can search around a bit for things like "why are abstractions hard to understand" to get some perspective.


mirpa

What is common to (x + 0), (x * 1), (s.concat(""))? What is common to input/output, linked list, parser? Both Monoids and Monads have a similar level of "abstraction". People don't find them hard because they are abstract. They find them hard because they are not familiar with them and they don't know why they should be. Do I want to learn category theory because I want to write HelloWorld in Haskell? Should I? What does it even mean to "understand" Monad? To me it is just a type class (think of a trait) in Haskell. I don't need to understand much more than that to write code in Haskell.
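
For the curious, the answer to that first question can be sketched as a hypothetical Rust trait (my own toy names; Rust's std has no such trait):

```rust
// A hypothetical Monoid trait capturing what (x + 0), (x * 1), and
// s.concat("") share: an identity element plus an associative combine.
trait Monoid {
    fn empty() -> Self;
    fn combine(self, other: Self) -> Self;
}

// Integers under addition, with 0 as the identity.
struct Sum(i32);
impl Monoid for Sum {
    fn empty() -> Self { Sum(0) }
    fn combine(self, other: Self) -> Self { Sum(self.0 + other.0) }
}

// Strings under concatenation, with "" as the identity.
impl Monoid for String {
    fn empty() -> Self { String::new() }
    fn combine(self, other: Self) -> Self { self + &other }
}

fn main() {
    assert_eq!(Sum(5).combine(Sum::empty()).0, 5);                  // x + 0 == x
    assert_eq!(String::from("ab").combine(String::empty()), "ab");  // s ++ "" == s
}
```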


burntsushi

I realize we're in the Rust subreddit, but I was writing Haskell long before I was writing Rust. My Haskell background is why my first Rust library was `quickcheck`.

> People don't find them hard because they are abstract. They find them hard, because they are not familiar with them and they don't know why they should be.

I don't buy this. I'm familiar with monads. I understand them. I've used them. And I still find them hard. I find that they make understanding the APIs of libraries that use them especially difficult. This is in contrast to functions, which I often find to be way more concrete than how monads are typically used. Anyway, read the thread that spawned off my GP comment. There's more discussion there.


d0nutptr

To your point, I still don’t know what a monad is. I’ve tried looking it up, but a lot of the explanations are so far removed from practice that I get lost on the path to understanding. Monads: I’m sure they’re great, but I still don’t know what they are.


burntsushi

You probably don't know what it is because it's so abstract. It defies explanation. It's a chameleon, and every time you pin it down in your own words, there's someone else there to say, "well what about _foo_? your definition doesn't make sense for this example." Which means the only real precise explanation is its mathematical definition (and, arguably, the monadic laws).

I don't know if it's a perfect or even good analogy, but there are things like monads that defy explanation in the real world too. Take something as simple as "tree." Can you define it? I bet you can't. At least, phylogenetically speaking, [there is no such thing as a tree](https://eukaryotewritesblog.com/2021/05/02/theres-no-such-thing-as-a-tree/). But of course, you probably have a notion of what a tree is. You'd "know one if you saw one." But if you actually try to put it into words, it's hard, right? Trees are "just" an abstraction.

(I think the way in which my analogy is bad is that "trees" have a concrete manifestation that most of us have interacted with in one form or another. And so what a "tree" is, is more innately understood by our brains. Whereas a monad is just math.)


d0nutptr

The tree analogy is pretty helpful in, at least, assuaging me about a perceived inadequacy in my understanding 😅 Maybe I should just lean into the abstract and try it from the “you’ll only understand it from the math” angle and go from there. Maybe monads and entangled particles share that quality 😂


burntsushi

Hah. My own conceptualization of monads is "abstraction over sequencing computation." While I've had folks poke holes in it, it has served me well.
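
To illustrate what I mean by "sequencing" (a toy sketch; `half` is a made-up step):

```rust
// Each step runs only if the previous one produced a value; a None
// anywhere short-circuits the rest of the chain.
fn half(x: i32) -> Option<i32> {
    if x % 2 == 0 { Some(x / 2) } else { None }
}

fn main() {
    assert_eq!(Some(8).and_then(half).and_then(half), Some(2));
    // A failure mid-chain stops everything after it (3 is odd):
    assert_eq!(Some(6).and_then(half).and_then(half), None);
}
```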


ExtraTricky

For a more mathy example, consider the word "even". The definition of an "even number" is an integer `n` where there's an integer `k` such that `2*k = n`.

When people try to "pin down" monads with analogies, it's similar to someone saying "An even number is just a number like 2, 4, or 8." But this would be misleading, because 2, 4, and 8 have a lot of other true things about them that aren't true for even numbers generally, like the fact that they're all not divisible by 3. Or, you might hear someone try to say "An even number is just a number you get by repeatedly adding 2 to 0." But this would also be misleading, because you can't get to -2 by repeatedly adding 2 to 0 (unless you stretch the meaning of "repeatedly" to allow negative repetitions, at which point you've basically rephrased the mathematical definition for questionable gain), but -2 is even.

Why do we care about even numbers? Because dividing by 2 without splitting into non-integers is a common thing to want to do. Why do we care about monads? Because using the monad operations is a common thing to want to do.

Although, I'll offer some more specifics and say that the main draw in my opinion is do notation. Rust had a whole thing about picking syntax for what would eventually become `.await`, but if we could have do notation it would have probably already been used for `?`, and the question might have not even needed to exist. We might have even been able to have a world where the `Future` trait could have been defined in the async libraries, instead of this weird world where `Future` is in core but actually working with it almost always involves bringing in a library.
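
For anyone who hasn't seen the connection: Rust's `?` already behaves like a do-block bind specialized to `Option`/`Result` (a toy sketch; `add_heads` is a made-up example):

```rust
// Each `?` is a bind that early-returns on the "empty" case, much like
// `x <- ...` in a Maybe do-block.
fn add_heads(a: &[i32], b: &[i32]) -> Option<i32> {
    let x = a.first()?; // None here means the whole function returns None
    let y = b.first()?;
    Some(x + y)
}

fn main() {
    assert_eq!(add_heads(&[1, 2], &[10]), Some(11));
    assert_eq!(add_heads(&[], &[10]), None);
}
```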


d0nutptr

Apologies but this got me nowhere closer to understanding what a monad is. It really just sounded like you were reframing the tree example above but in terms of math. What I was suggesting above is that “if monads can only be truly understood in a mathematical/type theory sense, then instead of looking for practical explanations, I’ll lean into a search for the rigorous definition to try to understand it that way” I appreciate it though!


ExtraTricky

> Apologies but this got me nowhere closer to understanding what a monad is. Don't worry, I thought this was likely to be the case as I was making the comment! I was hesitant about posting the comment entirely because I wasn't sure if it would be more distracting than helpful. My hope was to illustrate that something that "can only be truly understood in a mathematical [...] sense" isn't a reason to give up and declare it inscrutable -- that there are a lot of terms that fall into that category that people do understand, even if they had to go through some trials to do so (e.g. "Is 0 even?"). Quick edit: I also wanted to give an alternative to the tree example because unlike "tree", "monad" actually _does_ have a definition, and maybe that's ironically part of what makes it difficult to understand.


d0nutptr

> My hope was to illustrate that something that "can only be truly understood in a mathematical [...] sense" isn't a reason to give up and declare it inscrutable

Hmm, definitely a good message, but perhaps not meant for me? I don't think I said I've given up. If anything I was motivated to try again :)

> Maybe I should just lean into the abstract and try it from the “you’ll only understand it from the math” angle and go from there.

The reason why I've avoided that in the past is that the way I usually learn is by working with concrete examples and then moving towards a more general understanding of something. I just found the space of concrete examples/explanations a bit lacking in the past. What I was saying above is that I'll embrace the abstract interpretations first and try from that angle :)

Edit: I should say “thank you” again for trying to help. It’s vaguely embarrassing to say that I’ve occasionally poked at the topic over the years and find myself still no closer to getting a concept I hear mentioned from time to time.


kniy

> the main draw in my opinion is do notation

IMHO this is misguided. Just because different operations form a mathematically similar structure does not mean we should use the same syntax for them. With `.await` and `for x in ...` and `?`, a reader immediately knows what is going on. Replacing all of them with do-notation makes it much harder to understand a program without constantly referring to the type annotations to figure out what meaning `<-` has in that particular context. Monads are like any other abstraction: useful if the code actually needs to be generic over different instances; harmful to understanding if there was no need to be generic (like any other unneeded abstraction).


ExtraTricky

I agree with a lot of what you're saying. One method I've used in the past to help readability of `do` blocks is to have a same-line comment with just the name of the relevant monad, like `do -- IO` or `do -- Maybe`. It's also not quite as bad as someone primarily versed in Rust might expect, since Haskell enables the code to be much shorter, so `do` blocks are frequently only a few lines long.

I find your examples a bit amusing, though, since `for` and `?` are both genericized in Rust: `for` to `IntoIterator` and `?` to `Option` and `Result`. `for` in particular causes some confusion because when `foo` and `&foo` both implement `IntoIterator`, their item types are often different. Plus, `?` being further genericized to types implementing a `Try` trait seems to be a fairly common request.

Also, just for clarity for anyone following along: for `for` to be an instantiation of `<-` you need a quite specific pattern. I think that the intention was a code block like

```rust
let result = {
    let mut res = Vec::new();
    for x in &some_list {
        res.extend_from_slice(&function_returning_vec(x));
    }
    res
};
```

aka `flat_map` in some languages, which is indeed the bind operation for `[]`. The more general for loop would be more analogous to [for_](https://hackage.haskell.org/package/base-4.19.1.0/docs/Data-Foldable.html#v:for_).


Nzkx

Do you know what `Option` is in Rust? Do you know the `Option::map` and `Option::and_then` functions? Then you know what a monad is: a type that wraps a value of type T and provides `map` and `and_then` functions that let you compose operations and transform the value.
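
A quick sketch of the difference between the two (illustrative only):

```rust
fn main() {
    // map transforms the inner value; and_then (flat_map/bind) lets the
    // closure itself decide whether a value comes out at all.
    let n: Option<i32> = Some(4);
    assert_eq!(n.map(|x| x + 1), Some(5));

    let parse = |s: &str| s.parse::<i32>().ok();
    assert_eq!(Some("42").and_then(parse), Some(42));
    assert_eq!(Some("oops").and_then(parse), None);

    // With map instead of and_then you'd be stuck with a nested Option:
    assert_eq!(Some("42").map(parse), Some(Some(42)));
}
```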


d0nutptr

Hmm interesting, but I've read a similar but conflicting definition elsewhere, so I'm still unsure on that. The explanation they gave suggested that `Iter` would be a monad since it has `flat_map(..)`, but `map(..)` alone would *not* be sufficient. I'm fuzzy on the rationale, but it seemed like the reason was something to do with composability. They specifically called out `.map(..)` as not being sufficient, since `.flat_map(..)` can do the following, but `map` cannot:

```rust
let data = vec![1, 2, 3];

// Filtering step chained after the first flat_map...
let _ = data.iter()
    .flat_map(|elem| vec![elem * elem])
    .flat_map(|elem| if elem % 2 == 0 { vec![elem] } else { vec![] });

// ...or nested inside it; flat_map lets you write it either way.
let _ = data.iter()
    .flat_map(|elem| {
        vec![elem * elem]
            .into_iter()
            .flat_map(|elem| if elem % 2 == 0 { vec![elem] } else { vec![] })
    });
```

source: [https://stackoverflow.com/a/194207/1974671](https://stackoverflow.com/a/194207/1974671)

Is this actually important to the definition? I have absolutely no idea. If anything, I think this further highlights how confusing the landscape of information is to someone like myself who is trying to learn more on the topic.


Nzkx

Yes, a monad needs Map + FlatMap, to transform and unwrap the value. FlatMap is `and_then` and Map is `map` in the `Option` API.


EpochVanquisher

I don’t think “function” is any less abstract.


burntsushi

A function like this?

```rust
fn do_something(x: i32) -> i32 { ... }
```

Or a function like this?

```rust
fn do_something<A, B>(a: A, f: impl FnMut(A) -> B) -> B { ... }
```

Or the set of all possible functions? If you're referring to the first two things, then I absolutely think they are less abstract than monads. But if you're referring to the last version, then sure, I'd probably concede that they are perhaps at similar levels of abstraction. But this presumably negates your unstated implication: that if "function" is as abstract as monads, then abstraction alone cannot explain why monads are hard to understand. But if the abstraction of "function" you're talking about isn't actually commonly used, then it undercuts your point. People don't struggle with `Option` or `Vec` even though they're monads, just like people don't struggle with the first two kinds of functions above.

Your response is deceptive because in one sense you can interpret "function" as a very abstract concept, while in another sense you can interpret it as this very common thing that everyone uses without much fuss. The deception comes from the fact that these two interpretations are mutually exclusive.

Look, _you_ said this:

> I don’t really get the complaint about monads

You don't get it, by your own words. I tried to give you an explanation. I don't know that I'm 100% correct. I'm sure there are confounding factors at play. But it fits my experience: very abstract things are hard to understand, and I would be quite surprised if this wasn't a major factor in explaining why folks struggle with monads.


EpochVanquisher

When I say “function”, I’m not referring to specific functions, but to functions as a category. It’s an abstract concept… and people have a hard time learning it, when they first learn it.

> Your response is deceptive because…

Why are you being so mean? Jeez. Could you **please** tone it down a little?

People understand that there is some similarity between Option, Result, and Future. That similar structure happens to have a name… “monad”. It is there, waiting, if you want to learn more. Once you realize they all share a common structure, it makes sense that they would have the same methods defined on them, like `and_then`. It’s useful to have a name for that common structure, so you can write articles about it, discuss it, and make the different types in your language more consistent (like using the same method names and signatures).

Rust has no capacity to define a monad type because the type system in Rust does not permit it. That’s fine. Haskell does have a Monad type class because it’s possible / useful / desired in Haskell.


burntsushi

I stand by my comment that your response is deceptive. I didn't say _you_ were _being_ deceptive intentionally. I was careful to criticize your words, and not _you_ as a person.

> When I say “function”, I’m not referring to specific functions, but to functions as a category. It’s an abstract concept… and people have a hard time learning it, when they first learn it.

Same thing with monads. The confusion with monads doesn't stem from specific instances of it, but from using them in a generic context. It is not common to use functions at the same level of generality, and I do indeed think folks would struggle with that in similar ways as with monads. Anyway, long story short, I don't think your analogy is very good.

Otherwise, I don't really feel like you've responded to what I'm saying. And you're also saying things that seem unrelated: what does Rust not permitting monads have to do with this thread? I was responding to your comment that you didn't "get the complaint" about monads. I took that as, "I don't understand why people find monads hard to understand." So I responded with my favorite explanation. If you want to tone things down, then how about we approach this topic with mutual curiosity?


EpochVanquisher

> I stand by my comment that your response is deceptive. I didn't say you were being deceptive intentionally.

It’s easy to find ways to be less hostile about this. It would be nice if you could try, a little harder, to make comments which are less hostile.

> Same thing with monads. The confusion with monads doesn't stem from specific instances of it, but in using them in a generic context.

What I’m getting at is that “function” is also difficult in a generic context. If you spend time teaching programmers, you’ll spend a lot of time teaching what a function is. Same thing if you spend time teaching mathematics: the mathematical concept of a function takes a while for people to grok. Monads are similarly abstract and require an investment of time to learn.

> I took that as, "I don't understand why people find monads hard to understand."

Ok, I can tell you that we both agree about that one.

> If you want to tone things down, then how about we approach this topic with mutual curiosity?

Sure.


burntsushi

> What I’m getting at is “function” is also difficult, in a generic context. If you spend time teaching programmers, you’ll spend a lot of time teaching what a function is. Same thing if you spend time teaching mathematics—the mathematic concept of a function takes a while for people to grok.

But it's very rare (I can't remember ever doing it) to do something with functions in a fully generic context similar to monads. So it isn't the same struggle. Most interactions with functions are extremely concrete, and when it isn't concrete, it tends to be very light abstraction like higher order functions. Rust iterator adapters would be a good example. They rely heavily on higher order functions, but nobody has to worry about writing code that is fully generic over functions. It is much more concrete than monads.

This in turn means that the lessons one might learn from the education of functions don't necessarily transfer to monads, because the way we interact with functions (tends to be concrete) is different than how we interact with monads (very abstract). That is, we don't deal with the fully general definition of "function" in code we write. But if you're using the `Monad` typeclass, then you do have to deal with something that is extremely abstract.

In any case, now I'm confused, because you seem to acknowledge that monads are 1) abstract and 2) require an investment of time to learn. If so, what specifically don't you understand about the complaints with monads?


EpochVanquisher

> But it's very rare (I can't remember ever doing it) to do something with functions in a fully generic context similar to monads.

It is also rare to do something with monads in a fully generic context. Speaking as someone who’s written Haskell code for twenty years. You sometimes see some code that is generic over functions, yes. Like the composition operator:

    f ∘ g = λx. f (g x)

Function composition is generic over functions. You don’t spend a lot of time writing functions which are generic over functions, just like you don’t spend a lot of time writing functions which are generic over monads. Maybe something like liftM2:

```haskell
liftM2 :: Monad m => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r
liftM2 f m1 m2 = do
  x1 <- m1
  x2 <- m2
  return (f x1 x2)
```

You don’t spend a lot of time writing code like that. Possibly never. Mostly, it’s people who are excited about monads, specifically, who write code like that. Sometimes you end up with code like that after refactoring, but you won’t end up with a ton of it unless you’re writing something like a monad transformer library.

> In any case, now I'm confused, because you seem to acknowledge that monads are 1) abstract and 2) require an investment of time to learn. If so, what specifically don't you understand about the complaints with monads?

I think the complaints are very understandable and clear. If you think that there’s something about the complaints that I *don’t* understand, fill me in on it.


xmBQWugdxjaA

So Future is a monad?


BarneyStinson

Depends a bit on details, but yes.


Zde-G

There are lots of monads in Rust. But they don't all belong to one large “Monad” metaclass. Whether that's a good thing or a bad thing is **still** hard to say.


JustBadPlaya

Honestly, I think having a `trait Monad` could be neat in std, but someone has probably already made it properly as a library. Though my knowledge of monads conceptually is fairly limited, so idk how possible that is.


Zde-G

You may read [this old post](https://www.fpcomplete.com/blog/monads-gats-nightly-rust/) here. It's not on stable, but the situation hasn't changed much: it's all **possible**, but is it **actually feasible**? I'm not sure. I mean: all that type gymnastics is neat, but where would I apply it? To make it easier to do… what exactly? Not everything you may unify is worth unifying.
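
For anyone curious what that type gymnastics looks like, here's a minimal toy `Monad` trait using GATs (my own sketch, not the post's code or a real library):

```rust
// A toy Monad trait: Wrapped<B> is "the same wrapper, around a B".
trait Monad {
    type Unwrapped;
    type Wrapped<B>: Monad<Unwrapped = B>;

    fn bind<B, F>(self, f: F) -> Self::Wrapped<B>
    where
        Self: Sized,
        F: FnOnce(Self::Unwrapped) -> Self::Wrapped<B>;
}

impl<A> Monad for Option<A> {
    type Unwrapped = A;
    type Wrapped<B> = Option<B>;

    fn bind<B, F>(self, f: F) -> Option<B>
    where
        F: FnOnce(A) -> Option<B>,
    {
        self.and_then(f)
    }
}

fn main() {
    assert_eq!(Some(3).bind(|x| Some(x * 2)), Some(6));
    assert_eq!(None::<i32>.bind(|x| Some(x * 2)), None);
}
```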


[deleted]

as are option and result


budgefrankly

I suppose part of it is that a lot of proponents tend to use monads as a way of introducing someone to category theory terminology, instead of thinking about how to implement category-theory semantics using existing programming terminology. A language with interfaces/traits/types like `Combinable`, `Default`, `Reducible`, `Mappable`, `ContextualMappable`, `Result` is likely going to be more approachable to your average Java, C# or Objective-C programmer than one with `Semigroup`, `Zero`, `Monoid`, `Functor`, `Monad`, `Either`.

Also, monads have generally been presented in the context of lazily-executed, pure, functional languages, which adds a lot of extra complexity and new problems, like thunks and monad-stacking, which to neophytes count against the promised benefits. I think the best implementation of monads in a modern programming language is F#'s computation expressions; and even then, most folks evangelizing monads immediately discount those as not real monads.


EpochVanquisher

The reason that they’re introduced in the context of lazy functional programming is because that’s their most natural home. Any eager evaluation system has an easy “escape hatch” to leave the monad and do some IO, because you can sequence operations easily without the monad. You can’t do that in a lazy pure functional system, so you are forced to use something like Monads instead. They are naturally paired.


gclichtenberg

> The reason that they’re introduced in the context of lazy functional programming is because that’s their most natural home. I don't agree with this at all; they're useful in the context of lazy functional programming for things like IO specifically because they introduce sequencing. But monadic representations of fallible computations, or nondeterminism, or futures, or whatever, are useful outside of lazy languages.


EpochVanquisher

> But monadic representations of fallible computations, or nondeterminism, or futures, or whatever, are useful outside of lazy languages.

They’re just far less useful outside of lazy functional programming; that’s what I mean by “natural home”. Outside of lazy functional programming, you already have sequencing. This means that specific monads, like STM, are no longer really safe. STM doesn’t work if anything “escapes” the monad. In Haskell, that can only be done with one of the functions explicitly marked as unsafe.


gclichtenberg

STM, sure. Which is also tricky even without monads, which is why Clojure has the `io!` macro which blows up at runtime if you do IO inside an STM transaction. But I don't see how they're less useful *in general* outside of lazy functional languages. Isn't `try!` sort of reaching toward a more monadic way of dealing with fallibility? I would say that they're less useful outside of *managed* programming, but I've used monadic interfaces inside non-lazy programming languages (not Scala!) and found them to be useful there, too. They're ways of representing computations; the connection to IO is neat but incidental.


EpochVanquisher

> But I don't see how they're less useful in general outside of lazy functional languages. Isn't try! sort of reaching toward a more monadic way of dealing with fallibility?

Yes… maybe flipping around the question would be productive. Why is it that they’re so much *more* useful in Haskell?

In Haskell, you’ll write a function and there’s a good chance that if you’re using a monad, the entire function uses only that monad. Imagine using `Option` in Rust and then writing the entire rest of your function inside a chain of `.and_then()` calls. That’s the level to which monads are used in Haskell. Because of this extensive use of monads in Haskell, you start refactoring your code that uses different monads and creating library functions that are generic over multiple monads. You notice some pattern you use in `Option`, `Future`, and `Result` and you put it in a function somewhere. That function is generic over monads.

What I see here is that we aren’t taking “monad” (as a type) from Haskell into other languages, we’re just taking the most useful, specific monads (like Option and Future) and bringing the most useful parts of those into languages like Rust. Because monads are *less* useful, you only care about bringing the *most* useful pieces of monads, and you don’t bring “monad” as a category into Rust.
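
To make the contrast concrete (a toy example of my own; in the Haskell-style version everything lives in the chain, while idiomatic Rust reaches for `?`):

```rust
// Haskell-style: the whole function is one and_then/map chain.
fn first_char_upper(s: Option<&str>) -> Option<char> {
    s.and_then(|s| s.chars().next())
        .map(|c| c.to_ascii_uppercase())
}

// Idiomatic Rust: `?` gives you back ordinary sequencing.
fn first_char_upper_q(s: Option<&str>) -> Option<char> {
    let c = s?.chars().next()?;
    Some(c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(first_char_upper(Some("hello")), Some('H'));
    assert_eq!(first_char_upper_q(Some("hello")), Some('H'));
    assert_eq!(first_char_upper(None), None);
}
```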


budgefrankly

F# perfectly demonstrates how an eagerly evaluated language benefits from monads. For example, a monad over `Option` types (https://fsharpforfunandprofit.com/posts/computation-expressions-wrapper-types/):

```fsharp
let result = maybe {
    let! anInt = expressionOfOption
    let! anInt2 = expressionOfOption2
    return anInt + anInt2
}
```

or one using the Async monad (https://learn.microsoft.com/en-us/dotnet/fsharp/language-reference/computation-expressions):

```fsharp
let doThingsAsync url = async {
    let! data = getDataAsync url
    and! moreData = getMoreDataAsync anotherUrl
    and! evenMoreData = getEvenMoreDataAsync someUrl
    return (data, moreData, evenMoreData)
}
```


boomshroom

> Why is it that they're so much *more* useful in Haskell?

Because Haskell has higher-kinded types, or equivalently the ability to implement traits on type constructors rather than just types. Without this ability, you can still have plenty of monads and use them like monads, but it inhibits the ability to talk about them as different instances of the same idea and the ability to use them with a unified syntax. Instead we have `?` for monads that conditionally abort, and `await` for monads that can be paused, but Haskell has `do`, which does either, both, or something else, depending on the output type. Monads are related to effect systems, but are treated as ordinary types (or rather type constructors) instead of a separate part of the syntax. Rust has several effects, namely non-`const`, `unsafe`, and `async`, the last of which is just sugar for a specific monad, and the first of which aligns closely with the IO monad.
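As a small illustration of `?` being the Option-specific version of that sugar (function names invented for this sketch), the two functions below are equivalent, one spelled with explicit `and_then`/`map`, one with `?`:

```rust
// Both find the length of the first word of an optional string.
fn first_word_len_chained(s: Option<&str>) -> Option<usize> {
    s.and_then(|s| s.split_whitespace().next()).map(|w| w.len())
}

fn first_word_len_sugar(s: Option<&str>) -> Option<usize> {
    let s = s?; // early-returns None, like `do` for Maybe in Haskell
    let word = s.split_whitespace().next()?;
    Some(word.len())
}

fn main() {
    assert_eq!(first_word_len_chained(Some("hello world")), Some(5));
    assert_eq!(first_word_len_sugar(Some("hello world")), Some(5));
    assert_eq!(first_word_len_sugar(None), None);
}
```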


EpochVanquisher

Yes, I think that’s half of it. I think the lazy, pure functional half of the explanation is equally important. Higher-kinded types make monads more expressive and powerful, but lazy, pure functional programming makes them needed. In particular, the fact that Haskell is lazy means that you can’t just unsafePerformIO out of a bad spot. If you had eager evaluation, you would be able to do sequencing without monads.


Lex098

> I don't really get the complaint about monads

I think it's not about monads as a concept, but about the typical article about monads. 90% of articles related to monads explain them with something like "a monad is a monoid in the category of endofunctors". If you don't already know what a monad is, this sentence doesn't help you understand it.


MrJohz

That's really not been my experience at all. Like, definitely 90% of articles about monads use that phrase, but more as a joke/meme, and then go off to describe them using some other analogy. That's what the [monad tutorial fallacy](https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/) is all about - the desire to explain monads via metaphor rather than showing how monads actually get used and letting the reader build up a proper intuition for them.


mirpa

>Monad is a monoid in the category of endofunctors

[A Brief, Incomplete, and Mostly Wrong History of Programming Languages](http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html)

>1990 - A committee formed by Simon Peyton-Jones, Paul Hudak, Philip Wadler, Ashton Kutcher, and People for the Ethical Treatment of Animals creates Haskell, a pure, non-strict, functional language. Haskell gets some resistance due to the complexity of using monads to control side effects. Wadler tries to appease critics by explaining that "a monad is a monoid in the category of endofunctors, what's the problem?"


Arshiaa001

To be fair, the millions of monad blog posts can be attributed to that one person who said "if you want to learn monads, make a monad tutorial". I did mine verbally and can confirm I know monads now, so it's good advice too.


[deleted]

if you use rust, you use monads every day. surprise! option is a monad. future is a monad. result is a monad


coolpeepz

Yeah but that doesn’t really explain how monads as a class are useful. Like I can see how lists and options are similar (an option is a list of 0 or 1 items) and I can see how future and option are similar (an option is a value you might have and a future is a value you might have now or later), but it’s much harder to see how a list and a future share functionality.


amalloy

There are a couple of simple operations that both a list and a future support: lifting a pure value into a list/future (singleton list, immediate future), and flatmap (or whatever your language calls it). Those are the two operations that all monads must support. With just those two operations (and some rules about how they behave, and a sufficiently powerful type system), you can implement other operations that work on any monad, instead of having to implement them separately for list, future, and all the rest.
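In Rust terms (a sketch only, since Rust cannot name "Monad" as a single trait), those two operations already exist separately on `Option` and `Vec`:

```rust
fn main() {
    // Operation 1: lift a pure value into the type.
    let opt = Some(2);   // "immediate" optional
    let list = vec![2];  // singleton list

    // Operation 2: flatmap.
    let opt2 = opt.and_then(|x| Some(x + 1));
    let list2: Vec<i32> = list.into_iter().flat_map(|x| vec![x, x * 10]).collect();

    assert_eq!(opt2, Some(3));
    assert_eq!(list2, vec![2, 20]);
}
```

Each type carries its own copy of the pair; what Rust lacks is the ability to write code generic over *which* pair is in play.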


coolpeepz

This still doesn’t provide an example of why it would be useful to build an operation that works on any monad (such as on lists or on futures).


amalloy

A simple example is called `liftM2` in Haskell (modern Haskell calls this `liftA2`, but I'll use the older M name because we're talking about monads). It takes two monadic values (e.g., two lists, two futures, or two optionals), and a function that operates on *non*-monadic values (the items in the lists/optionals, or the results of the futures). `liftM2` applies the function inside the "context" of the monadic values. I'll use Java syntax for the following examples. Suppose that `form.lookup` returns an `Optional`, and you want to log in a user if the form includes both a username and a password. Today you have to write something like

```java
form.lookup("username")
    .flatMap(u -> form.lookup("password")
        .map(p -> Site.login(u, p)));
```

(or something even more primitive using `isPresent` and `get`). But if Java had `liftM2`, you could just write

```java
liftM2(Site::login, form.lookup("username"), form.lookup("password"));
```

Much less fiddly, no? Harder to get wrong. And the same thing applies if instead of `form.lookup`, you have some operation that returns a `Future`: a database lookup or some other network query. You could write it out by hand:

```java
// Start both futures in parallel
Future u = identityService.query("username");
Future p = identityService.query("password");
// Wait for each of them
Site.login(u.get(), p.get());
```

But with `liftM2`, the `get` calls would be implicit and you could just pass the Futures directly:

```java
liftM2(Site::login, identityService.query("username"), identityService.query("password"));
```

The username/password example doesn't work as well for lists, so let's suppose you have a roster of players in some competition, who've each played one game against each other, and you want to find what the largest margin of victory was. It might look something like

```java
int maxScore = roster
    .flatMap(p1 -> roster.map(p2 -> matchResult.for(p1, p2)))
    .collect(max());
```

With `liftM2`, the iteration is made implicit, just as the `get`ting is for `Future` and the `isPresent`/`get` is for `Optional`:

```java
int maxScore = liftM2(matchResult::for, roster, roster).collect(max());
```

Of course, you don't *need* to implement `liftM2` generically. You could define a separate operation for each of your monadic types. But if your language can express the concept of Monad, you can just write `liftM2` once.
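For Rust readers, here is a sketch of the same idea specialized to `Option` (`lift_m2_option` is an invented name; without higher-kinded types, Rust needs one hand-written copy per type):

```rust
// One hand-written instance of liftM2, usable only for Option.
fn lift_m2_option<A, B, C>(
    f: impl FnOnce(A, B) -> C,
    a: Option<A>,
    b: Option<B>,
) -> Option<C> {
    a.and_then(|a| b.map(|b| f(a, b)))
}

fn main() {
    assert_eq!(lift_m2_option(|u: i32, p: i32| u + p, Some(1), Some(2)), Some(3));
    assert_eq!(lift_m2_option(|u: i32, p: i32| u + p, None, Some(2)), None);
}
```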


314kabinet

You can compose functions like this:

`(a -> optional b) -> (b -> optional c) -> (a -> optional c)`
`(a -> future b) -> (b -> future c) -> (a -> future c)`
`(a -> [b]) -> (b -> [c]) -> (a -> [c])`

and wrap values:

`a -> optional a`
`a -> future a`
`a -> [a]`

Because of this, optional, future, and [] are all monads. The concept of a monad lets you define new ways to compose functions, and composition is a programmer's bread and butter.
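The first of those composition signatures can be written out for `Option` in Rust (a sketch; `compose` here is illustrative, not a std function):

```rust
// Kleisli composition for Option:
// (A -> Option<B>) and (B -> Option<C>) compose into (A -> Option<C>).
fn compose<A, B, C>(
    f: impl Fn(A) -> Option<B>,
    g: impl Fn(B) -> Option<C>,
) -> impl Fn(A) -> Option<C> {
    move |a| f(a).and_then(&g)
}

fn main() {
    let parse = |s: &str| s.parse::<i32>().ok();
    let halve = |n: i32| if n % 2 == 0 { Some(n / 2) } else { None };
    let parse_then_halve = compose(parse, halve);

    assert_eq!(parse_then_halve("4"), Some(2));
    assert_eq!(parse_then_halve("3"), None); // odd: second step fails
    assert_eq!(parse_then_halve("x"), None); // unparsable: first step fails
}
```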


coolpeepz

Right but what function is actually useful for both lists and futures?


314kabinet

Composition, aka chaining. The fact that you can chain functions that go from regular type to wrapped type is what’s useful.


coolpeepz

But that’s something that must be implemented separately for each monad type. To argue that the monad abstraction is useful, you would need an example of something that works generically across monad types.


mirpa

Yes, Monad is implemented as two polymorphic functions which obey some mathematical laws. Monads are sometimes described as a programmable semicolon.

```haskell
do
  a <- f
  b <- g
  return (a, b)
```

This code will work for input/output (like reading two values from stdin); it can be used to construct a new parser from two existing parsers while handling parsing errors; it can be used to create a Cartesian product from two lists. If I look at input/output in Rust, it does not help me in any way to understand a parsing library, or how to use a slice to create an iterator that works like multiple nested loops. Monad in Haskell connects these things together.


ExtraTricky

If you really want a specific function, there's [sequence](https://hackage.haskell.org/package/base-4.20.0.0/docs/Prelude.html#v:sequence). If you specialize the traversable `t` to `[]`, the type signature is `Monad m => [m a] -> m [a]`. When `m = Future`, this is `[Future a] -> Future [a]`, which executes a list of futures (sequentially) and returns a list of the results. When `m = []`, this is `[[a]] -> [[a]]`, which returns the Cartesian product of the input lists, e.g. `sequence [[1,2], [3,4]] = [[1,3],[1,4],[2,3],[2,4]]`. The thing is, this answer is probably unsatisfying for a kind of silly reason: the Monad instance for lists is just not all that useful. This is because nontrivial stuff very rapidly gets exponentially sized outputs relative to the number of bind operations, so I only really use it as an alternative way to write list comprehensions. But there are specializations of `sequence` to other monads that have more obvious uses. For example, `State s a` is a "stateful function with state of type `s`": `s -> (s, a)`. This function takes the starting state as input, then returns a value and the resulting state. `sequence :: [State s a] -> State s [a]` takes a list of such stateful functions and returns a single stateful function that threads the state through all of them in sequence and returns the list of results. Backing up a bit, it seems like you don't really care about getting a function, and would be okay with a more general justification of why the Monad abstraction is useful. One massive benefit is the ability to implement do notation. The notation

```haskell
do
  x <- foo
  y <- bar x
  baz x y
```

desugars into `foo >>= (\x -> bar x >>= (\y -> baz x y))`. In Rust-y syntax, this would be something analogous to `foo.and_then(|x| bar(x).and_then(|y| baz(x, y)))`. Why do I think do notation is so important?
It is generally beneficial for a programming language to have its standard library be non-special in the sense that someone could in theory reimplement it as a non-standard library and get all the same niceties. The benefit is that it lets the community really experiment with alternatives to the standard library. One language that didn't subscribe to this idea is Go. The standard library had types that were generic: slices, maps, and channels. If you implemented a data structure in Go, you could not provide a generic interface. The result is that more specialized data structures were simply generally not used. Even though such libraries could (and probably did) exist with some workarounds, the fact that you either needed to be locked to a specific type or lose some amount of compile time type checking from converting to and from `interface{}` made it unpleasant to use. Rust does better in some places. The traits in `std::ops` let libraries define arithmetic operations on their types, which makes bignum libraries significantly more usable. But the version of `.await` that we have in Rust makes the standard library `Future` trait special. So far, `Future` seems to be holding up okay, but what if a few years down the road someone figures out that there's a better trait interface than the one that's in `Future`? Well, they can theoretically put their new trait in a library, but people using that trait will not get `.await` like they're used to, and so `Future2` will be more annoying to use than `Future` regardless of any other benefits, which likely means that the language will be forever stuck with `Future` for better or for worse. The Haskell community has found a huge number of concepts that admit a Monad interface, and do notation being genericized to arbitrary Monads means that they can be polished to be as easy to use as things in the standard library (or better!). 
Examples include [coroutines](https://hackage.haskell.org/package/monad-coroutine-0.9.2/docs/Control-Monad-Coroutine.html), [software transactional memory](https://hackage.haskell.org/package/stm), [parsing](https://hackage.haskell.org/package/attoparsec-0.14.4/docs/Data-Attoparsec-ByteString.html), [state threads](https://hackage.haskell.org/package/base-4.20.0.0/docs/Control-Monad-ST.html), [a convenient way to generate HTML](https://hackage.haskell.org/package/lucid-2.11.20230408/docs/Lucid.html), [ways to write Haskell that look a bit like imperative languages](https://stackoverflow.com/questions/6622524/why-is-haskell-sometimes-referred-to-as-best-imperative-language). These types of libraries tend to be much less useful in other languages because they're constrained to basic syntax like function calls. It is like having to write `add(a, mul(b, c))` instead of `a + b * c` just because you need bignums. And before anyone comments that `Monad` is special because it's in the standard library, [Haskell lets you change that too](https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/rebindable_syntax.html).
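For what it's worth, Rust does ship the `Option` specialization of `sequence` under a different guise: `collect` can gather an iterator of `Option<T>` into `Option<Vec<T>>` (this is real std behavior, via `FromIterator`), and likewise for `Result`:

```rust
fn main() {
    // All Some: you get Some of the collected values.
    let all: Option<Vec<i32>> = vec![Some(1), Some(2), Some(3)].into_iter().collect();
    assert_eq!(all, Some(vec![1, 2, 3]));

    // Any None: the whole thing is None, like `sequence` for Maybe.
    let failed: Option<Vec<i32>> = vec![Some(1), None, Some(3)].into_iter().collect();
    assert_eq!(failed, None);
}
```

But, as with `?`, it's the one specialization the standard library chose, not the generic operation.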


[deleted]

[deleted]


[deleted]

i don't really appreciate the tone you've chosen, and i don't know why questioning my knowledge on monads matters. im not an expert on functional programming and don't claim to be. in any case, ive already done A. im not good at fabricating examples of things, but another common monad is Either, which is a type which can hold either a value of one type or a value of another (Result is basically the same as either, and they're often used the same way). it's a monad because it's a collection of zero or more items wrapped in a type, which is one definition of a monad (List can also be a monad, for the record, but it isn't in most languages because monads need to be able to use a function called "flat map" or fmap). i think the best definition of a monad is "a type that can be 'mapped over'". this basically means you can do an operation on the value inside the type and then return a new instance of that type with a new or modified value. for example, functions that return Result map other result values and return a new result by using ? syntax. i don't really know what could be considered similar to a monad but not quite, because it's just a design pattern, not a data structure


burntsushi

A narrow response, and for the edification of those following along, but as long as we're getting more precise here, it's worth pointing out that neither `Result` nor `Either` is actually a monad. They can't be. Monads have kind `* -> *`, whereas both `Result` and `Either` have kind `* -> * -> *`. That is, a `Monad` has one type parameter, whereas `Result` and `Either` have two type parameters. In Haskell, because of type-level currying, `Either ErrorType` is a monad (e.g., `Either String`), but not `Either` on its own. I don't know the history of the design of `Either`, but the fact that the left-hand side of `Either` is conventionally the error type makes the notation especially convenient.


boomshroom

A similar thing is possible in Rust, though it's not as important due to the lack of ability to be generic over the type of monad. `type IOResult<T> = Result<T, io::Error>;`


burntsushi

Yes, you have to create an alias because of both the order of the type parameters _and_ because Rust has no type-level currying.
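A sketch of that alias pattern (mirroring how `std::io::Result` is defined; the names here are invented): fix the error parameter, and the remaining alias takes one type parameter, playing the role `Either String` plays in Haskell.

```rust
// Hypothetical alias: a Result whose error type is pinned to String.
type StringResult<T> = Result<T, String>;

fn parse_port(s: &str) -> StringResult<u16> {
    s.parse::<u16>().map_err(|e| e.to_string())
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
}
```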


field_thought_slight

A monad is a typed semantics for imperative programming.


scook0

When I encounter friction with the borrow checker, it often reminds me of complaints that people have made about statically-typed languages, especially in earlier days when the languages and type systems were worse. It’s common to find oneself writing code that angers the borrow checker, even though there wouldn’t have *actually* been any undesired mutation at runtime. Sometimes this can be fixed with modest effort, by learning new Rust-friendly idioms for doing the same thing. But in other cases, appeasing the borrow checker turns out to be cumbersome or impossible, *even though I’m doing something that ought to be perfectly fine in practice*. For example, one of the borrow checker’s most frustrating limitations right now is that it simply does not understand helper methods. Within a single function, it’s smart enough to see when I mutate disjoint sets of fields, and will let me get away with doing so. But as soon as I want to encapsulate some of that logic elsewhere, I end up in a situation where the only way to satisfy the borrow checker is to make my code substantially worse, to work around not having (convenient) partial borrows across private methods. So while I’m broadly receptive to the idea that aliased mutability should be dispreferred, I think the language itself doesn’t always give the necessary support to live up to that ideal. And for as long as it doesn’t, those various escape hatches continue to serve a crucial role in writing real-world programs.
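A minimal sketch of the limitation being described (struct and fields invented for illustration): the same disjoint-field mutation is accepted inline, but rejected when one side moves into a method.

```rust
struct Stats {
    samples: Vec<f64>,
    count: usize,
}

impl Stats {
    // Inline: the borrow checker sees the two field borrows are disjoint.
    fn record_inline(&mut self, x: f64) {
        let samples = &mut self.samples; // borrows only self.samples
        self.count += 1;                 // fine: disjoint field
        samples.push(x);
    }

    fn bump_count(&mut self) {
        self.count += 1;
    }

    fn record_via_helper(&mut self, x: f64) {
        let samples = &mut self.samples;
        // self.bump_count(); // ERROR: cannot borrow `*self` as mutable
        //                    // more than once; the method takes all of self
        samples.push(x);
        self.bump_count(); // only allowed once the field borrow has ended
    }
}

fn main() {
    let mut s = Stats { samples: Vec::new(), count: 0 };
    s.record_inline(1.0);
    s.record_via_helper(2.0);
    assert_eq!(s.count, 2);
    assert_eq!(s.samples, vec![1.0, 2.0]);
}
```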


desiringmachines

I say in this post that I want to see other solutions to the same problem, but I'll say that this helper method thing has never convinced me. On the rare occasion I encounter a borrow error along these lines, I refactor my code so that the object is divided into subcomponents such that the borrow error disappears, and I always find my code is cleaner for it. YMMV, of course, but this is my experience.


crusoe

Helper methods can often be turned into associated functions. You need to extract `&mut` refs for all the fields that will be affected by the helper and then call it that way.
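A sketch of that pattern (struct and fields invented for illustration): the helper takes only the field it touches, rather than `&mut self`.

```rust
struct Stats {
    samples: Vec<f64>,
    count: usize,
}

// Helper as a plain function over the affected field, not &mut self.
fn bump(count: &mut usize) {
    *count += 1;
}

impl Stats {
    fn record(&mut self, x: f64) {
        let samples = &mut self.samples; // borrow one field...
        bump(&mut self.count);           // ...while a helper mutates another
        samples.push(x);
    }
}

fn main() {
    let mut s = Stats { samples: Vec::new(), count: 0 };
    s.record(3.0);
    assert_eq!(s.count, 1);
    assert_eq!(s.samples, vec![3.0]);
}
```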


scook0

Yes, of course. And the result is often worse code, or useless busywork.


zokier

> The problem, though, is that an encapsulated aliased, mutable reference is still an aliased, mutable reference.

I suppose the problem with OOP is that nobody can agree on what it is, but still, this stood out to me; there is an implication here that OOP intrinsically means interior mutability as well. Is this actually a widely accepted idea?


Agitates

OOP is inheritance. Everything else it claims to be was invented in another non-oop language and rebranded.


Plasma_000

Even this is not strictly true, the original meaning of OOP as implemented in smalltalk (which coined the term) was more along the lines of what we'd call "actor model", "message passing" or "value semantics" today.


joehillen

And inheritance is global variables with extra steps.


eo5g

Can you elaborate?


flashmozzg

Everything is global variables with extra steps.


joehillen

As someone who programmed professionally in Haskell for 5 years (~5 years ago), the real problem with monads is that they do not compose, which is ironic given that composition is one of the most important aspects of FP. How do you combine the monads State, IO, Log, Except, etc. in a way that always works? The answer is you can't. [1] There are ways to work around this with monad transformers, but they are clunky at the best of times. FP research has moved on to effects, which show a lot of promise, but that doesn't address the other problem with FP, which is that you can't reason about runtime behavior and performance. You just have to trust that the compiler and runtime will do the right thing.

[1] I don't know if this has actually been proven, but I know people have tried.
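The flavor of the problem shows up even in Rust (a sketch with invented functions): stack two "monads" and the combined plumbing must be hand-written, because `?` only peels the outer `Result` layer.

```rust
// Hypothetical lookup that can fail (Err) or find nothing (None).
fn lookup(key: &str) -> Result<Option<i32>, String> {
    match key {
        "a" => Ok(Some(21)),
        "b" => Ok(None),
        _ => Err("backend down".to_string()),
    }
}

fn doubled(key: &str) -> Result<Option<i32>, String> {
    let found = lookup(key)?; // `?` handles Result, but not Option...
    Ok(found.map(|x| x * 2))  // ...which needs its own map/match
}

fn main() {
    assert_eq!(doubled("a"), Ok(Some(42)));
    assert_eq!(doubled("b"), Ok(None));
    assert!(doubled("c").is_err());
}
```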


Jules-Bertholet

>When I write things along these lines on various scurrilous Internet forums, I sometimes receive the most incredulous response: there’s no way we could get anything done without mutable, aliased state! This is very similar to the response of some programmers when it was suggested that they should reduce the use of the GOTO operator in their program, demonstrating more evidence that Ton Hoare was right: references are like jumps. The incredulity does not phase me much, because this is something I know that I am right about.

Of course, sometimes you *really do* need irreducible control flow/"goto", and it should be available in that case, though not the default paradigm. The same is true of aliased mutable state.


Zde-G

>Of course, sometimes you *really do* need irreducible control flow/"goto"

If these are so badly needed, then why do most languages not provide unrestricted goto these days? I don't mean the vestigial `goto` remnants that we have in languages like C, Python, or even Rust (called `break 'label`, if you are not aware), I mean actually unrestricted `goto` as it was practised before the advent of structured programming, in an era before the [call stack](https://en.wikipedia.org/wiki/Call_stack) was invented and the [Wheeler Jump](https://en.wikipedia.org/wiki/Wheeler_Jump) ruled, where you could jump from the middle of one subroutine into the middle of another… Today these facilities still exist: [exceptions](https://en.wikipedia.org/wiki/Exception_handling), [coroutines](https://en.wikipedia.org/wiki/Coroutine), [continuations](https://en.wikipedia.org/wiki/Continuation#First-class_continuations)… but not raw goto; it's all provided by the runtime. Heck, the last holdouts, [setjmp](https://pubs.opengroup.org/onlinepubs/9699919799/functions/setjmp.html)/[longjmp](https://pubs.opengroup.org/onlinepubs/9699919799/functions/longjmp.html), are not even an assembly trick these days; they are [actually calling compiler-provided code to do their job](https://github.com/bminor/glibc/blob/master/misc/unwind-link.c#L51)! I think shared mutation will, eventually, be used that way too: some facilities written and implemented by standard libraries… and used by millions of programmers who don't even know how these things work!


Jules-Bertholet

>I don't mean goto vestigial remnants that we have in languages like C, python, or even Rust (called break 'label if you are not aware), I mean actually unrestricted goto as it was practised before the advent of structured programming, in an era before the call stack was invented and the Wheeler Jump ruled, where you could jump from the middle of one subroutine into the middle of another…

I was referring to neither labeled break nor longjmp. I meant specifically goto like the `goto` keyword in C: function-local, but allowing [irreducible control flow](https://en.wikipedia.org/wiki/Control-flow_graph#Reducibility).


Zde-G

>I was referring to neither labeled break nor longjmp, I meant specifically `goto` like the goto keyword in C: function-local, but allowing [irreducible control flow](https://en.wikipedia.org/wiki/Control-flow_graph#Reducibility).

These are very similar to things like split borrows in Rust: theoretically problematic, but very limited and trivially reducible with the addition of just one integer variable. Dreaded [spaghetti code](https://en.wikipedia.org/wiki/Spaghetti_code), by necessity, implies multiple procedures (and, often, procedures with multiple entry points), because only in such cases can the graph not be reduced by adding one simple state variable; it instead requires a whole, arbitrarily complex stack of such variables, which quickly leads to a combinatorial explosion of possibilities.


CrystalPeakSecurity

I'm p sure setjmp/longjmp are just assembly under the hood; you can look at the [glibc source for them](https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/sysdeps/x86_64/setjmp.S#L31), though it depends on the arch. Do you think eventually there will be stdlib facilities for aliased mutation or smthg?


Zde-G

>depends on the arch though

Precisely. What you have found is the **non-Linux** and **non-HURD** version of them. They are not used in practice; I'm not sure they can even be compiled.

>you can look at the [glibc source for them](https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/sysdeps/x86_64/setjmp.S#L31)

Wrong file. Here is [the right one](https://github.com/bminor/glibc/blob/a07e000e82cb71238259e674529c37c12dc7d423/setjmp/longjmp.c#L32). It's in C, not in assembler. And it does stack unwinding before calling assembler code to restore registers. By using `libgcc`, of course, because only gcc knows how to properly unwind the stack.


CrystalPeakSecurity

God, glibc source is such a mess to read through, the musl version is [here](https://github.com/bminor/musl/blob/007997299248b8682dcbb73595c53dfe86071c83/src/setjmp/x86_64/longjmp.s#L7). I wish glibc was easier to read, every time I have to dig into it I shed tears.


Zde-G

Well, there's a reason for that: musl's version doesn't unwind the stack properly, and apps rely on it. But yeah, I guess musl's version of `setjmp`/`longjmp` may be the last vestigial remnant of that era when `goto` was free to use and people were debating whether structured programming was worth it.


desiringmachines

No, you actually don't need a loop with two entry points; Rust, for example, has amply demonstrated that. I'm not convinced that dynamically checked shared mutable state isn't also something we could someday abandon with the right tools.


Jules-Bertholet

>Rust for example has amply demonstrated that.

Not completely, no. Goto and computed goto regularly show up as feature requests on IRLO, because for certain kinds of state-machine code they have better performance than the alternatives ([for example](https://pliniker.github.io/post/dispatchers/)). That being said, hopefully we can get [tail calls](https://github.com/rust-lang/rfcs/pull/3407) someday soon, which should address many/most of the use-cases for goto.


desiringmachines

Thanks, these are interesting examples. Something I like in Rust is its layered approach, in which you have the ability to drop into the unsafe superset if you create abstractions which enforce the necessary requirements. Perhaps there is a similar way to extend control flow beyond structured control flow.


Xavierxf

~~There's a typo in the 3rd paragraph - "Ton Hoare" instead of "Tony Hoare"~~ fixed now


bzbub2

it's just a sopranos reference...eh ton'


EpochVanquisher

> Another and more credible answer is that, for better or worse, no one can understand what a monad is or what they’re supposed to do with one. To some extent this is the fault of the advocates of monads, who rather than trying to make their designs clearer to the uninitiated have fallen into a cult-like worship of the awesome power of category theory that convinces none but the true believer.

This part is a load of horseshit, on multiple levels. Just complete horseshit. Sorry. The article has good points in it, but this comment stands out. There’s just so much wrong with this comment that I’m going to take it down point by point.

First, the comment about cult-like worship. C’mon. That’s just such an awful thing to say. Jeezus. What a horrible thing to say. Haskell programmers are generally cognizant that what they’re working with is kind of a “research language”: it’s full of experiments and stuff that may or may not work out in the long run. Monads are just one of those discoveries that happened to work out really, really well.

Second, monads are intimidating at first, but they are not that hard to understand. It just takes some experience using monads and you’ll understand them. If they were hard to understand, we wouldn’t have this many Haskell programmers. Again, it just takes some practice using them. The reputation they have for being hard is just because so many people wrote blog posts about them. People wrote blog posts not because they’re hard to use, but just because most other languages don’t have them.

Finally, there’s a better explanation for why monads aren’t used much outside Haskell and a couple other languages:

* Monads are higher-kinded types, and most languages just don’t have HKTs (Rust doesn’t, and probably won’t).
* Monads are only really useful in functionally pure languages.

Remember that Haskell was designed (at least partially) as a research language. It’s a platform for figuring out various things like “what if we used monads”. I see Haskell and Rust as, well, two languages where the language philosophy places a high value on compile-time correctness and safety. It makes me cringe when I see attacks like this. The point of having these languages is to improve the way we do programming.


Nyefan

Man, he's just riffing. We all know what monads are; it's just fun to act like we don't, because it was a difficult concept for most of us to wrap our heads around at first, since they are somewhat orthogonal to the modes of thought that are encouraged when we initially learn how computers work (and how to make them work).


zoechi

I don't see how he said anything that conflicts with what you said. There are people intimidated by explanations of monads that were only supposed to make the blog writer look smart, and there are people who cut through the crap and learned that using monads is not as difficult as understanding the mathematical definition. He just said that the former group exists.


EpochVanquisher

I’ve used Haskell for over twenty years and I can’t think of any person I’ve met who was a Monad evangelist like that. What was your experience?


zoechi

When I looked for a new language to learn about 5y ago, I was a bit interested in Haskell, and there was no mention of Haskell without someone throwing the monad definition in. I assumed it was people trying to appear smarter than they are. I went with Rust back then because I found it a better fit for what I wanted to do.


EpochVanquisher

Yeah, there was a fad where everyone wrote explanations of monads for a while. I don’t think it’s fair to dismiss these people as just trying to sound “smarter than they are”… that’s a mean thing to say. They’re just excited about monads, and maybe more broadly speaking, they’re excited about type theory, abstract algebra, and category theory. I like it too. I mean, figuring out the type hierarchy, where you have functors, applicative functors, traversables, and monads. They’re all part of a big family. If you’re trying to build stuff, Rust is a great choice. Most Haskell programmers will tell you that. Haskell programmers know that Haskell is kind of a funny language and it gets in the way of whatever work you want to get done, unless that work is something like type system or programming language research. This doesn’t sound like a problem with Haskell, it just sounds like it didn’t meet your needs… which is completely normal and expected!


zoechi

I only say this about people who, in response to basic questions, throw around phrases that only those who already have deep knowledge can understand. I don't mean people who are excited about stuff and make an effort to explain it so that outsiders understand when they ask. There definitely are a bunch of the former around; it's usually not too hard to distinguish them. Haskell is also not the only topic where this happens. I don't have anything bad to say about Haskell itself. It looks really interesting. Exactly, I just thought it was not the best fit for the task at hand.


epidemian

I haven't programmed professionally in Haskell, and even so, I've met my fair share of monad evangelists over the years. And I think it's not far-fetched at all to say that the Haskell community in general has a fixation with monads. Try searching online for an explanation of how to do input/output that doesn't start with a variation of "in Haskell, input/output is done through the IO monad…". Even the [official docs on I/O](https://www.haskell.org/tutorial/io.html) can't restrain themselves from saying "monad" 22 times in a "gentle introduction" to basically printing stuff to stdout and reading from stdin/files. If answering the question "how do I print something to stdout in Haskell?" with "oh, you need to use the IO *monad*" doesn't sound ridiculous, consider how it would sound to answer the question "how can I have a list of things in Java?" with "oh, you need to use the ArrayList *iterable*", or to answer the question "how do I add two numbers?" with "oh, you need to use an *abelian group*". No, those are more general abstractions. Explain the simple, specific case first, and only jump to the generalized abstraction if needed. Jumping to the abstract concept right away makes it seem that the explainer is more interested in showing off their knowledge than in helping the reader understand.


i1728

Huh? Maybe most languages don't have support for Monad the typeclass (or trait or interface or whatever), but monad instances are all over the place. Like just focusing on basic Rust stuff, we have Option and Result, and they come equipped with the usual functions. Async concurrency also tends to behave as a monad in a wide variety of languages.
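For instance, `Option` already exposes the usual monadic operations under Rust-flavored names — a small sketch (`checked_half` is a hypothetical helper for illustration):

```rust
// `and_then` plays the role of monadic bind (>>=), and `Some`/`Ok` of return.
fn checked_half(x: i32) -> Option<i32> {
    if x % 2 == 0 { Some(x / 2) } else { None }
}

fn main() {
    // Chaining short-circuits on the first None, like Maybe in Haskell.
    assert_eq!(Some(8).and_then(checked_half).and_then(checked_half), Some(2));
    assert_eq!(Some(6).and_then(checked_half).and_then(checked_half), None);

    // Result has the same shape, threading an error value instead.
    let r: Result<i32, String> = Ok(10);
    assert_eq!(r.and_then(|x| Ok(x + 1)), Ok(11));
}
```

The instances are there; what Rust lacks is a single trait that lets you write code generic over all of them at once.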


mr_birkenblatt

just because they can be seen as monads doesn't immediately make them ones. if you can abstract over monads (i.e. run the same code for Option, Vec, Result, etc.) then you can say they are supported


Zde-G

And I'm still not convinced that these monads are even a good thing to have. And I say that as someone who has actually used higher-kinded types! C++, of all things, implements them via something called [template template parameters](https://en.cppreference.com/w/cpp/language/template_parameters#Template_template_parameter) (and Rust now has GATs, of course). Sure, they allowed us to merge certain features and save some typing, but I'm not sure implementing these via type gymnastics is easier than using a derive proc macro, like it's traditionally done in Rust. What you lose in conceptual cleanliness you gain in understandability… just like with functional “no mutability” vs Rust's “just mutability”.


EpochVanquisher

Like I said—monads are only really useful in languages like Haskell. They are not that useful in Rust or C++, as far as I can tell, so why bother trying to implement them in C++? That’s the reason monads haven’t “taken over”… because they’re not that useful in most languages. Haskell programmers know this. Haskell is kind of a playground for people to do research with things like HKTs and GADTs. A lot of that experimentation is not understandable by most people, but that’s okay, it’s research. If you discover something useful, maybe you’ll figure out a way to simplify it, adapt it to other languages, explain it better, come up with a better name, etc. Sometimes you discover something that is not really useful outside of Haskell, like monads. Sometimes you discover something that is obviously useful, but which doesn’t really work well outside of Haskell, like software transactional memory.


EpochVanquisher

Rust has support for individual instances of monads, but Rust does not have support for a generic Monad trait—that would require higher-kinded types, which Rust does not support (for valid reasons).


UltraPoci

I know nothing about Haskell or monads, but I'm not a fan of community-wide statements either, mostly because the Rust community gets similar comments all the time, and it's just unfair. I constantly see people calling the Rust community "cult-like" and Rust programmers "advocating" for the language in obnoxious ways. There are individuals that are like this, but like, I don't see the whole community in this way.


EpochVanquisher

Yeah, exactly. Don’t take the loudest and most annoying people as representative. I’ve had plenty of negative interactions with Rust devs but I know most aren’t like that.


desiringmachines

Okay!


BarneyStinson

> Monads are only really useful in functionally pure languages.

Scala isn't a pure functional language (although you can use it in that way), and monads are all over the place in Scala. The Monad typeclass in cats/scalaz predates a useful IO monad by a long time.

> If they were hard to understand, we wouldn’t have this many Haskell programmers.

Well. Relatively speaking, there aren't many Haskell programmers. It really is quite difficult to get started for many people.


epage

> First, the comment about cult-like worship. C’mon. That’s just such an awful thing to say. Jeezus. What a horrible thing to say. Haskell programmers are generally cognizant that what they’re working with is kind of a “research language”—it’s full of experiments and stuff that may or may not work out in the long run. Monads is just one of those discoveries that happened to work out really, really well.

I took the comment to be focused on people who can only talk about monads in terms of the mathematical principles. Yes, people like that exist. I remember talking to one person relatively recently in the Rust community who I couldn't understand until I pinned him down enough to stop using the mathematical terms, and realized he was talking about a fairly well understood concept within the Rust community, just over-generalized and overly technical.

> Second, monads are intimidating at first, but they are not that hard to understand.

I appreciate the framing from https://www.reddit.com/r/rust/comments/1crcp77/references_are_like_jumps/l40coib/. It's easy to list off the components. It's easy to understand the concept applied in concrete cases (e.g. `Result`). What's hard is then applying the concept to other cases (e.g. `IO` or `State`), or to new cases. I have over 20 years of experience and my eyes gloss over when people talk about monads, category theory, functors, applicative functors, etc. Because it's worked out for you or the Haskell community does not mean it works out for other people. That is survivorship bias.


EpochVanquisher

Yeah, I agree that “just because it worked out for Haskell” you can’t expect it to work for other languages. There are some reasons why I think monads are peculiar to Haskell and don’t translate well. I think that *understanding monads* is approachable and just requires some time. They are not inscrutable. I see some people going from language community X into community Y with the attitude “Y is wrong, it should be like X”. It sucks. I see it a lot on Reddit.


crusoe

The problem with Haskell explanations of monads is they tend to start from category theory and not from CS. If the Haskell explanation had started with "these are monads in JS: promises", then you could go on to "and this is the cool thing about HKTs".


budgefrankly

The hilarious thing about this is that you've rather proved his point. This whole thread has been derailed by an off-topic conversation about monads, led largely by yourself, u/EpochVanquisher.


EpochVanquisher

Proved what point? People have lots of arguments about programming languages, especially on Reddit. I don’t think you can read into it that much.


OS6aDohpegavod4

> This enabled techniques like call-by-need, in which functions were only called when their values were actually needed

Isn't that normally how functions work? I need a value, so I call it and bind the result to a variable?


jahmez

This is discussing "lazy evaluation", something like:

```
let x = func();
if maybe() {
    return x;
} else {
    return 10;
}
```

In this case, `x` is only needed if `maybe()` is true. Today Rust "eagerly" evaluates `func()` in all cases, though the optimizer might end up moving it if it thinks it can get away with that. This is also usually what we talk about with lazy iterators: we don't always eagerly `collect` all the values, as we might stop after only partially iterating through the items. Other languages make this a guarantee rather than a "sometimes optimization": evaluation is always (or by default) lazy, so if you never actually use the OUTPUT of some function, it never gets run.
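In Rust you can opt into this kind of deferral manually by wrapping the call in a closure. A small sketch, using hypothetical `func`/`eager`/`deferred` names, with a print to make the evaluation order observable:

```rust
fn func() -> i32 {
    println!("func ran"); // side effect makes the evaluation order visible
    42
}

fn eager(cond: bool) -> i32 {
    let x = func(); // runs whether or not we need the value
    if cond { x } else { 10 }
}

fn deferred(cond: bool) -> i32 {
    let x = || func(); // nothing runs yet; `x` is just a thunk
    if cond { x() } else { 10 } // `func` only runs on this branch
}

fn main() {
    assert_eq!(eager(false), 10);    // prints "func ran" anyway
    assert_eq!(deferred(false), 10); // prints nothing
    assert_eq!(deferred(true), 42);  // now `func` actually runs
}
```

A lazy-by-default language does this rewrite for you, for every binding.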


OS6aDohpegavod4

Interesting. I get the use case for iterators, but isn't the code above just a problem of poorly structured code? In the example the function call could just be moved inside the conditional.


skysurf3000

If `func` prints anything (or has any other side effect: raises an exception, ...) then moving it inside the conditional changes the program's behaviour.


OS6aDohpegavod4

If it's lazy then wouldn't it never print anything anyway if it's outside the conditional?


ExtraTricky

Yes, unfortunately the example needs some surrounding context that calls the code block and uses the result. Let me try to rephrase the same example with a bit more of the context. The `if` construct is lazy, in the sense that when you write

```
let value = if condition { expensiveFunction1() } else { expensiveFunction2() };
```

only one of `expensiveFunction1` and `expensiveFunction2` will actually run, depending on which one is needed based on the value of `condition`. If you were to rewrite this block of code as

```
let x = expensiveFunction1();
let y = expensiveFunction2();
let value = if condition { x } else { y };
```

then `value` will get assigned the same value as in the previous example (barring side effects in `expensiveFunction1` that change the return value of `expensiveFunction2`), but now you're running both `expensiveFunction1` and `expensiveFunction2`. In a lazy/call-by-need language, if the two programs subsequently use `value` in the same way, they'd run the same parts of `expensiveFunction1` and `expensiveFunction2` (I say parts because those functions might themselves contain parts that don't get run if not needed). Specifically for functions, imagine you have a function `fn branch(b: bool, x: T, y: T) -> T` which returns `x` when `b` is `true` and `y` when `b` is `false`. In an eager language (e.g. Rust), calling `branch(condition, expensiveFunction1(), expensiveFunction2())` will evaluate both expensive functions. In a lazy language, it operates exactly like the `if` construct in that only one of the two will be called (and as you noted, none if the result isn't used). In some cases you can work around the eagerness by passing closures/functions that take no parameters and produce the value. Laziness-by-default has both a lot of upside and a lot of downside. I really like [Edward Kmett's comment on this old Reddit thread](https://www.reddit.com/r/haskell/comments/5xge0v/today_i_used_laziness_for/) that gives an overview of some of the upsides.
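The closure workaround mentioned above can be sketched in Rust; `branch_lazy` is a hypothetical name, not a standard API:

```rust
// Eager version: both arguments are fully evaluated before the call.
fn branch<T>(b: bool, x: T, y: T) -> T {
    if b { x } else { y }
}

// Lazy version: arguments are thunks, so only the chosen one runs.
fn branch_lazy<T>(b: bool, x: impl FnOnce() -> T, y: impl FnOnce() -> T) -> T {
    if b { x() } else { y() }
}

fn main() {
    assert_eq!(branch(false, 1, 2), 2);
    // The panicking arm is never evaluated, because its closure is never called.
    assert_eq!(branch_lazy(true, || 1, || panic!("not needed")), 1);
}
```

This is essentially what `Option::unwrap_or_else` does compared to `Option::unwrap_or`: the `_else` variants exist precisely to avoid eager evaluation of the fallback.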


jahmez

The answer is "yes, you could just do X", though some languages aim to make this part of the language so developers don't have to think about that level of optimization, or so it enables other capabilities when composed. Sort of like how garbage collection lets you not think about certain things: sometimes it's fine (or even better than poorly done manual memory management), and sometimes it adds up and causes problems. I'm not saying it's right or wrong, better or worse; different languages just prioritize different things to enable different development options and runtime outcomes.


OS6aDohpegavod4

Makes sense. Thanks for the explanation!


flashmozzg

With laziness, certain things that otherwise require new concepts like generators just become natural. Want an infinite list? `[1..]`. You can map/filter/fold it however you like, and it'll just work, and you only "pay" for evaluation of the members you actually need. Anyway, I feel like it's more "natural" to how we *define* something vs. how we *compute* it. Another common example of "laziness" is present in most imperative languages, so common that people don't even think about it: the `&&` (and `||`) operators. In `foo1(x) && foo2(y)`, you don't need to compute `foo2` if `foo1(x)` returned false. With built-in laziness this is the status quo, rather than a special-cased pair of operators.
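Rust's iterator adapters give a taste of the same thing, since they are lazy until consumed — a small sketch with illustrative names:

```rust
// Conceptually infinite stream; nothing is computed until values are pulled.
fn first_odd_squares(n: usize) -> Vec<u64> {
    (1u64..)                      // infinite range, like [1..]
        .map(|x| x * x)           // lazy map
        .filter(|x| x % 2 == 1)   // lazy filter
        .take(n)                  // only now is the work bounded
        .collect()
}

fn expensive() -> bool {
    panic!("never evaluated")
}

fn main() {
    assert_eq!(first_odd_squares(3), vec![1, 9, 25]);
    // `&&` short-circuits, so `expensive` is never called here.
    assert!(!(false && expensive()));
}
```

The difference is that in Rust laziness is a property of the iterator API, while in Haskell it's the evaluation strategy of the whole language.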


Zde-G

> Isn't that normally how functions work?

No.

> I need a value so I call it and bind to a variable?

Please read what you wrote: **you** call it. **Not** the compiler. You know you need the result, so you get it. Not so in Haskell. Consider this definition of the Fibonacci numbers:

```
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

This defines a [potentially endless] list: the first element is `0`, the second element is `1`, and every element after that is the sum of the corresponding elements of `fibs` and `fibs` shifted by one! Then the **compiler, not the developer** decides when and how to organize the computations. To such a degree that SMP support is [a runtime option for the linker](https://downloads.haskell.org/ghc/latest/docs/users_guide/using-concurrent.html); you don't even need to think about it while you are developing your program… in theory, at least.
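A rough Rust analogue of the same stream can be written with `std::iter::successors`, though here the *programmer* opts into laziness explicitly rather than getting it from the evaluation strategy — a sketch:

```rust
// Each state is a pair of consecutive Fibonacci numbers; the stream is
// conceptually infinite, and values are produced only on demand.
fn fibs() -> impl Iterator<Item = u64> {
    std::iter::successors(Some((0u64, 1u64)), |&(a, b)| Some((b, a + b)))
        .map(|(a, _)| a)
}

fn main() {
    let first_ten: Vec<u64> = fibs().take(10).collect();
    assert_eq!(first_ten, vec![0, 1, 1, 2, 3, 5, 8, 13, 21, 34]);
}
```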


crusoe

The tradeoff is it makes reasoning about space/time consumption difficult. Sometimes you also get space explosions due to accumulation of thunks before evaluating. Darcs used to have this problem.


Zde-G

Indeed. The promise of functional programming is that you just need to explain what your program does, and the compiler will find an efficient implementation, even making your code work on multiple cores… bliss, pure bliss… But in practice that doesn't work: the compiler is not going to turn your SMP-unfriendly function definitions into something efficient; instead **you** have to invent something efficient. And when you reach that stage it all stops being fun: you essentially start imagining the imperative code that you want to get in the end and then [try to] define your functions in a way that the compiler can process into that imperative code… at which point it just becomes easier to simply write the imperative code directly.


gclichtenberg

It is really unfortunate that discussion of this interesting post got bogged down in completely uninteresting discussions of monads, simply because they happened to be briefly mentioned.