Miranda released as free software (kent.ac.uk)
196 points by kick on Feb 28, 2020 | 121 comments



You've at least heard of (if not used) a programming language heavily inspired by Miranda: Haskell. A lot of things about Miranda were really interesting; it reflects modern languages in some ways, but looks completely foreign in other ways. Like Haskell, whitespace was significant; unlike Haskell, it was fast. Really fast.

It was one of the pioneering purely-functional languages, but seems to have been mostly forgotten. That's unfortunate, because in many ways it is still superior to its successors. Its owners' fault for keeping it proprietary for so long, really.


You've made several comments here suggesting that GHC - pretty widely considered an extremely powerful compiler - is inferior to what Miranda offers. Can you substantiate your claim that Miranda is faster than Haskell/GHC?

FWIW, I just tried doing a very quick dumb benchmark - the classic functional pseudo-quicksort. GHC spit out a binary that can sort 10,000,000 numbers in a few seconds on my laptop. Miranda (after cranking up the heap size a couple orders of magnitude) took noticeably more time to sort only 1,000,000.
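For readers who haven't seen it, the "functional pseudo-quicksort" builds fresh lists at every recursion step instead of partitioning in place, which is why it makes a poor benchmark of raw speed. A rough sketch of the same algorithm in Python (the function name is mine):

```python
# List-based "pseudo-quicksort": allocates new lists at every step,
# mirroring the classic two-line Haskell/Miranda version.
def pseudo_qsort(xs):
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (pseudo_qsort([a for a in rest if a < pivot])
            + [pivot]
            + pseudo_qsort([a for a in rest if a >= pivot]))
```

Every call copies and concatenates lists, so the constant factors are enormous compared to an in-place array sort, regardless of the language.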


Oh wow, Miranda is still using an SK combinator reduction evaluator dating from 1986, in K&R C. It isn't even using supercombinators, which I read about in https://www.microsoft.com/en-us/research/publication/the-imp...

Combinators are fun, but not very fast :-) https://dl.acm.org/doi/10.1145/800087.802798 http://ioccc.org/years.html#1998_fanf
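The SK evaluation model is easy to demonstrate in miniature, even though a real combinator reducer (let alone Miranda's) is far more involved. A toy head-reducer, with a term representation invented purely for this sketch:

```python
# Toy SK-combinator head-reducer. Terms are 'S', 'K', or application
# pairs (f, x). Rules: K x y -> x, and S x y z -> x z (y z).
def reduce_sk(term, fuel=1000):
    while fuel:
        fuel -= 1
        # ((K, x), y) -> x
        if (isinstance(term, tuple) and isinstance(term[0], tuple)
                and term[0][0] == 'K'):
            term = term[0][1]
            continue
        # (((S, x), y), z) -> ((x, z), (y, z))
        if (isinstance(term, tuple) and isinstance(term[0], tuple)
                and isinstance(term[0][0], tuple) and term[0][0][0] == 'S'):
            x, y, z = term[0][0][1], term[0][1], term[1]
            term = ((x, z), (y, z))
            continue
        return term  # no redex at the head: done
    return term

# S K K behaves as the identity combinator:
identity = (('S', 'K'), 'K')
```

Reducing `(identity, 'a')` steps through `S K K a -> K a (K a) -> a`, which is the graph-rewriting flavour of evaluation the papers above discuss.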


I promise I'll respond to this in the morning; I'm very tired right now and don't want to give a good and reasonable question an answer that hasn't been thought out well, especially given that my responses have been getting less detailed and less coherent as I've gotten more tired.


Just for comparison, C++ running the shortest, simplest possible quicksort does 100,000,000 32-bit ints in 8s. Three more lines, and it's 3.4s. That's 1.3ns each time it looks at an element.

So when you are already giving up an order of magnitude or two, another doesn't matter so much. It pays then to steer the conversation toward other merits.


> C++ and the shortest, simplest possible quicksort

wk_end was talking about "functional pseudo-quicksort" on immutable linked lists. You're presumably talking about a proper quicksort on mutable arrays. That's comparing apples to oranges.

And even if you weren't, how is that an argument? Person A says "X is faster than Y", then person B says "Actually I just ran X and it took 10x longer than Y" and then you say "Well Z, which isn't even part of this discussion, is 1000x faster than both X and Y, so your argument is invalid". How does that follow?


It means that speed is not a very interesting basis for comparing them. If you cared much about speed, you would look elsewhere.


Again I want to emphasize that you were comparing different algorithms on different data structures. It's like someone made a benchmark using the naive recursive Fibonacci definition and then you implemented the iterative version in another language and concluded from that that the other language must be much faster. The different algorithm is what gave you (most of) the speed up, not the language.

I mean, I don't doubt that C++ is in fact faster than Haskell, just not by that much.
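To spell the analogy out: the naive and iterative Fibonacci functions compute the same values, but the algorithmic difference dwarfs any language difference. A quick Python illustration:

```python
# Naive recursive Fibonacci: exponential time. The algorithm, not the
# language it's written in, is what makes this slow.
def fib_naive(n):
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Iterative Fibonacci: linear time, identical results.
def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Benchmarking `fib_naive` in one language against `fib_iter` in another tells you about the algorithms, not the languages, which is exactly the apples-to-oranges complaint above.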


Their criticism is fair: I have been making a performance argument. I do think they've misinterpreted my performance argument, though, which I'll get to. (There's a lot of backlog I need to catch up with; this thread is very large and my typing speed is significantly faster than the speed at which I can think.)


I learned about Miranda from the appendix of "The Implementation of Functional Programming Languages" 1987 book by Simon Peyton Jones (IIRC one can find a PDF copy online). I liked it a lot though I had no access to its implementation. Glad to know it is now available. Its implementation is small by today's standards!


Ditto!


The language was such a pleasure to use in university in the early 90’s. Those were the days where you paid for compilers. My prof was grumbling about the licensing fees for Miranda though.


Yeah, it was a bit ridiculous. Incredibly good software compared to what we've got now, though.


How come it was so much faster? Smaller semantics allowing for more optimizations, or some obscure reason?


It isn't, it is much slower. Download them both and try a few tests for yourself, GHC produces much faster binaries.


Short: Haskell made a bunch of poor design decisions that they've since doubled down on, the fastest implementation isn't very good, the art of writing compilers and interpreters has effectively been lost, and Miranda was designed and written by professionals.


"the art of writing compilers and interpreters has effectively been lost"

I'm curious, how does that happen? Were the techniques used in compilers and interpreters of yore never recorded for posterity in academic papers or technical documentation?


Spent a good while thinking on your question:

The best stuff was all proprietary for decades. Even now, we only got Miranda's source code weeks ago, and Miranda was written in the 1980s! It's the same for a lot of languages; Nial's the example I usually bring up for one that was freed far too late. The people with the domain knowledge that would have helped deal with the complexity of modern systems have all either retired, ditched PL design, or died.

Turbo Pascal, too, although that one's kind of strange because the author of it later went on to do a complete 180° turn while still working in PL design and implementation (plus it was never freed).

Ken Thompson's cc suite for Plan 9 was behind a million-dollar license fee until a decade ago, and it didn't even target any modern CPUs. He once quit the industry to become a flight instructor; he's at Google, but he's retired now.

The best advice is "Stay small!" which most Free Software compilers seem to be violently against. It's not their fault, though: it's hard not to take new code when it's offered to you!

We'll probably never see the source code behind any version of the k interpreter by Arthur Whitney, and trying to get an interview with him is like going on a snipe hunt, so good luck figuring out anything about his approach besides "Small!" It doesn't help that Kx sues the hell out of anyone who's actually seen the k source and tries implementing anything remotely like it that's not a toy.

Every modern compiler targets multiple architectures in convoluted ways. This is generally a terrible idea, especially with how divergent CPUs are getting these days, even between chips that share an instruction set.

Intel x86_64 processors speculate almost as aggressively as Transmeta's chips did back in the day, but we're still treating them and AMD chips more or less the same, and we're using the same compilers on that instruction set as we use for RISC-V and ARM and Itanium and obscure 16-bit CPUs and so on.

Portable compilers aren't bad in principle, but when you look at pcc compared to GCC you'll see exactly where we went wrong: pcc wasn't very optimized, it was simple, and it was understandable. Great for bootstrapping. Not great for getting the most out of your CPU. Having portable compilers that try to heavily optimize is a mistake.

No one seems to know what a CPU cache is!

And how many compilers do you know that compile to C, or to LLVM bytecode or similar? It's insane: nobody should be doing that! That approach doesn't make much sense!

And the only argument any of these people can make is "But our language is too big for it to be practical to write something unique for each architecture! Piggybacking makes it way quicker!"

No one seems to realize that their languages are getting too large. They aren't even getting too large in a graceful way: Common Lisp is probably the biggest language there is, but it's extremely portable, and is easy to write an interpreter for.

Things got complicated really fast, and domain knowledge was sort of lost in the waves of proprietary compilers as they were destroyed by GCC. I appreciate that free software "won" to some extent, but if it hadn't won as fast, we might have had some knowledge transfer happen that was actually useful.

Walter Bright isn't doing magic, but he and the people he works with seem to be some of the only people in Free Software who are making a compiler that's fast and good in the x86_64 world.


Why is compiling to LLVM bytecode insane? Compiling to bytecode and then working with a simpler language has worked for decades since the introduction of BCPL.

It's also the approach used by Open64, where you compile to WHIRL which is then optimised with successive passes until you eventually output whatever the architecture can run.


Good question!

BCPL's O-Code is far different from LLVM bytecode. O-Code was little more than a virtual stack machine: very similar to Forth, though slightly more complicated.
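To give a flavour of why a stack-machine form like O-Code is so much simpler a target than LLVM IR, here is a toy evaluator; the instruction set is invented for illustration and is not real O-Code:

```python
# Minimal stack-machine interpreter in the spirit of O-Code or Forth.
# Anything that isn't a recognised operator is pushed as a literal.
def run(program):
    stack = []
    for op in program:
        if op == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(op)
    return stack[-1]
```

A front end compiles `(2 + 3) * 4` to the postfix program `[2, 3, '+', 4, '*']`; running it leaves 20 on the stack. A code generator for such a machine is a straightforward tree walk, nothing like a full SSA-based IR.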

I could be wrong, but isn't Open64 more or less dead? The only thing I can think of that actually uses it (CUDA) isn't best-in-class.


I believe you're correct about Open64, but it was an interesting example of a compiler when I was last researching them, particularly its usage of WHIRL [1]. It takes a similar front-, middle- and back-end approach to GCC and LLVM, although I can't quite remember if it pioneered that approach or not.

[1] https://www.mcs.anl.gov/OpenAD/open64A.pdf


So you say LLVM or whatever is insane because it's large. So what do you think about QBE[0]?

0 : https://c9x.me/compile/


I don't know enough about QBE to give a well-informed assessment of it, so I'll refrain.


Compiling to bytecode is almost a decade older than BCPL; it was the way to go at Xerox PARC and on most mainframes that survive to this day.


From what I can tell, Xerox PARC didn't exist until 1970 and apparently used Smalltalk, which was after BCPL in 1967. I don't doubt you're correct, so do you have any links for earlier bytecode examples?


Burroughs Large Systems, released into the market in 1961, nowadays sold as Unisys ClearPath MCP mainframes.

Xerox PARC had Smalltalk, Mesa (later Mesa/Cedar) and Interlisp-D.

The platforms would first load the respective microcode into the CPU for the bytecodes used by the desired workstation environment.

Initially Xerox PARC made use of BCPL, but quickly they realised it wasn't the best way to write systems software and created Mesa to replace it.

After all, BCPL was intended to bootstrap CPL, not to write full systems with it.

Other 60's computer companies were making use of either Algol or PL/I dialects, all the way up to the 80's.

Lots of juicy papers at Bitsavers.


All these points are valid, but they don't explain

1) why Miranda itself was fast (compile performance or runtime performance?), or

2) why Haskell is so bad. Since Haskell's lineage from Miranda is quite strong, unless SPJ, JH, PW and the like had zero access to Miranda's techniques or no legal right to use them, I fail to see how they made the Haskell compiler so bad.


I think you’re conflating language design with language implementation.

If I’m being paid to write a Ruby compiler then ‘keep the language simple’ isn’t an option available to me, is it.

And the techniques from the 80s would be absolutely hopeless at compiling and optimising Ruby. A lot of the implementation approaches you’re talking about work brilliantly for a single-pass compiler for a trivial language like Pascal generating basic machine code but they aren’t expansive or powerful enough for bigger problems.

So the techniques haven’t been lost - they are in most cases either not enough or not applicable to our languages and the output we need.


I'm not conflating anything, as far as I'm aware. The question was:

> "the art of writing compilers and interpreters has effectively been lost"

> I'm curious, how does that happen? Were the techniques used in compilers and interpreters of yore never recorded for posterity in academic papers or technical documentation?

I think I did a pretty good job in answering it, although it was a bit rambling.

The people on the standards committees for these languages are generally also the ones implementing compilers, at least at first. The ones that don't have significant overlap between the two fail. ALGOL-68 comes to mind in the past, FORTRAN-03/08 comes to mind now.

Also, there were fast Lisp compilers back then (as fast as Lisp can get, at least; note the keyword "compilers", as I'm not conflating the two), along with decent (eh) Smalltalk compilers: Ruby isn't that unique. An optimizing compiler for Ruby could have used many of those techniques, given that most of the bottlenecks in Ruby are the same as in many Lisps. Of course, it wouldn't map perfectly, but much of it would. (I will admit that this paragraph is written with primarily old versions of Ruby in mind, because beyond 2007 or so, when it was basically a Lisp with Smalltalk tendencies under the hood, I haven't really kept up.)

Many techniques absolutely have been lost over time, even in obvious spots where you'd expect them to be maintained quite well; I have a folder of poorly-OCR'd, poorly-scanned ACM papers on novel, technically interesting and fast pre-1990 compilers with significantly fewer downloads than citations. Academic papers aren't really where you'd expect to see "lost" techniques, but publishing is a wasteland, and most non-famous papers on this subject (and on computers in general, partly; ever tried to find more than a handful of papers on Sun's Spring?) written between 1970 and 1990 are practically lost. The industry has the memory of a dormouse, though it might get better now that more and more things are becoming source-available.


Can you give any concrete examples of techniques that have been lost?


i know you're in high demand in this thread, but if it's possible to publish that folder (or the refs therein) that would be nice.


> The best stuff was all proprietary for decades.

Well, maybe. But it is hard to believe that most of those things haven't been rediscovered in the decades since. And IIRC GCC took over partially because it was simply better. GCC/LLVM used to implement every new technique from someone's PhD thesis, like Diophantine equation solvers, as optimization passes; I suspect that is no longer the case.

If you're saying the compiler itself is slow, well, the rat race among compilers is about producing fast benchmark binaries (sadly). But given how much we have advanced, I suppose fine-grained incremental compilation should be the norm in every compiler.

> Turbo Pascal, too

Here I suppose you are talking about compile speeds. Well, Turbo Pascal had the advantage that it was integrated with an IDE. Eclipse and IIRC Visual Studio can also reach those speeds using incremental / background compilation.

> We'll probably never see the source code behind any version of the k interpreter by Arthur Whitney

I have heard a bit about this k. What is it? Is it an array programming language like APL? I would like some pointers.

> Having portable compilers that try to heavily optimize is a mistake.

Again, here there are tradeoffs. People like the comfort of portability and thus don't care about small performance increments as long as code performs reasonably well. Although I would think we could have a peephole optimizer that optimizes for the target machine upon installation.

> No one seems to know what a CPU cache is!

While this may be true for the average programmer, compiler implementers care quite a lot about caches. In particular, many JIT optimizations are about the cache.

> Compiling to C / LLVM IR doesn't make sense.

Sure, that's a tradeoff. But it lets you implement a compiler in less time and reuse most existing optimizations.

However, I too think progress in compilers has stalled in recent years. Native programming languages are still being created, but it's almost all compile-to-C or embracing the LLVM monoculture. To its credit, Go didn't do that. And due to the proliferation of scripting / JIT languages, there's not much room for optimizing or fast compilers. In particular I am sceptical of the LLVM monoculture. The compiler rat race prioritizing benchmarks above all else isn't going to end well. Moreover, my gut feeling is that today's compiler infrastructure is not suitable for high-level languages.


> I appreciate that free software "won" to some extent, but if it hadn't won as fast, we might have had some knowledge transfer happen that was actually useful.

I was recently listening to the 2007 "Copyleft Capitalism: GPLv3 & the Future of Software Innovation" talk¹ by Eben Moglen (who incidentally happened to have worked on IBM's APL interpreter), and he makes the opposite argument: it was the source code hoarding of the Microsoft-driven personal computer industry that was the biggest impediment to knowledge transfer and innovation.

¹ https://www.youtube.com/watch?v=68aimESyyeU


Oh, certainly! My claim isn't that libre software harms knowledge transfer at all. It's great!

But free compilers won so quickly, even when they weren't obviously better, that many proprietary compilers died quick, unceremonious deaths with no obvious successors, and their authors went into different areas. This was a bad thing, in my opinion, because a lot of early compilers were written in incredibly clever ways, and those clever ways died with them.

I think my comment was also worded poorly:

> I appreciate

> that free software "won" to some extent

is how I was hoping it would be interpreted, but I think it was interpreted as:

> I appreciate

> to some extent

> that free software "won"

Free software winning was absolutely the right thing; I just wish it had been a bit less sudden.


I'm still really interested in concrete specifics of what has been lost. Right now it just sounds like magic, which leaves me skeptical. I'm a huge fan of programming languages as a field and hope to someday do research in it, and this is a bit of history I'm not well versed in.


Compare Symbolics Genera to a Free Software Common Lisp development environment today (SBCL and SLIME on GNU Emacs). The latter does a lot less and is more bloated. Commercial Common Lisp compilers like Allegro and LispWorks have features and optimizations that Free Software compilers lack (Allegro has really good debugging tools and garbage collection, LispWorks has excellent support for concurrent programming).

Intel's Fortran and C++ compilers produce the fastest code. Lucid Energize and IBM VisualAge C++ did incremental compilation and had IDE features not available with Free Software C++ compilers and editors/debuggers.

https://www.youtube.com/watch?v=pQQTScuApWk http://www.edm2.com/index.php/VisualAge_C++_4.0_Review


Genera ran on actual Lisp machines, I feel that that is not a fair comparison.

It's not exactly hidden knowledge that the Intel compilers produce the best code, it's just that they have draconian licensing requirements. Furthermore, that knowledge cannot possibly be lost, because they're still actively developed to this day. I'm interested in the technological specifics of what these older platforms did that the newer ones cannot do, and why the knowledge of how to do them has been lost.


Would you mind elaborating on these points?


I have, for most of it, elsewhere in this thread (I think you can find my answers by checking my /comments); what I didn't get to, I'll get to tomorrow. My responses are getting worse as I tire.


Do you mean that the compiler was fast or that the generated programs were fast? Do we have a competitor for Clean?


Answering to myself: the provided implementation is an interpreter only.


Clean was directly inspired by Miranda.


Are there any more modern functional languages that are as fast or faster?


k, J & APL, in roughly that order.


As far as I know, all three of those languages are interpreted. They might be good at slinging together predefined operations with fast implementations written in another language (C)… as long as you’re operating in parallel on large arrays, so that you spend more time inside the C functions than actually interpreting. But if you need to write your own operations or just do anything that isn’t massively parallel, none of those languages even compete. For example, you couldn’t write a performant compiler in them. In contrast, languages like Haskell and OCaml have compilers that generate native code – maybe not C-level native code, but still an order of magnitude faster than an interpreter.

Edit: For that matter, from quickly browsing the source code, it looks like Miranda is interpreted as well. So it’s absurd to say it’s faster than Haskell.


Most of the APLs don't automatically parallelise array operations either, because it's hard to know automatically when it's worth it. This is exacerbated by the fact that most APLs don't have a compiler, so the granularity of independent operations is fine-grained.

While I don't know much about the mysteries of k's implementation, I know that the most widely used industrial APL implementation, Dyalog, uses a pretty conventional explicit task-parallel API for parallelism. They call it "isolates", and it's essentially about launching a thread that has its own internal APL state (with a lot of polish for convenience and communication, of course). There may be a few primitive operations that are automatically parallel internally, but they are rare.


k is faster than hand-written C in many cases, and significantly faster than compiled Haskell.


You're ignoring the main point of the comment though:

>For example, you couldn’t write a performant compiler in them.


APL is cheating since all the magic happens in highly pipelined SIMD-heavy loops :p

but if it's really true that, other than array languages, no modern FP language can outperform Miranda, then that's really depressing.

I was under the impression that GHC attempted to generate decently good code, maybe it's all the little allocations that slow Haskell down or something like that.


APL isn't cheating because of that, it's cheating because it's small enough to fit in your CPU cache!

I'm sure something else can beat Miranda, I'm just unsure of what. I don't really care for the FP paradigm outside of arrays and Lisp, though, so I'll be the first to admit that I haven't spent days looking; just a few hours here and there.

Oh, actually: Stalin probably does. It's an R4RS compiler. Good luck getting it to compile, though; it took me a few hours and a lot of code changes to get it building half a decade ago (I wanted to compare it with the one I was writing, which was a lot worse, naturally, but much easier to compile). The compiler itself is slow as a tortoise, but it generates really wonderful code. It's probably rotted a bit more by now, though.


No, it's the SIMD loop thing.

The claim that k runs quickly because its functions fit in the CPU cache is, as far as I can tell, an off-hand comment that Arthur Whitney made once which has been repeated far more than it deserves. It's false—instruction cache behavior doesn't contribute significantly to k's advantage over other languages—for a few reasons: inner loops in array languages are 3–5 orders of magnitude smaller than the CPU cache, the loops that compiled languages produce also fit in cache, and instruction caching doesn't matter all that much for performance anyway. Despite spending plenty of time looking for it, I've never been able to measure an impact of code size on performance. Even data caching doesn't have that much of an effect: current versions of Dyalog APL almost always allocate new arrays from uncached memory (this is mostly fixed in the next version), and it's still one of the fastest array languages around. Unless you're using SIMD, code with linear access patterns can't even keep up with main memory, and the cache has no effect.

Why is using SIMD cheating? SIMD loops are the only way to get the full performance out of a CPU, and fast compilers do try to produce them. If it turns out that array-based interpreters are a better way to convert programmer intentions to SIMD loops than scalar compilers (and it certainly seems that way) then the array languages are legitimately faster. I suppose SIMD is considered "non-portable" because you can't use it from C, but that's an artificial restriction coming from historical programming language design decisions. The most important vector instructions are the same in any modern vector ISA. How is using standard CPU features cheating? They're not even that much newer than double-precision float support.

(I'm an implementor for Dyalog APL.)


My comment was that taking advantage of SIMD wasn't cheating, but "cheating" was a joke in both cases.

Though I disagree with your comment on the cache: it's very blatantly better for interpreters. Whitney isn't the only believer in that; Chuck Moore is too, and Moore is more competent than just about anybody. If you look at the performance of k and Moore-written Forths compared to Dyalog APL, it does seem like they have a point.


If you're talking about parsing speed, Dyalog is slow because it has a much more complicated grammar, and because it stores the execution stack in the workspace in order to make stack overflows impossible. If you're claiming k or Forth is faster for large array processing, I'd like to see some benchmarks. Do you have a citation for Moore on the instruction cache?


> Oh, actually: Stalin probably does. It's an R4RS compiler. Good luck getting it to compile, though. It took me a few hours and a lot of code changes to get it to compile half a decade ago (I wanted to compare the one I was writing, which was a lot worse, naturally, but much easier to compile). The compiler itself is slow as a tortoise, but it generates really wonderful code. It's probably rotted a bit now, though.

Stalin isn't really a good compiler in the realm of high-performance computing. What did and does make Stalin awesome is that it showed how you could compile away most of the overhead of using a very high-level language (Scheme) and end up with code that matched reasonably written C. That does not mean it's competitive with the code generated by a heavily vectorising Fortran compiler for number crunching. Stalin is more about removing language overheads than about pushing the hardware to the hilt.

If you want a compiler built around the same rough philosophy as Stalin, then there's MLton, which is also still maintained: http://mlton.org/


You can write code in a functional programming style in k/j/apl, the same as in Scheme etc., but they are imperative languages.


C++.

Not even joking.


Rust.


If you don't box all your closures, functional programming in Rust gets tedious quite quickly. But if you do, I guess it's no longer that fast...


> If you don't box all your closures, functional programming in Rust gets tedious quite quickly.

Somewhat true, but argument position and return position impl Trait have made that slightly nicer.


Haskell. Despite the unsubstantiated claim to the contrary, Haskell is much faster.


When I started studying CS in Hamburg, Germany, in the early 90's, they taught us Miranda to have everyone on the same level.

Which was not such a bad idea: I had three years of C and five of C++ under my belt at the time but had never touched a functional programming language.

Very few co-students had worked with Lisp, and they did have an advantage. But they still had to learn a new language.

Trivia 1: I was going out with a German-Iranian girl whose name was Miranda at the same time.

Trivia 2: As I was also studying graphic design with an emphasis on typography at the time, I made her a t-shirt for her b-day that used the 'Mirinda'[1] logo but turned it into 'Miranda'. She never wore it, which hurt me a bit at the time. Maybe I should dig that out and donate it as the logo for that language since theirs is an absolute eyesaw? :]

[1] Mirinda is a carbonated soft drink (https://en.wikipedia.org/wiki/Mirinda) The logo in the 90's looked different -- like this: http://tiny.cc/24rnkz


Heh, cool associations, thanks for sharing.

nit: "eyesaw" -> "eyesore" (unless "eyesaw" was deliberate -- pretty evocative!)


Likewise, when I started my CS degree at UNSW in 1989 they taught us Miranda before moving on to Modula-2.

Fresh out of high-school and our first programming course, the lecturer said something like "Ok all you smartarses who have mucked around in BASIC and think you can program, we're going to teach you Miranda!"


At ANU in maybe 2003, we had a single tutorial in Miranda to demonstrate the functional paradigm. It wasn't enough to appreciate the value of functional programming, but I guess they still had licenses and wanted to use them for something.


Miranda is still taught as a language for University College London's Functional Programming course: https://www.ucl.ac.uk/module-catalogue/modules/functional-pr...

Students grumble at having to learn such an esoteric language; the professor's justification is that it's simpler and more focussed than Haskell and so better for teaching functional concepts. This may be true, but perhaps he's just been teaching the same material for decades and doesn't want to update it.


I learnt Miranda in my first year of CS at University College London...in 1991. So I think you might be right!


He's right! Haskell is still stuck with most of the wrong decisions they made in '88, and it was intended as a (clunkier) Miranda clone, anyway.


> Haskell is still stuck with most of the wrong decisions they made in '88

Which ones?


While I don't agree with the general snarkiness towards Haskell of the commenter you are replying to, being lazy by default could be seen as Haskell's original mistake. It was interesting from a research point of view, though.


Miranda is also lazy, so that can't be it.


it's hard to call the sine qua non of a language a bad design decision. if it were, the language would be dead. haskell survives because it's a lazy purely functional language with some industry support. bad design decisions have to be something apart from that.


I can't disagree more.

What set Haskell apart is how it successfully limited side effects to the IO type and introduced type classes.

Laziness by default is definitely a bad design choice as far as I'm concerned. The drawbacks are not worth it.


:: for type signatures, for one; I think that came straight from Miranda.

Honestly, though, there weren't too many outright "mistakes", but some parts of Haskell have evolved significantly: for example, the introduction of the IO monad and all the associated type classes, compared to the original I/O design based on infinite lazy lists.


Answering here so everyone in this thread can see:

The most obvious one (and, admittedly, the one that immediately comes to mind at almost 3 AM) is the exclusion of non-linear patterns. It didn't reduce complexity, yet made the language worse, in an obvious way, for the purpose the professor stated.


That reminds me of n+k patterns, another weird wart.


And ML (the language) is still a core part of Cambridge University compsci.


I've been playing with Miranda a little bit since seeing this yesterday, and man, I gotta say, the quality of the REPL environment and documentation is amazing.

Everything is laid out plainly, and you can learn how to work with the language in a matter of minutes.

Even the readme was one of the best I've ever encountered. It makes no undue assumptions and leaves no work up to the reader. It even notes that one may need to make certain edits to the Makefile on certain systems and, if needed, how and where one should do so (I've had to build several other programs in the past that required more tweaking and didn't document the possibility or guide the user at all).

The language itself seems slim and elegant so far, but, if nothing else, I'm amazed at the quality of the compiler documentation, the manageable size of the source, and the REPL's design. It encourages you to write entire programs from the REPL, by allowing you to direct output to a file and to invoke an editor to modify the current scripts loaded into the environment--all without special plugins on the editor side. Instead of having to invoke the REPL from your editor, your REPL invokes your editor--honestly it seems like the right relationship and now I'm fairly confused why other REPL focused languages don't commonly support this.

Everything comes bundled too--you don't need to go through some gitbook based tutorial docs in your browser to get up to speed--it's all available right from the system itself.

A lot of contemporary programming languages I've used don't have nearly as good an onboarding experience. It makes me wonder to what extent this is just a rare case of quality work by great programmers and to what extent it's symptomatic of what one can deliver with licensing and funding, as opposed to purely open-source contributions largely driven by community interest.


> instead of having to invoke the REPL from your editor, your REPL invokes your editor--honestly it seems like the right relationship and now I'm fairly confused why other REPL focused languages don't commonly support this.

That is originally how REPLs worked in Lisp and APL systems in the 1960s. In BBN Lisp/Interlisp (see http://www.softwarepreservation.org/projects/LISP/bbnlisp/W-...) and APL\360 you called the built-in editor from the REPL. There were not really any stand-alone interactive editor programs around back then, and the operating systems BBN Lisp and APL ran on mostly did not properly support running multiple processes to run editors anyway. Maclisp on ITS had job control, and you could call external editors: http://www.maclisp.info/pitmanual/edit.html

Doing it the other way around is better for projects written in multiple languages, dealing with multiple implementations, running processes on remote systems, and IDE features.


I disagree with your last sentence, but thank you for bringing the history here up!


I shared this comment with a friend because it made me happy that something I posted was getting more than a surface-level review, and they responded like this to your last paragraph:

This is an interesting note to end on, because I'm not sure licensing and funding is one of the main factors that leads to this sort of thing. I think rationally designing environments in a holistic way is just sort of a lost art among programmers in general, in favor of the sort of anarchy and patchwork that characterizes software development in the present day.

I agree with their view fairly strongly. Modern proprietary languages aren't this good at all in the same areas, either.

Thanks for checking this out, it makes me happy that someone's gained something from doing so.


That's great! I think your friend's take is sound and I agree with that being the likely root cause. I do wonder if there's a certain relationship between a closed project, however, and rational, focused design—not a necessary relationship at all, but perhaps a synergistic/convenient one (closed or tight-knit projects lending themselves to singular, disciplined visions, projects that are fundamentally open having a greater tendency to allow less focused design decisions creep in).

I would agree on the "lost art" and "anarchy" — this definitely gives me something to think about!

And I'm glad you shared this! I had a blast checking it out.


Hmmm...I'm wondering if there's something wrong with my build (Mac OS X) or if I'm just "using it wrong" because the REPL (not the language) seems pretty unusable to me.

The up and down arrows just show escape sequences instead of going to previous/next history.

Entering `foo = 123` gives the error `UNDEFINED NAME - foo`. Entering `exp foo = 123` gives the same error.

Entering the same expression on a line in a file works though (`/f test.m`).

Am I doing it wrong?


This is really big news, albeit about 27 years too late to matter. Miranda is the elegant sister of Haskell, a really beautiful pure functional language. When I was a student I wrote a Perl(!) script which translated Miranda source to Haskell so I could test my classwork exercises at home on my Linux/386 machine without having to go into the labs (no internet in those days!)


Under "Why the name Miranda?":

> Because it is a proper name, not an acronym, only the first letter of Miranda is capitalised.

It took me a second to realize that most language names of this era were in fact all-caps acronyms.


I kindly beg to differ for the 1980s:

https://en.m.wikipedia.org/wiki/Timeline_of_programming_lang...

The all-caps acronyms were the 1960s and before


I read the overview, cool language, very cool to see it open sourced, and just looking at the code it is clear how much it inspired Haskell.

I started hacking (a lot!) this week on an experimental project in Swift. Swift is similar to Haskell and Miranda in supporting functional programming. I like Haskell’s syntax and in general REPL based Haskell development, so at first Swift’s syntax bugged me. However, when I set my project up with Playgrounds for developing bits of low level functionality in what is effectively a REPL environment, and spent a day with Xcode, I now think Swift is a worthy substitute for Haskell for some projects (i.e., when targeting macOS and/or iOS).


Swift is not similar to Haskell and Miranda. It's an imperative, reference-counted, manually managed language with strict evaluation, while Haskell and Miranda are garbage-collected languages with lazy evaluation and immutability. It's hard to find languages more different, really.
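A small sketch of the lazy-versus-strict contrast (`firstSquares` is my illustrative name): under lazy evaluation an infinite list is fine as long as only a finite prefix is ever demanded, whereas a strict language would loop forever trying to construct the whole list first.

```haskell
-- An infinite list of the naturals; nothing is computed yet.
nats :: [Integer]
nats = [0 ..]

-- Only the first n elements are ever forced.
firstSquares :: Int -> [Integer]
firstSquares n = take n (map (^ 2) nats)
```

For example, `firstSquares 5` yields `[0,1,4,9,16]` without ever touching the rest of `nats`.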


You are right of course about the non-lazy part and like Scala, it is easy enough to use mutable data, although not good style.


Cool, looks like a pretty browsable/readable codebase. There's some apparently duplicate code in '/new/' I omitted.

    $ ls *.[chy] |  grep -v y.tab | xargs wc -l |sort -n
          3 version.c
          9 utf8.h
         21 fdate.c
         30 lex.h
         62 big.h
         88 utf8.c
        143 combs.h
        144 cmbnms.c
        295 just.c
        320 menudriver.c
        350 data.h
        656 big.c
       1000 trans.c
       1220 lex.c
       1315 data.c
       1674 types.c
       1689 rules.y
       2241 steer.c
       2394 reduce.c
      13654 total


Here's an idea I have: never close-source any programming language, make 'em free and open from day zero. Languages are not products, but infrastructure, like roads. They benefit and grow from the number of users, not from paywalls. Closed languages tend to fade into obscurity. That's why everyone knows Haskell and nobody knows Miranda, everyone knows Java and nobody Eiffel. I laughed when I read that the author of Shen/Qi changed the license from a commercial to a slightly more permissive one, when no one cares about his little language. Such conceit kills languages. Microsoft has realized this only recently with .NET, when the JVM was already miles ahead. Which is sad because the CLR is so much smarter than JVM.


Haskell was written as academia's response to Miranda's licensing fees. Were Miranda freeware (but not source available or Free Software), it probably would still be well-known. It's still taught in many universities.

I certainly agree that most languages should be libre, but clearly it's not necessary for success. k has brought billions of dollars of profit, but is the most proprietary of all languages. C# developers are the most common in the world, for some reason. VBA is popular. Excel has the most programmers. Mathematica makes millions. Matlab makes millions.


Miranda's cost was a big problem, but not the only one. The other core issue that led to the creation of Haskell was Miranda's license, which essentially prohibited using it as a tool for programming language research. For good reason, Turner (Miranda's creator) wanted to avoid the fragmentation of Miranda into different dialects. From [1]:

> [T]he easiest way to move forward was to begin with an existing language, and evolve it in whatever direction suited us. Of all the lazy languages under development, David Turner’s Miranda was by far the most mature. It was pure, well designed, fulfilled many of our goals, had a robust implementation as a product of Turner’s company, Research Software Ltd, and was running at 120 sites. Turner was not present at the meeting, so we concluded that the first action item of the committee would be to ask Turner if he would allow us to adopt Miranda as the starting point for our new language. After a brief and cordial interchange, Turner declined. His goals were different from ours. We wanted a language that could be used, among other purposes, for research into language features; in particular, we sought the freedom for anyone to extend or modify the language, and to build and distribute an implementation. Turner, by contrast, was strongly committed to maintaining a single language standard, with complete portability of programs within the Miranda community. He did not want there to be multiple dialects of Miranda in circulation and asked that we make our new language sufficiently distinct from Miranda that the two would not be confused. Turner also declined an invitation to join the new design committee [...] Haskell owes a considerable debt to Miranda, both for general inspiration and specific language elements that we freely adopted where they fitted into our emerging design.

[1] P. Hudak, J. Hughes, S. Peyton Jones, P. Wadler, A History of Haskell: Being Lazy With Class. https://www.microsoft.com/en-us/research/wp-content/uploads/...


over a long enough time frame, it seems his view is standard. nowadays, there's not really a haskell standard other than ghc. there's other haskells than ghc (e.g. ghcjs) but they're all forks of ghc. research is achieved by enabling extra features

(i guess eta isn't tracking ghc, but i think that's because of unviability rather than a specific intention to fork the language.)


I think the difference is that today nobody is preventing you from forking GHC or adding features, while back in the early 1990s, Research Software Ltd, the company creating Miranda, would probably have prevented others from forking Miranda. 30 years ago, the value of open source and the network effect for programming languages was not widely understood. In particular, giving away for free and without restrictions a company's core IP was inconceivable for traditional businesses!


Thanks for sharing this excerpt, I appreciate it!


How is JVM miles ahead of the CLR? All I ever hear (at my current and previous jobs) is about how Java's runtime and tools are miles behind .Net and the CLR.


As someone who has worked with both platforms since the early days:

It supports more platforms and implementations than CLR ever will, including bare metal deployments with real time GC.

Thanks to those implementations, there is a plethora of GC algorithms and JIT/AOT optimizations not yet available in CLR, like AVX-512 auto-vectorization, JIT code caches with PGO, tiered JIT compilation, real time GC, GC able to deal with multi-TB heaps with ms pauses, ...

If you want something like VisualVM or JFR, you need big pockets for Visual Studio Enterprise, and it still doesn't match in capabilities.

Naturally the CLR has other things going for it, like having had NGEN since day one, value types, reified generics, and being designed from the get-go to support multiple languages, including C++.


Sorry, I meant the JVM ecosystem, of course.


Back in the 90s I remember taking a course that used Standard ML. I was pretty amazed by its elegant syntax, and if I remember correctly, it was either based on, or related to, Miranda. I haven't really used a functional language since university, but would love to try one again. What's worth checking out? I know nothing. Anything I can use to make a website or an interesting project within a week of learning?


F# is an ML-inspired language running on the .NET CLR.

Use the SAFE template if you are interested in doing web development with this language.


This? What does safe stand for? https://safe-stack.github.io


Is this possible to use on a Mac?


Definitely, using .NET Core.


You might like Elm if you're looking to do a fun web project and you enjoy MLish syntax.


Just reading about it ... thanks


JavaScript has some FP parts, and is kind of the go-to language for websites, but it is not strictly functional.


If you want something functional that compiles to JavaScript, for F# you can use Fable [0], and there is an OCaml-like language called ReasonML [1] that also does it.

[0]: https://fable.io/ [1]: https://reasonml.github.io/


Reason{ML,} is just a new syntax for OCaml, still perfectly round-trip translatable I believe (though that's something I expect to diverge over the years). You can use js_of_ocaml or BuckleScript to translate either OCaml or ReasonML code to JavaScript.


I think I've responded to everything in this thread. If I've missed something, please point it out. I'll look through again later: I spent like five hours straight replying to this thread today, so I'm kind of tired of talking.


I learned a bit of Miranda in my programming languages class at the University of Iowa, back in... 1999? 2000?, under Prof. Arthur Fleck. Just that little bit of Miranda really prepared me for generic and functional idioms in C++.


I learned a bit of Miranda and Oberon for a programming languages course back in school, prior to knowing what Haskell was (or even functional programming proper). Can anyone speak to its capability for a fun hobby project (web dev, or a CLI)? I liked the language.


If you were to make a new language today, you would design for readability above all else, because modern software is huge, while programs back in the day were small.


Readability is entirely and extremely subjective. I don't find Japanese readable, but that's because I never bothered to put the time in to learn it. I find TeX and LaTeX readable, though. Readability becomes slightly more objective, but not much more so, when you have a well defined and sufficiently homogenous target audience.

If you make a new language today, you should figure out who it's for (probably yourself and a couple of other people), and design it accordingly. As was always the case.


> If you would make a new language today you would design for readability above all else.

So, basically, COBOL?


First language I learned at Uni, right before C!


Anyone know of any available-online courses that teach functional programming using Miranda?


This could do with a 2010 label.

Back in the day, Orwell was a free Miranda clone. So I used that.


The page has been updated recently, the 64-bit version was added this year. So it's actually the date on the page that is wrong.


Yeah, the date on the page is wrong.

See:

https://web.archive.org/web/20181015090301/http://miranda.or...

It was still proprietary as of 2018/most of 2019, got a source release in the form of a 32-bit version around December, and then a 64-bit version released around a month ago, from what I can tell. I completely missed this occurring until a friend pointed it out, but I'm so happy it did.


that date seems to be the date "sitemeter" passed away


Wonderful. Along with original MIT Scheme it is a great teaching language and an example of clarity, brevity and conciseness which comes from a rigorously trained mind.

Beautiful piece of software.



