Are there any programming languages/runtimes/frameworks with a "garbage collection" strategy of just never collecting any garbage? (During program execution; obviously the memory would be freed when the process exited)
It wouldn't work for _most_ use cases, but for extremely short lived programs (e.g. CLI scripts) it seems like this would be an easy way to avoid the stop-the-world costs of a GC without any memory safety risks.
Does this exist? Or am I missing some reason it'd be dumb?
@codesections Scala Native has a 'none' GC, and I think I've heard it recommended for such short-lived programs
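(If it helps: as far as I know it's just a one-line build setting with the sbt plugin - something like this, though the exact key depends on the plugin version:)

```scala
// build.sbt - sketch assuming the sbt-scala-native plugin is enabled.
// With the 'none' GC, allocations are never reclaimed during execution;
// memory is returned to the OS only when the process exits.
enablePlugins(ScalaNativePlugin)
nativeGC := "none"
```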
> Scala Native has a 'none' GC, and I think I've heard it recommended for such short-lived programs
Very interesting, thanks! As someone who has never used Scala, I had thought of it as pretty inappropriate for short-lived programs (see, e.g., the startup-time benchmarks discussed below).
Is that wrong? Or wrong for Scala Native? (which I'm even less familiar with, but sounds like something that'd have better startup times)
@codesections (I'm not surprised Java/Scala perform poorly on a 'hello world', that's indeed definitely not what they're optimized for. I'm kinda surprised at the difference between Java and Scala - I wonder if that still holds on more recent versions. But it's largely academic interest, they will still not perform 'well' for this case ;) )
> I'm not surprised Java/Scala perform poorly on a 'hello world' [startup time benchmark], that's indeed definitely not what they're optimized for. I'm kinda surprised at the difference between Java and Scala - I wonder if that still holds on more recent versions.
I've actually been testing startup times locally, and I get 40ms for javac 11.0.10, and 516ms for scalac 2.11.12 (and 810ms for Clojure, for comparison). I should add Scala Native, though.
Does Scala work with Nailgun?
@codesections The Erlang VM does this for processes: each process has its own heap and GC cycle, and when the process finishes its heap is simply recycled, so GC typically runs only at the end. Not quite what you're thinking of, but worth sharing, I think.
@codesections A decent number of D batch programs are written this way. It works fine until you start swapping; if that never happens, you're good to go.
@codesections https://github.com/facebookarchive/warp/blob/master/main.d#33 No longer maintained, but it was easily faster than the gcc/binutils /usr/bin/cpp and not far behind LLVM's; no idea if it ever caught up.
Not sure many consider PHP particularly worthy of discussion, but even though it's reference-counted by default, the cycle collector can be toggled off when needed: https://www.php.net/manual/en/features.gc.performance-considerations.php
Same (in both regards) applies to Python too: https://docs.python.org/3/library/gc.html
So perhaps it's more common than it may seem at first?
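(E.g., in Python the "never collect" strategy is basically one call - with the caveat that only cycle collection can be skipped, since CPython's reference counting always runs:)

```python
import gc

# A short-lived script can turn off Python's cyclic collector entirely.
# Reference counting still frees acyclic objects immediately; only
# reference cycles are left uncollected, and those are reclaimed when
# the process exits anyway.
gc.disable()
assert not gc.isenabled()

# Build a reference cycle: with the collector off, this list won't be
# reclaimed during execution once the name goes out of scope.
cycle = []
cycle.append(cycle)
del cycle

# ... do the script's real work here ...

# No gc.enable()/gc.collect() needed: process exit frees everything.
```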
@codesections As to whole frameworks that do this, I don't know of any but it seems totally reasonable that there would be place for that. (One can argue that ObjC's autoreleasepool functionality is kind of mimicking this—where you can defer releasing memory of refcounted objects that are dead until some performance critical section has ended—but that's not what people usually mean by GC.)
> So perhaps [a noop GC for short-lived programs is] more common than it may seem at first?
Yeah, that's my overall takeaway from the many interesting examples people have brought up in the replies!
@codesections You could achieve this in Perl by declaring all variables as 'state' (or by not having more than one scope block in your script). That being said, Perl uses ref-counted GC, so it's reasonably performant as-is (at least with respect to memory allocation).