Random insight of the night: every couple of years, someone stands up and bemoans the fact that programming is still primarily done through the medium of text. And surely, with all the power of modern graphical systems, there must be a better way. But consider:

* the most powerful tool we have as humans for handling abstract concepts is language
* our brains have several hundred millennia of optimizations for processing language
* we have about 5 millennia of experimenting with ways to represent language outside our heads, using media (paper, parchment, clay, cave walls) that don't prejudice any particular form of representation, at least in two dimensions
* the most wildly successful and enduring scheme we have stuck with over all that time is linear strings of symbols. Which is text.

So it is no great surprise that text is well adapted to our latest adventure in encoding and manipulating abstract concepts.

@rafial Both accurate and also misses the fact that Excel is REGULARLY misused for scientific calculations and near-programming level things since its GUI is so intuitive for doing math on things.

Like, GUI programming is HERE, we just don't want to admit it due to how embarrassing it is.

@Canageek very good point. Excel is actually the most widely used programming environment by far.

@rafial Now what we need to do is make a cheap, easy-to-use version of it that is designed for what scientists are actually using it for: column labels, semantic labels, faster calculations, better handling of mid-sized data (in the tens-of-thousands-of-data-points range), etc.

@Canageek I'm wondering, given your professional leanings, whether you can comment on the use of "notebook"-style programming systems such as Jupyter and of course Mathematica. Do you have experience with those? And if so, how do they address those needs?

@urusan @rafial Whoever wrote this is an idiot who has never worked in the actual physical sciences, where you have actual physical experiments, just FYI. Like 10% of my lab's work involves computers at all. But sure, science now moves at the speed of software.

@Canageek @rafial The attention grabbing headline is pretty stupid, but there's a lot of interesting content in the article. It's especially relevant in the fields I've worked in.

@urusan @rafial It totally ignores the advantages of PDF though, like the fact that there is a stack of independent implementations that can view it, which means we will still be able to read these files in 50 years, unlike whatever format they are using. And we can even print them out on paper to edit them (which is how my boss and I do it, as he doesn't know LaTeX, which is what I write in).

Or the fact that like, 90% of scientists don't know how to program and are unlikely to learn.

@urusan @rafial In my lab right now... I can program a little, one post doc has spent some time doing python tutorials, and I think that is it, out of ten people.

Organic labs might have fewer than that, as there is a reputation, at least, that organic chemists and programming are like oil and water: if you are good at one, you are likely terrible at the other.

@Canageek @rafial Well, I don't personally think this kind of dynamic documentation is going to fully replace static documentation. Static webpages haven't been fully replaced by dynamic webpages either.

However, the growing notebook ecosystem does address issues that didn't have well-formed norms around them before.

In addition to the obvious CS/Math applications, there's a lot of areas where complicated statistical work gets done with code, and then how do you publish that?

@urusan @rafial You publish the source files as ESI, the way we do with input files in crystallography?

@urusan @rafial (I might be giving opinions different than normal due to spending two days trying to get this figure to compile and dealing with moving to lualatex from pdflatex as a result)

@Canageek @rafial Actually, this gets at the heart of what I'm talking about.

Jupyter handles the software dependencies for you as part of the kernel selection.

As these norms become better established over time, the tools to deal with these issues automatically will just be there.

Just dropping some source code on someone is putting the burden of replication on them and not on the platform.
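
A minimal sketch of one such norm (this is a convention, not a built-in Jupyter feature; the package names are just examples): make the notebook's dependencies explicit in its first cell, so anyone rerunning it knows what was used.

```python
# First cell of a hypothetical notebook: record the exact versions used,
# so a reader trying to replicate the analysis knows what to install.
import sys
import numpy, pandas, matplotlib

print("Python", sys.version)
for mod in (numpy, pandas, matplotlib):
    print(mod.__name__, mod.__version__)
```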

@urusan @rafial Right but then you are stuck with a system 0 people understand instead of 1 person.

If I wanted to use GNUplot as it was built I could use the script a former grad student passed on, but I want it in LaTeX so the fonts and such match.

(Honestly, if I could go into campus it would be done, but the software we have that makes these graphs is trapped on one computer, and not worth me going into campus during a pandemic to get, at least not yet)

Honestly, if you are going to try and replicate any of my work you aren't going to go to my data: you are going to synthesize the compounds yourself and take the measurements on your own equipment against your own standards, so that it doesn't turn out to be some dumb difference between how I set up my experiments and how you do, or some defect in my hardware, etc. etc.

@Canageek @urusan @rafial > 90% of scientists don't know how to program and are unlikely to learn

Do you think that's stable, or does it really mean "90% of old scientists"?

I probably listen to biased sources, but my impression is that basically any field doing any amount of statistics these days does it in Jupyter.

If that counts as "know how to program", of course.

@clacke @urusan @rafial Well yeah, but most scientists don't do statistics. Most chemists, most biologists, geologists, etc.

Like, there is a reason computational is a subfield of every discipline.

I think it is going up, due to more stats and computation being used, but I also think we are way too reliant on stats these days and use them instead of getting good data.

@Canageek @clacke @rafial Getting good data is a noble goal, but you'll always have to cope with statistical uncertainty in science.

Even in computer science, where we theoretically control the underlying systems we're studying perfectly, there's often still statistical uncertainty to deal with.

I don't see how that would be any better in the real world where there's uncertainty in measurement.

That said, you're right that you want to get good enough data that your statistics are simple.

@urusan @clacke @rafial Right, I've been frustrated with this in science for decades. We should do half as many studies and put at least twice as much funding into each one so we have actually decent stats.

For example, lately you have to justify the minimum number of rats for ethics committees for any experiment. Fuck that, use 4 times as many so we can be confident in our work instead of justifying it to heck and back.

@urusan @clacke @rafial (This is part of why I picked a field where all the stats were baked into software and validated before my Dad was born, and the sample sizes for everything else are around 10^26, i.e. the number of atoms in solution.)

@Canageek @clacke @rafial That's kinda crazy.

Either killing rats is totally unethical and we shouldn't be doing it at all, or it doesn't matter how many you kill. There's no middle ground here.

@urusan @clacke @rafial Nope, it's counted as bad, but justifiable, so you have to minimize the number you use, at least as I understand it.

Likewise, academic human studies are typically very underfunded, which is why there is such a bias towards small sample sizes and all the participants being undergrads found on campus.

@Canageek @urusan @rafial A Swedish stand-up comedy musician has a routine "Ain't it Weird" about modern life. The very first segment goes:

> Listen, my grandpa was a mason, from Borlänge, Gösta was his name, construction worker, he used to say this:
> "In my day, we made an honor out of building houses as strong as possible, so they would last as long as possible. But now, they have computers that calculate how weak they can build a joint without it falling in on itself."
> *ding*
> Ain't it weird?

Peter Carlsson & Blå Grodorna: "Än'te könstut?" (Dalecarlian Swedish)

He's wrong, but he's not wrong.

youtube.com/watch?v=9s0reaWAjw…

@clacke @urusan @rafial I've heard from engineers that anyone can build a bridge. An engineer can build a bridge that only JUST carries the required load, so that it doesn't cost a fortune.

Thanks @urusan, I found the article interesting, and it touched on the issue of how to balance the coherence of a centrally designed tool with the need for something open, inspectable, non-gatekept, and universally accessible.

PDF started its life tied to what was once a very expensive, proprietary tool set. The outside implementations that @Canageek refers to were crucial in it becoming a universally accepted format.

I think the core idea of the computational notebook is a strong one. The question for me remains whether we can arrive at a point where a notebook created 5, 10, 20 or more years ago can still be read and executed without resorting to software archeology. Even old PDFs sometimes break when viewed through new apps.

@rafial @urusan Aim for longer than that. I can compile TeX documents from the 80s, and I could run ShelX files from the 60s if I wanted to.

@Canageek @urusan oh hey, I'm all for longer; eventually we'll need centuries. But I've also seen the hair-pulling it can take to revive archived source code from even a couple of years back, and I realize we've got to start by getting to the point where even a decade is a reliable and expected thing.

@rafial @urusan Fair, though I'd say source code is pointless and what we need is more focus on good, easy access to the raw data.

If you can't reproduce what was done from what is in the paper, you haven't described what you've done well enough, and redoing it is better than just rerunning code: a bug might have been removed between software versions, you might notice something not seen in the original, etc.

@Canageek @rafial This is something I have been thinking about while talking about this. The Jupyter notebook approach is much better when code gets involved.

However, the main alternative is to just eschew code entirely. I think this is valid, especially in fields where code is largely irrelevant and you can just provide your data and describe your statistical approach and let the reader deal with it.

@urusan @rafial That would be my favoured approach. Raw data plus a good set of standard statistical tools. The more basic the analysis, the less it matters if the EXACT toolchain is lost.

If you can't redo the analysis in alternatives then it isn't good science in the first place.

@Canageek @rafial You aren't processing those ShelX files on any sort of hardware (or software binaries) that existed in the late 1960s. At best, you're running the original code in an emulation of the original hardware, but you are probably running modern software designed to run on modern hardware.

Software archeology is inevitable and even desirable

What we want is an open platform maintained by software archeology experts that lets users not sweat the details

@Canageek @rafial Admittedly, we also want the software we use to communicate science with to not change at a blistering pace.

However, natural language and scientific techniques naturally change over time too, so it's inevitable that we will have to cope with change.

We already have to do this, it's just our brains do a good job smoothing inconsistencies out.

@urusan @rafial No, they've kept updating the software since then so it can use the same input files and data files. I'm reprocessing the data using the newest version of the software using the same list of reflections that was measured using optical data from wayyyy back.

The code has been through two major rewrites in that time, so I don't know how much of the original Fortran is the same, but it doesn't matter? I'm doing the calculations on the same raw data as was measured in the 60s.

There is rarely a POINT to doing so rather than growing a new crystal, but I know someone who has done it (he used Crystals rather than Shelx, but he could do that as the modern input file converter works on old data just fine).

@Canageek @rafial We're talking about 2 different things here. Of course data from over half a century ago is still useful.

The thing that's hard to keep running decades later is the code, and code is becoming more and more relevant in many areas of science.

Keeping old code alive so it can produce consistent results for future researchers is a specialized job

Ignoring the issue isn't going to stop researchers from using and publishing code, so it's best to have norms

@urusan @Canageek One other thing to keep in mind is that data formats are in some ways only relevant if there is code that consumes them. Even with a standard, at the end of the day a valid PDF document is, by de facto definition, one that can be rendered by extant software. The same goes for ShelX scripts. To keep the data alive, one must also keep the code alive.

@rafial @urusan No, what you need is a good description of how the data was gathered. Analysis is just processing and modeling and can be redone whenever. As long as you know enough about the data.

There are *six* programs I can think of that can process hkl data and model it (shelx, crystals, GSAS-II, Jana, olex2), so it doesn't REALLY matter which you use, or whether any of them are around in ten years, as long as there is *A* program that can do the same type of modeling or better (being able to read the same input file is a really good idea as well, as it makes things easy).

If a solution is physically relevant any program should be able to do the same thing.

@rafial @urusan Standardized data formats are more important than software.

Simple, standardized analysis is better than fancy, complicated work.

@rafial @urusan @Canageek And this is why all software should be written in FORTRAN-77 or COBOL.

@mdhughes @rafial @urusan I mean, that is why Shelx's first major version came out in 1965 and the most recent one in 2013 (the last minor revision was 2018).

I mean, modern versions of Fortran aren't any harder to write than C, which is still one of the most used programming languages on the planet; I don't see why everyone makes fun of it.

@Canageek @rafial @urusan I'm kind of not making fun of Fortran, though the last time I saw any in production it was still F-77, because F-90 changed something they relied on and was too slow; I last worked on some F-77 for the same reason ~30 years ago.

I am indeed making fun of COBOL, but it'll outlive us by thousands of years as well.

Stable languages are good… but also fossilize practices that we've improved on slightly in the many decades since.

@mdhughes @rafial @urusan Isn't Fortran-90 like three versions old now? I know I used it in 2005 because you could talk to F77 with it and we had certified hydrodynamics code in Fortran 77 that was never going to be updated due to the expense of recertifying a new piece of code

@Canageek @rafial @urusan Yes, newer Fortrans are actually useful for multithreading (F-77 can only be parallel on matrix operations, IIRC). And yet I expect F-77 to be the one that lasts forever.

@mdhughes @Canageek @rafial Modern language development will slow down eventually, at least for languages worth using decades from now.

While Fortran, Cobol, and C will never die, they'll be joined by long-term, stable versions of newer languages, such as Python.

@Canageek @mdhughes @urusan @rafial Ok, that's it. I need to check this ShelX thing out.

en.wikipedia.org/wiki/ShelXle

> SHELX is developed by George M. Sheldrick since the late 1960s. Important releases are SHELX76 and SHELX97. It is still developed but releases are usually after ten years of testing.

This is amazing.

@clacke @mdhughes @urusan @rafial Yeah, the big worry is that George Sheldrick is getting very, very old, and people wonder whether anyone will take over maintaining and improving the software when he dies. Luckily its largest competitor does have two people working on it, the original author and a younger professor, so it has a clear succession path.

@mdhughes @Canageek @urusan @rafial Any language that has a reasonably-sized human-readable bootstrap path from bare metal x86, 68000, Z80 or 6502 should be fine.

They don't exist. Yet. Except Forth and PicoLisp.

Also I'd add standard Scheme and standard CL to the list. You can still run R4RS Scheme code from 1991 in Racket and most (all? is there a pure R5RS implementation?) modern Schemes. CL hasn't been updated since 1994.

@clacke @Canageek @mdhughes @rafial Really you just need a well defined language spec (which is easier said than done).

The semantics of, say, addition isn't going to change. Once you define that c = a + b means adding a and b and then assigning the value to c, you no longer need a reference implementation and you can treat this code like a well-defined data format.

Of course, I'm leaving out a lot of detail here, like what do you do on overflow?
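
A toy sketch of that overflow detail, in Python with NumPy (the values are arbitrary): the "same" a + b gives different answers depending on which implementation's rules apply.

```python
import numpy as np

a = np.array([2**62], dtype=np.int64)
b = np.array([2**62], dtype=np.int64)

# Fixed-width integers typically wrap around on overflow (and may or may not
# warn, depending on the library version)...
print(a + b)            # [-9223372036854775808]

# ...while Python's built-in integers are arbitrary precision and never overflow.
print(2**62 + 2**62)    # 9223372036854775808
```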

@clacke @Canageek @mdhughes @rafial Having a reference implementation just lets you defer to the reference implementation as your spec, and if it's on a well known platform then it can be reasonably emulated on different hardware.

When you think about the reference implementation as a quasi-spec, then it becomes clear that most mainstream languages already have a reference implementation, and thus one of these quasi-specs already.

@clacke @Canageek @mdhughes @rafial In either case though, the end user doesn't care about the code archeology aspects of this.

Just because we can theoretically re-implement Python 2.5.1 as it would run on a 64-bit x86 on your future 128-bit RISC-V processor doesn't mean that you would want to

You just want to see the results, and you don't want them to differ, say because of the 64-bit vs 128-bit difference

A standard platform facilitates this
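
For illustration, a tiny example of the kind of difference meant here (assuming NumPy; the loop count is arbitrary): the "same" accumulation at two floating-point widths gives answers that differ in the low digits, so a bit-for-bit comparison can fail even when both runs are correct.

```python
import numpy as np

total32 = np.float32(0.0)
total64 = np.float64(0.0)
for _ in range(100_000):
    total32 += np.float32(0.1)   # 32-bit accumulation
    total64 += np.float64(0.1)   # 64-bit accumulation

# Neither result is exactly 10000, and they differ from each other,
# which is why reproducing results bit-for-bit requires pinning the platform.
print(total32, total64)
```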

@clacke @Canageek @mdhughes @rafial Language specs and reference implementations make the code archeology work possible for the maintainers of this open platform.

It's necessary for them to be able to cope, so the end user can ultimately have a smooth experience, and get back to their scientific research.

@urusan @clacke @mdhughes @rafial See, this is a lot of focus on getting the exact same results, which for science I think is a mistake.

You don't want the same results, you want the *best* results. If newer versions of the code use 128-bit floating point numbers instead of 64-bit, GREAT. Fewer rounding errors.

It's like, I can create this model in Shelx or Crystals. They don't implement things EXACTLY the same way, but a good, physically relevant model should be creatable in either. If I try to do the same thing in two sets of (reliable) software and it doesn't work in one, perhaps I'm trying to do something without physical meaning?

Like, it shouldn't matter if I use the exact same Fourier transform, or whether I do the analysis in R, SAS, or Python. It should give the same results. Stop focusing on code preservation and focus on making analysis platform-agnostic.
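
A sketch of that platform-agnostic idea (assuming NumPy; the array size is arbitrary): two independent implementations of the same transform should agree to numerical tolerance, and that cross-check matters more than preserving any one particular binary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)

# Direct discrete Fourier transform, straight from the definition
n = np.arange(len(x))
dft_matrix = np.exp(-2j * np.pi * np.outer(n, n) / len(x))
direct = dft_matrix @ x

# Library FFT of the same data
fast = np.fft.fft(x)

# Different code paths, same physics: they agree to floating-point tolerance
print(np.allclose(direct, fast))   # True
```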

@urusan @clacke @mdhughes @rafial If you can only do your analysis with ONE specific piece of code on ONE platform, how do you know your results aren't due to a bug?

Also it is going to be *helllll* for someone in 20 years. I know a grad student in physics who has to revisit some code his prof wrote when he was in grad school. On the upside, it is apparently well documented. On the downside, the documentation is all in Polish, as that is the prof's first language and where he went to grad school, whereas the grad student only speaks English.

Now nuclear physics is a bit of an exception, but asdfljk that sounds like hell.

@clacke @mdhughes @urusan @rafial To be fair, you could also compare new code to the published results. That way you can tell whether they both produce the same results (or close enough) over the range that has been published.

@clacke @Canageek @urusan @rafial And in a thousand years, will Polish still exist in any recognizable form? So now you've got two archaeology problems.

At least keep your language spec with the code so there's some Rosetta Stone.

@Canageek @mdhughes @urusan @rafial You want to first know that you are getting the exact same results in the part of the analysis that is supposed to be deterministic. *Then* you can upgrade things and see differences and identify whether any changes are because you broke something or because the new setup is better.

If the original setup had bugs, you want to know that, and you want to know why, and you won't be able to do that if you can't reproduce the results.

@clacke @Canageek @mdhughes @rafial Yes, this is exactly what I wanted to say.

I'd like to add that the norm of a Jupyter notebook additionally promotes the explanation of whatever you are doing in the code.

You're clearly supposed to interleave explanation (okay, now I'm doing this to the data) and code (here is exactly what I did in a format a computer can replicate).

This gives you the best of both worlds.
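
A toy sketch of that interleaved style (the data file and column names are hypothetical); in a real notebook the prose would live in markdown cells rather than comments:

```python
# (markdown cell) "Normalize the measured intensities by the monitor counts
#  before fitting, so runs of different length are comparable."
import pandas as pd

df = pd.read_csv("reflections.csv")        # hypothetical data file
df["I_norm"] = df["I"] / df["monitor"]     # the exact operation, replayable by a computer

# (markdown cell) "Sanity-check the normalized values before moving on."
print(df["I_norm"].describe())
```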

@clacke @Canageek @mdhughes @rafial It also helps one spot and correct errors. Maybe they meant to do one thing, but did another thing, and now all their downstream numbers are incorrect.

If all you have is their explanation (or worse, their final results), produced by hidden or unexplained code, then it's not as easy to correct them, and without a lot of work you don't know whether their reasoning is incorrect or whether the problem was caused by a software bug.

@clacke @Canageek @mdhughes @rafial Another critical factor here is language drift. Even if you ignore hardware and specific software differences, languages change over time. This is even true of natural language.

While I do think the current pace of change is excessively fast, even Fortran and C got new specs every decade or so.

You need to be able to run old language versions on your new hardware, and old languages means old dependencies.

@urusan @clacke @mdhughes @rafial Yeah, but aren't compilers for F77 and ANSI C still being made for everything under the sun?

Sheldrick has said the reason his code has been so easy to port to everything is that he only used a minimal subset of Fortran when he wrote it.

I'm interested in how things like Fortran and C and LaTeX have stayed so popular and usable after so long. I wanted to read the Nethack 1.0 guidebook and it came as a .tex file, so I just ran pdflatex on it and boom, usable PDF, something like 30 years later with no fuss. And yet try opening ANY OTHER file format from the 90s.

@Canageek @clacke @mdhughes @rafial Yeah, but those compilers don't just magically exist. They're being ported to new architectures and specific systems whenever they become available.

If this work wasn't being done by specialists, then these languages would eventually lose their relevance like so many other old languages.


@urusan @clacke @mdhughes @rafial That is fair. I'm from an area of science where you don't go into other people's work like that very often. We are far more likely to remake a compound and do all the measurements over again than we are to try to figure out what someone else did wrong.

If we find a difference between our results and the published ones, the older ones probably had an impurity or something, and it isn't really worth worrying about. Heck, sometimes you even get COLOUR differences when you make literature compounds, like white crystals vs red crystals.
