
Anyone have any tips for working with large text files? (~30 GB)

I'm assuming that pretty much any normal text editor is out of the question? Is a combination of sed and grep my best bet?
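For what it's worth, sed and grep are a solid bet precisely because they stream line by line instead of loading the whole file, so memory use stays flat no matter the file size. A minimal sketch (the filename `big.txt` is just a stand-in for the real 30 GB file):

```shell
# Create a tiny sample file standing in for the huge one.
printf 'alpha 1\nbeta 2\nalpha 3\n' > big.txt

# Both tools process one line at a time, never the whole file:
grep '^alpha' big.txt            # prints the two alpha lines
sed -n 's/^beta //p' big.txt     # prints: 2
```

The same commands run unchanged on a 30 GB input; only wall-clock time grows.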

This almost seems like a case for Ed, man!
gnu.org/fun/jokes/ed-msg.html

@codesections Use grep to separate them into separate files by first character?
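That suggestion could look something like this, assuming the lines are plain words and the relevant first characters are known (filenames here are hypothetical):

```shell
# Sample input standing in for the big file.
printf 'apple\nbanana\navocado\ncherry\n' > words.txt

# One pass per character: pull lines starting with each letter
# into their own file (fine when the alphabet is small and known).
for c in a b c; do
  grep "^$c" words.txt > "part_$c.txt"
done
```

Each `part_*.txt` then ends up small enough to open in an ordinary editor.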

Use /usr/bin/split to section the data into separate files?
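A quick sketch of that approach (2-line chunks for the demo; on the real file you'd use something like `split -l 1000000` or `split -C 1G`, which caps chunk size in bytes without cutting a line in half):

```shell
# Stand-in for the 30 GB file.
printf 'l1\nl2\nl3\nl4\nl5\n' > big.txt

# Split into 2-line pieces: chunk_aa, chunk_ab, chunk_ac
split -l 2 big.txt chunk_
```

Unlike the per-letter grep approach, this needs only a single pass and no knowledge of the data.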

@lord @drwho Yeah, that does seem like the most promising option. I'd thought of splitting the file, but splitting by first letter is much better than splitting arbitrarily. Thanks, both!

@codesections @drwho Note that you could store it virtually, without any storage: just create nested folders named with one letter each, over and over, until the path spells out the complete hash.

0 space used in the filesystem!
You won't need RAM anymore either!

:blobamused:

It reminds me of this story: patrickcraig.co.uk/other/compr


@lord @drwho (also, I've seen that story before. It's a good one!)