Anyone have any tips for working with large text files? (~30GB)
I'm assuming that pretty much any normal text editor is out of the question? Is a combination of sed and grep my best bet?
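Something like this is what I had in mind, since both stream the file a line at a time and nothing has to fit in memory (huge.txt is just a placeholder name):

    # keep only matching lines; constant memory
    grep 'needle' huge.txt > matches.txt
    # print a specific line range without loading the whole file
    sed -n '1000000,1000020p' huge.txt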
This almost seems like a case for Ed, man!
https://www.gnu.org/fun/jokes/ed-msg.html
@codesections Use grep to split the lines into separate files by first character?
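Roughly this, assuming the lines are hex hashes so the first character is 0-9 or a-f (one pass per character, so it's slow, but each pass is constant memory):

    for c in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
        grep "^$c" huge.txt > "part_$c.txt"
    done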
Use /usr/bin/split to section the data into separate files?
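For example (the chunk size is arbitrary):

    # 10 million lines per chunk, named chunk_aa, chunk_ab, ...
    split -l 10000000 huge.txt chunk_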
@drwho @codesections This is the way to go.
@codesections @drwho Note that you could store it virtually, without using any storage at all.
Create folders named with a single letter, nested one inside the next, until the path spells out the complete hash.
Zero space used in the filesystem!
You won't need RAM anymore, either!
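E.g. to "store" a (hypothetical) hash deadbeef:

    mkdir -p d/e/a/d/b/e/e/f    # the path itself is the data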

It reminds me of this story: http://www.patrickcraig.co.uk/other/compression.php
@lord @codesections That's a really interesting idea... wouldn't you eventually run out of inodes, though?
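(You can check your budget with df -i; on most filesystems it's likely far smaller than the number of directories a 30GB hash list would need.)

    df -i /    # shows inodes used/free for the filesystem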