I suppose that might help some. But I *think* the only way you'll really be able to deal with a dataset that large is by not attempting to store all the hashes in memory at the same time. Something like the `read_line` method should let you read one line at a time into a buffer, check it for matches, and then clear and reuse that same buffer for the next line. https://doc.rust-lang.org/std/io/trait.BufRead.html#method.read_line
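A minimal sketch of what I mean, assuming the hashes live one per line in a file and you have some set of target hashes to look for (the file path and `targets` set are placeholders for whatever your actual data looks like):

```rust
use std::collections::HashSet;
use std::fs::File;
use std::io::{self, BufRead, BufReader};

/// Count how many lines in the file match one of the target hashes,
/// reusing a single String buffer instead of loading the whole file.
fn count_matches(path: &str, targets: &HashSet<String>) -> io::Result<usize> {
    let file = File::open(path)?;
    let mut reader = BufReader::new(file);
    let mut line = String::new();
    let mut matches = 0;
    loop {
        line.clear(); // reuse the buffer's allocation for the next line
        let bytes_read = reader.read_line(&mut line)?;
        if bytes_read == 0 {
            break; // EOF
        }
        // trim_end strips the trailing newline before comparing
        if targets.contains(line.trim_end()) {
            matches += 1;
        }
    }
    Ok(matches)
}
```

This keeps memory usage proportional to the longest line rather than the whole dataset, and `line.clear()` means the buffer's allocation gets reused instead of reallocated on every iteration.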