Doing 'git clone' over either git:// or http:// (backed by git-http-backend) causes high memory usage on the server for big repositories, mostly during the 'compressing objects' phase. This means you can usually take down any git server hosting big repositories simply by running several concurrent 'git clone' operations against those repositories at the same time.
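One partial mitigation I'm aware of is capping how much memory git's pack generation is allowed to use on the server side. These are real git config keys, but the values below are guesses that would need tuning per server, and the repository path is just an example:

```shell
# Run inside the served repository, e.g. /srv/git/myrepo.git (hypothetical path).
git config pack.windowMemory 100m        # cap delta-search memory per thread
git config pack.deltaCacheSize 64m       # cap the delta base cache
git config pack.threads 1                # fewer threads, lower peak memory
git config core.packedGitWindowSize 32m  # size of each mmap window into packs
git config core.packedGitLimit 256m      # total bytes of packs mapped at once
```

This bounds per-clone memory somewhat, but it doesn't stop many concurrent clones from adding up.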
Rather than live with that, I'd like to keep the benefits of git-http-backend for all users except those running 'git clone'. Does anyone have ideas on how to do that?
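One crude idea I've considered (an untested sketch, and the git-http-backend path is the Debian default, so adjust for your distribution): wrap git-http-backend in a script that caps its address space with ulimit, so a single expensive clone fails with an out-of-memory error instead of exhausting the whole machine:

```shell
#!/bin/sh
# Hypothetical CGI wrapper: limit virtual memory of each
# git-http-backend invocation. 512 MB is an arbitrary guess.
ulimit -v 524288
exec /usr/lib/git-core/git-http-backend "$@"
```

Note this doesn't distinguish 'git clone' from 'git fetch'; it only bounds the damage any single request can do.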
OTOH, maybe I am looking for the wrong solution. git supposedly does mmap() of the pack files, and the only thing I need to do is ensure the packs are suitably small so they can be "swapped out" (which, for mmapped files, amounts to being discarded from memory). Or, since git does an mmap() of an uncompressed temporary file, maybe I can get git to store uncompressed data instead, allowing it to mmap the packs directly?
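If small packs do turn out to be the right knob, git already has a config key for capping on-disk pack size; a sketch (the 64m value is an arbitrary guess):

```shell
# Cap the size of generated pack files so each pack can be paged
# out independently, then repack the repository with the new limit.
git config pack.packSizeLimit 64m
git repack -a -d
```

As far as I know this only affects packs written to disk, not the pack streamed to a client during clone, so it would help the mmap side of the problem rather than the 'compressing objects' phase.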
I wouldn't be surprised if I am entirely on the wrong track here. If anybody has ideas on how to run a git server with big repositories without needing gazillions of memory, please point me in the right direction in the comments section below.