Mirror of https://github.com/git/git.git
commit 4dd1fbc7b1
Bigfile: teach "git add" to send a large file straight to a pack

When adding new content to the repository, we have always slurped the blob in its entirety in-core first, computed the object name, and compressed it into a loose object file. Handling large binary files (e.g. video and audio assets for games) has been problematic because of this design.

At the middle level of the "git add" callchain is an internal API, index_fd(), that takes an open file descriptor to read from the working tree file being added, along with its size. Teach it to call out to fast-import when adding a large blob.

The write-out codepath in entry.c::write_entry() should be taught to stream, instead of reading everything in core. This should not be so hard to implement, especially if we limit ourselves only to loose object files and non-delta representation in packfiles.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
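
A quick way to observe the new behavior by hand: a minimal sketch, assuming a scratch repository (the name "demo", the file name, and the 300 KB size below are arbitrary choices, not part of the commit):

git init demo && cd demo
git config core.bigFileThreshold 200k
dd if=/dev/zero of=big.bin bs=1k count=300   # 300 KB, above the 200k threshold
git add big.bin
ls .git/objects/           # only "info" and "pack": no loose-object fan-out dirs
ls .git/objects/pack/      # the blob went straight into a pack-*.pack (+ .idx)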

27 lines | 678 B | Bash | Executable file

#!/bin/sh
# Copyright (c) 2011, Google Inc.

test_description='adding and checking out large blobs'

. ./test-lib.sh

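# Lower the threshold to 200k so the test file counts as "large";
# dd seeks 2000 1k-blocks into the output before writing the two
# bytes "X\n", producing a (typically sparse) file just over 2 MB.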
test_expect_success setup '
	git config core.bigfilethreshold 200k &&
	echo X | dd of=large bs=1k seek=2000
'

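# Loose objects live at .git/objects/<2 hex digits>/<38 hex digits>;
# the glob in the second check matches any such path, so together the
# two checks assert the blob went into a pack and nowhere else.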
test_expect_success 'add a large file' '
	git add large &&
	# make sure we got a packfile and no loose objects
	test -f .git/objects/pack/pack-*.pack &&
	test ! -f .git/objects/??/??????????????????????????????????????
'

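# Register the same blob under a second path with update-index
# --cacheinfo, then check that path out; this exercises the write-out
# codepath on a blob that lives only in a pack.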
test_expect_success 'checkout a large file' '
	large=$(git rev-parse :large) &&
	git update-index --add --cacheinfo 100644 $large another &&
	git checkout another &&
	cmp large another ;# this must not be test_cmp
'

test_done
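
Assuming the usual git test harness layout (this script was added as t/t1050-large.sh), the test runs standalone from the t/ directory; -v prints each assertion as it runs:

cd t
sh ./t1050-large.sh -v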