#!/bin/sh
#
# Copyright (c) 2009 Ilari Liusvaara
#
test_description='Test run command'
. ./test-lib.sh

cat >hello-script <<-EOF
	#!$SHELL_PATH
	cat hello-script
	EOF
>empty

test_expect_success 'start_command reports ENOENT' '
	test-run-command start-command-ENOENT ./does-not-exist
'

test_expect_success 'run_command can run a command' '
	cat hello-script >hello.sh &&
	chmod +x hello.sh &&
	test-run-command run-command ./hello.sh >actual 2>err &&
	test_cmp hello-script actual &&
	test_cmp empty err
'

test_expect_success !MINGW 'run_command can run a script without a #! line' '
	cat >hello <<-\EOF &&
	cat hello-script
	EOF
	chmod +x hello &&
	test-run-command run-command ./hello >actual 2>err &&
	test_cmp hello-script actual &&
	test_cmp empty err
'

test_expect_success POSIXPERM 'run_command reports EACCES' '
	cat hello-script >hello.sh &&
	chmod -x hello.sh &&
	test_must_fail test-run-command run-command ./hello.sh 2>err &&
	grep "fatal: cannot exec.*hello.sh" err
'
tests: correct misuses of POSIXPERM

POSIXPERM requires that a later call to stat(2) (hence "ls -l")
faithfully reproduces what an earlier chmod(2) did. Some filesystems
cannot satisfy this.

SANITY requires that a file or a directory is indeed accessible (or
inaccessible) when its permission bits say it ought to be accessible
(or inaccessible). Running tests as root loses this prerequisite for
obvious reasons.

Fix a few tests that misuse POSIXPERM.

t0061-run-command.sh has two uses of POSIXPERM.

- One checks that an attempt to execute a file that is marked as
  unexecutable results in a failure with EACCES; I do not think having
  root-ness or any other capability that busts the filesystem
  permission mode bits will make you run an unexecutable file, so this
  should be left as-is. The test does not have anything to do with
  SANITY.

- The other one expects 'git nitfol' to run the alias when alias.nitfol
  is defined and a directory on the PATH is marked as unreadable and
  unsearchable. I _think_ the test tries to reject the alternative
  expectation that we should refuse to run the alias: doing so would
  break the "no alias may mask a command" rule whenever a file
  'git-nitfol' exists in the unreadable directory, yet we cannot even
  determine whether that is the case. Under !SANITY, which busts the
  permission bits, this test no longer checks that, so it must be
  protected with SANITY.

t1509-root-worktree.sh expects to be run on a / that is writable by
the user and sees if Git behaves "sensibly" when /.git is the
repository governing a worktree that is the whole filesystem, and also
if Git behaves "sensibly" when / itself is a bare repository with
refs, objects, and friends (I find the definition of "behaves sensibly"
under these conditions hard to fathom, but it is a different matter).

The implementation of the test is very much problematic.

- It requires POSIXPERM, but it does not do chmod or check modes in
  any way.

- It runs "rm /*" and "rm -fr /refs /objects ..." in one of the tests,
  and also does "cd / && git init --bare". If done on a live system
  that takes advantage of the "feature" being tested, these will
  obviously clobber the system, but there is no guard against such a
  breakage.

- It uses "test $UID = 0" to see root-ness, which now should be
  spelled "! test_have_prereq NOT_ROOT".

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-16 19:32:09 +01:00
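As a sketch of the distinction, a hypothetical test (illustrative only,
not part of this file) that relies on permission bits both persisting
and being enforced would declare both prerequisites: POSIXPERM
guarantees the chmod below sticks, and only SANITY guarantees it is
actually enforced, which is not the case when running as root.

test_expect_success POSIXPERM,SANITY 'permission bits are enforced (illustration)' '
	mkdir unreadable &&
	test_when_finished "chmod u+rwx unreadable && rm -fr unreadable" &&
	chmod a-rx unreadable &&
	! ls unreadable
'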
test_expect_success POSIXPERM,SANITY 'unreadable directory in PATH' '
	mkdir local-command &&
	test_when_finished "chmod u+rwx local-command && rm -fr local-command" &&
	git config alias.nitfol "!echo frotz" &&
	chmod a-rx local-command &&
	(
		PATH=./local-command:$PATH &&
		git nitfol >actual
	) &&
	echo frotz >expect &&
	test_cmp expect actual
'
run-command: add an asynchronous parallel child processor

This allows running external commands in parallel with ordered output
on stderr.

If we run external commands in parallel we cannot pipe their output
directly to our stdout/err, as it would mix up. So each process's
output flows through a pipe, which we buffer. One subprocess can be
piped directly to our stdout/err for low-latency feedback to the user.

Example: Let's assume we have 5 submodules A, B, C, D, E, and each
fetch takes a different amount of time as the submodules vary in size;
then the output of fetches in sequential order might look like this:

 time -->
 output: |---A---| |-B-| |-------C-------| |-D-| |-E-|

When we schedule these submodules into at most two parallel processes,
a schedule and sample output over time may look like this:

 process 1: |---A---| |-D-| |-E-|
 process 2: |-B-|     |-------C-------|
 output:    |---A---|B|---C-------|DE

So A will be perceived as if it ran normally in the single-child
version. As B has finished by the time A is done, we can dump its
whole progress buffer on stderr, so that it looks like it finished in
no time. Once that is done, C is determined to be the visible child
and its progress is reported in real time.

This way of ordering output is really good for human consumption, as
it only changes the timing, not the actual output. For machine
consumption the output needs to be prepared by the tasks, with either
a prefix per line or per block indicating whose task's output is
displayed, because the output order may not follow the original
sequential ordering:

 |----A----| |--B--| |-C-|

will be scheduled to run all in parallel:

 process 1: |----A----|
 process 2: |--B--|
 process 3: |-C-|
 output:    |----A----|CB

This happens because C finished before B did, so it is queued for
output before B.

To detect when a child has finished executing, we check, interleaved
with other actions (such as checking the liveliness of children or
starting new processes), whether the stderr pipe still exists. Once a
child has closed its stderr stream, we assume it is terminating very
soon, and use `finish_command()` from the single external process
execution interface to collect the exit status.

By maintaining the strong assumption that stderr stays open until the
very end of a child process, we avoid other hassle such as an
implementation using `waitpid(-1)`, which is not available on Windows.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-12-16 01:04:10 +01:00
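For contrast, here is a minimal sketch (hypothetical, not part of this
file) of the behavior the processor is designed to avoid: children
sharing stderr directly may interleave their lines arbitrarily, which
is why each child's output is routed through a buffered pipe instead.

for task in A B C
do
	# each background child writes two lines straight to stderr
	sh -c 'echo "$0: line 1" >&2; echo "$0: line 2" >&2' "$task" &
done
wait	# lines from A, B, and C may arrive interleaved on stderr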
cat >expect <<-EOF
	preloaded output of a child
	Hello
	World
	preloaded output of a child
	Hello
	World
	preloaded output of a child
	Hello
	World
	preloaded output of a child
	Hello
	World
	EOF

test_expect_success 'run_command runs in parallel with more jobs available than tasks' '
	test-run-command run-command-parallel 5 sh -c "printf \"%s\n%s\n\" Hello World" 2>actual &&
	test_cmp expect actual
'

test_expect_success 'run_command runs in parallel with as many jobs as tasks' '
	test-run-command run-command-parallel 4 sh -c "printf \"%s\n%s\n\" Hello World" 2>actual &&
	test_cmp expect actual
'

test_expect_success 'run_command runs in parallel with more tasks than jobs available' '
	test-run-command run-command-parallel 3 sh -c "printf \"%s\n%s\n\" Hello World" 2>actual &&
	test_cmp expect actual
'
cat >expect <<-EOF
	preloaded output of a child
	asking for a quick stop
	preloaded output of a child
	asking for a quick stop
	preloaded output of a child
	asking for a quick stop
	EOF

test_expect_success 'run_command is asked to abort gracefully' '
	test-run-command run-command-abort 3 false 2>actual &&
	test_cmp expect actual
'
cat >expect <<-EOF
	no further jobs available
	EOF

test_expect_success 'run_command reports no further jobs' '
	test-run-command run-command-no-jobs 3 sh -c "printf \"%s\n%s\n\" Hello World" 2>actual &&
	test_cmp expect actual
'
test_done