#include "cache.h"
#include "run-command.h"
#include "exec_cmd.h"
#include "sigchain.h"
#include "argv-array.h"
#include "thread-utils.h"
#include "strbuf.h"
#include "string-list.h"
#include "quote.h"
void child_process_init(struct child_process *child)
{
memset(child, 0, sizeof(*child));
argv_array_init(&child->args);
argv_array_init(&child->env_array);
}
void child_process_clear(struct child_process *child)
{
argv_array_clear(&child->args);
argv_array_clear(&child->env_array);
}
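/*
 * Children registered via clean_on_exit are kept on this singly linked
 * list so the atexit and signal handlers below can kill them (and, for
 * wait_after_clean children, reap them) before the parent goes away.
 */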
struct child_to_clean {
pid_t pid;
struct child_process *process;
struct child_to_clean *next;
};
static struct child_to_clean *children_to_clean;
static int installed_child_cleanup_handler;
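/*
 * Send 'sig' to every registered child, running any clean_on_exit_handler
 * first and reaping children marked wait_after_clean. When in_signal is
 * set we are inside a signal handler, so the exit handlers and the free()
 * calls are skipped because they are not async-signal-safe.
 */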
static void cleanup_children(int sig, int in_signal)
{
struct child_to_clean *children_to_wait_for = NULL;
while (children_to_clean) {
struct child_to_clean *p = children_to_clean;
children_to_clean = p->next;
if (p->process && !in_signal) {
struct child_process *process = p->process;
if (process->clean_on_exit_handler) {
trace_printf(
"trace: run_command: running exit handler for pid %"
PRIuMAX, (uintmax_t)p->pid
);
process->clean_on_exit_handler(process);
}
}
kill(p->pid, sig);
if (p->process && p->process->wait_after_clean) {
p->next = children_to_wait_for;
children_to_wait_for = p;
} else {
if (!in_signal)
free(p);
}
}
while (children_to_wait_for) {
struct child_to_clean *p = children_to_wait_for;
children_to_wait_for = p->next;
while (waitpid(p->pid, NULL, 0) < 0 && errno == EINTR)
; /* spin waiting for process exit or error */
if (!in_signal)
free(p);
}
}
static void cleanup_children_on_signal(int sig)
{
cleanup_children(sig, 1);
sigchain_pop(sig);
raise(sig);
}
static void cleanup_children_on_exit(void)
{
cleanup_children(SIGTERM, 0);
}
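/*
 * Put a child on the cleanup list; the first call also installs the
 * atexit and signal handlers that will kill registered children.
 */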
static void mark_child_for_cleanup(pid_t pid, struct child_process *process)
{
struct child_to_clean *p = xmalloc(sizeof(*p));
p->pid = pid;
p->process = process;
p->next = children_to_clean;
children_to_clean = p;
if (!installed_child_cleanup_handler) {
atexit(cleanup_children_on_exit);
sigchain_push_common(cleanup_children_on_signal);
installed_child_cleanup_handler = 1;
}
}
static void clear_child_for_cleanup(pid_t pid)
{
struct child_to_clean **pp;
for (pp = &children_to_clean; *pp; pp = &(*pp)->next) {
struct child_to_clean *clean_me = *pp;
if (clean_me->pid == pid) {
*pp = clean_me->next;
free(clean_me);
return;
}
}
}
static inline void close_pair(int fd[2])
{
close(fd[0]);
close(fd[1]);
}
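/*
 * Return non-zero if 'name' is a regular file with the owner-execute bit
 * set. On Windows, which has no such bit, judge by the ".exe" extension
 * or by a leading "#!" line instead.
 */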
int is_executable(const char *name)
{
struct stat st;
if (stat(name, &st) || /* stat, not lstat */
!S_ISREG(st.st_mode))
return 0;
#if defined(GIT_WINDOWS_NATIVE)
/*
* On Windows there is no executable bit. The file extension
* indicates whether it can be run as an executable, and Git
* has special-handling to detect scripts and launch them
* through the indicated script interpreter. We test for the
* file extension first because virus scanners may make
* it quite expensive to open many files.
*/
if (ends_with(name, ".exe"))
return S_IXUSR;
{
/*
* Now that we know it does not have an executable extension,
* peek into the file instead.
*/
char buf[3] = { 0 };
int n;
int fd = open(name, O_RDONLY);
st.st_mode &= ~S_IXUSR;
if (fd >= 0) {
n = read(fd, buf, 2);
if (n == 2)
/* look for a she-bang */
if (!strcmp(buf, "#!"))
st.st_mode |= S_IXUSR;
close(fd);
}
}
#endif
return st.st_mode & S_IXUSR;
}
/*
* Search $PATH for a command. This emulates the path search that
* execvp would perform, without actually executing the command so it
* can be used before fork() to prepare to run a command using
* execve() or after execvp() to diagnose why it failed.
*
* The caller should ensure that file contains no directory
* separators.
*
* Returns the path to the command, as found in $PATH or NULL if the
* command could not be found. The caller inherits ownership of the memory
* used to store the resultant path.
*
* This should not be used on Windows, where the $PATH search rules
* are more complicated (e.g., a search for "foo" should find
* "foo.exe").
*/
static char *locate_in_PATH(const char *file)
{
const char *p = getenv("PATH");
struct strbuf buf = STRBUF_INIT;
if (!p || !*p)
return NULL;
while (1) {
const char *end = strchrnul(p, ':');
strbuf_reset(&buf);
/* POSIX specifies an empty entry as the current directory. */
if (end != p) {
strbuf_add(&buf, p, end - p);
strbuf_addch(&buf, '/');
}
strbuf_addstr(&buf, file);
if (is_executable(buf.buf))
return strbuf_detach(&buf, NULL);
if (!*end)
break;
p = end + 1;
}
strbuf_release(&buf);
return NULL;
}
static int exists_in_PATH(const char *file)
{
char *r = locate_in_PATH(file);
free(r);
return r != NULL;
}
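/*
 * execvp() with a friendlier errno on failure: for commands without a
 * '/', EACCES is kept only if the file really exists somewhere in $PATH;
 * otherwise it (like ENOTDIR) is turned into ENOENT, which reads as the
 * more intuitive "No such file or directory".
 */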
int sane_execvp(const char *file, char * const argv[])
{
if (!execvp(file, argv))
return 0; /* cannot happen ;-) */
/*
* When a command can't be found because one of the directories
* listed in $PATH is unsearchable, execvp reports EACCES, but
* careful usability testing (read: analysis of occasional bug
* reports) reveals that "No such file or directory" is more
* intuitive.
*
* We avoid commands with "/", because execvp will not do $PATH
* lookups in that case.
*
* The reassignment of EACCES to errno looks like a no-op below,
* but we need to protect against exists_in_PATH overwriting errno.
*/
if (errno == EACCES && !strchr(file, '/'))
errno = exists_in_PATH(file) ? EACCES : ENOENT;
else if (errno == ENOTDIR && !strchr(file, '/'))
errno = ENOENT;
return -1;
}
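/*
 * Build the argv for a use_shell command: if argv[0] contains shell
 * metacharacters it is wrapped as 'sh -c "<cmd> \"$@\""' with the
 * original argv appended to become the positional parameters; a plain
 * command skips the shell and keeps its argv unchanged.
 */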
static const char **prepare_shell_cmd(struct argv_array *out, const char **argv)
{
if (!argv[0])
die("BUG: shell command is empty");
if (strcspn(argv[0], "|&;<>()$`\\\"' \t\n*?[#~=%") != strlen(argv[0])) {
#ifndef GIT_WINDOWS_NATIVE
argv_array_push(out, SHELL_PATH);
#else
argv_array_push(out, "sh");
#endif
argv_array_push(out, "-c");
/*
* If we have no extra arguments, we do not even need to
* bother with the "$@" magic.
*/
if (!argv[1])
argv_array_push(out, argv[0]);
else
argv_array_pushf(out, "%s \"$@\"", argv[0]);
}
argv_array_pushv(out, argv);
return out->argv;
}
#ifndef GIT_WINDOWS_NATIVE
static int child_notifier = -1;
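/*
 * Errors that occur in the child between fork() and exec() are reported
 * to the parent as a struct child_err written to child_notifier (the
 * write end of a pipe); the parent turns them into messages in
 * child_err_spew().
 */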
enum child_errcode {
CHILD_ERR_CHDIR,
CHILD_ERR_DUP2,
CHILD_ERR_CLOSE,
CHILD_ERR_SIGPROCMASK,
CHILD_ERR_ENOENT,
CHILD_ERR_SILENT,
CHILD_ERR_ERRNO
};
struct child_err {
enum child_errcode err;
int syserr; /* errno */
};
static void child_die(enum child_errcode err)
{
struct child_err buf;
buf.err = err;
buf.syserr = errno;
/* write(2) on buf smaller than PIPE_BUF (min 512) is atomic: */
xwrite(child_notifier, &buf, sizeof(buf));
_exit(1);
}
static void child_dup2(int fd, int to)
{
if (dup2(fd, to) < 0)
child_die(CHILD_ERR_DUP2);
}
static void child_close(int fd)
{
if (close(fd))
child_die(CHILD_ERR_CLOSE);
}
static void child_close_pair(int fd[2])
{
child_close(fd[0]);
child_close(fd[1]);
}
/*
 * parent will make it look like the child spewed a fatal error and died;
 * this is needed to prevent changes to t0061.
 */
static void fake_fatal(const char *err, va_list params)
{
vreportf("fatal: ", err, params);
}
static void child_error_fn(const char *err, va_list params)
{
const char msg[] = "error() should not be called in child\n";
xwrite(2, msg, sizeof(msg) - 1);
}
static void child_warn_fn(const char *err, va_list params)
{
const char msg[] = "warn() should not be called in child\n";
xwrite(2, msg, sizeof(msg) - 1);
}
static void NORETURN child_die_fn(const char *err, va_list params)
{
const char msg[] = "die() should not be called in child\n";
xwrite(2, msg, sizeof(msg) - 1);
_exit(2);
}
/* this runs in the parent process */
static void child_err_spew(struct child_process *cmd, struct child_err *cerr)
{
static void (*old_errfn)(const char *err, va_list params);
old_errfn = get_error_routine();
set_error_routine(fake_fatal);
errno = cerr->syserr;
switch (cerr->err) {
case CHILD_ERR_CHDIR:
error_errno("exec '%s': cd to '%s' failed",
cmd->argv[0], cmd->dir);
break;
case CHILD_ERR_DUP2:
error_errno("dup2() in child failed");
break;
case CHILD_ERR_CLOSE:
error_errno("close() in child failed");
break;
case CHILD_ERR_SIGPROCMASK:
error_errno("sigprocmask failed restoring signals");
break;
case CHILD_ERR_ENOENT:
error_errno("cannot run %s", cmd->argv[0]);
break;
case CHILD_ERR_SILENT:
break;
case CHILD_ERR_ERRNO:
error_errno("cannot exec '%s'", cmd->argv[0]);
break;
}
set_error_routine(old_errfn);
}
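/*
 * Build the final exec argv for a child_process: argv[0] is SHELL_PATH
 * (the ENOEXEC fallback interpreter) and the real command starts at
 * argv[1], with "git" prepended for git_cmd and shell wrapping applied
 * for use_shell. A $PATH lookup resolves argv[1] when it has no '/'.
 */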
static void prepare_cmd(struct argv_array *out, const struct child_process *cmd)
{
if (!cmd->argv[0])
die("BUG: command is empty");
/*
* Add SHELL_PATH so in the event exec fails with ENOEXEC we can
* attempt to interpret the command with 'sh'.
*/
argv_array_push(out, SHELL_PATH);
if (cmd->git_cmd) {
argv_array_push(out, "git");
argv_array_pushv(out, cmd->argv);
} else if (cmd->use_shell) {
prepare_shell_cmd(out, cmd->argv);
} else {
argv_array_pushv(out, cmd->argv);
}
/*
 * If there are no '/' characters in the command then perform a path
 * lookup and use the resolved path as the command to exec. If there
 * are '/' characters, or if the command wasn't found in the path,
 * have exec attempt to invoke the command directly.
 */
if (!strchr(out->argv[1], '/')) {
char *program = locate_in_PATH(out->argv[1]);
if (program) {
free((char *)out->argv[1]);
out->argv[1] = program;
}
}
}
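/*
 * Construct the child's environment: start from the current environ,
 * then merge 'deltaenv', where "key=value" entries insert or replace
 * variables and bare "key" entries remove them. Returns a NULL-terminated
 * array allocated for (and owned by) the caller.
 */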
static char **prep_childenv(const char *const *deltaenv)
{
extern char **environ;
char **childenv;
struct string_list env = STRING_LIST_INIT_DUP;
struct strbuf key = STRBUF_INIT;
const char *const *p;
int i;
/* Construct a sorted string list consisting of the current environ */
for (p = (const char *const *) environ; p && *p; p++) {
const char *equals = strchr(*p, '=');
if (equals) {
strbuf_reset(&key);
strbuf_add(&key, *p, equals - *p);
string_list_append(&env, key.buf)->util = (void *) *p;
} else {
string_list_append(&env, *p)->util = (void *) *p;
}
}
string_list_sort(&env);
/* Merge in 'deltaenv' with the current environ */
for (p = deltaenv; p && *p; p++) {
const char *equals = strchr(*p, '=');
if (equals) {
/* ('key=value'), insert or replace entry */
strbuf_reset(&key);
strbuf_add(&key, *p, equals - *p);
string_list_insert(&env, key.buf)->util = (void *) *p;
} else {
/* otherwise ('key') remove existing entry */
string_list_remove(&env, *p, 0);
}
}
/* Create an array of 'char *' to be used as the childenv */
ALLOC_ARRAY(childenv, env.nr + 1);
for (i = 0; i < env.nr; i++)
childenv[i] = env.items[i].util;
childenv[env.nr] = NULL;
string_list_clear(&env, 0);
strbuf_release(&key);
return childenv;
}
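/*
 * atfork_prepare() blocks all signals (and, with pthreads, disables
 * cancellation) so the fork() below cannot be interrupted in an
 * inconsistent state; atfork_parent() restores the parent afterwards,
 * while the child restores its own signal mask just before exec.
 */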
struct atfork_state {
#ifndef NO_PTHREADS
int cs;
#endif
sigset_t old;
};
#ifndef NO_PTHREADS
static void bug_die(int err, const char *msg)
{
if (err) {
errno = err;
die_errno("BUG: %s", msg);
}
}
#endif
static void atfork_prepare(struct atfork_state *as)
{
sigset_t all;
if (sigfillset(&all))
die_errno("sigfillset");
#ifdef NO_PTHREADS
if (sigprocmask(SIG_SETMASK, &all, &as->old))
die_errno("sigprocmask");
#else
bug_die(pthread_sigmask(SIG_SETMASK, &all, &as->old),
"blocking all signals");
bug_die(pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &as->cs),
"disabling cancellation");
#endif
}
static void atfork_parent(struct atfork_state *as)
{
#ifdef NO_PTHREADS
if (sigprocmask(SIG_SETMASK, &as->old, NULL))
die_errno("sigprocmask");
#else
bug_die(pthread_setcancelstate(as->cs, NULL),
"re-enabling cancellation");
bug_die(pthread_sigmask(SIG_SETMASK, &as->old, NULL),
"restoring signal mask");
#endif
}
#endif /* GIT_WINDOWS_NATIVE */
static inline void set_cloexec(int fd)
{
int flags = fcntl(fd, F_GETFD);
if (flags >= 0)
fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}
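/*
 * Wait for 'pid' and map its status onto a run_command()-style result:
 * the exit code for a normal exit, 128 + the signal number for signal
 * death, or -1 with errno set if waitpid() itself failed. With in_signal
 * set we only reap the child and skip all reporting and cleanup that is
 * not async-signal-safe.
 */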
static int wait_or_whine(pid_t pid, const char *argv0, int in_signal)
{
int status, code = -1;
pid_t waiting;
int failed_errno = 0;
while ((waiting = waitpid(pid, &status, 0)) < 0 && errno == EINTR)
; /* nothing */
if (in_signal)
return 0;
if (waiting < 0) {
failed_errno = errno;
error_errno("waitpid for %s failed", argv0);
} else if (waiting != pid) {
error("waitpid is confused (%s)", argv0);
} else if (WIFSIGNALED(status)) {
code = WTERMSIG(status);
if (code != SIGINT && code != SIGQUIT && code != SIGPIPE)
error("%s died of signal %d", argv0, code);
/*
* This return value is chosen so that code & 0xff
* mimics the exit code that a POSIX shell would report for
* a program that died from this signal.
*/
code += 128;
} else if (WIFEXITED(status)) {
code = WEXITSTATUS(status);
} else {
error("waitpid is confused (%s)", argv0);
}
clear_child_for_cleanup(pid);
errno = failed_errno;
return code;
}
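/*
 * Append the command's environment changes to the trace line as a
 * shell-like "unset X Y; A=B" prefix, skipping variables whose value
 * already matches the current environment.
 */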
static void trace_add_env(struct strbuf *dst, const char *const *deltaenv)
{
struct string_list envs = STRING_LIST_INIT_DUP;
const char *const *e;
int i;
int printed_unset = 0;
/* Last one wins, see run-command.c:prep_childenv() for context */
for (e = deltaenv; e && *e; e++) {
struct strbuf key = STRBUF_INIT;
char *equals = strchr(*e, '=');
if (equals) {
strbuf_add(&key, *e, equals - *e);
string_list_insert(&envs, key.buf)->util = equals + 1;
} else {
string_list_insert(&envs, *e)->util = NULL;
}
strbuf_release(&key);
}
/* "unset X Y...;" */
for (i = 0; i < envs.nr; i++) {
const char *var = envs.items[i].string;
const char *val = envs.items[i].util;
if (val || !getenv(var))
continue;
if (!printed_unset) {
strbuf_addstr(dst, " unset");
printed_unset = 1;
}
strbuf_addf(dst, " %s", var);
}
if (printed_unset)
strbuf_addch(dst, ';');
/* ... followed by "A=B C=D ..." */
for (i = 0; i < envs.nr; i++) {
const char *var = envs.items[i].string;
const char *val = envs.items[i].util;
const char *oldval;
if (!val)
continue;
oldval = getenv(var);
if (oldval && !strcmp(val, oldval))
continue;
strbuf_addf(dst, " %s=", var);
sq_quote_buf_pretty(dst, val);
}
string_list_clear(&envs, 0);
}
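/*
 * Emit a "trace: run_command: ..." line showing the working directory,
 * the environment delta and the quoted argv when tracing is enabled.
 */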
static void trace_run_command(const struct child_process *cp)
{
struct strbuf buf = STRBUF_INIT;
if (!trace_want(&trace_default_key))
return;
strbuf_addstr(&buf, "trace: run_command:");
if (cp->dir) {
strbuf_addstr(&buf, " cd ");
sq_quote_buf_pretty(&buf, cp->dir);
strbuf_addch(&buf, ';');
}
/*
* The caller is responsible for initializing cp->env from
* cp->env_array if needed. We only check one place.
*/
if (cp->env)
trace_add_env(&buf, cp->env);
if (cp->git_cmd)
strbuf_addstr(&buf, " git");
sq_quote_argv_pretty(&buf, cp->argv);
trace_printf("%s", buf.buf);
strbuf_release(&buf);
}
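/*
 * Set up the requested stdin/stdout/stderr pipes and spawn the command
 * described by 'cmd'. On POSIX systems this forks and execs; anything
 * that goes wrong between fork() and exec() reaches the parent through
 * the notify pipe and is reported by child_err_spew().
 */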
int start_command(struct child_process *cmd)
{
int need_in, need_out, need_err;
int fdin[2], fdout[2], fderr[2];
int failed_errno;
char *str;
if (!cmd->argv)
cmd->argv = cmd->args.argv;
if (!cmd->env)
cmd->env = cmd->env_array.argv;
/*
* In case of errors we must keep the promise to close FDs
* that have been passed in via ->in and ->out.
*/
need_in = !cmd->no_stdin && cmd->in < 0;
if (need_in) {
if (pipe(fdin) < 0) {
failed_errno = errno;
if (cmd->out > 0)
close(cmd->out);
str = "standard input";
goto fail_pipe;
}
cmd->in = fdin[1];
}
need_out = !cmd->no_stdout
&& !cmd->stdout_to_stderr
&& cmd->out < 0;
if (need_out) {
if (pipe(fdout) < 0) {
failed_errno = errno;
if (need_in)
close_pair(fdin);
else if (cmd->in)
close(cmd->in);
str = "standard output";
goto fail_pipe;
}
cmd->out = fdout[0];
}
need_err = !cmd->no_stderr && cmd->err < 0;
if (need_err) {
if (pipe(fderr) < 0) {
failed_errno = errno;
if (need_in)
close_pair(fdin);
else if (cmd->in)
close(cmd->in);
if (need_out)
close_pair(fdout);
else if (cmd->out)
close(cmd->out);
str = "standard error";
fail_pipe:
error("cannot create %s pipe for %s: %s",
str, cmd->argv[0], strerror(failed_errno));
child_process_clear(cmd);
errno = failed_errno;
return -1;
}
cmd->err = fderr[0];
}
trace_run_command(cmd);
fflush(NULL);
#ifndef GIT_WINDOWS_NATIVE
{
int notify_pipe[2];
int null_fd = -1;
char **childenv;
struct argv_array argv = ARGV_ARRAY_INIT;
struct child_err cerr;
struct atfork_state as;
if (pipe(notify_pipe))
notify_pipe[0] = notify_pipe[1] = -1;
if (cmd->no_stdin || cmd->no_stdout || cmd->no_stderr) {
null_fd = open("/dev/null", O_RDWR | O_CLOEXEC);
if (null_fd < 0)
die_errno(_("open /dev/null failed"));
set_cloexec(null_fd);
}
prepare_cmd(&argv, cmd);
childenv = prep_childenv(cmd->env);
atfork_prepare(&as);
/*
* NOTE: In order to prevent deadlocking when using threads special
* care should be taken with the function calls made in between the
* fork() and exec() calls. No calls should be made to functions which
* require acquiring a lock (e.g. malloc) as the lock could have been
* held by another thread at the time of forking, causing the lock to
* never be released in the child process. This means only
* Async-Signal-Safe functions are permitted in the child.
*/
cmd->pid = fork();
failed_errno = errno;
if (!cmd->pid) {
int sig;
/*
* Ensure the default die/error/warn routines do not get
* called, they can take stdio locks and malloc.
*/
set_die_routine(child_die_fn);
set_error_routine(child_error_fn);
set_warn_routine(child_warn_fn);
close(notify_pipe[0]);
set_cloexec(notify_pipe[1]);
child_notifier = notify_pipe[1];
if (cmd->no_stdin)
child_dup2(null_fd, 0);
else if (need_in) {
child_dup2(fdin[0], 0);
child_close_pair(fdin);
} else if (cmd->in) {
child_dup2(cmd->in, 0);
child_close(cmd->in);
}
if (cmd->no_stderr)
child_dup2(null_fd, 2);
else if (need_err) {
child_dup2(fderr[1], 2);
child_close_pair(fderr);
} else if (cmd->err > 1) {
child_dup2(cmd->err, 2);
child_close(cmd->err);
}
if (cmd->no_stdout)
child_dup2(null_fd, 1);
else if (cmd->stdout_to_stderr)
child_dup2(2, 1);
else if (need_out) {
child_dup2(fdout[1], 1);
child_close_pair(fdout);
} else if (cmd->out > 1) {
child_dup2(cmd->out, 1);
child_close(cmd->out);
}
if (cmd->dir && chdir(cmd->dir))
child_die(CHILD_ERR_CHDIR);
/*
* restore default signal handlers here, in case
* we catch a signal right before execve below
*/
for (sig = 1; sig < NSIG; sig++) {
/* ignored signals get reset to SIG_DFL on execve */
if (signal(sig, SIG_DFL) == SIG_IGN)
signal(sig, SIG_IGN);
}
if (sigprocmask(SIG_SETMASK, &as.old, NULL) != 0)
child_die(CHILD_ERR_SIGPROCMASK);
/*
* Attempt to exec using the command and arguments starting at
* argv.argv[1]. argv.argv[0] contains SHELL_PATH which will
* be used in the event exec failed with ENOEXEC at which point
* we will try to interpret the command using 'sh'.
*/
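		/*
		 * For illustration, assuming the layout documented above:
		 * a hook script without a "#!" line might arrive here as
		 *
		 *	argv.argv = { SHELL_PATH, "/path/to/hook", "arg", NULL }
		 *
		 * The first execve() runs "/path/to/hook" directly and
		 * fails with ENOEXEC; the fallback execve() then runs
		 * SHELL_PATH with the script and its argument, letting
		 * the shell interpret it.
		 */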
execve(argv.argv[1], (char *const *) argv.argv + 1,
(char *const *) childenv);
if (errno == ENOEXEC)
execve(argv.argv[0], (char *const *) argv.argv,
(char *const *) childenv);
if (errno == ENOENT) {
if (cmd->silent_exec_failure)
child_die(CHILD_ERR_SILENT);
child_die(CHILD_ERR_ENOENT);
} else {
child_die(CHILD_ERR_ERRNO);
}
}
atfork_parent(&as);
if (cmd->pid < 0)
error_errno("cannot fork() for %s", cmd->argv[0]);
else if (cmd->clean_on_exit)
mark_child_for_cleanup(cmd->pid, cmd);
/*
* Wait for child's exec. If the exec succeeds (or if fork()
* failed), EOF is seen immediately by the parent. Otherwise, the
* child process sends a child_err struct.
* Note that use of this infrastructure is completely advisory,
* therefore, we keep error checks minimal.
*/
close(notify_pipe[1]);
if (xread(notify_pipe[0], &cerr, sizeof(cerr)) == sizeof(cerr)) {
/*
* At this point we know that fork() succeeded, but exec()
* failed. Errors have been reported to our stderr.
*/
wait_or_whine(cmd->pid, cmd->argv[0], 0);
child_err_spew(cmd, &cerr);
failed_errno = errno;
cmd->pid = -1;
}
close(notify_pipe[0]);
if (null_fd >= 0)
close(null_fd);
argv_array_clear(&argv);
free(childenv);
}
#else
{
int fhin = 0, fhout = 1, fherr = 2;
const char **sargv = cmd->argv;
struct argv_array nargv = ARGV_ARRAY_INIT;
if (cmd->no_stdin)
fhin = open("/dev/null", O_RDWR);
else if (need_in)
fhin = dup(fdin[0]);
else if (cmd->in)
fhin = dup(cmd->in);
if (cmd->no_stderr)
fherr = open("/dev/null", O_RDWR);
else if (need_err)
fherr = dup(fderr[1]);
else if (cmd->err > 2)
fherr = dup(cmd->err);
if (cmd->no_stdout)
fhout = open("/dev/null", O_RDWR);
else if (cmd->stdout_to_stderr)
fhout = dup(fherr);
else if (need_out)
fhout = dup(fdout[1]);
else if (cmd->out > 1)
fhout = dup(cmd->out);
if (cmd->git_cmd)
cmd->argv = prepare_git_cmd(&nargv, cmd->argv);
else if (cmd->use_shell)
cmd->argv = prepare_shell_cmd(&nargv, cmd->argv);
cmd->pid = mingw_spawnvpe(cmd->argv[0], cmd->argv, (char**) cmd->env,
cmd->dir, fhin, fhout, fherr);
failed_errno = errno;
if (cmd->pid < 0 && (!cmd->silent_exec_failure || errno != ENOENT))
error_errno("cannot spawn %s", cmd->argv[0]);
if (cmd->clean_on_exit && cmd->pid >= 0)
mark_child_for_cleanup(cmd->pid, cmd);
argv_array_clear(&nargv);
cmd->argv = sargv;
if (fhin != 0)
close(fhin);
if (fhout != 1)
close(fhout);
if (fherr != 2)
close(fherr);
}
#endif
if (cmd->pid < 0) {
if (need_in)
close_pair(fdin);
else if (cmd->in)
close(cmd->in);
if (need_out)
close_pair(fdout);
else if (cmd->out)
close(cmd->out);
if (need_err)
close_pair(fderr);
else if (cmd->err)
close(cmd->err);
child_process_clear(cmd);
errno = failed_errno;
return -1;
}
if (need_in)
close(fdin[0]);
else if (cmd->in)
close(cmd->in);
if (need_out)
close(fdout[1]);
else if (cmd->out)
close(cmd->out);
if (need_err)
close(fderr[1]);
else if (cmd->err)
close(cmd->err);
return 0;
}
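/*
 * A minimal usage sketch of the start_command()/finish_command()
 * pairing defined here and below; the argument vector and the error
 * handling are only illustrative.  Setting cp.out to -1 asks
 * start_command() to open a pipe whose read end comes back in cp.out:
 *
 *	struct child_process cp = CHILD_PROCESS_INIT;
 *	struct strbuf buf = STRBUF_INIT;
 *
 *	argv_array_pushl(&cp.args, "rev-parse", "HEAD", NULL);
 *	cp.git_cmd = 1;
 *	cp.out = -1;
 *	if (start_command(&cp))
 *		return -1;
 *	if (strbuf_read(&buf, cp.out, 0) < 0)
 *		warning_errno("could not read rev-parse output");
 *	close(cp.out);
 *	if (finish_command(&cp))
 *		return error("rev-parse failed");
 */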
int finish_command(struct child_process *cmd)
{
int ret = wait_or_whine(cmd->pid, cmd->argv[0], 0);
child_process_clear(cmd);
return ret;
}
int finish_command_in_signal(struct child_process *cmd)
{
return wait_or_whine(cmd->pid, cmd->argv[0], 1);
}
int run_command(struct child_process *cmd)
{
int code;
if (cmd->out < 0 || cmd->err < 0)
die("BUG: run_command with a pipe can cause deadlock");
code = start_command(cmd);
if (code)
return code;
return finish_command(cmd);
}
int run_command_v_opt(const char **argv, int opt)
{
return run_command_v_opt_cd_env(argv, opt, NULL, NULL);
}
int run_command_v_opt_cd_env(const char **argv, int opt, const char *dir, const char *const *env)
{
struct child_process cmd = CHILD_PROCESS_INIT;
cmd.argv = argv;
cmd.no_stdin = opt & RUN_COMMAND_NO_STDIN ? 1 : 0;
cmd.git_cmd = opt & RUN_GIT_CMD ? 1 : 0;
cmd.stdout_to_stderr = opt & RUN_COMMAND_STDOUT_TO_STDERR ? 1 : 0;
cmd.silent_exec_failure = opt & RUN_SILENT_EXEC_FAILURE ? 1 : 0;
cmd.use_shell = opt & RUN_USING_SHELL ? 1 : 0;
cmd.clean_on_exit = opt & RUN_CLEAN_ON_EXIT ? 1 : 0;
cmd.dir = dir;
cmd.env = env;
return run_command(&cmd);
}
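/*
 * A minimal usage sketch for the wrappers above; the argument vector
 * is only an example:
 *
 *	const char *argv[] = { "gc", "--auto", NULL };
 *
 *	if (run_command_v_opt(argv, RUN_GIT_CMD))
 *		warning("automatic gc failed");
 */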
#ifndef NO_PTHREADS
static pthread_t main_thread;
static int main_thread_set;
static pthread_key_t async_key;
static pthread_key_t async_die_counter;
static void *run_thread(void *data)
{
struct async *async = data;
intptr_t ret;
if (async->isolate_sigpipe) {
sigset_t mask;
sigemptyset(&mask);
sigaddset(&mask, SIGPIPE);
if (pthread_sigmask(SIG_BLOCK, &mask, NULL) < 0) {
ret = error("unable to block SIGPIPE in async thread");
return (void *)ret;
}
}
pthread_setspecific(async_key, async);
ret = async->proc(async->proc_in, async->proc_out, async->data);
return (void *)ret;
}
static NORETURN void die_async(const char *err, va_list params)
{
vreportf("fatal: ", err, params);
if (in_async()) {
struct async *async = pthread_getspecific(async_key);
if (async->proc_in >= 0)
close(async->proc_in);
if (async->proc_out >= 0)
close(async->proc_out);
pthread_exit((void *)128);
}
exit(128);
}
static int async_die_is_recursing(void)
{
void *ret = pthread_getspecific(async_die_counter);
pthread_setspecific(async_die_counter, (void *)1);
return ret != NULL;
}
int in_async(void)
{
if (!main_thread_set)
return 0; /* no asyncs started yet */
return !pthread_equal(main_thread, pthread_self());
}
static void NORETURN async_exit(int code)
{
pthread_exit((void *)(intptr_t)code);
}
#else
static struct {
void (**handlers)(void);
size_t nr;
size_t alloc;
} git_atexit_hdlrs;
static int git_atexit_installed;
static void git_atexit_dispatch(void)
{
size_t i;
for (i=git_atexit_hdlrs.nr ; i ; i--)
git_atexit_hdlrs.handlers[i-1]();
}
static void git_atexit_clear(void)
{
free(git_atexit_hdlrs.handlers);
memset(&git_atexit_hdlrs, 0, sizeof(git_atexit_hdlrs));
git_atexit_installed = 0;
}
#undef atexit
int git_atexit(void (*handler)(void))
{
ALLOC_GROW(git_atexit_hdlrs.handlers, git_atexit_hdlrs.nr + 1, git_atexit_hdlrs.alloc);
git_atexit_hdlrs.handlers[git_atexit_hdlrs.nr++] = handler;
if (!git_atexit_installed) {
if (atexit(&git_atexit_dispatch))
return -1;
git_atexit_installed = 1;
}
return 0;
}
#define atexit git_atexit
static int process_is_async;
int in_async(void)
{
return process_is_async;
}
static void NORETURN async_exit(int code)
{
exit(code);
}
#endif
void check_pipe(int err)
{
if (err == EPIPE) {
if (in_async())
async_exit(141);
signal(SIGPIPE, SIG_DFL);
raise(SIGPIPE);
/* Should never happen, but just in case... */
exit(141);
}
}
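/*
 * A sketch of the intended calling convention (not a verbatim quote of
 * any caller): a writer that wants the usual "die quietly when the
 * pipe goes away" behaviour hands its errno to check_pipe() after a
 * failed write, before falling back to a noisy death:
 *
 *	if (write_in_full(fd, buf, count) < 0) {
 *		check_pipe(errno);
 *		die_errno("write error");
 *	}
 */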
int start_async(struct async *async)
{
int need_in, need_out;
int fdin[2], fdout[2];
int proc_in, proc_out;
need_in = async->in < 0;
if (need_in) {
if (pipe(fdin) < 0) {
if (async->out > 0)
close(async->out);
return error_errno("cannot create pipe");
}
async->in = fdin[1];
}
need_out = async->out < 0;
if (need_out) {
if (pipe(fdout) < 0) {
if (need_in)
close_pair(fdin);
else if (async->in)
close(async->in);
return error_errno("cannot create pipe");
}
async->out = fdout[0];
}
if (need_in)
proc_in = fdin[0];
else if (async->in)
proc_in = async->in;
else
proc_in = -1;
if (need_out)
proc_out = fdout[1];
else if (async->out)
proc_out = async->out;
else
proc_out = -1;
#ifdef NO_PTHREADS
/* Flush stdio before fork() to avoid cloning buffers */
fflush(NULL);
async->pid = fork();
if (async->pid < 0) {
error_errno("fork (async) failed");
goto error;
}
if (!async->pid) {
if (need_in)
close(fdin[1]);
if (need_out)
close(fdout[0]);
git_atexit_clear();
process_is_async = 1;
exit(!!async->proc(proc_in, proc_out, async->data));
}
mark_child_for_cleanup(async->pid, NULL);
if (need_in)
close(fdin[0]);
else if (async->in)
close(async->in);
if (need_out)
close(fdout[1]);
else if (async->out)
close(async->out);
#else
if (!main_thread_set) {
/*
* We assume that the first time that start_async is called
* it is from the main thread.
*/
main_thread_set = 1;
main_thread = pthread_self();
pthread_key_create(&async_key, NULL);
pthread_key_create(&async_die_counter, NULL);
set_die_routine(die_async);
set_die_is_recursing_routine(async_die_is_recursing);
}
if (proc_in >= 0)
set_cloexec(proc_in);
if (proc_out >= 0)
set_cloexec(proc_out);
async->proc_in = proc_in;
async->proc_out = proc_out;
{
int err = pthread_create(&async->tid, NULL, run_thread, async);
if (err) {
error_errno("cannot create thread");
goto error;
}
}
#endif
return 0;
error:
if (need_in)
close_pair(fdin);
else if (async->in)
close(async->in);
if (need_out)
close_pair(fdout);
else if (async->out)
close(async->out);
return -1;
}
int finish_async(struct async *async)
{
#ifdef NO_PTHREADS
return wait_or_whine(async->pid, "child process", 0);
#else
void *ret = (void *)(intptr_t)(-1);
if (pthread_join(async->tid, &ret))
error("pthread_join failed");
return (int)(intptr_t)ret;
#endif
}
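/*
 * A usage sketch for the async machinery above; the callback name and
 * its trivial body are placeholders.  Setting as.out to -1 asks
 * start_async() to create the pipe, and a non-zero return from the
 * callback makes finish_async() report failure:
 *
 *	static int produce(int in, int out, void *data)
 *	{
 *		if (write_in_full(out, "hello\n", 6) < 0)
 *			return 1;
 *		return 0;
 *	}
 *
 *	struct async as;
 *
 *	memset(&as, 0, sizeof(as));
 *	as.proc = produce;
 *	as.out = -1;
 *	if (start_async(&as))
 *		die("cannot start async producer");
 *	... read from as.out, then close(as.out) ...
 *	if (finish_async(&as))
 *		warning("async producer reported an error");
 */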
const char *find_hook(const char *name)
{
static struct strbuf path = STRBUF_INIT;
strbuf_reset(&path);
strbuf_git_path(&path, "hooks/%s", name);
if (access(path.buf, X_OK) < 0) {
int err = errno;
#ifdef STRIP_EXTENSION
strbuf_addstr(&path, STRIP_EXTENSION);
if (access(path.buf, X_OK) >= 0)
return path.buf;
if (errno == EACCES)
err = errno;
#endif
if (err == EACCES && advice_ignored_hook) {
static struct string_list advise_given = STRING_LIST_INIT_DUP;
if (!string_list_lookup(&advise_given, name)) {
string_list_insert(&advise_given, name);
advise(_("The '%s' hook was ignored because "
"it's not set as executable.\n"
"You can disable this warning with "
"`git config advice.ignoredHook false`."),
path.buf);
}
}
return NULL;
}
return path.buf;
}
int run_hook_ve(const char *const *env, const char *name, va_list args)
{
struct child_process hook = CHILD_PROCESS_INIT;
const char *p;
p = find_hook(name);
if (!p)
return 0;
argv_array_push(&hook.args, p);
while ((p = va_arg(args, const char *)))
argv_array_push(&hook.args, p);
hook.env = env;
hook.no_stdin = 1;
hook.stdout_to_stderr = 1;
return run_command(&hook);
}
int run_hook_le(const char *const *env, const char *name, ...)
{
va_list args;
int ret;
va_start(args, name);
ret = run_hook_ve(env, name, args);
va_end(args);
return ret;
}
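/*
 * A usage sketch; the trailing arguments are NULL-terminated and a
 * missing hook simply yields 0.  The hook name is only an example,
 * with a non-zero exit read as the hook vetoing the operation:
 *
 *	if (run_hook_le(NULL, "pre-auto-gc", NULL))
 *		return 0;
 */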
struct io_pump {
/* initialized by caller */
int fd;
int type; /* POLLOUT or POLLIN */
union {
struct {
const char *buf;
size_t len;
} out;
struct {
struct strbuf *buf;
size_t hint;
} in;
} u;
/* returned by pump_io */
int error; /* 0 for success, otherwise errno */
/* internal use */
struct pollfd *pfd;
};
static int pump_io_round(struct io_pump *slots, int nr, struct pollfd *pfd)
{
int pollsize = 0;
int i;
for (i = 0; i < nr; i++) {
struct io_pump *io = &slots[i];
if (io->fd < 0)
continue;
pfd[pollsize].fd = io->fd;
pfd[pollsize].events = io->type;
io->pfd = &pfd[pollsize++];
}
if (!pollsize)
return 0;
if (poll(pfd, pollsize, -1) < 0) {
if (errno == EINTR)
return 1;
die_errno("poll failed");
}
for (i = 0; i < nr; i++) {
struct io_pump *io = &slots[i];
if (io->fd < 0)
continue;
if (!(io->pfd->revents & (POLLOUT|POLLIN|POLLHUP|POLLERR|POLLNVAL)))
continue;
if (io->type == POLLOUT) {
ssize_t len = xwrite(io->fd,
io->u.out.buf, io->u.out.len);
if (len < 0) {
io->error = errno;
close(io->fd);
io->fd = -1;
} else {
io->u.out.buf += len;
io->u.out.len -= len;
if (!io->u.out.len) {
close(io->fd);
io->fd = -1;
}
}
}
if (io->type == POLLIN) {
ssize_t len = strbuf_read_once(io->u.in.buf,
io->fd, io->u.in.hint);
if (len < 0)
io->error = errno;
if (len <= 0) {
close(io->fd);
io->fd = -1;
}
}
}
return 1;
}
static int pump_io(struct io_pump *slots, int nr)
{
struct pollfd *pfd;
int i;
for (i = 0; i < nr; i++)
slots[i].error = 0;
ALLOC_ARRAY(pfd, nr);
while (pump_io_round(slots, nr, pfd))
; /* nothing */
free(pfd);
/* There may be multiple errno values, so just pick the first. */
for (i = 0; i < nr; i++) {
if (slots[i].error) {
errno = slots[i].error;
return -1;
}
}
return 0;
}
int pipe_command(struct child_process *cmd,
const char *in, size_t in_len,
struct strbuf *out, size_t out_hint,
struct strbuf *err, size_t err_hint)
{
struct io_pump io[3];
int nr = 0;
if (in)
cmd->in = -1;
if (out)
cmd->out = -1;
if (err)
cmd->err = -1;
if (start_command(cmd) < 0)
return -1;
if (in) {
io[nr].fd = cmd->in;
io[nr].type = POLLOUT;
io[nr].u.out.buf = in;
io[nr].u.out.len = in_len;
nr++;
}
if (out) {
io[nr].fd = cmd->out;
io[nr].type = POLLIN;
io[nr].u.in.buf = out;
io[nr].u.in.hint = out_hint;
nr++;
}
if (err) {
io[nr].fd = cmd->err;
io[nr].type = POLLIN;
io[nr].u.in.buf = err;
io[nr].u.in.hint = err_hint;
nr++;
}
if (pump_io(io, nr) < 0) {
finish_command(cmd); /* throw away exit code */
return -1;
}
return finish_command(cmd);
}
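/*
 * A usage sketch for pipe_command(); "buf" and "len" stand for
 * caller-provided input, and the command shown is only an example:
 *
 *	struct child_process cp = CHILD_PROCESS_INIT;
 *	struct strbuf out = STRBUF_INIT;
 *
 *	argv_array_pushl(&cp.args, "hash-object", "-w", "--stdin", NULL);
 *	cp.git_cmd = 1;
 *	if (pipe_command(&cp, buf, len, &out, 0, NULL, 0))
 *		return error("hash-object failed");
 */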
enum child_state {
GIT_CP_FREE,
GIT_CP_WORKING,
GIT_CP_WAIT_CLEANUP,
};
struct parallel_processes {
void *data;
int max_processes;
int nr_processes;
get_next_task_fn get_next_task;
start_failure_fn start_failure;
task_finished_fn task_finished;
struct {
enum child_state state;
struct child_process process;
struct strbuf err;
void *data;
} *children;
/*
* The struct pollfd is logically part of *children,
* but the system call expects it as its own array.
*/
struct pollfd *pfd;
unsigned shutdown : 1;
int output_owner;
struct strbuf buffered_output; /* of finished children */
};
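/*
 * A sketch of a get_next_task callback as consumed by this machinery.
 * The typedef lives in run-command.h; the parameter layout below
 * mirrors the default callbacks further down, and the use of a
 * string_list as the pp_cb cookie is purely illustrative.  Returning 1
 * hands out one filled-in child_process; returning 0 signals that no
 * tasks are left:
 *
 *	static int next_task(struct child_process *cp, struct strbuf *out,
 *			     void *pp_cb, void **pp_task_cb)
 *	{
 *		struct string_list *todo = pp_cb;
 *
 *		if (!todo->nr)
 *			return 0;
 *		argv_array_pushl(&cp->args, "fetch",
 *				 todo->items[--todo->nr].string, NULL);
 *		cp->git_cmd = 1;
 *		return 1;
 *	}
 */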
static int default_start_failure(struct strbuf *out,
void *pp_cb,
void *pp_task_cb)
{
return 0;
}
static int default_task_finished(int result,
struct strbuf *out,
void *pp_cb,
void *pp_task_cb)
{
return 0;
}
static void kill_children(struct parallel_processes *pp, int signo)
{
int i, n = pp->max_processes;
for (i = 0; i < n; i++)
if (pp->children[i].state == GIT_CP_WORKING)
kill(pp->children[i].process.pid, signo);
}
static struct parallel_processes *pp_for_signal;
static void handle_children_on_signal(int signo)
{
kill_children(pp_for_signal, signo);
sigchain_pop(signo);
raise(signo);
}
static void pp_init(struct parallel_processes *pp,
int n,
get_next_task_fn get_next_task,
start_failure_fn start_failure,
task_finished_fn task_finished,
void *data)
{
int i;
if (n < 1)
n = online_cpus();
pp->max_processes = n;
trace_printf("run_processes_parallel: preparing to run up to %d tasks", n);
pp->data = data;
if (!get_next_task)
die("BUG: you need to specify a get_next_task function");
pp->get_next_task = get_next_task;
pp->start_failure = start_failure ? start_failure : default_start_failure;
pp->task_finished = task_finished ? task_finished : default_task_finished;
pp->nr_processes = 0;
pp->output_owner = 0;
pp->shutdown = 0;
pp->children = xcalloc(n, sizeof(*pp->children));
pp->pfd = xcalloc(n, sizeof(*pp->pfd));
strbuf_init(&pp->buffered_output, 0);
for (i = 0; i < n; i++) {
strbuf_init(&pp->children[i].err, 0);
child_process_init(&pp->children[i].process);
pp->pfd[i].events = POLLIN | POLLHUP;
pp->pfd[i].fd = -1;
}
pp_for_signal = pp;
sigchain_push_common(handle_children_on_signal);
}
static void pp_cleanup(struct parallel_processes *pp)
{
int i;
trace_printf("run_processes_parallel: done");
for (i = 0; i < pp->max_processes; i++) {
strbuf_release(&pp->children[i].err);
child_process_clear(&pp->children[i].process);
}
free(pp->children);
free(pp->pfd);
/*
* When get_next_task added messages to the buffer in its last
* iteration, the buffered output is non-empty and needs a final flush.
*/
strbuf_write(&pp->buffered_output, stderr);
strbuf_release(&pp->buffered_output);
sigchain_pop_common();
}
/* returns
* 0 if a new task was started.
* 1 if no new job was started (get_next_task ran out of work, or a
*   non-critical problem prevented starting a new command)
* <0 if no new job was started and the user wishes to shut down early;
*   the negated value is used as the signal to send to the children.
*/
static int pp_start_one(struct parallel_processes *pp)
{
int i, code;
for (i = 0; i < pp->max_processes; i++)
if (pp->children[i].state == GIT_CP_FREE)
break;
if (i == pp->max_processes)
die("BUG: bookkeeping is hard");
code = pp->get_next_task(&pp->children[i].process,
&pp->children[i].err,
pp->data,
&pp->children[i].data);
if (!code) {
strbuf_addbuf(&pp->buffered_output, &pp->children[i].err);
strbuf_reset(&pp->children[i].err);
return 1;
}
pp->children[i].process.err = -1;
pp->children[i].process.stdout_to_stderr = 1;
pp->children[i].process.no_stdin = 1;
if (start_command(&pp->children[i].process)) {
code = pp->start_failure(&pp->children[i].err,
pp->data,
pp->children[i].data);
strbuf_addbuf(&pp->buffered_output, &pp->children[i].err);
strbuf_reset(&pp->children[i].err);
if (code)
pp->shutdown = 1;
return code;
}
pp->nr_processes++;
pp->children[i].state = GIT_CP_WORKING;
pp->pfd[i].fd = pp->children[i].process.err;
return 0;
}
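
/*
 * Editor's illustrative sketch, not part of the original file: a minimal
 * get_next_task callback of the shape pp_start_one() expects. The names
 * example_state and example_next_task are hypothetical; only the contract
 * shown here comes from the code above: fill in the child_process (and
 * optionally the err buffer), return nonzero when a task was handed out
 * and 0 when no work is left.
 */
struct example_state {
	int next;	/* index of the next task to hand out */
	int total;	/* total number of tasks */
};

static int example_next_task(struct child_process *cp,
			     struct strbuf *err,
			     void *pp_cb,
			     void **pp_task_cb)
{
	struct example_state *st = pp_cb;

	if (st->next >= st->total)
		return 0;	/* no more work; pp_start_one() reports 1 */

	cp->git_cmd = 1;
	argv_array_pushl(&cp->args, "fetch", "--quiet", NULL);
	strbuf_addf(err, "starting fetch %d/%d\n", st->next + 1, st->total);
	*pp_task_cb = NULL;	/* no per-task state in this sketch */

	st->next++;
	return 1;	/* task prepared; pp_start_one() will spawn it */
}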
static void pp_buffer_stderr(struct parallel_processes *pp, int output_timeout)
{
int i;
while ((i = poll(pp->pfd, pp->max_processes, output_timeout)) < 0) {
if (errno == EINTR)
continue;
pp_cleanup(pp);
die_errno("poll");
}
/* Buffer output from all pipes. */
for (i = 0; i < pp->max_processes; i++) {
if (pp->children[i].state == GIT_CP_WORKING &&
pp->pfd[i].revents & (POLLIN | POLLHUP)) {
int n = strbuf_read_once(&pp->children[i].err,
pp->children[i].process.err, 0);
if (n == 0) {
close(pp->children[i].process.err);
pp->children[i].state = GIT_CP_WAIT_CLEANUP;
} else if (n < 0)
if (errno != EAGAIN)
die_errno("read");
}
}
}
static void pp_output(struct parallel_processes *pp)
{
int i = pp->output_owner;
if (pp->children[i].state == GIT_CP_WORKING &&
pp->children[i].err.len) {
strbuf_write(&pp->children[i].err, stderr);
strbuf_reset(&pp->children[i].err);
}
}
static int pp_collect_finished(struct parallel_processes *pp)
{
int i, code;
int n = pp->max_processes;
int result = 0;
while (pp->nr_processes > 0) {
for (i = 0; i < pp->max_processes; i++)
if (pp->children[i].state == GIT_CP_WAIT_CLEANUP)
break;
if (i == pp->max_processes)
break;
code = finish_command(&pp->children[i].process);
code = pp->task_finished(code,
&pp->children[i].err, pp->data,
pp->children[i].data);
if (code)
result = code;
if (code < 0)
break;
pp->nr_processes--;
pp->children[i].state = GIT_CP_FREE;
pp->pfd[i].fd = -1;
child_process_init(&pp->children[i].process);
if (i != pp->output_owner) {
strbuf_addbuf(&pp->buffered_output, &pp->children[i].err);
strbuf_reset(&pp->children[i].err);
} else {
strbuf_write(&pp->children[i].err, stderr);
strbuf_reset(&pp->children[i].err);
/* Output all other finished child processes */
strbuf_write(&pp->buffered_output, stderr);
strbuf_reset(&pp->buffered_output);
/*
* Pick the next process to output live.
* NEEDSWORK:
* For now we simply pick the next working child in
* round-robin order. Later we may want to pick the
* one with the most output, or the longest- or
* shortest-running process.
*/
for (i = 0; i < n; i++)
if (pp->children[(pp->output_owner + i) % n].state == GIT_CP_WORKING)
break;
pp->output_owner = (pp->output_owner + i) % n;
}
}
return result;
}
int run_processes_parallel(int n,
get_next_task_fn get_next_task,
start_failure_fn start_failure,
task_finished_fn task_finished,
void *pp_cb)
{
int i, code;
int output_timeout = 100;
int spawn_cap = 4;
struct parallel_processes pp;
pp_init(&pp, n, get_next_task, start_failure, task_finished, pp_cb);
while (1) {
for (i = 0;
i < spawn_cap && !pp.shutdown &&
pp.nr_processes < pp.max_processes;
i++) {
code = pp_start_one(&pp);
if (!code)
continue;
if (code < 0) {
pp.shutdown = 1;
kill_children(&pp, -code);
}
break;
}
if (!pp.nr_processes)
break;
pp_buffer_stderr(&pp, output_timeout);
pp_output(&pp);
code = pp_collect_finished(&pp);
if (code) {
pp.shutdown = 1;
if (code < 0)
kill_children(&pp, -code);
}
}
pp_cleanup(&pp);
return 0;
}
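
/*
 * Editor's illustrative sketch, not part of the original file: how a caller
 * might drive run_processes_parallel(), reusing the hypothetical
 * example_state and example_next_task from the sketch after pp_start_one()
 * above. example_task_finished and example_run are likewise made-up names,
 * shown only to make the calling convention concrete.
 */
static int example_task_finished(int result,
				 struct strbuf *out,
				 void *pp_cb,
				 void *pp_task_cb)
{
	if (result)
		strbuf_addf(out, "task exited with status %d\n", result);
	return 0;	/* a nonzero return would request an early shutdown */
}

static void example_run(void)
{
	struct example_state st = { 0, 4 };

	/*
	 * Run at most two tasks at a time; passing NULL selects the
	 * default start_failure handler. The last argument is handed
	 * back to every callback as pp_cb.
	 */
	run_processes_parallel(2, example_next_task, NULL,
			       example_task_finished, &st);
}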