/*
 * git gc builtin command
 *
 * Clean up unreachable files and optimize the repository.
 *
 * Copyright (c) 2007 James Bowes
 *
 * Based on git-gc.sh, which is
 *
 * Copyright (c) 2006 Shawn O. Pearce
 */

#include "builtin.h"
#include "repository.h"
#include "config.h"
#include "tempfile.h"
#include "lockfile.h"
#include "parse-options.h"
#include "run-command.h"
gc: remove gc.pid file at end of execution

This file isn't really harmful, but isn't useful either, and it can create
minor annoyances for the user:

* It's confusing, as the presence of a *.pid file often implies that a
  process is currently running. A user running "ls .git/" and finding
  this file may incorrectly guess that a "git gc" is currently running.

* Leaving this file means that a "git gc" in an already gc-ed repo is
  no longer a no-op. A user running "git gc" in a set of repositories,
  and then synchronizing this set (e.g. rsync -av, unison, ...), will see
  all the gc.pid files as changed, which creates useless noise.

This patch unlinks the file after the garbage collection is done, so that
gc.pid is present only during execution.

Future versions of Git may want to use the information left in the gc.pid
file (e.g. for policies like "don't attempt to run a gc if one has
already been run less than X hours ago"). If so, this patch can safely be
reverted. For now, let's not bother the users.

Explained-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Improved-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
#include "sigchain.h"
#include "strvec.h"
#include "commit.h"
#include "commit-graph.h"
#include "packfile.h"
#include "object-store.h"
#include "pack.h"
#include "pack-objects.h"
#include "blob.h"
#include "tree.h"
#include "promisor-remote.h"
#include "refs.h"
maintenance: add prefetch task

When working with very large repositories, an incremental 'git fetch'
command can download a large amount of data. If there are many other
users pushing to a common repo, then this data can rival the initial
pack-file size of a 'git clone' of a medium-size repo.

Users may want to keep the data on their local repos as close as
possible to the data on the remote repos by fetching periodically in
the background. This can break up a large daily fetch into several
smaller hourly fetches.

The task is called "prefetch" because it is work done in advance
of a foreground fetch to make that 'git fetch' command much faster.

However, if we simply ran 'git fetch <remote>' in the background,
then the user running a foreground 'git fetch <remote>' would lose
some important feedback when a new branch appears or an existing
branch updates. This is especially true if a remote branch is
force-updated and this isn't noticed by the user because it occurred
in the background. Further, the functionality of 'git push
--force-with-lease' becomes suspect.

When running 'git fetch <remote> <options>' in the background, use
the following options for careful updating (a sketch of how these
might be assembled appears after the include block below):

1. --no-tags prevents getting a new tag when a user wants to see
   the new tags appear in their foreground fetches.

2. --refmap= removes the configured refspec which usually updates
   refs/remotes/<remote>/* with the refs advertised by the remote.
   While this looks confusing, this was documented and tested by
   b40a50264ac (fetch: document and test --refmap="", 2020-01-21),
   including this sentence in the documentation:

       Providing an empty `<refspec>` to the `--refmap` option
       causes Git to ignore the configured refspecs and rely
       entirely on the refspecs supplied as command-line arguments.

3. By adding a new refspec "+refs/heads/*:refs/prefetch/<remote>/*"
   we can ensure that we actually load the new values somewhere in
   our refspace while not updating refs/heads or refs/remotes. By
   storing these refs here, the commit-graph job will update the
   commit-graph with the commits from these hidden refs.

4. --prune will delete the refs/prefetch/<remote> refs that no
   longer appear on the remote.

5. --no-write-fetch-head prevents updating FETCH_HEAD.

We've been using this step as a critical background job in Scalar
[1] (and VFS for Git). This solved a pain point that was showing up
in user reports: fetching was a pain! Users do not like waiting to
download the data that was created while they were away from their
machines. After implementing background fetch, the foreground fetch
commands sped up significantly because they mostly just update refs
and download a small amount of new data. The effect is especially
dramatic when paired with --no-show-forced-updates (through
fetch.showForcedUpdates=false).

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/FetchStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
#include "remote.h"
#include "exec-cmd.h"
#define FAILED_RUN "failed to run %s"

static const char * const builtin_gc_usage[] = {
        N_("git gc [<options>]"),
        NULL
};
Make "git gc" pack all refs by default

I've taught myself to use "git gc" instead of doing the repack explicitly,
but it doesn't actually do what I think it should do.

We've had packed refs for a long time now, and I think it just makes sense
to pack normal branches too. So I end up having to do

        git pack-refs --all --prune

in order to get a nice git repo that doesn't have any unnecessary files.

So why not just do that in "git gc"? It's not as if there really is any
downside to packing branches, even if they end up changing later. Quite
often they don't, and even if they do, so what?

Also, make the default for refs packing just be an unambiguous "do it",
rather than "do it by default only for non-bare repositories". If you want
that behaviour, you can always just add a

        [gc]
                packrefs = notbare

in your ~/.gitconfig file, but I don't actually see why bare would be any
different (except for the broken reason that http-fetching used to be
totally broken, and not doing it just meant that it didn't even get
fixed in a timely manner!).

So here's a trivial patch to make "git gc" do a better job. Hmm?

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
static int pack_refs = 1;
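As a minimal sketch of how this default is acted on (the call site is
cmd_gc(), outside this excerpt, so treat the exact wiring as an
assumption): when pack_refs stays enabled, gc seeds the pack_refs_cmd
strvec declared further down with exactly the command Linus quotes above:

        /* Sketch: the pack-refs invocation "git gc" runs when pack_refs is on. */
        strvec_pushl(&pack_refs_cmd, "pack-refs", "--all", "--prune", NULL);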
static int prune_reflogs = 1;
gc: default aggressive depth to 50

This commit message is long and has lots of background and
numbers. The summary is: the current default of 250 doesn't
save much space, and costs CPU. It's not a good tradeoff.
Read on for details.

The "--aggressive" flag to git-gc does three things:

  1. use "-f" to throw out existing deltas and recompute from
     scratch
  2. use "--window=250" to look harder for deltas
  3. use "--depth=250" to make longer delta chains

Items (1) and (2) are good matches for an "aggressive"
repack. They ask the repack to do more computation work in
the hopes of getting a better pack. You pay the costs during
the repack, and other operations see only the benefit.

Item (3) is not so clear. Allowing longer chains means fewer
restrictions on the deltas, which means potentially finding
better ones and saving some space. But it also means that
operations which access the deltas have to follow longer
chains, which affects their performance. So it's a tradeoff,
and it's not clear that the tradeoff is even a good one.

The existing "250" numbers for "--aggressive" come
originally from this thread:

  http://public-inbox.org/git/alpine.LFD.0.9999.0712060803430.13796@woody.linux-foundation.org/

where Linus says:

  So when I said "--depth=250 --window=250", I chose those
  numbers more as an example of extremely aggressive
  packing, and I'm not at all sure that the end result is
  necessarily wonderfully usable. It's going to save disk
  space (and network bandwidth - the delta's will be re-used
  for the network protocol too!), but there are definitely
  downsides too, and using long delta chains may
  simply not be worth it in practice.

There are some numbers in that thread, but they're mostly
focused on the improved window size, and measure the
improvement from --depth=250 and --window=250 together.
E.g.:

  http://public-inbox.org/git/9e4733910712062006l651571f3w7f76ce64c6650dff@mail.gmail.com/

talks about the improved run-time of "git-blame", which
comes from the reduced pack size. But most of that reduction
is coming from --window=250, whereas most of the extra costs
come from --depth=250. There's a link in that thread showing
that increasing the depth beyond 50 doesn't seem to help
much with the size:

  https://vcscompare.blogspot.com/2008/06/git-repack-parameters.html

but again, no discussion of the timing impact.

In an earlier thread from Ted Ts'o which discussed setting
the non-aggressive default (from 10 to 50):

  http://public-inbox.org/git/20070509134958.GA21489%40thunk.org/

we have more numbers, with the conclusion that going past 50
does not help size much, and hurts the speed of normal
operations.

So from that, we might guess that 50 is actually a sweet
spot, even for aggressive, if we interpret aggressive as
"spend time now to make a better pack". It is not clear that
"--depth=250" actually produces a better pack. It may be
slightly _smaller_, but it carries a run-time penalty.

Here are some more recent timings I did to verify that. They
show three things:

  - the size of the resulting pack (so disk saved to store,
    bandwidth saved on clones/fetches)
  - the cost of "rev-list --objects --all", which shows the
    effect of the delta chains on trees (commits typically
    don't delta, and the command doesn't touch the blobs at
    all)
  - the cost of "log -Sfoo", which will additionally access
    each blob

All cases were repacked with "git repack -adf --depth=$d
--window=250" (so basically, what would happen if we tweaked
the "gc --aggressive" default depth).

The timings are all wall-clock best-of-3. The machine itself
has plenty of RAM compared to the repositories (which is
probably typical of most workstations these days), so we're
really measuring CPU usage, as the whole thing will be in
disk cache after the first run.

The core.deltaBaseCacheLimit is at its default of 96MiB.
It's possible that tweaking it would have some impact on the
tests, as some of them (especially "log -S" on a large repo)
are likely to overflow that. But bumping that carries a
run-time memory cost, so for these tests, I focused on what
we could do just with the on-disk pack tradeoffs.

Each test is done for four depths: 250 (the current value),
50 (the current default that tested well previously), 100
(to show something on the larger side, which previous tests
showed was not a good tradeoff), and 10 (the very old
default, which previous tests showed was worse than 50).

Here are the numbers for linux.git:

  depth |  size |     % | rev-list |      % | log -Sfoo |      %
  ------+-------+-------+----------+--------+-----------+-------
    250 | 967MB |   n/a |  48.159s |    n/a |  378.088s |    n/a
    100 | 971MB | +0.4% |  41.471s | -13.9% |  342.060s |  -9.5%
     50 | 979MB | +1.2% |  37.778s | -21.6% |  311.040s | -17.7%
     10 | 1.1GB | +6.6% |  32.518s | -32.5% |  279.890s | -25.9%

and for git.git:

  depth | size |     % | rev-list |     % | log -Sfoo |      %
  ------+------+-------+----------+-------+-----------+-------
    250 | 48MB |   n/a |   2.215s |   n/a |   20.922s |    n/a
    100 | 49MB | +0.5% |   2.140s | -3.4% |   17.736s | -15.2%
     50 | 49MB | +1.7% |   2.099s | -5.2% |   15.418s | -26.3%
     10 | 53MB | +9.3% |   2.001s | -9.7% |   12.677s | -39.4%

You can see that the CPU savings for regular operations improve as we
decrease the depth. The savings are smaller for "rev-list" on a small
repository than they are for blob-accessing operations, or for rev-list
on a larger repository. This may mean that a larger delta cache would
help (though setting core.deltaBaseCacheLimit by itself doesn't).

But we can also see that the space savings are not that great as the
depth goes higher. Saving 5-10% between 10 and 50 is probably worth the
CPU tradeoff; saving 1% to go from 50 to 100, or another 0.5% to go from
100 to 250, is probably not.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
static int aggressive_depth = 50;
static int aggressive_window = 250;
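These two defaults only take effect when --aggressive is given; a sketch
of that wiring (the surrounding cmd_gc() logic is outside this excerpt,
so this is an assumption about the call site, not a verbatim quote):

        /* Sketch: translate the defaults above into repack flags. */
        if (aggressive) {
                strvec_push(&repack, "-f");
                if (aggressive_depth > 0)
                        strvec_pushf(&repack, "--depth=%d", aggressive_depth);
                if (aggressive_window > 0)
                        strvec_pushf(&repack, "--window=%d", aggressive_window);
        }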
static int gc_auto_threshold = 6700;
static int gc_auto_pack_limit = 50;
static int detach_auto = 1;
static timestamp_t gc_log_expire_time;
gc: ignore old gc.log files

A server can end up in a state where there are lots of unreferenced
loose objects (say, because many users are doing a bunch of rebasing
and pushing their rebased branches). Running "git gc --auto" in
this state would cause a gc.log file to be created, preventing
future auto gcs and causing pack files to pile up. Since many git
operations are O(n) in the number of pack files, this would lead to
poor performance.

Git should never get itself into a state where it refuses to do any
maintenance, just because at some point some piece of the maintenance
didn't make progress.

Teach Git to ignore gc.log files which are older than (by default)
one day, which can be tweaked via the gc.logExpiry configuration
variable. That way, these pack files will get cleaned up, if
necessary, at least once per day. And operators who find a need for
more-frequent gcs can adjust gc.logExpiry to meet their needs.

There is also some cleanup: a successful manual gc, or a
warning-free auto gc with an old log file, will remove any old
gc.log files.

It might still happen that manual intervention is required
(e.g. because the repo is corrupt), but at the very least it won't
be because Git is too dumb to try again.

Signed-off-by: David Turner <dturner@twosigma.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
static const char *gc_log_expire = "1.day.ago";
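The "1.day.ago" string only becomes a usable cutoff once it is parsed
into gc_log_expire_time; a sketch of that conversion, using the same
parse_expiry_date() helper seen later in this file (the actual parse
happens in cmd_gc(), outside this excerpt):

        /* Sketch: turn the human-readable expiry into a timestamp cutoff. */
        if (parse_expiry_date(gc_log_expire, &gc_log_expire_time))
                die(_("failed to parse gc.logexpiry value %s"), gc_log_expire);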
static const char *prune_expire = "2.weeks.ago";
static const char *prune_worktrees_expire = "3.months.ago";
static unsigned long big_pack_threshold;
static unsigned long max_delta_cache_size = DEFAULT_DELTA_CACHE_SIZE;
static struct strvec pack_refs_cmd = STRVEC_INIT;
static struct strvec reflog = STRVEC_INIT;
static struct strvec repack = STRVEC_INIT;
static struct strvec prune = STRVEC_INIT;
static struct strvec prune_worktrees = STRVEC_INIT;
static struct strvec rerere = STRVEC_INIT;
tempfile: auto-allocate tempfiles on heap

The previous commit taught the tempfile code to give up
ownership over tempfiles that have been renamed or deleted.
That makes it possible to use a stack variable like this:

        struct tempfile t;

        create_tempfile(&t, ...);
        ...
        if (!err)
                rename_tempfile(&t, ...);
        else
                delete_tempfile(&t);

But doing it this way has a high potential for creating
memory errors. The tempfile we pass to create_tempfile()
ends up on a global linked list, and it's not safe for it to
go out of scope until we've called one of those two
deactivation functions.

Imagine that we add an early return from the function that
forgets to call delete_tempfile(). With a static or heap
tempfile variable, the worst case is that the tempfile hangs
around until the program exits (and some functions like
setup_shallow_temporary rely on this intentionally, creating
a tempfile and then leaving it for later cleanup).

But with a stack variable as above, this is a serious memory
error: the variable goes out of scope and may be filled with
garbage by the time the tempfile code looks at it. Let's
see if we can make it harder to get this wrong.

Since many callers need to allocate arbitrary numbers of
tempfiles, we can't rely on static storage as a general
solution. So we need to turn to the heap. We could just ask
all callers to pass us a heap variable, but that puts the
burden on them to call free() at the right time.

Instead, let's have the tempfile code handle the heap
allocation _and_ the deallocation (when the tempfile is
deactivated and removed from the list).

This changes the return value of all of the creation
functions. For the cleanup functions (delete and rename),
we'll add one extra bit of safety: instead of taking a
tempfile pointer, we'll take a pointer-to-pointer and set it
to NULL after freeing the object. This makes it safe to
double-call functions like delete_tempfile(), as the second
call treats the NULL input as a noop. Several callsites
follow this pattern.

The resulting patch does have a fair bit of noise, as each
caller needs to be converted to handle:

  1. Storing a pointer instead of the struct itself.
  2. Passing the pointer instead of taking the struct
     address.
  3. Handling a "struct tempfile *" return instead of a file
     descriptor.

We could play games to make this less noisy. For example, by
defining the tempfile like this:

        struct tempfile {
                struct heap_allocated_part_of_tempfile {
                        int fd;
                        ...etc
                } *actual_data;
        }

Callers would continue to have a "struct tempfile", and it
would be "active" only when the inner pointer was non-NULL.
But that just makes things more awkward in the long run.
There aren't that many callers, so we can simply bite
the bullet and adjust all of them. And the compiler makes it
easy for us to find them all.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
static struct tempfile *pidfile;
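As a hedged illustration of the heap-allocating calling convention the
tempfile message above describes (the path, error flag, and destination
are made up for the example):

        struct tempfile *t = create_tempfile(path);

        if (!t)
                die_errno("could not create temporary file");
        /* ... write through get_tempfile_fd(t) ... */
        if (err)
                delete_tempfile(&t);    /* frees the object and NULLs t */
        else
                rename_tempfile(&t, destination);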
static struct lock_file log_lock;
static struct string_list pack_garbage = STRING_LIST_INIT_DUP;
static void clean_pack_garbage(void)
{
        int i;
        for (i = 0; i < pack_garbage.nr; i++)
                unlink_or_warn(pack_garbage.items[i].string);
        string_list_clear(&pack_garbage, 0);
}
static void report_pack_garbage(unsigned seen_bits, const char *path)
{
        if (seen_bits == PACKDIR_FILE_IDX)
                string_list_append(&pack_garbage, path);
}
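A sketch of how this collector gets hooked up (the assignment happens in
cmd_gc(), outside this excerpt, so treat it as an assumption): packfile.c
exposes a report_garbage function pointer that is invoked for stray files
noticed while scanning the pack directory, and gc points it at the
function above:

        report_garbage = report_pack_garbage;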
static void process_log_file(void)
{
        struct stat st;

        if (fstat(get_lock_file_fd(&log_lock), &st)) {
                /*
                 * Perhaps there was an i/o error or another
                 * unlikely situation.  Try to make a note of
                 * this in gc.log along with any existing
                 * messages.
                 */
                int saved_errno = errno;
                fprintf(stderr, _("Failed to fstat %s: %s"),
                        get_tempfile_path(log_lock.tempfile),
                        strerror(saved_errno));
                fflush(stderr);
                commit_lock_file(&log_lock);
                errno = saved_errno;
        } else if (st.st_size) {
                /* There was some error recorded in the lock file */
                commit_lock_file(&log_lock);
        } else {
                /* No error, clean up any old gc.log */
                unlink(git_path("gc.log"));
                rollback_lock_file(&log_lock);
        }
}
static void process_log_file_at_exit(void)
{
        fflush(stderr);
        process_log_file();
}

static void process_log_file_on_signal(int signo)
{
        process_log_file();
        sigchain_pop(signo);
        raise(signo);
}
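A sketch of how these handlers would be registered before the daemonized
run (the registration itself happens in cmd_gc(), outside this excerpt,
so treat the call site as an assumption):

        sigchain_push_common(process_log_file_on_signal);
        atexit(process_log_file_at_exit);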
gc: handle & check gc.reflogExpire config

Don't redundantly run "git reflog expire --all" when gc.reflogExpire
and gc.reflogExpireUnreachable are set to "never", and die immediately
if those configuration values are bad.

As an earlier "assert lack of early exit" change to the tests for "git
reflog expire" shows, an early check of gc.reflogExpire{Unreachable,}
isn't wanted in general for "git reflog expire", but it makes sense
for "gc" because:

1) Similarly to 8ab5aa4bd8 ("parseopt: handle malformed --expire
   arguments more nicely", 2018-04-21), we'll now die early if the
   config variables are set to invalid values.

   We run "pack-refs" before "reflog expire", which can take a while,
   only to then die on an invalid gc.reflogExpire{Unreachable,}
   configuration.

2) Not invoking the command at all means it won't show up in trace
   output, which makes what's going on more obvious when the two are
   set to "never".

3) As a later change documents, we lock the refs when looping over the
   refs to expire, even in cases where we end up doing nothing due to
   this config.

   For the reasons noted in the earlier "assert lack of early exit"
   change, I don't think it's worth it to bend over backwards in "git
   reflog expire" itself to carefully detect if we'll really do
   nothing given the combination of all its possible options and skip
   that locking, but that's easy to detect here in "gc" where we'll
   only run "reflog expire" in a relatively simple mode.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
static int gc_config_is_timestamp_never(const char *var)
{
        const char *value;
        timestamp_t expire;

        if (!git_config_get_value(var, &value) && value) {
                if (parse_expiry_date(value, &expire))
                        die(_("failed to parse '%s' value '%s'"), var, value);
                return expire == 0;
        }
        return 0;
}
static void gc_config(void)
{
        const char *value;

        if (!git_config_get_value("gc.packrefs", &value)) {
                if (value && !strcmp(value, "notbare"))
                        pack_refs = -1;
                else
                        pack_refs = git_config_bool("gc.packrefs", value);
        }

        if (gc_config_is_timestamp_never("gc.reflogexpire") &&
            gc_config_is_timestamp_never("gc.reflogexpireunreachable"))
                prune_reflogs = 0;

        git_config_get_int("gc.aggressivewindow", &aggressive_window);
        git_config_get_int("gc.aggressivedepth", &aggressive_depth);
        git_config_get_int("gc.auto", &gc_auto_threshold);
        git_config_get_int("gc.autopacklimit", &gc_auto_pack_limit);
        git_config_get_bool("gc.autodetach", &detach_auto);
        git_config_get_expiry("gc.pruneexpire", &prune_expire);
        git_config_get_expiry("gc.worktreepruneexpire", &prune_worktrees_expire);
        git_config_get_expiry("gc.logexpiry", &gc_log_expire);

        git_config_get_ulong("gc.bigpackthreshold", &big_pack_threshold);
        git_config_get_ulong("pack.deltacachesize", &max_delta_cache_size);

        git_config(git_default_config, NULL);
}
static int too_many_loose_objects(void)
{
        /*
         * Quickly check if a "gc" is needed, by estimating how
         * many loose objects there are.  Because SHA-1 is evenly
         * distributed, we can check only one and get a reasonable
         * estimate.
         */
        DIR *dir;
        struct dirent *ent;
        int auto_threshold;
        int num_loose = 0;
        int needed = 0;
        const unsigned hexsz_loose = the_hash_algo->hexsz - 2;

        dir = opendir(git_path("objects/17"));
        if (!dir)
                return 0;

        auto_threshold = DIV_ROUND_UP(gc_auto_threshold, 256);
        while ((ent = readdir(dir)) != NULL) {
                if (strspn(ent->d_name, "0123456789abcdef") != hexsz_loose ||
                    ent->d_name[hexsz_loose] != '\0')
                        continue;
                if (++num_loose > auto_threshold) {
                        needed = 1;
                        break;
                }
        }
        closedir(dir);
        return needed;
}
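For concreteness, a worked check of the sampling arithmetic above with
the default gc.auto threshold of 6700: only one of the 256 fan-out
directories is scanned, so the per-directory cutoff is

        /* 6700 / 256 = 26.17..., rounded up */
        assert(DIV_ROUND_UP(6700, 256) == 27);

i.e. finding a 28th loose object in objects/17 is enough to report that
a gc is needed.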
static struct packed_git *find_base_packs(struct string_list *packs,
                                          unsigned long limit)
{
        struct packed_git *p, *base = NULL;

        for (p = get_all_packs(the_repository); p; p = p->next) {
                if (!p->pack_local)
                        continue;
                if (limit) {
                        if (p->pack_size >= limit)
                                string_list_append(packs, p->pack_name);
                } else if (!base || base->pack_size < p->pack_size) {
                        base = p;
                }
        }

        if (base)
                string_list_append(packs, base->pack_name);

        return base;
}
static int too_many_packs(void)
{
        struct packed_git *p;
        int cnt;

        if (gc_auto_pack_limit <= 0)
                return 0;

        for (cnt = 0, p = get_all_packs(the_repository); p; p = p->next) {
                if (!p->pack_local)
                        continue;
                if (p->pack_keep)
                        continue;
                /*
                 * Perhaps check the size of the pack and count only
                 * very small ones here?
                 */
                cnt++;
        }
        return gc_auto_pack_limit < cnt;
}
static uint64_t total_ram(void)
{
#if defined(HAVE_SYSINFO)
        struct sysinfo si;

        if (!sysinfo(&si))
                return si.totalram;
#elif defined(HAVE_BSD_SYSCTL) && (defined(HW_MEMSIZE) || defined(HW_PHYSMEM))
        int64_t physical_memory;
        int mib[2];
        size_t length;

        mib[0] = CTL_HW;
# if defined(HW_MEMSIZE)
        mib[1] = HW_MEMSIZE;
# else
        mib[1] = HW_PHYSMEM;
# endif
        length = sizeof(int64_t);
        if (!sysctl(mib, 2, &physical_memory, &length, NULL, 0))
                return physical_memory;
#elif defined(GIT_WINDOWS_NATIVE)
        MEMORYSTATUSEX memInfo;

        memInfo.dwLength = sizeof(MEMORYSTATUSEX);
        if (GlobalMemoryStatusEx(&memInfo))
                return memInfo.ullTotalPhys;
#endif
        return 0;
}
static uint64_t estimate_repack_memory(struct packed_git *pack)
{
        unsigned long nr_objects = approximate_object_count();
        size_t os_cache, heap;

        if (!pack || !nr_objects)
                return 0;

        /*
         * First we have to scan through at least one pack.
         * Assume enough room in OS file cache to keep the entire pack
         * or we may accidentally evict data of other processes from
         * the cache.
         */
        os_cache = pack->pack_size + pack->index_size;
        /* then pack-objects needs lots more for book keeping */
        heap = sizeof(struct object_entry) * nr_objects;
        /*
         * internal rev-list --all --objects takes up some memory too,
         * let's say half of it is for blobs
         */
        heap += sizeof(struct blob) * nr_objects / 2;
        /*
         * and the other half is for trees (commits and tags are
         * usually insignificant)
         */
        heap += sizeof(struct tree) * nr_objects / 2;
        /* and then obj_hash[], underestimated in fact */
        heap += sizeof(struct object *) * nr_objects;
        /* revindex is used also */
        heap += sizeof(struct revindex_entry) * nr_objects;
        /*
         * read_sha1_file() (either at delta calculation phase, or
         * writing phase) also fills up the delta base cache
         */
        heap += delta_base_cache_limit;
        /* and of course pack-objects has its own delta cache */
        heap += max_delta_cache_size;

        return os_cache + heap;
}
static int keep_one_pack(struct string_list_item *item, void *data)
{
        strvec_pushf(&repack, "--keep-pack=%s", basename(item->string));
        return 0;
}
static void add_repack_all_option(struct string_list *keep_pack)
{
        if (prune_expire && !strcmp(prune_expire, "now"))
                strvec_push(&repack, "-a");
        else {
                strvec_push(&repack, "-A");
                if (prune_expire)
                        strvec_pushf(&repack, "--unpack-unreachable=%s", prune_expire);
        }

        if (keep_pack)
                for_each_string_list(keep_pack, keep_one_pack, NULL);
}
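For illustration, the resulting repack invocations (the -d and -l flags
come from the caller, per the comment in need_to_gc() below):

        /*
         * prune_expire == "now"         -> git repack -a -d -l
         * prune_expire == "2.weeks.ago" -> git repack -A -d -l
         *                                      --unpack-unreachable=2.weeks.ago
         */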
static void add_repack_incremental_option(void)
{
        strvec_push(&repack, "--no-write-bitmap-index");
}
static int need_to_gc(void)
{
        /*
         * Setting gc.auto to 0 or negative can disable the
         * automatic gc.
         */
        if (gc_auto_threshold <= 0)
                return 0;

        /*
         * If there are too many loose objects, but not too many
         * packs, we run "repack -d -l".  If there are too many packs,
         * we run "repack -A -d -l".  Otherwise we tell the caller
         * there is no need.
         */
        if (too_many_packs()) {
                struct string_list keep_pack = STRING_LIST_INIT_NODUP;

                if (big_pack_threshold) {
                        find_base_packs(&keep_pack, big_pack_threshold);
                        if (keep_pack.nr >= gc_auto_pack_limit) {
                                big_pack_threshold = 0;
                                string_list_clear(&keep_pack, 0);
                                find_base_packs(&keep_pack, 0);
                        }
                } else {
                        struct packed_git *p = find_base_packs(&keep_pack, 0);
                        uint64_t mem_have, mem_want;

                        mem_have = total_ram();
                        mem_want = estimate_repack_memory(p);

                        /*
                         * Only allow 1/2 of memory for pack-objects, leave
                         * the rest for the OS and other processes in the
                         * system.
                         */
                        if (!mem_have || mem_want < mem_have / 2)
                                string_list_clear(&keep_pack, 0);
                }

                add_repack_all_option(&keep_pack);
                string_list_clear(&keep_pack, 0);
        } else if (too_many_loose_objects())
                add_repack_incremental_option();
        else
                return 0;

        if (run_hook_le(NULL, "pre-auto-gc", NULL))
                return 0;
        return 1;
}
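A descriptive aside on the final check above:

        /*
         * A repository can veto automatic gc by installing a pre-auto-gc
         * hook that exits nonzero; run_hook_le() reports that status, and
         * need_to_gc() then returns 0, as if no housekeeping were required.
         */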
/* return NULL on success, else hostname running the gc */
static const char *lock_repo_for_gc(int force, pid_t *ret_pid)
{
        struct lock_file lock = LOCK_INIT;
        char my_host[HOST_NAME_MAX + 1];
        struct strbuf sb = STRBUF_INIT;
        struct stat st;
        uintmax_t pid;
        FILE *fp;
        int fd;
        char *pidfile_path;

        if (is_tempfile_active(pidfile))
                /* already locked */
                return NULL;

        if (xgethostname(my_host, sizeof(my_host)))
                xsnprintf(my_host, sizeof(my_host), "unknown");

        pidfile_path = git_pathdup("gc.pid");
        fd = hold_lock_file_for_update(&lock, pidfile_path,
                                       LOCK_DIE_ON_ERROR);
        if (!force) {
                static char locking_host[HOST_NAME_MAX + 1];
                static char *scan_fmt;
                int should_exit;

                if (!scan_fmt)
                        scan_fmt = xstrfmt("%s %%%ds", "%"SCNuMAX, HOST_NAME_MAX);
                fp = fopen(pidfile_path, "r");
                memset(locking_host, 0, sizeof(locking_host));
                should_exit =
                        fp != NULL &&
                        !fstat(fileno(fp), &st) &&
                        /*
                         * 12 hour limit is very generous as gc should
                         * never take that long. On the other hand we
                         * don't really need a strict limit here,
                         * running gc --auto one day late is not a big
                         * problem. --force can be used in manual gc
                         * after the user verifies that no gc is
                         * running.
                         */
                        time(NULL) - st.st_mtime <= 12 * 3600 &&
                        fscanf(fp, scan_fmt, &pid, locking_host) == 2 &&
                        /* be gentle to concurrent "gc" on remote hosts */
                        (strcmp(locking_host, my_host) || !kill(pid, 0) || errno == EPERM);
                if (fp != NULL)
                        fclose(fp);
                if (should_exit) {
                        if (fd >= 0)
                                rollback_lock_file(&lock);
                        *ret_pid = pid;
                        free(pidfile_path);
                        return locking_host;
                }
        }

        strbuf_addf(&sb, "%"PRIuMAX" %s",
                    (uintmax_t) getpid(), my_host);
        write_in_full(fd, sb.buf, sb.len);
        strbuf_release(&sb);
        commit_lock_file(&lock);
        pidfile = register_tempfile(pidfile_path);
        free(pidfile_path);
        return NULL;
}
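A hypothetical example of the gc.pid contents produced above (the pid
and hostname are invented for illustration):

        /*
         * written: "12345 buildhost" via the "%"PRIuMAX" %s" format;
         * read back with scan_fmt, which xstrfmt() builds as a
         * "%<SCNuMAX> %<HOST_NAME_MAX>s"-style format so the hostname
         * cannot overflow locking_host when checking for a concurrent gc.
         */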
|
|
|
|
|
gc: do not return error for prior errors in daemonized mode
Some build machines started consistently failing to fetch updated
source using "repo sync", with error
error: The last gc run reported the following. Please correct the root cause
and remove /build/.repo/projects/tools/git.git/gc.log.
Automatic cleanup will not be performed until the file is removed.
warning: There are too many unreachable loose objects; run 'git prune' to remove them.
The cause takes some time to describe.
In v2.0.0-rc0~145^2 (gc: config option for running --auto in
background, 2014-02-08), "git gc --auto" learned to run in the
background instead of blocking the invoking command. In this mode, it
closed stderr to avoid interleaving output with any subsequent
commands, causing warnings like the above to be swallowed; v2.6.3~24^2
(gc: save log from daemonized gc --auto and print it next time,
2015-09-19) addressed that by storing any diagnostic output in
.git/gc.log and allowing the next "git gc --auto" run to print it.
To avoid wasteful repeated fruitless gcs, when gc.log is present, the
subsequent "gc --auto" would die after printing its contents. Most
git commands, such as "git fetch", ignore the exit status from "git gc
--auto" so all is well at this point: the user gets to see the error
message, and the fetch succeeds, without a wasteful additional attempt
at an automatic gc.
External tools like repo[1], though, do care about the exit status
from "git gc --auto". In non-daemonized mode, the exit status is
straightforward: if there is an error, it is nonzero, but after a
warning like the above, the status is zero. The daemonized mode, as a
side effect of the other properties provided, offers a very strange
exit code convention:
- if no housekeeping was required, the exit status is 0
- the first real run, after forking into the background, returns exit
status 0 unconditionally. The parent process has no way to know
whether gc will succeed.
- if there is any diagnostic output in gc.log, subsequent runs return
a nonzero exit status to indicate that gc was not triggered.
There's nothing for the calling program to act on the basis of that
error. Use status 0 consistently instead, to indicate that we decided
not to run a gc (just like if no housekeeping was required). This
way, repo and similar tools can get the benefit of the same behavior
as tools like "git fetch" that ignore the exit status from gc --auto.
Once the period of time described by gc.pruneExpire elapses, the
unreachable loose objects will be removed by "git gc --auto"
automatically.
[1] https://gerrit-review.googlesource.com/c/git-repo/+/10598/
Reported-by: Andrii Dehtiarov <adehtiarov@google.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-17 09:57:40 +03:00
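With the convention fixed as above, a caller can treat any nonzero status from "git gc --auto" as a real failure of this run. A hedged sketch, using the same run_command_v_opt helper this file already uses (maybe_auto_gc is an illustrative name, not git API):

static int maybe_auto_gc(void)
{
	const char *argv[] = { "gc", "--auto", NULL };

	/* nonzero now means this run failed, not "a gc.log exists" */
	if (run_command_v_opt(argv, RUN_GIT_CMD))
		return error(_("automatic gc failed"));
	return 0;
}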
|
|
|
/*
|
|
|
|
* Returns 0 if there was no previous error and gc can proceed, 1 if
|
|
|
|
* gc should not proceed due to an error in the last run. Prints a
|
2019-11-05 20:07:23 +03:00
|
|
|
* message and returns -1 if an error occurred while reading gc.log
|
2018-07-17 09:57:40 +03:00
|
|
|
*/
|
|
|
|
static int report_last_gc_error(void)
|
2015-09-19 08:13:23 +03:00
|
|
|
{
|
|
|
|
struct strbuf sb = STRBUF_INIT;
|
2018-07-17 09:57:40 +03:00
|
|
|
int ret = 0;
|
2018-07-17 09:53:21 +03:00
|
|
|
ssize_t len;
|
gc: ignore old gc.log files
A server can end up in a state where there are lots of unreferenced
loose objects (say, because many users are doing a bunch of rebasing
and pushing their rebased branches). Running "git gc --auto" in
this state would cause a gc.log file to be created, preventing
future auto gcs, causing pack files to pile up. Since many git
operations are O(n) in the number of pack files, this would lead to
poor performance.
Git should never get itself into a state where it refuses to do any
maintenance, just because at some point some piece of the maintenance
didn't make progress.
Teach Git to ignore gc.log files which are older than (by default)
one day, a threshold that can be tweaked via the gc.logExpiry
configuration variable. That way, these pack files will get cleaned up, if
necessary, at least once per day. And operators who find a need for
more-frequent gcs can adjust gc.logExpiry to meet their needs.
There is also some cleanup: a successful manual gc, or a
warning-free auto gc with an old log file, will remove any old
gc.log files.
It might still happen that manual intervention is required
(e.g. because the repo is corrupt), but at the very least it won't
be because Git is too dumb to try again.
Signed-off-by: David Turner <dturner@twosigma.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-11 00:28:22 +03:00
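A condensed sketch of the plumbing this adds, assuming the config read sits in gc_config() next to the other gc.* settings (the parse_expiry_date() call appears verbatim in cmd_gc() below):

static const char *gc_log_expire = "1.day.ago";   /* documented default */
static timestamp_t gc_log_expire_time;

/* in gc_config(): */
git_config_get_expiry("gc.logexpiry", &gc_log_expire);

/* in cmd_gc(): */
if (parse_expiry_date(gc_log_expire, &gc_log_expire_time))
	die(_("failed to parse gc.logexpiry value %s"), gc_log_expire);

/* in report_last_gc_error(): a log older than the threshold is ignored */
if (st.st_mtime < gc_log_expire_time)
	goto done;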
|
|
|
struct stat st;
|
|
|
|
char *gc_log_path = git_pathdup("gc.log");
|
2015-09-19 08:13:23 +03:00
|
|
|
|
2017-02-11 00:28:22 +03:00
|
|
|
if (stat(gc_log_path, &st)) {
|
|
|
|
if (errno == ENOENT)
|
|
|
|
goto done;
|
|
|
|
|
2018-07-17 09:57:40 +03:00
|
|
|
ret = error_errno(_("cannot stat '%s'"), gc_log_path);
|
|
|
|
goto done;
|
2017-02-11 00:28:22 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (st.st_mtime < gc_log_expire_time)
|
|
|
|
goto done;
|
|
|
|
|
2018-07-17 09:53:21 +03:00
|
|
|
len = strbuf_read_file(&sb, gc_log_path, 0);
|
|
|
|
if (len < 0)
|
2018-07-17 09:57:40 +03:00
|
|
|
ret = error_errno(_("cannot read '%s'"), gc_log_path);
|
|
|
|
else if (len > 0) {
|
|
|
|
/*
|
|
|
|
* A previous gc failed. Report the error, and don't
|
|
|
|
* bother with an automatic gc run since it is likely
|
|
|
|
* to fail in the same way.
|
|
|
|
*/
|
|
|
|
warning(_("The last gc run reported the following. "
|
2015-09-19 08:13:23 +03:00
|
|
|
"Please correct the root cause\n"
|
|
|
|
"and remove %s.\n"
|
|
|
|
"Automatic cleanup will not be performed "
|
|
|
|
"until the file is removed.\n\n"
|
|
|
|
"%s"),
|
2017-02-11 00:28:22 +03:00
|
|
|
gc_log_path, sb.buf);
|
2018-07-17 09:57:40 +03:00
|
|
|
ret = 1;
|
|
|
|
}
|
2015-09-19 08:13:23 +03:00
|
|
|
strbuf_release(&sb);
|
2017-02-11 00:28:22 +03:00
|
|
|
done:
|
|
|
|
free(gc_log_path);
|
2018-07-17 09:57:40 +03:00
|
|
|
return ret;
|
2015-09-19 08:13:23 +03:00
|
|
|
}
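For reference, the intended handling of the three return values, mirroring the auto-gc path in cmd_gc() further below:

int ret = report_last_gc_error();
if (ret < 0)
	exit(128);  /* I/O error, message already printed */
if (ret == 1)
	return 0;   /* last gc failed; quietly skip this auto gc */
/* ret == 0: proceed */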
|
|
|
|
|
2018-07-17 09:54:16 +03:00
|
|
|
static void gc_before_repack(void)
|
2014-05-25 04:38:29 +04:00
|
|
|
{
|
2019-03-15 18:59:54 +03:00
|
|
|
/*
|
|
|
|
* We may be called twice, as both the pre- and
|
|
|
|
* post-daemonized phases will call us, but running these
|
|
|
|
* commands more than once is pointless and wasteful.
|
|
|
|
*/
|
|
|
|
static int done = 0;
|
|
|
|
if (done++)
|
|
|
|
return;
|
|
|
|
|
2020-07-29 03:37:20 +03:00
|
|
|
if (pack_refs && run_command_v_opt(pack_refs_cmd.v, RUN_GIT_CMD))
|
|
|
|
die(FAILED_RUN, pack_refs_cmd.v[0]);
|
2014-05-25 04:38:29 +04:00
|
|
|
|
2020-07-29 03:37:20 +03:00
|
|
|
if (prune_reflogs && run_command_v_opt(reflog.v, RUN_GIT_CMD))
|
|
|
|
die(FAILED_RUN, reflog.v[0]);
|
2014-05-25 04:38:29 +04:00
|
|
|
}
|
|
|
|
|
2007-03-14 04:58:22 +03:00
|
|
|
int cmd_gc(int argc, const char **argv, const char *prefix)
|
|
|
|
{
|
2007-11-02 04:02:27 +03:00
|
|
|
int aggressive = 0;
|
2007-09-06 00:01:37 +04:00
|
|
|
int auto_gc = 0;
|
2008-03-01 00:53:39 +03:00
|
|
|
int quiet = 0;
|
2013-08-08 15:05:38 +04:00
|
|
|
int force = 0;
|
|
|
|
const char *name;
|
|
|
|
pid_t pid;
|
2015-09-19 08:13:23 +03:00
|
|
|
int daemonized = 0;
|
2020-11-20 14:55:22 +03:00
|
|
|
int keep_largest_pack = -1;
|
2018-04-21 06:13:13 +03:00
|
|
|
timestamp_t dummy;
|
2007-03-14 04:58:22 +03:00
|
|
|
|
2007-11-02 04:02:27 +03:00
|
|
|
struct option builtin_gc_options[] = {
|
2012-08-20 16:32:14 +04:00
|
|
|
OPT__QUIET(&quiet, N_("suppress progress reporting")),
|
|
|
|
{ OPTION_STRING, 0, "prune", &prune_expire, N_("date"),
|
|
|
|
N_("prune unreferenced objects"),
|
2009-02-15 01:10:10 +03:00
|
|
|
PARSE_OPT_OPTARG, NULL, (intptr_t)prune_expire },
|
2013-08-03 15:51:19 +04:00
|
|
|
OPT_BOOL(0, "aggressive", &aggressive, N_("be more thorough (increased runtime)")),
|
2018-02-09 14:01:58 +03:00
|
|
|
OPT_BOOL_F(0, "auto", &auto_gc, N_("enable auto-gc mode"),
|
|
|
|
PARSE_OPT_NOCOMPLETE),
|
|
|
|
OPT_BOOL_F(0, "force", &force,
|
|
|
|
N_("force running gc even if there may be another gc running"),
|
|
|
|
PARSE_OPT_NOCOMPLETE),
|
2020-11-20 14:55:22 +03:00
|
|
|
OPT_BOOL(0, "keep-largest-pack", &keep_largest_pack,
|
2018-04-15 18:36:14 +03:00
|
|
|
N_("repack all other packs except the largest pack")),
|
2007-11-02 04:02:27 +03:00
|
|
|
OPT_END()
|
|
|
|
};
|
|
|
|
|
2010-10-22 10:47:19 +04:00
|
|
|
if (argc == 2 && !strcmp(argv[1], "-h"))
|
|
|
|
usage_with_options(builtin_gc_usage, builtin_gc_options);
|
|
|
|
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_pushl(&pack_refs_cmd, "pack-refs", "--all", "--prune", NULL);
|
|
|
|
strvec_pushl(&reflog, "reflog", "expire", "--all", NULL);
|
|
|
|
strvec_pushl(&repack, "repack", "-d", "-l", NULL);
|
|
|
|
strvec_pushl(&prune, "prune", "--expire", NULL);
|
|
|
|
strvec_pushl(&prune_worktrees, "worktree", "prune", "--expire", NULL);
|
|
|
|
strvec_pushl(&rerere, "rerere", "gc", NULL);
|
2012-04-19 01:10:19 +04:00
|
|
|
|
2017-02-11 00:28:22 +03:00
|
|
|
/* default expiry time, overwritten in gc_config */
|
2014-08-07 20:21:22 +04:00
|
|
|
gc_config();
|
2017-02-11 00:28:22 +03:00
|
|
|
if (parse_expiry_date(gc_log_expire, &gc_log_expire_time))
|
2018-04-23 16:36:14 +03:00
|
|
|
die(_("failed to parse gc.logexpiry value %s"), gc_log_expire);
|
2007-03-14 04:58:22 +03:00
|
|
|
|
|
|
|
if (pack_refs < 0)
|
|
|
|
pack_refs = !is_bare_repository();
|
|
|
|
|
2009-05-23 22:53:12 +04:00
|
|
|
argc = parse_options(argc, argv, prefix, builtin_gc_options,
|
|
|
|
builtin_gc_usage, 0);
|
2007-11-02 04:02:27 +03:00
|
|
|
if (argc > 0)
|
|
|
|
usage_with_options(builtin_gc_usage, builtin_gc_options);
|
|
|
|
|
2018-04-21 06:13:13 +03:00
|
|
|
if (prune_expire && parse_expiry_date(prune_expire, &dummy))
|
|
|
|
die(_("failed to parse prune expiry value %s"), prune_expire);
|
|
|
|
|
2007-11-02 04:02:27 +03:00
|
|
|
if (aggressive) {
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_push(&repack, "-f");
|
gc --aggressive: make --depth configurable
When 1c192f3 (gc --aggressive: make it really aggressive - 2007-12-06)
made --depth=250 the default value, it didn't really explain the
reasoning behind it, especially the pros and cons of --depth=250.
An old mail from Linus below explains it at length. Long story short,
--depth=250 is a disk saver and a performance killer. Not everybody
agrees on that aggressiveness. Let the user configure it.
From: Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [PATCH] gc --aggressive: make it really aggressive
Date: Thu, 6 Dec 2007 08:19:24 -0800 (PST)
Message-ID: <alpine.LFD.0.9999.0712060803430.13796@woody.linux-foundation.org>
Gmane-URL: http://article.gmane.org/gmane.comp.gcc.devel/94637
On Thu, 6 Dec 2007, Harvey Harrison wrote:
>
> 7:41:25elapsed 86%CPU
Heh. And this is why you want to do it exactly *once*, and then just
export the end result for others ;)
> -r--r--r-- 1 hharrison hharrison 324094684 2007-12-06 07:26 pack-1d46...pack
But yeah, especially if you allow longer delta chains, the end result can
be much smaller (and what makes the one-time repack more expensive is the
window size, not the delta chain - you could make the delta chains longer
with no cost overhead at packing time)
HOWEVER.
The longer delta chains do make it potentially much more expensive to then
use old history. So there's a trade-off. And quite frankly, a delta depth
of 250 is likely going to cause overflows in the delta cache (which is
only 256 entries in size *and* it's a hash, so it's going to start having
hash conflicts long before hitting the 250 depth limit).
So when I said "--depth=250 --window=250", I chose those numbers more as
an example of extremely aggressive packing, and I'm not at all sure that
the end result is necessarily wonderfully usable. It's going to save disk
space (and network bandwidth - the delta's will be re-used for the network
protocol too!), but there are definitely downsides too, and using long
delta chains may simply not be worth it in practice.
(And some of it might just want to have git tuning, ie if people think
that long deltas are worth it, we could easily just expand on the delta
hash, at the cost of some more memory used!)
That said, the good news is that working with *new* history will not be
affected negatively, and if you want to be _really_ sneaky, there are ways
to say "create a pack that contains the history up to a version one year
ago, and be very aggressive about those old versions that we still want to
have around, but do a separate pack for newer stuff using less aggressive
parameters"
So this is something that can be tweaked, although we don't really have
any really nice interfaces for stuff like that (ie the git delta cache
size is hardcoded in the sources and cannot be set in the config file, and
the "pack old history more aggressively" involves some manual scripting
and knowing how "git pack-objects" works rather than any nice simple
command line switch).
So the thing to take away from this is:
- git is certainly flexible as hell
- .. but to get the full power you may need to tweak things
- .. happily you really only need to have one person to do the tweaking,
and the tweaked end results will be available to others that do not
need to know/care.
And whether the difference between 320MB and 500MB is worth any really
involved tweaking (considering the potential downsides), I really don't
know. Only testing will tell.
Linus
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-03-16 17:35:03 +04:00
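A sketch of the config hooks this change relies on, assuming they sit in gc_config() with the other gc.* reads; values of zero or below fall through to repack's own defaults, as the guards just below show:

/* in gc_config(): */
git_config_get_int("gc.aggressivewindow", &aggressive_window);
git_config_get_int("gc.aggressivedepth", &aggressive_depth);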
|
|
|
if (aggressive_depth > 0)
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_pushf(&repack, "--depth=%d", aggressive_depth);
|
2012-04-19 01:10:19 +04:00
|
|
|
if (aggressive_window > 0)
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_pushf(&repack, "--window=%d", aggressive_window);
|
2007-03-14 04:58:22 +03:00
|
|
|
}
|
2008-03-01 00:53:39 +03:00
|
|
|
if (quiet)
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_push(&repack, "-q");
|
2007-03-14 04:58:22 +03:00
|
|
|
|
2007-09-06 00:01:37 +04:00
|
|
|
if (auto_gc) {
|
|
|
|
/*
|
|
|
|
* Auto-gc should be as unintrusive as possible.
|
|
|
|
*/
|
|
|
|
if (!need_to_gc())
|
|
|
|
return 0;
|
2014-02-08 11:08:52 +04:00
|
|
|
if (!quiet) {
|
|
|
|
if (detach_auto)
|
|
|
|
fprintf(stderr, _("Auto packing the repository in background for optimum performance.\n"));
|
|
|
|
else
|
|
|
|
fprintf(stderr, _("Auto packing the repository for optimum performance.\n"));
|
|
|
|
fprintf(stderr, _("See \"git help gc\" for manual housekeeping.\n"));
|
|
|
|
}
|
2014-05-25 04:38:29 +04:00
|
|
|
if (detach_auto) {
|
2018-07-17 09:57:40 +03:00
|
|
|
int ret = report_last_gc_error();
|
|
|
|
if (ret < 0)
|
2019-11-05 20:07:23 +03:00
|
|
|
/* an I/O error occurred, already reported */
|
2018-07-17 09:57:40 +03:00
|
|
|
exit(128);
|
|
|
|
if (ret == 1)
|
|
|
|
/* Last gc --auto failed. Skip this one. */
|
|
|
|
return 0;
|
2015-09-19 08:13:23 +03:00
|
|
|
|
gc: run pre-detach operations under lock
We normally try to avoid having two auto-gc operations run
at the same time, because it wastes resources. This was done
long ago in 64a99eb47 (gc: reject if another gc is running,
unless --force is given, 2013-08-08).
When we do a detached auto-gc, we run the ref-related
commands _before_ detaching, to avoid confusing lock
contention. This was done by 62aad1849 (gc --auto: do not
lock refs in the background, 2014-05-25).
These two features do not interact well. The pre-detach
operations are run before we check the gc.pid lock, meaning
that on a busy repository we may run many of them
concurrently. Ideally we'd take the lock before spawning any
operations, and hold it for the duration of the program.
This is tricky, though, with the way the pid-file interacts
with the daemonize() process. Other processes will check
that the pid recorded in the pid-file still exists. But
detaching causes us to fork and continue running under a
new pid. So if we take the lock before detaching, the
pid-file will have a bogus pid in it. We'd have to go back
and update it with the new pid after detaching. We'd also
have to play some tricks with the tempfile subsystem to
tweak the "owner" field, so that the parent process does not
clean it up on exit, but the child process does.
Instead, we can do something a bit simpler: take the lock
only for the duration of the pre-detach work, then detach,
then take it again for the post-detach work. Technically,
this means that the post-detach lock could lose to another
process doing pre-detach work. But in the long run this
works out.
That second process would then follow up by doing
post-detach work. Unless it was in turn blocked by a third
process doing pre-detach work, and so on. This could in
theory go on indefinitely, as the pre-detach work does not
repack, and so need_to_gc() will continue to trigger. But
in each round we are racing between the pre- and post-detach
locks. Eventually, one of the post-detach locks will win the
race and complete the full gc. So in the worst case, we may
racily repeat the pre-detach work, but we would never do so
simultaneously (it would happen via a sequence of serialized
race-wins).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-11 12:06:35 +03:00
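Condensed, the sequence this patch arrives at looks like the following; it is a restatement of the hunks below, not new behavior:

if (lock_repo_for_gc(force, &pid))
	return 0;              /* another gc already holds the lock */
gc_before_repack();            /* pre-detach work, under the lock */
delete_tempfile(&pidfile);     /* release before forking... */
daemonized = !daemonize();     /* ...post-detach work re-locks below */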
|
|
|
if (lock_repo_for_gc(force, &pid))
|
|
|
|
return 0;
|
2018-07-17 09:54:16 +03:00
|
|
|
gc_before_repack(); /* dies on failure */
|
2017-07-11 12:06:35 +03:00
|
|
|
delete_tempfile(&pidfile);
|
|
|
|
|
2014-02-08 11:08:52 +04:00
|
|
|
/*
|
|
|
|
* failure to daemonize is ok, we'll continue
|
|
|
|
* in foreground
|
|
|
|
*/
|
2015-09-19 08:13:23 +03:00
|
|
|
daemonized = !daemonize();
|
2014-05-25 04:38:29 +04:00
|
|
|
}
|
2018-04-15 18:36:14 +03:00
|
|
|
} else {
|
|
|
|
struct string_list keep_pack = STRING_LIST_INIT_NODUP;
|
|
|
|
|
2020-11-20 14:55:22 +03:00
|
|
|
if (keep_largest_pack != -1) {
|
|
|
|
if (keep_largest_pack)
|
2018-04-15 18:36:15 +03:00
|
|
|
find_base_packs(&keep_pack, 0);
|
|
|
|
} else if (big_pack_threshold) {
|
|
|
|
find_base_packs(&keep_pack, big_pack_threshold);
|
2018-04-15 18:36:14 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
add_repack_all_option(&keep_pack);
|
|
|
|
string_list_clear(&keep_pack, 0);
|
|
|
|
}
|
2007-09-06 00:01:37 +04:00
|
|
|
|
2013-08-08 15:05:38 +04:00
|
|
|
name = lock_repo_for_gc(force, &pid);
|
|
|
|
if (name) {
|
|
|
|
if (auto_gc)
|
|
|
|
return 0; /* be quiet on --auto */
|
|
|
|
die(_("gc is already running on machine '%s' pid %"PRIuMAX" (use --force if not)"),
|
|
|
|
name, (uintmax_t)pid);
|
|
|
|
}
|
|
|
|
|
2015-09-19 08:13:23 +03:00
|
|
|
if (daemonized) {
|
|
|
|
hold_lock_file_for_update(&log_lock,
|
|
|
|
git_path("gc.log"),
|
|
|
|
LOCK_DIE_ON_ERROR);
|
2015-10-16 01:43:32 +03:00
|
|
|
dup2(get_lock_file_fd(&log_lock), 2);
|
2015-09-19 08:13:23 +03:00
|
|
|
sigchain_push_common(process_log_file_on_signal);
|
|
|
|
atexit(process_log_file_at_exit);
|
|
|
|
}
|
|
|
|
|
2018-07-17 09:54:16 +03:00
|
|
|
gc_before_repack();
|
2007-03-14 04:58:22 +03:00
|
|
|
|
2015-06-23 13:54:11 +03:00
|
|
|
if (!repository_format_precious_objects) {
|
2019-05-17 21:41:49 +03:00
|
|
|
close_object_store(the_repository->objects);
|
2020-07-29 03:37:20 +03:00
|
|
|
if (run_command_v_opt(repack.v, RUN_GIT_CMD))
|
|
|
|
die(FAILED_RUN, repack.v[0]);
|
2015-06-23 13:54:11 +03:00
|
|
|
|
|
|
|
if (prune_expire) {
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_push(&prune, prune_expire);
|
2015-06-23 13:54:11 +03:00
|
|
|
if (quiet)
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_push(&prune, "--no-progress");
|
2019-06-25 16:40:31 +03:00
|
|
|
if (has_promisor_remote())
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_push(&prune,
|
strvec: fix indentation in renamed calls
Code which split an argv_array call across multiple lines, like:
    argv_array_pushl(&args, "one argument",
                     "another argument", "and more",
                     NULL);
was recently mechanically renamed to use strvec, which results in
mis-matched indentation like:
    strvec_pushl(&args, "one argument",
                     "another argument", "and more",
                     NULL);
Let's fix these up to align the arguments with the opening paren. I did
this manually by sifting through the results of:
git jump grep 'strvec_.*,$'
and liberally applying my editor's auto-format. Most of the changes are
of the form shown above, though I also normalized a few that had
originally used a single-tab indentation (rather than our usual style of
aligning with the open paren). I also rewrapped a couple of obvious
cases (e.g., where previously too-long lines became short enough to fit
on one), but I wasn't aggressive about it. In cases broken to three or
more lines, the grouping of arguments is sometimes meaningful, and it
wasn't worth my time or reviewer time to ponder each case individually.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28 23:26:31 +03:00
|
|
|
"--exclude-promisor-objects");
|
2020-07-29 03:37:20 +03:00
|
|
|
if (run_command_v_opt(prune.v, RUN_GIT_CMD))
|
|
|
|
die(FAILED_RUN, prune.v[0]);
|
2015-06-23 13:54:11 +03:00
|
|
|
}
|
2009-02-15 01:10:10 +03:00
|
|
|
}
|
2007-03-14 04:58:22 +03:00
|
|
|
|
2014-11-30 11:24:53 +03:00
|
|
|
if (prune_worktrees_expire) {
|
2020-07-28 23:24:27 +03:00
|
|
|
strvec_push(&prune_worktrees, prune_worktrees_expire);
|
2020-07-29 03:37:20 +03:00
|
|
|
if (run_command_v_opt(prune_worktrees.v, RUN_GIT_CMD))
|
|
|
|
die(FAILED_RUN, prune_worktrees.v[0]);
|
2014-11-30 11:24:53 +03:00
|
|
|
}
|
|
|
|
|
2020-07-29 03:37:20 +03:00
|
|
|
if (run_command_v_opt(rerere.v, RUN_GIT_CMD))
|
|
|
|
die(FAILED_RUN, rerere.v[0]);
|
2007-03-14 04:58:22 +03:00
|
|
|
|
2015-11-04 06:05:08 +03:00
|
|
|
report_garbage = report_pack_garbage;
|
2018-03-23 20:45:21 +03:00
|
|
|
reprepare_packed_git(the_repository);
|
2018-12-16 01:04:01 +03:00
|
|
|
if (pack_garbage.nr > 0) {
|
2019-05-17 21:41:49 +03:00
|
|
|
close_object_store(the_repository->objects);
|
2015-11-04 06:05:08 +03:00
|
|
|
clean_pack_garbage();
|
2018-12-16 01:04:01 +03:00
|
|
|
}
|
2015-11-04 06:05:08 +03:00
|
|
|
|
2019-08-13 21:37:43 +03:00
|
|
|
prepare_repo_settings(the_repository);
|
|
|
|
if (the_repository->settings.gc_write_commit_graph == 1)
|
2020-02-04 08:51:50 +03:00
|
|
|
write_commit_graph_reachable(the_repository->objects->odb,
|
2019-09-09 22:26:36 +03:00
|
|
|
!quiet && !daemonized ? COMMIT_GRAPH_WRITE_PROGRESS : 0,
|
2019-08-13 21:37:43 +03:00
|
|
|
NULL);
|
2018-06-27 16:24:46 +03:00
|
|
|
|
2007-09-17 11:44:17 +04:00
|
|
|
if (auto_gc && too_many_loose_objects())
|
2011-02-23 02:42:24 +03:00
|
|
|
warning(_("There are too many unreachable loose objects; "
|
|
|
|
"run 'git prune' to remove them."));
|
2007-09-17 11:44:17 +04:00
|
|
|
|
2017-02-11 00:28:22 +03:00
|
|
|
if (!daemonized)
|
|
|
|
unlink(git_path("gc.log"));
|
|
|
|
|
2007-03-14 04:58:22 +03:00
|
|
|
return 0;
|
|
|
|
}
|
maintenance: create basic maintenance runner
The 'gc' builtin is our current entrypoint for automatically maintaining
a repository. This one tool does many operations, such as repacking the
repository, packing refs, and rewriting the commit-graph file. The name
implies it performs "garbage collection", which can mean several different
things, and some users may not want to use an operation that rewrites
the entire object database.
Create a new 'maintenance' builtin that will become a more general-
purpose command. To start, it will only support the 'run' subcommand,
but will later expand to add subcommands for scheduling maintenance in
the background.
For now, the 'maintenance' builtin is a thin shim over the 'gc' builtin.
In fact, the only option is the '--auto' toggle, which is handed
directly to the 'gc' builtin. The current change is isolated to this
simple operation to prevent more interesting logic from being lost in
all of the boilerplate of adding a new builtin.
Use the existing builtin/gc.c file because we want to share code between the
two builtins. It is possible that we will have 'maintenance' replace the
'gc' builtin entirely at some point, leaving 'git gc' as an alias for
some specific arguments to 'git maintenance run'.
Create a new test_subcommand helper that allows us to test if a certain
subcommand was run. It requires storing the GIT_TRACE2_EVENT logs in a
file. A negation mode is available that will be used in later tests.
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 21:11:42 +03:00
|
|
|
|
2020-09-11 20:49:15 +03:00
|
|
|
static const char *const builtin_maintenance_run_usage[] = {
|
|
|
|
N_("git maintenance run [--auto] [--[no-]quiet] [--task=<task>] [--schedule]"),
|
maintenance: create basic maintenance runner
2020-09-17 21:11:42 +03:00
|
|
|
NULL
|
|
|
|
};
|
|
|
|
|
2020-09-11 20:49:15 +03:00
|
|
|
enum schedule_priority {
|
|
|
|
SCHEDULE_NONE = 0,
|
|
|
|
SCHEDULE_WEEKLY = 1,
|
|
|
|
SCHEDULE_DAILY = 2,
|
|
|
|
SCHEDULE_HOURLY = 3,
|
|
|
|
};
|
|
|
|
|
|
|
|
static enum schedule_priority parse_schedule(const char *value)
|
|
|
|
{
|
|
|
|
if (!value)
|
|
|
|
return SCHEDULE_NONE;
|
|
|
|
if (!strcasecmp(value, "hourly"))
|
|
|
|
return SCHEDULE_HOURLY;
|
|
|
|
if (!strcasecmp(value, "daily"))
|
|
|
|
return SCHEDULE_DAILY;
|
|
|
|
if (!strcasecmp(value, "weekly"))
|
|
|
|
return SCHEDULE_WEEKLY;
|
|
|
|
return SCHEDULE_NONE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int maintenance_opt_schedule(const struct option *opt, const char *arg,
|
|
|
|
int unset)
|
|
|
|
{
|
|
|
|
enum schedule_priority *priority = opt->value;
|
|
|
|
|
|
|
|
if (unset)
|
|
|
|
die(_("--no-schedule is not allowed"));
|
|
|
|
|
|
|
|
*priority = parse_schedule(arg);
|
|
|
|
|
|
|
|
if (!*priority)
|
|
|
|
die(_("unrecognized --schedule argument '%s'"), arg);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
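As a usage illustration (not part of the builtin), parse_schedule() above
matches case-insensitively via strcasecmp(), so the following assertions
would hold; the wrapper function is hypothetical:

#include <assert.h>

static void parse_schedule_examples(void)
{
	assert(parse_schedule("daily") == SCHEDULE_DAILY);
	assert(parse_schedule("Daily") == SCHEDULE_DAILY);	/* case ignored */
	assert(parse_schedule("WEEKLY") == SCHEDULE_WEEKLY);
	assert(parse_schedule(NULL) == SCHEDULE_NONE);		/* value missing */
	assert(parse_schedule("monthly") == SCHEDULE_NONE);	/* unknown string */
}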
|
|
|
|
|
maintenance: create basic maintenance runner
2020-09-17 21:11:42 +03:00
|
|
|
struct maintenance_run_opts {
|
|
|
|
int auto_flag;
|
2020-09-17 21:11:43 +03:00
|
|
|
int quiet;
|
2020-09-11 20:49:15 +03:00
|
|
|
enum schedule_priority schedule;
|
maintenance: create basic maintenance runner
2020-09-17 21:11:42 +03:00
|
|
|
};
|
|
|
|
|
2020-09-17 21:11:51 +03:00
|
|
|
/* Remember to update object flag allocation in object.h */
|
|
|
|
#define SEEN (1u<<0)
|
|
|
|
|
|
|
|
struct cg_auto_data {
|
|
|
|
int num_not_in_graph;
|
|
|
|
int limit;
|
|
|
|
};
|
|
|
|
|
|
|
|
static int dfs_on_ref(const char *refname,
|
|
|
|
const struct object_id *oid, int flags,
|
|
|
|
void *cb_data)
|
|
|
|
{
|
|
|
|
struct cg_auto_data *data = (struct cg_auto_data *)cb_data;
|
|
|
|
int result = 0;
|
|
|
|
struct object_id peeled;
|
|
|
|
struct commit_list *stack = NULL;
|
|
|
|
struct commit *commit;
|
|
|
|
|
|
|
|
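	/* Prefer the peeled object so annotated tags are walked from their commit. */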
if (!peel_ref(refname, &peeled))
|
|
|
|
oid = &peeled;
|
|
|
|
if (oid_object_info(the_repository, oid, NULL) != OBJ_COMMIT)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
commit = lookup_commit(the_repository, oid);
|
|
|
|
if (!commit)
|
|
|
|
return 0;
|
2020-10-08 03:50:39 +03:00
|
|
|
if (parse_commit(commit) ||
|
|
|
|
commit_graph_position(commit) != COMMIT_NOT_FROM_GRAPH)
|
2020-09-17 21:11:51 +03:00
|
|
|
return 0;
|
|
|
|
|
2020-10-08 03:50:39 +03:00
|
|
|
data->num_not_in_graph++;
|
|
|
|
|
|
|
|
if (data->num_not_in_graph >= data->limit)
|
|
|
|
return 1;
|
|
|
|
|
2020-09-17 21:11:51 +03:00
|
|
|
commit_list_append(commit, &stack);
|
|
|
|
|
|
|
|
while (!result && stack) {
|
|
|
|
struct commit_list *parent;
|
|
|
|
|
|
|
|
commit = pop_commit(&stack);
|
|
|
|
|
|
|
|
for (parent = commit->parents; parent; parent = parent->next) {
|
|
|
|
if (parse_commit(parent->item) ||
|
|
|
|
commit_graph_position(parent->item) != COMMIT_NOT_FROM_GRAPH ||
|
|
|
|
parent->item->object.flags & SEEN)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
parent->item->object.flags |= SEEN;
|
|
|
|
data->num_not_in_graph++;
|
|
|
|
|
|
|
|
if (data->num_not_in_graph >= data->limit) {
|
|
|
|
result = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
commit_list_append(parent->item, &stack);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
free_commit_list(stack);
|
|
|
|
return result;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int should_write_commit_graph(void)
|
|
|
|
{
|
|
|
|
int result;
|
|
|
|
struct cg_auto_data data;
|
|
|
|
|
|
|
|
data.num_not_in_graph = 0;
|
|
|
|
data.limit = 100;
|
|
|
|
git_config_get_int("maintenance.commit-graph.auto",
|
|
|
|
&data.limit);
|
|
|
|
|
|
|
|
if (!data.limit)
|
|
|
|
return 0;
|
|
|
|
if (data.limit < 0)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
result = for_each_ref(dfs_on_ref, &data);
|
|
|
|
|
2020-10-31 15:46:08 +03:00
|
|
|
repo_clear_commit_marks(the_repository, SEEN);
|
2020-09-17 21:11:51 +03:00
|
|
|
|
|
|
|
return result;
|
|
|
|
}
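The maintenance.commit-graph.auto handling above follows a tri-state
convention that the loose-objects and incremental-repack tasks below reuse.
A minimal sketch of that convention (the function name is illustrative):

/*
 * Shared "auto" config convention:
 *   0  -> never run the task
 *   <0 -> always run the task
 *   >0 -> run once the counted items reach the limit
 */
static int auto_condition_from(int limit, int counted)
{
	if (!limit)
		return 0;
	if (limit < 0)
		return 1;
	return counted >= limit;
}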
|
|
|
|
|
2020-09-17 21:11:46 +03:00
|
|
|
static int run_write_commit_graph(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct child_process child = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
child.git_cmd = 1;
|
|
|
|
strvec_pushl(&child.args, "commit-graph", "write",
|
|
|
|
"--split", "--reachable", NULL);
|
|
|
|
|
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&child.args, "--no-progress");
|
|
|
|
|
|
|
|
return !!run_command(&child);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int maintenance_task_commit_graph(struct maintenance_run_opts *opts)
|
|
|
|
{
|
2020-10-12 16:28:34 +03:00
|
|
|
prepare_repo_settings(the_repository);
|
|
|
|
if (!the_repository->settings.core_commit_graph)
|
|
|
|
return 0;
|
|
|
|
|
2020-09-17 21:11:46 +03:00
|
|
|
close_object_store(the_repository->objects);
|
|
|
|
if (run_write_commit_graph(opts)) {
|
|
|
|
error(_("failed to write commit-graph"));
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
maintenance: add prefetch task
When working with very large repositories, an incremental 'git fetch'
command can download a large amount of data. If there are many other
users pushing to a common repo, then this data can rival the initial
pack-file size of a 'git clone' of a medium-size repo.
Users may want to keep the data on their local repos as close as
possible to the data on the remote repos by fetching periodically in
the background. This can break up a large daily fetch into several
smaller hourly fetches.
The task is called "prefetch" because it is work done in advance
of a foreground fetch to make that 'git fetch' command much faster.
However, if we simply ran 'git fetch <remote>' in the background,
then the user running a foreground 'git fetch <remote>' would lose
some important feedback when a new branch appears or an existing
branch updates. This is especially true if a remote branch is
force-updated and this isn't noticed by the user because it occurred
in the background. Further, the functionality of 'git push
--force-with-lease' becomes suspect.
When running 'git fetch <remote> <options>' in the background, use
the following options for careful updating:
1. --no-tags prevents the background fetch from downloading new tags,
so the user still sees new tags appear in their foreground fetches.
2. --refmap= removes the configured refspec which usually updates
refs/remotes/<remote>/* with the refs advertised by the remote.
While this looks confusing, this was documented and tested by
b40a50264ac (fetch: document and test --refmap="", 2020-01-21),
including this sentence in the documentation:
Providing an empty `<refspec>` to the `--refmap` option
causes Git to ignore the configured refspecs and rely
entirely on the refspecs supplied as command-line arguments.
3. By adding a new refspec "+refs/heads/*:refs/prefetch/<remote>/*"
we can ensure that we actually load the new values somewhere in
our refspace while not updating refs/heads or refs/remotes. By
storing these refs here, the commit-graph job will update the
commit-graph with the commits from these hidden refs.
4. --prune will delete the refs/prefetch/<remote> refs that no
longer appear on the remote.
5. --no-write-fetch-head prevents updating FETCH_HEAD.
We've been using this step as a critical background job in Scalar
[1] (and VFS for Git). This solved a pain point that was showing up
in user reports: fetching was a pain! Users do not like waiting to
download the data that was created while they were away from their
machines. After implementing background fetch, the foreground fetch
commands sped up significantly because they mostly just update refs
and download a small amount of new data. The effect is especially
dramatic when paired with --no-show-forced-updates (through
fetch.showForcedUpdates=false).
[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/FetchStep.cs
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 15:33:31 +03:00
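For a remote named origin (a hypothetical example), the child process
assembled by fetch_remote() below corresponds roughly to this argument
vector:

/* Illustration only; fetch_remote() builds this with strvec calls. */
static const char *prefetch_argv[] = {
	"git", "fetch", "origin",
	"--prune",			/* drop stale refs/prefetch/origin/* */
	"--no-tags",			/* leave tag news to foreground fetches */
	"--no-write-fetch-head",	/* do not touch FETCH_HEAD */
	"--recurse-submodules=no",
	"--refmap=",			/* ignore configured refspecs */
	"+refs/heads/*:refs/prefetch/origin/*",
	NULL
};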
|
|
|
static int fetch_remote(const char *remote, struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct child_process child = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
child.git_cmd = 1;
|
|
|
|
strvec_pushl(&child.args, "fetch", remote, "--prune", "--no-tags",
|
|
|
|
"--no-write-fetch-head", "--recurse-submodules=no",
|
|
|
|
"--refmap=", NULL);
|
|
|
|
|
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&child.args, "--quiet");
|
|
|
|
|
|
|
|
strvec_pushf(&child.args, "+refs/heads/*:refs/prefetch/%s/*", remote);
|
|
|
|
|
|
|
|
return !!run_command(&child);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int append_remote(struct remote *remote, void *cbdata)
|
|
|
|
{
|
|
|
|
struct string_list *remotes = (struct string_list *)cbdata;
|
|
|
|
|
|
|
|
string_list_append(remotes, remote->name);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int maintenance_task_prefetch(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
int result = 0;
|
|
|
|
struct string_list_item *item;
|
|
|
|
struct string_list remotes = STRING_LIST_INIT_DUP;
|
|
|
|
|
|
|
|
if (for_each_remote(append_remote, &remotes)) {
|
|
|
|
error(_("failed to fill remotes"));
|
|
|
|
result = 1;
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
|
|
|
|
for_each_string_list_item(item, &remotes)
|
|
|
|
result |= fetch_remote(item->string, opts);
|
|
|
|
|
|
|
|
cleanup:
|
|
|
|
string_list_clear(&remotes, 0);
|
|
|
|
return result;
|
|
|
|
}
|
|
|
|
|
maintenance: create basic maintenance runner
2020-09-17 21:11:42 +03:00
|
|
|
static int maintenance_task_gc(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct child_process child = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
child.git_cmd = 1;
|
|
|
|
strvec_push(&child.args, "gc");
|
|
|
|
|
|
|
|
if (opts->auto_flag)
|
|
|
|
strvec_push(&child.args, "--auto");
|
2020-09-17 21:11:43 +03:00
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&child.args, "--quiet");
|
|
|
|
else
|
|
|
|
strvec_push(&child.args, "--no-quiet");
|
maintenance: create basic maintenance runner
2020-09-17 21:11:42 +03:00
|
|
|
|
|
|
|
close_object_store(the_repository->objects);
|
|
|
|
return run_command(&child);
|
|
|
|
}
|
|
|
|
|
2020-09-25 15:33:32 +03:00
|
|
|
static int prune_packed(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct child_process child = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
child.git_cmd = 1;
|
|
|
|
strvec_push(&child.args, "prune-packed");
|
|
|
|
|
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&child.args, "--quiet");
|
|
|
|
|
|
|
|
return !!run_command(&child);
|
|
|
|
}
|
|
|
|
|
|
|
|
struct write_loose_object_data {
|
|
|
|
FILE *in;
|
|
|
|
int count;
|
|
|
|
int batch_size;
|
|
|
|
};
|
|
|
|
|
2020-09-25 15:33:33 +03:00
|
|
|
static int loose_object_auto_limit = 100;
|
|
|
|
|
|
|
|
static int loose_object_count(const struct object_id *oid,
|
|
|
|
const char *path,
|
|
|
|
void *data)
|
|
|
|
{
|
|
|
|
int *count = (int*)data;
|
|
|
|
if (++(*count) >= loose_object_auto_limit)
|
|
|
|
return 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int loose_object_auto_condition(void)
|
|
|
|
{
|
|
|
|
int count = 0;
|
|
|
|
|
|
|
|
git_config_get_int("maintenance.loose-objects.auto",
|
|
|
|
&loose_object_auto_limit);
|
|
|
|
|
|
|
|
if (!loose_object_auto_limit)
|
|
|
|
return 0;
|
|
|
|
if (loose_object_auto_limit < 0)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return for_each_loose_file_in_objdir(the_repository->objects->odb->path,
|
|
|
|
loose_object_count,
|
|
|
|
NULL, NULL, &count);
|
|
|
|
}
|
|
|
|
|
2020-09-25 15:33:32 +03:00
|
|
|
static int bail_on_loose(const struct object_id *oid,
|
|
|
|
const char *path,
|
|
|
|
void *data)
|
|
|
|
{
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int write_loose_object_to_stdin(const struct object_id *oid,
|
|
|
|
const char *path,
|
|
|
|
void *data)
|
|
|
|
{
|
|
|
|
struct write_loose_object_data *d = (struct write_loose_object_data *)data;
|
|
|
|
|
|
|
|
fprintf(d->in, "%s\n", oid_to_hex(oid));
|
|
|
|
|
|
|
|
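	/* Returning nonzero stops the iteration once batch_size is exceeded. */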
return ++(d->count) > d->batch_size;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int pack_loose(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct repository *r = the_repository;
|
|
|
|
int result = 0;
|
|
|
|
struct write_loose_object_data data;
|
|
|
|
struct child_process pack_proc = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Do not start pack-objects process
|
|
|
|
* if there are no loose objects.
|
|
|
|
*/
|
|
|
|
if (!for_each_loose_file_in_objdir(r->objects->odb->path,
|
|
|
|
bail_on_loose,
|
|
|
|
NULL, NULL, NULL))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
pack_proc.git_cmd = 1;
|
|
|
|
|
|
|
|
strvec_push(&pack_proc.args, "pack-objects");
|
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&pack_proc.args, "--quiet");
|
|
|
|
strvec_pushf(&pack_proc.args, "%s/pack/loose", r->objects->odb->path);
|
|
|
|
|
|
|
|
pack_proc.in = -1;
|
|
|
|
|
|
|
|
if (start_command(&pack_proc)) {
|
|
|
|
error(_("failed to start 'git pack-objects' process"));
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
data.in = xfdopen(pack_proc.in, "w");
|
|
|
|
data.count = 0;
|
|
|
|
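	/* Hard-coded batch: feed at most ~50,000 loose objects per run. */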
data.batch_size = 50000;
|
|
|
|
|
|
|
|
for_each_loose_file_in_objdir(r->objects->odb->path,
|
|
|
|
write_loose_object_to_stdin,
|
|
|
|
NULL,
|
|
|
|
NULL,
|
|
|
|
&data);
|
|
|
|
|
|
|
|
fclose(data.in);
|
|
|
|
|
|
|
|
if (finish_command(&pack_proc)) {
|
|
|
|
error(_("failed to finish 'git pack-objects' process"));
|
|
|
|
result = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
return result;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int maintenance_task_loose_objects(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
return prune_packed(opts) || pack_loose(opts);
|
|
|
|
}
|
|
|
|
|
2020-09-25 15:33:38 +03:00
|
|
|
static int incremental_repack_auto_condition(void)
|
|
|
|
{
|
|
|
|
struct packed_git *p;
|
|
|
|
int enabled;
|
|
|
|
int incremental_repack_auto_limit = 10;
|
|
|
|
int count = 0;
|
|
|
|
|
|
|
|
if (git_config_get_bool("core.multiPackIndex", &enabled) ||
|
|
|
|
!enabled)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
git_config_get_int("maintenance.incremental-repack.auto",
|
|
|
|
&incremental_repack_auto_limit);
|
|
|
|
|
|
|
|
if (!incremental_repack_auto_limit)
|
|
|
|
return 0;
|
|
|
|
if (incremental_repack_auto_limit < 0)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
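	/* Count packs that the multi-pack-index does not yet cover. */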
for (p = get_packed_git(the_repository);
|
|
|
|
count < incremental_repack_auto_limit && p;
|
|
|
|
p = p->next) {
|
|
|
|
if (!p->multi_pack_index)
|
|
|
|
count++;
|
|
|
|
}
|
|
|
|
|
|
|
|
return count >= incremental_repack_auto_limit;
|
|
|
|
}
|
|
|
|
|
maintenance: add incremental-repack task
The previous change cleaned up loose objects using the
'loose-objects' task, which can be run safely in the background. Add a
similar job that performs similar cleanups for pack-files.
One issue with running 'git repack' is that it is designed to
repack all pack-files into a single pack-file. While this is the
most space-efficient way to store object data, it is not time or
memory efficient. This becomes extremely important if the repo is
so large that a user struggles to store two copies of the pack on
their disk.
Instead, perform an "incremental" repack by collecting a few small
pack-files into a new pack-file. The multi-pack-index facilitates
this process ever since 'git multi-pack-index expire' was added in
19575c7 (multi-pack-index: implement 'expire' subcommand,
2019-06-10) and 'git multi-pack-index repack' was added in ce1e4a1
(midx: implement midx_repack(), 2019-06-10).
The 'incremental-repack' task runs the following steps:
1. 'git multi-pack-index write' creates a multi-pack-index file if
one did not exist, and otherwise will update the multi-pack-index
with any new pack-files that appeared since the last write. This
is particularly relevant with the background fetch job.
When the multi-pack-index sees two copies of the same object, it
stores the offset data into the newer pack-file. This means that
some old pack-files could become "unreferenced" which I will use
to mean "a pack-file that is in the pack-file list of the
multi-pack-index but none of the objects in the multi-pack-index
reference a location inside that pack-file."
2. 'git multi-pack-index expire' deletes any unreferenced pack-files
and updates the multi-pack-index to drop those pack-files from the
list. This is safe to do as concurrent Git processes will see the
multi-pack-index and not open those packs when looking for object
contents. (Similar to the 'loose-objects' job, there are some Git
commands that open pack-files regardless of the multi-pack-index,
but they are rarely used. Further, a user that self-selects to
use background operations would likely refrain from using those
commands.)
3. 'git multi-pack-index repack --batch-size=<size>' collects a set
of pack-files that are listed in the multi-pack-index and creates
a new pack-file containing the objects whose offsets are listed
by the multi-pack-index to be in those objects. The set of pack-
files is selected greedily by sorting the pack-files by modified
time and adding a pack-file to the set if its "expected size" is
smaller than the batch size until the total expected size of the
selected pack-files is at least the batch size. The "expected
size" is calculated by taking the size of the pack-file divided
by the number of objects in the pack-file and multiplied by the
number of objects from the multi-pack-index with offset in that
pack-file. The expected size approximates how much data from that
pack-file will contribute to the resulting pack-file size. The
intention is that the resulting pack-file will be close in size
to the provided batch size.
The next run of the incremental-repack task will delete these
repacked pack-files during the 'expire' step.
In this version, the batch size is set to "0" which ignores the
size restrictions when selecting the pack-files. It instead
selects all pack-files and repacks all packed objects into a
single pack-file. This will be updated in the next change, but
it requires doing some calculations that are better isolated to
a separate change.
These steps are based on a similar background maintenance step in
Scalar (and VFS for Git) [1]. This was incredibly effective for
users of the Windows OS repository. After using the same VFS for Git
repository for over a year, some users had _thousands_ of pack-files
that combined to up to 250 GB of data. We noticed a few users were
running into the open file descriptor limits (due in part to a bug
in the multi-pack-index fixed by af96fe3 (midx: add packs to
packed_git linked list, 2019-04-29)).
These pack-files were mostly small since they contained the commits
and trees that were pushed to the origin in a given hour. The GVFS
protocol includes a "prefetch" step that asks for pre-computed pack-
files containing commits and trees by timestamp. These pack-files
were grouped into "daily" pack-files once a day for up to 30 days.
If a user did not request prefetch packs for over 30 days, then they
would get the entire history of commits and trees in a new, large
pack-file. This led to a large number of pack-files that had poor
delta compression.
By running this pack-file maintenance step once per day, these repos
with thousands of packs spanning 200+ GB dropped to dozens of pack-
files spanning 30-50 GB. This was done all without removing objects
from the system and using a constant batch size of two gigabytes.
Once the work was done to reduce the pack-files to small sizes, the
batch size of two gigabytes means that not every run triggers a
repack operation, so the following run will not expire a pack-file.
This has kept these repos in a "clean" state.
[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/PackfileMaintenanceStep.cs
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 15:33:36 +03:00
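The 'expected size' heuristic from step 3 above can be written out
directly. A small sketch, assuming the per-pack object counts are already
known (the function and parameter names are illustrative):

#include <stdint.h>
#include <sys/types.h>

/*
 * Scale the on-disk pack size by the fraction of its objects that
 * the multi-pack-index still references inside this pack.
 */
static off_t expected_size(off_t pack_size,
			   uint32_t objects_in_pack,
			   uint32_t midx_objects_in_pack)
{
	if (!objects_in_pack)
		return 0;
	return (off_t)(pack_size / objects_in_pack) * midx_objects_in_pack;
}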
|
|
|
static int multi_pack_index_write(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct child_process child = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
child.git_cmd = 1;
|
|
|
|
strvec_pushl(&child.args, "multi-pack-index", "write", NULL);
|
|
|
|
|
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&child.args, "--no-progress");
|
|
|
|
|
|
|
|
if (run_command(&child))
|
|
|
|
return error(_("failed to write multi-pack-index"));
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int multi_pack_index_expire(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct child_process child = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
child.git_cmd = 1;
|
|
|
|
strvec_pushl(&child.args, "multi-pack-index", "expire", NULL);
|
|
|
|
|
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&child.args, "--no-progress");
|
|
|
|
|
|
|
|
close_object_store(the_repository->objects);
|
|
|
|
|
|
|
|
if (run_command(&child))
|
|
|
|
return error(_("'git multi-pack-index expire' failed"));
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
maintenance: auto-size incremental-repack batch
When repacking during the 'incremental-repack' task, we use the
--batch-size option in 'git multi-pack-index repack'. The initial setting
used --batch-size=0 to repack everything into a single pack-file. This is
not sustainable for a large repository. The amount of work required is
also likely to use too many system resources for a background job.
Update the 'incremental-repack' task by dynamically computing a
--batch-size option based on the current pack-file structure.
The dynamic default size is computed with this idea in mind for a client
repository that was cloned from a very large remote: there is likely one
"big" pack-file that was created at clone time. Thus, do not try
repacking it as it is likely packed efficiently by the server.
Instead, we select the second-largest pack-file, and create a batch size
that is one larger than that pack-file. If there are three or more
pack-files, then this guarantees that at least two will be combined into
a new pack-file.
Of course, this means that the second-largest pack-file size is likely
to grow over time and may eventually surpass the initially-cloned
pack-file. Recall that the pack-file batch is selected in a greedy
manner: the packs are considered from oldest to newest and are selected
if they have size smaller than the batch size until the total selected
size is larger than the batch size. Thus, that oldest "clone" pack will
be first to repack after the new data creates a pack larger than that.
We also want to place some limits on how large these pack-files become,
in order to bound the amount of time spent repacking. A maximum
batch-size of two gigabytes means that large repositories will never be
packed into a single pack-file using this job, but also that repack is
rather expensive. This is a trade-off that is valuable to have if the
maintenance is being run automatically or in the background. Users who
truly want to optimize for space and performance (and are willing to pay
the upfront cost of a full repack) can use the 'gc' task to do so.
Create a test for this two gigabyte limit by creating an EXPENSIVE test
that generates two pack-files of roughly 2.5 gigabytes in size, then
performs an incremental repack. Check that the --batch-size argument in
the subcommand uses the hard-coded maximum.
Helped-by: Chris Torek <chris.torek@gmail.com>
Reported-by: Son Luong Ngoc <sluongng@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 15:33:37 +03:00
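A worked instance of the strategy above, with illustrative sizes: given an
8 GiB clone pack plus packs of 500 MiB and 200 MiB, the second-largest size
is 500 MiB, so the batch size becomes 500 MiB + 1 byte. The clone pack is
larger than the batch size and is skipped, while the two small packs are
combined into a new pack. Only if the computed value exceeded the
two-gigabyte cap would it be clamped, as get_auto_pack_size() below
implements.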
|
|
|
#define TWO_GIGABYTES (INT32_MAX)
|
|
|
|
|
|
|
|
static off_t get_auto_pack_size(void)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* The "auto" value is special: we optimize for
|
|
|
|
* one large pack-file (i.e. from a clone) and
|
|
|
|
* expect the rest to be small and they can be
|
|
|
|
* repacked quickly.
|
|
|
|
*
|
|
|
|
* The strategy we select here is to select a
|
|
|
|
* size that is one more than the second largest
|
|
|
|
* pack-file. This ensures that we will repack
|
|
|
|
* at least two packs if there are three or more
|
|
|
|
* packs.
|
|
|
|
*/
|
|
|
|
off_t max_size = 0;
|
|
|
|
off_t second_largest_size = 0;
|
|
|
|
off_t result_size;
|
|
|
|
struct packed_git *p;
|
|
|
|
struct repository *r = the_repository;
|
|
|
|
|
|
|
|
reprepare_packed_git(r);
|
|
|
|
for (p = get_all_packs(r); p; p = p->next) {
|
|
|
|
if (p->pack_size > max_size) {
|
|
|
|
second_largest_size = max_size;
|
|
|
|
max_size = p->pack_size;
|
|
|
|
} else if (p->pack_size > second_largest_size)
|
|
|
|
second_largest_size = p->pack_size;
|
|
|
|
}
|
|
|
|
|
|
|
|
result_size = second_largest_size + 1;
|
|
|
|
|
|
|
|
/* But limit ourselves to a batch size of 2g */
|
|
|
|
if (result_size > TWO_GIGABYTES)
|
|
|
|
result_size = TWO_GIGABYTES;
|
|
|
|
|
|
|
|
return result_size;
|
|
|
|
}
|
|
|
|
|
maintenance: add incremental-repack task
2020-09-25 15:33:36 +03:00
|
|
|
static int multi_pack_index_repack(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
struct child_process child = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
child.git_cmd = 1;
|
|
|
|
strvec_pushl(&child.args, "multi-pack-index", "repack", NULL);
|
|
|
|
|
|
|
|
if (opts->quiet)
|
|
|
|
strvec_push(&child.args, "--no-progress");
|
|
|
|
|
maintenance: auto-size incremental-repack batch
2020-09-25 15:33:37 +03:00
|
|
|
strvec_pushf(&child.args, "--batch-size=%"PRIuMAX,
|
|
|
|
(uintmax_t)get_auto_pack_size());
|
maintenance: add incremental-repack task
2020-09-25 15:33:36 +03:00
|
|
|
|
|
|
|
close_object_store(the_repository->objects);
|
|
|
|
|
|
|
|
if (run_command(&child))
|
|
|
|
return error(_("'git multi-pack-index repack' failed"));
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int maintenance_task_incremental_repack(struct maintenance_run_opts *opts)
|
|
|
|
{
|
|
|
|
prepare_repo_settings(the_repository);
|
|
|
|
if (!the_repository->settings.core_multi_pack_index) {
|
|
|
|
warning(_("skipping incremental-repack task because core.multiPackIndex is disabled"));
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (multi_pack_index_write(opts))
|
|
|
|
return 1;
|
|
|
|
if (multi_pack_index_expire(opts))
|
|
|
|
return 1;
|
|
|
|
if (multi_pack_index_repack(opts))
|
|
|
|
return 1;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-09-17 21:11:45 +03:00
|
|
|
typedef int maintenance_task_fn(struct maintenance_run_opts *opts);
|
|
|
|
|
2020-09-17 21:11:50 +03:00
|
|
|
/*
|
|
|
|
* An auto condition function returns 1 if the task should run
|
|
|
|
* and 0 if the task should NOT run. See needs_to_gc() for an
|
|
|
|
* example.
|
|
|
|
*/
|
|
|
|
typedef int maintenance_auto_fn(void);
|
|
|
|
|
2020-09-17 21:11:45 +03:00
|
|
|
struct maintenance_task {
|
|
|
|
const char *name;
|
|
|
|
maintenance_task_fn *fn;
|
2020-09-17 21:11:50 +03:00
|
|
|
maintenance_auto_fn *auto_condition;
|
2020-09-17 21:11:45 +03:00
|
|
|
unsigned enabled:1;
|
2020-09-17 21:11:47 +03:00
|
|
|
|
2020-09-11 20:49:15 +03:00
|
|
|
enum schedule_priority schedule;
|
|
|
|
|
2020-09-17 21:11:47 +03:00
|
|
|
/* -1 if not selected. */
|
|
|
|
int selected_order;
|
2020-09-17 21:11:45 +03:00
|
|
|
};
|
|
|
|
|
|
|
|
enum maintenance_task_label {
|
maintenance: add prefetch task
2020-09-25 15:33:31 +03:00
|
|
|
TASK_PREFETCH,
|
2020-09-25 15:33:32 +03:00
|
|
|
TASK_LOOSE_OBJECTS,
|
maintenance: add incremental-repack task
2020-09-25 15:33:36 +03:00
|
|
|
TASK_INCREMENTAL_REPACK,
|
2020-09-17 21:11:45 +03:00
|
|
|
TASK_GC,
|
2020-09-17 21:11:46 +03:00
|
|
|
TASK_COMMIT_GRAPH,
|
2020-09-17 21:11:45 +03:00
|
|
|
|
|
|
|
/* Leave as final value */
|
|
|
|
TASK__COUNT
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct maintenance_task tasks[] = {
|
2020-09-25 15:33:31 +03:00
|
|
|
[TASK_PREFETCH] = {
|
|
|
|
"prefetch",
|
|
|
|
maintenance_task_prefetch,
|
|
|
|
},
|
2020-09-25 15:33:32 +03:00
|
|
|
[TASK_LOOSE_OBJECTS] = {
|
|
|
|
"loose-objects",
|
|
|
|
maintenance_task_loose_objects,
|
2020-09-25 15:33:33 +03:00
|
|
|
loose_object_auto_condition,
|
2020-09-25 15:33:32 +03:00
|
|
|
},
|
2020-09-25 15:33:36 +03:00
|
|
|
[TASK_INCREMENTAL_REPACK] = {
|
|
|
|
"incremental-repack",
|
|
|
|
maintenance_task_incremental_repack,
|
2020-09-25 15:33:38 +03:00
|
|
|
incremental_repack_auto_condition,
|
2020-09-25 15:33:36 +03:00
|
|
|
},
|
2020-09-17 21:11:45 +03:00
|
|
|
[TASK_GC] = {
|
|
|
|
"gc",
|
|
|
|
maintenance_task_gc,
|
2020-09-17 21:11:50 +03:00
|
|
|
need_to_gc,
|
2020-09-17 21:11:45 +03:00
|
|
|
1,
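/* enabled by default; the other tasks start disabled */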
|
|
|
|
},
|
2020-09-17 21:11:46 +03:00
|
|
|
[TASK_COMMIT_GRAPH] = {
|
|
|
|
"commit-graph",
|
|
|
|
maintenance_task_commit_graph,
|
2020-09-17 21:11:51 +03:00
|
|
|
should_write_commit_graph,
|
2020-09-17 21:11:46 +03:00
|
|
|
},
|
2020-09-17 21:11:45 +03:00
|
|
|
};
|
|
|
|
|
2020-09-17 21:11:47 +03:00
|
|
|
static int compare_tasks_by_selection(const void *a_, const void *b_)
|
|
|
|
{
|
2020-11-18 00:59:49 +03:00
|
|
|
const struct maintenance_task *a = a_;
|
|
|
|
const struct maintenance_task *b = b_;
|
2020-09-17 21:11:47 +03:00
|
|
|
|
|
|
|
return b->selected_order - a->selected_order;
|
|
|
|
}
|
|
|
|
|
2020-09-17 21:11:45 +03:00
|
|
|
static int maintenance_run_tasks(struct maintenance_run_opts *opts)
|
|
|
|
{
|
2020-09-17 21:11:47 +03:00
|
|
|
int i, found_selected = 0;
|
2020-09-17 21:11:45 +03:00
|
|
|
int result = 0;
|
2020-09-17 21:11:48 +03:00
|
|
|
struct lock_file lk;
|
|
|
|
struct repository *r = the_repository;
|
|
|
|
char *lock_path = xstrfmt("%s/maintenance", r->objects->odb->path);
|
|
|
|
|
|
|
|
if (hold_lock_file_for_update(&lk, lock_path, LOCK_NO_DEREF) < 0) {
|
|
|
|
/*
|
|
|
|
* Another maintenance command is running.
|
|
|
|
*
|
|
|
|
* If --auto was provided, then it is likely due to a
|
|
|
|
* recursive process stack. Do not report an error in
|
|
|
|
* that case.
|
|
|
|
*/
|
|
|
|
if (!opts->auto_flag && !opts->quiet)
|
|
|
|
warning(_("lock file '%s' exists, skipping maintenance"),
|
|
|
|
lock_path);
|
|
|
|
free(lock_path);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
free(lock_path);
|
2020-09-17 21:11:45 +03:00
|
|
|
|
2020-09-17 21:11:47 +03:00
|
|
|
for (i = 0; !found_selected && i < TASK__COUNT; i++)
|
|
|
|
found_selected = tasks[i].selected_order >= 0;
|
|
|
|
|
|
|
|
if (found_selected)
|
|
|
|
QSORT(tasks, TASK__COUNT, compare_tasks_by_selection);
|
|
|
|
|
2020-09-17 21:11:45 +03:00
|
|
|
for (i = 0; i < TASK__COUNT; i++) {
|
2020-09-17 21:11:47 +03:00
|
|
|
if (found_selected && tasks[i].selected_order < 0)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (!found_selected && !tasks[i].enabled)
|
2020-09-17 21:11:45 +03:00
|
|
|
continue;
|
|
|
|
|
2020-09-17 21:11:50 +03:00
|
|
|
if (opts->auto_flag &&
|
|
|
|
(!tasks[i].auto_condition ||
|
|
|
|
!tasks[i].auto_condition()))
|
|
|
|
continue;
|
|
|
|
|
2020-09-11 20:49:15 +03:00
|
|
|
if (opts->schedule && tasks[i].schedule < opts->schedule)
|
|
|
|
continue;
|
|
|
|
|
2020-09-17 21:11:52 +03:00
|
|
|
trace2_region_enter("maintenance", tasks[i].name, r);
|
2020-09-17 21:11:45 +03:00
|
|
|
if (tasks[i].fn(opts)) {
|
|
|
|
error(_("task '%s' failed"), tasks[i].name);
|
|
|
|
result = 1;
|
|
|
|
}
|
2020-09-17 21:11:52 +03:00
|
|
|
trace2_region_leave("maintenance", tasks[i].name, r);
|
2020-09-17 21:11:45 +03:00
|
|
|
}
|
|
|
|
|
2020-09-17 21:11:48 +03:00
|
|
|
rollback_lock_file(&lk);
|
2020-09-17 21:11:45 +03:00
|
|
|
return result;
|
|
|
|
}
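Note that hold_lock_file_for_update() appends a ".lock" suffix, so the
file guarding concurrent runs is "<objects>/maintenance.lock";
rollback_lock_file() at the end simply deletes it, since nothing is
ever committed to the non-lock path.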
|
|
|
|
|
2020-10-15 20:22:02 +03:00
|
|
|
static void initialize_maintenance_strategy(void)
|
|
|
|
{
|
|
|
|
char *config_str;
|
|
|
|
|
|
|
|
if (git_config_get_string("maintenance.strategy", &config_str))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (!strcasecmp(config_str, "incremental")) {
|
|
|
|
tasks[TASK_GC].schedule = SCHEDULE_NONE;
|
|
|
|
tasks[TASK_COMMIT_GRAPH].enabled = 1;
|
|
|
|
tasks[TASK_COMMIT_GRAPH].schedule = SCHEDULE_HOURLY;
|
|
|
|
tasks[TASK_PREFETCH].enabled = 1;
|
|
|
|
tasks[TASK_PREFETCH].schedule = SCHEDULE_HOURLY;
|
|
|
|
tasks[TASK_INCREMENTAL_REPACK].enabled = 1;
|
|
|
|
tasks[TASK_INCREMENTAL_REPACK].schedule = SCHEDULE_DAILY;
|
|
|
|
tasks[TASK_LOOSE_OBJECTS].enabled = 1;
|
|
|
|
tasks[TASK_LOOSE_OBJECTS].schedule = SCHEDULE_DAILY;
|
|
|
|
}
|
|
|
|
}
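For reference, the "incremental" strategy above amounts to the
following per-task defaults, expressed with the
maintenance.<task>.enabled and maintenance.<task>.schedule keys that
initialize_task_config() reads below (a sketch; explicit per-task
settings still override these):

	[maintenance]
		strategy = incremental

	# behaves, for scheduled runs, like:
	[maintenance "commit-graph"]
		enabled = true
		schedule = hourly
	[maintenance "prefetch"]
		enabled = true
		schedule = hourly
	[maintenance "incremental-repack"]
		enabled = true
		schedule = daily
	[maintenance "loose-objects"]
		enabled = true
		schedule = daily

with the 'gc' task additionally dropped from scheduled runs.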
|
|
|
|
|
|
|
|
static void initialize_task_config(int schedule)
|
2020-09-17 21:11:49 +03:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
struct strbuf config_name = STRBUF_INIT;
|
2020-09-17 21:11:50 +03:00
|
|
|
gc_config();
|
|
|
|
|
2020-10-15 20:22:02 +03:00
|
|
|
if (schedule)
|
|
|
|
initialize_maintenance_strategy();
|
|
|
|
|
2020-09-17 21:11:49 +03:00
|
|
|
for (i = 0; i < TASK__COUNT; i++) {
|
|
|
|
int config_value;
|
2020-09-11 20:49:15 +03:00
|
|
|
char *config_str;
|
2020-09-17 21:11:49 +03:00
|
|
|
|
2020-09-11 20:49:15 +03:00
|
|
|
strbuf_reset(&config_name);
|
2020-09-17 21:11:49 +03:00
|
|
|
strbuf_addf(&config_name, "maintenance.%s.enabled",
|
|
|
|
tasks[i].name);
|
|
|
|
|
|
|
|
if (!git_config_get_bool(config_name.buf, &config_value))
|
|
|
|
tasks[i].enabled = config_value;
|
2020-09-11 20:49:15 +03:00
|
|
|
|
|
|
|
strbuf_reset(&config_name);
|
|
|
|
strbuf_addf(&config_name, "maintenance.%s.schedule",
|
|
|
|
tasks[i].name);
|
|
|
|
|
|
|
|
if (!git_config_get_string(config_name.buf, &config_str)) {
|
|
|
|
tasks[i].schedule = parse_schedule(config_str);
|
|
|
|
free(config_str);
|
|
|
|
}
|
2020-09-17 21:11:49 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
strbuf_release(&config_name);
|
|
|
|
}
|
|
|
|
|
2020-09-17 21:11:47 +03:00
|
|
|
static int task_option_parse(const struct option *opt,
|
|
|
|
const char *arg, int unset)
|
|
|
|
{
|
|
|
|
int i, num_selected = 0;
|
|
|
|
struct maintenance_task *task = NULL;
|
|
|
|
|
|
|
|
BUG_ON_OPT_NEG(unset);
|
|
|
|
|
|
|
|
for (i = 0; i < TASK__COUNT; i++) {
|
|
|
|
if (tasks[i].selected_order >= 0)
|
|
|
|
num_selected++;
|
|
|
|
if (!strcasecmp(tasks[i].name, arg)) {
|
|
|
|
task = &tasks[i];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!task) {
|
|
|
|
error(_("'%s' is not a valid task"), arg);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (task->selected_order >= 0) {
|
|
|
|
error(_("task '%s' cannot be selected multiple times"), arg);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
task->selected_order = num_selected + 1;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
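The net effect is that selected_order distinguishes tasks named via
--task (positive, in order of appearance on the command line) from
unselected ones (which stay at -1). For example:

	git maintenance run --task=commit-graph --task=loose-objects

selects exactly those two tasks, while naming the same task twice
fails with an error.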
|
|
|
|
|
maintenance: create basic maintenance runner
The 'gc' builtin is our current entrypoint for automatically maintaining
a repository. This one tool does many operations, such as repacking the
repository, packing refs, and rewriting the commit-graph file. The name
implies it performs "garbage collection" which means several different
things, and some users may not want to use this operation that rewrites
the entire object database.
Create a new 'maintenance' builtin that will become a more general-
purpose command. To start, it will only support the 'run' subcommand,
but will later expand to add subcommands for scheduling maintenance in
the background.
For now, the 'maintenance' builtin is a thin shim over the 'gc' builtin.
In fact, the only option is the '--auto' toggle, which is handed
directly to the 'gc' builtin. The current change is isolated to this
simple operation to prevent more interesting logic from being lost in
all of the boilerplate of adding a new builtin.
Use existing builtin/gc.c file because we want to share code between the
two builtins. It is possible that we will have 'maintenance' replace the
'gc' builtin entirely at some point, leaving 'git gc' as an alias for
some specific arguments to 'git maintenance run'.
Create a new test_subcommand helper that allows us to test if a certain
subcommand was run. It requires storing the GIT_TRACE2_EVENT logs in a
file. A negation mode is available that will be used in later tests.
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 21:11:42 +03:00
|
|
|
static int maintenance_run(int argc, const char **argv, const char *prefix)
|
|
|
|
{
|
2020-09-17 21:11:47 +03:00
|
|
|
int i;
|
2020-09-17 21:11:42 +03:00
|
|
|
struct maintenance_run_opts opts;
|
|
|
|
struct option builtin_maintenance_run_options[] = {
|
|
|
|
OPT_BOOL(0, "auto", &opts.auto_flag,
|
|
|
|
N_("run tasks based on the state of the repository")),
|
2020-09-11 20:49:15 +03:00
|
|
|
OPT_CALLBACK(0, "schedule", &opts.schedule, N_("frequency"),
|
|
|
|
N_("run tasks based on frequency"),
|
|
|
|
maintenance_opt_schedule),
|
2020-09-17 21:11:43 +03:00
|
|
|
OPT_BOOL(0, "quiet", &opts.quiet,
|
|
|
|
N_("do not report progress or other information over stderr")),
|
2020-09-17 21:11:47 +03:00
|
|
|
OPT_CALLBACK_F(0, "task", NULL, N_("task"),
|
|
|
|
N_("run a specific task"),
|
|
|
|
PARSE_OPT_NONEG, task_option_parse),
|
2020-09-17 21:11:42 +03:00
|
|
|
OPT_END()
|
|
|
|
};
|
|
|
|
memset(&opts, 0, sizeof(opts));
|
|
|
|
|
2020-09-17 21:11:43 +03:00
|
|
|
opts.quiet = !isatty(2);
|
|
|
|
|
2020-09-17 21:11:47 +03:00
|
|
|
for (i = 0; i < TASK__COUNT; i++)
|
|
|
|
tasks[i].selected_order = -1;
|
|
|
|
|
2020-09-17 21:11:42 +03:00
|
|
|
argc = parse_options(argc, argv, prefix,
|
|
|
|
builtin_maintenance_run_options,
|
|
|
|
builtin_maintenance_run_usage,
|
|
|
|
PARSE_OPT_STOP_AT_NON_OPTION);
|
|
|
|
|
2020-09-11 20:49:15 +03:00
|
|
|
if (opts.auto_flag && opts.schedule)
|
|
|
|
die(_("use at most one of --auto and --schedule=<frequency>"));
|
|
|
|
|
2020-10-15 20:22:02 +03:00
|
|
|
initialize_task_config(opts.schedule);
|
|
|
|
|
2020-09-17 21:11:42 +03:00
|
|
|
if (argc != 0)
|
|
|
|
usage_with_options(builtin_maintenance_run_usage,
|
|
|
|
builtin_maintenance_run_options);
|
2020-09-17 21:11:45 +03:00
|
|
|
return maintenance_run_tasks(&opts);
|
2020-09-17 21:11:42 +03:00
|
|
|
}
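Assuming the schedule priorities order as weekly < daily < hourly
(consistent with the cron layout later in this file), a run like
'git maintenance run --schedule=weekly' also executes the daily and
hourly tasks, while '--schedule=hourly' runs only the hourly ones;
this is what lets the scheduled jobs below avoid overlapping time
slots.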
|
|
|
|
|
2020-09-11 20:49:17 +03:00
|
|
|
static int maintenance_register(void)
|
|
|
|
{
|
2020-10-15 20:22:03 +03:00
|
|
|
char *config_value;
|
2020-09-11 20:49:17 +03:00
|
|
|
struct child_process config_set = CHILD_PROCESS_INIT;
|
|
|
|
struct child_process config_get = CHILD_PROCESS_INIT;
|
|
|
|
|
2020-10-15 20:22:03 +03:00
|
|
|
/* Disable foreground maintenance */
|
|
|
|
git_config_set("maintenance.auto", "false");
|
|
|
|
|
|
|
|
/* Set maintenance strategy, if unset */
|
|
|
|
if (!git_config_get_string("maintenance.strategy", &config_value))
|
|
|
|
free(config_value);
|
|
|
|
else
|
|
|
|
git_config_set("maintenance.strategy", "incremental");
|
|
|
|
|
2020-09-11 20:49:17 +03:00
|
|
|
config_get.git_cmd = 1;
|
2020-11-26 01:12:56 +03:00
|
|
|
strvec_pushl(&config_get.args, "config", "--global", "--get",
|
|
|
|
"--fixed-value", "maintenance.repo",
|
2020-09-11 20:49:17 +03:00
|
|
|
the_repository->worktree ? the_repository->worktree
|
|
|
|
: the_repository->gitdir,
|
|
|
|
NULL);
|
|
|
|
config_get.out = -1;
|
|
|
|
|
|
|
|
if (start_command(&config_get))
|
|
|
|
return error(_("failed to run 'git config'"));
|
|
|
|
|
|
|
|
/* We already have this value in our config! */
|
|
|
|
if (!finish_command(&config_get))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
config_set.git_cmd = 1;
|
|
|
|
strvec_pushl(&config_set.args, "config", "--add", "--global", "maintenance.repo",
|
|
|
|
the_repository->worktree ? the_repository->worktree
|
|
|
|
: the_repository->gitdir,
|
|
|
|
NULL);
|
|
|
|
|
|
|
|
return run_command(&config_set);
|
|
|
|
}
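After a successful registration, the repository's local config contains
maintenance.auto=false and (if it was previously unset)
maintenance.strategy=incremental, while the user's global config
(e.g. ~/.gitconfig) gains an entry such as:

	[maintenance]
		repo = /path/to/repo

where the path shown is illustrative; the real value is the worktree
or gitdir pushed above.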
|
|
|
|
|
|
|
|
static int maintenance_unregister(void)
|
|
|
|
{
|
|
|
|
struct child_process config_unset = CHILD_PROCESS_INIT;
|
|
|
|
|
|
|
|
config_unset.git_cmd = 1;
|
|
|
|
strvec_pushl(&config_unset.args, "config", "--global", "--unset",
|
2020-11-26 01:12:56 +03:00
|
|
|
"--fixed-value", "maintenance.repo",
|
2020-09-11 20:49:17 +03:00
|
|
|
the_repository->worktree ? the_repository->worktree
|
|
|
|
: the_repository->gitdir,
|
|
|
|
NULL);
|
|
|
|
|
|
|
|
return run_command(&config_unset);
|
|
|
|
}
|
|
|
|
|
2020-09-11 20:49:18 +03:00
|
|
|
#define BEGIN_LINE "# BEGIN GIT MAINTENANCE SCHEDULE"
|
|
|
|
#define END_LINE "# END GIT MAINTENANCE SCHEDULE"
|
|
|
|
|
|
|
|
static int update_background_schedule(int run_maintenance)
|
|
|
|
{
|
|
|
|
int result = 0;
|
|
|
|
int in_old_region = 0;
|
|
|
|
struct child_process crontab_list = CHILD_PROCESS_INIT;
|
|
|
|
struct child_process crontab_edit = CHILD_PROCESS_INIT;
|
|
|
|
FILE *cron_list, *cron_in;
|
|
|
|
const char *crontab_name;
|
|
|
|
struct strbuf line = STRBUF_INIT;
|
|
|
|
struct lock_file lk;
|
|
|
|
char *lock_path = xstrfmt("%s/schedule", the_repository->objects->odb->path);
|
|
|
|
|
|
|
|
if (hold_lock_file_for_update(&lk, lock_path, LOCK_NO_DEREF) < 0)
|
|
|
|
return error(_("another process is scheduling background maintenance"));
|
|
|
|
|
|
|
|
crontab_name = getenv("GIT_TEST_CRONTAB");
|
|
|
|
if (!crontab_name)
|
|
|
|
crontab_name = "crontab";
|
|
|
|
|
|
|
|
strvec_split(&crontab_list.args, crontab_name);
|
|
|
|
strvec_push(&crontab_list.args, "-l");
|
|
|
|
crontab_list.in = -1;
|
|
|
|
crontab_list.out = dup(lk.tempfile->fd);
|
|
|
|
crontab_list.git_cmd = 0;
|
|
|
|
|
|
|
|
if (start_command(&crontab_list)) {
|
|
|
|
result = error(_("failed to run 'crontab -l'; your system might not support 'cron'"));
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Ignore exit code, as an empty crontab will return error. */
|
|
|
|
finish_command(&crontab_list);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Read from the .lock file, filtering out the old
|
|
|
|
* schedule while appending the new schedule.
|
|
|
|
*/
|
|
|
|
cron_list = fdopen(lk.tempfile->fd, "r");
|
|
|
|
rewind(cron_list);
|
|
|
|
|
|
|
|
strvec_split(&crontab_edit.args, crontab_name);
|
|
|
|
crontab_edit.in = -1;
|
|
|
|
crontab_edit.git_cmd = 0;
|
|
|
|
|
|
|
|
if (start_command(&crontab_edit)) {
|
|
|
|
result = error(_("failed to run 'crontab'; your system might not support 'cron'"));
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
|
|
|
|
cron_in = fdopen(crontab_edit.in, "w");
|
|
|
|
if (!cron_in) {
|
|
|
|
result = error(_("failed to open stdin of 'crontab'"));
|
|
|
|
goto done_editing;
|
|
|
|
}
|
|
|
|
|
|
|
|
while (!strbuf_getline_lf(&line, cron_list)) {
|
|
|
|
if (!in_old_region && !strcmp(line.buf, BEGIN_LINE))
|
|
|
|
in_old_region = 1;
|
gc: fix handling of crontab magic markers
On `git maintenance start`, we add a few entries to the user's cron
table. We wrap our entries using two magic markers, "# BEGIN GIT
MAINTENANCE SCHEDULE" and "# END GIT MAINTENANCE SCHEDULE". At a later
`git maintenance stop`, we will go through the table and remove these
lines. Or rather, we will remove the "BEGIN" marker, the "END" marker
and everything between them.
Alas, we have a bug in how we detect the "END" marker: we don't. As we
loop through all the lines of the crontab, if we are in the "old
region", i.e., the region we're aiming to remove, we make an early
`continue` and don't get as far as checking for the "END" marker. Thus,
once we've seen our "BEGIN", we remove everything until the end of the
file.
Rewrite the logic for identifying these markers. There are four cases
that are mutually exclusive: The current line starts a region or it ends
it, or it's firmly within the region, or it's outside of it (and should
be printed).
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-22 00:26:32 +03:00
|
|
|
else if (in_old_region && !strcmp(line.buf, END_LINE))
|
2020-09-11 20:49:18 +03:00
|
|
|
in_old_region = 0;
|
2020-12-22 00:26:32 +03:00
|
|
|
else if (!in_old_region)
|
|
|
|
fprintf(cron_in, "%s\n", line.buf);
|
2020-09-11 20:49:18 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (run_maintenance) {
|
|
|
|
struct strbuf line_format = STRBUF_INIT;
|
|
|
|
const char *exec_path = git_exec_path();
|
|
|
|
|
|
|
|
fprintf(cron_in, "%s\n", BEGIN_LINE);
|
|
|
|
fprintf(cron_in,
|
|
|
|
"# The following schedule was created by Git\n");
|
|
|
|
fprintf(cron_in, "# Any edits made in this region might be\n");
|
|
|
|
fprintf(cron_in,
|
|
|
|
"# replaced in the future by a Git command.\n\n");
|
|
|
|
|
|
|
|
strbuf_addf(&line_format,
|
|
|
|
"%%s %%s * * %%s \"%s/git\" --exec-path=\"%s\" for-each-repo --config=maintenance.repo maintenance run --schedule=%%s\n",
|
|
|
|
exec_path, exec_path);
|
|
|
|
fprintf(cron_in, line_format.buf, "0", "1-23", "*", "hourly");
|
|
|
|
fprintf(cron_in, line_format.buf, "0", "0", "1-6", "daily");
|
|
|
|
fprintf(cron_in, line_format.buf, "0", "0", "0", "weekly");
|
|
|
|
strbuf_release(&line_format);
|
|
|
|
|
|
|
|
fprintf(cron_in, "\n%s\n", END_LINE);
|
|
|
|
}
|
|
|
|
|
|
|
|
fflush(cron_in);
|
|
|
|
fclose(cron_in);
|
|
|
|
close(crontab_edit.in);
|
|
|
|
|
|
|
|
done_editing:
|
|
|
|
if (finish_command(&crontab_edit)) {
|
|
|
|
result = error(_("'crontab' died"));
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
fclose(cron_list);
|
|
|
|
|
|
|
|
cleanup:
|
|
|
|
rollback_lock_file(&lk);
|
|
|
|
return result;
|
|
|
|
}
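With an exec path of, say, /usr/libexec/git-core (purely illustrative),
the region written between the markers comes out as:

	# BEGIN GIT MAINTENANCE SCHEDULE
	# The following schedule was created by Git
	# Any edits made in this region might be
	# replaced in the future by a Git command.

	0 1-23 * * * "/usr/libexec/git-core/git" --exec-path="/usr/libexec/git-core" for-each-repo --config=maintenance.repo maintenance run --schedule=hourly
	0 0 * * 1-6 "/usr/libexec/git-core/git" --exec-path="/usr/libexec/git-core" for-each-repo --config=maintenance.repo maintenance run --schedule=daily
	0 0 * * 0 "/usr/libexec/git-core/git" --exec-path="/usr/libexec/git-core" for-each-repo --config=maintenance.repo maintenance run --schedule=weekly

	# END GIT MAINTENANCE SCHEDULE

The hourly job skips hour 0 and the daily job skips Sunday because the
daily and weekly invocations also run the more frequent tasks.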
|
|
|
|
|
|
|
|
static int maintenance_start(void)
|
|
|
|
{
|
|
|
|
if (maintenance_register())
|
|
|
|
warning(_("failed to add repo to global config"));
|
|
|
|
|
|
|
|
return update_background_schedule(1);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int maintenance_stop(void)
|
|
|
|
{
|
|
|
|
return update_background_schedule(0);
|
|
|
|
}
|
|
|
|
|
2020-09-11 20:49:17 +03:00
|
|
|
static const char builtin_maintenance_usage[] = N_("git maintenance <subcommand> [<options>]");
|
2020-09-17 21:11:42 +03:00
|
|
|
|
|
|
|
int cmd_maintenance(int argc, const char **argv, const char *prefix)
|
|
|
|
{
|
|
|
|
if (argc < 2 ||
|
|
|
|
(argc == 2 && !strcmp(argv[1], "-h")))
|
|
|
|
usage(builtin_maintenance_usage);
|
|
|
|
|
|
|
|
if (!strcmp(argv[1], "run"))
|
|
|
|
return maintenance_run(argc - 1, argv + 1, prefix);
|
2020-09-11 20:49:18 +03:00
|
|
|
if (!strcmp(argv[1], "start"))
|
|
|
|
return maintenance_start();
|
|
|
|
if (!strcmp(argv[1], "stop"))
|
|
|
|
return maintenance_stop();
|
2020-09-11 20:49:17 +03:00
|
|
|
if (!strcmp(argv[1], "register"))
|
|
|
|
return maintenance_register();
|
|
|
|
if (!strcmp(argv[1], "unregister"))
|
|
|
|
return maintenance_unregister();
|
2020-09-17 21:11:42 +03:00
|
|
|
|
|
|
|
die(_("invalid subcommand: %s"), argv[1]);
|
|
|
|
}
|