maintenance: create basic maintenance runner
The 'gc' builtin is our current entrypoint for automatically maintaining
a repository. This one tool does many operations, such as repacking the
repository, packing refs, and rewriting the commit-graph file. The name
implies it performs "garbage collection" which means several different
things, and some users may not want to use this operation that rewrites
the entire object database.
Create a new 'maintenance' builtin that will become a more general-
purpose command. To start, it will only support the 'run' subcommand,
but will later expand to add subcommands for scheduling maintenance in
the background.
For now, the 'maintenance' builtin is a thin shim over the 'gc' builtin.
In fact, the only option is the '--auto' toggle, which is handed
directly to the 'gc' builtin. The current change is isolated to this
simple operation to prevent more interesting logic from being lost in
all of the boilerplate of adding a new builtin.
Use existing builtin/gc.c file because we want to share code between the
two builtins. It is possible that we will have 'maintenance' replace the
'gc' builtin entirely at some point, leaving 'git gc' as an alias for
some specific arguments to 'git maintenance run'.
Create a new test_subcommand helper that allows us to test if a certain
subcommand was run. It requires storing the GIT_TRACE2_EVENT logs in a
file. A negation mode is available that will be used in later tests.
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 21:11:42 +03:00
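The helper described above can be sketched roughly as follows (a simplified, hypothetical version for illustration; the real helper lives in t/test-lib-functions.sh and may differ in details). It scans a stored GIT_TRACE2_EVENT log for a child process whose argv matches the given command, with a leading "!" negating the match:

```shell
# Sketch of a test_subcommand helper. Trace2 event logs record each
# child argv as a JSON array, e.g. "argv":["git","gc","--quiet"].
test_subcommand () {
	negate=
	if test "$1" = "!"
	then
		negate=t
		shift
	fi

	# Build the JSON-array form of the expected argv.
	expr=$(printf '"%s",' "$@")
	expr="${expr%,}"

	if test -n "$negate"
	then
		! grep -e "\[$expr\]"
	else
		grep -e "\[$expr\]"
	fi
}

# Demo against a fabricated trace2 event line:
printf '{"event":"child_start","argv":["git","gc","--quiet"]}\n' >trace.txt
test_subcommand git gc --quiet <trace.txt
test_subcommand ! git repack -adk <trace.txt
```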
#!/bin/sh

test_description='git maintenance builtin'

. ./test-lib.sh

GIT_TEST_COMMIT_GRAPH=0

maintenance: add incremental-repack task
The previous change cleaned up loose objects using the
'loose-objects' task that can be run safely in the background. Add a
similar job that performs comparable cleanups for pack-files.
One issue with running 'git repack' is that it is designed to
repack all pack-files into a single pack-file. While this is the
most space-efficient way to store object data, it is not time or
memory efficient. This becomes extremely important if the repo is
so large that a user struggles to store two copies of the pack on
their disk.
Instead, perform an "incremental" repack by collecting a few small
pack-files into a new pack-file. The multi-pack-index facilitates
this process ever since 'git multi-pack-index expire' was added in
19575c7 (multi-pack-index: implement 'expire' subcommand,
2019-06-10) and 'git multi-pack-index repack' was added in ce1e4a1
(midx: implement midx_repack(), 2019-06-10).
The 'incremental-repack' task runs the following steps:
1. 'git multi-pack-index write' creates a multi-pack-index file if
one did not exist, and otherwise will update the multi-pack-index
with any new pack-files that appeared since the last write. This
is particularly relevant with the background fetch job.
When the multi-pack-index sees two copies of the same object, it
stores the offset data into the newer pack-file. This means that
some old pack-files could become "unreferenced" which I will use
to mean "a pack-file that is in the pack-file list of the
multi-pack-index but none of the objects in the multi-pack-index
reference a location inside that pack-file."
2. 'git multi-pack-index expire' deletes any unreferenced pack-files
and updates the multi-pack-index to drop those pack-files from the
list. This is safe to do as concurrent Git processes will see the
multi-pack-index and not open those packs when looking for object
contents. (Similar to the 'loose-objects' job, there are some Git
commands that open pack-files regardless of the multi-pack-index,
but they are rarely used. Further, a user that self-selects to
use background operations would likely refrain from using those
commands.)
3. 'git multi-pack-index repack --batch-size=<size>' collects a set
of pack-files that are listed in the multi-pack-index and creates
a new pack-file containing the objects whose offsets are listed
by the multi-pack-index to be in those pack-files. The set of pack-
files is selected greedily by sorting the pack-files by modified
time and adding a pack-file to the set if its "expected size" is
smaller than the batch size until the total expected size of the
selected pack-files is at least the batch size. The "expected
size" is calculated by taking the size of the pack-file divided
by the number of objects in the pack-file and multiplied by the
number of objects from the multi-pack-index with offset in that
pack-file. The expected size approximates how much data from that
pack-file will contribute to the resulting pack-file size. The
intention is that the resulting pack-file will be close in size
to the provided batch size.
The next run of the incremental-repack task will delete these
repacked pack-files during the 'expire' step.
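Taken together, the task is roughly equivalent to running these three commands by hand (a sketch, not the exact internal invocation; `<size>` is the batch size discussed below):

```
git multi-pack-index write
git multi-pack-index expire
git multi-pack-index repack --batch-size=<size>
```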
In this version, the batch size is set to "0" which ignores the
size restrictions when selecting the pack-files. It instead
selects all pack-files and repacks all packed objects into a
single pack-file. This will be updated in the next change, but
it requires doing some calculations that are better isolated to
a separate change.
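As a worked example of the "expected size" calculation (all numbers here are made-up assumptions, not from the commit): a 100 MB pack-file containing 50,000 objects, of which the multi-pack-index references 20,000, has an expected size of (100 MB / 50,000) * 20,000 = 40 MB:

```shell
# Illustrative expected-size computation in shell arithmetic.
pack_size=100000000     # on-disk size of the pack-file, in bytes
pack_objects=50000      # total objects stored in the pack-file
midx_objects=20000      # objects whose midx offset points into this pack

# expected size = (pack size / objects in pack) * objects referenced via midx
expected_size=$((pack_size / pack_objects * midx_objects))
echo "$expected_size"   # 40000000
```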
These steps are based on a similar background maintenance step in
Scalar (and VFS for Git) [1]. This was incredibly effective for
users of the Windows OS repository. After using the same VFS for Git
repository for over a year, some users had _thousands_ of pack-files
that combined to up to 250 GB of data. We noticed a few users were
running into the open file descriptor limits (due in part to a bug
in the multi-pack-index fixed by af96fe3 (midx: add packs to
packed_git linked list, 2019-04-29)).
These pack-files were mostly small since they contained the commits
and trees that were pushed to the origin in a given hour. The GVFS
protocol includes a "prefetch" step that asks for pre-computed pack-
files containing commits and trees by timestamp. These pack-files
were grouped into "daily" pack-files once a day for up to 30 days.
If a user did not request prefetch packs for over 30 days, then they
would get the entire history of commits and trees in a new, large
pack-file. This led to a large number of pack-files that had poor
delta compression.
By running this pack-file maintenance step once per day, these repos
with thousands of packs spanning 200+ GB dropped to dozens of pack-
files spanning 30-50 GB. This was done all without removing objects
from the system and using a constant batch size of two gigabytes.
Once the work was done to reduce the pack-files to small sizes, the
batch size of two gigabytes means that not every run triggers a
repack operation, so the following run will not expire a pack-file.
This has kept these repos in a "clean" state.
[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/PackfileMaintenanceStep.cs
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 15:33:36 +03:00
GIT_TEST_MULTI_PACK_INDEX=0

maintenance: use launchctl on macOS
The existing mechanism for scheduling background maintenance is done
through cron. The 'crontab -e' command allows updating the schedule
while cron itself runs those commands. While this is technically
supported by macOS, it has some significant deficiencies:
1. Every run of 'crontab -e' must request elevated privileges through
the user interface. When running 'git maintenance start' from the
Terminal app, it presents a dialog box saying "Terminal.app would
like to administer your computer. Administration can include
modifying passwords, networking, and system settings." This is more
alarming than what we are hoping to achieve. If this alert had some
information about how "git" is trying to run "crontab" then we would
have some reason to believe that this dialog might be fine. However,
it also doesn't help that some scenarios just leave Git waiting for
a response without presenting anything to the user. I experienced
this when executing the command from a Bash terminal view inside
Visual Studio Code.
2. While cron initializes a user environment enough for "git config
--global --show-origin" to show the correct config file information,
it does not set up the environment enough for Git Credential Manager
Core to load credentials during a 'prefetch' task. My prefetches
against private repositories required re-authenticating through UI
pop-ups in a way that should not be required.
The solution is to switch from cron to the Apple-recommended [1]
'launchd' tool.
[1] https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/ScheduledJobs.html
The basic idea of this tool is that we need to create XML-formatted
"plist" files inside "~/Library/LaunchAgents/" and then use the
'launchctl' tool to make launchd aware of them. The plist files
include all of the scheduling information, along with the command-line
arguments split across an array of <string> tags.
For example, here is my plist file for the weekly scheduled tasks:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0"><dict>
<key>Label</key><string>org.git-scm.git.weekly</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/libexec/git-core/git</string>
<string>--exec-path=/usr/local/libexec/git-core</string>
<string>for-each-repo</string>
<string>--config=maintenance.repo</string>
<string>maintenance</string>
<string>run</string>
<string>--schedule=weekly</string>
</array>
<key>StartCalendarInterval</key>
<array>
<dict>
<key>Day</key><integer>0</integer>
<key>Hour</key><integer>0</integer>
<key>Minute</key><integer>0</integer>
</dict>
</array>
</dict>
</plist>
The schedules for the daily and hourly tasks are more complicated
since we need to use an array for the StartCalendarInterval with
an entry for each of the six days other than the 0th day (to avoid
colliding with the weekly task), and each of the 23 hours other
than the 0th hour (to avoid colliding with the daily task).
The "Label" value is currently filled with "org.git-scm.git.X"
where X is the frequency. We need a different plist file for each
frequency.
The launchctl command needs to be aligned with a user id in order
to initialize the command environment. This must be done using
the 'launchctl bootstrap' subcommand. This subcommand is new as
of macOS 10.11, which was released in September 2015. Before that
release the 'launchctl load' subcommand was recommended. The best
source of information on this transition I have seen is available
at [2]. The current design does not preclude a future version that
detects the available features of 'launchctl' to use the older
commands. However, it is best to rely on the newest version since
Apple might completely remove the deprecated version on short
notice.
[2] https://babodee.wordpress.com/2016/04/09/launchctl-2-0-syntax/
To remove a schedule, we must run 'launchctl bootout' with a valid
plist file. We also need to 'bootout' a task before the 'bootstrap'
subcommand will succeed, if such a task already exists.
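Concretely, replacing the weekly task might look like the following (a sketch based on the plist example above, not the exact internal invocation; "gui/$(id -u)" is the per-user launchd domain):

```
plist=~/Library/LaunchAgents/org.git-scm.git.weekly.plist
launchctl bootout "gui/$(id -u)" "$plist" 2>/dev/null
launchctl bootstrap "gui/$(id -u)" "$plist"
```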
The need for a user id requires us to run 'id -u' which works on
POSIX systems but not Windows. Further, the need for fully-qualified
path names including $HOME behaves differently in the Git internals and
the external test suite. The $HOME variable starts with "C:\..." instead
of the "/c/..." that is provided by Git in these subcommands. The test
therefore has a prerequisite that we are not on Windows. The cross-
platform logic still allows us to test the macOS logic on a Linux
machine.
We can verify the commands that were run by 'git maintenance start'
and 'git maintenance stop' by injecting a script that writes the
command-line arguments into GIT_TEST_MAINT_SCHEDULER.
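For instance, the injected script can be a trivial recorder (a hypothetical sketch; the file name and arguments are illustrative):

```shell
# Stand-in scheduler: append the received command line to a log file
# instead of invoking the real launchctl.
cat >print-args <<-\EOF
#!/bin/sh
echo "$*" >>args
EOF
chmod +x print-args

# A test could then run:
#   GIT_TEST_MAINT_SCHEDULER="launchctl:./print-args" git maintenance start
# and inspect ./args afterwards. Direct invocation for illustration:
./print-args bootstrap gui/501 org.git-scm.git.hourly.plist
```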
An earlier version of this patch accidentally had an opening
"<dict>" tag when it should have had a closing "</dict>" tag. This
was caught during manual testing with actual 'launchctl' commands,
but we do not want to update developers' tasks when running tests.
It appears that macOS includes the "xmllint" tool which can verify
the XML format. This is useful for any system that might contain
the tool, so use it whenever it is available.
We strive to make these tests work on all platforms, but Windows caused
some headaches. In particular, the value of getuid() called by the C
code is not guaranteed to be the same as `$(id -u)` invoked by a test.
This is because `git.exe` is a native Windows program, whereas the
utility programs run by the test script mostly utilize the MSYS2 runtime,
which emulates a POSIX-like environment. Since the purpose of the test
is to check that the input to the hook is well-formed, the actual user
ID is immaterial, thus we can work around the problem by making the
test UID-agnostic. Another subtle issue is the $HOME environment
variable being a Windows-style path instead of a Unix-style path. We can
be more flexible here instead of expecting exact path matches.
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Co-authored-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-05 16:08:27 +03:00
test_lazy_prereq XMLLINT '
	xmllint --version
'

test_xmllint () {
	if test_have_prereq XMLLINT
	then
		xmllint --noout "$@"
	else
		true
	fi
}

test_expect_success 'help text' '
	test_expect_code 129 git maintenance -h 2>err &&
	test_i18ngrep "usage: git maintenance <subcommand>" err &&
	test_expect_code 128 git maintenance barf 2>err &&
	test_i18ngrep "invalid subcommand: barf" err &&
	test_expect_code 129 git maintenance 2>err &&
	test_i18ngrep "usage: git maintenance" err
'

test_expect_success 'run [--auto|--quiet]' '
	GIT_TRACE2_EVENT="$(pwd)/run-no-auto.txt" \
		git maintenance run 2>/dev/null &&
	GIT_TRACE2_EVENT="$(pwd)/run-auto.txt" \
		git maintenance run --auto 2>/dev/null &&
	GIT_TRACE2_EVENT="$(pwd)/run-no-quiet.txt" \
		git maintenance run --no-quiet 2>/dev/null &&
	test_subcommand git gc --quiet <run-no-auto.txt &&
	test_subcommand ! git gc --auto --quiet <run-auto.txt &&
	test_subcommand git gc --no-quiet <run-no-quiet.txt
'

test_expect_success 'maintenance.auto config option' '
	GIT_TRACE2_EVENT="$(pwd)/default" git commit --quiet --allow-empty -m 1 &&
	test_subcommand git maintenance run --auto --quiet <default &&
	GIT_TRACE2_EVENT="$(pwd)/true" \
		git -c maintenance.auto=true \
		commit --quiet --allow-empty -m 2 &&
	test_subcommand git maintenance run --auto --quiet <true &&
	GIT_TRACE2_EVENT="$(pwd)/false" \
		git -c maintenance.auto=false \
		commit --quiet --allow-empty -m 3 &&
	test_subcommand ! git maintenance run --auto --quiet <false
'

test_expect_success 'maintenance.<task>.enabled' '
	git config maintenance.gc.enabled false &&
	git config maintenance.commit-graph.enabled true &&
	GIT_TRACE2_EVENT="$(pwd)/run-config.txt" git maintenance run 2>err &&
	test_subcommand ! git gc --quiet <run-config.txt &&
	test_subcommand git commit-graph write --split --reachable --no-progress <run-config.txt
'

test_expect_success 'run --task=<task>' '
	GIT_TRACE2_EVENT="$(pwd)/run-commit-graph.txt" \
		git maintenance run --task=commit-graph 2>/dev/null &&
	GIT_TRACE2_EVENT="$(pwd)/run-gc.txt" \
		git maintenance run --task=gc 2>/dev/null &&
	GIT_TRACE2_EVENT="$(pwd)/run-commit-graph.txt" \
		git maintenance run --task=commit-graph 2>/dev/null &&
	GIT_TRACE2_EVENT="$(pwd)/run-both.txt" \
		git maintenance run --task=commit-graph --task=gc 2>/dev/null &&
	test_subcommand ! git gc --quiet <run-commit-graph.txt &&
	test_subcommand git gc --quiet <run-gc.txt &&
	test_subcommand git gc --quiet <run-both.txt &&
	test_subcommand git commit-graph write --split --reachable --no-progress <run-commit-graph.txt &&
	test_subcommand ! git commit-graph write --split --reachable --no-progress <run-gc.txt &&
	test_subcommand git commit-graph write --split --reachable --no-progress <run-both.txt
'

test_expect_success 'run --task=bogus' '
	test_must_fail git maintenance run --task=bogus 2>err &&
	test_i18ngrep "is not a valid task" err
'

test_expect_success 'run --task duplicate' '
	test_must_fail git maintenance run --task=gc --task=gc 2>err &&
	test_i18ngrep "cannot be selected multiple times" err
'

maintenance: add prefetch task
When working with very large repositories, an incremental 'git fetch'
command can download a large amount of data. If there are many other
users pushing to a common repo, then this data can rival the initial
pack-file size of a 'git clone' of a medium-size repo.
Users may want to keep the data on their local repos as close as
possible to the data on the remote repos by fetching periodically in
the background. This can break up a large daily fetch into several
smaller hourly fetches.
The task is called "prefetch" because it is work done in advance
of a foreground fetch to make that 'git fetch' command much faster.
However, if we simply ran 'git fetch <remote>' in the background,
then the user running a foreground 'git fetch <remote>' would lose
some important feedback when a new branch appears or an existing
branch updates. This is especially true if a remote branch is
force-updated and this isn't noticed by the user because it occurred
in the background. Further, the functionality of 'git push
--force-with-lease' becomes suspect.
When running 'git fetch <remote> <options>' in the background, use
the following options for careful updating:
1. --no-tags prevents getting a new tag when a user wants to see
the new tags appear in their foreground fetches.
2. --refmap= removes the configured refspec which usually updates
refs/remotes/<remote>/* with the refs advertised by the remote.
While this looks confusing, this was documented and tested by
b40a50264ac (fetch: document and test --refmap="", 2020-01-21),
including this sentence in the documentation:
Providing an empty `<refspec>` to the `--refmap` option
causes Git to ignore the configured refspecs and rely
entirely on the refspecs supplied as command-line arguments.
3. By adding a new refspec "+refs/heads/*:refs/prefetch/<remote>/*"
we can ensure that we actually load the new values somewhere in
our refspace while not updating refs/heads or refs/remotes. By
storing these refs here, the commit-graph job will update the
commit-graph with the commits from these hidden refs.
4. --prune will delete the refs/prefetch/<remote> refs that no
longer appear on the remote.
5. --no-write-fetch-head prevents updating FETCH_HEAD.
We've been using this step as a critical background job in Scalar
[1] (and VFS for Git). This solved a pain point that was showing up
in user reports: fetching was a pain! Users do not like waiting to
download the data that was created while they were away from their
machines. After implementing background fetch, the foreground fetch
commands sped up significantly because they mostly just update refs
and download a small amount of new data. The effect is especially
dramatic when paired with --no-show-forced-updates (through
fetch.showForcedUpdates=false).
[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/FetchStep.cs
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 15:33:31 +03:00
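Putting those options together, the background fetch for a remote is a single command. The following self-contained demo (repository names are illustrative; it assumes a reasonably recent git on PATH) runs the same style of invocation against an empty local remote:

```shell
# Create a throwaway "server" and a "client" that treats it as a remote.
git init --quiet server
git init --quiet client
git -C client remote add origin ../server

# The prefetch-style fetch: no tags, no FETCH_HEAD, no configured refmap,
# and all branches landing under refs/prefetch/ instead of refs/remotes/.
git -C client fetch origin --prune --no-tags --no-write-fetch-head \
	--recurse-submodules=no --refmap= --quiet \
	"+refs/heads/*:refs/prefetch/origin/*"
```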
test_expect_success 'run --task=prefetch with no remotes' '
	git maintenance run --task=prefetch 2>err &&
	test_must_be_empty err
'

test_expect_success 'prefetch multiple remotes' '
	git clone . clone1 &&
	git clone . clone2 &&
	git remote add remote1 "file://$(pwd)/clone1" &&
	git remote add remote2 "file://$(pwd)/clone2" &&
	git -C clone1 switch -c one &&
	git -C clone2 switch -c two &&
	test_commit -C clone1 one &&
	test_commit -C clone2 two &&
	GIT_TRACE2_EVENT="$(pwd)/run-prefetch.txt" git maintenance run --task=prefetch 2>/dev/null &&
	fetchargs="--prune --no-tags --no-write-fetch-head --recurse-submodules=no --refmap= --quiet" &&
	test_subcommand git fetch remote1 $fetchargs +refs/heads/\\*:refs/prefetch/remote1/\\* <run-prefetch.txt &&
	test_subcommand git fetch remote2 $fetchargs +refs/heads/\\*:refs/prefetch/remote2/\\* <run-prefetch.txt &&
	test_path_is_missing .git/refs/remotes &&
	git log prefetch/remote1/one &&
	git log prefetch/remote2/two &&
	git fetch --all &&
	test_cmp_rev refs/remotes/remote1/one refs/prefetch/remote1/one &&
	test_cmp_rev refs/remotes/remote2/two refs/prefetch/remote2/two
'

test_expect_success 'loose-objects task' '
	# Repack everything so we know the state of the object dir
	git repack -adk &&

	# Hack to stop maintenance from running during "git commit"
	echo in use >.git/objects/maintenance.lock &&

	# Assuming that "git commit" creates at least one loose object
	test_commit create-loose-object &&
	rm .git/objects/maintenance.lock &&

	ls .git/objects >obj-dir-before &&
	test_file_not_empty obj-dir-before &&
	ls .git/objects/pack/*.pack >packs-before &&
	test_line_count = 1 packs-before &&

	# The first run creates a pack-file
	# but does not delete loose objects.
	git maintenance run --task=loose-objects &&
	ls .git/objects >obj-dir-between &&
	test_cmp obj-dir-before obj-dir-between &&
	ls .git/objects/pack/*.pack >packs-between &&
	test_line_count = 2 packs-between &&
	ls .git/objects/pack/loose-*.pack >loose-packs &&
	test_line_count = 1 loose-packs &&

	# The second run deletes loose objects
	# but does not create a pack-file.
	git maintenance run --task=loose-objects &&
	ls .git/objects >obj-dir-after &&
	cat >expect <<-\EOF &&
	info
	pack
	EOF
	test_cmp expect obj-dir-after &&
	ls .git/objects/pack/*.pack >packs-after &&
	test_cmp packs-between packs-after
'

test_expect_success 'maintenance.loose-objects.auto' '
	git repack -adk &&
	GIT_TRACE2_EVENT="$(pwd)/trace-lo1.txt" \
		git -c maintenance.loose-objects.auto=1 maintenance \
		run --auto --task=loose-objects 2>/dev/null &&
	test_subcommand ! git prune-packed --quiet <trace-lo1.txt &&
	printf data-A | git hash-object -t blob --stdin -w &&
	GIT_TRACE2_EVENT="$(pwd)/trace-loA" \
		git -c maintenance.loose-objects.auto=2 \
		maintenance run --auto --task=loose-objects 2>/dev/null &&
	test_subcommand ! git prune-packed --quiet <trace-loA &&
	printf data-B | git hash-object -t blob --stdin -w &&
	GIT_TRACE2_EVENT="$(pwd)/trace-loB" \
		git -c maintenance.loose-objects.auto=2 \
		maintenance run --auto --task=loose-objects 2>/dev/null &&
	test_subcommand git prune-packed --quiet <trace-loB &&
	GIT_TRACE2_EVENT="$(pwd)/trace-loC" \
		git -c maintenance.loose-objects.auto=2 \
		maintenance run --auto --task=loose-objects 2>/dev/null &&
	test_subcommand git prune-packed --quiet <trace-loC
'
|
|
|
|
|
maintenance: add incremental-repack task
The previous change cleaned up loose objects using the
'loose-objects' that can be run safely in the background. Add a
similar job that performs similar cleanups for pack-files.
One issue with running 'git repack' is that it is designed to
repack all pack-files into a single pack-file. While this is the
most space-efficient way to store object data, it is not time or
memory efficient. This becomes extremely important if the repo is
so large that a user struggles to store two copies of the pack on
their disk.
Instead, perform an "incremental" repack by collecting a few small
pack-files into a new pack-file. The multi-pack-index facilitates
this process ever since 'git multi-pack-index expire' was added in
19575c7 (multi-pack-index: implement 'expire' subcommand,
2019-06-10) and 'git multi-pack-index repack' was added in ce1e4a1
(midx: implement midx_repack(), 2019-06-10).
The 'incremental-repack' task runs the following steps:
1. 'git multi-pack-index write' creates a multi-pack-index file if
one did not exist, and otherwise will update the multi-pack-index
with any new pack-files that appeared since the last write. This
is particularly relevant with the background fetch job.
When the multi-pack-index sees two copies of the same object, it
stores the offset data into the newer pack-file. This means that
some old pack-files could become "unreferenced" which I will use
to mean "a pack-file that is in the pack-file list of the
multi-pack-index but none of the objects in the multi-pack-index
reference a location inside that pack-file."
2. 'git multi-pack-index expire' deletes any unreferenced pack-files
and updaes the multi-pack-index to drop those pack-files from the
list. This is safe to do as concurrent Git processes will see the
multi-pack-index and not open those packs when looking for object
contents. (Similar to the 'loose-objects' job, there are some Git
commands that open pack-files regardless of the multi-pack-index,
but they are rarely used. Further, a user that self-selects to
use background operations would likely refrain from using those
commands.)
3. 'git multi-pack-index repack --bacth-size=<size>' collects a set
of pack-files that are listed in the multi-pack-index and creates
a new pack-file containing the objects whose offsets are listed
by the multi-pack-index to be in those objects. The set of pack-
files is selected greedily by sorting the pack-files by modified
time and adding a pack-file to the set if its "expected size" is
smaller than the batch size until the total expected size of the
selected pack-files is at least the batch size. The "expected
size" is calculated by taking the size of the pack-file divided
by the number of objects in the pack-file and multiplied by the
number of objects from the multi-pack-index with offset in that
pack-file. The expected size approximates how much data from that
pack-file will contribute to the resulting pack-file size. The
intention is that the resulting pack-file will be close in size
to the provided batch size.
The next run of the incremental-repack task will delete these
repacked pack-files during the 'expire' step.
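The "expected size" arithmetic from step 3 can be sketched in shell, using
entirely hypothetical numbers (a made-up pack size and object counts, not
values taken from any real repository):

```shell
# Hypothetical numbers: a pack-file of 1,000,000 bytes holding 1,000
# objects, of which only 250 are still referenced by the multi-pack-index.
pack_size=1000000
pack_objects=1000
midx_objects=250

# expected size = (pack size / objects in pack) * midx-referenced objects
expected=$((pack_size / pack_objects * midx_objects))
echo "$expected"
```

Here the pack contributes only a quarter of its bytes to a repack, so its
expected size (250,000) is well below its on-disk size.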
In this version, the batch size is set to "0" which ignores the
size restrictions when selecting the pack-files. It instead
selects all pack-files and repacks all packed objects into a
single pack-file. This will be updated in the next change, but
it requires doing some calculations that are better isolated to
a separate change.
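The three steps above can be sketched as a command sequence. This is a
minimal demo in a throwaway repository (the 'midx-demo' directory name and
identity settings are illustrative, and it assumes a Git new enough to have
all three 'git multi-pack-index' subcommands); --batch-size=0 matches the
"repack everything" behavior of this version:

```shell
# Create a tiny throwaway repository with one pack to operate on.
git init -q midx-demo &&
git -C midx-demo -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m init &&
git -C midx-demo repack -adq &&

# The incremental-repack steps: write, expire, then repack.
git -C midx-demo multi-pack-index write &&
git -C midx-demo multi-pack-index expire &&
git -C midx-demo multi-pack-index repack --batch-size=0
```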
These steps are based on a similar background maintenance step in
Scalar (and VFS for Git) [1]. This was incredibly effective for
users of the Windows OS repository. After using the same VFS for Git
repository for over a year, some users had _thousands_ of pack-files
that combined to up to 250 GB of data. We noticed a few users were
running into the open file descriptor limits (due in part to a bug
in the multi-pack-index fixed by af96fe3 (midx: add packs to
packed_git linked list, 2019-04-29)).
These pack-files were mostly small since they contained the commits
and trees that were pushed to the origin in a given hour. The GVFS
protocol includes a "prefetch" step that asks for pre-computed pack-
files containing commits and trees by timestamp. These pack-files
were grouped into "daily" pack-files once a day for up to 30 days.
If a user did not request prefetch packs for over 30 days, then they
would get the entire history of commits and trees in a new, large
pack-file. This led to a large number of pack-files that had poor
delta compression.
By running this pack-file maintenance step once per day, these repos
with thousands of packs spanning 200+ GB dropped to dozens of pack-
files spanning 30-50 GB. This was done all without removing objects
from the system and using a constant batch size of two gigabytes.
Once the work was done to reduce the pack-files to small sizes, the
batch size of two gigabytes means that not every run triggers a
repack operation, so the following run will not expire a pack-file.
This has kept these repos in a "clean" state.
[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/PackfileMaintenanceStep.cs
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 15:33:36 +03:00
test_expect_success 'incremental-repack task' '
	packDir=.git/objects/pack &&
	for i in $(test_seq 1 5)
	do
		test_commit $i || return 1
	done &&

	# Create three disjoint pack-files with size BIG, small, small.
	echo HEAD~2 | git pack-objects --revs $packDir/test-1 &&
	test_tick &&
	git pack-objects --revs $packDir/test-2 <<-\EOF &&
	HEAD~1
	^HEAD~2
	EOF
	test_tick &&
	git pack-objects --revs $packDir/test-3 <<-\EOF &&
	HEAD
	^HEAD~1
	EOF
	rm -f $packDir/pack-* &&
	rm -f $packDir/loose-* &&
	ls $packDir/*.pack >packs-before &&
	test_line_count = 3 packs-before &&

	# the job repacks the two into a new pack, but does not
	# delete the old ones.
	git maintenance run --task=incremental-repack &&
	ls $packDir/*.pack >packs-between &&
	test_line_count = 4 packs-between &&

	# the job deletes the two old packs, and does not write
maintenance: auto-size incremental-repack batch
When repacking during the 'incremental-repack' task, we use the
--batch-size option in 'git multi-pack-index repack'. The initial setting
used --batch-size=0 to repack everything into a single pack-file. This is
not sustainable for a large repository. The amount of work required is
also likely to use too many system resources for a background job.
Update the 'incremental-repack' task by dynamically computing a
--batch-size option based on the current pack-file structure.
The dynamic default size is computed with this idea in mind for a client
repository that was cloned from a very large remote: there is likely one
"big" pack-file that was created at clone time. Thus, do not try
repacking it as it is likely packed efficiently by the server.
Instead, we select the second-largest pack-file, and create a batch size
that is one larger than that pack-file. If there are three or more
pack-files, then this guarantees that at least two will be combined into
a new pack-file.
Of course, this means that the second-largest pack-file size is likely
to grow over time and may eventually surpass the initially-cloned
pack-file. Recall that the pack-file batch is selected in a greedy
manner: the packs are considered from oldest to newest and are selected
if they have size smaller than the batch size until the total selected
size is larger than the batch size. Thus, that oldest "clone" pack will
be first to repack after the new data creates a pack larger than that.
We also want to place some limits on how large these pack-files become,
in order to bound the amount of time spent repacking. A maximum
batch-size of two gigabytes means that large repositories will never be
packed into a single pack-file using this job, but also that repack is
rather expensive. This is a trade-off that is valuable to have if the
maintenance is being run automatically or in the background. Users who
truly want to optimize for space and performance (and are willing to pay
the upfront cost of a full repack) can use the 'gc' task to do so.
Create a test for this two gigabyte limit by creating an EXPENSIVE test
that generates two pack-files of roughly 2.5 gigabytes in size, then
performs an incremental repack. Check that the --batch-size argument in
the subcommand uses the hard-coded maximum.
Helped-by: Chris Torek <chris.torek@gmail.com>
Reported-by: Son Luong Ngoc <sluongng@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 15:33:37 +03:00
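The batch-size heuristic described above (one byte more than the
second-largest pack, capped at two gigabytes) can be sketched in shell
arithmetic; the pack sizes here are made-up stand-ins for one big
clone-time pack and two smaller packs:

```shell
# Hypothetical pack sizes in bytes: one big clone-time pack, two smaller ones.
sizes="900000000 50000000 30000000"

# Pick the second-largest pack and add one byte...
second_largest=$(printf "%s\n" $sizes | sort -rn | sed -n 2p)
batch_size=$((second_largest + 1))

# ...then cap at the hard-coded two-gigabyte maximum (2^31 - 1).
max_batch=2147483647
if test "$batch_size" -gt "$max_batch"
then
	batch_size=$max_batch
fi
echo "$batch_size"
```

With these sizes the cap never triggers; only when the second-largest pack
exceeds two gigabytes does the batch size clamp to 2147483647, which is the
value the EXPENSIVE test below checks for.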
	# a new one because the batch size is not high enough to
	# pack the largest pack-file.
	git maintenance run --task=incremental-repack &&
	ls .git/objects/pack/*.pack >packs-after &&
	test_line_count = 2 packs-after
'

test_expect_success EXPENSIVE 'incremental-repack 2g limit' '
	for i in $(test_seq 1 5)
	do
		test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
		return 1
	done &&
	git add big &&
	git commit -m "Add big file (1)" &&

	# ensure any possible loose objects are in a pack-file
	git maintenance run --task=loose-objects &&

	rm big &&
	for i in $(test_seq 6 10)
	do
		test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
		return 1
	done &&
	git add big &&
	git commit -m "Add big file (2)" &&

	# ensure any possible loose objects are in a pack-file
	git maintenance run --task=loose-objects &&

	# Now run the incremental-repack task and check the batch-size
	GIT_TRACE2_EVENT="$(pwd)/run-2g.txt" git maintenance run \
		--task=incremental-repack 2>/dev/null &&
	test_subcommand git multi-pack-index repack \
		--no-progress --batch-size=2147483647 <run-2g.txt
'

test_expect_success 'maintenance.incremental-repack.auto' '
	git repack -adk &&
	git config core.multiPackIndex true &&
	git multi-pack-index write &&
	GIT_TRACE2_EVENT="$(pwd)/midx-init.txt" git \
		-c maintenance.incremental-repack.auto=1 \
		maintenance run --auto --task=incremental-repack 2>/dev/null &&
	test_subcommand ! git multi-pack-index write --no-progress <midx-init.txt &&
	test_commit A &&
	git pack-objects --revs .git/objects/pack/pack <<-\EOF &&
	HEAD
	^HEAD~1
	EOF
	GIT_TRACE2_EVENT=$(pwd)/trace-A git \
		-c maintenance.incremental-repack.auto=2 \
		maintenance run --auto --task=incremental-repack 2>/dev/null &&
	test_subcommand ! git multi-pack-index write --no-progress <trace-A &&
	test_commit B &&
	git pack-objects --revs .git/objects/pack/pack <<-\EOF &&
	HEAD
	^HEAD~1
	EOF
	GIT_TRACE2_EVENT=$(pwd)/trace-B git \
		-c maintenance.incremental-repack.auto=2 \
		maintenance run --auto --task=incremental-repack 2>/dev/null &&
	test_subcommand git multi-pack-index write --no-progress <trace-B
'
test_expect_success '--auto and --schedule incompatible' '
	test_must_fail git maintenance run --auto --schedule=daily 2>err &&
	test_i18ngrep "at most one" err
'

test_expect_success 'invalid --schedule value' '
	test_must_fail git maintenance run --schedule=annually 2>err &&
	test_i18ngrep "unrecognized --schedule" err
'

test_expect_success '--schedule inheritance weekly -> daily -> hourly' '
	git config maintenance.loose-objects.enabled true &&
	git config maintenance.loose-objects.schedule hourly &&
	git config maintenance.commit-graph.enabled true &&
	git config maintenance.commit-graph.schedule daily &&
	git config maintenance.incremental-repack.enabled true &&
	git config maintenance.incremental-repack.schedule weekly &&

	GIT_TRACE2_EVENT="$(pwd)/hourly.txt" \
		git maintenance run --schedule=hourly 2>/dev/null &&
	test_subcommand git prune-packed --quiet <hourly.txt &&
	test_subcommand ! git commit-graph write --split --reachable \
		--no-progress <hourly.txt &&
	test_subcommand ! git multi-pack-index write --no-progress <hourly.txt &&

	GIT_TRACE2_EVENT="$(pwd)/daily.txt" \
		git maintenance run --schedule=daily 2>/dev/null &&
	test_subcommand git prune-packed --quiet <daily.txt &&
	test_subcommand git commit-graph write --split --reachable \
		--no-progress <daily.txt &&
	test_subcommand ! git multi-pack-index write --no-progress <daily.txt &&

	GIT_TRACE2_EVENT="$(pwd)/weekly.txt" \
		git maintenance run --schedule=weekly 2>/dev/null &&
	test_subcommand git prune-packed --quiet <weekly.txt &&
	test_subcommand git commit-graph write --split --reachable \
		--no-progress <weekly.txt &&
	test_subcommand git multi-pack-index write --no-progress <weekly.txt
'
test_expect_success 'maintenance.strategy inheritance' '
	for task in commit-graph loose-objects incremental-repack
	do
		git config --unset maintenance.$task.schedule || return 1
	done &&

	test_when_finished git config --unset maintenance.strategy &&
	git config maintenance.strategy incremental &&

	GIT_TRACE2_EVENT="$(pwd)/incremental-hourly.txt" \
		git maintenance run --schedule=hourly --quiet &&
	GIT_TRACE2_EVENT="$(pwd)/incremental-daily.txt" \
		git maintenance run --schedule=daily --quiet &&

	test_subcommand git commit-graph write --split --reachable \
		--no-progress <incremental-hourly.txt &&
	test_subcommand ! git prune-packed --quiet <incremental-hourly.txt &&
	test_subcommand ! git multi-pack-index write --no-progress \
		<incremental-hourly.txt &&

	test_subcommand git commit-graph write --split --reachable \
		--no-progress <incremental-daily.txt &&
	test_subcommand git prune-packed --quiet <incremental-daily.txt &&
	test_subcommand git multi-pack-index write --no-progress \
		<incremental-daily.txt &&

	# Modify defaults
	git config maintenance.commit-graph.schedule daily &&
	git config maintenance.loose-objects.schedule hourly &&
	git config maintenance.incremental-repack.enabled false &&

	GIT_TRACE2_EVENT="$(pwd)/modified-hourly.txt" \
		git maintenance run --schedule=hourly --quiet &&
	GIT_TRACE2_EVENT="$(pwd)/modified-daily.txt" \
		git maintenance run --schedule=daily --quiet &&

	test_subcommand ! git commit-graph write --split --reachable \
		--no-progress <modified-hourly.txt &&
	test_subcommand git prune-packed --quiet <modified-hourly.txt &&
	test_subcommand ! git multi-pack-index write --no-progress \
		<modified-hourly.txt &&

	test_subcommand git commit-graph write --split --reachable \
		--no-progress <modified-daily.txt &&
	test_subcommand git prune-packed --quiet <modified-daily.txt &&
	test_subcommand ! git multi-pack-index write --no-progress \
		<modified-daily.txt
'
test_expect_success 'register and unregister' '
	test_when_finished git config --global --unset-all maintenance.repo &&
	git config --global --add maintenance.repo /existing1 &&
	git config --global --add maintenance.repo /existing2 &&
	git config --global --get-all maintenance.repo >before &&

	git maintenance register &&
	test_cmp_config false maintenance.auto &&
	git config --global --get-all maintenance.repo >between &&
	cp before expect &&
	pwd >>expect &&
	test_cmp expect between &&

	git maintenance unregister &&
	git config --global --get-all maintenance.repo >actual &&
	test_cmp before actual
'

test_expect_success 'start from empty cron table' '
	GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance start &&

	# start registers the repo
	git config --get --global maintenance.repo "$(pwd)" &&

	grep "for-each-repo --config=maintenance.repo maintenance run --schedule=daily" cron.txt &&
	grep "for-each-repo --config=maintenance.repo maintenance run --schedule=hourly" cron.txt &&
	grep "for-each-repo --config=maintenance.repo maintenance run --schedule=weekly" cron.txt
'

test_expect_success 'stop from existing schedule' '
	GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance stop &&

	# stop does not unregister the repo
	git config --get --global maintenance.repo "$(pwd)" &&

	# Operation is idempotent
	GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance stop &&
	test_must_be_empty cron.txt
'

test_expect_success 'start preserves existing schedule' '
	echo "Important information!" >cron.txt &&
	GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance start &&
	grep "Important information!" cron.txt
'
maintenance: use launchctl on macOS
The existing mechanism for scheduling background maintenance is done
through cron. The 'crontab -e' command allows updating the schedule
while cron itself runs those commands. While this is technically
supported by macOS, it has some significant deficiencies:
1. Every run of 'crontab -e' must request elevated privileges through
the user interface. When running 'git maintenance start' from the
Terminal app, it presents a dialog box saying "Terminal.app would
like to administer your computer. Administration can include
modifying passwords, networking, and system settings." This is more
alarming than what we are hoping to achieve. If this alert had some
information about how "git" is trying to run "crontab" then we would
have some reason to believe that this dialog might be fine. However,
it also doesn't help that some scenarios just leave Git waiting for
a response without presenting anything to the user. I experienced
this when executing the command from a Bash terminal view inside
Visual Studio Code.
2. While cron initializes a user environment enough for "git config
--global --show-origin" to show the correct config file information,
it does not set up the environment enough for Git Credential Manager
Core to load credentials during a 'prefetch' task. My prefetches
against private repositories required re-authenticating through UI
pop-ups in a way that should not be required.
The solution is to switch from cron to the Apple-recommended [1]
'launchd' tool.
[1] https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/ScheduledJobs.html
The basics of this tool is that we need to create XML-formatted
"plist" files inside "~/Library/LaunchAgents/" and then use the
'launchctl' tool to make launchd aware of them. The plist files
include all of the scheduling information, along with the command-line
arguments split across an array of <string> tags.
For example, here is my plist file for the weekly scheduled tasks:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0"><dict>
<key>Label</key><string>org.git-scm.git.weekly</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/libexec/git-core/git</string>
<string>--exec-path=/usr/local/libexec/git-core</string>
<string>for-each-repo</string>
<string>--config=maintenance.repo</string>
<string>maintenance</string>
<string>run</string>
<string>--schedule=weekly</string>
</array>
<key>StartCalendarInterval</key>
<array>
<dict>
<key>Day</key><integer>0</integer>
<key>Hour</key><integer>0</integer>
<key>Minute</key><integer>0</integer>
</dict>
</array>
</dict>
</plist>
The schedules for the daily and hourly tasks are more complicated
since we need to use an array for the StartCalendarInterval with
an entry for each of the six days other than the 0th day (to avoid
colliding with the weekly task), and each of the 23 hours other
than the 0th hour (to avoid colliding with the daily task).
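As an illustrative sketch of that shape (not taken from the actual plist
files Git writes; abbreviated to the first two of the six Day entries),
the daily schedule's StartCalendarInterval might look like:

```xml
<key>StartCalendarInterval</key>
<array>
	<dict>
		<key>Day</key><integer>1</integer>
		<key>Hour</key><integer>0</integer>
		<key>Minute</key><integer>0</integer>
	</dict>
	<dict>
		<key>Day</key><integer>2</integer>
		<key>Hour</key><integer>0</integer>
		<key>Minute</key><integer>0</integer>
	</dict>
	<!-- ...and so on through Day 6, skipping Day 0 (the weekly slot)... -->
</array>
```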
The "Label" value is currently filled with "org.git-scm.git.X"
where X is the frequency. We need a different plist file for each
frequency.
The launchctl command needs to be aligned with a user id in order
to initialize the command environment. This must be done using
the 'launchctl bootstrap' subcommand. This subcommand is new as
of macOS 10.11, which was released in September 2015. Before that
release the 'launchctl load' subcommand was recommended. The best
source of information on this transition I have seen is available
at [2]. The current design does not preclude a future version that
detects the available features of 'launchctl' to use the older
commands. However, it is best to rely on the newest version since
Apple might completely remove the deprecated version on short
notice.
[2] https://babodee.wordpress.com/2016/04/09/launchctl-2-0-syntax/
To remove a schedule, we must run 'launchctl bootout' with a valid
plist file. We also need to 'bootout' a task before the 'bootstrap'
subcommand will succeed, if such a task already exists.
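As a rough sketch (not the actual C implementation; "[UID]" stands in
for the numeric value of 'id -u', and the plist paths assume the
default $HOME layout), the per-frequency sequence looks like:

```shell
# Sketch: the launchctl calls issued per frequency by 'start'.
# 'bootout' first removes any existing task so 'bootstrap' can succeed.
for freq in hourly daily weekly
do
	plist="$HOME/Library/LaunchAgents/org.git-scm.git.$freq.plist"
	echo "launchctl bootout gui/[UID] $plist"
	echo "launchctl bootstrap gui/[UID] $plist"
done
```

'git maintenance stop' runs only the 'bootout' half of this sequence.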
The need for a user id requires us to run 'id -u' which works on
POSIX systems but not Windows. Further, the need for fully-qualified
path names including $HOME behaves differently in the Git internals and
the external test suite. The $HOME variable starts with "C:\..." instead
of the "/c/..." that is provided by Git in these subcommands. The test
therefore has a prerequisite that we are not on Windows. The cross-
platform logic still allows us to test the macOS logic on a Linux
machine.
We can verify the commands that were run by 'git maintenance start'
and 'git maintenance stop' by injecting a script that writes the
command-line arguments into GIT_TEST_MAINT_SCHEDULER.
An earlier version of this patch accidentally had an opening
"<dict>" tag when it should have had a closing "</dict>" tag. This
was caught during manual testing with actual 'launchctl' commands,
but we do not want to update developers' tasks when running tests.
It appears that macOS includes the "xmllint" tool which can verify
the XML format. This is useful for any system that might contain
the tool, so use it whenever it is available.
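Such a guarded helper could be sketched as follows (the function name
here is illustrative, not the helper's actual definition): validate
the file when xmllint is installed, and pass trivially otherwise.

```shell
# Sketch: validate an XML file only if xmllint is available.
maybe_xmllint () {
	if command -v xmllint >/dev/null 2>&1
	then
		xmllint --noout "$1"
	fi
}
```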
We strive to make these tests work on all platforms, but Windows caused
some headaches. In particular, the value of getuid() called by the C
code is not guaranteed to be the same as `$(id -u)` invoked by a test.
This is because `git.exe` is a native Windows program, whereas the
utility programs run by the test script mostly utilize the MSYS2 runtime,
which emulates a POSIX-like environment. Since the purpose of the test
is to check that the input to the hook is well-formed, the actual user
ID is immaterial, thus we can work around the problem by making the
test UID-agnostic. Another subtle issue is the $HOME environment
variable being a Windows-style path instead of a Unix-style path. We can
be more flexible here instead of expecting exact path matches.
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Co-authored-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-05 16:08:27 +03:00
|
|
|
test_expect_success 'start and stop macOS maintenance' '
|
|
|
|
# ensure $HOME can be compared against hook arguments on all platforms
|
|
|
|
pfx=$(cd "$HOME" && pwd) &&
|
|
|
|
|
|
|
|
write_script print-args <<-\EOF &&
|
|
|
|
echo $* | sed "s:gui/[0-9][0-9]*:gui/[UID]:" >>args
|
|
|
|
EOF
|
|
|
|
|
|
|
|
rm -f args &&
|
|
|
|
GIT_TEST_MAINT_SCHEDULER=launchctl:./print-args git maintenance start &&
|
|
|
|
|
|
|
|
# start registers the repo
|
|
|
|
git config --get --global maintenance.repo "$(pwd)" &&
|
|
|
|
|
|
|
|
ls "$HOME/Library/LaunchAgents" >actual &&
|
|
|
|
cat >expect <<-\EOF &&
|
|
|
|
org.git-scm.git.daily.plist
|
|
|
|
org.git-scm.git.hourly.plist
|
|
|
|
org.git-scm.git.weekly.plist
|
|
|
|
EOF
|
|
|
|
test_cmp expect actual &&
|
|
|
|
|
|
|
|
rm -f expect &&
|
|
|
|
for frequency in hourly daily weekly
|
|
|
|
do
|
|
|
|
PLIST="$pfx/Library/LaunchAgents/org.git-scm.git.$frequency.plist" &&
|
|
|
|
test_xmllint "$PLIST" &&
|
|
|
|
grep schedule=$frequency "$PLIST" &&
|
|
|
|
echo "bootout gui/[UID] $PLIST" >>expect &&
|
|
|
|
echo "bootstrap gui/[UID] $PLIST" >>expect || return 1
|
|
|
|
done &&
|
|
|
|
test_cmp expect args &&
|
|
|
|
|
|
|
|
rm -f args &&
|
|
|
|
GIT_TEST_MAINT_SCHEDULER=launchctl:./print-args git maintenance stop &&
|
|
|
|
|
|
|
|
# stop does not unregister the repo
|
|
|
|
git config --get --global maintenance.repo "$(pwd)" &&
|
|
|
|
|
|
|
|
printf "bootout gui/[UID] $pfx/Library/LaunchAgents/org.git-scm.git.%s.plist\n" \
|
|
|
|
hourly daily weekly >expect &&
|
|
|
|
test_cmp expect args &&
|
|
|
|
ls "$HOME/Library/LaunchAgents" >actual &&
|
|
|
|
test_line_count = 0 actual
|
|
|
|
'
|
|
|
|
|
2020-10-15 20:22:03 +03:00
|
|
|
test_expect_success 'register preserves existing strategy' '
|
|
|
|
git config maintenance.strategy none &&
|
|
|
|
git maintenance register &&
|
|
|
|
test_config maintenance.strategy none &&
|
|
|
|
git config --unset maintenance.strategy &&
|
|
|
|
git maintenance register &&
|
|
|
|
test_config maintenance.strategy incremental
|
|
|
|
'
|
|
|
|
|
|
|
|
test_done
|