Commit Graph

350 Commits

Author SHA1 Message Date
Junio C Hamano cf0b1793ea Merge branch 'sb/object-store'
Refactoring the internal global data structure to make it possible
to open multiple repositories, work with and then close them.

Rerolled by Duy on top of a separate preliminary clean-up topic.
The resulting structure of the topics looked very sensible.

* sb/object-store: (27 commits)
  sha1_file: allow sha1_loose_object_info to handle arbitrary repositories
  sha1_file: allow map_sha1_file to handle arbitrary repositories
  sha1_file: allow map_sha1_file_1 to handle arbitrary repositories
  sha1_file: allow open_sha1_file to handle arbitrary repositories
  sha1_file: allow stat_sha1_file to handle arbitrary repositories
  sha1_file: allow sha1_file_name to handle arbitrary repositories
  sha1_file: add repository argument to sha1_loose_object_info
  sha1_file: add repository argument to map_sha1_file
  sha1_file: add repository argument to map_sha1_file_1
  sha1_file: add repository argument to open_sha1_file
  sha1_file: add repository argument to stat_sha1_file
  sha1_file: add repository argument to sha1_file_name
  sha1_file: allow prepare_alt_odb to handle arbitrary repositories
  sha1_file: allow link_alt_odb_entries to handle arbitrary repositories
  sha1_file: add repository argument to prepare_alt_odb
  sha1_file: add repository argument to link_alt_odb_entries
  sha1_file: add repository argument to read_info_alternates
  sha1_file: add repository argument to link_alt_odb_entry
  sha1_file: add raw_object_store argument to alt_odb_usable
  pack: move approximate object count to object store
  ...
2018-04-11 13:09:55 +09:00
Stefan Beller 0d4a132144 object-store: migrate alternates struct and functions from cache.h
Migrate the struct alternate_object_database and all its related
functions to the object store, as these functions are more easily found
in that header. The migration is just a verbatim copy; there is no need
to include the object store header in any C file, because cache.h
includes repository.h, which in turn includes object-store.h.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-23 11:06:01 -07:00
brian m. carlson aab9583f7b Convert find_unique_abbrev* to struct object_id
Convert find_unique_abbrev and find_unique_abbrev_r to each take a
pointer to struct object_id.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-14 09:23:48 -07:00
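
For illustration, a hedged sketch of the shape of this conversion at the declaration level (simplified; the exact prototypes in the tree may differ):

    /* before:
     *     const char *find_unique_abbrev(const unsigned char *sha1, int len);
     */

    /* after: callers hand over a struct object_id pointer instead of a raw buffer */
    const char *find_unique_abbrev(const struct object_id *oid, int len);
    int find_unique_abbrev_r(char *hex, const struct object_id *oid, int len);
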
brian m. carlson 30e677e0e2 strbuf: convert strbuf_add_unique_abbrev to use struct object_id
Convert the declaration and definition of strbuf_add_unique_abbrev to
make it take a pointer to struct object_id.  Predeclare the struct in
strbuf.h, as cache.h includes strbuf.h before it declares the struct,
and otherwise the struct declaration would have the wrong scope.

Apply the following semantic patch, along with the standard object_id
transforms, to adjust the callers:

@@
expression E1, E2, E3;
@@
- strbuf_add_unique_abbrev(E1, E2.hash, E3);
+ strbuf_add_unique_abbrev(E1, &E2, E3);

@@
expression E1, E2, E3;
@@
- strbuf_add_unique_abbrev(E1, E2->hash, E3);
+ strbuf_add_unique_abbrev(E1, E2, E3);

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-14 09:23:48 -07:00
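
A minimal usage sketch of the converted call, assuming the caller already holds a struct object_id (the lookup of HEAD here is purely illustrative):

    struct strbuf sb = STRBUF_INIT;
    struct object_id oid;

    if (!get_oid("HEAD", &oid))
        strbuf_add_unique_abbrev(&sb, &oid, DEFAULT_ABBREV);
    strbuf_release(&sb);
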
Junio C Hamano 6bed209a20 Merge branch 'jh/partial-clone'
The machinery to clone & fetch, which in turn involves packing and
unpacking objects, has been told how to omit certain objects using
the filtering mechanism introduced by the jh/object-filtering
topic, and also mark the resulting pack as a promisor pack to
tolerate missing objects, taking advantage of the mechanism
introduced by the jh/fsck-promisors topic.

* jh/partial-clone:
  t5616: test bulk prefetch after partial fetch
  fetch: inherit filter-spec from partial clone
  t5616: end-to-end tests for partial clone
  fetch-pack: restore save_commit_buffer after use
  unpack-trees: batch fetching of missing blobs
  clone: partial clone
  partial-clone: define partial clone settings in config
  fetch: support filters
  fetch: refactor calculation of remote list
  fetch-pack: test support excluding large blobs
  fetch-pack: add --no-filter
  fetch-pack, index-pack, transport: partial clone
  upload-pack: add object filtering for partial clone
2018-02-13 13:39:04 -08:00
Junio C Hamano f3d618d2bf Merge branch 'jh/fsck-promisors'
In preparation for implementing narrow/partial clone, the machinery
for checking object connectivity used by gc and fsck has been
taught that a missing object is OK when it is referenced by a
packfile specially marked as coming from a trusted repository that
promises to make it available on demand and lazily.

* jh/fsck-promisors:
  gc: do not repack promisor packfiles
  rev-list: support termination at promisor objects
  sha1_file: support lazily fetching missing objects
  introduce fetch-object: fetch one promisor object
  index-pack: refactor writing of .keep files
  fsck: support promisor objects as CLI argument
  fsck: support referenced promisor objects
  fsck: support refs pointing to promisor objects
  fsck: introduce partialclone extension
  extension.partialclone: introduce partial clone extension
2018-02-13 13:39:03 -08:00
Junio C Hamano 2546de27c3 Merge branch 'jt/transport-hide-vtable'
Code clean-up.

* jt/transport-hide-vtable:
  transport: make transport vtable more private
  clone, fetch: remove redundant transport check
2017-12-28 14:08:47 -08:00
Junio C Hamano d22114ac24 Merge branch 'jt/transport-no-more-rsync'
Code clean-up.

* jt/transport-no-more-rsync:
  transport: remove unused "push" in vtable
2017-12-27 11:16:30 -08:00
Jonathan Tan e967ca3847 transport: make transport vtable more private
Move the definition of the transport-specific functions provided by
transports, whether declared in transport.c or transport-helper.c, into
an internal header. This means that transport-using code (as opposed to
transport-declaring code) can no longer access these functions (without
importing the internal header themselves), making it clear that they
should use the transport_*() functions instead, and also allowing the
interface between the transport mechanism and an individual transport to
independently evolve.

This is superficially a reversal of commit 824d5776c3 ("Refactor
struct transport_ops inlined into struct transport", 2007-09-19).
However, the scope of the involved variables was neither affected nor
discussed in that commit, and I think that the advantages in making
those functions more private outweigh the advantages described in that
commit's commit message. A minor additional point is that the code has
gotten more complicated since then, in that the function-pointer
variables are potentially mutated twice (once initially and once if
transport_take_over() is invoked), increasing the value of corralling
them into their own struct.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-14 14:28:04 -08:00
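
A hedged sketch of the shape of such an internal header; the member list is abbreviated and illustrative rather than the exact contents of the in-tree header:

    /* transport-internal.h: visible only to transport.c and transport-helper.c */
    struct ref;
    struct transport;

    struct transport_vtable {
        int (*get_refs_list)(struct transport *transport, int for_push);
        int (*fetch)(struct transport *transport, int nr_refs, struct ref **refs);
        int (*push_refs)(struct transport *transport, struct ref *refs, int flags);
        int (*disconnect)(struct transport *transport);
    };
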
Jonathan Tan 170078693f transport: remove unused "push" in vtable
After commit 0d0bac67ce ("transport: drop support for git-over-rsync",
2016-02-01), no transport in Git populates the "push" entry in the
transport vtable. Remove this entry.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-12 16:50:56 -08:00
Jeff Hostetler 640d8b72fe fetch-pack, index-pack, transport: partial clone
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-08 09:58:51 -08:00
Jonathan Tan 88e2f9ed8e introduce fetch-object: fetch one promisor object
Introduce fetch-object, providing the ability to fetch one object from a
promisor remote.

This uses fetch-pack. To do this, the transport mechanism has been
updated with 2 flags, "from-promisor" to indicate that the resulting
pack comes from a promisor remote (and thus should be annotated as such
by index-pack), and "no-dependents" to indicate that only the objects
themselves need to be fetched (but fetching additional objects is
nevertheless safe).

Whenever "no-dependents" is used, fetch-pack will refrain from using any
object flags, because it is most likely invoked as part of a dynamic
object fetch by another Git command (which may itself use object flags).
An alternative to this is to leave fetch-pack alone, and instead update
the allocation of flags so that fetch-pack's flags never overlap with
any others, but this will end up shrinking the number of flags available
to nearly every other Git command (that is, every Git command that
accesses objects), so the approach in this commit was used instead.

This will be tested in a subsequent commit.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-05 09:46:05 -08:00
brian m. carlson ae077771b0 refs: convert update_ref and refs_update_ref to use struct object_id
Convert update_ref, refs_update_ref, and write_pseudoref to use struct
object_id.  Update the existing callers as well.  Remove update_ref_oid,
as it is no longer needed.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:50 +09:00
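
A hedged sketch of the signature change (parameter lists simplified):

    /* before:
     *     int update_ref(const char *msg, const char *refname,
     *                    const unsigned char *new_sha1, const unsigned char *old_sha1,
     *                    unsigned int flags, enum action_on_err onerr);
     */

    /* after */
    int update_ref(const char *msg, const char *refname,
                   const struct object_id *new_oid, const struct object_id *old_oid,
                   unsigned int flags, enum action_on_err onerr);
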
René Scharfe efbd4fdfc9 refs: pass NULL to resolve_refdup() if hash is not needed
This allows us to get rid of several write-only variables.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-01 17:27:14 +09:00
René Scharfe 744c040b19 refs: pass NULL to resolve_ref_unsafe() if hash is not needed
This allows us to get rid of some write-only variables, among them seven
SHA1 buffers.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-24 10:18:21 +09:00
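
A small sketch of the pattern removed by these two patches, assuming a caller that only needs the resolved refname (signatures of that era, simplified):

    /* before: a write-only buffer existed just to satisfy the API */
    unsigned char scratch[20];
    const char *ref_before = resolve_ref_unsafe("HEAD", 0, scratch, NULL);

    /* after: pass NULL when the hash value is not needed */
    const char *ref_after = resolve_ref_unsafe("HEAD", 0, NULL, NULL);
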
Junio C Hamano 50f03c6676 Merge branch 'ab/free-and-null'
A common pattern to free a piece of memory and assign NULL to the
pointer that used to point at it has been replaced with a new
FREE_AND_NULL() macro.

* ab/free-and-null:
  *.[ch] refactoring: make use of the FREE_AND_NULL() macro
  coccinelle: make use of the "expression" FREE_AND_NULL() rule
  coccinelle: add a rule to make "expression" code use FREE_AND_NULL()
  coccinelle: make use of the "type" FREE_AND_NULL() rule
  coccinelle: add a rule to make "type" code use FREE_AND_NULL()
  git-compat-util: add a FREE_AND_NULL() wrapper around free(ptr); ptr = NULL
2017-06-24 14:28:41 -07:00
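
The wrapper is the classic free-then-clear idiom packaged as a macro; a sketch of the pattern (the in-tree definition may differ in detail):

    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

    /* before:
     *     free(item->util);
     *     item->util = NULL;
     */
    FREE_AND_NULL(item->util);   /* 'item' is an assumed caller-side pointer */
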
Junio C Hamano f31d23a399 Merge branch 'bw/config-h'
Fix configuration codepath to pay proper attention to commondir
that is used in multi-worktree situation, and isolate config API
into its own header file.

* bw/config-h:
  config: don't implicitly use gitdir or commondir
  config: respect commondir
  setup: teach discover_git_directory to respect the commondir
  config: don't include config.h by default
  config: remove git_config_iter
  config: create config.h
2017-06-24 14:28:41 -07:00
Ævar Arnfjörð Bjarmason 6a83d90207 coccinelle: make use of the "type" FREE_AND_NULL() rule
Apply the result of the just-added coccinelle rule. This manually
excludes a few occurrences, mostly things that resulted in many
FREE_AND_NULL() on one line, that'll be manually fixed in a subsequent
change.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-16 12:44:03 -07:00
Brandon Williams b2141fc1d2 config: don't include config.h by default
Stop including config.h by default in cache.h.  Instead only include
config.h in those files which require use of the config system.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-15 12:56:22 -07:00
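
In practice, a translation unit that calls into the config machinery now names that dependency itself; a hedged sketch:

    #include "cache.h"
    #include "config.h"   /* now explicit for files that use the config system */

    static int illustrative_config_cb(const char *var, const char *value, void *data)
    {
        return 0;   /* no-op callback, for illustration only */
    }

    /* ... git_config(illustrative_config_cb, NULL); ... */
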
brian m. carlson b8607f35b1 bundle: convert to struct object_id
Convert the bundle code, plus the sole external user of struct
ref_list_entry, to use struct object_id.  Include cache.h from within
bundle.h to provide the definition.  Convert some of the hash parsing
code to use parse_oid_hex to avoid needing to hard-code constant values.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-02 10:46:41 +09:00
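
A minimal sketch of the parse_oid_hex() pattern that replaces hard-coded hash lengths (the helper name and input are illustrative):

    static int parse_oid_line(const char *line)
    {
        struct object_id oid;
        const char *p;

        if (parse_oid_hex(line, &oid, &p))
            return error("invalid object ID: %s", line);
        /* 'p' now points just past the hex digits, e.g. at a trailing space */
        return 0;
    }
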
Junio C Hamano 872e2cf00a Merge branch 'bw/push-options-recursively-to-submodules'
"git push --recurse-submodules --push-option=<string>" learned to
propagate the push option recursively down to pushes in submodules.

* bw/push-options-recursively-to-submodules:
  push: propagate remote and refspec with --recurse-submodules
  submodule--helper: add push-check subcommand
  remote: expose parse_push_refspec function
  push: propagate push-options with --recurse-submodules
  push: unmark a local variable as static
2017-04-19 21:37:14 -07:00
Brandon Williams 06bf4ad1db push: propagate remote and refspec with --recurse-submodules
Teach "push --recurse-submodules" to propagate, if given a name as remote, the
provided remote and refspec recursively to the pushes performed in the
submodules. The push will therefore only succeed if all submodules have a
remote with such a name configured.

Note that "push --recurse-submodules" with a path or URL as remote will not
propagate the remote or refspec and instead use the default remote and refspec
configured in the submodule, preserving the current behavior.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-11 00:45:26 -07:00
Brandon Williams 2a90556dde push: propagate push-options with --recurse-submodules
Teach push --recurse-submodules to propagate push-options recursively to
the pushes performed in the submodules.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-11 00:45:03 -07:00
brian m. carlson 910650d2f8 Rename sha1_array to oid_array
Since this structure handles an array of object IDs, rename it to struct
oid_array.  Also rename the accessor functions and the initialization
constant.

This commit was produced mechanically by providing non-Documentation
files to the following Perl one-liners:

    perl -pi -E 's/struct sha1_array/struct oid_array/g'
    perl -pi -E 's/\bsha1_array_/oid_array_/g'
    perl -pi -E 's/SHA1_ARRAY_INIT/OID_ARRAY_INIT/g'

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-31 08:33:56 -07:00
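
After the rename, a caller reads roughly as follows (a hedged sketch; 'oid' is an assumed struct object_id already in scope):

    struct oid_array known = OID_ARRAY_INIT;

    oid_array_append(&known, &oid);
    /* ... */
    oid_array_clear(&known);
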
brian m. carlson 98a72ddc12 Make sha1_array_append take a struct object_id *
Convert the callers to pass struct object_id by changing the function
declaration and definition and applying the following semantic patch:

@@
expression E1, E2;
@@
- sha1_array_append(E1, E2.hash)
+ sha1_array_append(E1, &E2)

@@
expression E1, E2;
@@
- sha1_array_append(E1, E2->hash)
+ sha1_array_append(E1, E2)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-31 08:33:55 -07:00
brian m. carlson dc01505f7f Convert GIT_SHA1_HEXSZ used for allocation to GIT_MAX_HEXSZ
Since we will likely be introducing a new hash function at some point,
and that hash function might be longer than 40 hex characters, use the
constant GIT_MAX_HEXSZ, which is designed to be suitable for
allocations, instead of GIT_SHA1_HEXSZ.  This will ease the transition
down the line by distinguishing between places where we need to allocate
memory suitable for the largest hash from those where we need to handle
the current hash.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-26 22:08:21 -07:00
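
The distinction in a nutshell, as a hedged sketch ('arg' and the helper are illustrative):

    char hex[GIT_MAX_HEXSZ + 1];          /* allocation: sized for the largest hash */

    if (strlen(arg) == GIT_SHA1_HEXSZ)    /* logic about the current hash keeps the SHA-1 size */
        handle_full_hex(arg, hex);
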
Junio C Hamano e1fae93019 Merge branch 'bc/object-id'
"uchar [40]" to "struct object_id" conversion continues.

* bc/object-id:
  wt-status: convert to struct object_id
  builtin/merge-base: convert to struct object_id
  Convert object iteration callbacks to struct object_id
  sha1_file: introduce an nth_packed_object_oid function
  refs: simplify parsing of reflog entries
  refs: convert each_reflog_ent_fn to struct object_id
  reflog-walk: convert struct reflog_info to struct object_id
  builtin/replace: convert to struct object_id
  Convert remaining callers of resolve_refdup to object_id
  builtin/merge: convert to struct object_id
  builtin/clone: convert to struct object_id
  builtin/branch: convert to struct object_id
  builtin/grep: convert to struct object_id
  builtin/fmt-merge-message: convert to struct object_id
  builtin/fast-export: convert to struct object_id
  builtin/describe: convert to struct object_id
  builtin/diff-tree: convert to struct object_id
  builtin/commit: convert to struct object_id
  hex: introduce parse_oid_hex
2017-03-17 13:50:25 -07:00
Junio C Hamano 07198afbd1 Merge branch 'mm/fetch-show-error-message-on-unadvertised-object'
"git fetch" that requests a commit by object name, when the other
side does not allow such a request, failed without much
explanation.

* mm/fetch-show-error-message-on-unadvertised-object:
  fetch-pack: add specific error for fetching an unadvertised object
  fetch_refs_via_pack: call report_unmatched_refs
  fetch-pack: move code to report unmatched refs to a function
2017-03-14 15:23:18 -07:00
Matt McCutchen e70a65c5d8 fetch_refs_via_pack: call report_unmatched_refs
"git fetch" currently doesn't bother to check that it got all refs it
sought, because the common case of requesting a nonexistent ref triggers
a die() in get_fetch_map.  However, there's at least one case that
slipped through: "git fetch REMOTE SHA1" if the server doesn't allow
requests for unadvertised objects.  Make fetch_refs_via_pack (which is
on the "git fetch" code path) call report_unmatched_refs so that we at
least get an error message in that case.

Signed-off-by: Matt McCutchen <matt@mattmccutchen.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-02 11:12:53 -08:00
Junio C Hamano c13c783c9d Merge branch 'km/delete-ref-reflog-message'
"git update-ref -d" and other operations to delete references did
not leave any entry in HEAD's reflog when the reference being
deleted was the current branch.  This is not a problem in practice
because you do not want to delete the branch you are currently on,
but it caused renaming of the current branch to something else not
to be logged in a useful way.

* km/delete-ref-reflog-message:
  branch: record creation of renamed branch in HEAD's log
  rename_ref: replace empty message in HEAD's log
  update-ref: pass reflog message to delete_ref()
  delete_ref: accept a reflog message argument
2017-02-27 13:57:18 -08:00
brian m. carlson 2928325fc0 Convert remaining callers of resolve_refdup to object_id
There are a few leaf functions in various files that call
resolve_refdup.  Convert these functions to use struct object_id
internally to prepare for transitioning resolve_refdup itself.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-22 10:12:15 -08:00
Kyle Meyer 755b49ae96 delete_ref: accept a reflog message argument
When the current branch is renamed with 'git branch -m/-M' or deleted
with 'git update-ref -m<msg> -d', the event is recorded in HEAD's log
with an empty message.  In preparation for adding a more meaningful
message to HEAD's log in these cases, update delete_ref() to take a
message argument and pass it along to ref_transaction_delete().
Modify all callers to pass NULL for the new message argument; no
change in behavior is intended.

Note that this is relevant for HEAD's log but not for the deleted
ref's log, which is currently deleted along with the ref.  Even if it
were not, an entry for the deletion wouldn't be present in the deleted
ref's log.  files_transaction_commit() writes to the log if
REF_NEEDS_COMMIT or REF_LOG_ONLY are set, but lock_ref_for_update()
doesn't set REF_NEEDS_COMMIT for the deleted ref because REF_DELETING
is set.  In contrast, the update for HEAD has REF_LOG_ONLY set by
split_head_update(), resulting in the deletion being logged.

Signed-off-by: Kyle Meyer <kyle@kyleam.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-20 22:04:47 -08:00
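
A hedged sketch of the signature change; existing callers pass NULL for the new first argument:

    /* before:
     *     int delete_ref(const char *refname, const unsigned char *old_sha1,
     *                    unsigned int flags);
     */

    /* after: the reflog message comes first */
    int delete_ref(const char *msg, const char *refname,
                   const unsigned char *old_sha1, unsigned int flags);

    /* mechanical caller update: delete_ref(NULL, refname, old_sha1, flags); */
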
Jeff King a10a17877b for_each_alternate_ref: replace transport code with for-each-ref
The current method for getting the refs from an alternate is
to run upload-pack in the alternate and parse its output
using the normal transport code.  This works and is
reasonably short, but it has a very bad memory footprint
when there are a lot of refs in the alternate. There are two
problems:

  1. It reads in all of the refs before passing any back to
     us. Which means that our peak memory usage has to store
     every ref (including duplicates for peeled variants),
     even if our callback could determine that some are not
     interesting (e.g., because they point to the same sha1
     as another ref).

  2. It allocates a "struct ref" for each one. Among other
     things, this contains 3 separate 20-byte oids, along
     with the name and various pointers.  That can add up,
     especially if the callback is only interested in the
     sha1 (which it can store in a sha1_array as just 20
     bytes).

On a particularly pathological case, where the alternate had
over 80 million refs pointing to only around 60,000 unique
objects, the peak heap usage of "git clone --reference" grew
to over 25GB.

This patch instead calls git-for-each-ref in the alternate
repository, and passes each line to the callback as we read
it. That drops the peak heap of the same command to 50MB.

I considered and rejected a few alternatives.

We could read all of the refs in the alternate using our own
ref code, just as we do with submodules.  However, as memory
footprint is one of the concerns here, we want to avoid
loading those refs into our own memory as a whole.

It's possible that this will be a better technique in the
future when the ref code can more easily iterate without
loading all of packed-refs into memory.

Another option is to keep calling upload-pack, and just
parse its output ourselves in a streaming fashion. Besides
for-each-ref being simpler (we get to define the format
ourselves, and don't have to deal with speaking the git
protocol), it's more flexible for possible future changes.

For instance, it might be useful for the caller to be able
to limit the set of "interesting" alternate refs.  The
motivating example is one where many "forks" of a particular
repository share object storage, and the shared storage has
refs for each fork (which is why so many of the refs are
duplicates; each fork has the same tags).  A plausible
future optimization would be to ask for the alternate refs
for just _one_ fork (if you had some out-of-band way of
knowing which was the most interesting or important for the
current operation).

Similarly, no callbacks actually care about the symref value
of alternate refs, and as before, this patch ignores them
entirely.  However, if we wanted to add them, for-each-ref's
"%(symref)" is going to be more flexible than upload-pack,
because the latter only handles the HEAD symref due to
historical constraints.

There is one potential downside, though: unlike upload-pack,
our for-each-ref command doesn't report the peeled value of
refs. The existing code calls the alternate_ref_fn callback
twice for tags: once for the tag, and once for the peeled
value with the refname set to "ref^{}".

For the callers in fetch-pack, this doesn't matter at all.
We immediately peel each tag down to a commit either way (so
there's a slight improvement, as we do not bother passing the
redundant data over the pipe). For the caller in
receive-pack, it means we will not advertise the peeled
values of tags in our alternate. However, we also don't
advertise peeled values for our _own_ tags, so this is
actually making things more consistent.

It's unclear whether receive-pack advertising peeled values
is a win or not. On one hand, giving more information to the
other side may let it omit some objects from the push. On
the other hand, for tags which both sides have, they simply
bloat the advertisement. The upload-pack advertisement of
git.git is about 30% larger than the receive-pack
advertisement due to its peeled information.

This patch omits the peeled information from
for_each_alternate_ref entirely, and leaves it up to the
caller whether they want to dig up the information.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-08 15:39:55 -08:00
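
A heavily simplified sketch of the streaming approach described above, using the run-command API of that era (quoting, error paths and format details omitted; the function and parameter names here are illustrative, not the in-tree implementation):

    #include "cache.h"
    #include "run-command.h"
    #include "argv-array.h"

    /* hand each "<oid> <refname>" line produced in the alternate to 'cb' */
    static void stream_alternate_refs(const char *alternate_path,
                                      void (*cb)(const char *line, void *data),
                                      void *data)
    {
        struct child_process cp = CHILD_PROCESS_INIT;
        struct strbuf line = STRBUF_INIT;
        FILE *fh;

        cp.git_cmd = 1;
        cp.dir = alternate_path;        /* run inside the alternate repository */
        cp.out = -1;                    /* capture stdout via a pipe */
        argv_array_pushl(&cp.args, "for-each-ref",
                         "--format=%(objectname) %(refname)", NULL);

        if (start_command(&cp))
            return;                     /* best-effort: quietly give up */

        fh = xfdopen(cp.out, "r");
        while (strbuf_getline(&line, fh) != EOF)
            cb(line.buf, data);         /* stream: no per-ref struct is kept */
        fclose(fh);
        finish_command(&cp);
        strbuf_release(&line);
    }
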
Jeff King 2429d63a46 for_each_alternate_ref: pass name/oid instead of ref struct
Breaking down the fields in the interface makes it easier to
change the backend of for_each_alternate_ref to something
that doesn't use "struct ref" internally.

The only field that callers actually look at is the oid,
anyway. The refname is kept in the interface as a plausible
thing for future code to want.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-08 15:39:55 -08:00
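
The callback interface after this change, as a hedged sketch (the in-tree typedef may differ slightly):

    /* before: callers received a full "struct ref":
     *     typedef void alternate_ref_fn(const struct ref *ref, void *data);
     */

    /* after: only the broken-down fields that callers actually use */
    typedef void alternate_ref_fn(const char *refname,
                                  const struct object_id *oid,
                                  void *data);
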
Jeff King 5e8c968c64 for_each_alternate_ref: use strbuf for path allocation
We have a string with ".../objects" pointing to the
alternate object store, and overwrite bits of it to look at
other paths in the (potential) git repository holding it.
This works because the only path we care about is "refs",
which is shorter than "objects".

Using a strbuf to hold the path lets us get rid of some
magic numbers, and makes it more obvious that the memory
operations are safe.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-08 15:39:55 -08:00
Jeff King ece657f399 for_each_alternate_ref: stop trimming trailing slashes
The real_pathdup() function will have removed extra slashes
for us already (on top of the normalize_path() done when we
created the alternate_object_database struct in the first
place).

Incidentally, this also fixes the case where the path is
just "/", which would read off the start of the array.
That doesn't seem possible to trigger in practice, though,
as link_alt_odb_entry() blindly eats trailing slashes,
including a bare "/".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-08 15:39:55 -08:00
Jeff King 3a1345af28 for_each_alternate_ref: handle failure from real_pathdup()
In older versions of git, if real_path() failed to resolve
the alternate object store path, we would die() with an
error. However, since 4ac9006f8 (real_path: have callers use
real_pathdup and strbuf_realpath, 2016-12-12) we use the
real_pathdup() function, which may return NULL. Since we
don't check the return value, we can segfault.

This is hard to trigger in practice, since we check that the
path is accessible before creating the alternate_object_database
struct. But it could be removed racily, or we could see a
transient filesystem error.

We could restore the original behavior by switching back to
xstrdup(real_path()).  However, dying is probably not the
best option here. This whole function is best-effort
already; there might not even be a repository around the
shared objects at all. And if the alternate store has gone
away, there are no objects to show.

So let's just quietly return, as we would if we failed to
open "refs/", or if upload-pack failed to start, etc.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-08 15:39:55 -08:00
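
A hedged sketch of the defensive pattern the fix introduces ('alt_objects_dir' is an illustrative name):

    char *path = real_pathdup(alt_objects_dir);   /* may now return NULL */

    if (!path)
        return;   /* best-effort: the alternate may have vanished or be unreadable */
    /* ... look at "refs/" under 'path', etc. ... */
    free(path);
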
Junio C Hamano 792e22e3fd Merge branch 'bw/push-submodule-only'
"git submodule push" learned "--recurse-submodules=only option to
push submodules out without pushing the top-level superproject.

* bw/push-submodule-only:
  push: add option to push only submodules
  submodules: add RECURSE_SUBMODULES_ONLY value
  transport: reformat flag #defines to be more readable
2017-01-31 13:14:56 -08:00
Junio C Hamano 55d128ae06 Merge branch 'bw/grep-recurse-submodules'
"git grep" has been taught to optionally recurse into submodules.

* bw/grep-recurse-submodules:
  grep: search history of moved submodules
  grep: enable recurse-submodules to work on <tree> objects
  grep: optionally recurse into submodules
  grep: add submodules as a grep source type
  submodules: load gitmodules file from commit sha1
  submodules: add helper to determine if a submodule is initialized
  submodules: add helper to determine if a submodule is populated
  real_path: canonicalize directory separators in root parts
  real_path: have callers use real_pathdup and strbuf_realpath
  real_path: create real_pathdup
  real_path: convert real_path_internal to strbuf_realpath
  real_path: resolve symlinks by hand
2017-01-18 15:12:11 -08:00
Junio C Hamano 3075e40c75 Merge branch 'bw/push-dry-run' into maint
"git push --dry-run --recurse-submodule=on-demand" wasn't
"--dry-run" in the submodules.

* bw/push-dry-run:
  push: fix --dry-run to not push submodules
  push: --dry-run updates submodules when --recurse-submodules=on-demand
2017-01-17 14:49:27 -08:00
Junio C Hamano 9da9965ba6 Merge branch 'hv/submodule-not-yet-pushed-fix' into maint
The code in "git push" to compute if any commit being pushed in the
superproject binds a commit in a submodule that hasn't been pushed
out was overly inefficient, making it unusable even for a small
project that does not have any submodule but has a reasonable
number of refs.

* hv/submodule-not-yet-pushed-fix:
  submodule_needs_pushing(): explain the behaviour when we cannot answer
  batch check whether submodule needs pushing into one call
  serialize collection of refs that contain submodule changes
  serialize collection of changed submodules
2017-01-17 14:49:26 -08:00
Junio C Hamano 9d540e9726 Merge branch 'bw/transport-protocol-policy'
Finer-grained control of what protocols are allowed for transports
during clone/fetch/push have been enabled via a new configuration
mechanism.

* bw/transport-protocol-policy:
  http: respect protocol.*.allow=user for http-alternates
  transport: add from_user parameter to is_transport_allowed
  http: create function to get curl allowed protocols
  transport: add protocol policy config option
  http: always warn if libcurl version is too old
  lib-proto-disable: variable name fix
2016-12-27 00:11:41 -08:00
Brandon Williams 225e8bf778 push: add option to push only submodules
Teach push the --recurse-submodules=only option.  This enables push
to recursively push all unpushed submodules while leaving the
superproject unpushed.

This is a desirable feature in a scenario where updates to the
superproject are handled automatically by some other means, perhaps
a tool like Gerrit code review.  In this scenario, a developer could
make a change which spans multiple submodules and then push their
commits for code review.  Upon completion of the code review, their
commits can be accepted and applied to their respective submodules
while the code review tool can then automatically update the
superproject to the most recent SHA1 of each submodule.  This would
reduce the merge conflicts in the superproject that could occur if
multiple people are contributing to the same submodule.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-20 12:26:34 -08:00
Junio C Hamano 12cf1135df Merge branch 'bw/push-dry-run'
"git push --dry-run --recurse-submodule=on-demand" wasn't
"--dry-run" in the submodules.

* bw/push-dry-run:
  push: fix --dry-run to not push submodules
  push: --dry-run updates submodules when --recurse-submodules=on-demand
2016-12-16 15:27:48 -08:00
Junio C Hamano af952dac7a Merge branch 'hv/submodule-not-yet-pushed-fix'
The code in "git push" to compute if any commit being pushed in the
superproject binds a commit in a submodule that hasn't been pushed
out was overly inefficient, making it unusable even for a small
project that does not have any submodule but has a reasonable
number of refs.

* hv/submodule-not-yet-pushed-fix:
  submodule_needs_pushing(): explain the behaviour when we cannot answer
  batch check whether submodule needs pushing into one call
  serialize collection of refs that contain submodule changes
  serialize collection of changed submodules
2016-12-16 15:27:47 -08:00
Brandon Williams a768a02265 transport: add from_user parameter to is_transport_allowed
Add a from_user parameter to is_transport_allowed() to allow http to be
able to distinguish between protocol restrictions for redirects versus
initial requests.  CURLOPT_REDIR_PROTOCOLS can now be set differently
from CURLOPT_PROTOCOLS to disallow use of protocols with the "user"
policy in redirects.

This change allows callers to query if a transport protocol is allowed,
given that the caller knows that the protocol is coming from the user
(1) or not from the user (0) such as redirects in libcurl.  If unknown a
-1 should be provided which falls back to reading
`GIT_PROTOCOL_FROM_USER` to determine if the protocol came from the
user.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-15 09:29:13 -08:00
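
A hedged sketch of how a caller distinguishes the three cases ('type' is an assumed protocol string; the exact prototype may differ):

    /* protocol named directly by the user (command line, config) */
    if (!is_transport_allowed("http", 1))
        die("transport 'http' not allowed");

    /* protocol reached via an http redirect: not user-initiated */
    if (!is_transport_allowed("ftp", 0))
        die("redirect to 'ftp' not allowed");

    /* unknown origin: -1 falls back to GIT_PROTOCOL_FROM_USER */
    if (!is_transport_allowed(type, -1))
        die("transport '%s' not allowed", type);
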
Brandon Williams f1762d772e transport: add protocol policy config option
Previously the `GIT_ALLOW_PROTOCOL` environment variable was used to
specify a whitelist of protocols to be used in clone/fetch/push
commands.  This patch introduces new configuration options for more
fine-grained control for allowing/disallowing protocols.  This also has
the added benefit of allowing easier construction of a protocol
whitelist on systems where setting an environment variable is
non-trivial.

Now users can specify a policy to be used for each type of protocol via
the 'protocol.<name>.allow' config option.  A default policy for all
unconfigured protocols can be set with the 'protocol.allow' config
option.  If no user configured default is made git will allow known-safe
protocols (http, https, git, ssh, file), disallow known-dangerous
protocols (ext), and have a default policy of `user` for all other
protocols.

The supported policies are `always`, `never`, and `user`.  The `user`
policy can be used to configure a protocol to be usable when explicitly
used by a user, while disallowing it for commands which run
clone/fetch/push commands without direct user intervention (e.g.
recursive initialization of submodules).  Commands which can potentially
clone/fetch/push from untrusted repositories without user intervention
can export `GIT_PROTOCOL_FROM_USER` with a value of '0' to prevent
protocols configured to the `user` policy from being used.

Fix remote-ext tests to use the new config to allow the ext
protocol to be tested.

Based on a patch by Jeff King <peff@peff.net>

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-15 09:29:13 -08:00
Brandon Williams f962ddf6ed http: always warn if libcurl version is too old
Always warn if libcurl version is too old because:

1. Even without a protocol whitelist, newer versions of curl have all
   non-standard protocols disabled by default.
2. A future patch will introduce default "known-good" and "known-bad"
   protocols which are allowed/disallowed by 'is_transport_allowed'
   which older versions of libcurl can't respect.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-15 09:28:37 -08:00
Brandon Williams 4ac9006f83 real_path: have callers use real_pathdup and strbuf_realpath
Migrate callers of real_path() that duplicate the return value to use
real_pathdup or strbuf_realpath.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-12 15:22:32 -08:00
Brandon Williams 0301c821c5 push: fix --dry-run to not push submodules
Teach push to respect the --dry-run option when configured to
recursively push submodules 'on-demand'.  This is done by passing the
--dry-run flag to the child process which performs a push for a
submodule when performing a dry-run.

In order to preserve good user experience, the additional check for
unpushed submodules is skipped during a dry-run when
--recurse-submodules=on-demand.  The check is skipped because the submodule
pushes were performed as dry-runs and this check would always fail as the
submodules would still need to be pushed.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-23 08:39:14 -08:00