Commit Graph

390 Commits

Author SHA1 Message Date
Junio C Hamano ffb0b5762e Merge branch 'mh/avoid-rewriting-packed-refs'
A recent update to the refs infrastructure implementation started
rewriting the packed-refs file more often than before; this has been
optimized again for the most trivial cases.

* mh/avoid-rewriting-packed-refs:
  files-backend: don't rewrite the `packed-refs` file unnecessarily
  t1409: check that `packed-refs` is not rewritten unnecessarily
2017-11-15 12:14:27 +09:00
Junio C Hamano e7e456f500 Merge branch 'bc/object-id'
Conversion from uchar[20] to struct object_id continues.

* bc/object-id: (25 commits)
  refs/files-backend: convert static functions to object_id
  refs: convert read_raw_ref backends to struct object_id
  refs: convert peel_object to struct object_id
  refs: convert resolve_ref_unsafe to struct object_id
  worktree: convert struct worktree to object_id
  refs: convert resolve_gitlink_ref to struct object_id
  Convert remaining callers of resolve_gitlink_ref to object_id
  sha1_file: convert index_path and index_fd to struct object_id
  refs: convert reflog_expire parameter to struct object_id
  refs: convert read_ref_at to struct object_id
  refs: convert peel_ref to struct object_id
  builtin/pack-objects: convert to struct object_id
  pack-bitmap: convert traverse_bitmap_commit_list to object_id
  refs: convert dwim_log to struct object_id
  builtin/reflog: convert remaining unsigned char uses to object_id
  refs: convert dwim_ref and expand_ref to struct object_id
  refs: convert read_ref and read_ref_full to object_id
  refs: convert resolve_refdup and refs_resolve_refdup to struct object_id
  Convert check_connected to use struct object_id
  refs: update ref transactions to use struct object_id
  ...
2017-11-06 14:24:27 +09:00
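
To make the shape of this conversion concrete, here is a minimal before/after
sketch; the function name is hypothetical, but sha1_to_hex()/oid_to_hex() and
struct object_id are the real API:

  /* before: object names are bare byte arrays */
  int example_read_ref(const char *refname, unsigned char *sha1);

  unsigned char sha1[20];
  if (!example_read_ref("refs/heads/master", sha1))
          printf("%s\n", sha1_to_hex(sha1));

  /* after: the same function, with the object name inside a struct */
  int example_read_ref(const char *refname, struct object_id *oid);

  struct object_id oid;
  if (!example_read_ref("refs/heads/master", &oid))
          printf("%s\n", oid_to_hex(&oid));

Most of the commits in the list above are mechanical applications of this
pattern to one more group of functions.
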
Michael Haggerty 78fb457968 refs: update some more docs to use "oid" rather than "sha1"
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:08 +09:00
Michael Haggerty 4170188262 write_packed_entry(): take `object_id` arguments
Change `write_packed_entry()` to take `struct object_id *` rather than
`unsigned char *` arguments.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:08 +09:00
Michael Haggerty acedcde76d refs: rename constant `REF_ISPRUNING` to `REF_IS_PRUNING`
Underscores are cheap, and help readability.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:08 +09:00
Michael Haggerty 91774afcc3 refs: rename constant `REF_NODEREF` to `REF_NO_DEREF`
Even after working with this code for years, I still see this constant
name as "ref node ref". Rename it to make it's meaning clearer.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:08 +09:00
Michael Haggerty 5ac95fee3d refs: tidy up and adjust visibility of the `ref_update` flags
The constants used for `ref_update::flags` were rather disorganized:

* The definitions in `refs.h` were not close to the functions that
  used them.

* Many constants were defined in `refs-internal.h`, making them
  visible to the whole refs module, when in fact they only made sense
  for the files backend.

* Their documentation wasn't very consistent and partly still referred
  to sha1s rather than oids.

* The numerical values followed no rational scheme.

Fix all of these problems. The main functional improvement is that
some constants' visibility is now limited to `files-backend.c`.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:07 +09:00
Michael Haggerty 62c72d1fd0 ref_transaction_add_update(): remove a check
We want to make `REF_ISPRUNING` internal to the files backend. For
this to be possible, `ref_transaction_add_update()` mustn't know about
it. So move the check that `REF_ISPRUNING` is only used with
`REF_NODEREF` from this function to `files_transaction_prepare()`.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:07 +09:00
Michael Haggerty b00f3cfa92 prune_ref(): call `ref_transaction_add_update()` directly
`prune_ref()` needs to use the `REF_ISPRUNING` flag, but we want to
make that flag private to the files backend. So instead of calling
`ref_transaction_delete()`, which is a public function and therefore
shouldn't allow the `REF_ISPRUNING` flag, change `prune_ref()` to call
`ref_transaction_add_update()`, which is private to the refs
module. (Note that we don't need any of the other services provided by
`ref_transaction_delete()`.)

This allows us to change `ref_transaction_update()` to reject the
`REF_ISPRUNING` flag. Do so by adjusting
`REF_TRANSACTION_UPDATE_ALLOWED_FLAGS`. Also add parentheses to its
definition to avoid potential future mishaps.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:07 +09:00
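
The parenthesization remark is the usual C macro pitfall; a minimal sketch
with hypothetical flag values:

  #define FLAG_A 0x01
  #define FLAG_B 0x02

  #define ALLOWED_FLAGS_BAD   FLAG_A | FLAG_B     /* unparenthesized */
  #define ALLOWED_FLAGS_GOOD (FLAG_A | FLAG_B)

  static void check_flags(unsigned int flags)
  {
          /*
           * With the unparenthesized macro this condition would expand to
           * "flags & ~FLAG_A | FLAG_B", i.e. "(flags & ~FLAG_A) | FLAG_B",
           * which is non-zero even for perfectly legal flag values.
           */
          if (flags & ~ALLOWED_FLAGS_GOOD)
                  BUG("illegal flags 0x%x", flags);
  }
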
Michael Haggerty b0ca411051 files_transaction_prepare(): don't leak flags to packed transaction
The files backend uses `ref_update::flags` for several internal flags.
But those flags have no meaning to the packed backend. So when adding
updates for the packed-refs transaction, only use flags that make
sense to the packed backend.

`REF_NODEREF` is part of the public interface, and it's logically what
we want, so include it. In fact it is ignored by the packed
backend (which doesn't support symbolic references), but that's its
own business.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 10:31:07 +09:00
Michael Haggerty 7c6bd25c7d files-backend: don't rewrite the `packed-refs` file unnecessarily
Even when we are deleting references, we needn't overwrite the
`packed-refs` file if the references that we are deleting only exist
as loose references. Implement this optimization as follows:

* Add a function `is_packed_transaction_needed()`, which checks
  whether a given packed-refs transaction actually needs to be carried
  out (i.e., it returns false if the transaction obviously wouldn't
  have any effect). This function must be called while holding the
  `packed-refs` lock to avoid races.

* Change `files_transaction_prepare()` to check whether the
  packed-refs transaction is actually needed. If not, squelch it, but
  continue holding the `packed-refs` lock until the end of the
  transaction to avoid races.

This fixes a mild regression caused by dc39e09942 (files_ref_store:
use a transaction to update packed refs, 2017-09-08). Before that
commit, unnecessary rewrites of `packed-refs` were suppressed by
`repack_without_refs()`. But the transaction-based writing introduced
by that commit didn't perform that optimization.

Note that the pre-dc39e09942 code still had to *read* the whole
`packed-refs` file to determine that the rewrite could be skipped, so
the performance for the cases that the write could be elided was
`O(N)` in the number of packed references both before and after
dc39e09942. But after that commit the constant factor increased.

This commit reimplements the optimization of eliding unnecessary
`packed-refs` rewrites. That, plus the fact that since
cfa2e29c34 (packed_ref_store: get rid of the `ref_cache` entirely,
2017-03-17) we don't necessarily have to read the whole `packed-refs`
file at all, means that deletes of one or a few loose references can
now be done with `O(n lg N)` effort, where `n` is the number of loose
references being deleted and `N` is the total number of packed
references.

This commit fixes two tests in t1409.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-30 09:45:15 +09:00
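
The core of `is_packed_transaction_needed()` can be sketched as follows;
`packed_ref_exists()` is hypothetical shorthand for a lookup in the
packed-refs snapshot, and the real function also has to deal with
non-delete updates and read errors:

  static int is_packed_transaction_needed(struct ref_store *ref_store,
                                          struct ref_transaction *transaction)
  {
          size_t i;

          for (i = 0; i < transaction->nr; i++) {
                  struct ref_update *update = transaction->updates[i];

                  /*
                   * Deleting a reference that has no packed version
                   * does not require rewriting the packed-refs file.
                   */
                  if (packed_ref_exists(ref_store, update->refname))
                          return 1;
          }
          return 0;
  }
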
Michael Haggerty ff08e56cde Merge branch 'bc/object-id' into base 2017-10-28 09:27:15 +02:00
Junio C Hamano 4e40fb302e Merge branch 'mh/ref-locking-fix'
Transactions to update multiple references that involve a deletion
were quite broken in an error codepath and did not abort everything
correctly.

* mh/ref-locking-fix:
  files_transaction_prepare(): fix handling of ref lock failure
  t1404: add a bunch of tests of D/F conflicts
2017-10-26 12:29:23 +09:00
Michael Haggerty da5267f1b6 files_transaction_prepare(): fix handling of ref lock failure
Since dc39e09942 (files_ref_store: use a transaction to update packed
refs, 2017-09-08), failure to lock a reference has been handled
incorrectly by `files_transaction_prepare()`. If
`lock_ref_for_update()` fails in the lock-acquisition loop of that
function, it sets `ret` then breaks out of that loop. Prior to
dc39e09942, that was OK, because the only thing following the loop was
the cleanup code. But dc39e09942 added another blurb of code between
the loop and the cleanup. That blurb sometimes resets `ret` to zero,
making the cleanup code think that the locking was successful.

Specifically, whenever

* One or more reference deletions have been processed successfully in
  the lock-acquisition loop. (Processing the first such reference
  causes a packed-ref transaction to be initialized.)

* Then `lock_ref_for_update()` fails for a subsequent reference. Such
  a failure can happen for a number of reasons, such as the old SHA-1
  not being correct, lock contention, etc. This causes a `break` out
  of the lock-acquisition loop.

* The `packed-refs` lock is acquired successfully and
  `ref_transaction_prepare()` succeeds for the packed-ref transaction.
  This has the effect of resetting `ret` back to 0, and making the
  cleanup code think that lock acquisition was successful.

In that case, any reference updates that were processed prior to
breaking out of the loop would be carried out (loose and packed), but
the reference that couldn't be locked and any subsequent references
would silently be ignored.

This can easily cause data loss if, for example, the user was trying
to push a new name for an existing branch while deleting the old name.
After the push, the branch could be left unreachable, and could even
subsequently be garbage-collected.

This problem was noticed in the context of deleting one reference and
creating another in a single transaction, when the two references D/F
conflict with each other, like

    git update-ref --stdin <<EOF
    delete refs/foo
    create refs/foo/bar HEAD
    EOF

This triggers the above bug because the deletion is processed
successfully for `refs/foo`, then the D/F conflict causes
`lock_ref_for_update()` to fail when `refs/foo/bar` is processed. In
this case the transaction *should* fail, but instead it causes
`refs/foo` to be deleted without creating `refs/foo/bar`. This could
easily result in data loss.

The fix is simple: instead of just breaking out of the loop, jump
directly to the cleanup code. This fixes some tests in t1404 that were
added in the previous commit.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-25 15:08:26 +09:00
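
Reduced to its essentials, the control-flow hazard looks like this (helper
names are made up; the point is that `break` falls through into code that can
overwrite `ret`):

  static int acquire_lock(int i);             /* hypothetical helpers */
  static int prepare_packed_updates(void);
  static void cleanup_locks(void);

  static int transaction_prepare(int nr)
  {
          int ret = 0, i;

          for (i = 0; i < nr; i++) {
                  ret = acquire_lock(i);
                  if (ret)
                          break;              /* bug: falls into the code below */
                          /* fix: goto cleanup;                                 */
          }

          ret = prepare_packed_updates();     /* may reset ret back to 0 */

  cleanup:
          cleanup_locks();
          return ret;
  }
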
brian m. carlson 4f01e5080c refs/files-backend: convert static functions to object_id
Convert several static functions to take pointers to struct object_id.
Change the relevant parameters to write_packed_entry to be const, as we
don't modify them.  Rename lock_ref_sha1_basic to lock_ref_oid_basic to
reflect its new argument.  Update the docstring for verify_lock to
account for the new parameter name, and note additionally that the
old_oid may be NULL.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:52 +09:00
brian m. carlson 99afe91a6c refs: convert read_raw_ref backends to struct object_id
Convert the unsigned char * parameter to struct object_id * for
files_read_raw_ref and packed_read_raw_ref.  Update the documentation.
Switch from using get_sha1_hex and a hard-coded 40 to using
parse_oid_hex.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:52 +09:00
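
The parsing change is essentially the following sketch; `line` stands for the
buffer being parsed, and the "before" half shows the hard-coded 40 that goes
away:

  const char *p = line;
  struct object_id oid;
  unsigned char sha1[20];

  /* before: fixed-width hex parsing */
  if (get_sha1_hex(p, sha1) || p[40] != ' ')
          die("invalid line");

  /* after: parse_oid_hex() reports where the hex digits ended */
  if (parse_oid_hex(p, &oid, &p) || *p++ != ' ')
          die("invalid line");
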
brian m. carlson ac2ed0d7d5 refs: convert peel_object to struct object_id
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:52 +09:00
brian m. carlson 49e61479be refs: convert resolve_ref_unsafe to struct object_id
Convert resolve_ref_unsafe to take a pointer to struct object_id by
converting one remaining caller to use struct object_id, removing the
temporary NULL pointer check in expand_ref, converting the declaration
and definition, and applying the following semantic patch:

@@
expression E1, E2, E3, E4;
@@
- resolve_ref_unsafe(E1, E2, E3.hash, E4)
+ resolve_ref_unsafe(E1, E2, &E3, E4)

@@
expression E1, E2, E3, E4;
@@
- resolve_ref_unsafe(E1, E2, E3->hash, E4)
+ resolve_ref_unsafe(E1, E2, E3, E4)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:51 +09:00
brian m. carlson 0155f710b8 refs: convert reflog_expire parameter to struct object_id
reflog_expire already used struct object_id internally, but it did not
take it as a parameter.  Adjust the parameter (and the callers) to pass
a pointer to struct object_id instead of a pointer to unsigned char.
Remove the temporary inserted earlier as it is no longer required.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:51 +09:00
brian m. carlson 34c290a6fc refs: convert read_ref and read_ref_full to object_id
All but two of the call sites already have parameters using the hash
parameter of struct object_id, so convert them to take a pointer to the
struct directly.  Also convert refs_read_refs_full, the underlying
implementation.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:50 +09:00
brian m. carlson 89f3bbdd3b refs: update ref transactions to use struct object_id
Update the ref transaction code to use struct object_id.  Remove one
NULL pointer check which was previously inserted around a dereference;
since we now pass a pointer to struct object_id directly through, the
code we're calling handles this for us.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:50 +09:00
brian m. carlson 2616a5e508 refs: convert delete_ref and refs_delete_ref to struct object_id
Convert delete_ref and refs_delete_ref to take a pointer to struct
object_id.  Update the documentation accordingly, including referring to
null_oid in lowercase, as it is not a #define constant.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:50 +09:00
brian m. carlson 49e9958869 refs/files-backend: convert struct ref_to_prune to object_id
Change the member of this struct to be a struct object_id.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 11:05:50 +09:00
Junio C Hamano e46ebc2754 Merge branch 'rs/cleanup-strbuf-users'
Code clean-up.

* rs/cleanup-strbuf-users:
  graph: use strbuf_addchars() to add spaces
  use strbuf_addstr() for adding strings to strbufs
  path: use strbuf_add_real_path()
2017-10-05 13:48:19 +09:00
Junio C Hamano efe9d6ce33 Merge branch 'rs/resolve-ref-optional-result'
Code clean-up.

* rs/resolve-ref-optional-result:
  refs: pass NULL to resolve_refdup() if hash is not needed
  refs: pass NULL to refs_resolve_refdup() if hash is not needed
2017-10-05 13:48:19 +09:00
Junio C Hamano 1a2e1a76ec Merge branch 'mh/mmap-packed-refs'
Operations that do not touch (the majority of) packed refs have been
optimized by making accesses to the packed-refs file lazy; we no
longer pre-parse everything, and an access to a single ref in the
packed-refs file does not touch the majority of irrelevant refs, either.

* mh/mmap-packed-refs: (21 commits)
  packed-backend.c: rename a bunch of things and update comments
  mmapped_ref_iterator: inline into `packed_ref_iterator`
  ref_cache: remove support for storing peeled values
  packed_ref_store: get rid of the `ref_cache` entirely
  ref_store: implement `refs_peel_ref()` generically
  packed_read_raw_ref(): read the reference from the mmapped buffer
  packed_ref_iterator_begin(): iterate using `mmapped_ref_iterator`
  read_packed_refs(): ensure that references are ordered when read
  packed_ref_cache: keep the `packed-refs` file mmapped if possible
  packed-backend.c: reorder some definitions
  mmapped_ref_iterator_advance(): no peeled value for broken refs
  mmapped_ref_iterator: add iterator over a packed-refs file
  packed_ref_cache: remember the file-wide peeling state
  read_packed_refs(): read references with minimal copying
  read_packed_refs(): make parsing of the header line more robust
  read_packed_refs(): only check for a header at the top of the file
  read_packed_refs(): use mmap to read the `packed-refs` file
  die_unterminated_line(), die_invalid_line(): new functions
  packed_ref_cache: add a backlink to the associated `packed_ref_store`
  prefix_ref_iterator: break when we leave the prefix
  ...
2017-10-03 15:42:50 +09:00
Junio C Hamano cb1083ca23 Merge branch 'jk/read-in-full'
Code clean-up to prevent future mistakes when copying and pasting
code that checks the result of the read_in_full() function.

* jk/read-in-full:
  worktree: check the result of read_in_full()
  worktree: use xsize_t to access file size
  distinguish error versus short read from read_in_full()
  avoid looking at errno for short read_in_full() returns
  prefer "!=" when checking read_in_full() result
  notes-merge: drop dead zero-write code
  files-backend: prefer "0" for write_in_full() error check
2017-10-03 15:42:49 +09:00
Junio C Hamano 3b48045c6c Merge branch 'sd/branch-copy'
"git branch" learned "-c/-C" to create a new branch by copying an
existing one.

* sd/branch-copy:
  branch: fix "copy" to never touch HEAD
  branch: add a --copy (-c) option to go with --move (-m)
  branch: add test for -m renaming multiple config sections
  config: create a function to format section headers
2017-10-03 15:42:48 +09:00
René Scharfe 72d4a9a721 use strbuf_addstr() for adding strings to strbufs
Use strbuf_addstr() instead of strbuf_addf() for adding strings.  That's
simpler and makes the intent clearer.

Patch generated by Coccinelle and contrib/coccinelle/strbuf.cocci;
adjusted indentation in refs/packed-backend.c manually.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-02 13:13:46 +09:00
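
The transformation applied here is of this shape:

  struct strbuf buf = STRBUF_INIT;
  const char *name = "refs/heads/master";

  strbuf_addf(&buf, "%s", name);      /* before */
  strbuf_addstr(&buf, name);          /* after: same result, no format parsing */
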
René Scharfe 872ccb2c69 refs: pass NULL to refs_resolve_refdup() if hash is not needed
This gets rid of a write-only variable.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-01 17:26:58 +09:00
Junio C Hamano 73ecdc606e Merge branch 'rs/resolve-ref-optional-result'
Code clean-up.

* rs/resolve-ref-optional-result:
  refs: pass NULL to resolve_ref_unsafe() if hash is not needed
  refs: pass NULL to refs_resolve_ref_unsafe() if hash is not needed
  refs: make sha1 output parameter of refs_resolve_ref_unsafe() optional
2017-09-28 14:47:56 +09:00
Jeff King 88780c37b3 files-backend: prefer "0" for write_in_full() error check
Commit 06f46f237a (avoid "write_in_full(fd, buf, len) !=
len" pattern, 2017-09-13) converted this callsite from:

  write_in_full(...) != 1

to

  write_in_full(...) < 0

But during the conflict resolution in c50424a6f0 (Merge
branch 'jk/write-in-full-fix', 2017-09-25), this morphed
into

  write_in_full(...) < 1

This behaves as we want, but we prefer to avoid modeling the
"less than length" error-check which can be subtly buggy, as
shown in efacf609c8 (config: avoid "write_in_full(fd, buf,
len) < len" pattern, 2017-09-13).

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-26 12:54:43 +09:00
Michael Haggerty cff28ca94c packed-backend.c: rename a bunch of things and update comments
We've made huge changes to this file, and some of the old names and
comments are no longer very fitting. So rename a bunch of things:

* `struct packed_ref_cache` → `struct snapshot`
* `acquire_packed_ref_cache()` → `acquire_snapshot()`
* `release_packed_ref_buffer()` → `clear_snapshot_buffer()`
* `release_packed_ref_cache()` → `release_snapshot()`
* `clear_packed_ref_cache()` → `clear_snapshot()`
* `struct packed_ref_entry` → `struct snapshot_record`
* `cmp_packed_ref_entries()` → `cmp_packed_ref_records()`
* `cmp_entry_to_refname()` → `cmp_record_to_refname()`
* `sort_packed_refs()` → `sort_snapshot()`
* `read_packed_refs()` → `create_snapshot()`
* `validate_packed_ref_cache()` → `validate_snapshot()`
* `get_packed_ref_cache()` → `get_snapshot()`
* Renamed local variables and struct members accordingly.

Also update a bunch of comments to reflect the renaming and the
accumulated changes that the code has undergone.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:46 +09:00
Michael Haggerty 523ee2d785 mmapped_ref_iterator: inline into `packed_ref_iterator`
Since `packed_ref_iterator` is now delegating to
`mmapped_ref_iterator` rather than `cache_ref_iterator` to do the
heavy lifting, there is no need to keep the two iterators separate. So
"inline" `mmapped_ref_iterator` into `packed_ref_iterator`. This
removes a bunch of boilerplate.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:46 +09:00
Michael Haggerty a6e19bcdad ref_cache: remove support for storing peeled values
Now that the `packed-refs` backend doesn't use `ref_cache`, there is
nobody left who might want to store peeled values of references in
`ref_cache`. So remove that feature.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:46 +09:00
Michael Haggerty 9dd389f3d8 packed_ref_store: get rid of the `ref_cache` entirely
Now that everything has been changed to read what it needs directly
out of the `packed-refs` file, `packed_ref_store` doesn't need to
maintain a `ref_cache` at all. So get rid of it.

First of all, this will save a lot of memory and lots of little
allocations. Instead of needing to store complicated parsed data
structures in memory, we just mmap the file (potentially sharing
memory with other processes) and parse only what we need.

Moreover, since the mmapped access to the file reads only the parts of
the file that it needs, we might not have to read the file's data from
disk at all (at least if the file starts out sorted).

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:46 +09:00
Michael Haggerty ba1c052fa6 ref_store: implement `refs_peel_ref()` generically
We're about to stop storing packed refs in a `ref_cache`. That means
that the only way we have left to optimize `peel_ref()` is by checking
whether the reference being peeled is the one currently being iterated
over (in `current_ref_iter`), and if so, using `ref_iterator_peel()`.
But this can be done generically; it doesn't have to be implemented
per-backend.

So implement `refs_peel_ref()` in `refs.c` and remove the `peel_ref()`
method from the refs API.

This removes the last callers of a couple of functions, so delete
them. More cleanup to come...

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:46 +09:00
Michael Haggerty f3987ab36d packed_read_raw_ref(): read the reference from the mmapped buffer
Instead of reading the reference from the `ref_cache`, read it
directly from the mmapped buffer.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Michael Haggerty d1cf15516f packed_ref_iterator_begin(): iterate using `mmapped_ref_iterator`
Now that we have an efficient way to iterate, in order, over the
mmapped contents of the `packed-refs` file, we can use that directly
to implement reference iteration for the `packed_ref_store`, rather
than iterating over the `ref_cache`. This is the next step towards
getting rid of the `ref_cache` entirely.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Michael Haggerty 02b920f3f7 read_packed_refs(): ensure that references are ordered when read
It doesn't actually matter now, because the references are only
iterated over to fill the associated `ref_cache`, which itself puts
them in the correct order. But we want to get rid of the `ref_cache`,
so we want to be able to iterate directly over the `packed-refs`
buffer, and then the iteration will need to be ordered correctly.

In fact, we already write the `packed-refs` file sorted, but it is
possible that other Git clients don't get it right. So let's not
assume that a `packed-refs` file is sorted unless it is explicitly
declared to be so via a `sorted` trait in its header line.

If it is *not* declared to be sorted, then scan quickly through the
file to check. If it is found to be out of order, then sort the
records into a new memory-only copy. This checking and sorting is done
quickly, without parsing the full file contents. However, it needs a
little bit of care to avoid reading past the end of the buffer even if
the `packed-refs` file is corrupt.

Since *we* always write the file correctly sorted, include that trait
when we write or rewrite a `packed-refs` file. This means that the
scan described in the previous paragraph should only have to be done
for `packed-refs` files that were written by older versions of the Git
command-line client, or by other clients that haven't yet learned to
write the `sorted` trait.

If `packed-refs` was already sorted, then (if the system allows it) we
can use the mmapped file contents directly. But if the system doesn't
allow a file that is currently mmapped to be replaced using
`rename()`, then it would be bad for us to keep the file mmapped for
any longer than necessary. So, on such systems, always make a copy of
the file contents, either as part of the sorting process, or
afterwards.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
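
For reference, a `packed-refs` file written with the new trait looks roughly
like this (object names elided; the `^` line carries the peeled value of the
tag on the line above it):

  # pack-refs with: peeled fully-peeled sorted
  <oid> refs/heads/maint
  <oid> refs/heads/master
  <oid> refs/tags/v2.14.0
  ^<oid-of-peeled-tag>
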
Michael Haggerty 5b633610ec packed_ref_cache: keep the `packed-refs` file mmapped if possible
Keep a copy of the `packed-refs` file contents in memory for as long
as a `packed_ref_cache` object is in use:

* If the system allows it, keep the `packed-refs` file mmapped.

* If not (either because the system doesn't support `mmap()` at all,
  or because a file that is currently mmapped cannot be replaced via
  `rename()`), then make a copy of the file's contents in
  heap-allocated space, and keep that around instead.

We base the choice of behavior on a new build-time switch,
`MMAP_PREVENTS_DELETE`. By default, this switch is set for Windows
variants.

After this commit, `MMAP_NONE` and `MMAP_TEMPORARY` are still handled
identically. But the next commit will introduce a difference.

This whole change is still pointless, because we only read the
`packed-refs` file contents immediately after instantiating the
`packed_ref_cache`. But that will soon change.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Michael Haggerty 14b3c344ea packed-backend.c: reorder some definitions
No code has been changed. This will make subsequent patches more
self-contained.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Michael Haggerty 81b9b5aea7 mmapped_ref_iterator_advance(): no peeled value for broken refs
If a reference is broken, suppress its peeled value.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Michael Haggerty 9cfb3dc0d1 mmapped_ref_iterator: add iterator over a packed-refs file
Add a new `mmapped_ref_iterator`, which can iterate over the
references in an mmapped `packed-refs` file directly. Use this
iterator from `read_packed_refs()` to fill the packed refs cache.

Note that we are not yet willing to promise that the new iterator
generates its output in order. That doesn't matter for now, because
the packed refs cache doesn't care what order it is filled.

This change adds a lot of boilerplate without providing any obvious
benefits. The benefits will come soon, when we get rid of the
`ref_cache` for packed references altogether.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Michael Haggerty daa45408c1 packed_ref_cache: remember the file-wide peeling state
Rather than store the peeling state (i.e., the one defined by traits
in the `packed-refs` file header line) in a local variable in
`read_packed_refs()`, store it permanently in `packed_ref_cache`. This
will be needed when we stop reading all packed refs at once.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Michael Haggerty 6a9bc4034a read_packed_refs(): read references with minimal copying
Instead of copying data from the `packed-refs` file one line at time
and then processing it, process the data in place as much as possible.

Also, instead of processing one line per iteration of the main loop,
process a reference line plus its corresponding peeled line (if
present) together.

Note that this change slightly tightens up the parsing of the
`packed-refs` file. Previously, the parser would have accepted
multiple "peeled" lines for a single reference (ignoring all but the
last one). Now it would reject that.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 18:02:45 +09:00
Junio C Hamano c50424a6f0 Merge branch 'jk/write-in-full-fix'
Many codepaths did not diagnose write failures correctly when disks
go full, due to their misuse of the write_in_full() helper function;
these have been corrected.

* jk/write-in-full-fix:
  read_pack_header: handle signed/unsigned comparison in read result
  config: flip return value of store_write_*()
  notes-merge: use ssize_t for write_in_full() return value
  pkt-line: check write_in_full() errors against "< 0"
  convert less-trivial versions of "write_in_full() != len"
  avoid "write_in_full(fd, buf, len) != len" pattern
  get-tar-commit-id: check write_in_full() return against 0
  config: avoid "write_in_full(fd, buf, len) < len" pattern
2017-09-25 15:24:06 +09:00
René Scharfe e691b027b6 refs: pass NULL to refs_resolve_ref_unsafe() if hash is not needed
This allows us to get rid of two write-only variables, one of them
being a SHA1 buffer.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-24 10:18:18 +09:00
Junio C Hamano 07f0542da3 Merge branch 'mh/packed-ref-transactions'
Implement transactional update to the packed-ref representation of
references.

* mh/packed-ref-transactions:
  files_transaction_finish(): delete reflogs before references
  packed-backend: rip out some now-unused code
  files_ref_store: use a transaction to update packed refs
  t1404: demonstrate two problems with reference transactions
  files_initial_transaction_commit(): use a transaction for packed refs
  prune_refs(): also free the linked list
  files_pack_refs(): use a reference transaction to write packed refs
  packed_delete_refs(): implement method
  packed_ref_store: implement reference transactions
  struct ref_transaction: add a place for backends to store data
  packed-backend: don't adjust the reference count on lock/unlock
2017-09-19 10:47:56 +09:00
Junio C Hamano 89563ec379 Merge branch 'jk/incore-lockfile-removal'
The long-standing rule that an in-core lockfile instance, once it
is used, must not be freed, has been lifted and the lockfile and
tempfile APIs have been updated to reduce the chance of programming
errors.

* jk/incore-lockfile-removal:
  stop leaking lock structs in some simple cases
  ref_lock: stop leaking lock_files
  lockfile: update lifetime requirements in documentation
  tempfile: auto-allocate tempfiles on heap
  tempfile: remove deactivated list entries
  tempfile: use list.h for linked list
  tempfile: release deactivated strbufs instead of resetting
  tempfile: robustify cleanup handler
  tempfile: factor out deactivation
  tempfile: factor out activation
  tempfile: replace die("BUG") with BUG()
  tempfile: handle NULL tempfile pointers gracefully
  tempfile: prefer is_tempfile_active to bare access
  lockfile: do not rollback lock on failed close
  tempfile: do not delete tempfile on failed close
  always check return value of close_tempfile
  verify_signed_buffer: prefer close_tempfile() to close()
  setup_temporary_shallow: move tempfile struct into function
  setup_temporary_shallow: avoid using inactive tempfile
  write_index_as_tree: cleanup tempfile on error
2017-09-19 10:47:53 +09:00
Junio C Hamano 8a044c7f1d Merge branch 'nd/prune-in-worktree'
"git gc" and friends when multiple worktrees are used off of a
single repository did not consider the index and per-worktree refs
of other worktrees as the root for reachability traversal, making
objects that are in use only in other worktrees to be subject to
garbage collection.

* nd/prune-in-worktree:
  refs.c: reindent get_submodule_ref_store()
  refs.c: remove fallback-to-main-store code get_submodule_ref_store()
  rev-list: expose and document --single-worktree
  revision.c: --reflog add HEAD reflog from all worktrees
  files-backend: make reflog iterator go through per-worktree reflog
  revision.c: --all adds HEAD from all worktrees
  refs: remove dead for_each_*_submodule()
  refs.c: move for_each_remote_ref_submodule() to submodule.c
  revision.c: use refs_for_each*() instead of for_each_*_submodule()
  refs: add refs_head_ref()
  refs: move submodule slash stripping code to get_submodule_ref_store
  refs.c: refactor get_submodule_ref_store(), share common free block
  revision.c: --indexed-objects add objects from all worktrees
  revision.c: refactor add_index_objects_to_pending()
  refs.c: use is_dir_sep() in resolve_gitlink_ref()
  revision.h: new flag in struct rev_info wrt. worktree-related refs
2017-09-19 10:47:53 +09:00
Junio C Hamano dafbe1993e Merge branch 'ma/split-symref-update-fix'
A leakfix.

* ma/split-symref-update-fix:
  refs/files-backend: add `refname`, not "HEAD", to list
  refs/files-backend: correct return value in lock_ref_for_update
  refs/files-backend: fix memory leak in lock_ref_for_update
  refs/files-backend: add longer-scoped copy of string to list
2017-09-19 10:47:53 +09:00
Michael Haggerty a8811695e3 read_packed_refs(): make parsing of the header line more robust
The old code parsed the traits in the `packed-refs` header by looking
for the string " trait " (i.e., the name of the trait with a space on
either side) in the header line. This is fragile, because if any other
implementation of Git forgets to write the trailing space, the last
trait would silently be ignored (and the error might never be
noticed).

So instead, use `string_list_split_in_place()` to split the traits
into tokens then use `unsorted_string_list_has_string()` to look for
the tokens we are interested in. This means that we can read the
traits correctly even if the header line is missing a trailing
space (or indeed, if it is missing the space after the colon, or if it
has multiple spaces somewhere).

However, older Git clients (and perhaps other Git implementations)
still require the surrounding spaces, so we still have to output the
header with a trailing space.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:19:07 +09:00
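
A sketch of the more robust parsing; `p` points just past "# pack-refs with:"
in a writable copy of the header line, PEELED_TAGS/PEELED_FULLY are the
file-wide peeling states mentioned in daa45408c1 above, and the `sorted`
check reflects the later trait added in 02b920f3f7:

  struct string_list traits = STRING_LIST_INIT_NODUP;

  string_list_split_in_place(&traits, p, ' ', -1);

  if (unsorted_string_list_has_string(&traits, "fully-peeled"))
          peeled = PEELED_FULLY;
  else if (unsorted_string_list_has_string(&traits, "peeled"))
          peeled = PEELED_TAGS;

  sorted = unsorted_string_list_has_string(&traits, "sorted");

  string_list_clear(&traits, 0);
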
Michael Haggerty 36f23534ae read_packed_refs(): only check for a header at the top of the file
This tightens up the parsing a bit; previously, stray header-looking
lines would have been processed.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:19:07 +09:00
Michael Haggerty 49a03ef466 read_packed_refs(): use mmap to read the `packed-refs` file
It's still done in a pretty stupid way, involving more data copying
than necessary. That will improve in future commits.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:19:07 +09:00
Michael Haggerty 735267aa10 die_unterminated_line(), die_invalid_line(): new functions
Extract some helper functions for reporting errors. While we're at it,
prevent them from spewing unlimited output to the terminal. These
functions will soon have more callers.

These functions accept the problematic line as a `(ptr, len)` pair
rather than a NUL-terminated string, and `die_invalid_line()` checks
for an EOL itself, because these calling conventions will be
convenient for future callers. (Efficiency is not a concern here
because these functions are only ever called if the `packed-refs` file
is corrupt.)

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:19:07 +09:00
Michael Haggerty f0a7dc86d2 packed_ref_cache: add a backlink to the associated `packed_ref_store`
It will prove convenient in upcoming patches.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:19:07 +09:00
Jeff King 157113c614 prefix_ref_iterator: break when we leave the prefix
If the underlying iterator is ordered, then `prefix_ref_iterator` can
stop as soon as it sees a refname that comes after the prefix. This
will rarely make a big difference now, because `ref_cache_iterator`
only iterates over the directory containing the prefix (and usually
the prefix will span a whole directory anyway). But if *hint, hint* a
future reference backend doesn't itself know where to stop the
iteration, then this optimization will be a big win.

Note that there is no guarantee that the underlying iterator doesn't
include output preceding the prefix, so we have to skip over any
unwanted references before we get to the ones that we want.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:19:07 +09:00
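
In simplified form, the advance step of the prefix iterator now does
something like the following when its input is ordered; `cmp_prefix()` is a
hypothetical helper returning negative/zero/positive for refnames that sort
before/inside/after the prefix range:

  while ((ok = ref_iterator_advance(iter->iter0)) == ITER_OK) {
          int cmp = cmp_prefix(iter->iter0->refname, iter->prefix);

          if (cmp < 0)
                  continue;               /* before the prefix: skip it */

          if (cmp > 0) {
                  /* Ordered input: nothing later can match; stop early. */
                  ok = ref_iterator_abort(iter->iter0);
                  break;
          }

          return ITER_OK;                 /* inside the prefix: yield it */
  }
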
Michael Haggerty 8738a8a4df ref_iterator: keep track of whether the iterator output is ordered
References are iterated over in order by refname, but reflogs are not.
Some consumers of reference iteration care about the difference. Teach
each `ref_iterator` to keep track of whether its output is ordered.

`overlay_ref_iterator` is one of the picky consumers. Add a sanity
check in `overlay_ref_iterator_begin()` to verify that its inputs are
ordered.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:19:07 +09:00
Jeff King 564bde9ae6 convert less-trivial versions of "write_in_full() != len"
The prior commit converted many sites to check the return
value of write_in_full() for negativity, rather than a
mismatch with the input length. This patch covers similar
cases, but where the return value is stored in an
intermediate variable. These should get the same treatment,
but they need to be reviewed more carefully since it would
be a bug if the return value is stored in an unsigned type
(which indeed, it is in one of the cases).

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:17:59 +09:00
Jeff King 06f46f237a avoid "write_in_full(fd, buf, len) != len" pattern
The return value of write_in_full() is either "-1", or the
requested number of bytes[1]. If we make a partial write
before seeing an error, we still return -1, not a partial
value. This goes back to f6aa66cb95 (write_in_full: really
write in full or return error on disk full., 2007-01-11).

So checking anything except "was the return value negative"
is pointless. And there are a couple of reasons not to do
so:

  1. It can do a funny signed/unsigned comparison. If your
     "len" is signed (e.g., a size_t) then the compiler will
     promote the "-1" to its unsigned variant.

     This works out for "!= len" (unless you really were
     trying to write the maximum size_t bytes), but is a
     bug if you check "< len" (an example of which was fixed
     recently in config.c).

     We should avoid promoting the mental model that you
     need to check the length at all, so that new sites are
     not tempted to copy us.

  2. Checking for a negative value is shorter to type,
     especially when the length is an expression.

  3. Linus says so. In d34cf19b89 (Clean up write_in_full()
     users, 2007-01-11), right after the write_in_full()
     semantics were changed, he wrote:

       I really wish every "write_in_full()" user would just
       check against "<0" now, but this fixes the nasty and
       stupid ones.

     Appeals to authority aside, this makes it clear that
     writing it this way does not have an intentional
     benefit. It's a historical curiosity that we never
     bothered to clean up (and which was undoubtedly
     cargo-culted into new sites).

So let's convert these obviously-correct cases (this
includes write_str_in_full(), which is just a wrapper for
write_in_full()).

[1] A careful reader may notice there is one way that
    write_in_full() can return a different value. If we ask
    write() to write N bytes and get a return value that is
    _larger_ than N, we could return a larger total. But
    besides the fact that this would imply a totally broken
    version of write(), it would already invoke undefined
    behavior. Our internal remaining counter is an unsigned
    size_t, which means that subtracting too many bytes will
    wrap it around to a very large number. So we'll instantly
    begin reading off the end of the buffer, trying to write
    gigabytes (or petabytes) of data.

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 15:17:59 +09:00
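
A minimal illustration of the promotion problem from point 1, assuming `len`
is a size_t and `fd`/`buf` are whatever is being written:

  if (write_in_full(fd, buf, len) < len)   /* -1 is promoted to SIZE_MAX, */
          return -1;                       /* so the error is never seen  */

  if (write_in_full(fd, buf, len) < 0)     /* the check preferred here    */
          return -1;
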
Martin Ågren 276d0e35c0 refs/files-backend: add `refname`, not "HEAD", to list
An earlier patch rewrote `split_symref_update()` to add a copy of a
string to a string list instead of adding the original string. That was
so that the original string could be freed in a later patch, but it is
also conceptually cleaner, since now all calls to `string_list_insert()`
and `string_list_append()` add `update->refname`. --- Except a literal
"HEAD" is added in `split_head_update()`.

Restructure `split_head_update()` in the same way as the earlier patch
did for `split_symref_update()`. This does not correct any practical
problem, but makes things conceptually cleaner. The downside is a call
to `string_list_has_string()`, which should be relatively cheap.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-10 16:36:58 +09:00
Martin Ågren 3f5ef95b5e refs/files-backend: correct return value in lock_ref_for_update
In one code path we return a literal -1 and not a symbolic constant. The
value -1 would be interpreted as TRANSACTION_NAME_CONFLICT, which is
wrong. Use TRANSACTION_GENERIC_ERROR instead (that is the only other
return value we have to choose from).

Noticed-by: Michael Haggerty <mhagger@alum.mit.edu>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-10 16:36:58 +09:00
Martin Ågren 851e1fbd01 refs/files-backend: fix memory leak in lock_ref_for_update
After the previous patch, none of the functions we call hold on to
`referent.buf`, so we can safely release the string buffer before
returning.

Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-10 16:36:58 +09:00
Martin Ågren c299468bd7 refs/files-backend: add longer-scoped copy of string to list
split_symref_update() receives a string-pointer `referent` and adds it
to the list of `affected_refnames`. The list simply holds on to the
pointers it is given, it does not copy the strings and it does not ever
free them. The `referent` string in split_symref_update() belongs to a
string buffer in the caller. After we return, the string will be leaked.

In the next patch, we want to properly release the string buffer in the
caller, but we can't safely do so until we've made sure that
`affected_refnames` will not be holding on to a pointer to the string.
We could configure the list to handle its own resources, but it would
mean some alloc/free-churning. The list is already handling other
strings (through other code paths) which we do not need to worry about,
and we'd be memory-churning those strings too, completely unnecessarily.

Observe that split_symref_update() creates a `new_update`-object through
ref_transaction_add_update(), after which `new_update->refname` is a
copy of `referent`. The difference is, this copy will be freed, and it
will be freed *after* `affected_refnames` has been cleared.

Rearrange the handling of `referent`, so that we don't add it directly
to `affected_refnames`. Instead, first just check whether `referent`
exists in the string list, and later add `new_update->refname`.

Helped-by: Michael Haggerty <mhagger@alum.mit.edu>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-10 16:36:58 +09:00
Michael Haggerty 5e00a6c873 files_transaction_finish(): delete reflogs before references
If the deletion steps unexpectedly fail, it is less bad to leave a
reference without its reflog than it is to leave a reflog without its
reference, since the latter is an invalid repository state.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:04 +09:00
Michael Haggerty 9939b33d6a packed-backend: rip out some now-unused code
Now the outside world interacts with the packed ref store only via the
generic refs API plus a few lock-related functions. This allows us to
delete some functions that are no longer used, thereby completing the
encapsulation of the packed ref store.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:04 +09:00
Michael Haggerty dc39e09942 files_ref_store: use a transaction to update packed refs
When processing a `files_ref_store` transaction, it is sometimes
necessary to delete some references from the "packed-refs" file. Do
that using a reference transaction conducted against the
`packed_ref_store`.

This change further decouples `files_ref_store` from
`packed_ref_store`. It also fixes multiple problems, including the two
revealed by test cases added in the previous commit.

First, the old code didn't obtain the `packed-refs` lock until
`files_transaction_finish()`. This means that a failure to acquire the
`packed-refs` lock (e.g., due to contention with another process)
wasn't detected until it was too late (problems like this are supposed
to be detected in the "prepare" phase). The new code acquires the
`packed-refs` lock in `files_transaction_prepare()`, the same stage of
the processing when the loose reference locks are being acquired,
removing another reason why the "prepare" phase might succeed and the
"finish" phase might nevertheless fail.

Second, the old code deleted the loose version of a reference before
deleting any packed version of the same reference. This left a moment
when another process might think that the packed version of the
reference is current, which is incorrect. (Even worse, the packed
version of the reference can be arbitrarily old, and might even point
at an object that has since been garbage-collected.)

Third, if a reference deletion fails to acquire the `packed-refs` lock
altogether, then the old code might leave the repository in the
incorrect state (possibly corrupt) described in the previous
paragraph.

Now we activate the new "packed-refs" file (sans any references that
are being deleted) *before* deleting the corresponding loose
references. But we hold the "packed-refs" lock until after the loose
references have been finalized, thus preventing a simultaneous
"pack-refs" process from packing the loose version of the reference in
the time gap, which would otherwise defeat our attempt to delete it.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:04 +09:00
Michael Haggerty 1444bfe027 files_initial_transaction_commit(): use a transaction for packed refs
Use a `packed_ref_store` transaction in the implementation of
`files_initial_transaction_commit()` rather than using internal
features of the packed ref store. This further decouples
`files_ref_store` from `packed_ref_store`.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:04 +09:00
Michael Haggerty 22b09cdfad prune_refs(): also free the linked list
At least since v1.7, the elements of the `refs_to_prune` linked list
have been leaked. Fix the leak by teaching `prune_refs()` to free the
list elements as it processes them.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:04 +09:00
Michael Haggerty 27d03d04d5 files_pack_refs(): use a reference transaction to write packed refs
Now that the packed reference store supports transactions, we can use
a transaction to write the packed versions of references that we want
to pack. This decreases the coupling between `files_ref_store` and
`packed_ref_store`.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:04 +09:00
Michael Haggerty 2fb330ca72 packed_delete_refs(): implement method
Implement `packed_delete_refs()` using a reference transaction. This
means that `files_delete_refs()` can use `refs_delete_refs()` instead
of `repack_without_refs()` to delete any packed references, decreasing
the coupling between the classes.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:04 +09:00
Michael Haggerty 2775d8724d packed_ref_store: implement reference transactions
Implement the methods needed to support reference transactions for
the packed-refs backend. The new methods are not yet used.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:03 +09:00
Michael Haggerty 3bf4f56134 struct ref_transaction: add a place for backends to store data
`packed_ref_store` is going to want to store some transaction-wide
data, so make a place for it.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:03 +09:00
Michael Haggerty 39c8df0cfe packed-backend: don't adjust the reference count on lock/unlock
The old code incremented the packed ref cache reference count when
acquiring the packed-refs lock, and decremented the count when
releasing the lock. This is unnecessary because:

* Another process cannot change the packed-refs file because it is
  locked.

* When we ourselves change the packed-refs file, we do so by first
  modifying the packed ref-cache, and then writing the data from the
  ref-cache to disk. So the packed ref-cache remains fresh because any
  changes that we plan to make to the file are made in the cache first
  anyway.

So there is no reason for the cache to become stale.

Moreover, the extra reference count causes a problem if we
intentionally clear the packed refs cache, as we sometimes need to do
if we change the cache in anticipation of writing a change to disk,
but then the write to disk fails. In that case, `packed_refs_unlock()`
would have no easy way to find the cache whose reference count it
needs to decrement.

This whole issue will soon become moot due to upcoming changes that
avoid changing the in-memory cache as part of updating the packed-refs
on disk, but this change makes that transition easier.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 03:18:03 +09:00
Jeff King ee4d8e455c ref_lock: stop leaking lock_files
Since the tempfile code recently relaxed the rule that
tempfile structs (and thus locks) need to hang around
forever, we no longer have to leak our lock_file structs.

In fact, we don't even need to heap-allocate them anymore,
since their lifetime can just match that of the surrounding
ref_lock (and if we forget to delete a lock, the effect is
the same as before: it will eventually go away at program
exit).

Note that there is a check in unlock_ref() to only rollback
a lock file if it has been allocated. We don't need that
check anymore; we zero the ref_lock (and thus the
lock_file), so at worst we pass a NULL pointer to
delete_tempfile(), which considers that a noop.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-06 17:19:54 +09:00
Jeff King 076aa2cbda tempfile: auto-allocate tempfiles on heap
The previous commit taught the tempfile code to give up
ownership over tempfiles that have been renamed or deleted.
That makes it possible to use a stack variable like this:

  struct tempfile t;

  create_tempfile(&t, ...);
  ...
  if (!err)
          rename_tempfile(&t, ...);
  else
          delete_tempfile(&t);

But doing it this way has a high potential for creating
memory errors. The tempfile we pass to create_tempfile()
ends up on a global linked list, and it's not safe for it to
go out of scope until we've called one of those two
deactivation functions.

Imagine that we add an early return from the function that
forgets to call delete_tempfile(). With a static or heap
tempfile variable, the worst case is that the tempfile hangs
around until the program exits (and some functions like
setup_temporary_shallow rely on this intentionally, creating
a tempfile and then leaving it for later cleanup).

But with a stack variable as above, this is a serious memory
error: the variable goes out of scope and may be filled with
garbage by the time the tempfile code looks at it.  Let's
see if we can make it harder to get this wrong.

Since many callers need to allocate arbitrary numbers of
tempfiles, we can't rely on static storage as a general
solution. So we need to turn to the heap. We could just ask
all callers to pass us a heap variable, but that puts the
burden on them to call free() at the right time.

Instead, let's have the tempfile code handle the heap
allocation _and_ the deallocation (when the tempfile is
deactivated and removed from the list).

This changes the return value of all of the creation
functions. For the cleanup functions (delete and rename),
we'll add one extra bit of safety: instead of taking a
tempfile pointer, we'll take a pointer-to-pointer and set it
to NULL after freeing the object. This makes it safe to
double-call functions like delete_tempfile(), as the second
call treats the NULL input as a noop. Several callsites
follow this pattern.

The resulting patch does have a fair bit of noise, as each
caller needs to be converted to handle:

  1. Storing a pointer instead of the struct itself.

  2. Passing the pointer instead of taking the struct
     address.

  3. Handling a "struct tempfile *" return instead of a file
     descriptor.

We could play games to make this less noisy. For example, by
defining the tempfile like this:

  struct tempfile {
	struct heap_allocated_part_of_tempfile {
                int fd;
                ...etc
        } *actual_data;
  }

Callers would continue to have a "struct tempfile", and it
would be "active" only when the inner pointer was non-NULL.
But that just makes things more awkward in the long run.
There aren't that many callers, so we can simply bite
the bullet and adjust all of them. And the compiler makes it
easy for us to find them all.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-06 17:19:54 +09:00
Jeff King 83a3069a38 lockfile: do not rollback lock on failed close
Since the lockfile code is based on the tempfile code, it
has some of the same problems, including that close_lock_file()
erases the tempfile's filename buf, making it hard for the
caller to write a good error message.

In practice this comes up less for lockfiles than for
straight tempfiles, since we usually just report the
refname. But there is at least one buggy case in
write_ref_to_lockfile(). Besides, given the coupling between
the lockfile and tempfile modules, it's less confusing if
their close() functions have the same semantics.

Just as the previous commit did for close_tempfile(), let's
teach close_lock_file() and its wrapper close_ref() not to
roll back on error. And just as before, we'll give them new
"gently" names to catch any new callers that are added.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-06 17:19:53 +09:00
Junio C Hamano f2dd90fc1c Merge branch 'mh/ref-lock-entry'
The code to acquire a lock on a reference (e.g. while accepting a
push from a client) used to immediately fail when the reference is
already locked---now it waits for a very short while and retries,
which can make it succeed if the lock holder was holding it during
a read-only operation.

* mh/ref-lock-entry:
  refs: retry acquiring reference locks for 100ms
2017-08-26 22:55:09 -07:00
Nguyễn Thái Ngọc Duy 944b4e3013 files-backend: make reflog iterator go through per-worktree reflog
refs/bisect is unfortunately per-worktree, so we need to look in
per-worktree logs/refs/bisect in addition to per-repo logs/refs. The
current iterator only goes through per-repo logs/refs.

Use merge iterator to walk two ref stores at the same time and pick
per-worktree refs from the right iterator.

P.S. Note the unsorted output order of for_each_reflog() in the test.
That is acceptable for now; if we ever enforce ordering on
for_each_reflog(), some more work will be required.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-08-24 14:57:56 -07:00
Michael Haggerty 4ff0f01cb7 refs: retry acquiring reference locks for 100ms
The philosophy of reference locking has been, "if another process is
changing a reference, then whatever I'm trying to do to it will
probably fail anyway because my old-SHA-1 value is probably no longer
current". But this argument falls down if the other process has locked
the reference to do something that doesn't actually change the value
of the reference, such as `pack-refs` or `reflog expire`. There
actually *is* a decent chance that a planned reference update will
still be able to go through after the other process has released the
lock.

So when trying to lock an individual reference (e.g., when creating
"refs/heads/master.lock"), if it is already locked, then retry the
lock acquisition for approximately 100 ms before giving up. This
should eliminate some unnecessary lock conflicts without wasting a lot
of time.

Add a configuration setting, `core.filesRefLockTimeout`, to allow this
setting to be tweaked.

Note: the function `get_files_ref_lock_timeout_ms()` cannot be private
to the files backend because it is also used by `write_pseudoref()`
and `delete_pseudoref()`, which are defined in `refs.c` so that they
can be used by other reference backends.
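
Roughly, the files backend feeds that value into the timeout-aware
entry point of the lockfile API; a hedged sketch (the flags and the
`lock`/`ref_file` variables are illustrative):

  if (hold_lock_file_for_update_timeout(&lock->lk, ref_file.buf,
                                        LOCK_NO_DEREF,
                                        get_files_ref_lock_timeout_ms()) < 0) {
          /* another process held the lock for longer than the timeout */
          ...
  }

If 100 ms turns out to be too short on a busy repository, the limit can
be raised by setting `core.filesRefLockTimeout` to a larger number of
milliseconds.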

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-08-23 10:37:21 -07:00
Junio C Hamano 44c2339e55 Merge branch 'mh/packed-ref-store'
The "ref-store" code reorganization continues.

* mh/packed-ref-store: (32 commits)
  files-backend: cheapen refname_available check when locking refs
  packed_ref_store: handle a packed-refs file that is a symlink
  read_packed_refs(): die if `packed-refs` contains bogus data
  t3210: add some tests of bogus packed-refs file contents
  repack_without_refs(): don't lock or unlock the packed refs
  commit_packed_refs(): remove call to `packed_refs_unlock()`
  clear_packed_ref_cache(): don't protest if the lock is held
  packed_refs_unlock(), packed_refs_is_locked(): new functions
  packed_refs_lock(): report errors via a `struct strbuf *err`
  packed_refs_lock(): function renamed from lock_packed_refs()
  commit_packed_refs(): use a staging file separate from the lockfile
  commit_packed_refs(): report errors rather than dying
  packed_ref_store: make class into a subclass of `ref_store`
  packed-backend: new module for handling packed references
  packed_read_raw_ref(): new function, replacing `resolve_packed_ref()`
  packed_ref_store: support iteration
  packed_peel_ref(): new function, extracted from `files_peel_ref()`
  repack_without_refs(): take a `packed_ref_store *` parameter
  get_packed_ref(): take a `packed_ref_store *` parameter
  rollback_packed_refs(): take a `packed_ref_store *` parameter
  ...
2017-08-22 10:29:16 -07:00
Michael Haggerty 8ec617c80c files-backend: cheapen refname_available check when locking refs
When locking references in preparation for updating them, we need to
check that none of the newly added references D/F conflict with
existing references (e.g., we don't allow `refs/foo` to be added if
`refs/foo/bar` already exists, or vice versa).

Prior to 524a9fdb51 (refs_verify_refname_available(): use function in
more places, 2017-04-16), conflicts with existing loose references
were checked by looking directly in the filesystem, and then conflicts
with existing packed references were checked by running
`verify_refname_available_dir()` against the packed-refs cache.

But that commit changed the final check to call
`refs_verify_refname_available()` against the *whole* files ref-store,
including both loose and packed references, with the following
comment:

> This means that those callsites now check for conflicts with all
> references rather than just packed refs, but the performance cost
> shouldn't be significant (and will be regained later).

That comment turned out to be too sanguine. User s@kazlauskas.me
reported that fetches involving a very large number of references in
neighboring directories were slowed down by that change.

The problem is that when fetching, each reference is updated
individually, within its own reference transaction. This is done
because some reference updates might succeed even though others fail.
But every time a reference update transaction is finished,
`clear_loose_ref_cache()` is called. So when it is time to update the
next reference, part of the loose ref cache has to be repopulated for
the `refs_verify_refname_available()` call. If the references are all
in neighboring directories, then the cost of repopulating the
reference cache increases with the number of references, resulting in
O(N²) effort.

The comment above also claims that the performance cost "will be
regained later". The idea was that once the packed-refs were finished
being split out into a separate ref-store, we could limit the
`refs_verify_refname_available()` call to the packed references again.
That is what we do now.
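
A hedged sketch of the cheapened check (argument list abridged; the
`refs`/`update`/`affected_refnames` names follow the files backend but
are shown only for illustration):

  if (refs_verify_refname_available(refs->packed_ref_store,
                                    update->refname,
                                    &affected_refnames, NULL, err))
          return TRANSACTION_NAME_CONFLICT;

Conflicts with loose references go back to being detected via the
filesystem, as they were before 524a9fdb51.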

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-08-17 14:32:23 -07:00
Michael Haggerty 198b808e20 packed_ref_store: handle a packed-refs file that is a symlink
One of the tricks that `contrib/workdir/git-new-workdir` plays is to
make `packed-refs` in the new workdir a symlink to the `packed-refs`
file in the original repository. Before
42dfa7ecef ("commit_packed_refs(): use a staging file separate from
the lockfile", 2017-06-23), a lockfile was used as the staging file,
and because the `LOCK_NO_DEREF` flag was not used, the pointed-to file was
locked and modified.

But after that commit, the staging file was created using a tempfile,
with the end result that rewriting the `packed-refs` file in the
workdir overwrote the symlink rather than the original `packed-refs`
file.

Change `commit_packed_refs()` to use `get_locked_file_path()` to find
the path of the file that it should overwrite. Since that path was
properly resolved when the lockfile was created, this restores the
pre-42dfa7ecef behavior.
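
A sketch of the fix inside `commit_packed_refs()` (error handling
abridged): the path is asked back from the lockfile, which already
resolved any symlink when the lock was taken:

  char *packed_refs_path = get_locked_file_path(&refs->lock);

  if (rename_tempfile(&refs->tempfile, packed_refs_path))
          strbuf_addf(err, "error replacing %s: %s",
                      packed_refs_path, strerror(errno));
  free(packed_refs_path);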

Also add a test case to document this use case and prevent a
regression like this from recurring.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-27 10:19:56 -07:00
Michael Haggerty 9308b7f3ca read_packed_refs(): die if `packed-refs` contains bogus data
The old code ignored any lines that it didn't understand, including
unterminated lines. This is dangerous. Instead, `die()` if the
`packed-refs` file contains any unterminated lines or lines that we
don't know how to handle.
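
A hedged sketch of the stricter reading loop; `parse_packed_ref_line()`
is a hypothetical stand-in for the actual parsing code:

  while (strbuf_getwholeline(&line, f, '\n') != EOF) {
          if (!line.len || line.buf[line.len - 1] != '\n')
                  die("unterminated line in %s: %s", path, line.buf);
          if (parse_packed_ref_line(&line, &oid, &refname) < 0)
                  die("unexpected line in %s: %s", path, line.buf);
          /* ... add the entry to the packed-ref cache ... */
  }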

This fixes the tests added in the last commit.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-03 10:01:57 -07:00
Michael Haggerty e5cc7d7d2b repack_without_refs(): don't lock or unlock the packed refs
Change `repack_without_refs()` to expect the packed-refs lock to be
held already, and not to release the lock before returning. Change the
callers to deal with lock management.
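
A hedged sketch of what a caller now looks like (error handling
abridged; `packed_refs_lock()`/`packed_refs_unlock()` are the functions
introduced earlier in this series):

  if (packed_refs_lock(refs->packed_ref_store, 0, &err))
          goto failure;
  ret = repack_without_refs(refs->packed_ref_store, &refnames, &err);
  packed_refs_unlock(refs->packed_ref_store);
  if (ret)
          goto failure;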

This change makes it possible for callers to hold the packed-refs lock
for a longer span of time, a possibility that will eventually make it
possible to fix some longstanding races.

The only semantic change here is that `repack_without_refs()` used to
forget to release the lock in the `if (!removed)` exit path. That
omission is now fixed.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-03 10:01:56 -07:00
Junio C Hamano 5c83d850d0 Merge branch 'mh/packed-ref-store-prep'
Bugfix for a topic that is (only) in 'master'.

* mh/packed-ref-store-prep:
  for_each_bisect_ref(): don't trim refnames
  lock_packed_refs(): fix cache validity check
2017-06-26 14:09:29 -07:00
Junio C Hamano 50f03c6676 Merge branch 'ab/free-and-null'
A common pattern to free a piece of memory and assign NULL to the
pointer that used to point at it has been replaced with a new
FREE_AND_NULL() macro.

* ab/free-and-null:
  *.[ch] refactoring: make use of the FREE_AND_NULL() macro
  coccinelle: make use of the "expression" FREE_AND_NULL() rule
  coccinelle: add a rule to make "expression" code use FREE_AND_NULL()
  coccinelle: make use of the "type" FREE_AND_NULL() rule
  coccinelle: add a rule to make "type" code use FREE_AND_NULL()
  git-compat-util: add a FREE_AND_NULL() wrapper around free(ptr); ptr = NULL
2017-06-24 14:28:41 -07:00
Junio C Hamano f31d23a399 Merge branch 'bw/config-h'
Fix configuration codepath to pay proper attention to commondir
that is used in multi-worktree situation, and isolate config API
into its own header file.

* bw/config-h:
  config: don't implicitly use gitdir or commondir
  config: respect commondir
  setup: teach discover_git_directory to respect the commondir
  config: don't include config.h by default
  config: remove git_config_iter
  config: create config.h
2017-06-24 14:28:41 -07:00
Michael Haggerty 42c7f7ff96 commit_packed_refs(): remove call to `packed_refs_unlock()`
Instead, change the callers of `commit_packed_refs()` to call
`packed_refs_unlock()`.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty 9051198214 clear_packed_ref_cache(): don't protest if the lock is held
The existing callers already check that the lock isn't held just
before calling `clear_packed_ref_cache()`, and in the near future we
want to be able to call this function when the lock is held.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty 49aebcf432 packed_refs_unlock(), packed_refs_is_locked(): new functions
Add two new public functions, `packed_refs_unlock()` and
`packed_refs_is_locked()`, with which callers can manage and query the
`packed-refs` lock externally.

Call `packed_refs_unlock()` from `commit_packed_refs()` and
`rollback_packed_refs()`.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty c8bed835c2 packed_refs_lock(): report errors via a `struct strbuf *err`
That way the callers don't have to come up with error messages
themselves.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty b7de57d8d1 packed_refs_lock(): function renamed from lock_packed_refs()
Rename `lock_packed_refs()` to `packed_refs_lock()` for consistency
with how other methods are named. Also, it's about to get some
companions.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty 42dfa7ecef commit_packed_refs(): use a staging file separate from the lockfile
We will want to be able to hold the lockfile for `packed-refs` even
after we have activated the new values. So use a separate tempfile,
`packed-refs.new`, as a place to stage the new contents of the
`packed-refs` file. For now this is all done within
`commit_packed_refs()`, but that will change shortly.
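
The scheme, roughly (paths literal, prototypes abridged, error handling
omitted):

  hold_lock_file_for_update(&refs->lock, "packed-refs", 0); /* creates packed-refs.lock */
  create_tempfile(&refs->tempfile, "packed-refs.new");      /* separate staging file */
  /* ... write the new contents into refs->tempfile ... */
  rename_tempfile(&refs->tempfile, "packed-refs");          /* activate the new values */
  /* packed-refs.lock is still held and can be released whenever convenient */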

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty 3478983b51 commit_packed_refs(): report errors rather than dying
Report errors via a `struct strbuf *err` rather than by calling
`die()`. To enable this goal, change `write_packed_entry()` to report
errors via a return value and `errno` rather than dying.
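
A sketch of the new convention (prototype abridged; the oid/peeled
arguments reflect the pre-object_id code):

  if (write_packed_entry(out, iter->refname, oid->hash,
                         peeled ? peeled->hash : NULL)) {
          strbuf_addf(err, "error writing to %s: %s",
                      get_tempfile_path(&refs->tempfile), strerror(errno));
          goto write_error;
  }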

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty e0cc8ac820 packed_ref_store: make class into a subclass of `ref_store`
Add the infrastructure to make `packed_ref_store` implement
`ref_store`, at least formally (few of the methods are actually
implemented yet). Change the functions in its interface to take
`ref_store *` arguments. Change `files_ref_store` to hold its packed
backend as a `ref_store *` and to call functions via the virtual `ref_store`
interface where possible. This also means that a few
`packed_ref_store` functions can become static.
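
A hedged sketch of the subclassing layout (members illustrative):
embedding the generic `ref_store` as the first field lets a
`packed_ref_store *` be handed around as a `ref_store *`:

  struct packed_ref_store {
          struct ref_store base;          /* generic part, including the vtable */
          unsigned int store_flags;
          char *path;                     /* path of the packed-refs file */
          struct packed_ref_cache *cache;
          struct lock_file lock;
  };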

This is a work in progress. Some more `ref_store` methods will soon be
implemented (e.g., those having to do with reference transactions).
But some of them will never be implemented (e.g., those having to do
with symrefs or reflogs).

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:33 -07:00
Michael Haggerty 67be7c5a59 packed-backend: new module for handling packed references
Now that the interface between `files_ref_store` and
`packed_ref_store` is relatively narrow, move the latter into a new
module, "refs/packed-backend.h" and "refs/packed-backend.c". It still
doesn't quite implement the `ref_store` interface, but it will soon.

This commit moves code around and adjusts its visibility, but doesn't
change anything.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:32 -07:00
Michael Haggerty d13fa1a9ba packed_read_raw_ref(): new function, replacing `resolve_packed_ref()`
Add a new function, `packed_read_raw_ref()`, which is nearly a
`read_raw_ref_fn`. Use it in place of `resolve_packed_ref()`.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:32 -07:00
Michael Haggerty 38b86e81ae packed_ref_store: support iteration
Add the infrastructure to iterate over a `packed_ref_store`. It's a
lot of boilerplate, but it's all part of a campaign to make
`packed_ref_store` implement `ref_store`. In the future, this iterator
will work much differently.
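
A hedged sketch of the boilerplate involved (member names illustrative):
a `ref_iterator` subtype that wraps an iterator over the packed-ref
cache and forwards advance/peel/abort through its vtable:

  struct packed_ref_iterator {
          struct ref_iterator base;       /* vtable: advance, peel, abort */
          struct packed_ref_cache *cache; /* keeps the cache alive while iterating */
          struct ref_iterator *iter0;     /* underlying cache iterator */
          unsigned int flags;
  };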

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-23 13:27:32 -07:00