Right now, we don't close the read end of the pipe when git-upload-pack
runs git-pack-objects, so if the latter dies (which seems to be the norm
when the client disconnects) we hang forever (why don't we get SIGALRM?)
instead of dying with SIGPIPE.
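
The underlying pipe hygiene, as a generic sketch (illustrative only, not
the actual upload-pack code): each process must close the pipe ends it
does not use, or the peer never sees EOF or SIGPIPE.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pid_t pid;
        char buf[64];
        ssize_t n;

        if (pipe(fd) < 0 || (pid = fork()) < 0)
            return 1;
        if (!pid) {
            /* writer: drop the read end we inherited; otherwise a
             * dead reader can never raise SIGPIPE for our writes */
            close(fd[0]);
            if (write(fd[1], "hello\n", 6) < 0)
                _exit(1);
            _exit(0);
        }
        /* reader: drop the write end, or EOF never arrives once
         * the child exits */
        close(fd[1]);
        while ((n = read(fd[0], buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);
        close(fd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }
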
Thanks to Johannes Schindelin <Johannes.Schindelin@gmx.de> for
pointing out where this close() needed to go.
This patch has been tested on kernel.org for several weeks and appears
to resolve the problem of git-upload-pack processes hanging around
forever.
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* 'js/fetch-progress' (early part):
Fixup no-progress for fetch & clone
fetch & clone: do not output progress when not on a tty
Conflicts:
git-fetch.sh
The intent of the commit 'fetch & clone: do not output progress when
not on a tty' was to make fetching and cloning less chatty when the
output was not going to a terminal (such as in a cron job).
However, there was a serious thinko in that commit. It assumed that
the client _and_ the server got this update at the same time. But
this is obviously not the case, and therefore upload-pack died on
seeing the option "--no-progress".
This patch fixes that issue by making it a protocol option. So, until
your server is updated, you still see the progress, but once the
server has this patch, it will be quiet.
A minor issue was also fixed: when cloning, the checkout did not
heed no_progress.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The previous step mechanically converted uses of strncmp() with a
literal string even when the result is only used as a boolean:
if (!strncmp("foo", arg, 3)) ==> if (!(-prefixcmp(arg, "foo")))
This step manually cleans them up to read:
if (!prefixcmp(arg, "foo"))
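
For reference, prefixcmp() compares a string against a literal prefix,
strncmp()-style; a minimal sketch of such a helper (the real definition
in git's headers may differ in detail):

    #include <string.h>

    /* sketch: zero iff "prefix" is a prefix of "str" */
    static int prefixcmp(const char *str, const char *prefix)
    {
        return strncmp(str, prefix, strlen(prefix));
    }
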
Signed-off-by: Junio C Hamano <junkio@cox.net>
This mechanically converts strncmp() to use prefixcmp(), but only when
the parameters match specific patterns, so that they can be verified
easily. Leftovers from this will be fixed in a separate step, including
idiotic conversions like
if (!strncmp("foo", arg, 3))
=>
if (!(-prefixcmp(arg, "foo")))
This was done with the following script, px.perl:

    #!/usr/bin/perl -i.bak -p
    if (/strncmp\(([^,]+), "([^\\"]*)", (\d+)\)/ && (length($2) == $3)) {
            s|strncmp\(([^,]+), "([^\\"]*)", (\d+)\)|prefixcmp($1, "$2")|;
    }
    if (/strncmp\("([^\\"]*)", ([^,]+), (\d+)\)/ && (length($1) == $3)) {
            s|strncmp\("([^\\"]*)", ([^,]+), (\d+)\)|(-prefixcmp($2, "$1"))|;
    }

and running:

    $ git grep -l strncmp -- '*.c' | xargs perl px.perl
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds the option "--no-progress" to fetch-pack and upload-pack,
and makes fetch and clone pass this option when stdout is not a tty.
While documenting that option, also document the --strict and --timeout
options for upload-pack.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We currently do not support fetching/cloning from a shallow repository
or pushing into one. Make sure these are not attempted so that we
do not have to worry about corrupting repositories needlessly.
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked write() calls. Often we expect
write() to write exactly the size we requested or fail; this fails to
handle interrupts and short writes. Switch to using the new
write_in_full(). Otherwise we at a minimum need to check for EINTR
and EAGAIN; where that is appropriate, use xwrite().
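
The intended semantics, as a rough sketch (the real write_in_full()
lives in git's wrapper code and may differ in detail):

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* sketch: write exactly count bytes, or return -1 */
    static ssize_t write_in_full(int fd, const void *buf, size_t count)
    {
        const char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t n = write(fd, p, count);
            if (n < 0) {
                if (errno == EINTR || errno == EAGAIN)
                    continue;       /* the part xwrite() handles */
                return -1;
            }
            if (!n)
                return -1;          /* e.g. out of space */
            p += n;
            count -= n;
            total += n;
        }
        return total;
    }

xwrite() keeps only the retry-on-EINTR/EAGAIN part, for callers that can
cope with short writes themselves.
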
Note, the changes to config handling are much larger and handled
in the next patch in the sequence.
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have a number of badly checked read() calls. Often we expect
read() to read exactly the size we requested or fail; this fails to
handle interrupts and short reads. Add a read_in_full() providing
those semantics. Otherwise we at a minimum need to check for EINTR
and EAGAIN; where that is appropriate, use xread().
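
A corresponding sketch of the read side (again only an approximation of
the real helper): keep reading until the requested count arrives, retry
on EINTR/EAGAIN, and return a short count only at EOF.

    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* sketch: a short return can only mean EOF; -1 means error */
    static ssize_t read_in_full(int fd, void *buf, size_t count)
    {
        char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t n = read(fd, p, count);
            if (n < 0) {
                if (errno == EINTR || errno == EAGAIN)
                    continue;       /* the part xread() handles */
                return -1;
            }
            if (!n)
                break;              /* EOF */
            p += n;
            count -= n;
            total += n;
        }
        return total;
    }
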
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is to adjust to:
count-objects -v: show number of packs as well.
which will break a test in this series.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a mechanical clean-up of the way *.c files include
system header files.
(1) sources under compat/, platform sha-1 implementations, and
xdelta code are exempt from the following rules;
(2) the first #include must be "git-compat-util.h" or one of
our own header files that include it first (e.g. config.h,
builtin.h, pkt-line.h);
(3) system headers that are included in "git-compat-util.h"
need not be included in individual C source files.
(4) "git-compat-util.h" does not have to include subsystem
specific header files (e.g. expat.h).
Signed-off-by: Junio C Hamano <junkio@cox.net>
A commit may have been put on the shallow list, and then reached from
another branch and marked NOT_SHALLOW without being removed from the
list.
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Now, by saying "git fetch --depth <n> <repo>" you can deepen
a shallow repository.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
By specifying a depth, you can now clone a repository such that
all fetched ancestor-chains' length is at most "depth". For example,
if the upstream repository has only 2 branches ("A" and "B"), which
are linear, and you specify depth 3, you will get A, A~1, A~2, A~3,
B, B~1, B~2, and B~3. The ends are automatically made shallow
commits.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
A shallow commit is a commit whose parents are "grafted away",
i.e. the commit appears as if it were a root.
Since these shallow commits should not be edited by the user, but
only by core git, they are recorded in the file $GIT_DIR/shallow.
A repository containing shallow commits is called shallow.
The advantage of a shallow repository is that even if the upstream
contains lots of history, your local (shallow) repository need not
occupy much disk space.
The disadvantage is that you might miss a merge base when pulling
some remote branch.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
It is trivial to do now, and it is needed for the upcoming shallow
clone stuff.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* lj/refs: (63 commits)
Fix show-ref usagestring
t3200: git-branch testsuite update
sha1_name.c: avoid compilation warnings.
Make git-branch a builtin
ref-log: fix D/F conflict coming from deleted refs.
git-revert with conflicts to behave as git-merge with conflicts
core.logallrefupdates thinko-fix
git-pack-refs --all
core.logallrefupdates create new log file only for branch heads.
Remove bashism from t3210-pack-refs.sh
ref-log: allow ref@{count} syntax.
pack-refs: call fflush before fsync.
pack-refs: use lockfile as everybody else does.
git-fetch: do not look into $GIT_DIR/refs to see if a tag exists.
lock_ref_sha1_basic does not remove empty directories on BSD
Do not create tag leading directories since git update-ref does it.
Check that a tag exists using show-ref instead of looking for the ref file.
Use git-update-ref to delete a tag instead of rm()ing the ref file.
Fix refs.c::repack_without_ref() clean-up path
Clean up "git-branch.sh" and add remove recursive dir test cases.
...
There is no reason not to always do this when both ends agree.
Therefore a client that can accept offsets to delta base always sends
the "ofs-delta" flag. The server will stream a pack with or without
offset to delta base depending on whether that flag is provided,
at no additional cost.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This adds an "int *flag" parameter to resolve_ref() and makes
the for_each_ref() family call the callback function with an extra
"int flag" parameter. They are used to give two bits of
information (REF_ISSYMREF and REF_ISPACKED) about the ref.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a long overdue fix to the API for for_each_ref() family
of functions. It allows the callers to specify a callback data
pointer, so that the caller does not have to use static
variables to communicate with the callback function.
The updated for_each_ref() family takes a function of type
int (*fn)(const char *, const unsigned char *, void *)
and a void pointer as parameters, and calls the function with
the name of the ref, its SHA-1, and the caller-supplied void
pointer as parameters.
The commit updates two callers, builtin-name-rev.c and
builtin-pack-refs.c as an example.
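
For illustration, a hypothetical caller can now pass its state through
the void pointer instead of a file-scope static (the callback type is
taken from above; the counting code itself is made up):

    /* hypothetical callback: count refs through cb_data */
    static int count_ref(const char *refname, const unsigned char *sha1,
                         void *cb_data)
    {
        int *count = cb_data;
        (*count)++;
        return 0;
    }

    static int count_all_refs(void)
    {
        int count = 0;
        for_each_ref(count_ref, &count);
        return count;
    }
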
Signed-off-by: Junio C Hamano <junkio@cox.net>
The original side-band support added to the upload-pack protocol used the
default 1000-byte packet length. The pkt-line format allows up to 64k, so
prepare the receiver for the maximum size, and have the uploader and
downloader negotiate if larger packet length is allowed.
Signed-off-by: Junio C Hamano <junkio@cox.net>
The server side support; this is just the very low level, and the
caller needs to know which band it wants to send things out.
Signed-off-by: Junio C Hamano <junkio@cox.net>
(cherry picked from b786552b67878c7780c50def4c069d46dc54efbe commit)
This abstracts away the size of the hash values when copying them
from memory location to memory location, much as the introduction
of hashcmp abstracted away hash value comparison.
A few call sites were using char* rather than unsigned char*, so
I added the cast rather than loosening hashcpy to take void*. This is
a reasonable tradeoff, as most call sites already use unsigned char*
and the existing hashcmp is also declared to take unsigned char*.
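
A rough sketch of the new helper (SHA-1 values are 20 bytes; the real
definition in git's headers may differ in detail):

    #include <string.h>

    /* sketch: copy one 20-byte SHA-1 value to another location */
    static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
    {
        memcpy(sha_dst, sha_src, 20);
    }
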
[jc: Split the patch into the "master" part, to be followed by a
patch for merge-recursive.c which is not in "master" yet.
Fixed the cast in the latter hunk to combine-diff.c which was
wrong in the original.
Also converted ones left-over in combine-diff.c, diff-lib.c and
upload-pack.c ]
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
[jc: I needed to hand merge the changes to the updated codebase,
so the result needs to be checked.]
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When the downloader has more roots than we do, we will see many
"have"s that lead to a root we do not have and let the other
side walk all the way to that root. Tell them to stop sending the
branch's ancestors by sending a fake "ACK" when we already have a
common ancestor for the wanted refs.
Signed-off-by: Junio C Hamano <junkio@cox.net>
No changes to what it does, but separating the codepath clearly
with an if ... else if ... chain makes it easier to follow.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This updates the type-enumeration constants introduced to reduce
the memory footprint of "struct object" to match the type bits
already used in the packfile format, by removing the former
(i.e. TYPE_* constant macros) and using the latter (i.e. enum
object_type) throughout the code for consistency.
Eventually we can stop passing around the "type strings"
entirely, and this will help - no confusion about two different
integer enumerations.
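
For reference, the object type numbers used by the pack format look
like this (a sketch; the exact spelling in the code may differ):

    /* type numbers as used in the packfile format */
    enum object_type {
        OBJ_NONE   = 0,
        OBJ_COMMIT = 1,
        OBJ_TREE   = 2,
        OBJ_BLOB   = 3,
        OBJ_TAG    = 4
        /* higher values are used for delta representations in packs */
    };
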
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
By using the object_array data structure, lift the old limitation of
MAX_HAS/MAX_NEEDS. While we are at it, rename the variables
that hold the objects we use to compute the common ancestor to match
the messages used at the protocol level. What the other end has
and we also have are "have"s, and what the other end asks for are
"want"s.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Merlyn reports that <sys/poll.h> on OpenBSD 3.8 includes <ctype.h>
and having our custom ctype (done in git-compat-util.h which is
included via cache.h) makes upload-pack.c uncompilable. Try to
work around it by including the system headers first.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This implements a protocol extension between fetch-pack and
upload-pack to allow stderr stream from upload-pack (primarily
used for the progress bar display) to be passed back.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This does not implement sideband for propagating the status to
the downloader yet, but adds code to capture the standard error
output from the pack-objects process in preparation for sending
it off to the client when the protocol extension allows us to do
so.
Signed-off-by: Junio C Hamano <junkio@cox.net>
When the repository on the remote side is corrupted, rev-list
spawned from upload-pack would die with an error, but pack-objects,
which reads from the rev-list, would happily create a packfile that can
be unpacked by the downloader. When this happens, the resulting
packfile is not corrupted and unpacks cleanly, but the list of
the objects contained in it is not what the protocol exchange
computed.
This update makes upload-pack monitor its subprocesses and, when
either of them dies with an error, send incomplete pack data to the
downloader to cause it to fail.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This shrinks "struct object" by a small amount, by getting rid of the
"type" pointer (which pointed at one of the constant type strings) and
replacing it with a 3-bit bitfield instead.
In addition, we merge the bitfields and the "flags" field, which
incidentally should also remove a useless 4-byte padding from the object
when in 64-bit mode.
Now, our "struct object" is still too damn large, but it's now less
obviously bloated, and of the remaining fields, only the "util" (which is
not used by most things) is clearly something that should be eventually
discarded.
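
The resulting layout is roughly the following (field widths and ordering
are an approximation, not a quote of the patch):

    struct object {
        unsigned parsed : 1;
        unsigned used : 1;
        unsigned type : 3;      /* one of the TYPE_xxx constants */
        unsigned flags : 27;    /* shares the same word as the bits above */
        unsigned char sha1[20];
        void *util;             /* still present, as noted above */
    };
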
This shrinks the "git-rev-list --all" memory use by about 2.5% on the
kernel archive (and, perhaps more importantly, on the larger mozilla
archive). That may not sound like much, but I suspect it's more on a
64-bit platform.
There are other remaining inefficiencies (the parent lists, for example,
probably have horrible malloc overhead), but this was pretty obvious.
Most of the patch is just changing the comparison of the "type" pointer
from one of the constant string pointers to the appropriate new TYPE_xxx
small integer constant.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Mark Wooding noticed there was a type mismatch warning in git.c; this
patch does things slightly differently (mostly tightening const) and
was what I was holding onto, waiting for the setup-revisions change
to be merged into the master branch.
Signed-off-by: Junio C Hamano <junkio@cox.net>
* jc/rev-list:
rev-list --objects: use full pathname to help hashing.
rev-list --objects-edge: remove duplicated edge commit output.
rev-list --objects-edge
* jc/pack-thin:
pack-objects: hash basename and direname a bit differently.
pack-objects: allow "thin" packs to exceed depth limits
pack-objects: use full pathname to help hashing with "thin" pack.
pack-objects: thin pack micro-optimization.
Use thin pack transfer in "git fetch".
Add git-push --thin.
send-pack --thin: use "thin pack" delta transfer.
Thin pack - create packfile with missing delta base.
Conflicts:
pack-objects.c (taking "next")
send-pack.c (taking "next")
The git suite may not be in PATH (and thus programs such as
git-send-pack could not exec git-rev-list). Thus there is a need for
logic that will locate these programs. Modifying PATH is not
desirable as it results in behavior differing from the user's
intentions, as we may end up prepending "/usr/bin" to PATH.
- git C programs will use exec*_git_cmd() APIs to exec sub-commands.
- exec*_git_cmd() will execute a git program by searching for it in
the following directories:
1. --exec-path (as used by "git")
2. The GIT_EXEC_PATH environment variable.
3. $(gitexecdir) as set in Makefile (default value $(bindir)).
- git wrapper will modify PATH as before to enable shell scripts to
invoke "git-foo" commands.
Ideally, shell scripts should use the git wrapper to become independent
of PATH, and then modifying PATH will not be necessary.
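
On the C side, a caller might then spawn a sub-command along these lines
(illustrative only; the argv shown is made up):

    /* hypothetical caller; the argv contents are made up */
    const char *argv[] = { "rev-list", "--objects", "HEAD", NULL };

    /* searches --exec-path, then GIT_EXEC_PATH, then $(gitexecdir) */
    execv_git_cmd(argv);
    exit(1);    /* only reached if the exec itself failed */
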
[jc: with minor updates after a brief review.]
Signed-off-by: Michal Ostrowski <mostrows@watson.ibm.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch basically just removes the redundant code from
{receive,upload}-pack.c in favour of the library code in path.c.
Signed-off-by: Andreas Ericsson <ae@op5.se>
Signed-off-by: Junio C Hamano <junkio@cox.net>
One caller of deref_tag() was not careful enough to make sure
what deref_tag() returned was not NULL (i.e. we found a tag
object that points at an object we do not have). Fix it, and
warn about refs that point at such an incomplete tag where
needed.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This implements three things (trying very hard to be backwards
compatible):
It sends the "multi_ack" capability via the mechanism proposed by
Sergey Vlasov.
When the client sends "multi_ack" with at least one "want", multi_ack
is enabled.
When multi_ack is enabled, "continue" is appended to each "ACK" until
either the server cannot store more refs, or "done" is received.
In contrast to the original protocol, as long as "continue" is sent,
flushes are answered by a "NAK" (not just until an "ACK" was sent),
and if "continue" was sent at least once, the last message is an
"ACK" without "continue".
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch is based on Junio's proposal. It marks parents of common revs
so that they do not clutter up the has_sha1 array.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
upload-pack would set create_full_pack=1 if nr_has==0, but would ask later
if nr_needs<MAX_NEEDS. If that proves true, it would ignore create_full_pack,
and arguments would be written into unreserved memory.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This makes sure that what the other end asks for is among what we
offered to give them. Otherwise we would end up running
git-rev-list with 20-byte nonsense, only to find it either die
(because the object was not found) or waste time (because we
ended up serving that phony 'client').
Also avoid wasting the needs_sha1 pool on recording duplicates, and
detect cloning requests better.
[this used to be on top of Johannes's fetch-pack enhancements,
which we are rewinding for further testing for now, so
the commit is rebased.]
Signed-off-by: Junio C Hamano <junkio@cox.net>
The current fetch/upload protocol works like this:
- client sends revs it wants to have via "want" messages
- client sends a flush message (message with len 0)
- client sends revs it has via "have" messages
- after one window (32 revs), a flush is sent
- after each subsequent window, a flush is sent, and an ACK/NAK is received.
(NAK means that the server does not have any of the transmitted revs;
an ACK also carries the sha1 of a rev the server has)
- when the first ACK is received, client sends "done", and does not expect
any further messages
One special case, though:
- if no ACK is received (only NAKs), and the client runs out of revs to send,
"done" is sent, and server sends just one more "NAK"
A smarter scheme, which actually has a chance to detect more than one
common rev, would be to send more than just one ACK. This patch implements
the server side of the following extension to the protocol:
- client sends at least one "want" message with "multi_ack" appended, like
"want 1234567890123456789012345678901234567890 multi_ack"
- if the server understands that extension, it will send ACK messages for all
revs it has, not just the first one
- server appends "continue" to the ACK messages like
"ACK 1234567890123456789012345678901234567890 continue"
until it has MAX_HAS-1 revs. In this manner, the client knows when to
stop sending revs by checking for the substring "continue" (and
further knows that the server understands multi_ack).
In this manner, the protocol stays backwards compatible, since the
extension is enabled only when the client sends "want ... multi_ack"
and the server answers with "ACK ... continue".
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch is based on Junio's proposal. It marks parents of common revs
so that they do not clutter up the has_sha1 array.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
A later round will further improve fetch-pack not to send useless "have"s,
but in the meantime, increase it to help upload-pack find more common
commits, as discussed on the list.
Signed-off-by: Junio C Hamano <junkio@cox.net>
It turns out that not only did git-daemon do DWIM, but git-upload-pack
does as well. This is bad; security checks have to be performed *after*
canonicalization, not before.
Additionally, the current git-daemon can be trivially DoSed by spewing
SYNs at the target port.
This patch adds a --strict option to git-upload-pack to disable all
DWIM, a --timeout option to git-daemon and git-upload-pack, and an
--init-timeout option to git-daemon (which is typically set to a much
lower value, since the initial request should come immediately from the
client.)
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This updates git-ls-remote to show SHA1 names of objects that are
referred by tags, in the "ref^{}" notation.
This would make git-findtags (without -t flag) almost trivial.
git-peek-remote . |
sed -ne "s:^$target "'refs/tags/\(.*\)^{}$:\1:p'
Also Pasky could do:
git-ls-remote --tags $remote |
sed -ne 's:\( refs/tags/.*\)^{}$:\1:p'
to find out what object each of the remote tags refers to, and
if he has one locally, run "git-fetch $remote tag $tagname" to
automatically catch up with the upstream tags.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Cloning from a repository with more than 256 refs (heads and tags
included) will choke, because upload-pack has a built-in limit of
feeding not more than MAX_NEEDS (currently 256) heads to underlying
git-rev-list. This is a problem when cloning a repository with many
tags, like http://www.linux-mips.org/pub/scm/linux.git, which has 290+
tags.
This commit introduces a new flag, --all, to git-rev-list, to include
all refs in the repository. Updated upload-pack detects requests that
ask for more than MAX_NEEDS refs, and sends everything back instead.
We may probably want to tweak the definitions of MAX_NEEDS and
MAX_HAS, but that is a separate topic.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Solaris 8 doesn't have the newer unsetenv() and setenv()
functions, so replace them with putenv(). The one use of
unsetenv() in fsck-cache.c now sets GIT_ALTERNATE_OBJECT_DIRECTORIES
to the empty string. Every place that variable is used, NULLs are
also replaced with empty strings, so it's OK.
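
The replacement pattern is roughly this (an illustrative stand-in, not
the exact hunk):

    #include <stdio.h>
    #include <stdlib.h>

    /* illustrative stand-in for setenv(name, value, 1); putenv() keeps
     * the pointer, so the buffer must outlive the call (a single static
     * buffer is enough for a sketch, not for general use) */
    static void set_env(const char *name, const char *value)
    {
        static char buf[1024];

        snprintf(buf, sizeof(buf), "%s=%s", name, value);
        putenv(buf);
    }
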
Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Now that git-clone-pack exists, we actually have somebody requesting
more than just a single head in a pack. So allow the Jeffs of this
world to clone things with tens of heads.
"git_path()" returns a static pathname pointer into the git directory
using a printf-like format specifier.
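
For instance (a hypothetical call; the format string is made up):

    /* formats a path inside $GIT_DIR into a static buffer */
    printf("%s\n", git_path("refs/heads/%s", "master"));
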
"head_ref()" works like "for_each_ref()", except for just the HEAD.
It returns the resulting SHA1 on stdout, so you can do
remote=$(git-fetch-pack host:dir branchname)
and it will unpack the objects and "remote" will be the SHA1 name of the
branch on the other side. You can then save that off, or merge it, or
whatever.
It's meant to be used by "git fetch" for the local and ssh case.
It doesn't actually do the fetching now, but it does discover the common
commit point.