Add discussion section to git-checkout documentation and mention
detached HEAD in repository-layout document.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Branches are only contained by a packfile if the branch actually
had its most recent commit in that packfile. So new branches are
set to MAX_PACK_ID to ensure they don't cause their commit to be
listed as part of the first packfile when it closes out, if the
commit actually existed before fast-import started.
Also corrected the type of last_commit to be uintmax_t to prevent
overflow and wraparound on very large imports. Though that is
highly unlikely to occur as we're talking 4 billion commits, which
no real project has right now.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Apparently the git convention is to declare any function which
takes no arguments as taking void. I did not do this during the
early fast-import development, but should have.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The length of an atom string cannot be negative. So make it
explicit and declare it as an unsigned value.
The shift width in a mark table node also cannot be negative.
I'm also moving it to after the pointer arrays to prevent any
possible alignment problems on a 64 bit system.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Now that fast-import uses uintmax_t (the largest available unsigned
integer type) for marks we don't want to say it's an unsigned 32
bit integer in ASCII base 10 notation. It could be much larger,
especially on 64 bit systems, and especially if a frontend uses
a very large number of marks (1 per file revision on a very, very
large import).
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This allows "git checkout -m <other-branch>" to notice renames and
carry local changes in the working tree forward.
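For example (the file and branch names here are only illustrative):

    $ echo '/* local tweak */' >>hello.c   # uncommitted change in the working tree
    $ git checkout -m next                 # rename detection and a merge carry it onto "next"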
Signed-off-by: Junio C Hamano <junkio@cox.net>
* Promiscuous pull shows the distributed nature of git better.
* Add a new step after that to teach "remote add".
* Highlight that with the shorthand defined you will get
  remote-tracking branches for free (see the example below).
* Fix Alice's workflow.
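A rough sketch of the commands this part of the tutorial now revolves
around (the names, path, and listed branch are illustrative):

    alice$ git remote add bob /home/bob/myrepo
    alice$ git fetch bob
    alice$ git branch -r
      bob/master
    alice$ git merge bob/master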
Signed-off-by: Santi Béjar <sbejar@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
From time to time, I would get this error:
[...]
sed: -e expression #8, char 41: Unterminated `s' command
make: *** [git-add--interactive] Error 1
Turns out that the function WriteMakefile() called in Makefile.PL
outputs the message "Writing perl.mak for Git" to stdout! Thus,
the output of "make -C perl -s --no-print-directory instlibdir"
would be prefixed by that message whenever Makefile.PL was newer
than perl.mak.
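Roughly what the polluted output looked like (the path is made up):

    $ make -C perl -s --no-print-directory instlibdir
    Writing perl.mak for Git
    /usr/lib/perl5/site_perl

That stray first line then gets spliced into the sed expression used
to generate git-add--interactive, which is where the unterminated `s'
command above comes from.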
This is fixed by redirecting stdout to stderr in Makefile.PL.
Signed-off-by: Johannes E. Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
To help callers repack very large repositories into a series of
packfiles fast-import now outputs the last commits/tags it wrote to
a packfile when it prints out the packfile name. This information
can be fed to pack-objects --revs to repack. For the first pack
of an initial import this is pretty easy (just feed those SHA1s on
stdin), but for subsequent packs you want to feed that pack's
final SHA1s along with all prior packs' SHA1s prefixed with
the negation operator. This way the prior packs' data does not
get included into the subsequent pack.
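For example, a caller could repack the second packfile along these
lines (the *.final-shas files are hypothetical, holding the SHA1s
fast-import reported for each pack):

    {
        sed 's/^/^/' pack1.final-shas   # prior pack's SHA1s, negated
        cat pack2.final-shas            # this pack's final commits/tags
    } | git pack-objects --revs pack2-repacked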
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The text is just copied from git-send-pack.txt.
Signed-off-by: Uwe Kleine-König <zeisberg@informatik.uni-freiburg.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Since object_count is limited to 'unsigned long' (really an
unsigned 32 bit integer value) by the pack file format we may as
well use exactly that type here in fast-import for that counter.
An earlier change by me incorrectly made it uintmax_t.
But since object_count is a counter for the current packfile only,
we don't want to output its value at the end. Instead we should
sum up the individual type counters and report that total, as that
will cover all of the packfiles.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Apparently amd64 has defined 'unsigned long' to be a 64 bit value,
which means -1 was way over the 4 GiB packfile limit. Whoops.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
People who redirect STDOUT should still see STDERR
prompts interactively.
STDERR should always be unbuffered, so the prompts
should always show up. If for some reason it is not,
we still explicitly flush it by calling STDERR->flush.
The svn command-line client prompts to STDERR, too.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The reflog code clears empty directories when rename returns
either EISDIR or ENOTDIR. Seems to be the only place.
Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Not all echos know -n. This was causing a test failure in
t5401-update-hooks.sh, but not t3800-mktag.sh for some reason.
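A portable way to emit a string without the trailing newline is
printf, e.g.:

    printf '%s' "some data without a trailing newline"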
Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Signed-off-by: Junio C Hamano <junkio@cox.net>
AIX 5.3 seems to need _ALL_SOURCE for struct addrinfo, but that
introduces a struct list in grp.h.
Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Much like the pack_sha1, the pack_fd is an unnecessary global
variable; we already have the fd stored in our struct packed_git
*pack_data so that the core library functions in sha1_file.c are
able to look up and decompress object data that we have previously
written. Keeping an extra copy of this value in our own variable
is just a hold-over from earlier versions of fast-import and is
now completely unnecessary.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Because we are renaming the packfile into its final destination we
need to be sure it's not open when the rename is called, otherwise
some operating systems (e.g. Windows) may prevent the rename from
occurring.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Because fast-import automatically updates all references (heads
and tags) at the end of its run, the repository is corrupt unless
the objects are available in the .git/objects/pack directory prior
to the refs being modified. The easiest way to ensure that is true
is to move the packfile and its associated index directly into the
.git/objects/pack directory as soon as we have finished output to it.
But the only safe way to do this is to create a temporary .keep
file for that pack, so we use the same tricks that index-pack uses
when it's being invoked by receive-pack.
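The sequence is roughly the following (paths are illustrative, with
$sha1 standing in for the pack's checksum):

    touch .git/objects/pack/pack-$sha1.keep   # guard against repack/prune
    mv tmp_pack .git/objects/pack/pack-$sha1.pack
    mv tmp_idx  .git/objects/pack/pack-$sha1.idx
    # ... refs are updated at the end of the run ...
    rm .git/objects/pack/pack-$sha1.keep      # drop the guard afterwards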
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Rather than maintaining our own packfile-level sha1 variable we
can make use of the one already available in struct packed_git.
It's meant for the SHA1 of the index but it can also hold the
SHA1 of the packfile itself between final checksumming of the
packfile and creation of the index.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Prior to git having read_in_full() fast-import used its own private
function yread to perform the header reading task. No sense in
keeping that around now that read_in_full is a public, stable
function.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If a frontend wants to use a mark per file revision and per commit
and is doing a truly huge import (such as a 32 GiB SVN repository)
we may need more than 2**32 unique mark values, especially if the
frontend is unable (or unwilling) to recycle mark values. For mark
idnums we should use the largest unsigned integer type available,
hoping that will be at least 64 bits when we are compiled as a 64
bit executable. This way we may consume huge amounts of memory
storing our mark table, but we'll at least be able to process
the entire import without failing.
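For instance, a frontend on a 64 bit system might legitimately emit
mark numbers above 2^32 (a hypothetical, minimal stream):

    printf 'blob\nmark :5000000000\ndata 6\nhello\n' | git fast-import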
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If we previously were using a delta but we needed to checkpoint the
current packfile and switch to a new packfile we need to throw away
the delta and compress the raw object by itself, as delta chains
cannot span non-thin packfiles. Unfortunately the output buffer
in this case needs to grow, as the size of the compressed object
may be quite a bit larger than the size of the compressed delta.
I've also avoided recompressing the object if we are checkpointing
and we didn't use a delta. In this case the output buffer is the
correct size and has already been populated with the right data;
we just need to close out the current packfile and open a new one.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
My bash refused to run the two scripts missing a #!, and it's
better to use the same line for all the scripts.
Signed-off-by: Jason Riedy <ejr@cs.berkeley.edu>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Instead, we complain to the user and suggest that they explicitly
specify the remote and branch. We depend on the exit status of
git-symbolic-ref, so let's go ahead and document that.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When we are on a detached HEAD, there is no current branch.
There is no reason to leak the error messages to the end user
since this is a situation we expect to see.
This adds a -q option to git-symbolic-ref to exit without issuing
an error message if the given name is not a symbolic ref.
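With the option a script can now silently handle the detached case,
for example:

    branch=$(git symbolic-ref -q HEAD) || echo "not on any branch"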
By the way, with or without this patch, there currently is no
good way to tell apart the failure modes of "git symbolic-ref HAED"
and "git symbolic-ref HEAD". Both say "is not a symbolic ref".
We may want to do something about it.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Caller scripts may want to know what packfiles the fast-import
process just wrote out for them. This is now output to stdout,
one packfile name per line, after we checkpoint each packfile.
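A wrapper can simply capture that output, along the lines of
(project.fi being a hypothetical import stream):

    git fast-import <project.fi >packfiles.txt   # one pack name per line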
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When the number of objects or number of bytes gets close to the limit
allowed by the packfile format (or configured on the command line by
our caller) we should automatically checkpoint the current packfile
and start a new one before writing the object out. This does however
require that we abandon the delta (if we had one) as it's not valid
in a new packfile.
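For example, assuming the --max-pack-size option fast-import accepts
for this limit, a caller could ask for smaller packs explicitly (value
and units as documented for the version in use):

    git fast-import --max-pack-size=<n> <project.fi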
I also added the simple rule that if we got a delta back but the
delta itself is the same size as or larger than the uncompressed
object, we ignore the delta and just store the object data. This
should avoid some really bad behavior caused by our current delta
strategy.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When we are generating multiple packfiles at once we only need
to scan the blocks of object_entry structs which contain objects
for the current packfile. Because the most recent blocks are at
the front of the linked list, and because all new objects going
into the current file are allocated from the front of that list,
we can stop scanning for objects as soon as we identify one which
doesn't belong to the current packfile.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
If the last packfile is going to be empty (has 0 objects) then it
shouldn't be kept after the import has terminated, as there is no
point to the packfile. So rather than hashing it and making the
index file, just delete the packfile.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>