Commit Graph

308 Commits

Author SHA1 Message Date
Junio C Hamano e82973cfb0 sha1_file.c (write_sha1_file): Detect close failure
This is in the same spirit as the earlier fix to write_sha1_from_fd().

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-27 12:56:01 -07:00
Jim Meyering 0d315468f3 sha1_file.c (write_sha1_from_fd): Detect close failure.
I stumbled across this in the context of the fchmod 0444 patch.
At first, I was going to unlink and call error like the two subsequent
tests do, but a failed write (above) provokes a "die", so I made
this do the same.  This is testing for a write failure, after all.

Signed-off-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-27 12:43:49 -07:00
Nicolas Pitre b5b8d8141a write_sha1_from_fd() should make new objects read-only
... like it is done everywhere else.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-24 22:32:41 -07:00
Nicolas Pitre 0e55181f29 make it more obvious that temporary files are temporary files
When some operation is interrupted (or "die()"d, or crashes) the
partial object/pack/index file may remain around.  Make it more obvious
from their names that those files are temporary and can be cleaned up
if no operation is in progress.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-24 22:32:39 -07:00
Linus Torvalds ac54c277f0 Be more careful about zlib return values
When creating a new object, we use "deflate(stream, Z_FINISH)" in a loop
until it no longer returns Z_OK, and then we do "deflateEnd()" to finish
up business.

That should all work, but the fact is, it's not how you're _supposed_ to
use the zlib return values:

 - deflate() should never return Z_OK in the first place, except if we
   need to increase the output buffer size (which we're not doing, and
   should never need to do, since we pre-allocated a buffer that is
   supposed to be able to hold the output in full). So the "while()" loop
   was incorrect: Z_OK doesn't actually mean "ok, continue", it means "ok,
   allocate more memory for me and continue"!

 - if we got an error return, we would consider it to be end-of-stream,
   but it could be some internal zlib error.  In short, we should check
   for Z_STREAM_END explicitly, since that's the only valid return value
   anyway for the Z_FINISH case.

 - we never checked deflateEnd() return codes at all.

Now, admittedly, none of these issues should ever happen, unless there is
some internal bug in zlib. So this patch should make zero difference, but
it seems to be the right thing to do.

We should probably be anal and check the return value of "deflateInit()"
too!
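
Concretely, the stricter pattern boils down to something like this (a sketch, assuming the output buffer was pre-sized with deflateBound(); die() is git's usual fatal-error helper):

  int ret;

  ret = deflate(&stream, Z_FINISH);
  if (ret != Z_STREAM_END)
          /* the only valid Z_FINISH result with a big-enough buffer */
          die("unable to deflate new object (%d)", ret);

  ret = deflateEnd(&stream);
  if (ret != Z_OK)
          die("deflateEnd failed (%d)", ret);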

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-20 22:17:32 -07:00
Nicolas Pitre ce9fbf16e0 index-pack: use hash_sha1_file()
Use hash_sha1_file() instead of duplicating code to compute object SHA1.
While at it, make it accept a const pointer.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-20 22:09:57 -07:00
Linus Torvalds 456cdf6edb Fix loose object uncompression check.
The thing is, if the output buffer is empty, we should *still* actually
use the zlib routines to *unpack* that empty output buffer.

But we had a test that said "only unpack if we still expect more output".

So we wouldn't use up all the zlib stream, because we felt that we didn't
need it, because we already had all the bytes we wanted. And it was
"true": we did have all the output data. We just needed to also eat all
the input data!

We've had this bug before - thinking that we don't need to inflate()
anything because we already had it all...
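
The fixed loop boils down to roughly this (a sketch; error() is git's non-fatal error helper, and the real code tracks a bit more state):

  /* keep calling zlib until it reports the end of the stream itself,
   * even when avail_out is already zero -- the trailer must be eaten */
  while (status == Z_OK)
          status = inflate(&stream, Z_FINISH);
  if (status != Z_STREAM_END || stream.avail_in)
          return error("corrupt loose object");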

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-19 23:13:17 -07:00
Nicolas Pitre 5e08ecbff2 use a LRU eviction policy for the delta base cache
This provides a smoother degradation in performance when the cache
gets thrashed due to the delta_base_cache_limit being reached.  Limited
testing with really small delta_base_cache_limit values appears to confirm
this.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-19 18:16:02 -07:00
Nicolas Pitre 3358004a00 clean up the delta base cache size a bit
Currently there are 3 different ways to deal with the cache size.
Let's stick to only one.  The compiler is smart enough to produce the exact
same code in those cases anyway.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-19 18:15:59 -07:00
Shawn O. Pearce 18bdec1118 Limit the size of the new delta_base_cache
The new configuration variable core.deltaBaseCacheLimit allows the
user to control how much memory they are willing to give to Git for
caching base objects of deltas.  This is not normally meant to be
a user tweakable knob; the "out of the box" settings are meant to
be suitable for almost all workloads.

We default to 16 MiB under the assumption that the cache is not
meant to consume all of the user's available memory, and that the
cache's main purpose is to cache trees, for faster path limiting
during revision traversal.  Since trees tend to be relatively small
objects, this relatively small limit should still allow a large
number of objects.

On the other hand we don't want the cache to start storing 200
different versions of a 200 MiB blob, as this could easily blow
the entire address space of a 32 bit process.

We evict OBJ_BLOB from the cache first (credit goes to Junio) as
we want to favor OBJ_TREE within the cache.  These are the objects
that have the highest inflate() startup penalty, as they tend to
be small and thus don't have that much of a chance to amortize
that penalty over the entire data.
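
For those who do want to tweak it anyway, the knob is a plain byte count in the config file (an illustration; the default corresponds to 16777216, i.e. 16 MiB):

	[core]
		; raise the delta base cache to 32 MiB for tree-heavy traversals
		deltaBaseCacheLimit = 33554432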

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-18 22:43:37 -07:00
Nicolas Pitre a0cba10847 Reuse cached data out of delta base cache.
A malloc() + memcpy() will always be faster than mmap() +
malloc() + inflate().  If the data is already there it is
certainly better to copy it straight away.

With this patch I can do 'git log drivers/scsi/ >
/dev/null' about 7% faster.  I bet it might be even more on
those platforms with bad mmap() support.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-18 15:36:59 -07:00
Linus Torvalds e5e01619bc Implement a simple delta_base cache
This trivial 256-entry delta_base cache improves performance for some
loads by a factor of 2.5 or so.

Instead of always re-generating the delta bases (possibly over and over
and over again), just cache the last few ones. They often can get re-used.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-18 15:36:59 -07:00
Linus Torvalds 62f255ad58 Make trivial wrapper functions around delta base generation and freeing
This doesn't change any code, it just creates a point for where we'd
actually do the caching of delta bases that have been generated.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-18 15:36:59 -07:00
Nicolas Pitre 4287307833 [PATCH] clean up pack index handling a bit
Especially with the new index format to come, it is more appropriate
to encapsulate more into check_packed_git_idx() and assume less of the
index format in struct packed_git.

To that effect, the index_base is renamed to index_data with void * type
so it is not used directly but other pointers initialized with it. This
allows for a couple of pointer cast removals, as well as providing a better
generic name to grep for when adding support for new index versions or
formats.

And index_data is declared const too while at it.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-16 21:27:36 -07:00
Junio C Hamano b867092fec prepare_packed_git(): sort packs by age and localness.
When accessing objects, we first look for them in packs that
are linked together in the reverse order of discovery.

Since younger packs tend to contain more recent objects, which
are more likely to be accessed often, and local packs tend to
contain objects more relevant to our specific projects, sort the
list of packs before starting to access them.  In addition,
favoring local packs over the ones borrowed from alternates can
be a win when alternates are mounted on network file systems.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-11 00:04:05 -08:00
Junio C Hamano 8509fed75d Merge branch 'jc/fsck'
* jc/fsck:
  fsck: exit with non-zero status upon errors
  unpack_sha1_file(): detect corrupt loose object files.
  fsck: fix broken loose object check.
2007-03-10 23:10:26 -08:00
Shawn O. Pearce dc49cd769b Cast 64 bit off_t to 32 bit size_t
Some systems have sizeof(off_t) == 8 while sizeof(size_t) == 4.
This implies that we are able to access and work on files whose
maximum length is around 2^63-1 bytes, but we can only malloc or
mmap somewhat less than 2^32-1 bytes of memory.

On such a system an implicit conversion of off_t to size_t can cause
the size_t to wrap, resulting in unexpected and exciting behavior.
Right now we are working around all gcc warnings generated by the
-Wshorten-64-to-32 option by passing the off_t through xsize_t().

In the future we should make xsize_t on such problematic platforms
detect the wrapping and die if such a file is accessed.
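
Such a checking conversion could look roughly like this (a sketch of the suggestion above, not the shipped helper):

  static inline size_t xsize_t(off_t len)
  {
          size_t size = (size_t)len;
          if ((off_t)size != len)   /* the value changed: off_t did not fit */
                  die("file is too large for this platform's address space");
          return size;
  }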

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-07 11:15:26 -08:00
Shawn O. Pearce c4001d92be Use off_t when we really mean a file offset.
Not all platforms have declared 'unsigned long' to be a 64 bit value,
but we want to support a 64 bit packfile (or close enough anyway)
in the near future as some projects are getting large enough that
their packed size exceeds 4 GiB.

By using off_t, the POSIX type that is declared to mean an offset
within a file, we support whatever maximum file size the underlying
operating system will handle.  For most modern systems this is up
around 2^60 or higher.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-07 11:06:25 -08:00
Shawn O. Pearce 326bf39677 Use uint32_t for all packed object counts.
As we permit up to 2^32-1 objects in a single packfile we cannot
use a signed int to represent the object offset within a packfile;
after 2^31-1 objects we would start seeing negative indexes and
error out or compute bad addresses within the mmap'd index.

This is a minor cleanup that does not introduce any significant
logic changes.  It is roach free.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-07 11:02:33 -08:00
Shawn O. Pearce 3a55602eec General const correctness fixes
We shouldn't attempt to assign constant strings into char*, as the
string is not writable at runtime.  Likewise we should always be
treating unsigned values as unsigned values, not as signed values.

Most of these are very straightforward.  The only exception is the
(unnecessary) xstrdup/free in builtin-branch.c for the detached
head case.  Since this is a user-level interactive type program
and that particular code path is executed no more than once, I feel
that the extra xstrdup call is well worth the easy elimination of
this warning.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-07 10:47:10 -08:00
Shawn O. Pearce 2d88451b7a Fix mmap leak caused by reading bad indexes.
If an index is corrupt, or is simply too new for us to understand,
we were leaking the mmap that held the entire content of the index.
This could be a considerable size on large projects, given that
the index is at least 24 bytes * nr_objects.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-07 10:41:33 -08:00
Shawn O. Pearce 30fee0625d Display the null SHA-1 as the base for an OBJ_OFS_DELTA.
Because we are currently cheating and never supplying the delta base
for an OBJ_OFS_DELTA we get a random SHA-1 in the delta base field.
Instead let's clear the hash out so it's at least all 0's.  This makes it
somewhat more obvious that something fishy is going on, like we
don't actually have the SHA-1 of the base handy.  :)

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-07 10:35:16 -08:00
Junio C Hamano 7efbff7531 unpack_sha1_file(): detect corrupt loose object files.
We did not detect broken loose object files, neither when the
underlying inflate() signalled the breakage, nor when inflate()
finished and we had trailing garbage at the end.  We do better
now.

We also make unpack_sha1_file() a static function to
sha1_file.c, since it is not used by anybody outside.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-05 00:55:19 -08:00
Junio C Hamano d0d8e14d1b index_fd(): convert blob only if it is a regular file.
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-28 12:00:00 -08:00
Junio C Hamano 53bca91a7d index_fd(): pass optional path parameter as hint for blob conversion
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-28 12:00:00 -08:00
Junio C Hamano edaec3fbe8 index_fd(): use enum object_type instead of type name string.
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-28 12:00:00 -08:00
Nicolas Pitre 21666f1aae convert object type handling from a string to a number
We currently have two parallel notations for dealing with object types
in the code: a string and a numerical value.  One of them is obviously
redundant, and the most used one requires more stack space and a bunch
of strcmp() all over the place.

This is an initial step for the removal of the version using a char array
found in object reading code paths.  The patch is unfortunately large but
there is no sane way to split it into smaller parts without breaking the
system.
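
The surviving numeric notation is essentially the type encoding the pack format already uses; values 1-4 and 6-7 are fixed by the on-disk format, while the helper values at the edges are illustrative:

  enum object_type {
          OBJ_BAD = -1,             /* lookup/parse failure */
          OBJ_NONE = 0,
          OBJ_COMMIT = 1,
          OBJ_TREE = 2,
          OBJ_BLOB = 3,
          OBJ_TAG = 4,
          /* 5 is reserved for future expansion */
          OBJ_OFS_DELTA = 6,
          OBJ_REF_DELTA = 7
  };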

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27 01:34:21 -08:00
Nicolas Pitre df8436622f formalize typename(), and add its reverse type_from_string()
Sometimes typename() is used, sometimes type_names[] is accessed directly.
Let's enforce typename() all the time, which allows for validating the
type.

Also let's add a function to go from a name to a type and use it instead
of manual memcpy() when appropriate.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27 01:34:21 -08:00
Nicolas Pitre 9ba630318f sha1_file.c: don't ignore an error condition in sha1_loose_object_info()
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27 01:34:21 -08:00
Nicolas Pitre 2b87c45ba6 sha1_file.c: cleanup "offset" usage
First, there are too many offsets there and it is getting confusing.
So 'offset' is now 'curpos' to distinguish it from other offsets like
'obj_offset'.

Then constructs like x = foo(x, &y) are now done as y = foo(&x).
It looks more natural that the result y be returned directly and
x be passed as a reference to be updated in place.  This has the effect
of reducing the length of some lines and removing a few entirely, needing
a bit less stack space, and it even reduces the compiled code size.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27 01:34:21 -08:00
Nicolas Pitre d65a16f6c4 sha1_file.c: cleanup hdr usage
Let's have hdr be a simple char pointer/array when possible, and let's
reduce its storage to 32 bytes.  Especially for sha1_loose_object_info(),
where 128 bytes is way excessive and wastes extra CPU cycles on inflating.

The object type is already restricted to 10 bytes in parse_sha1_header()
and the size, even if it is 64 bits, will fit in 20 decimal digits.  So
32 bytes is plenty.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27 01:34:21 -08:00
Junio C Hamano ef1a5c2fa8 Merge branches 'lt/crlf' and 'jc/apply-config'
* lt/crlf:
  Teach core.autocrlf to 'git apply'
  t0020: add test for auto-crlf
  Make AutoCRLF ternary variable.
  Lazy man's auto-CRLF

* jc/apply-config:
  t4119: test autocomputing -p<n> for traditional diff input.
  git-apply: guess correct -p<n> value for non-git patches.
  git-apply: notice "diff --git" patch again
  Fix botched "leak fix"
  t4119: add test for traditional patch and different p_value
  apply: fix memory leak in prefix_one()
  git-apply: require -p<n> when working in a subdirectory.
  git-apply: do not lose cwd when run from a subdirectory.
  Teach 'git apply' to look at $HOME/.gitconfig even outside of a repository
  Teach 'git apply' to look at $GIT_DIR/config
2007-02-22 21:34:36 -08:00
Junio C Hamano efa13f7b7e pretend-sha1: grave bugfix.
We stashed away objects that we pretend to have, but did not save the
actual data.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-15 17:03:11 -08:00
Alexandre Julliard 78a28df938 sha1_file.c: Round the mmap offset to half the window size.
This ensures that a given area is mapped at most twice, and greatly
reduces the virtual address space usage.

Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-14 15:22:08 -08:00
Linus Torvalds 6c510bee20 Lazy man's auto-CRLF
It currently does NOT know about file attributes, so it does its
conversion purely based on content. Maybe that is more in the "git
philosophy" anyway, since content is king, but I think we should try to do
the file attributes to turn it off on demand.

Anyway, BY DEFAULT it is off regardless, because it requires a

	[core]
		AutoCRLF = true

in your config file to be enabled. We could make that the default for
Windows, of course, the same way we do some other things (filemode etc).

But you can actually enable it on UNIX, and it will cause:

 - "git update-index" will write blobs without CRLF
 - "git diff" will diff working tree files without CRLF
 - "git checkout" will write files to the working tree _with_ CRLF

and things work fine.

Funnily, it actually shows an odd file in git itself:

	git clone -n git test-crlf
	cd test-crlf
	git config core.autocrlf true
	git checkout
	git diff

shows a diff for "Documentation/docbook-xsl.css". Why? Because we have
actually checked in that file *with* CRLF! So when "core.autocrlf" is
true, we'll always generate a *different* hash for it in the index,
because the index hash will be for the content _without_ CRLF.

Is this complete? I dunno. It seems to work for me. It doesn't use the
filename at all right now, and that's probably a deficiency (we could
certainly make the "is_binary()" heuristics also take standard filename
heuristics into account).

I don't pass in the filename at all for the "index_fd()" case
(git-update-index), so that would need to be passed around, but this
actually works fine.

NOTE NOTE NOTE! The "is_binary()" heuristics are totally made up by yours
truly. I will not guarantee that they work reasonably at all. Caveat
emptor. But it _is_ simple, and it _is_ safe, since it's all off by
default.

The patch is pretty simple - the biggest part is the new "convert.c" file,
but even that is really just basic stuff that anybody can write in
"Teaching C 101" as a final project for their first class in programming.
Not to say that it's bug-free, of course - but at least we're not talking
about rocket surgery here.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-14 11:19:22 -08:00
Linus Torvalds bd3a5b5ee5 Mark places that need blob munging later for CRLF conversion.
Here's a patch that I think we can merge right now. There may be
other places that need this, but this at least points out the
three places that read/write working tree files for git
update-index, checkout and diff respectively. That should cover
a lot of it [jc: git-apply uses an entirely different codepath
both for reading and writing].

Some day we can actually implement it. In the meantime, this
points out a place for people to start. We *can* even start with
a really simple "we do CRLF conversion automatically, regardless
of filename" kind of approach, one that just looks at the data (all
three cases have the _full_ file data already in memory) and
says "ok, this is text, so let's convert to/from DOS format
directly".

THAT somebody can write in ten minutes, and it would already
make git much nicer on a DOS/Windows platform, I suspect.

And it would be totally zero-cost if you just make it a config
option (but please make it dynamic with the _default_ just being
0/1 depending on whether it's UNIX/Windows, just so that UNIX
people can _test_ it easily).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-13 10:12:37 -08:00
Junio C Hamano d66b37bb19 Add pretend_sha1_file() interface.
The new interface allows an application to temporarily hash a
small number of objects and pretend that they are available in
the object store without actually writing them.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-05 14:55:11 -08:00
Pavel Roskin 3dff5379bf Assorted typo fixes
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-03 21:49:54 -08:00
Shawn O. Pearce 3cf8b462d2 Don't leak file descriptors from unavailable pack files.
If open_packed_git failed it may have been because the packfile
actually exists and is readable, but some sort of verification
did not pass.  In this case open_packed_git left pack_fd filled
in, as the file descriptor is valid.  We don't want to leak the
file descriptor, nor do we want to allow someone in the future
to use this packed_git.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-02 21:33:18 -08:00
Shawn O. Pearce c715f78369 Don't find objects in packs which aren't available anymore.
Matthias Lederhofer identified a race condition where a Git reader
process was able to locate an object in a packed_git index, but
was then preempted while a `git repack -a -d` ran and completed.
By the time the reader was able to seek in the packfile to get the
object data, the packfile no longer existed on disk.

In this particular case the reader process did not attempt to
open the packfile before it was deleted, so it did not already
have the pack_fd field populated.  With the packfile itself gone,
there was no way for the reader to open it and fetch the data.

I'm fixing the race condition by teaching find_pack_entry to ignore
a packed_git whose packfile is not currently open and which cannot
be opened.  If none of the currently known packs can supply the
object, we will return 0 and the caller will decide the object is
not available.  If this is the first attempt at finding an object,
the caller will reprepare_packed_git and try again.  If it was
the second attempt, the caller will typically return NULL back,
and an error message about a missing object will be reported.
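
On the reader's side the retry described above has this shape (a sketch; exact signatures vary a little between versions):

  struct pack_entry e;

  if (!find_pack_entry(sha1, &e)) {
          /* a concurrent repack may have replaced the packs; rescan */
          reprepare_packed_git();
          if (!find_pack_entry(sha1, &e))
                  return NULL;      /* the object really is unavailable */
  }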

This patch does not address the situation of a reader which is
being starved out by a tight sequence of `git repack -a -d` runs.
In this particular case the reader will try twice, probably fail
both times, and declare the object in question cannot be found.
It is highly unlikely that a real-world `git repack -a -d` can
complete faster than a reader can open a packfile, so I don't think
this is a huge concern.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-01 22:27:47 -08:00
Shawn O. Pearce 072db2789c Refactor open_packed_git to return an error code.
Because I want to reuse open_packed_git in a context where I don't
want the process to die if the packfile in question is bogus, I'm
changing its behavior to return error("...") rather than die("...")
when it detects something is wrong with the packfile it was given.

Right now we still must die out of use_pack should open_packed_git
fail, as none of use_pack's callers are prepared to handle a failure
from that function.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-01 22:24:17 -08:00
Shawn O. Pearce 54a15a8df2 Correct comment in prepare_packed_git_one.
After staring at the comment and the associated for loop, I
realized the comment was completely bogus.  The section of
code it's talking about is trying to avoid duplicate mapping
of the same packfile.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-01 22:22:51 -08:00
Shawn O. Pearce 625e9421df Cleanup prepare_packed_git_one to reuse install_packed_git.
There is little point in having the linked list insertion code
appearing in install_packed_git, and then again just 30 lines
further down in the same file.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-01 22:21:19 -08:00
Junio C Hamano a69e542989 Refactor the pack header reading function out of receive-pack.c
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-24 18:08:02 -08:00
Peter Eriksen 8276c0070f sha1_file.c: Avoid multiple calls to find_pack_entry().
We used to call find_pack_entry() twice from read_sha1_file() in order
to avoid printing an error message when the object did not exist.  This
is fixed by moving the call to error() to the only place it really
could be called.

Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-22 13:11:46 -08:00
Junio C Hamano b18b00a661 Use fixed-size integers for .idx file I/O
This attempts to finish what Simon started in the previous commit.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-18 14:11:50 -08:00
Shawn O. Pearce df1b059d8d Document pack .idx file format upgrade strategy.
Way back when Junio developed the 64 bit index topic he came up
with a means of changing the .idx file format so that older Git
clients would recognize that they don't understand the file and
refuse to read it, while newer clients could tell the difference
between the old-style and new-style .idx files.  Unfortunately
this wasn't recorded anywhere.

This change documents how we might go about changing the .idx
file format by using a special signature in the first four bytes.
Credit (and possible blame) goes completely to Junio for thinking
up this technique.

The change also modifies the error message of the current Git code
so that users get a recommendation to upgrade their Git software
should this version or later encounter a new-style .idx which it
cannot process.  We already do this for the .pack files, but since
we usually process the .idx files first, it's important that these
files are recognized and that they encourage an upgrade.
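
The detection itself is cheap, roughly like this (a sketch; 0xff744f63, i.e. "\377tOc", is the signature value git settled on, chosen so it can never collide with a valid fanout[0] count in the old format, and the variable names here are illustrative):

  uint32_t *hdr = idx_map;
  if (ntohl(hdr[0]) == 0xff744f63) {        /* new-style .idx signature */
          uint32_t version = ntohl(hdr[1]);
          return error("index file %s is version %u and is not supported "
                       "by this binary (try upgrading GIT to a newer "
                       "version)", path, (unsigned)version);
  }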

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-17 20:51:45 -08:00
Shawn O. Pearce e6e2bd6201 Remove read_or_die in favor of better error messages.
Originally I introduced read_or_die for the purpose of reading
the pack header and trailer, and I was too lazy to print proper
error messages.

Linus Torvalds <torvalds@osdl.org>:
> For a read error, at the very least you have to say WHICH FILE
> couldn't be read, because it's usually a matter of some file just
> being too short, not some system-wide problem.

and of course Linus is right. Make it so.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-14 00:42:41 -08:00
Linus Torvalds d34cf19b89 Clean up write_in_full() users
With the new-and-improved write_in_full() semantics, where a partial write
simply always returns a real error (and always sets 'errno' when that
happens, including for the disk full case), a lot of the callers of
write_in_full() were just unnecessarily complex.

In particular, there's no reason to ever check for a zero length or a
zero return: if the length was zero, we'll return zero; otherwise, if a
full disk resulted in the actual write() system call returning zero, the
write_in_full() logic would have correctly turned that into a negative
return value, with 'errno' set to ENOSPC.

I really wish every "write_in_full()" user would just check against "<0"
now, but this fixes the nasty and stupid ones.
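
The preferred caller pattern is then a single check (a sketch):

  /* one '< 0' test covers short writes, EINTR and ENOSPC alike */
  if (write_in_full(fd, buf, len) < 0)
          die("write error: %s", strerror(errno));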

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-11 21:02:58 -08:00
Eric Wong 3b97fee23d Avoid errors and warnings when attempting to do I/O on zero bytes
Unfortunately, while {read,write}_in_full do take zero-sized
reads/writes into account, their die and whine variants do not.

I have a repository where there are zero-sized files in
the history that were triggering these things.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-11 14:49:45 -08:00
Pavel Roskin e05db0fd4f Fix warnings in sha1_file.c - use C99 printf format if available
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-09 22:43:58 -08:00
Andy Whitcroft 93822c2239 short i/o: fix calls to write to use xwrite or write_in_full
We have a number of badly checked write() calls.  Often we are
expecting write() to write exactly the size we requested or fail;
this fails to handle interrupts or short writes.  Switch to using
the new write_in_full().  Otherwise we at a minimum need to check
for EINTR and EAGAIN; where that is appropriate, use xwrite().

Note, the changes to config handling are much larger and handled
in the next patch in the sequence.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-08 15:44:47 -08:00
Andy Whitcroft 93d26e4cb9 short i/o: fix calls to read to use xread or read_in_full
We have a number of badly checked read() calls.  Often we are
expecting read() to read exactly the size we requested or fail;
this fails to handle interrupts or short reads.  Add a read_in_full()
providing those semantics.  Otherwise we at a minimum need to check
for EINTR and EAGAIN; where that is appropriate, use xread().
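
The helper amounts to something like this (a sketch along the lines of the description above; xread() is the EINTR/EAGAIN-retrying wrapper):

  ssize_t read_in_full(int fd, void *buf, size_t count)
  {
          char *p = buf;
          ssize_t total = 0;

          while (count > 0) {
                  ssize_t loaded = xread(fd, p, count);
                  if (loaded <= 0)
                          return total ? total : loaded;  /* EOF or error */
                  count -= loaded;
                  p += loaded;
                  total += loaded;
          }
          return total;
  }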

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-01-08 15:44:47 -08:00
Junio C Hamano 2c039da804 mmap: set FD_CLOEXEC for file descriptors we keep open for mmap()
I do not have any proof that this matters to any existing
problems I am seeing, but I cannot think of any reason not to do
this.
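
The change itself is a couple of lines per kept descriptor, roughly:

  int flags = fcntl(fd, F_GETFD);
  if (flags >= 0)
          fcntl(fd, F_SETFD, flags | FD_CLOEXEC);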

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:46 -08:00
Shawn O. Pearce c4712e4553 Replace mmap with xmmap, better handling MAP_FAILED.
In some cases we did not even bother to check the return value of
mmap() and just assumed it worked.  This is bad, because if we are
out of virtual address space the kernel returned MAP_FAILED and we
would attempt to dereference that address, segfaulting without any
real error output to the user.

We are replacing all calls to mmap() with xmmap() and moving all
MAP_FAILED checking into that single location.  If a mmap call
fails we try to release enough least-recently-used pack windows
to possibly succeed, then retry the mmap() attempt.  If we cannot
mmap even after releasing pack memory then we die() as none of our
callers have any reasonable recovery strategy for a failed mmap.
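
The wrapper has roughly this shape (a sketch; release_pack_memory() stands in for the window-releasing logic from the companion commit, and the exact signatures are illustrative):

  void *xmmap(void *start, size_t length, int prot, int flags,
              int fd, off_t offset)
  {
          void *ret = mmap(start, length, prot, flags, fd, offset);
          if (ret == MAP_FAILED) {
                  release_pack_memory(length);  /* drop LRU windows, retry */
                  ret = mmap(start, length, prot, flags, fd, offset);
                  if (ret == MAP_FAILED)
                          die("Out of memory? mmap failed: %s",
                              strerror(errno));
          }
          return ret;
  }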

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:45 -08:00
Shawn O. Pearce 97bfeb34df Release pack windows before reporting out of memory.
If we are about to fail because this process has run out of memory we
should first try to automatically control our appetite for address
space by releasing enough least-recently-used pack windows to gain
back enough memory such that we might actually be able to meet the
current allocation request.

This should help users who have fairly large repositories but are
working on systems with relatively small virtual address space.
Many times we see reports on the mailing list of these users running
out of memory during various Git operations.  Dynamically decreasing
the amount of pack memory used when the demand for heap memory is
increasing is an intelligent solution to this problem.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:45 -08:00
Shawn O. Pearce a53128b601 Create pack_report() as a debugging aid.
Much like the alloc_report() function can be useful for reporting
object allocation statistics while debugging, the new pack_report()
function can be useful for reporting on the behavior of the mmap window
code used for packfile access.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:45 -08:00
Shawn O. Pearce 11daf39b74 Support unmapping windows on 'temporary' packfiles.
If a command opens a packfile for only temporary access and does not
install the struct packed_git* into the global packed_git list then
we are unable to unmap any inactive windows within that packed_git,
causing the overall process to exceed core.packedGitLimit.

We cannot force the callers to install their temporary packfile
into the packed_git chain as doing so would allow that (possibly
corrupt but currently being verified) temporary packfile to become
part of the local ODB, which may allow it to be considered for
object resolution when it may not actually be a valid packfile.

So to support unmapping the windows of these temporary packfiles we
also scan the windows of the struct packed_git which was supplied
to use_pack().  Since commands only work with one temporary packfile
at a time, scanning the one supplied to use_pack() and all packs
installed into packed_git should cover everything available in
memory.

We also have to be careful to not close the file descriptor of
the packed_git which was handed to use_pack() when all of that
packfile's windows have been unmapped, as we are already past the
open call that would open the packfile and need the file descriptor
to be ready for mmap() after unuse_one_window returns.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:45 -08:00
Shawn O. Pearce 73b4e4be71 Improve error message when packfile mmap fails.
If we are unable to mmap a region of the packfile with the mmap()
system call there may be a good reason why, such as a closed file
descriptor or exhausted address space.  Reporting the system level
error message can help to debug such problems.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:45 -08:00
Shawn O. Pearce 60bb8b1453 Fully activate the sliding window pack access.
This finally turns on the sliding window behavior for packfile data
access by mapping limited size windows and chaining them under the
packed_git->windows list.

We consider a given byte offset to be within the window only if there
would be at least 20 bytes (one hash worth of data) accessible after
the requested offset.  This range selection relates to the contract
that use_pack() makes with its callers, allowing them to access
one hash or one object header without needing to call use_pack()
for every byte of data obtained.

In the worst case scenario we will map the same page of data twice
into memory: once at the end of one window and once again at the
start of the next window.  This duplicate page mapping will happen
only when an object header or a delta base reference is spanned
over the end of a window and is always limited to just one page of
duplication, as no sane operating system will ever have a page size
smaller than a hash.

I am assuming that the possible wasted page of virtual address
space is going to perform faster than the alternatives, which
would be to copy the object header or ref delta into a temporary
buffer prior to parsing, or to check the window range on every byte
during header parsing.  We may decide to revisit this decision in
the future since this is just a gut instinct decision and has not
actually been proven out by experimental testing.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:45 -08:00
Shawn O. Pearce 54044bf825 Unmap individual windows rather than entire files.
To support multiple windows per packfile we need to unmap only one
window at a time from that packfile, leaving any other windows in
place and available for reference.

We treat all windows from all packfiles equally; the least recently
used, not-in-use window across all packfiles will always be closed
first.

If we have unmapped all windows in a packfile then we can also close
the packfile's file descriptor as it's possible we won't need to map
any window from that file in the near future.  This decision about
when to close the pack file descriptor may need to be revisited in
the future after additional testing on several different platforms
can be performed.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Shawn O. Pearce 8d8a4ea553 Document why header parsing won't exceed a window.
When we parse the object header or the delta base reference we
don't bother to loop over use_pack() calls.  The reason we don't
need to bother with calling use_pack for each byte accessed is that
use_pack will always promise us at least 20 bytes (really the hash
size) after the offset.  This promise from use_pack simplifies a
lot of code in the header parsing logic, as well as helps out the
zlib library by ensuring there's always some data for it to consume
during an inflate call.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Shawn O. Pearce 079afb18fe Loop over pack_windows when inflating/accessing data.
When multiple mmaps start getting used for all pack file access it
is not possible to get all data associated with a specific object
in one contiguous memory region.  This limitation prevents simply
passing a single address and length to SHA1_Update or to inflate.

Instead we need to loop until we have processed all data of interest.

As we loop over the data we are always interested in reusing the same
window 'cursor', as the prior window will no longer be of any use
to us.  This allows the use_pack() call to automatically decrement
the use count of the prior window before setting up access for us
to the next window.

Within each loop we need to make use of the available length output
parameter of use_pack() to tell us how many bytes are available in
the current memory region, as we cannot tell otherwise.
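
Put together, a consumer loop has this shape (a sketch built on the contract described above; variable names and exact parameter types are illustrative):

  struct pack_window *w_curs = NULL;
  unsigned long remaining = size;

  while (remaining) {
          unsigned long avail;
          unsigned char *in = use_pack(p, &w_curs, offset, &avail);
          if (avail > remaining)
                  avail = remaining;
          SHA1_Update(&ctx, in, avail);  /* or feed zlib via next_in/avail_in */
          offset += avail;
          remaining -= avail;
  }
  unuse_pack(&w_curs);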

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Shawn O. Pearce 03e79c88aa Replace use_packed_git with window cursors.
Part of the implementation concept of the sliding mmap window for
pack access is to permit multiple windows per pack to be mapped
independently.  Since the inuse_cnt is associated with the mmap and
not with the file, this value is in struct pack_window and needs to
be incremented/decremented for each pack_window accessed by any code.

To facilitate that implementation we need to replace all uses of
use_packed_git() and unuse_packed_git() with a different API that
follows struct pack_window objects rather than struct packed_git.

The way this works is that when we need to start accessing a pack for
the first time, we set up a new window 'cursor' by declaring
a local and setting it to NULL:

  struct pack_window *w_curs = NULL;

To obtain the memory region which contains a specific section of
the pack file we invoke use_pack(), supplying the address of our
current window cursor:

  unsigned int len;
  unsigned char *addr = use_pack(p, &w_curs, offset, &len);

the returned address `addr` will be the first byte at `offset`
within the pack file.  The optional variable len will also be
updated with the number of bytes remaining following the address.

Multiple calls to use_pack() with the same window cursor will
update the window cursor, moving it from one window to another
when necessary.  In this way each window cursor variable maintains
only one struct pack_window inuse at a time.

Finally, before exiting the scope which originally declared the window
cursor, we must invoke unuse_pack() to unuse the current window (which
may be different from the one that was first obtained from use_pack):

  unuse_pack(&w_curs);

This implementation is still not complete with regards to multiple
windows, as only one window per pack file is supported right now.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Shawn O. Pearce 9bc879c1ce Refactor how we open pack files to prepare for multiple windows.
To efficiently support mmaping of multiple regions of the same pack
file we want to keep the pack's file descriptor open while we are
actively working with that pack.  So we are now keeping that file
descriptor in packed_git.pack_fd and closing it only after we unmap
the last window.

This is going to increase the number of file descriptors that are
in use at once, however that will be bounded by the total number of
pack files present and therefore should not be very high.  It is
a small tradeoff which we may need to revisit after some testing
can be done on various repositories and systems.

For code clarity we also want to separate out the implementation
of how we open a pack file from the implementation which locates
a suitable window (or makes a new one) from the given pack file.
Since this is a rather large delta I'm taking advantage of doing
it now, in a fairly isolated change.

When we open a pack file we need to examine the header and trailer
without having a mmap in place, as we may only need to mmap
the middle section of this particular pack.  Consequently the
verification code has been refactored to make use of the new
read_or_die function.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Shawn O. Pearce c41ee586dc Refactor packed_git to prepare for sliding mmap windows.
The idea behind the sliding mmap window pack reader implementation
is to have multiple mmap regions active against the same pack file,
thereby allowing the process to mmap in only the active/hot sections
of the pack and reduce overall virtual address space usage.

To implement this we need to refactor the mmap related data
(pack_base, pack_use_cnt) out of struct packed_git and move them
into a new struct pack_window.

We are refactoring the code to support a single struct pack_window
per packfile, thereby emulating the prior behavior of mmap'ing the
entire pack file.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Shawn O. Pearce 77ccc5bbd1 Introduce new config option for mmap limit.
Rather than hardcoding the maximum number of bytes which can be
mmapped from pack files we should make this value configurable,
allowing the end user to increase or decrease this limit on a
per-repository basis depending on the size of the repository
and the capabilities of their operating system.

In general users should not need to manually tune such a low-level
setting within the core code, but being able to artifically limit
the number of bytes which we can mmap at once from pack files will
make it easier to craft test cases for the new mmap sliding window
implementation.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Shawn O. Pearce 4d703a1a90 Replace unpack_entry_gently with unpack_entry.
The unpack_entry_gently function currently has only two callers:
the delta base resolution in sha1_file.c and the main loop of
pack-check.c.  Both of these must change to using unpack_entry
directly when we implement sliding window mmap logic, so I'm doing
it earlier to help break down the change set.

This may cause a slight performance decrease for delta base
resolution as well as for pack-check.c's verify_packfile(), as
the pack use counter will be incremented and decremented for every
object that is unpacked.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-29 11:36:44 -08:00
Nicolas Pitre 08a19d873c clarify some error messages wrt unknown object types
If new object types are ever added for future extensions, the current
git version had better report them as "unknown" instead of
"corrupted".

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-12-20 10:46:34 -08:00
Johannes Schindelin f0df4ed562 sha1_object_info(): be consistent with read_sha1_file()
We used to try loose objects first with sha1_object_info(), but packed
objects first with read_sha1_file(). Now, prefer packed objects over loose
ones with sha1_object_info(), too.

Usually the old behaviour would pose no problem, but when you tried to fix
a fscked up repository by inserting a known-good pack,

	git cat-file $(git cat-file -t <sha1>) <sha1>

could fail, even when

	git cat-file blob <sha1>

would _not_ fail. Worse, a repack would fail, too.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-27 16:56:54 -08:00
Junio C Hamano 6a96b32d3b Merge branch 'maint'
* maint:
  Nicer error messages in case saving an object to db goes wrong
2006-11-09 09:40:59 -08:00
Petr Baudis 916d081bba Nicer error messages in case saving an object to db goes wrong
Currently the error shown when e.g. pushing to a read-only repository is
quite confusing.  This attempts to clean it up, unifies error reporting
between the various object writers, and uses error() in a couple more
places.

Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-09 09:33:53 -08:00
Shawn Pearce fc04c412d8 Teach receive-pack how to keep pack files based on object count.
Since keeping a pushed pack or exploding it into loose objects
should be a local repository decision this teaches receive-pack
to decide if it should call unpack-objects or index-pack --stdin
--fix-thin based on the setting of receive.unpackLimit and the
number of objects contained in the received pack.

If the number of objects (hdr_entries) in the received pack is
below the value of receive.unpackLimit (which is 5000 by default)
then we unpack-objects as we have in the past.

If the hdr_entries >= receive.unpackLimit then we call index-pack and
ask it to include our pid and hostname in the .keep file to make it
easier to identify why a given pack has been kept in the repository.

Currently this leaves every received pack as a kept pack.  We really
don't want that as received packs will tend to be small.  Instead we
want to delete the .keep file automatically after all refs have
been updated.  That is being left as room for future improvement.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-11-03 00:24:07 -08:00
Junio C Hamano 407e1d6e12 Merge branch 'master' into np/index-pack
* master: (90 commits)
  gitweb: Better support for non-CSS aware web browsers
  gitweb: Output also empty patches in "commitdiff" view
  gitweb: Use git-for-each-ref to generate list of heads and/or tags
  for-each-ref: "creator" and "creatordate" fields
  Add --global option to git-repo-config.
  pack-refs: Store the full name of the ref even when packing only tags.
  git-clone documentation didn't mention --origin as equivalent of -o
  Minor grammar fixes for git-diff-index.txt
  link_temp_to_file: call adjust_shared_perm() only when we created the directory
  Remove uneccessarily similar printf() from print_ref_list() in builtin-branch
  pack-objects doesn't create random pack names
  branch: work in subdirectories.
  gitweb: Use 's' regexp modifier to secure against filenames with LF
  gitweb: Secure against commit-ish/tree-ish with the same name as path
  gitweb: esc_html() author in blame
  git-svnimport: support for partial imports
  link_temp_to_file: don't leave the path truncated on adjust_shared_perm failure
  Move deny_non_fast_forwards handling completely into receive-pack.
  revision traversal: --unpacked does not limit commit list anymore.
  Continue traversal when rev-list --unpacked finds a packed commit.
  ...
2006-11-03 00:23:52 -08:00
Junio C Hamano c954d33da1 Merge branch 'maint'
* maint:
  git-clone documentation didn't mention --origin as equivalent of -o
  Minor grammar fixes for git-diff-index.txt
  link_temp_to_file: call adjust_shared_perm() only when we created the directory
2006-11-02 18:05:33 -08:00
Johannes Schindelin 866cae0db4 link_temp_to_file: call adjust_shared_perm() only when we created the directory
2006-11-02 18:02:17 -08:00
Junio C Hamano 7854e526ff Merge branch 'maint'
* maint:
  pack-objects doesn't create random pack names
  link_temp_to_file: don't leave the path truncated on adjust_shared_perm failure
2006-11-01 15:09:55 -08:00
Junio C Hamano 91c23e48d0 link_temp_to_file: don't leave the path truncated on adjust_shared_perm failure
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-10-31 15:56:58 -08:00
Shawn Pearce d4ff6d92c3 Allow short pack names to git-pack-objects --unpacked=.
This allows us to pass just the file name of a pack rather than
the complete path when we want pack-objects to consider its
contents as though they were loose objects.  This can be helpful
if $GIT_OBJECT_DIRECTORY contains shell metacharacters which make
it cumbersome to pass complete paths safely in a shell script.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-10-29 12:46:21 -08:00
Junio C Hamano 05eb811aa1 Merge branch 'np/pack'
* np/pack:
  add the capability for index-pack to read from a stream
  index-pack: compare only the first 20-bytes of the key.
  git-repack: repo.usedeltabaseoffset
  pack-objects: document --delta-base-offset option
  allow delta data reuse even if base object is a preferred base
  zap a debug remnant
  let the GIT native protocol use offsets to delta base when possible
  make pack data reuse compatible with both delta types
  make git-pack-objects able to create deltas with offset to base
  teach git-index-pack about deltas with offset to base
  teach git-unpack-objects about deltas with offset to base
  introduce delta objects with offset to base
2006-10-22 22:51:42 -07:00
Nicolas Pitre 1a3b55c6b4 reduce delta head inflated size
Supposing that both the base and result sizes were both full size 64-bit
values, their encoding would occupy only 9.2 bytes each.  Therefore
inflating 64 bytes is way overkill.  Limit it to 20 bytes instead which
should be plenty enough for a couple years to come.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-10-18 21:18:42 -07:00
Rene Scharfe 7cfb5f367e Replace open-coded version of hash_sha1_file()
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-10-15 12:35:25 -07:00
Rene Scharfe 972a915583 Make write_sha1_file_prepare() void
Move file name generation from write_sha1_file_prepare() to the one
caller that cares and make it a void function.

Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-10-15 12:35:07 -07:00
Rene Scharfe 8f9777801d Make write_sha1_file_prepare() static
There are no callers of write_sha1_file_prepare() left outside of
sha1_file.c, so make it static.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-10-14 11:49:59 -07:00
Rene Scharfe abdc3fc842 Add hash_sha1_file()
Most callers of write_sha1_file_prepare() are only interested in the
resulting hash but don't care about the returned file name or the header.
This patch adds a simple wrapper named hash_sha1_file() which does just
that, and converts potential callers.

Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-10-14 11:49:52 -07:00
Nicolas Pitre 780e6e735b make pack data reuse compatible with both delta types
This is the missing part for git-pack-objects, allowing it to reuse delta
data to/from either of the two delta types.  It can reuse deltas from any
type, and it outputs base offsets when --allow-delta-base-offset is
provided and the base is also included in the pack.  Otherwise it
outputs base sha1 references just like it always did.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-27 00:12:00 -07:00
Nicolas Pitre eb32d236df introduce delta objects with offset to base
This adds a new object, namely OBJ_OFS_DELTA, renames OBJ_DELTA to
OBJ_REF_DELTA to better make the distinction between those two delta
objects, and adds support for the handling of those new delta objects
in sha1_file.c only.

The OBJ_OFS_DELTA contains a relative offset from the delta object's
position in a pack instead of the 20-byte SHA1 reference to identify
the base object.  Since the base is likely to be not so far away, the
relative offset is more likely to have a smaller encoding on average
than an absolute offset.  And for those delta objects the base must
always be stored first because there is no way to know the distance of
later objects when streaming a pack.  Hence this relative offset is
always meant to be negative.

The offset encoding is slightly denser than the one used for object
size -- credits to <linux@horizon.com> (whoever this is) for bringing
it to my attention.
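
Decoding the offset is a small loop; the "+ 1" added for every continuation byte is the trick that makes this encoding denser than the size encoding (a sketch of the scheme described above):

  unsigned char c = buf[used++];
  unsigned long base_offset = c & 127;

  while (c & 128) {
          base_offset += 1;              /* the densifying "+ 1" */
          c = buf[used++];
          base_offset = (base_offset << 7) + (c & 127);
  }
  /* the base always precedes the delta in the pack */
  base_offset = delta_obj_offset - base_offset;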

This allows for pack size reduction between 3.2% (Linux-2.6) to over 5%
(linux-historic).  Runtime pack access should be faster too since delta
replay does skip a search in the pack index for each delta in a chain.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-27 00:11:59 -07:00
Nicolas Pitre 43057304c0 many cleanups to sha1_file.c
Those cleanups are mainly to set the stage for the support of deltas
with base objects referenced by offsets instead of sha1.  This means
that many pack lookup functions are converted to take a pack/offset
tuple instead of a sha1.

This eliminates many struct pack_entry usages since this structure
carried redundant information in many cases, and it increased stack
footprint needlessly for a couple of recursively called functions that used
to declare a local copy of it for every recursion loop.

In the process, packed_object_info_detail() has been reorganized as well
so as to look much saner and more amenable to deltas with offset support.

Finally the appropriate adjustments have been made to functions that
depend on the above changes.  But there are no functionality changes yet,
simply some code refactoring at this point.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-23 01:51:33 -07:00
Junio C Hamano e49521b56d Make hexval() available to others.
builtin-mailinfo.c has its own hexval implementation, but it can
share the table-lookup one recently implemented in sha1_file.c.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-20 16:08:14 -07:00
Junio C Hamano 4405fb77f4 Merge branch 'jc/pack'
* jc/pack:
  pack-objects: document --revs, --unpacked and --all.
  pack-objects --unpacked=<existing pack> option.
  pack-objects: further work on internal rev-list logic.
  pack-objects: run rev-list equivalent internally.
  Separate object listing routines out of rev-list
2006-09-17 18:32:03 -07:00
Junio C Hamano a41fae9c46 get_sha1_hex() micro-optimization
The function appeared high in a gprof output for a rev-list run of
a non-trivial size, and it was obvious low-hanging fruit.

The code is from Linus.
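
The optimization replaces per-character arithmetic and validity tests with one table lookup per nibble, roughly like this (a sketch; the real table is a fully spelled-out static initializer rather than being filled at runtime):

  static signed char hexval_table[256];   /* -1 marks a non-hex byte */

  static void init_hexval_table(void)
  {
          int i;

          memset(hexval_table, -1, sizeof(hexval_table));
          for (i = 0; i < 10; i++)
                  hexval_table['0' + i] = i;
          for (i = 0; i < 6; i++)
                  hexval_table['a' + i] = hexval_table['A' + i] = 10 + i;
  }

  int get_sha1_hex(const char *hex, unsigned char *sha1)
  {
          int i;

          for (i = 0; i < 20; i++) {
                  int hi = hexval_table[(unsigned char)hex[0]];
                  int lo = hexval_table[(unsigned char)hex[1]];
                  if (hi < 0 || lo < 0)
                          return -1;
                  *sha1++ = (hi << 4) | lo;
                  hex += 2;
          }
          return 0;
  }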

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-09 22:21:27 -07:00
Junio C Hamano 106d710bc1 pack-objects --unpacked=<existing pack> option.
Incremental repack without -a essentially boils down to:

	rev-list --objects --unpacked --all |
        pack-objects $new_pack

which picks up all loose objects that are still live and creates
a new pack.

This implements --unpacked=<existing pack> option to tell the
revision walking machinery to pretend as if objects in such a
pack are unpacked for the purpose of object listing.  With this,
we could say:

	rev-list --objects --unpacked=$active_pack --all |
	pack-objects $new_pack

instead, to mean "all live loose objects but pretend as if
objects that are in this pack are also unpacked".  The newly
created pack would be perfect for updating $active_pack by
replacing it.

Since pack-objects now knows how to do the rev-list's work
itself internally, you can also write the above example by:

	pack-objects --unpacked=$active_pack --all $new_pack </dev/null

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-07 02:46:03 -07:00
Junio C Hamano 72518e9c26 more lightweight revalidation while reusing deflated stream in packing
When copying from an existing pack and when copying from a loose
object with a new-style header, the code makes sure that the piece
we are going to copy out inflates well and inflate() consumes
the data in full while doing so.

The check to see if the xdelta really applies is quite expensive,
as you described, because you would need to have the image of
the base object, which can be represented as a delta against
something else.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-03 21:09:18 -07:00
Shawn Pearce 9befac470b Replace uses of strdup with xstrdup.
Like xmalloc and xrealloc xstrdup dies with a useful message if
the native strdup() implementation returns NULL rather than a
valid pointer.

I just tried to use xstrdup in new code and found it to be missing.
However, I expected it to be present, as xmalloc and xrealloc are
already commonly used throughout the code.
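
The wrapper itself is tiny (a sketch matching the description above; die() is git's fatal-error helper):

  char *xstrdup(const char *str)
  {
          char *ret = strdup(str);
          if (!ret)
                  die("Out of memory, strdup failed");
          return ret;
  }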

[jc: removed the part that deals with last_XXX, which I am
 finding more and more dubious these days.]

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-02 03:24:37 -07:00
Junio C Hamano ad1ed5ee89 consolidate two copies of new style object header parsing code.
Also, while we are at it, remove the redundant typename[] array from
unpack_sha1_header.  The only reason it is different from the
type_names[] array in the object.c module is that this code cares
about the subset of object types that are valid in a loose
object, so prepare a separate array of booleans that tells us
which types are valid, and share the name translation with the
others.

Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-01 15:17:01 -07:00
Junio C Hamano 839837b953 Constness tightening for move/link_temp_to_file()
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-09-01 00:24:06 -07:00
Jonas Fonseca 2d7320d0b0 Use xmalloc instead of malloc
Signed-off-by: Jonas Fonseca <fonseca@diku.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-31 16:24:39 -07:00
Jonas Fonseca 83572c1a91 Use xrealloc instead of realloc
Change places that use realloc, without a proper error path, to instead use
xrealloc. Drop an erroneous error path in the daemon code that used errno
in the die message in favour of the simpler xrealloc.

Signed-off-by: Jonas Fonseca <fonseca@diku.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-26 17:54:06 -07:00
Shawn Pearce eb950c192a Convert unpack_entry_gently and friends to use offsets.
Change unpack_entry_gently and its helper functions to use offsets
rather than addresses and left counts to supply pack position
information.  In most cases this makes the code easier to follow,
and it reduces the number of local variables in a few functions.
It also better prepares this code for mapping partial segments of
packs and altering what regions of a pack are mapped while unpacking
an entry.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-26 17:35:21 -07:00
Shawn Pearce 465b26eeef Cleanup unpack_object_header to use only offsets.
If we're always incrementing both the offset and the pointer we
aren't gaining anything by keeping both.  Instead just use the
offset since that's what we were given and what we are expected
to return.  Also using offset is likely to make it easier to remap
the pack in the future should partial mapping of very large packs
get implemented.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-26 17:35:20 -07:00