Commit graph

138 commits

Author SHA1 Message Date
Junio C Hamano a35138af75 Merge branch 'sp/maint-smart-http-sans-100-continue'
* sp/maint-smart-http-sans-100-continue:
  smart-http: Really never use Expect: 100-continue
2011-03-14 11:59:10 -07:00
Shawn O. Pearce 959dfcf42f smart-http: Really never use Expect: 100-continue
libcurl may choose to try and use Expect: 100-continue for
any type of POST, not just one using Transfer-Encoding: chunked.
Force it to disable this feature, as not all proxy servers support
100-continue and leaving it enabled can cause 1 second stalls during
the negotiation phase of fetch-pack/upload-pack.

In ("206b099d26 smart-http: Don't use Expect: 100-Continue") we
tried to disable this for only large POST bodies, but it should be
disabled for every POST body.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-14 11:58:51 -07:00
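
Purely as an illustration of the fix described above (not code from the commit), the usual libcurl idiom for forcing the feature off is to send an empty "Expect:" header on the handle; the helper name here is invented.

    #include <curl/curl.h>

    /* Appending an empty "Expect:" entry tells libcurl not to add its
     * own "Expect: 100-continue" header to any POST on this handle. */
    static struct curl_slist *disable_expect(CURL *curl,
                                             struct curl_slist *headers)
    {
        headers = curl_slist_append(headers, "Expect:");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        return headers;
    }
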
Junio C Hamano 28afcbfe8b Merge branch 'sp/maint-smart-http-sans-100-continue'
* sp/maint-smart-http-sans-100-continue:
  smart-http: Don't use Expect: 100-Continue
2011-02-27 21:58:29 -08:00
Shawn O. Pearce 206b099d26 smart-http: Don't use Expect: 100-Continue
Some HTTP/1.1 servers or proxies don't correctly implement the
100-Continue feature of HTTP/1.1.  It's a difficult feature to
implement right, and isn't commonly used by browsers, so many
developers may not even be aware that their server (or proxy)
doesn't honor it.

Within the smart HTTP protocol for Git we only use this newer
"Expect: 100-Continue" feature to probe for missing authentication
before uploading a large payload like a pack file during push.
If authentication is necessary, we expect the server to send the
401 Unauthorized response before the bulk data transfer starts,
thus saving the client bandwidth during the retry.

A different method to probe for working authentication is to send an
empty command list (that is, just "0000") to $URL/git-receive-pack
or $URL/git-upload-pack.  All versions of both receive-pack and
upload-pack since the introduction of smart HTTP in Git 1.6.6
cleanly accept just a flush-pkt under --stateless-rpc mode, and
exit with success.

If HTTP-level authentication is successful, the backend will return
an empty response, but with HTTP status code 200.  This enables
the client to continue with the transfer.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-02-15 11:42:20 -08:00
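
The probe described above can be sketched with plain libcurl calls; this is illustrative only (the real remote-curl code differs), and probe_auth and service_url are made-up names.

    #include <curl/curl.h>

    /* POST a lone flush-pkt ("0000") to e.g. $URL/git-receive-pack.
     * A 200 with an empty body means authentication (if any) worked;
     * a 401 means credentials are needed before the real push. */
    static long probe_auth(CURL *curl, const char *service_url)
    {
        long code = 0;

        curl_easy_setopt(curl, CURLOPT_URL, service_url);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "0000");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, 4L);
        if (curl_easy_perform(curl) == CURLE_OK)
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
        return code;
    }
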
Junio C Hamano 1c80c9b2cb Merge branch 'sp/fix-smart-http-deadlock-on-error'
* sp/fix-smart-http-deadlock-on-error:
  smart-http: Don't deadlock on server failure
2010-08-12 18:27:01 -07:00
Shawn O. Pearce b4ee10f60f smart-http: Don't deadlock on server failure
If the remote HTTP server fails (e.g. returns 404 or 500) when we
posted the RPC to it, we won't have sent anything to the background
Git process that is supposed to handle the stream.  Because we
didn't send anything, it's waiting for input from remote-curl, and
remote-curl cannot read its response payload because doing so would
lead to a deadlock.

Send the background task EOF on its input before we try to read
its response back, that way it will break out of its read loop
and terminate.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-08-06 15:30:16 -07:00
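
The ordering the commit describes, reduced to a sketch (the fd names are invented; the real code drives a background git process started by the helper):

    #include <unistd.h>

    /* On RPC failure: close the pipe feeding the background process
     * first so it sees EOF and leaves its read loop, then drain
     * whatever it has already written. */
    static void shutdown_backend(int to_backend, int from_backend)
    {
        char buf[4096];

        close(to_backend);
        while (read(from_backend, buf, sizeof(buf)) > 0)
            ; /* discard */
    }
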
Junio C Hamano 3cc9caadf7 Merge branch 'rc/maint-curl-helper'
* rc/maint-curl-helper:
  remote-curl: ensure that URLs have a trailing slash
  http: make end_url_with_slash() public
  t5541-http-push: add test for URLs with trailing slash

Conflicts:
	remote-curl.c
2010-05-08 22:37:24 -07:00
Tay Ray Chuan d8fab07208 remote-curl: ensure that URLs have a trailing slash
Previously, we blindly assumed that URLs passed to the remote-curl
helper did not end with a trailing slash.

Use the convenience function end_url_with_slash() from http.[ch] to
ensure that URLs have a trailing slash on invocation of the remote-curl
helper, and use the URL as one with a trailing slash throughout.

It is possible for users to pass a URL with a trailing slash to
remote-curl, by, say, setting it in remote.<name>.url in their git
config. The resulting requests have an empty path component (//) and may
break implementations of the http git protocol.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-09 21:16:11 -07:00
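
end_url_with_slash() lives in http.[ch]; the stand-in below only sketches the behaviour relied on here and is not git's implementation.

    #include <stdlib.h>
    #include <string.h>

    /* Return a copy of url that is guaranteed to end with '/'. */
    static char *with_trailing_slash(const char *url)
    {
        size_t len = strlen(url);
        char *out = malloc(len + 2);

        if (!out)
            return NULL;
        memcpy(out, url, len);
        if (!len || out[len - 1] != '/')
            out[len++] = '/';
        out[len] = '\0';
        return out;
    }
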
Scott Chacon 42653c09c8 Prompt for a username when an HTTP request 401s
When an HTTP request returns a 401, Git will currently fail with a
confusing message saying that it got a 401, which is not very
descriptive.

Currently if a user wants to use Git over HTTP, they have to use one
URL with the username in the URL (e.g. "http://user@host.com/repo.git")
for write access and another without the username for unauthenticated
read access (unless they want to be prompted for the password each
time). However, since the HTTP servers will return a 401 if an action
requires authentication, we can prompt for username and password if we
see this, allowing us to use a single URL for both purposes.

This patch changes http_request to prompt for the username and password,
then return HTTP_REAUTH so http_get_strbuf can try again.  If it gets
a 401 even when a user/pass is supplied, http_request will now return
HTTP_NOAUTH, which remote-curl can then use to display a more
intelligent error message that is less confusing.

Signed-off-by: Scott Chacon <schacon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-01 23:24:59 -07:00
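
A rough sketch of the retry flow described above. The HTTP_* constants echo the ones named in the message; the two helpers are placeholder stubs, not git's http_request() or credential code.

    #include <stdio.h>

    enum { HTTP_OK, HTTP_REAUTH, HTTP_NOAUTH };

    /* Stand-ins for the real request and prompt routines. */
    static int do_http_request(const char *url) { (void)url; return HTTP_OK; }
    static void prompt_for_credentials(void) { }

    /* First 401: ask for a username/password and retry.
     * Second 401: report a clear authentication failure. */
    static int http_get_with_reauth(const char *url)
    {
        int ret = do_http_request(url);

        if (ret == HTTP_REAUTH) {
            prompt_for_credentials();
            ret = do_http_request(url);
        }
        if (ret == HTTP_NOAUTH)
            fprintf(stderr, "authentication failed for '%s'\n", url);
        return ret;
    }
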
Junio C Hamano 78d909a494 Merge branch 'tc/http-cleanup'
* tc/http-cleanup:
  remote-curl: init walker only when needed
  remote-curl: use http_fetch_ref() instead of walker wrapper
  http: init and cleanup separately from http-walker
  http-walker: cleanup more thoroughly
  http-push: remove "|| 1" to enable verbose check
  t554[01]-http-push: refactor, add non-ff tests
  t5541-http-push: check that ref is unchanged for non-ff test
2010-03-15 00:58:50 -07:00
Junio C Hamano a886ba2801 Merge branch 'sp/maint-push-sideband' into maint
* sp/maint-push-sideband:
  receive-pack: Send internal errors over side-band #2
  t5401: Use a bare repository for the remote peer
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe

Conflicts:
	builtin-receive-pack.c
	run-command.c
	t/t5401-update-hooks.sh
2010-03-02 22:54:50 -08:00
Tay Ray Chuan 26e1e0b23a remote-curl: init walker only when needed
Invoke get_http_walker() only when fetching with the dumb protocol.
Additionally, add an invocation to walker_free() after we're done using
the walker.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02 11:10:36 -08:00
Tay Ray Chuan aec4975602 remote-curl: use http_fetch_ref() instead of walker wrapper
The http-walker implementation of walker->fetch_ref() doesn't do
anything special compared to http_fetch_ref() anyway.

Remove init_walker() invocation before fetching the ref, since we aren't
using the walker wrapper and don't need a walker instance anymore.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02 11:10:36 -08:00
Tay Ray Chuan 888692b733 http: init and cleanup separately from http-walker
Previously, all our http operations were done with http-walker. With the
new remote-curl helper, we find ourselves using http methods outside of
http-walker - for example, fetching info/refs.

Accommodate this by separating http_init() and http_cleanup() invocations
from http-walker.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02 11:10:36 -08:00
Junio C Hamano 76d44c8cfd Merge branch 'sp/maint-push-sideband' into sp/push-sideband
* sp/maint-push-sideband:
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe
  Update git fsck --full short description to mention packs

Conflicts:
	run-command.c
2010-02-05 21:08:53 -08:00
Erik Faye-Lund ae6a5609c0 run-command: support custom fd-set in async
This patch adds the possibility to supply a set of non-0 file
descriptors for async process communication instead of the
default-created pipe.

Additionally, we now support bi-directional communication with the
async procedure, by giving the async function both read and write
file descriptors.

To retain compatibility and a similar "API feel" with start_command,
we require start_async callers to set .out = -1 to get a readable
file descriptor.  If either of .in or .out is 0, we supply no file
descriptor to the async process.

[sp: Note: Erik started this patch, and a huge bulk of it is
     his work.  All bugs were introduced later by Shawn.]

Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-02-05 20:57:22 -08:00
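
A simplified sketch of the interface change being described; the struct below is a stand-in, the real "struct async" is declared in run-command.h.

    /* Stand-in for git's struct async after this change. */
    struct async_sketch {
        int (*proc)(int in_fd, int out_fd, void *data);
        void *data;
        int in;   /* 0: let start_async create the input pipe      */
        int out;  /* -1: request a readable fd; 0: no output pipe  */
    };

    /* Caller-side pattern: ask for a readable output fd while keeping
     * the default-created input pipe. */
    static void configure_async(struct async_sketch *a,
                                int (*proc)(int, int, void *), void *data)
    {
        a->proc = proc;
        a->data = data;
        a->in = 0;
        a->out = -1;
    }
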
Junio C Hamano 2d0d706e5f Merge branch 'maint'
* maint:
  merge-recursive: do not return NULL only to cause segfault
  retry request without query when info/refs?query fails
2010-01-21 20:08:31 -08:00
Tay Ray Chuan 703e6e76a1 retry request without query when info/refs?query fails
When "info/refs" is a static file and not behind a CGI handler, some
servers may not handle a GET request for it with a query string
appended (eg. "?foo=bar") properly.

If such a request fails, retry it sans the query string. In addition,
ensure that the "smart" http protocol is not used (a service has to be
specified with "?service=<service name>" to be conformant).

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Reported-and-tested-by: Yaroslav Halchenko <debian@onerussian.com>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-21 15:01:37 -08:00
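
The fallback amounts to dropping the query string and retrying; a minimal sketch (function name invented):

    #include <stdlib.h>
    #include <string.h>

    /* Return a copy of url with everything from '?' onwards removed,
     * e.g. ".../info/refs?service=git-upload-pack" -> ".../info/refs". */
    static char *strip_query(const char *url)
    {
        const char *q = strchr(url, '?');
        size_t len = q ? (size_t)(q - url) : strlen(url);
        char *out = malloc(len + 1);

        if (!out)
            return NULL;
        memcpy(out, url, len);
        out[len] = '\0';
        return out;
    }
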
Junio C Hamano 56eb8b43eb Merge branch 'jc/symbol-static'
* jc/symbol-static:
  date.c: mark file-local function static
  Replace parse_blob() with an explanatory comment
  symlinks.c: remove unused functions
  object.c: remove unused functions
  strbuf.c: remove unused function
  sha1_file.c: remove unused function
  mailmap.c: remove unused function
  utf8.c: mark file-local function static
  submodule.c: mark file-local function static
  quote.c: mark file-local function static
  remote-curl.c: mark file-local function static
  read-cache.c: mark file-local functions static
  parse-options.c: mark file-local function static
  entry.c: mark file-local function static
  http.c: mark file-local functions static
  pretty.c: mark file-local function static
  builtin-rev-list.c: mark file-local function static
  bisect.c: mark file-local function static
2010-01-20 14:37:25 -08:00
Junio C Hamano 054d2fa05c Merge branch 'maint'
* maint:
  remote-curl: Fix Accept header for smart HTTP connections
  grep: -L should show empty files
  rebase--interactive: Ignore comments and blank lines in peek_next_command
2010-01-12 15:48:38 -08:00
Shawn O. Pearce 8efa5f629e remote-curl: Fix Accept header for smart HTTP connections
We actually expect to see an application/x-git-upload-pack-result
but we lied and said we Accept *-response.  This was a typo on my
part when I was writing the code.

Fortunately the wrong Accept header had no real impact, as the
deployed git-http-backend servers were not testing the Accept
header before they returned their content.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-12 13:09:44 -08:00
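
Illustrative only: deriving the Accept header from the service name avoids exactly this kind of typo. Here "svc" would be e.g. "git-upload-pack", and the helper name is invented.

    #include <stdio.h>
    #include <curl/curl.h>

    /* Build "Accept: application/x-<service>-result", e.g.
     * "Accept: application/x-git-upload-pack-result". */
    static struct curl_slist *add_accept(struct curl_slist *headers,
                                         const char *svc)
    {
        char buf[128];

        snprintf(buf, sizeof(buf), "Accept: application/x-%s-result", svc);
        return curl_slist_append(headers, buf);
    }
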
Junio C Hamano 5092d3ec21 remote-curl.c: mark file-local function static
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-12 01:06:08 -08:00
Martin Storsjö 6c81a99082 Allow curl to rewind the RPC read buffer
When using multi-pass authentication methods, the curl library may need
to rewind the read buffers used for providing data to HTTP POST, if data
has been output before a 401 error is received.

This is needed only when the first request (when the multi-pass
authentication method isn't initialized and hasn't received its challenge
yet) for a certain curl session is a chunked HTTP POST.

As long as the current rpc read buffer is the first one, we're able to
rewind without need for additional buffering.

The curl library currently starts sending data without waiting for a
response to the Expect: 100-continue header, due to a bug in curl that
exists up to curl version 7.19.7.

If the HTTP server doesn't handle Expect: 100-continue headers properly
(e.g. Lighttpd), the library has to start sending data without knowing
if the request will be successfully authenticated. In this case, this
rewinding solution is not sufficient - the whole request will be sent
before the 401 error is received.

Signed-off-by: Martin Storsjo <martin@martin.st>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-12-01 14:15:27 -08:00
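
libcurl requests a rewind through its ioctl callback; the sketch below shows that hook for a hypothetical state struct (it is not the rpc_state used by remote-curl.c).

    #include <stddef.h>
    #include <curl/curl.h>

    struct read_state {
        size_t pos;   /* bytes of the current buffer already handed out */
    };

    /* Rewind hook: on CURLIOCMD_RESTARTREAD, replay the current
     * buffer from the start; any other command is not ours. */
    static curlioerr rpc_ioctl(CURL *handle, int cmd, void *clientp)
    {
        struct read_state *s = clientp;

        (void)handle;
        if (cmd == CURLIOCMD_RESTARTREAD) {
            s->pos = 0;
            return CURLIOE_OK;
        }
        return CURLIOE_UNKNOWNCMD;
    }

    /* Registered with:
     *   curl_easy_setopt(curl, CURLOPT_IOCTLFUNCTION, rpc_ioctl);
     *   curl_easy_setopt(curl, CURLOPT_IOCTLDATA, &state);        */
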
Tay Ray Chuan 483106089a remote-curl.c: fix rpc_out()
Remove the extraneous semicolon (';') at the end of the if statement
that allowed the code in its block to execute regardless of the
condition.

This fixes pushing to a smart http backend with chunked encoding.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-23 21:25:55 -08:00
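
The bug class in miniature (not the actual rpc_out() code): the stray semicolon terminates the if, so the braced block always runs.

    /* Before: the ';' ends the if, the block below is unconditional. */
    void buggy(int avail)
    {
        if (!avail);
        {
            /* executed even when avail != 0 */
        }
    }

    /* After: the block is guarded as intended. */
    void fixed(int avail)
    {
        if (!avail) {
            /* executed only when avail == 0 */
        }
    }
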
Martin Storsjö d21f9794ce Disable CURLOPT_NOBODY before enabling CURLOPT_PUT and CURLOPT_POST
This works around a bug in curl versions up to 7.19.4, where disabling the
CURLOPT_NOBODY option sets the internal state incorrectly considering that
CURLOPT_PUT was enabled earlier.

The bug is discussed at http://curl.haxx.se/bug/view.cgi?id=2727981 and is
corrected in the latest version of curl in CVS.

This bug usually has no impact on git, but may surface if using multi-pass
authentication methods.

Signed-off-by: Martin Storsjo <martin@martin.st>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-22 22:56:54 -08:00
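
The workaround is purely a matter of option ordering on the easy handle, sketched here with an invented helper name.

    #include <curl/curl.h>

    /* Clear NOBODY first, then enable POST, so affected curl versions
     * (up to 7.19.4) do not resurrect the earlier PUT state. */
    static void prepare_post(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
    }
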
Junio C Hamano 905bf7742c Merge branch 'sp/smart-http'
* sp/smart-http: (37 commits)
  http-backend: Let gcc check the format of more printf-type functions.
  http-backend: Fix access beyond end of string.
  http-backend: Fix bad treatment of uintmax_t in Content-Length
  t5551-http-fetch: Work around broken Accept header in libcurl
  t5551-http-fetch: Work around some libcurl versions
  http-backend: Protect GIT_PROJECT_ROOT from /../ requests
  Git-aware CGI to provide dumb HTTP transport
  http-backend: Test configuration options
  http-backend: Use http.getanyfile to disable dumb HTTP serving
  test smart http fetch and push
  http tests: use /dumb/ URL prefix
  set httpd port before sourcing lib-httpd
  t5540-http-push: remove redundant fetches
  Smart HTTP fetch: gzip requests
  Smart fetch over HTTP: client side
  Smart push over HTTP: client side
  Discover refs via smart HTTP server when available
  http-backend: more explict LocationMatch
  http-backend: add example for gitweb on same URL
  http-backend: use mod_alias instead of mod_rewrite
  ...

Conflicts:
	.gitignore
	remote-curl.c
2009-11-20 23:51:23 -08:00
Shawn O. Pearce b8538603a3 Smart HTTP fetch: gzip requests
The upload-pack requests are mostly plain text and they compress
rather well.  Deflating them with Content-Encoding: gzip can easily
drop the size of the request by 50%, reducing the amount of data
to transfer as we negotiate the common commits.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04 17:58:15 -08:00
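
A sketch of the compression step, using zlib's gzip wrapper together with a "Content-Encoding: gzip" request header; buffer management is simplified compared to the real helper.

    #include <string.h>
    #include <zlib.h>

    /* Compress in[0..in_len) into out[]; windowBits 15+16 selects a
     * gzip (not raw deflate) stream.  Returns the compressed size,
     * or 0 if out[] is too small or initialization fails. */
    static size_t gzip_body(unsigned char *in, size_t in_len,
                            unsigned char *out, size_t out_len)
    {
        z_stream z;

        memset(&z, 0, sizeof(z));
        if (deflateInit2(&z, Z_BEST_COMPRESSION, Z_DEFLATED,
                         15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
            return 0;
        z.next_in = in;
        z.avail_in = (uInt)in_len;
        z.next_out = out;
        z.avail_out = (uInt)out_len;
        if (deflate(&z, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&z);
            return 0;
        }
        deflateEnd(&z);
        return out_len - z.avail_out;
    }
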
Shawn O. Pearce 249b2004d8 Smart fetch over HTTP: client side
The git-remote-curl backend detects if the remote server supports
the git-upload-pack service, and if so, runs git-fetch-pack locally
in a pipe to generate the want/have commands.

The advertisements from the server that were obtained during the
discovery are passed into git-fetch-pack before the POST request
starts, permitting server capability discovery and enablement.

Common objects that are discovered are appended onto the request as
have lines and are sent again on the next request.  This allows the
remote side to reinitialize its in-memory list of common objects
during the next request.

Because all requests are relatively short, below git-remote-curl's
1 MiB buffer limit, requests will use the standard Content-Length
header and be valid HTTP/1.0 POST requests.  This makes the fetch
client more tolerant of proxy servers which don't support HTTP/1.1
or the chunked transfer encoding.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04 17:58:15 -08:00
Shawn O. Pearce de1a2fdd38 Smart push over HTTP: client side
The git-remote-curl backend detects if the remote server supports
the git-receive-pack service, and if so, runs git-send-pack in a
pipe to dump the command and pack data as a single POST request.

The advertisements from the server that were obtained during the
discovery are passed into git-send-pack before the POST request
starts.  This permits git-send-pack to operate largely unmodified.

For smaller packs (those under 1 MiB) an HTTP/1.0 POST with a
Content-Length is used, permitting interaction with any server.
The 1 MiB limit is arbitrary, but is sufficient to fit most deltas
created by human authors against text sources with the occasional
small binary file (e.g. a few KiB icon image).  The configuration
option http.postBuffer can be used to increase (or shrink) this
buffer if the default is not sufficient.

For larger packs which cannot be spooled entirely into the helper's
memory space (due to http.postBuffer being too small), the POST
request requires HTTP/1.1 and sets "Transfer-Encoding: chunked".
This permits the client to upload an unknown amount of data in one
HTTP transaction without needing to pregenerate the entire pack
file locally.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04 17:58:15 -08:00
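
The size-based choice can be sketched as below; post_buffer stands in for http.postBuffer, the streaming read callback is omitted, and the function name is invented.

    #include <stddef.h>
    #include <curl/curl.h>

    /* Small bodies: buffered POST with Content-Length (HTTP/1.0-safe).
     * Large bodies: announce chunked transfer and stream via a read
     * callback instead of spooling the whole pack in memory. */
    static struct curl_slist *setup_post(CURL *curl, const char *buf,
                                         size_t len, size_t post_buffer,
                                         struct curl_slist *headers)
    {
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        if (len <= post_buffer) {
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, buf);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)len);
        } else {
            headers = curl_slist_append(headers,
                                        "Transfer-Encoding: chunked");
            /* a CURLOPT_READFUNCTION would feed the pack data here */
        }
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        return headers;
    }
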
Shawn O. Pearce 97cc7bc45c Discover refs via smart HTTP server when available
Instead of loading the cached info/refs, try to use the smart HTTP
version when the server supports it.  Since the smart variant is
actually the pkt-line stream from the start of either upload-pack
or receive-pack, we need to parse these through get_remote_heads,
which requires a background thread to feed its pipe.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04 17:58:15 -08:00
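
For illustration, the discovery request and the smart/dumb distinction look roughly like this; the helper names are invented.

    #include <stdio.h>
    #include <string.h>

    /* GET $URL/info/refs?service=git-upload-pack (url ends in '/'). */
    static void refs_url(char *out, size_t n, const char *url, const char *svc)
    {
        snprintf(out, n, "%sinfo/refs?service=%s", url, svc);
    }

    /* A smart server labels the advertisement with a matching
     * Content-Type; anything else is treated as the dumb protocol. */
    static int is_smart(const char *content_type, const char *svc)
    {
        char expect[128];

        snprintf(expect, sizeof(expect),
                 "application/x-%s-advertisement", svc);
        return !strcmp(content_type, expect);
    }
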
Daniel Barkalow a45d3d7eff Allow curl helper to work without a local repository
It's okay to use the curl helper without a local repository, so long
as you don't use "fetch". There aren't any git programs that would try
to use it, and it doesn't make sense to try it (since there's nowhere
to write the results), but we may as well be clear.

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-03 21:41:01 -08:00
Shawn O. Pearce ae4efe1957 Move WebDAV HTTP push under remote-curl
The remote helper interface now supports the push capability,
which can be used to ask the implementation to push one or more
specs to the remote repository.  For remote-curl we implement this
by calling the existing WebDAV based git-http-push executable.

Internally the helper interface uses the push_refs transport hook
so that the complexity of the refspec parsing and matching can be
reused between remote implementations.  When possible, however, the
helper protocol uses the source ref name rather than the source SHA-1,
thereby allowing the helper to access this name if it is useful.

From Clemens Buchacher <drizzd@aon.at>:
 update http tests according to remote-curl capabilities

 o Pushing packed refs is now fixed.

 o The transport helper fails if refs are already up-to-date. Add
   a test for that.

 o The transport helper will notice if refs are already
   up-to-date. We therefore need to update server info in the
   unpacked-refs test.

 o The transport helper will purge deleted branches automatically.

 o Use a variable ($ORIG_HEAD) instead of full SHA-1 name.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
CC: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30 19:20:54 -07:00
Shawn O. Pearce ef08ef9ea0 remote-helpers: Support custom transport options
Some transports, like the native pack transport implemented by
fetch-pack, support useful features like depth or include tags.
These should be exposed if the underlying helper knows how to
use them.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30 19:20:54 -07:00
Shawn O. Pearce 292ce46b60 remote-helpers: Fetch more than one ref in a batch
Some network protocols (e.g. native git://) are able to fetch more
than one ref at a time and reduce the overall transfer cost by
combining the requests into a single exchange.  Instead of feeding
each fetch request one at a time to the helper, feed all of them
at once so the helper can decide whether or not it should batch them.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30 19:20:54 -07:00
Shawn O. Pearce 37a8768f83 remote-curl: Refactor walker initialization
We will need the walker, url and remote in other functions as the
code grows larger to support smart HTTP.  Extract this out into a
set of globals we can easily reference once configured.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30 19:20:54 -07:00
Björn Steinbrink d01a8e32fe clone: Supply the right commit hash to post-checkout when -b is used
When we use -b <branch>, we may checkout something else than what the
remote's HEAD references, but we still used remote_head to supply the
new ref value to the post-checkout hook, which is wrong.

So instead of using remote_head to find the value to be passed to the
post-checkout hook, we have to use our_head_points_at, which is always
correctly setup, even if -b is not used.

This also fixes a segfault when "clone -b <branch>" is used with a
remote repo that doesn't have a valid HEAD, as in such a case
remote_head is NULL, but we still tried to access it.

Reported-by: Devin Cofer <ranguvar@archlinux.us>
Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-14 01:19:15 -07:00
Johannes Sixt c6dfb39944 remote-curl: add missing initialization of argv0_path
All programs, in particular also the stand-alone programs (non-builtins)
must call git_extract_argv0_path(argv[0]) in order to help builds that
derive the installation prefix at runtime, such as the MinGW build.
Without this call, the program segfaults (or raises an assertion
failure).

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Tested-by: Michael Wookey <michaelwookey@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-13 23:24:58 -07:00
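
The required pattern, sketched; the prototype is approximated from git's exec_cmd.h of that era and is not quoted from the commit.

    /* Approximate prototype; the real one lives in git's exec_cmd.h. */
    extern const char *git_extract_argv0_path(const char *path);

    int main(int argc, const char **argv)
    {
        git_extract_argv0_path(argv[0]);
        /* ... the rest of the helper's startup ... */
        (void)argc;
        return 0;
    }
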
Daniel Barkalow a2d725b7bd Use an external program to implement fetching with curl
Use the transport native helper mechanism to fetch by http (and ftp, etc).

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-08-05 10:34:09 -07:00