Now that we call test_oid_init in the setup for all test scripts,
there's no point in calling it individually. Remove all of the places
where we've done so to help keep tests tidy.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Reviewed-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Ensure that we pass the object-format capability in the synthesized test
data so that this test works with algorithms other than SHA-1.
In addition, add a test that uses the old data when SHA-1 is in use, so
that we can be sure we preserve backwards compatibility with servers not
offering the object-format capability.
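As a rough illustration, the first pkt-line of the synthesized
advertisement now carries the capability for the algorithm under test;
the ref name, the $tip_oid and $algo variables, and the surrounding
packet layout below are illustrative, not the test's literal code:

    # advertise object-format so the client accepts non-SHA-1 object
    # IDs; $algo would be sha1 or sha256 depending on the test setup
    printf "%s refs/heads/main\0object-format=%s\n" "$tip_oid" "$algo" |
    packetize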
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The packetize() function takes its input on stdin, and requires 4
separate sub-processes to format a simple string. We can do much better
by getting the length via the shell's "${#packet}" construct. The one
caveat is that the shell can't put a NUL into a variable, so we'll have
to continue to provide the stdin form for a few calls.
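A minimal sketch of the resulting argument form (not necessarily the
helper exactly as committed):

    # format a pkt-line without forking: four hex digits giving the
    # length of the payload plus the 4 length bytes, then the payload
    packetize () {
        packet="$*"
        printf '%04x%s' "$((4 + ${#packet}))" "$packet"
    }

so that, for example, 'packetize "hello"' emits "0009hello" without
spawning a single sub-process.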
There are a few other cleanups here in the touched code:
- the stdin form of packetize() had an extra stray "%s" when printing
the packet
- the converted calls in t5562 can be made simpler by redirecting
output as a block, rather than repeated appending
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test uses $_z40 to express an all-zeros object ID, which doesn't
work for SHA-256. Use $ZERO_OID instead, which is the right size for
all hash values.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* mk/t5562-no-input-to-too-large-an-input-test:
t5562: do not depend on /dev/zero
Revert "t5562: replace /dev/zero with a pipe from generate_zero_bytes"
Some expected failures of git-http-backend leave its children
(receive-pack or upload-pack) running; those children still hold open
descriptors to act.err, and with some probability they live long enough
to write their failure messages there after the next test has already
truncated the file. This causes occasional failures of the test script.
Avoid the issue by using separate output and error files for each test,
appending the test number to their names.
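A sketch of the resulting pattern; the variable names are illustrative,
and $test_count is assumed to be the test framework's per-test counter:

    # give each invocation its own files so a child still running from
    # a previous test cannot scribble over the current test's output
    # (the CGI environment setup is omitted here)
    act_out=act.out.$test_count
    act_err=act.err.$test_count
    git http-backend >"$act_out" 2>"$act_err"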
Reported-by: Carlo Arenas <carenas@gmail.com>
Helped-by: Carlo Arenas <carenas@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It was reported [1] that the NonStop platform does not have /dev/zero.
The test uses /dev/zero as a dummy input. The passing case (http-backend
fails because the input size is too big) should not read anything from
it, and if http-backend were to erroneously try to read any data,
returning EOF would probably be even safer than providing meaningless
data.
Replace /dev/zero with /dev/null to avoid issues with platforms which do
not have /dev/zero.
[1] https://public-inbox.org/git/20190209185930.5256-4-randall.s.becker@rogers.com/
Reported-by: Randall S. Becker <rsbecker@nexbridge.com>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Revert cc95bc20 ("t5562: replace /dev/zero with a pipe from
generate_zero_bytes", 2019-02-09), as not feeding anything to the
command is a better way to test it.
To help platforms that lack /dev/zero (e.g. NonStop), replace use
of /dev/zero to feed "git http-backend" with a pipe of output from
the generate_zero_bytes helper.
Signed-off-by: Randall S. Becker <rsbecker@nexbridge.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some systems do not have perl installed to /usr/bin. Use the variable
from the build settings, and call perl directly rather than via the
shebang line.
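Roughly, each invocation becomes something like the following; the
helper script name and its arguments are illustrative, while $PERL_PATH
and $TEST_DIRECTORY are assumed to come from the build settings and the
test framework:

    # use the perl chosen at build time instead of relying on the
    # helper's /usr/bin/perl shebang line
    "$PERL_PATH" "$TEST_DIRECTORY"/t5562/invoke-with-content-length.pl \
        body.file git http-backend >act.out 2>act.err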
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a test of smart HTTP, so it should use the smart HTTP endpoints
(e.g. /info/refs?service=git-receive-pack), not dumb HTTP (HEAD).
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
According to RFC 3875, an empty environment variable is equivalent to
an unset one, and for CONTENT_LENGTH that should mean a zero-length body
to read. However, an unset CONTENT_LENGTH is also used with chunked
encoding to indicate reading until EOF. At least, the test "large
fetch-pack requests can be split across POSTs" from t5551 starts failing
if an unset or empty CONTENT_LENGTH is treated as a zero-length body, so
keep the existing behavior as much as possible.
Add a test for the case.
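A sketch of such a test invocation; the CGI environment, repository
path and body file below are illustrative rather than the test's
literal code:

    # an empty CONTENT_LENGTH must keep behaving like an unset one,
    # i.e. the body is read until EOF, not treated as zero-length
    env \
        CONTENT_LENGTH= \
        REQUEST_METHOD=POST \
        CONTENT_TYPE=application/x-git-upload-pack-request \
        PATH_TRANSLATED="$PWD/.git/git-upload-pack" \
        git http-backend <fetch_body >act.out 2>act.err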
Reported-by: Jelmer Vernooij <jelmer@jelmer.uk>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 6c213e863a ("http-backend: respect CONTENT_LENGTH for
receive-pack", 2018-07-27) added a test which uses the non-portable
"export FOO=bar" construct. Replace it with "FOO=bar && export FOO"
instead.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Push passes its input on to other commands, as described in
https://public-inbox.org/git/20171129032214.GB32345@sigill.intra.peff.net/
As it would be complicated to correctly track the data length there,
instead transfer the data through the parent process and cut the pipe
once the specified length is reached. Do this only when CONTENT_LENGTH
is set; otherwise pass the input directly to the forked commands.
Add tests for these cases:
* CONTENT_LENGTH is set and the script's stdin has more data, in all
  combinations of: fetch or push, plain or compressed body, correct or
  truncated input (one such case is sketched below).
* CONTENT_LENGTH is set to a value which does not fit into ssize_t.
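One of the new push cases could look roughly like this; the file names,
the $body_len variable and the exact CGI environment are illustrative,
not the test's literal code:

    # stdin carries more data than CONTENT_LENGTH declares; the backend
    # must hand receive-pack only $body_len bytes and ignore the rest
    env \
        REQUEST_METHOD=POST \
        CONTENT_TYPE=application/x-git-receive-pack-request \
        CONTENT_LENGTH="$body_len" \
        PATH_TRANSLATED="$PWD/.git/git-receive-pack" \
        git http-backend <body-with-trailing-junk >act.out 2>act.err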
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>