Add builtin "git rm" command

This changes semantics very subtly, because it adds a new atomicity
guarantee.

In particular, if you "git rm" several files, it will now do all or
nothing. The old shell-script really looped over the removed files one by
one, and would basically randomly fail in the middle if "-f" was used and
one of the files didn't exist in the working directory.

This C builtin one will not re-write the index after each remove, but
instead remove all files at once. However, that means that if "-f" is used
(to also force removal of the file from the working directory), and some
files have already been removed from the workspace, it won't stop in the
middle in some half-way state like the old one did.

So what happens is that if the _first_ file fails to be removed with "-f",
we abort the whole "git rm". But once we've started removing, we don't
leave anything half done. If some of the other files don't exist, we'll
just ignore errors of removal from the working tree.

This is only an issue with "-f", of course.

I think the new behaviour is strictly an improvement, but perhaps more
importantly, it is _different_. As a special case, the semantics are
identical for the single-file case (which is the only one our test-suite
seems to test).

The other question is what to do with leading directories. The old "git
rm" script didn't do anything, which is somewhat inconsistent. This one
will actually clean up directories that have become empty as a result of
removing the last file, but maybe we want to have a flag to decide the
behaviour?

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-20 03:19:34 +04:00
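
The all-or-nothing behaviour described above comes down to editing only
the in-memory index for each path and writing the index file once at the
end. A minimal sketch of that pattern, using git's internal index API;
the helper name and its error handling are illustrative only, not part
of the builtin that follows:

static int remove_all_or_nothing(const char **paths, int nr)
{
	struct lock_file lock = LOCK_INIT;
	int i;

	/* take the index lock up front; nothing is written yet */
	hold_locked_index(&lock, LOCK_DIE_ON_ERROR);
	if (read_cache() < 0)
		return error("index file corrupt");

	/* drop every path from the in-memory index first ... */
	for (i = 0; i < nr; i++)
		if (remove_file_from_cache(paths[i]))
			return error("unable to remove %s", paths[i]);

	/* ... and only then rewrite the on-disk index, exactly once */
	return write_locked_index(&the_index, &lock, COMMIT_LOCK);
}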
/*
 * "git rm" builtin command
 *
 * Copyright (C) Linus Torvalds 2006
 */
#define USE_THE_INDEX_COMPATIBILITY_MACROS
#include "builtin.h"
#include "advice.h"
#include "config.h"
#include "lockfile.h"
#include "dir.h"
#include "cache-tree.h"
#include "tree-walk.h"
#include "parse-options.h"
#include "string-list.h"
#include "submodule.h"
#include "pathspec.h"

static const char * const builtin_rm_usage[] = {
	N_("git rm [<options>] [--] <file>..."),
	NULL
};
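
/*
 * Paths selected for removal, collected by cmd_rm() below;
 * check_local_mod() and submodules_absorb_gitdir_if_needed()
 * both iterate over this single list.
 */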

static struct {
	int nr, alloc;
	struct {
		const char *name;
		char is_submodule;
	} *entry;
} list;
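
/*
 * For an unmerged path, cache_name_pos() returns a negative value
 * that encodes the insertion point; scan forward from there for the
 * stage-2 ("ours") entry and return its position, or -1 if the path
 * has no such entry.
 */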

static int get_ours_cache_pos(const char *path, int pos)
{
	int i = -pos - 1;

	while ((i < active_nr) && !strcmp(active_cache[i]->name, path)) {
		if (ce_stage(active_cache[i]) == 2)
			return i;
		i++;
	}
	return -1;
}

static void print_error_files(struct string_list *files_list,
			      const char *main_msg,
			      const char *hints_msg,
			      int *errs)
{
	if (files_list->nr) {
		int i;
		struct strbuf err_msg = STRBUF_INIT;

		strbuf_addstr(&err_msg, main_msg);
		for (i = 0; i < files_list->nr; i++)
			strbuf_addf(&err_msg,
				    "\n    %s",
				    files_list->items[i].string);
		if (advice_rm_hints)
			strbuf_addstr(&err_msg, hints_msg);
		*errs = error("%s", err_msg.buf);
		strbuf_release(&err_msg);
	}
}
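
/*
 * Submodules that are about to be removed and still have their
 * repository embedded in the working tree (i.e. they do not use a
 * gitfile pointing into .git/modules/) get that repository absorbed
 * into the superproject first, so removing the directory cannot
 * throw away their history.
 */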

static void submodules_absorb_gitdir_if_needed(void)
{
	int i;
	for (i = 0; i < list.nr; i++) {
		const char *name = list.entry[i].name;
		int pos;
		const struct cache_entry *ce;

		pos = cache_name_pos(name, strlen(name));
		if (pos < 0) {
			pos = get_ours_cache_pos(name, pos);
			if (pos < 0)
				continue;
		}
		ce = active_cache[pos];

		if (!S_ISGITLINK(ce->ce_mode) ||
		    !file_exists(ce->name) ||
		    is_empty_dir(name))
			continue;

		if (!submodule_uses_gitfile(name))
			absorb_git_dir_into_superproject(name,
				ABSORB_GITDIR_RECURSE_SUBMODULES);
	}
}
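
/*
 * Called from cmd_rm() when the removal is not forced: reports every
 * path whose removal would discard information (staged content that
 * differs from both the file and HEAD, changes staged in the index,
 * or local modifications in the working tree) and returns non-zero
 * if any such path was found.
 */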

static int check_local_mod(struct object_id *head, int index_only)
{
	/*
	 * Items in list are already sorted in the cache order,
	 * so we could do this a lot more efficiently by using
	 * tree_desc based traversal if we wanted to, but I am
	 * lazy, and who cares if removal of files is a tad
	 * slower than the theoretical maximum speed?
	 */
	int i, no_head;
	int errs = 0;
	struct string_list files_staged = STRING_LIST_INIT_NODUP;
	struct string_list files_cached = STRING_LIST_INIT_NODUP;
	struct string_list files_local = STRING_LIST_INIT_NODUP;

	no_head = is_null_oid(head);
	for (i = 0; i < list.nr; i++) {
		struct stat st;
		int pos;
		const struct cache_entry *ce;
		const char *name = list.entry[i].name;
		struct object_id oid;
		unsigned short mode;
		int local_changes = 0;
		int staged_changes = 0;

		pos = cache_name_pos(name, strlen(name));
		if (pos < 0) {
			/*
			 * Skip unmerged entries except for populated submodules
			 * that could lose history when removed.
			 */
			pos = get_ours_cache_pos(name, pos);
			if (pos < 0)
				continue;

			if (!S_ISGITLINK(active_cache[pos]->ce_mode) ||
			    is_empty_dir(name))
				continue;
		}
		ce = active_cache[pos];

		if (lstat(ce->name, &st) < 0) {
			if (!is_missing_file_error(errno))
				warning_errno(_("failed to stat '%s'"), ce->name);
			/* It already vanished from the working tree */
			continue;
		}
		else if (S_ISDIR(st.st_mode)) {
			/* if a file was removed and it is now a
			 * directory, that is the same as ENOENT as
			 * far as git is concerned; we do not track
			 * directories unless they are submodules.
			 */
			if (!S_ISGITLINK(ce->ce_mode))
				continue;
		}

		/*
		 * "rm" of a path that has changes needs to be treated
		 * carefully not to allow losing local changes
		 * accidentally.  A local change could be (1) file in
		 * work tree is different since the index; and/or (2)
		 * the user staged a content that is different from
		 * the current commit in the index.
		 *
		 * In such a case, you would need to --force the
		 * removal.  However, "rm --cached" (remove only from
		 * the index) is safe if the index matches the file in
		 * the work tree or the HEAD commit, as it means that
		 * the content being removed is available elsewhere.
		 */

		/*
		 * Is the index different from the file in the work tree?
		 * If it's a submodule, is its work tree modified?
		 */
		if (ce_match_stat(ce, &st, 0) ||
		    (S_ISGITLINK(ce->ce_mode) &&
		     bad_to_remove_submodule(ce->name,
				SUBMODULE_REMOVAL_DIE_ON_ERROR |
				SUBMODULE_REMOVAL_IGNORE_IGNORED_UNTRACKED)))
			local_changes = 1;

		/*
		 * Is the index different from the HEAD commit?  By
		 * definition, before the very initial commit,
		 * anything staged in the index is treated by the same
		 * way as changed from the HEAD.
		 */
		if (no_head
		     || get_tree_entry(the_repository, head, name, &oid, &mode)
		     || ce->ce_mode != create_ce_mode(mode)
		     || !oideq(&ce->oid, &oid))
			staged_changes = 1;

		/*
		 * If the index does not match the file in the work
		 * tree and if it does not match the HEAD commit
		 * either, (1) "git rm" without --cached definitely
		 * will lose information; (2) "git rm --cached" will
		 * lose information unless it is about removing an
		 * "intent to add" entry.
		 */
		if (local_changes && staged_changes) {
			if (!index_only || !ce_intent_to_add(ce))
				string_list_append(&files_staged, name);
		}
		else if (!index_only) {
			if (staged_changes)
				string_list_append(&files_cached, name);
			if (local_changes)
				string_list_append(&files_local, name);
		}
	}
	print_error_files(&files_staged,
			  Q_("the following file has staged content different "
			     "from both the\nfile and the HEAD:",
			     "the following files have staged content different"
			     " from both the\nfile and the HEAD:",
			     files_staged.nr),
			  _("\n(use -f to force removal)"),
			  &errs);
	string_list_clear(&files_staged, 0);
	print_error_files(&files_cached,
			  Q_("the following file has changes "
			     "staged in the index:",
			     "the following files have changes "
			     "staged in the index:", files_cached.nr),
			  _("\n(use --cached to keep the file,"
			    " or -f to force removal)"),
			  &errs);
	string_list_clear(&files_cached, 0);

	print_error_files(&files_local,
			  Q_("the following file has local modifications:",
			     "the following files have local modifications:",
			     files_local.nr),
			  _("\n(use --cached to keep the file,"
			    " or -f to force removal)"),
			  &errs);
	string_list_clear(&files_local, 0);

	return errs;
}

static int show_only = 0, force = 0, index_only = 0, recursive = 0, quiet = 0;
static int ignore_unmatch = 0, pathspec_file_nul;
static char *pathspec_from_file;

static struct option builtin_rm_options[] = {
	OPT__DRY_RUN(&show_only, N_("dry run")),
	OPT__QUIET(&quiet, N_("do not list removed files")),
	OPT_BOOL(0, "cached", &index_only, N_("only remove from the index")),
	OPT__FORCE(&force, N_("override the up-to-date check"), PARSE_OPT_NOCOMPLETE),
	OPT_BOOL('r', NULL, &recursive, N_("allow recursive removal")),
	OPT_BOOL(0, "ignore-unmatch", &ignore_unmatch,
		 N_("exit with a zero status even if nothing matched")),
	OPT_PATHSPEC_FROM_FILE(&pathspec_from_file),
	OPT_PATHSPEC_FILE_NUL(&pathspec_file_nul),
	OPT_END(),
};
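
/*
 * cmd_rm() parses the options and pathspec, takes the index lock,
 * collects every matching index entry into "list", refuses (unless
 * forced) anything that check_local_mod() flags as risky, and only
 * then performs the removals under the single lock taken here, so
 * the index is rewritten at most once.
 */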

int cmd_rm(int argc, const char **argv, const char *prefix)
{
	struct lock_file lock_file = LOCK_INIT;
	int i, ret = 0;
	struct pathspec pathspec;
	char *seen;

	git_config(git_default_config, NULL);

	argc = parse_options(argc, argv, prefix, builtin_rm_options,
			     builtin_rm_usage, 0);

	parse_pathspec(&pathspec, 0,
		       PATHSPEC_PREFER_CWD,
		       prefix, argv);

	if (pathspec_from_file) {
		if (pathspec.nr)
			die(_("--pathspec-from-file is incompatible with pathspec arguments"));

		parse_pathspec_file(&pathspec, 0,
				    PATHSPEC_PREFER_CWD,
				    prefix, pathspec_from_file, pathspec_file_nul);
	} else if (pathspec_file_nul) {
		die(_("--pathspec-file-nul requires --pathspec-from-file"));
	}

	if (!pathspec.nr)
		die(_("No pathspec was given. Which files should I remove?"));

	if (!index_only)
		setup_work_tree();

	hold_locked_index(&lock_file, LOCK_DIE_ON_ERROR);

	if (read_cache() < 0)
		die(_("index file corrupt"));

	refresh_index(&the_index, REFRESH_QUIET|REFRESH_UNMERGED, &pathspec, NULL, NULL);

	seen = xcalloc(pathspec.nr, 1);

	/* TODO: audit for interaction with sparse-index. */
	ensure_full_index(&the_index);
	for (i = 0; i < active_nr; i++) {
		const struct cache_entry *ce = active_cache[i];
		if (ce_skip_worktree(ce))
			continue;
		if (!ce_path_match(&the_index, ce, &pathspec, seen))
			continue;
		ALLOC_GROW(list.entry, list.nr + 1, list.alloc);
		list.entry[list.nr].name = xstrdup(ce->name);
		list.entry[list.nr].is_submodule = S_ISGITLINK(ce->ce_mode);
		if (list.entry[list.nr++].is_submodule &&
		    !is_staging_gitmodules_ok(&the_index))
			die(_("please stage your changes to .gitmodules or stash them to proceed"));
	}

	if (pathspec.nr) {
		const char *original;
		int seen_any = 0;
		char *skip_worktree_seen = NULL;
		struct string_list only_match_skip_worktree = STRING_LIST_INIT_NODUP;

		for (i = 0; i < pathspec.nr; i++) {
			original = pathspec.items[i].original;
			if (seen[i])
				seen_any = 1;
			else if (ignore_unmatch)
				continue;
			else if (matches_skip_worktree(&pathspec, i, &skip_worktree_seen))
				string_list_append(&only_match_skip_worktree, original);
			else
				die(_("pathspec '%s' did not match any files"), original);

			if (!recursive && seen[i] == MATCHED_RECURSIVELY)
				die(_("not removing '%s' recursively without -r"),
				    *original ? original : ".");
		}

		if (only_match_skip_worktree.nr) {
			advise_on_updating_sparse_paths(&only_match_skip_worktree);
			ret = 1;
		}
		free(skip_worktree_seen);
		string_list_clear(&only_match_skip_worktree, 0);

		if (!seen_any)
			exit(ret);
Add builtin "git rm" command
This changes semantics very subtly, because it adds a new atomicity
guarantee.
In particular, if you "git rm" several files, it will now do all or
nothing. The old shell-script really looped over the removed files one by
one, and would basically randomly fail in the middle if "-f" was used and
one of the files didn't exist in the working directory.
This C builtin one will not re-write the index after each remove, but
instead remove all files at once. However, that means that if "-f" is used
(to also force removal of the file from the working directory), and some
files have already been removed from the workspace, it won't stop in the
middle in some half-way state like the old one did.
So what happens is that if the _first_ file fails to be removed with "-f",
we abort the whole "git rm". But once we've started removing, we don't
leave anything half done. If some of the other files don't exist, we'll
just ignore errors of removal from the working tree.
This is only an issue with "-f", of course.
I think the new behaviour is strictly an improvement, but perhaps more
importantly, it is _different_. As a special case, the semantics are
identical for the single-file case (which is the only one our test-suite
seems to test).
The other question is what to do with leading directories. The old "git
rm" script didn't do anything, which is somewhat inconsistent. This one
will actually clean up directories that have become empty as a result of
removing the last file, but maybe we want to have a flag to decide the
behaviour?
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-20 03:19:34 +04:00
|
|
|
}
|
2021-04-25 17:16:19 +03:00
|
|
|
clear_pathspec(&pathspec);
|
|
|
|
free(seen);
|
Add builtin "git rm" command
This changes semantics very subtly, because it adds a new atomicity
guarantee.
In particular, if you "git rm" several files, it will now do all or
nothing. The old shell-script really looped over the removed files one by
one, and would basically randomly fail in the middle if "-f" was used and
one of the files didn't exist in the working directory.
This C builtin one will not re-write the index after each remove, but
instead remove all files at once. However, that means that if "-f" is used
(to also force removal of the file from the working directory), and some
files have already been removed from the workspace, it won't stop in the
middle in some half-way state like the old one did.
So what happens is that if the _first_ file fails to be removed with "-f",
we abort the whole "git rm". But once we've started removing, we don't
leave anything half done. If some of the other files don't exist, we'll
just ignore errors of removal from the working tree.
This is only an issue with "-f", of course.
I think the new behaviour is strictly an improvement, but perhaps more
importantly, it is _different_. As a special case, the semantics are
identical for the single-file case (which is the only one our test-suite
seems to test).
The other question is what to do with leading directories. The old "git
rm" script didn't do anything, which is somewhat inconsistent. This one
will actually clean up directories that have become empty as a result of
removing the last file, but maybe we want to have a flag to decide the
behaviour?
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-20 03:19:34 +04:00
|
|
|
|
2016-12-27 22:03:14 +03:00
|
|
|
	if (!index_only)
		submodules_absorb_gitdir_if_needed();

	/*
	 * If not forced, the file, the index and the HEAD (if it exists)
	 * must match; but the file can already have been removed, since
	 * this sequence is a natural "novice" way:
	 *
	 *         rm F; git rm F
	 *
	 * Further, if a HEAD commit exists, "diff-index --cached" must
	 * report no changes unless forced.
	 */
	if (!force) {
		struct object_id oid;
		if (get_oid("HEAD", &oid))
			oidclr(&oid);
		if (check_local_mod(&oid, index_only))
			exit(1);
	}

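	/*
	 * A rough illustration of the check above (hypothetical file F,
	 * tracked, with its staged copy matching HEAD):
	 *
	 *	$ echo change >>F
	 *	$ git rm F           # refused: F has local modifications
	 *	$ git rm --cached F  # ok: the staged copy still matches HEAD
	 *	$ git rm -f F        # ok: -f skips check_local_mod() entirely
	 */
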
	/*
	 * First remove the names from the index: we won't commit
	 * the index unless all of them succeed.
	 */
	for (i = 0; i < list.nr; i++) {
		const char *path = list.entry[i].name;
		if (!quiet)
			printf("rm '%s'\n", path);

		if (remove_file_from_cache(path))
			die(_("git rm: unable to remove %s"), path);
	}

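	/*
	 * Note that remove_file_from_cache() above only updates the
	 * in-memory index; nothing is written to disk until the single
	 * write_locked_index() call at the end of this function, which is
	 * what makes the index update all-or-nothing.  With "-n"/"--dry-run"
	 * (show_only) we return just below, before the on-disk index or the
	 * working tree is touched.
	 */
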
	if (show_only)
		return 0;

	/*
	 * Then, unless we used "--cached", remove the filenames from
	 * the workspace. If we fail to remove the first one, we
	 * abort the "git rm" (but once we've successfully removed
	 * any file at all, we'll go ahead and commit to it all:
	 * by then we've already committed ourselves and can't fail
	 * in the middle)
	 */
	if (!index_only) {
		int removed = 0, gitmodules_modified = 0;
		struct strbuf buf = STRBUF_INIT;
		for (i = 0; i < list.nr; i++) {
			const char *path = list.entry[i].name;
			if (list.entry[i].is_submodule) {
				strbuf_reset(&buf);
				strbuf_addstr(&buf, path);
				if (remove_dir_recursively(&buf, 0))
					die(_("could not remove '%s'"), path);

				removed = 1;
				if (!remove_path_from_gitmodules(path))
					gitmodules_modified = 1;
				continue;
			}
			if (!remove_path(path)) {
				removed = 1;
				continue;
			}
			if (!removed)
				die_errno("git rm: '%s'", path);
		}
		strbuf_release(&buf);
		if (gitmodules_modified)
			stage_updated_gitmodules(&the_index);
	}

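	/*
	 * In the block above, remove_path() unlinks the file and also prunes
	 * any leading directories that become empty as a result.  A failure
	 * to remove a path is fatal only while nothing has been removed yet;
	 * once the first removal has succeeded, later errors are ignored.
	 * Submodule working trees are removed recursively, and the updated
	 * .gitmodules file is staged via stage_updated_gitmodules().
	 */
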
	if (write_locked_index(&the_index, &lock_file,
			       COMMIT_LOCK | SKIP_IF_UNCHANGED))
		die(_("Unable to write new index file"));

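	/*
	 * COMMIT_LOCK renames the index lock file into place once the new
	 * index has been written; SKIP_IF_UNCHANGED drops the lock and skips
	 * the write entirely when the in-memory index was never modified.
	 */
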
	return ret;
}