[CIFS] Incorrect hardlink count when original file is cached (oplocked)

Fixes Samba bug 2823

In this case the hardlink count is stale for one of the two inodes (i.e. the
original file) until it is closed, since revalidate does not go to the
server while the file is cached (oplocked) locally.
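For illustration only (not part of the patch): a minimal user-space sketch of
the symptom, assuming a CIFS share mounted at /mnt/cifs and made-up file names.
While the original file is held open the client holds an oplock and answers
stat() from its cached attributes, so st_nlink of the open file stays stale
until close().

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	struct stat st;
	/* hypothetical paths on a mounted CIFS share */
	int fd = open("/mnt/cifs/orig", O_RDWR | O_CREAT, 0644);
	if (fd < 0)
		return 1;

	/* create a second name for the file on the server */
	if (link("/mnt/cifs/orig", "/mnt/cifs/second") < 0)
		return 1;

	/* while fd is open (oplocked) the cached inode is used;
	   before this fix st_nlink could still read 1 here */
	fstat(fd, &st);
	printf("nlink while open:  %lu\n", (unsigned long)st.st_nlink);

	close(fd);

	/* after close the oplock is broken/released and revalidate
	   goes to the server, so the count is correct (2) */
	stat("/mnt/cifs/orig", &st);
	printf("nlink after close: %lu\n", (unsigned long)st.st_nlink);
	return 0;
}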

Signed-off-by: Steve French <sfrench@us.ibm.com>
Steve French 2006-11-16 20:54:20 +00:00
Parent 237ee312e1
Commit 31ec35d6c8
1 changed file with 23 additions and 10 deletions


@@ -69,17 +69,30 @@ cifs_hardlink(struct dentry *old_file, struct inode *inode,
 			rc = -EOPNOTSUPP;
 	}
 
-/* if (!rc)     */
-	{
-	/*	renew_parental_timestamps(old_file);
-		inode->i_nlink++;
-		mark_inode_dirty(inode);
-		d_instantiate(direntry, inode); */
-	/* BB add call to either mark inode dirty or refresh its data and timestamp to current time */
+	d_drop(direntry);	/* force new lookup from server of target */
+
+	/* if source file is cached (oplocked) revalidate will not go to server
+	   until the file is closed or oplock broken so update nlinks locally */
+	if(old_file->d_inode) {
+		cifsInode = CIFS_I(old_file->d_inode);
+		if(rc == 0) {
+			old_file->d_inode->i_nlink++;
+			old_file->d_inode->i_ctime = CURRENT_TIME;
+			/* parent dir timestamps will update from srv
+			within a second, would it really be worth it
+			to set the parent dir cifs inode time to zero
+			to force revalidate (faster) for it too? */
+		}
+		/* if not oplocked will force revalidate to get info
+		   on source file from srv */
+		cifsInode->time = 0;
+
+		/* Will update parent dir timestamps from srv within a second.
+		   Would it really be worth it to set the parent dir (cifs
+		   inode) time field to zero to force revalidate on parent
+		   directory faster ie
+			CIFS_I(inode)->time = 0; */
 	}
-	d_drop(direntry); /* force new lookup from server */
 
-	cifsInode = CIFS_I(old_file->d_inode);
-	cifsInode->time = 0; /* will force revalidate to go get info when needed */
 cifs_hl_exit:
 	kfree(fromName);