londons_explore 5 days ago

> ...but it's unique while the file exists, right?

I don't think all filesystems guarantee this. Especially network filesystems.

amiga386 5 days ago

That's a problem for programs that do recursive fs descent (e.g. find, tar), because they rely on st_dev and st_ino alone to remember which directories they've already visited. They can't just use the absolute path, because symbolic links allow for loops.
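Concretely, the pattern is roughly this. A minimal sketch in C, not find's or tar's actual code; the fixed-size table and following symlinks via stat() are my simplifications:

    /* Remember every directory by its (st_dev, st_ino) pair so a
       symlink loop doesn't recurse forever. Error handling elided. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>
    #include <sys/stat.h>

    struct id { dev_t dev; ino_t ino; };
    static struct id visited[4096];
    static size_t nvisited;

    /* Return 1 if this (dev, ino) pair was seen before; record it otherwise. */
    static int seen_before(dev_t dev, ino_t ino)
    {
        for (size_t i = 0; i < nvisited; i++)
            if (visited[i].dev == dev && visited[i].ino == ino)
                return 1;
        if (nvisited < sizeof visited / sizeof visited[0])
            visited[nvisited++] = (struct id){ dev, ino };
        return 0;
    }

    static void walk(const char *path)
    {
        struct stat st;
        if (stat(path, &st) != 0 || !S_ISDIR(st.st_mode))
            return;
        /* If (st_dev, st_ino) is not unique, this check misfires both
           ways: it can skip a directory it has never visited, or loop
           forever on one it wrongly believes is new. */
        if (seen_before(st.st_dev, st.st_ino))
            return;
        DIR *d = opendir(path);
        if (!d)
            return;
        struct dirent *e;
        while ((e = readdir(d)) != NULL) {
            if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                continue;
            char child[4096];
            snprintf(child, sizeof child, "%s/%s", path, e->d_name);
            puts(child);
            walk(child);
        }
        closedir(d);
    }

    int main(int argc, char **argv)
    {
        walk(argc > 1 ? argv[1] : ".");
        return 0;
    }

You can see the real (and more careful) versions of this in the sources below.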

find:

* https://cgit.git.savannah.gnu.org/cgit/findutils.git/tree/fi...

* https://cgit.git.savannah.gnu.org/cgit/findutils.git/tree/fi...

tar:

* https://cgit.git.savannah.gnu.org/cgit/tar.git/tree/src/crea...

* https://cgit.git.savannah.gnu.org/cgit/tar.git/tree/src/name...

* https://cgit.git.savannah.gnu.org/cgit/tar.git/tree/src/incr...

In particular, I'm intrigued by the comment in the last link:

      /* With NFS, the same file can have two different devices
         if an NFS directory is mounted in multiple locations,
         which is relatively common when automounting.
         To avoid spurious incremental redumping of
         directories, consider all NFS devices as equal,
         relying on the i-node to establish differences.  */
So GNU tar expects an inode to be unique across _all_ NFS mounts...

the_mitsuhiko 5 days ago

You are not wrong, but the issues with tar are well known. Linus himself had this to say [1]:

> Well, the fact that it hits snapshots, shows that the real problem is just "tar does stupid things that it shouldn't do".

> Yes, inode numbers used to be special, and there's history behind it. But we should basically try very hard to walk away from that broken history.

> An inode number just isn't a unique descriptor any more. We're not living in the 1970s, and filesystems have changed.

You might still get away with it most of the time today, but it's causing more and more issues.

[1]: https://lkml.iu.edu/hypermail/linux/kernel/2401.3/04127.html

amiga386 5 days ago

That sounds like blaming userspace.

If it's not the 1970s anymore, then update the POSIX standard with a solution that works for all OSes (including the BSDs) and can be relied upon. Definitely don't suggest a Linux-only solution for a Linux-only problem.

the_mitsuhiko 4 days ago

Again, you are not wrong. This is all clearly not intended. However, it has become a challenge to map things like Btrfs subvolumes (when seen from a Btrfs mount) onto POSIX semantics [1].

You are absolutely right that ideally there would be an update to the POSIX standard. But things like this take time, and it's not yet clear what the right path forward is. You can consider a lot of what is currently taking place as an experiment to push the envelope.

As for whether this is a Linux-specific problem, I'm not sure. I'm not sufficiently familiar with the situation on other operating systems to know what conversations are taking place there.

[1]: https://lwn.net/Articles/866582/

db48x 4 days ago

ZFS has the same problem, for the same reasons. But it also has additional reasons. The simplest of them is that inode numbers are 64-bit integers, but ZFS filesystems can have up to 2¹²⁸ files.

jcranmer 4 days ago

There is no solution, much less one that is portable across different Unixen.

The core problem is that, because of the ability of filesystems to effectively contain other filesystems within them, the number of bits to uniquely identify a file within a filesystem is not a constant number across different filesystem types. It's a harder problem on Linux because Linux is also full of filesystems that aren't really filesystems, where trying to come up with a persistent, unique identifier for people to use is a lot more bother than it's really worth.

account42 4 days ago

> is a lot more bother than it's really worth

According to who? Clearly there are user space utilities that need a (somewhat) persistent identifier to work correctly.

dwattttt 4 days ago

Have you checked what POSIX has to say about inode numbers? It may say less than you think.

amiga386 4 days ago

https://pubs.opengroup.org/onlinepubs/009696799/basedefs/sys...

> The st_ino and st_dev fields taken together uniquely identify the file within the system.

It says exactly what it ought to say.

dwattttt 4 days ago

Issue 8 (2024, https://pubs.opengroup.org/onlinepubs/9799919799/basedefs/sy...) relaxed the language there. st_ino and st_dev still uniquely identify a file, but it now notes that this identity need not hold indefinitely.

As an example, it notes that the identity of a deleted file can be reused.
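A minimal sketch of that reuse in C (hypothetical filenames; whether the inode number actually comes back is filesystem-dependent, so the final check may or may not fire):

    /* Create a file, record its (st_dev, st_ino), unlink it, create a
       different file, and compare the identities. */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat a, b;

        int fd = open("first", O_CREAT | O_WRONLY, 0644);
        if (fd < 0 || fstat(fd, &a) != 0) return 1;
        close(fd);
        unlink("first");   /* last link gone, space freed */

        fd = open("second", O_CREAT | O_WRONLY, 0644);
        if (fd < 0 || fstat(fd, &b) != 0) return 1;
        close(fd);
        unlink("second");

        printf("first:  dev=%ju ino=%ju\n", (uintmax_t)a.st_dev, (uintmax_t)a.st_ino);
        printf("second: dev=%ju ino=%ju\n", (uintmax_t)b.st_dev, (uintmax_t)b.st_ino);
        if (a.st_dev == b.st_dev && a.st_ino == b.st_ino)
            puts("two different files, same (st_dev, st_ino)");
        return 0;
    }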

amiga386 4 days ago

It says pretty much what I said at the start of the thread (https://news.ycombinator.com/item?id=44157026), and yet this is what Linux is having problems complying with:

> A file identity is uniquely determined by the combination of st_dev and st_ino. At any given time in a system, distinct files shall have distinct file identities; hard links to the same file shall have the same file identity. Over time, these file identities can be reused for different files. For example, the st_ino value can be reused after the last link to a file is unlinked and the space occupied by the file has been freed, and the st_dev value associated with a file system can be reused if that file system is detached ("unmounted") and another is attached ("mounted").

I still think POSIX says exactly what it needs to say, and Linux ought to either comply with it, or lead the standardisation process on what should be done instead.

Don't say "tar is old". Tar's problems with Linux are the same problems that find, zip, rsync, cp and all other fs walking programs have. If memorising st_dev and st_ino is no good, tell us what cross-platform approach should be taken instead.

Brian_K_White 4 days ago

This. You can't break a fundamental assumption without providing its replacement, and call anyone else stupid.

"A centimeter is no longer based on anything and has an unpredictable length. Rulers always did stupid things relying on that assumption."

the_mitsuhiko 4 days ago

> You can't break a fundamental assumption without providing it's replacement, and call anyone else stupid.

Sure, within the bounds of what's documented you are right. However, tar is going beyond what either the standard or Linux guarantees, so a lot of bets are off.

The guarantee that tar wants is not given by any FS that recycles inodes, and, most importantly, tar already completely disregards file-system locality when network drives are involved.

The actual issue here is that both tar and Linux are in a tough situation because a) the POSIX spec is problematic and b) no alternative API exists today. Something has to give.

account42 4 days ago

Sounds like Linus is advocating for ... breaking userspace. We are truly living in the end times.

the_mitsuhiko 5 days ago

It's effectively impossible to guarantee this when you have a file system that unifies and re-exports others. Network file systems are the obvious example, but overlayfs is in a similar position.

Even if inode-based checks still work today, they will eventually run into issues a few years down the line.

account42 4 days ago

Then unifying file systems is not something that POSIX supports, and a POSIX system shouldn't do it unless it can somehow map inodes onto POSIX semantics. E.g. for a network mount spanning multiple remote filesystems, you could also have multiple st_dev values locally.