You are given a large POSIX-like filesystem containing millions of files across nested directories. Design an algorithm that finds groups of duplicate files without reading file contents; you may use only metadata such as file size, timestamps, permissions, and inode information. Describe how you would traverse the directory tree, avoid infinite loops caused by symbolic links, account for hard links, and handle permission errors. Explain your data structures (e.g., a map from file size to lists of paths), the memory and I/O considerations for very large datasets, and how you would stream or partition the work. Define what 'duplicate' means under the no-content-read constraint, discuss the resulting false positives and their impact, and provide time and space complexity analyses.
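
One possible shape of an answer, as a minimal sketch: it assumes the 'duplicate' key is the tuple (size, mtime, permission bits), walks the tree without following symlinks, collapses hard links by (st_dev, st_ino), and logs permission errors instead of aborting. The function name candidate_duplicates and the choice of key are illustrative, not prescribed by the question.

    import os
    import stat
    from collections import defaultdict

    def candidate_duplicates(root):
        """Group files by metadata only, using (size, mtime, mode) as the key.

        Hard-linked copies (same st_dev, st_ino) are counted once, and symlinks
        are never followed, so directory cycles cannot occur.
        """
        groups = defaultdict(list)      # metadata key -> list of paths
        seen_inodes = set()             # (st_dev, st_ino) already recorded

        # onerror logs unreadable directories instead of aborting the walk;
        # followlinks=False (the default) prevents symlink-induced cycles.
        def on_error(err):
            print(f"skipping {err.filename}: {err.strerror}")

        for dirpath, _dirnames, filenames in os.walk(root, onerror=on_error):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)   # lstat: do not dereference symlinks
                except OSError as err:
                    print(f"skipping {path}: {err.strerror}")
                    continue
                if not stat.S_ISREG(st.st_mode):
                    continue              # skip symlinks, FIFOs, devices, sockets
                inode = (st.st_dev, st.st_ino)
                if inode in seen_inodes:
                    continue              # hard link to a file already seen
                seen_inodes.add(inode)
                key = (st.st_size, int(st.st_mtime), st.st_mode & 0o777)
                groups[key].append(path)

        # only keys shared by two or more distinct inodes are duplicate candidates
        return {k: v for k, v in groups.items() if len(v) > 1}

    if __name__ == "__main__":
        for key, paths in candidate_duplicates("/tmp").items():
            print(key, paths)

Because contents are never read, every group is only a candidate set: files that happen to share size, mtime, and mode are reported together even if their bytes differ, which is the false-positive risk the question asks you to discuss. The in-memory dictionary is the simplest choice; for very large trees the same keying can be partitioned (e.g., by size bucket) and streamed to external storage instead.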