dedicated server error "EXT4-fs warning index full, reach max htree level :2"
Hello, I bought this server from OneProvider:
CPU: Intel Xeon D-1531 - 2.2 GHz - 6 core(s)
RAM: 32GB - DDR4
Hard Drive(s): 2x 256GB (SSD SATA)
Bandwidth: Unmetered @ 1Gbps
I installed Ubuntu 20 and aaPanel (OpenLiteSpeed, MariaDB, Redis). Everything was fine for a few days, but the server suddenly stopped writing to disk today. The only clue I found is in kern.log:
Sep 11 00:05:57 sd-124336 kernel: [1039381.938016] EXT4-fs warning (device md1p1): ext4_dx_add_entry:2355: Directory (ino: 12607489) index full, reach max htree level :2
Sep 11 00:05:57 sd-124336 kernel: [1039381.938018] EXT4-fs warning (device md1p1): ext4_dx_add_entry:2359: Large directory feature is not enabled on this filesystem
Sep 11 00:05:57 sd-124336 kernel: [1039381.938038] EXT4-fs warning (device md1p1): ext4_dx_add_entry:2355: Directory (ino: 12607489) index full, reach max htree level :2
Sep 11 00:05:57 sd-124336 kernel: [1039381.938041] EXT4-fs warning (device md1p1): ext4_dx_add_entry:2359: Large directory feature is not enabled on this filesystem
Sep 11 00:05:59 sd-124336 kernel: [1039383.840100] EXT4-fs warning (device md1p1): ext4_dx_add_entry:2355: Directory (ino: 12607489) index full, reach max htree level :2
Sep 11 00:05:59 sd-124336 kernel: [1039383.840103] EXT4-fs warning (device md1p1): ext4_dx_add_entry:2359: Large directory feature is not enabled on this filesystem
Sep 11 00:06:05 sd-124336 kernel: [1039390.258572] EXT4-fs warning: 6 callbacks suppressed
Sep 11 00:06:05 sd-124336 kernel: [1039390.258575] EXT4-fs warning (device md1p1): ext4_dx_add_entry:2355: Directory (ino: 12607489) index full, reach max htree level :2
I already searched on Google but I still have no clue what this error means. I have a Hetzner server running CentOS with the same software stack and it has had no issues...
Comments
Check inode usage with: df -i
What blocksize was this setup with? (blockdev --getbsz /dev/md1p1)
This error means you're hitting the max h-tree size, and don't have large_dir enabled. You can use
tune2fs -O large_dir /dev/whatever
to enable large_dir, but it should be very rarely required.

This caught my eye: "Directory (ino: 12607489)". Inodes start from 1, so this means that over 12 million files or directories have been created (they might not all still exist). Is some process creating a huge number of subdirectories or files within one directory? That's the only way I can think of that'd cause the h-tree to reach its limits. Try listing all files (e.g.
sudo find /
) and see if there's a very large number of files/directories in the same directory.

4096
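To narrow down which directory is the culprit, something like this sketch can rank directories by their number of direct entries (the function name is made up; run it against / as root to see everything):

```shell
# Rank directories under a given root by number of direct entries, largest
# first. A directory with millions of entries is the likely h-tree suspect.
top_dirs() {
  find "$1" -xdev -type d 2>/dev/null | while IFS= read -r d; do
    n=$(find "$d" -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    printf '%s\t%s\n' "$n" "$d"
  done | sort -rn | head
}
```

Usage: top_dirs / (it stays on one filesystem thanks to -xdev, so it won't wander into /proc or other mounts).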
I dunno, the error messages are pretty clearly asking for large_dir support to be enabled. I wouldn't have thought Google would let you down on that.
On a side note, has anyone heard about inodes before getting fucked in some way because of some inodes bug?
I'll forever have an inodes hate-on after an inodes bug in Debian 6 or 7 prevented properly backing it up or being able to upgrade to next Debian.
Hostgator used to have this backup feature but only for like 20k inodes and was pretty useless for a decent website with a store. "So I paid for annual backup I can't use?". "Yep!"
Yeah, I still have PTSD from having to deal with this more than once, from more than one user:
Wordpress plugin that resizes every uploaded image to every possible size supported by the plugin, user uploads lifetime worth of photos. Actually had to talk to one of those guys on the phone, and phones are only for texting.
Hmm, that sounds like an attack vector. "Ah ha! I inodes the shit out of them!"
Reinstall with XFS and forget about fiddling with inodes.
XFS? Can it even work on modern computers, like BTRFS can?
/not at all trying to ignite a filesystem war
Yes, undergrad operating systems classes teach the concept of an inode and how to see them with the
df -i
command.

I use XFS and BTRFS on some VPSes just because the cool kids are using them. I don't actually know the difference.
XFS works fine on modern computers / OSes, but I'm not sure how often it's used. I think ReiserFS is also still sometimes recommended for systems with a very large number of small files (like 30 million+ files less than ~10KB). There's also JFS but I don't know anything about it.
BTRFS supports compression, which is useful if you have a lot of text files. We use it at work both in dev and in prod. It makes a difference on development systems with lots of source code: the compression ratio is something like 40%, with minimal CPU impact.
It also has many of the usual features of a modern filesystem, like deduplication of data (so if you have multiple copies of the same file, they're only stored once). These use copy-on-write, so if one of the files is modified, it'll make a copy of it (unlike hardlinks, where editing one also edits the other).
ZFS is fairly similar, but has been around for longer, and has a larger RAM requirement for some features. IIRC ZFS does deduplication "online" / as files are written, whereas Btrfs does it as a background job.
Btrfs RAID isn't really reliable, so don't use that. Btrfs on mdadm works fine though, as does Btrfs on a single disk.
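For reference, the Btrfs compression mentioned above is just a mount option; a sketch, where the device name and mountpoint are placeholders and a zstd-capable kernel is assumed:

```shell
# Mount with transparent zstd compression (level 3 is a common choice);
# /dev/sdb1 and /mnt/data are placeholder names.
mount -o compress=zstd:3 /dev/sdb1 /mnt/data

# Or persistently, via an /etc/fstab entry:
# /dev/sdb1  /mnt/data  btrfs  compress=zstd:3  0  2

# Files written before the option was set are only compressed on rewrite;
# defragment can recompress them in place:
btrfs filesystem defragment -r -czstd /mnt/data
```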
I found the problem: a lot of PHP session files were being spawned in the /tmp folder when I set "php_value session.xxxxxxxxx" in .htaccess, due to an OpenLiteSpeed bug (?), as this doesn't happen in the Enterprise version.
Thanks for the help.
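In case anyone else hits this, a quick sketch to check for and age out stray session files (the sess_ prefix is PHP's default; the /tmp path and 24-hour cutoff are assumptions for this setup):

```shell
# Count PHP session files sitting directly in a directory
# (PHP names them with a "sess_" prefix by default).
count_sessions() {
  find "$1" -maxdepth 1 -type f -name 'sess_*' | wc -l
}

# Delete sessions untouched for 24 hours -- roughly what PHP's own garbage
# collection should be doing (uncomment to actually run it):
# find /tmp -maxdepth 1 -type f -name 'sess_*' -mmin +1440 -delete
```

Usage: count_sessions /tmp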
And it will get worse with WP at least (WordPress 6.1 creates an additional WebP copy of every image by default).
@donko
Also try checking:
df -h
mount
XFS is kept up to date and optimized all the time; its developers are "on top of their game", so to speak, and it seems like some of the best filesystem people in the world are working on it. There is no thought anywhere of deprecating or removing it (unlike some others, see below). Performance-wise it is usually as fast as or faster than Ext4. The only shortcoming for me is that XFS cannot be shrunk if you need to resize its partition down, which is possible with Ext4 and Btrfs.
Speaking of Btrfs, it is much more featureful than both Ext4 and XFS (compression, snapshots, RAID), but expect it to also be much slower than both of them. A comparison of all three:
https://www.phoronix.com/review/linux-415-fs/2
ReiserFS 3 is considered legacy and will be removed from the kernel:
https://www.phoronix.com/news/ReiserFS-Deprecate-Remove-2025
ReiserFS 4 has only one person working on it, and while he is also very good at it, he does not seem to plan to push it for adoption into the mainline kernel (you need to find or compile a kernel with it yourself!), and without that its long-term future is also doubtful.
As for JFS, it can be said to be very proven and reliable, but kind of "forgotten"; it does not seem to be actively worked on. And it has the same issue as XFS: it cannot be shrunk.
XFS is the default filesystem for RHEL.
That's what I routinely use under Debian, and speed (on top of spinning rust, that is) has never been an issue so far. Surprisingly, my limited real-life tests showed that out-of-the-box JFS (supposedly simpler, easier on the CPU, etc.) performed worse for my usage.
OTOH, my very limited experience with XFS under Debian (8 or 9?) was a bit rough; IIRC the snapshots are not completely "self-contained": it keeps a registry somewhere on the host system. That was enough to make me go back to BTRFS and its many features.
There are no snapshots in XFS. Maybe you used snapshots in LVM?
XFS only recently gained CoW-based reflinking, which can be used for some limited snapshotting (non-atomic when it comes to a directory tree). That is, you can do
cp --reflink srcfile dstfile
and have that complete very fast with little to no actual writes or disk space consumed; a real copy of the file's blocks is created only when parts of srcfile or dstfile are modified.

No, not with LVM. IIRC it involved some xfs_dump and xfs_restore, maybe leveraged through a script.
Edit: googled it, it's xfsdump & co. Anyway, tricky stuff which I quickly forgot about in favour of the featureful btrfs. (Oh, and I use nilfs2 too.)
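A minimal demo of the reflink copy-on-write behaviour described above; --reflink=auto is used so the copy falls back to a plain one on filesystems without reflink support (e.g. ext4), and either way the two files stay independent:

```shell
cd "$(mktemp -d)"                   # scratch directory
printf 'hello\n' > srcfile
cp --reflink=auto srcfile dstfile   # instant; shares blocks where supported
printf 'changed\n' > srcfile        # rewriting srcfile triggers the copy...
cat dstfile                         # ...so dstfile still reads "hello"
```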