How-To

File changed as we read it GlusterFS issue


Recently I had trouble running backups of my GitLab instance onto GlusterFS-mounted volumes. The tar cron job always exited with the error message "file changed as we read it". So I wondered what was going on and found a three-year-old bug report which describes the problem. The GlusterFS developers added a workaround for this behavior, which I will describe here.

First of all, the problem is as follows: tar records the metadata (stat information) of every file it is going to back up, and after archiving the file it verifies that metadata again and fails if it has changed. Because GlusterFS by design operates on multiple bricks, a request can land on a different brick than before, where the atime/mtime may differ, and that is what produces this behavior.
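For illustration, this is roughly how the failure shows up when tar runs against a Gluster mount (the paths below are placeholders, not taken from my actual setup):

tar -czf /backups/gitlab-backup.tar.gz /netshare/gitlab-data
tar: /netshare/gitlab-data/repositories/example.git/config: file changed as we read it

tar then exits with a non-zero status, which is what makes the backup cron job fail.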

Prerequisites:

  • Any text editor (vi, joe, nano) of your choice

If you are using the native GlusterFS FUSE client, the only way to prevent this is to enable the following volume option:

gluster volume set Put-your-netshare-here cluster.consistent-metadata on
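To verify that the option is active, you can check the volume info afterwards (replace the volume name with your own):

gluster volume info Put-your-netshare-here

The option should then show up under "Options Reconfigured" as cluster.consistent-metadata: on.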

In fact, the workaround they implemented is not the best solution, since it decreases the performance of the FUSE mount even further.
A better approach would be a FUSE mount option to set noatime and nodiratime, which isn't supported yet.

If you are using the gNFS implementation of GlusterFS, you should not run into this problem, but noatime and nodiratime can't hurt:

Put the following in your /etc/fstab and change the mount point and server IP so that they fit your setup.

10.0.0.1:/netshare/www /netshare/www nfs noauto,rw,acl,async,vers=3,noatime,nodiratime,rsize=1048576,wsize=1048576,hard,intr,retrans=2,tcp,nolock,actimeo=1,local_lock=none 0 0
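After adding the entry, a quick way to test it (assuming the mount point already exists) is to mount it manually and read the active options back:

mount /netshare/www
mount | grep /netshare/www

The second command shows the mount options actually in effect, so you can confirm that noatime, nodiratime and the NFS tuning options were applied.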

I also added some extra tuning options which I usually use on all my production setups to get the highest performance and throughput out of my NFS GlusterFS mounts.

If you need any assistance, feel free to drop me a line!

Jules

Jules is the owner and author of ISPIRE.ME. He's a Linux System Engineer, Tech fanatic and an Open Source fan.

Comments

  • Thanks for the post, Jules.

    I have the same problem with GitLab backed by GFS 3.8.x, and my volume is Distributed-Replicated. After applying the steps above, I am still seeing the same error message 'file changed as we read it' during the backup.

    So do we have to reboot the clients (in this case, the GitLab server instances that mount the GFS volume), or will just applying this setting on the GFS servers suffice?

    Thanks in advance!
    Rene

    • You have to restart the gluster daemon (or reboot the GlusterFS servers) for the setting to take effect.
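      On systemd-based distributions that typically means something like the following (the exact service name depends on your distro, e.g. glusterd on RHEL/CentOS or glusterfs-server on Debian/Ubuntu, so treat it as an assumption about your setup):

      systemctl restart glusterd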
