45 Drives Knowledge Base
KB450306 - Resolving Clock Skew Issues on PetaSAN
https://knowledgebase.45drives.com/kb/kb450306-resolving-clock-skew-issues-on-petasan/

Posted on April 26, 2021 by Archie Blanchard


Resolving Clock Skew Issues on PetaSAN

Scope/Description:

This article covers resolving clock skew issues on PetaSAN. These issues are typically reported on the dashboard and recorded in the ceph.log file.

Prerequisites:

  • PetaSAN clustered solution
  • SSH access
  • PuTTY or another SSH client
  • Cluster reporting “Slow/Blocked Ops” or “Clock Skew”

Steps:

First, verify that the Slow Ops warnings are actually related to clock skew. SSH into a cluster node as root and search the cluster log:
grep -i clock /var/log/ceph/ceph.log

The output lists every entry in that log containing the word 'clock'; in this case, those are the clock skew warnings.

If there are entries mentioning clock skew, continue with the rest of this article.
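As an optional convenience, the log check above can be wrapped in a small helper that prints how many skew entries the log contains. This is a sketch; `count_skew_entries` is a hypothetical name, not a PetaSAN command, and the default path is the standard Ceph cluster log location:

```shell
#!/bin/sh
# Hypothetical helper (not part of PetaSAN): count clock-skew entries
# in a Ceph cluster log. Defaults to the standard log path.
count_skew_entries() {
    log=${1:-/var/log/ceph/ceph.log}
    # grep -c prints the match count; exit status 1 on zero matches is expected
    grep -ci "clock skew" "$log" || true
}

# Usage on a PetaSAN node:
#   count_skew_entries          # checks /var/log/ceph/ceph.log
```

A non-zero count means the skew warnings are present and the rest of this article applies.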

Repeat the following process on each node in the cluster. Start by stopping the NTP service:
service ntp stop

Then run a one-shot sync against your time server:
ntpdate [IP of NTP Server]

Now, run ntpd -gq. This command may take some time to resolve and can produce a long output; allow it to finish.

From here, restart the ntp service, then restart the monitor service.

service ntp start

systemctl restart ceph-mon@[Monitor host name]
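
Taken together, the per-node sequence above can be sketched as one script. `resync_node` is a hypothetical helper, not a PetaSAN command; the NTP server IP is passed as an argument, and the script assumes the monitor is named after the short hostname:

```shell
#!/bin/sh
# Sketch of the per-node resync sequence; run as root on each node.
resync_node() {
    ntp_server=$1                 # IP of your NTP server (argument, not hard-coded)
    mon_name=$(hostname -s)       # assumes the monitor is named after the host

    service ntp stop                          # free UDP port 123 for ntpdate
    ntpdate "$ntp_server"                     # one-shot step to the server's time
    ntpd -gq                                  # -g allows a large first correction, -q exits once set
    service ntp start                         # resume the NTP daemon
    systemctl restart "ceph-mon@$mon_name"    # monitor re-evaluates skew on restart
}

# Example invocation (placeholder IP): resync_node 10.0.0.1
```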

Verification:

Run date on each server; the times should now match. Keep an eye on the cluster to see whether clock skew recurs, using the ceph.log file to assist with monitoring.
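If you want more than an eyeball comparison of date output, the drift between two nodes can be computed directly. This is a sketch with a hypothetical helper; note that Ceph's monitors warn once skew exceeds 0.05 seconds by default (the mon_clock_drift_allowed setting), so even one second of drift matters:

```shell
#!/bin/sh
# Hypothetical helper: absolute difference between two epoch timestamps, in seconds.
skew_seconds() {
    d=$(( $1 - $2 ))
    [ "$d" -lt 0 ] && d=$(( 0 - d ))
    echo "$d"
}

# In practice, gather one timestamp locally and one over SSH, e.g.:
#   skew_seconds "$(date +%s)" "$(ssh root@<other node> date +%s)"
```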

Last modified: April 26, 2021
