Update HowTo on using the pool #218
base: main
@@ -1,8 +1,8 @@
<pre class="code">
driftfile /var/lib/ntp/ntp.drift

server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org
</pre>
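For comparison, the `pool`-directive variant that the review thread below converges on; a minimal sketch assuming ntpd 4.2.8 or later, not the PR's literal replacement text:

<pre class="code">
driftfile /var/lib/ntp/ntp.drift

# one pool directive; ntpd keeps spinning up associations
# until tos maxclock is reached
pool pool.ntp.org iburst
</pre>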
@@ -0,0 +1,5 @@
<pre class="code">
pool 2.pool.ntp.org iburst

tos maxclock 5
</pre>
The problem with your current setup is that it excludes any servers in the 0, 1, and 3 DNS records. In underserved zones this won't be a problem, but in zones with sufficient capacity, the results returned in 0, 1, and 3 should be a discrete set of servers from 2, so using 2.pool.ntp.org alone leaves those servers unused.

Suggested change:
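The suggestion block itself did not survive extraction. Given the comment, it presumably kept all four numbered zones; a plausible reconstruction (an assumption, not the reviewer's actual text):

<pre class="code">
pool 0.pool.ntp.org iburst
pool 1.pool.ntp.org iburst
pool 2.pool.ntp.org iburst
pool 3.pool.ntp.org iburst
</pre>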
As I understand it, the split of servers between 0, 1, 2, and 3 changes randomly over time. Even if 2 has fewer than four servers, the client should get four different addresses after a few minutes. Of course, it would be better to use the non-numbered zone if it included both IPv4 and IPv6 servers.
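One way to observe this rotation (standard `dig` usage, shown as an aside):

<pre class="code">
# compare answers a few minutes apart; the returned
# addresses rotate as the pool DNS re-shuffles servers
dig +short 2.pool.ntp.org A
dig +short 2.pool.ntp.org AAAA
</pre>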
This is effectively almost identical to …

I agree with @mlichvar; using … I'm not familiar with the pool distribution logic, but seeing that major distributions ship NTP configurations using only 2.vendor.pool.ntp.org, I don't see any immediate issue with following suit. Besides, these changes somewhat mask the underlying issue documented in #176, which still needs to be solved.

How is using …

The server will be replaced even if specified as …
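For reference, the distribution defaults alluded to here look roughly like this (Fedora's shipped configuration is one example; the exact vendor zone varies per distribution):

<pre class="code">
pool 2.fedora.pool.ntp.org iburst
</pre>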
Well, once that happens, it would be easy to adjust the documentation again. But until it does, we should stick with …
That may be true for chrony, but it is not the case for the ntpd reference implementation. ntpd will never discard a configured "server 0.pool.ntp.org" unless it includes the "preempt" option on the server directive. Every association spun up by the pool directive has "preempt" automatically added. So with ntpd, pool-spun associations are dropped if they do not contribute to the time solution for about 10 polls, while server associations are only eligible to be dropped when "preempt" is specified. Pool associations are replaced automatically, while "server ... preempt" associations are not replaced after they're dropped, making that combination undesirable in practice. I suspect NTPsec hasn't changed this behavior since it forked from version 4.2.8 of the reference implementation.

I'd love to get clarification on how chrony behaves with "server", "server ... preempt" (if it has a preempt option), and "pool".

Thanks,
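A sketch of the three ntpd variants being contrasted (directive syntax per the ntpd documentation; the hostnames are just the zones already under discussion):

<pre class="code">
# never dropped, never replaced
server 0.pool.ntp.org iburst

# dropped if not contributing, but never replaced
server 1.pool.ntp.org iburst preempt

# dropped AND automatically replaced ("preempt" is implicit)
pool pool.ntp.org iburst
</pre>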
I respectfully disagree. IMHO, configurations suggested by the pool project are likely to live on long after the pool project changes those suggestions. With that in mind, I think the pool project's suggestion should be a single pool pool.ntp.org iburst, which allows the pool project to decide when to start serving IPv6 addresses more widely, rather than suggesting people rely on 2.pool.ntp.org's current IPv6 peculiarity. There could be a note that IPv6-only systems should use 2.pool.ntp.org for now, but my preference would be to see the primary suggestion remain the unnumbered pool.ntp.org / vendor.pool.ntp.org.

It's also important to note that with the reference implementation (and possibly NTPsec), a restrict ... nopeer restriction can prevent pool associations from working. If restrict default ... nopeer is used, it's critical to also have a similar restrict source directive that does not include nopeer. A restrict source directive causes every association that is spun up (including from server and pool) to get a host-specific automatic restriction which overrides any other restrictions that would otherwise apply to the association's remote address. This was put in place to allow users of the pool directive to target restrictions at pool server IPs which are not known at configuration time. If this is confusing, just fire away and I'll be happy to explain further.

I strongly feel the suggested configuration should include only one pool directive. As noted previously, each additional pool directive requires a corresponding increase in tos maxclock for ntpd, complicating the suggested configuration. It also triggers additional, relatively useless DNS queries, at least for ntpd users. ntpd with a single pool pool.ntp.org iburst synchronizes as fast as the currently suggested four server N.pool.ntp.org iburst directives, because the implementation immediately does a DNS lookup of the pool hostname, holds on to all IPs returned, and spins up an association with the first IP address. As soon as that server's response is received, another pool association is spun up one second later with the next IP address, as long as tos maxclock hasn't been exceeded. If the list of IP addresses from the DNS query is exhausted, another DNS query is triggered immediately, and when that response comes in, another pool association is spun up a second later. The net result is that up to the lesser of maxclock - 1 and the number of usable pool IPs found for pool.ntp.org (currently 4 for IPv4) are spun up within seconds. With pool.ntp.org using a 150-second DNS TTL, more servers will be spun up within four default 64 s pool prototype association cycles.

I have milder feelings about suggesting a higher tos minclock configuration for ntpd users, but I think it should be considered. Currently, as the docs say, "for legacy purposes" the default is tos minsane 1, but it should really be a larger number that is less than tos minclock (which defaults to 3).

Putting it all together, this is my take on a suggested ntp.conf for ntpd pool use:
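The actual configuration block was lost in extraction; the sketch below is assembled only from the directives the comment names (single pool directive, a restrict source without nopeer, and a raised tos minsane), so treat the exact restriction flags as assumptions:

<pre class="code">
driftfile /var/lib/ntp/ntp.drift

pool pool.ntp.org iburst

# "nopeer" in the default restriction would block pool
# associations; "restrict source" (no nopeer) overrides it
# for each server ntpd actually associates with
restrict default kod limited nomodify nopeer noquery
restrict source  kod limited nomodify noquery

# require more than one survivor before believing the time
tos minsane 2
</pre>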
This is also a reasonable configuration for a non-refclock pool server, perhaps with slightly different tos knob-twisting:
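Likewise reconstructed; the tos values here are illustrative guesses at the "knob-twisting" being described, not the author's numbers:

<pre class="code">
driftfile /var/lib/ntp/ntp.drift

pool pool.ntp.org iburst

restrict default kod limited nomodify nopeer noquery
restrict source  kod limited nomodify noquery

# stricter consensus before serving time to pool clients
tos minsane 3
tos minclock 4
tos maxclock 8
</pre>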
That makes a good pool server which will naturally gravitate toward higher-quality sources and is a bit more paranoid about getting consensus between sources before considering itself synced, and therefore before serving time to clients and adjusting the local clock. You might want to throw in a hardwired server directive if you like a particular stratum-1 server that you can reach with low jitter.

Cheers,
Chrony's server directive has no preempt option; pool and server behave the same. Chrony expects pools to resolve to multiple addresses, and you can specify how many addresses chrony will use from any given pool.
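In chrony.conf terms (maxsources is chrony's documented per-pool limit, defaulting to 4):

<pre class="code">
# chrony resolves the pool name to multiple addresses and
# uses up to maxsources of them
pool pool.ntp.org iburst maxsources 4
</pre>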
This PR is about changing the configuration examples on the website. That can easily be adjusted if the pool layout changes and all pools start announcing IPv6 servers. Most distributions do the right thing and ship their own vendor configuration, which, most notably, quite often contains … So, the issue with the IPv4 limitation of 3/4 of the pool is widely known and worked around. It's time to fix it and enable IPv6 on all pools. Until then, the documentation should be updated.
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. There needs to be only one pool directive. With For There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I agree. And There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. You are right. Unless Setting I haven't used I'm fine with changing the configuration snippet for
I left out the … For …
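To make the accounting concrete: ntpd's tos maxclock defaults to 10, and a pool directive mobilizes up to maxclock - 1 associations, so (as a sketch) the snippet's tos maxclock 5 caps the pool at four servers:

<pre class="code">
pool 2.pool.ntp.org iburst
tos maxclock 5    # at most 5 - 1 = 4 pool associations
</pre>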
@mlichvar, is the number 4 determined by chrony, or is it based on the number of addresses returned from a single DNS query of pool.ntp.org? If it's chrony, is that hardcoded or a configurable default value?

It's configurable using …
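The truncated reply presumably refers to chrony's maxsources pool option; either way, the resulting source count can be checked with standard chronyc commands:

<pre class="code">
chronyc sources -v
</pre>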
@@ -0,0 +1,4 @@
<pre class="code">
timedatectl set-timezone "Europe/Kiev"
timedatectl set-time "2012-10-30 18:17:16"
</pre>
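Note that timedatectl refuses set-time while NTP synchronization is enabled; as a usage aside, the toggle and a status check look like this:

<pre class="code">
timedatectl set-ntp false   # allow manual set-time
timedatectl set-time "2012-10-30 18:17:16"
timedatectl set-ntp true    # hand the clock back to NTP
timedatectl status
</pre>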