Bug 1114388 - YaST gets an error while updating time with an NTP server like pool.ntp.org
Status: RESOLVED DUPLICATE of bug 1087048
Classification: openSUSE
Product: openSUSE Tumbleweed
Component: YaST2
Version: Current
Hardware: Other OS: Other
Priority: P2 - High Severity: Normal
Target Milestone: ---
Assigned To: YaST Team
Jiri Srain
https://trello.com/c/aoN5AoKI/2704-tw
Depends on:
Blocks:
 
Reported: 2018-11-02 01:40 UTC by Igor Kuznetsov
Modified: 2019-02-15 09:00 UTC (History)
4 users

See Also:
Found By: ---
Services Priority:
Business Priority:
Blocker: ---
Marketing QA Status: ---
IT Deployment: ---


Attachments
yast log (163.84 KB, text/plain)
2018-11-06 12:01 UTC, Igor Kuznetsov

Description Igor Kuznetsov 2018-11-02 01:40:37 UTC
YaST gets the error "Cannot connect to server" while updating the time from NTP (pool.ntp.org),

but `ntpdate pool.ntp.org` works fine.
Comment 2 Igor Kuznetsov 2018-11-06 12:01:47 UTC
Created attachment 788611 [details]
yast log

But I can't see any errors.
Comment 3 Igor Kuznetsov 2018-11-06 12:06:48 UTC
Only entries like:

2018-11-06 15:56:18 <1> lnxvrx53(27015) [Ruby] clients/ntp-client_proposal.rb:158 synchronize_time false

2018-11-06 15:56:26 <3> lnxvrx53(27015) [Ruby] clients/ntp-client_proposal.rb:408 Could not connect to the selected NTP server.
Comment 4 David Diaz 2019-01-03 16:12:01 UTC
Hi Igor.

Sadly, I could not reproduce the described issue. However, I am not sure I am following the right steps. Could you please tell us the exact steps you are taking?

Also, could you attach the file generated by the `save_y2logs` command? It bundles other files that give us more detailed information, such as package versions.

With the attached log file, what I see suspicious is:

> 2018-11-06 15:56:25 <1> lnxvrx53(27015) [Ruby] modules/NtpClient.rb:157 Running ont time sync with pool.ntp.org
> 2018-11-06 15:56:26 <3> lnxvrx53(27015) [bash] ShellCommand.cc(shellcommand):78 2018-11-06T11:56:26Z chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP -SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 -DEBUG)
> 2018-11-06 15:56:26 <3> lnxvrx53(27015) [bash] ShellCommand.cc(shellcommand):78 2018-11-06T11:56:26Z Fatal error : Another chronyd may already be running (pid=27214), check /var/run/chrony/chronyd.pid
> 2018-11-06 15:56:26 <1> lnxvrx53(27015) [Ruby] modules/NtpClient.rb:168 'one-time chrony for pool.ntp.org' returned {"exit"=>1, "stderr"=>"2018-11-06T11:56:26Z chronyd version 3.4 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP -SCFILTER +SIGND +ASYNCDNS +SECHASH +IPV6 -DEBUG)\n2018-11-06T11:56:26Z Fatal error : Another chronyd may already be running (pid=27214), check /var/run/chrony/chronyd.pid\n", "stdout"=>""}

Thank you so much for your report and collaboration.
Comment 5 Josef Reidinger 2019-01-03 16:49:05 UTC
(In reply to David Diaz from comment #4)
> > 2018-11-06 15:56:26 <3> lnxvrx53(27015) [bash] ShellCommand.cc(shellcommand):78 2018-11-06T11:56:26Z Fatal error : Another chronyd may already be running (pid=27214), check /var/run/chrony/chronyd.pid

Well, that log line basically says that chronyd is already running, so we cannot do a one-time sync: the daemon is already running and syncing the time. This differs from the ntp daemon, which allows such big steps in time. So basically we should not do a one-time sync if chronyd is running. The only option then is to call chronyc with the burst parameter instead, which sends multiple requests to adapt the time more quickly.
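
The logic Josef describes could be sketched as below. This is an illustrative outline, not the actual YaST code; the method name and the exact chrony arguments are assumptions, though `chronyc burst 4/4` and a one-shot `chronyd -q` run are both real chrony invocations.

```ruby
# Sketch: choose how to synchronize the clock once against an NTP
# server, depending on whether chronyd is already running.
# (Hypothetical helper, not the actual yast-ntp-client implementation.)
def one_time_sync_command(server, chronyd_running)
  if chronyd_running
    # A running chronyd owns the clock. Starting a second instance
    # fails with "Another chronyd may already be running", so instead
    # ask the daemon for a burst of 4 measurements to speed up sync.
    ["chronyc", "burst", "4/4"]
  else
    # No daemon running: a one-shot chronyd can step the clock itself.
    # "-q" sets the clock once and exits.
    ["chronyd", "-q", "server #{server} iburst"]
  end
end
```

With this split, the "Fatal error : Another chronyd may already be running" path from the attached log would be avoided entirely.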
Comment 6 Igor Kuznetsov 2019-01-04 03:40:07 UTC
I checked the problem. It no longer persists. (I made an update several days ago.)
Comment 7 David Diaz 2019-01-04 08:59:20 UTC
Thanks for the update, Igor.

Nice to read that your problem does not persist :)

However, I am going to reopen the bug because, looking at the information available in the attached log, Josef noticed something that we should improve. In fact, he added quite valuable information about it.

Thank you Josef!
Comment 8 Stefan Schubert 2019-02-15 08:23:15 UTC
Igor, did you make a new installation, or are you on a running system?
Comment 9 Stefan Schubert 2019-02-15 09:00:44 UTC
(In reply to Igor Kuznetsov from comment #6)
> I checked the problem. It no longer persists. (I made an update several days
> ago.)

Yes, the log file shows that you were using quite an old package (see the typo):

2018-11-06 15:56:25 <1> lnxvrx53(27015) [Ruby] modules/NtpClient.rb:157 Running ont time sync with pool.ntp.org

https://github.com/yast/yast-ntp-client/commit/26e3ef78c2149239fa53e27ee991ffb766cc2218#diff-f31befd8011851c773f783bb9afb356fR157

Besides that, we have already fixed it with:
https://github.com/yast/yast-ntp-client/commit/72e16cfc43b1fa0fd2cb45652141e6ef02012d4b

So that's why it works after you made an update :-)

*** This bug has been marked as a duplicate of bug 1087048 ***