Service is in unknown state #71528
It does appear related to Fedora 32 as the target, though my experience doesn't preclude a controller-side issue as well. I'm using macOS 10.15.6 with Ansible 2.9.10 as the controller and am only seeing the issue with Fedora 32; Fedora 30 and 31 are fine.
I'm pretty sure it's a bug/feature related to Fedora 32, although I'm unsure how to fix it or even what is happening exactly. To reproduce, create a Fedora 32 Ansible controller, update it and run Ansible or Molecule. Thanks for following up @jshimkus-rh!
I created a box for use with Vagrant based on Fedora 32 this past Monday. Deploying that encounters this issue when attempting to do a systemd restart of chronyd. I have a different Fedora 32 VM that I set up a couple of weeks ago (August 13). Provisioning that using the same playbook, etc. succeeds. Today, I took a snapshot of it and started doing piecemeal 'dnf update' to try and isolate whether a particular package leads to the issue. I eventually ended up with everything updated, and provisioning of that system still succeeded. Ain't it always the way? As the chronyd restart is unconditional, it's not as if one machine did it and the other skipped it. Not a great amount of information, but perhaps it bounds the timeframe of whatever change led to the issue.
Hello. Debian 10 here. I don't know Fedora 32, but it seems they have the 5.8 kernel too: https://repos.fedorapeople.org/repos/thl/kernel-vanilla-mainline/fedora-32/x86_64/ Probably an effect of the latest 5.8 kernel?
Having the same issue with this configuration in an Ubuntu 18.04 LXD container on Arch Linux:
System:
Workaround: manually ensure the service is running and enabled, and remove that step from the Ansible configuration.
This may be related to systemd bug systemd/systemd#16424.
What's the output of
This seems to be an issue with
Yep. The cap_bpf flag trips up that version of systemd... Workarounds:
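To illustrate the failure mode described above: per systemd/systemd#16424, when systemd on the target cannot handle the kernel's new capability bits, `systemctl show` can emit an error line instead of the usual `Key=Value` pairs, so a strict parser finds no `ActiveState` and Ansible can only report the state as unknown. A minimal simulation (the sample output below is illustrative, not captured from a real affected host):

```shell
#!/bin/sh
# Simulated `systemctl show chronyd` output from an affected host, where
# systemd rejects the kernel's new capability bits (illustrative sample).
sample_output() {
    cat <<'EOF'
Failed to parse bus message: Invalid argument
EOF
}

# Strict Key=Value parsing, roughly what a naive consumer of the output does:
state=$(sample_output | awk -F= '$1 == "ActiveState" { print $2 }')

# No ActiveState line means the service state is effectively unknown.
echo "ActiveState: ${state:-unknown}"
```

On a healthy host the same parse would yield `ActiveState: active` (or another real unit state) instead of falling through to the `unknown` default.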
The reason I'm not able to reproduce this is because I'm running Docker on macOS, which uses a different kernel. So I would say the reasons outlined above are the issue.
In response to @samdoran, here is the output:
Interesting. Is that command run inside the container on a Fedora 32 host? I would expect it to fail if the problem is related to the bug I linked to. I will have to test on a Fedora 32 VM to see if I can duplicate this.
@robertdebock log starts with Also, I can run
I can reproduce this when connecting Ansible to a RHEL8 host with the elrepo.org kernel-ml package (a third party repo package for el8, currently on 5.8.8). That makes sense since the fix hasn't been backported to the RHEL8 systemd package. |
I'm facing this exact same issue with Molecule and Ansible 2.10.
It seems that this is an issue with systemd, not Ansible. Updating systemd to a version with the bugfix is the recommended solution.
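If you want to check whether a host's systemd is old enough to be affected, a rough version gate can help. Note the cutoff used here (246) is an assumption for illustration only: distros backport patches without bumping the version string, so the package changelog is the authoritative source. The version line is a hard-coded sample standing in for real `systemctl --version` output:

```shell
#!/bin/sh
# Heuristic version gate. The threshold 246 is an assumption: check your
# distro's systemd changelog for the actual backport status of the fix
# for systemd/systemd#16424. Sample string mimics `systemctl --version`.
version_line="systemd 245 (245.4-4ubuntu3)"

major=$(echo "$version_line" | awk '{ print $2 }')

if [ "$major" -ge 246 ]; then
    echo "systemd $major: likely includes the upstream fix"
else
    echo "systemd $major: may need an upgrade or a backported patch"
fi
```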
That's understandable, but more than a bit annoying for anyone running anything but a cutting-edge release. Good luck to all of you running any sort of LTS/stable release. Looks like I'm pinning the kernel version for now. Anyone else have other workarounds?
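Pinning the kernel works because the trigger is on the kernel side: CAP_BPF and CAP_PERFMON were introduced in Linux 5.8. A small sketch for spotting hosts in the affected range (the version string is a hard-coded sample; in practice you would feed in `uname -r`):

```shell
#!/bin/sh
# Heuristic: kernels >= 5.8 expose CAP_BPF/CAP_PERFMON, which older
# systemd releases fail to parse. Sample stands in for `uname -r`.
kernel="5.8.3-x86_64"

kmajor=${kernel%%.*}          # "5"
rest=${kernel#*.}             # "8.3-x86_64"
kminor=${rest%%.*}            # "8"

if [ "$kmajor" -gt 5 ] || { [ "$kmajor" -eq 5 ] && [ "$kminor" -ge 8 ]; }; then
    echo "kernel $kernel: CAP_BPF present, check your systemd version"
else
    echo "kernel $kernel: predates CAP_BPF, not affected by this bug"
fi
```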
With the current state, Ansible is not safely usable for Ubuntu 20.04 targets. |
I hit the same issue on a cluster of
As a workaround, I have done the following:
apt remove -y linux-dtb-current-rockchip64 linux-focal-root-current-rock64 linux-image-current-rockchip64 linux-u-boot-rock64-current
apt install -y linux-dtb-current-rockchip64=20.08 linux-focal-root-current-rock64=20.08 linux-image-current-rockchip64=20.08 linux-u-boot-rock64-current=20.08
It should work without downgrading the
apt-mark hold linux-dtb-current-rockchip64 linux-focal-root-current-rock64 linux-image-current-rockchip64 linux-u-boot-rock64-current
Same here on my Orange Pis running Armbian.
Since so many folks are still having issues with this and aren't able to easily upgrade, I'm going to reopen this. I will open a PR with a workaround shortly. |
FYI: Fedora 33 works! But, I think it's smart that you (@samdoran) have created a fix anyway. |
Same issue on CentOS 7 and CentOS 8 using
It was upgraded to kernel 5.8 and Ansible fails: ansible/ansible#71528
Still persists on |
Hi guys. Just tested the fix in
Controller node:
The task with
Just to confirm the service is in place:
For comparison, with the Ansible version without the fix, exactly the same containers are failing with
Just updated to Ansible 2.10.3 and still seeing this issue on a remote host with Ubuntu 18.04 and 5.8.3-x86_64 kernel. |
@jurgenhaas This will be in the next set of Ansible releases: 2.10.4 and 2.9.16. |
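The Ansible-side fix referenced above works around the broken `systemctl show` output rather than requiring a newer systemd on the target. The module's actual code is not reproduced here; as a rough sketch of the general idea, a tolerant parser skips lines that are not `Key=Value` pairs instead of letting a stray warning break state detection (the sample output is illustrative):

```shell
#!/bin/sh
# Rough sketch (not the actual Ansible module code) of tolerant parsing:
# only lines shaped like Key=Value are considered, so a stray systemd
# warning no longer prevents finding ActiveState. Illustrative sample.
parse_state() {
    awk -F= '/^[A-Za-z][A-Za-z0-9]*=/ && $1 == "ActiveState" { print $2; exit }' <<'EOF'
Failed to parse bus message: Invalid argument
Id=chronyd.service
ActiveState=active
SubState=running
EOF
}

echo "ActiveState: $(parse_state)"
```

With the warning line ignored, the unit's real state (`active`) is recovered instead of being reported as unknown.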
Related to issue ansible#71528 and PR ansible#72337 Co-authored-by: Martin Polden <mpolden@mpolden.no>
SUMMARY
Services in containers on Fedora 32 report "Service is in unknown state" when trying to set state: started.
ISSUE TYPE
COMPONENT NAME
systemd
ANSIBLE VERSION
CONFIGURATION
OS / ENVIRONMENT
Controller node: Fedora 32, just updated.
Managed node: any, tried Fedora 32 and CentOS 8.
STEPS TO REPRODUCE
EXPECTED RESULTS
These roles work in CI, but locally they started to fail after an update in Fedora 32.
ACTUAL RESULTS
Manually checking and starting works:
I have a feeling this is related to some Fedora update, but I can't put my finger on the exact issue. The role works in CI.