
Service is in unknown state #71528

Closed
robertdebock opened this issue Aug 31, 2020 · 32 comments · Fixed by #72337
Labels
  • affects_2.9: This issue/PR affects Ansible v2.9
  • bug: This issue/PR relates to a bug.
  • has_pr: This issue has an associated PR.
  • module: This issue/PR relates to a module.
  • P3: Priority 3 - Approved, No Time Limitation
  • python3
  • support:core: This issue/PR relates to code supported by the Ansible Engineering Team.
  • system: System category

Comments

@robertdebock
Contributor

SUMMARY

Services in containers on Fedora 32 report Service is in unknown state when trying to set state: started.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

systemd

ANSIBLE VERSION
ansible 2.9.12
CONFIGURATION
# empty
OS / ENVIRONMENT

Controller node: Fedora 32, just updated.
Managed node: any, tried Fedora 32 and CentOS 8.

STEPS TO REPRODUCE
git clone git@github.com:robertdebock/ansible-role-cron.git
cd ansible-role-cron
molecule test
EXPECTED RESULTS

These roles work in CI, but locally they started to fail after an update in Fedora 32.

ACTUAL RESULTS
    TASK [ansible-role-cron : start and enable cron] *******************************
    task path: /home/robertdb/Documents/github.com/robertdebock/ansible-role-cron/tasks/main.yml:11
    <cron-fedora-latest> ESTABLISH DOCKER CONNECTION FOR USER: root
    <cron-fedora-latest> EXEC ['/usr/bin/docker', b'exec', b'-i', 'cron-fedora-latest', '/bin/sh', '-c', "/bin/sh -c 'echo ~ && sleep 0'"]
    <cron-fedora-latest> EXEC ['/usr/bin/docker', b'exec', b'-i', 'cron-fedora-latest', '/bin/sh', '-c', '/bin/sh -c \'( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1598856497.3296921-33825-64280145917305 `" && echo ansible-tmp-1598856497.3296921-33825-64280145917305="` echo /root/.ansible/tmp/ansible-tmp-1598856497.3296921-33825-64280145917305 `" ) && sleep 0\'']
    Using module file /usr/local/lib/python3.8/site-packages/ansible/modules/system/systemd.py
    <cron-fedora-latest> PUT /home/robertdb/.ansible/tmp/ansible-local-33264yxghir5p/tmpkf4ljd28 TO /root/.ansible/tmp/ansible-tmp-1598856497.3296921-33825-64280145917305/AnsiballZ_systemd.py
    <cron-fedora-latest> EXEC ['/usr/bin/docker', b'exec', b'-i', 'cron-fedora-latest', '/bin/sh', '-c', "/bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1598856497.3296921-33825-64280145917305/ /root/.ansible/tmp/ansible-tmp-1598856497.3296921-33825-64280145917305/AnsiballZ_systemd.py && sleep 0'"]
    <cron-fedora-latest> EXEC ['/usr/bin/docker', b'exec', b'-i', 'cron-fedora-latest', '/bin/sh', '-c', '/bin/sh -c \'sudo -H -S -n  -u root /bin/sh -c \'"\'"\'echo BECOME-SUCCESS-gzfmcazwwmgyqzdcbkzycmrtuzycfngw ; /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1598856497.3296921-33825-64280145917305/AnsiballZ_systemd.py\'"\'"\' && sleep 0\'']
    <cron-fedora-latest> EXEC ['/usr/bin/docker', b'exec', b'-i', 'cron-fedora-latest', '/bin/sh', '-c', "/bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1598856497.3296921-33825-64280145917305/ > /dev/null 2>&1 && sleep 0'"]
fatal: [cron-fedora-latest]: FAILED! => changed=false 
  invocation:
    module_args:
      daemon_reexec: false
      daemon_reload: false
      enabled: true
      force: null
      masked: null
      name: crond
      no_block: false
      scope: null
      state: started
      user: null
  msg: Service is in unknown state
  status: {}

Manually checking and starting works:

[root@cron-fedora-latest /]# systemctl status crond
● crond.service - Command Scheduler
     Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
[root@cron-fedora-latest /]# systemctl start crond
[root@cron-fedora-latest /]# systemctl status crond
● crond.service - Command Scheduler
     Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2020-08-31 06:48:54 UTC; 1s ago
   Main PID: 621 (crond)
      Tasks: 1 (limit: 2769)
     Memory: 1.0M
     CGroup: /system.slice/docker-a1708d7b8309b9472a0bb8ef1d389ff1fd4ca36af32f86fbb4da5da5ab788d48.scope/system.slice/crond.service
             └─621 /usr/sbin/crond -n

Aug 31 06:48:54 cron-fedora-latest systemd[1]: Started Command Scheduler.
Aug 31 06:48:54 cron-fedora-latest crond[621]: (CRON) STARTUP (1.5.5)
Aug 31 06:48:54 cron-fedora-latest crond[621]: (CRON) INFO (Syslog will be used instead of sendmail.)
Aug 31 06:48:54 cron-fedora-latest crond[621]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 79% if used.)
Aug 31 06:48:54 cron-fedora-latest crond[621]: (CRON) INFO (running with inotify support)

I have a feeling this is related to some Fedora update, but I can't put my finger on the exact issue. The role works in CI.

@ansibot
Contributor

ansibot commented Aug 31, 2020

Files identified in the description:

If these files are incorrect, please update the component name section of the description or use the !component bot command.

click here for bot help

@ansibot ansibot added affects_2.9 This issue/PR affects Ansible v2.9 bug This issue/PR relates to a bug. module This issue/PR relates to a module. needs_triage Needs a first human triage before being processed. python3 support:core This issue/PR relates to code supported by the Ansible Engineering Team. system System category labels Aug 31, 2020
@jshimkus-rh

It does appear related to Fedora 32 as the target, though my experience doesn't preclude a controller-side issue as well.

I'm using macOS 10.15.6 with ansible 2.9.10 as the controller and am only seeing the issue with Fedora 32; Fedora 30 & 31 are fine.

@robertdebock
Contributor Author

I'm pretty sure it's a bug/feature related to Fedora 32, although I'm unsure how to fix it or even what is happening exactly.

To reproduce, create a Fedora 32 Ansible controller, update it and run ansible or molecule.
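A minimal playbook that reproduces the failure on an affected host could look like the sketch below (my own reduction of the failing role task, not something shipped with the role):

- hosts: all
  become: true
  tasks:
    - name: start and enable cron  # fails with "Service is in unknown state" on affected hosts
      systemd:
        name: crond
        state: started
        enabled: true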

Thanks for following up @jshimkus-rh!

@jshimkus-rh

I created a box for use with Vagrant based on Fedora 32 this past Monday. Deploying that encounters this issue when attempting to do a systemd restart of chronyd.

I have a different Fedora 32 VM that I set up a couple of weeks ago (August 13). Provisioning that using the same playbook, etc. succeeds. Today, I took a snapshot of it and started doing piecemeal 'dnf update' runs to try to isolate whether a particular package leads to the issue. I eventually ended up with everything updated, and provisioning of that system still succeeded. Ain't it always the way?

As the chronyd restart is unconditional, it's not as if one machine did it and the other skipped it.

Not a great amount of information, but perhaps it bounds the timeframe of whatever change led to the issue.

@pierrehenrymuller

Hello,
I have had the same issue with Ansible 2.7 and 2.9 on Debian 10 targets for the past two weeks. On the up-to-date Debian targets, the only change I have made is the kernel, which has moved to the 5.8 branch.
I have tested another 5.8 kernel packaged by Xanmod, with the same effect: https://xanmod.org/

Debian 10
systemd 241
kernel 5.8.5 xanmod or 5.8.5 local compiled
ansible 2.7.14 and 2.9.13

I don't know Fedora 32, but it seems it has a 5.8 kernel too: https://repos.fedorapeople.org/repos/thl/kernel-vanilla-mainline/fedora-32/x86_64/

Probably an effect of the new 5.8 kernel?

@Shrews Shrews added needs_verified This issue needs to be verified/reproduced by maintainer P3 Priority 3 - Approved, No Time Limitation and removed needs_triage Needs a first human triage before being processed. labels Sep 3, 2020
@l0b0

l0b0 commented Sep 3, 2020

Having the same issue with this configuration in an Ubuntu 18.04 LXD container on Arch Linux:

- name: Ensure Docker is started and enabled at boot.
  when: daemon|default(true)
  service:
    name: docker
    state: started
    enabled: yes

System:

$ uname --kernel-name --kernel-release --kernel-version --machine --processor --hardware-platform --operating-system
Linux 5.8.5-arch1-1 #1 SMP PREEMPT Thu, 27 Aug 2020 18:53:02 +0000 x86_64 unknown unknown GNU/Linux

Workaround: Manually ensure the service is running and enabled, and remove that step from the Ansible configuration.
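If removing the step entirely is not practical, a rougher interim alternative (my own sketch, not part of the role above) is to bypass the service/systemd module and call systemctl directly, since running systemctl by hand still works; the trade-off is losing accurate changed/ok reporting:

- name: Ensure Docker is started and enabled at boot (interim workaround)
  when: daemon | default(true)
  command: systemctl enable --now docker  # avoids the systemd module's D-Bus status parsing
  register: docker_enable
  changed_when: false
  failed_when: docker_enable.rc != 0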

@samdoran
Contributor

samdoran commented Sep 5, 2020

This may be related to a bug in systemd that was fixed in 245.7:

systemd/systemd#16424
https://bugzilla.redhat.com/show_bug.cgi?id=1853736

@samdoran
Contributor

samdoran commented Sep 5, 2020

What's the output of systemctl show cron in the container? Using the container from https://github.com/robertdebock/ansible-role-cron.git, I am unable to duplicate this.

> git clone git@github.com:robertdebock/ansible-role-cron.git
Cloning into 'ansible-role-cron'...
remote: Enumerating objects: 56, done.
remote: Counting objects: 100% (56/56), done.
remote: Compressing objects: 100% (40/40), done.
remote: Total 896 (delta 19), reused 35 (delta 11), pack-reused 840
Receiving objects: 100% (896/896), 108.48 KiB | 4.52 MiB/s, done.
Resolving deltas: 100% (468/468), done.

> cd ansible-role-cron/

> molecule test
--> Test matrix

└── default
    ├── dependency
    ├── lint
    ├── cleanup
    ├── destroy
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    ├── cleanup
    └── destroy

...

--> Scenario: 'default'
--> Action: 'converge'

    PLAY [converge] ****************************************************************

    TASK [Gathering Facts] *********************************************************
    ok: [cron-fedora-latest]

    TASK [ansible-role-cron : include assert.yml] **********************************
    included: /Users/sdoran/Source/ansible-role-cron/tasks/assert.yml for cron-fedora-latest

    TASK [ansible-role-cron : test if cron_jobs is set correctly] ******************
    skipping: [cron-fedora-latest]

    TASK [ansible-role-cron : test if item in cron_jobs is set correctly] **********
    skipping: [cron-fedora-latest]

    TASK [ansible-role-cron : test if item.minute is set correctly] ****************
    skipping: [cron-fedora-latest]

    TASK [ansible-role-cron : test if item.hour is set correctly] ******************
    skipping: [cron-fedora-latest]

    TASK [ansible-role-cron : test if item.weekday is set correctly] ***************
    skipping: [cron-fedora-latest]

    TASK [ansible-role-cron : test if item.user is set correctly] ******************
    skipping: [cron-fedora-latest]

    TASK [ansible-role-cron : install cron] ****************************************
    changed: [cron-fedora-latest]

    TASK [ansible-role-cron : start and enable cron] *******************************
    changed: [cron-fedora-latest]

    TASK [ansible-role-cron : schedule requested cron jobs] ************************
    skipping: [cron-fedora-latest]

    PLAY RECAP *********************************************************************
    cron-fedora-latest         : ok=4    changed=2    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0
...
> molecule login

[root@cron-fedora-latest /]# cat /etc/os-release
NAME=Fedora
VERSION="32 (Container Image)"
ID=fedora
VERSION_ID=32
VERSION_CODENAME=""
PLATFORM_ID="platform:f32"
PRETTY_NAME="Fedora 32 (Container Image)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:32"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f32/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=32
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=32
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Container Image"
VARIANT_ID=container

[root@cron-fedora-latest /]# systemctl --version
systemd 245 (v245.6-2.fc32)
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=unified

[root@cron-fedora-latest /]# systemctl show crond | grep ActiveState
ActiveState=active

@samdoran samdoran removed the needs_verified This issue needs to be verified/reproduced by maintainer label Sep 5, 2020
@samdoran
Contributor

samdoran commented Sep 5, 2020

This seems to be an issue with systemd and CAP_BPF in recent kernels. Updating systemd or downgrading the kernel seem to be ways to fix this. I'm not sure this is something we should account for in Ansible since the bug lies in systemd and has been addressed there.
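For playbooks that have to keep running against a mixed fleet, a pre-flight check can at least fail fast with a clear message instead of the opaque error. This is only a sketch of mine, not anything in Ansible itself; treating "major version >= 246" as "contains the fix" is a deliberately coarse stand-in, because the 245.7 point release is not visible in the major version number:

- name: Read the systemd version on the target
  command: systemctl --version
  register: systemd_version
  changed_when: false

- name: Fail early on the known bad systemd/kernel combination
  assert:
    that:
      - >-
        (systemd_version.stdout_lines[0].split()[1] | int) >= 246
        or ansible_kernel.split('-')[0] is version('5.8', '<')
    fail_msg: >-
      systemd {{ systemd_version.stdout_lines[0].split()[1] }} on kernel
      {{ ansible_kernel }} is likely affected by the CAP_BPF bus message bug
      (systemd/systemd#16424); update systemd or boot an older kernel.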

@lanefu

lanefu commented Sep 5, 2020

Probably an effect of the new 5.8 kernel?

Yep. The CAP_BPF capability trips up that version of systemd.

Workarounds:

  • downgrade to a kernel prior to 5.8
  • upgrade systemd: apt -t buster-backports upgrade systemd. Note: resolvconf isn't compatible and will have to be removed first (see the sketch after this list)
  • disable the CAP_BPF capability (I'm not sure exactly how to do that; the docs were vague)
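A minimal Ansible sketch of the second option, assuming buster-backports is already configured as an apt source on the Debian 10 target:

- name: Remove resolvconf, which is not compatible with the backported systemd
  apt:
    name: resolvconf
    state: absent

- name: Upgrade systemd from buster-backports
  apt:
    name: systemd
    state: latest
    default_release: buster-backports
    update_cache: yes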

@samdoran
Contributor

samdoran commented Sep 5, 2020

The reason I'm not able to reproduce this is that I'm running Docker on macOS, which uses a different kernel. So I would say the reasons outlined above are the cause.

# uname -r
4.19.76-linuxkit

@robertdebock
Contributor Author

In response to @samdoran, here is the output:

systemctl show cron

[root@cron-fedora-latest /]# systemctl show cron
Failed to parse bus message: Invalid argument
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
TimeoutAbortUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ReloadResult=success
CleanResult=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestampMonotonic=0
ExecMainExitTimestampMonotonic=0
ExecMainPID=0
ExecMainCode=0
ExecMainStatus=0
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
EffectiveCPUs=
EffectiveMemoryNodes=
TasksCurrent=[not set]
IPIngressBytes=[no data]
IPIngressPackets=[no data]
IPEgressBytes=[no data]
IPEgressPackets=[no data]
IOReadBytes=18446744073709551615
IOReadOperations=18446744073709551615
IOWriteBytes=18446744073709551615
IOWriteOperations=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
CPUQuotaPeriodUSec=infinity
AllowedCPUs=
AllowedMemoryNodes=
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=yes
DefaultMemoryLow=0
DefaultMemoryMin=0
MemoryMin=0
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=2769
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1073741816
LimitNOFILESoft=1073741816
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=2021895680
LimitMEMLOCKSoft=2021895680
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=61540
LimitSIGPENDINGSoft=61540
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
CPUAffinity=
CPUAffinityFromNUMA=no
NUMAPolicy=n/a
NUMAMask=
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=inherit
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
LogRateLimitIntervalUSec=0
LogRateLimitBurst=0
SecureBits=0

@samdoran
Contributor

Interesting. Is that command run inside the container on a Fedora 32 host? I would expect it to fail if the problem is related to the bug I linked to.

I will have to test on a Fedora 32 VM to see if I can duplicate this.

@goneri
Contributor

goneri commented Sep 14, 2020

@robertdebock's log starts with Failed to parse bus message: Invalid argument, which makes me think this is our problem: https://bugzilla.redhat.com/show_bug.cgi?id=1853736

Also, I can run ansible -b -m service -a 'name=crond enabled=true state=started' localhost just fine on my up-to-date Fedora 32 (kernel 5.8.4-200).

@abokth

abokth commented Sep 16, 2020

I can reproduce this when connecting Ansible to a RHEL 8 host with the elrepo.org kernel-ml package (a third-party repo package for el8, currently on 5.8.8). That makes sense, since the fix hasn't been backported to the RHEL 8 systemd package.

@ansibot ansibot added the has_pr This issue has an associated PR. label Sep 16, 2020
@cheloim

cheloim commented Sep 24, 2020

I'm facing this exact same issue with Molecule and Ansible 2.10.

@samdoran
Contributor

samdoran commented Oct 1, 2020

It seems that this is an issue with systemd, not Ansible. Updating systemd to a version with the bug fix is the recommended solution.
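On targets whose distribution already ships a fixed build, that can be as simple as the sketch below (generic package module; a reboot or systemctl daemon-reexec may still be needed for the running PID 1 to pick up the fix):

- name: Update systemd to a build that contains the CAP_BPF fix
  package:
    name: systemd
    state: latest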

@samdoran samdoran closed this as completed Oct 1, 2020
@ianthetechie

That's understandable, but more than a bit annoying for anyone running anything but a cutting-edge release. Good luck to all of you running any sort of LTS/stable release. Looks like I'm pinning the kernel version for now. Does anyone else have other workarounds?

@HontoNoRoger

As things stand, Ansible is not safely usable for Ubuntu 20.04 targets.
The error still persists there with systemd 245 (245.4-4ubuntu3.2).

@spiarh

spiarh commented Oct 11, 2020

I hit the same issue on a cluster of Rock64 boards running Armbian with kernel 5.8.13.

As a workaround, I have done the following:

  1. downgraded to 5.7.15
apt remove -y linux-dtb-current-rockchip64 linux-focal-root-current-rock64 linux-image-current-rockchip64 linux-u-boot-rock64-current
apt install -y linux-dtb-current-rockchip64=20.08 linux-focal-root-current-rock64=20.08 linux-image-current-rockchip64=20.08 linux-u-boot-rock64-current=20.08

It should work without downgrading the *rockchip64* packages, but I prefer to keep them at the same version.

  2. marked the packages as held to avoid any upgrade to 5.8 (an Ansible equivalent is sketched after this list)
apt-mark hold linux-dtb-current-rockchip64 linux-focal-root-current-rock64 linux-image-current-rockchip64 linux-u-boot-rock64-current
  3. rebooted
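The hold in step 2 can also be expressed in Ansible with the dpkg_selections module, for example (a sketch assuming the same Armbian package names as above):

- name: Hold the downgraded kernel packages so they are not pulled back up to 5.8
  dpkg_selections:
    name: "{{ item }}"
    selection: hold
  loop:
    - linux-dtb-current-rockchip64
    - linux-focal-root-current-rock64
    - linux-image-current-rockchip64
    - linux-u-boot-rock64-current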

@RogierSchuring

Same here on my Orange Pis running Armbian.
I ran armbian-config to downgrade the kernel (System -> Other -> select the 5.7.15 kernel).

@samdoran
Contributor

Since so many folks are still having issues with this and aren't able to easily upgrade, I'm going to reopen this. I will open a PR with a workaround shortly.

@robertdebock
Contributor Author

FYI: Fedora 33 works!

But I think it's smart that you (@samdoran) have created a fix anyway.

@libert-xyz

Same issue on CentOS 7 and CentOS 8 using Ansible 2.10.2.

pando85 added a commit to pando85/kubernetes-deployer that referenced this issue Oct 31, 2020
It was upgraded to kernel 5.8 and ansible fails:
ansible/ansible#71528
@hluaces

hluaces commented Nov 2, 2020

Still persists in debian:10 Docker containers with systemd 241-7~deb10u4.

@MonolithProjects

Hi all. I just tested the fix in ansible:devel, and it seems the issue is still there (with a slightly different error but the same result).

Controller node:

  • Fedora 33
  • Linux kernel 5.8.16-300.fc33.x86_64
  • molecule 3.2.0a0
  • ansible:2.11.0.dev0
  • systemd 246 (v246.6-3.fc33)
  • Docker version 19.03.13, build 4484c46

The task using the systemd module is failing in the following containers:

TASK [ansible-github_actions_runner : Start and enable Github Actions Runner service] ***
changed: [CentOS7]
fatal: [CentOS8]: FAILED! => {"changed": false, "msg": "Could not find the requested service actions.runner.monolithprojects-ansible-github_actions_runne.CentOS8.service: host"}
fatal: [Fedora31]: FAILED! => {"changed": false, "msg": "Could not find the requested service actions.runner.monolithprojects-ansible-github_actions_runne.Fedora31.service: host"}
changed: [Fedora32]
changed: [Fedora33]
changed: [Debian9]
fatal: [Debian10]: FAILED! => {"changed": false, "msg": "Could not find the requested service actions.runner.monolithprojects-ansible-github_actions_runne.Debian10.service: host"}
changed: [Ubuntu16]
fatal: [Ubuntu18]: FAILED! => {"changed": false, "msg": "Could not find the requested service actions.runner.monolithprojects-ansible-github_actions_runne.Ubuntu18.service: host"}
changed: [Ubuntu20]

Just to confirm the service is in place:

root@Ubuntu18:/# systemctl status actions.runner.monolithprojects-ansible-github_actions_runne.Ubuntu18.service
● actions.runner.monolithprojects-ansible-github_actions_runne.Ubuntu18.service - GitHub Actions Runner (monolithprojects-ansible-github_actions_runner-testrepo.Ubuntu18)
   Loaded: loaded (/etc/systemd/system/actions.runner.monolithprojects-ansible-github_actions_runne.Ubuntu18.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
root@Ubuntu18:/# journalctl -u actions.runner.monolithprojects-ansible-github_actions_runne.Ubuntu18.service
-- Logs begin at Thu 2020-11-05 22:37:12 UTC, end at Thu 2020-11-05 22:40:34 UTC. --
-- No entries --

For comparison, here is Ansible without the fix. Exactly the same containers are failing with Ansible 2.10.3, but with the error @robertdebock mentioned above:

TASK [ansible-github_actions_runner : Start and enable Github Actions Runner service] ***
changed: [CentOS7]
fatal: [CentOS8]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}
fatal: [Fedora31]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}
changed: [Fedora32]
changed: [Fedora33]
changed: [Debian9]
fatal: [Debian10]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}
changed: [Ubuntu16]
fatal: [Ubuntu18]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}
fatal: [Ubuntu20]: FAILED! => {"changed": false, "msg": "Service is in unknown state", "status": {}}

@jurgenhaas

Just updated to Ansible 2.10.3 and still seeing this issue on a remote host with Ubuntu 18.04 and 5.8.3-x86_64 kernel.

@samdoran
Contributor

@jurgenhaas This will be in the next set of Ansible releases: 2.10.4 and 2.9.16.

T2L added a commit to lemberg/draft-environment that referenced this issue Nov 20, 2020
samdoran added a commit to samdoran/ansible that referenced this issue Nov 20, 2020
Related to issue ansible#71528 and PR ansible#72337

Co-authored-by: Martin Polden <mpolden@mpolden.no>
@ansible ansible locked and limited conversation to collaborators Nov 23, 2020