Use k8sClient from client.New in controller test #898
Merged
Why are these changes needed?
The test "should be able to update all Pods to Running" is flaky. The reason is that we read the Kubernetes cluster state from the local cache, and the local cache may lag behind the Kubernetes API server.
So the fix is to read the true status of the Pods directly from the API server in the operator test.
The client.New doc says the returned client reads and writes directly against the API server, without object caches.
Kubebuilder's writing-tests doc likewise recommends setting up a separate "live" client for test assertions, because the client obtained from the manager reads from the cache, which can introduce flakiness into tests.
So, we should use a k8sClient created with client.New instead of the one from k8sManager.GetClient.
The current implementation uses the k8sClient returned by k8sManager.GetClient.
Thus, I remove that k8sClient and instead create the k8sClient with client.New.
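The change can be sketched as follows in the test suite setup (a minimal sketch, not the exact diff; names such as `cfg`, `testEnv`, and `k8sManager` follow the standard kubebuilder envtest scaffolding and are assumptions here):

```go
// suite_test.go (sketch): inside BeforeSuite, after starting envtest.
// cfg is the *rest.Config returned by the test environment.
cfg, err := testEnv.Start() // testEnv is an envtest.Environment
Expect(err).NotTo(HaveOccurred())

// Before: reads were served from the manager's cache, which can lag
// behind the API server and make status assertions flaky.
// k8sClient = k8sManager.GetClient()

// After: a "live" client created with client.New reads and writes
// directly against the API server, so assertions in tests observe
// the true Pod status rather than a possibly stale cached copy.
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
Expect(err).NotTo(HaveOccurred())
Expect(k8sClient).NotTo(BeNil())
```

The reconciler inside the manager keeps using the cached client for efficiency; only the test assertions switch to the direct client.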
Related issue number
Closes #894
Checks
On a 2-core CPU, 7 GB RAM VM (to simulate GitHub's standard Linux runner), I ran the test 100 times:
[Bug] "should be able to update all Pods to Running" failed in 34/100 runs before this change.
[Bug] "should be able to update all Pods to Running" never failed after this change.