Add a 'restart --replica' option #4180
base: master
Conversation
```python
)
def instance_replica_restart(
    request: Request,
) -> dict[str, Any]:
```
non-blocking: i wonder if a typeddict or using the generated model would maybe be nicer here?
Interesting, at the very least having a named model for the response would be great... I don't really see us importing the models into api/views/ files or instance/kubernetes.py at all; they're only used on the client side. So I suppose I could do both: a TypedDict on the API side, and reference the generated model on the client side?
yea, i'm not sure if there's a reason for why we haven't tried using the generated named models or if we just never bothered since a lot of these endpoints predated our typing enthusiasm :p
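For illustration, a minimal sketch of what a TypedDict on the API side could look like; the field names here are hypothetical, since the actual response shape isn't shown in this thread:

```python
from typing import TypedDict


class InstanceReplicaRestartResponse(TypedDict):
    # Hypothetical fields for illustration; the real response shape is not
    # shown in this thread.
    service: str
    instance: str
    replica: str
    message: str
```

The view could then annotate its return type as `InstanceReplicaRestartResponse` instead of `dict[str, Any]`, while the client side keeps using the generated model, as discussed above.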
paasta_tools/kubernetes_tools.py
Outdated
```python
delete_options = V1DeleteOptions()
try:
    kube_client.core.delete_namespaced_pod(
        pod_name, namespace, body=delete_options, grace_period_seconds=0
```
do we want to force-delete?
I wondered the same thing, but then I couldn't find any force param in DeleteOptions or delete_namespaced_pod, and came across kubernetes-client/python#508 (comment) ???
So I've just set grace_period_seconds=0 here (which we generally do when people ask us to delete pods as well)... I think eventually people might want this to optionally use an alternate grace period, but for now this seems ok?
sorry, i meant: do we actually want to force here - seems like it'd be safer to default to following the configured terminationGracePeriodSeconds rather than always force-deleting
Ah gotcha, so my thinking on this was that when people reach for a restart of an individual replica, it's probably already in a bad state and they want it forcibly restarted, and whenever we've done this manually for people at their request, it's basically always been forced.
But yes, I can go ahead and create that option and default to the configured value, making force opt-in 👍
ah, true - I guess that if a pod is potentially already causing errors, it's not like we'll make things worse
this is fine with me then!
Nah, you were right: best to be safe by default first, then force, so I've now added the --force option that passes things along. One minor hiccup is that the API was only getting LongRunningInstanceConfigs rather than KubernetesDeploymentConfig, and only the latter has get_termination_grace_period(), so I just added a minor hasattr check to get around it.
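For illustration, a minimal sketch of what that combination (an opt-in `--force` plus the `hasattr` fallback to `get_termination_grace_period()`) could look like; the function name and parameters are assumptions, not the PR's actual code:

```python
from typing import Any, Optional

from kubernetes.client import V1DeleteOptions


def delete_pod_for_restart(
    kube_client: Any,
    pod_name: str,
    namespace: str,
    instance_config: Any,
    force: bool = False,
) -> None:
    # Sketch only: force means "delete immediately"; otherwise fall back to
    # the instance's configured termination grace period when the config type
    # exposes it (only KubernetesDeploymentConfig does, hence the hasattr).
    grace_period_seconds: Optional[int] = 0 if force else None
    if not force and hasattr(instance_config, "get_termination_grace_period"):
        grace_period_seconds = instance_config.get_termination_grace_period()

    kube_client.core.delete_namespaced_pod(
        pod_name,
        namespace,
        body=V1DeleteOptions(),
        grace_period_seconds=grace_period_seconds,
    )
```

Passing `grace_period_seconds=None` leaves the pod's own terminationGracePeriodSeconds in effect, which matches the "safe by default, force opt-in" behaviour agreed above.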
```python
port = pick_random_port("paasta-dev-api")
# Generate api endpoints
api_endpoints = {"api_endpoints": {cluster: f"http://localhost:{port}"}}
# Just create both non-eks and eks endpoints, one has to be right :D
```
++
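For illustration, a hypothetical sketch of generating both endpoint flavors as the comment above describes; the helper name and the `eks-` prefixed cluster name are assumptions, not the PR's actual code:

```python
def generate_dev_api_endpoints(cluster: str, port: int) -> dict:
    # Hypothetical helper: point both the plain cluster name and an assumed
    # "eks-" flavored name at the same local dev API, so whichever one the
    # client resolves still works.
    url = f"http://localhost:{port}"
    return {"api_endpoints": {cluster: url, f"eks-{cluster}": url}}
```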
Co-authored-by: Luis Pérez <luisp@yelp.com>
We sometimes have folks who identify one or a few pods that show higher timings or errors, and they've often wanted to try restarting just that bad pod rather than forcing all pods to be restarted.
This PR adds the option through the PaaSTA API at `/services/{service}/{instance}/replicas/{replica}/restart`, and reuses the PaaSTA API auth added for remote-run. (This means I also moved a helper that had originally only been for remote-run into utils so I could use it here too.)

As this seemed like a pretty straightforward addition on top of the existing `paasta restart` syntax, I just added it there as a `--replica` flag. I opted for `--replica` instead of pod to keep the paasta terminology consistent with `paasta status` output (tho instances/instances still exists, tbf :p).

Running this locally seemed to do the right thing:
The actual implementation has the PaaSTA API send a pod deletion request rather than actually 'restart' the main container, but I think this is what we generally reach for when someone asks us as oncall for a similar thing.
I do think that, more often than not, people fighting 'bad' containers are actually dealing with a bad node rather than a bad replica, but this would at least give service owners a troubleshooting option that doesn't require elevated privileges or restarting all running pods.
For now, this makes a call to our PaaSTA API auth, but `/v1/services` calls are not restricted; I'll attempt to add a rule with a more specific path to restrict this particular call, but this also reuses the existing client-side LDAP check that `paasta restart` already uses.
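For illustration only, a raw HTTP sketch of hitting the new endpoint; the POST method, the `/v1` prefix, and the bearer-token header are assumptions here, since the real client goes through the generated PaaSTA API client and its auth helpers:

```python
# Hypothetical sketch, not the PR's client code.
import requests


def restart_replica_via_api(
    api_endpoint: str, service: str, instance: str, replica: str, token: str
) -> dict:
    # Assumed path shape, based on the endpoint described in this PR.
    url = (
        f"{api_endpoint}/v1/services/{service}/{instance}"
        f"/replicas/{replica}/restart"
    )
    response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
    response.raise_for_status()
    return response.json()
```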