Low-effort, AI-generated PRs are incredibly frustrating for us maintainers to review. We don't want the PR author's time or ours wasted reviewing code that lacks direction and quality. We need to upgrade our contributing.md policy specifically for this AI era.
Here are some possible protocols that could add some amount of friction, which we can define for our project. Please feel free to drop in your own thoughts as well:
- All PRs must be preceded by an open Issue where the proposed change has been discussed and approved by a maintainer. We can close any PR that doesn't follow this and point the author to our contribution policy.
  We could even set up a GHA workflow that checks this (and closes the PR if it doesn't find a linked issue once the diff exceeds some N number of changes), but this would require granting the workflow `pull-requests: write` permission (IDK if that's a good idea?). We can introduce a `#skip-issue` label for exceptions.
  - Exceptions:
    - Maintainers (though their PRs can still be rejected if we don't reach consensus on design)
    - PRs that update dependencies, templates, etc.
    - Trivial PRs (typo fixes, etc.)
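As a starting point for discussion, a workflow like the sketch below could implement that check. Everything here is a placeholder, not a final design: the threshold of 5 changed files stands in for "N", the `skip-issue` label name matches the exception proposed above, and `pull_request_target` is used only because closing/commenting on fork PRs needs write permission (which is exactly the concern raised above):

```yaml
# Hypothetical sketch of the linked-issue check discussed above.
name: require-linked-issue
on:
  pull_request_target:
    types: [opened, edited, labeled, unlabeled]
permissions:
  pull-requests: write   # needed to comment on and close the PR
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            const pr = context.payload.pull_request;
            // Escape hatch: skip the check if the label is present
            if (pr.labels.some(l => l.name === 'skip-issue')) return;
            // Look for "Fixes #123"-style closing keywords in the PR body
            const linked = /\b(close[sd]?|fix(e[sd])?|resolve[sd]?)\s+#\d+/i
              .test(pr.body ?? '');
            // "5" is a placeholder for the N-changes threshold
            if (!linked && pr.changed_files > 5) {
              await github.rest.issues.createComment({
                ...context.repo,
                issue_number: pr.number,
                body: 'Please open an issue for discussion first; see CONTRIBUTING.md.',
              });
              await github.rest.pulls.update({
                ...context.repo,
                pull_number: pr.number,
                state: 'closed',
              });
            }
```

One caveat worth weighing: `pull_request_target` runs with elevated permissions against untrusted PRs, so the script should stay self-contained and never check out or execute PR code.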
- One PR = one specific fix. We already follow this, but it needs to be reflected in contributing.md.
- Create a `.github/PULL_REQUEST_TEMPLATE.md` containing a "How I Tested This" section for newcomers and a friendly reminder to check our contribution policy before submitting.
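As a rough sketch, the template could look something like this. Section names and wording are placeholders to be bikeshedded, not a final proposal:

```markdown
<!-- Please read CONTRIBUTING.md before submitting. -->

## What this PR does

<!-- Link the issue where this change was discussed, e.g. "Fixes #NNN" -->

## How I Tested This

<!-- Commands you ran, platforms you tested on, and the results -->

## AI assistance

<!-- State whether any part of this PR (code or description) was
     generated with an AI tool, and if so, which one. -->
```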
- The PR description must contain a human-written explanation, or the author should explicitly state that they used Copilot or another tool to generate it.
- Take inspiration from the Linux kernel policy: https://docs.kernel.org/process/coding-assistants.html
  > AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin (DCO). The human submitter is responsible for: reviewing all AI-generated code, ensuring compliance with licensing requirements, adding their own Signed-off-by tag to certify the DCO, and taking full responsibility for the contribution.
  and possibly:
  > When AI tools contribute to kernel development, proper attribution helps track the evolving role of AI in the development process. Contributions should include an Assisted-by tag in the following format: `Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]`
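Under that convention, the commit trailers might look like the following. The name, email, agent, model, and tool values are purely illustrative:

```
Signed-off-by: Jane Developer <jane@example.com>
Assisted-by: Claude:claude-sonnet-4 [Claude Code]
```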
These are some initial thoughts. If you all agree, I can proceed with a PR and we can discuss further.
cc @lima-vm/maintainers