AI-generated contribution policy, Reid Kleckner
Over the last year, we've seen a significant increase in large, presumably AI-generated contributions to the project. Reviewing code from new and unknown contributors, which we normally do in an effort to be welcoming and to sustain the project by mentoring newcomers, is straining our maintainers' capacity. The project clearly needs policy and practical guidance on how to handle these contributions.

Our existing policy offers maintainers no concrete help. A draft proposal (PR #154441) would empower us to do more moderation and labeling, but it neither draws red lines around specific kinds of tools nor enables automated enforcement. Some community members feel this doesn't address their practical needs, so we should get together in person and discuss possible solutions.