Helping policymakers weigh the benefits of open source AI
GitHub enables developer collaboration on innovative software projects, and we’re committed to ensuring policymakers understand developer needs when crafting AI regulation.
Policymakers are increasingly focusing on the software components of AI systems, and on how developers make AI model weights available for downstream use. We support AI governance that empowers developers to build more responsibly, securely, and effectively, to accelerate human progress.
GitHub submitted a filing in response to the U.S. NTIA’s request for comment on the potential risks, benefits, and policy implications of widely available model weights, and of open source AI, which makes not only weights but also code and other components available to developers under terms that allow them to inspect, modify, (re)distribute, and use those components for any purpose. Our submission can be found here, but there are a few important ideas we want to highlight.
Open source AI presents clear benefits
It is important to consider the myriad benefits of open source AI. Open source is a public good, designed for all to use: hobbyists, professional developers, companies, governments, and anyone looking to make an impact with code. The broad availability of open source has already generated tremendous value for society, accelerating innovation, competition, and the wide adoption of software and AI across the global economy. Open source AI advances the responsible development of AI systems, the use of AI in research across disciplines, developer education, and government capacity.
Evaluation and regulation should prioritize AI systems, not models
Evaluation and regulation are better focused on the full AI system and the policies governing its use, rather than on subcomponents such as AI models. Policies that restrict models are likely to inhibit beneficial use more than they prevent criminal abuse. They also risk missing the forest for the trees: orchestration and safety software included in AI systems can expand or constrain a model’s capabilities. Current evidence does not support government restrictions on sharing AI models. Instead, irrespective of model type, policymakers should prioritize regulation of high-risk AI systems and prepare plans to address abuse by bad actors. Security through obscurity is not a winning strategy.
The path to societal resilience is not open or closed
Governments have an important role to play in steering the technological frontier and building societal resilience that allows us to seize the benefits enabled by AI while reducing its risks. From accelerating needed AI measurement science and safety research, to supporting public education and protective measures, civic institutions are well-positioned to usher in a new era of AI governed by our values. The open availability, diversity, and diffusion of AI models can support this societal resilience and flourishing. With this in mind, GitHub looks forward to continuing policy collaboration to accelerate human progress.