https://a16z.com/a-roadmap-for-federal-ai-legislation-protect-people-empower-builders-win-the-future/

Debates in Washington often frame AI governance as a series of false choices: they pit innovation against safety, progress against protection, federal leadership against the rights of states. But at a16z, we believe these are not binaries. In order for America to realize the full promise of artificial intelligence, we must both build great products and protect people from AI-related harms. Congress can and should design a federal AI framework that protects individuals and families, while also safeguarding innovation and competition. This approach will allow startups and entrepreneurs, who we call Little Tech, to power America’s future growth while still addressing real risks.

At a16z, we take a long view. Our funds have a 10 to 20 year life cycle, which means we care about investing in trustworthy products, strong businesses, and durable markets that will still be thriving years from now. Pursuing short-term valuations at the expense of sustainable tools and healthy markets is bad for the founders we invest in, bad for the investors who trust us with their capital, bad for our firm, and most importantly, bad for the American people and American businesses. A boom-and-bust cycle that results in AI products that are insecure, unsafe, or misleading would be a failure, not a triumph.

Federal AI legislation should lead us in a different direction, where AI empowers people and delivers social and economic benefits. Smart regulation is essential to ensuring that AI can help our society thrive in the long run and that American startups can compete on the global stage. If there’s one thing central to this vision, it is competition: without competition, consumers get worse products, slower progress, and fewer choices. And Little Tech is central to competition: without startups, large, deep-pocketed incumbents will control the market.
The vision is clear. The question is how to achieve it.

A critical first step is enacting federal AI legislation that sets a clear standard for AI governance. We’ve written and discussed what good AI policy looks like for Little Tech, but with both Republicans and Democrats now calling for congressional action, it’s time to put the key elements in one place. The nine pillars below translate that work into a concrete policy agenda that can keep Americans safe while keeping the U.S. in the lead.

  1. Punish harmful uses of AI.
  2. Protect children from AI-related harms.
  3. Protect against catastrophic cyber and national security risks.
  4. Establish a national standard for AI model transparency.
  5. Ensure federal leadership in AI development, while protecting states’ ability to police the harmful use of AI within their borders.
  6. Invest in AI talent by supporting workers and educating students.
  7. Invest in infrastructure: compute, data, and energy.
  8. Invest in AI research.
  9. Use AI to modernize government service delivery.

1. Punish harmful uses of AI

AI should not serve as a liability shield. When bad actors use AI to break the law, they should not be able to hide behind the technology.
If a person uses an AI system to commit fraud, they have still committed fraud. If a company deploys an AI tool that discriminates in hiring or housing, civil rights law should apply. If a firm uses AI in ways that are unfair or deceptive, that conduct should remain within the reach of state and federal consumer-protection law. The core principle is simple: AI is not a “get out of jail free” card.

A federal AI framework should make that principle explicit:

  • Ensure that criminal codes, civil rights statutes, consumer protection law, and antitrust apply to cases involving AI. In many of these areas, states and the federal government have overlapping jurisdiction. In consumer protection law, for instance, the Federal Trade Commission enforces prohibitions on unfair and deceptive trade practices (UDAP), while many state attorneys general also enforce their own UDAP statutes.
  • Direct the Justice Department and other state and federal enforcement agencies to map how those tools work in AI-related cases, identify gaps, and recommend targeted fixes where necessary. If existing bodies of law do not account for certain AI use cases, Congress may need to step in to fill those gaps. Any new law that targets AI-related harms should focus on marginal risk, and use an evidence-based approach to identify the gaps that need to be filled and the optimal approach to filling them.
  • Provide agencies with the resources—budget, headcount, and technical expertise—to actually bring these cases. In some cases, public-private partnerships may be valuable in providing technical expertise to ensure that prosecutors can prosecute existing law and that judges can recognize AI-based violations when they occur.

Of course, prohibiting people and companies from using AI as a liability shield does not mean that they should be unable to defend themselves. Defendants should still be permitted to raise any defenses available by statute or at common law, and in negligence cases, judges should still take account of whether defendants adopted good-faith measures and safeguards—consistent with applicable best practices for their industry and company size—in determining legal liability.

2. Protect children from AI-related harms

AI can harm anyone, but children are uniquely vulnerable. Minors may be less-equipped than adults to protect themselves, and when harms occur, the consequences may be more severe. Because of these vulnerabilities, lawmakers should consider enacting additional protections for children.

As with other online services, children under the age of 13 should be prohibited from using AI services, absent parental consent. It should be noted that, because of the challenges of obtaining consent, most technology services prohibit use by younger children entirely. Minors between the ages of 13 and 17 who use AI tools should receive additional protections when providers know that users are minors.

In those cases, providers should offer parents meaningful controls: the ability to set privacy and content settings; to impose usage limits or blackout hours; and to access basic information about how a tool is being used. Providers should also present minors with clear disclosures about what the system is and what it is not: that it is AI, not a human; that it is not a licensed professional (for instance, not a licensed mental health care provider); that it is not intended for crisis situations such as suicidal emergencies; and that it is not a replacement for licensed mental health care.

In imposing these requirements, lawmakers should be careful to avoid blanket prohibitions on minors’ ability to use AI. As California Governor Gavin Newsom said when he vetoed a misguided proposal that would have severely constrained minors’ ability to access and use AI products, “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.” Lawmaking should be careful not to mistake disempowerment for protection, and should stay within constitutional bounds.

Lawmakers should also require providers to develop protocols for how they handle certain situations, such as instances where a user expresses suicidal ideation or a desire to self-harm. Providers should be required to explain in these protocols how they will refuse to help users harm themselves—including by declining to provide information about methods of suicide—and how they will refer users in crisis to suicide prevention resources.
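
To make the shape of such a protocol concrete, the sketch below shows one way a provider might route messages that express self-harm intent: refuse to assist, and refer the user to crisis resources instead. It is a minimal illustration under assumptions of our own; the keyword check stands in for whatever classifier a provider actually uses, and the function names are hypothetical, not a prescribed design.

```python
# Minimal illustrative sketch of a crisis-handling protocol; the classifier,
# function names, and referral text are assumptions, not a prescribed design.

CRISIS_REFERRAL = (
    "I can't help with that. If you are thinking about harming yourself, "
    "please reach out for support: in the U.S., call or text 988 "
    "(Suicide & Crisis Lifeline)."
)


def expresses_self_harm_intent(message: str) -> bool:
    """Stand-in for a provider's suicidal-ideation / self-harm classifier."""
    keywords = ("kill myself", "end my life", "hurt myself", "self-harm")
    return any(k in message.lower() for k in keywords)


def normal_model_response(message: str) -> str:
    """Stand-in for the provider's ordinary model call."""
    return f"(model response to: {message!r})"


def route_message(message: str) -> str:
    """Apply the protocol: refuse to assist with self-harm, refer to crisis resources."""
    if expresses_self_harm_intent(message):
        # Refuse to provide methods or assistance, and refer the user
        # to suicide-prevention resources instead.
        return CRISIS_REFERRAL
    return normal_model_response(message)
```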

Beyond these responsibilities, lawmakers should also consider ensuring that civil and criminal penalties can be imposed in cases involving harm to minors, such as if AI is used to solicit or traffic a minor. Similarly, prohibitions on assisted suicide should not permit exemptions for cases involving AI.

3. Protect against catastrophic cyber and national security risks

Federal legislation should also improve the government’s understanding of AI’s marginal risks in high-stakes domains like national security. One option is to direct a technical, standards-focused federal government office to identify, test, and benchmark national security capabilities—like the use of AI in chemical, biological, radiological, and nuclear (CBRN) attacks or the ability to evade human control. That work should involve consultation with independent experts and AI researchers to understand existing risks and to establish assessment procedures. Building this type of measurement infrastructure will help ensure that policy responses are proportionate: capabilities should be managed based on evidence, not headlines. The same evidence-based approach should guide how policymakers think about AI’s role in both offensive and defensive cyber operations.

AI is poised to enhance the ability of nation-states, transnational cybercrime organizations, and lone wolves to carry out attacks at greater scale and with increasing sophistication. As AI technologies become more accessible, they allow even those with minimal technical skills to mount sophisticated attacks on critical infrastructure. As a result, while only the most sophisticated nation-state actors and cyber-criminal organizations engage in such attacks today, AI could allow a greater number of nation-state and other threat actors to do so in the future. But unlike some technologies that create asymmetric offensive and defensive capabilities, AI does not create net-new incremental risk, since it enhances the capabilities of both attackers and defenders. A federal framework must empower, not hamstring, the defensive use of AI. Limiting our defensive strategies can create artificial asymmetries that make it easier for attackers to target critical infrastructure.

Information sharing among AI companies about the potential misuse of models for cybercrime is a critical countermeasure in combating cyberattacks, but antitrust concerns can limit how much information is shared. Targeted exceptions that permit such sharing where necessary are therefore an essential safeguard. The financial system is particularly exposed to cyberattacks because of the central role it plays in monetizing such activity. Yet financial institutions are hamstrung by archaic model validation rules that frustrate the implementation of AI defenses. Legislative and regulatory changes should be enacted to remove these barriers. Finally, the government should procure and deploy state-of-the-art defensive AI solutions.

4. Establish a national standard for model transparency

Transparency can help people make informed choices about the AI products they use. Just as nutrition labels provide basic information that gives consumers the ability to make good choices about the food they eat, disclosing a set of “AI model facts” can help people make good choices about how they use AI models.

At the same time, government mandates that require companies to disclose information can present challenges. Government-imposed disclosure rules face constitutional constraints: they may be unconstitutional if they compel companies to disclose information that is not purely factual, that is controversial, or whose disclosure is unduly burdensome. Overly broad or onerous mandates are especially challenging for Little Tech, which cannot absorb compliance costs the way large incumbents can. For Little Tech, burdensome disclosure requirements threaten their ability to compete. As Jennifer Pahlka, a former White House deputy chief technology officer, has written, “paperwork favors the powerful.”

Mandates also might be problematic if they fail to provide consumers with useful information. Transparency for transparency’s sake adds costs without adding value. Lawmakers should design any transparency obligation with people in mind: what information enables people to make decisions that are consistent with their preferences?

If the goal is to require transparency that is useful for consumers, lawful, and not unduly burdensome for startups, then lawmakers should consider requiring developers of base models to disclose the following information:

  • Who built this model?
  • When was it released and what timeframe does its training data cover?
  • What are its intended uses and what are the modalities of input and output it supports?
  • What languages does it support?
  • What are the model’s terms of service or license?

Less powerful models should be exempted from this requirement, and disclosures shouldn’t require a company to reveal trade secrets or model weights.
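
To make the shape of such a disclosure concrete, the sketch below shows one way the answers to the questions above could be packaged as a machine-readable “model facts” record, in the spirit of a nutrition label. It is an illustration only, assuming hypothetical field names and a hypothetical ModelFacts structure; it is not a proposed standard.

```python
# Illustrative sketch only: the field names and structure are hypothetical,
# not a proposed federal standard.
from dataclasses import dataclass


@dataclass
class ModelFacts:
    developer: str                    # who built the model
    release_date: str                 # when it was released (ISO 8601 date)
    training_data_cutoff: str         # timeframe the training data covers
    intended_uses: list[str]          # what the model is intended for
    input_modalities: list[str]       # e.g., text, image, audio
    output_modalities: list[str]
    supported_languages: list[str]    # languages the model supports
    terms_or_license_url: str         # terms of service or license


example = ModelFacts(
    developer="Example Labs (hypothetical)",
    release_date="2025-01-15",
    training_data_cutoff="2024-09",
    intended_uses=["general-purpose text generation"],
    input_modalities=["text"],
    output_modalities=["text"],
    supported_languages=["en", "es"],
    terms_or_license_url="https://example.org/terms",
)

print(example)
```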

5. Ensure federal leadership in AI development, while protecting states’ ability to police harmful use of AI

Recent debates about AI governance often present federal and state roles as mutually exclusive: either the federal government has sole authority to regulate AI because it involves interstate commerce, or states have unbounded authority to regulate AI because states are laboratories of democracy and Congress has not yet enacted comprehensive AI legislation.

Neither extreme captures how the Constitution allocates power between state and federal governments: both states and the federal government have important roles in regulating AI. Congress should craft rules that govern the national AI market, while states should regulate harmful uses of AI within their borders. That means that Congress should take the lead in regulating model development, since open source and proprietary tools will necessarily travel across state lines. It also means that states should have the ability to enforce their own criminal and civil laws to prohibit harmful uses of AI in areas like consumer protection, civil rights, children’s safety, and mental health. And in some areas that traditionally fall within the domain of state lawmakers, like insurance and education, states may take the lead.

A federal framework can help to clarify these respective roles by expressly establishing congressional leadership in regulating AI development, while including safe harbors to clarify that states retain the ability to regulate AI use and to adjudicate tort claims.

Clear rules help in both directions. Developers get predictable rules for building and deploying models, and states maintain the tools they need to protect residents from concrete harms.

6. Invest in AI talent by supporting workers and educating students

Realizing AI’s economic and social potential requires an AI-ready workforce. That means supporting our workers and students in making the transition to an economy where success depends on possessing AI skills, just as being able to use the internet is essential for economic success today.

Supporting the transition to an AI-ready workforce includes several components:

  • Supporting workforce development initiatives that provide training on the use of AI technologies for workers, including by supporting reskilling and upskilling programs and by implementing partnerships with the private sector to offer industry-recognized certifications and clear on-ramps to jobs.
  • Establishing public-private partnerships to create opportunities for AI-ready workers and to support curriculum development modeled on relevant real-world skills.
  • Creating programs that provide certifications, apprenticeships, and internships to close the gap between classroom learning and practical, employable skills. Lawmakers should modernize the 80-year-old National Apprenticeship Act, as the current system wasn’t designed for new technologies like AI.
  • Implementing AI literacy in K-12 curricula to empower future generations of Americans to succeed in an AI-driven economy, including by strengthening STEM education, introducing age-appropriate machine learning concepts, and promoting responsible use of AI tools.

7. Invest in infrastructure: compute, data, and energy

A federal framework can play an important role in making AI markets more competitive. One option is to establish a National AI Competitiveness Institute (NAICI) that can help lower barriers to entry for entrepreneurs, small businesses, researchers, and government agencies. NAICI could offer access to compute, curated datasets, benchmarking and evaluation tools, and open-source software environments. Shared infrastructure of this kind reduces redundancy and gives smaller projects a credible way to experiment, iterate, and grow.

Open data sets might be particularly valuable. NAICI users might have access to open data repositories of non-personal data, and the government might ensure that these data sets include access to government-funded research. As part of this initiative, the government could prioritize making its own data sets available for AI training and research, where lawful and appropriate, and could create an “Open Data Commons” of data pools that are managed in the public’s interest.

Energy is another structural constraint. Large-scale AI models are compute- and energy-intensive, so a federal framework should help to increase energy abundance, while also ensuring that startups are not priced out or crowded out. Energy policies should be structured so that neither consumers nor Little Tech is saddled with the costs of hyperscalers’ energy needs without seeing commensurate benefits.

8. Invest in AI research

The relationship between academic research and AI product development has always been tight. Breakthroughs in universities and public labs often seed the companies and tools that define each new generation of technology. Supporting that research is therefore critical to long-term innovation in both the public and private sectors.

Government support should prioritize foundational and disruptive AI research. That could include dedicated funding streams for moonshot projects—high-risk, high-reward efforts that challenge current paradigms—and a balanced portfolio that spans near-term, medium-term, and long-term horizons.

Promising topics range widely: how to design effective worker-retraining programs for an AI-intensive economy; the role of open-source tools in promoting competition and security; the use of AI to defend against cyber threats; and the potential for AI to improve the delivery of government services. Structured, public research on these questions can inform policy and shape more effective products.

To maximize impact, federal grants should, where possible, require that non-sensitive research data be shared in machine-readable formats under licenses that permit AI training and evaluation. Making this research available will turn public funding into public infrastructure.
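
As a purely illustrative sketch of what that could mean in practice, a grantee might publish a small machine-readable record alongside each dataset, along the lines below. The field names, the license choice, and the URL are assumptions made for the example, not requirements drawn from any existing grant program.

```python
# Illustrative sketch of a machine-readable record a grantee might publish
# alongside a dataset; field names, license, and URL are hypothetical.
import json

dataset_record = {
    "title": "Hypothetical grant-funded measurement dataset",
    "funding_award": "hypothetical award identifier",
    "format": "CSV",
    "license": "CC-BY-4.0",  # an example of a reuse-permissive license
    "ai_training_and_evaluation_permitted": True,  # explicit machine-readable signal
    "access_url": "https://example.org/datasets/example.csv",
}

print(json.dumps(dataset_record, indent=2))
```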

9. Use AI to modernize government service delivery

AI has the potential to improve how the government operates and how it delivers services to the public. Each federal agency should develop a clear, time-bound plan for how it will use AI to improve operations—both by enhancing impact and lowering costs—while maintaining public trust.

As part of this plan, agencies should conduct regular assessments of their workflows to identify where AI can automate routine tasks, improve analysis of large datasets, and support better decision-making. In some cases, agencies may need to procure AI tools to assist them with these modernization efforts. Any procurement process should be designed so that it is accessible to Little Tech, and should not prohibit the acquisition of open source tools where appropriate.

Agencies should also implement pilot projects that allow them to test and evaluate AI tools in specific functions before deploying them at scale. These pilots should include clear metrics for evaluating impact. Where appropriate, agencies should consult with external experts on design, implementation, and evaluation of these pilot projects.

Any internal government use of AI should adhere to usage policies promulgated by the Office of Management and Budget. These policies should be updated regularly to reflect lessons learned from pilots, agency implementations, and evolving technical and legal standards.

A call to action: Congress should enact federal AI legislation

The time for congressional action is now. Millions of Americans use AI regularly, and there is an increasingly broad consensus that this technology has the power to benefit our economy and society. We know the U.S. must win the global AI race. Americans want their representatives to act to create a safe, thriving market, one that positions America to lead the world in AI.

Inaction poses risks of its own. Staying on our current path will produce AI markets that are less competitive and more concentrated, leaving people with AI products that are worse and less innovative.

Congress doesn’t have to decide between protecting people and protecting competition. With the right priorities and policies in place, it can do both: create a comprehensive framework to protect kids and adults from the harms of AI, while keeping the door open for new entrants to build, innovate, and succeed. 
