New Organization, O-SAIF, Debuts to Advocate for Transparency and Responsibility in Public AI Contract Deals

The Open-Source AI Foundation (O-SAIF), a newly launched organisation, advocates for the use of open-source AI in civilian government agencies. The foundation, online at theopensourceai.foundation, emphasises the importance of transparency, accountability, and security in AI systems used by government.

Travis Oliphant, the CEO of Quansight, argues that open-source AI allows the public to audit and verify algorithms, thereby enhancing trust in government technology. O-SAIF's mission aligns with this sentiment: increasing the transparency and trustworthiness of AI systems that affect civilian life.

Joe Merrill, the CEO of OpenTeams, and Brittany Kaiser, the chairwoman of O-SAIF and the CSO of Eliza Systems, both stress the need for government AI to be built openly with transparency and auditability. Kaiser further emphasises the urgency of O-SAIF's mission, stating that AI will soon surpass human coding capabilities.

O-SAIF is planning to launch a US$10 million campaign to educate lawmakers, policymakers, and citizens about the importance of open-source AI. The foundation is backed by AI experts and organisations advocating for open and transparent AI in government.

The foundation argues that open-source AI is more secure because any attacks or exploits can be identified and remediated. Moreover, open collaboration on AI development enables more thorough security vetting and faster identification of vulnerabilities, reducing risks of misuse or malfunction in sensitive civilian applications.

O-SAIF also promotes innovation while safeguarding democratic values and civil rights. The models and their training in open-source AI can be audited to minimise bias, ensuring that AI aligns with ethical and legal standards, protecting civil rights and privacy.

Tyler Lindholm, the director of O-SAIF, states that the use of closed-source software vendors for AI development in civilian govtech is a misuse of public resources. He argues that these proprietary systems lack transparency and accountability, leading to taxpayer funds being wasted while private companies are enriched.

Shaw Walters, the CEO of Eliza Labs and the founder of elizaOS, supports this perspective, stating that large language models should be treated like a public good and be auditable by citizens. He further emphasises that ensuring auditability now is crucial to avoid dire consequences for taxpayers and society.

The Open-Source AI Foundation (O-SAIF) focuses on ensuring AI systems used by federal, state, and local government civilian agencies are publicly auditable. O-SAIF is calling for an end to closed-source AI contracts with civilian agencies, advocating for a shift towards open-source AI to ensure transparency, security, innovation, and accountability in publicly deployed AI systems.

Organisations like the Safe AI Forum (SAIF) and the Coalition for Secure AI (CoSAI) promote similar ideals, emphasising international cooperation to reduce extreme AI risks and benefit all, with an emphasis on transparency and scientific understanding. The California Report on Frontier AI Policy also stresses the importance of open release of foundational models and open-source development to spur evidence-based policy and technological advancement in the public interest.

In sum, the broader landscape of AI governance and safety supports O-SAIF's case for open-source AI in civilian government agencies as a means of ensuring transparency, security, innovation, and accountability in publicly deployed AI systems.

Open-source AI, as advocated by organisations like O-SAIF, allows for public auditing and verification of algorithms, enhancing trust in government technology. The foundation argues that AI systems used by the government should be built openly, with transparency and auditability, to minimise bias and ensure that ethical and legal standards are met.
