
Linking Specialists to Develop Privacy-Empowering Technology and Artificial Intelligence for All Users

FPF unveiled its Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics on July 9. The network supports the Biden-Harris Administration's commitments to privacy, equity, and security, as outlined in the administration's Executive Order on AI.

In a significant move towards fostering more ethical, fair, and representative AI, regulators are focusing on risk-based, audit-backed, and transparency-driven frameworks, paired with active encouragement of privacy-enhancing technology (PET) adoption.

The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics, launched by the Future of Privacy Forum (FPF) on July 9, aims to advance privacy-enhancing technologies (PETs) and their use in AI development. The RCN's agenda was shaped by more than 40 global experts during a virtual kickoff event.

Key regulator-focused policy priorities include mandatory pre-processing risk assessments, annual independent cybersecurity audits, and requirements for transparency and accountability around automated decision-making technologies. These policies are intended to ensure that AI systems respect privacy while mitigating potential harms such as privacy violations, bias, and unfair data use.

Businesses must conduct detailed risk assessments of personal data use and restrict any processing that poses significant privacy or fairness risks. Organizations also need independent audits of how well their cybersecurity programs protect personal data, reducing the risk of breaches that disproportionately affect marginalized groups or erode trust. Laws should require transparency and oversight of AI decision systems to address potential discrimination or unfair outcomes, promoting more representative and equitable AI systems.

Policies should encourage privacy-enhancing methods such as data obfuscation (anonymization, pseudonymization), encrypted data processing (homomorphic encryption, secure multi-party computation), federated analytics, and accountability tools. These techniques allow AI training and data analysis without exposing raw personal data, improving privacy while maintaining data utility.
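To make two of these techniques concrete, here is a minimal Python sketch, for illustration only, of pseudonymization via salted hashing and of additive secret sharing (a toy building block of secure multi-party computation). All function names, parameters, and values are hypothetical examples, not part of any FPF framework.

```python
import hashlib
import secrets

# --- Data obfuscation: pseudonymization via salted hashing (illustrative) ---
# A real deployment would manage the salt as a protected secret; here it is
# generated inline purely for demonstration.
SALT = secrets.token_bytes(16)

def pseudonymize(identifier: str, salt: bytes = SALT) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

# --- Encrypted-style processing: additive secret sharing (toy MPC) ---
# Each party holds a random share that individually reveals nothing about
# the underlying value; only the recombined total is meaningful.
PRIME = 2**61 - 1  # modulus for share arithmetic

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split a value into n random shares that sum to it modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares into the original value."""
    return sum(shares) % PRIME

# Parties sum their own shares locally; recombining the per-party sums
# yields the aggregate without anyone seeing the raw inputs.
salaries = [50_000, 62_000, 58_000]
shared = [share(s) for s in salaries]
per_party_sums = [sum(col) % PRIME for col in zip(*shared)]
total = reconstruct(per_party_sums)
```

The sketch shows the core privacy property the article describes: analysis proceeds on transformed data (digests, random shares) while the aggregate result stays exact.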

Compliance with data protection laws like GDPR, CCPA, and other emerging frameworks is essential. PETs provide practical means to comply with regulations by safeguarding data and enabling data sharing under strict privacy controls, thus embedding ethical norms into AI development.

Striking the right balance is crucial: policies should encourage PET adoption and privacy-centric innovation without stifling the beneficial uses of AI. The focus is on building AI systems that are safer, more ethical, fair, and representative, helping mitigate bias, protect privacy rights, and ensure AI accountability while enabling responsible innovation.

However, there are hard questions about how to implement PETs while preserving data crucial for assessing and combating bias, especially in AI decision-making systems. There is a need for broader dissemination of PETs expertise beyond academia and big tech. Consumer sentiment is shifting towards greater awareness of privacy issues, and participants identified many areas of opportunity for PETs usage, such as in the social sciences, medical research, credential verification, AI model training, behavioral advertising, and education.

Industry experts, policymakers, civil society, and academics discussed the possibilities, challenges, and ethical considerations of PETs and their interaction with AI systems. Because the deployment and operational costs of PETs can be prohibitive, participants suggested that building a framework, with a series of questions to ask about a given use case and applied technology, could be a helpful way to move forward.

The discussion focused on the role of government in supporting business cases for PETs. FPF experts led a workshop-style virtual meeting to direct and inform the RCN's next three years of work. Public trust and consumer advocacy regarding PETs are considered crucial, and FPF will bring together a diverse group of experts to foster convergence and support the broad deployment of PETs.

Usability is essential in defining a PET: without understanding and building for end users, PETs risk losing their intended value. If you are a subject matter expert on PETs or use PETs, you can contribute to their future use and regulation by signing up for the Expert or Regulator sub-groups. Regular meetings between the Expert and Regulator groups will provide substantive feedback on the RCN's progress.

Later that day, senior representatives from various sectors met in a roundtable to discuss the ethical, equitable, and responsible use of PETs and the FPF RCN's future direction, including the mechanisms the network might use for deployment. The meeting marked the beginning of a collaborative effort to advance privacy-enhancing technologies and their use in developing more ethical, fair, and representative AI.

  1. The Research Coordination Network (RCN) for Privacy-Preserving Data Sharing and Analytics, launched by the Future of Privacy Forum (FPF), is focused on advancing Privacy Enhancing Technologies (PETs) and their application in AI development.
  2. Regulators are prioritizing policy initiatives that include mandatory pre-processing risk assessments, annual cybersecurity audits, and requirements for transparency and accountability around AI decision-making technologies.
  3. In AI development, data obfuscation, encrypted data processing, and federated analytics are prime examples of privacy-enhancing methods encouraged by regulators.
  4. PETs, such as data anonymization, encrypted data processing, and federated analytics, allow AI training and data analysis without exposing raw personal data, preserving privacy while maintaining data utility.
  5. Compliance with data protection laws like GDPR, CCPA, and other frameworks is vital, and PETs offer practical means to adhere to regulations by ensuring data privacy and enabling controlled data sharing.
  6. The deployment costs of PETs can be prohibitive, and a significant challenge lies in balancing their adoption with the beneficial uses of AI while preserving essential data needed for bias assessment.
  7. Building a framework and series of questions to analyze a given use case with an applied technology can help move forward with PETs implementation.
  8. Public trust, consumer advocacy, and the involvement of industry experts, policymakers, civil society, and academics are essential for fostering broader PETs deployment and responsible, ethical AI development.
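As a toy illustration of the federated analytics pattern mentioned in the takeaways above: each site computes a local aggregate inside its own trust boundary and shares only that aggregate, never raw records. This is a minimal sketch with invented data and function names, not a production protocol (real deployments typically add protections such as secure aggregation or differential privacy).

```python
from dataclasses import dataclass

@dataclass
class LocalAggregate:
    """Site-level summary: the only data that leaves the site."""
    total: float
    count: int

def local_aggregate(records: list[float]) -> LocalAggregate:
    """Computed inside each site's trust boundary; raw records never leave."""
    return LocalAggregate(total=sum(records), count=len(records))

def federated_mean(aggregates: list[LocalAggregate]) -> float:
    """Coordinator combines site-level sums and counts into a global mean."""
    return sum(a.total for a in aggregates) / sum(a.count for a in aggregates)

# Two hypothetical sites with private records.
site_a = [4.0, 6.0]
site_b = [5.0, 7.0, 8.0]
mean = federated_mean([local_aggregate(site_a), local_aggregate(site_b)])
```

The coordinator learns the global mean (6.0 here) but never sees any individual record, which is the privacy-utility trade-off federated analytics targets.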
