The Future of Privacy Forum (FPF) published a risk framework for handling biometric data in immersive technologies on Tuesday.
The FPF’s Risk Framework for Body-Related Data in Immersive Technologies report discusses best practices for collecting, using, and transferring body-related data across entities.
#NEW: @futureofprivacy releases its ‘Risk Framework for Body-Related Data in Immersive Technologies’ by authors @spivackjameson & @DanielBerrick.
This analysis assists organizations to ensure they are handling body-related data safely & responsibly. https://t.co/FC1VOsaAFe
— Future of Privacy Forum (@futureofprivacy) December 12, 2023
Organisations, businesses, and individuals can use the FPF’s observations as recommendations and as a foundation for safe, responsible extended reality (XR) policies, particularly entities that handle large amounts of biometric data in immersive technologies.
Furthermore, those following the report’s guidelines can apply the framework to document the reasons and methodologies for handling biometric data, comply with laws and standards, and evaluate the privacy, safety, and ethical risks of collecting data from devices.
The framework applies not only to XR-related organisations but also to any institution leveraging technologies dependent on the processing of biometrics.
Jameson Spivack, Senior Policy Analyst, Immersive Technologies, and Daniel Berrick, Policy Counsel, co-authored the report.
Your Data: Handled with Care
In order to understand how to handle personal data, organisations must identify potential privacy risks, ensure compliance with laws, and implement best practices to boost safety and privacy, the FPF explained.
According to Stage One of the framework, organisations can do so by taking the steps below (a minimal sketch of a data-map entry follows the list):
- Creating data maps that outline their data practices linked to biometric information
- Documenting their use of data and practices
- Identifying pertinent stakeholders, direct and third-party, affected by the organisation’s data practices
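A data map like this can be lightweight. As a purely illustrative sketch (the record structure and field names below are hypothetical, not taken from the FPF report), a single entry might document one body-related data practice and its stakeholders:

```python
from dataclasses import dataclass, field

# Hypothetical data-map entry; the fields are illustrative, not drawn
# from the FPF framework itself.
@dataclass
class DataMapEntry:
    data_type: str                  # e.g. "eye-tracking gaze vectors"
    source_device: str              # e.g. "XR headset"
    purpose: str                    # documented reason for collection
    legal_basis: str                # e.g. "consent", "contract"
    stakeholders: list[str] = field(default_factory=list)  # direct and third-party
    shared_with: list[str] = field(default_factory=list)   # external recipients

# Example: documenting a single practice for later review.
entry = DataMapEntry(
    data_type="eye-tracking gaze vectors",
    source_device="XR headset",
    purpose="foveated rendering",
    legal_basis="consent",
    stakeholders=["end users", "rendering engine vendor"],
)
print(entry)
```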
In Stage Two, companies would analyse applicable legal frameworks to ensure compliance. This stage concerns companies whose collection, use, or transfer of “body-related data” is covered by US privacy laws.
To comply, the framework recommends that organisations “understand the individual rights and business obligations” applicable under “existing comprehensive and sectoral privacy laws.”
Organisations should also analyse emerging laws and regulations and how they would impact “body-based data practices.”
In Stage Three, companies, organisations, and institutions should identify and assess risks to others, including the individuals, communities, and societies they serve.
It said that privacy risks and harms could derive from data “used or handled in particular ways, or transferred to particular parties.”
It added that legal compliance “may not be enough to mitigate risks.”
To maximise safety, companies can follow several steps to protect data, such as proactively identifying and reducing the risks associated with their data practices.
This would involve assessing impacts along the following dimensions (a simple risk-checklist sketch follows the list):
- Identifiability
- Use to make key decisions
- Sensitivity
- Partners and other third-party groups
- The potential for inferences
- Data retention
- Data accuracy and bias
- User expectations and understanding
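One way to make that assessment concrete, shown here as a hypothetical sketch rather than anything prescribed by the framework, is to record a flag per dimension and surface the ones that need mitigation:

```python
# The dimension names mirror the list above; the flagging scheme itself
# is a hypothetical illustration, not part of the FPF framework.
RISK_DIMENSIONS = [
    "identifiability",
    "use in key decisions",
    "sensitivity",
    "partners and third parties",
    "potential for inferences",
    "data retention",
    "accuracy and bias",
    "user expectations and understanding",
]

def flag_risks(assessment: dict[str, bool]) -> list[str]:
    """Return the dimensions an assessor marked as high risk."""
    return [dim for dim in RISK_DIMENSIONS if assessment.get(dim, False)]

# Example: two dimensions are flagged for follow-up mitigation.
review = {"identifiability": True, "data retention": True}
print(flag_risks(review))  # ['identifiability', 'data retention']
```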
After evaluating a group’s data use policy, organisations can assess the fairness and ethics of its data practices based on the risks identified, the framework explained.
Finally, the FPF framework recommended the implementation of best practices in Stage Four, which involves a “number of legal, technical, and policy safeguards organisations can use.”
It added this would help organisations keep updated with “statutory and regulatory compliance, minimize privacy risks, and ensure that immersive technologies are used fairly, ethically, and responsibly.”
The framework recommends that organisations intentionally implement best practices by comprehensively “touching all parts of the data lifecycle and addressing all relevant risks.”
Organisations can also collaboratively implement best practices using those “developed in consultation with multidisciplinary teams within an organization.”
These would involve legal, product, engineering, trust and safety, and privacy-related stakeholders.
Organisations can protect their data by:
- Localising data processing and storage on the device (as sketched after this list)
- Minimising data footprints
- Regulating or implementing third-party management
- Offering meaningful notice and consent
- Preserving data integrity
- Providing user controls
- Incorporating privacy-enhancing technologies
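As a brief illustration of the first two items, local processing and a minimal data footprint, the sketch below (a hypothetical example, not code from the report) derives only the coarse aggregate an application needs and discards the raw body-related samples on the device:

```python
import statistics

def on_device_summary(gaze_samples: list[tuple[float, float]]) -> dict[str, float]:
    """Reduce raw gaze samples to a coarse aggregate on the device,
    so the raw body-related data never leaves local storage."""
    xs = [x for x, _ in gaze_samples]
    ys = [y for _, y in gaze_samples]
    return {"mean_x": statistics.fmean(xs), "mean_y": statistics.fmean(ys)}

# Only the aggregate, never the raw samples, would be transmitted upstream.
raw = [(0.12, 0.80), (0.15, 0.78), (0.11, 0.83)]
print(on_device_summary(raw))
```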
Having followed these best practices, organisations could align them into a coherent strategy, then reassess them on an ongoing basis to maintain their efficacy.
EU Proceeds with Artificial Intelligence (AI) Act
The news comes right after the European Union moved forward with its AI Act, which the FPF states will have a “broad extraterritorial impact.”
Currently under negotiation with member states, the legislation aims to protect citizens from harmful and unethical uses of AI-based solutions.
Political agreement was reached on the EU’s #AIAct, which will have a broad extraterritorial impact. If you would like to gain insights into key legal implications of the regulation, join @kate_deme for an in-depth FPF training tomorrow at 11 am ET.
https://t.co/weVgDdsvRh
— Future of Privacy Forum (@futureofprivacy) December 11, 2023
The organisation is offering guidance, expertise, and training for companies as the Act prepares to enter into force. The legislation represents one of the biggest changes in data privacy policy since the General Data Protection Regulation (GDPR) entered into force in May 2016.
The European Commission stated it wants to “regulate artificial intelligence (AI)” to ensure improved conditions for using and rolling out the technology.
It said in a statement,
“In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation. Once approved, these will be the world’s first rules on AI.”
According to the Commission, the EU aims to approve the Act by the end of the year.
Biden-Harris Executive Order on AI
In late October, the Biden-Harris administration issued an executive order on the regulation of AI. The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence aims to safeguard citizens around the world from the harmful effects of AI programmes.
Enterprises and organisations will need to comply with the new regulations, which require “developers of the most powerful AI systems” to share their safety assessments with the US Government.
Responding to the order, the FPF said it was “incredibly comprehensive,” offering a “whole of government approach” with an impact beyond government agencies.
It continued in its official statement,
“Although the executive order focuses on the government’s use of AI, the influence on the private sector will be profound due to the extensive requirements for government vendors, worker surveillance, education and housing priorities, the development of standards to conduct risk assessments and mitigate bias, the investments in privacy enhancing technologies, and more.”
The statement also called on lawmakers to implement “bipartisan privacy legislation,” describing it as “the most important precursor for protections for AI that impact vulnerable populations.”
UK Hosts AI Safety Summit
The United Kingdom also hosted its AI Safety Summit at the iconic Bletchley Park, where world-renowned scientist Alan Turing helped crack Nazi Germany’s World War II-era Enigma code.
At the event, some of the industry’s leading experts, executives, companies, and organisations gathered to outline protections for regulating AI.
Attendees included representatives of the US and UK governments, the EU, and the UN, as well as the Alan Turing Institute, the Future of Life Institute, Tesla, OpenAI, and many others. The groups discussed methods to create a shared understanding of the risks of AI, collaborate on best practices, and develop a framework for AI safety research.
The Fight for Data Rights
The news comes as multiple organisations form fresh alliances to tackle ongoing concerns over the use of virtual, augmented, and mixed reality (VR/AR/MR), AI, and other emerging technologies.
For example, Meta Platforms and IBM launched a massive alliance to develop best practices for artificial intelligence and biometric data, and to help create regulatory frameworks for tech companies worldwide.
The Global AI Alliance hosts more than 30 organisations, companies, and individuals from across the global tech community, including AMD, Hugging Face, CERN, the Linux Foundation, and others.
Furthermore, organisations like the Washington, DC-based XR Association, Europe’s XR4Europe alliance, the globally recognised Metaverse Standards Forum, and the GatherVerse, among others, have contributed enormously to the implementation of best practices for those building the future of spatial technologies.