
Privacy Paradigm ODRL Profile

Release

Latest editor's draft:
https://w3id.org/ppop
Editors:
(Ontology Engineering Group (OEG), Universidad Politécnica de Madrid)
Andrés Chomczyk Penedo (Law, Science, Technology and Society (LSTS), Vrije Universiteit Brussel)
Blessing Mutiro (Castlebridge)
Haleh Asgarinia (Behavioural, Management, and Social Science (BMS) Faculty, Universiteit Twente)
Dave Lewis (ADAPT Centre, Trinity College Dublin)
Participate:
GitHub profile
File a bug
Commit history
Pull requests

Abstract

This document presents a new ODRL profile, the Privacy Paradigm ODRL Profile (PPOP), which extends ODRL, DPV and other specifications to bridge the gap in the representation of information related to transparency practices in privacy and data protection across the knowledge representation, legal and ethical fields. It also addresses the data processing requirements for personal datastores, envisaged as core elements of the data economy.

1. Acknowledgments

This research has been supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 813497 (PROTECT).

2. Introduction

2.1 Ontology Requirement Specification Document

Privacy Paradigm ODRL Profile - PPOP
1. Purpose
The purpose of this profile is to support the specification of transparency measures in the context of data sharing activities and data-intensive flows between multiple data subjects, controllers and processors in decentralised data storage environments.
2. Scope
The scope of this profile is limited to the definition of data sharing policies aligned with European data protection regulations and with ethical guidelines related to the transparency of Artificial Intelligence.
3. Implementation Language
RDF, RDFS
4. Intended End-Users
Developers of decentralised data storage and sharing solutions
5. Intended Uses
Use 1 - Classify transparency practices of a Personal Information Management System (PIMS)
Use 2 - Define access control policies for legal and ethical access to group and individual personal data stores
Use 3 - Model agreements / contracts between data subjects and intermediaries in new data governance schemes
Use 4 - Model safeguards for the trustworthiness of AI systems and respective rights and duties
Use 5 - Create a machine- and human-readable policy notice template
6. Ontology Requirements
a. Non-Functional Requirements
NFR 1. The ontology shall be published online with standard documentation.
b. Functional Requirements: Groups of Competency Questions
Related to Safeguards
Trustworthiness / Reliability CQTS1 - To what extent did you ensure that the system would function reliably under harsh conditions?
Safety CQSF1 - What considerations did you take to prioritise the safety and the mental and physical integrity of people when scanning horizons of technological possibility and when conceiving of and deploying AI applications?
Security CQSC1 - What strategies did you establish to ensure that the system continuously remains functional and accessible to its authorised users?
CQSC2 - What protocols did you use to keep confidential and private information secure even under hostile or adversarial conditions?
CQSC3 - To what extent is the system capable of maintaining the integrity of the information that constitutes it, including protecting its architectures from the unauthorised modification or damage of any of its component parts?
Privacy CQPR1 - What measures did you take to enhance privacy?
Explainability CQEX1 - What mechanism did you consider to provide explanation and justification of both the content of algorithmically supported decisions and the processes behind their production in plain, understandable, and coherent language? Did you research and try to use the simplest and most interpretable model possible for the application in question?
CQEX2 - What considerations were taken into account when the rationale behind a specific decision or behaviour was communicated and clarified? Did you establish mechanisms to inform (end-)users on the reasons and criteria behind the AI system's outcomes?
CQEX3 - What strategies did you use to provide a formal or logical explanation?
CQEX4 - What strategies did you use to provide a semantic explanation (explanation of technical rationale behind the outcome)?
Traceability CQTR1 - What measures can ensure traceability of outcomes and decisions?
CQTR2 - What methods are used to ensure traceability of designing and developing algorithmic systems trained by personal data?
Auditability CQAU1 - What measures did you take to ensure that every step of the process of designing and implementing AI is accessible for audit, oversight, and review? Did builders and implementers of algorithmic systems keep records and make accessible information that enable monitoring from the stages of collection, pre-processing, and modelling to training, testing, and deploying?
Avoid Bias CQFR1 - How to ensure that the system has been sufficiently trained to develop and implement responsibility without bias?
CQFR2 - Did you ensure that model architectures did not include target variables, features, processes, or analytical structures (correlations, interactions, and inferences) which are unreasonable, morally objectionable, or unjustifiable?
CQFR3 - What considerations were taken into account to encourage all voices to be heard and all opinions to be weighed seriously and sincerely throughout the production and use lifecycle?
Transparency CQTP1 - What considerations were taken into account when considering the transparency of an AI system?
CQTP2 - If data was collected from the data subject, has the controller provided the data subject with the mandatory information?
CQTP3 - If data wasn't collected from the data subject, has the controller provided the data subject with the mandatory information?
CQTP4 - Is it the first time that the data subject is contacted?
CQTP5 - Is there any applicable exemption to the information obligation?
CQTP6 - Has the user of data intermediary services been provided with the mandatory information?
CQTP7 - Has the user of a mere conduit service been provided with information about the restrictions on the service?
CQTP8 - Were the purposes associated with each particular category of personal data informed?
CQTP9 - Was the particular legitimate interest associated with each particular category of personal data informed?
CQTP10 - Were the data recipients associated with each particular category of personal data informed?
CQTP11 - Were the reasons for the data transfer associated with each particular category of personal data informed?
CQTP12 - Were the data retention periods associated with each particular category of personal data informed?
CQTP13 - Were the conditions and restrictions related to the use of the service informed?
Related to Rights
Non-discrimination CQDS1 - How to ensure that the decisions of the system do not have discriminatory or inequitable impacts on the lives of the people they affect?
Autonomy / Informed Decisions CQAT1 - How to ensure that the users are able to make free and informed decisions (in interaction with a system)?
Right to Privacy CQRP1 - Did you build in mechanisms for notice and control over personal data?
Related to Duties
Accuracy CQAC1 - Did you ensure that the system generates a correct output?
CQAC2 - Did you assess whether you can analyse your training and testing data? Can you change and update this over time?
Other CQs
Intended Purpose CQPP1 - Did you clarify the purpose of the AI system and who or what may benefit from the product/service?
Accountability CQRE1 - How to establish a continuous chain of human responsibility across the whole AI project delivery flow, from the design of an AI system to its algorithmically steered outcomes?
Impacts on business CQBU1 - If the organisation's business model relies on personal data, where does the data come from to create value for the organisation?
Well-being CQWE1 - To what extent did you ensure that the use of technology fosters and cultivates the welfare and well-being of data subjects whose interests are impacted by its use?
Informed data subjects CQIN1 - Did you enable people to understand how an AI system is developed, trained, operates, and deployed in the relevant application domain, so that consumers, for example, can make more informed choices?

2.2 Profile diagram

The base concepts specified by the Privacy Paradigm ODRL Profile are shown in the figure below.

[Figure: Privacy Paradigm ODRL Profile base concepts]

2.3 Document Conventions

Prefix Namespace Description
odrl http://www.w3.org/ns/odrl/2/ [odrl-vocab] [odrl-model]
rdf http://www.w3.org/1999/02/22-rdf-syntax-ns# [rdf11-concepts]
rdfs http://www.w3.org/2000/01/rdf-schema# [rdf-schema]
owl http://www.w3.org/2002/07/owl# [owl2-overview]
dct http://purl.org/dc/terms/ [dct]
ns1 http://purl.org/vocab/vann/ [ns1]
xsd http://www.w3.org/2001/XMLSchema# [xsd]
skos http://www.w3.org/2004/02/skos/core# [skos]
dpv http://www.w3.org/ns/dpv# [dpv]
ppop https://w3id.org/ppop [ppop]
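
For convenience, the prefixes above correspond to the following non-normative Turtle header; trailing separators (e.g. "#") may differ in the published ontology, and the examples in section 4 additionally use the ex: and oac: prefixes, which are not listed in the table.

          @prefix odrl: <http://www.w3.org/ns/odrl/2/> .
          @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
          @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
          @prefix owl:  <http://www.w3.org/2002/07/owl#> .
          @prefix dct:  <http://purl.org/dc/terms/> .
          @prefix ns1:  <http://purl.org/vocab/vann/> .
          @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
          @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
          @prefix dpv:  <http://www.w3.org/ns/dpv#> .
          @prefix ppop: <https://w3id.org/ppop> .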

3. Profile specification

This ODRL profile relies on the invocation of legal and ethical concepts. Where relevant, each term indicates the legal and ethical sources that were used to define it. The profile's core concepts are the new entities involved in the data economy, individual and group rights, organisational duties, and the measures that safeguard them.

In-force regulations and proposals of the European Commission were taken into account, as well as existing case law and guidelines by the European Data Protection Board (EDPB). The sources used in this respect were the following: (i) the General Data Protection Regulation [GDPR], (ii) the Data Governance Act [DGA], (iii) the European Digital Identity regulation amendment [eIDAS 2], (iv) the Digital Services Act [DSA], (v) EDPB’s guidelines on consent [consent], (vi) the Article 29 Working Party guidelines on transparency [transparency], and (vii) the WhatsApp Ireland decision of the Irish data protection supervisory authority [whatsapp]. The work was further complemented by scholarly legal literature on the matter where gaps were identified.

Existing ethical guidelines related to the transparency of Artificial Intelligence were also used to collect requirements for the profile. The sources used in this respect were the following: (i) Ethics Guidelines for Trustworthy AI [AI HLEG], (ii) Understanding artificial intelligence ethics and safety [AI Turing], (iii) Recommendation of the Council on AI [OECD], (iv) First draft of the recommendation on the ethics of artificial intelligence [UNESCO], and (v) the European Convention on Human Rights [ECHR]. In addition to these sources, a range of ethical and philosophical literature was consulted.

3.1 Entities

3.1.1 Classes

Group | Data Sharing Entity | Data Intermediary | Data Sharing Service Provider | Data Altruism Organisation | Data Holder | Data User | Data Trust Provider

3.1.1.1 Group

Term: Group
Definition: Collection of individuals who may or may not share a common purpose or intention; or a formal organization with formal goals and formal organizational rules.
Instance of: odrl:Party
Legal source: [pagallo-17], [puri-21], [puri-22]
Ethical source: [copp-84], [french-84], [newman-04]
Usage examples: Family Pod
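
As a non-normative sketch (instance names prefixed ex: are hypothetical), the Family Pod group could be described as follows, reusing the membership and intermediary properties that appear in Example 4.1:

          ex:familyPool a ppop:Group ;
              ppop:hasDataIntermediary ex:dataIntermediary ;
              ppop:hasVoluntaryMembership ex:parent1, ex:parent2 ;
              ppop:hasNonVoluntaryMembership ex:child1, ex:child2 .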

3.1.1.2 Data Sharing Entity

Term: Data Sharing Entity
Definition: A legal or natural person that can be a data intermediary, data holder or data user
Subclass of: dpv:LegalEntity
Legal source: [DGA Art.9], [eIDAS Art.3.19]
Usage examples:

3.1.1.3 Data Intermediary

Term: Data Intermediary
Definition: A legal person that engages in intermediation services between data holders which are legal persons and potential data users, including making available the technical or other means to enable such services
Subclass of: ppop:DataSharingEntity
Legal source: [DGA Art.9.1]
Usage examples:

3.1.1.4 Data Sharing Service Provider

Term: Data Sharing Service Provider
Definition: A legal person that engages in intermediation services between data subjects that seek to make their personal data available and potential data users, including making available the technical or other means to enable such services, in the exercise of the rights provided in Regulation (EU) 2016/679
Subclass of: ppop:DataIntermediary
Legal source: [DGA Art.9.2]
Usage examples: Family Pod
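
A possible non-normative sketch of the Family Pod's data sharing service provider, reusing the pattern from Example 4.1 (the dpv:hasName value is illustrative):

          ex:dataIntermediary a ppop:DataSharingServiceProvider ;
              dpv:hasName "Family Pod Provider" .
          ex:familyPool a ppop:Group ;
              ppop:hasDataIntermediary ex:dataIntermediary .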

3.1.1.5 Data Altruism Organisation

Term: Data Altruism Organisation
Definition: A legal person that performs the activities related to data altruism
Subclass of: ppop:DataIntermediary
Legal source: [DGA Art.14-16]
Usage examples:

3.1.1.6 Data Holder

Term: Data Holder
Definition: A legal person or data subject who has the right to grant / share access to data under its control
Subclass of: ppop:DataSharingEntity
Legal source: [DGA Art.2.5]
Usage examples: Family Pod

3.1.1.7 Data User

Term: Data User
Definition: A natural or legal person who has lawful access to data and is authorised to use it
Subclass of: ppop:DataSharingEntity
Legal source: [DGA Art.2.6]
Usage examples:

3.1.1.8 Data Trust Provider

Term: Data Trust Provider
Definition: A natural or a legal person who provides one or more trust services either as a qualified or as a non-qualified trust service provider
Subclass of: dpv:LegalEntity
Legal source: [eIDAS Art.3.19]
Usage examples:
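
A non-normative sketch of a data holder and a data user, mirroring Example 4.1 (ex:researchLab is a hypothetical instance):

          ex:parent1 a ppop:DataHolder, dpv:DataSubject ;
              ppop:isDataHolderFor ex:child1, ex:child2, ex:parent1 .
          ex:researchLab a ppop:DataUser .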

3.1.2 Properties

has charge price | collects metadata | is fair | is transparent | is non discriminatory | converts data | prevents fraudulent access | prevents abusive access | ensures reasonable continuity | has competition procedures | has service limitation

3.1.2.1 has charge price

Term: has charge price
Definition: Indicates whether the data sharing entity charges a price for their service
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.3]
Usage examples:

3.1.2.2 collects metadata

Term: collects metadata
Definition: Indicates whether the data sharing entity has disclosed if the data sharing implies collecting metadata
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.2]
Usage examples:

3.1.2.3 is fair

Term: is fair
Definition: Indicates whether the data sharing entity has disclosed if the data sharing is fair
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.3]
Usage examples:

3.1.2.4 is transparent

Term: is transparent
Definition: Indicates whether the data sharing entity has provided transparency measures
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.3]
Usage examples:

3.1.2.5 is non discriminatory

Term: is non discriminatory
Definition: Indicates whether the data sharing entity has disclosed that the data sharing is non discriminatory
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.3]
Usage examples:

3.1.2.6 converts data

Term: converts data
Definition: Indicates whether the data sharing entity has disclosed that the data sharing implies changing the data format
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.4]
Usage examples:

3.1.2.7 prevents fraudulent access

Term: prevents fraudulent access
Definition: Indicates whether the data sharing entity has disclosed if the data sharing has technical and organisational measures to prevent fraudulent access
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.5]
Usage examples:

3.1.2.8 prevents abusive access

Term: prevents abusive access
Definition: Indicates whether the data sharing entity has disclosed if the data sharing has technical and organisational measures to avoid abusive access
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.5]
Usage examples:

3.1.2.9 ensures reasonable continuity

Term: ensures reasonable continuity
Definition: Indicates whether the data sharing entity has disclosed if the data sharing has technical and organisational measures to ensure continuity in the data sharing
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.6]
Usage examples:

3.1.2.10 has competition procedures

Term: has competition procedures
Definition: Indicates whether the data sharing entity has disclosed if the data sharing has technical and organisational measures to ensure competitive practices
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [DGA Art. 11.9]
Usage examples:

3.1.2.11 has service limitation

Term: has service limitation
Definition: Indicates whether the data sharing entity has disclosed the data retention of the data sharing
Domain: ppop:DataSharingEntity
Range: xsd:boolean
Legal source: [eIDAS Art. 2]
Usage examples:
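
As a non-normative illustration covering several of the properties above, a data sharing entity could declare its disclosures as follows; the property IRIs (e.g. ppop:hasChargePrice) are assumed here from the term labels and the camel-case style used elsewhere in this profile:

          ex:dataIntermediary a ppop:DataSharingEntity ;
              ppop:hasChargePrice false ;
              ppop:collectsMetadata true ;
              ppop:isTransparent true ;
              ppop:preventsFraudulentAccess true ;
              ppop:ensuresReasonableContinuity true .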

3.2 Technologies

3.2.1 Classes

Personal Information Management System | Personal Data Store | Identity Wallet

3.2.1.1 Personal Information Management System

Term: Personal Information Management System
Definition: System that helps to give individuals more control over their personal data by managing their personal data in secure, on-premises or online storage systems and sharing it when and with whomever they choose
Subclass of: dpv:Technology
Legal source: [EDPS PIMS]
Usage examples:

3.2.1.2 Personal Data Store

Term: Personal Data Store
Definition: Service that lets an individual store, manage and deploy their personal data
Subclass of: ppop:PIMS
Usage examples:

3.2.1.3 Identity Wallet

Term: Identity Wallet
Definition: Service that allows the user to store identity data, credentials and attributes linked to her/his identity, to provide them to relying parties on request and to use them for authentication, online and offline, and to create qualified electronic signatures and seals
Subclass of: ppop:PIMS
Legal source: [eIDAS Art.3.42]
Usage examples:
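
A non-normative sketch of these technologies as instance data, reusing dpv:hasLocation as in Example 4.3A (ex:patientA-pds and ex:walletA are hypothetical instances, and ppop:IdentityWallet is assumed from the term label):

          ex:patientA-pds a ppop:PersonalDataStore ;
              dpv:hasLocation <https://pod-provider/patientA/> .
          ex:walletA a ppop:IdentityWallet .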

3.3 Measures

PPOP measures taxonomy

3.3.1 Classes

Measure | Technical and Organisational Measure | Safeguard for Trustworthiness | Safeguard for General Safety | Safeguard for Security | Safeguard for Privacy | Safeguard for Explainability | Safeguard for Traceability | Safeguard for Auditability | Safeguard to Avoid Bias | Stakeholder Participation | Transparency Measure | Service Conditions

3.3.1.1 Measure

Term: Measure
Definition: Any action deployed by an entity involved in a data processing activity, due to the existence of a legal obligation, to guarantee that the personal data involved is not adversely affected and, consequently, that no harm is caused to the data subject
Legal source: [GDPR Arts. 5.1.d, 6.4.e, 12-22, 25, 32, 43-46, 77-79, 82, 89.1]
Usage examples:

3.3.1.2 Technical and Organisational Measure

Term: Technical and Organisational Measure
Definition: Any measure to ensure data’s confidentiality, integrity, and availability
Instance of: dpv:TechnicalOrganisationalMeasure
Legal source: [GDPR Arts. 5.1.d, 6.4.e, 12-22, 25, 32, 43-46, 77-79, 82, 89.1]
Usage examples:

3.3.1.3 Safeguard for Trustworthiness

Term: Safeguard for Trustworthiness
Definition: Safeguards to ensure that the technology itself is (statistically) reliable
Comment: Alternative definition: Factor that conveys trust in a system over future uncertainties
Subclass of: dpv:Safeguard
Legal source: [bodo-21]
Ethical source: [AI HLEG]
Usage examples:

3.3.1.4 Safeguard for General Safety

Term: Safeguard for General Safety
Definition: Precautionary measures to prevent vulnerabilities such as data pollution, physical infrastructure failures, or cyber security attacks, to ensure the integrity and resilience of the AI system against potential attacks, and to protect public health against the accidental release of hazardous biological agents
Subclass of: dpv:SafeguardForTrustworthiness
Legal source: [GDPR Art. 32]
Ethical source: [AI HLEG], [SIENNA], [AI Turing]
Usage examples:

3.3.1.5 Safeguard for Security

Term: Safeguard for Security
Definition: Actions taken to maintain the integrity of the information that constitutes the system and to ensure that the system continuously remains functional and accessible to its authorised users
Subclass of: dpv:SafeguardForTrustworthiness
Legal source: [GDPR Art. 32]
Ethical source: [AI Turing]
Usage examples:

3.3.1.6 Safeguard for Privacy

Term: Safeguard for Privacy
Definition: Technical measures or tools for de-identification or anonymization of data
Subclass of: dpv:SafeguardForTrustworthiness
Ethical source: [ohm-09]
Usage examples:
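
A non-normative sketch showing how this safeguard can appear as the right operand of an ODRL constraint, mirroring the obligation refinement in Example 4.3B (ex:privacyMeasureConstraint is a hypothetical instance):

          ex:privacyMeasureConstraint a odrl:Constraint ;
              odrl:leftOperand ppop:Measure ;
              odrl:operator odrl:isA ;
              odrl:rightOperand ppop:SafeguardForPrivacy .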

3.3.1.7 Safeguard for Explainability

Term: Safeguard for Explainability
Definition: Safeguards used to provide a formal, logical, or semantic explanation to ensure that the rationale behind a specific decision or behaviour is communicated to (end-)users, and to make explicit and clarify the meaning of the content of the outcome
Subclass of: dpv:SafeguardForTrustworthiness
Legal source: [GDPR Arts. 12-14]
Ethical source: [AI HLEG], [AI Turing], [UNESCO]
Usage examples:

3.3.1.8 Safeguard for Traceability

Term: Safeguard for Traceability
Definition: Technical measures to trace all phases of algorithmic system design and development from data collection to selection, model building, and outcome of / decisions taken by algorithms
Subclass of: dpv:SafeguardForTrustworthiness
Legal source: [GDPR Art. 5]
Ethical source: [AI HLEG]
Usage examples:

3.3.1.9 Safeguard for Auditability

Term: Safeguard for Auditability
Definition: Safeguards to ensure that organizations and AI systems are consistent with relevant principles or norms; checks and balances in place to ensure that a system can be reviewed by independent third parties
Subclass of: dpv:SafeguardForTrustworthiness
Legal source: [GDPR Arts. 5, 24]
Ethical source: [capAI]
Usage examples:

3.3.1.10 Safeguard to Avoid Bias

Term: Safeguard to Avoid Bias
Definition: Safeguards to avoid underrepresenting or overrepresenting specific groups or samples in data collection
Subclass of: dpv:SafeguardForTrustworthiness
Ethical source: [dalessandro-17]
Usage examples:

3.3.1.11 Stakeholder Participation

Term: Stakeholder Participation
Definition: Diverse stakeholder participation is required to hear all voices and opinions throughout the production and use lifecycle of technologies
Subclass of: dpv:SafeguardToAvoidBias
Legal source: [GDPR Art. 22]
Ethical source: [AI Turing]
Usage examples:

3.3.1.12 Transparency Measure

Term: Transparency Measure
Definition: Measures to identify which information can or should be disclosed and the most appropriate way to make it available; and conditions of its accessibility
Subclass of: dpv:Measure
Ethical source: [turilli-floridi-09]
Usage examples:

3.3.1.13 Service Conditions

Term: Service Conditions
Definition: Any piece of information provided to natural persons regarding the activity where their data is involved
Subclass of: dpv:TransparencyMeasure
Legal source: [DGA Art. 11]
Usage examples:

3.4 Rights

3.4.1 Classes

Group Right | Right to Group Privacy | Right to Non-Discrimination | Right to Dignity | Right to Privacy | Right to Autonomy | Right to Security

3.4.1.1 Group Right

Term: Group Right
Definition: Rights held by a group itself rather than by its members severally
Subclass of: dpv:Right
Legal source: [GDPR Arts. 12-22]
Ethical source: [raz-88], [mcdonald-91]
Usage examples:

3.4.1.2 Right to Group Privacy

Term: Right To Group Privacy
Definition: A group has rights to privacy that are not reducible to the privacy of individuals who comprise that group
Comment: 2) Where information is shared (e.g. in family groups): absolutely private information about the group is shared only among insiders and not with outsiders; a joint individual right to privacy is required to keep the shared information private. 3) Where aggregated information is analysed (non-voluntary groups): this kind of group is identified by any feature (or combination of features) individuals have in common, represented as the result of applying an algorithm or used in an algorithmic decision; the right to privacy for these groups is defined as the right to reasonable inferences.
Subclass of: ppop:GroupRight
Ethical source: [floridi-17], [mantelero-17], [taylor-17], [sloot-15]
Usage examples:

3.4.1.3 Right to Non-Discrimination

Term: Right to Non-Discrimination
Definition: Right to be protected from the unfair and harmful use of group-related information
Subclass of: ppop:GroupRight
Ethical source: [muhlhoff-21], [vandyke-77]
Usage examples:
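
A non-normative sketch of an obligation targeting this right, mirroring Example 4.3B (ex:nonDiscriminationObligation is a hypothetical instance):

          ex:nonDiscriminationObligation a odrl:Duty ;
              odrl:target ppop:RightToNonDiscrimination ;
              odrl:action [
                  rdf:value ppop:realize ;
                  odrl:refinement [
                      odrl:leftOperand ppop:Measure ;
                      odrl:operator odrl:isA ;
                      odrl:rightOperand ppop:SafeguardForPrivacy
                  ]
              ] .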

3.4.1.4 Right to Dignity

Term: Right to Dignity
Definition: A group has a right to dignity that is not reducible to the dignity of individuals who comprise that group
Subclass of: ppop:GroupRight
Usage examples:

3.4.1.5 Right to Privacy

Term: Right to Privacy
Definition: Data subjects have a right to exercise control over their personal information
Comment: Individuals (end-users) should have access to and control over their personal information. 1) Where information is collected (individual right to privacy): the data subject should exercise control over their personal information.
Subclass of: dpv:DataSubjectRight
Legal source: [GDPR Arts. 12-22]
Ethical source: [roessler-05], [westin-67]
Usage examples:

3.4.1.6 Right to Autonomy

Term: Right to Autonomy
Definition: Data subjects have a right to reflect on, deliberate on, and justify decisions made in interaction with a system
Subclass of: dpv:DataSubjectRight
Legal source: [GDPR Arts. 12-22]
Ethical source: [AI Turing], [UNESCO], [OECD]
Usage examples:

3.4.1.7 Right to Security

Term: Right to Security
Definition: Data subjects are entitled to have their data processed, by both controllers and processors, in an appropriate manner that does not affect their freedom, including preventing inappropriate access to it
Subclass of: dpv:DataSubjectRight
Legal source: [GDPR Arts. 12-22]
Ethical source: [ECHR], [lundgren-18]
Usage examples:

3.5 Right Exemptions

3.5.1 Classes

Right Exemption | Right to be Informed Exemption | Data Subject Already Informed | Extraordinary Effort | Affects Processing | Legal Disclosure | Confidentiality Obligation | Expression of Opinion | Investigation Prevention | Third Party Rights | Confidentiality of Opinion | Statistical or Research Purpose | Legal Privilege | National Security | Defence | Public Security | Judicial Independence or Proceedings

3.5.1.1 Right Exemption

Term: Right Exemption
Definition: Organisations can prevent a data subject from exercising their rights where this is necessary and proportionate and allowed by the relevant regulation
Legal source: [GDPR Arts. 13.4, 14.5, 23]
Usage examples:

3.5.1.2 Right to be Informed Exemption

Term: Right to be Informed Exemption
Definition: Reasons why the data controller should not provide the data subject with the relevant information, according to Arts. 13 or 14 as applicable, about an intended data processing activity
Subclass of: ppop:RightExemption
Legal source: [GDPR Arts. 13.4, 14.5]
Usage examples:

3.5.1.3 Data Subject Already Informed

Term: Data Subject Already Informed
Definition: The data subject already has the relevant information about the intended data processing activity
Subclass of: ppop:RightToBeInformedExemption
Legal source: [GDPR Arts. 13.4, 14.5.a]
Usage examples:
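
One possible, non-normative way to record that this exemption applies, reusing the constraint pattern of Example 4.3B; the left operand ppop:RightExemption and the class IRI ppop:DataSubjectAlreadyInformed are assumed from the term labels:

          ex:alreadyInformedExemption a odrl:Constraint ;
              odrl:leftOperand ppop:RightExemption ;
              odrl:operator odrl:isA ;
              odrl:rightOperand ppop:DataSubjectAlreadyInformed .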

3.5.1.4 Extraordinary Effort

Term: Extraordinary Effort
Definition: Providing the data subject with the relevant information would imply an impossible or disproportionate effort for the data controller
Subclass of: ppop:RightToBeInformedExemption
Legal source: [GDPR Art. 14.5.b]
Usage examples:

3.5.1.5 Affects Processing

Term: Affects Processing
Definition: Providing the data subject with the relevant information would render impossible or seriously impair the processing
Subclass of: ppop:RightToBeInformedExemption
Legal source: [GDPR Art. 14.5.b]
Usage examples:

3.5.1.6 Legal Disclosure

Term: Legal Disclosure
Definition: The disclosure of the information due to the data subject is already laid down in Member State or Union law
Subclass of: ppop:RightToBeInformedExemption
Legal source: [GDPR Art. 14.5.c]
Usage examples:

3.5.1.7 Confidentiality Obligation

Term: Confidentiality Obligation
Definition: The data subject is not informed about a data processing activity due to the existence of a confidentiality obligation that covers the processing activity
Subclass of: ppop:RightToBeInformedExemption
Legal source: [GDPR Art. 14.5.d]
Usage examples:

3.5.1.8 Expression of Opinion

Term: Expression of Opinion
Definition: The personal data relating to the data subject consists of an expression of opinion about the data subject, given in confidence or on the understanding that it would be treated as confidential, by another person to a person who has a legitimate interest in receiving it
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.9 Investigation Prevention

Term: Investigation Prevention
Definition: There is an allegation being made against the data subject and it is felt that the disclosure of data in the context of the request could in some way hinder the investigation
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.10 Third Party Rights

Term: Third Party Rights
Definition: The data subject is only allowed to seek data relating to themselves. Where another person may be identifiable from the information, the third-party data should be redacted unless the third party has given consent
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.11 Confidentiality of Opinion

Term: Confidentiality of Opinion
Definition: There is a confidential opinion expressed about the data subject by a member of staff
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.12 Statistical or Research Purpose

Term: Statistical or Research Purpose
Definition: The request of the data subject can be refused if the exercise of rights would be likely to render impossible or seriously impair the achievement of archiving purposes or such restriction is necessary for the fulfilment of those purposes
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.13 Legal Privilege

Term: Legal Privilege
Definition: Documents containing personal data of the data subject that are exempt from disclosure in court proceedings are also exempt in relation to a Subject Access Request; this applies to both legal advice privilege and litigation privilege
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.14 National Security

Term: National Security
Definition: The exercise of the right by the data subject can be refused to safeguard national security where accepting the request of the right poses a threat to it
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.15 Defence

Term: Defence
Definition: The exercise of the right by the data subject can be refused to safeguard defence where accepting the request of the right poses a threat to it
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.16 Public Security

Term: Public Security
Definition: The exercise of the right by the data subject can be refused to safeguard public security where accepting the request of the right poses a threat to it
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.5.1.17 Judicial Independence or Proceedings

Term: Judicial Independence or Proceedings
Definition: The exercise of the right by the data subject can be refused to safeguard judicial independence or proceedings where accepting the request of the right poses a threat to it
Subclass of: ppop:RightExemption
Legal source: [GDPR Art. 23.1.a - 23.1.j]
Usage examples:

3.6 Duties

3.6.1 Classes

Organisation Duty | Explainability | Traceability | Auditability | Accuracy

3.6.1.1 Organisation Duty

Term: Organisation Duty
Definition: Moral or legal duties of organizations corresponding to the rights of the data subject
Legal source: [GDPR Art. 24]
Ethical source: [HCR]
Usage examples:

3.6.1.2 Explainability

Term: Explainability
Definition: Organisations have a duty to explain and justify what is happening and why it is happening based on known facts and logical steps
Subclass of: ppop:OrganisationDuty
Legal source: [GDPR Art. 12]
Ethical source: [gall-21]
Usage examples:

3.6.1.3 Traceability

Term: Traceability
Definition: Organisations have a duty to assign responsibilities and document decisions to enable follow-up
Subclass of: ppop:OrganisationDuty
Legal source: [GDPR Art. 5]
Ethical source: [capAI]
Usage examples:

3.6.1.4 Auditability

Term: Auditability
Definition: Organisations have a duty to operationalise conformity assessment (to mitigate objective risks) and to ensure independent third party review (to mitigate subjective risks)
Subclass of: ppop:OrganisationDuty
Legal source: [GDPR Art. 24]
Ethical source: [capAI]
Usage examples:

3.6.1.5 Accuracy

Term: Accuracy
Definition: Ensure that systems generate correct and up-to-date outputs/outcomes
Subclass of: ppop:OrganisationDuty
Legal source: [GDPR Art. 5]
Ethical source: [AI Turing]
Usage examples:

4. Examples

4.1 Family Pod

A family with two parents and two children allows the processing of their medical health data for the purpose of research and development and has a data sharing service provider as a data intermediary.

Example 4.1
        
          ex:familyPod a odrl:Policy ;
              odrl:profile ppop:, oac: ;
              odrl:uid <https://pod-provider/familyA/policy1> ;
              dct:issued "2022-02-22" ;
              odrl:permission [
                  odrl:assigner ex:familyPool ;
                  odrl:target oac:MedicalHealth ;
                  odrl:action oac:Read, oac:Write ;
                  odrl:constraint ex:purpose
              ] .

          ex:familyPool a ppop:Group ;
              ppop:hasDataIntermediary ex:dataIntermediary ;
              ppop:hasVoluntaryMembership ex:parent1, ex:parent2 ;
              ppop:hasNonVoluntaryMembership ex:child1, ex:child2 .
          ex:dataIntermediary a ppop:DataSharingServiceProvider .
          ex:parent1 a ppop:DataHolder, dpv:DataSubject ;
              ppop:isDataHolderFor ex:child1, ex:child2, ex:parent1 .
          ex:parent2 a ppop:DataHolder, dpv:DataSubject ;
              ppop:isDataHolderFor ex:child1, ex:child2, ex:parent2 .
          ex:child1 a dpv:Child .
          ex:child2 a dpv:Child .

          ex:purpose a odrl:Constraint ;
              odrl:leftOperand oac:Purpose ;
              odrl:operator odrl:isA ;
              odrl:rightOperand dpv:ResearchAndDevelopment .
        
      

4.2 ...

...

Example 4.2
        

        
      

4.3 Asthma type classification

In order to develop a model to identify different phenotypes of asthma that will lead to the provision of appropriate medical treatment, a server in a hospital processes encrypted patient records stored in individuals' Pods in conjunction with anonymised patient records from public health care systems. A validated model is developed in which a pattern between two variables, sex and BMI, is discovered and labelled as Type A asthma. The developed model is used to treat patients differently based on the type of asthma that has been identified. The discovered knowledge may result in discriminatory requests for the care of obese females.

To realize the task of the AI system, i.e., the prediction of the asthma phenotype, a new sample is sent to the AI model to perform inference. The patient whose data is sent to the AI model has already been informed that an AI system is being used to make a prediction based on what it was trained on. Following the transmission of data to the model, the result, which is the label of the identified type of asthma, is sent to the diagnostic workstation for evaluation and assessment in terms of reliability and accuracy. Finally, the patient is aware of the type of asthma s/he has (because the clinician explains the outcome of the AI system in simple language while complying with transparency measures), and the clinician takes the appropriate actions to treat the patient.

Example 4.3A presents patient A's data sharing policy related to their health record for the provision of asthma treatment, while example 4.3B presents the hospital's privacy policy regarding this particular service.

Example 4.3A
        
          ex:patientA-policy a odrl:Policy ;
              odrl:profile ppop:, oac: ;
              odrl:uid <https://pod-provider/patientA/policy3> ;
              dct:issued "2022-01-13" ;
              odrl:assigner ex:patientA ;
              odrl:target oac:HealthRecord ;
              odrl:permission [ odrl:action oac:Use ; odrl:constraint ex:purpose ] ;
              odrl:permission [ odrl:action oac:Store ; odrl:constraint ex:storage ] ;
              odrl:obligation [
                  odrl:action ex:discloseCodeImplementer ;
                  ppop:accountableParty ex:codeImplementer ] .

          ex:patientA a ppop:DataHolder, oac:DataSubject .

          ex:AsthmaTreatment a dpv:Purpose ; skos:broaderTransitive dpv:ServiceProvision ;
              rdfs:label "Provision of medical asthma treatment" .
          ex:purpose odrl:leftOperand oac:Purpose ; 
              odrl:operator odrl:isA ;
              odrl:rightOperand ex:AsthmaTreatment .

          ex:storage odrl:leftOperand ppop:ProcessingContext ; 
              odrl:operator odrl:eq ;
              odrl:rightOperand [ a ppop:PersonalDataStore ; 
                  dpv:hasLocation <https://pod-provider/patientA/> ] .

          ex:discloseCodeImplementer a dpv:Processing ; skos:broaderTransitive dpv:Disclose ;
              rdfs:label "Disclose the implementer of the predictive model" .
          ex:codeImplementer a ppop:DataUser .
        
      
Example 4.3B
        
          ex:pp-usecase-3 a odrl:Privacy ;
              odrl:profile ppop:, oac: ;
              odrl:uid <https://hospitala.com/policy3> ;
              dct:issued "2022-01-15" ;
              odrl:assigner ex:controller ;
              odrl:constraint ex:usedTechnology ;
              odrl:permission [
                  ppop:accountableParty ex:controller ;
                  odrl:assignee ex:patients ;
                  odrl:target oac:HealthRecord ;
                  odrl:action [
                      rdf:value oac:Use ;
                      odrl:refinement [
                          odrl:and ex:purpose, ex:legalBasis, ex:measure
                      ]
                  ]
              ] ;
              odrl:permission [
                  odrl:target ex:anonymisedHealthRecords ;
                  odrl:action [
                      rdf:value oac:Use ;
                      odrl:refinement ex:purpose
                  ]
              ] ;
              odrl:obligation [
                  odrl:target ppop:RightToNonDiscrimination ;
                  odrl:action [
                      rdf:value ppop:realize ;
                      odrl:refinement [
                          odrl:leftOperand ppop:Measure ;
                          odrl:operator odrl:isA ;
                          odrl:rightOperand ppop:SafeguardForPrivacy
                      ]
                  ] 
              ] .

          ex:controller a oac:DataController, ppop:DataUser ;
              dpv:hasName "Hospital A" ;
              dpv:hasAddress "Hospital Street, City, Country" ;
              dpv:hasContact "contact@hospitala.com" ;
              dpv:hasDataProtectionOfficer [
                  a dpv:DataProtectionOfficer ;
                  dpv:hasContact "dpo@hospitala.com" ;
              ] .

          ex:Server a dpv:Technology ; 
              skos:broaderTransitive dpv-tech:DataStorageTechnology, dpv-tech:DataManagementTechnology ;
              rdfs:label "Server used to store and develop models with patients' data" .
          ex:usedTechnology a odrl:Constraint ;
              odrl:leftOperand ppop:Technology ;
              odrl:operator odrl:isA ;
              odrl:rightOperand ex:Server .

          ex:patients a ppop:Group ;
              ppop:hasNonVoluntaryMembership ex:patientA, ex:patientB, ex:patientC .

          ex:purpose a odrl:Constraint ;
              odrl:leftOperand oac:Purpose ;
              odrl:operator odrl:isA ;
              odrl:rightOperand ex:AsthmaTreatment .

          ex:legalBasis a odrl:Constraint ;
              odrl:leftOperand ppop:LegalBasis ;
              odrl:operator odrl:isA ;
              odrl:rightOperand ex:consent .
          ex:consent a dpv-gdpr:A9-2-a ;
              dpv:hasWithdrawalMethod "withdraw@hospitala.com" .

          ex:measure a odrl:Constraint ;
              odrl:leftOperand ppop:Measure ;
              odrl:operator odrl:isA ;
              odrl:rightOperand dpv:Encryption .

          ex:AnonymisedHealthRecord a dpv:AnonymisedData, dpv:HealthRecord ;
              rdfs:label "Anonymised health record data" .
          ex:anonymisedHealthRecords a ex:AnonymisedHealthRecord ;
              ppop:collectedfromOtherDataSources "public health care systems" .
        
      

A. References

A.1 Ethical sources

[AI HLEG]
Content and Technology (European Commission) Directorate-General for Communications Networks (2020). The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment. Publications Office of the European Union, LU. URL: https://data.europa.eu/doi/10.2759/002360
[AI Turing]
David Leslie (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. DOI: 10.5281/zenodo.3240529
[capAI]
Luciano Floridi, Matthias Holweg, Mariarosaria Taddeo, Javier Amaya Silva, Jakob Mökander, and Yuni Wen (2022). capAI - A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act. Social Science Research Network, Rochester, NY. DOI: 10.2139/ssrn.4064091
[copp-84]
David Copp (1984). What Collectives Are: Agency, Individualism and Legal Theory. Dialogue 23(2), 249–269. DOI: 10.1017/S0012217300044899
[dalessandro-17]
Brian d’Alessandro, Cathy O’Neil, and Tom LaGatta (2017). Conscientious Classification: A Data Scientist’s Guide to Discrimination-Aware Classification. Big Data 5(2), 120–134. DOI: 10.1089/big.2016.0048
[ECHR]
Council of Europe (1950). Convention for the Protection of Human Rights and Fundamental Freedoms. Council of Europe Treaty Series 005, Strasbourg. URL: https://www.echr.coe.int/documents/convention_eng.pdf
[floridi-17]
Luciano Floridi (2017). Group Privacy: A Defence and an Interpretation. In Group Privacy. Springer, Cham, 83–100. DOI: 10.1007/978-3-319-46608-8_5
[french-84]
Peter A. French (1984). Collective and Corporate Responsibility. Columbia University Press, New York.
[gall-21]
Richard Gall (2021). Machine Learning Explainability vs Interpretability: Two concepts that could help restore trust in AI. KDnuggets. URL: https://www.kdnuggets.com/machine-learning-explainability-vs-interpretability-two-concepts-that-could-help-restore-trust-in-ai.html/
[HCR]
Heidi M. Hurd and Michael S. Moore (2018). The Hohfeldian Analysis of Rights. The American Journal of Jurisprudence 63(2), 295–354. DOI: 10.1093/ajj/auy015
[lundgren-18]
Björn Lundgren (2018). Information, Security, Privacy, and Anonymity: Definitional and Conceptual Issues. KTH Royal Institute of Technology, Stockholm, Sweden. URL: https://www.diva-portal.org/smash/get/diva2:1200793/FULLTEXT01.pdf
[mantelero-17]
Alessandro Mantelero (2017). From Group Privacy to Collective Privacy: Towards a New Dimension of Privacy and Data Protection in the Big Data Era. In Group Privacy, Bart van der Sloot, Luciano Floridi and Linnet Taylor (eds.). Springer Verlag. DOI: 10.1007/978-3-319-46608-8_8
[muhlhoff-21]
Rainer Mühlhoff (2021). Predictive privacy: towards an applied ethics of data analytics. Ethics and Information Technology 23(4), 675–690. DOI: 10.1007/s10676-021-09606-x
[mcdonald-91]
Michael McDonald (1991). Should Communities Have Rights? Reflections on Liberal Individualism. Canadian Journal of Law & Jurisprudence 4(2), 217–237. DOI: 10.1017/S0841820900002915
[newman-04]
Dwight G. Newman (2004). Collective Interests and Collective Rights. The American Journal of Jurisprudence 49(1), 127–163. DOI: 10.1093/ajj/49.1.127
[OECD]
OECD (2019). Recommendation of the Council on AI, OECD/LEGAL/0449. URL: https://www.fsmb.org/siteassets/artificial-intelligence/pdfs/oecd-recommendation-on-ai-en.pdf
[ohm-09]
Paul Ohm (2009). Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review 57(6). URL: http://www.uclalawreview.org/?p=1353
[raz-88]
Joseph Raz (1988). The Morality of Freedom. Oxford University Press, Oxford. DOI: 10.1093/0198248075.001.0001
[roessler-05]
Beate Roessler (2005). The Value of Privacy. Wiley. URL: https://www.wiley.com/en-us/The+Value+of+Privacy-p-9780745631103
[SIENNA]
Philip Jansen, Philip Brey, Alice Fox, Jonne Mass, Bradley Hilas, Nils Wagner, Patrick Smith, Isaac Oluoch, Laura Lamers, Hero van Gein, Anais Resseguier, Rowena Rodrigues, David Wright, and David Douglas (2019). SIENNA D4.4: Ethical Analysis of AI and Robotics Technologies. DOI: 10.5281/zenodo.4068082
[sloot-15]
Bart van der Sloot (2015). How to assess privacy violations in the age of big data? Analysing the three different tests developed by the ECtHR and adding for a fourth one. Information and Communications Technology Law 24(1), 74–103. DOI: 10.1080/13600834.2015.1009714
[taylor-17]
Linnet Taylor, Luciano Floridi, and Bart van der Sloot (2017). Group Privacy: New Challenges of Data Technologies. Springer International Publishing, Cham. DOI: 10.1007/978-3-319-46608-8
[turilli-floridi-09]
Matteo Turilli and Luciano Floridi (2009). The ethics of information transparency. Ethics and Information Technology 11, 105–112. DOI: 10.1007/s10676-009-9187-9
[UNESCO]
UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence. URL: https://unesdoc.unesco.org/ark:/48223/pf0000380455
[vandyke-77]
Vernon van Dyke (1977). The Individual, the State, and Ethnic Communities in Political Theory. World Politics 29(3), 343–369. DOI: 10.2307/2010001
[westin-67]
Alan F. Westin (1967). Privacy And Freedom. Washington and Lee Law Review 25(1), 166–170. URL: https://scholarlycommons.law.wlu.edu/wlulr/vol25/iss1/20

A.3 Vocabularies

[acl]
Basic Access Control ontology . 2009. URL: http://www.w3.org/ns/auth/acl#
[cert]
The Cert Ontology 1.0 . Henry Story. 13 November 2008. URL: http://www.w3.org/ns/auth/cert#
[dct]
DCMI Metadata Terms . DCMI Usage Board. DCMI. 20 January 2020. DCMI Recommendation. URL: https://www.dublincore.org/specifications/dublin-core/dcmi-terms/
[dpv]
Data Privacy Vocabulary (DPV) version 0.2 . Axel Polleres; Beatriz Esteves; Bert Bos; Bud Bruegger; Elmar Kiesling; Eva Schlehahn; Fajar J. Ekaputra; Georg P. Krog; Harshvardhan J. Pandit; Javier D. Fernández; Mark Lizar; Paul Ryan; Piero Bonatti; Ramisa Gachpaz Hamed; Rigo Wenning; Rob Brennan; Simon Steyskal. 21 June 2021. URL: http://www.w3.org/ns/dpv#
[ns1]
VANN: A vocabulary for annotating vocabulary descriptions . Ian Davis. 1 April 2005. URL: http://purl.org/vocab/vann/
[odrl-model]
ODRL Information Model 2.2 . Renato Iannella; Serena Villata. W3C. 15 February 2018. W3C Recommendation. URL: https://www.w3.org/TR/odrl-model/
[odrl-vocab]
ODRL Vocabulary & Expression 2.2 . Renato Iannella; Michael Steidl; Stuart Myles; Víctor Rodríguez-Doncel. W3C. 15 February 2018. W3C Recommendation. URL: https://www.w3.org/TR/odrl-vocab/
[owl2-overview]
OWL 2 Web Ontology Language Document Overview (Second Edition) . W3C OWL Working Group. W3C. 11 December 2012. W3C Recommendation. URL: https://www.w3.org/TR/owl2-overview/
[rdf-schema]
RDF Schema 1.1 . Dan Brickley; Ramanathan Guha. W3C. 25 February 2014. W3C Recommendation. URL: https://www.w3.org/TR/rdf-schema/
[rdf11-concepts]
RDF 1.1 Concepts and Abstract Syntax . Richard Cyganiak; David Wood; Markus Lanthaler. W3C. 25 February 2014. W3C Recommendation. URL: https://www.w3.org/TR/rdf11-concepts/
[skos]
SKOS Simple Knowledge Organization System Reference . Alistair Miles; Sean Bechhofer. W3C. 18 August 2009. W3C Recommendation. URL: https://www.w3.org/TR/skos-reference/
[xsd]
W3C XML Schema Definition Language (XSD) 1.1 Part 2: Datatypes . David Peterson; Sandy Gao; Ashok Malhotra; Michael Sperberg-McQueen; Henry Thompson; Paul V. Biron et al. W3C. 5 April 2012. W3C Recommendation. URL: https://www.w3.org/TR/xmlschema11-2/
[solid-protocol]
Solid Protocol . Sarven Capadisli; Tim Berners-Lee; Ruben Verborgh; Kjetil Kjernsmo; Justin Bingham; Dmitri Zagidulin. 7 July 2021. URL: https://solidproject.org/TR/protocol
[wac]
Web Access Control . Sarven Capadisli. 11 July 2021. URL: https://solid.github.io/web-access-control-spec/