New research paper: “Trust, but Verify: Informed Consent, AI-Technologies, and Public Health Emergencies”
When handling personal data – whether for research or for decision-support systems – there are clearly important considerations, data protection chief among them. Since the adoption of the GDPR across Europe, the rights of data subjects and the responsibilities of those processing their personal data have been clear. In BigMedylitics, we are particularly aware of regulatory requirements, since the data scientists in the project handle very sensitive data (special category personal data): health data from patients across Europe.
For a project like this, though, that’s not enough. Yes, we are obliged to safeguard personal data. But there is a more significant imperative here. As we explore how best to model and capture the predictive power of the large datasets we have access to, we have to think first and foremost about whether our modelling and predictions are ethically sound. In other words, do our big data approach and the analytics we perform treat patients with respect, ultimately do them good while avoiding any harm, and are the results of our work equitably accessible across the community?
It doesn’t stop there either. The work we do in BigMedylitics is, in the first instance, to support clinicians in caring better for their patients. Data scientists expect clinicians to trust them, or rather the technology they produce. And, of course, patients trust clinicians to treat them and support their health and well-being. How do we ensure that this trust relationship with patients is not compromised? And how do we help clinicians understand what the technologies offer?
With all this in mind, Trust, but Verify: Informed Consent, AI-Technologies, and Public Health Emergencies explores the different aspects of big data and advanced technologies as they are deployed in typical contexts, especially healthcare. The paper starts by looking at informed consent and how the term’s meaning shifts depending on the context. It then discusses issues around human acceptance and understanding of technology. Bringing these two aspects together – the confusion over informed consent and the issues around technology acceptance – it proposes a trust-based understanding of advanced technology use across a range of relevant scenarios.