Artificial intelligence (AI)-powered tools and technologies are creating novel efficiencies across industries, resulting in both big and small changes to the status quo. Because AI is a data-driven technology, its ethical use is intertwined with questions about data privacy – including when and how sensitive personal data is collected, used, stored and shared. Especially for highly regulated industries like insurance, issues of data privacy are likely to be at the center of the sector’s adoption of AI.
AI is a broad category, generally defined as a machine performing a task that would otherwise require human intelligence. Machine learning is a type of AI in which a computer algorithm is trained on data to detect patterns and make predictions. Generative AI tools, like ChatGPT, are built on a machine learning system known as a large language model (LLM) and can generate new content similar to the data they were trained on. LLMs produce natural-sounding language after training on enormous data sets of text. Deep learning is a type of machine learning that goes further, using layered neural networks to process complex data sets drawn from a wider range of sources.
Some AI technologies have already been integrated across many industries, including insurance, for years. AI functions used in insurance today include automation of repetitive tasks (e.g. natural language processing in claims processing and fraud detection) and augmented decision-making (e.g. supervised learning in underwriting and risk assessment).
Today, organizations adopting AI must adhere to legal requirements and establish governance around data privacy, especially as regulatory agencies increase their scrutiny. That means avoiding violations of privacy and data protection laws, preventing discrimination or bias, and refraining from unfair practices.
In the EU, organizations must also adhere to the General Data Protection Regulation (GDPR), which took effect in 2018 and provides individuals with the right not to be subject to a decision based solely on automated processing.
Consumers understand how valuable their personal data is, and many are already sensitive to the modern push for more and more data collection. They are also worried about their data being hacked, sold, or otherwise misused. To ensure compliance with the GDPR and to meet basic ethical standards about the use of personal data, insurers adopting AI technologies should be transparent about their use, providing consumers with clear information about any personal data being collected, processed and stored (and the legal basis for doing so).
Other ethical considerations insurers should apply around data privacy and AI include:
- Transparently disclosing how AI systems will use personal data, including consumers’ rights to access, delete, change or otherwise restrict the use of their personal data
- Ensuring AI is compliant with relevant data protection laws, permissions and regulations that dictate how personal data is collected, processed, stored and shared
- Introducing adequate security measures to make sure personal data is protected from unauthorized access or other misuse
- Creating processes for handling requests for access to personal data and handling complaints
- Developing broad governance frameworks to help ensure AI systems are used according to legal standards and guidelines
- Determining who ultimately takes responsibility for AI outputs because of the potential personal liability implications surrounding data privacy violations
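As one concrete illustration of the security measures mentioned above, a common technique is pseudonymization: replacing a direct identifier with a keyed hash so records can still be linked for analysis without exposing the raw ID. A minimal sketch using Python's standard-library `hmac` and `hashlib` modules (the `pseudonymize` function, the ID format, and the key name are hypothetical):

```python
import hashlib
import hmac

# Placeholder key for illustration -- in practice this would live in a
# secrets manager, never in source code.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(policy_id: str) -> str:
    """Return a stable, non-reversible token for a policyholder ID."""
    return hmac.new(SECRET_KEY, policy_id.encode(), hashlib.sha256).hexdigest()

# The same input always maps to the same token, so records stay linkable,
# but the raw identifier never appears in the pseudonymized data set.
token = pseudonymize("POL-12345")
```

Pseudonymized data can still count as personal data under the GDPR if it can be re-linked to an individual, so measures like this supplement, rather than replace, access controls and encryption.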
It should also be noted that not all AI is problematic from a data privacy standpoint. AI models and tools are already frequently hosted in approved cloud infrastructures. AI tools are also being used to improve many aspects of cybersecurity software, including monitoring against data theft. The use of synthetic data – data created by AI-powered algorithms that contains no personally identifiable information – is also likely to become a key part of ethical AI development.
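The synthetic-data idea can be sketched simply: values are drawn from plausible distributions rather than copied from real customers, so no generated record corresponds to an actual person. The field names and ranges below are hypothetical, and production-grade synthetic data generators are far more sophisticated (often trained to mimic the statistics of a real data set):

```python
import random

# Fixed seed so the generated data set is reproducible.
rng = random.Random(42)

def synthetic_policyholder() -> dict:
    """Generate one synthetic record containing no real PII."""
    return {
        "age": rng.randint(18, 85),
        "region": rng.choice(["north", "south", "east", "west"]),
        "annual_premium": round(rng.uniform(300.0, 2500.0), 2),
    }

# A small synthetic data set, safe to share with model developers.
dataset = [synthetic_policyholder() for _ in range(100)]
```

Because the records are sampled rather than sourced from customers, a data set like this can be used for model development and testing without triggering the consent and disclosure obligations that attach to real personal data.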
And though current data privacy laws offer some protections for consumers, the pace at which new AI-powered technologies are being developed (and their transformative potential) means it might not be enough for organizations to wait for guidance from regulatory bodies to make the right choices when it comes to data privacy. Innovation, therefore, should be balanced by some measure of self-regulation, ensuring that innovation doesn’t come at the cost of privacy – and that data protection doesn’t stall modernization. Taking data protection seriously will help build trust, gain buy-in, and secure the adoption of valuable AI in a way that benefits both insurers and the customers they serve.