AI Regulation in the Financial Sector


by Schellenberg Wittmer

Key Take-aways

  1. Switzerland has not yet introduced an AI-specific regulatory framework. Financial institutions utilizing AI must comply with the general legal framework and FINMA’s supervisory expectations.
  2. The EU AI Act introduces new regulations on AI systems, affecting not only EU entities, but also Swiss companies that supply AI systems to the EU or deploy systems whose output is used within the EU.
  3. Financial institutions and insurance companies must adopt AI governance frameworks and stay informed about regulatory developments to ensure compliance.

1 Introduction

Artificial intelligence (AI) has become a key driver of innovation in the financial industry, where it is employed in a wide range of use cases, including fraud detection, risk management, cash flow forecasting, process automation, credit risk analysis, customer relationship management, trading algorithms, IT development and information analysis. While recent developments in generative AI offer considerable opportunities, they also present risks. As a result, financial regulators worldwide are intensifying their supervision of AI applications used by financial institutions. This newsletter provides a high-level overview of the current state of the Swiss regulatory framework applicable to financial institutions using AI applications, as well as the EU AI Act, which may affect financial institutions that supply AI systems to EU-based entities or deploy AI systems whose output is used in the EU.

2 Swiss Legislative Framework

Switzerland has not yet adopted a comprehensive AI-specific regulatory framework. In 2020, the Federal Council adopted the Guidelines on Artificial Intelligence for the Confederation, which apply only to the Federal Administration. Regarding the private sector, a report by the State Secretariat for Education, Research and Innovation (SERI) to the Federal Council, published in 2019, concluded that there was no immediate need to introduce Swiss legislation dealing with AI.


However, in 2023, recognizing the growing global momentum toward AI regulation, the Federal Council tasked the Federal Department of the Environment, Transport, Energy and Communications (DETEC) with drafting a report on possible regulatory approaches by the end of 2024. This report will serve as the foundation for a potential Swiss AI regulatory framework proposal in 2025.

In the interim, and in line with Switzerland's principle-based and technology-neutral approach, Swiss businesses developing or deploying AI applications must comply with the general legal framework, including the Data Protection Act (FADP) (see 2.1 below), personality rights under Swiss law, the relevant intellectual property laws (notably the Copyright Act (CopA)) and the Unfair Competition Act (UCA). Additionally, Swiss financial institutions utilizing AI must fulfil the supervisory expectations of FINMA (see 2.2 below) and comply with other relevant regulations, such as the Swiss bank secrecy provisions of the Banking Act, FINMA Circular 2018/3 (Outsourcing) and FINMA Circular 2023/1 (Operational Risks and Resilience).

2.1 Data Protection Act

In November 2023, the Federal Data Protection and Information Commissioner (FDPIC) issued a statement emphasizing that Swiss data protection legislation applies directly to AI-driven data processing. The statement reminded manufacturers, providers and deployers of AI applications that they must ensure transparency regarding the purpose, functionality and data sources of AI-driven data processing activities and must safeguard the highest possible degree of digital self-determination for data subjects.

The requirements of Swiss data protection legislation apply to most AI applications used by financial institutions. Financial institutions must in particular assess whether an AI application generates automated individual decisions within the meaning of Article 21 FADP. This assessment is particularly relevant for AI applications used in credit scoring, digital onboarding, customer segmentation or the filtering of job applications.

The FDPIC also pointed out that certain AI applications require a data protection impact assessment pursuant to Article 22 FADP. This applies particularly where (i) large volumes of sensitive personal data are processed, (ii) personal data is systematically collected for AI processing (other than for statistical or non-personal purposes) or (iii) the AI application's output has significant consequences for the data subjects concerned.

2.2 FINMA's Supervisory Expectations

FINMA has been monitoring the development and use of AI for several years. In 2021 and 2022, it conducted surveys on the use of AI in the insurance, banking and asset management sectors, established an inventory of areas in which AI applications were used and set up a specialized AI service. In its Risk Monitor 2023, FINMA outlined its supervisory expectations for financial institutions using AI, focusing on four critical areas:

– Governance and Responsibility: Financial institutions must clearly define roles and responsibilities for AI-related decisions, ensuring that accountability remains with human actors, not the AI systems themselves. This is particularly important when AI errors may go unnoticed, where processes become overly complex, or where there is a lack of expertise within the institution.

– Robustness and Reliability: AI systems must be tested for accuracy and reliability, especially considering the risks of "drift" in self-learning models. These systems should undergo rigorous testing, particularly in risk management areas. AI systems also pose cybersecurity risks, which must be addressed.

– Transparency and Explainability: Institutions must ensure that AI systems, especially those affecting customer outcomes, are transparent and that decisions made by these systems can be understood and explained by human operators.

– Equal Treatment: AI systems used in financial services, such as credit scoring, must avoid biases and discriminatory practices. FINMA requires institutions to monitor their AI systems to prevent any form of unequal treatment.

By publishing these expectations, FINMA is positioning itself at the forefront of a trend among financial market regulators, which are increasingly issuing guidance on the use of AI through whitepapers, guidelines or statements. Recent examples include the statement of the European Securities and Markets Authority (ESMA) offering initial guidance to firms using AI when providing investment services to retail clients (May 2024) and the expert article by the German Federal Financial Supervisory Authority (BaFin)

Read full article here.

Authors

Grégoire Tribolet, Stéphanie Chuffart-Finsterwald, Roland Mathys, Olivier Favre
