INSIGHTS Opinion
Towards successful AI applications in the financial industry
Technological advancements within Artificial Intelligence (AI) continue to play a major role in shaping global competitiveness and productivity as AI becomes more mainstream. AI's presence in pressing global issues, such as privacy and Covid-19 testing, has led to increased focus from policy makers and the public. Examples include:
- The European Union (EU) aims to establish a reputation for “trustworthy AI” that is “made in Europe” with a strong emphasis on ethical and human-centric approaches, aligned with human rights values and democratic principles.
- In December 2020 the UK House of Lords Liaison Committee called on the government to better coordinate its AI policy and the use of data and technology by national and local governments.
- UK regulators have set up the first meeting of the AI Public Private Forum, with the aim of assessing the impact of AI on financial services.
- The EU adopted a new digital finance law last September.
- The SEC’s Strategic Hub for Innovation and Financial Technology (FinHub) is currently holding virtual meetings with industry representatives to better understand how AI is used to, for example, build new products, offer new services, create efficiencies, and enhance regulatory compliance.
Due to AI’s self-learning nature, challenges around model explainability (i.e., the ability to discern a model’s reasoning), data integrity and privacy need to be addressed for AI to be applied successfully. Arabesque AI offers the following solutions for consideration, in an effort to narrow the knowledge gap between practitioners, policy makers, and the public.
- Model Risk Management:
Challenge: Model explainability presents a unique challenge for AI applications, particularly in the space of Deep Learning, where models capture highly non-linear dependencies. Complex models of this kind can operate as black boxes, making it difficult to explain their reasoning.
Proposed solution: In our opinion, a comprehensive model risk framework should cover model development, evaluation and validation, as well as ongoing testing and monitoring, paired with transparency about the research conducted to address explainability. AI models learn the underlying distributions of the data used to train them; particular focus should therefore be placed on tail-end events (e.g., unprecedented market conditions) in order to assess how well these models generalise to data not seen, or under-represented, in the training process.
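As a minimal sketch of this kind of tail-end check, the example below splits a validation set into normal and extreme regimes and compares the model's error across the two. The model, features and volatility-based regime definition are hypothetical placeholders for illustration, not a description of our production process.

```python
# Illustrative sketch: compare model error on "normal" vs. tail-end market regimes.
# The data, model and volatility threshold are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-in for historical features and targets (e.g., daily factor data).
X = rng.normal(size=(2000, 5))
y = X[:, 0] * 0.5 + rng.normal(scale=0.1, size=2000)
volatility = np.abs(rng.normal(size=2000))  # proxy for market turbulence

# Train on the first 1500 observations, validate on the rest.
model = GradientBoostingRegressor().fit(X[:1500], y[:1500])
X_val, y_val, vol_val = X[1500:], y[1500:], volatility[1500:]

# Split the validation set into "normal" and "tail" regimes by volatility.
tail = vol_val > np.quantile(vol_val, 0.95)
err_normal = mean_absolute_error(y_val[~tail], model.predict(X_val[~tail]))
err_tail = mean_absolute_error(y_val[tail], model.predict(X_val[tail]))

# A large gap suggests the model generalises poorly to under-represented conditions.
print(f"MAE (normal regime): {err_normal:.4f}")
print(f"MAE (tail regime):   {err_tail:.4f}")
```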
- Data Integrity:
Challenge: Last month’s newsletter highlighted issues around data bias in AI applications, which require large amounts of data to train and validate.
Proposed solution: It is our belief that continuous monitoring and evaluation of input data are imperative. Models trained to detect data bias or abnormal data can help build more robust AI pipelines. Furthermore, recent studies report that more diverse teams of researchers are better equipped to spot data bias, highlighting the need for some level of human oversight of AI applications.
We are currently observing a trend towards incorporating alternative data sources, e.g., social media, news or geo-spatial data, into AI pipelines. Data source verification is a vital step in the data pipeline to guarantee reliability, especially as we are seeing a rise in media manipulation campaigns. We believe that a robust data ingestion, verification and storage pipeline will help safeguard AI applications and lead to more reliable deployments.
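As a minimal sketch of the input-data monitoring described above, the example below fits an isolation forest on trusted reference data and flags abnormal records in an incoming batch. The data, feature layout and contamination level are hypothetical and purely illustrative.

```python
# Illustrative sketch: flag abnormal records in an incoming data batch
# before they reach downstream AI models. Thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Reference batch of "known good" input data (e.g., cleaned historical features).
reference = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))

# New incoming batch, with a few corrupted rows injected for illustration.
incoming = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
incoming[:5] += 10.0  # simulate mis-scaled or manipulated values

# Fit the detector on trusted data, then score the new batch (-1 = abnormal).
detector = IsolationForest(contamination=0.01, random_state=0).fit(reference)
flags = detector.predict(incoming)

abnormal_rows = np.where(flags == -1)[0]
print(f"Flagged {len(abnormal_rows)} of {len(incoming)} incoming rows for review:")
print(abnormal_rows)
```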
- Privacy:
Challenge: Recent publications highlight how AI can make use of customer data, such as credit card transactions. As such data sources are incorporated into AI applications, data privacy and security considerations need to be addressed.
Proposed solution: In our opinion, the protection of personal data is a key responsibility of AI adopters and paramount to building trust in the technology. Various techniques exist within AI that safeguard personal data, and these should be incorporated at all stages of model development, from idea generation to the public release of the product itself. Relevant legislation includes the European General Data Protection Regulation (GDPR), which mandates strict conditions for the use of personally identifiable data.
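One example of such a technique is differential privacy, in which calibrated noise is added to aggregate statistics so that no individual record can be recovered from the output. The sketch below applies the Laplace mechanism to a synthetic set of customer transaction amounts; the values and privacy budget are hypothetical and chosen for illustration only.

```python
# Illustrative sketch: a differentially private mean of customer transaction
# amounts using the Laplace mechanism. Values and epsilon are hypothetical.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-customer spend, clipped to a known range [0, 500].
spend = np.clip(rng.gamma(shape=2.0, scale=50.0, size=10_000), 0, 500)

epsilon = 1.0                    # privacy budget (smaller = more private)
sensitivity = 500 / len(spend)   # max influence of any single customer on the mean

# Add Laplace noise calibrated to sensitivity / epsilon.
noisy_mean = spend.mean() + rng.laplace(scale=sensitivity / epsilon)

print(f"True mean spend:    {spend.mean():.2f}")
print(f"Private mean spend: {noisy_mean:.2f}")
```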
While AI holds tremendous potential to radically transform the financial industry, it also presents unique challenges, risks and regulatory considerations. It is vital to continuously assess AI strategies at the firm level as well as at industry and national levels. Only by building trust in robust AI applications can the transformative power of AI be leveraged for good.
Arabesque AI
More on Arabesque AI:
Arabesque has been built on two disruptors of finance: sustainability and Artificial Intelligence. As an organization, Arabesque utilizes advancements in technology to deliver transparent, sustainable and innovative solutions for our clients, whether through our SRay® data services, investment solutions or, most recently, our AI research. Arabesque AI was established in late 2019, with a minority stake owned by the asset manager DWS, with the mission of building a world-leading, AI-driven investment technology company that offers its clients high-performing, efficient and individually customizable investment strategies. The investment philosophy underpinning the use of AI is that the discernible structure in financial markets is highly complex and varies over time, markets and asset classes. AI can thus be used to build systems capable of handling this complexity and of enabling scalable investment process design for a wide variety of use cases in an efficient and cost-effective way.