Innovating with Intelligence: Open-Source Large Language Models for Secure System Transformation
GERHARDT SCRIVEN | TONY MOENICKE | SEBASTIAN EHRIG
MAGNUS WESTERLUND | Principal Lecturer in Information Technology and Director of the Laboratory for Trustworthy AI,
Arcada University of Applied Sciences, Helsinki, Finland
ELISABETH HILDT | Affiliated Professor, Arcada University of Applied Sciences, Helsinki, Finland, and Professor of
Philosophy and Director of the Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, USA
APOSTOLOS C. TSOLAKIS | Senior Project Manager, Q-PLAN International Advisors PC, Thessaloniki, Greece
ROBERTO V. ZICARI | Affiliated Professor, Arcada University of Applied Sciences, Helsinki, Finland
The current landscape of assuring AI reliability and quality is fragmented, with existing frameworks often lacking a unified methodology for comprehensive evaluation, particularly in integrating ethical and human rights considerations.
This article introduces the Z-Inspection® process as a participatory, human-centered approach for assessing and co-designing trustworthy AI systems throughout their lifecycle. By forming multi-disciplinary teams and utilizing sociotechnical scenarios, Z-Inspection® enables the exploration of ethical dilemmas and risks in context, fostering a shared understanding among stakeholders. This methodology aligns with the European AI Act’s emphasis on human-centric technology and addresses limitations in existing standards by incorporating continuous ethical reflection and adaptability.
We demonstrate how the co-design aspect of Z-Inspection® facilitates proactive risk identification, transparency, and alignment with regulatory requirements. This approach advances beyond traditional static checklists, offering a dynamic framework that intrinsically weaves ethical considerations into AI development, thereby ensuring that AI technologies are not only technically robust but also ethically sound, socially beneficial, aligned with human values, and legally compliant. Trustworthy AI is not an afterthought or technical hindrance but a way to promote a mindful use of AI.
SEAN LYONS | Author of Corporate Defense and the Value Preservation Imperative: Bulletproof Your Corporate Defense Program
Global artificial intelligence safety is critical to defending against the potential downside of AI technology (from routine to existential risks) and needs to be prioritized accordingly. Our global leaders have a duty of care to safeguard against the potential damage of this impending AI value destruction, and that will require a much higher, more robust, and more mature level of AI safety due diligence than is currently on display.
Dynamic developments in AI mean that the normal order of things no longer applies and that, going forward, effective AI safety will require superior levels of guardianship, stewardship, and leadership.
DAVID S. KRAUSE | Emeritus Associate Professor of Finance, Marquette University
ERIC P. KRAUSE | PhD Candidate – Accounting, Bentley University
The emergence of generative artificial intelligence systems, capable of autonomously generating diverse content, is reshaping industries while raising concerns about biases, misuse, and errors. Auditing can play a crucial role in ensuring the responsible deployment of GenAI. This discussion examines the critical importance of auditing in mitigating risks and building user confidence.
Recent regulatory frameworks, such as the E.U.’s Artificial Intelligence Act and New York City’s Bias Audit Law, underscore the necessity of audits for high-risk AI systems, focusing particularly on fairness and data integrity. Internally, organizations benefit significantly from conducting audits to pinpoint biases and vulnerabilities, thereby upholding ethical standards and compliance.
Traditional audit firms encounter challenges due to the intricate nature and rapid advancement of AI technologies. Nevertheless, they can adapt by enhancing their expertise and collaborating closely with AI specialists. In conclusion, rigorous auditing practices are essential for navigating regulatory environments, mitigating risks and ensuring the ethical and dependable integration of GenAI systems, fostering positive societal impact.
NATALIE A. PIERCE | Partner and Chair of the Employment and Labor Group, Gunderson Dettmer
This article explores the transformative impact of generative AI (GenAI) and robotics on the future of work and leadership. It discusses how these technologies are revolutionizing various industries, including healthcare, finance, retail, manufacturing, and education. The synergy between GenAI and robotics is highlighted, showing potential for adaptive robotics and enhanced human-robot interaction.
The article emphasizes the critical role of leadership in navigating this technological shift, addressing the need for strategic vision, resource allocation, and fostering an AI-friendly culture. It also covers the importance of workforce reskilling and the use of GenAI in learning and development. Legal considerations, including data privacy, discrimination risks, intellectual property rights, and evolving regulatory frameworks, are examined.
The article concludes by discussing challenges such as ethical concerns, job displacement, and data security, while emphasizing the potential for GenAI to drive innovation and competitive advantage when balanced with human-centric values and ethical considerations.
ARUN SUNDARARAJAN | Harold Price Professor of Entrepreneurship and Director of the Fubon Center for Technology,
Business, and Innovation, Stern School of Business, New York University
As the landscape of artificial intelligence (AI) evolves rapidly, AI oversight by corporate boards is essential for managing AI exposure and complying with new AI laws. Competitive pressure to stay ahead in the AI race is inducing CEOs to embrace innovation aggressively, making board oversight especially critical. This paper presents a framework for corporate boards that identifies some key AI governance dimensions and provides guidelines for assessing their organizational risk and regulatory likelihood.
The dual lenses of risk and regulation can simultaneously aid a board in prioritizing governance aspects to pay attention to and in choosing a robust oversight strategy. Mapping the risk-regulation matrix shapes appropriate recommended oversight strategies, ranging from proactive self-regulation and compliance monitoring to more passive wait-and-watch strategies.
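The risk-regulation matrix described above can be pictured as a simple lookup from a governance dimension's position in the matrix to a recommended oversight strategy. The following sketch is purely illustrative: the quadrant labels and strategy mappings are assumptions for exposition, not the paper's actual framework.

```python
# Illustrative sketch of a risk-regulation matrix: each combination of
# organizational risk level and regulatory likelihood maps to a recommended
# board oversight strategy. Quadrant labels and mappings are hypothetical.

OVERSIGHT_MATRIX = {
    ("high", "high"): "proactive self-regulation and compliance monitoring",
    ("high", "low"): "proactive self-regulation",
    ("low", "high"): "compliance monitoring",
    ("low", "low"): "wait-and-watch",
}

def recommend_oversight(risk: str, regulatory_likelihood: str) -> str:
    """Return the recommended oversight strategy for one governance dimension."""
    return OVERSIGHT_MATRIX[(risk, regulatory_likelihood)]
```

A board could apply such a lookup dimension by dimension (e.g., intellectual property, audit, sustainability) to prioritize where active oversight is warranted.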
This paper provides a structured way to navigate the evolving regulatory and governance landscape while unshackling boards from the subjectivity and imprecision of terms like “responsible” or “ethical” AI, leading to oversight that aligns with a company’s unique risk profile and industry-specific regulatory context, while recognizing that AI governance touches a range of topics, from technology, intellectual property and sustainability to audit, measurement, and risk assessment.
GERHARDT SCRIVEN | Executive Director, Capco
TONY MOENICKE | Senior Consultant, Capco
SEBASTIAN EHRIG | Senior Consultant, Capco
The rapid development of Large Language Models (LLMs) has revolutionized software development, yet the predominance of closed-source models has restricted their extensive adoption. In this paper, we explore open-source LLMs as an alternative to closed-source models like ChatGPT, particularly for the use case of interpreting legacy software source code.
We evaluate open-source models for their capacity to understand and explain COBOL code to a human user, a crucial task for financial institutions looking to update their legacy systems while keeping their data secure in-house.
Evaluating LLMs in this domain is challenging since there is no simple right or wrong answer to the specific types of COBOL-related questions we ask. To address this, we have benchmarked the responses obtained from various proprietary and open-source LLMs against an expert human response. This method allows us to assess which models perform best for a specific type of question and are effective in a practical context.
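One way to operationalize benchmarking against an expert response is to score each model's answer by its similarity to the reference answer and rank models accordingly. The sketch below is a minimal stand-in, not the authors' actual method: it uses plain lexical similarity from the standard library where a real study would likely use semantic similarity or human grading.

```python
# Hypothetical sketch: rank LLM answers to a COBOL question by their
# similarity to an expert's reference answer. Lexical similarity here is
# a placeholder for whatever semantic or human scoring the study used.
from difflib import SequenceMatcher

def similarity_to_expert(model_answer: str, expert_answer: str) -> float:
    """Rough lexical similarity in [0, 1] between a model's answer and the expert's."""
    return SequenceMatcher(None, model_answer.lower(), expert_answer.lower()).ratio()

def rank_models(answers: dict[str, str], expert_answer: str) -> list[tuple[str, float]]:
    """Rank candidate models (name -> answer) by closeness to the expert response."""
    scores = {name: similarity_to_expert(ans, expert_answer)
              for name, ans in answers.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Run per question type, such a ranking makes it possible to see which models are strongest at, say, explaining a PERFORM loop versus summarizing a whole program.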
This article provides insights for financial institutions looking to optimize or modernize their legacy systems using LLMs, as well as considerations for adapting and integrating these models into their IT environments.
CRISTIÁN BRAVO | Professor, Canada Research Chair in Banking and Insurance Analytics, Department of Statistical
and Actuarial Sciences, Western University
The modern revolution of artificial intelligence (AI) has a benefit that is often not mentioned: it allows the use of diverse data from multiple sources and of multiple types (multimodal data), such as video, audio, or images, in an efficient and, more importantly, effective manner. While this is much closer to how experts make decisions, the challenges are that it must be done profitably, while considering the internal culture and the operational systems that are available to ensure a positive return on investment (ROI). In this article, I will summarize some of the advantages and point out some of the challenges in creating effective, useful AI systems that leverage multimodal data.
ALBERT SANCHEZ-GRAELLS | Professor of Economic Law, University of Bristol Law School
This paper reflects on the challenges that the public sector faces in adopting artificial intelligence, and generative AI in particular. Despite the increasing pressure on public sector organizations to deploy AI and GenAI to cut costs, this stage of public sector digitalization remains fraught with difficulties.
The paper stresses, in particular, the challenges that arise from two tiers of complexity: first, designing appropriate use cases and ensuring AI and GenAI are not used for other purposes; and, second, successfully acquiring AI and GenAI for the public sector.
FENG LI | Associate Dean for Research and Innovation and Chair of Information Management, Bayes Business School (formerly Cass), City St George’s, University of London
HARVEY LEWIS | Partner, Ernst & Young (EY), London
The rise of artificial intelligence (AI), particularly generative AI (GenAI), presents both significant opportunities and challenges for business leaders. This paper explores how AI can reshape business models, operations, and the nature of work, drawing lessons from past technological revolutions and emerging insights from leading global organizations. It argues that AI’s true potential lies not just in automating tasks but in fundamentally rethinking organizational processes and business models.
The paper offers practical strategies for senior leaders to navigate this evolving landscape and successfully steer their organizations through an AI-driven future.
MICHAEL P. WELLMAN | Lynn A. Conway Collegiate Professor of Computer Science and Engineering, University of Michigan, Ann Arbor
The rapid advancement of surprisingly capable AI is raising questions about AI’s impact on virtually all aspects of our economy and society. The nexus of AI and finance is especially salient, building on the impact AI has already had on trading and other financial domains. New AI developments could exacerbate market manipulation and otherwise create loopholes in regulatory regimes. Anticipating these potential impacts suggests directions for market design and policy that make financial markets robust to advanced AI capabilities.
CHARLOTTE BYRNE | Managing Principal, Capco
THOMAS HILL | Principal Consultant, Capco
The generative AI landscape is evolving rapidly – and transforming how organizations approach and embrace technology and innovation. As businesses seek to harness the power of GenAI, it is crucial they establish a robust technology blueprint that guides the development, deployment, and management of AI-driven solutions.
We explore the essential elements of a GenAI technology blueprint, covering the importance of flexible architectures, ethical considerations, and seamless integration with existing systems.
SEAN MCMINN | Director of Center for Educational Innovation, Hong Kong University of Science and Technology
JOON NAK CHOI | Advisor to the MSc in Business Analytics and Adjunct Associate Professor, Hong Kong University of Science and Technology
The rapid advancement of generative artificial intelligence (GenAI) tools has significant implications for creativity, decision-making, and problem-solving across various sectors. While AI offers opportunities to enhance productivity by offloading routine tasks, excessive or inappropriate dependence on it can diminish human cognitive engagement and critical thinking skills.
This paper highlights the importance of metacognition, which is the ability to reflect on one’s thinking and decision-making strategies, in effectively integrating AI into both educational and professional settings. By developing metacognitive awareness and employing strategic approaches, individuals and organizations can assess when and how to use AI effectively.
Addressing the AI literacy gap is also crucial as it empowers users to navigate AI-driven environments appropriately and confidently. Ultimately, fostering metacognitive skills ensures that AI serves to enhance, rather than replace, human judgment, creativity, and ethical responsibility in decision-making processes.
This article introduces key metacognitive strategies for effective AI integration and underscores the necessity of continuous learning and human oversight.
NYDIA REMOLINA | Assistant Professor of Law, and Fintech Track Lead, SMU Centre for AI and Data Governance, Singapore Management University
Generative artificial intelligence is rapidly reshaping the financial services sector by introducing new avenues for innovation, efficiency, and profitability. GenAI systems, including models like “generative adversarial networks” and “transformers”, can autonomously generate content such as synthetic data, trading strategies, and fraud detection insights, transforming traditional financial operations.
However, these advancements come with new challenges, particularly in ensuring that GenAI is deployed ethically, securely, and in compliance with evolving regulatory frameworks. Current financial regulations, such as those governing anti-money laundering, market integrity, and financial consumer protection, were originally designed for human-driven processes and do not fully address the complexities introduced by AI systems. While some jurisdictions, such as the E.U., Singapore, the U.S., and China, have launched AI regulatory initiatives, frameworks specifically tailored to the financial services industry are still a work in progress.
This article seeks to provide an overview of these differing regulatory landscapes while raising awareness of the gaps that financial institutions and regulators should bridge to enable the responsible adoption of GenAI in the financial services sector.
KATJA LANGENBUCHER | Professor of Civil Law, Commercial Law, and Banking Law, Goethe University Frankfurt
This paper describes how artificial intelligence might augment board decision-making and explores legal ramifications of this development. The article begins by providing a brief overview of the use of AI as a “prediction machine” for board decisions, and then zooms in on two core characteristics that explain what corporate law requires from board decision making: that board members fully own their decisions and that board members are trusted to form business judgments, immune from judicial second-guessing.
The paper makes two contributions to the debate: it rejects the notion that black-box AI may not be used for board decision-making, and it proposes a graphical control matrix to identify low, medium, and enhanced judicial scrutiny when boards use AI to inform their decisions.
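The control-matrix idea described above can be illustrated as a mapping from how much control a board retains over an AI-informed decision to the level of judicial scrutiny applied. The categories below are hypothetical renderings for exposition, not the paper's actual matrix.

```python
# Hypothetical illustration of a control matrix: the level of judicial
# scrutiny of a board decision varies with the board's degree of control
# over the AI-informed decision. Categories and mappings are assumptions.

SCRUTINY_MATRIX = {
    "board decides, AI merely informs": "low",
    "board relies substantially on AI prediction": "medium",
    "board delegates the decision to AI": "enhanced",
}

def judicial_scrutiny(use_pattern: str) -> str:
    """Return the illustrative scrutiny level for a given AI-use pattern."""
    return SCRUTINY_MATRIX[use_pattern]
```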
JULIA REDENIUS-HÖVERMANN | Professor of Civil Law and Corporate Law and Director of the Corporate Governance Institute (CGI) and the Frankfurt Competence Centre for German and Global Regulation (FCCR), Frankfurt School of Finance and Management
LARS HINRICHS | Partner at Deloitte Legal Rechtsanwaltsgesellschaft mbH (Deloitte Legal) and Lecturer, Frankfurt School of Finance and Management
In the following article, selected topics in the current implementation of compensation systems for management boards are discussed in more detail, with a focus on the tension that regularly arises in compensation practice between the regulatory and labor law framework, behavioral economics, and (market) practice. To make the presentation more accessible, the regulatory legal bases generally refer to the requirements of CRD VI and cover topics that the national legislators of individual E.U. member states have transposed into national law with identical content.
It is shown that the practice of remuneration systems for management board members in institutions is based on a (mature) legal framework. Individual internal and external dynamic factors influence the further implementation of these remuneration systems and require a risk-based, regular review of the compatibility of the remuneration systems and their implementation with the regulatory requirements and the operational requirements of the institution, in particular those arising from the updated business and risk strategy.
In particular, when it comes to the specific implementation of performance-related variable remuneration, institutions must take into account the dependence of regulatory requirements on the applicable labor and company law framework and reconcile these in a balanced and practicable manner. Whether the current (over)regulation will lead to a “regulatory infarction” in the near future remains to be discussed.
EUGENIA NAVARRO | Lecturer and Director of the Legal Operations and Legal Tech Course, ESADE
The integration of artificial intelligence in the legal sector presents significant opportunities for improving efficiency, automating repetitive tasks, and enhancing decision-making processes. However, successful implementation requires a clear strategy, proper training for legal teams, and the right collaboration between internal and external experts.
Generative AI can streamline document drafting and client interaction, while non-generative AI excels in predictive analytics and e-discovery. Despite the advancements, AI cannot replace human emotional intelligence, creativity, and ethical judgment, which remain critical in delivering personalized and high-quality legal services. Ultimately, AI is a powerful tool, but its true value lies in complementing human expertise, not replacing it.