Decoding AI: Key terms risk and compliance professionals should know

Artificial intelligence is transforming how financial institutions identify risk, handle compliance, support customers, and operate across the enterprise. Yet its concepts and terminology can often feel overwhelming and highly technical because of the field’s complex history and interdisciplinary nature.
A clear grasp of AI’s basic concepts and terminology is critical for several reasons: regulators expect banks to explain not only what their AI systems do but also how they work, and a solid understanding is the gateway to extending and scaling these capabilities.
This column breaks down key AI concepts in plain English, with practical examples drawn from real financial applications. Whether you’re working with risk-rating or fraud models, drafting policies, reviewing vendor models, or building governance frameworks, these definitions will help you navigate the fast-evolving AI landscape with confidence.
Foundational AI concepts
Artificial intelligence (AI): AI, founded as an academic field in 1956, has gone through recurring cycles of enthusiasm and "AI winters." Interest surged sharply after 2012 with GPU-driven deep learning and accelerated again in 2017 with the emergence of the "transformer" architecture. In the 2020s, breakthroughs in generative AI sparked a major boom, and we are now living through the "age of AI".
Banks have long been familiar with AI: credit risk scoring and fraud detection are classic machine-learning (a subset of AI) applications that financial institutions have relied on for decades. But with the arrival of newer forms of AI, especially Generative AI and Agentic AI, the landscape is evolving rapidly.
Artificial general intelligence (AGI): A hypothetical form of AI capable of performing any intellectual task that a human being can, including learning, reasoning, solving problems, and adapting to any type of task or situation, regardless of the field. It represents a significant departure from today’s Narrow AI, which is designed to excel at a single, specific task (e.g., identifying fraud, generating text).
Expert systems: Expert systems are early rule-based AI programs that make decisions using hand-crafted "if-then" logic. They have largely been replaced by modern methods that automatically learn from data.
Example: A rule-based credit-approval engine that applies fixed thresholds for income or debt ratios is a classic expert-system approach once common in banking.
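For readers who want to see the idea concretely, here is a minimal Python sketch of that expert-system logic; the thresholds and field names are purely illustrative, not drawn from any real credit policy.

    def approve_credit(application):
        # Hand-crafted "if-then" rules: every threshold comes from a human expert.
        if application["annual_income"] < 40_000:
            return "decline"
        if application["debt_to_income"] > 0.45:
            return "decline"
        if application["credit_score"] >= 700:
            return "approve"
        return "manual review"

    print(approve_credit({"annual_income": 85_000,
                          "debt_to_income": 0.30,
                          "credit_score": 720}))  # -> approve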
Machine learning (ML): Machine learning is a subset of AI where programs automatically learn patterns (rules, formulas) from data instead of relying on hand-crafted expert rules. In essence, the machine (a program) discovers the rules by analyzing historical data. This represents a major conceptual leap from expert systems: once the domain and data grow in complexity, it becomes impossible for humans to manually define all the relevant patterns, making learning from data essential.
Example: In compliance, ML models can detect fraud or estimate credit risk by uncovering patterns across millions of past transactions and cases.
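To make the contrast with expert systems concrete, here is a hedged sketch using scikit-learn and synthetic data; the decision rule is learned from labeled examples rather than written by hand.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for historical transactions:
    # features = [amount, hour_of_day]; label 1 = fraud, 0 = legitimate.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1_000, 2))
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # a toy pattern the model must discover

    model = LogisticRegression().fit(X, y)  # the "rules" are learned, not hand-written
    print(model.predict([[2.0, 1.0]]))      # score a suspicious-looking transaction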
Deep learning (DL): Deep learning is an advanced form of machine learning that uses multi-layered neural networks, loosely inspired by the human brain, to handle highly complex data such as images, text, audio, and video. Its power comes from the combination of very large models, massive digital datasets, and modern GPUs that can efficiently train these networks, along with key mathematical breakthroughs over the past 15 years.
Since its breakthrough in 2012, deep learning has driven the major advances in modern AI, from facial recognition and speech processing to generative AI. In fact, foundational contributions to DL were recognized with Nobel Prizes in physics and chemistry in 2024.
Example: Banks apply deep learning to tasks such as document verification, identity authentication, voice-based fraud detection, and call-center automation.
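As a toy illustration of the layered idea (production deep learning systems are vastly larger and trained on GPUs), scikit-learn’s small multi-layer network stacks two hidden layers to capture a nonlinear pattern.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)  # a nonlinear toy pattern

    # Two hidden layers of 32 and 16 units: a miniature "deep" network.
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
    print(net.predict(X[:3]))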
Language, text and generative AI
Natural language processing (NLP): NLP enables computers to understand and generate human language. It underpins chatbots that answer regulatory questions, systems that summarize policy documents, and monitoring tools that scan customer communications for compliance breaches.
Example: NLP-based systems can understand the context of a call-center interaction, infer the customer’s needs, and generate optimal marketing actions in real time.
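One common NLP building block, text classification for communications monitoring, can be sketched in a few lines; the messages and labels below are invented for illustration, and a real system would train on thousands of reviewed communications.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["please wire the funds offshore today",
             "thanks for the quarterly statement",
             "move it before compliance notices",
             "see attached account summary"]
    labels = [1, 0, 1, 0]  # 1 = escalate for compliance review

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
    print(clf.predict(["wire everything offshore quietly"]))  # flagged in this toy setup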
Generative AI (GenAI): AI models aren’t used only for prediction: GenAI can create new content, such as text, code, or images. In risk and compliance, GenAI can draft reports, synthesize regulatory guidance, or generate "what-if" scenarios for emerging risks, all under human supervision.
Example: A compliance professional could use GenAI to summarize a new 200-page regulation into a two-page internal briefing.
Large language models (LLMs): LLMs are a type of GenAI trained on enormous volumes of text to predict the next word in a sequence. This simple task powers tools like ChatGPT. In financial institutions, LLMs can accelerate regulatory research or automate customer correspondence, though their outputs require review for accuracy and bias. LLMs are now embedded in many of the tools we use every day, from virtual assistants and search engines to email and document applications, making their influence widespread, even when users aren’t aware of it.
Example: By referencing official regulatory bulletins, an LLM-powered assistant can answer staff questions like "What are the new FinCEN reporting thresholds?"
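The "predict the next word" idea can be illustrated with a toy bigram counter; real LLMs perform the same prediction with billions of parameters and far longer context.

    from collections import Counter, defaultdict

    corpus = "the bank filed the report the bank flagged the transaction".split()
    successors = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        successors[current_word][next_word] += 1  # count what follows each word

    print(successors["the"].most_common(1))  # the most likely word after "the"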
Retrieval-augmented generation (RAG): RAG combines an LLM with a trusted data source, allowing the model to "look up" information rather than relying solely on its training data. In practice, a RAG-based chatbot can securely answer questions using a bank’s internal compliance manuals or policy documents, ensuring that responses are both accurate and aligned with official guidance.
Example: A risk professional could ask, "What’s our escalation protocol for wire-transfer fraud?" and the chatbot would retrieve the exact language from the internal procedures document.
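A simplified sketch of the RAG pattern follows, using TF-IDF similarity in place of the embedding models a production system would use; the policy passages are invented for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    passages = [
        "Wire-transfer fraud: escalate to the BSA officer within 24 hours.",
        "Card disputes: customers must file within 60 days of the statement.",
    ]
    question = "What's our escalation protocol for wire-transfer fraud?"

    # Retrieve: find the passage most similar to the question.
    vec = TfidfVectorizer().fit(passages + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(passages))
    best = passages[scores.argmax()]

    # Generate: the retrieved text is inserted into the LLM prompt.
    print(f"Answer using only this policy text:\n{best}\n\nQuestion: {question}")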
Document understanding and information extraction: Document understanding uses AI to extract and interpret information from structured or unstructured documents such as contracts, Know Your Customer (KYC) forms, or regulatory filings. It combines Optical Character Recognition (OCR), NLP, and ML to automate document-heavy compliance tasks.
Example: A document-understanding system can automatically extract interest-rate clauses from vendor contracts or locate missing disclosures in loan files, saving compliance teams hours of manual review.
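As a simplified illustration, a few lines of Python can pull a rate clause out of OCR output; real document-understanding systems combine OCR, NLP, and ML rather than a single pattern match, and the contract text here is invented.

    import re

    ocr_text = """The facility shall bear interest at a rate of 6.25% per
    annum, adjusted quarterly, as set forth in Section 4.2."""

    for match in re.finditer(r"interest at a rate of ([\d.]+%)", ocr_text):
        print("Found rate clause:", match.group(1))  # -> 6.25%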
Autonomy, planning and reasoning
Agentic AI: Agentic AI builds on GenAI by giving models the ability to plan, make decisions, take actions, and use other tools autonomously to execute tasks, all within strict guardrails. Agentic AI represents the next major step in intelligent automation.
Example: An agentic AI assistant in a bank’s compliance department could automatically collect new regulatory updates, summarize key changes, compare them with existing internal policies, and flag potential compliance gaps, without human prompting.
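The skeleton of such an agent loop might look like the sketch below, where each function is a hypothetical placeholder for an LLM call or an internal system; it is meant to show the plan-act-observe cycle, not a production design.

    def fetch_regulatory_updates():
        return ["FinCEN bulletin 2026-01"]   # placeholder update feed

    def summarize(doc):
        return f"Summary of {doc}"           # stand-in for an LLM call

    def compare_to_policy(summary):
        return f"Gap check for: {summary}"   # stand-in for a policy-diff tool

    findings = []
    for update in fetch_regulatory_updates():        # the agent works through its plan
        summary = summarize(update)                  # act: summarize the change
        findings.append(compare_to_policy(summary))  # observe: check against policy

    print(findings)  # results are routed to a human reviewer (the guardrail)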
Reinforcement learning: Unlike ML models that learn from historical data, reinforcement learning improves through trial and error. The model takes actions, receives feedback in the form of rewards or penalties, and gradually learns which choices lead to the best outcomes. Trading and credit-limit algorithms sometimes use this approach to balance profitability and risk exposure dynamically. However, regulators require transparency into how such models learn and make decisions. (Reinforcement learning is defined more fully under "Core learning paradigms" below.)
Example: A reinforcement-learning system could adjust daily credit exposure limits for corporate clients based on changing market volatility, rewarding actions that minimize risk.
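A toy Q-learning loop (a classic RL algorithm) gives a feel for the reward-driven update; the states, actions, and reward function here are invented for illustration.

    import numpy as np

    # States are volatility buckets (low/medium/high); actions are
    # raise / hold / lower the exposure limit.
    n_states, n_actions = 3, 3
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.9
    rng = np.random.default_rng(0)

    def reward(state, action):
        # Hypothetical payoff: lowering limits pays off in high volatility.
        good = (state == 2 and action == 2) or (state == 0 and action == 0)
        return 1.0 if good else -0.1

    for _ in range(5_000):                      # trial and error
        s, a = rng.integers(n_states), rng.integers(n_actions)
        s_next = rng.integers(n_states)
        Q[s, a] += alpha * (reward(s, a) + gamma * Q[s_next].max() - Q[s, a])

    print(Q.argmax(axis=1))  # learned best action per volatility bucket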
Governance, risk and oversight
Responsible and explainable AI: Responsible AI ensures that AI systems operate ethically, transparently, and in alignment with regulations. Explainable AI (XAI) makes complex models interpretable, revealing why a decision was made. Together, they form the foundation of AI governance in financial institutions.
Example: Before launching a credit-modeling system, a bank’s Responsible AI review may verify that decisions are explainable, data sources are unbiased, and results can be justified to both regulators and customers.
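Explainability tooling ranges from SHAP values to simpler diagnostics; as a hedged sketch, scikit-learn’s permutation importance shows how an uninformative feature can be exposed on synthetic credit data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic features: [income, debt_ratio, pure noise].
    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 3))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)  # only the first two features matter

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    print(result.importances_mean)  # the noise feature should score near zero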
Model risk management (MRM): In banking, any AI or ML system is treated as a "model" subject to governance. MRM frameworks assess model design, validation, and performance drift, linking AI innovation directly to operational-risk and compliance oversight.
Example: A bank’s model risk team regularly reviews its AI-driven credit scoring models to ensure they remain accurate, fair, and compliant with OCC and Federal Financial Institutions Examination Council guidance.
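Drift reviews are often quantified with the population stability index (PSI); here is a minimal sketch with synthetic score distributions (the 0.25 threshold below is a common rule of thumb, not a regulatory standard).

    import numpy as np

    def psi(expected, actual, bins=10):
        # Compare how scores are distributed at validation vs. in production.
        cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
        cuts[0], cuts[-1] = -np.inf, np.inf
        e = np.histogram(expected, cuts)[0] / len(expected) + 1e-6
        a = np.histogram(actual, cuts)[0] / len(actual) + 1e-6
        return float(np.sum((a - e) * np.log(a / e)))

    scores_at_validation = np.random.default_rng(0).normal(600, 50, 10_000)
    scores_in_production = np.random.default_rng(1).normal(585, 55, 10_000)
    print(round(psi(scores_at_validation, scores_in_production), 3))
    # Values above roughly 0.25 are commonly read as material drift.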
Human-in-the-Loop (HITL): Even the best AI systems require human judgment. A HITL approach combines machine efficiency with expert oversight, essential in compliance decisions where accountability cannot be delegated to an algorithm.
Example: When AI flags a potential sanctions violation, a compliance professional reviews the alert before deciding whether to report it, thereby maintaining human accountability.
Core learning paradigms
AI systems "learn" from data, and this learning is encoded in mathematical objects called AI models. There are several fundamental ways AI models learn:
Supervised learning: The model learns from labeled examples, i.e., data paired with the correct answer. Most traditional banking applications, such as credit-risk scoring, use supervised learning because past defaults (the labels) guide the training process.
Unsupervised learning: The model finds structure in unlabeled data, such as clusters or hidden patterns. Customer segmentation based on spending and product-usage behavior is a classic unsupervised learning task since no predefined labels are required.
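For instance, scikit-learn’s KMeans can discover customer segments with no labels at all; the features below are synthetic.

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic customers: [monthly spend, number of products held].
    rng = np.random.default_rng(0)
    customers = np.vstack([rng.normal([2_000, 1], [300, 0.5], (100, 2)),
                           rng.normal([9_000, 4], [900, 1.0], (100, 2))])

    # No labels are provided; the algorithm finds the segments itself.
    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
    print(np.bincount(segments))  # customers per discovered segment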
Self-supervised learning: The model generates its own labels from the data. Predicting the next word in a sentence or masking parts of an image and asking the model to fill them in are common examples. Large language models are primarily trained using self-supervised learning.
Reinforcement learning (RL): A type of machine learning where an agent — the AI system making decisions — learns how to behave in an environment by performing actions and receiving a reward or penalty as feedback for success or failure. The agent’s goal is to develop a decision-making strategy ("policy") that maximizes the cumulative reward over time. In cutting-edge models (like large language models), this often involves human feedback to ensure the model’s behavior is aligned with ethical or desired outcomes. For example, RL played a key role in the major quality jump from GPT-3 to ChatGPT (GPT-3.5) in late 2022. Some researchers consider reinforcement learning the pathway toward achieving artificial general intelligence.
Transfer learning: A model trained on one large domain is adapted to a new, narrower task. For example, starting with a pre-trained large language model and fine-tuning it on a bank’s compliance documents can produce a model that generates better summaries and answers aligned with internal policies.
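As a hedged sketch using the Hugging Face transformers library, transfer learning starts from a pretrained model and attaches a new task-specific head; the fine-tuning loop itself is omitted here for brevity.

    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Start from a general-purpose pretrained language model...
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # ...with a fresh 2-class head

    # Fine-tuning would then continue on the bank's own labeled documents
    # (for example, with the library's Trainer API).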
Active learning: Active learning is a training strategy where the model identifies which data points would be most useful to learn from and queries a human expert for labels. This reduces labeling effort and improves performance with fewer labeled examples.
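A minimal uncertainty-sampling sketch: the model nominates the pool cases it is least sure about (predicted probability closest to 0.5) for expert labeling; all data here is synthetic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_labeled = rng.normal(size=(50, 2))
    y_labeled = (X_labeled[:, 0] > 0).astype(int)
    X_pool = rng.normal(size=(500, 2))        # unlabeled cases awaiting review

    model = LogisticRegression().fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_pool)[:, 1]
    most_uncertain = np.argsort(np.abs(probs - 0.5))[:5]
    print("Ask the expert to label pool rows:", most_uncertain)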
Online/adaptive learning: Online learning updates the model continuously or in small batches as new data arrives, allowing real-time adaptation. It contrasts with traditional "batch" learning where models are retrained only periodically.
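A minimal sketch with scikit-learn’s SGDClassifier: each day’s batch updates the model in place via partial_fit instead of retraining from scratch; the data is synthetic.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier(loss="log_loss")

    for day in range(5):                       # a new small batch each day
        X_batch = rng.normal(size=(200, 3))
        y_batch = (X_batch[:, 0] > 0).astype(int)
        model.partial_fit(X_batch, y_batch, classes=[0, 1])

    print(model.predict(rng.normal(size=(1, 3))))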
This article originally appeared in the January/February 2026 issue of ABA Risk and Compliance magazine.
ABOUT THE AUTHOR

OMER FARUK ALIS, PH.D., is the director of the AI Solutions Hub at Northeastern University’s Institute for Experiential AI. With more than 25 years of experience in artificial intelligence and machine learning, he has led industry initiatives spanning finance, telecommunications, healthcare, and biotechnology. Omer has also founded two startups focused on automated machine learning and AI-driven skill assessment. He holds a Ph.D. in Applied and Computational Mathematics from Princeton University. Reach him at aipartnerships@northeastern.edu




