Here's a number that should keep you up at night: 87.5% of Nigerian fintechs already use AI for fraud detection. If you're one of them, you're not ahead of the curve. You're in the pack. And if you're not documenting how that system works, what data it processes, and why it blocks the transactions it blocks — you're running a compliance risk that's growing every quarter.
The NDPC fined Multichoice ₦766 million. The FCCPC fined Meta $220 million. Over 1,368 organisations are under investigation. The Central Bank of Nigeria's March 2026 AML automation standards add another layer of regulatory scrutiny. Your fraud detection AI sits right at the intersection of both regulators' attention.
This article covers two things: how to build an AI fraud detection system that actually works for Nigerian transaction patterns, and what the NDPC and CBN expect you to document about it.
What You're Actually Building
Let's get specific about what a fraud detection system does. It's not one monolithic AI. It's a pipeline with several connected components.
Real-Time Transaction Scoring
Every transaction gets a risk score as it happens. The model looks at the amount, the sender and receiver profiles, time of day, device fingerprint, location, and velocity (how many transactions in the last hour, day, week). A score above your threshold triggers an action — flag for review, request additional authentication, or block outright.
For Nigerian fintechs, the scoring model needs to account for patterns that are perfectly normal here but would look suspicious on a UK or US model. A customer sending ten ₦5,000 transfers in an hour through a mobile money platform? That might be a market trader paying suppliers. A generic model trained on Western banking data would flag it as structuring.
Pattern Detection
Beyond individual transactions, you need models that spot patterns across time and across accounts. Things like:
- Ring transfers between a group of accounts (layering)
- Gradual increases in transaction size testing limits
- Multiple accounts created from the same device
- Sudden changes in a customer's behaviour profile
- Transactions that match known fraud typologies
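As a rough sketch of the ring-transfer case, a depth-first search over a directed graph of recent transfers can surface closed loops of accounts. The account IDs and transfers below are made up for illustration; a production system would run this over a windowed view of recent transactions or a graph database, not the whole ledger.

```python
from collections import defaultdict

def find_rings(transfers):
    """Return account cycles where money flows back to its origin."""
    graph = defaultdict(set)
    for sender, receiver in transfers:
        graph[sender].add(receiver)

    rings = []

    def dfs(node, path, seen):
        for nxt in graph[node]:
            if nxt == path[0] and len(path) > 1:
                rings.append(path[:])          # closed loop back to the start
            elif nxt not in seen:
                dfs(nxt, path + [nxt], seen | {nxt})

    for start in list(graph):
        dfs(start, [start], {start})
    return rings

# Hypothetical transfers: A -> B -> C -> A is a layering candidate; C -> D is not.
rings = find_rings([("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")])
```

Each ring that surfaces becomes a candidate alert, not an automatic verdict — analysts still confirm whether the loop is layering or a legitimate settlement pattern.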
Account Monitoring
Continuous monitoring of account-level signals: login locations, device changes, password resets, new beneficiaries added, sudden dormancy followed by activity. Account takeover is one of the fastest-growing fraud types in Nigerian fintech — it's not just about the transactions themselves.
Alert Workflow and Case Management
Your AI generates alerts. Humans investigate them. You need a system that routes alerts by severity, assigns them to analysts, tracks investigation status, logs decisions and reasons, and feeds outcomes back to the model for retraining. Without this feedback loop, your model never improves.
The Architecture
Here's what the technical stack looks like. I'm keeping this practical — this is what we actually build for clients.
Data Pipeline
You need a streaming data pipeline that ingests transactions in real time. Kafka or AWS Kinesis for the event stream. Each transaction event carries the raw data: amount, timestamp, sender, receiver, channel, device ID, IP, geolocation.
For Nigerian fintechs, you'll typically be ingesting from multiple channels simultaneously — NIP (NIBSS Instant Payment), USSD, POS, mobile app, and agent banking terminals. Each channel has different data richness. POS transactions might not carry geolocation. USSD transactions have minimal device data. Your pipeline needs to handle this gracefully.
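To make "handle this gracefully" concrete, here's a minimal sketch of channel-aware event normalisation. The field names and schema are assumptions for illustration, not a NIBSS or vendor spec; the point is that missing channel data becomes an explicit None rather than a silent gap downstream.

```python
# Common schema the pipeline enforces; channels vary in what they supply.
REQUIRED = ["amount", "timestamp", "sender", "receiver", "channel"]
OPTIONAL = ["device_id", "ip", "geolocation"]

def normalise_event(raw: dict) -> dict:
    """Coerce a raw channel event into the pipeline's common schema.

    USSD events carry no device or geo data; POS events may lack
    geolocation. Missing optional fields become explicit Nones so
    downstream feature code never has to guess.
    """
    missing = [f for f in REQUIRED if f not in raw]
    if missing:
        raise ValueError(f"event missing required fields: {missing}")
    event = {f: raw[f] for f in REQUIRED}
    for f in OPTIONAL:
        event[f] = raw.get(f)  # None when the channel doesn't supply it
    return event

# A USSD event arrives with no device fingerprint or geolocation.
ussd = normalise_event({"amount": 5000, "timestamp": 1735948800,
                        "sender": "acc1", "receiver": "acc2",
                        "channel": "USSD"})
```

In production this sits in the stream processor consuming from Kafka or Kinesis, so every event reaching the feature store has the same shape regardless of origin channel.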
Feature Store
Raw transaction data isn't what your model consumes. You need engineered features — calculated values that capture the signals. Examples:
- Velocity features: Transaction count and volume in the last 1 hour, 6 hours, 24 hours, 7 days
- Deviation features: How far this transaction deviates from the customer's average amount, typical time of day, usual recipient types
- Network features: Degree of connection between sender and receiver, shared device/IP history
- Channel features: Is this customer's first POS transaction? First cross-border transfer?
- Behavioural features: Session duration before transaction, number of failed attempts, navigation pattern
A feature store (like Feast or a custom Redis-based setup) pre-computes and serves these features at inference time with sub-100ms latency. You can't calculate all this on the fly for every transaction.
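As an illustration of the velocity features above, here's a toy tracker using an in-memory deque where a real feature store would use Redis sorted sets or Feast. Window sizes, the customer ID, and the amounts are invented.

```python
from collections import deque

class VelocityTracker:
    """Per-customer rolling transaction counts and volumes (toy version)."""

    def __init__(self):
        self.events = {}  # customer_id -> deque of (timestamp, amount)

    def record(self, customer_id, timestamp, amount):
        self.events.setdefault(customer_id, deque()).append((timestamp, amount))

    def features(self, customer_id, now, window_seconds):
        q = self.events.get(customer_id, deque())
        # Evict events that have aged out of the window (destructive, so
        # query windows from largest to smallest in this toy version).
        while q and q[0][0] < now - window_seconds:
            q.popleft()
        return {"count": len(q), "volume": sum(a for _, a in q)}

tracker = VelocityTracker()
for t in range(10):
    tracker.record("cust_1", t * 60, 5000)   # one N5,000 transfer per minute
hourly = tracker.features("cust_1", now=600, window_seconds=3600)
```

The real engineering work is keeping these counters consistent under concurrent writes and serving them within the latency budget — which is exactly what a dedicated feature store buys you.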
ML Model
Most production fraud detection systems use an ensemble approach. A gradient boosted model (XGBoost or LightGBM) handles structured feature data well and runs fast. You might layer a neural network on top for sequence modelling — looking at the order and timing of events, not just aggregate features.
The training data problem is real. Fraud is rare — maybe 0.1-0.5% of transactions. You'll need techniques like SMOTE for oversampling, cost-sensitive learning, or anomaly detection models that don't need labelled fraud examples at all.
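One common mitigation, shown as a sketch: weight the rare fraud class by the negative-to-positive ratio, which is what XGBoost's scale_pos_weight parameter expects. The label counts below are invented to match the ~0.2% prevalence described above.

```python
def scale_pos_weight(n_legit: int, n_fraud: int) -> float:
    """Negative-to-positive ratio, used to upweight the rare fraud class."""
    if n_fraud == 0:
        raise ValueError("no fraud labels; consider unsupervised anomaly detection")
    return n_legit / n_fraud

# Invented counts: 0.2% fraud prevalence across one million transactions.
weight = scale_pos_weight(n_legit=998_000, n_fraud=2_000)
# Would be passed as e.g. XGBClassifier(scale_pos_weight=weight, ...)
```

Cost-sensitive weighting like this is often a simpler first step than SMOTE, since it changes no data — just the loss — and is easier to explain in your model documentation.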
Train on Nigerian data. I can't stress this enough. A model trained on UK card fraud data will produce garbage results when applied to Nigerian mobile money transactions. The transaction patterns, amounts, frequencies, and fraud typologies are fundamentally different.
Scoring API
Your model runs behind a REST or gRPC API. The payment processing system calls it for every transaction, passes the transaction details, gets back a risk score and a reason code. Latency matters — you need sub-200ms response times or you're slowing down payments.
The reason code is important for compliance. "Score: 0.87" tells your analyst nothing. "Score: 0.87 — unusual velocity (15 transactions in 20 minutes), new recipient, amount 10x customer average" tells them exactly what to investigate.
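A toy version of reason-code generation might look like this. The thresholds and feature names are illustrative assumptions; a production system would derive reasons from per-feature model contributions (SHAP values or similar) rather than hand-written rules.

```python
def explain_score(score: float, features: dict) -> dict:
    """Attach human-readable reason codes to a risk score (illustrative rules)."""
    reasons = []
    if features.get("txn_count_1h", 0) > 10:
        reasons.append(f"unusual velocity ({features['txn_count_1h']} txns in 1h)")
    if features.get("new_recipient"):
        reasons.append("new recipient")
    if features.get("amount_ratio", 1.0) >= 10:
        reasons.append(f"amount {features['amount_ratio']:.0f}x customer average")
    return {"score": score, "reasons": reasons}

# Hypothetical flagged transaction, mirroring the example in the text.
resp = explain_score(0.87, {"txn_count_1h": 15, "new_recipient": True,
                            "amount_ratio": 10.2})
```

Whatever generates the reasons, they should be logged with the score — that log becomes your audit trail for both analysts and regulators.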
Case Management
When the score exceeds your review threshold, an alert goes into a case management system. This is where your fraud analysts work. They see the alert, the reason codes, the customer's transaction history, and make a decision: genuine or fraudulent.
Their decisions feed back into the training data. This human-in-the-loop process is both an operational necessity and a compliance requirement under Section 37 of the NDPA.
Nigerian-Specific Considerations
Generic fraud detection articles won't tell you this. Nigerian fintech fraud has specific patterns you need to build for.
Mobile Money and USSD
Mobile money dominates. Transactions are typically small, frequent, and high-volume. Your model needs to understand that a customer making 50 small transfers a day is normal behaviour for someone running a business through their phone. False-flagging legitimate mobile money use is the fastest way to lose customers.
USSD transactions carry minimal data — no device fingerprint, no app session data, no geolocation. Your model has to work with less information on this channel. Build separate risk profiles for USSD vs. app-based transactions.
Agent Banking
Agent banking fraud is a growing problem. Agents process transactions on behalf of customers, and a compromised agent can process hundreds of fraudulent transactions before anyone notices. Your model should track agent-level patterns separately — unusual transaction volumes, new customer registrations from a single agent, after-hours activity.
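A simple agent-level check, sketched here, compares today's volume against the agent's own historical baseline. The 3-sigma cutoff and the minimum-history rule are illustrative defaults, not recommendations.

```python
from statistics import mean, stdev

def agent_volume_alert(history, today, sigmas=3.0):
    """True when today's volume is an outlier vs this agent's own baseline."""
    if len(history) < 7:
        return False  # not enough baseline to judge a new agent
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return today != mu
    return abs(today - mu) > sigmas * sd
```

A compromised agent pushing hundreds of fraudulent transactions shows up as exactly this kind of deviation, long before per-transaction scores catch it.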
POS Fraud
POS terminal fraud spiked massively after COVID. Cloned terminals, compromised merchants, collusive transactions. Your model needs terminal-level features: transaction patterns per terminal, geographic clustering, merchant category patterns.
Cross-Border Flows
With Nigerian fintechs now operating across West Africa (and processing diaspora remittances), cross-border transactions add complexity. Legitimate remittance patterns look different from money laundering patterns, and your model needs to distinguish between a Nigerien trader receiving regular payments and a suspicious layering operation.
What It Costs
Let's talk numbers. These are based on what we've built and what the market looks like in Q1 2026.
Custom Build
- Development: ₦5-12 million (£5,000-£12,000) for the full stack — data pipeline, feature store, ML models, scoring API, case management dashboard
- Timeline: 8-14 weeks depending on complexity and data readiness
- Monthly running costs: ₦200-500K (£200-500) for cloud infrastructure, model inference, and monitoring
- Team needed: ML engineer, data engineer, backend developer. Or you outsource the build and manage in-house after handover
Off-the-Shelf
- Vendor solutions (Featurespace, Sardine, Feedzai): $2,000-5,000/month depending on transaction volume
- Pros: Faster to deploy, vendor handles model updates
- Cons: Less control over model tuning, may not handle Nigerian-specific patterns well, ongoing cost scales with volume, data leaves your infrastructure
The ROI Case
If you're processing 100K+ transactions per month, the custom build pays for itself within 6-12 months. You own the model, you tune it to your population, and your marginal cost per transaction drops as volume grows. For smaller fintechs processing under 50K transactions/month, a vendor solution might make more sense until you hit scale.
Either way, the compliance documentation costs are roughly the same — and they're not optional.
The False Positive Problem
This is where most fraud detection systems fail their customers. A typical rule-based system has a 90-95% false positive rate. That means for every 100 transactions it flags, 90-95 are legitimate. You're blocking real customers, creating support tickets, and eroding trust.
AI brings this down to 20-40% false positive rates while catching more actual fraud. But only if you do it right.
Three things matter:
1. Train on your own data. Not a public dataset. Not a vendor's pre-trained model. Your customers, your transaction patterns, your fraud cases. The model needs to learn what "normal" looks like for your population.
2. Use tiered responses. Don't block everything above a threshold. Low-confidence alerts (score 0.5-0.7) get flagged for review. Medium-confidence (0.7-0.85) trigger step-up authentication — OTP, biometric check. High-confidence (0.85+) get blocked pending review. This reduces customer friction on false positives while still catching fraud.
3. Continuously retrain. Fraud patterns change weekly. New scam types appear. Your model needs regular retraining on fresh data, including the feedback from your fraud analysts' decisions. Set up a monthly retraining pipeline at minimum.
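The tiered thresholds above translate directly into code. The cutoffs (0.5/0.7/0.85) come from the text; in practice you'd tune them from your own precision/recall analysis and document that analysis for compliance.

```python
def tiered_response(score: float) -> str:
    """Map a fraud risk score (0-1) to a tiered action (illustrative cutoffs)."""
    if score >= 0.85:
        return "block_pending_review"   # high confidence: hold, human reviews
    if score >= 0.70:
        return "step_up_auth"           # medium: OTP or biometric check
    if score >= 0.50:
        return "flag_for_review"        # low: let through, queue an alert
    return "allow"
```

Keeping this mapping in one small, versioned function also makes the threshold documentation the NDPC expects much easier to produce — the code and the policy are the same artefact.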
Compliance: NDPA and CBN — Both Apply
Your fraud detection system sits under two regulatory regimes simultaneously. You need to satisfy both.
NDPA Requirements
Your system processes personal data — transaction amounts, account details, device fingerprints, location data, behavioural patterns. The NDPA compliance requirements apply in full.
Lawful basis. Legitimate interest is the strongest ground here. You have a legitimate interest in preventing fraud, and that interest doesn't override customer rights — provided you have proper safeguards. Document this reasoning in your records of processing.
DPIA. This is mandatory. AI-powered fraud detection is high-risk automated processing by any reasonable definition. Your DPIA needs to cover what data you process, why automated processing is necessary, the risks to individuals (false positives blocking legitimate transactions, potential for discriminatory patterns), and your safeguards. If you need help structuring this, we've written a step-by-step guide to DPIAs for AI systems.
Section 37 — Automated decision-making. If your system blocks transactions or freezes accounts without human review, Section 37 of the NDPA applies. You need to provide meaningful information about how the system works, and customers must have the right to request human intervention. Your tiered response model handles this: auto-blocking is only for high-confidence scores, and every blocked customer has a path to human review.
Data Processing Agreements. If you're using cloud AI services (AWS SageMaker, Google Vertex, Azure ML), you need DPAs with those providers covering what data they access, where it's processed, and retention terms.
CBN Requirements
The CBN's March 2026 AML automation standards add specific requirements for fraud detection systems in regulated financial institutions:
- Automated transaction monitoring — not optional for licensed fintechs
- Suspicious activity detection and reporting — your system must generate SARs (Suspicious Activity Reports) for the Nigerian Financial Intelligence Unit
- Customer risk scoring — ongoing, not just at onboarding
- Model governance — documented model development, validation, testing, and audit trails
- Regulatory reporting — regular compliance reports to the CBN
For fintechs operating in the CBN regulatory sandbox, having proper AI governance documentation isn't just good practice — it's a condition of participation.
What You Need to Document
At minimum, your fraud detection AI needs:
- A DPIA covering data flows, risks, and safeguards
- A model card describing the algorithm, training data, performance metrics, and known limitations
- A bias assessment — does the model perform differently across customer demographics, regions, or transaction channels?
- Decision threshold documentation — why is 0.85 the auto-block threshold? What analysis supports this?
- Human review process — who reviews flagged transactions, what's the SLA, how do customers appeal?
- Data retention policy — how long do you keep transaction data, model training data, and investigation records?
- Incident response plan — what happens when the model fails? When a new fraud type bypasses it? When a customer is wrongly blocked?
- Retraining schedule and governance — who approves model updates? How is new training data validated?
The fintechs that go through proper AI compliance exercises end up with better-performing models, not just better paperwork. Documentation forces you to think through edge cases, bias risks, and failure modes that you'd otherwise miss.
Build It Right the First Time
You can bolt compliance documentation onto an existing system after the fact. But it's cheaper and faster to build it in from the start.
If you're building a new fraud detection system — or rebuilding one that's outgrown its rule-based origins — we build AI systems with compliance documentation included. Not as an afterthought. As part of the engineering process.
The model card gets written alongside the model. The DPIA gets drafted during architecture design. The bias assessment runs as part of model validation. You get a working system AND the documentation that keeps the NDPC and CBN happy.
Want to discuss your fraud detection build? Get in touch — we'll scope it and give you a clear timeline and cost. AI fraud detection systems for Nigerian fintechs typically fall in the ₦5-12 million range depending on complexity. Check our services page for more detail on what's included.
Frequently Asked Questions
How much does AI fraud detection cost for a Nigerian fintech?
A custom AI fraud detection system costs ₦5-12 million (£5,000-£12,000) to build. Running costs are ₦200-500K/month (£200-500) for hosting, model inference, and data feeds. Off-the-shelf solutions from vendors like Featurespace or Sardine start at $2,000-5,000/month. For mid-size fintechs processing 100K+ transactions/month, the custom build ROI is stronger because you own the model and can tune it to Nigerian transaction patterns.
What types of fraud can AI detect?
Transaction fraud (unusual amounts, velocities, or patterns), account takeover (login anomalies, device changes, behavioural shifts), identity fraud (synthetic identities, document forgery), payment fraud (card-not-present, authorised push payment scams), and money laundering patterns (structuring, layering, unusual cross-border flows). AI detects patterns that rule-based systems miss — especially novel fraud types that don't match existing rules.
Does my fraud detection AI need NDPA compliance?
Yes. Your fraud detection system processes personal data — transaction histories, device fingerprints, location data, behavioural patterns. The NDPA applies. You need a lawful basis (legitimate interest in preventing fraud), a DPIA because this is high-risk automated processing, and Section 37 compliance if the system makes decisions affecting customers (blocking transactions, freezing accounts). You also need DPAs with any cloud AI providers processing the data.
How do I reduce false positives in AI fraud detection?
Three approaches: train on high-quality labelled Nigerian transaction data (not generic global datasets), implement tiered response (low confidence = flag for review, high confidence = auto-block), and continuously retrain as fraud patterns evolve. Most rule-based systems have 90-95% false positive rates. AI reduces this to 20-40% while catching more actual fraud. The key is tuning to your specific transaction population — Nigerian mobile money patterns are different from UK card transactions.
What does the CBN expect for fraud detection systems?
The CBN's March 2026 AML automation standards require automated transaction monitoring, suspicious activity detection, customer risk scoring, and regulatory reporting. For fraud detection specifically, they expect documented governance frameworks, model validation, regular testing, and audit trails. Having proper compliance documentation strengthens your position with the CBN and is mandatory for regulatory sandbox participation.