/ developer surface · v4.2.1
Bolehlah.ai is the engine brand of the AI credit intelligence platform that powers licensed lenders across Southeast Asia. Direct API access to /decisions, /scoring, /inference — trained on regional borrower data, explainable by design.
$ curl -X POST https://api.bolehlah.ai/v1/decisions \
    -H "Authorization: Bearer sk_live_..." \
    -d '{
      "applicant": { "ic_hash": "sha256:7a3b...", "cohort": "MY_GOV" },
      "request": { "principal": 25000, "tenure_months": 60 },
      "context": { "salary": 5250, "commitments": 1008.45 }
    }'

> HTTP/2 200
{
  "decision": "approve",
  "score": 78,
  "probability_default": 0.042,
  "inference_latency_ms": 138,
  "model_version": "v4.2.1",
  "explanation": {
    "dsr_weight": 0.40,
    "ctos_weight": 0.30,
    "tenure_weight": 0.15,
    "capacity_weight": 0.10,
    "history_weight": 0.05
  },
  "audit_hash": "0x8af2c41e...c91e"
}
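The sample response weights debt-service ratio (DSR) most heavily, and the request context carries the two figures it needs. A minimal sketch, assuming DSR is monthly commitments divided by gross monthly salary (the platform's exact formula is not documented here):

```python
def debt_service_ratio(commitments: float, salary: float) -> float:
    """Monthly commitments as a fraction of gross monthly salary.

    Assumed formula for illustration only; the platform's actual
    DSR definition is not specified in this document.
    """
    if salary <= 0:
        raise ValueError("salary must be positive")
    return commitments / salary

# Figures from the request context above
dsr = debt_service_ratio(commitments=1008.45, salary=5250)
print(round(dsr, 4))  # 0.1921
```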
API surface
/v1/decisions
Run a single applicant through the full decision pipeline. Returns verdict, score, default probability, factor weights, and an audit hash.
/v1/scoring
Score-only endpoint for portfolio sweeps. Accepts batches of up to 10,000 applicants; returns Bolehlah scores and cohort-relative percentiles.
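Because /v1/scoring caps a batch at 10,000 applicants, a client sweeping a larger portfolio has to split its submissions. A hedged sketch of the chunking step (the `{"applicants": [...]}` envelope is an assumed shape, not the documented schema):

```python
from typing import Iterator

MAX_BATCH = 10_000  # documented /v1/scoring batch limit

def scoring_batches(applicants: list[dict], limit: int = MAX_BATCH) -> Iterator[dict]:
    """Yield /v1/scoring request bodies, each within the batch limit.

    The request envelope here is an illustrative assumption; consult
    the real API reference before use.
    """
    for start in range(0, len(applicants), limit):
        yield {"applicants": applicants[start:start + limit]}

# A 25,000-applicant sweep splits into three requests: 10k, 10k, 5k.
pool = [{"ic_hash": f"sha256:{i:04x}"} for i in range(25_000)]
sizes = [len(b["applicants"]) for b in scoring_batches(pool)]
print(sizes)  # [10000, 10000, 5000]
```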
/v1/inference
Raw inference endpoint for research and custom workflows. Accepts feature vectors, returns model outputs with full feature attribution.
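A raw-inference call ships a plain feature vector rather than structured applicant fields. A minimal request-builder sketch, assuming illustrative field names and a flat numeric vector (neither is documented here):

```python
def inference_request(features: list[float], model_version: str = "v4.2.1") -> dict:
    """Assemble an assumed /v1/inference request body.

    Field names and vector layout are illustrative assumptions,
    not the documented schema.
    """
    if not all(isinstance(x, (int, float)) for x in features):
        raise TypeError("feature vector must be numeric")
    return {"model_version": model_version, "features": features}

body = inference_request([0.19, 0.72, 60, 25000])
print(body["model_version"])  # v4.2.1
```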
Model card
Every decision returns a weighted factor breakdown that regulators, auditors, and borrowers can read. Nothing is a black box.
Model card available to deployed institutions and authorised regulators.
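A consumer of the factor breakdown can sanity-check it and rank factors for display. A small sketch using the weights from the sample /v1/decisions response above (that the weights sum to one is an assumption about the contract, consistent with the sample):

```python
import math

explanation = {  # factor weights from the sample response
    "dsr_weight": 0.40,
    "ctos_weight": 0.30,
    "tenure_weight": 0.15,
    "capacity_weight": 0.10,
    "history_weight": 0.05,
}

# Assumed contract: the breakdown is a complete partition of the decision
assert math.isclose(sum(explanation.values()), 1.0)

# Factors ordered by influence, largest first
ranked = sorted(explanation, key=explanation.get, reverse=True)
print(ranked[0])  # dsr_weight
```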
bolehlah-credit-core · v4.2.1
released 2026-03-18
Benchmarks
Figures reported by deployed institutions that ran Bolehlah.ai against their internal scorecards over 12 months, on equivalent applicant pools.
Default reduction
-35%
vs. rulebook underwriting baseline
Approval lift
+22%
at equivalent default rates
Autonomous rate
80%
decisions with zero human review
Decision time
3 s
end-to-end borrower journey
Architecture
// First-class entities in the platform schema
decisions         // one row per AI verdict, immutable
inferences        // raw model outputs, feature attribution
model_versions    // deployed versions + A/B test allocations
training_samples  // anonymised outcomes used for retraining
---
loans             // delivery artefact, not the source of truth
borrowers         // identity + cohort attachment
lenders           // deployed institutions, configured thresholds
Telemetry
Every request emits decisions_served, inference_latency_ms, model_version_deployed, explanation_generated to the observability plane.
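The four metrics above can be bundled into one event per request. A minimal sketch (the event envelope, timestamp field, and emit mechanism are assumptions for illustration):

```python
import time

def telemetry_event(latency_ms: int, model_version: str) -> dict:
    """Assemble the per-request metrics named in the text.

    Envelope shape and the "ts" field are illustrative assumptions;
    only the four metric names come from the document.
    """
    return {
        "decisions_served": 1,
        "inference_latency_ms": latency_ms,
        "model_version_deployed": model_version,
        "explanation_generated": True,
        "ts": time.time(),
    }

event = telemetry_event(latency_ms=138, model_version="v4.2.1")
print(event["inference_latency_ms"])  # 138
```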
Ledger
Every decision is hashed and anchored to Hyperledger Besu. Tamper-evident audit trail. Regulators can verify any historical decision in under a second.
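Tamper evidence starts with a deterministic hash of the decision payload before it is anchored on-chain. A sketch of that client-side hashing step, assuming canonical JSON with sorted keys (the platform's actual canonicalisation is not documented here, and anchoring to Besu is out of scope):

```python
import hashlib
import json

def audit_hash(decision: dict) -> str:
    """Deterministic SHA-256 hash of a decision payload.

    Canonicalisation (sorted keys, compact separators) is an assumed
    convention; any change to the payload changes the hash, which is
    what makes the trail tamper-evident.
    """
    canonical = json.dumps(decision, sort_keys=True, separators=(",", ":"))
    return "0x" + hashlib.sha256(canonical.encode()).hexdigest()

d = {"decision": "approve", "score": 78}
assert audit_hash(d) == audit_hash({"score": 78, "decision": "approve"})  # key order irrelevant
assert audit_hash(d) != audit_hash({"decision": "approve", "score": 79})  # tamper detected
```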
Training
Aggregated, anonymised outcomes flow into the training pipeline under a permanent licence per deployed-institution contract. The flywheel compounds.
API access is granted to deployed institutions, regulators, and credentialed AI research teams. Sandbox keys available in under 48 hours.