
What Building an SEC-Regulated Platform Taught Me About AI Compliance

Lessons from building a nationwide SEC-registered robo-advisor — and why they apply directly to AI in credit unions.

Sean Hsieh

Founder & CEO, Runline

Article 2 (Updated): "What Building an SEC-Regulated Platform Taught Me About AI Compliance"

Track: The Founder's Journey (amber) | Arc: Origin Story | Target: CEOs, Compliance Officers, Board Members | Length: ~2,000 words


Opening Hook

(the first SEC examination moment)

Act 1 — The SEC Crucible (enriched)

  • Concreit Fund Management LLC — a nationwide SEC-registered robo-advisor (registered investment adviser under the Investment Advisers Act of 1940). Not a chatbot. Not a demo. A live platform managing real money, subject to SEC examination
  • Now a multi-state manager following regulatory adjustments — navigating the evolving landscape of state-by-state vs. federal registration as regs shift
  • Also registered as a transfer agent for our subsidiary — meaning we operate under two distinct SEC regulatory frameworks simultaneously (Investment Advisers Act + Section 17A of the Securities Exchange Act of 1934)
  • What SEC examinations actually involve: pre-examination document requests, on-site inspections, compliance program reviews, books and records examination, testing of your representations against reality
  • The muscle this builds: you don't get to say "our algorithm does X" — you have to prove it does X, show how it does X, and demonstrate what happens when X fails
  • Reference: SEC 2026 Examination Priorities — AI explicitly listed as operational risk area
  • Reference: SEC "AI Washing" enforcement actions — the SEC is cracking down on companies that claim AI capabilities they can't substantiate. If you say "AI-powered," you'd better be able to open the hood for an examiner.

Act 2 — The Translation: SEC Muscle → Credit Union AI (enriched)

  • Credit unions are about to face their own version of this. NCUA's AI Compliance Plan (September 2025) gives CUs a 12-18 month window to stand up AI monitoring, control, and termination capabilities
  • The standards are forming, but nobody's landed yet. This is the gap:
    • NIST AI Risk Management Framework (AI RMF 1.0) — NCUA itself recommended this as the governance baseline. Voluntary, but becoming the de facto standard for financial institutions
    • ISO/IEC 42001 — First international AI management system standard. Certifiable. Early adopters in financial services getting certified now
    • COSO GenAI Risk & Controls — Published this week (Feb 23, 2026). Internal controls framework specifically for generative AI. Think SOX compliance extended to AI.
    • HITRUST AI Security Assessment — 44 controls, certifiable today. The closest thing to "SOC 2 for AI" that actually exists right now
    • AICPA — Developing AI assurance engagement guidance, extending SOC 2 Trust Services Criteria for AI. Not finalized yet — estimated 12-18 months
    • Colorado AI Act — Takes effect June 2026. First US state law requiring impact assessments for "high-risk" AI in consequential decisions (lending, insurance). Template for what other states will follow.
    • GAO Report (GAO-25-107197, May 2025) — Explicitly called out NCUA's gaps: limited model risk management guidance and no vendor examination authority. Translation: your regulator can't catch AI risk at your vendors. You have to self-govern.
  • Key insight: There's no "SOC 2 for AI" stamp you can buy yet. The pieces are converging (NIST + ISO + COSO + HITRUST + AICPA), but right now it's a patchwork. CUs that wait for a single standard to emerge will be 2 years behind CUs that start building governance infrastructure now.
  • The parallel: When Concreit launched, there was no "robo-advisor compliance playbook." We had to compose our own from existing SEC frameworks. CUs deploying AI are in that exact same position today.
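To make "compose your own playbook" concrete, the patchwork above can be sketched as a simple control matrix: internal controls on one axis, the frameworks that speak to each control on the other. Every control name and mapping below is illustrative — an assumption for the sketch, not language lifted from the standards themselves.

```python
# Illustrative control matrix: which frameworks inform which internal
# controls. Control names and mappings are hypothetical examples, not
# the standards' actual control catalogs.
CONTROL_MATRIX = {
    "model_inventory": ["NIST AI RMF: Map", "ISO/IEC 42001", "HITRUST AI"],
    "human_oversight": ["NIST AI RMF: Govern", "Colorado AI Act", "COSO GenAI"],
    "continuous_monitoring": ["NIST AI RMF: Measure", "NCUA AI Compliance Plan"],
    "kill_switch": ["NCUA AI Compliance Plan", "ISO/IEC 42001"],
    "vendor_due_diligence": ["GAO-25-107197 gap", "HITRUST AI"],
}

def coverage(framework: str) -> list[str]:
    """Return the internal controls that draw on a given framework."""
    return sorted(
        control
        for control, sources in CONTROL_MATRIX.items()
        if any(framework in source for source in sources)
    )
```

The point of the exercise: no single row is covered by one framework alone, and no single framework covers every row — which is exactly why waiting for a unified "SOC 2 for AI" stamp leaves a gap today.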

Act 3 — The CTO Gap (New Section)

  • Here's the uncomfortable truth: most credit unions don't have a CTO. Not a VP of IT who manages vendors — a CTO who thinks in systems, architectures, and technical strategy
  • The CU ecosystem has COOs, compliance officers, VP of lending, VP of member services. But the person who can evaluate whether an AI vendor's architecture is examiner-defensible? That role barely exists
  • This isn't a criticism — it's a structural reality of institutions built on relationship banking, not software engineering
  • For an AI future, organizations must become AI-native and AI-fluent. Not "we bought an AI tool" fluent — "we understand what's happening inside the tool and can defend it to an examiner" fluent
  • This forces CUs into an uncomfortable but necessary position: thinking like an AI-native FinTech while operating as a member-owned cooperative
  • The good news: you don't have to become a tech company. You need a partner who is one and who builds for your regulatory reality — not Silicon Valley's "ship first, comply later" reality
  • The CU leadership pipeline was built for a different era. The average CU CEO tenure is 15+ years. The average CUSO CTO (where one exists) is managing legacy core integrations, not evaluating transformer architectures. This gap is the single biggest risk in CU AI adoption — bigger than the technology itself.

Act 4 — The Vendor Problem (same as before, enhanced)

  • Most AI vendors selling to CUs have never operated under financial regulation themselves
  • The questions your AI vendor should answer (same list)
  • New addition: Ask your vendor which of these frameworks they align to: NIST AI RMF? ISO 42001? HITRUST AI? If they can't name one, they're building for demos, not examinations.
  • Reference: GAO finding — NCUA has no vendor examination authority. Your vendor's AI is your responsibility.

Act 5 — The Runline Lens (Closing) (enriched)

  • This is why Runline builds the way we do: SEC examination muscle applied to credit union AI infrastructure
  • Every agent action logged. Every decision auditable. Every system stoppable in seconds. Not because it's trendy — because we've been examined and we know what examiners actually ask for
  • The credit unions that treat compliance frameworks (NIST AI RMF + ISO 42001 + NCUA guidance) as a design spec — not a checkbox — will be the ones that deploy AI with confidence
  • The organizations that will lead the AI era aren't the ones with the most technology — they're the ones with the most regulatory fluency. And that's a muscle credit unions can build.
  • Callback to Article 1: Concreit Fund Management LLC → Runline. The regulatory infrastructure didn't disappear — it transferred. The SEC examination mentality is now baked into every agent we build.
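The examination posture above — every action logged before it runs, every system stoppable in seconds — can be sketched in a few lines. This is a minimal illustration of the pattern, not Runline's actual implementation; the class and field names are hypothetical.

```python
# Minimal sketch of an examiner-defensible agent wrapper: actions are
# recorded to an audit log before they execute, and a kill switch causes
# all subsequent actions to be refused (and the refusal itself logged).
import time


class AuditedAgent:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.halted = False
        self.audit_log: list[dict] = []  # in production: an append-only store

    def halt(self, reason: str) -> None:
        """Kill switch: refuse all further actions from this agent."""
        self.halted = True
        self._record("HALT", {"reason": reason})

    def act(self, action: str, params: dict) -> bool:
        """Attempt an action; return False if the agent has been halted."""
        if self.halted:
            self._record("REFUSED", {"action": action})
            return False
        self._record("ACTION", {"action": action, "params": params})
        return True

    def _record(self, event: str, detail: dict) -> None:
        self.audit_log.append({
            "ts": time.time(),
            "agent": self.agent_id,
            "event": event,
            "detail": detail,
        })


agent = AuditedAgent("loan-triage-01")
agent.act("score_application", {"application_id": "A-1001"})
agent.halt("examiner request")
refused = agent.act("score_application", {"application_id": "A-1002"})
```

The design choice worth noting: the refusal after the halt is itself a log entry. When an examiner asks "what happened after you shut it off," the answer is in the record, not in someone's recollection.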

Key References

  1. SEC Examination process for registered investment advisers (Investment Advisers Act of 1940)
  2. SEC Transfer Agent registration (Section 17A, Securities Exchange Act of 1934)
  3. SEC 2026 Examination Priorities — AI as operational risk
  4. SEC "AI Washing" enforcement actions
  5. NCUA AI Compliance Plan (Sept 2025) — monitoring, control, termination
  6. NIST AI RMF 1.0 — NCUA-recommended governance baseline
  7. ISO/IEC 42001 — AI management system standard (certifiable)
  8. COSO GenAI Risk & Controls (Feb 2026)
  9. HITRUST AI Security Assessment — 44 controls
  10. GAO-25-107197 — NCUA gaps in AI governance / vendor oversight
  11. Colorado AI Act (effective June 2026)
  12. AICPA AI assurance guidance (in development)

Tone Calibration

  • Empathy: "You're being asked to govern technology that didn't exist three years ago, using standards that haven't been finalized yet. I get it — I've been there. The good news: you don't need to wait for perfection."
  • Curiosity: Fascinated by the convergence — five different bodies (NIST, ISO, COSO, HITRUST, AICPA) all racing to define "AI governance" from different angles. History rhymes: this is what the early internet compliance landscape looked like.
  • Silicon Valley lesson: The best-run AI companies (Anthropic, OpenAI) are voluntarily adopting governance frameworks before regulation forces it. The credit unions that do the same will be the ones examiners hold up as models.
  • Spicy take: The CTO gap isn't just a hiring problem — it's a fluency problem. You need leaders who can think like AI-native fintechs while operating with cooperative values. That combination barely exists in the market. Which is exactly why an external AI partner with regulatory DNA matters.