
Cristian Tala

Posted on • Originally published at cristiantalasanchez.hashnode.dev

The Complete Skool API: 9 Months of Reverse-Engineering 33 Actions (n8n + TypeScript + AI Agents)

TL;DR: I spent 9 months reverse-engineering Skool.com — the community platform with millions of paying members but no public API. The result is a production Apify actor that handles posts, comments, members, classroom courses, file uploads, Auto DM, and group settings — usable from n8n, Make.com, Zapier, or LLM agents (Claude, ChatGPT, LangChain) with pay-per-event pricing. Documentation, recipes, and the full API reference are at github.com/ctala/skool-api-docs. This post is the technical story: what I learned, where Skool's architecture surprised me, and how this is being used in production today.


The problem: Skool has no public API

Skool is one of the fastest-growing community platforms — used by creators, course sellers, agencies, and SaaS founders to host paying communities ($30K+ MRR cases are common). It has tens of thousands of paying communities and millions of members.

It has zero public API.

If you're an admin running a community at scale, this means:

  • Manually approving every new member application
  • Manually replying to welcome threads
  • Manually uploading course content one page at a time
  • Manually copying content from other platforms into Skool
  • No way to integrate Skool with n8n, Make.com, or your CRM
  • No way to build AI agents that operate inside your community

For a $30K MRR community, "manual" stops scaling fast. The official recommendation is "hire a community manager." That's a $3-5K/month line item to do data entry. For a single founder with AI agents available, it's absurd.

So I started reverse-engineering.

Skool API architecture: SSR for reads, REST for writes

When you look at Skool with browser DevTools, you don't see clean REST endpoints. You see two patterns intermixed:

  1. Reads go through Next.js SSR data endpoints: /_next/data/{buildId}/{slug}.json. These return the same data the page would render server-side, in JSON. Fast, public-ish (still requires auth cookies), and oddly stable. The buildId changes on every Skool deploy (~weekly), so you need to refresh it dynamically.

  2. Writes go through a separate REST API at api2.skool.com. Create post, update comment, approve member, ban — all POST/DELETE to api2.skool.com/.... Uses Authorization: Bearer ... with JWT tokens.

This split is unusual. Most platforms either: (a) have a uniform REST/GraphQL API, or (b) hide reads behind an internal /api/... route on the same domain. Skool does neither.

Why it matters for an API consumer: you can't just hit one base URL. Your library needs two clients — one for SSR reads (with buildId rotation logic), one for REST writes (with bearer token rotation logic).

The buildId rotation gotcha

Every Skool deploy invalidates the cached buildId. If your library hardcodes one, all reads stop working until you refresh.

The fix: extract the current buildId from the homepage HTML (/) before reading. Skool exposes it in a <script id="__NEXT_DATA__"> tag that contains {"buildId":"..."}. Parse it, cache it, retry your read.

Initially I extracted from /dashboard, but Skool quietly removed that route in March 2026. Switched to / with /about fallback. Lesson: buildId extraction needs to live in your library as a refreshable concern, not a constant.
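The refresh logic is small enough to sketch. A minimal version, with the caching and fallback behavior described above (function names are mine, not the actual skool-js exports):

```typescript
// Matches the buildId inside the <script id="__NEXT_DATA__"> JSON blob.
const BUILD_ID_RE = /"buildId":"([^"]+)"/;

// Pure parser: pull the buildId out of a page's HTML.
function extractBuildId(html: string): string {
  const match = html.match(BUILD_ID_RE);
  if (!match) throw new Error("No buildId found in page HTML");
  return match[1];
}

// Cached, refreshable wrapper: try "/" first, fall back to "/about".
// fetchPage is injected so this stays testable without hitting the network.
let cachedBuildId: string | null = null;

async function getBuildId(fetchPage: (path: string) => Promise<string>): Promise<string> {
  if (cachedBuildId) return cachedBuildId;
  for (const path of ["/", "/about"]) {
    try {
      cachedBuildId = extractBuildId(await fetchPage(path));
      return cachedBuildId;
    } catch {
      // page missing or markup changed — try the next candidate
    }
  }
  throw new Error("Could not refresh buildId from any known page");
}

// On a 404 from /_next/data/..., clear the cache and retry the read once.
function invalidateBuildId(): void {
  cachedBuildId = null;
}
```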

The Skool data model: posts and comments are the same object

This is the elegant part of Skool's data model and the part most reverse engineers miss.

In Skool, a post and a comment are the same database entity. The difference is just two fields:

  • rootId: the original post ID. For a top-level post, rootId == id. For a comment, rootId points to the post it's commenting on.
  • parentId: the immediate parent. For a top-level post, parentId == id. For a comment on a post, parentId == postId. For a nested reply, parentId == commentId.

This means:

  • Reply to a post: create with rootId = postId, parentId = postId
  • Reply to a comment (nested): create with rootId = postId, parentId = commentId
  • Edit a comment: use the post-update endpoint with the comment's ID
  • Delete a comment: use the post-delete endpoint with the comment's ID

There is no /comments endpoint. There is no comments: namespace. Everything is posts:.

Once you see this, the whole API gets simpler. I exposed it as the same Posts module in my library, with createComment() just being a thin wrapper around createPost() that sets rootId and parentId correctly.
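The wrapper is nearly a one-liner. A sketch of the idea (the parameter names mirror the fields above; the low-level call is a stub, not the real api2.skool.com client):

```typescript
interface CreatePostParams {
  content: string;
  rootId?: string;   // original post; equals the entity's own id for top-level posts
  parentId?: string; // immediate parent; equals the entity's own id for top-level posts
}

// Hypothetical low-level call to the write API — injected as a stub here.
type CreatePostFn = (params: CreatePostParams) => Promise<{ id: string }>;

// Comment on a post: root and parent are both the post.
function createComment(createPost: CreatePostFn, postId: string, content: string) {
  return createPost({ content, rootId: postId, parentId: postId });
}

// Nested reply: root stays the post, parent is the comment being replied to.
function createReply(createPost: CreatePostFn, postId: string, commentId: string, content: string) {
  return createPost({ content, rootId: postId, parentId: commentId });
}
```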

Skool content format: plain text for posts, TipTap JSON for classroom

For posts and comments: plain text. No HTML, no markdown rendering server-side. The Skool editor handles formatting client-side via simple character codes (**bold** becomes bold in the UI, but the stored content is literal characters).

For classroom course bodies: TipTap JSON. Skool's classroom uses TipTap (the rich text editor library built on ProseMirror) and stores course/lesson bodies as TipTap JSON documents.

This split made classroom:setBody the most complex action in my library. To make it usable by non-developers, I wrote a markdown → TipTap converter from scratch (zero dependencies, ~500 LOC) that handles:

  • Headings (h1-h6)
  • Bold, italic, code, links
  • Bullet and ordered lists
  • Code blocks
  • Blockquote callouts (Skool renders these as colored boxes)
  • Tables (simple)
  • Images and embeds

This means you can write your course content in .md files in Git, push to your repo, and have a CI job publish updates to Skool. Course-as-code.
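To make the TipTap target concrete, here is a deliberately tiny sketch of the conversion — headings and paragraphs only, nothing like the full ~500 LOC converter, but it shows the document shape Skool's classroom expects (a ProseMirror-style "doc" root with block nodes):

```typescript
interface TipTapNode {
  type: string;
  attrs?: Record<string, unknown>;
  content?: TipTapNode[];
  text?: string;
}

function mdToTipTap(markdown: string): TipTapNode {
  const content: TipTapNode[] = [];
  for (const line of markdown.split("\n")) {
    if (!line.trim()) continue;
    const heading = line.match(/^(#{1,6})\s+(.*)$/);
    if (heading) {
      // "## Foo" → heading node with level 2
      content.push({
        type: "heading",
        attrs: { level: heading[1].length },
        content: [{ type: "text", text: heading[2] }],
      });
    } else {
      content.push({
        type: "paragraph",
        content: [{ type: "text", text: line }],
      });
    }
  }
  // TipTap documents are ProseMirror docs: a root "doc" node with block content.
  return { type: "doc", content };
}
```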

Skool API authentication: cookies, JWT, and AWS WAF tokens

Skool sits behind AWS WAF (Web Application Firewall) Captcha. Your authenticated session has three cookies that matter:

  • auth_token: JWT, ~30 day expiry
  • client_id: device fingerprint, ~1 year
  • aws-waf-token: rotating, ~3.5 day expiry

If aws-waf-token expires and you keep using the cached cookies, you get 403 errors that look like auth failures. The fix isn't to refresh auth_token — it's to do a full Playwright login (which solves the WAF challenge and gets a fresh aws-waf-token).

My actor exposes this as auth:login — runs Playwright once, returns a cookies string with all three tokens, and your subsequent calls can reuse those cookies for ~3.5 days at ~2s per call (no browser needed for reads/writes once you have valid cookies).

Why this matters for scheduled jobs

If you run a cron that hits Skool daily, you need to re-authenticate every 3-3.5 days. Either schedule it explicitly or build a retry-on-403 pattern that triggers re-auth. The actor returns structured errorCategory: "auth_error" with errorCode: "WAF_EXPIRED" so n8n / Make.com workflows can branch on it.
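The retry-on-403 pattern can be sketched in a few lines. This assumes the structured error shape described above; the function names and injection style are illustrative, not the actor's internals:

```typescript
interface ActorResult {
  success: boolean;
  errorCategory?: string;
  errorCode?: string;
  [key: string]: unknown;
}

type CallFn = (cookies: string) => Promise<ActorResult>;
type LoginFn = () => Promise<string>; // runs auth:login, returns a fresh cookies string

async function withReauth(call: CallFn, login: LoginFn, cookies: string): Promise<ActorResult> {
  const first = await call(cookies);
  // Branch on the structured error, not on raw HTTP status.
  if (!first.success && first.errorCode === "WAF_EXPIRED") {
    const fresh = await login(); // full Playwright login → new aws-waf-token
    return call(fresh);          // retry once with fresh cookies
  }
  return first;
}
```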

From skool-js TypeScript library to a production Skool API actor on Apify

After 6 months of building skool-js internally for my own use (operating Cágala, Aprende, Repite, a 500+ member community), I realized other community operators had the same problem. The internal library was solid (the test suite covered ~85% of real-world use cases), but distributing a TypeScript library as the consumption layer was wrong: most community operators aren't developers, and shipping it as open source would have meant the project losing maintenance the moment my own community stopped needing updates.

The fix: wrap it in an Apify actor with three properties:

  1. Single HTTP endpoint — any HTTP client (curl, Postman, n8n's HTTP node, Make.com's HTTP module) can call it
  2. Pay-per-event pricing — $0.005 per dataset result, $0.01 per write operation, $0.05 per scrape operation. No subscription, no minimums. You pay for what you use.
  3. Action-based API — single input with action: "posts:create" (or 33 other actions), groupSlug, cookies, and params. Consistent shape across the entire surface.

Single consumption layer: the Apify actor, HTTP-callable from anything (n8n, Make.com, Zapier, Pipedream, custom backends, LLM tool-use), pay-per-event pricing, no infra to maintain on your side. The internal skool-js library powers it but stays private — that's what keeps the actor sustainably maintained instead of becoming another abandoned reverse-engineering project on GitHub.
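The consistent input shape means a single helper covers the whole surface. A sketch using Apify's standard run-sync endpoint (the input fields match the actor contract described above; the helper names are mine):

```typescript
interface SkoolApiInput {
  action: string;                    // e.g. "posts:create" — one of the 33 actions
  groupSlug: string;
  cookies: string;                   // reused from a previous auth:login run
  params?: Record<string, unknown>;
}

// Apify's run-sync-get-dataset-items endpoint runs the actor and returns
// its dataset items in one HTTP round trip.
function buildActorUrl(actorId: string, token: string): string {
  return `https://api.apify.com/v2/acts/${actorId}/run-sync-get-dataset-items?token=${token}`;
}

async function callSkoolApi(apifyToken: string, input: SkoolApiInput): Promise<unknown> {
  const res = await fetch(buildActorUrl("cristiantala~skool-all-in-one-api", apifyToken), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  return res.json();
}
```

The same URL and body shape work from n8n's HTTP Request node, Make.com's HTTP module, or plain curl — only the `action` value changes per call.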

33 Skool API actions: posts, comments, members, classroom, files, groups

The actor exposes these namespaces:

Posts — list, filter (by date, by unanswered, combine criteria), get, create, update, delete, pin/unpin, vote (like/unlike), createComment, getComments (REST, fast, max ~35), getCommentsFull (Playwright scroll, returns ALL comments in a thread bypassing Skool's REST cap)

Members — list active, list pending applications, approve, reject, ban, batch approve

Events — list all calendar events, list upcoming

Classroom (courses) — create course, create folder, create page (lesson), set body from markdown, update course/page (preserves privacy/min_tier/amount), delete unit (cascades), get full tree, list courses, update resources (downloadable files)

Files — upload cover image, upload private file (PDF/JSON/ZIP for classroom Resources with privacy:1)

Groups — get group info, set Auto DM message (with #NAME# and #GROUPNAME# tokens)

System — health check (no auth, no Skool calls — deterministic 2s response for monitoring)

Every action returns either the requested data or a structured error payload (success: false, errorCode, errorCategory, retryable, hint). The hint field is designed for LLM tool-use: an agent can read it and self-correct on errors like missing categories or expired auth.

How to integrate Skool with n8n, Make.com, and Zapier

The Skool API actor is designed as a single HTTP endpoint, which makes it drop-in compatible with every workflow automation platform:

n8n: use the HTTP Request node. One node per Skool action. Pattern: auth:login once → save cookies to workflow variable → reuse cookies across subsequent calls. A free n8n template shows the exact wiring for auto-approving members with GPT-4o screening.

Make.com: use the HTTP module. Same pattern — login once, reuse cookies. The structured errorCategory field in actor responses lets you build Router branches without try/catch logic.

Zapier: works via webhook-trigger + HTTP action. Zapier's free tier supports this but you'll hit the task limit fast on busy communities. n8n self-hosted is the cost-efficient option once you have >10 daily workflow runs.

Pipedream: native HTTP support. Drop the actor URL into any Pipedream step, pass cookies from a stored secret.

Why a single-endpoint, action-based API beats traditional REST for these platforms: every workflow node maps to one HTTP request with a different action value. No URL templating, no header juggling per endpoint, no documentation hunt for "is it POST or PUT for editing?". Just action: "posts:update" and you're done.

Using the Skool API with AI agents (Claude, ChatGPT, MCP, LangChain)

Because every action has a consistent shape and structured error responses with recovery hint fields, the Skool API actor is unusually well-suited as a tool in LLM tool-calling stacks:

Anthropic Claude (tool use): define a single tool skool_api with parameters action (enum of 33 values), groupSlug, cookies, and params. The model picks the right action per user request. Error hints feed back into the conversation for self-correction.

OpenAI function calling: same pattern, define one function. Function calling format is compatible.

LangChain Tool: wrap in a Tool class. The actor returns dataset arrays which LangChain agents handle natively.

Model Context Protocol (MCP): Apify exposes any public actor via https://mcp.apify.com?tools=cristiantala/skool-all-in-one-api. Configure once in Claude Desktop / Cursor / any MCP client, and the model sees all 33 actions as discoverable tools. No separate server to host — Apify handles the MCP layer for you, and the same pay-per-event billing applies (publishers earn from MCP invocations the same as regular runs).

Custom AI agents (OpenClaw, Cline, Aider, etc.): any agent that supports HTTP tool calls works. The actor docs include schemas in JSON Schema format for autogenerated tool definitions.
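The single-tool pattern looks like this in practice — a sketch of a tool definition in Anthropic's tool-use shape (the action enum is abbreviated here; the description wording is my own):

```typescript
const skoolApiTool = {
  name: "skool_api",
  description:
    "Call the Skool API actor. Pick the action matching the user's request; " +
    "on error, read the `hint` field in the response and retry with corrected params.",
  input_schema: {
    type: "object",
    properties: {
      action: {
        type: "string",
        enum: ["posts:create", "posts:update", "members:approve", "classroom:setBody" /* …33 total */],
      },
      groupSlug: { type: "string" },
      cookies: { type: "string" },
      params: { type: "object" },
    },
    required: ["action", "groupSlug", "cookies"],
  },
};
```

One tool instead of 33 keeps the model's tool list small; the enum plus the error hints do the routing work.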

This is where the "structured error payload with hint field" design pays off. A naive HTTP wrapper around Skool's REST endpoints would return raw 422 errors. The actor returns:

```json
{
  "success": false,
  "errorCode": "MISSING_CATEGORY",
  "errorCategory": "skool_api_error",
  "hint": "This Skool group requires posts to have a category. Pass `params.labelId` in posts:create. Get available labels with groups:get."
}
```

An AI agent reads that hint and self-corrects without human intervention. That's the difference between a wrapped REST API and an API designed for agents.

Production use cases (real examples)

These are workflows running in production today:

1. Auto-approve members with AI screening

n8n workflow:

  1. Cron every 6h → auth:login (if cookies expired)
  2. members:pending → returns list of pending applications with whyJoin text
  3. For each: pass whyJoin + LinkedIn URL to GPT-4o with a screening prompt ("does this person fit the community? rate 1-10")
  4. If rating ≥ 7 → members:approve. If < 4 → members:reject with message. Else → manual review in Telegram.

Result: hours per week saved. Spam applications filtered in seconds. Published as a free n8n template.
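Step 4's branching is trivial but worth pinning down, since it's the part an LLM node gets handed. A sketch with the thresholds from the workflow above (the decision names are illustrative):

```typescript
type Decision = "approve" | "reject" | "manual_review";

// rating is GPT-4o's 1-10 fit score from the screening prompt.
function screeningDecision(rating: number): Decision {
  if (rating >= 7) return "approve";       // → members:approve
  if (rating < 4) return "reject";         // → members:reject with message
  return "manual_review";                  // 4-6 → ping a human in Telegram
}
```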

2. Auto-DM new members with personalized welcome

One-time setup with groups:setAutoDM, then runs forever inside Skool's own infra (Skool sends the DM, the actor just sets the template). Use #NAME# and #GROUPNAME# tokens for personalization. Skool's UI limits this to 300 chars, but you can fit a meaningful welcome + first action.

3. Publish a complete course from markdown files

I write course content in .md files in a private GitHub repo. CI job:

  1. Detects changes to a course directory
  2. For each new lesson: classroom:createPage + classroom:setBody (markdown → TipTap conversion)
  3. For each updated lesson: posts:update with new content
  4. For attached resources (PDFs): files:uploadFile (with privacy:1) + classroom:updateResources

Course-as-code. Lessons get reviewed via PR. Skool stays in sync with the repo.
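The CI job's core is a pure mapping from changed files to actor actions. A sketch of that planning step (action names from above; the change-detection shape is illustrative, not my actual pipeline):

```typescript
interface Change {
  path: string;
  status: "added" | "modified";
}

interface Step {
  action: string;
  note: string;
}

function planPublish(change: Change): Step[] {
  if (change.path.endsWith(".md")) {
    return change.status === "added"
      ? [
          { action: "classroom:createPage", note: "create the lesson" },
          { action: "classroom:setBody", note: "convert markdown → TipTap and set body" },
        ]
      : [{ action: "posts:update", note: "push updated content" }];
  }
  if (change.path.endsWith(".pdf")) {
    return [
      { action: "files:uploadFile", note: "upload with privacy:1" },
      { action: "classroom:updateResources", note: "attach to the lesson's Resources" },
    ];
  }
  return []; // non-course files are ignored
}
```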

4. Scrape ALL comments in a thread (bypass Skool's ~35 REST cap)

Skool's REST endpoint for comments returns a maximum of ~35 per thread. For threads with hundreds (welcome threads, AMA posts, popular discussions), posts:getCommentsFull uses Playwright to scroll the post page and extract every comment from the DOM. Slower (~30-60s) and costs $0.05 per invocation, but returns everything.

I use this to audit my welcome thread: 229 introductions, find anyone I haven't replied to yet, generate personalized welcome messages, batch reply.

5. Auto-reply to unanswered posts

Filter posts by commentCount === 0 after 24h, run them through an LLM with the community's context, generate a thoughtful first reply, post via posts:createComment with manual approval before send. Stops valuable questions from sitting in the void.

Skool API alternatives: choosing the right tool for your use case

There are roughly three approaches today to programmatic access to Skool:

  1. Subscription-based API services — third-party developers who built their own reverse-engineered API and resell access via API keys. These typically focus on read operations (list posts, list members) and have limited write coverage. Pricing is monthly subscription, regardless of usage. Good fit if you have predictable monthly volume and only need read access.

  2. Generic scrapers on Apify Store — single-purpose actors that extract one type of data (member emails, course videos, post lists). No write operations. Useful for one-off data extraction, not for ongoing automation.

  3. Full read+write Skool API actor — pay-per-event, complete CRUD across all surface area (posts, comments, members, classroom, files, groups, Auto DM). The Apify actor I built falls in this category. Best fit if you need ongoing writes (auto-approve, auto-comment, course publishing, Auto DM updates).

|                   | Subscription API services | Generic Apify scrapers | This Skool API actor |
|-------------------|---------------------------|------------------------|----------------------|
| Read access       | Limited endpoints         | Single purpose         | All 33 actions       |
| Write access      | Very limited              | None                   | Full CRUD            |
| Classroom support | No                        | Some scrapers only     | Full (create, update, delete, body, resources) |
| Auto DM           | No                        | No                     | Yes (groups:setAutoDM) |
| Pricing model     | Monthly subscription      | Pay-per-event          | Pay-per-event        |
| Best for          | Predictable read volume   | One-off extraction     | Ongoing community automation |
| AI agent ready    | Manual schema             | Manual schema          | Action-based + hint field |

The "best" tool depends on your use case. If you only need to read data once, a generic scraper is fine. If you need ongoing writes (auto-approve members, auto-comment, course publishing, Auto DM updates), the full read+write actor is the only option that won't hit limits.

💡 Try the Skool API actor: apify.com/cristiantala/skool-all-in-one-api. First call gets you a cookies string you can reuse for 3.5 days. Pay-per-event means typical "auto-approve 10 members" run costs $0.10. No subscription, no minimums. Used in production at CAR (500+ members) — before: 4h/week of manual approvals; after: 10 min/week of review.

Why I'm not building Skool's "official" API

Skool will likely build a public API eventually. When they do, mine becomes redundant. That's fine.

In the meantime: every community operator running at scale today either has this problem solved (using a tool like mine, or a custom in-house scraper) or doesn't scale. The market for community operations automation is real — Discord has Apps, Slack has Bots, Circle has Workflows, every modern platform has SOME automation surface. Skool's absence here is the gap, and someone has to fill it until they do.

The actor itself is the product, but everything around it is in the open: github.com/ctala/skool-api-docs has the full API reference, recipes, and a CHANGELOG that tracks every Skool-side change I detect. If you're a Skool community operator and you want to automate, the actor is one HTTP call away. If you're someone at Skool reading this — please ship a public API. I'll happily deprecate the actor when you do.

Architecture decisions I'd make again

If I had to rebuild from scratch tomorrow:

  1. Cookie reuse with explicit refresh action — better than transparent re-auth. Users want to control when Playwright runs (it's the expensive call).
  2. Action-based API instead of REST-style endpoints — posts:create reads better than POST /posts when you're building tool-use schemas for LLMs.
  3. Structured error payloads with hint field — saved me from writing exhaustive error docs. The hint tells the user (or LLM agent) exactly what to do next.
  4. Markdown → TipTap converter as a separate exported function — testable, reusable, and the most common request from users.
  5. Never throw, always push — every error becomes a success: false dataset item. Apify runs never exit_fail. Workflows can branch on errorCategory without try/catch.
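Decision 5 can be sketched in one wrapper — every thrown error is converted into a structured dataset item instead of failing the run (the names are illustrative, not the actual skool-js internals):

```typescript
interface ErrorItem {
  success: false;
  errorCode: string;
  errorCategory: string;
  retryable: boolean;
  hint: string;
}

// Wrap any action: on failure, return a structured error item
// instead of letting the exception fail the Apify run.
async function runAction<T>(
  fn: () => Promise<T>,
  classify: (e: unknown) => ErrorItem,
): Promise<T | ErrorItem> {
  try {
    return await fn();
  } catch (e) {
    return classify(e); // pushed to the dataset; the run still exits successfully
  }
}
```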

The one I'd change: I'd start with TypeScript strict mode from day 1. I retrofitted it after the library was already large and that was painful.

What's next

On the roadmap:

  • Events (list/create/RSVP) — endpoints discovered but not yet stable
  • Analytics (engagement, revenue, member growth) — currently returns empty pageProps, likely needs paid Skool plans
  • Chat / DMs — would require a different auth flow
  • Search — likely Elasticsearch-backed, endpoint TBD
  • "Send email to all members" toggle on posts:create — discovered the field, needs validation in test community
  • Webhooks — Skool doesn't expose them yet, would need polling fallback

If any of these are blocking you, open an issue at github.com/ctala/skool-api-docs — prioritized by demand.

Skool API FAQ — common questions

Does Skool have a public API?

No. Skool does not provide a public API. The endpoints used by the Skool web app are not documented or supported for external use. This actor and library reverse-engineer those endpoints and expose them as a clean, AI-friendly Skool API.

Is reverse engineering the Skool API legal?

Reverse engineering for interoperability with software you legitimately use is broadly accepted in most jurisdictions (US DMCA section 1201(f), EU Software Directive 2009/24/EC). I only use it against communities where I am an admin or have explicit permission. This is the same legal framing under which thousands of third-party Twitter/X clients, scraping libraries, and platform automation tools operate.

How do I authenticate with the Skool API?

Two options: (1) Email + password every call (uses Playwright, slower at ~10s, simpler), or (2) Cookie reuse — run auth:login once, save the returned cookies string, pass in subsequent calls for ~2s response time. Cookies last ~3.5 days before the AWS WAF token rotates. See the authentication section above.

What's the rate limit of the Skool API?

Skool doesn't publish official rate limits. Empirically: ~60 reads/minute and ~20-30 writes/minute work without 429 errors. The library handles automatic retry on transient 429s with exponential backoff. For batch operations (e.g. approving 100 pending members), the actor paces requests automatically.
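A minimal sketch of the backoff schedule (the delays and cap here are illustrative defaults, not Skool-specified values):

```typescript
// Exponential backoff schedule: 1s, 2s, 4s, ... capped at 30s.
function backoffDelaysMs(attempts: number, baseMs = 1000): number[] {
  return Array.from({ length: attempts }, (_, i) => Math.min(baseMs * Math.pow(2, i), 30000));
}

// Retry a call on transient rate limits; rethrow anything else immediately.
async function withRetry<T>(fn: () => Promise<T>, attempts = 4): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (e) {
      lastErr = e;
      if (!(e instanceof Error && e.message.includes("429"))) throw e;
      await new Promise((resolve) => setTimeout(resolve, backoffDelaysMs(i + 1)[i]));
    }
  }
  throw lastErr;
}
```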

Can I use the Skool API with n8n?

Yes. The actor is HTTP-callable from any platform with HTTP nodes — n8n, Make.com, Zapier, Pipedream, custom backends. There's a free n8n template showing the full auto-approve workflow.

How is this different from third-party Skool API services?

Three differences. Coverage: 33 actions including full classroom, file uploads, and Auto DM — most third-party services are read-only. Pricing: pay-per-event ($0.005-$0.05 per call) instead of fixed monthly subscription — cheaper for low-to-medium usage. AI agent design: action-based API with structured error hints, optimized for LLM tool use.

Can I publish Skool courses programmatically?

Yes. The classroom:* actions create courses, folders, and pages. The classroom:setBody action accepts markdown and converts to Skool's internal TipTap JSON format. You can publish a complete course from .md files in a Git repo via CI.

How many Skool comments can I retrieve from a thread?

Skool's REST endpoint for comments caps at ~35 per thread. For threads with hundreds of comments (welcome posts, AMAs), the posts:getCommentsFull action uses Playwright to scroll the page and extract every comment. $0.05 per invocation but bypasses the cap entirely. Critical for community audits where you need to verify every member's introduction got a reply.

What happens if Skool changes their API?

The skool-js library is actively maintained. When Skool deploys a new version (typically weekly), the buildId changes — the actor handles this automatically by refreshing from the homepage. WAF token expiration is handled (auto-retry with re-auth). Breaking changes are documented in the CHANGELOG of the docs repo.

Can I expose this Skool API as an MCP server for Claude / Cursor?

Yes. Apify exposes any public actor via https://mcp.apify.com?tools=cristiantala/skool-all-in-one-api. Configure once in your MCP client (Claude Desktop, Cursor, Cline), and the model sees all 33 Skool API actions as discoverable tools. Same pay-per-event billing applies.

Try it

If you're operating a Skool community at any scale and this would save you time — try the actor, leave feedback in the Apify Store or open an issue in the docs repo. The pay-per-event pricing means you only pay when it's actually saving you work.

If this resonated with you and you're building in public around community automation, AI agents, or Skool tooling, I'd love to hear about it. I'm building this stuff in the open at cristiantala.com.

— Cristian Tala
