Take a brand new Senso org from empty to a fully populated, self-improving knowledge system in 10 minutes. Researches the user's company from their website plus external sources, builds out the knowledge base with a brand kit, content types, and tracking prompts, generates the first drafts, publishes sample citeables, kicks off GEO monitoring, and files a self-heal report with gap analysis. This is the first-run experience for any new Senso user. Use when the user runs Senso for the first time or says "set up Senso", "run onboarding", or "populate my knowledge base".
npx @senso-ai/shipables install senso-ai/senso-onboarding

In April 2026, Andrej Karpathy posted about LLMs as knowledge-base builders — dumping raw documents into a folder, having an LLM "compile" a structured wiki of markdown files with summaries and backlinks, then querying and enhancing it over time. Every query makes the wiki smarter. The wiki compounds. He closed with:
"I think there is room here for an incredible new product instead of a hacky collection of scripts."
That insight — continuously compounding knowledge — is the foundation of Senso. But Karpathy's original framing was a personal wiki: one person, their research, local markdown files, an LLM keeping it organized. This skill takes that same compounding loop and applies it to organizational knowledge.
| Personal Wiki | Senso |
|---|---|
| One person's research | A company's collective knowledge |
| Local markdown files | Cloud-hosted, versioned, vector-searchable KB |
| LLM reads its own summaries (~100 doc ceiling) | Semantic search with relevance scoring at any scale |
| Generic LLM markdown output | Brand-aligned content with voice, tone, writing rules |
| "Is this in my wiki?" | "Is this in my wiki AND does ChatGPT cite it?" |
| Ad-hoc health checks | Structured self-heal loop with gap analysis |
| Answers stay in the wiki | Answers can be published as citeables that AI models discover |
| No distribution | GEO monitoring tracks AI visibility across ChatGPT, Claude, Perplexity, Gemini |
The compounding principle is identical. The scope is bigger: your knowledge base isn't just for you — it feeds brand-aligned content, publishes discoverable citeables, and tracks how AI models represent your company to the world.
Here's the gap this skill fills. New Senso users install the CLI, get a working terminal, and then stare at an empty org. No brand kit. No knowledge base. No prompts. No content. No AI visibility tracking. Just commands and a blinking cursor.
The compounding loop only works once there's something to compound. An empty wiki doesn't get smarter with each query — it has nothing to build on. Most first-run flows hand you a toolbox and say "good luck building something."
This skill skips that entirely. It seeds the loop for you. Research the company, populate the KB, set up the brand voice, generate the first drafts, publish the first citeables, start tracking AI visibility — all in 10 minutes. By the end, the compounding flywheel is already spinning. Every future query, every new document, every heal pass strengthens a system that's already running.
You don't start at zero. You start at already working.
One command takes a brand new Senso org from empty to fully populated:
By the end, the user sees a live, populated, self-improving knowledge system — not an empty product.
The same principle as senso-kb-builder: everything here is a living system — nothing is "set and done."
The KB, brand kit, content types, prompts, and published content are all interconnected. Every run strengthens every layer:
Never skip. Never delete. Always improve.
| DIY first-run | This skill |
|---|---|
| Upload one document, search it, done | Full system — KB + brand + content + GEO — live in 10 minutes |
| Empty brand kit | Fully populated from actual website research (all 6 fields) |
| No content templates | 4 templates ready (Blog Post, FAQ, Comparison, Case Study) |
| No prompts | 8-10 tracking questions across the customer funnel |
| Zero content | 6+ drafts and 2-3 published citeables |
| No AI visibility tracking | GEO monitoring live across 4 models |
| No health check | Self-heal audit with 15+ search probes and filed report |
| Your job to remember what to do next | Heal report tells you exactly what to contribute next |
Every run produces the same measurable output — no silent failures, no skipped steps:
- A self-heal report filed to /build-logs/ with gap analysis

Activate this skill when the user says any of:
The user must have:
- A Senso API key (it starts with `tgr_`)

The skill will handle CLI install and env var setup itself — see Phase -1 below.
Every senso command must include:
--output json --quiet
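One way to make the flags impossible to forget is a tiny wrapper — a sketch; `senso_json` is a hypothetical helper name, not part of the CLI:

```shell
# Hypothetical helper: route every senso call through this wrapper so the
# required --output json --quiet flags are never dropped.
senso_json() {
  senso "$@" --output json --quiet
}

# Usage: senso_json kb create-folder --name "faqs"
```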
This skill handles the user's Senso API key. Follow these rules without exception:
- Never echo or log the full key; show only a masked prefix (e.g., `tgr_xxxxxx...`).
- The user's shell profile (`.zshrc` / `.bashrc`) is the only place it's persisted, and that file is a user-home dotfile, not a project file.

Every onboarding run MUST produce exactly this:
| Output | Requirement |
|---|---|
| Folders | Exactly 7 (6 content folders + 1 build-logs folder) |
| Brand kit | 1 fully populated (all 6 fields, not empty) |
| Content types | Exactly 4 (Blog Post, FAQ, Comparison Page, Case Study) |
| Prompts | 8-10 across all funnel stages |
| KB documents | 10-15 sorted into the 6 content folders |
| Drafts | Minimum 6 (2 per funnel stage) |
| Published citeables | 2-3 to sandbox destination |
| GEO monitoring | All 4 models configured (chatgpt, claude, perplexity, gemini) |
| Self-heal report | 1 filed to /build-logs/ at the end |
Never skip. Never substitute. If a phase fails, report it but continue to the next. Partial success is better than no setup.
This skill runs end-to-end without stopping for confirmation gates. The user asked for setup — your job is to deliver it, not to keep asking permission. But you should talk with them the whole way, like a colleague walking them through it.
Write like a thoughtful teammate, not a wizard UI. Short sentences. First person. No corporate polish.
Do:
Don't:
When you're processing something (reading a website, categorizing findings, picking drafts to publish), narrate the thought briefly:
"Reading your homepage... okay, so [COMPANY_NAME] is [summary]. I'll put this in
`company-overview` along with the About page."
"Your Series B page had some great metrics. I'll use that as the basis for a case study draft."
After research, show the user what you learned and let them correct you conversationally — not with a Y/N gate:
"Here's what I'm picking up about [COMPANY_NAME]: [summary]. Their main competitors look like [list]. If I'm missing anything important, tell me now — otherwise I'll keep building."
Wait a beat for user input. If they respond with corrections, incorporate them. If they say nothing or "looks good," keep going.
Batch generation takes 30-60 seconds. Don't wait silently:
"Senso's writing your drafts now. One cool thing about how this works: each draft gets grounded in the docs I just ingested, so you'll see your actual product details show up in the content — not generic filler."
Never end with "✅ done!". End with specific next steps that make the work compound:
"Everything's live. Two things I'd do first: (1) read the drafts — some might need light edits. (2) check geo.senso.ai tomorrow — your first AI visibility results land in 24 hours."
Even without confirmation gates, these still apply:
Start warm and direct. Don't list 9 phases — they don't care about the phase structure, they care about the outcome.
Say:
"Hey — let's get Senso set up for you. This takes about 10 minutes, and by the end you'll have a populated knowledge base, some published content, and AI visibility tracking running."
This is an active step, not an optional aside. Stop and wait for them to actually open the browser tab before continuing. Watching the system populate in real time is a huge part of the magic — don't skip it.
Say:
"Before we start, please open https://geo.senso.ai in a browser tab and keep it open alongside this terminal. As we go, you'll watch folders appear, drafts get written, and citeables get published in real time. It's the best way to see the system come to life.
Let me know once you've got it open — then I'll kick off the setup."
Wait for the user to confirm they have the browser open (responses like "open", "ready", "go", "done"). Only then proceed.
Once they confirm, say:
"Let's start by getting your environment ready."
Run:
senso --version 2>/dev/null || echo "not installed"
If not installed, install it without asking (this is onboarding — the user wants it installed):
npm install -g @senso-ai/cli
Say:
"Installing the Senso CLI... done. Version [X]."
"I need your Senso API key. It starts with
tgr_. Paste it here:"
Capture the key as USER_KEY. Never echo or log the full value back — when referring to the key in later output, show only tgr_xxxxx... (first 10 chars).
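The masking can be sketched like this (the key value below is a made-up placeholder, not a real credential):

```shell
# Show only the first 10 characters of the key, never the full value.
USER_KEY="tgr_abcdef123456"            # placeholder, not a real credential
MASKED=$(printf '%.10s...' "$USER_KEY")
echo "$MASKED"                         # tgr_abcdef...
```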
This is the single most important safety check. Users who have tested other Senso orgs will have a stale SENSO_API_KEY in their shell env that shadows everything you do. If you don't catch it here, every write will silently go to the wrong org.
Check if an existing SENSO_API_KEY is present in the parent env:
echo "${SENSO_API_KEY:-NONE}"
- `NONE` → safe, continue.
- Same value as `USER_KEY` → safe, continue.
- Different value from `USER_KEY` → STOP. Do not continue.

When the keys differ, tell the user exactly what to do:
"⚠️ I detected a stale
SENSO_API_KEYin your shell — it's from a different org and will shadow the new key you just pasted. Any subshell commands would silently write to the wrong org.Please run this in your terminal, then restart this skill:
unset SENSO_API_KEY
exec $SHELL -l

Then paste your API key again. I'll pick up from here."
Do not proceed past this step if a mismatched key is detected. You cannot fix env inheritance from inside a running process — the user must restart their shell.
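The decision logic can be sketched as plain shell — both key values here are placeholders simulating the mismatch case:

```shell
# Compare the pasted key against whatever is already in the environment.
USER_KEY="tgr_new_example"             # the key the user just pasted
SENSO_API_KEY="tgr_stale_example"      # simulate a leftover env var
EXISTING="${SENSO_API_KEY:-NONE}"
if [ "$EXISTING" = "NONE" ] || [ "$EXISTING" = "$USER_KEY" ]; then
  VERDICT="safe: continue"
else
  VERDICT="STOP: stale key shadows the new one"
fi
echo "$VERDICT"
```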
Write the key directly to the CLI's config file. This means subsequent senso commands never need to show the key — they read it from the config file automatically. Much cleaner for the user to watch.
Important: bypass senso login entirely. The interactive login command can require a TTY, which fails in non-interactive tool environments. Writing the config file directly works everywhere.
# Config file location depends on OS
if [ "$(uname)" = "Darwin" ]; then
CONFIG_DIR="$HOME/Library/Preferences/senso"
else
CONFIG_DIR="$HOME/.config/senso"
fi
mkdir -p "$CONFIG_DIR"
# Get org details using the key once (for config file population)
# This is the ONE command that uses the env key — config doesn't exist yet
ORG_INFO=$(SENSO_API_KEY="$USER_KEY" senso whoami --output json --quiet)
ORG_ID=$(echo "$ORG_INFO" | python3 -c "import sys,json,re; t=sys.stdin.read(); m=re.search(r'\{.*',t,re.DOTALL); print(json.loads(m.group())['orgId'])")
ORG_SLUG=$(echo "$ORG_INFO" | python3 -c "import sys,json,re; t=sys.stdin.read(); m=re.search(r'\{.*',t,re.DOTALL); print(json.loads(m.group())['orgSlug'])")
# Write the config file atomically
cat > "$CONFIG_DIR/config.json" <<EOF
{
"apiKey": "$USER_KEY",
"orgId": "$ORG_ID",
"orgSlug": "$ORG_SLUG"
}
EOF
chmod 600 "$CONFIG_DIR/config.json"
Why this is cleaner: from this point on, every senso command runs without needing SENSO_API_KEY="..." inline. The CLI reads the key from ~/Library/Preferences/senso/config.json (or ~/.config/senso/config.json on Linux). No keys in command output.
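A hedged sanity check before relying on the file — this sketch writes a sample config to a temp path for illustration; in a real run, point `CONFIG_FILE` at the config path above:

```shell
# Validate that a senso config file is well-formed JSON containing the
# three keys written earlier. Sample file used here for illustration.
CONFIG_FILE=$(mktemp)
printf '{"apiKey":"tgr_example","orgId":"org_1","orgSlug":"acme"}' > "$CONFIG_FILE"
python3 - "$CONFIG_FILE" <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
missing = {"apiKey", "orgId", "orgSlug"} - set(cfg)
print("config OK" if not missing else f"missing: {sorted(missing)}")
PY
```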
Clear the stale env var for the current process:
unset SENSO_API_KEY
Say:
"Saved your key to the Senso CLI config. From here on, every command runs clean — no keys in the output."
senso whoami --output json --quiet
(No SENSO_API_KEY=... prefix needed — the CLI reads from the config file you just wrote.)
Capture org_id from the response as EXPECTED_ORG_ID. You will verify every resource written matches this org.
For the rest of the skill, every senso command is just:
senso <subcommand> ...
No key in the command line. No env var assignment. Clean output.
One safety check: after the first write in Phase 2 (folder create), verify the response's org_id matches EXPECTED_ORG_ID. If they differ, STOP and report the mismatch to the user — something modified the config file mid-run.
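The check, sketched with a placeholder response — `FIRST_WRITE_JSON` stands in for the first folder-create response:

```shell
EXPECTED_ORG_ID="org_123"                                  # from whoami
FIRST_WRITE_JSON='{"kb_node_id":"n1","org_id":"org_123"}'  # placeholder
GOT=$(echo "$FIRST_WRITE_JSON" \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['org_id'])")
if [ "$GOT" = "$EXPECTED_ORG_ID" ]; then
  echo "org verified"
else
  echo "STOP: response org $GOT != expected $EXPECTED_ORG_ID"
fi
```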
Most of the time the org name matches a real company. If you can infer the company from the org name or slug, say:
"Based on the org name, I'm guessing this is for [inferred company] — is their website [domain guess]?"
Otherwise:
"What's the company's website URL? I'll pull from there plus the web to build out your KB."
Capture:
COMPANY_NAMECOMPANY_URLBefore any writes happen, stop and show the user everything you have. They must explicitly confirm before you proceed. This is the ONE confirmation gate in the entire skill — everything else runs through.
Display a clear confirmation block:
"Before I start writing anything to Senso, let me confirm what we're setting up:
| Setting | Value |
|---|---|
| Senso org | [orgName] ([EXPECTED_ORG_ID]) |
| API key | [first 10 chars of USER_KEY]... |
| Company | [COMPANY_NAME] |
| Website | [COMPANY_URL] |

Is this correct? I'll build the KB, brand kit, prompts, drafts, citeables, and GEO monitoring for [COMPANY_NAME] in the [orgName] Senso org. Any mismatch above means we'd write to the wrong place.
Type
yesto proceed, or tell me what to fix."
Wait for explicit yes (or variant: "go", "looks good", "proceed"). Do NOT proceed on silence or ambiguous response.
If the user corrects anything:
- If the API key or org is wrong: have them `unset SENSO_API_KEY` and restart the skill.

Why this gate matters: The most expensive mistake in this skill is writing to the wrong org. Research, brand kit changes, 12 ingested docs, 9 prompts, published citeables — all polluting a production org the user didn't intend to touch. One 5-second confirmation prevents a 30-minute cleanup.
"Alright, researching [COMPANY_NAME] now. I'll pull from your website first, then do a web search for competitors and industry context. Should take a couple minutes."
Use web fetch on COMPANY_URL. Extract:
"📄 Reading [COMPANY_URL]..." "✓ Extracted: mission, [N] product pages, team info, [N] FAQs"
Run these web searches:
- `"[COMPANY_NAME]" reviews OR news` — mentions, sentiment
- `"[COMPANY_NAME]" vs OR alternatives` — competitor names
- `"[COMPANY_NAME]" [industry/category] trends` — market context
- `"[COMPANY_NAME]" customer case study` — proof points

"🌐 Searching the web for competitors, industry context, and customer stories..."
"✓ Found [N] competitors, [N] industry references, [N] customer stories"
Collect findings in memory. Do NOT ingest yet — wait for folder setup.
"✅ Research complete. Here's what I learned about [COMPANY_NAME]:
- What they do: [1-sentence summary]
- Main products: [list]
- Key competitors: [list]
- Industry: [category]
Does this match how you'd describe [COMPANY_NAME]? [Y/n]"
If user says no, ask for corrections before proceeding to Phase 2.
"Got the research. Now I'm setting up the foundation — folders, brand kit, content templates. This is quick."
Run these IN ORDER, saving the kb_node_id from each response:
# 6 content folders
senso kb create-folder --name "company-overview" --output json --quiet
senso kb create-folder --name "products-and-services" --output json --quiet
senso kb create-folder --name "competitive-landscape" --output json --quiet
senso kb create-folder --name "industry-context" --output json --quiet
senso kb create-folder --name "case-studies" --output json --quiet
senso kb create-folder --name "faqs" --output json --quiet
# 1 system folder for logs + heal reports
senso kb create-folder --name "build-logs" --output json --quiet
Save each folder's kb_node_id — content folders needed for Phase 3, build-logs needed for Phase 9.
"✓ 7 folders created: company-overview, products-and-services, competitive-landscape, industry-context, case-studies, faqs, build-logs"
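Saving the IDs is just JSON plucking — a sketch with `SAMPLE` standing in for the CLI's create-folder response:

```shell
# Pull kb_node_id out of a create-folder response.
SAMPLE='{"kb_node_id":"kbn_abc123","name":"company-overview"}'
FOLDER_ID=$(echo "$SAMPLE" \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['kb_node_id'])")
echo "$FOLDER_ID"   # kbn_abc123
```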
The brand kit must be fully populated, not placeholder-filled. Infer each field from Phase 1 research. All 6 fields are required:
| Field | How to infer it |
|---|---|
brand_name | Company name as they write it (check homepage <title> and hero) |
brand_domain | Domain without https:// or trailing slash (e.g., senso.ai) |
brand_description | 1-2 sentences: what they do + who they serve. Pull from their homepage hero + about page. |
voice_and_tone | Infer from their actual website copy. Are they formal or casual? Technical or accessible? Confident or collaborative? Be specific — cite patterns you see. |
author_persona | Usually "The [Company] Team" unless their blog has a specific voice (e.g., "CEO writing directly") |
global_writing_rules | 5 standard rules (below), plus any patterns unique to their content |
senso brand-kit set --data '{
"guidelines": {
"brand_name": "[COMPANY_NAME]",
"brand_domain": "[domain without https://]",
"brand_description": "[1-2 sentences grounded in their actual homepage — what they do + who they serve]",
"voice_and_tone": "[Specific voice inferred from website copy. Example: \"Direct and practitioner-focused. First-person plural (we). Opinionated. Short sentences. Avoids corporate jargon. Uses concrete examples.\" Do NOT leave generic.]",
"author_persona": "The [COMPANY_NAME] Team",
"global_writing_rules": [
"Ground every claim in verified sources from the knowledge base",
"Use clear, scannable structure with subheadings every 200-300 words",
"Include concrete examples or data points, not just abstract claims",
"Write for practitioners — actionable over theoretical",
"Include the Powered by Senso footer on published content"
]
}
}' --output json --quiet
Verify the brand kit was set correctly:
senso brand-kit get --output json --quiet
All 6 fields in guidelines must be non-empty. If any are empty, patch them with senso brand-kit patch before continuing. Do not proceed to Phase 3 with a partial brand kit.
"✓ Brand kit configured. Voice: [short description of voice_and_tone]"
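The completeness check can be sketched like this — `KIT` is a placeholder standing in for the `brand-kit get` response:

```shell
KIT='{"guidelines":{"brand_name":"Acme","brand_domain":"acme.com","brand_description":"What Acme does.","voice_and_tone":"Direct.","author_persona":"The Acme Team","global_writing_rules":["Ground claims"]}}'
RESULT=$(echo "$KIT" | python3 -c '
import sys, json
g = json.load(sys.stdin)["guidelines"]
fields = ["brand_name", "brand_domain", "brand_description",
          "voice_and_tone", "author_persona", "global_writing_rules"]
empty = [f for f in fields if not g.get(f)]
print("all 6 fields set" if not empty else "patch needed: " + ", ".join(empty))
')
echo "$RESULT"
```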
Always these 4, always these names:
Blog Post:
senso content-types create --data '{
"name": "Blog Post",
"config": {
"template": "Write a 1000-1500 word educational blog post. Start with a hook identifying the reader pain point. Include 3-5 subheadings. Use data, examples, or case studies from the KB to support points. End with a call-to-action.",
"writing_rules": [
"Use subheadings every 200-300 words",
"Include at least one concrete example or data point",
"Optimize for AI citability — clear, authoritative structure"
]
}
}' --output json --quiet
FAQ:
senso content-types create --data '{
"name": "FAQ",
"config": {
"template": "Create an FAQ page with 8-12 questions and answers. Each answer 2-3 sentences. Group related questions under subheadings. Use the brand voice throughout.",
"writing_rules": [
"Use natural question phrasing",
"Keep answers under 100 words",
"Link to detailed resources where relevant"
]
}
}' --output json --quiet
Comparison Page:
senso content-types create --data '{
"name": "Comparison Page",
"config": {
"template": "Create a fair but persuasive comparison page. Start with the problem both solutions address. Use a comparison table for features. Highlight 3-4 key differentiators. End with a recommendation.",
"writing_rules": [
"Be factually accurate about competitors",
"Lead with value not features",
"Include a comparison table"
]
}
}' --output json --quiet
Case Study:
senso content-types create --data '{
"name": "Case Study",
"config": {
"template": "Write a case study with: Customer intro, Problem they faced, Solution implemented, Results achieved (with specific metrics if possible), Key takeaways. Keep it narrative — tell the story.",
"writing_rules": [
"Lead with the customer outcome",
"Include specific numbers or metrics",
"End with lessons applicable to other readers"
]
}
}' --output json --quiet
Save all 4 content_type_id values.
"✅ Foundation complete. 7 folders, brand kit, and 4 content templates are ready."
"Okay, now I'm taking everything I researched and putting it in the right folders. One document per topic — that way search finds the right thing later instead of one giant mess."
Route research findings from Phase 1 into the correct folders via senso kb create-raw.
Target: 10-15 documents total.
| Folder | What goes here |
|---|---|
/company-overview/ | Homepage content, mission/about, team info, leadership |
/products-and-services/ | Each product page as a separate doc, features, pricing |
/competitive-landscape/ | Each competitor as a separate doc, comparison findings |
/industry-context/ | Market trends, industry reports, buyer personas |
/case-studies/ | Customer stories (one doc per story if multiple) |
/faqs/ | FAQ content extracted from website |
For each document:
senso kb create-raw --data '{
"title": "[Descriptive title]",
"text": "[Markdown content with source URL noted]",
"kb_folder_node_id": "[folder_id from Phase 2a]"
}' --output json --quiet
Rules:
- Note the source URL in each doc (e.g., `Source: https://...`)
- Title each doc `YYYY-MM-DD - Topic Name`

"✓ company-overview: 2 docs (mission, about)"
"✓ products-and-services: 3 docs (product overview, pricing, features)"
"✓ competitive-landscape: 2 docs (competitor A, competitor B)"
"..."
"✅ Ingest complete. [N] documents now live in your knowledge base. Search already works — try asking the KB anything once we're done."
"Now I'm writing the questions we'll track — things potential customers would actually ask. These do double duty: they drive the content generation that's coming next, and they become your AI visibility questions so we can track how ChatGPT, Claude, etc. answer them over time."
Create 8-10 prompts covering all funnel stages. Substitute [COMPANY_NAME], [COMPETITOR], [CATEGORY] based on research.
Awareness (3 prompts):
senso prompts create --data '{
"question_text": "What is [COMPANY_NAME] and what does it do?",
"type": "awareness"
}' --output json --quiet
senso prompts create --data '{
"question_text": "How does [CATEGORY] work and why does it matter?",
"type": "awareness"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What are the best [CATEGORY] solutions in 2026?",
"type": "awareness"
}' --output json --quiet
Consideration (2 prompts):
senso prompts create --data '{
"question_text": "How does [COMPANY_NAME] compare to [COMPETITOR]?",
"type": "consideration"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What features should I look for in [CATEGORY]?",
"type": "consideration"
}' --output json --quiet
Evaluation (2 prompts):
senso prompts create --data '{
"question_text": "How do I evaluate [CATEGORY] tools for my team?",
"type": "evaluation"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What is the implementation process for [COMPANY_NAME]?",
"type": "evaluation"
}' --output json --quiet
Decision (2 prompts):
senso prompts create --data '{
"question_text": "What results have customers achieved with [COMPANY_NAME]?",
"type": "decision"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What does [COMPANY_NAME] pricing look like?",
"type": "decision"
}' --output json --quiet
Save all prompt_id values.
"✅ 9 tracking questions created across awareness, consideration, evaluation, and decision stages. Now for the fun part..."
"Now the interesting part — Senso's going to write your first drafts. One per tracking question. Each one grounded in the docs I just ingested, written in your brand voice. Kicking it off now..."
senso credits balance --output json --quiet
If credits are low (< 5), mention it but don't stop:
"Heads up — you've got [X] credits left. Batch run uses about 9. Running it anyway."
senso generate run --output json --quiet
This generates content for EVERY prompt automatically, using the brand kit + KB + content types. Expected duration: 30-60 seconds for 9 prompts.
"⏳ Senso is writing... generating grounded content from your KB. This takes ~30-60 seconds."
Poll senso generate runs-list until status is completed.
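The polling loop, sketched with a stand-in status source so it terminates here — in a real run, parse the `status` field from `senso generate runs-list` and sleep a few seconds between checks (the exact response shape is an assumption):

```shell
i=0
next_status() {
  # Stand-in for parsing `senso generate runs-list`; completes on call 3.
  i=$((i + 1))
  if [ "$i" -ge 3 ]; then STATUS=completed; else STATUS=running; fi
}
while :; do
  next_status
  [ "$STATUS" = "completed" ] && break
  sleep 0   # real run: sleep 5 between polls
done
echo "run $STATUS"
```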
senso content verification --status draft --output json --quiet
Check draft_count. If less than 6, fall back:
For each missing slot (up to 6), call senso engine draft manually using the KB content you know exists. Example:
senso engine draft --data '{
"geo_question_id": "[a prompt id without a draft]",
"raw_markdown": "# [Content based on KB research]\n\n...\n\n---\n\n*Powered by Senso*",
"seo_title": "[SEO title]",
"summary": "[Brief summary]"
}' --output json --quiet
The guarantee: at least 6 drafts exist after this phase.
"✅ [N] drafts generated — all grounded in your KB and written in your brand voice.
Titles include:
- [Title 1]
- [Title 2]
- [Title 3]
- ...
You can review them anytime with:
senso content verification --status draft"
"I'm going to publish 3 of these as citeables — one per funnel stage. These go to a sandbox URL (not your main site), so you can see what the output looks like without committing to anything. Picking the strongest drafts now..."
Pick 2-3 drafts and publish them to the sandbox destination.
From senso content verification --status draft, pick:
For each selected draft:
senso engine publish --data '{
"geo_question_id": "[prompt_id]",
"raw_markdown": "[draft raw_markdown — append: \n\n---\n\n*Powered by Senso — your AI-searchable knowledge base.*]",
"seo_title": "[draft seo_title]",
"summary": "[draft summary]"
}' --output json --quiet
Important:
- If `engine publish` returns "Conflict", delete the draft first with `senso content delete <id>`, then retry the publish

"✓ Published: [Title 1]"
"✓ Published: [Title 2]"
"✓ Published: [Title 3]"
"✅ [N] citeables are live at the sandbox destination. Search engines and AI models can now discover them."
"Setting up AI visibility tracking now. Every Monday/Wednesday/Friday, Senso will ask ChatGPT, Claude, Perplexity, and Gemini your tracking questions and record which brands get mentioned — including [COMPANY_NAME] and your competitors. You'll see the results at geo.senso.ai."
Set all 4 monitored models:
senso run-config set-models --models chatgpt claude perplexity gemini --output json --quiet
Run monitoring Mon/Wed/Fri:
senso run-config set-schedule --days 1 3 5 --output json --quiet
(Day 0 = Sunday, 1 = Monday, etc.)
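That numbering can be sketched by mapping the `--days 1 3 5` arguments back to names:

```shell
day_name() {
  case $1 in
    0) echo Sunday ;;    1) echo Monday ;;   2) echo Tuesday ;;
    3) echo Wednesday ;; 4) echo Thursday ;; 5) echo Friday ;;
    6) echo Saturday ;;
  esac
}
for d in 1 3 5; do day_name "$d"; done   # prints Monday, Wednesday, Friday
```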
The prompts created in Phase 4 automatically become GEO tracking questions. Users can see results at geo.senso.ai.
"✅ GEO monitoring live. 4 models, 9 questions, running Mon/Wed/Fri. First results will appear at geo.senso.ai within 24-48 hours."
"Before we wrap up, let me do a quick audit of what we built — make sure nothing's half-done, find any gaps, and write up a report you can reference later. This is the self-healing pattern: every time we run this, we audit and improve."
Audit the entire system you just built, find weak spots, file a heal report.
This is the same self-healing principle as senso-kb-builder — every interaction should leave the system stronger.
Run at least 10 targeted searches — not just folder-topic searches. Mix two types:
Type 1: "Does the KB know itself?" — one search per folder
senso search "What does [COMPANY_NAME] do?" --output json --quiet
senso search "What products and services does [COMPANY_NAME] offer?" --output json --quiet
senso search "Who are [COMPANY_NAME]'s main competitors?" --output json --quiet
senso search "What trends are shaping the [industry] industry?" --output json --quiet
senso search "What results have [COMPANY_NAME] customers achieved?" --output json --quiet
senso search "What are common questions people ask about [COMPANY_NAME]?" --output json --quiet
Type 2: "Would a real customer question work?" — use the 9 tracking prompts you created in Phase 4
For each prompt, run a search with the prompt's exact question text:
senso search "[prompt question text]" --output json --quiet
This is the real test — the KB should be able to answer the exact questions you're going to track in GEO.
For every search, record:
- The top relevance score (or the absence of results)
- How many distinct `content_ids` appear in the top 5 (do multiple docs cover this, or just one?)

Then categorize the result:
| Top Score | Categorization | Action |
|---|---|---|
| ≥ 0.5 | Strong — KB answers this well | No action |
| 0.3 - 0.5 | Thin — KB touches it but shallow | Note as "needs more depth" |
| < 0.3 | Gap — KB barely knows this | Flag as a gap to fill |
| No results | Missing — KB has nothing | Flag as critical gap |
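The categorization rule as a small shell helper — a sketch; `categorize` is a hypothetical name, not a CLI command:

```shell
categorize() {
  # $1 = top relevance score, or "none" when the search returned nothing
  if [ "$1" = "none" ]; then
    echo "Missing"
  else
    awk -v s="$1" 'BEGIN {
      if (s >= 0.5) print "Strong"
      else if (s >= 0.3) print "Thin"
      else print "Gap"
    }'
  fi
}
categorize 0.62   # Strong
categorize 0.41   # Thin
categorize none   # Missing
```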
senso brand-kit get --output json --quiet
Confirm all 6 fields are non-empty. Check voice_and_tone isn't generic (if it is, patch it with a more specific description based on the ingested docs).
senso content-types list --output json --quiet
Confirm all 4 are present. Check writing_rules arrays are populated (not empty).
senso prompts list --output json --quiet
Verify all 4 funnel stages have prompts:
If any stage is under-covered, create additional prompts before filing the report.
senso content verification --status draft --output json --quiet
senso content verification --status published --output json --quiet
Confirm drafts ≥ 6 and published ≥ 2.
Save a structured heal report to /build-logs/:
senso kb create-raw --data '{
"title": "YYYY-MM-DDTHH:MM - Onboarding Build Log",
"text": "[full heal report as markdown — template below]",
"kb_folder_node_id": "[build-logs folder id from Phase 2a]"
}' --output json --quiet
Report template:
# Onboarding Build Log — [ISO timestamp]
## Run Info
- **Company:** [COMPANY_NAME]
- **Org:** [orgName from senso whoami]
- **Type:** Initial onboarding
## Built This Run
### Phase 2: Foundation
- Folders: 7 created (6 content + 1 build-logs)
- Brand kit: [Created with all 6 fields populated]
- Content types: 4 created (Blog Post, FAQ, Comparison Page, Case Study)
### Phase 3: Ingest
- Documents ingested: [count]
- company-overview: [count]
- products-and-services: [count]
- competitive-landscape: [count]
- industry-context: [count]
- case-studies: [count]
- faqs: [count]
### Phase 4: Prompts
- Total created: [count]
- By stage: awareness [n], consideration [n], evaluation [n], decision [n]
### Phase 5: Generation
- Batch run ID: [run_id]
- Drafts produced: [count]
- Fallback drafts added: [count if any]
### Phase 6: Publishing
- Citeables published: [count]
- Destinations: [list of destinations/slugs]
### Phase 7: GEO
- Models monitored: chatgpt, claude, perplexity, gemini
- Schedule: Mon/Wed/Fri
## Health Report
| Dimension | Status | Notes |
|-----------|--------|-------|
| Brand kit completeness | ✅ / ⚠️ | [all 6 fields set?] |
| Content types | ✅ / ⚠️ | [4 present with writing_rules?] |
| Prompt funnel coverage | ✅ / ⚠️ | [all 4 stages represented?] |
| KB folder coverage | ✅ / ⚠️ | [each folder ≥ 2 docs?] |
| Draft minimum (6) | ✅ / ⚠️ | [count] |
| Published minimum (2) | ✅ / ⚠️ | [count] |
| GEO models | ✅ / ⚠️ | [4 configured?] |
## Search Quality — KB Self-Probe
Real searches run against the KB during this heal pass. Each tested with one core question.
| Question | Top Score | Status |
|----------|-----------|--------|
| What does [COMPANY_NAME] do? | [score] | Strong / Thin / Gap |
| What products/services does [COMPANY_NAME] offer? | [score] | Strong / Thin / Gap |
| Who are [COMPANY_NAME]'s main competitors? | [score] | Strong / Thin / Gap |
| What trends are shaping the [industry] industry? | [score] | Strong / Thin / Gap |
| What results have [COMPANY_NAME] customers achieved? | [score] | Strong / Thin / Gap |
| What are common FAQs about [COMPANY_NAME]? | [score] | Strong / Thin / Gap |
## Search Quality — Tracking Questions Self-Probe
Each of the 9 GEO tracking questions searched against the KB. The KB should be able to answer the same questions GEO will track.
| Tracking Question | Top Score | Can KB answer it? |
|---|---|---|
| [prompt 1 text] | [score] | ✅ / ⚠️ / ❌ |
| [prompt 2 text] | [score] | ✅ / ⚠️ / ❌ |
| [... all 9 prompts ...] | | |
## Gaps Identified
- [List any topics that came up weak in the audit]
- [Missing subtopics the user should contribute]
## Recommendations for Next Heal Pass
- [Specific actions the user should take]
- [New content to ingest]
- [Brand kit refinements if needed]
## Credits Used This Run
- Before: [X] credits
- After: [Y] credits
- Used: [Z] credits
If the audit finds a critical miss (e.g., brand kit field is empty, content type writing_rules missing, funnel stage has zero prompts), fix it NOW before showing the summary. The heal pass isn't just reporting — it's closing gaps.
"✅ Heal report filed to /build-logs/. Found [N] gaps, fixed [M]. Everything else is solid."
This is the user's lasting impression. Make it clean, scannable, and lead with the destinations — where they go next to see and use what you just built. Show concrete URLs, not abstract commands.
Open with a single confident sentence, then show a clean table with exact counts, then lead them to the destinations.
Template to adapt:
"That's it — [COMPANY_NAME] is live on Senso. Here's what you have now:"
Then display this table (fill in the real numbers from the run):
┌──────────────────────┬─────────────────────────────────────────────────────────┐
│ Knowledge Base │ [X] documents across 7 folders │
│ Brand Kit │ fully populated — [1-phrase voice summary] │
│ Content Types │ 4 templates (Blog Post, FAQ, Comparison, Case Study) │
│ Tracking Prompts │ [X] questions across awareness → decision │
│ Drafts │ [X] ready to review │
│ Published Citeables │ [X] live (one per funnel stage) │
│ GEO Monitoring │ ChatGPT + Claude + Perplexity + Gemini, Mon/Wed/Fri │
│ Heal Report │ filed to /build-logs/, [N]/[total] probes came back Strong │
└──────────────────────┴─────────────────────────────────────────────────────────┘
Give the user three concrete places to go, in order of impact:
1. See your content in the browser: https://geo.senso.ai. Your knowledge base, brand kit, drafts, and published citeables are all viewable there. Open it now — everything we just built will be populated.
2. Review your drafts. [X] pieces are ready. The comparison and case study drafts especially may want a light human pass before you publish them for real.
- Via web: https://geo.senso.ai/drafts
- Via CLI:
  senso content verification --status draft

3. Watch AI visibility results land at https://geo.senso.ai — usually within 24–48 hours. You'll see which AI models mention [COMPANY_NAME] (and your competitors) when real customer questions get asked.
If the heal report flagged thin coverage, call it out here as specific next-ingest priorities (don't bury it in the build log only):
"Before your next run, the audit flagged two places worth deepening:
- [folder-name] has only [N] documents — consider adding [specific suggestion]
- [folder-name] is missing [specific subtopic]"
List the sources you actually pulled during research so the user can audit and trust the foundation:
"Sources used to build this out:
- [COMPANY_URL]/ (homepage)
- [COMPANY_URL]/about
- [COMPANY_URL]/products (or equivalent)
- [N] competitor references from G2 / Gartner / Forrester
- [N] customer case studies from [sources]
- [N] industry trend articles"
Close on a forward-looking note — this is a living system, not a one-shot setup:
"Every query, every new doc, every heal pass makes this smarter. Come back weekly to run another heal pass and keep the KB compounding."
TODO — confirm with Senso team: the sandbox destination slug for engine publish.
Currently, engine publish defaults to publish_destination: "internal". Once Senso has a dedicated sandbox domain, update this skill to target it explicitly via the --data payload's destination field (if supported).
Until then, internal destination serves as the sandbox.
| Issue | Action |
|---|---|
| 401 Unauthorized | Tell user: senso login or re-export SENSO_API_KEY |
| 402 Insufficient credits | Warn user, run what's possible, skip batch generation if needed |
| 409 Conflict on publish | Delete existing draft with senso content delete <id>, then retry |
| 504 Timeout on generate sample | Use senso generate run (async) instead of sync sample calls |
| Batch generate produces < 6 drafts | Fall back to manual senso engine draft to reach 6 |
| Web fetch fails on company URL | Ask user for 2-3 paste-in URLs of key pages |
Never abort the whole flow on a phase failure. Log it, continue, report at the end.
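The log-and-continue rule can be sketched as a simple phase runner. The phase names here are hypothetical stand-ins — wire in the real phase functions for your run:

```python
# Run each onboarding phase; never let one failure abort the flow.
# Failures are recorded and surfaced in the final report instead.
from typing import Callable

def run_phases(phases: dict[str, Callable[[], None]]) -> dict[str, str]:
    """Execute phases in order, recording 'ok' or the error message."""
    report: dict[str, str] = {}
    for name, phase in phases.items():
        try:
            phase()
            report[name] = "ok"
        except Exception as exc:  # log it, continue, report at the end
            report[name] = f"failed: {exc}"
    return report
```

A failed batch generation then shows up as one line in the end-of-run report rather than halting the entire setup.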
Before showing the final summary, verify every requirement is met:
# 7 folders in root (6 content + 1 build-logs)?
senso kb my-files --output json --quiet                              # expect 7 folders, including "build-logs"

# Brand kit FULLY populated (all 6 guideline fields non-empty)?
senso brand-kit get --output json --quiet                            # expect all 6 guidelines fields non-empty

# 4 content types with writing_rules?
senso content-types list --output json --quiet                       # expect total >= 4, each with writing_rules

# 8-10 prompts across all funnel stages?
senso prompts list --output json --quiet                             # expect 8-10 total, all 4 funnel types present

# At least 6 drafts?
senso content verification --status draft --output json --quiet      # expect draft count >= 6

# 2-3 published citeables?
senso content verification --status published --output json --quiet  # expect 2-3 published

# GEO models configured?
senso run-config models --output json --quiet                        # expect 4 models listed

# Heal report filed to /build-logs/?
senso kb children <build-logs-folder-id> --output json --quiet       # expect >= 1 doc
If any check fails, fix it before showing the summary. The user's first impression depends on seeing a complete, working system.
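The checklist above can be expressed as a pure validator over counts parsed from the CLI's `--output json` payloads. How each count is extracted from the JSON depends on the real payload shapes, which this sketch deliberately does not assume:

```python
# Validate the end-of-run requirements from pre-extracted counts.
# Parsing the `--output json` payloads is out of scope here; only
# the pass/fail rules from the checklist are encoded.
def failed_checks(counts: dict[str, int]) -> list[str]:
    """Return the names of requirements that are not yet met."""
    rules = {
        "folders (7 incl. build-logs)": counts["folders"] == 7,
        "brand kit fields (6 non-empty)": counts["brand_kit_fields"] == 6,
        "content types (>= 4)": counts["content_types"] >= 4,
        "prompts (8-10)": 8 <= counts["prompts"] <= 10,
        "drafts (>= 6)": counts["drafts"] >= 6,
        "published (2-3)": 2 <= counts["published"] <= 3,
        "geo models (4)": counts["geo_models"] == 4,
        "heal reports (>= 1)": counts["heal_reports"] >= 1,
    }
    return [name for name, ok in rules.items() if not ok]
```

An empty return list means the summary is safe to show; anything else names exactly what to fix first.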