The DSCC loop
Four verbs, one binary. Every coding task walks the same cycle.
Dispatch the task
Route work to the right model and provider — doubao, Qwen, a local Ollama, or any OpenAI-compatible endpoint. There is no hard-coded default; the choice is yours.
dscc --model doubao-seed-2.0-code
export DSCC_MODEL=qwen3-coder
dscc --model llama3:8b # via Ollama
Scaffold the workspace
Read the tree, load DSCC.md and .dscc/, discover local agents and skills — all the context the tools will need downstream.
dscc agents
dscc skills
# DSCC.md / .dscc/settings.json
Compose across turns
Iterate inside a REPL, save and resume sessions, compact history when it grows, and bring in plugins as you go.
/session /compact
/config /export
/agents /skills
Commit it back
Review diffs, export artifacts, and land changes through the built-in shell and edit tools — closing the loop back into the repo.
/diff
git add -p
git commit
Same task, same answer sheet
On 25,000 real users × six statistical questions (cross-table reconciliation, chi-square, logistic regression, a Simpson's check, k-means clustering, OLS residual outliers), dscc (doubao-seed-2.0-code) and Claude Code (claude-opus-4-6) produced results that match to four decimal places — two independent scripts, two vendor models, one answer sheet you can reconcile.
export DSCC_API_KEY="..."
export DSCC_BASE_URL="https://ark.cn-beijing.volces.com/api/coding/v3"
dscc --model doubao-seed-2.0-code \
--permission-mode workspace-write \
prompt "$(cat PROMPT_v2.md)"
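The reconciliation itself is mechanical: round every reported number to four decimals and diff the two runs. A minimal sketch of that check — the dict keys and values here are illustrative, not the actual report format:

```python
# Hypothetical reconciliation sketch: compare two runs' numbers to 4 decimals.
# The keys and values are illustrative; the real runs emit markdown reports.

def reconcile(a: dict, b: dict, places: int = 4) -> list:
    """Return the keys whose values differ after rounding to `places` decimals."""
    mismatches = []
    for key in sorted(set(a) | set(b)):
        va, vb = a.get(key), b.get(key)
        if va is None or vb is None or round(va, places) != round(vb, places):
            mismatches.append(key)
    return mismatches

dscc_run = {"q1_pearson_r": 0.31434, "q2_chi2_p": 0.84353, "q3_test_auc": 0.50062}
claude_run = {"q1_pearson_r": 0.31433, "q2_chi2_p": 0.84351, "q3_test_auc": 0.50059}

print(reconcile(dscc_run, claude_run))  # agreement to 4 decimals → []
```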
Three-sentence takeaway
- The numbers line up exactly. Pearson r 0.3143, chi-square p 0.8435, logistic test AUC 0.5006 · acc 0.6156, Simpson's check -0.3698 / -0.3615 / -0.3713, k-means cluster sizes 26.34 / 28.54 / 10.63 / 34.48%, and in Q6 the same five user_ids in the same order with the same residuals. Six sections, not a single digit off.
- dscc fixes itself. The sandbox had no pandas/scipy, so dscc ran pip3 install on its own; the first pass then hit KeyError: 0 because it treated pearsonr's return as a scalar — dscc read the traceback, edited the script to r, _ = pearsonr(...), and re-ran successfully. A traceback is input to the next turn, not a terminal state.
- Divergence lives only in the prose. For k-means cluster 1 (screen 207.99 min, addiction 4.81), dscc labeled it "medium-usual" and Claude Code labeled it "high-intensity addict" — same rows, same centroids, two narrative picks. That's exactly the line a serious agent stack should draw between numbers and narration.
Expand: full prompt (all six questions)
Dataset: /tmp/social_analysis/{social_media_user_behavior.csv, platform_statistics_2026.csv}
All RNG: random_state=42. All numbers: 4 decimals.
Q1. Cross-table consistency: user-level mean daily_screen_time_minutes by primary_platform
vs platform_statistics_2026.avg_daily_time_minutes → reconciliation table, Pearson r,
top 3 platforms by absolute gap.
Q2. 2×2 contingency is_content_creator × has_purchased_via_social → chi2 / dof / p.
Q3. Logistic regression for has_purchased_via_social. 7 numeric + 3 boolean + 3 one-hot
categorical. StandardScaler on numeric only. train_test_split(0.2, stratify=y).
Report train AUC / test AUC / test accuracy + top-5 coefficients by |value|.
Q4. Overall Pearson(addiction, sleep) vs Pearson split by is_content_creator
— Simpson's reversal / halving check.
Q5. 4 features → StandardScaler → KMeans(n=4, n_init=10) → cluster sizes +
original-scale means per cluster.
Q6. followers_count ≥ 1000 users, log10(1+x) → predict engagement_rate_pct
(simple OLS), list top-5 users by |residual|.
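Q4's Simpson's check is just three correlations on the same frame: one overall, one per group, then a test for a sign flip or a magnitude collapse inside the groups. A hedged sketch on synthetic data (column names mirror the prompt; the data and the 0.3 effect size are made up):

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "is_content_creator": rng.integers(0, 2, n).astype(bool),
    "addiction_level_1_to_10": rng.uniform(1, 10, n),
})
# Synthetic signal: sleep declines with addiction plus noise, in both groups.
df["sleep_hours"] = 9 - 0.3 * df["addiction_level_1_to_10"] + rng.normal(0, 1, n)

r_all, _ = pearsonr(df["addiction_level_1_to_10"], df["sleep_hours"])
by_group = {
    flag: pearsonr(g["addiction_level_1_to_10"], g["sleep_hours"])[0]
    for flag, g in df.groupby("is_content_creator")
}
# Simpson-style warning: sign flip, or |r| halving, inside either group.
suspicious = any(np.sign(r) != np.sign(r_all) or abs(r) < abs(r_all) / 2
                 for r in by_group.values())
print(round(r_all, 4), suspicious)
```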
Expand: dscc execution trace + analyze_v2.py + key numbers
$ head -5 social_media_user_behavior.csv # scaffold: peek at schema
✏️ write analyze_v2.py (pandas + scipy + sklearn)
$ python3 analyze_v2.py # ModuleNotFoundError: pandas
✏️ try stdlib rewrite # scipy / sklearn needed → abandon
$ pip3 install pandas numpy scipy scikit-learn statsmodels -q
$ python3 analyze_v2.py # KeyError: 0 — pearsonr unpack bug
📝 edit: r, _ = pearsonr(...) # fix the tuple handling
$ python3 analyze_v2.py # clean pass, writes report_v2_dscc.md
import pandas as pd, numpy as np
from scipy.stats import chi2_contingency, pearsonr
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.cluster import KMeans
RS = 42
u = pd.read_csv(".../social_media_user_behavior.csv")
p = pd.read_csv(".../platform_statistics_2026.csv")
# Q1 cross-table reconciliation
user_mean = u.groupby("primary_platform")["daily_screen_time_minutes"].mean()
merged = pd.DataFrame({"user_mean": user_mean}).join(
p.set_index("platform")["avg_daily_time_minutes"].rename("table_val"),
how="inner",
)
merged["gap"] = merged["user_mean"] - merged["table_val"]
r, _ = pearsonr(merged["user_mean"], merged["table_val"]) # ← initially missed `_`
# Q3 logistic (excerpt)
num = ["age","daily_screen_time_minutes","num_platforms_used", ...]
boo = ["follows_influencers","is_content_creator","uses_ai_features"]
cat = ["ad_click_frequency","income_bracket","primary_platform"]
X = pd.concat([u[num], u[boo].astype(int),
pd.get_dummies(u[cat], drop_first=True)], axis=1)
y = u["has_purchased_via_social"].astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2,
random_state=RS, stratify=y)
scaler = StandardScaler().fit(Xtr[num])
Xtr_s = Xtr.copy(); Xtr_s[num] = scaler.transform(Xtr[num])
Xte_s = Xte.copy(); Xte_s[num] = scaler.transform(Xte[num])
clf = LogisticRegression(max_iter=2000, random_state=RS).fit(Xtr_s, ytr)
# Q1 Pearson r = 0.3143
# top gaps: Bluesky 134.4854 / LinkedIn 129.3275 / Pinterest 127.9667
# Q2 chi2 = 0.0390 dof = 1 p = 0.8435 → fail to reject independence
# Q3 train AUC 0.5220 test AUC 0.5006 test acc 0.6156
# top coefs: Threads +0.1585 / income $150K+ -0.1203 / WhatsApp +0.1056
# Q4 r_all = -0.3698 r_creator = -0.3615 r_non = -0.3713
# no Simpson reversal
# Q5 cluster %: 26.3400 / 28.5440 / 10.6320 / 34.4840
# cluster 1 means: screen 207.99 / eng 1.78 / posts 2.22 / addict 4.81
# labels: heavy viewer · medium-usual · heavy creator · low-activity
# Q6 top residual: USR-020194 followers 1025 eng 6.12 → resid 4.2812
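Q6's pipeline (filter, log-transform, one-feature OLS, rank by |residual|) fits in a few lines. A sketch with synthetic users, since the real CSV isn't reproduced here; the IDs and value ranges are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "user_id": [f"USR-{i:06d}" for i in range(n)],
    "followers_count": rng.integers(0, 50_000, n),
    "engagement_rate_pct": rng.uniform(0, 8, n),
})

# Keep users with >= 1000 followers, regress engagement on log10(1 + followers).
sub = df[df["followers_count"] >= 1000].copy()
x = np.log10(1 + sub["followers_count"])
slope, intercept = np.polyfit(x, sub["engagement_rate_pct"], 1)  # simple OLS
sub["residual"] = sub["engagement_rate_pct"] - (slope * x + intercept)

# Rank by absolute residual, keep the top 5.
top5 = sub.reindex(sub["residual"].abs().sort_values(ascending=False).index).head(5)
print(top5[["user_id", "engagement_rate_pct", "residual"]].round(4))
```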
Expand: Claude Code execution trace + run_claude_v2.py + key numbers
Read PROMPT_v2.md
Read social_media_user_behavior.csv # schema check
Read platform_statistics_2026.csv
Write run_claude_v2.py # single script, Q1–Q6, ~180 lines
Bash python3 run_claude_v2.py # host pandas/scipy/sklearn in place, one-shot
Write report_v2_claude.md
import pandas as pd, numpy as np
from scipy.stats import chi2_contingency, pearsonr
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.cluster import KMeans
RS = 42
u = pd.read_csv("/tmp/social_analysis/social_media_user_behavior.csv")
p = pd.read_csv("/tmp/social_analysis/platform_statistics_2026.csv")
# Q1
user_mean = u.groupby("primary_platform")["daily_screen_time_minutes"].mean().round(4)
merged = pd.DataFrame({"user_mean": user_mean}).join(
p.set_index("platform")["avg_daily_time_minutes"].rename("table_val"),
how="inner",
)
merged["gap"] = (merged["user_mean"] - merged["table_val"]).round(4)
r, _ = pearsonr(merged["user_mean"], merged["table_val"])
top3 = merged["gap"].abs().sort_values(ascending=False).head(3)
# Q5
feats = ["daily_screen_time_minutes","engagement_rate_pct",
"posts_per_week","addiction_level_1_to_10"]
Xk = StandardScaler().fit_transform(u[feats])
km = KMeans(n_clusters=4, n_init=10, random_state=RS).fit(Xk)
u["cluster"] = km.labels_
sizes = u["cluster"].value_counts().sort_index()
means = u.groupby("cluster")[feats].mean().round(4)
# Q1 Pearson r = 0.3143
# top gaps: Bluesky 134.4854 / LinkedIn 129.3275 / Pinterest 127.9667
# Q2 chi2 = 0.0390 dof = 1 p = 0.8435 → fail to reject independence
# Q3 train AUC 0.5220 test AUC 0.5006 test acc 0.6156
# top coefs: Threads +0.1585 / income $150K+ -0.1203 / WhatsApp +0.1056
# Q4 r_all = -0.3698 r_creator = -0.3615 r_non = -0.3713
# no Simpson reversal
# Q5 cluster %: 26.3400 / 28.5440 / 10.6320 / 34.4840
# cluster 1 means: screen 207.99 / eng 1.78 / posts 2.22 / addict 4.81
# labels: heavy viewer · high-intensity addict · heavy creator · low-activity
# Q6 top residual: USR-020194 followers 1025 eng 6.12 → resid 4.2812
This demo doubles as the recorded artifact for docs/cookbook/05-data-analysis (ships with dscc_run.log and report_dscc.md). The other four cookbook cases — 01 code review · 02 test generation · 03 rename refactor · 04 web research — each ship a real run log and a verification entry in the verification registry.
Download
Pre-built binaries for macOS, Linux and Windows. Unpack the archive and run dscc.
All assets, checksums and signatures: release page.
Source build: cargo install --path crates/dscc-cli --locked.
Install
macOS / Linux
curl -LO /dl/dscc-v0.1.0-aarch64-apple-darwin.tar.gz
tar -xzf dscc-v0.1.0-aarch64-apple-darwin.tar.gz
sudo mv dscc /usr/local/bin/
dscc --version
Windows (PowerShell)
iwr /dl/dscc-v0.1.0-x86_64-pc-windows-msvc.zip -OutFile dscc.zip
Expand-Archive .\dscc.zip -DestinationPath .
.\dscc.exe --version
From source (Cargo)
git clone https://github.com/oMygpt/dscc.git
cd dscc
cargo install --path crates/dscc-cli --locked
Authenticate
# Anthropic-compatible
export ANTHROPIC_API_KEY="..."
# xAI / Grok
export XAI_API_KEY="..."
# Generic OpenAI-compatible
export DSCC_API_KEY="..."
export DSCC_BASE_URL="https://ark.cn-beijing.volces.com/api/v3"
Usage
DSCC has no hard-coded default model. Pick one via --model or
DSCC_MODEL; a bilingual error is raised if neither is set.
Interactive session
dscc --model doubao-seed-2.0-code
One-shot prompt
dscc --model qwen3-coder \
prompt "summarize crates/runtime"
Local Ollama (zero setup)
export DSCC_BASE_URL="http://localhost:11434/v1"
export DSCC_API_KEY="ollama" # any string
dscc --model qwen3-coder:7b \
prompt "explain this workspace"
Slash commands
/status /config /diff /compact
/export /session /agents /skills
Features
Provider-agnostic
Anthropic Messages API, xAI/Grok, and OpenAI-compatible endpoints — all from one binary, with an explicit model choice.
Workspace-aware config
Loads DSCC.md, .dscc.json, .dscc/settings.json, permissions, and plugin settings.
Plugins, agents, skills
Local discovery and management via dscc agents, dscc skills, and plugin surfaces.
Interactive & one-shot
REPL for exploration, prompt subcommand for scripts and CI — same toolchain, two shapes.
Sessions & compaction
Save, inspect, resume, and compact conversations with built-in slash commands.
Tools that ship
Shell, file read/write/edit, search, web fetch/search, todos, notebook updates — the surface that lands work back in the repo.
About
DSCC = Dispatch · Scaffold · Compose · Commit. Four verbs for one coding loop: dispatch the task to the right model, scaffold the workspace, compose the work across turns, and commit it back to your tree.
Dual-licensed: AGPL-3.0-only for open source; for a commercial license, contact [email protected].