One MCP server wrapping every social-platform scraper Creativefuel uses · v0.1
cfai-scrapper-mcp is a single MCP server that wraps every social-platform scraping
backend Creativefuel uses behind one consistent surface. It exposes 18 tools
across 6 platforms — X (Twitter), Facebook, LinkedIn, YouTube, Threads,
and Instagram — available both as MCP stdio (for Claude Code, Claude Desktop,
Cursor) and as an HTTP REST API (for production at scrapper.cfai.in).
Built so consumer products (community.cfai.in, cfai-chat, cfai-comment-insights, future tools)
hit ONE service instead of each integrating Apify, Jarvis, and scrapper.jarvis.work directly.
The server preserves Naveen's sequential-1.5s lock for IG scrapper calls, caches Jarvis IG
responses for 1h, normalizes every backend's payload into a unified Envelope
schema, and tracks per-key daily $ caps via SQLite.
| Backend | What it covers | Auth |
|---|---|---|
| Apify (12 actors) | X (apidojo/tweet-scraper) · FB (apify/facebook-* family) · LinkedIn (harvestapi + pratikdani) · YouTube (streamers) · Threads (apify/threads-*) | APIFY_TOKEN |
| Jarvis (jarvis.work:8080) | Instagram profile + per-post data, with 1h SQLite cache | x-api-key (svc_…) |
| scrapper.jarvis.work | Instagram comments, profile analyse, share-count (Naveen's sequential lock at 1.5s) | X-API-Key (scrapper_… or pha_…) |
| cfai-comment-insights | Multimodal LLM verdict on an IG post (post + creator + comments + thumbnail → Gemini) | X-API-Key (cfai_ci_…) |
| shortcode-resolver (api.cfai.in) | IG / FB / X / YT / LinkedIn / Threads URL → WhatsApp group lookup | X-API-Key |
Every tool response is normalized into a unified Envelope:
{platform, kind, identity, author, content, engagement, timestamps, flags, raw, source}. The raw field always preserves the unmodified backend payload so consumers never lose data.

Cost is metered per /v1/tools/* call. Master key has unlimited cap; consumer keys have configurable daily $ caps.

SQLite tables: api_keys (bcrypt-hashed), audit_log (every call logged with cost/latency/status/args), daily_spend (per-key per-day with hard cap → HTTP 429), cache (TTL-based for Jarvis IG).

Admin dashboard at /admin/ with HTTP-Basic auth (ADMIN_PASSWORD env). Tabs: Overview · Activity · Logs · API Keys · Health.

git clone https://github.com/Bot222555/cfai-scrapper-mcp.git
cd cfai-scrapper-mcp
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e .
cp .env.example .env
# Open .env and fill: APIFY_TOKEN, MCP_MASTER_KEY, ADMIN_PASSWORD (minimum)
cd admin-ui
npm install
npm run build # writes admin-ui/dist/, picked up automatically by HTTP server
# REQUIRED
APIFY_TOKEN=apify_api_...
MCP_MASTER_KEY=cfai_scr_master_...
ADMIN_PASSWORD=...
# OPTIONAL — Instagram backends
JARVIS_BASE_URL=https://jarvis.work:8080
JARVIS_API_KEY=svc_...
JARVIS_VERIFY_SSL=false
SCRAPPER_BASE_URL=https://scrapper.jarvis.work
SCRAPPER_API_KEY=scrapper_...
SCRAPPER_PHA_KEY_COMMENTS=pha_...
SCRAPPER_PHA_KEY_ANALYSE=pha_...
SCRAPPER_SEQUENTIAL_DELAY_MS=1500
INSIGHTS_BASE_URL=http://localhost:5700
INSIGHTS_API_KEY=cfai_ci_...
RESOLVER_BASE_URL=https://api.cfai.in
RESOLVER_API_KEY=...
# OPTIONAL — bootstrap consumer keys (label:key:daily_cap_usd, comma separated)
MCP_BOOTSTRAP_KEYS=community:cfai_scr_community_2026:5,chat:cfai_scr_chat_2026:5
DEFAULT_DAILY_USD_CAP=5
# OPTIONAL — server config
HTTP_HOST=0.0.0.0
HTTP_PORT=5800
DB_PATH=./scrapper.db
LOG_LEVEL=INFO
APIFY_RUN_TIMEOUT_SECS=600
APIFY_POLL_INTERVAL_SECS=4
# stdio mode (Claude Code/Desktop/Cursor)
.venv/bin/cfai-scrapper-mcp
# HTTP mode (production)
.venv/bin/cfai-scrapper-http
# binds 0.0.0.0:5800 by default
curl http://localhost:5800/healthz
# → {"ok":true,"service":"cfai-scrapper-mcp","version":"0.1.0"}
curl -X POST http://localhost:5800/v1/tools/list_platforms \
-H 'X-API-Key: <your master key>' -H 'Content-Type: application/json' -d '{}'
# → {"platforms":["x","facebook","linkedin","youtube","threads","instagram"],"count":6}
The repo includes a .mcp.json at the project root. Open Claude Code in the
cfai-scrapper-mcp/ directory and the MCP server is auto-discovered; you'll be
prompted to approve it on first call.
Drop a .mcp.json at /Users/tushar/CF AI/.mcp.json:
{
"mcpServers": {
"cfai-scrapper": {
"command": "/Users/tushar/CF AI/cfai-scrapper-mcp/.venv/bin/cfai-scrapper-mcp"
}
}
}
To skip per-call permission prompts, add to ~/.claude/settings.json under permissions.allow: "mcp__cfai-scrapper__*".
Edit ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"cfai-scrapper": {
"command": "/Users/tushar/CF AI/cfai-scrapper-mcp/.venv/bin/cfai-scrapper-mcp"
}
}
}
Restart Claude Desktop. The 18 tools appear in the tool picker.
Edit ~/.cursor/mcp.json with the same shape as Claude Desktop above.
Every /v1/tools/* request needs an X-API-Key header. Keys are issued from the admin dashboard at /admin/ (or POST /api/admin/keys). Each key has a label, a daily $ cap, and an admin flag, and can be disabled or re-enabled at any time.
Cached responses (source.cached = true) don't count toward the cap.
Set MCP_BOOTSTRAP_KEYS in .env (comma-separated label:key:cap tuples) to auto-create consumer keys on first start. Idempotent — existing labels are skipped.
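The label:key:daily_cap_usd tuple format can be parsed in a few lines. This is an illustrative sketch, not the server's actual bootstrap code; the function name parse_bootstrap_keys is an assumption.

```python
def parse_bootstrap_keys(raw: str) -> list[dict]:
    """Parse MCP_BOOTSTRAP_KEYS: comma-separated label:key:daily_cap_usd tuples."""
    keys = []
    for entry in filter(None, (e.strip() for e in raw.split(","))):
        label, key, cap = entry.split(":", 2)
        keys.append({"label": label, "key": key, "daily_cap_usd": float(cap)})
    return keys

print(parse_bootstrap_keys("community:cfai_scr_community_2026:5,chat:cfai_scr_chat_2026:5"))
```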
curl -X POST https://scrapper.cfai.in/api/admin/keys \
-u admin:<ADMIN_PASSWORD> \
-H 'Content-Type: application/json' \
-d '{"label":"my-script","daily_cap_usd":5,"is_admin":false}'
# → {"id":"...","label":"my-script","key":"cfai_scr_my-script_xxx...","daily_cap_usd":5}
# (the full key is shown ONCE — copy immediately)
Every successful tool call returns this shape:
{
"platform": "instagram | x | facebook | linkedin | youtube | threads",
"kind": "profile | post | comment | search_result | company |
transcript | reaction | conversation | shares | analysis |
shortcode_lookup",
"identity": { "id": "...", "url": "...", "shortcode": "..." },
"author": {
"platform_id": "...", "username": "...", "display_name": "...",
"verified": true, "follower_count": 12345,
"profile_pic_url": "...", "bio": "...", "location": "...",
"extra": { /* platform-specific extras */ }
},
"content": {
"text": "...", "language": "en",
"media": [{"type": "image", "url": "..."}],
"hashtags": [], "mentions": [], "links": []
},
"engagement": {
"likes": 0, "comments": 0, "shares": 0, "views": 0,
"saves": 0, "bookmarks": 0, "quotes": 0, "reposts": 0,
"reactions": { "like": 0, "love": 0 }
},
"timestamps": { "posted_at_utc": "...", "fetched_at_utc": "..." },
"flags": {
"is_paid_partnership": false, "is_reply": false,
"is_repost": false, "is_quote": false, "is_monetized": false,
"possibly_sensitive": false, "age_restricted": false
},
"raw": { /* full original backend response, untouched */ },
"source": {
"backend": "apify | jarvis | scrapper | insights",
"actor_or_endpoint": "apidojo/tweet-scraper",
"cached": false, "cost_usd": 0.0004, "latency_ms": 6500
}
}
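Because every platform returns the same shape, consumers can write one handler for all six. A minimal sketch of reading an Envelope dict; summarize is a hypothetical helper, not part of the server:

```python
def summarize(envelope: dict) -> str:
    """One-line summary from a unified Envelope, regardless of platform."""
    a = envelope.get("author", {}) or {}
    e = envelope.get("engagement", {}) or {}
    return (f"[{envelope['platform']}/{envelope['kind']}] "
            f"@{a.get('username')} · {e.get('likes', 0)} likes · "
            f"cached={envelope.get('source', {}).get('cached')}")

sample = {
    "platform": "x", "kind": "post",
    "author": {"username": "MissMalini"},
    "engagement": {"likes": 42},
    "source": {"cached": False},
}
print(summarize(sample))  # [x/post] @MissMalini · 42 likes · cached=False
```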
For tools that return many records (posts, comments, search), the wrapper is:
{
"platform": "x", "kind": "post",
"items": [ /* array of envelopes above */ ],
"count": 30,
"source": { /* aggregate */ },
"cursor": null
}
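The items array makes batch analytics trivial. A sketch of aggregating one metric across an EnvelopeList; total_engagement is illustrative, not a server API:

```python
def total_engagement(envelope_list: dict, metric: str = "likes") -> int:
    """Sum one engagement metric across an EnvelopeList's items; missing values count as 0."""
    return sum((item.get("engagement") or {}).get(metric) or 0
               for item in envelope_list.get("items", []))

batch = {"platform": "x", "kind": "post", "count": 2,
         "items": [{"engagement": {"likes": 10}}, {"engagement": {"likes": 5}}],
         "cursor": None}
print(total_engagement(batch))  # 15
```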
Get a profile/page snapshot from any platform.
POST /v1/tools/scrape_profile
{ "platform": "x", "identifier": "MissMalini" }
identifier: handle (MissMalini or @MissMalini), URL, or platform-native id. Returns: Envelope(kind=profile).
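Since the server accepts a handle, @handle, URL, or native id, a client that wants to canonicalize before calling can do so cheaply. A sketch under that assumption; normalize_identifier is a hypothetical client-side helper:

```python
from urllib.parse import urlparse

def normalize_identifier(identifier: str) -> str:
    """Reduce "@handle" or a profile URL to a bare handle (client-side convenience only)."""
    if identifier.startswith(("http://", "https://")):
        # e.g. https://x.com/MissMalini -> MissMalini
        return urlparse(identifier).path.strip("/").split("/")[0].lstrip("@")
    return identifier.lstrip("@")

print(normalize_identifier("@MissMalini"))               # MissMalini
print(normalize_identifier("https://x.com/MissMalini"))  # MissMalini
```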
List recent posts from a profile / channel / page.
POST /v1/tools/scrape_posts
{ "platform": "x",
"identifier": "MissMalini",
"count": 30,
"since_date": "2026-04-01" }
Per-platform caps: X ~1000, FB ~200, LinkedIn ~50, YT ~100; Threads returns only the 4 posts bundled inline with the profile. since_date is ISO-8601 (YYYY-MM-DD).
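A client can clamp its requested count locally to avoid over-asking. The POST_CAPS mapping below just mirrors the numbers above for illustration; it is not exported by the server:

```python
# Approximate per-platform post caps from the docs above (illustrative, not server code).
POST_CAPS = {"x": 1000, "facebook": 200, "linkedin": 50, "youtube": 100, "threads": 4}

def clamp_count(platform: str, requested: int) -> int:
    """Clamp a scrape_posts count to the platform's documented cap, if one is known."""
    return min(requested, POST_CAPS.get(platform, requested))

print(clamp_count("linkedin", 200))  # 50
print(clamp_count("x", 30))          # 30
```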
Fetch a single post by URL with full per-post engagement.
POST /v1/tools/scrape_post
{ "platform": "youtube",
"post_url": "https://www.youtube.com/watch?v=zRtGL0-5rg4" }
Comments / replies on a post.
POST /v1/tools/scrape_comments
{ "platform": "instagram",
"post_url": "https://www.instagram.com/p/DX10_sjCEKZ/",
"count": 50 }
Instagram routes through scrapper /api/v1/comments (sequential, ~2s/call due to Naveen's rule).
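The sequential spacing rule can be sketched with a threading.Lock plus a minimum gap between calls. This is an illustration of the behavior, not the server's implementation (which applies it around scrapper HTTP calls with a 1.5s delay):

```python
import threading, time

class SequentialGate:
    """Serialize calls with a minimum spacing between them (sketch of the 1.5s rule)."""
    def __init__(self, delay_s: float = 1.5):
        self._lock = threading.Lock()
        self._delay = delay_s
        self._last = 0.0

    def __enter__(self):
        self._lock.acquire()
        wait = self._delay - (time.monotonic() - self._last)
        if wait > 0:
            time.sleep(wait)  # enforce the gap since the previous call finished
        return self

    def __exit__(self, *exc):
        self._last = time.monotonic()
        self._lock.release()

gate = SequentialGate(delay_s=0.1)  # production uses 1.5
t0 = time.monotonic()
for _ in range(3):
    with gate:
        pass  # the scrapper HTTP call would go here
elapsed = time.monotonic() - t0
print(f"3 gated calls took {elapsed:.2f}s")
```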
Keyword / hashtag search where supported.
POST /v1/tools/scrape_search
{ "platform": "x",
"query": "#Bollywood",
"count": 30,
"filters": {
"language": "en",
"min_likes": 50,
"min_retweets": 10,
"only_verified": true,
"start": "2026-04-01",
"end": "2026-04-30"
} }
X filters: language, min_likes, min_retweets, min_replies, only_verified, only_blue, only_image, only_video, start, end, sort. YouTube filters: date_filter, video_type, length_filter, is_hd, has_subtitles, is_live, sort.
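Request bodies for scrape_search are plain JSON, so a thin builder with filter-name validation catches typos before spending money. A hedged sketch; build_x_search is illustrative, with the allowed set taken from the X filter list above:

```python
def build_x_search(query: str, count: int = 30, **filters) -> dict:
    """Assemble a scrape_search body for X, rejecting unknown filter names early."""
    allowed = {"language", "min_likes", "min_retweets", "min_replies",
               "only_verified", "only_blue", "only_image", "only_video",
               "only_quote", "start", "end", "sort"}
    unknown = set(filters) - allowed
    if unknown:
        raise ValueError(f"unsupported X filters: {sorted(unknown)}")
    return {"platform": "x", "query": query, "count": count, "filters": filters}

body = build_x_search("#Bollywood", min_likes=50, language="en")
print(body)
```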
| Tool | Platform | Purpose |
|---|---|---|
| scrape_x_conversation | X | Every tweet in a thread (= original + nested replies). Args: conversation_id, depth (default 50) |
| scrape_linkedin_reactions | LinkedIn | Individual reactor profiles on a post. Unique to LinkedIn — no other platform exposes "who liked this". Args: post_url, max_reactions (default 50) |
| scrape_linkedin_company | LinkedIn | Company page details: employees, jobs, funding, similar pages. Args: company_url |
| scrape_youtube_transcript | YouTube | Subtitles for a video, in any available language. Args: video_url, language (default "en") |
| scrape_facebook_metadata | Facebook | Page metadata only (categories, websites, contact). No posts; use scrape_posts for posts. Args: page_url |
| scrape_threads_replies | Threads | Full reply tree on a Threads post. Args: post_url, max_replies (default 30) |
| scrape_ig_shares | Instagram | Share count specifically (only IG exposes this on a per-post basis). Routes to scrapper /share-count. Args: post_url |
| analyze_ig_post | Instagram | Multimodal LLM verdict (post + creator + comments + thumbnail → viral / strong / avg / under). Routes through cfai-comment-insights. Slow (15-30s). Args: post_url |
| resolve_ig_shortcode | any | Look up which WhatsApp groups have shared a given social URL. Despite the name, supports FB / X / YT / LinkedIn / Threads URLs (multi-platform parser since 2026-04-30). Args: shortcode_or_url |
| Tool | Returns |
|---|---|
| list_platforms | {platforms: ["x","facebook","linkedin","youtube","threads","instagram"], count: 6} |
| list_capabilities | Capability matrix: which scrape_* methods are wired for which platform. |
| get_actor_health | Pings every backend (Apify, Jarvis, scrapper, insights). Takes ~30s due to scrapper sequential lock. |
| get_quota_status | Apify plan + monthly usage. |
| Capability | 𝕏 | FB | LI | YT | Th | IG |
|---|---|---|---|---|---|---|
| Profile / page metadata | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Post text / caption | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Likes | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Views / impressions | ✓ | ✗ | ✗ | ✓ | ✗ | ✓ |
| Reposts / shares | ✓ | ✓ | ✓ | n/a | ✓ | ✓ |
| Quote count | ✓ | n/a | n/a | n/a | ✓ | n/a |
| Bookmark count | ✓ | ✗ | ✗ | n/a | ✗ | ✓ |
| Reactions breakdown | n/a | partial | ✓ | n/a | n/a | n/a |
| Comment count | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ |
| Comment text scraping | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Individual reactor profiles | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| Hashtags extracted | ✓ | ✗ | ✗ | ✓ | ✗ | ✓ |
| Mentions extracted | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ |
| Link cards / previews | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Geo / place tagging | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ |
| Author verified flag | ✓ | n/a | ✓ | ✓ | ✓ | ✓ |
| Paid partnership flag | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ |
| Video transcripts | ✗ | ✓ | ✗ | ✓ | partial | OCR-able |
| Date range filter | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ |
| Engagement floor filter | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Language filter | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Search by keyword | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Account creation date | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ |
✓ = wired in v0.1 · ✗ = not exposed by upstream (or not yet wired) · n/a = doesn't apply on that platform · partial = some forms.
Profile (extracted from latest tweet's author): identity.url/id; author.username/display_name/verified/follower_count/profile_pic_url/bio/location; author.extra.{following, statuses_count, media_count, favourites_count, created_at (account birthday), is_blue_verified, twitter_url}.
Post: identity.id/url/shortcode (=conversationId); author (full profile snapshot); content.text/fullText/language/source ("Web"/"iPhone"/etc); media (from extendedEntities); hashtags / mentions / links; engagement.likes/comments(=replyCount)/views/bookmarks/quotes/reposts; flags.is_reply/is_repost/is_quote/possibly_sensitive.
Filters in scrape_search: language, min_likes, min_retweets, min_replies, only_verified, only_blue, only_image, only_video, only_quote, start, end, sort.
Page metadata (scrape_profile / scrape_facebook_metadata): identity.id (=pageId)/url; author.username/display_name (=title)/profile_pic_url; author.extra.{categories, websites, info, messenger, cover_photo, personal_profile (if URL was a personal account)}.
Post: identity.postId/url; author.username (=pageName)/display_name; content.text/media; engagement.likes/shares; engagement.reactions (LIKE+LOVE+total).
Gaps: no comment count, no view count, no full reaction breakdown.
Profile (48 fields — biggest payload): identity.id/url; author.username (=publicIdentifier)/display_name (=firstName+lastName)/verified/follower_count/profile_pic_url/bio (=headline)/location; author.extra.{about, connections_count, current_position, experience[], education[], skills[], certifications, honors_and_awards, languages, publications, patents, projects, services, volunteering, received_recommendations, registered_at, influencer, premium, open_to_work, hiring}.
Post / Comment (returned mixed, discriminated by raw.type): POST: author, content, engagement.{likes,comments,reposts}, media. COMMENT: author (commenter), content (=commentary), engagement.likes.
Reactions (unique): EnvelopeList(kind=reaction), each with author = the individual reactor (with their bio + photo).
Company (36 fields): identity.id (=company_id)/url; author.username (=universal_name_id)/display_name/bio (=tagline or description)/profile_pic_url/follower_count/location (=headquarters); author.extra.{industry, employee_count, company_size, company_type, founded, specialties, website, funding, investors, jobs, locations, affiliated_pages, similar_pages, crunchbase_url, stock_info}.
Channel profile: identity.id (=channelId)/url; author.username (=channelUsername)/display_name (=channelName)/verified (=isChannelVerified)/follower_count (=numberOfSubscribers)/profile_pic_url (=channelAvatarUrl)/bio (=channelDescription)/location; author.extra.{channel_url, channel_banner_url, channel_total_videos, channel_total_views, channel_joined_date, about_channel_info}.
Video / post: identity.id/url; author = full channel object (every video record bundles channel data); content.text (=title or description), media, hashtags, descriptionLinks; engagement.likes/comments (=commentsCount)/views (=viewCount); flags.is_monetized/age_restricted/is_paid_partnership (=isPaidContent); duration in raw.
Filters in scrape_search: date_filter, video_type, length_filter, is_hd, has_subtitles, has_cc, is_live, sort. Transcript: kind=transcript with subtitles in content.text (SRT or whatever format subtitlesFormat was set to).
Profile (27 fields + bundled latest 4 posts): identity.id (=pk)/url; author.username/display_name (=full_name)/verified (=is_verified)/follower_count/profile_pic_url/bio (=biography); author.extra.{is_private, bio_links, profile_tags, transparency_label, hd_profile_pic_versions}.
Post (33 fields): identity.id (=pk)/url (=canonical_url)/shortcode (=code); author = the post user; content.text/media (image/video/carousel/audio); engagement.likes (=like_count); engagement.comments (=text_post_app_info.direct_reply_count); engagement.reposts (=repost_count); engagement.quotes (=quote_count); engagement.shares (=reshare_count); flags.is_paid_partnership.
No view count — Meta hasn't exposed views on Threads' API. engagement.views will always be None.
Profile (Jarvis primary, scrapper fallback on 404 or followers=0 anomaly): identity.id/url; author.username/display_name (=full_name)/verified (=is_verified)/follower_count/profile_pic_url/bio (=biography); author.extra.{following, post_count (=media_count), is_private, is_active_on_text_post_app}. When scrapper fallback fires, extra also includes engagement_rate, avg_likes, avg_comments, avg_views, recentPosts[].
Post: identity.id/url/shortcode; author.username; content.text (=caption); engagement.likes/comments/views/shares (only via scrapper /share-count fallback).
Comments (scrapper only, sequential 1.5s lock): identity.id; author.username/display_name/profile_pic_url; content.text; engagement.likes. Top-100-liked + 100-random-sample by default (PHA cost-control).
Shares (scrapper /share-count specifically): engagement.shares = reshareCount, plus likes/comments/views/playCount.
Analysis (cfai-comment-insights multimodal): kind=analysis; raw.data = full LLM verdict with verdict (viral/strong/avg/under), content_analysis (visual_summary from thumbnail, hooks, CTA, branding), audience (segments, geo, lang_mix), reaction (sentiment, themes, notable_quotes), signals, recommendations.
HTTP-Basic auth using ADMIN_PASSWORD env var (no username check). Used by the React SPA at /admin/.
| Method | Path | Purpose |
|---|---|---|
| GET | /api/admin/overview | Today + 7d + 30d spend cards + daily series |
| GET | /api/admin/activity?group_by=tool\|key\|day&days=N | Spend rollup by dimension |
| GET | /api/admin/logs?limit=&offset=&tool=&platform=&api_key_id=&since= | Audit log with filters |
| GET | /api/admin/logs/{id} | Single log entry, full args + error |
| GET | /api/admin/keys | List all API keys (masked prefixes) |
| POST | /api/admin/keys | Create key — full key shown ONCE |
| POST | /api/admin/keys/{id}/disable | Disable a key |
| POST | /api/admin/keys/{id}/enable | Re-enable a key |
| DELETE | /api/admin/keys/{id} | Permanently delete |
| PATCH | /api/admin/keys/{id}/cap | Update daily $ cap |
| GET | /api/admin/keys/{id}/spend | Today's spend + request count |
| GET | /api/admin/health | Backend ping (same as get_actor_health) |
| GET | /api/admin/quota | Apify plan/usage |
Every successful call records cost_usd from the upstream backend.
Example rate: apidojo/tweet-scraper at $0.40/1k tweets. The daily $ cap is enforced PER API key; once exceeded → HTTP 429 until UTC midnight. Cached responses (source.cached = true) don't count toward the cap.
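The daily_spend accounting can be sketched with SQLite upserts. Only the table name daily_spend comes from the docs above; the columns and the charge function are assumptions for illustration:

```python
import sqlite3, datetime

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE daily_spend (key_id TEXT, day TEXT, spent_usd REAL, "
           "PRIMARY KEY (key_id, day))")

def charge(key_id: str, cost_usd: float, cap_usd: float) -> bool:
    """Record a call's cost; return False (caller maps this to HTTP 429) if it would exceed the cap."""
    day = datetime.datetime.now(datetime.timezone.utc).date().isoformat()
    row = db.execute("SELECT spent_usd FROM daily_spend WHERE key_id=? AND day=?",
                     (key_id, day)).fetchone()
    spent = row[0] if row else 0.0
    if spent + cost_usd > cap_usd:
        return False
    db.execute("INSERT INTO daily_spend VALUES (?,?,?) "
               "ON CONFLICT(key_id, day) DO UPDATE SET spent_usd=excluded.spent_usd",
               (key_id, day, spent + cost_usd))
    return True

print(charge("community", 4.0, 5.0))  # True
print(charge("community", 2.0, 5.0))  # False — would exceed the $5 cap
```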
For a 100-creator monthly sweep × 5 Apify-backed platforms × 30 posts each = 15,000 records:
| Platform | Records | Cost basis | Monthly $ |
|---|---|---|---|
| X (3k posts) | 3,000 | $0.40/1k | $1.20 |
| FB (3k posts) | 3,000 | ~$4/1k | $12.00 |
| LinkedIn profiles (300) | 300 | $4/1k | $1.20 |
| LinkedIn posts (3k) | 3,000 | ~$17/1k bundle | $51.00 |
| YouTube (3k videos) | 3,000 | ~$2.40/1k | $7.20 |
| Threads (300) | 300 | $5/1k | $1.50 |
| Instagram (Jarvis+scrapper) | — | $0 (in-house) | $0.00 |
| Total monthly | — | — | ~$74 |
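The total can be reproduced with simple arithmetic over the rows above (rates as listed; purely illustrative):

```python
# (label, records, $ per 1k records) — figures copied from the cost table above.
rows = [
    ("x", 3000, 0.40), ("facebook", 3000, 4.00),
    ("linkedin_profiles", 300, 4.00), ("linkedin_posts", 3000, 17.00),
    ("youtube", 3000, 2.40), ("threads", 300, 5.00),
]
total = sum(records / 1000 * rate_per_1k for _, records, rate_per_1k in rows)
print(f"${total:.2f}/month")  # $74.10/month
```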
Live at https://scrapper.cfai.in on cfai-llm-gateway EC2 (15.206.98.152, Tailscale 100.86.128.89), co-located with cfai-chat-backend.
- Code: /opt/cfai-scrapper-mcp
- Env: /etc/cfai-scrapper-mcp/.env (chmod 640, root:ubuntu)
- DB: /var/lib/cfai-scrapper-mcp/scrapper.db
- Service: cfai-scrapper-mcp.service, port 5800
- Nginx: /etc/nginx/sites-available/scrapper.cfai.in.conf
- DNS: scrapper.cfai.in → 15.206.98.152 (TTL 300)

# From local repo, after committing changes
./deploy.sh ubuntu@100.86.128.89
# What it does:
# 1. rsync source (excluding .venv, __pycache__, .env, *.db, .git)
# 2. python venv + pip install -e .
# 3. systemd unit install + daemon-reload
# 4. nginx vhost install (idempotent)
# 5. systemctl restart + curl /healthz
# SCP setup-host.sh to the EC2 + run it once
ssh ubuntu@<host> "sudo bash" < deploy/setup-host.sh
# Then EDIT the production .env that setup-host.sh created
sudo nano /etc/cfai-scrapper-mcp/.env
# Then run deploy.sh from local
| Label | Daily cap | Used by |
|---|---|---|
master | unlimited | admin / debugging |
community | $10/day | community.cfai.in |
chat | $5/day | cfai-chat backend (already wired) |
comment-insights | $5/day | cfai-comment-insights (when it deploys) |
JHS-affiliate + trend-radar deferred per Tushar's instruction — those services keep direct integrations.
The chat agent (chat.cfai.in) now has 5 scraper tools registered in its ALL_TOOLS: scrape_profile, scrape_posts, scrape_post, scrape_comments, scrape_search. Each takes a platform arg and routes to scrapper.cfai.in via SCRAPPER_MCP_API_KEY env var. Errors map cleanly: 401 → "connector unavailable", 429 → "daily $ cap reached", 501 → "tool not supported for that platform".
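The status-to-message mapping described above can be sketched as follows; map_scraper_error is a hypothetical helper, not the chat backend's actual function, though the three messages come from the docs:

```python
ERROR_MESSAGES = {
    401: "connector unavailable",
    429: "daily $ cap reached",
    501: "tool not supported for that platform",
}

def map_scraper_error(status_code: int) -> str:
    """Translate an upstream HTTP status into the user-facing message."""
    return ERROR_MESSAGES.get(status_code, f"scraper error (HTTP {status_code})")

print(map_scraper_error(429))  # daily $ cap reached
```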
Three wrapper files (jarvis_ig.py, scrapper_client.py, scrapper_post.py) check the USE_SCRAPPER_MCP env flag. When true, they delegate to scrapper_mcp_client.py which calls scrapper.cfai.in. Same canonical return shapes preserved so callers + tests don't change.
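The flag check can be sketched like this; the function bodies are illustrative stand-ins for the three wrapper files, not the repo's actual code — the point is that both paths return the same canonical shape:

```python
import os

def _via_mcp(username: str) -> dict:
    # Would POST /v1/tools/scrape_profile on scrapper.cfai.in in the real wrapper.
    return {"username": username, "backend": "scrapper.cfai.in"}

def _direct(username: str) -> dict:
    # Legacy path: direct Jarvis/scrapper integration.
    return {"username": username, "backend": "jarvis-direct"}

def get_ig_profile(username: str) -> dict:
    """Same canonical return shape either way, so callers and tests don't change."""
    if os.environ.get("USE_SCRAPPER_MCP", "").lower() in ("1", "true", "yes"):
        return _via_mcp(username)
    return _direct(username)

os.environ["USE_SCRAPPER_MCP"] = "true"
print(get_ig_profile("missmalini")["backend"])  # scrapper.cfai.in
```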
| Symptom | Cause / Fix |
|---|---|
| x402-payment-required from Apify | Free plan can't run pay-per-event actors. Upgrade to Starter at apify.com/billing. |
| JARVIS_API_KEY not set | IG-specific tools need JARVIS_API_KEY. Set it in .env or skip IG tools. |
| All keys exhausted for /api/v1/comments: 401 | Wrong/missing SCRAPPER_PHA_KEY_*. Verify with the scrapper team. |
| Admin SPA shows "admin-ui not built yet" | Run cd admin-ui && npm install && npm run build. |
| Daily cap of $5.00 reached (HTTP 429) | Per-key cap exceeded. Bump it via Admin Dashboard → API Keys → Edit cap, or wait until UTC midnight. |
| Platform call hangs > 5min | Apify run stuck. Check console.apify.com/runs for the actor. |
| HTTP 501 from a tool | Capability not wired for that platform — see Capability Matrix above. |
| Browser HTTP-Basic prompt doesn't appear at /admin/ | The SPA uses an in-app login form; type the password there. ADMIN_PASSWORD is in /etc/cfai-scrapper-mcp/.env on the host. |
| IG comments call returns 0 items | Post may be too new for indexer. Try again in 30+ min. |
| Threads engagement.views always None | Meta hasn't exposed views on Threads' API. Known limitation, not a bug. |
HANDLE=missmalini
KEY=cfai_scr_community_2026
for PLATFORM in x facebook linkedin youtube threads; do
echo "=== $PLATFORM ==="
curl -sS -X POST https://scrapper.cfai.in/v1/tools/scrape_profile \
-H "X-API-Key: $KEY" -H 'Content-Type: application/json' \
-d "{\"platform\":\"$PLATFORM\",\"identifier\":\"$HANDLE\"}" \
| python3 -c "
import sys, json
d = json.load(sys.stdin)
a = d.get('author', {})
print(f' {a.get(\"username\")} · {a.get(\"follower_count\"):,} followers · verified={a.get(\"verified\")}')"
done
curl -X POST https://scrapper.cfai.in/v1/tools/scrape_search \
-H 'X-API-Key: cfai_scr_trendradar_2026' \
-H 'Content-Type: application/json' \
-d '{
"platform": "x",
"query": "#cfai",
"count": 50,
"filters": {
"language": "en",
"min_likes": 10,
"start": "2026-05-01",
"only_blue": true
}
}'
curl -X POST https://scrapper.cfai.in/v1/tools/scrape_comments \
-H 'X-API-Key: cfai_scr_community_2026' \
-H 'Content-Type: application/json' \
-d '{
"platform": "instagram",
"post_url": "https://www.instagram.com/p/DX10_sjCEKZ/",
"count": 200
}'
curl -X POST https://scrapper.cfai.in/v1/tools/scrape_linkedin_company \
-H 'X-API-Key: cfai_scr_master_2026' \
-H 'Content-Type: application/json' \
-d '{ "company_url": "https://www.linkedin.com/company/creativefuel/" }'
curl -X POST https://scrapper.cfai.in/v1/tools/scrape_youtube_transcript \
-H 'X-API-Key: cfai_scr_master_2026' \
-H 'Content-Type: application/json' \
-d '{
"video_url": "https://www.youtube.com/watch?v=zRtGL0-5rg4",
"language": "en"
}'
curl -X POST https://scrapper.cfai.in/v1/tools/scrape_x_conversation \
-H 'X-API-Key: cfai_scr_master_2026' \
-H 'Content-Type: application/json' \
-d '{ "conversation_id": "2050084119883964840", "depth": 50 }'