Knowledge Graphs for AI Coding Assistants
Turn any folder of code, docs, papers, images, or videos into a queryable knowledge graph. Type /tracely360 in your AI coding assistant — it reads your files, builds a graph, and gives you back structure you didn't know was there.
```
$ pip install tracely360 && tracely360 install
$ /tracely360 ./my-project
Scanning corpus... 127 files (Python, TypeScript, Go)
✓ 1,842 nodes · 3,291 edges
✓ 12 communities detected
✓ God nodes: AuthService, UserRepo, APIGateway
✓ Surprise: RateLimiter → CacheLayer
✓ 14 API endpoints mapped (FastAPI, Express)
Outputs → tracely360-out/  graph.html  graph.json  GRAPH_REPORT.md
Query cost: 2.1k tokens (vs 148k raw) · 70× reduction
```
What is tracely360?
tracely360 is a local knowledge-graph engine that transforms folders of code, documentation, images, and media into queryable, interactive knowledge graphs. Instead of reading raw files, your AI coding assistant works with a structured graph that reveals relationships, dependencies, and architectural patterns that weren't visible before.
The core extraction is deterministic and runs entirely offline using tree-sitter — 25 languages, zero LLM calls, zero API costs. A second optional pass uses your assistant's existing model to extract semantic meaning from documentation and add it to the same graph.
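To make "deterministic extraction" concrete, here is the same idea sketched with Python's stdlib ast module. tracely360 itself uses tree-sitter across 25 languages; the function and sample source below are illustrative only, not the tool's actual code:

```python
import ast

def extract_symbols(source: str) -> list[tuple[str, str]]:
    """Deterministically list (kind, name) pairs found in Python source.

    No model calls, no network: the same source always yields the
    same symbols, which is what makes the extraction cacheable.
    """
    symbols = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            symbols.append(("class", node.name))
        elif isinstance(node, ast.FunctionDef):
            symbols.append(("function", node.name))
    return symbols

code = "class AuthService:\n    def login(self): pass\n"
print(extract_symbols(code))  # [('class', 'AuthService'), ('function', 'login')]
```

Each extracted symbol becomes a graph node; a second pass can then resolve cross-file references into edges.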
On a 52-file mixed corpus of code, papers, and diagrams, an average query costs ~1.7k tokens against the graph versus ~123k tokens reading raw files — a 71.5× token reduction. The graph persists across sessions and stays honest about what was found versus what was guessed.
Everything you need to understand a codebase
tracely360 unifies static analysis, semantic extraction, and graph clustering into a single skill any AI coding assistant can invoke.
Deterministic AST Extraction
25 languages via tree-sitter: Python, JS, TS, Go, Rust, Java, C/C++, Ruby, C#, Kotlin, Scala, PHP, Swift, Lua, Zig, and more. Two-pass extraction for cross-file resolution.
Knowledge Graph Build
Merges extracted nodes and edges into a NetworkX graph. Leiden community detection groups related concepts by topology — no LLM calls, no embeddings required.
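The build step can be sketched in a few lines of NetworkX. tracely360 uses Leiden for community detection; the stand-in below uses NetworkX's own greedy modularity clustering, which is likewise purely topological (no LLM calls, no embeddings). The node names are hypothetical:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Merge extracted (source, target) edges into a single graph.
edges = [
    ("AuthService", "UserRepo"), ("AuthService", "TokenStore"),
    ("UserRepo", "TokenStore"),                      # auth cluster
    ("RateLimiter", "CacheLayer"), ("RateLimiter", "Clock"),
    ("CacheLayer", "Clock"),                         # infra cluster
    ("TokenStore", "CacheLayer"),                    # bridge between the two
]
G = nx.Graph(edges)

# Cluster by topology alone; Leiden (igraph/leidenalg) plays this role
# in tracely360, greedy modularity is a reasonable stand-in here.
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])
```

Two dense clusters joined by a single bridge edge come back as two communities, with the bridge surfacing later as a "surprise" edge.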
God Nodes & Surprises
Identifies the highest-degree god nodes at the heart of your system. Flags unexpected cross-community edges worth investigating, each with a plain-English explanation.
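Both signals fall out of basic graph queries. A minimal sketch with NetworkX, using hypothetical node names and community labels (in the real pipeline the labels come from the clustering step):

```python
import networkx as nx

G = nx.Graph([
    ("AuthService", "UserRepo"), ("AuthService", "APIGateway"),
    ("AuthService", "TokenStore"), ("TokenStore", "CacheLayer"),
    ("RateLimiter", "CacheLayer"),
])
# Community labels as produced by the clustering step (assumed here).
community = {"AuthService": 0, "UserRepo": 0, "APIGateway": 0,
             "TokenStore": 0, "RateLimiter": 1, "CacheLayer": 1}

# God nodes: the highest-degree vertices in the graph.
god_nodes = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
print(god_nodes[0])  # ('AuthService', 3)

# Surprise edges: endpoints that live in different communities.
surprises = [(u, v) for u, v in G.edges if community[u] != community[v]]
print(surprises)  # [('TokenStore', 'CacheLayer')]
```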
API Endpoint Discovery
Static analysis of route decorators across 13 frameworks: Flask, FastAPI, Django, Express, NestJS, Next.js, Spring, Laravel, Rails, Gin, Echo, Chi, ASP.NET. No code execution.
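Decorator-based route discovery without executing code reduces to walking the AST. A minimal sketch for Flask/FastAPI-style decorators (the real matcher covers 13 frameworks; the attribute set and sample app below are illustrative):

```python
import ast

SOURCE = '''
from flask import Flask
app = Flask(__name__)

@app.route("/users", methods=["GET"])
def list_users(): ...

@app.post("/users")
def create_user(): ...
'''

# Match decorators like @app.route("/...") or @app.post("/...").
ROUTE_ATTRS = {"route", "get", "post", "put", "patch", "delete"}
endpoints = []
for node in ast.walk(ast.parse(SOURCE)):
    if isinstance(node, ast.FunctionDef):
        for dec in node.decorator_list:
            if (isinstance(dec, ast.Call)
                    and isinstance(dec.func, ast.Attribute)
                    and dec.func.attr in ROUTE_ATTRS
                    and dec.args
                    and isinstance(dec.args[0], ast.Constant)):
                endpoints.append((dec.func.attr, dec.args[0].value, node.name))
print(endpoints)
```

Because the source is only parsed, never imported, route discovery is safe to run on untrusted repositories.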
Multimodal Inputs
Code, markdown, PDFs, images, screenshots, diagrams, whiteboard photos, video, and audio. Video/audio transcribed via faster-whisper with domain-aware prompts.
Interactive Outputs
graph.html (vis.js visualization), GRAPH_REPORT.md (one-page audit), graph.json (persistent queryable graph), and wiki/ (Obsidian-compatible vault with bidirectional wikilinks).
14 AI coding assistants, one skill
tracely360 ships as a slash command. Type /tracely360 . in any supported assistant; it writes its results to tracely360-out/, and subsequent queries read the graph instead of raw files.
MCP Server tools
- query_graph: natural-language query
- get_node: full node details
- get_neighbors: adjacency lookup
- get_community: community members
- god_nodes: top connected nodes
- graph_stats: graph summary
- shortest_path: path between nodes

A modular pipeline
Each stage is an isolated module. Contributors can extend any step independently without touching the rest of the pipeline.
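Because graph.json persists between runs, tools like shortest_path amount to plain graph queries against the loaded file. A sketch assuming a NetworkX node-link JSON layout (the layout is an assumption here, not the documented format):

```python
import json
import networkx as nx

# Build a tiny graph and round-trip it through JSON the way a
# persisted graph.json might be stored and reloaded.
G = nx.Graph([("AuthService", "UserRepository"),
              ("UserRepository", "Database")])
payload = json.dumps(nx.node_link_data(G))
G2 = nx.node_link_graph(json.loads(payload))

# The shortest_path MCP tool corresponds to an ordinary path query:
print(nx.shortest_path(G2, "AuthService", "Database"))
```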
Up in 60 seconds
tracely360 is distributed on PyPI. Install it, run tracely360 install to auto-detect your AI assistant, then type /tracely360 . to build your first graph.
Output structure
```
tracely360-out/
├── GRAPH_REPORT.md   ← one-page audit
├── graph.json        ← persistent graph
├── graph.html        ← interactive viz
├── wiki/             ← Obsidian vault
└── cache/            ← incremental cache
```
Optional extras
```shell
# Requires Python 3.10+
pip install tracely360
tracely360 install        # auto-detects your AI assistant

# In your AI coding assistant:
/tracely360 .             # build graph for current folder
/tracely360 --update      # incremental rebuild
/tracely360 --wiki        # generate Obsidian vault

# CLI utilities:
tracely360 query "How is AuthController connected?"
tracely360 path "AuthService" "UserRepository"
tracely360 watch          # auto-rebuild on file changes
```
Real corpora, real results
The repository ships with reproducible corpora demonstrating tracely360 on both small libraries and large mixed code-and-paper collections.
httpx
God nodes
Surprise edge: DigestAuth → Response (cross-community)
Karpathy repos
Avg query: ~1.7k tokens vs ~123k reading raw files. 3 GPT repos + 5 attention papers + 4 diagrams.
How tracely360 fits in the landscape
| Tool | Purpose | Strength | Limitation vs tracely360 |
|---|---|---|---|
| tracely360 | Local knowledge graph for AI assistants | Multi-modal, offline, 14 platform integrations | — |
| Sourcegraph | Cross-repo code search | Enterprise-grade navigation | Not a knowledge graph; no design semantics |
| Code2Vec | Function-level embeddings | Vector retrieval & classification | No graph structure, no multi-modal input |
| Neo4j | General graph database | Powerful Cypher queries | Does not generate graphs from code itself |
Secure by design
tracely360 is released under the Apache 2.0 License. Its core dependencies — NetworkX (BSD-3) and tree-sitter (MIT) — are both permissively licensed, with no conflicts. The project collects no telemetry.
The only outbound network call is the optional semantic-extraction step, which uses your own configured AI model API key. Only semantic descriptions of documents are transmitted — never raw source code.
No raw code transmitted
Only semantic summaries sent to LLMs. AST extraction is 100% local.
SSRF protection
URL fetching restricted to http/https, size- and time-bounded via security.py.
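In the spirit of that check, a minimal scheme guard looks like the following. The function name and limit constants are illustrative assumptions; the real values live in security.py:

```python
from urllib.parse import urlparse

MAX_BYTES = 5_000_000   # assumed size bound
TIMEOUT_S = 10          # assumed time bound

def is_fetchable(url: str) -> bool:
    """Allow only http/https URLs; reject file://, ftp://, etc."""
    return urlparse(url).scheme.lower() in {"http", "https"}

print(is_fetchable("https://example.com/doc.pdf"))  # True
print(is_fetchable("file:///etc/passwd"))           # False
```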
Path traversal prevention
All output paths are containment-checked before write.
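A containment check of this kind can be sketched with pathlib; the function name below is hypothetical, not tracely360's actual API:

```python
from pathlib import Path

def safe_output_path(out_dir: str, name: str) -> Path:
    """Resolve a write target and reject anything escaping out_dir."""
    base = Path(out_dir).resolve()
    target = (base / name).resolve()
    if not target.is_relative_to(base):   # Python 3.9+
        raise ValueError(f"path escapes output dir: {name}")
    return target

safe_output_path("tracely360-out", "graph.html")          # fine
# safe_output_path("tracely360-out", "../../etc/crontab") # raises ValueError
```

Resolving before comparing is what defeats `../` sequences and symlink tricks that a plain string-prefix check would miss.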
XSS-safe outputs
Node labels are HTML-escaped before being written into graph.html.
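The escaping step is the standard one from Python's stdlib, shown here on a hostile label:

```python
import html

label = "<img src=x onerror=alert(1)>"
safe = html.escape(label)  # escapes &, <, >, and quotes
print(safe)                # &lt;img src=x onerror=alert(1)&gt;
```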
No telemetry
Zero analytics, zero beacons, zero external pings beyond your own LLM key.
Apache 2.0 License
Commercial use, modification, and distribution permitted with no restrictions.
Deeper guides
Knowledge Graphs for AI Coding Assistants
Why structural graphs beat vector RAG for code understanding.
CLI Command Reference
Every /tracely360 and tracely360 command in one place.
AI Assistant Integration
Claude Code, Codex, Cursor, MCP server, and git hooks step by step.
tracely360 vs Alternatives
Honest comparison against Sourcegraph, Code2Vec, and Neo4j.
FAQ

Does tracely360 send my source code to an LLM?
No. The AST extraction pass runs entirely locally using tree-sitter — no network calls, no API keys required. Only during the optional semantic extraction step are semantic summaries of documents sent to an LLM, never raw source code.
Which AI coding assistants are supported?
14 platforms: Claude Code, OpenAI Codex, OpenCode, Aider, Cursor, VS Code Copilot Chat, GitHub Copilot CLI, OpenClaw, Factory Droid, Trae, Gemini CLI, Hermes, Kiro, and Google Antigravity. Each has a matching install command.
How does it scale to large codebases?
Leiden clustering uses approximate betweenness centrality sampling for graphs over 5,000 nodes. Per-file SHA256 caching means incremental rebuilds only process changed files, so subsequent updates are fast regardless of total codebase size.
Can I use tracely360 commercially?
Yes. tracely360 is released under the Apache 2.0 License, which permits commercial use, modification, and distribution. Its core dependencies — NetworkX (BSD-3) and tree-sitter (MIT) — are equally permissive.
Do I have to rebuild the graph after every change?
Not manually. Run tracely360 hook install once to set up post-commit and post-checkout git hooks that auto-rebuild the AST graph. Alternatively, tracely360 watch gives real-time file-system monitoring.
Ready to understand your codebase?
Install tracely360 in 60 seconds. No API key required for the first run.
pip install tracely360