Analysis Engine · Open Source

Knowledge Graphs for AI Coding Assistants

Turn any folder of code, docs, papers, images, or videos into a queryable knowledge graph. Type /tracely360 in your AI coding assistant — it reads your files, builds a graph, and gives you back structure you didn't know was there.

$ pip install tracely360

25+ Languages · 71.5× Token Reduction · 14 AI Platforms
Overview

What is tracely360?

tracely360 is a local knowledge-graph engine that transforms folders of code, documentation, images, and media into queryable, interactive knowledge graphs. Instead of reading raw files, your AI coding assistant works with a structured graph that reveals relationships, dependencies, and architectural patterns that weren't visible before.

The core extraction is deterministic and runs entirely offline using tree-sitter — 25 languages, zero LLM calls, zero API costs. A second, optional pass uses your assistant's existing model to extract semantic meaning from documentation and adds it to the same graph.

On a 52-file mixed corpus of code, papers, and diagrams, an average query costs ~1.7k tokens against the graph versus ~123k reading raw files. That is a 71.5× token reduction — persistent across sessions, honest about what was found versus guessed.
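The arithmetic behind that figure, using the rounded numbers above (the reported 71.5× comes from the unrounded counts in the repository):

```python
raw_tokens = 123_000   # avg tokens to answer a query from raw files (rounded)
graph_tokens = 1_700   # avg tokens for the same query against the graph (rounded)

reduction = raw_tokens / graph_tokens
print(f"~{reduction:.0f}x")  # ~72x from rounded inputs; 71.5x reported from exact counts
```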

Core Capabilities

Everything you need to understand a codebase

tracely360 unifies static analysis, semantic extraction, and graph clustering into a single skill any AI coding assistant can invoke.

Deterministic AST Extraction

25 languages via tree-sitter: Python, JS, TS, Go, Rust, Java, C/C++, Ruby, C#, Kotlin, Scala, PHP, Swift, Lua, Zig, and more. Two-pass extraction for cross-file resolution.

Knowledge Graph Build

Merges extracted nodes and edges into a NetworkX graph. Leiden community detection groups related concepts by topology — no LLM calls, no embeddings required.

God Nodes & Surprises

Identifies the highest-degree god nodes at the heart of your system. Flags unexpected cross-community edges worth investigating, each with a plain-English explanation.
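In graph terms, a god node is simply a node of unusually high degree. A minimal stdlib sketch of the idea — the real implementation works on the NetworkX graph, and the edge data here is illustrative:

```python
from collections import Counter

def god_nodes(edges, top=3):
    """Return the `top` highest-degree nodes from an undirected edge list."""
    degree = Counter()
    for src, dst in edges:
        degree[src] += 1
        degree[dst] += 1
    return [node for node, _ in degree.most_common(top)]

edges = [
    ("Client", "Request"), ("Client", "Response"), ("Client", "Auth"),
    ("AsyncClient", "Request"), ("AsyncClient", "Response"),
    ("DigestAuth", "Response"), ("Response", "Headers"),
]
print(god_nodes(edges, top=2))  # ['Response', 'Client']
```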

API Endpoint Discovery

Static analysis of route decorators across 13 frameworks: Flask, FastAPI, Django, Express, NestJS, Next.js, Spring, Laravel, Rails, Gin, Echo, Chi, ASP.NET. No code execution.
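The underlying technique is decorator pattern-matching on the syntax tree, never imports or execution. A stdlib `ast` sketch for Python frameworks only (the shipped analyzer covers 13 frameworks and is more thorough; names here are illustrative):

```python
import ast

def find_routes(source):
    """Statically find Flask/FastAPI-style route decorators -- no code execution."""
    routes = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        for dec in node.decorator_list:
            if (isinstance(dec, ast.Call)
                    and isinstance(dec.func, ast.Attribute)
                    and dec.func.attr in {"route", "get", "post", "put", "delete"}
                    and dec.args
                    and isinstance(dec.args[0], ast.Constant)):
                routes.append((dec.func.attr, dec.args[0].value, node.name))
    return routes

src = '''
@app.get("/users/{user_id}")
async def read_user(user_id: int): ...

@app.post("/users")
def create_user(): ...
'''
print(find_routes(src))
# [('get', '/users/{user_id}', 'read_user'), ('post', '/users', 'create_user')]
```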

Multimodal Inputs

Code, markdown, PDFs, images, screenshots, diagrams, whiteboard photos, video, and audio. Video/audio transcribed via faster-whisper with domain-aware prompts.

Interactive Outputs

graph.html (vis.js visualization), GRAPH_REPORT.md (one-page audit), graph.json (persistent queryable graph), and wiki/ (Obsidian-compatible vault with bidirectional wikilinks).

Integrations

14 AI coding assistants, one skill

tracely360 ships as a slash command. Type /tracely360 . in any supported assistant. It writes its output to tracely360-out/, and subsequent queries read the graph instead of raw files.

Claude Code · OpenAI Codex · Cursor · VS Code Copilot · GitHub Copilot CLI · Aider · OpenCode · Factory Droid · Gemini CLI · OpenClaw · Trae · Kiro · Hermes · Google Antigravity

MCP Server tools

query_graph: Natural-language query
get_node: Full node details
get_neighbors: Adjacency lookup
get_community: Community members
god_nodes: Top connected nodes
graph_stats: Graph summary
shortest_path: Path between nodes
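Under the hood, a tool like shortest_path is a standard graph query. A stdlib breadth-first sketch of the kind of lookup it performs (illustrative adjacency data, not tracely360's internal API):

```python
from collections import deque

def shortest_path(adj, start, goal):
    """Breadth-first shortest path over an adjacency dict."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path found

adj = {
    "AuthService": ["TokenStore", "UserRepository"],
    "TokenStore": ["Database"],
    "UserRepository": ["Database"],
}
print(shortest_path(adj, "AuthService", "Database"))
# ['AuthService', 'TokenStore', 'Database']
```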
Architecture

A modular pipeline

Each stage is an isolated module. Contributors can extend any step independently without touching the rest of the pipeline.

detect
Collect & classify files
extract
AST + LLM nodes/edges
build
NetworkX graph assembly
cluster
Leiden communities
analyze
God nodes & surprises
report
GRAPH_REPORT.md
export
HTML · JSON · Wiki
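The stage boundaries above can be sketched as plain functions composed into a pipeline. This is a simplified model of the flow, not the actual module interfaces, and the report/export stages are omitted:

```python
def detect(root):      # collect & classify files under a root folder
    return [{"path": f"{root}/auth.py", "kind": "code"}]

def extract(files):    # AST (and optionally LLM) nodes/edges per file
    return {"nodes": ["AuthService"], "edges": []}

def build(parts):      # merge extracted parts into one graph structure
    return {"nodes": set(parts["nodes"]), "edges": set(parts["edges"])}

def cluster(graph):    # group related nodes into communities
    graph["communities"] = [graph["nodes"]]
    return graph

def analyze(graph):    # surface god nodes & surprise edges
    graph["god_nodes"] = sorted(graph["nodes"])[:3]
    return graph

def run(root):
    return analyze(cluster(build(extract(detect(root)))))

result = run(".")
print(result["god_nodes"])  # ['AuthService']
```

Because each stage only consumes the previous stage's output, any one of them can be swapped or extended without touching the rest — which is the point of the modular design.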
Install & Run

Up in 60 seconds

tracely360 is distributed on PyPI. Install it, run tracely360 install to auto-detect your AI assistant, then type /tracely360 . to build your first graph.

Output structure

tracely360-out/
├── GRAPH_REPORT.md   ← one-page audit
├── graph.json        ← persistent graph
├── graph.html        ← interactive viz
├── wiki/             ← Obsidian vault
└── cache/            ← incremental cache

Optional extras

tracely360[mcp] · tracely360[pdf] · tracely360[video] · tracely360[neo4j] · tracely360[watch] · tracely360[leiden] · tracely360[all]
bash · install & usage
# Requires Python 3.10+
pip install tracely360
tracely360 install    # auto-detects your AI assistant

# In your AI coding assistant:
/tracely360 .         # build graph for current folder
/tracely360 --update  # incremental rebuild
/tracely360 --wiki    # generate Obsidian vault

# CLI utilities:
tracely360 query "How is AuthController connected?"
tracely360 path "AuthService" "UserRepository"
tracely360 watch      # auto-rebuild on file changes
Worked Examples

Real corpora, real results

The repository ships with reproducible corpora demonstrating tracely360 on both small libraries and large mixed code-and-paper collections.

Small corpus

httpx

6 Python files
144 nodes · 330 edges · 6 communities

God nodes

Client · AsyncClient · Response · Request

Surprise edge: DigestAuth → Response (cross-community)

Mixed corpus

Karpathy repos

52 files · ~92k words
285 nodes · 340 edges · 53 communities
71.5× token reduction

Avg query: ~1.7k tokens vs ~123k reading raw files. 3 GPT repos + 5 attention papers + 4 diagrams.

Comparison

How tracely360 fits in the landscape

Tool | Purpose | Strength | Limitation vs tracely360
tracely360 | Local knowledge graph for AI assistants | Multi-modal, offline, 14 platform integrations |
Sourcegraph | Cross-repo code search | Enterprise-grade navigation | Not a knowledge graph; no design semantics
Code2Vec | Function-level embeddings | Vector retrieval & classification | No graph structure, no multi-modal input
Neo4j | General graph database | Powerful Cypher queries | Does not generate graphs from code itself
Security, Licensing & Trust

Secure by design

tracely360 is released under the Apache 2.0 License. Its core dependencies — NetworkX (BSD-3) and tree-sitter (MIT) — both carry permissive open-source licenses with no conflicts. The project collects no telemetry.

The only outbound network traffic comes from the optional semantic-extraction step, which uses your own configured AI-model API key. Only semantic descriptions of documents are transmitted — never raw source code.

🔒

No raw code transmitted

Only semantic summaries sent to LLMs. AST extraction is 100% local.

🛡️

SSRF protection

URL fetching restricted to http/https, size- and time-bounded via security.py.
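A scheme check of the kind that description implies can be sketched with the stdlib (illustrative only; the actual security.py may differ and also enforces size and time bounds):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_fetchable(url):
    """Reject non-http(s) URLs before any fetch is attempted."""
    parsed = urlparse(url)
    return parsed.scheme in ALLOWED_SCHEMES and bool(parsed.netloc)

print(is_fetchable("https://example.com/doc.pdf"))  # True
print(is_fetchable("file:///etc/passwd"))           # False
print(is_fetchable("ftp://host/file"))              # False
```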

📁

Path traversal prevention

All output paths are containment-checked before write.
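Containment checking typically means resolving the candidate path and verifying it still lives under the output directory. A stdlib sketch of that pattern (function and error message are illustrative, not tracely360's API):

```python
from pathlib import Path

def safe_output_path(out_dir, relative):
    """Resolve a candidate path and ensure it stays inside out_dir."""
    base = Path(out_dir).resolve()
    candidate = (base / relative).resolve()
    if not candidate.is_relative_to(base):  # Python 3.9+
        raise ValueError(f"path escapes output dir: {relative}")
    return candidate

base = "/tmp/tracely360-out"
print(safe_output_path(base, "wiki/Node.md"))   # resolved path inside the output dir
try:
    safe_output_path(base, "../../etc/passwd")
except ValueError as err:
    print("blocked:", err)
```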

XSS-safe outputs

Node labels are HTML-escaped before being written into graph.html.
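Since node labels come from arbitrary source files, escaping before they reach graph.html is what prevents script injection. The stdlib primitive for this (the label below is illustrative):

```python
from html import escape

label = '<script>alert("hi")</script> AuthService'
print(escape(label))
# &lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt; AuthService
```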

🚫

No telemetry

Zero analytics, zero beacons, zero external pings beyond your own LLM key.

📄

Apache 2.0 License

Commercial use, modification, and distribution permitted under the standard Apache 2.0 terms.

FAQ

Frequently asked questions

Can't find what you're looking for? Open an issue on GitHub.

Does tracely360 send my code to the cloud?

No. The AST extraction pass runs entirely locally using tree-sitter — no network calls, no API keys required. Only during the optional semantic extraction step are semantic summaries of documents sent to an LLM, never raw source code.

Which AI coding assistants are supported?

14 platforms: Claude Code, OpenAI Codex, OpenCode, Aider, Cursor, VS Code Copilot Chat, GitHub Copilot CLI, OpenClaw, Factory Droid, Trae, Gemini CLI, Hermes, Kiro, and Google Antigravity. Each has a matching install command.

How does it scale to large codebases?

Leiden clustering uses approximate betweenness centrality sampling for graphs over 5,000 nodes. Per-file SHA256 caching means incremental rebuilds only process changed files, so subsequent updates are fast regardless of total codebase size.
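Per-file content hashing is what makes incremental rebuilds cheap: only files whose hash changed get re-extracted. A stdlib sketch of that caching pattern (function names are illustrative, not tracely360's internals):

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def changed_files(paths, cache):
    """Return files whose content hash differs from the cache,
    updating the cache in place."""
    changed = []
    for p in paths:
        digest = file_sha256(p)
        if cache.get(str(p)) != digest:
            cache[str(p)] = digest
            changed.append(p)
    return changed

with tempfile.TemporaryDirectory() as tmp:
    a, b = Path(tmp, "a.py"), Path(tmp, "b.py")
    a.write_text("x = 1")
    b.write_text("y = 2")
    cache = {}
    print(len(changed_files([a, b], cache)))  # 2  (first build: everything is new)
    a.write_text("x = 99")
    print(len(changed_files([a, b], cache)))  # 1  (incremental: only a.py changed)
```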

Can I use tracely360 commercially?

Yes. tracely360 is released under the Apache 2.0 License, which permits commercial use, modification, and distribution. Its core dependencies — NetworkX (BSD-3) and tree-sitter (MIT) — are equally permissive.

Do I have to rebuild the graph after every change?

Not manually. Run tracely360 hook install once to set up post-commit and post-checkout git hooks that auto-rebuild the AST graph. Alternatively, tracely360 watch gives real-time file-system monitoring.

Ready to understand your codebase?

Install tracely360 in 60 seconds. No API key required for the first run.

pip install tracely360
View on GitHub →