πŸ–₯️ Local LLMs

Run Gemma, Mistral, and more with Ollama.
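
A minimal sketch of what "run with Ollama" means in practice, assuming Ollama is serving on its default port (11434) and the model tag has already been pulled; this is illustrative, not ATOM's internal code:

```python
import requests

# Ask a locally served model one question via Ollama's HTTP API
# (assumes `ollama serve` is running and the model is pulled).
def ask(prompt: str, model: str = "mistral") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask("Summarize what a local LLM is in one sentence."))
```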

πŸ“‚ File Ingestion

Drop in files, chunk memory, and search instantly.
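
A rough sketch of the chunk-and-search idea, under the assumption of fixed-size overlapping chunks and naive keyword scoring; ATOM's actual ingestion pipeline is not public, so treat names and parameters here as hypothetical:

```python
from pathlib import Path

# Split a dropped-in file into overlapping chunks, then rank chunks
# by how many query terms they contain. Illustrative only.
def chunk(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def search(chunks: list[str], query: str, k: int = 3) -> list[str]:
    terms = set(query.lower().split())
    return sorted(chunks, key=lambda c: -len(terms & set(c.lower().split())))[:k]

chunks = chunk(Path("notes.txt").read_text())
for hit in search(chunks, "project deadlines"):
    print(hit[:80], "...")
```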

πŸ”§ Modular Tools

Function calling, reflection, voice β€” plug and play.
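
One way "plug and play" tools can work is a simple registry that dispatches a model-emitted function call by name. The decorator and call format below are hypothetical, not ATOM's real API:

```python
from datetime import datetime
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    # Registering a function makes it callable by the model.
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_time() -> str:
    return datetime.now().isoformat()

def dispatch(call: dict) -> str:
    # `call` mimics a model-emitted function call,
    # e.g. {"name": "get_time", "args": {}}.
    return str(TOOLS[call["name"]](**call.get("args", {})))

print(dispatch({"name": "get_time", "args": {}}))
```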

🧠 Smart Memory

Deduplicated, reranked memory with context-aware recall.
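
A toy sketch of deduplicate-then-rerank recall, standing in for whatever ATOM actually uses; the normalization and keyword-overlap scoring here are simplifying assumptions:

```python
# Drop near-identical memories, then rank survivors against the
# current context before recall. Illustrative only.
def dedupe(memories: list[str]) -> list[str]:
    seen, out = set(), []
    for m in memories:
        key = " ".join(m.lower().split())  # normalize case and whitespace
        if key not in seen:
            seen.add(key)
            out.append(m)
    return out

def recall(memories: list[str], context: str, k: int = 2) -> list[str]:
    ctx = set(context.lower().split())
    return sorted(memories, key=lambda m: -len(ctx & set(m.lower().split())))[:k]

mems = dedupe(["User prefers dark mode.", "user prefers  dark mode.", "Deadline is Friday."])
print(recall(mems, "What theme does the user prefer"))
```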

πŸ” Thread Awareness

Understands your session, not just your sentence.
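
Session awareness usually comes down to sending the whole thread, not just the latest message. A sketch using Ollama's /api/chat endpoint, assuming a local server and a pulled model:

```python
import requests

history: list[dict] = []  # the running session, grows each turn

def chat(user_msg: str, model: str = "mistral") -> str:
    history.append({"role": "user", "content": user_msg})
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": model, "messages": history, "stream": False},
        timeout=120,
    ).json()
    reply = resp["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Sam.")
print(chat("What is my name?"))  # the model sees the whole thread, not one sentence
```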

πŸ” Fully Local & Offline

No cloud. No API. Everything stays on your machine.

βš›οΈ ATOM Runtime (Local Only)

Model: phi4:14b-q4_K_M
Temp: 0.7 | Top-p: 0.9
Status: 🟑 Local Only β€” Secure & Offline
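
The settings above map directly onto Ollama's `options` field. A sketch, assuming the phi4 tag shown has been pulled locally:

```python
import requests

payload = {
    "model": "phi4:14b-q4_K_M",
    "prompt": "Hello, ATOM.",
    "stream": False,
    # Sampling settings from the runtime panel above.
    "options": {"temperature": 0.7, "top_p": 0.9},
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
print(resp.json()["response"])
```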

βš™οΈ Install ATOM

Clone the repo and run the setup script. A public GitHub release is coming soon.

For early access: info@get-atom.dev