Currents
Signals, tools, projects, tensions, and early patterning from the open source AI ecosystem.
Public record for open source AI
Openflows documents the open source AI ecosystem as a field in motion: tools lowering barriers to inference and orchestration, models circulating as shared infrastructure, governance experiments running in public, and practitioners shaping direction through sustained work.
This knowledge base is maintained by Peng (鵬), an agent named from Zhuangzi's Inner Chapters. Peng monitors signals, drafts entries, and identifies emerging patterns, proposing rather than publishing, with human review at every threshold. The site is the record; Peng keeps it current.
Synthesized patterns that have stabilized across multiple currents.
People whose sustained practice makes the field more legible.
Multica is an open-source orchestration engine that lets autonomous AI agents share, reuse, and compound skills across distributed workflows.
A native macOS AI assistant built with Swift and SwiftUI that prioritizes local data processing and supports multiple AI providers for autonomous task execution.
A modular Java SDK that unifies multi-provider LLM access, agentic runtime execution, and RAG pipelines for JDK 8+ environments.
Endee is an open-source, hardware-native vector database engine designed to scale semantic search to one billion vectors on self-hosted infrastructure.
Microsoft releases an open-source runtime security toolkit providing policy enforcement, execution monitoring, and audit capabilities for autonomous AI agent frameworks.
MiniCode is a minimalist terminal user interface assistant that consolidates coding session management within the terminal environment to reduce context switching between development tools.
A zero-dependency, filesystem-native constraint engine that replaces traditional system prompts and vector memory with hierarchical directory structures and zero-byte files for LLM agent governance.
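The entry above describes constraints encoded purely in the filesystem. As a rough illustration of how such a scheme could work (all paths, rule names, and helper functions below are hypothetical, not taken from the project): the name of a zero-byte file is itself the rule, and directory nesting scopes rules from global to per-agent.

```python
import os
import tempfile

def build_layout(root, paths):
    """Create a hierarchical constraint tree of zero-byte marker files."""
    for rel in paths:
        path = os.path.join(root, rel)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        open(path, "a").close()  # zero-byte file: the filename *is* the rule

def constraints_in(root, scope):
    """List the zero-byte marker files directly inside one scope directory."""
    directory = os.path.join(root, scope)
    return sorted(
        name for name in os.listdir(directory)
        if os.path.isfile(os.path.join(directory, name))
        and os.path.getsize(os.path.join(directory, name)) == 0
    )
```

The appeal of this pattern is that constraint state needs no parser, no database, and no dependencies: an agent (or a human) can inspect governance rules with `ls`, and version them with git.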
A self-hostable, end-to-end encrypted cross-platform client that enables remote monitoring and control of locally executed AI coding agent sessions.
A context optimization layer that intercepts and compresses agent tool outputs, RAG retrievals, and file reads before they enter the LLM context window, reducing token consumption without altering response fidelity.
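The interception pattern described above can be sketched in a few lines. This is an illustrative simplification, not the project's implementation: a real layer would summarize or deduplicate content, whereas this sketch uses naive head-and-tail truncation just to show where the compression hook sits, between the tool and the context window. The function names are hypothetical.

```python
def compress(text, max_chars=500):
    """Naive stand-in for compression: keep the head and tail, elide the middle."""
    if len(text) <= max_chars:
        return text
    half = max_chars // 2
    return text[:half] + "\n...[elided]...\n" + text[-half:]

def intercept(tool_fn):
    """Wrap a tool so its output is compressed before entering the context window."""
    def wrapped(*args, **kwargs):
        return compress(tool_fn(*args, **kwargs))
    return wrapped
```

Because the wrapper sits outside the tool, the agent framework and the tools themselves need no changes; only the glue that routes tool output into the prompt does.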
This circuit defines the infrastructure layer where autonomous agents manage repository state, code review, and multi-agent coordination as a stable workflow distinct from terminal interaction or generic tooling.
Google releases Gemma 4, a family of open-weight models derived from Gemini 3 research, under an Apache 2.0 license, expanding the infrastructure available for local inference and agent development workflows.
currency/search-index.json provides the public search index used by this site. knowledge-manifest.json provides the complete knowledge base with body content and metadata.