The embedded knowledge graph for AI.

One file. Graph, vector, and full-text search with sub-millisecond latency. No server, no configuration.

Think SQLite, but for knowledge graphs.

curl -fsSL https://raw.githubusercontent.com/jeffhajewski/latticedb/main/dist/install.sh | bash

pip install latticedb

npm install @hajewski/latticedb

One query. Three search modes.

Vector similarity, full-text search, and graph traversal in a single Cypher query.

-- Find chunks similar to a query, traverse to their document, then to the author
MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
WHERE chunk.embedding <=> $query_vector < 0.3
  AND doc.content @@ "neural networks"
RETURN doc.title, chunk.text, author.name
ORDER BY chunk.embedding <=> $query_vector
LIMIT 10

Built for speed

0.13 µs
Node lookup
0.83 ms
10-NN vector search @ 1M
14–2819x
Faster graph traversal vs SQLite
19 µs
Full-text search (300x faster than FTS5)

How it compares

Vector Search (10-NN, 1M vectors)

System        Latency    Recall   Type
LatticeDB     0.83 ms    100%     Embedded
FAISS HNSW    0.5–3 ms   –        Library
Weaviate      1.4 ms     –        Server
Qdrant        ~1–2 ms    –        Server
pgvector      ~5 ms      99%      Extension
Chroma        4–5 ms     –        Embedded
Pinecone      ~15 ms     –        Cloud

Graph Traversal (2-hop, 100K nodes)

System                    Latency   Type
LatticeDB                 39 µs     Embedded
SQLite (recursive CTE)    548 µs    Embedded
Kuzu                      19 ms     Embedded
Neo4j                     10 ms     Server

Everything in one library

Graph

  • Stable edge IDs across CREATE and MERGE paths
  • Multi-hop traversal with validated variable-length patterns
  • ACID transactions with crash-safe recovery
  • MERGE, WITH, UNWIND, expression LIMIT/SKIP, aggregations
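For example, a variable-length pattern over the schema used throughout this page (a sketch; the exact hop-range syntax supported is the validated patterns noted above):

```cypher
// Reach documents one to three PART_OF hops away from any chunk
MATCH (c:Chunk)-[:PART_OF*1..3]->(d:Document)
RETURN d.title
```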

Vector Search

  • HNSW approximate nearest neighbor
  • Configurable M, ef parameters
  • 100% recall at 1M vectors
  • Built-in hash embeddings or Ollama/OpenAI
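As a rough intuition for the built-in hash embeddings, here is a conceptual sketch of the technique: tokens are hashed into buckets of a fixed-dimension vector, which is then normalized. This is not LatticeDB's internal implementation; the function name, signed-bucket scheme, and normalization are illustrative assumptions.

```python
import hashlib
import math

def hash_embed_sketch(text: str, dimensions: int = 128) -> list[float]:
    # Hash each token into a bucket, accumulate a signed count,
    # then L2-normalize. Deterministic: no model download required.
    vec = [0.0] * dimensions
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode()).digest()
        bucket = int.from_bytes(digest[:4], "big") % dimensions
        sign = 1.0 if digest[4] % 2 == 0 else -1.0
        vec[bucket] += sign
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]
```

The upshot is the same as with the real `hash_embed`: identical input text always yields an identical vector, so embeddings are reproducible without an external model.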

Full-Text Search

  • BM25-ranked inverted index
  • Tokenization and stemming
  • Fuzzy search with Levenshtein distance
  • 300x faster than SQLite FTS5
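The fuzzy-search bullet refers to Levenshtein edit distance, the minimum number of single-character insertions, deletions, and substitutions turning one string into another. A minimal reference implementation of the metric itself (not LatticeDB's internal code):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance: prev[j] holds the
    # distance between a[:i-1] and b[:j] from the previous row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]
```

For example, `levenshtein("kitten", "sitting")` is 3, so a fuzzy query with a distance threshold of 3 would match both terms.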

What's new in v0.3.0

Major query-engine correctness work and stronger reliability across core and bindings.

Query Engine

  • Full relationship MERGE support with stable edge IDs and inline properties
  • LIMIT and SKIP now support expressions and parameters
  • Standalone UNWIND support and corrected alias handling
  • Variable-length path semantics tightened, including zero-hop behavior
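Sketched against the sample graph from this page, the first two bullets look like this (the `order` property and `$page_size` parameter are illustrative, not part of any fixed schema):

```cypher
// Relationship MERGE with inline properties: idempotent across runs,
// and the edge keeps a stable ID whichever path created it
MATCH (c:Chunk {text: "Self-attention..."}), (d:Document {title: "Attention Is All You Need"})
MERGE (c)-[r:PART_OF {order: 1}]->(d)
RETURN r

// LIMIT and SKIP now accept expressions and parameters
MATCH (c:Chunk)
RETURN c.text
SKIP $page * $page_size
LIMIT $page_size
```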

Planner and Runtime

  • ORDER BY / SKIP / LIMIT execution order corrected before RETURN projection
  • Typed MATCH on unknown relationship types returns empty results
  • DISTINCT and GROUP key handling hardened with canonical encoding
  • Null-safe behavior improved for incompatible numeric comparisons

Reliability and Tooling

  • Edge ID monotonicity hardened across replay and recovery
  • Expanded rollback and cross-binding intent test coverage
  • TypeScript binding initialization hardened for repeated test runs
  • Release automation now validates version consistency across code and docs

View the full v0.2.1 to v0.3.0 diff on GitHub

From Sample Code to Graph

This graph is rendered from the exact creation/query pattern shown below.

Construct and Query

These statements build the graph described below.

CREATE (alice:Person {name: "Alice"})
CREATE (doc:Document {title: "Attention Is All You Need"})
CREATE (c1:Chunk {text: "Self-attention..."})
CREATE (c2:Chunk {text: "Transformer blocks..."})

CREATE (c1)-[:PART_OF]->(doc)
CREATE (c2)-[:PART_OF]->(doc)
CREATE (doc)-[:AUTHORED_BY]->(alice)

MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
RETURN doc.title, chunk.text, author.name;
[Figure: knowledge graph with Chunk, Document, and Person nodes connected by PART_OF and AUTHORED_BY relationships, rendered from the snippet above. DOT source available for download.]

Get started in 30 seconds

Python

pip install latticedb
from latticedb import Database, hash_embed

with Database("knowledge.db", create=True, enable_vector=True, vector_dimensions=128) as db:
    with db.write() as txn:
        alice = txn.create_node(labels=["Person"], properties={"name": "Alice"})
        doc = txn.create_node(labels=["Document"], properties={"title": "Attention Is All You Need"})
        chunk = txn.create_node(labels=["Chunk"], properties={"text": "Self-attention..."})

        txn.set_vector(chunk.id, "embedding", hash_embed("transformer", dimensions=128))
        txn.create_edge(chunk.id, doc.id, "PART_OF")
        txn.create_edge(doc.id, alice.id, "AUTHORED_BY")
        txn.commit()

    results = db.query("""
        MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
        WHERE chunk.embedding <=> $query < 0.5
        RETURN doc.title, chunk.text, author.name
        ORDER BY chunk.embedding <=> $query
        LIMIT 5
    """, parameters={"query": hash_embed("attention mechanism", dimensions=128)})

    for row in results:
        print(f"{row['doc.title']} by {row['author.name']}")

TypeScript

npm install @hajewski/latticedb
import { Database, hashEmbed } from "@hajewski/latticedb";

const db = new Database("knowledge.db", {
  create: true, enableVector: true, vectorDimensions: 128,
});
await db.open();

await db.write(async (txn) => {
  const alice = await txn.createNode({ labels: ["Person"], properties: { name: "Alice" } });
  const doc = await txn.createNode({ labels: ["Document"], properties: { title: "Attention Is All You Need" } });
  const chunk = await txn.createNode({ labels: ["Chunk"], properties: { text: "Self-attention..." } });

  await txn.setVector(chunk.id, "embedding", hashEmbed("transformer", 128));
  await txn.createEdge(chunk.id, doc.id, "PART_OF");
  await txn.createEdge(doc.id, alice.id, "AUTHORED_BY");
});

const results = await db.query(
  `MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
   WHERE chunk.embedding <=> $query < 0.5
   RETURN doc.title, chunk.text, author.name
   ORDER BY chunk.embedding <=> $query
   LIMIT 5`,
  { query: hashEmbed("attention mechanism", 128) }
);

for (const row of results.rows) {
  console.log(`${row["doc.title"]} by ${row["author.name"]}`);
}

await db.close();