Embedded property-graph database with native vector and full-text indexing.

One file. Relationship traversal, vector similarity, BM25 full-text search, durable streams, and ACID transactions in one local engine and one query layer.

Think SQLite, but for connected data you want to query by relationship, semantics, and text.

curl -fsSL https://raw.githubusercontent.com/jeffhajewski/latticedb/main/dist/install.sh | bash
pip install latticedb
npm install @hajewski/latticedb

One query. Three search modes.

Vector similarity, full-text search, and graph traversal in a single Cypher query.

-- Find chunks similar to a query, traverse to their document, then to the author
MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
WHERE chunk.embedding <=> $query_vector < 0.3
  AND doc.content @@ "neural networks"
RETURN doc.title, chunk.text, author.name
ORDER BY chunk.embedding <=> $query_vector
LIMIT 10

Built for speed

  • 0.13 µs node lookup
  • 0.83 ms 10-NN vector search at 1M vectors
  • 14–2819× faster graph traversal vs SQLite
  • 19 µs full-text search (300× faster than FTS5)

How it compares

Vector Search (10-NN, 1M vectors)

System       Latency    Recall   Type
LatticeDB    0.83 ms    100%     Embedded
FAISS HNSW   0.5–3 ms   —        Library
Weaviate     1.4 ms     —        Server
Qdrant       ~1–2 ms    —        Server
pgvector     ~5 ms      99%      Extension
Chroma       4–5 ms     —        Embedded
Pinecone     ~15 ms     —        Cloud

Graph Traversal (2-hop, 100K nodes)

System                   Latency   Type
LatticeDB                39 µs     Embedded
SQLite (recursive CTE)   548 µs    Embedded
Kuzu                     19 ms     Embedded
Neo4j                    10 ms     Server

Everything in one library

Graph

  • Stable edge IDs across CREATE and MERGE paths
  • Multi-hop traversal with validated variable-length patterns
  • ACID transactions with crash-safe recovery
  • MERGE, WITH, UNWIND, expression LIMIT/SKIP, aggregations
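Multi-hop traversal like the `(a)-[:R]->()-[:R]->(b)` patterns above boils down to following adjacency lists rather than joining tables. A minimal sketch of the idea (plain Python over an edge list, not LatticeDB's actual engine):

```python
from collections import defaultdict

def two_hop(edges, start):
    """Collect nodes reachable in exactly two hops over a directed edge list.
    edges: (src, rel, dst) triples; mirrors a (a)-[:R]->()-[:R]->(b) pattern."""
    adj = defaultdict(list)
    for src, _rel, dst in edges:
        adj[src].append(dst)
    # Expand one hop, then expand each intermediate node one more hop.
    return {c for b in adj[start] for c in adj[b]}

edges = [(1, "R", 2), (2, "R", 3), (1, "R", 4), (4, "R", 5)]
reachable = two_hop(edges, 1)  # {3, 5}
```

Native adjacency is why this kind of traversal outpaces a recursive CTE, which must re-join the edge table at every hop.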

Vector Search

  • HNSW approximate nearest neighbor
  • Configurable M, ef parameters
  • 100% recall at 1M vectors
  • Built-in hash embeddings or Ollama/OpenAI
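The built-in hash embeddings give you deterministic vectors without a model server. A minimal feature-hashing sketch of the general technique (illustrative only; `hash_embed_sketch` is a hypothetical name, not LatticeDB's implementation):

```python
import hashlib
import math

def hash_embed_sketch(text: str, dimensions: int = 128) -> list[float]:
    """Feature-hashing embedding: each token hashes to a dimension,
    with a hash bit choosing the sign. Deterministic and model-free."""
    vec = [0.0] * dimensions
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode()).digest()
        index = int.from_bytes(digest[:4], "big") % dimensions
        sign = 1.0 if digest[4] & 1 else -1.0
        vec[index] += sign
    # L2-normalize so cosine distance behaves sensibly
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]
```

Hash embeddings trade semantic quality for zero dependencies; swap in Ollama/OpenAI embeddings when meaning matters more than determinism.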

Full-Text Search

  • BM25-ranked inverted index
  • Tokenization and stemming
  • Fuzzy search with Levenshtein distance
  • 300x faster than SQLite FTS5
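BM25 ranking combines inverse document frequency with a saturating, length-normalized term frequency. A minimal sketch of the standard formula over tokenized documents (illustrative, not LatticeDB's index code):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each tokenized document against query_terms with standard BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency: how many docs contain each term
    df = Counter()
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf, dl, score = Counter(d), len(d), 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # k1 saturates repeated terms; b normalizes for document length
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
        scores.append(score)
    return scores
```

An inverted index makes this fast by only visiting documents that actually contain a query term.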

Streams and Changefeeds

  • Durable named streams stored in system B+Trees
  • Per-stream sequences and explicit consumer offsets
  • Manual trim for retention control
  • Built-in graph changefeed from committed graph writes
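The stream model above can be sketched as a minimal in-memory analogue: per-stream sequences, explicit consumer offsets, and manual trim. Names and shapes here are illustrative, not LatticeDB's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class StreamSketch:
    """Toy model of a named stream with explicit cursors and manual retention."""
    records: dict = field(default_factory=dict)   # seq -> payload
    next_seq: int = 0
    offsets: dict = field(default_factory=dict)   # consumer -> next seq to read

    def publish(self, payload) -> int:
        seq = self.next_seq
        self.records[seq] = payload
        self.next_seq += 1
        return seq

    def read(self, consumer: str, limit: int = 10):
        # Reads use an explicit cursor and do not auto-ack.
        start = self.offsets.get(consumer, 0)
        return [(s, p) for s, p in self.records.items() if s >= start][:limit]

    def ack(self, consumer: str, seq: int):
        # Advancing the offset is a separate, explicit step.
        self.offsets[consumer] = seq + 1

    def trim(self, upto_seq: int):
        # Manual retention: drop records below upto_seq.
        for s in [s for s in self.records if s < upto_seq]:
            del self.records[s]
```

In LatticeDB the records and offsets live in system B+Trees, so they survive crashes alongside graph data; this sketch only shows the read/ack/trim contract.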

What's new on main

Durable event streams and graph changefeeds are now part of the local storage engine and client APIs.

Durable Streams

  • Named streams auto-create on first publish
  • Records commit atomically with graph writes
  • Reads use explicit cursors and do not auto-ack
  • Consumer offsets are durable transaction records

Graph Changefeeds

  • __lattice_changes captures semantic graph mutations
  • Node and edge inserts and deletes, plus label and property changes, are represented
  • Payloads use stable typed maps with IDs and old/new values
  • Records replay through WAL recovery like other committed writes
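A consumer of typed change records with old/new values can mirror the graph or build diffs. A hedged sketch of the consumption pattern (the `kind`/`id`/`old`/`new` field names are hypothetical illustrations, not LatticeDB's documented payload schema):

```python
def apply_change(graph: dict, change: dict) -> None:
    """Apply one change record to a dict-of-nodes mirror of the graph.
    Hypothetical record shape: {"kind", "id", "key", "old", "new"}."""
    kind, node_id = change["kind"], change["id"]
    if kind == "node_insert":
        graph[node_id] = dict(change["new"])
    elif kind == "node_delete":
        graph.pop(node_id, None)
    elif kind == "property_set":
        # old/new values let consumers detect conflicts or emit diffs
        graph[node_id][change["key"]] = change["new"]

mirror = {}
apply_change(mirror, {"kind": "node_insert", "id": 1, "new": {"name": "Alice"}})
apply_change(mirror, {"kind": "property_set", "id": 1, "key": "name",
                      "old": "Alice", "new": "Alicia"})
```

Because changefeed records replay through WAL recovery, a mirror built this way stays consistent with committed graph state.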

Client APIs

  • C API stream batches borrow data until the batch is freed
  • Python exposes read_stream, publish_stream, and changes
  • TypeScript exposes readStream, publishStream, and changes
  • Same-process readers can wait for committed stream records

Read the durable streams and changefeeds guide

From Sample Code to Graph

This graph is rendered from the exact creation/query pattern shown below.

Construct and Query


CREATE (alice:Person {name: "Alice"})
CREATE (doc:Document {title: "Attention Is All You Need"})
CREATE (c1:Chunk {text: "Self-attention..."})
CREATE (c2:Chunk {text: "Transformer blocks..."})

CREATE (c1)-[:PART_OF]->(doc)
CREATE (c2)-[:PART_OF]->(doc)
CREATE (doc)-[:AUTHORED_BY]->(alice)

MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
RETURN doc.title, chunk.text, author.name;
Figure: knowledge graph visualization with Chunk, Document, and Person nodes connected by PART_OF and AUTHORED_BY relationships, rendered from the snippet above.

Get started in 30 seconds

Python

pip install latticedb
from latticedb import Database
from latticedb.embedding import hash_embed

with Database("knowledge.db", create=True, enable_vectors=True, vector_dimensions=128) as db:
    with db.write() as txn:
        alice = txn.create_node(labels=["Person"], properties={"name": "Alice"})
        doc = txn.create_node(labels=["Document"], properties={"title": "Attention Is All You Need"})
        chunk = txn.create_node(labels=["Chunk"], properties={"text": "Self-attention..."})

        txn.set_vector(chunk.id, "embedding", hash_embed("transformer", dimensions=128))
        txn.create_edge(chunk.id, doc.id, "PART_OF")
        txn.create_edge(doc.id, alice.id, "AUTHORED_BY")
        txn.commit()

    results = db.query("""
        MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
        WHERE chunk.embedding <=> $query < 0.5
        RETURN doc.title, chunk.text, author.name
        ORDER BY chunk.embedding <=> $query
        LIMIT 5
    """, parameters={"query": hash_embed("attention mechanism", dimensions=128)})

    for row in results:
        print(f"{row['doc.title']} by {row['author.name']}")

TypeScript

npm install @hajewski/latticedb
import { Database } from "@hajewski/latticedb";
import { hashEmbed } from "@hajewski/latticedb/embedding";

const db = new Database("knowledge.db", {
  create: true, enableVectors: true, vectorDimensions: 128,
});
await db.open();

await db.write(async (txn) => {
  const alice = await txn.createNode({ labels: ["Person"], properties: { name: "Alice" } });
  const doc = await txn.createNode({ labels: ["Document"], properties: { title: "Attention Is All You Need" } });
  const chunk = await txn.createNode({ labels: ["Chunk"], properties: { text: "Self-attention..." } });

  await txn.setVector(chunk.id, "embedding", hashEmbed("transformer", 128));
  await txn.createEdge(chunk.id, doc.id, "PART_OF");
  await txn.createEdge(doc.id, alice.id, "AUTHORED_BY");
});

const results = await db.query(
  `MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
   WHERE chunk.embedding <=> $query < 0.5
   RETURN doc.title, chunk.text, author.name
   ORDER BY chunk.embedding <=> $query
   LIMIT 5`,
  { query: hashEmbed("attention mechanism", 128) }
);

for (const row of results.rows) {
  console.log(`${row["doc.title"]} by ${row["author.name"]}`);
}

await db.close();