Embedded property-graph database with native vector and full-text indexing.

One file. Relationship traversal, vector similarity, BM25 full-text search, durable streams, and ACID transactions in one local engine and one query layer.

Think SQLite, but for connected data you want to query by relationship, semantics, and text.

curl -fsSL https://raw.githubusercontent.com/jeffhajewski/latticedb/main/dist/install.sh | bash

pip install latticedb

npm install @hajewski/latticedb

One query. Three search modes.

Vector similarity, full-text search, and graph traversal in a single Cypher query.

-- Find chunks similar to a query, traverse to their document, then to the author
MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
WHERE chunk.embedding <=> $query_vector < 0.3
  AND doc.content @@ "neural networks"
RETURN doc.title, chunk.text, author.name
ORDER BY chunk.embedding <=> $query_vector
LIMIT 10

Built for speed

0.13 µs
Node lookup
0.83 ms
10-NN vector search @ 1M
14–2819x
Faster graph traversal vs SQLite
19 µs
Full-text search (300x faster than FTS5)

How it compares

Vector Search (10-NN, 1M vectors)

System        Latency    Recall  Type
LatticeDB     0.83 ms    100%    Embedded
FAISS HNSW    0.5–3 ms   —       Library
Weaviate      1.4 ms     —       Server
Qdrant        ~1–2 ms    —       Server
pgvector      ~5 ms      99%     Extension
Chroma        4–5 ms     —       Embedded
Pinecone      ~15 ms     —       Cloud

Graph Traversal (2-hop, 100K nodes)

System                   Latency  Type
LatticeDB                39 µs    Embedded
SQLite (recursive CTE)   548 µs   Embedded
Kuzu                     19 ms    Embedded
Neo4j                    10 ms    Server

Everything in one library

Graph

  • Stable edge IDs for traversal and property updates
  • Large node and edge properties via B+Tree overflow storage
  • Multi-hop traversal with validated variable-length patterns
  • MERGE, WITH, UNWIND, expression LIMIT/SKIP, aggregations

Vector Search

  • HNSW approximate nearest neighbor
  • Configurable M, ef parameters
  • 100% recall at 1M vectors
  • Built-in hash embeddings or Ollama/OpenAI
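LatticeDB ships a `hash_embed` helper (used in the quickstart below) for generating deterministic embeddings without a model. The general idea behind hashing-based embeddings can be sketched in pure Python; this toy version is illustrative only and is not the library's implementation:

```python
import hashlib
import math

def toy_hash_embed(text: str, dimensions: int = 128) -> list[float]:
    """Feature-hashing embedding: hash each token into a bucket,
    flip the sign with a second hash, then L2-normalize.
    Deterministic and model-free -- handy for tests and demos."""
    vec = [0.0] * dimensions
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode()).digest()
        bucket = int.from_bytes(digest[:4], "big") % dimensions
        sign = 1.0 if digest[4] % 2 == 0 else -1.0
        vec[bucket] += sign
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]
```

Because the output depends only on the input text, the same string always maps to the same vector, which keeps demo queries reproducible.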

Full-Text Search

  • BM25-ranked inverted index
  • Tokenization and stemming
  • Fuzzy search with Levenshtein distance
  • 300x faster than SQLite FTS5
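The ranking behind the inverted index is standard BM25: term frequency saturated by k1, with a length penalty controlled by b. A minimal scoring sketch over pre-tokenized documents (not LatticeDB's internal code) looks like this:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each tokenized document against a tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "neural networks learn representations".split(),
    "graph databases store relationships".split(),
    "attention mechanisms in neural sequence models".split(),
]
scores = bm25_scores("neural networks".split(), docs)
```

Documents matching both query terms outrank partial matches, and longer documents are penalized relative to the average length.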

Streams and Changefeeds

  • Durable named streams stored in system B+Trees
  • Per-stream sequences and explicit consumer offsets
  • Manual trim for retention control
  • Built-in graph changefeed with large-value summaries
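The consumption model described above — monotonic per-stream sequences, offsets the consumer advances explicitly, and manual trim for retention — can be sketched in a few lines. This in-memory model is illustrative, not the B+Tree-backed implementation:

```python
class Stream:
    """Toy model of a named stream: monotonically increasing
    sequences, explicit consumer offsets, manual trim."""

    def __init__(self):
        self.records = {}      # sequence -> payload
        self.next_seq = 1
        self.offsets = {}      # consumer name -> next sequence to read

    def publish(self, payload):
        seq = self.next_seq
        self.records[seq] = payload
        self.next_seq += 1
        return seq             # the assigned stream sequence

    def read(self, consumer, limit=10):
        start = self.offsets.get(consumer, 1)
        batch = [(s, p) for s, p in sorted(self.records.items())
                 if s >= start][:limit]
        if batch:
            self.offsets[consumer] = batch[-1][0] + 1  # advance explicitly
        return batch

    def trim(self, up_to_seq):
        """Retention is manual: drop records at or below up_to_seq."""
        self.records = {s: p for s, p in self.records.items()
                        if s > up_to_seq}
```

Because offsets are per-consumer, independent readers can process the same stream at their own pace, and trimming is a deliberate operator decision rather than an automatic policy.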

What's new in 0.9.6

Typed graph traversal, stream publish sequences, adjacency-cache options, and WAL truncation make graph and stream workloads easier to operate.

Typed Traversal

  • Outgoing and incoming edge reads can filter by type
  • Traversal limits stop collection before result materialization
  • Native edge scan is documented for admin and rebuild workflows
  • Python, TypeScript, and Go bindings expose typed helpers
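The shape of a typed, limited traversal read is straightforward: filter edges by type during the scan and stop as soon as the limit is hit, before any results are materialized. A pure-Python sketch of that access pattern (not the binding API itself):

```python
def typed_neighbors(adjacency, node, edge_type=None, limit=None):
    """Collect outgoing neighbors, filtering by edge type and
    stopping at `limit` during the scan rather than after it."""
    out = []
    for etype, target in adjacency.get(node, []):
        if edge_type is not None and etype != edge_type:
            continue
        out.append(target)
        if limit is not None and len(out) >= limit:
            break                # stop collection early
    return out

adjacency = {
    "doc1": [("PART_OF", "corpus"),
             ("AUTHORED_BY", "alice"),
             ("AUTHORED_BY", "bob")],
}
```

Stopping inside the loop is what makes limits cheap: on a high-degree node, a `limit=1` read touches one matching edge instead of the whole adjacency list.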

Stream Sequences

  • Publish calls can return the assigned stream sequence
  • Existing publish APIs remain source-compatible
  • Rollback keeps staged stream records non-durable
  • Last-sequence introspection is available for durable streams
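The transactional semantics above — sequences assigned only at commit, staged records discarded on rollback — can be modeled in a small sketch. All names here are illustrative, not LatticeDB's API:

```python
class StreamTxn:
    """Toy transactional publish: records are staged in the transaction
    and only become durable (and sequenced) on commit."""

    def __init__(self, stream_state):
        self.stream = stream_state   # {"records": [...], "next_seq": 1}
        self.staged = []

    def publish(self, payload):
        self.staged.append(payload)  # no sequence assigned yet

    def commit(self):
        seqs = []
        for payload in self.staged:
            seq = self.stream["next_seq"]
            self.stream["records"].append((seq, payload))
            self.stream["next_seq"] = seq + 1
            seqs.append(seq)
        self.staged.clear()
        return seqs                  # assigned sequences, one per publish

    def rollback(self):
        self.staged.clear()          # staged records never became durable

def last_sequence(stream_state):
    """Last-sequence introspection over the durable state."""
    return stream_state["next_seq"] - 1
```

Deferring sequence assignment to commit is what keeps rolled-back publishes invisible: no gap ever appears in the durable sequence space.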

Operations

  • Open v3 exposes the adjacency-cache option to all bindings
  • Transaction all-node snapshots complement label reads
  • Checkpointed WAL files truncate to header-only logs on close
  • Property index design notes define future lookup semantics
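Truncating a checkpointed WAL to a header-only log is a small file operation: once every entry has been checkpointed into the main file, the log body can be dropped while the header is kept so the next open starts from an empty log. A sketch of the idea, with an assumed fixed header size (the real header layout is internal to LatticeDB):

```python
import os
import tempfile

WAL_HEADER_SIZE = 32  # assumed fixed-size header; illustrative only

def close_wal(path, header_size=WAL_HEADER_SIZE):
    """After a clean checkpoint, keep the WAL file but drop its log
    body: truncate back to the header so the next open starts empty."""
    with open(path, "r+b") as f:
        f.truncate(header_size)

# Demo on a throwaway file: header plus checkpointed entries.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * WAL_HEADER_SIZE)   # header
    f.write(b"log entry bytes" * 100)    # already-checkpointed entries

close_wal(path)
size = os.path.getsize(path)
```

The benefit is operational: WAL files no longer accumulate stale bytes between runs, so on-disk footprint after a clean close reflects only live data.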

  • Read the graph storage notes
  • Read the 0.9.6 release notes

From Sample Code to Graph

This graph is rendered from the exact creation/query pattern shown below.

Construct and Query


CREATE (alice:Person {name: "Alice"})
CREATE (doc:Document {title: "Attention Is All You Need"})
CREATE (c1:Chunk {text: "Self-attention..."})
CREATE (c2:Chunk {text: "Transformer blocks..."})

CREATE (c1)-[:PART_OF]->(doc)
CREATE (c2)-[:PART_OF]->(doc)
CREATE (doc)-[:AUTHORED_BY]->(alice)

MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
RETURN doc.title, chunk.text, author.name;
Knowledge graph visualization with Chunk, Document, and Person nodes connected by PART_OF and AUTHORED_BY relationships.
Rendered from the snippet above.

Get started in 30 seconds

Python

pip install latticedb

from latticedb import Database
from latticedb.embedding import hash_embed

with Database("knowledge.db", create=True, enable_vectors=True, vector_dimensions=128) as db:
    with db.write() as txn:
        alice = txn.create_node(labels=["Person"], properties={"name": "Alice"})
        doc = txn.create_node(labels=["Document"], properties={"title": "Attention Is All You Need"})
        chunk = txn.create_node(labels=["Chunk"], properties={"text": "Self-attention..."})

        txn.set_vector(chunk.id, "embedding", hash_embed("transformer", dimensions=128))
        txn.create_edge(chunk.id, doc.id, "PART_OF")
        txn.create_edge(doc.id, alice.id, "AUTHORED_BY")
        txn.commit()

    results = db.query("""
        MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
        WHERE chunk.embedding <=> $query < 0.5
        RETURN doc.title, chunk.text, author.name
        ORDER BY chunk.embedding <=> $query
        LIMIT 5
    """, parameters={"query": hash_embed("attention mechanism", dimensions=128)})

    for row in results:
        print(f"{row['doc.title']} by {row['author.name']}")

TypeScript

npm install @hajewski/latticedb

import { Database } from "@hajewski/latticedb";
import { hashEmbed } from "@hajewski/latticedb/embedding";

const db = new Database("knowledge.db", {
  create: true, enableVectors: true, vectorDimensions: 128,
});
await db.open();

await db.write(async (txn) => {
  const alice = await txn.createNode({ labels: ["Person"], properties: { name: "Alice" } });
  const doc = await txn.createNode({ labels: ["Document"], properties: { title: "Attention Is All You Need" } });
  const chunk = await txn.createNode({ labels: ["Chunk"], properties: { text: "Self-attention..." } });

  await txn.setVector(chunk.id, "embedding", hashEmbed("transformer", 128));
  await txn.createEdge(chunk.id, doc.id, "PART_OF");
  await txn.createEdge(doc.id, alice.id, "AUTHORED_BY");
});

const results = await db.query(
  `MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
   WHERE chunk.embedding <=> $query < 0.5
   RETURN doc.title, chunk.text, author.name
   ORDER BY chunk.embedding <=> $query
   LIMIT 5`,
  { query: hashEmbed("attention mechanism", 128) }
);

for (const row of results.rows) {
  console.log(`${row["doc.title"]} by ${row["author.name"]}`);
}

await db.close();