The embedded database for AI that understands relationships.

One file. Graph + vector + full-text search. Sub-millisecond. No server, no configuration.

Think SQLite, but for knowledge graphs.

curl -fsSL https://raw.githubusercontent.com/jeffhajewski/latticedb/main/dist/install.sh | bash
pip install latticedb
npm install @hajewski/latticedb

One query. Three search modes.

Vector similarity, full-text search, and graph traversal in a single Cypher query.

-- Find chunks similar to a query, traverse to their document, then to the author
MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
WHERE chunk.embedding <=> $query_vector < 0.3
  AND doc.content @@ "neural networks"
RETURN doc.title, chunk.text, author.name
ORDER BY chunk.embedding <=> $query_vector
LIMIT 10
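The `<=>` operator in the query above returns the distance between two vectors, so `< 0.3` keeps only close matches and `ORDER BY` ranks by similarity. Conceptually it works like this minimal pure-Python sketch, assuming cosine distance as the metric (an assumption for illustration, not a statement of the engine's internals):

```python
import math

def cosine_distance(a, b):
    """Cosine distance: 0.0 for identical direction, up to 2.0 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Identical vectors are at distance 0; orthogonal vectors at distance 1.
print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Smaller distance means more similar, which is why the query both filters on a threshold and sorts ascending.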

Built for speed

0.13 µs    Node lookup
0.83 ms    10-NN vector search @ 1M
14–2819x   Faster graph traversal vs SQLite
19 µs      Full-text search (300x faster than FTS5)

How it compares

Vector Search (10-NN, 1M vectors)

System      Latency    Recall  Type
LatticeDB   0.83 ms    100%    Embedded
FAISS HNSW  0.5–3 ms           Library
Weaviate    1.4 ms             Server
Qdrant      ~1–2 ms            Server
pgvector    ~5 ms      99%     Extension
Chroma      4–5 ms             Embedded
Pinecone    ~15 ms             Cloud

Graph Traversal (2-hop, 100K nodes)

System                  Latency  Type
LatticeDB               39 µs    Embedded
SQLite (recursive CTE)  548 µs   Embedded
Kuzu                    19 ms    Embedded
Neo4j                   10 ms    Server
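For context, the SQLite baseline here is a recursive common table expression, the standard way to express graph traversal over a relational edge table. A minimal sketch of a 2-hop traversal using Python's built-in sqlite3 (an illustrative schema, not the benchmark's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE edges (src INTEGER, dst INTEGER);
    INSERT INTO edges VALUES (1, 2), (2, 3), (2, 4), (4, 5);
""")

# Expand outward from node 1, stopping after two hops.
rows = conn.execute("""
    WITH RECURSIVE hops(node, depth) AS (
        SELECT 1, 0
        UNION ALL
        SELECT e.dst, h.depth + 1
        FROM hops h JOIN edges e ON e.src = h.node
        WHERE h.depth < 2
    )
    SELECT DISTINCT node FROM hops WHERE depth = 2
""").fetchall()

print(sorted(r[0] for r in rows))  # [3, 4]
```

Each recursive step is a join against the full edge table, which is why a native adjacency representation can outrun it by orders of magnitude.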

Everything in one library

Graph

  • Nodes and edges with labels and properties
  • Multi-hop traversal, variable-length paths
  • ACID transactions with snapshot isolation
  • MERGE, WITH, UNWIND, aggregations
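Multi-hop and variable-length traversal amount to breadth-first expansion over an adjacency structure. A minimal pure-Python sketch of the idea (conceptual only, not LatticeDB's internals):

```python
from collections import deque

def traverse(adjacency, start, max_hops):
    """Return every node reachable from `start` within `max_hops` edges."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # variable-length bound reached; stop expanding
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                reached.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return reached

graph = {"chunk": ["doc"], "doc": ["alice"], "alice": []}
print(sorted(traverse(graph, "chunk", 2)))  # ['alice', 'doc']
```

A Cypher pattern like `(a)-[:REL*1..2]->(b)` expresses the same bounded expansion declaratively.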

Vector Search

  • HNSW approximate nearest neighbor
  • Configurable M, ef parameters
  • 100% recall at 1M vectors
  • Built-in hash embeddings or Ollama/OpenAI
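Hash embeddings can be pictured as feature hashing: each token is hashed into one of a fixed number of dimensions and the resulting vector is normalized, giving a deterministic embedding with no model download. A minimal sketch of the idea (not the library's actual `hash_embed` algorithm):

```python
import hashlib
import math

def hash_embed_sketch(text, dimensions=128):
    """Deterministic bag-of-words embedding via feature hashing."""
    vec = [0.0] * dimensions
    for token in text.lower().split():
        digest = hashlib.md5(token.encode()).digest()
        index = int.from_bytes(digest[:4], "little") % dimensions
        sign = 1.0 if digest[4] % 2 == 0 else -1.0  # signed hashing reduces collisions' bias
        vec[index] += sign
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

# The same text always maps to the same unit vector.
a = hash_embed_sketch("attention mechanism")
assert a == hash_embed_sketch("attention mechanism")
```

Because the output is deterministic, it works offline and in tests; for semantic similarity you would swap in Ollama or OpenAI embeddings as the document notes.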

Full-Text Search

  • BM25-ranked inverted index
  • Tokenization and stemming
  • Fuzzy search with Levenshtein distance
  • 300x faster than SQLite FTS5
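BM25 ranks a document by how often it contains a query term, damped by document length and weighted by how rare the term is across the corpus. A minimal single-term scoring sketch using the standard Okapi BM25 formula with commonly used defaults for k1 and b (illustrative, not LatticeDB's implementation):

```python
import math

def bm25_score(tf, doc_len, avg_doc_len, num_docs, docs_with_term, k1=1.2, b=0.75):
    """BM25 score contribution of one query term in one document."""
    # Inverse document frequency: rare terms weigh more.
    idf = math.log(1 + (num_docs - docs_with_term + 0.5) / (docs_with_term + 0.5))
    # Length normalization: long documents are penalized.
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    # Saturating term frequency: repeated occurrences have diminishing returns.
    return idf * tf * (k1 + 1) / (tf + norm)

# A rarer term scores higher than a common one at the same term frequency.
rare = bm25_score(tf=2, doc_len=100, avg_doc_len=100, num_docs=1000, docs_with_term=5)
common = bm25_score(tf=2, doc_len=100, avg_doc_len=100, num_docs=1000, docs_with_term=500)
assert rare > common
```

A full query sums this score over every query term, which is what an inverted index makes fast: only documents containing at least one term are ever touched.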

Get started in 30 seconds

Python

pip install latticedb
from latticedb import Database, hash_embed

with Database("knowledge.db", create=True, enable_vector=True, vector_dimensions=128) as db:
    with db.write() as txn:
        alice = txn.create_node(labels=["Person"], properties={"name": "Alice"})
        doc = txn.create_node(labels=["Document"], properties={"title": "Attention Is All You Need"})
        chunk = txn.create_node(labels=["Chunk"], properties={"text": "Self-attention..."})

        txn.set_vector(chunk.id, "embedding", hash_embed("transformer", dimensions=128))
        txn.create_edge(chunk.id, doc.id, "PART_OF")
        txn.create_edge(doc.id, alice.id, "AUTHORED_BY")
        txn.commit()

    results = db.query("""
        MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
        WHERE chunk.embedding <=> $query < 0.5
        RETURN doc.title, chunk.text, author.name
        ORDER BY chunk.embedding <=> $query
        LIMIT 5
    """, parameters={"query": hash_embed("attention mechanism", dimensions=128)})

    for row in results:
        print(f"{row['doc.title']} by {row['author.name']}")

TypeScript

npm install @hajewski/latticedb
import { Database, hashEmbed } from "@hajewski/latticedb";

const db = new Database("knowledge.db", {
  create: true, enableVector: true, vectorDimensions: 128,
});
await db.open();

await db.write(async (txn) => {
  const alice = await txn.createNode({ labels: ["Person"], properties: { name: "Alice" } });
  const doc = await txn.createNode({ labels: ["Document"], properties: { title: "Attention Is All You Need" } });
  const chunk = await txn.createNode({ labels: ["Chunk"], properties: { text: "Self-attention..." } });

  await txn.setVector(chunk.id, "embedding", hashEmbed("transformer", 128));
  await txn.createEdge(chunk.id, doc.id, "PART_OF");
  await txn.createEdge(doc.id, alice.id, "AUTHORED_BY");
});

const results = await db.query(
  `MATCH (chunk:Chunk)-[:PART_OF]->(doc:Document)-[:AUTHORED_BY]->(author:Person)
   WHERE chunk.embedding <=> $query < 0.5
   RETURN doc.title, chunk.text, author.name
   ORDER BY chunk.embedding <=> $query
   LIMIT 5`,
  { query: hashEmbed("attention mechanism", 128) }
);

for (const row of results.rows) {
  console.log(`${row["doc.title"]} by ${row["author.name"]}`);
}

await db.close();