Embed a full-featured PostgreSQL instance directly into your Python, C, Go, or Rust application. No Docker required. No installation headaches. Just import and run.
pip install dbcenter
from dbcenter import DBCenter
# Initialize embedded Postgres
db = DBCenter(path="./data")
# Use MongoDB-style API
users = db.collection("users")
users.insert_one({"name": "Alice", "role": "Engineer"})
Built on the robust foundation of PostgreSQL, enhanced with the speed of Rust, and wrapped in an API you'll actually enjoy using.
The core engine is rewritten in Rust for zero-cost abstractions, memory safety, and blazing fast startup times.
Built-in pgvector support with ChromaDB-style API. No manual extension setup - just call vector_store() and start storing embeddings.
Full geospatial capabilities out of the box. Perform complex spatial queries without setting up an external server.
Built-in pg_trgm for fuzzy matching. FuzzyWuzzy-like API running on PostgreSQL - 93x faster than pure Python solutions.
A dedicated sidecar process ensures your database shuts down cleanly, even if your main application crashes (see the shutdown sketch below).
No Docker, no system PostgreSQL required. Binaries are lazily downloaded to ~/.dbcenter only when needed.
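If you prefer explicit lifecycle control alongside the sidecar, a pattern like the following sketch may work; the context-manager support shown here is an assumption, not a confirmed API:

from dbcenter import DBCenter

# Assumed context-manager support - the sidecar remains the safety net
# if the process dies before this block exits
with DBCenter(path="./data") as db:
    db.collection("events").insert_one({"type": "startup"})
# The embedded Postgres child process is expected to stop cleanly here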
Explore real-world use cases with our built-in extensions.
from dbcenter import DBCenter
db = DBCenter("./data")
# Create vector store (pgvector auto-loaded)
vectors = db.vector_store("documents", dimension=1536)
# Add embeddings - ChromaDB-style API
vectors.add(
    ids=["doc1", "doc2", "doc3"],
    embeddings=[[0.1, 0.2, ...], [0.3, 0.4, ...], [0.5, 0.6, ...]],
    metadatas=[
        {"source": "blog", "author": "Alice"},
        {"source": "docs", "author": "Bob"},
        {"source": "paper", "author": "Charlie"}
    ]
)
# Semantic search - no SQL needed!
results = vectors.query(
    query_embeddings=[[0.1, 0.2, ...]],
    n_results=5,
    where={"source": "blog"}  # Filter by metadata
)
print(results) # [{"id": "doc1", "distance": 0.23, "metadata": {...}}, ...]
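The [0.1, 0.2, ...] lists above are placeholders; real embeddings come from whatever model you run. A minimal sketch, assuming a hypothetical embed() helper that returns 1536-dimensional vectors:

import random

def embed(texts):
    # Placeholder for a real embedding model (OpenAI, sentence-transformers, ...);
    # here it just produces random 1536-dimensional vectors for illustration
    return [[random.random() for _ in range(1536)] for _ in texts]

notes = ["Quarterly report", "Launch checklist", "Design review notes"]
vectors.add(
    ids=[f"note{i}" for i in range(len(notes))],
    embeddings=embed(notes),
    metadatas=[{"source": "internal"} for _ in notes]
)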
from dbcenter import DBCenter
db = DBCenter("./data")
products = db.collection("products")
# Insert some products
products.insert_many([
    {"name": "iPhone 15 Pro", "price": 999},
    {"name": "Samsung Galaxy S24", "price": 899},
    {"name": "iPad Air", "price": 599}
])
# Fuzzy search - FuzzyWuzzy-style API (pg_trgm auto-loaded)
results = products.fuzzy_search(
    query="ipone",  # Typo-tolerant!
    field="name",
    threshold=0.3
)
print(results)
# [{"name": "iPhone 15 Pro", "similarity": 0.67, ...}, ...]
# Or use direct similarity scoring
score = products.similarity("iPhone", "ipone")
print(score) # 0.67
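The similarity values come from pg_trgm's trigram comparison and range from 0 (no shared trigrams) to 1 (identical strings); 0.3 is also pg_trgm's default cutoff, which makes it a sensible starting threshold for typo-tolerant search.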
from dbcenter import DBCenter
db = DBCenter("./data")
# Create geo collection (PostGIS auto-loaded)
stores = db.geo_collection("stores")
# Insert locations - simple dict, no SQL!
stores.insert_many([
    {"name": "Store A", "lat": 40.7128, "lon": -74.0060, "city": "NYC"},
    {"name": "Store B", "lat": 34.0522, "lon": -118.2437, "city": "LA"},
    {"name": "Store C", "lat": 40.7589, "lon": -73.9851, "city": "NYC"}
])
# Find nearby - no SQL needed!
nearby = stores.find_within_radius(
    lat=40.7128,
    lon=-74.0060,
    radius_km=5
)
# Distance calculation
distance = stores.distance_between(
    from_id="store_a",
    to_id="store_b"
)
# Check if point is within polygon
is_inside = stores.contains_point(
    polygon_id="delivery_zone_1",
    lat=40.7128,
    lon=-74.0060
)
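Note that distance_between and contains_point look up documents by id, so this example assumes store_a, store_b, and the delivery_zone_1 polygon were stored earlier under those ids; the underlying distance and containment math is handled by the auto-loaded PostGIS extension.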
Real-world benchmarks compare dbcenter to popular alternatives across two workloads: fuzzy text matching (10,000 records, similarity threshold 0.3) and vector similarity search (N=200, cosine similarity, top-10 results).
Start building with dbcenter in less than 60 seconds
Binaries are downloaded to `~/.dbcenter` only when needed, keeping your deployment light.
Use the MongoDB-like API for rapid development, or drop down to raw SQL for complex joins and analytics (see the sketch below).
PostgreSQL runs as a child process, managed by the Rust core, ensuring stability and resource control.
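As a sketch of the raw-SQL path, assuming an escape hatch along the lines of db.sql() (the method name and the table layout behind collection() are assumptions, not documented behavior):

from dbcenter import DBCenter

db = DBCenter("./data")

# Hypothetical raw-SQL escape hatch - adjust the call to match the real API
rows = db.sql("""
    SELECT role, COUNT(*) AS headcount
    FROM users
    GROUP BY role
    ORDER BY headcount DESC
""")
print(rows)  # e.g. [{"role": "Engineer", "headcount": 12}, ...]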