Comparison With Popular Vector Databases

We have been comparing ApertureDB to popular vector databases like Pinecone, Weaviate, Qdrant, Milvus, and others. Here is a quick summary of our analysis.

Data ingestion

ApertureDB scales seamlessly as we ingest millions of 4096-dimensional embeddings, where some other databases fail beyond a few hundred thousand embeddings.
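To give a sense of the data volume behind that claim, here is a back-of-envelope sizing sketch. The dimension and count come from the text above; the float32 storage format and the batch size are assumptions for illustration, not ApertureDB settings.

```python
# Rough sizing for ingesting one million 4096-dimensional embeddings,
# assuming float32 storage (4 bytes per dimension).
DIMS = 4096
COUNT = 1_000_000
BYTES_PER_FLOAT32 = 4

per_embedding_bytes = DIMS * BYTES_PER_FLOAT32   # 16 KiB per embedding
total_bytes = per_embedding_bytes * COUNT        # ~16 GB of raw vectors

# Ingesting in fixed-size batches keeps client memory bounded; the batch
# size here is a hypothetical choice, not a recommended configuration.
BATCH = 10_000
batches = (COUNT + BATCH - 1) // BATCH           # 100 batches

print(f"{per_embedding_bytes} B per embedding, {total_bytes / 1e9:.1f} GB total")
```

At roughly 16 GB of raw vector data per million embeddings, it becomes clearer why some databases struggle past a few hundred thousand.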

Throughput

We also see 2-10X higher throughput than these vector databases when running K-Nearest Neighbor (KNN, essential for RAG) searches across these millions of embeddings, even with smaller (96-dimensional) embeddings.
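For readers less familiar with what a KNN query computes, here is a minimal brute-force sketch over an in-memory matrix of 96-dimensional vectors. This only illustrates the operation being benchmarked; it is not how ApertureDB (or any indexed vector database) implements search internally, and the dataset here is synthetic.

```python
import numpy as np

# Synthetic database of 10,000 embeddings with 96 dimensions each.
rng = np.random.default_rng(0)
db = rng.standard_normal((10_000, 96)).astype(np.float32)

def knn(query: np.ndarray, vectors: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k vectors closest to `query` (L2 distance)."""
    dists = np.linalg.norm(vectors - query, axis=1)
    return np.argsort(dists)[:k]

# Query with a vector that is already in the set: it is at distance 0
# from itself, so its own index comes back as the first neighbor.
query = db[42]
neighbors = knn(query, db, k=5)
```

Real vector databases avoid this exhaustive scan with approximate indexes (e.g. HNSW), which is why throughput at millions of embeddings is a meaningful point of comparison.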

Latency

For KNN classification over a few million image embeddings, we see sub-7 ms query response times for image search and sub-10 ms query response times for our RAG chatbot, as measured at the server (removing any network effects).

Given that most multimodal embeddings are higher-dimensional (typically 128 for images up to 1536 for LLM text embeddings), ApertureDB guarantees seamless ingestion and better semantic search performance.

Pricing

ApertureDB pricing doesn't vary with the number of dimensions, the number of embeddings, or how many queries you run against the database -- which means you get predictable pricing as your application scales. We are also exploring a special vector DB tier for ApertureDB Cloud, which will be even cheaper for starter workloads.

For a detailed report, please write to team@aperturedata.io.