Visual data (images, videos) is rich in business and expert insights, and advances in Data Science and Machine Learning (ML) are making these insights increasingly accessible. Exploiting visual data at scale requires storage and access methods designed with visual ML in mind. With the current off-the-shelf alternatives, ML engineers and data scientists are forced to glue together disparate data solutions for visual data management, accumulating technical debt as their use case scales and leaving themselves with a maintenance nightmare.
ApertureData offers ApertureDB, a unique, specialized database for visual data. ApertureDB stores and manages images, videos, feature vectors, and associated metadata like annotations. It natively supports complex searching and preprocessing operations over media objects.
ApertureDB is based on the open source VDMS (Visual Data Management System) code. It implements a client-server design: the server handles concurrent client requests and coordinates request execution across the metadata and data components in order to return a unified response to the client.
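To make the client-server exchange concrete, here is a minimal sketch of the query format. In the open-source VDMS API that ApertureDB builds on, a query is a JSON array of commands that the server executes together and answers with a single unified response; the exact command names and parameters in your ApertureDB version may differ, and the `Dataset` class and `name` property below are purely illustrative.

```python
import json

# A query is a JSON array of command objects. The server executes the
# whole array and returns one unified JSON response to the client.
# "AddEntity" follows the open-source VDMS command style; the class and
# property names here are hypothetical examples.
query = [
    {
        "AddEntity": {
            "class": "Dataset",                      # hypothetical entity class
            "properties": {"name": "wildlife_2021"}  # hypothetical property
        }
    }
]

# The client serializes the query and sends it to the server.
payload = json.dumps(query)
print(payload)
```

The same array can carry many commands at once, which is what lets the server coordinate metadata and data operations in a single round trip.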
ApertureDB is unique when compared with other databases and infrastructure tools because:
- It natively supports images in different formats and videos in multiple encodings and containers, together with efficient frame-level access.
- Because it is designed with visual analytics in mind, it also supports bounding boxes and regions of interest for labeling, and provides preprocessing operations such as zooming, cropping, and thumbnail creation at search time, along with other image-, video-, and frame-level operations as needed.
- It not only stores an application's contextual (multimodal) metadata but manages this information as a knowledge graph to capture internal relationships. This enables complex visual searches, since the visual data is stored alongside its contextual metadata.
- Since feature vectors can describe the contents of images or frames, and can in turn make it possible to find visually similar objects, it also offers similarity search and k-nearest-neighbor computation over high-dimensional feature vectors.
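The capabilities above combine in a single query: a metadata search can preprocess the matching images and a k-nearest-neighbor search can run over feature vectors. The sketch below uses command names from the open-source VDMS API (`FindImage`, `operations`, `FindDescriptor`, `k_neighbors`); the property names, descriptor set name, and operation parameters are assumptions, and in a real query the vector to compare against would be attached as a binary blob alongside the JSON.

```python
import json

# Hedged sketch of a combined search, in the VDMS-style JSON command
# format. Property names ("camera_id"), the descriptor set name, and
# operation parameters are hypothetical.
query = [
    {
        "FindImage": {
            "constraints": {"camera_id": ["==", "cam-01"]},  # metadata filter
            "operations": [
                # Preprocessing applied server-side as part of the search:
                {"type": "crop", "x": 0, "y": 0, "width": 224, "height": 224},
                {"type": "resize", "width": 64, "height": 64}  # thumbnail
            ],
            "results": {"list": ["camera_id"]}
        }
    },
    {
        "FindDescriptor": {
            "set": "image_embeddings",  # hypothetical descriptor set
            "k_neighbors": 5,           # return the 5 most similar vectors
            "results": {"list": ["_distance"]}
        }
    }
]

print(json.dumps(query, indent=2))
```

Running the preprocessing inside the search avoids shipping full-resolution media to the client only to crop it there.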
All of this is encapsulated behind a unified API, as described in this documentation. In addition, ApertureDB can:
- Connect to existing data on local or cloud storage by linking it with the metadata managed by the ApertureDB server
- Feed data directly into ML frameworks like PyTorch
- Support large-scale ML training operations through our batch-access API or through keyword- and/or feature-based searches in users' existing ML workflows
- Support multiple labels on frames or images, and associate them with the rest of the metadata to continue building an application's knowledge graph.
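Labels attach to the knowledge graph the same way: a labeled region is linked to its image, and through the image to any other metadata. The sketch below follows the VDMS-style command format (`FindImage` with a `_ref`, `AddBoundingBox` linking back to it); the exact parameter names and the `file_name` property are assumptions about the API, not guaranteed signatures.

```python
import json

# Hedged sketch: label a region of an existing image and link it into
# the metadata graph. Command and parameter names follow the open-source
# VDMS style and may differ in your ApertureDB version; "file_name" and
# the label value are hypothetical.
query = [
    {
        "FindImage": {
            "_ref": 1,  # reference so later commands can link to this image
            "constraints": {"file_name": ["==", "frame_0042.jpg"]}
        }
    },
    {
        "AddBoundingBox": {
            "image_ref": 1,  # links the new box to the image found above
            "rectangle": {"x": 10, "y": 20, "width": 100, "height": 50},
            "label": "pedestrian"
        }
    }
]

print(json.dumps(query, indent=2))
```

Because the box is a node connected to the image, later searches can combine the label with any other metadata constraint (camera, time, dataset) in one graph query.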
Try It Out¶
You can try ApertureDB using one of our demo examples as part of a free trial. We also offer a UI frontend, currently in alpha, that can assist with navigating user data or formulating custom queries. You can reach out to us at firstname.lastname@example.org to get access to a Docker image for your "on-premise" or "self-hosted" trial. Tutorial examples and details of the ApertureDB API are available in the rest of this documentation.
Publications¶

| Venue | Website | Description | Reference |
|---|---|---|---|
| Learning Systems @ NIPS 2018 | learningsys.org | Systems for Machine Learning Workshop @ NIPS | uni-trier.de |
| HotStorage @ ATC 2017 | usenix.org | Positioning Paper at USENIX ATC 2017 Workshop | usenix.org/bibtex |