Ingest and Consume Data

Bulk Load Data from Various Sources

Recipe to ingest data into ApertureDB:

  • Set up an ApertureDB instance in your environment or use our hosted service
  • Configure the database
  • Define a metadata schema suited to your application's use case
  • Generate CSV files for data ingestion
  • Use our CSV data loaders, which parallelize loading batches of all object types
  • Set up periodic loading with tools like Airflow
  • For smaller, sporadic additions or updates, use our query language directly through Python scripts or Jupyter notebooks
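As a minimal sketch of the CSV-generation step, the snippet below writes an entity CSV with Python's standard library. The header layout and the class and property names (`EntityClass`, `id`, `name`, `Product`) are illustrative assumptions; match them to the exact format your chosen CSV data loader documents.

```python
import csv

# Rows describing entities to ingest. The "EntityClass" column and the
# property names here are assumptions for illustration only.
rows = [
    {"EntityClass": "Product", "id": 1, "name": "widget"},
    {"EntityClass": "Product", "id": 2, "name": "gadget"},
]

# Write the CSV that a loader could then ingest in parallel batches.
with open("products.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["EntityClass", "id", "name"])
    writer.writeheader()
    writer.writerows(rows)
```

A scheduler such as Airflow could run a script like this, followed by the loader, on a periodic basis.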

Monitor the Status of your Database

Once your data is ingested, you can see its status in the ApertureDB web UI.

You can also monitor the resource utilization and logs from our Grafana dashboard.

Various ApertureDB Query Alternatives

In addition to graphically querying from our web frontend, you can query data from ApertureDB in the following ways:

  • Use the REST API from any client (we provide Python wrappers) to send JSON-based queries
  • Query using the Python object wrapper, or send JSON queries from Python in Jupyter notebooks or from C++ applications
  • Query from within ML frameworks like PyTorch and TensorFlow using ApertureDB Datasets
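To give a feel for the JSON-based interface mentioned above, here is a sketch of a query as it might be serialized and sent through the REST API or a Python connector. The entity class and property names (`Product`, `name`, `widget`) are illustrative assumptions, not part of any real schema.

```python
import json

# A JSON query is a list of command objects. This one looks up entities of
# an assumed class "Product" whose "name" property equals "widget".
query = [
    {
        "FindEntity": {
            "with_class": "Product",
            "constraints": {"name": ["==", "widget"]},
            "results": {"all_properties": True},
        }
    }
]

# A client wrapper would transmit this list; serializing it shows the
# wire format a REST client would send.
payload = json.dumps(query)
```

The same list-of-commands structure is what the Python object wrapper builds for you under the hood.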