CommonLibrary
Common library functions for ApertureDB. This module does not have a large class structure; rather, it is a collection of standalone functions. This is the place to put functions that are reused across the codebase.
import_module_by_path
```python
def import_module_by_path(filepath: str) -> Any
```
This function imports a module given a path to a Python file.
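A minimal usage sketch, assuming the function is importable from `aperturedb.CommonLibrary`; the file path and the attribute accessed on the returned module are illustrative:

```python
from aperturedb.CommonLibrary import import_module_by_path

# Load a Python source file as a module object at runtime.
module = import_module_by_path("/path/to/custom_handlers.py")

# Attributes defined in that file are then available on the module.
# `process` is a hypothetical function defined in custom_handlers.py.
module.process()
```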
create_connector
```python
def create_connector(name: Optional[str] = None,
                     key: Optional[str] = None,
                     create_config_for_colab_secret=True) -> Connector
```
Create a connector to the database.
This function chooses a configuration in the following order:
- The configuration named by the `name` parameter or the `key` parameter.
- The configuration described in the `APERTUREDB_KEY` environment variable.
- The configuration described in the `APERTUREDB_KEY` Google Colab secret.
- The configuration described in the `APERTUREDB_JSON` environment variable.
- The configuration described in the `APERTUREDB_JSON` Google Colab secret.
- The configuration described in the `APERTUREDB_JSON` secret in a `.env` file.
- The configuration named by the `APERTUREDB_CONFIG` environment variable.
- The active configuration.
If there are both global and local configurations with the same name, the global configuration is preferred.
See the `adb config` command-line tool for more information.
Arguments:
- `name` *str, optional* - The name of the configuration to use. Default is None.
- `create_config_for_colab_secret` *bool, optional* - Whether to create a configuration from the Google Colab secret. Default is True.
Returns:
- `Connector` - The connector to the database.

Note about the Google Colab secret: this secret is available in the context of a notebook running on Google Colab. In particular, it is not available to the `adb` CLI tool running in a Colab notebook, or to any scripts run within a notebook. To resolve this issue, a configuration is automatically created and activated in this case. Use the `create_config_for_colab_secret` parameter to disable this behavior.
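A minimal connection sketch, assuming the function is importable from `aperturedb.CommonLibrary`; the configuration name "ci" is hypothetical and would have been created earlier with the `adb config` tool:

```python
from aperturedb.CommonLibrary import create_connector

# Fall through the resolution order above (environment variables,
# Colab secrets, .env file, active configuration).
client = create_connector()

# Or select a named configuration explicitly ("ci" is hypothetical).
client = create_connector(name="ci")
```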
execute_query
```python
def execute_query(client: Connector,
                  query: Commands,
                  blobs: Blobs = [],
                  success_statuses: list[int] = [0],
                  response_handler: Optional[Callable] = None,
                  commands_per_query: int = 1,
                  blobs_per_query: int = 0,
                  strict_response_validation: bool = False,
                  cmd_index=None) -> Tuple[int, CommandResponses, Blobs]
```
Execute a batch of queries, with useful logging around them. Calls the response handler, if provided.
This should be used (without the parallel machinery) instead of `Connector.query` to keep response handling consistent, get better logging, and so on.
Arguments:
- `client` *Connector* - The database connector.
- `query` *Commands* - List of commands to execute.
- `blobs` *Blobs, optional* - List of blobs to send.
- `success_statuses` *list[int], optional* - The list of success statuses. Defaults to [0].
- `response_handler` *Callable, optional* - The response handler. Defaults to None.
- `commands_per_query` *int, optional* - The number of commands per query. Defaults to 1.
- `blobs_per_query` *int, optional* - The number of blobs per query. Defaults to 0.
- `strict_response_validation` *bool, optional* - Whether to strictly validate the response. Defaults to False.
Returns:
- `int` - The result code:
  - 0 if all commands succeeded
  - 1 if there was a -1 status in the response
  - 2 for any other code
- `CommandResponses` - The response.
- `Blobs` - The blobs.
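A minimal sketch of a single batched call, assuming `create_connector` and `execute_query` are importable from `aperturedb.CommonLibrary`; the `FindEntity` command and the "Person" class are illustrative:

```python
from aperturedb.CommonLibrary import create_connector, execute_query

client = create_connector()

# One command per query (the default commands_per_query=1), no blobs.
query = [{"FindEntity": {"with_class": "Person", "results": {"count": True}}}]

rc, responses, blobs = execute_query(client, query)
if rc == 0:
    print("Success:", responses)
else:
    # rc is 1 for a -1 status in the response, 2 for any other failure.
    print("Query failed:", responses)
```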
issue_deprecation_warning
```python
def issue_deprecation_warning(old_name, new_name)
```
Issue a deprecation warning for a function or class.
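A short usage sketch; both names are hypothetical and stand for a deprecated identifier and its replacement:

```python
from aperturedb.CommonLibrary import issue_deprecation_warning

def new_method():
    return 42

def old_method():
    # Warn callers that old_method is deprecated in favor of new_method,
    # then delegate to the new implementation.
    issue_deprecation_warning("old_method", "new_method")
    return new_method()
```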