great_expectations.cli.toolkit

Module Contents

Classes

MyYAML(*, typ: Any = None, pure: Any = False, output: Any = None, plug_ins=None)

Functions

create_expectation_suite(context, datasource_name=None, batch_kwargs_generator_name=None, generator_asset=None, batch_kwargs=None, expectation_suite_name=None, additional_batch_kwargs=None, empty_suite=False, show_intro_message=False, flag_build_docs=True, open_docs=False, profiler_configuration='demo', data_asset_name=None)

Create a new expectation suite.

_profile_to_create_a_suite(additional_batch_kwargs, batch_kwargs, batch_kwargs_generator_name, context, datasource_name, expectation_suite_name, data_asset_name, profiler_configuration)

_raise_profiling_errors(profiling_results)

attempt_to_open_validation_results_in_data_docs(context, profiling_results)

_get_default_expectation_suite_name(batch_kwargs, data_asset_name)

tell_user_suite_exists(suite_name: str)

create_empty_suite(context: DataContext, expectation_suite_name: str, batch_kwargs)

launch_jupyter_notebook(notebook_path: str)

load_batch(context: DataContext, suite: Union[str, ExpectationSuite], batch_kwargs: Union[dict, BatchKwargs])

load_expectation_suite(context: DataContext, suite_name: str, usage_event: str)

Load an expectation suite from a given context.

exit_with_failure_message_and_stats(context: DataContext, usage_event: str, message: str)

load_checkpoint(context: DataContext, checkpoint_name: str, usage_event: str)

Load a checkpoint or raise helpful errors.

select_datasource(context: DataContext, datasource_name: str = None)

Select a datasource interactively.

load_data_context_with_error_handling(directory: str, from_cli_upgrade_command: bool = False)

Return a DataContext with good error handling and exit codes.

upgrade_project(context_root_dir, ge_config_version, from_cli_upgrade_command=False)

confirm_proceed_or_exit(confirm_prompt='Would you like to proceed?', continuation_message='Ok, exiting now. You can always read more at https://docs.greatexpectations.io/ !', exit_on_no=True)

Every CLI command that starts a potentially lengthy (>1 sec) computation …

class great_expectations.cli.toolkit.MyYAML(*, typ: Any = None, pure: Any = False, output: Any = None, plug_ins=None)

Bases: ruamel.yaml.YAML

dump(self, data, stream=None, **kw)
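The dump override follows a common ruamel.yaml recipe for dumping to a string: when no stream is supplied, write to an in-memory buffer and return its contents. A minimal sketch of the pattern, using a hypothetical stand-in base class instead of ruamel.yaml.YAML so the example stays self-contained:

```python
import io


class BaseDumper:
    """Stand-in for ruamel.yaml.YAML: always serializes to a stream."""

    def dump(self, data, stream):
        stream.write(repr(data) + "\n")


class MyYAMLSketch(BaseDumper):
    """If no stream is given, capture the output and return it as a string."""

    def dump(self, data, stream=None, **kw):
        to_string = stream is None
        if to_string:
            stream = io.StringIO()
        super().dump(data, stream, **kw)
        if to_string:
            return stream.getvalue()
```

With ruamel.yaml installed, the same override on a `YAML` subclass lets callers write `yaml.dump(data)` and get the serialized text back directly.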
great_expectations.cli.toolkit.yaml
great_expectations.cli.toolkit.default_flow_style = False
great_expectations.cli.toolkit.create_expectation_suite(context, datasource_name=None, batch_kwargs_generator_name=None, generator_asset=None, batch_kwargs=None, expectation_suite_name=None, additional_batch_kwargs=None, empty_suite=False, show_intro_message=False, flag_build_docs=True, open_docs=False, profiler_configuration='demo', data_asset_name=None)

Create a new expectation suite.

WARNING: the flow and name of this method, and its interaction with _profile_to_create_a_suite, require serious revisiting.

:return: a tuple: (success, suite name, profiling_results)

great_expectations.cli.toolkit._profile_to_create_a_suite(additional_batch_kwargs, batch_kwargs, batch_kwargs_generator_name, context, datasource_name, expectation_suite_name, data_asset_name, profiler_configuration)
great_expectations.cli.toolkit._raise_profiling_errors(profiling_results)
great_expectations.cli.toolkit.attempt_to_open_validation_results_in_data_docs(context, profiling_results)
great_expectations.cli.toolkit._get_default_expectation_suite_name(batch_kwargs, data_asset_name)
great_expectations.cli.toolkit.tell_user_suite_exists(suite_name: str) → None
great_expectations.cli.toolkit.create_empty_suite(context: DataContext, expectation_suite_name: str, batch_kwargs) → None
great_expectations.cli.toolkit.launch_jupyter_notebook(notebook_path: str) → None
great_expectations.cli.toolkit.load_batch(context: DataContext, suite: Union[str, ExpectationSuite], batch_kwargs: Union[dict, BatchKwargs]) → DataAsset
great_expectations.cli.toolkit.load_expectation_suite(context: DataContext, suite_name: str, usage_event: str) → ExpectationSuite

Load an expectation suite from a given context.

Handles a suite name with or without the .json suffix.

:param usage_event:
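The suffix handling amounts to name normalization before the suite is looked up. A hypothetical helper sketching just that step (the real function also loads the suite from the context and emits a usage event):

```python
def normalize_suite_name(suite_name: str) -> str:
    """Strip a trailing .json so that 'npi.warning.json' and
    'npi.warning' resolve to the same expectation suite."""
    suffix = ".json"
    if suite_name.endswith(suffix):
        suite_name = suite_name[: -len(suffix)]
    return suite_name
```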

great_expectations.cli.toolkit.exit_with_failure_message_and_stats(context: DataContext, usage_event: str, message: str) → None
great_expectations.cli.toolkit.load_checkpoint(context: DataContext, checkpoint_name: str, usage_event: str) → dict

Load a checkpoint or raise helpful errors.

great_expectations.cli.toolkit.select_datasource(context: DataContext, datasource_name: str = None) → Datasource

Select a datasource interactively.

great_expectations.cli.toolkit.load_data_context_with_error_handling(directory: str, from_cli_upgrade_command: bool = False) → DataContext

Return a DataContext with good error handling and exit codes.
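"Good error handling and exit codes" here means catching known configuration failures and replacing the traceback with a friendly message and a nonzero exit status. A self-contained sketch of that pattern, with a hypothetical error type and an injected loader standing in for DataContext construction:

```python
import sys


class ConfigNotFoundError(Exception):
    """Stand-in for the library's configuration-not-found error."""


def load_context_with_error_handling(loader, directory):
    """Call loader(directory); on a known configuration error, print a
    friendly message and exit with a nonzero code instead of a traceback."""
    try:
        return loader(directory)
    except ConfigNotFoundError as err:
        print(f"Error: no project configuration found at {directory!r} ({err})")
        sys.exit(1)
```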

great_expectations.cli.toolkit.upgrade_project(context_root_dir, ge_config_version, from_cli_upgrade_command=False)
great_expectations.cli.toolkit.confirm_proceed_or_exit(confirm_prompt='Would you like to proceed?', continuation_message='Ok, exiting now. You can always read more at https://docs.greatexpectations.io/ !', exit_on_no=True)

Every CLI command that starts a potentially lengthy (>1 sec) computation or modifies some resources (e.g., edits the config file, adds objects to the stores) must follow this pattern:

1. Explain which resources will be created/modified/deleted.
2. Use this method to ask for the user's confirmation.

The goal of this standardization is consistency: once users have seen one command, they know what to expect from all the others.

If the user does not confirm, the program should exit. The exit_on_no parameter provides the option to perform cleanup actions outside of the function before exiting.
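The confirm-or-exit pattern described above can be sketched as follows. This is a simplified stand-in, not the library's implementation: the real function uses click for prompting, while this sketch injects the prompt function so it can be exercised without a terminal:

```python
import sys


def confirm_proceed_or_exit(
    confirm_prompt="Would you like to proceed?",
    continuation_message="Ok, exiting now.",
    exit_on_no=True,
    ask=input,  # injected for testability; a real CLI would use click.confirm
):
    """Ask for confirmation. On 'no', either exit immediately or return
    False so the caller can perform cleanup before exiting itself."""
    answer = ask(confirm_prompt + " [y/n]: ").strip().lower()
    if answer in ("y", "yes"):
        return True
    print(continuation_message)
    if exit_on_no:
        sys.exit(0)
    return False
```

Calling it with `exit_on_no=False` is the escape hatch: the command gets a `False` back, runs its cleanup, and exits on its own.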