idaes.core.util.convergence package¶
Submodules¶
idaes.core.util.convergence.convergence module¶
This module is a command-line script for executing convergence evaluation testing on IDAES models.
Convergence evaluation testing is used to verify reliable convergence of a model over a range of conditions for inputs and parameters. The developer of the test must create a ConvergenceEvaluation class prior to executing any convergence testing (see convergence_base.py for documentation).
Convergence evaluation testing is a two step process. In the first step, a json file is created that contains a set of points sampled from the provided inputs. This step only needs to be done once up front. The second step, which should be executed any time there is a major code change that could impact the model, takes that set of sampled points and solves the model at each of the points, collecting convergence statistics (success/failure, iterations, and solution time).
To find help on convergence.py:
$ python convergence.py --help
You will see that there are some subcommands. To find help on a particular subcommand:
$ python convergence.py <subcommand> --help
To create a sample file, you can use a command line like the following (this should be done once by the model developer for a few different sample sizes):
$ python ../../../core/util/convergence/convergence.py create-sample-file
     -s PressureChanger10.json
     -N 10 --seed=42
     -e idaes.models.convergence.pressure_changer.pressure_changer_conv_eval.PressureChangerConvergenceEvaluation
More commonly, to run the convergence evaluation:
$ python ../../../core/util/convergence/convergence.py run-eval
     -s PressureChanger10.json
Note that, if you have installed MPI and mpi4py, the convergence evaluation can also be run in parallel using a command line like the following:
$ mpirun -np 4 python ../../../core/util/convergence/convergence.py run-eval
     -s PressureChanger10.json
idaes.core.util.convergence.convergence_base module¶
This module provides the base classes and methods for running convergence evaluations on IDAES models. The convergence evaluation runs a given model over a set of sample points to ensure reliable convergence over the parameter space.
The module requires the user to provide:

- a set of inputs along with their lower bound, upper bound, mean, and standard deviation
- an initialized Pyomo model
- a Pyomo solver with appropriate options
The module executes convergence evaluation in two steps. In the first step, a json file is created that contains a set of points sampled from the provided inputs. This step only needs to be done once up front. The second step, which should be executed any time there is a major code change that could impact the model, takes that set of sampled points and solves the model at each of the points, collecting convergence statistics (success/failure, iterations, and solution time).
This can be used as a tool to evaluate model convergence reliability over the defined input space, or to verify that convergence performance is not decreasing with framework and/or model changes.
In order to write a convergence evaluation for your model, you must create a class that inherits from ConvergenceEvaluation and implements three methods (a sketch follows the list below):
- get_specification: This method should create and return a ConvergenceEvaluationSpecification object. There are methods on ConvergenceEvaluationSpecification to add inputs. These inputs contain a string that identifies a Pyomo Param or Var object, the lower and upper bounds, and the mean and standard deviation to be used for sampling. When samples are generated, they are drawn from a normal distribution, and then truncated by the lower or upper bounds.
- get_initialized_model: This method should create and return a Pyomo model object that is already initialized and ready to be solved. This model will be modified according to the sampled inputs, and then it will be solved.
- get_solver: This method should return an instance of the Pyomo solver that will be used for the analysis.
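The sketch below shows the general shape of such a class. It is only a minimal, illustrative example: the class names and the three methods come from the description above, but the keyword arguments passed to add_input, the toy Pyomo model, and the solver options are assumptions for illustration rather than verbatim IDAES API.

import pyomo.environ as pyo
from idaes.core.util.convergence.convergence_base import (
    ConvergenceEvaluation,
    ConvergenceEvaluationSpecification,
)

class TinyConvergenceEvaluation(ConvergenceEvaluation):
    def get_specification(self):
        spec = ConvergenceEvaluationSpecification()
        # One input: the string names a Pyomo Param/Var on the model returned
        # by get_initialized_model; lower/upper truncate the normal samples,
        # mean/std parameterize them. (Keyword names here are assumptions.)
        spec.add_input(name="p", lower=1.0, upper=9.0, mean=4.0, std=2.0)
        return spec

    def get_initialized_model(self):
        # A deliberately tiny model standing in for an initialized IDAES
        # flowsheet; it is already "initialized" in the sense of being ready
        # to solve.
        m = pyo.ConcreteModel()
        m.p = pyo.Param(initialize=4.0, mutable=True)
        m.x = pyo.Var(initialize=2.0)
        m.c = pyo.Constraint(expr=m.x**2 == m.p)
        return m

    def get_solver(self):
        # Any Pyomo solver with appropriate options; ipopt is just an example.
        solver = pyo.SolverFactory("ipopt")
        solver.options = {"tol": 1e-8}
        return solver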
There are methods to create the sample points file (on ConvergenceEvaluationSpecification), to run a convergence evaluation (run_convergence_evaluation), and to print the results in table form (print_convergence_statistics).
However, this package can also be executed using the commandline interface. See the documentation in convergence.py for more information.
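For the programmatic route, the two steps might look like the following sketch. It reuses the hypothetical TinyConvergenceEvaluation class from the example above; reading the sample file back in with json.load, and the module path "mymodule", are assumptions for illustration rather than documented behavior.

import json
from idaes.core.util.convergence.convergence_base import (
    run_convergence_evaluation,
    write_sample_file,
)

conv_eval = TinyConvergenceEvaluation()  # hypothetical class from the sketch above

# Step 1 (done once): sample the input space and write the points to a json file.
write_sample_file(
    eval_spec=conv_eval.get_specification(),
    filename="tiny-10.json",
    convergence_evaluation_class_str="mymodule.TinyConvergenceEvaluation",
    n_points=10,
    seed=42,
)

# Step 2 (rerun after significant code changes): solve the model at each
# sampled point and collect convergence statistics.
with open("tiny-10.json") as f:
    sample_file_dict = json.load(f)
run_convergence_evaluation(sample_file_dict, conv_eval)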

idaes.core.util.convergence.convergence_base.print_convergence_statistics(inputs, results, s)
Print the statistics returned from run_convergence_evaluation in a set of tables.
Return type: N/A

idaes.core.util.convergence.convergence_base.run_convergence_evaluation(sample_file_dict, conv_eval)
Run convergence evaluation and generate the statistics based on information in the sample file.
Parameters:
- sample_file_dict (dict) – Dictionary created by ConvergenceEvaluationSpecification that contains the input and sample point information
- conv_eval (ConvergenceEvaluation) – The ConvergenceEvaluation object that should be used
Return type: N/A

idaes.core.util.convergence.convergence_base.save_results_to_dmf(dmf, inputs, results, stats)
Save results of run, along with stats, to DMF.
Returns: None

idaes.core.util.convergence.convergence_base.write_sample_file(eval_spec, filename, convergence_evaluation_class_str, n_points, seed=None)
Samples the space of the inputs defined in the eval_spec, and creates a json file with all the points to be used in executing a convergence evaluation.
Parameters:
- filename (str) – The filename for the json file that will be created containing all the points to be run
- eval_spec (ConvergenceEvaluationSpecification) – The convergence evaluation specification object that we would like to sample
- convergence_evaluation_class_str (str) – Python string that identifies the convergence evaluation class for this specific evaluation. This is usually in the form of module.class_name.
- n_points (int) – The total number of points that should be created
- seed (int or None) – The seed to be used when generating samples. If set to None, then the seed is not set
Return type: N/A