evaluate_batch

TwinModel.evaluate_batch(inputs_df, field_inputs=None)

Evaluate the twin model with historical input values given in a data frame.

Note

If field_inputs are supplied for a TBROM, they override any input mode coefficient inputs for that ROM that are included in inputs_df.

Parameters:
inputs_df: pandas.DataFrame

Historical input values stored in a Pandas dataframe. It must have a ‘Time’ column and the full history of the twin model inputs that you want to simulate. The dataframe must have one input per column, starting at time instant t=0 (s). If a twin model input is not found in a dataframe column, this input is kept constant at its initialization value. The column header must match a twin model input name.

field_inputs: dict (optional)

Dictionary of snapshot file paths or snapshot Numpy arrays to use as field inputs at all time instants given by the ‘inputs_df’ argument. One file path or array must be given per time instant, for a field input of a TBROM included in the twin model, using the following dictionary format: {“tbrom_name”: {“field_input_name”: [snapshot_t0, snapshot_t1, … ]}}. See the construction sketch below.
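As an illustration of the expected layouts, here is a minimal sketch that builds an inputs_df with a ‘Time’ column starting at t=0 (s) and a matching field_inputs dictionary. The ROM name, field input name, scalar input name, and snapshot values are hypothetical placeholders; in practice the names come from TwinModel.tbrom_names and TwinModel.get_field_input_names().

>>> import numpy as np
>>> import pandas as pd
>>> # One row per time instant, starting at t=0 (s); column headers match twin input names.
>>> inputs_df = pd.DataFrame({'Time': [0., 1., 2.], 'input1': [10., 20., 30.]})
>>> # One snapshot (file path or Numpy array) per time instant of 'inputs_df'.
>>> field_inputs = {'my_rom': {'my_field_input': [np.zeros(4), np.ones(4), np.ones(4) * 2.]}}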

Returns:
output_df: pandas.DataFrame

Twin output values associated with the input values stored in the Pandas dataframe.
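The returned object is a standard Pandas dataframe and can be consumed as such. The short sketch below assumes, as a hypothetical, that the result carries a ‘Time’ column alongside one column per twin output (here ‘output1’):

>>> output_df = twin_model.evaluate_batch(inputs_df=inputs_df)
>>> # 'output1' is a hypothetical output name; actual columns follow the twin output names.
>>> print(output_df['output1'].values)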

Raises:
TwinModelError:

If the pytwin.TwinModel.initialize_evaluation() method has not been called first. If there is no ‘Time’ column in the input values stored in the Pandas dataframe. If there is no time instant t=0 (s) in the input values stored in the Pandas dataframe. If the list of snapshots given as field inputs does not have one snapshot per time instant. If the snapshots given as field inputs are not Numpy arrays or paths to snapshot files. If the field inputs dictionary contains invalid TBROM or field input names.
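A minimal sketch of guarding a batch run against these errors is shown below. It catches exceptions broadly rather than importing TwinModelError directly, since the exact import path is not stated here and is left as an assumption:

>>> import pandas as pd
>>> from pytwin import TwinModel
>>> twin_model = TwinModel(model_filepath='path_to_your_twin_model.twin')
>>> try:
...     # Calling evaluate_batch without initialize_evaluation first is documented to raise TwinModelError.
...     twin_model.evaluate_batch(inputs_df=pd.DataFrame({'Time': [0., 1.]}))
... except Exception as e:
...     print(e)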

Examples

>>> import numpy as np
>>> import pandas as pd
>>> from pytwin import TwinModel
>>>
>>> # Example 1 - Batch evaluation with scalar inputs and scalar outputs
>>> twin_model = TwinModel(model_filepath='path_to_your_twin_model.twin')
>>> inputs_df = pd.DataFrame({'Time': [0., 1., 2.], 'input1': [1., 2., 3.], 'input2': [1., 2., 3.]})
>>> twin_model.initialize_evaluation(inputs={'input1': 1., 'input2': 1.})
>>> scalar_outputs_df = twin_model.evaluate_batch(inputs_df=inputs_df)
>>>
>>> # Example 2 - Batch evaluation with field inputs from disk and field output
>>> twin_model = TwinModel(model_filepath='path_to_your_twin_model.twin')
>>> romname = twin_model.tbrom_names[0]
>>> fieldname = twin_model.get_field_input_names(romname)[0]
>>> snapshot_filepath_t0 = 'path_to_snapshot_t0.bin'
>>> twin_model.initialize_evaluation(field_inputs={romname: {fieldname: snapshot_filepath_t0}})
>>> inputs_df = pd.DataFrame({'Time': [0., 1., 2.]})
>>> snapshot_filepaths = ['path_to_snapshot_t0.bin', 'path_to_snapshot_t1.bin', 'path_to_snapshot_t2.bin']
>>> batch_results = twin_model.evaluate_batch(inputs_df=inputs_df, field_inputs={romname: {fieldname: snapshot_filepaths}})
>>> output_snapshots = twin_model.generate_snapshot_batch(batch_results, romname)
>>>
>>> # Example 3 - Batch evaluation with field inputs from memory and field output
>>> twin_model = TwinModel(model_filepath='path_to_your_twin_model.twin')
>>> romname = twin_model.tbrom_names[0]
>>> fieldname = twin_model.get_field_input_names(romname)[0]
>>> snapshot_t0 = np.array([3.14, 2.71, 9.81, 6.02])
>>> snapshot_t1 = np.array([3.04, 2.61, 9.71, 5.92])  # placeholder field data for t=1 (s)
>>> snapshot_t2 = np.array([2.94, 2.51, 9.61, 5.82])  # placeholder field data for t=2 (s)
>>> twin_model.initialize_evaluation(field_inputs={romname: {fieldname: snapshot_t0}})
>>> inputs_df = pd.DataFrame({'Time': [0., 1., 2.]})
>>> snapshots = [snapshot_t0, snapshot_t1, snapshot_t2]
>>> batch_results = twin_model.evaluate_batch(inputs_df=inputs_df, field_inputs={romname: {fieldname: snapshots}})
>>> output_snapshots = twin_model.generate_snapshot_batch(batch_results, romname)