pyretis.testing package

This package defines common methods used for testing.

Package structure

Modules

simulation_comparison.py (pyretis.testing.simulation_comparison)
Common methods for comparing results.
helpers.py (pyretis.testing.helpers)
Common methods for tests.
systemhelp.py (pyretis.testing.systemhelp)
Common methods for setting up systems in tests.

pyretis.testing.simulation_comparison module

Methods for comparing simulation results.

This module defines methods that can be used for comparing results from different simulations, such as output files, reports, and path ensembles.

pyretis.testing.simulation_comparison._compare_block_comments(comment1, comment2)[source]

Compare two block comment lists, tolerating 1-ULP float differences.

Parameters:
  • comment1 (list of str) – Comment lines from the first file block.
  • comment2 (list of str) – Comment lines from the second file block.
Returns:

bool – True if the comments are considered equal.
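The 1-ULP tolerance can be pictured with a small stand-alone helper. This is an illustrative sketch under stated assumptions, not PyRETIS's actual implementation: tokens that parse as floats may differ by one unit in the last place, and all other tokens must match exactly.

```python
import math

def comments_equal(comment1, comment2, max_ulps=1):
    # Tokens that parse as floats may differ by up to ``max_ulps``
    # units in the last place; all other tokens must match exactly.
    if len(comment1) != len(comment2):
        return False
    for line1, line2 in zip(comment1, comment2):
        tokens1, tokens2 = line1.split(), line2.split()
        if len(tokens1) != len(tokens2):
            return False
        for tok1, tok2 in zip(tokens1, tokens2):
            try:
                val1, val2 = float(tok1), float(tok2)
            except ValueError:
                # Not numbers: fall back to exact string comparison.
                if tok1 != tok2:
                    return False
                continue
            # math.ulp (Python >= 3.9) gives the spacing of floats
            # around the larger of the two magnitudes:
            if abs(val1 - val2) > max_ulps * math.ulp(max(abs(val1),
                                                          abs(val2))):
                return False
    return True
```

This tolerates, for instance, a comment printing 0.30000000000000004 in one file and 0.3 in the other, which differ by exactly one ULP.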

pyretis.testing.simulation_comparison._read_file_lines(filepath)[source]

Read all lines from a file.

pyretis.testing.simulation_comparison.compare_data_by_columns(file1, file2, file_type, skip=None)[source]

Compare two output PyRETIS data files by columns.

This method compares files where numbers are stored in columns and the columns have specific labels. Here, we also compare labels and comments.

Parameters:
  • file1 (str) – The path to the first file to compare.
  • file2 (str) – The path to the second file to compare.
  • file_type (str) – A string used to determine the file type (e.g., ‘energy’).
  • skip (list of str, optional) – Labels of items from the loaded data to skip in the comparison, for instance energy terms that are not absolute and therefore cannot easily be compared.
Returns:

  • equal (bool) – True if the files are deemed to be equal.
  • msg (str) – A descriptive message of the result of the comparison.
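A column-wise comparison of this kind can be sketched on already-loaded data. The helper below is hypothetical: it takes dicts mapping column labels to lists of numbers, as if loaded from an energy file, and mirrors the described behaviour (labels must match, numbers must be close, skipped labels are ignored).

```python
import math

def columns_equal(data1, data2, skip=None, rel_tol=1e-5):
    # data1/data2 map column labels to lists of numbers, as if
    # already loaded from a column-based output file:
    skip = set(skip or [])
    keys1 = [key for key in data1 if key not in skip]
    keys2 = [key for key in data2 if key not in skip]
    if keys1 != keys2:
        return False, 'The column labels differ.'
    for key in keys1:
        col1, col2 = data1[key], data2[key]
        if len(col1) != len(col2):
            return False, f'Column "{key}" has differing lengths.'
        if not all(math.isclose(i, j, rel_tol=rel_tol)
                   for i, j in zip(col1, col2)):
            return False, f'Column "{key}" differs.'
    return True, 'The files are equal.'
```

Passing, say, skip=['ekin'] would let two runs agree even when a non-absolute kinetic-energy column differs.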

pyretis.testing.simulation_comparison.compare_numerical_data(file1, file2, rel_tol=1e-05)[source]

Compare two files containing numerical data.

Here, we compare files that contain numerical data. Comments are ignored; only the actual numerical data are compared.

Parameters:
  • file1 (str) – The path to the first file to compare.
  • file2 (str) – The path to the second file to compare.
  • rel_tol (float, optional) – Relative tolerance for the comparison.
Returns:

  • equal (bool) – True if the files are deemed to be equal.
  • msg (str) – A descriptive message of the result of the comparison.

pyretis.testing.simulation_comparison.compare_numerical_mse(file1, file2, tol=1e-12)[source]

Compare two numerical files using mean squared error.

Parameters:
  • file1 (str) – The path to the first file to compare.
  • file2 (str) – The path to the second file to compare.
  • tol (float, optional) – Tolerance for the mean squared error.
Returns:

  • equal (bool) – True if the MSE is below the tolerance.
  • msg (str) – A descriptive message with the MSE value.
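The mean-squared-error criterion itself is simple; the sketch below operates on two equally long sequences of floats (the real function reads the data from the given files first), and is illustrative rather than the actual implementation.

```python
def mse_equal(data1, data2, tol=1e-12):
    # Mean squared error over the paired values; the files are
    # "equal" when the MSE falls below the tolerance:
    mse = sum((i - j) ** 2 for i, j in zip(data1, data2)) / len(data1)
    return mse < tol, f'Mean squared error: {mse:g}'
```

Compared with an element-wise relative tolerance, the MSE averages out isolated small deviations but is dominated by any single large one.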

pyretis.testing.simulation_comparison.compare_path_ensemble_data(file1, file2, rel_tol=1e-05, skip=None)[source]

Compare two path ensemble files.

The files are compared line by line: comments are skipped, and numbers are checked for closeness, as judged by the given relative tolerance.

Parameters:
  • file1 (str) – The path to the first file to consider in the comparison.
  • file2 (str) – The path to the second file to consider in the comparison.
  • rel_tol (float, optional) – A relative tolerance used to determine if numbers are equal.
  • skip (list of int, optional) – These are columns we are to skip in the comparison.
Returns:

  • equal (bool) – True if the files are equal, False otherwise.
  • msg (str) – A message describing the result of the comparison.

pyretis.testing.simulation_comparison.compare_reports_normalized(fil1, fil2)[source]

Compare two reports, normalizing common version/time differences.

This function ignores Docutils version meta-data, timestamps, and common spelling variations (grey/gray) in CSS to remain robust against environment differences.

Parameters:
  • fil1 (str) – The path to the first report to compare.
  • fil2 (str) – The path to the second report to compare.
Returns:

  • equal (bool) – True if reports are essentially equal.
  • msg (str) – Description of mismatch if found.
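The normalization step can be sketched with a few substitutions applied to both reports before comparing them. The exact patterns below are illustrative assumptions, not the ones PyRETIS uses.

```python
import re

def normalize_report(text):
    # Illustrative normalizations: Docutils version meta-data,
    # timestamps, and the grey/gray spelling variation in CSS.
    text = re.sub(r'Docutils [\d.]+', 'Docutils X.Y', text)
    text = re.sub(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}', 'TIMESTAMP', text)
    return text.replace('grey', 'gray')
```

Two reports are then "essentially equal" when their normalized texts match.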

pyretis.testing.simulation_comparison.compare_restarted_cross_files(file11, file12, file2)[source]

Compare CrossFile data from a restarted simulation.

Parameters:
  • file11 (str) – Path to the first part of the crossing data.
  • file12 (str) – Path to the second part of the crossing data.
  • file2 (str) – Path to the full (continuous) crossing data.
Returns:

  • equal (bool) – True if the crossing data matches.
  • msg (str) – A descriptive message.

pyretis.testing.simulation_comparison.compare_restarted_text_files(file11, file12, file2)[source]

Check if file2 is equal to file11 + file12 minus one overlapping line.

We handle headers (lines starting with ‘#’) by skipping them in the second file part.

Parameters:
  • file11 (str) – Path to the first part of the restarted simulation output.
  • file12 (str) – Path to the second part of the restarted simulation output.
  • file2 (str) – Path to the full (continuous) simulation output.
Returns:

  • equal (bool) – True if the files match the pattern.
  • msg (str) – A descriptive message of the result.
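The stitching rule can be sketched on in-memory line lists. This hypothetical helper assumes the overlapping line is the first data line of the second part, which repeats the last line written before the restart.

```python
def restart_matches(part1, part2, full):
    # Headers ('#' lines) in the second part are skipped, and the
    # first data line of the second part is assumed to be the
    # overlapping restart point repeated from the first part:
    data2 = [line for line in part2 if not line.startswith('#')]
    return part1 + data2[1:] == full
```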

pyretis.testing.simulation_comparison.compare_simulation_files(file1, file2, skip=None, mode='line')[source]

Top-level function to compare two simulation output files.

Parameters:
  • file1 (str) – The path to the first file to compare.
  • file2 (str) – The path to the second file to compare.
  • skip (list of str or list of int, optional) – A list of items that are to be skipped in the comparison.
  • mode (str, optional) – A string used to determine how we do the comparison: ‘numerical’ will select a comparison of numerical blocks; ‘line’ will select a line-by-line text comparison; anything else will perform a literal file comparison.
Returns:

  • equal (bool) – True if the files were found to be equal, False otherwise.
  • msg (str) – A string with information about the comparison result.

pyretis.testing.simulation_comparison.compare_text_line_by_line(file1, file2, skip=None, skip_keys=None)[source]

Compare two files, line by line.

Parameters:
  • file1 (str) – The path to the first file to compare.
  • file2 (str) – The path to the second file to compare.
  • skip (list of int, optional) – These are 0-indexed line numbers we are to skip.
  • skip_keys (list of str, optional) – Lines whose first token matches any key in this list are filtered out from both files before comparison. Useful for ignoring settings like exe_path that differ by run directory.
Returns:

  • equal (bool) – True if the files are deemed to be equal.
  • msg (str) – A descriptive message of the result of the comparison.

pyretis.testing.simulation_comparison.compare_traj_archive(dir1, dir2)[source]

Compare archived trajectories between two directories.

These archives consist of trajectory information such as energies, order parameters and positions. Here, we verify that the output written by PyRETIS is identical in the two cases.

Parameters:
  • dir1 (str) – The path to the first directory to use in the comparison.
  • dir2 (str) – The path to the second directory to use in the comparison.
Returns:

errors (list of tuple) – This list contains the files which differed, if any.

pyretis.testing.simulation_comparison.read_files(*files, read_comments=True)[source]

Read files into memory.

Here, we assume that we are given small files and that we can read these into memory.

Parameters:
  • files (tuple of str) – These are the paths to the files we are to read.
  • read_comments (bool, optional) – If False, we skip lines starting with a “#”.
Returns:

all_data (list of list of str) – The data read from the different files.

pyretis.testing.helpers module

Methods that might be useful for testing.

This module defines generic methods for testing.

pyretis.testing.helpers.clean_dir(dirname)[source]

Remove ALL files in the given directory.

pyretis.testing.helpers.search_for_files(rootdir, match=None)[source]

Find files by walking the given directory.

Parameters:
  • rootdir (str) – The path where we will search from.
  • match (str, optional) – If given, the method will only return files whose names equal the given match.
Returns:

out (list of str) – The paths of the found files.

pyretis.testing.systemhelp module

Methods that might be useful for testing.

This module defines methods that are useful in connection with systems.

pyretis.testing.systemhelp.create_system_ext(pos=None, vel=False)[source]

Create an external system with given positions and velocities.