This project uses Bayesian optimization to find optimal solver settings in OpenFOAM. It builds on the repository Learning of optimized solver settings for CFD applications.
The instructions and tests are tailored to:
- OpenFOAM-v2406
- Python 3.11
Newer versions might work as well but were not explicitly tested.
To set up a suitable virtual environment, run:
# repository top-level
python3.11 -m venv bopt
source bopt/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
To start an example optimization run:
source bopt/bin/activate
python run.py example_config_local.yaml &> log.example_run
The script eval_runs.py contains a rudimentary example of visualizing the results.
To quantify the uncertainty of the computational environment, it is sensible to repeat simulations multiple times with unchanged settings:
source bopt/bin/activate
python run_repeated.py example_config_repeated_local.yaml &> log.example_run
The script writes the elapsed times to timing_int_*.csv files in the experiment folder.
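The repeated timings can then be aggregated to estimate run-to-run variability. Below is a minimal sketch, assuming each timing_int_*.csv holds one elapsed-time value per line in its last column (the actual column layout in the repository may differ):

```python
import csv
import glob
import statistics
from pathlib import Path


def aggregate_timings(experiment_dir):
    """Collect elapsed times from all timing_int_*.csv files in
    experiment_dir and return {file stem: (mean, stdev)}."""
    stats = {}
    for path in sorted(glob.glob(f"{experiment_dir}/timing_int_*.csv")):
        times = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if not row:
                    continue  # skip blank lines
                try:
                    # assumption: the last column holds the elapsed time
                    times.append(float(row[-1]))
                except ValueError:
                    continue  # skip header rows
        if len(times) > 1:
            stats[Path(path).stem] = (statistics.mean(times),
                                      statistics.stdev(times))
    return stats
```

If the standard deviation is large relative to the mean, more repetitions per setting are advisable before trusting small timing differences between trials.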
The instructions in this subsection are specific to TU Dresden's Barnard system. The following command creates a workspace named general_testing that is valid for 90 days.
ws_allocate -F horse -r 7 -m [email protected] -n general_testing -d 90
cd /data/horse/ws/$USER-general_testing
For more details on the workspace allocation, refer to the quick start guide.
First, clone the repository to your workspace:
git clone https://github.com/JanisGeise/BayesOpt_solverSettings
cd BayesOpt_solverSettings
To set up a suitable virtual environment, run:
# repository top-level
module load release/24.04 GCCcore/12.3.0 Python/3.11.3
python -m venv bopt
source bopt/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
For more details on the HPC system, refer to the official documentation.
The driver script has to be started via a jobscript. A suitable jobscript looks as follows (don't forget to substitute the mail address):
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --time=08:00:00
#SBATCH --job-name=gamg_opt
#SBATCH --mail-type=start,end
#SBATCH --mail-user=<your.email>@tu-dresden.de
module load release/24.04 GCCcore/12.3.0 Python/3.11.3
source bopt/bin/activate
python run.py example_config_slurm.yaml &> log.example_run
To submit the job, run:
sbatch jobscript
The resources required for running a simulation are specified in the batch_settings section of the general configuration file (refer to example_config_slurm.yaml).
The current test cases are a 2D laminar flow past a cylinder, taken from the flow_data repository, and a 2D transonic buffet over the OAT15 airfoil using DDES and URANS turbulence models. The STL file for the OAT15 setup is not provided: the absolute path (a relative path is not sufficient) to "airfoil.stl" must be set in the respective "Allrun.pre" file by editing the following line accordingly:
cp /path/to/airfoil.stl constant/triSurface
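The substitution can also be scripted. The helper below is a hypothetical convenience, not part of the repository; it assumes the line in Allrun.pre still contains the literal placeholder shown above, and should be adjusted to match your copy of the file:

```python
from pathlib import Path


def set_stl_path(allrun_pre, stl_path):
    """Replace the placeholder STL path in an Allrun.pre file with the
    absolute path to the real airfoil.stl (illustrative sketch)."""
    stl = Path(stl_path).resolve()
    if not stl.is_file():
        raise FileNotFoundError(f"STL file not found: {stl}")
    script = Path(allrun_pre)
    text = script.read_text()
    # assumption: the repository ships this placeholder verbatim
    placeholder = "/path/to/airfoil.stl"
    if placeholder not in text:
        raise ValueError(f"placeholder '{placeholder}' not found in {script}")
    script.write_text(text.replace(placeholder, str(stl)))
```

Checking that the STL file exists before patching the script avoids a silently failing cp during case preparation.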
More test cases will follow.
Trial runs can be evaluated with the "eval_runs.py" script. The script creates evaluation plots whose settings are controlled from the config file. The following plots can be output by the script:
- "trial_vs_base" - compares the execution time of the trials at different intervals with the default-settings case. This plot requires the execution-time data of the benchmark cases; data in the required structure can be generated with the "copy_data.sh" script in the "baseCase_benchmark" folder. The path to this folder needs to be provided in the config file.
- "best_parms" - parallel plots of the best parameters from the optimization across different intervals
- "trial_vs_obj" - shows how the objective function varies over the trials
- "gaussian_process" - Gaussian process plot for a given set of parameters
- "feature_importance" - ranks the importance of the different parameters considered for BO
- "cross_validation" - plots predicted against actual outcomes, indicating the accuracy of the predictions
- "parallel_coordinates" - parallel coordinates plot of the parameters selected for different trials, along with the objective function
- "write_trial_data" - writes a CSV file containing the information of the top trials

Run the script by passing the config file as the argument, for example:
python eval_runs.py example_config_slurm.yaml
Planned extensions:
- early stopping
- test different optimization configs for ax