...

  • The following guide helps users set up a client-server workflow for the Ansys 2022R1 Mechanical and Fluent modules. Please note that all ports and paths in this guide are based on version 2022R1; for a different version you need to modify the ports and paths accordingly.
  • Users establish a web-based remote desktop connection to Nebula (the visualisation Windows cluster) as the client, while their simulation job runs on Setonix as the server.

Ui tabs


Ui tab
titlePre-start
  • Users need to have an active Nebula project. Follow the instructions on how to access Nebula.
  • Ansys Fluent users do NOT need to install Fluent in their project on Setonix; they can use the system installation.
  • Users need to provide their licence server address to Pawsey so it can be set on Nebula before they start using Ansys.


Ui tab
titleInitial Setup

The following setup is only needed the first time; you do not need to repeat it every time you use the Ansys package.

Fluent

To set up passwordless communication between Nebula and Setonix, follow the instructions below:

  1. Run "Ansys Key" located at your desktop, enter 2 and press ENTER several times as shown below to go with default config.



  2. Run "Ansys Key" again and select 2 for twice to Fluent key setup is working:



    If you do not see the success message shown above, repeat step 1. If that does not resolve the issue, contact help@pawsey.org.au with a screenshot of the error. A manual way to verify the passwordless connection is sketched below.
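
If you prefer to confirm the passwordless connection from a terminal on Nebula, a minimal check is sketched below. It assumes OpenSSH is available in your Nebula session and that <username> stands for your Pawsey username; the "Ansys Key" tool remains the supported way to create the keys.

    # Hypothetical manual check of the passwordless connection (assumes OpenSSH
    # is available on Nebula and <username> is your Pawsey username).
    ssh <username>@setonix-01.pawsey.org.au hostname
    # If the key setup succeeded, this prints the Setonix hostname
    # without asking for a password.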




Ui tab
titleAnsys Fluent

Run the Fluent software either directly or via Workbench and then set the configuration as below:

  1. Leave "General Options" as it is and uncheck "Pre/Post Only"



  2. Select "default" as "Interconnects" and "openmpi" as "MPI Types" in "Parallel Settings" tab.



  3. In "Remote" tab, same as below, insert your Fluent installation root path as:

    bash

    Note: to get  Ansys fluent path for the required version, login to Setonix and run "module show ansys-fluids/<version>" and copy the value against "PATH" for fluent.
    then, set your working directory, "setonix-01.pawsey.org.au" as head node address, and ssh as spawn command mode:



  4. Check "Use Job Scheduler" in "Scheduler" tab by entering "setonix-01.pawsey.org.au" as "Slurm Submission Host", "work" or "debug" as "Slurm Partition" and your group number as "Slurm Account":  



  5. Leave "Environment" tab if you don't want to compile UDF, otherwise do as your needs. Also, choose number of processors you need in "Paralle Per Slurm" section for your parallel process and hit "Start":



  6. The GUI should open without issues, as below, showing the submitted job number:




  7. Once Fluent has launched, the corresponding job should be running on Setonix. You can double-check as below (a command-line sketch is also given after these steps):

     

  8. When you run the simulation, you will see that the server is running the Fluent solver on the compute nodes, with as many parallel processes as you requested in the "Fluent Launcher" window:



  9. When you are done with the simulation, close the Fluent GUI and the job running on Setonix will be stopped automatically.
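
As a reference for step 3, the sketch below shows how the Fluent installation root can be looked up on Setonix. The module version string 2022R1 is an assumption based on the version covered by this guide; list the available versions first and adjust accordingly.

    # Hypothetical lookup of the Fluent installation root on a Setonix login node
    # (exact module version assumed; check what is actually installed).
    module avail ansys-fluids
    module show ansys-fluids/2022R1
    # Copy the directory reported against "PATH" for Fluent and paste it as the
    # installation root path in the "Remote" tab of the Fluent Launcher.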
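For steps 7 and 8, a minimal sketch of checking the submitted job from a Setonix login node is given below. It assumes standard Slurm commands and that the job was submitted under your own username; the job name shown by the launcher may differ.

    # Hypothetical check of the Fluent job from a Setonix login node.
    squeue -u $USER                                  # list your running/pending jobs
    squeue -u $USER --format="%i %j %P %T %M %D %R"  # job id, name, partition, state, time, nodes
    # Once the job is RUNNING, the allocated compute nodes appear in the last
    # column; the Fluent processes run there for as long as the GUI is open.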


...