Policies
Queue Policy
Core User Pool
Consists of users with a heavy and regular need for computational resources. There are a total of 125 nodes available for core users, of which 4 are high-memory nodes.
Peripheral User Pool
Consists of users with low resource requirements. There are a total of 8 nodes currently available for this pool.
QUEUES AVAILABLE FOR CORE USERS
QUEUE NAME          NODE RANGE   MAXIMUM WALLTIME   RUN LIMIT / MAX. NODES   RELATIVE PRIORITY
interactive         1            30 mins            4 jobs                   1
debug               1-8          30 mins            4 jobs                   1
short1              9-16         24 hours           90 nodes                 2
short2              9-16         48 hours           64 nodes                 3
medium1             4-8          48 hours           32 nodes                 3
medium2             4-8          72 hours           64 nodes                 4
long1               1-3          72 hours           15 nodes                 4
long2               1-4          120 hours          8 nodes                  5
hmq (High Memory)   1-4          72 hours           4 jobs                   3
QUEUES AVAILABLE FOR PERIPHERAL USERS
QUEUE NAME   RANGE OF CORES   MAXIMUM WALLTIME   RUN LIMIT   RELATIVE PRIORITY
p-queue      8-16 cores       96 hours           16 cores    1
The Run Limit refers to the maximum total number of jobs (or, where indicated, nodes) that can be running in a queue. Queue Name refers to the keyword to be used in a PBS job script to access the appropriate queue. Jobs will be accepted in the priority order indicated by the last column above. IISER Bhopal students, researchers and faculty members can apply for accounts, mentioning the user pool required along with a justification of the resources requested. Click on the Application Form to apply online.
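For illustration, a minimal PBS job script targeting one of the queues above might look as follows. The job name, node count, processors per node and executable are illustrative, and the exact resource syntax may vary with the PBS version installed:

    #!/bin/bash
    #PBS -N sample_job                 # illustrative job name
    #PBS -q short1                     # queue keyword from the table above
    #PBS -l nodes=10:ppn=8             # 10 nodes fits short1's 9-16 node range; ppn is illustrative
    #PBS -l walltime=24:00:00          # must not exceed the queue's maximum walltime
    cd $PBS_O_WORKDIR                  # run from the directory the job was submitted from
    mpirun ./my_program                # illustrative MPI executable

Such a script would then be submitted through the queue with qsub, e.g. qsub job.sh.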
The filesystems accessible to the users are divided into two main parts - Home (comprising /home1 and /home2) and Scratch (/scratch). All areas are completely backed up on our tape library (TSM-based).
USAGE POLICY
Users will submit jobs only through the queue. Running long production runs interactively on any node without using the queue is strictly forbidden.
Chaining of jobs on the interactive and debug queues is strictly forbidden.
Users will not place jobs on hold for more than 4 hours. Jobs held beyond this limit shall be deleted by the systems administrator.
A single user can use a total of 50 nodes across all of their jobs combined.
The High Memory Queue (hmq) is dedicated to jobs requiring more than 64 GB per node and should only be used for such jobs.
A single user can have no more than 3 jobs waiting in the queue at any point in time.
Users should (as far as possible) route all runtime I/O operations to the parallel file system (scratch area), as in the sketch after this list.
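A minimal sketch of a job script body that follows the I/O policy above, staging data into the scratch area and copying results back to home. The directory and program names (myrun, solver, input.dat) are illustrative:

    #!/bin/bash
    # Stage the run in the scratch area so that runtime I/O hits GPFS, not NFS.
    mkdir -p $SCRATCH/myrun            # hypothetical run directory
    cd $SCRATCH/myrun
    cp $HOME/input.dat .               # stage input from the home area
    ./solver input.dat > out.log       # hypothetical program; runtime I/O stays on scratch
    cp out.log $HOME/results/          # copy final results back to the backed-up home area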
HOME AREA
Every user will be allocated a home area by default: 1 TB for Core users and 500 GB for Peripheral users. Additional space can be allocated based on adequate justification. The location of this area will be, e.g., /home1/<userid>, and it can also be accessed through the environment variable $HOME.
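Assuming standard Linux quota tooling is enabled on the home filesystem (an assumption, not confirmed above), current usage against this allocation can be checked with:

    quota -s        # per-user quota report in human-readable units, if quotas are enabled
    du -sh $HOME    # total size of the home directory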
SCRATCH AREA
The Scratch area is mounted on GPFS (a parallel file system) and should be preferred over the NFS-mounted home area for I/O-intensive activity. Every user will be assigned their own directory in the scratch area, e.g. /scratch/<userid>, which is also accessible through the environment variable $SCRATCH. Currently, there is no quota limit for any user in the scratch area. However, please be advised that files in the scratch area more than 14 days old shall be purged, so users are advised to back up dated files.
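A minimal sketch of backing up scratch files before the 14-day purge; the destination directory $HOME/scratch_backup is illustrative:

    # Copy files not modified in the last 14 days from scratch to home, preserving paths.
    mkdir -p $HOME/scratch_backup
    find $SCRATCH -type f -mtime +13 -exec cp --parents {} $HOME/scratch_backup/ \;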
USER ACCOUNT PASSWORD POLICY
Users cannot reuse any of their last 3 passwords.
Password must be at least 8 characters in length.
Minimum one UPPER case character.
Minimum one lower case character.
Minimum one digit.
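A minimal sketch of a check for the complexity rules above; the reuse check against the last 3 passwords requires server-side history and is omitted here:

    #!/bin/bash
    # Validate a candidate password (passed as the first argument) against the rules above.
    pw="$1"
    if [[ ${#pw} -ge 8 && "$pw" == *[A-Z]* && "$pw" == *[a-z]* && "$pw" == *[0-9]* ]]; then
      echo "OK: meets length and character-class rules"
    else
      echo "Rejected: needs at least 8 characters with an upper case letter, a lower case letter and a digit"
    fi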