docs/hpc/3stampede/running.md (+12 -12)
@@ -7,9 +7,9 @@
 Stampede3's job scheduler is the Slurm Workload Manager. Slurm commands enable you to submit, manage, monitor, and control your jobs. See the [Job Management](#jobs) section below for further information.
 
 !!! important
-    **Queue limits are subject to change without notice.**
-    TACC Staff will occasionally adjust the QOS settings in order to ensure fair scheduling for the entire user community.
-    Use TACC's `qlimits` utility to see the latest queue configurations.
+    **Queue limits are subject to change without notice.**
+    Frontera admins may occasionally adjust queue <!--the QOS--> settings in order to ensure fair scheduling for the entire user community.
+    TACC's `qlimits` utility will display the latest queue configurations.
 
 <!--
 10/20/2025
@@ -36,15 +36,15 @@ spr:2.0
 
 #### Table 8. Production Queues { #table8 }
 
-Queue Name | Node Type | Max Nodes per Job<br>(assoc'd cores) | Max Job<br>Duration | Max Nodes<br>per User | Max Jobs<br>per User | Charge Rate<br>(per node-hour)
+Queue Name | Node Type | Max Nodes per Job<br>(assoc'd cores) | Max Job<br>Duration | Max Nodes<br>per User | Max Jobs<br>per User | Max Submit | Charge Rate<br>(per node-hour)
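
The hunk above notes that Slurm handles job submission and that `qlimits` reports the current queue limits. As a minimal, illustrative sketch, a Stampede3 batch script might look like the following; the queue name `spr` is taken from the charge-rate context in the hunk header, while the job name, allocation, and executable are hypothetical placeholders.

```bash
#!/bin/bash
#SBATCH -J demo_job          # job name (hypothetical)
#SBATCH -p spr               # queue/partition; "spr" appears in the charge-rate map referenced above
#SBATCH -N 2                 # nodes requested; must respect the queue's "Max Nodes per Job"
#SBATCH -t 01:00:00          # wall time; must stay within the queue's "Max Job Duration"
#SBATCH -A my_allocation     # project to charge (hypothetical allocation name)

ibrun ./my_mpi_app           # ibrun is TACC's MPI launcher; my_mpi_app is a placeholder
```

Submit the script with `sbatch`, check the current limits beforehand with `qlimits`, and monitor the job with `squeue -u $USER`.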

docs/hpc/6lonestar/running.md (+13 -18)
@@ -14,9 +14,9 @@ The jobs in this queue consume 1/7 the resources of a full node. Jobs are charg
 #### Table 5. Production Queues { #table5 }
 
 !!! important
-    **Queue limits are subject to change without notice.**
-    TACC Staff will occasionally adjust the QOS settings in order to ensure fair scheduling for the entire user community.
-    Use TACC's `qlimits` utility to see the latest queue configurations.
+    **Queue limits are subject to change without notice.**
+    Frontera admins may occasionally adjust queue <!--the QOS--> settings in order to ensure fair scheduling for the entire user community.
+    TACC's `qlimits` utility will display the latest queue configurations.
 
 <!--
 10/20/2025
@@ -29,8 +29,6 @@ gpu-a100 1 8 2-00:00:00 12 8 32
 gpu-a100-dev 1 2 02:00:00 2 1 3
 gpu-a100-small 1 1 2-00:00:00 3 3 12
 gpu-h100 1 1 2-00:00:00 1 1 4
-grace 1 64 2-00:00:00 75 20 100
-grace-serial 1 64 5-00:00:00 75 20 80
 large 65 256 2-00:00:00 256 1 4
 normal 1 64 2-00:00:00 75 20 100
 vm-small 1 1 2-00:00:00 4 4 16
@@ -45,19 +43,16 @@ large:1.0
 normal:1.0
 vm-small:0.143
 -->
-
-
-
-Queue Name | Min/Max Nodes per Job<br>(assoc'd cores)* | Max Job<br>Duration | Max Nodes<br>per User | Max Jobs<br>per User | Charge Rate<br>(per node-hour)
+Queue Name | Min/Max Nodes per Job<br>(assoc'd cores)* | Max Job<br>Duration | Max Nodes<br>per User | Max Jobs<br>per User | | Max Submit | Charge Rate<br>(per node-hour)
 * Access to the `large` queue is restricted. To request more nodes than are available in the normal queue, submit a consulting (help desk) ticket through the TACC User Portal. Include in your request reasonable evidence of your readiness to run under the conditions you're requesting. In most cases this should include your own strong or weak scaling results from Lonestar6.
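
For illustration, a single-node job targeting the `gpu-a100-small` queue listed in the limits above could be sketched as follows; the node count and wall time respect the limits shown (1 node, 2-00:00:00), while the job name, allocation, and executable are hypothetical.

```bash
#!/bin/bash
#SBATCH -J a100_test         # job name (hypothetical)
#SBATCH -p gpu-a100-small    # single-node A100 queue from the limits listed above
#SBATCH -N 1                 # this queue allows at most one node per job
#SBATCH -t 24:00:00          # within the 2-00:00:00 (two-day) maximum duration
#SBATCH -A my_allocation     # project to charge (hypothetical)

nvidia-smi                   # confirm the GPU is visible
./my_gpu_app                 # placeholder for the real workload
```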

docs/hpc/frontera.md (+18 -28)
@@ -1,5 +1,5 @@
 # Frontera User Guide
-*Last update: October 22, 2025*
+*Last update: November 3, 2025*
 
 <!-- **Important**: (10-15-2024) Please note [TACC's new SU charge policy](#sunotice). -->
 
@@ -732,16 +732,18 @@ Frontera's `flex` queue offers users a low cost queue for lower priority/node co
 
 !!! important
     **Queue limits are subject to change without notice.**
-    Frontera admins may occasionally adjust the QOS settings in order to ensure fair scheduling for the entire user community.
-    Use TACC's `qlimits` utility to see the latest queue configurations.
-
-    Users are limited to a maximum of 50 running and 200 pending jobs in all queues at one time.
+    Frontera admins may occasionally adjust queue <!--the QOS--> settings in order to ensure fair scheduling for the entire user community.
+    TACC's `qlimits` utility will display the latest queue configurations.
 
 <!--
 10/20/2025
+/usr/local/etc/queue.map
 frontera4(1)$ qlimits
 Current queue/partition limits on TACC's Frontera system:
 
+The "running job limit" is the MaxJobsPU column. MaxJobsPU is the maximum number of jobs a user can have running simultaneously.
+The "job submission limit" is the MaxSubmit column. The MaxSubmit limit is the maximum number of jobs a user can have in the queue.
+
 Name MinNode MaxNode PreemptExemptTime MaxWall MaxNodePU MaxJobsPU MaxSubmit
 flex 1 128 01:00:00 2-00:00:00 2048 15 60
 development 40 02:00:00 40 2 2
@@ -755,31 +757,19 @@ Current queue/partition limits on TACC's Frontera system:
 grace 30 5-00:00:00 30 30 100
 corral 512 2-00:00:00 2048 100 200
 gh 1 02:00:00 1 2 2
-
-/usr/local/etc/queue.map
-
-flex:0.8
-development:1.0
-normal:1.0
-large:1.0
-rtx:3.0
-rtx-dev:3.0
-nvdimm:2.0
-small:1.0
-rtx-corralextra:3.0
-gh:0.0
 -->
 
-| Queue Name | Min-Max Nodes per Job<br>(assoc'd cores) | Pre-empt<br>Exempt Time | Max Job Duration | Max Nodes per User | Max Jobs per User | Charge Rate<br>per node-hour
+| Queue Name | Min-Max Nodes per Job<br>(assoc'd cores) | Pre-empt<br>Exempt Time | Max Job Duration | Max Nodes per User | Max Jobs per User | Max Submit | Charge Rate<br>per node-hour
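
The added text in this hunk defines MaxJobsPU as the running-job limit and MaxSubmit as the total queued-job limit. A quick way to check your own counts against those limits is a sketch like the following, using standard Slurm `squeue` options (`-h` suppresses the header line):

```bash
squeue -u $USER -t RUNNING -h | wc -l   # jobs currently running (compare to MaxJobsPU)
squeue -u $USER -h | wc -l              # all jobs you have queued, running + pending (compare to MaxSubmit)
```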

docs/hpc/frontera/running.md (+17 -27)
@@ -30,16 +30,18 @@ Frontera's `flex` queue offers users a low cost queue for lower priority/node co
 
 !!! important
     **Queue limits are subject to change without notice.**
-    Frontera admins may occasionally adjust the QOS settings in order to ensure fair scheduling for the entire user community.
-    Use TACC's `qlimits` utility to see the latest queue configurations.
-
-    Users are limited to a maximum of 50 running and 200 pending jobs in all queues at one time.
+    Frontera admins may occasionally adjust queue <!--the QOS--> settings in order to ensure fair scheduling for the entire user community.
+    TACC's `qlimits` utility will display the latest queue configurations.
 
 <!--
 10/20/2025
+/usr/local/etc/queue.map
 frontera4(1)$ qlimits
 Current queue/partition limits on TACC's Frontera system:
 
+The "running job limit" is the MaxJobsPU column. MaxJobsPU is the maximum number of jobs a user can have running simultaneously.
+The "job submission limit" is the MaxSubmit column. The MaxSubmit limit is the maximum number of jobs a user can have in the queue.
+
 Name MinNode MaxNode PreemptExemptTime MaxWall MaxNodePU MaxJobsPU MaxSubmit
 flex 1 128 01:00:00 2-00:00:00 2048 15 60
 development 40 02:00:00 40 2 2
@@ -53,31 +55,19 @@ Current queue/partition limits on TACC's Frontera system:
 grace 30 5-00:00:00 30 30 100
 corral 512 2-00:00:00 2048 100 200
 gh 1 02:00:00 1 2 2
-
-/usr/local/etc/queue.map
-
-flex:0.8
-development:1.0
-normal:1.0
-large:1.0
-rtx:3.0
-rtx-dev:3.0
-nvdimm:2.0
-small:1.0
-rtx-corralextra:3.0
-gh:0.0
 -->
 
-| Queue Name | Min-Max Nodes per Job<br>(assoc'd cores) | Pre-empt<br>Exempt Time | Max Job Duration | Max Nodes per User | Max Jobs per User | Charge Rate<br>per node-hour
+| Queue Name | Min-Max Nodes per Job<br>(assoc'd cores) | Pre-empt<br>Exempt Time | Max Job Duration | Max Nodes per User | Max Jobs per User | Max Submit | Charge Rate<br>per node-hour
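
The commented-out charge-rate map in this hunk lists per-queue rates such as `flex:0.8`, and the table header charges per node-hour, so a job's cost is nodes multiplied by wall-clock hours multiplied by the queue's charge rate. A worked example with illustrative figures:

```bash
# SUs charged = nodes * wall-clock hours * queue charge rate (per node-hour)
# e.g. a 4-node job running 6 hours in the flex queue (rate 0.8 from the map above):
echo "4 * 6 * 0.8" | bc    # prints 19.2
```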