The KU Community Cluster exists only because user groups purchase hardware in the cluster. As such, users will be granted access to the KU Community Cluster only if they are part of an active research group that has purchased hardware within the cluster. Access may be granted to collaborators on a case-by-case basis at the discretion of the Center for Research Computing.
Six Hour Policy
If the hardware allocated to a given research group proves insufficient for the needs of a researcher within that group, there is a "sixhour" queue containing all compute resources that are part of the KU Community Cluster. Any user may run jobs in this queue, provided that no single job runs for more than six hours. The Center for Research Computing asks that all users be good stewards of this queue: if the resources within a user's host queue can fit their needs, please use that queue instead. If this resource is found to be abused to the detriment of other users, access to the "sixhour" queue may be revoked at the discretion of the Center for Research Computing.
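As a sketch, a batch script targeting the six-hour queue might look like the following. This assumes the cluster's scheduler is Slurm and that the queue is exposed as a partition named "sixhour"; the resource requests and the script body are placeholders to adapt to your own work:

```shell
#!/bin/bash
#SBATCH --partition=sixhour     # assumed partition name for the "sixhour" queue
#SBATCH --time=06:00:00         # request the full six-hour limit allowed by policy
#SBATCH --ntasks=1              # placeholder resource request

# Replace this with your actual computation.
echo "Running on $(hostname)"
```

A script like this would be submitted with "sbatch", and the scheduler would terminate the job if it exceeded the requested six-hour wall time.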
Proper Node Usage Policy
The cluster comprises two login nodes, many compute nodes, and several administrative nodes. Users are to run jobs or computationally intensive executables on the compute nodes only. Login nodes are to be used only for small tasks such as editing files, viewing job results, etc.; these nodes are the shared gateway to the cluster for all users. If a user is found running intensive executables on a login node, or on any other node that is not a compute node, that user's processes will be cancelled and the user will be warned by email. Upon any subsequent offense, the user may have their access revoked at the discretion of the Center for Research Computing. If an active shell is necessary to view live results of an executable, please run an interactive job as outlined in https://crc.ku.edu/using-hpc#Submitting.
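For reference, an interactive job request might look like the following sketch. This assumes a Slurm scheduler; the partition name and time limit are placeholders (use your group's queue, or the "sixhour" queue, as appropriate):

```shell
# Request an interactive shell on a compute node instead of working on a
# login node. The partition name and one-hour limit below are placeholders.
srun --partition=sixhour --time=01:00:00 --ntasks=1 --pty /bin/bash
```

Once the allocation is granted, the shell runs on a compute node, so intensive work there does not affect other users on the login nodes.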
Cluster Backup Policy
The cluster file system is triply redundant, and we never expect to see any loss of files. That said, it is always a good idea to keep your files backed up somewhere other than the cluster's file system in case of unexpected failure. It is infeasible to back up the entire file system due to its size and the nature of the file system. Users are responsible for their own backups; the Center for Research Computing will not create automatic backups for any user and is not responsible for the deletion or loss of any files from the cluster file system.
Scratch Space Policy
The scratch space on the cluster file system is open to all users of the cluster, with no space limitations other than the physical limits of available disk space. To keep this shared space clean and available for all users, any file found to be more than 60 days old on the scratch segment of the file system will be automatically deleted. The scratch space is short-term storage for immediate use in computation; any files that need to be kept longer should be stored in a user's home or work directory.
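To see which of your scratch files are at risk, you can list files older than 60 days yourself. This is a sketch that assumes file age is measured by modification time and that your scratch directory lives at "/scratch/$USER" (a placeholder path; substitute your site's actual layout):

```shell
# List regular files under the scratch directory whose contents have not
# been modified in more than 60 days -- candidates for automatic deletion.
# SCRATCH_DIR may be set in the environment; /scratch/$USER is a placeholder.
find "${SCRATCH_DIR:-/scratch/$USER}" -type f -mtime +60
```

Anything this prints that you still need should be moved to your home or work directory before the cleanup removes it.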