Filesystems

Home Directories

Home directories have 50GB of storage. They are shared across all nodes and have a path as follows:

/home/usd.local/user.name

Home directories can also be referenced by their environment variable:

$HOME

Please note that home directories are not backed up!
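
For example, you can print the path and check how much of the 50GB quota you are currently using (a minimal sketch; du may take a while on large directories):

# Print the home directory path for the current user
echo $HOME

# Summarize total disk usage of the home directory (compare against the 50GB limit)
du -sh $HOME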

Group Home Directories

Group home directories have 5TB of storage. They are also shared across all nodes and have a path as follows:

/home/smith-lab
    /shared (read/write permissions for all group members)

Please note that group home directories are not backed up!
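
For example, using the smith-lab group from above, members could share files through the shared subdirectory (results.csv is only a hypothetical filename):

# Copy a file into the group's shared area, readable and writable by all group members
cp $HOME/results.csv /home/smith-lab/shared/

# Check the permissions on the shared directory
ls -ld /home/smith-lab/shared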

Scratch

Slurm creates /scratch/job_$SLURM_JOB_ID on the compute node while your job is running. If you want to work in this directory, you could include something like this in your job script:

SCRATCH="/scratch/job_$SLURM_JOB_ID"
cp $HOME/workfile.txt $SCRATCH
cd $SCRATCH

# run commands, do things

cp resultfile.txt $HOME

Please note that /scratch is not a shared filesystem; each node has its own /scratch (169 GB of SSD storage per node). Additionally, data on /scratch only lasts while the job is running! If you need data from /scratch after your job, copy it off (for example, back to your home directory) before the job ends.
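
As a quick sanity check from inside a running job, you can confirm how much node-local scratch space is available and what your job has written there (a minimal sketch; sizes will vary by node and job):

# Check the capacity and free space of this node's local /scratch
df -h /scratch

# List the contents of this job's scratch directory
ls -l /scratch/job_$SLURM_JOB_ID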
