Presentation held at the USC Information Sciences Institute on July 27, 2016

Abstract - Understanding user behavior is a crucial factor when evaluating scheduling and allocation performance in high performance computing environments. Since workload traces implicitly include these interaction processes, they are often used for conducting performance evaluations. Nevertheless, realistic performance evaluations need to account for the dynamic user reaction to different levels of system performance, because recorded data reflects only one instantiation of an interactive process. To further understand this process, we perform a comprehensive analysis of user behavior in recorded data in the form of delays in subsequent job submissions. To this end, we characterize a workload trace from the Mira supercomputer at ALCF (Argonne Leadership Computing Facility) covering one year of job submissions. We perform an in-depth analysis of correlations between job characteristics, system performance metrics, and subsequent user behavior. The results show that user behavior is significantly influenced by long waiting times, and that complex jobs (in terms of number of nodes and CPU hours) lead to longer delays in subsequent job submissions. We also find that a notification mechanism informing users upon job completion does not influence subsequent submission behavior. Furthermore, we extend the analysis of HPC job submission behavior to HTC job submissions. We consider HTC job submission behavior in terms of parallel batch-wise submissions, as well as delays and pauses in job submission. We compare differences in batch characteristics by classifying batches using a popular model. Our findings show that modeling HTC job submission behavior requires knowledge of the underlying bags of tasks, which is often unavailable. Additionally, we find evidence that subsequent job submission behavior is not influenced by the different complexities and requirements of HPC and HTC jobs.
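As a rough illustration of the kind of analysis described above, the sketch below (Python with pandas) shows how per-user submission delays ("think times") could be derived from a workload trace and correlated with waiting time and job size. The file name and column names are assumptions for illustration, not the actual Mira trace format, and the snippet is not code from the presentation.

```python
import pandas as pd

# Minimal sketch: derive per-user submission delays from a trace and
# correlate them with waiting time and job size.
# Assumed (hypothetical) columns: user, submit_time, start_time, end_time,
# nodes, with all timestamps as numeric seconds.
trace = pd.read_csv("workload_trace.csv")  # hypothetical trace file
trace = trace.sort_values(["user", "submit_time"])

# Delay between a job's completion and the same user's next submission.
trace["next_submit"] = trace.groupby("user")["submit_time"].shift(-1)
trace["think_time"] = trace["next_submit"] - trace["end_time"]
trace["wait_time"] = trace["start_time"] - trace["submit_time"]

# Keep only positive delays, i.e. cases where the next job was submitted
# after the previous job had completed.
delays = trace[trace["think_time"] > 0]

# Rank correlation between job characteristics and the subsequent delay.
print(delays[["wait_time", "nodes", "think_time"]].corr(method="spearman"))
```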