0 votes · 3 replies · 37 views

I have a program that writes info to the screen, does some stuff, prints again, and so on. For example, the program is julia MWE.jl, where MWE.jl is the file for ii=1:50 println("ii=",ii)...
mancolric • 111
1 vote · 0 answers · 43 views

I'm trying to install flash attention 2.8.2 on a CentOS 7 SLURM GPU cluster. But as the login node does not have NVCC, direct installation on the login node fails. Direct installation command: ...
RobbyNewbie
2 votes · 1 answer · 55 views

I am trying to set up a workflow using nextflow v.25.04.7, but I am having issues with the -withLabel options. Here is the relevant code in nextflow.config: process { executor = "slurm" ...
Max_IT • 660
0 votes · 0 answers · 56 views

I am trying to use VS Code (from a Windows laptop) to: SSH into the institutional cluster (which uses SLURM). Launch an interactive session with srun --pty -N1 -n1 --mem=4G --time=03:00:00 /bin/bash (...
wrong_path
1 vote · 1 answer · 72 views

Is @ray.remote def run_experiment(...): (...) if __name__ == '__main__': ray.init() exp_config = sys.argv[1] params_tuples, num_cpus, num_gpus = load_exp_config(exp_config) ray.get(...
Blupon • 1,091
0 votes · 0 answers · 50 views

I am currently using Snakemake version 9.6.3 on a cluster managed by a SLURM scheduler. In previous workflows, I relied on version 6, which supported the --cluster, --cluster-status, and --parsable ...
jeje • 11
0 votes · 0 answers · 47 views

I am currently running a SLURM job file on an array whose size I set manually, e.g. sbatch --array=1-6 myjob.array. The size of the array is determined when setting up the code to run. E.g. if I ...
Sam • 1,522
0 votes · 0 answers · 55 views

I'm working on Slurm 24.11.5 and have slurmrestd installed. When using the provided RESTful API, I noticed that /slurm/{version}/job/{job_id} won't work and always responds with an error message of "...
Popoo • 1
1 vote · 1 answer · 106 views

I'm building a SLURM pipeline where each stage is a bash wrapper script that generates and submits SLURM jobs. Currently I'm doing complex job ID extraction which feels clunky: # Current approach ...
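The clunky ID extraction this question describes is commonly replaced with `sbatch --parsable`, which prints only the job ID instead of the full "Submitted batch job NNN" line. A minimal sketch of chaining two stages this way (stage1.sh and stage2.sh are hypothetical wrapper scripts, and this needs a real SLURM cluster to run):

```shell
#!/bin/bash
# Sketch: chain two pipeline stages without grepping sbatch's
# human-readable output. --parsable makes sbatch print just the
# job ID (plus cluster name, if submitting to a named cluster).
jid1=$(sbatch --parsable stage1.sh)

# Run stage 2 only after stage 1 completes successfully.
sbatch --parsable --dependency=afterok:"${jid1}" stage2.sh
```

With this pattern each wrapper needs no output parsing beyond capturing a single token, and failed upstream jobs leave the dependent job pending (or cancelled, with `--kill-on-invalid-dep=yes`).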
desert_ranger
0 votes · 2 answers · 57 views

I am working with a complex pipeline in Snakemake running on an HPC cluster managed by SLURM. My workflow runs across multiple servers with varying capacities. While Snakemake's distribution of jobs ...
Decarls
1 vote · 0 answers · 48 views

Given: #!/bin/bash #SBATCH --account=project_XXX #SBATCH --job-name=test #SBATCH --output=/scratch/project_XXX/ImACCESS/trash/logs/%x_%a_%N_%j_%A.out #SBATCH [email protected] #SBATCH -...
farid • 1,621
0 votes · 2 answers · 94 views

I am trying to test future.batchtools for parallelisation in R. I have a small test job (run_futurebatchtools_job.R) as: library(future) library(future.batchtools) # Set up the future plan to use ...
Arindam Ghosh
0 votes · 1 answer · 117 views

I am currently trying to use Ray with Slurm. According to Deploying on Slurm, I have the following slurm script: #!/bin/bash #SBATCH --account=xxx #SBATCH --job-name=test #SBATCH -o job.%j.out ...
Shijie Cao
1 vote · 0 answers · 88 views

I’ve been using salloc to allocate compute nodes without issues before. Recently, after switching to another user account (same .bashrc config, only the conda path changed), salloc stopped working. I ...
Calculus007
0 votes · 1 answer · 68 views

It is not possible for me to give you a minimal reproducible example, because I have no idea why or even how this is happening. But I can give you a sample of the error messages. So basically, I'm ...
profPlum • 521
