We have a requirement where copies of the same Spark application (basically a .jar file) run from different folders. I am looking for a shell command with which I can kill the Spark app running in one folder without killing the other Spark jobs.
Ex:
/home/user/Folder1 - CapStoneSpark.jar
/home/user/Folder2 - CapStoneSpark.jar
/home/user/Folder3 - CapStoneSpark.jar
The main class in the jar file is "CapStoneSparkMain". Suppose I want to kill only the CapStoneSpark.jar running in Folder2, without touching or killing the CapStoneSpark.jar running from Folder1 or Folder3. What should I do?
I have already tried:
kill $(ps aux | grep 'CapStoneSparkMain' | awk '{print $2}')
But it kills all the processes that have the "CapStoneSparkMain" class in them.
I only want to kill the process originating from a single directory and don’t want to touch the copies of the processes running from other directories.
3 Answers
Hello,

Write a for loop that checks each process, and if it fits your criteria, run the kill command on ${i} inside the loop. You can find all process IDs that are using these folders:
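For example, a minimal sketch of that listing step; the folder path and the main-class name come from the question, while the pgrep/readlink approach is my assumption about what the answer intended:

    # Print the PIDs of CapStoneSparkMain processes whose working directory is Folder2
    for i in $(pgrep -f CapStoneSparkMain); do
        if [ "$(readlink -f "/proc/$i/cwd")" = "/home/user/Folder2" ]; then
            echo "$i"
        fi
    done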
And execute it:
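A sketch under the same assumptions, with kill on ${i} replacing the echo:

    # Same loop, but send SIGTERM to the matching PID(s) instead of printing them
    for i in $(pgrep -f CapStoneSparkMain); do
        if [ "$(readlink -f "/proc/$i/cwd")" = "/home/user/Folder2" ]; then
            kill "$i"
        fi
    done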
It's not clear how the jobs are started, but assuming each job is started with a different working directory, it is possible to kill the specific job: the working directory of each process can be found via /proc/<pid>/cwd, a symlink to the job's folder. Building on the commands/loop suggested above:
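A minimal sketch of such a loop; the kill_folder variable and the CapStoneSparkMain pattern come from the question and this answer's description, the rest is an assumed implementation:

    # Folder whose Spark job should be killed (example path from the question)
    kill_folder="/home/user/Folder2"

    # Walk over the PIDs of all processes running the CapStoneSparkMain class
    for pid in $(pgrep -f CapStoneSparkMain); do
        # Resolve the process's working directory through its /proc entry
        if [ "$(readlink -f "/proc/$pid/cwd")" = "$kill_folder" ]; then
            kill "$pid"   # only the job started from kill_folder is killed
        fi
    done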
The loop checks whether the symlink /proc/<pid>/cwd resolves to the named folder (kill_folder) and only issues the kill to processes running from that folder.