According to this answer, Python instances are set up when the `foreachPartition` / `mapPartitions` functions are used, on the nodes where the executors run. How are the memory / compute capacities of these instances set? Do we get to control them through some configuration?
I found `spark.executor.pyspark.memory`, but even its description does not explicitly say whether it applies to the Python instances created by the executor. Also, is there a similar parameter for cores? `spark.executor.cores` seems to control only the executor's cores.
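For context, this is how I would set both options when creating the session (a minimal sketch; the app name, values, and the partition function are just placeholders, and whether `spark.executor.pyspark.memory` actually bounds the Python workers is exactly what I'm asking):

```python
from pyspark.sql import SparkSession

# spark.executor.cores sets cores per executor JVM; spark.executor.pyspark.memory
# is documented as the memory allocated to PySpark in each executor -- unclear to
# me if that means the Python worker processes spawned for mapPartitions etc.
spark = (
    SparkSession.builder
    .appName("pyspark-worker-resources")  # placeholder name
    .config("spark.executor.cores", "2")
    .config("spark.executor.pyspark.memory", "1g")
    .getOrCreate()
)

def handle_partition(rows):
    # This runs inside a Python worker process started by the executor.
    for row in rows:
        pass  # per-row processing would go here

spark.sparkContext.parallelize(range(100), numSlices=4).foreachPartition(handle_partition)
```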