In bash, when running

    myvar=val mycommand myargs

`myvar=val` will be added to the environment in which `mycommand` is executed.
Suppose the bash process calls `fork()` to create a child process which will execute `mycommand`, i.e. `mycommand` is an external executable or a script file.
When does the addition of `myvar=val` to the environment happen: before or after the bash shell calls `fork()`? In other words, which of the following two possibilities actually happens?
1. The bash process adds `myvar=val` to its own environment, then calls `fork()` to create a child process, which calls `execve()` to execute `mycommand`; `myvar=val`, as part of the environment of the bash process, is inherited into the environment of the child process. When `mycommand` finishes and the child process exits, the bash process drops `myvar=val` from its own environment.

2. The bash process calls `fork()` to create a child process, and the child process adds `myvar=val` to its own environment and then calls `execve()` to execute `mycommand`.
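Whichever of the two it is, the net effect observable from the shell is identical: the variable shows up in the command's environment and is gone from the shell afterwards, so this check alone cannot distinguish the two possibilities. A quick sanity check (a sketch; `sh -c '…'` stands in for any external `mycommand`, and it assumes `myvar` is not already set):

```shell
# The temporary assignment is visible inside the child command...
myvar=val sh -c 'echo "child sees: $myvar"'   # child sees: val

# ...but the parent shell does not retain it afterwards.
echo "parent sees: ${myvar-<unset>}"          # parent sees: <unset>
```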
My question is motivated from Stephen's reply to my earlier post.
> In Bash, `_` is a special parameter which is set to the value of the last argument every time a command is parsed. It also has the special property of not being exportable, which is enforced every time a command is executed (see `bind_lastarg` in the Bash source code).
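The quoted behavior is easy to observe directly (a sketch, assuming a reasonably recent bash; the second command shows that executed programs nevertheless find `_` in their environment, because bash separately places `_=<full pathname of the command>` there):

```shell
# Within the shell, _ holds the last argument of the previous command:
bash -c 'echo one two three; echo "last arg: $_"'
# → one two three
# → last arg: three

# External commands still see _ in their environment, set by bash to
# the full pathname of the command being executed (path varies):
bash -c 'env | grep "^_="'
```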
What I am wondering is: when a bash process executes a command, if bash doesn't add `_` to its own environment, why does it need to drop it from its own environment?
Thanks.
> Bash does add `_` as an environment variable. Every time a command is executed, the "exported" flag on the variable is cleared. You would think that, once the bit has been cleared, there is no need to clear it again, but it is cheaper and simpler to just clear it instead of doing a test and clear.
>
> The environment is passed to the new program as the `envp` parameter to the `execve` system call. The kernel then copies the array onto the child process's stack, next to the argument vector that `argv` points to.
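From the shell you can see exactly that array by starting the child with an emptied environment: `env -i` clears everything, so whatever the new program finds in `envp` is only what was listed on the `env` command line (a sketch, assuming a standard `env` at `/usr/bin/env`):

```shell
# Start from an empty environment and add one variable; the child's
# envp then contains only that entry, which env prints back.
env -i myvar=val /usr/bin/env   # → myvar=val
```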