Having recently discovered Python, I attempted to write a simple logger. Data is read from a device, processed, displayed, and stored on disk. Those different tasks belong to different modules, of course.
What seemed cool at the time was to provide a command-line interface to each module, as explained here: https://docs.python.org/2/tutorial/modules.html#executing-modules-as-scripts
The result is that although the program as a whole uses import x and then uses the classes in the modules, each module can be used from the command line as well. For example, invoking hw_comm.py opens the default device with default options and streams to stdout, while plot.py expects data from stdin and draws a plot.
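For illustration, here is a minimal sketch of what such a module might look like. The class name HwComm, the read loop, and the default device path are assumptions for the example, not details from the actual project:

```python
# hw_comm.py -- usable both as an importable module and as a script
import sys


class HwComm:
    """Reads samples from a device and yields them one per line."""

    def __init__(self, device="/dev/ttyUSB0"):  # default device is an assumption
        self.device = device

    def samples(self):
        # Placeholder for the real device I/O: stream lines from the device file.
        with open(self.device) as dev:
            for line in dev:
                yield line.strip()


if __name__ == "__main__":
    # Script mode: open the default device and stream samples to stdout,
    # so the module also composes with other tools in a shell pipeline.
    for sample in HwComm().samples():
        sys.stdout.write(sample + "\n")
```

When imported, only the class is visible; when run directly, the guarded block turns the same file into a small command-line tool.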
I can see a ton of advantages to this design, such as:
- easy to debug
- easy to adapt for different use cases
- easy to test
- provides immediate value to the end user, long before the project is finished.
I have read only a few thousand lines of Python so far, but I haven't seen this approach (modules as both importable classes and stand-alone scripts) elsewhere. Why would that be? Is the extra work to support this format too much for an enterprise project? Or maybe, unlike my tiny project, business-scale projects cannot easily support this behavior, with hundreds of modules doing different things just to accomplish one complex goal together?
Clarification: under if __name__ == "__main__": there are about 10 lines that use the class in a default way. Thanks to them, using the module can be as simple as python hw_comm.py. If those lines weren't there, one would need to start the Python interpreter and type those 10 lines each time, and invoking other commands alongside would be more cumbersome than in Bash. In a shell pipeline the output can even be filtered further, e.g. with cut to get only one column.
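To make the clarification concrete, here is a hedged sketch of the consuming end, plot.py as described above. The function name plot_values and the one-value-per-line input format are assumptions for the example:

```python
# plot.py -- reads one value per line from stdin and draws a plot
import sys

import matplotlib.pyplot as plt


def plot_values(values):
    """Draw a simple line plot of the given samples."""
    plt.plot(values)
    plt.xlabel("sample")
    plt.ylabel("value")
    plt.show()


if __name__ == "__main__":
    # Script mode: consume whatever the previous pipeline stage streamed.
    values = [float(line) for line in sys.stdin if line.strip()]
    plot_values(values)
```

With both guards in place, the default use case is just python hw_comm.py | python plot.py, while either half can still be imported and driven programmatically.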