
I want to speed up my code compilation. I have searched the internet and heard that Psyco is a very good tool for improving speed, but I could not find a site to download it from.

I have not installed any additional libraries or modules in my Python to date. Can a Psyco user tell me where to download Psyco, and explain its installation and usage procedure? I use Windows Vista and Python 2.6. Does it work with these?

  • Is there a specific reason your code is unusually slow to compile? Commented Jun 9, 2010 at 16:13
  • As mentioned before, I need to run a certain method which involves opening and reading 4 files, and this method is called 10,000 times. Commented Jun 9, 2010 at 16:26
  • Normally a method will be compiled once and run ten thousand times. You don't need to speed up compilation, but you may need to speed up execution. Commented Jun 9, 2010 at 20:06

4 Answers


I suggest not relying on such tools; in any case, Psyco is being superseded by newer Python implementations such as PyPy and Unladen Swallow. To get a speedup "for free" you can use Cython or Shed Skin. Still, in my opinion this is not the right way to speed up code.

If you are looking for speed here are some hints:

  1. Profiling
  2. Profiling
  3. Profiling

You should use the cProfile module and find the bottlenecks, then proceed with the optimization.
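
For illustration, a minimal cProfile session might look like this (work() is just a placeholder for your own code):

    import cProfile
    import pstats

    def work():
        # Stand-in for the function you actually want to measure.
        total = 0
        for i in range(100000):
            total += i * i
        return total

    cProfile.run("work()", "profile.out")            # collect stats into a file
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)   # show the top 10 entries

The sorted output tells you which functions dominate the runtime, which is where optimization effort should go.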

If optimizing in Python isn't enough, rewrite the relevant parts in Cython and you're set.




Psyco does not speed up compilation (in fact, it would slow it down). However, if your problem is compilation speed in Python, there are some serious problems with your code.

If you are trying to improve runtime performance, Psyco does work with 32-bit operating systems and Python version 2.5. The latest version is the first Google result for Psyco: http://psyco.sourceforge.net/
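
If you do end up using it on a supported setup, enabling Psyco is only a couple of lines; psyco.profile() is the alternative that compiles only the functions it observes to be hot:

    import psyco
    psyco.full()   # JIT-compile every function in the program

    # ...the rest of your program runs unchanged...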

Psyco is no longer an "interesting" project as Python 3.x has gained Unladen Swallow, and most of the developer attention is divided between that and PyPy.

There are other ways of improving performance, not limited to Cython and Shed Skin.



So it seems you don't want to speed up compilation; you want to speed up execution.

If that is the case, my mantra is "do less." Save off results and keep them around; don't re-read the same file(s) over and over again. Read a lot of data out of the file at once and work with it.

On files specifically, your performance will be pretty miserable if you're reading a little bit of data out of each file and switching between a number of files while doing it. Just read in each file in completion, one at a time, and then work with them.
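
As a minimal sketch of that idea (the file format and names are assumptions, not taken from the question), read and parse each file once, then serve every later call from an in-memory cache:

    _file_cache = {}

    def load_values(path):
        # Parse the file on first use; every later call reuses the result.
        if path not in _file_cache:
            with open(path) as f:
                _file_cache[path] = [float(line.split()[0]) for line in f]
        return _file_cache[path]

With this in place, calling the method 10,000 times costs four file reads in total instead of 40,000.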


  • I have file data in the format 3.343445 1 3.54564 1 4.345535 1 2.453454 1 and so on, up to 1000 lines, and I am given a number such as a=2.44443. For the given file I need to find the row number of the value closest to the given number "a". At present I load the whole file into a list, compare each element, and find the closest one. Is there a better, faster method?
  • You could create a new list of data pairs: my_list = [(abs(n-a), n) for n in file_list], then sort it and pick out the first element.
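
Here is a short sketch of that suggestion, using min() with a key function instead of building and sorting a pair list (same idea, but a single pass with no sort; the file name is hypothetical):

    def closest_row(path, a):
        # Read the first column of each line as a float.
        with open(path) as f:
            values = [float(line.split()[0]) for line in f]
        # Find the index and value of the entry closest to a.
        row, value = min(enumerate(values), key=lambda p: abs(p[1] - a))
        return row, value

    print(closest_row("data.txt", 2.44443))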
  • Use the appropriate data structures. If you see that you are doing a lot

    if element in list #or list.index(element) 

    then you might be better off with sets and dictionaries.

  • Don't create a list only to iterate over it; use generators or the itertools module.
  • Read Python Performance Tips
  • As already mentioned, do profiling.
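
A quick sketch of the first two tips (all names here are illustrative):

    items = ["a", "b", "c"] * 1000

    # Membership tests are O(n) on a list but O(1) on a set.
    lookup = set(items)          # build once, query many times
    print("b" in lookup)

    # A generator expression feeds sum() without building an intermediate list.
    total = sum(len(s) for s in items)
    print(total)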

