Wednesday, December 3, 2008

cinpy - or C in Python

cinpy is a tool that lets you write C code inside your Python code (with the help of ctypes, which is included in modern Python versions). When you execute your Python program, the C code is compiled on the fly using the Tiny C Compiler (TCC).
In this post I will describe:
1) installing and testing cinpy
2) a simple benchmark (c-in-py vs python)
3) comparing performance with gcc (c-in-py vs gcc)
4) measuring cinpy's on-the-fly compilation time
5) how to dynamically change cinpy methods
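
Before going through the details, here is roughly what a cinpy definition looks like (a minimal sketch; the add() example is mine and not part of cinpy's test suite):

import ctypes
import cinpy

# compile a trivial C function on the fly with tcc and call it from Python
addc = cinpy.defc("add",
                  ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int),
                  """
                  int add(int a, int b) {
                      return a + b;
                  }
                  """)

print addc(2, 3)  # prints 5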

1. How to install and try cinpy (note: these steps are also found in the cinpy/tcc README files)
1) download, uncompress and compile TCC:
./configure
make
make install
gcc -shared -Wl,-soname,libtcc.so -o libtcc.so libtcc.o

2) download, uncompress and try cinpy:
cp ../tcc*/*.so .
python cinpy_test.py  # you may have to comment out or install psyco 
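
If cinpy_test.py complains that it cannot find libtcc, a quick sanity check is to try loading the shared library directly with ctypes (assuming libtcc.so was copied to the current directory as above):

python -c "import ctypes; ctypes.CDLL('./libtcc.so'); print 'libtcc.so loads ok'"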

2. Sample performance results (on an x86 Linux box):

python cinpy_test.py
Calculating fib(30)...
fibc : 1346269 time: 0.03495 s
fibpy: 1346269 time: 2.27871 s
Calculating for(1000000)...
forc : 1000000 time: 0.00342 s
forpy: 1000000 time: 0.32119 s

Using cinpy, the fibc (Fibonacci) method was ~65 times faster than fibpy, and the forc (loop) method was ~93 times faster than forpy; not bad.
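
For reference, the pure-Python counterparts in cinpy_test.py look roughly like this (a sketch; the exact code shipped with cinpy may differ slightly):

def fibpy(x):
    # same algorithm as the C version, in pure Python
    if x <= 1:
        return 1
    return fibpy(x - 1) + fibpy(x - 2)

def forpy(x):
    # count up to x in a plain Python loop
    i = 0
    while i < x:
        i = i + 1
    return i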

3. How does cinpy (compiled with tcc) compare to gcc performance?
Copying the C fib() function into a standalone C program with a main() that prints fib(30), compiling it with gcc, and timing the result:
$ time fibgcc
fib(30) = 1346269

real    0m0.016s
user    0m0.020s
sys     0m0.000s


GCC produces code that is roughly twice as fast as cinpy/tcc (0.016 s vs 0.034 s).
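
For completeness, here is a sketch of one way to reproduce the gcc comparison from Python (the file name fibgcc.c, the -O2 flag and the use of subprocess are my choices here, not necessarily exactly what was used for the numbers above):

#!/usr/bin/env python
# fibgcc_build.py - write the same fib() to a C file, build it with gcc
# and time the resulting binary
import subprocess

csrc = """
#include <stdio.h>

int fib(int x) {
  if (x<=1) return 1;
  return fib(x-1)+fib(x-2);
}

int main(void) {
  printf("fib(30) = %d\\n", fib(30));
  return 0;
}
"""

open("fibgcc.c", "w").write(csrc)
subprocess.call(["gcc", "-O2", "-o", "fibgcc", "fibgcc.c"])
subprocess.call(["bash", "-c", "time ./fibgcc"])
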
4. How long does it take for tcc to compile cinpy methods on the fly?


#!/usr/bin/env python
# cinpy_compile_performance.py
import ctypes
import cinpy
import time
t=time.time
t0 = t()   # start of on-the-fly compilation
fibc=cinpy.defc(
     "fib",
     ctypes.CFUNCTYPE(ctypes.c_int,ctypes.c_int),
     """
     int fib(int x) {
       if (x<=1) return 1;
       return fib(x-1)+fib(x-2);
     }
     """)
t1 = t()   # end of compilation
print "Calculating fib(30)..."
sc,rv_fibc,ec=t(),fibc(30),t()   # time the actual call
print "fibc :",rv_fibc,"time: %6.5f s" % (ec-sc)
print "compilation time = %6.5f s" % (t1-t0)

python cinpy_compile_performance.py

Calculating fib(30)...
fibc : 1346269 time: 0.03346 s
compilation time = 0.00333 s


So the compilation (and linking) time is about 3.3 ms, which is reasonably good and not a lot of overhead! (Note: TCC is benchmarked to compile, assemble and link 67 MB of code in 2.27 s.)


5. "Hot-swap" replacement of cinpy code?
Let us assume you have a system with a "Pareto" situation, i.e. 80-99% of the code does not need to be fast (and is written in Python), but 1-20% needs really high performance (and is written in C using cinpy), and that you need to frequently change the high-performance code. Can that be done? An example of such a system could be mobile (software) agents.


Sure, all you need to do is wrap your cinpy definition as a string and run exec on it, like this:

import ctypes
import cinpy

origmethod="""
fibc=cinpy.defc("fib",ctypes.CFUNCTYPE(ctypes.c_int,ctypes.c_int),
          '''
          int fib(int x) {
              if (x<=1) return 1;
              return fib(x-1)+fib(x-2);
          }
          ''')
"""
exec(origmethod)      # compile and bind the original fibc
print fibc(2)         # = 2

# add an offset of 998 to each addition in the Fibonacci method
alternatemethod = origmethod.replace("+", "+998+")
print alternatemethod
exec(alternatemethod) # recompiles the C code and rebinds fibc on the fly
print fibc(2)         # = 1000
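
And since the whole definition is just a string, swapping back to the original behavior is just as easy:

exec(origmethod)   # recompile and rebind the original fib
print fibc(2)      # = 2 again
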
Conclusion:
cinpy ain't bad.

Remark: depending on the problem you are solving (e.g. if it is primarily network-IO bound and not CPU bound), becoming 1-2 orders of magnitude faster (cinpy vs pure Python) is probably fast enough CPU-wise, and the additional doubling from gcc may not matter (network-IO wise, Python performs quite alright, e.g. with Twisted).