About PyISAPIe

PyISAPIe is an ISAPI extension for Windows-based web servers that lets Python scripts serve web pages. It is written primarily for IIS, although it runs on Apache web servers as well.

ISAPI applications can outperform CGI or FastCGI applications mostly because of their tight integration with the web server environment. Instead of initializing an entire program from scratch (in this case, the Python interpreter) every time a page is requested, an ISAPI extension only has to provide a function that is called on every request. For interpreting Python scripts on a per-request basis, this means the interpreter can be initialized once and used many times, creating a very noticeable performance gain.

PyISAPIe is WSGI compliant (except for readline support). I have included an example WSGI handler script, so if you know a framework like Django you shouldn't have much trouble getting going with it.
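
For anyone new to the interface, here is a minimal sketch of what a WSGI application looks like (illustrative only; the example handler shipped with PyISAPIe may differ in its details):

    # Minimal WSGI application sketch -- names here are illustrative.
    def application(environ, start_response):
        # environ is the CGI-style request dictionary; start_response sets
        # the status line and the response headers.
        body = "Hello from PyISAPIe"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]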


News

Updates on the project.

2010-05-03

I've been quite busy polishing 1.1.0 and progress has been great. Here are some of the things that have changed as a result of bug fixes or compatibility issues:

  • When keep-alive is disabled (and ONLY then), content length is no longer verified for writes, and a single write is no longer required.

Why? Because it doesn't matter if you, the script writer, send an amount of data that doesn't match the content length when the connection is ending anyway. However, on a keep-alive connection there are only limited and ugly ways to handle unaligned writes: (1) allow the overflow, making the client think it's part of some future response, (2) truncate the overflow with no way of reporting the error to the client, or (3) use whatever space is left within the content length to report an error.

This only partly alleviates the issue of multiple iterator values in WSGI output (e.g. yielding many times for a response). Why? Because you can't send a Connection: close header; it's "hop-by-hop", and you'll get "AssertionError: Hop-by-hop headers not allowed". This is stupid, and I don't have an answer for you here besides going around WSGI to close the connection.

My final advice if you want to yield many values (e.g. for large files) with a known length WITHOUT hacking around WSGI: if the client is HTTP/1.1 and didn't send a close header (so you know it's a keep-alive request), just don't specify a content length. The client will know it has received all the data when it gets the zero-length chunk (via chunked transfer encoding). If the client is HTTP/1.0 and it's not a keep-alive request, you can specify a content length; however, if it is a keep-alive request and you don't specify a length before you start sending, the connection will be closed. This is the only case where the client won't reliably know when the end of the data has been reached.
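
A rough sketch of that advice (illustrative only, not the shipped handler): only declare a Content-Length when the connection is going to close anyway, and otherwise let chunked transfer encoding delimit the body.

    # Sketch of the advice above; the generator and header logic are illustrative.
    def generate_chunks():
        # Stand-in for a large response produced piece by piece.
        for i in range(4):
            yield "block %d\n" % i

    def application(environ, start_response):
        headers = [("Content-Type", "text/plain")]
        http_1_1 = environ.get("SERVER_PROTOCOL") == "HTTP/1.1"
        wants_close = "close" in environ.get("HTTP_CONNECTION", "").lower()

        if http_1_1 and not wants_close:
            # Keep-alive HTTP/1.1 client: omit Content-Length and let the
            # server delimit the body with chunked transfer encoding; the
            # client knows the response is complete when it sees the
            # zero-length chunk.
            start_response("200 OK", headers)
            return generate_chunks()

        # The connection is ending (or the client is HTTP/1.0 without
        # keep-alive), so declaring a length is safe.
        body = "".join(generate_chunks())
        headers.append(("Content-Length", str(len(body))))
        start_response("200 OK", headers)
        return [body]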

  • Chunked transfer encoding in POST data is now handled properly.

With some caveats: CONTENT_LENGTH will be -1, and you won't know you've read all the data until you get a zero-length string. So check this value, and if it is -1, pick a buffer size and keep reading. Thanks to Jocelyn E. for pointing this out. She also noted that PHP FastCGI doesn't handle this situation either. For any ISAPI developer who's curious, there is NO mention of this specific situation being any different in the documentation. However, David Wang from Microsoft provided the necessary information in a forum, so I was able to include handling for it. Someone might want to get the FastCGI ISAPI author(s) to look into it as well.
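
For illustration, reading a request body under these rules might look like the following sketch (the helper name is made up; only the -1 convention comes from PyISAPIe):

    # Hypothetical helper: read the POST body whether or not the length is known.
    def read_post_body(environ, bufsize=8192):
        stream = environ["wsgi.input"]
        try:
            length = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            length = 0

        if length >= 0:
            # Normal case: the length is known up front.
            return stream.read(length)

        # Chunked request body (CONTENT_LENGTH == -1): keep reading fixed-size
        # buffers until the zero-length string signals the end of the data.
        parts = []
        while True:
            block = stream.read(bufsize)
            if not block:
                break
            parts.append(block)
        return "".join(parts)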

  • Full example of multiple Django instances within one process.

More caveats here: using multiple interpreters can be a sticky situation, so I'm NOT guaranteeing stability in any case where C extensions are used. Some examples would be psycopg2, sqlite3, or other compiled database adapters. I know my solution works on the Python level, but when throwing these things into the mix YMMV.

Also, because (1) the Django folks have outwardly pushed away Windows support and (2) they don't want to change the process-level settings module definition, this is as far as I plan to go with anything Django-related. You have a WSGI module and some other toys to help you, and beyond that you are on your own.
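
For reference, the process-level settings issue comes down to the standard Django (1.x-era) WSGI entry point, which reads a single settings module from the environment; that is why each site needs its own interpreter. The module name below is illustrative:

    # Standard Django WSGI entry point of that era; "mysite.settings" is
    # illustrative. Because DJANGO_SETTINGS_MODULE applies to the whole
    # interpreter, each Django site needs its own interpreter to get its
    # own settings.
    import os
    os.environ["DJANGO_SETTINGS_MODULE"] = "mysite.settings"

    from django.core.handlers.wsgi import WSGIHandler
    application = WSGIHandler()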

  • Bytecode and optimization options.

In the Http.Config module there's some info and settings on how to make PyISAPIe write .pyc/.pyo files if you so wish. The new default behavior is to not write them. You can also change the defaults regarding the "-O" flag you'd normally pass on the command line.
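
The PyISAPIe-specific settings live in Http.Config and use their own names; for orientation, the standard-library equivalents of those two knobs (Python 2.6+) look like this:

    import sys

    # Suppress writing .pyc/.pyo files, matching the new default behavior.
    sys.dont_write_bytecode = True

    # The optimization level normally set with the command-line -O flag; it
    # can't be changed from inside a running interpreter, which is why an
    # embedder like PyISAPIe has to expose its own setting for it.
    level = sys.flags.optimize  # 0 = no -O, 1 = -O, 2 = -OO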

  • Full 64-bit support.

It's done, and without many complications. Thanks to Geographika for pioneering the public effort. I had struggled with it in the past and wanted to wait for the official 64-bit Python, but then waited a bit too long after it finally became available.

  • Documentation.

Documentation is the only reason (besides testing) that the next release isn't available yet. I'm doing my best to make up for the lack of documentation in past versions. If you're a pioneer, you can check out the source and compile it with all of the above changes implemented.

Also, a quick copy-paste from the WSGI handler file about what's not compliant at this point:

  • No readline on wsgi.input (see the workaround sketch after this list).
  • wsgi.input accepts calls with no parameters.
  • Chunked POST data is not pre-read, so an app using CONTENT_LENGTH, which will be -1, will not get the data. Chunked POST data isn't mentioned in the spec anyway.
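
If your application (or a form-parsing helper it uses) insists on readline, one workaround is to wrap wsgi.input in a small buffering adapter. A sketch, not part of PyISAPIe:

    # Illustrative adapter that adds readline on top of a read-only stream.
    class InputWrapper(object):
        def __init__(self, stream, bufsize=8192):
            self._stream = stream
            self._bufsize = bufsize
            self._buffer = ""

        def read(self, size=-1):
            if size is None or size < 0:
                data, self._buffer = self._buffer + self._stream.read(), ""
                return data
            while len(self._buffer) < size:
                block = self._stream.read(min(self._bufsize, size - len(self._buffer)))
                if not block:
                    break
                self._buffer += block
            data, self._buffer = self._buffer[:size], self._buffer[size:]
            return data

        def readline(self):
            while "\n" not in self._buffer:
                block = self._stream.read(self._bufsize)
                if not block:
                    break
                self._buffer += block
            pos = self._buffer.find("\n")
            end = len(self._buffer) if pos < 0 else pos + 1
            line, self._buffer = self._buffer[:end], self._buffer[end:]
            return line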

Finally, a quick note about response encoding: yes, chunked transfer encoding is usable and viable even when the connection is closed. No, there's currently no way to specify/enable such behavior. It only makes sense in the context of verification, where you need to be sure you received everything before the connection was closed; however, there are other application-level ways to do this too. I just wanted to discuss this because it came up recently -- I don't plan to make any changes on this front.

Thanks again to those who have shown interest and shared their issues (and in some cases money!) with me. It's why I keep coming back to this project :)

2010-04-13

A lot of time has passed and user activity is picking up again (seems to be a yearly trend with this project). I've received some feedback and a bug report and finally decided to roll it all into the 1.1.0 final release. A preview of what's to come:

  • Data input bugfix (#7)
  • WSGI enhancements
  • Fix to an extremely subtle locking bug in code that never got executed (at least in IIS < 7)
  • Compiled with VS2008 SP1 C++ compiler (only because that's what I have now)
  • 64-bit support now that it's a standard Python release (barring any typing issues, so this might be delayed)
  • The final version of PyISAPIe to support Python 2.5, making way for 2.6, 2.7 and 3.2 support.

My goal here is to make sure I don't add any new features or change what's working. Bugfixes and enhancements after the next release will be 1.1.1, 1.1.2, etc. To give you an idea of where this is going, 1.2 will probably be for Python 3.x support, and 2.0 will only happen if I do something big like going asynchronous (IOCP) and/or supporting Stackless Python, which is a bit too much for me at the moment and for the foreseeable future. For those who are curious, I do most of my web work in Linux these days, and I'm working on a super-secret (for now) project similar to this one for it.

Cheers to the most recent contributors! I will also add credit in the docs (working on a Sphinx doc setup):

  • Einar R.
  • S. Girvin

2009-10-26

It looks like SourceForge is linking to the wrong version on the project page, and everything (e.g. svn info) is way outdated... and they don't make it very easy to administer projects anymore. I'm not sure why I'm still with SF, actually. </rant>

Please double-check that you are running 1.1.0-rc4 or later. FYI, the step to 1.1.0, when it arrives, will be small in terms of code changes; most bugs are accounted for now.

Older Stuff

While I put some effort into all of this a while back, the locking mechanism has changed a bit in 3.x and I haven't had time to catch up. This summer should provide me with more time to scientifically prove/disprove that a critical section is better than an event for the GIL on Windows.

GIL Modification Proposal

This is a follow-up to my thoughts below about speeding up Python/PyISAPIe. I've submitted a ticket to the Python tracker and got a discussion going on the Python-Dev mailing list... so we might see a great improvement in multithreaded Python code soon. Perhaps someday, when I have more time (ha), I can figure out how to implement the GIL as a futex on Linux. Or something faster, anyway.

Making Python & PyISAPIe faster, the hacker's way

I've been experimenting with some ways to make PyISAPIe faster. Not because it isn't fast already, but because it's the most fun I have after my code works - I get a chance to break it on my quest for Ultimate Speed :)

I've been working on two things lately:

Embedding the Python core into PyISAPIe

I've done it before and the performance results were so-so. I think this option will be viable, if anything, as a convenience package (no Python install necessary).

Making GIL locking/unlocking faster

So far this looks promising. The GIL is operated by a Windows "Event Object," which is possibly the slowest synchronization option available under high concurrency. The system-level mutex would be a better choice, but that wasn't fast enough for me so I chose the most lightweight option: a critical section.

I had to cheat the non-recursive part in a way that some would not appreciate, because critical sections are recursive by design. Recursive locking is good because it prevents self-deadlocks, and bad because it can make debugging a nightmare.

Apparently my fake non-recursion is enough for now: not only does all of my Python code work, but I'm also getting (NON-scientific!) benchmarks that are almost twice as fast.
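
For anyone who wants to reproduce that kind of comparison, a rough sketch of a CPU-bound, multithreaded micro-benchmark where the threads mostly contend for the GIL (illustrative only; not the actual benchmark used):

    # Rough sketch of a GIL-bound micro-benchmark; illustrative only.
    import threading
    import time

    def spin(n):
        # Pure-Python busy work: every bytecode needs the GIL, so multiple
        # threads spend their time handing it back and forth.
        total = 0
        for i in xrange(n):
            total += i
        return total

    def run(threads=4, iterations=2000000):
        workers = [threading.Thread(target=spin, args=(iterations,))
                   for _ in range(threads)]
        start = time.time()
        for t in workers:
            t.start()
        for t in workers:
            t.join()
        return time.time() - start

    if __name__ == "__main__":
        print "4 threads took %.3f seconds" % run()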

I don't know why events were chosen as the method for GIL locking/unlocking. Even if Windows 95 support is absolutely necessary (meaning no "official" TryEnterCriticalSection), system-level mutexes are still available (via CreateMutex et al.). At least the base case of a single thread is fast thanks to the hybrid interlocked compare-exchange checks, but critical sections accomplish that as well.

If anyone is interested in discussing this more, feel free to get a thread going on the mailing list.


The old Blog site has finally been recovered, but I'm not sure if I plan to continue using it. For those who were wondering, SourceForge likes to silence PHP errors and produce pretty blank pages in order to make debugging "textless." That was fun ;)

