Re: [PyWrapper-devel] Problem with PyWrapper, BioMoby and saveHttpBody function
From: Dring, M. <m.d...@BG...> - 2007-12-14 13:41:52
Ricardo,

I'll have to look at it more closely, but I was the one who added this "save http body" stuff to PyWrapper; I guess the moby part from Javier was working correctly before. The situation with the http body is unfortunately a difficult one. When the streamed request comes in, we don't know what it is apart from the http Content-Type and Content-Length headers. Because it is streamed, you have to decide what to do right then, inspecting only the http headers. The idea is to save the body only if the content header indicates XML, and otherwise (for example, html content from the webapps) leave it alone so that other libraries do not get confused (if you "save" the body, the stream is gone, and that causes trouble later when request parameters are analysed via cherrypy.request).

XML-based messages like the ones we use for TAPIR (but also SOAP and others) should be sent directly in the http (POST) body, not encoded as POST parameters of the request. This is part of the TAPIR spec, but you will find the same in OGC. The idea is that all parameters are already encoded in the XML message. So you need access to the entire http body, something I did not manage to get with CherryPy alone, because the framework already parses a lot of the http request for you by default. That is why I intercept the http requests quite early with custom CherryPy filters.

I believe we should check why moby fails and simply fix the moby module to read the saved http body. Wouldn't that work?
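To make "read the saved http body" concrete: once the filter saves the body for moby requests as well (more on that below the function you quote), the moby handler only has to pick up the stored parameter instead of touching the stream itself. As an untested sketch, with the names taken from the saveHttpBody snippet quoted below (I have not checked where exactly in the moby module this would go):

    para = cfgObj.PyWrapper.httpBodyParameter             # usually __HTTP_BODY__
    body, dataLength = cherrypy.request.params.pop(para)  # tuple saved by the filter
    mobyXml = body.read()                                 # the raw BioMoby XML envelope
    # ... hand mobyXml over to the moby request parsing code ...

Whether to pop() the parameter or just read it is a matter of taste; popping keeps the fake parameter out of the normal parameter handling afterwards.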
Markus


"Ricardo Scachetti Pereira" wrote on 13.12.2007 19:42 Uhr:

> Hi Markus and Javi,
>
> I'm trying to get PyWrapper to process BioMoby requests, but I got
> stuck with the following problem.
>
> The function saveHttpBody() (which I copy below, from the file
> ./webapp/pywrapper.py) looks at the request URL and decides whether or
> not to save the http request body.
>
> The problem is that the moby handler needs the http body to be
> saved, but I don't know how to save it when a moby request comes in. To be
> honest, I found the check n == 'pywrapper' a little obscure. When does
> the http body get saved, anyway?
>
> In any case, if I change the code so that the http body is always
> saved, moby requests are handled without problems, but then the
> PyWrapper config tool webapp breaks.
>
> Is there a way for me to detect that the request is a moby one at
> that point? But the request hasn't been parsed yet at that point, right?
>
> Anyway, I would really appreciate it if you could shed some light on
> the matter.
>
> Thanks,
>
> Ricardo
>
>
> PS: Here is the function:
>
> def saveHttpBody():
>     """ Called after the request header has been read/parsed"""
>     request = cherrypy.request
>     # only process hook if pywrapper is being called
>     n = request.path_info.lower().replace('/', '')
>     #print request.path_info / script_name
>     log.debug("SaveHttpBody is looking at the request: %s" % n)
>     if n == 'pywrapper':
>         para = cfgObj.PyWrapper.httpBodyParameter
>         # get http body data
>         dataLength = int(request.headers.get('Content-Length') or 0)
>         data = request.rfile.read(dataLength)
>         # save http body & content-length as a new parameter, usually __HTTP_BODY__
>         request.params[para] = (cStringIO.StringIO(data), dataLength)
>         # fake the input stream so CP can parse the body regularly
>         if hasattr(request.rfile, 'maxlen'):
>             request.rfile = http.SizeCheckWrapper(rfile=cStringIO.StringIO(data),
>                                                   maxlen=request.rfile.maxlen)
>         else:
>             request.rfile = http.SizeCheckWrapper(rfile=cStringIO.StringIO(data),
>                                                   maxlen=0)
>
> cherrypy.request.hooks.attach('before_request_body', saveHttpBody,
>                               failsafe=None, priority=None)
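Looking at the function above: you cannot tell that a request is "a moby one" before anything has been parsed, but you can look at the Content-Type header, which is the "only save xml bodies" idea I described. Below is an untested variant along those lines, with the same names and imports as in pywrapper.py. I am assuming that the moby POSTs arrive with an xml Content-Type (e.g. text/xml or application/soap+xml), and treating everything with "xml" in the Content-Type as worth saving is only my guess at a criterion, so this would still need a test against the config tool webapp:

    def saveHttpBody():
        """ Called after the request header has been read/parsed"""
        request = cherrypy.request
        n = request.path_info.lower().replace('/', '')
        # the body is still an unread stream here, so only the headers can be inspected
        ctype = (request.headers.get('Content-Type') or '').lower()
        isXmlBody = 'xml' in ctype
        log.debug("SaveHttpBody is looking at the request: %s (%s)" % (n, ctype))
        if n == 'pywrapper' or isXmlBody:
            para = cfgObj.PyWrapper.httpBodyParameter
            # read the full http body and keep it as a new parameter, usually __HTTP_BODY__
            dataLength = int(request.headers.get('Content-Length') or 0)
            data = request.rfile.read(dataLength)
            request.params[para] = (cStringIO.StringIO(data), dataLength)
            # put a fresh stream back so CherryPy can still parse the body as usual
            maxlen = getattr(request.rfile, 'maxlen', 0)
            request.rfile = http.SizeCheckWrapper(rfile=cStringIO.StringIO(data),
                                                  maxlen=maxlen)

The html pages of the config tool webapp do not come in with an xml Content-Type, so they should still be left alone, but that is exactly the part we would have to verify.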