pythomnic3k-questions Mailing List for Pythomnic3k (Page 2)
Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2009 |     |     | 8   |     |     | 9   | 2   | 4   | 18  | 7   |     |     |
| 2010 | 2   |     |     |     |     |     |     |     |     | 12  | 3   | 17  |
| 2011 | 1   | 2   | 3   |     |     |     |     |     |     |     |     |     |
| 2013 |     |     |     |     |     |     |     |     |     | 2   |     |     |
| 2014 | 5   |     | 4   | 12  |     |     | 1   | 6   |     |     |     |     |
| 2015 |     |     | 3   |     |     |     |     |     |     |     |     |     |
| 2016 | 2   |     |     |     |     |     |     |     |     |     |     |     |
| 2017 |     | 3   |     |     |     |     |     |     |     |     |     |     |
From: Dmitry D. <dm...@ta...> - 2014-04-04 09:31:41
|
Hi Angelo,

Thanks for the heads up. I would say we need to replace all {x:s} with {x!s}, because we always need str from format and it plays nicely with existing custom __str__ methods. I don't think changing it over the entire codebase is a big problem: it is a simple search and replace, and provided all the tests pass there shouldn't be any issue.

While we are at it, I also wanted to remove many of those messages, especially the debug ones reporting the success of something. It's been in production for years and I can't remember a case when those were useful in troubleshooting; they only add noise.

One other thing: for a long time I have been thinking about how to skip evaluating debug messages in non-debug mode, or in general, messages with lower priority than the current level. I thought about using AST manipulations, but for one thing that doesn't work with precompiled modules, and we ship our product as precompiled (and encrypted) pyc files. Therefore I'm thinking about having a set of compact shortcuts, somewhat like:

...
if LL_XXX: pmnc.log.debug(...)
...

where LL_XXX would be some kind of a global inserted by the framework. This will require discipline for sure, but it shouldn't be too complex to implement.

What do you think?

Sincerely,
Dmitry Dvoinikov

03.04.2014 20:46, Angelo Hulshout пишет:
> A change in Python 3.4 disallows the use of the following construct,
> commonly used in the Pythomnic3k framework:
>
> "transaction {0:s} begins".format(self)
>
> This construct is no longer allowed and results in a TypeError with
> the description "TypeError: non-empty format string passed to
> object.__format__"
>
> Changing the code to
>
> "transaction {0:s} begins".format(str(self))
>
> looks like a solution, but I'm not sure yet - nor happy about having
> to change all occurrences.
>
> Background info 1: http://bugs.python.org/issue20150
> Background info 2: http://bugs.python.org/issue7994
>
> Thoughts?
>
> Regards,
>
> Angelo
>
> --
> *Delphino Consultancy* - software architecture, training and coaching
> *T* 06 2531 9743 / *E* an...@de...
> *W* http://www.delphino-consultancy.nl / *KVK* Eindhoven 17228522 / *GPG* 487A55D0
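The behaviour discussed in this thread can be reproduced with plain Python, and the {0!s} conversion Dmitry suggests avoids it. A minimal sketch follows; the Transaction class, the message text and the LL_DEBUG flag are made-up stand-ins for illustration, not framework code:

# Minimal reproduction of the Python 3.4 change discussed above.

class Transaction:
    def __str__(self):
        return "xa-1"

t = Transaction()

# "{0:s}" passes the format spec "s" to object.__format__, which Python 3.4+
# rejects for objects that do not define their own __format__:
try:
    "transaction {0:s} begins".format(t)
except TypeError as e:
    print(e)   # TypeError; the exact message varies by Python version

# "{0!s}" applies str() first, so the existing __str__ method is used:
print("transaction {0!s} begins".format(t))   # transaction xa-1 begins

# The guard Dmitry sketches would look roughly like this; LL_DEBUG is a
# hypothetical flag standing in for the framework-inserted global, and
# print() stands in for pmnc.log.debug():
LL_DEBUG = False
if LL_DEBUG: print("expensive debug message for {0!s}".format(t))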
From: Angelo H. <an...@de...> - 2014-04-03 15:12:51
|
A change in Python 3.4 disallows the use of the following construct, commonly used in the Pythomnic3k framework:

"transaction {0:s} begins".format(self)

This construct is no longer allowed and results in a TypeError with the description "TypeError: non-empty format string passed to object.__format__"

Changing the code to

"transaction {0:s} begins".format(str(self))

looks like a solution, but I'm not sure yet - nor happy about having to change all occurrences.

Background info 1: http://bugs.python.org/issue20150
Background info 2: http://bugs.python.org/issue7994

Thoughts?

Regards,

Angelo

--
*Delphino Consultancy* - software architecture, training and coaching
*T* 06 2531 9743 / *E* an...@de...
*W* http://www.delphino-consultancy.nl / *KVK* Eindhoven 17228522 / *GPG* 487A55D0
From: Nerio R. <hdd...@gm...> - 2014-03-21 12:53:49
|
Thank you so much, looks pretty much straightforward, python for processing data, connected to, lets say, Postgresql, and some kind of technology for GUI. I'll give it a try, and come back to you guys, really, thanks. 2014-03-21 0:29 GMT-04:30 Dmitry Dvoinikov <dm...@ta...>: > Hello Nerio, > > Web-based GUI would be preferrable for Pythomnic3k-based application > (which is a service). > Basically, you combine Pythomnic3k support for HTTP protocol with any > templating engine. > This is how I've done it: > > 1. Pick any Python3 templating engine (I used evoque). > 2. Have a separate directory with HTML templates. > 3. Have an HTTP(S) interface running on a service. > 4. In that interface's handling module (interface_http.py) parse an URL, > determine which template file to use, > and apply your data to the template using regular engine operation. Return > the resulting HTML. > > Even better if you can pass HTTP request parameters to the engine and > don't bother about anything > else like where the templates are and how to parse cookies. > > In practice, we have a rather complicated web interface implemented > manually with cookies, > sessions, authentication, form parsing etc. in about 200 lines. > > Desktop GUI is possible too, but in this case it will be more separate. > GUI would be just a front, > not even necessarily in Python, talking to the backend service over > whatever protocol and whatever > convention you choose. > > Sincerely, > Dmitry Dvoinikov > > 21.03.2014 2:26, Nerio Rincón пишет: > > Good evening, Im just downloading, pythomnic3k have everything i need, but > curious about how to use this to create an application with some kind of > GUI, web or desktop. > > -- > Nerio Rincón. > > > -- Nerio Rincón. http://about.me/nrincon |
From: Angelo H. <an...@de...> - 2014-03-21 07:31:10
|
That's exactly what we did as well - we set up a cage that supports HTTP with cookies, and use Chameleon template engine to generate pages. Regards, Angelo On Fri, Mar 21, 2014 at 5:59 AM, Dmitry Dvoinikov <dm...@ta...>wrote: > Hello Nerio, > > Web-based GUI would be preferrable for Pythomnic3k-based application > (which is a service). > Basically, you combine Pythomnic3k support for HTTP protocol with any > templating engine. > This is how I've done it: > > 1. Pick any Python3 templating engine (I used evoque). > 2. Have a separate directory with HTML templates. > 3. Have an HTTP(S) interface running on a service. > 4. In that interface's handling module (interface_http.py) parse an URL, > determine which template file to use, > and apply your data to the template using regular engine operation. Return > the resulting HTML. > > Even better if you can pass HTTP request parameters to the engine and > don't bother about anything > else like where the templates are and how to parse cookies. > > In practice, we have a rather complicated web interface implemented > manually with cookies, > sessions, authentication, form parsing etc. in about 200 lines. > > Desktop GUI is possible too, but in this case it will be more separate. > GUI would be just a front, > not even necessarily in Python, talking to the backend service over > whatever protocol and whatever > convention you choose. > > Sincerely, > Dmitry Dvoinikov > > 21.03.2014 2:26, Nerio Rincón пишет: > > Good evening, Im just downloading, pythomnic3k have everything i need, but > curious about how to use this to create an application with some kind of > GUI, web or desktop. > > -- > Nerio Rincón. > > > > > ------------------------------------------------------------------------------ > Learn Graph Databases - Download FREE O'Reilly Book > "Graph Databases" is the definitive new guide to graph databases and their > applications. Written by three acclaimed leaders in the field, > this first edition is now available. Download your free book today! > http://p.sf.net/sfu/13534_NeoTech > _______________________________________________ > Pythomnic3k-questions mailing list > Pyt...@li... > https://lists.sourceforge.net/lists/listinfo/pythomnic3k-questions > > -- *Delphino Consultancy* - software architecture, training and coaching *T* 06 2531 9743 / *E* an...@de... *W* http://www.delphino-consultancy.nl / *KVK* Eindhoven 17228522 / *GPG * 487A55D0 |
From: Dmitry D. <dm...@ta...> - 2014-03-21 06:10:07
|
Hello Nerio,

A web-based GUI would be preferable for a Pythomnic3k-based application (which is a service). Basically, you combine Pythomnic3k's support for the HTTP protocol with any templating engine. This is how I've done it:

1. Pick any Python 3 templating engine (I used evoque).
2. Have a separate directory with HTML templates.
3. Have an HTTP(S) interface running on a service.
4. In that interface's handling module (interface_http.py), parse the URL, determine which template file to use, and apply your data to the template using regular engine operation. Return the resulting HTML.

Even better if you can pass HTTP request parameters to the engine and not bother about anything else, like where the templates are and how to parse cookies.

In practice, we have a rather complicated web interface implemented manually with cookies, sessions, authentication, form parsing etc. in about 200 lines.

A desktop GUI is possible too, but in this case it will be more separate. The GUI would be just a front end, not even necessarily in Python, talking to the backend service over whatever protocol and whatever convention you choose.

Sincerely,
Dmitry Dvoinikov

21.03.2014 2:26, Nerio Rincón пишет:
> Good evening, I'm just downloading it. Pythomnic3k has everything I need,
> but I'm curious how to use it to create an application with some kind of
> GUI, web or desktop.
>
> --
> Nerio Rincón.
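A rough, framework-agnostic sketch of steps 2 and 4 above: map a URL path to a template file and render it. Python's built-in string.Template stands in here for whatever engine you pick (evoque, Chameleon, ...); the directory layout, the render_page helper and the wiring into the HTTP handler are all assumptions for illustration.

# Sketch of the template lookup/render step; all paths and names are hypothetical.
import os
from string import Template

TEMPLATE_DIR = "/opt/myservice/cage/templates"   # assumed layout

def render_page(url_path, params):
    # map "/" to "index" and strip the leading slash from everything else
    name = url_path.strip("/") or "index"
    # refuse anything that tries to escape the template directory
    if os.sep in name or (os.altsep and os.altsep in name):
        raise ValueError("invalid page: {0!s}".format(name))
    file_name = os.path.join(TEMPLATE_DIR, name + ".html")
    with open(file_name, encoding = "utf-8") as f:
        template = Template(f.read())
    # substitute request parameters into the template and return HTML
    return template.safe_substitute(params)

Inside the HTTP interface's handling module one would then call something like html = render_page(parsed_url_path, request_parameters) and return that HTML as the response body.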
From: Nerio R. <hdd...@gm...> - 2014-03-20 20:26:50
|
Good evening. I'm just downloading it; Pythomnic3k has everything I need, but I'm curious how to use it to create an application with some kind of GUI, web or desktop.

--
Nerio Rincón.
From: Dmitry D. <dm...@ta...> - 2014-01-31 13:37:47
|
> an intra-site URL

should read

> an inter-site URL

of course.

31.01.2014 18:24, Dmitry Dvoinikov пишет:
> Hi Angelo,
>
> > So my question is: what do I need to do to add OAuth to the HTTP
> > protocol in the framework?
>
> I'd say you should modify protocol_http.py: copy it over to the cage
> directory and patch it. You will need some HTTP resource to talk to the
> OAuth service; the name of the resource would either have to be configured
> in config_interface_http_myweb.py or be picked by convention, for example
> it *ought* to be config_resource_http_myweb_oauth.py. The latter is more
> coherent with Pythomnic style, but it's up to you - I may be missing the
> case of having more than one OAuth service per HTTP interface.
>
> Then you may see how _extract_auth_tokens is used in process_tcp_request
> and splice in somewhere around it. If you need to talk to the service, use
> the above resource. If you need to redirect, return a 3xx, etc., and if
> you need state, use cookies and a shared in-memory or on-disk structure
> (e.g. state).
>
> > how to resolve the cookie issue, that would be a saviour for the
> > short term.
>
> You know, I may be wrong, but in my opinion cookies should stick to the
> host:port that issued them, not just the host. If such behaviour is against
> the specification I'd be surprised, because it would ruin the entire
> purpose of site-specific cookies.
>
> I understand you are running one service simultaneously on several HTTP
> ports. This is awkward, but in some cases unavoidable. Perhaps you should
> switch from cookies to a URL parameter and generate all your links
> containing some user-specific nonce? Or you can combine both worlds and use
> cookies inside each site and instrument an intra-site URL with an
> additional parameter that would be automatically cut off and converted to a
> cookie by the target site?
>
> Sincerely,
> Dmitry Dvoinikov
From: Dmitry D. <dm...@ta...> - 2014-01-31 12:25:05
|
Hi Angelo,

> So my question is: what do I need to do to add OAuth to the HTTP protocol in the framework?

I'd say you should modify protocol_http.py: copy it over to the cage directory and patch it. You will need some HTTP resource to talk to the OAuth service; the name of the resource would either have to be configured in config_interface_http_myweb.py or be picked by convention, for example it *ought* to be config_resource_http_myweb_oauth.py. The latter is more coherent with Pythomnic style, but it's up to you - I may be missing the case of having more than one OAuth service per HTTP interface.

Then you may see how _extract_auth_tokens is used in process_tcp_request and splice in somewhere around it. If you need to talk to the service, use the above resource. If you need to redirect, return a 3xx, etc., and if you need state, use cookies and a shared in-memory or on-disk structure (e.g. state).

> how to resolve the cookie issue, that would be a saviour for the short term.

You know, I may be wrong, but in my opinion cookies should stick to the host:port that issued them, not just the host. If such behaviour is against the specification I'd be surprised, because it would ruin the entire purpose of site-specific cookies.

I understand you are running one service simultaneously on several HTTP ports. This is awkward, but in some cases unavoidable. Perhaps you should switch from cookies to a URL parameter and generate all your links containing some user-specific nonce? Or you can combine both worlds and use cookies inside each site and instrument an intra-site URL with an additional parameter that would be automatically cut off and converted to a cookie by the target site?

Sincerely,
Dmitry Dvoinikov

31.01.2014 16:32, Angelo Hulshout пишет:
> Hi Dmitry (and others),
>
> I use Pythomnic3k as the basis for a web-based application to be launched
> later this year. Everything I need is in the framework, except for
> authentication in HTTP sessions.
>
> Just now, I see two issues:
>
> 1. I have a number of Pythomnic cages and interfaces that are exposed
> through HTTP on the same server, but using different port numbers. Despite
> the cookie specification, some browsers do consider port numbers in URLs
> significant, causing the use of authentication cookies across these
> services to fail (the cookie is not sent with the request).
> 2. I'd prefer using OAuth, because it is safer than cookies.
>
> So my question is: what do I need to do to add OAuth to the HTTP protocol
> in the framework? And, in the meantime, if someone knows how to resolve
> the cookie issue, that would be a saviour for the short term.
>
> Regards,
>
> Angelo
From: Angelo H. <an...@de...> - 2014-01-31 11:00:09
|
Hi Dmitry (and others),

I use Pythomnic3k as the basis for a web-based application to be launched later this year. Everything I need is in the framework, except for authentication in HTTP sessions.

Just now, I see two issues:

1. I have a number of Pythomnic cages and interfaces that are exposed through HTTP on the same server, but using different port numbers. Despite the cookie specification, some browsers do consider port numbers in URLs significant, causing the use of authentication cookies across these services to fail (the cookie is not sent with the request).
2. I'd prefer using OAuth, because it is safer than cookies.

So my question is: what do I need to do to add OAuth to the HTTP protocol in the framework? And, in the meantime, if someone knows how to resolve the cookie issue, that would be a saviour for the short term.

Regards,

Angelo

--
*Delphino Consultancy* - software architecture, training and coaching
*T* 06 2531 9743 / *E* an...@de...
*W* http://www.delphino-consultancy.nl / *KVK* Eindhoven 17228522 / *GPG* 487A55D0
From: Dmitry D. <dm...@ta...> - 2014-01-09 05:32:15
|
Hello Eric,

Thank you for the fix. I believe you are right, I've encountered this before but never bothered to find the cause. I will commit this to current.

Dmitry

06.01.2014 20:12, Eric Livingston пишет:
> I have been wrestling for a few days now with protocol_retry and state.py,
> wondering why the state system was broken on my debian linux box, but
> working fine on my windows box.
>
> Ultimately, the problem appears to be the use of relative directory names
> for the database (i.e. "./cages/...")
>
> So, I altered state.py, changing line 76 from:
>
> self._dir = dir
>
> to
>
> self._dir = os_path.abspath(dir)
>
> That seems to take care of the problem!
>
> Eric
From: Eric L. <Er...@Th...> - 2014-01-06 14:12:21
|
I have been wrestling for a few days now with protocol_retry and state.py, wondering why the state system was broken on my debian linux box, but working fine on my windows box.

Ultimately, the problem appears to be the use of relative directory names for the database (i.e. "./cages/...")

So, I altered state.py, changing line 76 from:

self._dir = dir

to

self._dir = os_path.abspath(dir)

That seems to take care of the problem!

Eric
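The failure mode Eric describes can be demonstrated with the standard library alone: a relative directory name resolves against whatever the current working directory happens to be when the process (or service wrapper) starts, while an absolute path stays stable. A small sketch, independent of the framework; the directory name is made up:

# Demonstrates why "./cages/..." breaks when the working directory differs,
# e.g. when the service is started from / rather than from the install dir.
import os

rel_dir = "./cages/demo_cage/state"

os.chdir("/tmp")
print(os.path.abspath(rel_dir))   # /tmp/cages/demo_cage/state

os.chdir("/")
print(os.path.abspath(rel_dir))   # /cages/demo_cage/state - a different place

# Resolving once, up front, pins the location regardless of later chdir calls,
# which is what the one-line state.py fix above does:
pinned = os.path.abspath(rel_dir)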
From: Angelo H. <an...@sp...> - 2013-10-07 17:53:34
|
Hi there,

Let's see if anyone is out there, still using and working on Pythomnic3k. I've just started using the framework, and ran into the following with the sender-receiver demo. The demo works, but this message appears after every exchange:

19:24:50.13 ERR [interfaces/1:140] ResourceError("the state is not available") in _get_module_state() (state.py:171) <- typecheck_invocation_proxy() (typecheck.py:356) <- start_transaction() (state.py:421) <- typecheck_invocation_proxy() (typecheck.py:356) <- __call__() (module_loader.py:87) <- begin_transaction() (protocol_retry.py:797) <- wu_participate() (transaction.py:290) <- _default_accept() (transaction.py:561) <- execute() (transaction.py:499) <- execute_async() (remote_call.py:98) <- typecheck_invocation_proxy() (typecheck.py:356) <- __call__() (module_loader.py:87) <- __call__() (module_loader.py:598) <- process_request() (interface_every_10.py:36) <- typecheck_invocation_proxy() (typecheck.py:356) <- __call__() (module_loader.py:87) <- _process_request() (protocol_schedule.py:159) <- wu_process_request() (protocol_schedule.py:145) # protocol_schedule.py:148 in wu_process_request() by RQ-235F (at 2013-10-07 19:24:50) via every_10 +10.0s

In the biz-baz demo, I get an error message and no RPC exchange:

18:54:15.76 ERR [cage] interface retry could not be created: Exception("the state is not available") in _get_module_state() (state.py:171) <- typecheck_invocation_proxy() (typecheck.py:356) <- get_queue() (state.py:308) <- typecheck_invocation_proxy() (typecheck.py:356) <- __call__() (module_loader.py:87) <- __init__() (protocol_retry.py:184) <- typecheck_invocation_proxy() (typecheck.py:356) <- __init__() (protocol_retry.py:289) <- typecheck_invocation_proxy() (typecheck.py:356) <- create_object() (module_loader.py:173) <- __call__() (module_loader.py:87) <- create() (interface.py:21) <- typecheck_invocation_proxy() (typecheck.py:356) <- __call__() (module_loader.py:87) <- _create() (interfaces.py:85) # interfaces.py:88 in _create()

Any idea of what I should change to prevent or fix these 'out of the box' errors? For completeness: this is on an Ubuntu system, and I changed the broadcast address for the RPC calls to 127.0.0.1/255.255.255.255.

Regards,

Angelo

--
Angelo Hulshout
E an...@sp... / T 0625319743
GPG 487A55D0
From: Gamesbrainiac <gam...@gm...> - 2013-10-04 16:19:24
|
Hey guys,

I am brand new to SMPP and need to learn how to use it. I know that's not saying much, but everything I read is going over my head. I've taken a look at the presentation on the website, but I have trouble understanding where I am supposed to place the configuration for things like servers and so on.

--
Kind Regards,
Quazi Nafiul Islam
From: Dmitry D. <dm...@ta...> - 2011-03-10 05:24:03
|
> I love Pythomnic!

Thank you. :)

---

I think the problem with your code is not the exceptions, but the request timeouts. Each request has its timeout, and that timeout is _cumulative_: it accounts for all previous activity. If you do

for i in range(10):
    r.connect()
    r.post()

each consecutive HTTP request will have less and less time allowed to execute. Be careful if you execute more than one HTTP request at a time.

The connect_timeout that you specify in Resource() is not a request timeout, it is only an additional restriction, so that you may use connect_timeout = 3.0 and be sure that the request will not spend its entire 15 seconds connecting. On the other hand, time spent in connect is obviously deducted from the request timeout, therefore if HTTP connect() takes 9 seconds, the request only has 6 seconds left for post() and everything else. If you want to guarantee 15 seconds for post alone, you should increase the request timeout to 10+15 = 25 seconds, allowing for the worst case of 10 seconds of connecting.

You may insert pmnc.log("something") between connect and post to see the remaining timeout of the request executing it. Since version 1.2 log lines end with ... +2.9 if, for example, the request currently has 2.9 seconds left.

For pooled resources there is another line of defense against timeouts - you can specify

config = dict \
(
    ...
    pool__min_time = 3.0,
)

in config_resource_any.py; then the resource will decline the request altogether if the request's remaining timeout is less than 3 seconds, rather than jumping into it and risking hitting the deadline in the middle.

---

Here are also a few assorted notes:

* Function annotations are automatically instrumented with type checking only for module methods, not for your class methods. Therefore this

def execute(...) -> list:

is purely decorative. If you need it enforced, do this:

@typecheck
def execute(...) -> list:

* You may use the existing utility method for dumping exceptions:

from exc_string import exc_string
...
except:
    pmnc.log.warning(exc_string())

* Brackets are not required in return(foo) and are stylistically un-pythonic.

Dmitry Dvoinikov

On 09.03.2011 20:49, Eric Livingston wrote:
> Ah, so already, my cage is being pooled and a number of instances generated,
> up to "thread_count"... wow, you just saved me a huge amount of work and
> worry! I love Pythomnic! I'm sure I knew that, but forgot you were doing
> that for me - sorry for lapsing into ignorance.
>
> As I looked more closely at my timing and performance, I realize that
> horizontal scaling is not the issue (especially given what you said about
> the pooling). It's the per-transaction delay, mostly caused by external
> resources. So given you're already pooling my cage and multi-threading it,
> no additional work is necessary there.
>
> What's causing me the most grief is when I have to insert another HTTP call
> *out* to retrieve information, and then put the results into my own reply.
> That is currently experiencing a lot of delay, and often times out
> altogether. The good news is that for the most part, the information being
> retrieved is optional and the transaction can continue without it. The bad
> news is I can't seem to get the errors to behave.
>
> I have a simple little class:
>
> class http_get:
>     def __init__(s, url, content):
>         server, sep, path = (url.split("//"))[1].partition("/")
>         s.fqdn, sep, s.port = server.partition(":")
>         s.path = "/" + path.strip()
>         s.content = content
>
>     def execute(s, method = "POST") -> list:
>         result = [408, "Timeout"]  # Just default to timeout on error
>         r = pmnc.protocol_http.Resource("sl",
>                 server_address = (s.fqdn, int(s.port)),
>                 connect_timeout = 10.0,
>                 ssl_key_cert_file = None,  # or filename
>                 ssl_ca_cert_file = None,   # or filename
>                 extra_headers = { "connection": "close" },
>                 http_version = "HTTP/1.1")
>         try:
>             r.connect()
>             if method == "POST":
>                 result = r.post(s.path, s.content.encode("utf_8"))
>             else:
>                 result = r.get(s.path + s.content)
>         except:
>             pmnc.log.warning(repr(sys.exc_info()[1]) + s.content)
>         finally:
>             r.disconnect()
>         return(result)
>
> The problem I'm having is that the exception handling doesn't seem to "clear"
> the error and move on. I have my overall interface timeout set to 15 seconds,
> and this web class set to 10 seconds, which should give me 5 seconds of
> overhead necessary to complete the transaction and return a result. However,
> what seems to happen when the pmnc log registers a timeout is that the
> transaction stops - processing beyond the execute() call in this class is
> not performed, and the overall transaction is aborted. I'd like to simply
> note the timeout in the log and move right along, completing the transaction
> and the further processing as normal. Am I handling the exception incorrectly?
>
> Thanks!
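Putting Dmitry's notes together, Eric's execute method could look roughly like the sketch below. This is only an illustration: the Resource arguments are copied from the snippet above, the typecheck and exc_string import paths are assumptions based on this thread rather than verified, pmnc is injected by the framework as in the original, and the cumulative-timeout advice (budgeting connect time into the overall request timeout) still has to be handled in configuration, not in this code.

# Hypothetical revision of the http_get.execute method from the message above,
# applying the notes in this thread: @typecheck so the -> list annotation is
# actually enforced, exc_string() for logging, no brackets around return.
from typecheck import typecheck        # assumed import path, per the thread
from exc_string import exc_string      # utility mentioned by Dmitry

class http_get:

    def __init__(s, url, content):
        server, sep, path = (url.split("//"))[1].partition("/")
        s.fqdn, sep, s.port = server.partition(":")
        s.path = "/" + path.strip()
        s.content = content

    @typecheck
    def execute(s, method = "POST") -> list:
        result = [408, "Timeout"]          # default result if anything fails
        r = pmnc.protocol_http.Resource("sl",
                server_address = (s.fqdn, int(s.port)),
                connect_timeout = 3.0,     # keep connect short so that post()
                                           # gets most of the remaining budget
                ssl_key_cert_file = None,
                ssl_ca_cert_file = None,
                extra_headers = { "connection": "close" },
                http_version = "HTTP/1.1")
        try:
            r.connect()
            if method == "POST":
                result = r.post(s.path, s.content.encode("utf_8"))
            else:
                result = r.get(s.path + s.content)
        except:
            # log and swallow the failure; the caller keeps the 408 default
            pmnc.log.warning(exc_string())
        finally:
            r.disconnect()
        return result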
From: Dmitry D. <dm...@ta...> - 2011-03-09 05:12:20
|
Hi Eric, > A. For sql, I use the construction "s.xa = > pmnc.transaction.create()" which I believe already means I'm making use of > dynamic, pooled resources on the back end for sql, right? Yes, when you use sql resources in transactions, they are pooled and reused. > So, already > "horizontally scaled" Not really. Pooling is not the same as horizontal scaling. Pooling means the following: 1. You don't have to waste time connecting to a database per each request because the connection is already there. 2. There is a limited number of concurrent connections, excessive requests get queued up and wait for a free connection. > B. For http, I use "r = pmnc.protocol_http.Resource()" - lower > level, so I can dynamically assign server addresses and such on the fly. My > understanding is that is NOT using a pooled system, so I'm waiting on that > resource each time I use it individually. Well, you are waiting for it _to_connect_ each time, otherwise it's the same. If you are using HTTPS, this can take time and be a burden on the target server too. > 2. Lets say I now want to split up monolithic.py into: > A. "fastListener.py" that just sits on port 80, gets a request, and > dispatches it > B. "RequestHandler.py" that can handle multiple requests > simultaneously via pooled or multiple threads. There already is a listener that dispatches requests to a pool of worker threads. No matter which interface delivered the request, its processing is up to the worker threads. The size of the worker threads pool is controlled by config_interfaces::thread_count. > What's the best way to do that? The question is: what is the problem ? What performance issues are you facing and what are you trying to achieve ? You need to determine what is it that bottleneck that limits the performance and go from there. Another question is: does your service return HTTP response synchronously, after fetching some other pages and inserting them to the database (this is what I understand your system does). Or the HTTP request serves only as a processing trigger, which returns arbitrary response immediately and fetching and inserting takes place later. I suggest that we clarify the problem before moving on. Dmitry Dvoinikov On 08.03.2011 0:16, Eric Livingston wrote: > Well, my system is starting to see some real load, and I've been considering > how to split it up, but I'm not exactly sure how - sorry if my questions are > too simple! > > I currently have one monolithic.py (not really called that, but for example) > that's based on HTTP and sits listening on 80 for connections, processes > them, and returns. Here are some questions. > > 1. First, regarding used resources. Currently, monolithic.py relies on the > http and postgres resources. > A. For sql, I use the construction "s.xa = > pmnc.transaction.create()" which I believe already means I'm making use of > dynamic, pooled resources on the back end for sql, right? So, already > "horizontally scaled" > B. For http, I use "r = pmnc.protocol_http.Resource()" - lower > level, so I can dynamically assign server addresses and such on the fly. My > understanding is that is NOT using a pooled system, so I'm waiting on that > resource each time I use it individually. > > 2. Lets say I now want to split up monolithic.py into: > A. "fastListener.py" that just sits on port 80, gets a request, and > dispatches it > B. "RequestHandler.py" that can handle multiple requests > simultaneously via pooled or multiple threads. > > What's the best way to do that? 
That is, how do I run multiple > RequestHandlers? Would fastListener.py be a totally separate cage, calling > multiple copies of the RequestHandler cage, each running as a separate > windows service? Or, would it be better to try to make RequestHandler into a > pmnc pooled resource so it can be called dynamically from fastListener and > just keep it all in one cage? I'm just trying to work out how I "tease > apart" the architecture into a more horizontally scaled system, one of the > key benefits of pmnc, I know! > > If RequestHandler.py is a new, separate cage, then while fastListener would > be a http-based cage, RequestHandler would not have to be. It could just be > RPC or something, correct? Once fastListener accepts the http request, then > its communication with RequestHandler would be direct. You had previously > given me these examples: > > 1. result = pmnc.module.method(...) > A synchronous local call with very little overhead. > > 2. result = pmnc("other_cage").module.method(...) > A synchronous remote call to another cage over SSL. > > I'm thinking that #2 would be the model in this case? So it would be > something like pmnc("RequestHandler").process_request(params) > > If that's true, then what interface would I be basing RequestHandler on, if > not http? > > Thanks for any help you can give me! > > |
From: Eric L. <Er...@Th...> - 2011-03-07 19:16:56
|
Well, my system is starting to see some real load, and I've been considering how to split it up, but I'm not exactly sure how - sorry if my questions are too simple! I currently have one monolithic.py (not really called that, but for example) that's based on HTTP and sits listening on 80 for connections, processes them, and returns. Here are some questions. 1. First, regarding used resources. Currently, monolithic.py relies on the http and postgres resources. A. For sql, I use the construction "s.xa = pmnc.transaction.create()" which I believe already means I'm making use of dynamic, pooled resources on the back end for sql, right? So, already "horizontally scaled" B. For http, I use "r = pmnc.protocol_http.Resource()" - lower level, so I can dynamically assign server addresses and such on the fly. My understanding is that is NOT using a pooled system, so I'm waiting on that resource each time I use it individually. 2. Lets say I now want to split up monolithic.py into: A. "fastListener.py" that just sits on port 80, gets a request, and dispatches it B. "RequestHandler.py" that can handle multiple requests simultaneously via pooled or multiple threads. What's the best way to do that? That is, how do I run multiple RequestHandlers? Would fastListener.py be a totally separate cage, calling multiple copies of the RequestHandler cage, each running as a separate windows service? Or, would it be better to try to make RequestHandler into a pmnc pooled resource so it can be called dynamically from fastListener and just keep it all in one cage? I'm just trying to work out how I "tease apart" the architecture into a more horizontally scaled system, one of the key benefits of pmnc, I know! If RequestHandler.py is a new, separate cage, then while fastListener would be a http-based cage, RequestHandler would not have to be. It could just be RPC or something, correct? Once fastListener accepts the http request, then its communication with RequestHandler would be direct. You had previously given me these examples: 1. result = pmnc.module.method(...) A synchronous local call with very little overhead. 2. result = pmnc("other_cage").module.method(...) A synchronous remote call to another cage over SSL. I'm thinking that #2 would be the model in this case? So it would be something like pmnc("RequestHandler").process_request(params) If that's true, then what interface would I be basing RequestHandler on, if not http? Thanks for any help you can give me! |
From: Dmitry D. <dm...@ta...> - 2011-02-18 08:02:34
|
Hi Eric,

You can enable periodic processing on any given cage by adding an interface of type "schedule". This interface throws in a request every specified interval.

Download the schedule protocol pack and see the "Database reporting" sample from the samples page http://www.pythomnic3k.org/samples.html - that sample does periodic report generation.

Dmitry

On 17.02.2011 19:13, Eric Livingston wrote:
> What would be the best/recommended way to implement a "heartbeat" or
> "recurring" process that could do clean-up and other maintenance tasks in
> the overall system? Right now my system, by design, just sits and waits for
> incoming requests, but I have a few things that would be handy to have
> happening on a regular basis, regardless of whether incoming requests are
> coming or not. Can you show a quick example of how to arrange a
> cage/function to be "fired" by the system on a recurring, say 10-minute
> schedule?
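The handler side of such a schedule interface is just another reloadable module with a process_request entry point (the sender-receiver demo's interface_every_10.py mentioned elsewhere on this list has the same shape). The sketch below is hypothetical: the module name and the maintenance module it calls are assumptions, and the interface itself still has to be declared in the cage's configuration as documented in the schedule protocol pack.

# interface_maintenance.py - hypothetical handler module for a schedule-type
# interface that the framework fires on a timer (e.g. every 10 minutes).

def process_request(request, response):

    # the schedule protocol delivers an essentially empty request; this
    # handler just delegates to an ordinary reloadable module containing
    # the actual clean-up logic (pmnc is injected by the framework)
    pmnc.log("periodic maintenance starts")
    pmnc.maintenance.cleanup_expired_sessions()   # hypothetical module/method
    pmnc.maintenance.purge_old_records()          # hypothetical module/method
    pmnc.log("periodic maintenance done")

# EOF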
From: Eric L. <Er...@Th...> - 2011-02-17 14:13:45
|
What would be the best/recommended way to implement a "heartbeat" or "recurring" process that could do clean-up and other maintenance tasks in the overall system? Right now my system, by design, just sits and waits for incoming requests, but I have a few things that would be handy to have happening on a regular basis, regardless of whether incoming requests are coming or not. Can you show a quick example of how to arrange a cage/function to be "fired" by the system on a recurring, say 10-minute schedule? -----Original Message----- From: Dmitry Dvoinikov [mailto:dm...@ta...] Sent: Wednesday, December 22, 2010 3:01 AM To: Eric Livingston Cc: pyt...@li... Subject: Re: Using class hierarchies in Pythomnic3k > what would my "__all__" look like, ------- classes.py ------- __all__ = [ "Foo", "Bar" ] class Foo: ... class Bar(Foo): def do_stuff(self): ... # EOF ------- interface_foo.py ------- def process_request(request, response): bar = pmnc.classes.Bar(...) bar.do_stuff() ------------------------------- |
From: Dmitry D. <dm...@ta...> - 2011-01-11 08:21:23
|
Hello all, If you need it, I've written MongoDB protocol for Pythomnic3k. You may download the pack from the site: http://www.pythomnic3k.org/download.html The usage instructions as always are in protocol file, in this case protocol_mongodb.py Sincerely, Dmitry Dvoinikov |
From: Dmitry D. <dm...@ta...> - 2010-12-22 08:01:14
|
> what would my "__all__" look like,

------- classes.py -------

__all__ = [ "Foo", "Bar" ]

class Foo:
    ...

class Bar(Foo):
    def do_stuff(self):
        ...

# EOF

------- interface_foo.py -------

def process_request(request, response):
    bar = pmnc.classes.Bar(...)
    bar.do_stuff()

-------------------------------
From: Dmitry D. <dm...@ta...> - 2010-12-22 07:54:40
|
The regular OOP approach with class hierarchies doesn't play well in component-based applications. You really should consider using stateless module components instead. But if you *need* a class hierarchy, you have two options:

1. Put all your classes Foo, Bar etc. in a separate reloadable module (ex. classes.py) within a cage directory, then create instances using factory-like calls:

def process_request(request, response):
    foo = pmnc.classes.Foo(...)
    foo.method()

This works fine, but makes certain things problematic, such as using isinstance (and referencing *classes* in general). You should not be needing it anyway, but if you do, it becomes awkward, something like this:

foo = pmnc.classes.Foo(...)
bar = pmnc.classes.Bar(...)
if isinstance(bar, foo.__class__):
    ...

One other thing is that on-the-fly reloadability now becomes a problem, because an instance's lifetime may exceed that of its module. Consider this:

foo = pmnc.classes.Foo(...)
>>>> classes.py is reloaded here <<<<
bar = pmnc.classes.Bar(...)

Now, is bar an instance of (the previous version's) Foo any more?

2. Put all your classes in a non-reloadable module in lib and use the regular approach:

from classes import Foo
foo = Foo()

Sincerely,
Dmitry Dvoinikov

On 22.12.2010 2:01, Eric Livingston wrote:
> That sounds good! One further question on syntax and usage:
>
> My (large) single cage.py basically looks like:
>
> Class foo:
>     ...
> Class Bar:
>     ...
> Class Other:
>     ...
> Class YetAnother:
>     ...
> Class AdditionalClass:
>     ...
> Def process_request(request, response):
>     Self.foo
>     Self.Bar
>     ...
>
> In other words, all my code is wrapped up in classes, and not "naked"
> methods at the module/sourcefile level, like process_request.
>
> How do I use your system in that context? If I put "Class Foo" into Foo.py,
> what would my "__all__" look like, and how would I refer to and create an
> instance of that class back in cage.py, where process_request is? Basically,
> process_request instantiates a bunch of class objects and operates on them
> through methods - can I still do that under your architecture?
>
> Eric
From: Eric L. <Er...@Th...> - 2010-12-21 21:01:42
|
That sounds good! One further question on syntax and usage: My (large) single cage.py basically looks like: Class foo: ... Class Bar: ... Class Other: ... Class YetAnother: ... Class AdditionalClass: ... Def process_request(request, response): Self.foo Self.Bar ... In other words, all my code is wrapped up in classes, and not "naked" methods at the module/sourcefile level, like process_request. How do I use your system in that context? If I put "Class Foo" into Foo.py, what would my "__all__" look like, and how would I refer to and create an instance of that class back in cage.py, where process_request is? Basically, process_request instantiates a bunch of class objects and operates on them through methods - can I still do that under your architecture? Eric -----Original Message----- From: Dmitry Dvoinikov [mailto:dm...@ta...] Sent: Monday, December 20, 2010 10:24 AM To: Eric Livingston Cc: pyt...@li... Subject: Re: [Pythomnic3k-questions] [ANN] Pythomnic3k 1.2 released > pmnc module calls implied Actually, it can do lots of things. Here is a short reference: 1. result = pmnc.module.method(...) A synchronous local call with very little overhead. 2. result = pmnc("other_cage").module.method(...) A synchronous remote call to another cage over SSL. 3. pmnc(queue = "retry").module.method(...) Enqueues a retriable local call, returns immediately, is being executed in separate thread, the call is retried again and again if it throws an exception. 3. pmnc("other_cage", queue = "retry").module.method(...) Enqueues a retriable remote call to another cage, is being executed in separate thread, the call is retried again and again if it throws an exception. Sincerely, Dmitry Dvoinikov On 20.12.2010 19:26, Eric Livingston wrote: > Ah, ok - I thought the pmnc module calls implied a "heavy" RPC > protocol that could add unnecessary overhead, but if it's just > organizational, then that's great, I'll try it! The idea of > independent classes/modules being loaded/reloaded when necessary is > great :) |
From: Dmitry D. <dm...@ta...> - 2010-12-20 15:24:23
|
> pmnc module calls implied

Actually, it can do lots of things. Here is a short reference:

1. result = pmnc.module.method(...)
   A synchronous local call with very little overhead.

2. result = pmnc("other_cage").module.method(...)
   A synchronous remote call to another cage over SSL.

3. pmnc(queue = "retry").module.method(...)
   Enqueues a retriable local call and returns immediately; the call is executed in a separate thread and is retried again and again if it throws an exception.

4. pmnc("other_cage", queue = "retry").module.method(...)
   Enqueues a retriable remote call to another cage; it is executed in a separate thread and is retried again and again if it throws an exception.

Sincerely,
Dmitry Dvoinikov

On 20.12.2010 19:26, Eric Livingston wrote:
> Ah, ok - I thought the pmnc module calls implied a "heavy" RPC
> protocol that could add unnecessary overhead, but if it's just
> organizational, then that's great, I'll try it! The idea of
> independent classes/modules being loaded/reloaded when necessary is
> great :)
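As a usage illustration of the reference above, a fire-and-forget pattern (answer the incoming request immediately, do the slow work in a retried background call) could look roughly like this; the module and method names and the parsed order data are assumptions for the example, not framework requirements.

# Hypothetical fragment of an HTTP interface handler that acknowledges the
# request right away and pushes the slow work to a retried background call.

def process_request(request, response):

    order = { "id": 12345, "qty": 1 }   # in reality, parsed from the request

    # pattern 3 above: enqueue a retriable local call; it returns immediately
    # and is executed (and retried if it raises) by a separate worker thread
    pmnc(queue = "retry").orders.process_order(order)   # hypothetical module

    # ... fill in the HTTP response here as usual; the client gets its reply
    # without waiting for process_order to complete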
From: Eric L. <Er...@Th...> - 2010-12-20 14:26:12
|
Ah, ok - I thought the pmnc module calls implied a "heavy" RPC protocol that could add unnecessary overhead, but if it's just organizational, then that's great, I'll try it! The idea of independent classes/modules being loaded/reloaded when necessary is great :) On 12/20/10, Dmitry Dvoinikov <dm...@ta...> wrote: > > I'm not trying to make each class a "module" > > Actually, you can place classes in separate modules > if you prefer. Something like this works: > > --------------- foo.py --------------- > > __all__ = [ "Foo" ] > > class Foo: > ... > > --------------- bar.py --------------- > > foo = pmnc.foo.Foo(...) > > -------------------------------------- > > > with separate pmnc call overhead, etc - just trying to > > simply break up the source files for my one main "cage" > > into workable pieces. > > But this is exactly the point. You essentially extract methods from > one module to other modules and access them through pmnc.foo.bar > calls. This gives you clean modularization and the smaller modules > are reloaded independently when they are changed. You should group > similar methods to modules, which again means that modules turn > into kind of components. I prefer this approach, where module > is just a code component which participates in request processing. > The reason for this is that each request is processed independently, > and there is typically no long-living state. > > If you want to build a full blown class hierarchy, you may do so > by putting it all to a separate module and in that module enjoy > inheritance and everything, and the other modules create instances > of those classes by pmnc calls. Best of both worlds if you like. > > Sincerely, > Dmitry Dvoinikov > > On 18.12.2010 4:41, Eric Livingston wrote: >> Hmmm, well, I guess my question is, as a best practice, how to I easily >> partition my source files? >> >> My main module .py file is now up to about 2,000 lines, and typically I >> don't like to do that - I'd prefer to have each of my classes broken out >> into separate files, have a "header" file with my constants in it (I guess >> revealing my roots in C and C++), etc. >> >> I haven't worked out how to easily break that file down into say 10 or >> more >> "import"able modules that I can call, etc. >> >> How would you do such a thing? I'm not trying to make each class a >> "module" >> with separate pmnc call overhead, etc - just trying to simply break up the >> source files for my one main "cage" into workable pieces. > > -- Sent from my mobile device |
From: Dmitry D. <dm...@ta...> - 2010-12-20 07:16:55
|
> I'm not trying to make each class a "module"

Actually, you can place classes in separate modules if you prefer. Something like this works:

--------------- foo.py ---------------

__all__ = [ "Foo" ]

class Foo:
    ...

--------------- bar.py ---------------

foo = pmnc.foo.Foo(...)

--------------------------------------

> with separate pmnc call overhead, etc - just trying to
> simply break up the source files for my one main "cage"
> into workable pieces.

But this is exactly the point. You essentially extract methods from one module to other modules and access them through pmnc.foo.bar calls. This gives you clean modularization, and the smaller modules are reloaded independently when they change. You should group similar methods into modules, which again means that modules turn into a kind of component. I prefer this approach, where a module is just a code component that participates in request processing. The reason is that each request is processed independently, and there is typically no long-living state.

If you want to build a full-blown class hierarchy, you may do so by putting it all into a separate module and, within that module, enjoy inheritance and everything, while the other modules create instances of those classes via pmnc calls. Best of both worlds if you like.

Sincerely,
Dmitry Dvoinikov

On 18.12.2010 4:41, Eric Livingston wrote:
> Hmmm, well, I guess my question is, as a best practice, how do I easily
> partition my source files?
>
> My main module .py file is now up to about 2,000 lines, and typically I
> don't like to do that - I'd prefer to have each of my classes broken out
> into separate files, have a "header" file with my constants in it (I guess
> revealing my roots in C and C++), etc.
>
> I haven't worked out how to easily break that file down into say 10 or more
> "import"able modules that I can call, etc.
>
> How would you do such a thing? I'm not trying to make each class a "module"
> with separate pmnc call overhead, etc - just trying to simply break up the
> source files for my one main "cage" into workable pieces.