From: karol s. <kar...@gm...> - 2011-05-01 19:10:08
Hi,

what is the status of sb-heapdump? Does it live somewhere as a dedicated contrib? Does it work on the latest SBCL?

I have large constant datasets which take a while to set up, and I would like to preprocess them first and load them quickly when necessary.

Karol
From: Nikolaus D. <de...@in...> - 2011-05-01 21:51:48
Hi,

I don't know about sb-heapdump, but do you think that maybe cl-store [1] could be an option for you? It is pretty easy to use, but I don't know how performant it really is for large datasets.

Best regards,
Niko

[1] http://common-lisp.net/project/cl-store/

On 01.05.2011 at 21:10, karol skocik wrote:
> what is the status of sb-heapdump? Does it live somewhere as a
> dedicated contrib? Does it work on the latest SBCL?
> I have large constant datasets which take a while to set up, and I
> would like to preprocess them first and load them quickly when necessary.
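[Editorial note: a minimal sketch of the cl-store round trip Niko suggests, assuming the library is installed via Quicklisp; the file name and sample data are illustrative, not from the thread.]

  (ql:quickload "cl-store")                 ; assumes Quicklisp is available

  ;; Serialize an object graph to disk ...
  (defparameter *data* (make-hash-table :test #'equal))
  (setf (gethash "key" *data*) (list 1 2 3))
  (cl-store:store *data* #p"data.store")

  ;; ... and restore it in another image, skipping the expensive setup.
  (defparameter *restored* (cl-store:restore #p"data.store"))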
From: Paul K. <pv...@pv...> - 2011-05-01 22:39:31
On 2011-05-01, at 3:10 PM, karol skocik wrote:
> I have large constant datasets which take a while to set up, and I
> would like to preprocess them first and load them quickly when necessary.

Depending on the data and on the usage pattern, a FASL might be good enough. For instance, you can COMPILE-FILE dump.lisp:

  (defparameter *data* #.*data*)

And LOAD dump.fasl in another image.

Paul Khuong
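[Editorial note: for concreteness, a sketch of the full round trip Paul describes; the file names and setup step are illustrative. Since #. evaluates at read time, *data* must already be bound in the image doing the COMPILE-FILE, and any non-standard objects in the dataset need MAKE-LOAD-FORM methods to be dumpable.]

  ;; In the image that already holds the expensive data:
  (defparameter *data* (build-dataset))          ; hypothetical setup step
  (with-open-file (s "dump.lisp" :direction :output :if-exists :supersede)
    (write-string "(defparameter *data* #.*data*)" s))
  (compile-file "dump.lisp")                     ; embeds the object in dump.fasl

  ;; In a fresh image:
  (load "dump.fasl")                             ; *data* restored, no setup cost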
From: Daniel P. <dp...@gm...> - 2011-05-02 06:17:34
On May 1, 2011, at 3:39 PM, Paul Khuong wrote:
> Depending on the data and on the usage pattern, a FASL might be good
> enough. For instance, you can COMPILE-FILE

Related to Paul's suggestion:

Load your data and then call sb-ext:save-lisp-and-die. Of course, the executable image gets bigger, but you're trading time for space by moving the complexity. This worked well in a high-volume deployment where downtime and restart time had to be minimized. Not suitable for all cases, of course.

The backstory:

We needed to ensure minimal downtime between data refreshes, and the model used bulk refresh. (This was an early-stage start-up, so the added complexity of continuous updates was punted until after we would see revenue.)

I've previously posted tidbits about the 2007 ad network in Seattle whose principal server software was written in CL using SBCL on FreeBSD with threads enabled. It ran fine for us, despite known thread issues on BSD.

Everything ran from SBCL's hash tables as our in-memory database. We populated various nested hash tables and then called sb-ext:save-lisp-and-die. These images were then shipped from a utility host to production nodes that sat behind a load balancer.

We could remove any node from the load-balancer pool in an automated and controlled fashion at any time, but we wanted to minimize that window.

While I never got the hang of their business/culture, the geek-macho angle was this: it was one of the few business models with a likelihood of seeing a billion requests a day, and we had to be able to absorb traffic spikes from multiple client websites. Needless to say, keeping planned downtime brief was a priority.

By explicitly loading Swank before calling save-lisp-and-die, we still had SLIME goodness via M-x slime-connect over an SSH tunnel.

I can elaborate off-line.

-Daniel
--
first name at last name dot com
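[Editorial note: a minimal sketch of the setup Daniel describes. The use of Quicklisp to load Swank, the port, image name, and entry-point function are all illustrative assumptions, not details from the thread.]

  (defparameter *db* (make-hash-table :test #'equal))
  (populate-db *db*)                   ; hypothetical bulk-refresh step

  (ql:quickload "swank")               ; load Swank *before* dumping, so
                                       ; M-x slime-connect works later over
                                       ; an SSH tunnel

  (sb-ext:save-lisp-and-die
   "node.image"
   :executable t
   :toplevel (lambda ()
               (swank:create-server :port 4005 :dont-close t)
               (run-node)))            ; hypothetical server entry point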
From: karol s. <kar...@gm...> - 2011-05-02 07:23:06
I know about save-lisp-and-die; we use it for production images as well. For now I just wanted a method to minimize the time for completing a development build and running tests with the data. I guess I will try Paul's method first, since it looks easy and does not bring another dependency into the system.

Thanks all for the comments.

Karol

On Mon, May 2, 2011 at 8:17 AM, Daniel Pezely <dp...@gm...> wrote:
> Load your data and then call sb-ext:save-lisp-and-die. Of course, the
> executable image gets bigger, but you're trading time for space by
> moving the complexity.
> [...]