Batching

Gareth
2014-05-20
2014-05-28
  • Gareth

    Gareth - 2014-05-20

    Hi

    Another question!
    I also need to model a situation where customers are collected into batches, i.e. customers are held in a waiting area until it is full (say N customers), after which they are immediately processed by N servers. In other words, they are held until the system can be run at full capacity, to maximise use of the resource.
    I have tried load dependent routing to hold jobs up, but it doesn't seem to work in quite the way I want.
    I believe this has been discussed before:
    https://sourceforge.net/p/jmt/discussion/556279/thread/c04ffca5/#4e91
    But it is not clear whether a solution exists yet. Maybe some combination of load dependent routing and class switching?
    Any ideas would be much appreciated!
    Best wishes

    Gareth
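
    A minimal sketch (added for illustration, not part of the original post) of the batching behaviour described above: customers accumulate in a buffer until N have arrived, then all N start service at once on N parallel servers. The parameter names and values (N, MEAN_ARRIVAL) are assumptions chosen only to make the example run.

```python
import random

# Illustrative batching semantics: hold customers until N have accumulated,
# then release the whole batch to N parallel servers at the same instant.
# N and MEAN_ARRIVAL are assumed values, not taken from the discussion.
random.seed(1)
N = 10              # batch size = number of servers
MEAN_ARRIVAL = 1.0  # mean of the exponential interarrival times

clock = 0.0
buffer = []   # arrival times of customers waiting for the batch to fill
waits = []    # time each customer spends waiting for its batch to start

for _ in range(100_000):
    clock += random.expovariate(1.0 / MEAN_ARRIVAL)  # next arrival
    buffer.append(clock)
    if len(buffer) == N:
        release = clock                            # Nth arrival triggers the batch
        waits.extend(release - t for t in buffer)  # batching delay per customer
        buffer.clear()
        # at this point each of the N customers would start service in parallel

print("mean batching delay per customer:", round(sum(waits) / len(waits), 2))
# with N = 10 and mean interarrival 1, this is about (N - 1) / 2 = 4.5
```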

     
    • serazzi

      serazzi - 2014-05-26

      Hi Gareth,
      we do not have the batch feature. However, I have a suggestion. Consider that you want to wait until a buffer of size N is full and then immediately process the N requests. If the interarrival times of the requests are exponentially distributed (hope this is YOUR CASE!), then you may generate arrivals with an Erlang-N distribution (an Erlang with N stages), each stage having the mean interarrival time between two consecutive requests. This lets you generate one "heavy" request every N arrived requests. The newly generated heavy request can then be directed to a Fork/Join that splits it into N requests, each of them sent to a delay with whatever mean processing time you like. So, globally, the Fork/Join area contains N delays, one per link, and the Join then merges the N links.
      If you also want to account for the time that the arriving requests wait in the queue, then use a Source to generate a heavy stream of requests (the distribution does not matter, it just has to be enough to keep the delay always busy), connect the Source to a delay with an Erlang-N service time distribution (which will behave as the source of heavy requests), and then connect the Fork/Join. Attention: the delay must be inside a finite capacity region (FCR) with max number of requests = 1.
      You can ask JMT to generate interarrival or service times with an Erlang-N distribution.
      Hope my suggestion is clear enough; otherwise just ask me for more details.
      Best regards
      giuseppe
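
      A small sketch (an editorial addition, not part of this reply) of the construction Giuseppe outlines, assuming exponential elementary interarrivals with an illustrative mean of 1 and a batch size N = 10: an Erlang-N interarrival time is built as the sum of N exponential stages, so the source emits one "heavy" request per N elementary arrivals, and each heavy request is then forked into N jobs (one per delay/link in the Fork/Join).

```python
import random

# Erlang-N source for "heavy" (batch) requests, built as the sum of N
# exponential stages, each with the mean interarrival time of the
# elementary requests. N and MEAN_IA are assumed illustrative values.
random.seed(2)
N = 10
MEAN_IA = 1.0

def erlang_interarrival(n, mean_stage):
    # sum of n independent exponential stages = Erlang-n distributed time
    return sum(random.expovariate(1.0 / mean_stage) for _ in range(n))

clock = 0.0
heavy_arrivals = []
for _ in range(10_000):
    clock += erlang_interarrival(N, MEAN_IA)
    heavy_arrivals.append(clock)
    # here the Fork would split the heavy request into N elementary jobs,
    # each routed to its own delay, and the Join would merge the N links

elementary_rate = len(heavy_arrivals) * N / clock
print("heavy-request rate:", round(len(heavy_arrivals) / clock, 3))            # about 1/N
print("elementary rate recovered after the fork:", round(elementary_rate, 3))  # about 1
```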

       
    • serazzi

      serazzi - 2014-05-28

      Hi Gareth,
      in my previous answer I did not describe why the proposed model should simulate the batches. Interarrival times that follow an Erlang-N distribution are obtained by adding N exponentially distributed times. Suppose your elementary requests arrive at the buffer with exponentially distributed interarrival times of mean 1. To generate an event (say, a batch) every 10 elementary requests, it is sufficient to use an Erlang-9 distribution (9 because we count from the first to the last arrival, so there are 9 intervals to consider). Of course, the events generated by the Erlang source must be treated as "batches" and, as suggested in my previous message, forked into N in order to obtain the correct number of elementary requests.
      Hope this is clear enough.
      Best regards
      Giuseppe
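
      A quick numerical check (added here for illustration, not from the original post) of the counting argument above, assuming a mean interarrival time of 1: within a group of 10 elementary arrivals there are 9 interarrival intervals between the first and the last, so the span of a group should match an Erlang-9 time (mean 9, variance 9).

```python
import random

# Simulate one Poisson stream of elementary arrivals, cut it into groups
# of 10, and measure the span from the first to the tenth arrival of each
# group. The span is the sum of 9 exponential intervals, i.e. Erlang-9.
random.seed(3)
MEAN_IA = 1.0
GROUP = 10
GROUPS = 50_000

clock = 0.0
arrivals = []
for _ in range(GROUP * GROUPS):
    clock += random.expovariate(1.0 / MEAN_IA)
    arrivals.append(clock)

spans = [arrivals[i + GROUP - 1] - arrivals[i]      # 1st -> 10th of each group
         for i in range(0, len(arrivals), GROUP)]

mean = sum(spans) / len(spans)
var = sum((s - mean) ** 2 for s in spans) / len(spans)
print("mean span 1st->10th arrival:", round(mean, 2), "(Erlang-9 mean = 9.0)")
print("variance of the span:", round(var, 2), "(Erlang-9 variance = 9.0)")
```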

       
  • Gareth

    Gareth - 2014-05-23

    Hi

    I am wondering if there is any way to collect customers into batches?
    I have been trying all kinds of strategies, but without success. Load dependent routing seemed promising, but it appears that the service time is only changed for customers waiting in the queue and can't affect those already in service.
    Perhaps some kind of round-robin routing strategy plus class switching and/or fork elements might work, but I can't see a solution yet.
    Best regards

    Gareth

     
