Re: [jgroups-users] Hijacking the JGroups channel inside Infinispan and not getting away with it :)
Brought to you by: belaban
From: Questions/problems related to using JGroups <jav...@li...> - 2022-11-22 01:29:38
Thanks for the confirmation Bela. I was easily able to modify the Infinispan
configuration to include FORK [1], and the application is no longer prone to
random OOMs during bootstrap.

Cheers,
Johnathan

[1]

<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.0.xsd">
    ...
    <FORK>
        <fork-stacks>
            <fork-stack id="hijack-stack"/>
        </fork-stacks>
    </FORK>
    <FRAG3/>
</config>

if (cacheManager.getTransport() instanceof JGroupsTransport) {
    JGroupsTransport jGroupsTransport = (JGroupsTransport) cacheManager.getTransport();
    ProtocolStack stack = jGroupsTransport.getChannel().getProtocolStack();
    Class<? extends Protocol> neighborProtocol =
            stack.findProtocol(FRAG2.class) != null ? FRAG2.class : FRAG3.class;
    channel = new ForkChannel(jGroupsTransport.getChannel(), "hijack-stack",
            "lead-hijacker", false, ProtocolStack.Position.ABOVE, neighborProtocol);
}

On Fri, 21 Oct 2022, 16:40 Questions/problems related to using JGroups via
javagroups-users, <jav...@li...> wrote:

> Hi Jonathan
>
> yes, the best solution is to define FORK in the JGroups section of the
> Infinispan configuration, as you mentioned.
>
> If you don't control the configuration, then it gets a bit more tricky...
> you could (*before sending any broadcast traffic*) insert FORK dynamically
> into every JChannel instance.
> To do that, you need to get the JChannel; IIRC, via (paraphrased)
> cache.getExtendedCache().getRpcManager().getTransport(), downcast it to
> JGroupsTransport, then call getChannel().
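To complete the snippet above: a minimal sketch of the whole hijack sequence, assuming a jgroups 4.x / Infinispan 11 classpath. The class name `Hijacker` and the empty `Receiver` body are placeholders for illustration, not part of the original snippet:

```java
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.remoting.transport.jgroups.JGroupsTransport;
import org.jgroups.JChannel;
import org.jgroups.Receiver;
import org.jgroups.fork.ForkChannel;
import org.jgroups.protocols.FRAG2;
import org.jgroups.protocols.FRAG3;
import org.jgroups.stack.Protocol;
import org.jgroups.stack.ProtocolStack;

// Placeholder class wrapping the fork-channel creation shown in the thread.
public final class Hijacker {

    public static ForkChannel hijack(EmbeddedCacheManager cacheManager) throws Exception {
        JGroupsTransport transport = (JGroupsTransport) cacheManager.getTransport();
        JChannel main = transport.getChannel();
        ProtocolStack stack = main.getProtocolStack();

        // Place the fork above whichever fragmentation protocol the stack uses
        Class<? extends Protocol> neighbor =
                stack.findProtocol(FRAG2.class) != null ? FRAG2.class : FRAG3.class;

        // FORK is already declared in the jgroups.xml, so create_fork_if_absent
        // is false: nothing is inserted at runtime, which is the whole point
        ForkChannel fork = new ForkChannel(main, "hijack-stack", "lead-hijacker",
                false, ProtocolStack.Position.ABOVE, neighbor);

        // Register the receiver before connecting so the first view/broadcast
        // is not missed
        fork.setReceiver(new Receiver() { /* view/message callbacks go here */ });
        fork.connect("ignored"); // the cluster name is ignored by fork channels
        return fork;
    }
}
```

Registering the receiver before connect() matters for the same reason as declaring FORK in the XML: otherwise early fork messages can arrive with nobody to handle them.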
>
> Once you have JChannel, call
> channel.getProtocolStack().insertProtocolAtTop(new FORK(...));
>
> Hope this helps,
>
> [1] http://www.jgroups.org/manual5/index.html#ForkChannel
>
> On 21.10.22 16:02, Questions/problems related to using JGroups wrote:
> > Hi all,
> >
> > We have run into an interesting race condition when attempting to use a
> > fork channel in our application; we more or less follow what Bela wrote
> > here:
> > http://belaban.blogspot.com/2013/08/how-to-hijack-jgroups-channel-inside.html
> >
> > When the cluster comes up and receives views, each node broadcasts
> > information about itself over the fork channel, so the cluster knows
> > what each node is capable of handling.
> >
> > Unfortunately, from what I can see there is a race condition on
> > bootstrap: the JGroups stack is started and receives a fork message
> > before FORK is inserted into the stack (FORK is absent from the stack
> > trace) [1], which results in garbage/unknown data passing through the
> > Infinispan marshaller. In the worst case it reads an extremely large
> > int and tries to allocate a byte array of that size, causing the JVM
> > to throw an OOM or a NegativeArraySizeException.
> >
> > I believe one possible solution is to define the fork inside the
> > jgroups.xml used to create the initial JGroups stack; FORK would then
> > hopefully discard fork-channel messages (until the message listener is
> > registered) instead of passing them up the stack with undefined
> > behaviour.
> >
> > I had a look at the fork documentation, but there are not many
> > examples. Does my possible solution seem feasible, or does someone
> > have alternative solutions? I am currently looking through the
> > Infinispan code to see if there is any way to decorate JGroups before
> > it starts.
> >
> > Thanks in advance,
> >
> > Johnathan
> >
> > [1]
> >
> > 2022-10-20 10:42:28,515 ERROR [jgroups-89,service-2] (org.infinispan.CLUSTER) ISPN000474: Error processing request 0@service-2
> > java.lang.NegativeArraySizeException: -436207616
> >   at org.infinispan.marshall.core.GlobalMarshaller.readUnknown(GlobalMarshaller.java:904) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.marshall.core.GlobalMarshaller.readUnknown(GlobalMarshaller.java:891) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.marshall.core.GlobalMarshaller.readNonNullableObject(GlobalMarshaller.java:715) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.marshall.core.GlobalMarshaller.readNullableObject(GlobalMarshaller.java:358) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.marshall.core.GlobalMarshaller.objectFromObjectInput(GlobalMarshaller.java:192) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.marshall.core.GlobalMarshaller.objectFromByteBuffer(GlobalMarshaller.java:221) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processRequest(JGroupsTransport.java:1361) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.remoting.transport.jgroups.JGroupsTransport.processMessage(JGroupsTransport.java:1301) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.remoting.transport.jgroups.JGroupsTransport.access$300(JGroupsTransport.java:130) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.lambda$up$0(JGroupsTransport.java:1450) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.jgroups.util.MessageBatch.forEach(MessageBatch.java:318) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.infinispan.remoting.transport.jgroups.JGroupsTransport$ChannelCallbacks.up(JGroupsTransport.java:1450) ~[infinispan-core-11.0.1.Final.jar:11.0.1.Final]
> >   at org.jgroups.JChannel.up(JChannel.java:796) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:903) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.FRAG3.up(FRAG3.java:187) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.stack.Protocol.up(Protocol.java:338) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:297) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.UNICAST3.deliverBatch(UNICAST3.java:1071) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.UNICAST3.removeAndDeliver(UNICAST3.java:886) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.UNICAST3.handleBatchReceived(UNICAST3.java:852) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:501) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:689) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.stack.Protocol.up(Protocol.java:338) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.FailureDetection.up(FailureDetection.java:197) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.stack.Protocol.up(Protocol.java:338) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.stack.Protocol.up(Protocol.java:338) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.stack.Protocol.up(Protocol.java:338) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.protocols.TP.passBatchUp(TP.java:1408) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.passBatchUp(MaxOneThreadPerSender.java:284) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.util.SubmitToThreadPool$BatchHandler.run(SubmitToThreadPool.java:136) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at org.jgroups.util.MaxOneThreadPerSender$BatchHandlerLoop.run(MaxOneThreadPerSender.java:273) ~[jgroups-4.2.1.Final.jar:4.2.1.Final]
> >   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
> >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
> >   at java.lang.Thread.run(Thread.java:829) ~[?:?]
> >
> > _______________________________________________
> > javagroups-users mailing list
> > jav...@li...
> > https://lists.sourceforge.net/lists/listinfo/javagroups-users
>
> --
> Bela Ban | http://www.jgroups.org
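For reference, the dynamic insertion Bela suggests above (for when you don't control the configuration) might look roughly like the following sketch. `cache` is assumed to be an Infinispan `Cache`, `getAdvancedCache()` is used where the mail says (paraphrased) `getExtendedCache()`, and the class name `DynamicFork` is a placeholder; per Bela's caveat, this must run before any broadcast traffic is sent:

```java
import org.infinispan.Cache;
import org.infinispan.remoting.transport.jgroups.JGroupsTransport;
import org.jgroups.JChannel;
import org.jgroups.protocols.FORK;

// Sketch only: insert FORK at the top of an already-running stack.
// Any fork message that arrives before this runs still hits the race
// described in the thread, hence "before sending any broadcast traffic".
public final class DynamicFork {

    public static void insertFork(Cache<?, ?> cache) throws Exception {
        JGroupsTransport transport = (JGroupsTransport)
                cache.getAdvancedCache().getRpcManager().getTransport();
        JChannel ch = transport.getChannel();
        ch.getProtocolStack().insertProtocolAtTop(new FORK());
    }
}
```

Declaring FORK in the configuration remains the safer option, since the dynamic variant by construction leaves a window in which fork messages reach the Infinispan marshaller.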