BroadcastModule Error getting last processed, skipping listener

Hello,

We are on version 14.4 of the CMS and added Workflow Process Management (for the first time) using the Essentials app. Everything seems to run fine locally, but when we deploy to our dev environment we get a lot of the error below, even though everything appears to be working properly. Do we have to do anything special that isn't listed in Add Workflow Process Management to a Project - Bloomreach Experience - Open Source CMS?

26.02.2021 00:21:25 ERROR pool-5-thread-1 [traceId=] BroadcastModule Error getting last processed, skipping listener ‘com.onehippo.cms7.services.wpm.project.observation.ClusterWideEventCounterListenerImpl@7407bbd8’
javax.jcr.RepositoryException: Failed to resolve path relative to node /hippo:configuration/hippo:modules/broadcast/hippo:moduleconfig
at org.apache.jackrabbit.core.NodeImpl.resolveRelativePath(NodeImpl.java:240) ~[jackrabbit-core-2.18.5-h3.jar:14.4.0]
at org.apache.jackrabbit.core.NodeImpl.resolveRelativeNodePath(NodeImpl.java:223) ~[jackrabbit-core-2.18.5-h3.jar:14.4.0]
at org.apache.jackrabbit.core.NodeImpl.hasNode(NodeImpl.java:2281) ~[jackrabbit-core-2.18.5-h3.jar:14.4.0]
at org.hippoecm.repository.impl.NodeDecorator.hasNode(NodeDecorator.java:203) ~[hippo-repository-engine-14.4.0.jar:14.4.0]
at org.hippoecm.repository.events.BroadcastModule.getLastProcessed(BroadcastModule.java:106) ~[hippo-repository-modules-14.4.0.jar:14.4.0]
at org.hippoecm.repository.events.BroadcastModule.getNextJob(BroadcastModule.java:164) [hippo-repository-modules-14.4.0.jar:14.4.0]
at org.hippoecm.repository.events.Broadcaster.run(Broadcaster.java:73) [hippo-repository-modules-14.4.0.jar:14.4.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_212]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.apache.jackrabbit.spi.commons.conversion.MalformedPathException: empty path
at org.hippoecm.repository.jackrabbit.HippoPathParser.parse(HippoPathParser.java:219) ~[hippo-repository-engine-14.4.0.jar:14.4.0]
at org.hippoecm.repository.jackrabbit.HippoPathParser.parse(HippoPathParser.java:177) ~[hippo-repository-engine-14.4.0.jar:14.4.0]
at org.hippoecm.repository.jackrabbit.HippoPathParser.parse(HippoPathParser.java:149) ~[hippo-repository-engine-14.4.0.jar:14.4.0]
at org.hippoecm.repository.jackrabbit.HippoPathParser.parse(HippoPathParser.java:68) ~[hippo-repository-engine-14.4.0.jar:14.4.0]
at org.hippoecm.repository.jackrabbit.HippoCachingPathResolver.getQPath(HippoCachingPathResolver.java:53) ~[hippo-repository-engine-14.4.0.jar:14.4.0]
at org.hippoecm.repository.jackrabbit.HippoNamePathResolver.getQPath(HippoNamePathResolver.java:61) ~[hippo-repository-engine-14.4.0.jar:14.4.0]
at org.apache.jackrabbit.core.SessionImpl.getQPath(SessionImpl.java:654) ~[jackrabbit-core-2.18.5-h3.jar:14.4.0]
at org.apache.jackrabbit.core.session.SessionContext.getQPath(SessionContext.java:338) ~[jackrabbit-core-2.18.5-h3.jar:2.18.5-h3]
at org.apache.jackrabbit.core.NodeImpl.resolveRelativePath(NodeImpl.java:238) ~[jackrabbit-core-2.18.5-h3.jar:14.4.0]
… 13 more

Hey,

It looks like missing configuration in your environment.
Could you check whether that path (/hippo:configuration/hippo:modules/broadcast/hippo:moduleconfig) is present in the environment? If it isn't, you would need to add it to your bootstrapping.

Thanks
Shane

Hi @Shane_Ahern
Yes, that path is present in the environment.

I do notice that when we run the project locally there are sub nodes under that path that could be related to this. I tried exporting those nodes from my local instance and moving them over, but the error still occurs. I'm not sure how these were created locally or what the expected values should be. The name of the main sub node appears to be a UID of some kind.

It turns out we weren't setting a cluster node id (JRC_OPTS="-Dorg.apache.jackrabbit.core.cluster.node_id=${CLUSTER_ID}") in our setenv.sh script. Once I added that, the errors went away and the correct sub nodes were created in the console.
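For anyone hitting the same thing, here is a minimal sketch of what that setenv.sh change can look like. The CLUSTER_ID variable and the node-1 fallback are illustrative only; use whatever unique, stable per-node identifier your environment provides (hostname, container id, etc.).

```shell
# Hypothetical excerpt from tomcat/bin/setenv.sh -- names are illustrative.
# Each node in a Jackrabbit cluster needs a unique, stable cluster node id,
# passed via the org.apache.jackrabbit.core.cluster.node_id system property.

# Assumed to be provided per host/container; fallback shown for illustration.
CLUSTER_ID="${CLUSTER_ID:-node-1}"

# Build the JVM option and append it to Tomcat's startup options.
JRC_OPTS="-Dorg.apache.jackrabbit.core.cluster.node_id=${CLUSTER_ID}"
CATALINA_OPTS="${CATALINA_OPTS} ${JRC_OPTS}"
```

With the id set, the BroadcastModule can resolve its per-cluster-node sub node under hippo:moduleconfig instead of trying to parse an empty relative path, which is what the MalformedPathException: empty path above was complaining about.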