Hello, I have the following issue with an implemented three-step publishing workflow.
Test 1
User1 edits, saves, and requests publication of 1 document.
User2 confirms the publication.
Result: the document is published.
Test 2
User1 edits, saves, and requests scheduled publication of 1 document.
User2 confirms the scheduled publication.
Result: the document is published (scheduled).
Test 3
User1 edits, saves, and requests scheduled publication of 2 documents.
User2 confirms both scheduled publications.
Result: only one document is published (scheduled).
Test 4
User1 edits, saves, and requests scheduled publication of more than 2 documents.
User2 confirms all scheduled publications.
Result: only 2 documents are published (scheduled).
Results with two different Hippo configurations:
- All tests work fine if H2 is configured (local).
- With the Oracle configuration (using the DB), tests 1 and 2 pass, but tests 3 and 4 fail.
What I found:
- Hippo creates 2 threads to read and execute workflows; this is hardcoded. In the log I can see:
Hippo JCR Quartz Job Scheduler_Worker-1
Hippo JCR Quartz Job Scheduler_Worker-2
That's why at most 2 publish requests can be processed at a time.
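The two-worker limit is easy to reproduce outside of Hippo. The following is only a plain-Java simulation (WorkerPoolDemo and all names in it are mine, not Hippo or Quartz API): with a fixed pool of two threads, three submitted "publish" jobs never run more than two at a time.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerPoolDemo {
    public static void main(String[] args) throws Exception {
        // Simulate the hardcoded Scheduler_Worker pool: exactly 2 threads.
        ExecutorService workers = Executors.newFixedThreadPool(2);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger maxConcurrent = new AtomicInteger();

        for (int i = 1; i <= 3; i++) {
            final int doc = i;
            workers.submit(() -> {
                int now = running.incrementAndGet();
                maxConcurrent.accumulateAndGet(now, Math::max);
                try {
                    Thread.sleep(200); // "publishing" the document
                } catch (InterruptedException ignored) { }
                System.out.println("published document" + doc);
                running.decrementAndGet();
            });
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("max concurrent: " + maxConcurrent.get());
    }
}
```

All three documents do get published here, but the third job has to wait for a free worker, which matches the ceiling of 2 requests in flight.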
- I reimplemented the WorkflowJob (from org.onehippo.repository.documentworkflow.task.ScheduleWorkflowTask) and made ALL threads run synchronized:

private static final Object LOCK = new Object();

public void execute(RepositoryJobExecutionContext context) throws RepositoryException {
    log.info("****** WAITING FOR SYNC SCHEDULING JOB… " + this.hashCode());
    synchronized (LOCK) {
        log.info("****** START SYNC SCHEDULING JOB… " + this.hashCode());
        …
    }
}

This was just to check whether there is some kind of race condition.
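For reference, a stripped-down, runnable version of that serialization pattern (plain Java; the class, method signature, and toy payload are mine, since the real execute() takes a RepositoryJobExecutionContext): with the shared static LOCK, the start and end of two concurrently triggered jobs can never interleave.

```java
public class SyncJobDemo {
    // Shared static lock: every job instance in this JVM serializes on it,
    // mirroring the synchronized(LOCK) block added to the WorkflowJob.
    private static final Object LOCK = new Object();
    private static final StringBuilder LOG = new StringBuilder();

    static void execute(String job) {
        synchronized (LOCK) {
            LOG.append(job).append(" start;");
            try {
                Thread.sleep(50); // simulated publishing work
            } catch (InterruptedException ignored) { }
            LOG.append(job).append(" end;");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread w1 = new Thread(() -> execute("job1"));
        Thread w2 = new Thread(() -> execute("job2"));
        w1.start();
        w2.start();
        w1.join();
        w2.join();
        // With the lock, each job's start/end pair stays contiguous.
        System.out.println(LOG);
    }
}
```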
- I inserted a SELECT at the beginning and at the end of the job, to check how many requests exist each time the workflow is started:

Query query = qMgr.createQuery("SELECT * FROM hipposched:trigger WHERE hipposched:nextFireTime <= TIMESTAMP '" + ISO8601.format(cal) + "' ORDER BY hipposched:nextFireTime", "sql");
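For anyone who wants to reproduce the diagnostic, here is how that query string is assembled. This is only a sketch: I use a stand-in iso8601() formatter instead of Jackrabbit's org.apache.jackrabbit.util.ISO8601, and no real QueryManager is involved; it just makes the quoting around the TIMESTAMP literal explicit.

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

public class TriggerQueryDemo {
    // Stand-in for Jackrabbit's ISO8601.format(Calendar); the real utility
    // produces the same ISO 8601 shape with a zone designator.
    static String iso8601(Calendar cal) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");
        fmt.setTimeZone(cal.getTimeZone());
        return fmt.format(cal.getTime());
    }

    static String buildQuery(Calendar cal) {
        return "SELECT * FROM hipposched:trigger"
             + " WHERE hipposched:nextFireTime <= TIMESTAMP '" + iso8601(cal) + "'"
             + " ORDER BY hipposched:nextFireTime";
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.set(2018, Calendar.JULY, 24, 12, 8, 2);
        cal.set(Calendar.MILLISECOND, 0);
        System.out.println(buildQuery(cal));
    }
}
```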
Now an example with 3 documents.
The H2 configuration runs like this:
Worker 1 starts and says we have 3 requests pending
Worker 2 starts and waits for worker 1
Worker 1 publishes document1
Worker 1 finishes and says we have 3 requests pending
Hippo says: autoexport is processing changes…
Worker 2 starts and says we have 2 requests pending
Worker 1 starts and waits for worker 2
Worker 2 publishes document2
Worker 2 finishes and says we have 2 requests pending
Hippo says: autoexport is processing changes…
Worker 1 starts and says we have 1 request pending
Worker 1 publishes document3
Worker 1 finishes and says we have 1 request pending
Hippo says: autoexport is processing changes…
Everything works fine; all documents are published successfully.
The Oracle configuration runs like this:
Worker 1 starts and says we have 3 requests pending (doc1, doc2 and doc3)
Worker 2 starts and waits for worker 1
Worker 1 publishes document1
!!! Now something strange happens:
24.07.2018 12:08:02 [ClusterNode-gf0vsxja839e.corp.int] INFO [org.apache.jackrabbit.core.cluster.ClusterNode.consume():858] Processing revision: 5796
24.07.2018 12:08:02 [ClusterNode-gf0vsxja839e.corp.int] INFO [org.apache.jackrabbit.core.cluster.ClusterNode.process():930] [174] 5796 system@default:/content/documents/HippoCMS/homepage/wissenswert/document2
!!!
The publish request for doc2 disappears.
Worker 1 finishes and says we have 2 requests pending (doc1 and doc3)
Worker 2 starts and says we have 1 request pending (doc3)
Worker 1 starts and waits for worker 2
Worker 2 publishes document3
Worker 2 finishes and says we have 1 request pending (doc3)
I'm not able to find out WHY Hippo consumes document2.
The error can be reproduced every time.
If I have 6 documents, all requests except two are consumed by Hippo.
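I can't prove what happens inside the repository, but the symptom looks like a consumed-but-never-published trigger. Below is a toy, single-threaded illustration of that failure shape (pure Java; all names are mine, nothing here is Hippo code): a worker snapshots the pending set, something else removes one trigger in between, and that document ends up published by nobody.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;

public class DisappearingTriggerDemo {
    public static void main(String[] args) {
        // Shared trigger store (stand-in for the hipposched:trigger nodes).
        Set<String> triggers = new ConcurrentSkipListSet<>(
                Set.of("document1", "document2", "document3"));
        List<String> published = new ArrayList<>();

        // Worker 1 snapshots the pending set ("3 requests pending")...
        List<String> snapshot = new ArrayList<>(triggers);

        // ...but before it proceeds, something else (here: a simulated
        // cluster-sync event) consumes document2's trigger without
        // publishing anything.
        triggers.remove("document2");

        // Worker 1 now only publishes triggers that still exist.
        for (String doc : snapshot) {
            if (triggers.remove(doc)) {
                published.add(doc);
            }
        }
        System.out.println("published: " + published);
        System.out.println("lost: document2 was consumed but never published");
    }
}
```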
Scheduled publishing of more than 1 document works on https://cms.demo.onehippo.com/, but I think that site uses an H2 configuration, not MySQL or Oracle.