Duplicate cavalcade job entries #89
I noticed the same thing, which is why I opened #88 to help clean up job entries via WP-CLI, but it would be great to know what is causing these duplicate entries to occur.
Does this project have multiple webservers? My approach was going to be to put a unique key constraint on hook + args + site so that a second instance of the same event couldn't get into the database. The problem with that is that the args column is a longtext rather than a varchar, and longtext columns don't support indexes. (https://github.com/humanmade/Cavalcade/blob/master/inc/namespace.php#L70-L73)

The issue can also occur with recurring intervals, because each worker checks whether an event needs to be rescheduled. The acquire_lock method of the runner isn't adequate: https://github.com/humanmade/Cavalcade-Runner/blob/master/inc/class-job.php#L55-L66. When 4 workers start up, call https://github.com/humanmade/Cavalcade-Runner/blob/master/inc/class-runner.php#L236-L239 at roughly the same time, and then try to update, we hit the race condition.

Last time I spoke with @rmccue about this he wanted to look at database locking, if I recall correctly. Something like https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html might be an option.
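To illustrate the locking-reads idea, here is a hypothetical sketch (not Cavalcade's actual code) of how `SELECT ... FOR UPDATE` could make the claim step atomic across concurrent workers. Table and column names follow the Cavalcade schema but are assumptions here:

```sql
-- Sketch: claim due jobs inside a transaction so two workers cannot
-- both pick up (and later reschedule) the same recurring event.
START TRANSACTION;

-- Lock the candidate rows. A second worker reaching this statement
-- blocks until the first commits, then sees status = 'running' and
-- selects nothing.
SELECT id, hook, args, site
  FROM wp_cavalcade_jobs
 WHERE nextrun < NOW()
   AND status = 'waiting'
   FOR UPDATE;

-- Mark the locked rows as claimed (ids taken from the SELECT above).
UPDATE wp_cavalcade_jobs
   SET status = 'running'
 WHERE id IN (/* ids from the SELECT above */)
   AND status = 'waiting';

COMMIT;
```

This trades some contention on the jobs table for a guarantee that only one worker ever owns a given row at a time.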
Would love to hear if there's a solution, as we're looking to implement Cavalcade right now. If duplicates are being created, the solution would be (at the moment) no better than WP's broken built-in cron.
@archon810 you could try Cavalcade 2.0. Worth noting this doesn't happen all the time; Cavalcade is running on all our production sites and wordpress.org too.
WordPress.org has this running as a daily cron task to solve this issue:
It's not ideal, but it's been working for us for quite some time. Duplicate jobs are a little more common with Cavalcade than with the usual WP cron storage, as Cavalcade inserts multiple rows, whereas WP cron just overwrites the previous cron array with a new one (which will only include one entry per event). Using a table lock would help, but Cavalcade would also have to reload the DB cron entries for the current site after locking, which could cause a lot of table locking on a high-usage site like WordPress.org. For reference, Cavalcade on WordPress.org in numbers:
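For readers without access to the snippet referenced above, a clean-up task of this kind could look something like the following. This is a hypothetical reconstruction, not the actual WordPress.org query; the column names follow the Cavalcade schema but are assumptions here:

```sql
-- Sketch of a daily dedupe pass: for each hook/args/site/nextrun
-- combination with more than one waiting row, keep the oldest row
-- and mark the rest as completed so the runner skips them.
UPDATE wp_cavalcade_jobs j
  JOIN (
        SELECT MIN(id) AS keep_id, hook, args, site, nextrun
          FROM wp_cavalcade_jobs
         WHERE status = 'waiting'
         GROUP BY hook, args, site, nextrun
        HAVING COUNT(*) > 1
       ) dupes
    ON j.hook    = dupes.hook
   AND j.args    = dupes.args
   AND j.site    = dupes.site
   AND j.nextrun = dupes.nextrun
   AND j.id     <> dupes.keep_id
   SET j.status = 'completed'
 WHERE j.status = 'waiting';
```

Marking rows completed rather than deleting them keeps an audit trail of how often duplicates appear.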
Maybe change the logic to confirm 100% that the query that looks for existing jobs returned correctly, and if it errors, don't schedule a potential dupe? An error check?
Hey, thanks for these recent updates. Yes, the answer is that we want to understand and resolve this issue for good. Thanks @dd32 for the code that checks for duplicates too. We did some initial discovery work internally but haven't reached any firm conclusions yet. We have a few QA sprints coming up next month, so I will try to get this addressed then.
Hello, any news regarding this issue? We implemented @dd32's duplicate-marking solution as a cron job and it's working, but we still see a lot of duplicates (around 30,000 dupes currently). It's a multisite network with 400+ sites and three workers (runners on AWS instances) with 25 max-workers each. All three sit at 99% CPU all the time, and we think it is due to the underlying dupes problem. We modified the runner a bit: for events with an interval of less than 15 minutes, the nextrun is set to the current execution time plus the interval, so they never get stuck, and events with a one-minute interval were changed to run every two minutes instead. Hopefully you can sort this out!
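The workaround described above was a change to the runner itself, but the effect on the jobs table can be sketched in SQL for illustration. This is a hypothetical sketch under the assumption that `interval` holds seconds and `nextrun` is a datetime; it is not the actual patch:

```sql
-- Sketch: bump one-minute intervals to two minutes...
UPDATE wp_cavalcade_jobs
   SET `interval` = 120
 WHERE status = 'waiting'
   AND `interval` IS NOT NULL
   AND `interval` < 120;

-- ...and for short-interval recurring events that have fallen behind,
-- base the next run on "now + interval" instead of the stale schedule,
-- so they never get permanently stuck in the past.
UPDATE wp_cavalcade_jobs
   SET nextrun = DATE_ADD(NOW(), INTERVAL `interval` SECOND)
 WHERE status = 'waiting'
   AND `interval` IS NOT NULL
   AND `interval` < 900
   AND nextrun < NOW();
```

Note that `interval` is a MySQL reserved word, so it needs backticks as shown.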
It seems this piece of software is abandoned... I'm running into issues on several multisite installations where Cavalcade opens and closes so many connections that it brings down a t4g.large Aurora database server really quickly.
@doradod not abandoned, it's in production on large installations including WordPress.org with smaller instance sizes, but this is a difficult issue to consistently reproduce and test for. Is your problem the same as the one described here?
My problem looks related to a very high number of queries. As an example:
There are hundreds of these, related to different multisite installations ranging from just 2-3 sites per installation to no more than 30. I had to run a script to iterate over the cavalcade jobs tables across my installations to remove duplicates. In any case, since I disabled the supervisor processes I had set up to run Cavalcade in the background, the server has not restarted. I have the logs from a 15-minute period when the issue last happened, and besides WPML's own misbehaviour, Cavalcade is the next heaviest actor, querying the database very intensively.
It's not clear those are definitely duplicates: the args could be different, and they're scheduled for very different times. What's the plugin or code adding the transient cleaner task? Maybe there's something to investigate there.
But this is just one example of a task. I'm also not sure what exactly is making the database open so many new connections, but it definitely stopped happening after shutting down Cavalcade. Still monitoring.
I think I did something not very good: I cleared all records in the jobs tables... I see the jobs are recreated now, but the next run shows they are already late.
They should be recreated and rescheduled, so that should be ok. Are you using InnoDB, and possibly any kind of dual read-only and write DB setup?
They are recreated, but not rescheduled; not a single cron is executing now. Over the last two days PHP has not restarted, so it looks like Cavalcade was responsible for those php-fpm processes hanging and for the database opening too many connections. I have a single Aurora instance; I'm not using separate instances for read and write. Tables are set to InnoDB. Any recommendations?
Did the cavalcade runner stop as well? Cavalcade works by executing WP-CLI commands, so it shouldn't be creating additional php-fpm processes. This could be a general scalability problem, but it's hard to advise without knowing what the stack looks like or how much traffic you're handling. You might need to look into adding Redis or another object cache, if you haven't already, to help reduce the number of requests, as well as a page caching solution via Batcache and CloudFront, for example.
I have Object Cache Pro, nginx fastcgi cache for the whole server, caching scripts that iterate over all my URLs every ~10m caching the sites for a week, and CloudFront on the heaviest sites. I've stopped the Cavalcade processes, but I still saw some entries in the Aurora logs checking the tables, which seems weird... But again php-fpm hung, so now it looks like it's not Cavalcade itself doing it... Anyhow, I'm not sure how to make the crons run again; they fell behind in the database and I cannot move them forward in time.
You would need to deactivate the Cavalcade plugin too to switch off Cavalcade fully. The plugin just hooks into WordPress and tells it to interact with the Cavalcade tables rather than the standard array in the options table. The Cavalcade runner is a separate server process that polls every second (or more, depending on how you configured the runner), so if the runner is stopped then no WordPress cron tasks will run.

You'll need to profile your application. Too many long-running, blocking HTTP requests for a missing resource is one common cause; a plugin or other code trying to update or write to the DB on every request is another.

Anyway, I think your issue is not the one described here. You could confirm that by checking whether the stored args and start times of the jobs match: there will be multiple entries for a given hook if different arguments and times are passed for it.
@roborourke What do you think about preventing duplicate events as suggested here: https://core.trac.wordpress.org/ticket/49693 ?
@citrika I think we follow the same logic as core; in theory it shouldn't happen, but we have an opportunity here that core does not have, which is using MySQL features. I would look into adding a unique index based on a generated hash column. In the meantime there's #89 (comment).
Addresses #89. Using a generated column to compute a hash of the fields that define a cron task's uniqueness lets us add a unique index to the table. Changing the $wpdb->insert() call to use replace() means that an existing record will be updated rather than causing an error. Need to determine if this is really the right way to define uniqueness. There's also potential for improving lookup performance by using a generated hash column for args alone, as that's more reasonable to add an index for.
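As a rough sketch of the approach this PR describes, the migration might look something like the following. This assumes MySQL 5.7+ (generated columns) and the existing wp_cavalcade_jobs schema; the choice of hashed fields and the column name are illustrative, not the PR's final definition of uniqueness:

```sql
-- Sketch: derive a fixed-width hash from the fields that define a
-- job's uniqueness, then enforce uniqueness with an index on it.
-- (A unique index directly on hook + args + site is not possible
-- because args is a longtext column.)
ALTER TABLE wp_cavalcade_jobs
  ADD COLUMN job_hash CHAR(32)
    GENERATED ALWAYS AS (MD5(CONCAT_WS('|', site, hook, args, nextrun))) STORED,
  ADD UNIQUE KEY job_hash (job_hash);
```

With the index in place, switching the plugin from $wpdb->insert() to $wpdb->replace() means a colliding insert updates the existing row instead of raising a duplicate-key error.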
Seeing a potential issue where (in this example) scheduled-post Cavalcade jobs are duplicated, causing a race condition, and posts don't publish.
Will update upon further investigation.