To support scalable deployments of the Aspects infrastructure, we would like to add EduNEXT's production Helm charts to the Harmony project. Specifically, these would support:

- A version of the ClickHouse Operator Helm chart for running ClickHouse in a scalable, clustered mode
- Celery settings for Aspects
- Autoscaling for Ralph and Superset
For the actual values, we can reference the Ralph Helm chart and the Superset Helm chart. We don't use the Superset workers extensively, but it would be a good addition to have autoscaling values for them too.
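As a rough illustration, autoscaling values in a Helm chart typically look like the fragment below. The key names follow the common `helm create` autoscaling convention; the chart paths and numbers are assumptions, not the actual Ralph or Superset chart schema:

```yaml
# Hypothetical values.yaml fragment -- key names follow the standard
# helm-create autoscaling convention and may differ in the real charts.
ralph:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
superset:
  worker:
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 4
      targetCPUUtilizationPercentage: 75
```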
Celery
The default Celery workers run with a process pool, which assumes all tasks are CPU-intensive. Aspects tasks, however, are mainly I/O-bound: each one makes a call or set of calls to Redis (for batching) or to Ralph (which in turn calls ClickHouse), so the worker is CPU-idle most of the time. At eduNEXT, we have developed a Tutor Celery plugin to manage multiple Celery queues. With it, we tested switching the default LMS worker deployment to a gevent pool, which uses lightweight threads, with concurrency set to 100. This significantly improved task performance.
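For reference, switching a worker to a gevent pool uses standard Celery CLI flags. The application name below (`lms.celery`) is illustrative of the setup described above, not a prescribed configuration:

```sh
# Run a Celery worker with a gevent pool of 100 lightweight greenlets,
# suited to I/O-bound tasks such as the Aspects event sinks.
# Requires gevent to be installed alongside celery.
celery --app=lms.celery worker --pool=gevent --concurrency=100
```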
The plan would be:
- Add gevent as a dependency of edx-platform.
- Add notes on scaling and configuration for Aspects tasks: how to improve their performance, how to manage multiple Celery queues, and how to dedicate a queue to Aspects.
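A dedicated queue for Aspects boils down to a Celery routing entry. The sketch below is hypothetical: the task pattern and queue name are illustrative placeholders, not the actual edx-platform identifiers. The small helper only mimics, very loosely, how Celery matches glob routes, to show what the setting means:

```python
# Hypothetical Celery routing sketch. Task paths and the queue name are
# illustrative; real deployments would use the actual Aspects task names.
CELERY_TASK_ROUTES = {
    # Send every Aspects event task to its own queue, so it can be
    # consumed by a worker running a gevent pool with high concurrency.
    "event_routing_backends.tasks.*": {"queue": "edx.lms.core.aspects"},
}


def route_for(task_name, routes, default="default"):
    """Return the queue for task_name using simple '*' suffix matching,
    loosely mimicking Celery's glob-style task routing."""
    for pattern, options in routes.items():
        if pattern.endswith("*") and task_name.startswith(pattern[:-1]):
            return options["queue"]
    return default
```

Everything else (LMS core tasks, for example) falls through to the default queue, so the Aspects worker can be scaled independently of the rest.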
ClickHouse
Support for the ClickHouse operator will be added to Harmony, together with examples and documentation for running Aspects in production.
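For a sense of what the operator manages, a minimal `ClickHouseInstallation` resource for the Altinity clickhouse-operator looks like the sketch below; the name and cluster sizing are illustrative assumptions, not the values Harmony would ship:

```yaml
# Hypothetical ClickHouseInstallation manifest for the Altinity
# clickhouse-operator; shard/replica counts are illustrative only.
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: aspects
spec:
  configuration:
    clusters:
      - name: aspects
        layout:
          shardsCount: 1
          replicasCount: 2
```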