Built site for gh-pages
Quarto GHA Workflow Runner committed Oct 22, 2024
1 parent 6af6d68 commit 9d9c7d3
Showing 7 changed files with 103 additions and 101 deletions.
2 changes: 1 addition & 1 deletion .nojekyll
Original file line number Diff line number Diff line change
@@ -1 +1 @@
6de539b8
3f37df37
2 changes: 1 addition & 1 deletion admins/howto/calendar-scaler.html
Original file line number Diff line number Diff line change
Expand Up @@ -490,7 +490,7 @@ <h3 class="anchored" data-anchor-id="working-on-testing-and-deploying-the-calend
<p>All file locations in this section will assume that you are in the <code>datahub/images/node-placeholder-scaler/</code> directory.</p>
<p>It is strongly recommended that you create a new python 3.11 environment before doing any dev work on the scaler. With <code>conda</code>, you can run the following commands to create one:</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode bash code-with-copy"><code class="sourceCode bash"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="ex">conda</span> create <span class="at">-yn</span> scalertest python=3.11</span>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a><span class="ex">pip</span> install <span class="at">-r</span> requirements.txt</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<span id="cb1-2"><a href="#cb1-2" aria-hidden="true" tabindex="-1"></a><span class="ex">pip</span> install <span class="at">-r</span> images/node-placeholder-scaler/requirements.txt</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<p>Any changes to the scaler code will require you to run <code>chartpress</code> to redeploy the scaler to GCP.</p>
<p>Here is an example of how you can test any changes to <code>scaler/calendar.py</code> locally in the python interpreter:</p>
<div class="sourceCode" id="cb2"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="co"># these tests will use some dates culled from the calendar with varying numbers of events.</span></span>
Expand Down
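The "highest value wins" behavior described in this section (config defaults overridden by whatever calendar events are currently active) can be sketched as follows. This is a minimal illustration of the idea only; the function and variable names are hypothetical and are not the real `scaler/scaler.py` API.

```python
def resolve_replica_counts(defaults, active_events):
    """Return the placeholder replica count per hub.

    defaults: dict mapping hub name -> configured default replica count.
    active_events: list of dicts, one per currently-active calendar event,
        each mapping hub name -> replica count requested by that event.
    For each hub, any active events override the default, and the highest
    event value wins.
    """
    counts = dict(defaults)
    for hub in counts:
        event_values = [ev[hub] for ev in active_events if hub in ev]
        if event_values:
            counts[hub] = max(event_values)
    return counts

# A nightly "cool down" event sets all hubs to 0, while a simultaneous
# event keeps data100 scaled up; the scaler keeps the higher value.
defaults = {"datahub": 1, "data100": 1}
active_events = [
    {"datahub": 0, "data100": 0},  # cool-down event covering all hubs
    {"data100": 5},                # simultaneous per-hub scale-up event
]
print(resolve_replica_counts(defaults, active_events))
# {'datahub': 0, 'data100': 5}
```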
8 changes: 5 additions & 3 deletions admins/howto/new-hub.html
Original file line number Diff line number Diff line change
Expand Up @@ -482,12 +482,14 @@ <h2 class="anchored" data-anchor-id="why-create-a-new-hub">Why create a new hub?
<h2 class="anchored" data-anchor-id="prerequisites">Prerequisites</h2>
<p>Working installs of the following utilities:</p>
<ul>
<li><a href="https://github.com/mozilla/sops/releases">sops</a></li>
<li><a href="https://hubploy.readthedocs.io/en/latest/index.html">hubploy</a></li>
<li><a href="https://pypi.org/project/chartpress/">chartpress</a></li>
<li><a href="https://pypi.org/project/cookiecutter/">cookiecutter</a></li>
<li><a href="https://cloud.google.com/sdk/docs/install">gcloud</a></li>
<li><a href="https://github.com/berkeley-dsep-infra/hubploy">hubploy</a></li>
<li><a href="https://kubernetes.io/docs/tasks/tools/">kubectl</a></li>
<li><a href="https://github.com/audreyr/cookiecutter">cookiecutter</a></li>
<li><a href="https://github.com/mozilla/sops/releases">sops</a></li>
</ul>
<p>The easiest way to install <code>chartpress</code>, <code>cookiecutter</code> and <code>hubploy</code> is to run <code>pip install -r dev-requirements.txt</code> from the root of the <code>datahub</code> repo.</p>
<p>Proper access to the following systems:</p>
<ul>
<li>Google Cloud IAM: <em>owner</em></li>
Expand Down
38 changes: 19 additions & 19 deletions incidents/index.html

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions search.json
Original file line number Diff line number Diff line change
Expand Up @@ -386,7 +386,7 @@
"href": "admins/howto/calendar-scaler.html#calendar-autoscaler",
"title": "Calendar Node Pool Autoscaler",
"section": "Calendar Autoscaler",
    "text": "Calendar Autoscaler\nThe code for the calendar autoscaler is a python 3.11 script, located here: https://github.com/berkeley-dsep-infra/datahub/tree/staging/images/node-placeholder-scaler/scaler\n\nHow the scaler works\nThere is a k8s pod running in the node-placeholder namespace, which simply runs python3 -m scaler. This script runs in an infinite loop, and every 60 seconds checks the scaler config and calendar for entries. It then uses the highest value provided as the number of placeholder replicas for any given hub. This means that if there’s a daily evening event to ‘cool down’ the number of replicas for all hubs to 0, and a simultaneous event to set one or more hubs to a higher number, the scaler will see this and keep however many node placeholders specified up and ready to go.\nAfter determining the number of replicas needed for each hub, the scaler will create a k8s template and run kubectl in the pod.\n\n\nUpdating the scaler config\nThe scaler config sets the default number of node-placeholders that are running at any given time. These values can be overridden by creating events in the DataHub Scaling Events calendar.\nWhen classes are in session, these defaults are all typically set to 1, and during breaks (or when a hub is not expected to be in use) they can be set to 0.\nAfter making changes to values.yaml, create a PR normally and our CI will push the new config out to the node-placeholder pod. There is no need to manually restart the node-placeholder pod as the changes will be picked up automatically.\n\n\nWorking on, testing and deploying the calendar scaler\nAll file locations in this section will assume that you are in the datahub/images/node-placeholder-scaler/ directory.\nIt is strongly recommended that you create a new python 3.11 environment before doing any dev work on the scaler. With conda, you can run the following commands to create one:\nconda create -yn scalertest python=3.11\npip install -r requirements.txt\nAny changes to the scaler code will require you to run chartpress to redeploy the scaler to GCP.\nHere is an example of how you can test any changes to scaler/calendar.py locally in the python interpreter:\n# these tests will use some dates culled from the calendar with varying numbers of events.\nimport scaler.calendar\nimport datetime\nimport zoneinfo\n\ntz = zoneinfo.ZoneInfo(key='America/Los_Angeles')\nzero_events_noon_june = datetime.datetime(2023, 6, 14, 12, 0, 0, tzinfo=tz)\none_event_five_pm_april = datetime.datetime(2023, 4, 27, 17, 0, 0, tzinfo=tz)\nthree_events_eight_thirty_pm_march = datetime.datetime(2023, 3, 6, 20, 30, 0, tzinfo=tz)\ncalendar = scaler.calendar.get_calendar('https://calendar.google.com/calendar/ical/c_s47m3m1nuj3s81187k3b2b5s5o%40group.calendar.google.com/public/basic.ics')\nzero_events = scaler.calendar.get_events(calendar, time=zero_events_noon_june)\none_event = scaler.calendar.get_events(calendar, time=one_event_five_pm_april)\nthree_events = scaler.calendar.get_events(calendar, time=three_events_eight_thirty_pm_march)\n\nassert len(zero_events) == 0\nassert len(one_event) == 1\nassert len(three_events) == 3\nget_events returns a list of ical ical.event.Event class objects.\nThe method for testing scaler/scaler.py is similar to above, but the only things you’ll be able to test locally are the make_deployment() and get_replica_counts() functions.\nWhen you’re ready, create a PR. The deployment workflow is as follows:\n\nGet all authed-up for chartpress by performing the documented steps.\nRun chartpress --push from the root datahub/ directory. If this succeeds, check your git status and add datahub/node-placeholder/Chart.yaml and datahub/node-placeholder/values.yml to your PR.\nMerge to staging and then prod.\n\n\n\nChanging python imports\nThe python requirements file is generated using requirements.in and pip-compile. If you need to change/add/update any packages, you’ll need to do the following:\n\nEnsure you have the correct python environment activated (see above).\nPip install pip-tools\nEdit requirements.in and save your changes.\nExecute pip-compile requirements.in, which will update the requirements.txt.\nCheck your git status and diffs, and create a pull request if necessary.\nGet all authed-up for chartpress by performing the documented steps.\nRun chartpress --push from the root datahub/ directory. If this succeeds, check your git status and add datahub/node-placeholder/Chart.yaml and datahub/node-placeholder/values.yml to your PR.\nMerge to staging and then prod.",
    "text": "Calendar Autoscaler\nThe code for the calendar autoscaler is a python 3.11 script, located here: https://github.com/berkeley-dsep-infra/datahub/tree/staging/images/node-placeholder-scaler/scaler\n\nHow the scaler works\nThere is a k8s pod running in the node-placeholder namespace, which simply runs python3 -m scaler. This script runs in an infinite loop, and every 60 seconds checks the scaler config and calendar for entries. It then uses the highest value provided as the number of placeholder replicas for any given hub. This means that if there’s a daily evening event to ‘cool down’ the number of replicas for all hubs to 0, and a simultaneous event to set one or more hubs to a higher number, the scaler will see this and keep however many node placeholders specified up and ready to go.\nAfter determining the number of replicas needed for each hub, the scaler will create a k8s template and run kubectl in the pod.\n\n\nUpdating the scaler config\nThe scaler config sets the default number of node-placeholders that are running at any given time. These values can be overridden by creating events in the DataHub Scaling Events calendar.\nWhen classes are in session, these defaults are all typically set to 1, and during breaks (or when a hub is not expected to be in use) they can be set to 0.\nAfter making changes to values.yaml, create a PR normally and our CI will push the new config out to the node-placeholder pod. There is no need to manually restart the node-placeholder pod as the changes will be picked up automatically.\n\n\nWorking on, testing and deploying the calendar scaler\nAll file locations in this section will assume that you are in the datahub/images/node-placeholder-scaler/ directory.\nIt is strongly recommended that you create a new python 3.11 environment before doing any dev work on the scaler. With conda, you can run the following commands to create one:\nconda create -yn scalertest python=3.11\npip install -r images/node-placeholder-scaler/requirements.txt\nAny changes to the scaler code will require you to run chartpress to redeploy the scaler to GCP.\nHere is an example of how you can test any changes to scaler/calendar.py locally in the python interpreter:\n# these tests will use some dates culled from the calendar with varying numbers of events.\nimport scaler.calendar\nimport datetime\nimport zoneinfo\n\ntz = zoneinfo.ZoneInfo(key='America/Los_Angeles')\nzero_events_noon_june = datetime.datetime(2023, 6, 14, 12, 0, 0, tzinfo=tz)\none_event_five_pm_april = datetime.datetime(2023, 4, 27, 17, 0, 0, tzinfo=tz)\nthree_events_eight_thirty_pm_march = datetime.datetime(2023, 3, 6, 20, 30, 0, tzinfo=tz)\ncalendar = scaler.calendar.get_calendar('https://calendar.google.com/calendar/ical/c_s47m3m1nuj3s81187k3b2b5s5o%40group.calendar.google.com/public/basic.ics')\nzero_events = scaler.calendar.get_events(calendar, time=zero_events_noon_june)\none_event = scaler.calendar.get_events(calendar, time=one_event_five_pm_april)\nthree_events = scaler.calendar.get_events(calendar, time=three_events_eight_thirty_pm_march)\n\nassert len(zero_events) == 0\nassert len(one_event) == 1\nassert len(three_events) == 3\nget_events returns a list of ical ical.event.Event class objects.\nThe method for testing scaler/scaler.py is similar to above, but the only things you’ll be able to test locally are the make_deployment() and get_replica_counts() functions.\nWhen you’re ready, create a PR. The deployment workflow is as follows:\n\nGet all authed-up for chartpress by performing the documented steps.\nRun chartpress --push from the root datahub/ directory. If this succeeds, check your git status and add datahub/node-placeholder/Chart.yaml and datahub/node-placeholder/values.yml to your PR.\nMerge to staging and then prod.\n\n\n\nChanging python imports\nThe python requirements file is generated using requirements.in and pip-compile. If you need to change/add/update any packages, you’ll need to do the following:\n\nEnsure you have the correct python environment activated (see above).\nPip install pip-tools\nEdit requirements.in and save your changes.\nExecute pip-compile requirements.in, which will update the requirements.txt.\nCheck your git status and diffs, and create a pull request if necessary.\nGet all authed-up for chartpress by performing the documented steps.\nRun chartpress --push from the root datahub/ directory. If this succeeds, check your git status and add datahub/node-placeholder/Chart.yaml and datahub/node-placeholder/values.yml to your PR.\nMerge to staging and then prod.",
"crumbs": [
"Using DataHub",
"Contributing to DataHub",
Expand Down Expand Up @@ -568,7 +568,7 @@
"href": "admins/howto/new-hub.html#prerequisites",
"title": "Create a New Hub",
"section": "Prerequisites",
"text": "Prerequisites\nWorking installs of the following utilities:\n\nsops\nhubploy\ngcloud\nkubectl\ncookiecutter\n\nProper access to the following systems:\n\nGoogle Cloud IAM: owner\nWrite access to the datahub repo\nOwner or admin access to the berkeley-dsep-infra organization",
"text": "Prerequisites\nWorking installs of the following utilities:\n\nchartpress\ncookiecutter\ngcloud\nhubploy\nkubectl\nsops\n\nThe easiest way to install chartpress, cookiecutter and hubploy is to run pip install -r dev-requirements.txt from the root of the datahub repo.\nProper access to the following systems:\n\nGoogle Cloud IAM: owner\nWrite access to the datahub repo\nOwner or admin access to the berkeley-dsep-infra organization",
"crumbs": [
"Using DataHub",
"Contributing to DataHub",
Expand Down
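The docs above say that after resolving replica counts the scaler "will create a k8s template and run kubectl in the pod," via a `make_deployment()` function. A minimal sketch of that templating step is below; the manifest shape and function name are illustrative assumptions, not the actual `scaler/scaler.py` implementation.

```python
from string import Template

# Hypothetical manifest template; the real scaler's Deployment spec
# (labels, pod template, resource requests) is more involved.
DEPLOYMENT_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${hub}-node-placeholder
  namespace: node-placeholder
spec:
  replicas: ${replicas}
""")

def make_placeholder_manifest(hub, replicas):
    """Render a node-placeholder Deployment manifest for one hub."""
    return DEPLOYMENT_TEMPLATE.substitute(hub=hub, replicas=replicas)

manifest = make_placeholder_manifest("data100", 5)
print(manifest)
```

The scaler pod would then feed the rendered manifest to something like `kubectl apply -f -` to reconcile the placeholder Deployment.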
