# API Reference

This page contains the automatically generated API documentation for Coral Credits.
# Coral Credits

Coral Credits is a resource management system that helps build a "coral reef style" fixed-capacity cloud, cooperatively sharing community resources through interfaces such as Azimuth, OpenStack Blazar and Slurm.

Please do read about our proposed mission.
# Coral Credits Mission

Coral Credits aims to support the building of a "coral reef style" fixed-capacity cloud. A coral reef style cloud involves cooperatively sharing resources to maximise your investment in both people and cloud resources.

Coral Credits focuses on how to support sharing of resources from a federated e-infrastructure (or community cloud) where resources are consumed via multiple interfaces such as Azimuth, OpenStack Blazar and Slurm.
## On-boarding Accounts, Projects and Users

We assume clouds are trying to follow the AARC blueprint, such that user groups are managed via a central AAAI proxy, typically based on Indigo IAM or Keycloak.

Typically this means the project lead (or principal investigator) is responsible for ensuring that the membership of the groups they manage in the central AAAI proxy is correct.
Coral Credits accounts are each associated with a group defined in the central AAAI proxy, and access to the account is limited to that group. The group typically has access to many different resource providers, and often uses more than one interface to access those resources.
## Resource Class and Resource Class Hours

The Coral Credits operators are responsible for defining the list of available resource classes. We use the definition of resource classes used by OpenStack, as defined in the Python library os-resource-classes.
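As a small illustration (not part of the Coral Credits API), credits can then be expressed as a mapping from resource class names, as exposed by os-resource-classes, to resource class hours:

```python
# Illustrative sketch only: standard OpenStack resource class names from
# os-resource-classes used as keys for a dict of resource class hours.
import os_resource_classes as orc

# e.g. 1000 VCPU hours, 2,048,000 MB hours of RAM, 10,000 GB hours of disk
credit_hours = {
    orc.VCPU: 1_000.0,
    orc.MEMORY_MB: 2_048_000.0,
    orc.DISK_GB: 10_000.0,
}

for resource_class, hours in credit_hours.items():
    print(f"{resource_class}: {hours} resource class hours")
```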
## Allocating credits to Accounts

A federation manager is typically responsible for updating the allocation of resource credits given to each account.

A credit allocation has the following properties (a sketch of one is shown after this list):

- a single account it is associated with
- a start date and an end date
- credits, a dict of resource class to resource class hours
- a list of one or more resource providers where you can consume these credits, with the default being any resource provider
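A minimal sketch of what an allocation could look like; the field names below are illustrative assumptions, not a confirmed Coral Credits schema:

```python
# Hypothetical shape of a credit allocation; field names are assumptions
# chosen for illustration, not the actual Coral Credits data model.
credit_allocation = {
    "account": "project-alpha",
    "start": "2024-01-01T00:00:00Z",
    "end": "2024-06-30T23:59:59Z",
    # dict of resource class -> resource class hours
    "credits": {"VCPU": 50_000.0, "MEMORY_MB": 100_000_000.0},
    # default (e.g. an empty list) could mean any resource provider
    "resource_providers": ["site-a-openstack"],
}
```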
To simplify the initial implementation, no account can have overlapping credit allocations valid for the same resource provider, although an existing allocation can be increased or decreased at any time. The hope is to add this support in a future release, under the assumption that any resource consumption is only drawn from a single credit pool.
## Resource Providers

Resource providers are the places where an account gets to consume its allocated credits.

The Coral Credits operator is responsible for onboarding each resource provider and giving it a token to access the resource consumption API.
## Resource Consumption Request

Cloud credits are consumed at a specific resource provider. The units are resource class hours; for example, a platform using 4 VCPUs for 3 hours consumes 12 VCPU hours. The resource provider has to map its local view of an account and user into how Coral Credits views that account. Note this means the user reference given is likely specific to each resource provider, although the recommendation will be to use an email address, to make differences between resource providers less likely.

Resource providers should create an appropriate resource consumption request before allowing resources to be consumed. Only if enough credits are available for the duration of the request will the request be accepted by the Coral Credits system.
A resource consumption request has the following properties (a sketch of one is shown after this list):

- account
- resource provider
- resource consumption interface (e.g. Blazar, Azimuth or Slurm)
- email address of the user requesting the resource
- resource footprint, i.e. a list of resource classes and float amounts
- proposed start date
- optionally a proposed end date; if omitted, the request runs until whichever comes first: all credits are used or all credits have expired
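A hedged sketch of submitting such a request, assuming a hypothetical endpoint path and field names:

```python
# Hypothetical example of creating a resource consumption request. The
# endpoint, token handling and field names are assumptions for illustration.
import requests

CORAL_API = "https://coral.example.org"  # assumed deployment URL

consumption_request = {
    "account": "project-alpha",
    "resource_provider": "site-a-openstack",
    "interface": "azimuth",            # e.g. blazar, azimuth or slurm
    "user_email": "pi@example.org",
    "resource_footprint": {"VCPU": 8.0, "MEMORY_MB": 16_384.0},
    "start": "2024-03-01T09:00:00Z",
    "end": "2024-03-08T09:00:00Z",     # optional: omit to run until credits run out
}

response = requests.post(
    f"{CORAL_API}/v1/consumption-requests",
    json=consumption_request,
    headers={"Authorization": "Bearer <resource-provider-token>"},
    timeout=30,
)
response.raise_for_status()  # the request is rejected if credits are insufficient
```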
## Example: Azimuth short lived platform

Azimuth platforms are now forced to pick an end date, such that we can make a credit consumption request for a platform we are about to create.
If there are not enough credits, it will be clear what credits are required to create the platform, possibly including which platforms could be stopped early to free up credits for the requested platform.

When a platform is stopped before the originally agreed time, the consumption record should be updated with the new end date, returning the credits back to the user.
## Example: Azimuth long lived platform

Where platforms are long lived, the scheduled end date needs to be either when their current credits expire, or possibly sooner if the proposed platform will consume all remaining credits before those credits expire.
Users need to be warned when platforms are about to be automatically deleted, so they can get additional credits allocated.

When credits are allocated "back to back" with no gap, the user is able to request a change to the end date of the existing credit consumption request, with the option to extend to the maximum date allowed by the current credit allocation for the associated account.
## Example: Azimuth variable resource usage

All the platforms so far have assumed a uniform resource usage throughout the lifetime of the platform.

While not supported in the initial implementation, we need to support a variety of increases and decreases in resource usage during the lifetime of the cluster. We likely need the option for a resource consumption request's resource footprint records to have a start and end date that is independent of the overall resource consumption request.
## Example: OpenStack Blazar reservation

This is very similar to the Azimuth case, except it is for an arbitrary reservation via the Blazar API.

To help reservations line up nicely, and reduce resource fragmentation, we could enforce rounding credits up to the nearest time window (e.g. 1 hour, or one of three 8-hour working day windows each day).
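One possible way to round a requested end time up to the nearest window boundary, assuming a 1-hour window:

```python
# Sketch of rounding a reservation end time up to the next window boundary
# to reduce fragmentation; the 1-hour window size is only an example.
from datetime import datetime, timedelta, timezone

def round_up_to_window(end: datetime, window: timedelta = timedelta(hours=1)) -> datetime:
    """Round `end` up to the next multiple of `window` since the Unix epoch."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    windows = -(-(end - epoch) // window)  # ceiling division on timedeltas
    return epoch + windows * window

print(round_up_to_window(datetime(2024, 3, 1, 9, 20, tzinfo=timezone.utc)))
# 2024-03-01 10:00:00+00:00
```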
## Example: Many OpenStack projects, one account

It is common for one project to be given separate OpenStack projects for Dev/Test, Staging and Production. In this case, it would be good if they all shared a single credit account, while keeping it clear which OpenStack project is consuming the resources.
## Example: Slurm batch job credits

You could have a single pool of credits, where you could self-service request that some amount of Coral Credits is given to your Slurm account, such that you can submit some jobs to your chosen Slurm cluster.

For example, you could reserve 30 days of 1k CPU hours with a Slurm cluster, and if accepted those cloud credits are consumed from the central pool. If that Slurm cluster is very busy, it might not have any capacity available for your selected 30 day period, but there might be some available next month. The idea is that a specific federation only has a limited number of CPU hours available each month from that Slurm system, and users reserve some of those, on demand, when they need them and have not spent them on other cloud credits.

It is possible that very large credit delegations to a Slurm cluster could be used to expand the Slurm cluster using available cloud resources, provided they come from a shared pool of resources, such as a single OpenStack Ironic based cloud.
## Example: Slurm reservations

Similar to Blazar, you could imagine building the option to self-service Slurm reservations against a shared resource pool.
## Example: Onboarding to Public Cloud

Without care, people can run up unexpected cloud bills. Automation could convert account credits into a public cloud account, correctly set up with spend limits, bringing the pre-paid credit system to public clouds.

With all transfers of credits, care must be taken to ensure unused credits are refundable where possible, for example by capping public cloud spend (where possible) rather than pre-paying it. Work is needed to understand how this works with JISC OCRE: https://www.jisc.ac.uk/ocre-cloud-framework
## Example: Shared Job Queue (idea)

There are various systems that could create a job queue that spans multiple resource providers (or in some cases a common interface at multiple providers):

- https://armadaproject.io/
- https://dirac.readthedocs.io/en/latest/index.html
- https://kueue.sigs.k8s.io/
- https://github.com/elixir-cloud-aai/cwl-WES
- https://nextflow.io/
Cloud credit users could be consuming cloud credits when they submit large groups of jobs (or maybe the user trades in cloud credits for some credits on the shared job queue).

Some "free" or "cheaper" queues could exist for preemptible jobs, which could help consume the free capacity that exists between cloud reservations.
## Example: Jupyter Hub (idea)

When a user logs into JupyterHub and their container is spun up, this could perhaps be blocked (using a custom Authorization plugin or a jupyterhub-singleuser wrapper) if the user does not have any credits left, alongside matching configuration in the idle-culling system.
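A minimal sketch of how that check might look as a Spawner pre-spawn hook; the Coral Credits balance endpoint and response shape used here are assumptions, not an existing API:

```python
# jupyterhub_config.py -- illustrative sketch only. The balance endpoint and
# its JSON response are hypothetical; only the pre_spawn_hook wiring is real.
import requests

c = get_config()  # noqa: provided by JupyterHub when loading this config file

CORAL_API = "https://coral.example.org"  # assumed deployment URL

def check_credits(spawner):
    """Refuse to start a single-user server when the account has no credits left."""
    username = spawner.user.name
    resp = requests.get(f"{CORAL_API}/v1/accounts/{username}/balance", timeout=10)
    resp.raise_for_status()
    if resp.json().get("remaining", 0) <= 0:
        raise RuntimeError(f"No Coral Credits remaining for {username}")

c.Spawner.pre_spawn_hook = check_credits
```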
## Example: Seedcorn allocation

One thing not possible with quota is being able to hand out a very small amount of resource for people to try things out. You could say all members of an institution automatically get a seedcorn allocation they could use. This could become a default allocation amount for any automatically created accounts.
## Audit logs

All changes should be recorded in an audit log that can be queried via the API.
## Visibility for Account holders

There should be a clear view of:

- all active resource allocations for the account
- all consumers associated with each resource allocation, so it is clear how the credits are being consumed
- a prediction of how many credits will be left at the end of the allocation
## Prometheus metrics for operators

Various stats should be made available via a Prometheus metrics endpoint, including these per-account metrics (a sketch of an exporter is shown after this list):

- size of currently allocated credits
- size of any non-current credits
- remaining amount for currently active credit allocations
- any active resource consumption records, including user and account details
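A sketch of exposing such metrics with prometheus_client; the metric and label names are illustrative assumptions rather than a defined Coral Credits contract:

```python
# Illustrative exporter sketch; metric and label names are assumptions.
from prometheus_client import Gauge, start_http_server

allocated_hours = Gauge(
    "coral_credits_allocated_hours",
    "Currently allocated credits per account and resource class",
    ["account", "resource_class"],
)
remaining_hours = Gauge(
    "coral_credits_remaining_hours",
    "Remaining credits for currently active allocations",
    ["account", "resource_class"],
)

allocated_hours.labels(account="project-alpha", resource_class="VCPU").set(50_000)
remaining_hours.labels(account="project-alpha", resource_class="VCPU").set(12_345)

start_http_server(8000)  # serves the /metrics endpoint on port 8000
```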
## Periodic reconciliation

Each resource provider is responsible for regularly checking whether there is any drift between the current resource consumption requests and the current state of resource consumption records. Only the service knows how to map the records in Coral Credits back to the real resources in that service.
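A rough sketch of a provider-side reconciliation loop; the endpoint, the consumer listing and the local lookup are all assumptions for illustration:

```python
# Illustrative reconciliation sketch: compare the provider's local reservations
# with the consumption records Coral Credits holds. Endpoints are hypothetical.
import requests

CORAL_API = "https://coral.example.org"  # assumed deployment URL
PROVIDER = "site-a-openstack"

def fetch_coral_consumers() -> dict:
    resp = requests.get(
        f"{CORAL_API}/v1/resource-providers/{PROVIDER}/consumers", timeout=30
    )
    resp.raise_for_status()
    return {record["ref"]: record for record in resp.json()}

def fetch_local_reservations() -> dict:
    # Only the resource provider knows how to list its real resources,
    # e.g. by querying Blazar leases or Slurm reservations.
    return {}

def reconcile() -> None:
    coral = fetch_coral_consumers()
    local = fetch_local_reservations()
    for ref in set(local) - set(coral):
        print(f"local reservation {ref} has no consumption record in Coral Credits")
    for ref in set(coral) - set(local):
        print(f"consumption record {ref} has no matching local reservation")

if __name__ == "__main__":
    reconcile()
```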
## No tracking of usage or efficiency

Coral Credits focuses on credit allocations and consumption records per account, not the current usage in each service. Coral Credits does not track whether the resources are being fully utilized (e.g. job efficiency).
## Policy

Resource providers, combined with their use of the central AAI proxy, must ensure users have accepted all policies before requesting a resource.

The Coral Credits admin must ensure the account PI has accepted all the policies for the duration of any credit allocation to their account.