
Enable consumption and configuration of specific hyperscaler resources [EPIC] #18195

Open · 15 tasks
varbanv opened this issue Sep 20, 2023 · 21 comments
Labels: area/control-plane (Related to all activities around Kyma Control Plane), Epic

varbanv commented Sep 20, 2023

Description

Provide a way for end users to consume and be charged for a pre-defined set of hyperscaler resources:

  • specialized node types, for example GPU and ARM nodes, or network-, memory-, and CPU-optimized nodes
  • additional types of storage, including SSD and read-write-many options

Making standard machine types configurable in worker pools is addressed in the separate story #18709. It is expected that any additional node-specific settings will be added to that concept as further options.
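
As a rough illustration of how this could surface to end users, here is a sketch of a BTP service instance for the Kyma runtime with an additional specialized worker pool. The parameter names (additionalWorkerNodePools, the nested machineType, and the autoscaler fields) are assumptions for illustration only; the actual API is being defined in #18709:

```yaml
# Sketch only: the worker pool parameter names below are illustrative
# assumptions, not the final API from #18709.
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: my-kyma-runtime
spec:
  serviceOfferingName: kymaruntime
  servicePlanName: aws
  parameters:
    name: my-kyma-cluster
    region: eu-central-1
    machineType: m5.xlarge          # default worker pool keeps standard types
    additionalWorkerNodePools:      # assumed field for specialized nodes
      - name: gpu-pool
        machineType: g4dn.xlarge    # example GPU machine type on AWS
        autoScalerMin: 1
        autoScalerMax: 3
```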

Context

Problem

Currently, Kyma is a layer on top of Kubernetes and as such exposes a very limited set of infrastructure configuration options at provisioning time.
However, customers looking to adopt Kyma who already use hyperscaler offerings take advantage of specialized resources in their workloads (for example faster storage, GPU nodes, or network-optimized nodes). As a result, such users cannot onboard onto Kyma without re-engineering their workloads.

Benefits

For customers:

  • greater flexibility around infrastructure requirements
  • ability to meet requirements in order to move workloads to Kyma
  • reduced maintenance related to hyperscaler accounts

For us:

  • increase adoption
  • abstract and bundle infrastructure related requirements in one feature
  • mitigate the abandoned BYOC approach

Potential problems

  • billing could become more complex if we don't introduce a way for customers to track their costs

Resources to support

  • advanced machine types
    • Mandatory, as this is the most requested feature
    • like GPUs; ultimately, all types which are available in Gardener. The machines are selectable in the service instance parameters as part of the worker pool setup.
    • including ARM types, which will require an additional setting in the worker pool config
    • advanced machine types can be very expensive, and the price differs depending on multiple factors such as the platform; generic pricing based on the amount of CPU/memory seems inadequate
  • Cloud Manager Module resources
    • Mandatory, as it is about to go live and we need charging for it
    • All are requested via custom resources in the cluster (see the sketch after this list)
    • NFS storage
      • Charged per volume size and type
    • VPC usage
      • no price per se
    • Redis instances
      • price per usage and data transfer rate
  • Application LoadBalancers
    • Some hyperscalers provide different load balancer types to be used in front of a Kubernetes cluster
    • Will be selected via annotations on the Istio/Service resources (see the sketch after this list)
    • Mandatory, as currently you can get multiple load balancers via Kubernetes Services of type LoadBalancer without being charged
  • Storage classes
    • Customers can create additional storage classes via k8s resources (should the available classes be provided by default?)
    • Customers need to be charged based on the storage class
    • Mandatory, as currently expensive storage is charged at the base price
  • Web/Application Firewall
    • Part of firewall package, usually in an enterprise tier
    • Paid with a monthly fee + a fee per amount of protected resources
    • Most probably configured via Cloud Manager module?
    • Not mandatory for closing the epic at the moment, but should be considered in the concept
  • Assured workload (GCP)
    • Needed for restricted markets like KSA
    • The premium tier adds a 20% surcharge on machine types
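
To make the in-cluster consumption items above more concrete, here is a minimal sketch of what two such requests could look like. The AWS load balancer annotation key is one that exists today; the Cloud Manager API group, kind, and fields are assumptions for illustration, not the module's final API:

```yaml
# 1) Load balancer type selected via an annotation on a Kubernetes Service (AWS example).
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"  # request an NLB instead of the default
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
# 2) NFS storage requested via a Cloud Manager custom resource
#    (API group, kind, and fields are assumed for illustration).
apiVersion: cloud-resources.kyma-project.io/v1beta1
kind: AwsNfsVolume
metadata:
  name: shared-data
spec:
  capacity: 100Gi  # charged per volume size and type
```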

Billing requirements

  • needs to stay simple because:
    • on one hand, adding new items to the bill is very complex
    • on the other hand, the simple pricing model of CF has proven successful
    • different prices per region and item would make users' lives troublesome
  • at the moment only storage and compute are billed, and that should most probably stay that way, so all items need to be mapped to those two
  • storage and compute can have different entities listed (like machine types); however, there should be no regional differences

Acceptance criteria

  • Users can enable/disable hyperscaler resource usage for their cluster from a pre-defined list of options
  • Users can enable/disable hyperscaler resource usage via BTP Cockpit, Kyma Dashboard and command line
  • Users will be billed for all additional usage in their clusters and the charges will be correctly reflected in their billing summary
  • Users cannot select or enable hyperscaler resources that are not part of the pre-defined list.
  • Kyma will provide a mechanism for end users to access relevant information related to the hyperscaler resources they use, without providing direct access to the hyperscaler account
  • The list of available resources should be based on region and hyperscaler availability
  • In case of quota/availability limitations for a specific resource, end users will be given reasonable information about why they cannot consume the desired resource

Tasks

  • business workshop to decide on the pricing model
  • technical workshop to define the architecture
  • decide which machine types/nodes to expose first
  • add new node type input to the worker pool config
  • KMC handles metering for new node type
  • KMC handles metering for non-basic storage types
  • provide feedback to the end user about errors during resource configuration changes - e.g. no GPUs available from the hyperscaler
  • document the pricing and calculation for the additional node types
varbanv added the Epic label Sep 20, 2023

Disper commented Nov 6, 2023

  • That would potentially require rewriting the provisioning/de-provisioning logic in the infrastructure manager, as we want to move away from the old provisioner.
  • Question: would the implementation enable a specific set of resources, or a generic one that would work with any type of resource? What will that pre-defined list look like?

@marco-porru

The cKMS team is evaluating the usage of Kyma.
The team needs "confidential computing capabilities". This kind of machine is certainly available on Azure and GCP.

@marco-porru

SAP for Me would like to use m6g and m6in machine types

@valentinvieriu

+1 for GPUs


marco-porru commented Jan 29, 2024

+1 for GPUs

Thanks, Valentin, for reporting it. I think it's worth mentioning the context, so let me do it on your behalf for simplicity 😄: it's to make it possible for Core AI to run on Kyma (subject to future discussions and agreements).


varbanv commented Mar 21, 2024

Had a preliminary workshop with @tobiscr and @PK85 and added a first set of tasks to work on.

@marco-porru

+1 team for GPU (ICN Munich)

@marco-porru

Enable more private connectivity (e.g. via firewall), requested by no fewer than 3 teams (e.g. S/4HANA ABAP Machines)

@marco-porru

Enable assured workload GCP module (relevant for KSA), requested by BTP email service

@abbi-gaurav

A customer is looking for very high-IOPS storage.
For example, enabling Azure ultra disks could help them: https://learn.microsoft.com/en-us/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal
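
For reference, a minimal sketch of a StorageClass using the Azure disk CSI driver's ultra disk SKU; the IOPS and throughput values are illustrative placeholders, not a recommendation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ultra-disk
provisioner: disk.csi.azure.com
parameters:
  skuName: UltraSSD_LRS        # Azure ultra disk SKU
  DiskIOPSReadWrite: "4000"    # illustrative IOPS target
  DiskMBpsReadWrite: "200"     # illustrative throughput target
volumeBindingMode: WaitForFirstConsumer  # ultra disks are zonal; bind in the pod's zone
allowVolumeExpansion: true
```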

@abbi-gaurav

At present, customers are able to use resources for which they are not charged, such as:

  • additional load balancers
  • non-default storage classes, e.g. for ReadWriteMany access

We should somehow make the customers aware that they might have to pay for this in the future, so it should not come as a surprise for them.

@NHingerl , could you please help? IMHO, putting this info out might not need to wait until this epic is done.
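
For illustration, this is the kind of non-default, ReadWriteMany-capable storage class customers can already create today (an AWS EFS CSI example; the fileSystemId is a placeholder):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-rwx
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0    # placeholder file system ID
  directoryPerms: "700"
```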


lanthoor commented Jun 9, 2024

SAP IPR would like to use g5 and r7i instance types along with other hyperscaler resources like ALB/NLB.


pthd commented Jun 14, 2024

+1 for GPU support.
AI scenarios require GPU-powered instances.
More precisely, we want to leverage Transformer models, which run much faster on GPUs.
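
For context, once GPU nodes are available, a workload would typically request a GPU through the standard NVIDIA device-plugin resource. A minimal sketch, assuming the device plugin is installed on the GPU pool; the image and pool name are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: transformer-inference
spec:
  nodeSelector:
    worker.gardener.cloud/pool: gpu-pool   # illustrative pool name (Gardener pool label)
  containers:
    - name: model-server
      image: nvcr.io/nvidia/pytorch:24.01-py3  # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1  # schedules the pod onto a GPU node
```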

@MarcusNotheis

We would be interested in OpenSearch consumption

tobiscr changed the title Enable consumption and configuration of specific hyperscaler resources → Enable consumption and configuration of specific hyperscaler resources [EPIC] Jul 8, 2024
@marco-porru

GPUs for the Product Services team (already LIVE), in particular from GCP:

  • A100
  • H100
  • H200 machines

@marco-porru

GPUs requested also by NGS (already live)

varbanv assigned a-thaler and unassigned k15r Jul 15, 2024
@a-thaler

In the default worker pool, we will continue to support only the current machine types. With additional worker pools, we will support additional machine types. As soon as the worker pool feature is ready (#18709), we will start adding some compute-intensive types, followed by GPUs.

In parallel, we are already working on a concept to also emit non-billable metrics, bringing more transparency into what actually gets charged. That is still in a conceptual phase, identifying what is possible to achieve.

For the compute-intensive workloads, we are currently thinking of adding these (non-ARM-based):


cruschke commented Nov 13, 2024

Signavio would be interested in c5.9xlarge or newer generations, and r6i.8xlarge and r7i.24xlarge.

@mbhagdev

We use the g4dn.2xlarge from AWS and g2-standard-8 from GCP in Gardener for our deployments. It would be nice if these were available in Kyma so we could migrate.

@marco-porru

machines requested by BDC:

IaaS    Machine type            CPU   CPU Memory   GPU   GPU Memory
AWS     g4dn.xlarge             4     16 Gi        1     ?
Azure   Standard_NC4as_T4_v3    4     28 Gi        1     16 Gi
GCP     n1-standard-4           4     15 Gi        1     16 Gi

Backup machine types:

IaaS    Machine type
Azure   Standard_NC6s_v3
GCP     g2-standard-4 (nvidia-l4 GPU)
