Hosting.Dapr local component - virtualised configuration #437
As a whole, this looks great, but I do have a few follow-ups on particulars.
I know from experience that the account key/managed identity question will get frustrating with regards to setting up Cosmos DB as an actor state store (e.g. for workflows). Per a recent support ticket with Azure support, managed identities accessing Cosmos via RBAC are unable to perform management operations (e.g. creating a database or container, changing throughput, updating indexing policy, etc.), so some of these operations should prefer account keys even if a managed identity is available, simply so setup operations can complete successfully.
@WhitWaldo - TL;DR: I think we're covered by existing Aspire functionality, but we should definitely keep it in mind. Ultimately the answer to question 1 is that that level of configuration probably relies on the Azure-specific abstraction - similar to my Redis version - and there will need to be hooks added to determine the identity / authentication mechanism. Provided we get the abstractions on secret metadata correct, this should just flow. RBAC-wise, provided those publishing abstractions (such as the Redis instance or a Cosmos DB instance) are set up correctly, Aspire should allow this to just happen.

Ultimately what we want to get to is the WithReference model, which means that the configuration of a Cosmos DB / Redis instance etc. is a separate concern from referencing it for Dapr, e.g.

```csharp
var cosmosResource = builder.AddAzureCosmosDB("blogData")
    .ConfigureInfrastructure(infr =>
    {
        var account = infr.GetProvisionableResources().OfType<CosmosDBAccount>().First();
        account.Capabilities.Add(new CosmosDBAccountCapability { Name = "EnableServerless" });
    })
    .RunAsEmulator()
    .AddDatabase("blogData");
```

This adds a Cosmos DB account with a database and configures it to use serverless. `AddDaprComponent("stateStore").WithReference(cosmosResource);` should only need to instruct the infra that there is that reference, which should cause the managed identity used by the application to get Cosmos RBAC (I thought add container was granted on the data roles?).

I believe Aspire 9.1 switches to Entra ID + RBAC as the default, which means that any of these nuances should be covered off by their work. There are flags in 9.1 that will allow us to understand if a resource has been configured to use access keys, at which point we can ensure that
I'm a bit out of touch on what Aspire changed in 9.1, so I'll go back and review all that. Regarding Cosmos in particular - my information is current as of an Azure Support ticket in late Aug 2024, where an MI was assigned "DocumentDB Account Contributor" scoped to a Cosmos DB instance, and despite that role allowing the action "Microsoft.DocumentDB/databaseAccounts/*", the request for "Microsoft.DocumentDB/databaseAccounts/readMetadata" was blocked as not allowed. The docs provide a workaround wherein you add built-in Cosmos-specific roles to your MI on the data and management planes, but for me that yielded a new error that "the given request POST /dbs//] could not be authorized by AAD token in data plane", and their conclusion was that MI should not be used here - only access keys should be used for management operations.
Aside: Aspire definitely plans to let you model role assignments in a first-class way (dotnet/aspire#6636).
Currently, I guess we should focus on a way to change the Dapr YAML files with more or less static values, or offer some kind of transformation functions. A little pain point for me is that we need to define our own schemas, because I couldn't find any schema definitions for components, configuration, subscription, resiliency and HTTP endpoints. The only thing that exists is the documentation, where you can, for example, ask Copilot to generate schemas for you. There are schemas for the concrete implementations of a component, for example Kafka. But when I tried my first steps with the implementation, I was more or less copying the documentation. So for me, the better focus would be to define a good implementation of how to configure Dapr configurations, and put less focus on the integration of other resources for now. So from my side, I would like to:
After that base work, it should be easy to write extensions on top of it. But I still think we should first start with the base work of how we want to write all the YAML files before we start investigating the integration of other resources. In the end, with the use of annotations, any user can define their own extension methods quite easily, so they can at least implement workarounds.
All the validation schemas for the various components are kept in the components-contrib repo, for example, MQTT3 (pubsub). And after re-reading your comment @paule96, I see you already found those.
@WhitWaldo yes, my problem is more that the base schema is missing, because each component follows the same base schema. But this schema exists only as documentation, not as a YAML schema. There are some Go classes that come close to that. Edit: I found a JSON schema for components at least, but all the other configuration schemas are still missing.
@paule96 Having models to represent the Dapr YAML schema sounds reasonable - what does the public API look like for this?
I mean, I would expect JSON schema files. Sadly it looks like we only have this for Dapr components. For the rest of the Dapr configuration files we have only this documentation: https://docs.dapr.io/reference/resource-specs/ @FullStackChef what do you think, should I just start a branch and generate the models?
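For illustration, a minimal sketch (class and property names are assumptions, not generated output) of what base models mirroring the documented Component resource spec could look like:

```csharp
// Hypothetical base models for the Dapr Component resource spec
// (apiVersion/kind/metadata/spec with name/value or secretKeyRef metadata entries).
// Names are illustrative only.
public sealed class DaprComponentManifest
{
    public string ApiVersion { get; set; } = "dapr.io/v1alpha1";
    public string Kind { get; set; } = "Component";
    public DaprManifestMetadata Metadata { get; set; } = new();
    public DaprComponentSpec Spec { get; set; } = new();
}

public sealed class DaprManifestMetadata
{
    public string Name { get; set; } = string.Empty;
}

public sealed class DaprComponentSpec
{
    public string Type { get; set; } = string.Empty; // e.g. "state.redis"
    public string Version { get; set; } = "v1";
    public List<DaprMetadataItem> Metadata { get; set; } = new();
}

public sealed class DaprMetadataItem
{
    public string Name { get; set; } = string.Empty;
    public string? Value { get; set; }
    public DaprSecretKeyRef? SecretKeyRef { get; set; }
}

public sealed class DaprSecretKeyRef
{
    public string Name { get; set; } = string.Empty;
    public string Key { get; set; } = string.Empty;
}
```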
I validated with one of the core developers - we do not have any JSON specs for the others. Just the one @paule96 found.
What I'm trying to understand is what this looks like to use. What's the public API for end users?
Also, a super early WIP PR for further discussion: https://github.com/CommunityToolkit/Aspire/pull/450/files
I think perhaps that...

```csharp
var daprState = builder.AddDaprComponent("daprState", "state.azure.blobstorage")
    .WithMetadata("accountName", EndpointReference)   // storage endpoint reference
    .WithMetadata("accountKey", ParameterResource)    // secret parameter
    .WithMetadata("containerName", "myContainer");    // string value
```

...is a bit too "stringy" and could benefit more from typed parameters. I know that Dapr is not great on documentation "freshness" and that the component specs are not all available, but I think we are in a position to influence this by not committing to a lowest-common-denominator implementation here. Therefore, I suggest we generate typed models from Dapr's existing component specs rather than committing to stringly-typed metadata*.

This way, we can have a high-quality Aspire SDK for Dapr while pushing Dapr documentation towards being more accurate and improving the quality overall. Everyone wins.

*or, if feeling particularly generous, we also write the missing specs and submit PRs to Dapr
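To make the "typed parameters" idea concrete, here is a purely hypothetical sketch: `WithAzureBlobStorageOptions`, `DaprComponentTypes` and the option property names are invented for illustration, and `storage` / `accountKey` are assumed to be an existing storage resource and secret parameter.

```csharp
// Hypothetical typed alternative to the string-based WithMetadata calls above.
// Only the general shape matters; none of these typed members exist today.
var daprState = builder.AddDaprComponent("daprState", DaprComponentTypes.State.AzureBlobStorage)
    .WithAzureBlobStorageOptions(options =>
    {
        options.AccountName   = storage.GetEndpoint("blob"); // endpoint reference from the storage resource
        options.AccountKey    = accountKey;                  // secret parameter
        options.ContainerName = "myContainer";               // plain string value
    });
```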
As for the actual implementation of generating manifests programmatically for the apphost, we should be able to take advantage of `builder.Eventing`:

```csharp
builder.Eventing.Subscribe<AfterEndpointsAllocatedEvent>(
    async (e, ct) =>
    {
        // find the cosmosdb resource
        var cosmos = builder.Resources
            .OfType<AzureCosmosDBResource>().Single();

        // get port, write to yaml secret store file
        ...
```
We should be aiming to follow Aspire conventions, using WithReference where possible to make binding of resources to components simple. I dislike the idea of having to decompose a connection string manually just to inject the bits into a method. E.g.

```csharp
// spin up storage emulator
var blob = builder.AddAzureBlobStorage("storage").RunAsEmulator();

// add project
var project = builder.AddProject<MyDaprApp>()
    // Add verb = new resource, as the sidecar is a distinct resource, independent from the project
    .AddDaprSidecar() // returns IResourceBuilder<DaprSidecarResource>
    // With verb = mutate the sidecar resource
    .WithStateStore(s => s.ConfigureAzureBlobStateStore("storage"))
    .WithReference(blob); // logically, the storage is used by the sidecar, not the project, so we mutate the sidecar again
```
I really like @oising's idea of using Dapr's existing YAML schemas with source generators: rather than duplicating them in the Aspire library, we could contribute back to Dapr for a win-win. I favor the WithReference approach (as done in the Azure Redis implementation). That said, we need to nail down how the extensions are structured, and input from @davidfowl and @aaronpowell would be invaluable.

Rather than one monolithic package (like Hosting.Dapr) handling all Dapr configurations, I propose a modular design, with Hosting.Dapr as the core package handling source generators, metadata models, annotations, and a foundational WithMetadata.

```csharp
var builder = DistributedApplication.CreateBuilder(args);

var redisState = builder.AddAzureRedis("redisState").RunAsContainer(); // Azure Redis in production and a container locally

var daprState = builder.AddDaprStateStore("daprState")
    .WithReference(redisState); // by doing this we're using what the user has configured

var api = builder.AddProject<Projects.CommunityToolkit_Aspire_Hosting_Dapr_AzureRedis_ApiService>("example-api")
    .WithReference(daprState) // current Aspire API doesn't need to change
    .WithDaprSidecar();

builder.Build().Run();
```
Great! I would pay attention, though, to the details of my last post, particularly the use of the With and Add verb prefixes and how this affects the use of follow-on extension methods. With methods modify the parent object, and Add methods create a new Aspire resource on which subsequent With methods act, rather than on the top-level resource. I know the current SDK uses WithDaprSidecar, but new guidelines would have this use Add, because having it as a full resource makes more sense. We shouldn't be afraid to break backwards compatibility. This is a new SDK, and while the first release preserves the API of the Aspire one, our second release can break it, since people can continue to use the Aspire version. Let's not rush this.
Do we need to include a With/Add DaprSidecar method in our public API? My current code uses .WithReference(daprState), which attaches a Dapr state store. Should we also incorporate DaprSidecar configuration in this call? Alternatively, I think keeping WithDaprSidecar as a separate method makes sense, because adding a Dapr state store is already handled by the WithReference call. It would be problematic if either the Add or With method for DaprSidecar returned a different type, as this would break the chaining of our project API. One potential solution is to change the method signature to something like ConfigureDaprSidecar(Action<DaprSidecarOptions> configurationAction) -> IResourceBuilder<ProjectResource>. Ideally, we'd want
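A hypothetical sketch of that shape, assuming DaprSidecarOptions is the existing options type from the current SDK and can be configured through an Action (i.e. has settable members); the body simply delegates to the existing WithDaprSidecar overload.

```csharp
// Hypothetical sketch only - not an agreed API. It returns the project builder so the
// fluent chain on the project is preserved, and delegates to the existing
// WithDaprSidecar(DaprSidecarOptions) overload.
public static IResourceBuilder<ProjectResource> ConfigureDaprSidecar(
    this IResourceBuilder<ProjectResource> builder,
    Action<DaprSidecarOptions> configurationAction)
{
    var options = new DaprSidecarOptions();
    configurationAction(options);
    return builder.WithDaprSidecar(options);
}
```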
It depends. The way I see it, a ProjectResource would have a parent/child relationship with the DaprSidecarResource. Whether the sidecar resource has first-class children or is mutated via With* is another question. Which is more useful? Anything that returns a resource can be reused elsewhere, for example, so having components as fully-fledged resources could be useful, since you could define a state store once and attach it to two different Dapr apps.
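As a sketch of that reuse (method names follow the current SDK and the proposals above; the project names are made up):

```csharp
// One state store resource defined once and referenced by two Dapr-enabled apps.
// Projects.Orders / Projects.Billing are placeholder project names.
var stateStore = builder.AddDaprStateStore("shared-state");

var orders = builder.AddProject<Projects.Orders>("orders")
    .WithReference(stateStore)
    .WithDaprSidecar();

var billing = builder.AddProject<Projects.Billing>("billing")
    .WithReference(stateStore)
    .WithDaprSidecar();
```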
Not necessarily. Other Aspire resources use the same pattern. Example for Event Hubs (in 9.1):

```csharp
builder.AddAzureEventHub("ehns")  // IResourceBuilder<AzureEventHubNamespaceResource>
    .AddHub("hub1")               // IResourceBuilder<AzureEventHubResource>
    .AddConsumerGroup("cg1");     // IResourceBuilder<AzureEventHubConsumerGroupResource>
```

As you can see, it chains, but the resource changes. The "Add" verb gives this hint, and if you want to reference the hub or consumer group elsewhere, you are expected to assign it rather than chain - but the chaining still works.
I suggest we break the overall task into these focused sub-issues:
Does this capture the issues? Are there other issues that we need to capture, or do we require further high-level discussion?
Sorry for the late response. I opened a PR on @FullStackChef's draft that at least generates the component specs. They are generated by Copilot from the docs; the problem right now is that I don't see a better way of doing that.

For the discussion around WithReference:

```csharp
public static IResourceBuilder<DaprComponentResource> WithReference<TTarget>(
    this IResourceBuilder<DaprComponentResource> builder,
    TTarget target,
    Action<DaprComponentResource, TTarget> transform)
    where TTarget : IResourceWithEnvironment
```

That should allow all the transforms that we would like to do. The purpose of this method is just to add an annotation to the component resource.
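A minimal sketch of what that could look like, assuming the method only records the reference and transform as an annotation to be consumed later when the component YAML is written; DaprComponentReferenceAnnotation is a hypothetical type, not an existing one.

```csharp
// Hypothetical implementation sketch: record the target and transform as an annotation
// on the component resource; a later lifecycle/publishing step would read it to
// materialise the component YAML.
public static IResourceBuilder<DaprComponentResource> WithReference<TTarget>(
    this IResourceBuilder<DaprComponentResource> builder,
    TTarget target,
    Action<DaprComponentResource, TTarget> transform)
    where TTarget : IResourceWithEnvironment
{
    return builder.WithAnnotation(
        new DaprComponentReferenceAnnotation<TTarget>(target, transform));
}

// Hypothetical annotation type carrying the reference and its transform.
public sealed class DaprComponentReferenceAnnotation<TTarget>(
    TTarget target,
    Action<DaprComponentResource, TTarget> transform) : IResourceAnnotation
    where TTarget : IResourceWithEnvironment
{
    public TTarget Target { get; } = target;
    public Action<DaprComponentResource, TTarget> Transform { get; } = transform;
}
```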
@paule96 I would suggest targeting the community toolkit rather than a branch of mine - otherwise we're creating a lot of dependencies that have to land at once. @paule96 @oising @WhitWaldo I have created a sub-issue for component validation - let's discuss component validation etc. more specifically there (#456). I will also create sub-issues for the remaining parts above shortly; I have code that addresses some of them. And while WithMetadata might be a little "stringy", it would be good to deliver some incremental value that addresses some of the challenges highlighted by users in the Aspire Discord channel.
Related to an existing integration?
Yes
Existing integration
Hosting.Dapr
Overview
We need to improve on the current local development experience for Dapr components. The current DaprComponentResource model has the following limitations:
Proposed Solutions
The API should align with appropriate overloads of WithEnvironment() to allow for variables, references, parameters and secrets.
"On demand" components should be updated to warn about unsupported component types and throw if no meta data is provided
Current on demand components (pubsub and state) should be updated to use Metadata.
Additional predefined components can be created using wrapper methods; for example, AddDaprStateStore would call AddDaprComponent with a predefined set of metadata, and specific metadata could be overridden using WithMetadata.
Usage example
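A sketch of the proposed API, mirroring the snippet quoted by @oising in the comments above; EndpointReference and ParameterResource stand in for an endpoint reference and a secret parameter respectively.

```csharp
var daprState = builder.AddDaprComponent("daprState", "state.azure.blobstorage")
    .WithMetadata("accountName", EndpointReference)   // storage endpoint reference
    .WithMetadata("accountKey", ParameterResource)    // secret parameter
    .WithMetadata("containerName", "myContainer");    // string value
```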
Where the account key is a secret parameter, it is expected that the API would configure it appropriately in the Dapr component as a secret.
Breaking change?
No
Alternatives
The above API would allow for wrapper APIs such as AddDaprStateStore: based on the reference type (e.g. blob storage), the underlying WithMetadata API could be called with the appropriate values.
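For illustration, a hedged sketch of such a wrapper in use; the blobStorage resource and the exact metadata it would map to are assumptions, and WithReference on a Dapr component is the proposed (not current) API.

```csharp
// Hypothetical wrapper usage: the reference type (here Azure Blob Storage) determines
// which component metadata the integration fills in behind the scenes,
// e.g. WithMetadata("accountName", ...) and WithMetadata("containerName", ...).
var blobStorage = builder.AddAzureStorage("storage").RunAsEmulator().AddBlobs("blobs");

var daprState = builder.AddDaprStateStore("daprState")
    .WithReference(blobStorage);
```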
Additional context
No response
Help us help you
Yes, I'd like to be assigned to work on this item