From 556e7d13918a486de1e5e51622c5fbac44055918 Mon Sep 17 00:00:00 2001 From: Code-Inside-Bot Date: Thu, 7 Sep 2023 07:30:03 +0200 Subject: [PATCH] Sloader update on _data/Sloader.json --- _data/Sloader.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_data/Sloader.json b/_data/Sloader.json index 1ac6840..3e18f83 100644 --- a/_data/Sloader.json +++ b/_data/Sloader.json @@ -1 +1 @@ -{"Data":{"Blog":{"FeedItems":[{"Title":"First steps with Azure OpenAI and .NET","PublishedOn":"2023-03-23T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The AI world is moving very fast these days: ChatGPT is such an awesome (and scary good?) service and Microsoft joined the ship with some partner announcements and investments. The result of these actions is that OpenAI is now a “first class citizen” on Azure.

\n\n

So - for the average Microsoft/.NET developer this opens up a wonderful toolbox and the first steps are really easy.

\n\n

Be aware: You need to “apply” to access the OpenAI service, but it took less than 24 hours for us to gain access. I guess this is just a temporary thing.

\n\n

Disclaimer: I’m not an AI/ML engineer and I only have a very superficial knowledge about the technology behind GPT3, ChatGPT and ML in general. If in doubt, I always ask my buddy Oliver Guhr, because he is much smarter about this stuff. Follow him on Twitter!

\n\n

Step 1: Go to the Azure OpenAI Service

\n\n

Search for “OpenAI” and you will see the “Azure OpenAI Service” entry:

\n\n

\"x\"

\n\n

Step 2: Create an Azure OpenAI Service instance

\n\n

Create a new Azure OpenAI Service instance:

\n\n

\"x\"

\n\n

On the next page you will need to enter the subscription, resource group, region and a name (typical Azure stuff):

\n\n

\"x\"

\n\n

Be aware: If your subscription is not enabled for OpenAI, you need to apply here first.

\n\n

Step 3: Overview and create a model

\n\n

After the service is created you should see something like this:

\n\n

\"x\"

\n\n

Now go to “Model deployments” and create a model - I chose “text-davinci-003”, because I think this is GPT3.5 (which was the initial ChatGPT release; GPT4 is currently in preview for Azure and you need to apply again).

\n\n

\"x\"

\n\n

My guess is that you could train/deploy other, specialized models here, because this model is quite complex and you might want to tailor the model for your scenario to get faster/cheaper results… but I honestly don’t know how to do that (currently), so we just leave the default.

\n\n

Step 4: Get the endpoint and the key

\n\n

In this step we just need to copy the key and the endpoint, which can be found under “Keys and Endpoint”, simple - right?

\n\n

\"x\"

\n\n

Step 5: Hello World to our Azure OpenAI instance

\n\n

Create a .NET application and add the Azure.AI.OpenAI NuGet package (currently in preview!).

\n\n
dotnet add package Azure.AI.OpenAI --version 1.0.0-beta.5\n
\n\n

Use this code:

\n\n
using Azure.AI.OpenAI;\nusing Azure;\n\nConsole.WriteLine(\"Hello, World!\");\n\nOpenAIClient client = new OpenAIClient(\n        new Uri(\"YOUR-ENDPOINT\"),\n        new AzureKeyCredential(\"YOUR-KEY\"));\n\nstring deploymentName = \"text-davinci-003\";\nstring prompt = \"Tell us something about .NET development.\";\nConsole.Write($\"Input: {prompt}\");\n\nResponse<Completions> completionsResponse = client.GetCompletions(deploymentName, prompt);\nstring completion = completionsResponse.Value.Choices[0].Text;\n\nConsole.WriteLine(completion);\n\nConsole.ReadLine();\n\n
\n\n

Result:

\n\n
Hello, World!\nInput: Tell us something about .NET development.\n\n.NET development is a mature, feature-rich platform that enables developers to create sophisticated web applications, services, and applications for desktop, mobile, and embedded systems. Its features include full-stack programming, object-oriented data structures, security, scalability, speed, and an open source framework for distributed applications. A great advantage of .NET development is its capability to develop applications for both Windows and Linux (using .NET Core). .NET development is also compatible with other languages such as\n
\n\n

As you can see… the result is cut off - most likely because of the default token limit for completions - but this is just a simple demonstration.

\n\n
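If you need a longer completion, you can - as far as I know - pass a CompletionsOptions object instead of the plain prompt string and raise the token limit. This is just a rough sketch against the 1.0.0-beta.5 package; the property names might change in later versions:
\n\n
CompletionsOptions completionsOptions = new CompletionsOptions();\ncompletionsOptions.Prompts.Add(prompt);\ncompletionsOptions.MaxTokens = 500; // just an example value to allow a longer completion\n\nResponse<Completions> longerResponse = client.GetCompletions(deploymentName, completionsOptions);\nConsole.WriteLine(longerResponse.Value.Choices[0].Text);\n
\n\n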

Summary

\n\n

With these basic steps you can access the OpenAI development world. Azure makes it easy to integrate it into your existing Azure/Microsoft “stack”. Be aware that you could also use the same SDK with the endpoint from OpenAI directly. For billing reasons it is easier for us to use the Azure hosted instances.

\n\n

Hope this helps!

\n\n

Video on my YouTube Channel

\n\n

If you understand German and want to see it in action, check out my video on my Channel:

\n\n\n\n","Href":"https://blog.codeinside.eu/2023/03/23/first-steps-with-azure-openai-and-dotnet/","RawContent":null,"Thumbnail":null},{"Title":"How to fix: 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine","PublishedOn":"2023-03-18T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

In our product we can interact with different datasources and one of these datasources was a Microsoft Access DB connected via OLEDB. This is really, really old, but it still works; however, on one customer machine we had this issue:

\n\n
'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine\n
\n\n

Solution

\n\n

If you face this issue, you need to install the provider from here.

\n\n

Be aware: If you have a different error, you might need to install the newer provider - this one is labeled as “2010 Redistributable”, but it still works with all those fancy Office 365 apps out there.

\n\n

Important: You need to install the provider with the correct bitness, e.g. if your application runs as x64, install the x64 .msi.

\n\n
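For context, this is roughly how such an OLEDB connection looks in .NET - a minimal sketch with System.Data.OleDb and a placeholder file path. If the matching provider (with matching bitness) is missing, the Open() call throws the error above:
\n\n
using System.Data.OleDb;\n\nusing (var connection = new OleDbConnection(\"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\Data\\sample.accdb;\"))\n{\n    // fails with the 'provider is not registered' error if the provider is missing\n    connection.Open();\n}\n
\n\n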

The solution comes from this Stackoverflow question.

\n\n

Helper

\n\n

The best tip from Stackoverflow was these PowerShell commands to check whether the provider is there or not:

\n\n
(New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION \n\nGet-OdbcDriver | select Name,Platform\n
\n\n

This will return something like this:

\n\n
PS C:\\Users\\muehsig> (New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION\n\nSOURCES_NAME               SOURCES_DESCRIPTION\n------------               -------------------\nSQLOLEDB                   Microsoft OLE DB Provider for SQL Server\nMSDataShape                MSDataShape\nMicrosoft.ACE.OLEDB.12.0   Microsoft Office 12.0 Access Database Engine OLE DB Provider\nMicrosoft.ACE.OLEDB.16.0   Microsoft Office 16.0 Access Database Engine OLE DB Provider\nADsDSOObject               OLE DB Provider for Microsoft Directory Services\nWindows Search Data Source Microsoft OLE DB Provider for Search\nMSDASQL                    Microsoft OLE DB Provider for ODBC Drivers\nMSDASQL Enumerator         Microsoft OLE DB Enumerator for ODBC Drivers\nSQLOLEDB Enumerator        Microsoft OLE DB Enumerator for SQL Server\nMSDAOSP                    Microsoft OLE DB Simple Provider\n\n\nPS C:\\Users\\muehsig> Get-OdbcDriver | select Name,Platform\n\nName                                                   Platform\n----                                                   --------\nDriver da Microsoft para arquivos texto (*.txt; *.csv) 32-bit\nDriver do Microsoft Access (*.mdb)                     32-bit\nDriver do Microsoft dBase (*.dbf)                      32-bit\nDriver do Microsoft Excel(*.xls)                       32-bit\nDriver do Microsoft Paradox (*.db )                    32-bit\nMicrosoft Access Driver (*.mdb)                        32-bit\nMicrosoft Access-Treiber (*.mdb)                       32-bit\nMicrosoft dBase Driver (*.dbf)                         32-bit\nMicrosoft dBase-Treiber (*.dbf)                        32-bit\nMicrosoft Excel Driver (*.xls)                         32-bit\nMicrosoft Excel-Treiber (*.xls)                        32-bit\nMicrosoft ODBC for Oracle                              32-bit\nMicrosoft Paradox Driver (*.db )                       32-bit\nMicrosoft Paradox-Treiber (*.db )                      32-bit\nMicrosoft Text Driver (*.txt; *.csv)                   32-bit\nMicrosoft Text-Treiber (*.txt; *.csv)                  32-bit\nSQL Server                                             32-bit\nODBC Driver 17 for SQL Server                          32-bit\nSQL Server                                             64-bit\nODBC Driver 17 for SQL Server                          64-bit\nMicrosoft Access Driver (*.mdb, *.accdb)               64-bit\nMicrosoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb) 64-bit\nMicrosoft Access Text Driver (*.txt, *.csv)            64-bit\n
\n\n

Hope this helps! (And I hope you don’t need to deal with these ancient technologies for too long 😅)

\n","Href":"https://blog.codeinside.eu/2023/03/18/microsoft-ace-oledb-12-0-provider-is-not-registered/","RawContent":null,"Thumbnail":null},{"Title":"Resource type is not supported in this subscription","PublishedOn":"2023-03-11T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

I was playing around with some Visual Studio tooling and noticed this error during the creation of an “Azure Container Apps” app:

\n\n

Resource type is not supported in this subscription

\n\n

\"x\"

\n\n

Solution

\n\n

The solution is quite strange at first, but in the super configurable world of Azure it makes sense: You need to activate the resource provider for this feature on your subscription. For Azure Container Apps you need the Microsoft.ContainerRegistry resource provider registered:

\n\n

\"x\"

\n\n

It seems that you can create such resources via the Portal, but if you go via the API (which Visual Studio seems to do) the provider needs to be registered first.

\n\n
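If you prefer the command line over the Portal, the Azure CLI can do the registration as well - a small sketch, assuming you are logged in to the right subscription:
\n\n
az provider register --namespace Microsoft.ContainerRegistry\naz provider show --namespace Microsoft.ContainerRegistry --query registrationState\n
\n\n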

Some resource providers are “enabled by default”, other providers need to be turned on manually. Check out this list for an overview of all resource providers and the related Azure services.

\n\n

Be careful: I guess you should only enable the resource providers that you really need, otherwise your attack surface will get larger.

\n\n

To be honest: This was completely new to me - I have been doing Azure for ages and never had to deal with resource providers. Always learning ;)

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/03/11/resource-type-is-not-supported-in-this-subscription/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps Server 2022 Update","PublishedOn":"2023-02-15T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Azure DevOps Server 2022 - OnPrem?

\n\n

Yes I know - you can get everything from the cloud nowadays, but we are still using our OnPrem hardware and were running the “old” Azure DevOps Server 2020. \nThe Azure DevOps Server 2022 was released last December, so an update was due.

\n\n

Requirements

\n\n

If you are running an Azure DevOps Server 2020 the requirements for the new 2022 release are “more or less” the same, except for the following important parts:

\n\n\n\n

Make sure you have a backup

\n\n

The last requirement was a surprise for me, because I thought the update would run smoothly, but the installer removed the previous version and I couldn’t update, because our SQL Server was still on SQL Server 2016. Fortunately we had a VM backup and could roll back to the previous version.

\n\n

Step by step

\n\n

The update process itself was straightforward: Download the installer and run it.

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

The screenshots are from two different sessions. If you look carefully at the clock you might see that the date is different; that is because of the SQL Server 2016 problem.

\n\n

As you can see - everything worked as expected, but after we updated the server the search, which is powered by Elasticsearch, was not working. The “Elasticsearch” Windows service just crashed on startup and I’m not a Java guy, so… we fixed it by removing the search feature and reinstalling it. \nWe tried to clear the cache, but it was still not working. After the reinstall of this feature the issue went away.

\n\n

Features

\n\n

Azure DevOps Server 2022 is just a minor update (at least from a typical user perspective). The biggest new feature might be “Delivery Plans”, which are nice, but not a huge benefit for small teams. Check out the release notes.

\n\n

A nice - nerdy - enhancement, and not mentioned in the release notes: “mermaid.js” is now supported in the Azure DevOps Wiki, yay!

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/02/15/azure-devops-server-2022-update/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core and React with Vite.js","PublishedOn":"2023-02-11T01:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The CRA Problem

\n\n

In my previous post I showed a simple setup with ASP.NET Core & React. The React part was created with the “CRA” tooling, which is kind of problematic. The “new” state-of-the-art React tooling seems to be vite.js - so let’s take a look at how to use it.

\n\n

\"x\"

\n\n

Step by step

\n\n

Step 1: Create a “normal” ASP.NET Core project

\n\n

(I like the ASP.NET Core MVC template, but feel free to use something else - same as in the other blogpost)

\n\n

\"x\"

\n\n

Step 2: Install vite.js and init the template

\n\n

Now move to the root directory of your project with a shell and execute this:

\n\n
npm create vite@latest clientapp -- --template react-ts\n
\n\n

This will scaffold the latest & greatest vite.js-based React app in a folder called clientapp with the react-ts template (React with TypeScript). Vite itself isn’t focused on React alone and supports many different frontend frameworks.

\n\n

\"x\"

\n\n

Step 3: Enable HTTPS in your vite.js

\n\n

Just like in the “CRA” setup we need to make sure that the environment is served over HTTPS. In the “CRA” world we needed two different files from the original ASP.NET Core & React template, but with vite.js there is a much simpler option available.

\n\n

Execute the following command in the clientapp directory:

\n\n
npm install --save-dev vite-plugin-mkcert\n
\n\n

Then in your vite.config.ts use this config:

\n\n
import { defineConfig } from 'vite'\nimport react from '@vitejs/plugin-react'\nimport mkcert from 'vite-plugin-mkcert'\n\n// https://vitejs.dev/config/\nexport default defineConfig({\n    base: '/app',\n    server: {\n        https: true,\n        port: 6363\n    },\n    plugins: [react(), mkcert()],\n})\n
\n\n

Be aware: The base: '/app' will be used as a sub-path.

\n\n

The important part for the HTTPS setting is that we use the mkcert() plugin and configure the server part with a port and set https to true.

\n\n

Step 4: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package

\n\n

Same as in the other blogpost, we need to add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package to glue the ASP.NET Core development and React worlds together. If you use .NET 7, use version 7.x.x; if you use .NET 6, use version 6.x.x - etc.

\n\n
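Adding the package from the command line could look like this (pick the version that matches your .NET version):
\n\n
dotnet add package Microsoft.AspNetCore.SpaServices.Extensions\n
\n\n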

\"x\"

\n\n

Step 5: Enhance your Program.cs

\n\n

Back to the Program.cs - this is more or less the same as with the “CRA” setup:

\n\n

Add the SpaStaticFiles to the services collection like this in your Program.cs - be aware that vite.js builds everything into a folder called dist:

\n\n
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n    configuration.RootPath = \"clientapp/dist\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
\n\n

Now we need to use the SpaServices like this:

\n\n
app.MapControllerRoute(\n    name: \"default\",\n    pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/app\";\nif (app.Environment.IsDevelopment())\n{\n    app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n    {\n        client.UseSpa(spa =>\n        {\n            spa.UseProxyToSpaDevelopmentServer(\"https://localhost:6363\");\n        });\n    });\n}\nelse\n{\n    app.Map(new PathString(spaPath), client =>\n    {\n        client.UseSpaStaticFiles();\n        client.UseSpa(spa => {\n            spa.Options.SourcePath = \"clientapp\";\n\n            // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n            // .js and other static resources are still cached by the browser\n            spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n            {\n                OnPrepareResponse = ctx =>\n                {\n                    ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n                    headers.CacheControl = new CacheControlHeaderValue\n                    {\n                        NoCache = true,\n                        NoStore = true,\n                        MustRevalidate = true\n                    };\n                }\n            };\n        });\n    });\n}\n// ↑ these lines ↑\n\napp.Run();\n
\n\n

Just like in the original blogpost: in development mode we use the UseProxyToSpaDevelopmentServer-method to proxy all requests to the vite.js dev server. In the real world (production) we will use the files from the dist folder.

\n\n

Step 6: Invoke npm run build during publish

\n\n

The last step is to complete the setup. We want to build the ASP.NET Core app and the React app when we use dotnet publish:

\n\n

Add this to your .csproj-file and it should work:

\n\n
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)dist\\**\" />  <!-- Changed to dist! -->\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
\n\n

Result

\n\n

You should now be able to use Visual Studio Code (or something like it) and start the frontend project with npm run dev. If you open a browser and go to https://127.0.0.1:6363/app you should see something like this:

\n\n

\"x\"

\n\n

Now start the ASP.NET Core app and go to /app and it should look like this:

\n\n

\"x\"

\n\n

Ok - this looks broken, right? Well - this is more or less a “known” problem, but it can be easily avoided. If we import the logo from the assets it works as expected and shouldn’t be a general problem:

\n\n

\"x\"

\n\n

Code

\n\n

The sample code can be found here.

\n\n

Video

\n\n

I made a video about this topic (in German, sorry :-/) as well - feel free to subscribe ;)

\n\n\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/02/11/aspnet-core-react-with-vitejs/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core & React togehter","PublishedOn":"2023-01-25T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The ASP.NET Core React template

\n\n

\"x\"

\n\n

Visual Studio (at least VS 2019 and the newer 2022) ships with an ASP.NET Core React template, which is “ok-ish”, but has some really bad problems:

\n\n

The React part of this template is scaffolded via “CRA” (which seems to be problematic as well, but that is not the point of this post) and uses JavaScript instead of TypeScript.\nAnother huge pain point (from my perspective) is that the template uses some special configuration to just host the React part for users - if you want to mix in some “MVC”/”Razor” stuff, you need to change some of this “magic”.

\n\n

The good parts:

\n\n

Both worlds can live together: During development the ASP.NET Core stuff is hosted via Kestrel and the React part is hosted by the webpack development server. The lovely hot reload works as expected and is really powerful.\nIf you are doing a release build, the project will take care of the npm magic.

\n\n

But because the “bad problems” outweigh the benefits, we will try to integrate a typical React app into a “normal” ASP.NET Core app.

\n\n

Step by step

\n\n

Step 1: Create a “normal” ASP.NET Core project

\n\n

(I like the ASP.NET Core MVC template, but feel free to use something else)

\n\n

\"x\"

\n\n

Step 2: Create a react app inside the ASP.NET Core project

\n\n

(For this blogpost I use the “Create React App”-approach, but you can use whatever you like)

\n\n

Execute this in your ASP.NET Core template (node & npm must be installed!):

\n\n
npx create-react-app clientapp --template typescript\n
\n\n

Step 3: Copy some stuff from the React template

\n\n

The React template ships with some scripts and settings that we want to preserve:

\n\n

\"x\"

\n\n

The aspnetcore-https.js and aspnetcore-react.js files are needed to set up the ASP.NET Core SSL dev certificate for the webpack dev server. \nYou should also copy the .env & .env.development files into the root of your clientapp folder!

\n\n

The .env file only has this setting:

\n\n
BROWSER=none\n
\n\n

A more important setting is in the .env.development file (change the port to something different!):

\n\n
PORT=3333\nHTTPS=true\n
\n\n

The port number 3333 and the https=true will be important later, otherwise our setup will not work.

\n\n

Also, add this line to the .env-file (in theory you can use any name - for this sample we keep it spaApp):

\n\n
PUBLIC_URL=/spaApp\n
\n\n

Step 4: Add the prestart to the package.json

\n\n

In your project open the package.json and add the prestart-line like this:

\n\n
  \"scripts\": {\n    \"prestart\": \"node aspnetcore-https && node aspnetcore-react\",\n    \"start\": \"react-scripts start\",\n    \"build\": \"react-scripts build\",\n    \"test\": \"react-scripts test\",\n    \"eject\": \"react-scripts eject\"\n  },\n
\n\n

Step 5: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package

\n\n

\"x\"

\n\n

We need the Microsoft.AspNetCore.SpaServices.Extensions NuGet-package. If you use .NET 7, then use the version 7.x.x, if you use .NET 6, use the version 6.x.x - etc.

\n\n

Step 6: Enhance your Program.cs

\n\n

Add the SpaStaticFiles to the services collection like this in your Program.cs:

\n\n
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n    configuration.RootPath = \"clientapp/build\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
\n\n

Now we need to use the SpaServices like this:

\n\n
app.MapControllerRoute(\n    name: \"default\",\n    pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/spaApp\";\nif (app.Environment.IsDevelopment())\n{\n    app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n    {\n        client.UseSpa(spa =>\n        {\n            spa.UseProxyToSpaDevelopmentServer(\"https://localhost:3333\");\n        });\n    });\n}\nelse\n{\n    app.Map(new PathString(spaPath), client =>\n    {\n        client.UseSpaStaticFiles();\n        client.UseSpa(spa => {\n            spa.Options.SourcePath = \"clientapp\";\n\n            // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n            // .js and other static resources are still cached by the browser\n            spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n            {\n                OnPrepareResponse = ctx =>\n                {\n                    ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n                    headers.CacheControl = new CacheControlHeaderValue\n                    {\n                        NoCache = true,\n                        NoStore = true,\n                        MustRevalidate = true\n                    };\n                }\n            };\n        });\n    });\n}\n// ↑ these lines ↑\n\napp.Run();\n
\n\n

As you can see, we run in two different modes. \nIn our development world we just use the UseProxyToSpaDevelopmentServer-method to proxy all requests that point to spaApp to the React webpack dev server (or something else). The huge benefit is that you can use the React ecosystem with all its tools. Normally we use Visual Studio Code to run our React frontend and use the ASP.NET Core app as the “backend for frontend”.\nIn production we use the build artifacts of the React build and make sure that they are not cached. To make the deployment easier, we need to invoke npm run build when we publish this ASP.NET Core app.

\n\n

Step 7: Invoke npm run build during publish

\n\n

Add this to your .csproj-file and it should work:

\n\n
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)build\\**\" />\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
\n\n

Be aware that these instructions are copied from the original ASP.NET Core React template and are slightly modified, otherwise the paths wouldn’t match.

\n\n

Result

\n\n

With this setup you can add any SPA that you would like to your “normal” ASP.NET Core project.

\n\n

If everything works as expected you should be able to start the React app in Visual Studio Code like this:

\n\n

\"x\"

\n\n

Be aware of the https://localhost:3333/spaApp. The port and the name are important for our sample!

\n\n

Start your hosting ASP.NET Core app in Visual Studio (or in any IDE that you like) and all requests that point to spaApp will use the webpack dev server in the background:

\n\n

\"x\"

\n\n

With this setup you can mix client & server side styles as you like - mission accomplished, and you can use any client setup (CRA or anything else) that you would like to.

\n\n

Code

\n\n

The code (but with slightly modified values (e.g. another port)) can be found here. \nBe aware that npm i needs to be run first.

\n\n

Video

\n\n

I uploaded a video on my YouTube channel (in German) about this setup:

\n\n\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/01/25/aspnet-core-and-react/","RawContent":null,"Thumbnail":null},{"Title":"Your URL is flagged as malware/phishing, now what?","PublishedOn":"2023-01-04T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Problem

\n\n

On my last day in 2022 - Friday, December 23rd - I received a support ticket from one customer saying that our software seemed to be offline and that our servers were not responding. I checked our monitoring and the customer’s server side and everything was fine. \nMy first thought: Maybe a misconfiguration on the customer side, but after a remote support session with the customer I saw that it “should work”, but something in the customer network was blocking the requests to our services.\nNext thought: Firewall or proxy stuff. Always nasty, but we are just using port 443, so nothing too special.

\n\n

After a while I received a phone call from the customer’s firewall team and they discovered the problem: They are using a firewall solution from “Check Point” and our domain was flagged as “phishing”/”malware”. What the… \nThey even created an exception so that Check Point wouldn’t block our requests, but the next problem occurred: The customer’s “Windows Defender for Office 365” had the same “flag” for our domain, so they reverted everything, because they didn’t want to change their settings too much.

\n\n

\"x\"

\n\n

Be aware that from our end everything was working “fine”: I could access the customer services and our Windows Defender didn’t have any problems with this domain.

\n\n

Solution

\n\n

Somehow our domain was flagged as malware/phishing and we needed to get this false positive listing changed. I guess there are tons of services that “track” “bad” websites and maybe they are all connected somehow. From this incident I can only suggest:

\n\n

If you have trouble with Check Point:

\n\n

Go to “URLCAT”, register an account and try to change the category of your domain. After you submit the “change request” you will get an email like this:

\n\n
Thank you for submitting your category change request.\nWe will process your request and notify you by email (to: xxx.xxx@xxx.com ).\nYou can follow the status of your request on this page.\nYour request details\nReference ID: [GUID]\nURL: https://[domain].com\nSuggested Categories: Computers / Internet,Business / Economy\nComment: [Given comment]\n
\n\n

After ~1-2 days the change was done. Not sure if this is automated or not, but it was during Christmas.

\n\n

If you have trouble with Windows Defender:

\n\n

Go to “Report submission” in your Microsoft 365 Defender settings (you will need an account with special permissions, e.g. global admin) and add the URL as “Not junk”.

\n\n

\"x\"

\n\n

I’m not really sure if this helped or not, because we didn’t have any issues with the domain itself and I’m not sure if those “false positive” tickets bubble up into a “global defender catalog” or if this only affects our own tenant.

\n\n

Result

\n\n

Anyway - after those tickets were “resolved” by Check Point / Microsoft the problem on the customer side disappeared and everyone was happy. This was my first experience with such a “false positive malware report”. I’m not sure how we ended up on such a list and why only one customer was affected.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/01/04/checkpoint-and-defender-false-positive-url/","RawContent":null,"Thumbnail":null},{"Title":"SQLLocalDb update","PublishedOn":"2022-12-03T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Short Intro

\n\n

SqlLocalDb is a “developer” SQL Server, without the “full” SQL Server (Express) installation. If you just develop on your machine and don’t want to run a “full blown” SQL Server, this is the tooling that you might need.

\n\n

From the Microsoft Docs:

\n\n
\n

Microsoft SQL Server Express LocalDB is a feature of SQL Server Express targeted to developers. It is available on SQL Server Express with Advanced Services.

\n\n

LocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. Once LocalDB is installed, you can initiate a connection using a special connection string. When connecting, the necessary SQL Server infrastructure is automatically created and started, enabling the application to use the database without complex configuration tasks. Developer Tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server.

\n
\n\n

Problem

\n\n

(I’m not really sure how I ended up with this problem, but after I solved it I put it on my “To Blog” bucket list)

\n\n

From time to time there is a new SQLLocalDb version, but upgrading an existing installation is a bit “weird”.

\n\n

Solution

\n\n

If you have installed an older SQLLocalDb version you can manage it via sqllocaldb. If you want to update, you must delete the “current” MSSQLLocalDB instance first.

\n\n

To do this use:

\n\n
sqllocaldb stop MSSQLLocalDB\nsqllocaldb delete MSSQLLocalDB\n
\n\n
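If you want to double check what is installed before and after the update, sqllocaldb can list the existing instances and the installed LocalDB versions - just a quick sanity check:
\n\n
sqllocaldb info\nsqllocaldb versions\n
\n\n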

Then download the newest version from Microsoft. \nIf you choose “Download Media” you should see something like this:

\n\n

\"x\"

\n\n

Download it, run it and restart your PC; after that you should be able to connect to the SQLLocalDb.

\n\n

We solved this issue with the help of this blogpost.

\n\n

Hope this helps! (and I can remove it now from my bucket list \\o/ )

\n","Href":"https://blog.codeinside.eu/2022/12/03/sqllocaldb-update/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps & Azure Service Connection","PublishedOn":"2022-10-04T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Today I needed to set up a new release pipeline on our Azure DevOps Server installation to deploy some stuff automatically to Azure. The UI (at least on the Azure DevOps Server 2020 (!)) is not really clear about how to connect those two worlds, and that’s why I’m writing this short blogpost.

\n\n

First - under project settings - add a new service connection. Use the Azure Resource Manager-service. Now you should see something like this:

\n\n

\"x\"

\n\n

Be aware: You will need to register an app inside your Azure AD and you need permissions to set this up. If you are not able to follow these instructions, you might need to talk to your Azure subscription owner.

\n\n

Subscription id:

\n\n

Copy the id of your subscription here. This can be found in the subscription details:

\n\n

\"x\"

\n\n

Keep this tab open, because we need it later!

\n\n

Service principal id/key & tenant id:

\n\n

Now this wording about “Service principal” is technically correct, but really confusing if you are not familiar with Azure AD. A “service principal” is like a “service user”/”app” that you need to register in order to use it.\nThe easiest route is to create an app via the Bash Azure CLI:

\n\n
az ad sp create-for-rbac --name DevOpsPipeline\n
\n\n

If this command succeeds you should see something like this:

\n\n
{\n  \"appId\": \"[...GUID..]\",\n  \"displayName\": \"DevOpsPipeline\",\n  \"password\": \"[...PASSWORD...]\",\n  \"tenant\": \"[...Tenant GUID...]\"\n}\n
\n\n

This creates a “service principal” with a random password inside your Azure AD. The next step is to give this “service principal” a role on your subscription, because it currently has no permissions to do anything (e.g. deploy a service etc.).

\n\n

Go to the subscription details page and then to Access control (IAM). There you can add your “DevOpsPipeline”-App as “Contributor” (Be aware that this is a “powerful role”!).

\n\n
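The role assignment can also be done via the Azure CLI - a sketch with placeholders, using the appId from the command above and your subscription id:
\n\n
az role assignment create --assignee \"[...appId GUID...]\" --role \"Contributor\" --scope \"/subscriptions/[...subscription id...]\"\n
\n\n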

After that use the \"appId\": \"[...GUID..]\" from the command as Service Principal Id. \nUse the \"password\": \"[...PASSWORD...]\" as Service principal key and the \"tenant\": \"[...Tenant GUID...]\" for the tenant id.

\n\n

Now you should be able to “Verify” this connection and it should work.

\n\n

Links:\nThis blogpost helped me a lot. Here you can find the official documentation.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/10/04/azure-devops-azure-service-connection/","RawContent":null,"Thumbnail":null},{"Title":"'error MSB8011: Failed to register output.' & UTF8-BOM files","PublishedOn":"2022-08-30T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Be aware: I’m not a C++ developer and this might be an “obvious” problem, but it took me a while to resolve this issue.

\n\n

In our product we have very few C++ projects. We use these projects for very special Microsoft Office COM stuff and because of COM we need to register some components during the build. Everything worked as expected, but we renamed a few files and our build broke with:

\n\n
C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2302,5): warning MSB3075: The command \"regsvr32 /s \"C:/BuildAgentV3_1/_work/67/s\\_Artifacts\\_ReleaseParts\\XXX.Client.Addin.x64-Shims\\Common\\XXX.Common.Shim.dll\"\" exited with code 5. Please verify that you have sufficient rights to run this command. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2314,5): error MSB8011: Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\n\n(xxx = redacted)\n
\n\n

The crazy part was: Using an older version of our project just worked as expected, but all changes were “fine” from my point of view.

\n\n

After many, many attempts I remembered that our diff tool doesn’t show us everything - so I checked the file encodings: UTF8-BOM

\n\n

Somehow, if you have a UTF8-BOM encoded file that your C++ project uses to register COM stuff, it will fail. I changed the encoding to UTF8 (without BOM) and everything worked as expected.

\n\n
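If you want to find such files, a small PowerShell check for the three BOM bytes (0xEF 0xBB 0xBF) might help - a rough sketch for Windows PowerShell 5.x, and the file filter is just an example:
\n\n
Get-ChildItem -Recurse -Include *.rgs, *.idl, *.cpp | Where-Object {\n    $bytes = Get-Content $_.FullName -Encoding Byte -TotalCount 3\n    $bytes.Length -eq 3 -and $bytes[0] -eq 0xEF -and $bytes[1] -eq 0xBB -and $bytes[2] -eq 0xBF\n} | Select-Object FullName\n
\n\n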

What a day… lessons learned: Be aware of your file encodings.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/08/30/error-msb8011-failed-to-register-output-and-utf8bom/","RawContent":null,"Thumbnail":null},{"Title":"Which .NET Framework Version is installed on my machine?","PublishedOn":"2022-08-29T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

If you need to know which .NET Framework version (the “legacy” .NET Framework) is installed on your machine, try this handy one-liner:

\n\n
Get-ItemProperty \"HKLM:SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Full\"\n
\n\n

Result:

\n\n
CBS           : 1\nInstall       : 1\nInstallPath   : C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319\\\nRelease       : 528372\nServicing     : 0\nTargetVersion : 4.0.0\nVersion       : 4.8.04084\nPSPath        : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework\n                Setup\\NDP\\v4\\Full\nPSParentPath  : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\nPSChildName   : Full\nPSDrive       : HKLM\nPSProvider    : Microsoft.PowerShell.Core\\Registry\n
\n\n

The version should give you more than enough information.

\n\n
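If you only care about a single value, you can grab it directly - e.g. the Release DWORD (528372 in the output above, which matches the 4.8 version shown there):
\n\n
(Get-ItemProperty \"HKLM:SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Full\").Release\n
\n\n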

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/08/29/which-dotnet-version-is-installed-via-powershell/","RawContent":null,"Thumbnail":null},{"Title":"How to run a Azure App Service WebJob with parameters","PublishedOn":"2022-07-22T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

We are using WebJobs in our Azure App Service deployment and they are pretty “easy” for the most part. Just register a WebJob or deploy your .exe/.bat/.ps1/... under the \\site\\wwwroot\\app_data\\Jobs\\triggered folder and it should execute as described in the settings.job.

\n\n
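For reference, a triggered WebJob is usually scheduled via a settings.job file with a CRON expression - a minimal sketch (the schedule itself is just an example):
\n\n
{\n  \"schedule\": \"0 */15 * * * *\"\n}\n
\n\n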

\"x\"

\n\n

If you put any executable in this WebJob folder, it will be executed as planned.

\n\n

Problem: Parameters

\n\n

If you have a my-job.exe, then this will be invoked from the runtime. But what if you need to invoke it with a parameter like my-job.exe -param \"test\"?

\n\n

Solution: run.cmd

\n\n

The WebJob environment is “greedy”: it will search for a run.cmd (or run.exe) and if one is found, it will be executed - no matter if you have any other .exe files there.\nStick to the run.cmd and use it to invoke your actual executable like this:

\n\n
echo \"Invoke my-job.exe with parameters - Start\"\n\n..\\MyJob\\my-job.exe -param \"test\"\n\necho \"Invoke my-job.exe with parameters - Done\"\n
\n\n

Be aware that the path must “match”. We use this run.cmd approach in combination with the is_in_place option (see here) and are happy with the results.

\n\n

A more detailed explanation can be found here.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/07/22/how-to-run-a-azure-appservice-webjob-with-parameters/","RawContent":null,"Thumbnail":null},{"Title":"How to use IE proxy settings with HttpClient","PublishedOn":"2022-03-28T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Internet Explorer is - mostly - dead, but some weird settings are still around and “attached” to the old world, at least on Windows 10. \nIf your system administrator uses some advanced proxy settings (e.g. a PAC file), those will be attached to the user’s IE settings.

\n\n

If you want to use this with a HttpClient you need to code something like this:

\n\n
    string target = \"https://my-target.local\";\n    var targetUri = new Uri(target);\n    var proxyAddressForThisUri = WebRequest.GetSystemWebProxy().GetProxy(targetUri);\n    if (proxyAddressForThisUri == targetUri)\n    {\n        // no proxy needed in this case\n        _httpClient = new HttpClient();\n    }\n    else\n    {\n        // proxy needed\n        _httpClient = new HttpClient(new HttpClientHandler() { Proxy = new WebProxy(proxyAddressForThisUri) { UseDefaultCredentials = true } });\n    }\n
\n\n

The GetSystemWebProxy() call gives access to the system proxy settings of the current user. Then we can query which proxy is needed for the target. If the result is the same address as the target, then no proxy is needed. Otherwise, we inject a new WebProxy for this address.

\n\n

Hope this helps!

\n\n

Be aware: Creating new HttpClients is (at least in a server environment) not recommended. Try to reuse the same HttpClient instance!

\n\n

Also note: The proxy settings in Windows 11 are now built into the system settings, but the API still works :)

\n\n

\"x\"

\n","Href":"https://blog.codeinside.eu/2022/03/28/how-to-use-ie-proxy-settings-with-httpclient/","RawContent":null,"Thumbnail":null},{"Title":"Redirect to HTTPS with a simple web.config rule","PublishedOn":"2022-01-05T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The scenario is easy: My website is hosted in IIS and I would like to redirect all incoming HTTP traffic to the HTTPS counterpart.

\n\n

This is your solution - a “simple” rule:

\n\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<configuration>\n    <system.webServer>\n        <rewrite>\n            <rules>\n                <rule name=\"Redirect to https\" stopProcessing=\"true\">\n                    <match url=\".*\" />\n                    <conditions logicalGrouping=\"MatchAny\">\n                        <add input=\"{HTTPS}\" pattern=\"off\" />\n                    </conditions>\n                    <action type=\"Redirect\" url=\"https://{HTTP_HOST}{REQUEST_URI}\" redirectType=\"Found\" />\n                </rule>\n            </rules>\n        </rewrite>\n    </system.webServer>\n</configuration>\n
\n\n

We used this in the past to set up a “catch all” web site in an IIS that redirects all incoming HTTP traffic.\nThe actual web applications only had the HTTPS binding in place.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/01/05/redirect-to-https-with-a-simple-webconfig-rule/","RawContent":null,"Thumbnail":null},{"Title":"Select random rows","PublishedOn":"2021-12-06T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Let’s say we have a SQL table and want to retrieve 10 rows randomly - how would you do that? Although I have been working with SQL for x years, I have never encountered that problem. The solution however is quite “simple” (at least if you aren’t picky about how we define “randomness” and if you don’t try this on millions of rows):

\n\n

ORDER BY NEWID()

\n\n

The most boring way is to use the ORDER BY NEWID() clause:

\n\n
SELECT TOP 10 * FROM Products ORDER BY NEWID()\n
\n\n

This works, but if you do that on “large” datasets you might hit performance problems (e.g. more on that here)

\n\n

TABLESAMPLE

\n\n

SQL Server implements the TABLESAMPLE clause, which was new to me. It seems to perform much better than the ORDER BY NEWID() clause, but behaves a bit weird. With this clause you can specify the “sample” taken from a table. The size of the sample can be specified as PERCENT or ROWS (which are then converted to percent internally).

\n\n

Syntax:

\n\n
SELECT TOP 10 * FROM Products TABLESAMPLE (25 PERCENT)\nSELECT TOP 10 * FROM Products TABLESAMPLE (100 ROWS)\n
\n\n

The weird part is that the given number might not match the number of rows in your result. You might get more or fewer results, and if the tablesample is too small you might even get nothing in return. There are some clever ways to work around this (e.g. using the TOP 100 statement with a much larger tablesample clause to get a guaranteed result set), but it feels “strange”.\nIf you hit limitations with the first solution you might want to read more on this blog or in the Microsoft Docs.

\n\n

Stackoverflow

\n\n

Of course there is a great Stackoverflow thread with even wilder solutions.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/12/06/select-random-rows/","RawContent":null,"Thumbnail":null},{"Title":"SQL collation problems","PublishedOn":"2021-11-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

This week I deployed a new feature and tried it on different SQL databases and was a bit surprised that on one database this error message came up:

\n\n
Cannot resolve the collation conflict between \"Latin1_General_CI_AS\" and \"SQL_Latin1_General_CP1_CI_AS\" in the equal to operation.\n
\n\n

This was strange, because - at least in theory - all databases have the same schema and I was sure that each database had the same collation setting.

\n\n

Collations on columns

\n\n

Well… my theory was wrong and this SQL statement told me that “some” columns had a different collation:

\n\n
select sc.name, sc.collation_name from sys.columns sc\ninner join sys.tables t on sc.object_id=t.object_id\nwhere t.name='TABLENAME'\n
\n\n

As it turns out, some columns had the collation Latin1_General_CI_AS and some had SQL_Latin1_General_CP1_CI_AS. I’m still not sure why, but I needed to do something.

\n\n

How to change the collation

\n\n

To change the collation you can execute something like this:

\n\n
ALTER TABLE MyTable\nALTER COLUMN [MyColumn] NVARCHAR(200) COLLATE SQL_Latin1_General_CP1_CI_AS\n
\n\n

Unfortunately there are restrictions and you can’t change the collation if the column is referenced by any one of the following:

\n\n\n\n

Be aware: If you are not in control of the collation or if the collation is “fine” and you want to do this operation anyway, there might be a way to specify the collation in the SQL query.

\n\n
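Such a query-level fix could look like this - a sketch with made-up table and column names, forcing one collation in the comparison:
\n\n
SELECT *\nFROM TableA a\nINNER JOIN TableB b ON a.Name = b.Name COLLATE SQL_Latin1_General_CP1_CI_AS\n
\n\n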

For more information you might want to check out this Microsoft Docs “Set or Change the Column Collation

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/11/24/sql-collations-problem/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Build 2021 session recommendations","PublishedOn":"2021-09-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

To be fair: Microsoft Build 2021 was some months ago, but the content might still be relevant today. Sooo… it took me a while, but here is a list of sessions that I found interesting. Some sessions are “better” and some “lighter”; the order doesn’t reflect that - it is just the order in which I watched the videos.

\n\n

Each headline has a link to the video and below are some notes.

\n\n

Build cloud-native applications that run anywhere

\n\n\n\n

Build differentiated SaaS apps with the Microsoft Cloud

\n\n\n\n

Build the next generation of collaborative apps for hybrid work (https://mybuild.microsoft.com/sessions/2915b9b6-6b45-430a-9df7-2671318e2161?source=sessions)

\n\n\n\n

Mark Russinovich on Azure innovation and more! (https://mybuild.microsoft.com/sessions/b7d536c1-515f-476a-83d2-85b6cf14577a?source=sessions)

\n\n\n\n

Learn how to build exciting apps across meetings, chats, and channels within or outside Microsoft Teams (https://mybuild.microsoft.com/sessions/512470be-15d3-4b50-b180-6532c8153931?source=sessions)

\n\n\n\n

What’s new for Windows desktop application development

\n\n\n\n

Understand the ML process and embed models into apps (https://mybuild.microsoft.com/sessions/10930f2e-ad9c-460b-b91d-844d17a5a875?source=sessions)

\n\n\n\n

The future of modern application development with .NET (https://mybuild.microsoft.com/sessions/76ebac39-517d-44da-a58e-df4193b5efa9?source=sessions)

\n\n\n\n

Scott Guthrie ‘Unplugged’ – Home Edition (Extended)

\n\n\n\n

Build your first web app with Blazor & Web Assembly

\n\n\n\n

Develop apps with the Microsoft Graph Toolkit

\n\n\n\n

Application Authentication in the Microsoft Identity platform

\n\n\n\n

Double-click with Microsoft engineering leaders (https://mybuild.microsoft.com/sessions/08538f9b-e562-4d71-8b42-d240c3966ef0?source=sessions)

\n\n\n\n

.NET 6 deep dive; what’s new and what’s coming (https://mybuild.microsoft.com/sessions/70d379f4-1173-4941-b389-8796152ec7b8?source=sessions)

\n\n\n\n

Hope this helps.

\n","Href":"https://blog.codeinside.eu/2021/09/24/build-2021-recommendation/","RawContent":null,"Thumbnail":null},{"Title":"Today I learned (sort of) 'fltmc' to inspect the IO request pipeline of Windows","PublishedOn":"2021-05-30T22:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The headline is obviously a big lie, because I followed this Twitter conversation last year, but it’s still interesting to me and I wanted to write it down somewhere.

\n\n

The starting point was that Bruce Dawson (a Google programmer) noticed that building Chrome on Windows is slow for various reasons:

\n\n

Based on some twitter discussion about source-file length and build times two months ago I wrote a blog post. It's got real data based on Chromium's build, and includes animations of build-time improvements:https://t.co/lsLH8BNe48

— Bruce Dawson (Antifa) (@BruceDawson0xB) March 31, 2020
\n\n\n

Trentent Tye told him to disable the “filter driver”:

\n\n

disabling the filter driver makes it dead dead dead. Might be worth testing with the number and sizes of files you are dealing with. Even half a millisecond of processing time adds up when it runs against millions and millions of files.

— Trentent Tye (@TrententTye) April 1, 2020
\n\n\n

If you have never heard of a “filter driver” (like me :)), you might want to take a look here.

\n\n

To see the loaded filter drivers on your machine try this: Run fltmc (fltmc.exe) as admin.

\n\n

\"x\"

\n\n

Description:

\n\n

Each filter in the list sit in a pipe through which all IO requests bubble down and up. They see all IO requests, but ignore most. Ever wondered how Windows offers encrypted files, OneDrive/GDrive/DB file sync, storage quotas, system file protection, and, yes, anti-malware? ;)

— Rich Turner (@richturn_ms) April 2, 2020
\n\n\n

This makes more or less sense to me. I’m not really sure what to do with that information, but it’s cool (nerd cool, but anyway :)).

\n","Href":"https://blog.codeinside.eu/2021/05/30/fltmc-inspect-the-io-request-pipeline-of-windows/","RawContent":null,"Thumbnail":null},{"Title":"How to self host Google Fonts","PublishedOn":"2021-04-28T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Google Fonts are really nice and widely used. Typically a Google Font consists of the actual font files (e.g. woff, ttf, eot etc.) and some CSS, which points to those font files.

\n\n

In one of our applications we used an HTML/CSS/JS Bootstrap-like theme and the theme linked some Google Fonts. The problem was that we wanted to self host everything.

\n\n

After some research we discovered this tool: Google-Web-Fonts-Helper

\n\n

\"x\"

\n\n

Pick your font, select your preferred CSS option (e.g. if you need to support older browsers etc.) and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)

\n\n

The project site is on GitHub.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/04/28/how-to-self-host-google-fonts/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Graph: Read user profile and group memberships","PublishedOn":"2021-01-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

In our application we have a background service that “syncs” user data and group membership information from the Microsoft Graph to our database.

\n\n

The permission model:

\n\n

Programming against the Microsoft Graph is quite easy. There are many SDKs available, but understanding the permission model is hard.

\n\n

‘Directory.Read.All’ and ‘User.Read.All’:

\n\n

Initially we only synced the “basic” user data to our database, but then some customers wanted to reuse some other data already stored in the graph. Our app required the ‘Directory.Read.All’ permission, because we thought that this would be the “highest” permission - this is wrong!

\n\n

If you need “directory” information, e.g. memberships, the Directory.Read.All or Group.Read.All is a good starting point. But if you want to load specific user data, you might need to have the User.Read.All permission as well.

\n\n
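To give a rough idea, reading users and their memberships with the Microsoft Graph .NET SDK could look like this - a sketch based on the older fluent (v4-style) API and an already configured GraphServiceClient; method names differ in SDK v5:
\n\n
var users = await graphServiceClient.Users.Request()\n    .Select(\"id,displayName,mail\")\n    .GetAsync();\n\nforeach (var user in users)\n{\n    // loading memberships needs Directory.Read.All or Group.Read.All in addition to User.Read.All\n    var memberships = await graphServiceClient.Users[user.Id].MemberOf.Request().GetAsync();\n}\n
\n\n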

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/01/31/microsoft-graph-read-user-profile-and-group-memberships/","RawContent":null,"Thumbnail":null},{"Title":"How to get all distribution lists of a user with a single LDAP query","PublishedOn":"2020-12-31T00:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

In 2007 I wrote a blogpost about how easy it is to get all “groups” of a given user via the tokenGroups attribute.

\n\n

Last month I had the task to check why “distribution list memberships” are not part of the result.

\n\n

The reason is simple:

\n\n

A pure distribution list (not security enabled) is not a security group and only security groups are part of the “tokenGroups” attribute.

\n\n

After some thoughts and discussions we agreed that it would be good if we could enhance our function and treat distribution lists like security groups.

\n\n

How to get all distribution lists of a user?

\n\n

Getting all groups of a given user might be seen as trivial, but the problem is that groups can contain other groups. \nAs always, there are a couple of ways to get a “full flat” list of all group memberships.

\n\n

A stupid way would be to load all groups in a recursive function - this might work, but will result in a flood of requests.

\n\n

A clever way would be to write a good LDAP query and let the Active Directory do the heavy lifting for us, right?

\n\n

1.2.840.113556.1.4.1941

\n\n

I found some sample code online with a very strange LDAP query and it turns out:\nThere is a “magic” LDAP matching rule called “LDAP_MATCHING_RULE_IN_CHAIN” and it does everything we are looking for:

\n\n
var getGroupsFilterForDn = $\"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:= {distinguishedName}))\";\n                using (var dirSearch = CreateDirectorySearcher(getGroupsFilterForDn))\n                {\n                    using (var results = dirSearch.FindAll())\n                    {\n                        foreach (SearchResult result in results)\n                        {\n                            if (result.Properties.Contains(\"name\") && result.Properties.Contains(\"objectSid\") && result.Properties.Contains(\"groupType\"))\n                                groups.Add(new GroupResult() { Name = (string)result.Properties[\"name\"][0], GroupType = (int)result.Properties[\"groupType\"][0], ObjectSid = new SecurityIdentifier((byte[])result.Properties[\"objectSid\"][0], 0).ToString() });\n                        }\n                    }\n                }\n
\n\n

With a given distinguishedName of the target user, we can load all distribution and security groups (see below…) transitively!

\n\n

Combine tokenGroups and this

\n\n

During our testing we found some minor differences between the LDAP_MATCHING_RULE_IN_CHAIN and the tokenGroups approach. Some “system-level” security groups were missing with the LDAP_MATCHING_RULE_IN_CHAIN way. In our production code we use a combination of those two approaches and it seems to work.

\n\n

A full demo code how to get all distribution lists for a user can be found on GitHub.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/12/31/how-get-all-distribution-lists-of-a-user-with-a-single-ldap-query/","RawContent":null,"Thumbnail":null},{"Title":"Update AzureDevOps Server 2019 to AzureDevOps Server 2019 Update 1","PublishedOn":"2020-11-30T18:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

We did this update in May 2020, but I forgot to publish the blogpost… so here we are

\n\n

Last year we updated to Azure DevOps Server 2019 and it went more or less smoothly.

\n\n

In May we decided to update to the “newest” release at that time: Azure DevOps Server 2019 Update 1.1

\n\n

Setup

\n\n

Our AzureDevOps Server was running on a “new” Windows Server 2019 and everything was still kind of newish - so we just needed to update the AzureDevOps Server app.

\n\n

Update process

\n\n

The actual update was really easy, but we had some issues after the installation.

\n\n

Steps:

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

Aftermath

\n\n

We had some issues with our Build Agents - they couldn’t connect to the AzureDevOps Server:

\n\n
TF400813: Resource not available for anonymous access\n
\n\n

As a first “workaround” (and a nice enhancement) we switched from HTTP to HTTPS internally, but this didn’t solve the problem.

\n\n

The real reason was that our “Azure DevOps Service User” didn’t have the required write permissions for this folder:

\n\n
C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys\n
\n\n

The connection issue went away, but now we introduced another problem: Our SSL Certificate was “self signed” (from our Domain Controller), so we needed to register the agents like this:

\n\n
.\\config.cmd --gituseschannel --url https://.../tfs/ --auth Integrated --pool Default-VS2019 --replace --work _work\n
\n\n

The important parameter is --gituseschannel, which is needed when dealing with “self signed, but Domain ‘trusted’”-certificates.

\n\n

With this setting everything seemed to work as expected.

\n\n

Only node.js projects or toolings were “problematic”, because node.js itself doesn’t use the Windows Certificate Store.

\n\n

To resolve this, the root certificate from our Domain controller must be stored on the agent.

\n\n
  [Environment]::SetEnvironmentVariable(\"NODE_EXTRA_CA_CERTS\", \"C:\\SSLCert\\root-CA.pem\", \"Machine\") \n
\n\n

Summary

\n\n

The update itself was easy, but it took us some hours to configure our Build Agents. After the initial hiccup it went smoothly from there - no issues and we are ready for the next update, which is already released.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/11/30/update-onprem-azuredevops-server-2019-to-azuredevops-server-2019-update1/","RawContent":null,"Thumbnail":null},{"Title":"DllRegisterServer 0x80020009 Error","PublishedOn":"2020-10-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Last week I had a very strange issue and the solution was really “easy”, but took me a while.

\n\n

Scenario

\n\n

For our products we build Office COM Addins with a C++ based “Shim” that boots up our .NET code (e.g. something like this).\nAs is the nature of COM: It requires some pretty dumb registry entries to work and in theory our toolchain should “build” and automatically “register” the output.

\n\n

Problem

\n\n

The registration process just failed with an error message like this:

\n\n
The module xxx.dll was loaded but the call to DllRegisterServer failed with error code 0x80020009\n
\n\n

After some research you will find some very old stuff or only some general advice like in this Stackoverflow.com question, e.g. “run it as administrator”.

\n\n

The solution

\n\n

Luckily we had another project where we used the same approach and it worked without any issues. After comparing the files I noticed some subtle differences: The file encoding was different!

\n\n

In my failing project some C++ files were encoded with UTF8-BOM. I changed everything to UTF8 and after this change it worked.

\n\n
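
If you need to do the same, re-saving the affected files without a BOM can be scripted. A small sketch - the folder path and the file pattern are placeholders, this is not part of our actual toolchain:

\n\n
// Sketch: re-save C++ files as UTF-8 without a BOM - the path and file pattern are placeholders.\nusing System.IO;\nusing System.Text;\n\nvar utf8NoBom = new UTF8Encoding(encoderShouldEmitUTF8Identifier: false);\nforeach (var file in Directory.GetFiles(@\"C:\\src\\shim\", \"*.cpp\"))\n{\n    string text = File.ReadAllText(file);      // detects and strips an existing BOM\n    File.WriteAllText(file, text, utf8NoBom);  // writes plain UTF-8 without a BOM\n}\n
\n\n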

My reaction:

\n\n
(╯°□°)╯︵ ┻━┻\n
\n\n

I’m not a C++ dev and I’m not even sure why some files had the wrong encoding in the first place. It “worked” - at least Visual Studio 2019 was able to build the stuff, but register it with “regsrv32” just failed.

\n\n

I needed some hours to figure that out.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/10/31/dllregisterserver-0x80020009-error/","RawContent":null,"Thumbnail":null},{"Title":"How to share an Azure subscription in a team","PublishedOn":"2020-09-29T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

We at Sevitec are moving more and more workloads for us or our customers to Azure.

\n\n

So the basic question needs an answer:

\n\n

How can a team share an Azure subscription?

\n\n

Be aware: This approach works for us. There might be better options. If we do something stupid, just tell me in the comments or via email - help is appreciated.

\n\n

Step 1: Create a directory

\n\n

We have a “company directory” with a fully configured Azure Active Directory (incl. User sync between our OnPrem system, Office 365 licenses etc.).

\n\n

Our rule of thumb is: We create an individual directory for each product team and all team members are invited into the new directory.

\n\n

Keep in mind: A directory itself costs you nothing but might help you to keep things manageable.

\n\n

\"Create

\n\n

Step 2: Create a group

\n\n

This step might be optional, but all team members - except the “Administrator” - have the same rights and permissions in our company. To keep things simple, we created a group with all team members.

\n\n

\"Put

\n\n

Step 3: Create a subscription

\n\n

Now create a subscription. The typical “Pay-as-you-go” offer will work. Be aware that the user who creates the subscription is initially set up as the Administrator.

\n\n

\"Create

\n\n

Step 4: “Share” the subscription

\n\n

This is the most important step:

\n\n

You need to grant the individual users or the group (from step 2) the “Contributor” role for this subscription via “Access control (IAM)”.\nThe hard part is to understand how those “Role assignments” affect the subscription. I’m not even sure if “Contributor” is the best fit, but it works for us.

\n\n

\"Pick

\n\n
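
If you prefer scripting over clicking through the portal, the same role assignment can also be done with the Azure CLI. A sketch - the object id and the subscription id are placeholders:

\n\n
az role assignment create --assignee \"<user-or-group-object-id>\" --role \"Contributor\" --scope \"/subscriptions/<subscription-id>\"\n
\n\n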

Summary

\n\n

I’m not really sure why such a basic concept is documented so poorly, but once you pick the correct role assignment the other team members should be able to use the subscription.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/09/29/how-to-share-an-azure-subscription-in-a-team/","RawContent":null,"Thumbnail":null},{"Title":"How to run a legacy WCF .svc Service on Azure AppService","PublishedOn":"2020-08-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Last month we wanted to run a good old WCF powered service on Azure’s “App Service”.

\n\n

WCF… what’s that?

\n\n

If you are not familiar with WCF: Good! For the interested ones: WCF is or was a framework to build mostly SOAP based services in the .NET Framework 3.0 timeframe. Some parts were “good”, but most developers would call it a complex monster.

\n\n

Even in the glory days of WCF I tried to avoid it at all cost, but unfortunately I need to maintain a WCF based service.

\n\n

For the curious: The project template and the tech is still there. Search for “WCF”.

\n\n

\"VS

\n\n

The template will produce something like that:

\n\n

The actual “service endpoint” is the Service1.svc file.

\n\n

\"WCF

\n\n

Running on Azure: The problem

\n\n

Let’s assume we have an application with a .svc endpoint. In theory we can deploy this application to a standard Windows Server/IIS without major problems.

\n\n

Now we try to deploy this very same application to Azure AppService and this is the result after we invoke the service from the browser:

\n\n
\"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.\" (HTTP Response was 404)\n
\n\n

Strange… very strange. In theory a blank HTTP 400 should appear, but not an HTTP 404. The service itself was not “triggered” - we had some logging in place, but the request didn’t get to the actual service.

\n\n

After hours of debugging, testing and googling around I created a new “blank” WCF service from the Visual Studio template and got the same error.

\n\n

The good news: It was not just my code - something was blocking the request.

\n\n

After some hours I found a helpful switch in the Azure Portal and activated the “Failed Request tracing” feature (yeah… I could have found it sooner) and I discovered this:

\n\n

\"Failed

\n\n

Running on Azure: The solution

\n\n

My initial thoughts were correct: The request was blocked. It was treated as “static content” and the actual WCF module was not mapped to the .svc extension.

\n\n

To “re-map” the .svc extension to the correct handler I needed to add this to the web.config:

\n\n
...\n<system.webServer>\n    ...\n\t<handlers>\n\t\t<remove name=\"svc-integrated\" />\n\t\t<add name=\"svc-integrated\" path=\"*.svc\" verb=\"*\" type=\"System.ServiceModel.Activation.HttpHandler\" resourceType=\"File\" preCondition=\"integratedMode\" />\n\t</handlers>\n</system.webServer>\n...\n\n
\n\n

With this configuration everything worked as expected on Azure AppService.

\n\n

Be aware:

\n\n

I’m really not 100% sure why this is needed in the first place. I’m also not 100% sure if the name svc-integrated is correct or important.

\n\n

This blogpost is a result of these tweets.

\n\n

That was a tough ride… Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/08/31/how-to-run-a-legacy-wcf-svc-service-on-azure-app-service/","RawContent":null,"Thumbnail":null},{"Title":"EWS, Exchange Online and OAuth with a Service Account","PublishedOn":"2020-07-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

This week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.

\n\n

But here is the full story:

\n\n

Our goal

\n\n

We wanted to access contact information via a web service from the organization, just like the traditional “Global Address List” in Exchange/Outlook. We knew that EWS was an option for the OnPrem Exchange, but what about Exchange Online?

\n\n

The big problem: Authentication is tricky. We wanted to use a “traditional” Service Account approach (think of username/password). Unfortunately the “basic auth” way will be blocked in the near future because of security concerns (makes sense TBH). There is an alternative approach available, but at first it seems not to work as we would like.

\n\n

So… what now?

\n\n

EWS is… old. Why?

\n\n

The Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and OnPrem Exchanges. On the other hand we could use the Microsoft Graph, but - at least currently - there is not a single “contact” API available.

\n\n

To mimic the GAL we would need to query List Users and List orgContacts, which would be ok, but the “orgContacts” has a “flaw”. \n“Hidden” contacts (“msexchhidefromaddresslists”) are returned from this API and we thought that this might be a NoGo for our customers.

\n\n

Another argument for using EWS was, that we could support OnPrem and Online with one code base.

\n\n

Docs from Microsoft

\n\n

The good news is that EWS and the auth problem are more or less well documented here.

\n\n

There are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.

\n\n

Delegation:

\n\n

Delegation means, that we can write a desktop app and all actions are executed in the name of the signed in user.

\n\n

Application:

\n\n

Application means, that the app itself can do some actions without any user involved.

\n\n

EWS and the application way

\n\n

At first we thought that we might need to use the “application” way.

\n\n

The good news is, that this was easy and worked. \nThe bad news is, that the application needs the EWS permission “full_access_as_app”, which means that our application can access all mailboxes from this tenant. This might be ok for certain apps, but this scared us.

\n\n

Back to the delegation way:

\n\n

EWS and the delegation way

\n\n

The documentation from Microsoft is good, but our “Service Account” use case was not mentioned. In the example from Microsoft a user needs to log in manually.

\n\n

Solution / TL;DR

\n\n

After some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:

\n\n
    \n
  1. Follow the normal “delegate” steps from the Microsoft Docs
  2. Instead of this code, which will trigger the login UI:
\n\n
...\n// The permission scope required for EWS access\nvar ewsScopes = new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" };\n\n// Make the interactive token request\nvar authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();\n...\n
\n\n

Use the “AcquireTokenByUsernamePassword” method:

\n\n
...\nvar cred = new NetworkCredential(\"UserName\", \"Password\");\nvar authResult = await pca.AcquireTokenByUsernamePassword(new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" }, cred.UserName, cred.SecurePassword).ExecuteAsync();\n...\n
\n\n

To make this work you need to enable the “Treat application as public client” under “Authentication” > “Advanced settings” in our AAD Application because this uses the “Resource owner password credential flow”.

\n\n

Now you should be able to get the AccessToken and do some EWS magic.

\n\n
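
What “EWS magic” looks like in code: with the acquired token you can create an ExchangeService instance. A minimal sketch, assuming the EWS Managed API (the Microsoft.Exchange.WebServices NuGet package):

\n\n
// Sketch: use the acquired OAuth token with the EWS Managed API (Microsoft.Exchange.WebServices NuGet).\nusing Microsoft.Exchange.WebServices.Data;\n\nvar ewsClient = new ExchangeService();\newsClient.Url = new Uri(\"https://outlook.office365.com/EWS/Exchange.asmx\");\newsClient.Credentials = new OAuthCredentials(authResult.AccessToken);\n\n// e.g. resolve a name against the Global Address List\nvar matches = ewsClient.ResolveName(\"Smith\");\n
\n\n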

I posted a shorter version on Stackoverflow.com

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/07/31/ews-exchange-online-oauth-with-a-service-account/","RawContent":null,"Thumbnail":null},{"Title":"Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?","PublishedOn":"2020-06-30T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Within our product we move more and more stuff into the .NET Core land.\nLast week we had a discussion around the needed software requirements and in the .NET Framework land this question was always easy to answer:

\n\n
\n

.NET Framework 4.5 or higher.

\n
\n\n

With .NET Core the answer is slightly different:

\n\n

In theory versions within the same major release are compatible, e.g. if you compiled your app with .NET Core 3.0 and a .NET Core runtime 3.1 is the only installed 3.X runtime on the machine, this runtime is used.

\n\n

This system is called “Framework-dependent apps roll forward” and sounds good.

\n\n
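
The roll forward behaviour can also be controlled explicitly. A sketch of the relevant setting in the *.runtimeconfig.json of your app - the LatestMinor value is just an example, the default policy is Minor:

\n\n
{\n  \"runtimeOptions\": {\n    \"tfm\": \"netcoreapp3.0\",\n    \"rollForward\": \"LatestMinor\",\n    \"framework\": {\n      \"name\": \"Microsoft.NETCore.App\",\n      \"version\": \"3.0.0\"\n    }\n  }\n}\n
\n\n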

The bad part

\n\n

Unfortunately this didn’t work for us. Not sure why, but our app refused to work because a .dll was not found or missing. The reason is currently not clear. Be aware that Microsoft has written a hint that such things might occur:

\n\n
\n

It’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.

\n
\n\n

The good part

\n\n

With .NET Core we could ship the framework with our app and it should run fine wherever we deploy it.

\n\n

Summary

\n\n

Read the docs about the “app roll forward” approach if you have similar concerns, but test your app with that combination.

\n\n

As a side note: 3.0 is not supported anymore, so it would be good to upgrade to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/06/30/can-a-dotnet-core-30-compiled-app-run-with-a-dotnet-core-31-runtime/","RawContent":null,"Thumbnail":null},{"Title":"SqlBulkCopy for fast bulk inserts","PublishedOn":"2020-05-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Within our product OneOffixx we can create a “full export” from the product database. Because of limitations with normal MS SQL backups (e.g. compatibility with older SQL databases etc.), we created our own export mechanism.\nAn export can be up to 1GB and more. This is nothing too serious and far from “big data”, but still not easy to handle and we had some issues importing larger “exports”. \nOur importer was based on an Entity Framework 6 implementation and it was really slow… last month we tried to resolve this and we are quite happy. Here is how we did it:

\n\n

TL;DR Problem:

\n\n

Bulk Insert with a Entity Framework based implementation is really slow. There is at least one NuGet package, which seems to help, but unfortunately we run into some obscure issues. This Stackoverflow question highlights some numbers and ways of doing it.

\n\n

SqlBulkCopy to the rescue:

\n\n

After my failed attempt to tame our EF implementation I discovered the SqlBulkCopy operation. In .NET (Full Framework and .NET Standard!) the usage is simple via the “SqlBulkCopy” class.

\n\n

Our importer looks more or less like this:

\n\n
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30), TransactionScopeAsyncFlowOption.Enabled))\nusing (SqlBulkCopy bulkCopy = new SqlBulkCopy(databaseConnectionString))\n    {\n    var dt = new DataTable();\n    dt.Columns.Add(\"DataColumnA\");\n    dt.Columns.Add(\"DataColumnB\");\n    dt.Columns.Add(\"DataColumnId\", typeof(Guid));\n\n    foreach (var dataEntry in data)\n    {\n        dt.Rows.Add(dataEntry.A, dataEntry.B, dataEntry.Id);\n    }\n\n    bulkCopy.DestinationTableName = \"Data\";\n    bulkCopy.AutoMapColumns(dt);\n    bulkCopy.WriteToServer(dt);\n\n    scope.Complete();\n    }\n\npublic static class Extensions\n    {\n        public static void AutoMapColumns(this SqlBulkCopy sbc, DataTable dt)\n        {\n            sbc.ColumnMappings.Clear();\n\n            foreach (DataColumn column in dt.Columns)\n            {\n                sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);\n            }\n        }\n    }       \n
\n\n

Some notes:

\n\n\n\n

Only “downside”: SqlBulkCopy is a table by table insert. You need to insert your data in the correct order if you have any db constraints in your schema.

\n\n

Result:

\n\n

We reduced the import from several minutes to seconds :)

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/05/31/sqlbulkcopy-for-fast-bulk-inserts/","RawContent":null,"Thumbnail":null},{"Title":"Blazor for Office Add-ins: First look","PublishedOn":"2020-04-30T21:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Last week I did some research and tried to build a pretty basic Office Addin (within the “new” web based Addin model) with Blazor.

\n\n

Side note: Last year I blogged about how to build Office Add-ins with ASP.NET Core.

\n\n

Why Blazor?

\n\n

My daily work home is in the C# and .NET land, so it would be great to use Blazor for Office Addins, right? \nAn Office Add-in is just a web application with a “communication tunnel” to the hosting Office application - not very different from the real web.

\n\n

What (might) work: Serverside Blazor

\n\n

My first try was with a “standard” serverside Blazor application and I just pointed the dummy Office Add-in manifest file to the site and it (obviously) worked:

\n\n

Mhh... maybe?🤔😏#Blazor #OfficeDev pic.twitter.com/BzdVQzIeqA

— Robert Muehsig (@robert0muehsig) April 23, 2020
\n\n\n

I assume that serverside Blazor is not very “complicated” for the client, so it would probably work.

\n\n

After my initial tweet Manuel Sidler jumped in and made a simple demo project, which also invokes the Office.js APIs from C#!

\n\n

Building an #Office Add-In based on #Blazor (Server) could be possible. Whether it's a good idea or not is another story ;) https://t.co/LdSPYl4SRh (thanks @robert0muehsig to get me jump up on this idea) pic.twitter.com/1w29212qdS

— Manuel Sidler (@manuelsidler) April 24, 2020
\n\n\n

Checkout his repository on GitHub for further information.

\n\n

What won’t work: WebAssembly (if I don’t miss anything)

\n\n

Serverside Blazor is cool, but has some problems (e.g. a server connection is needed and scaling is not that easy) - what about WebAssembly?

\n\n

Well… Blazor WebAssembly is still in preview and I tried the same setup that worked for serverside blazor.

\n\n

Result:

\n\n

The desktop PowerPoint (I tried to build a PowerPoint addin) keeps crashing after I add the addin. On Office Online it seems to work, but not for a very long time:

\n\n

Blazor WebAssembly seems not to work or at least the startup is super weird :-/ pic.twitter.com/IvnecQFMj2

— Robert Muehsig (@robert0muehsig) April 27, 2020
\n\n\n

Possible reasons:

\n\n

The default Blazor WebAssembly installs a service worker. I removed that part, but I’m not 100% sure if I did it correctly. At least service workers are currently not supported by the Office Add-in Edge WebView. My experience with Office Online and the Blazor addin failed as well and I don’t think that service workers are the problem.

\n\n

I’m not really sure why it’s not working, but it’s quite early for Blazor WebAssembly, so… time will tell.

\n\n

What does the Office Dev Team think of Blazor?

\n\n

Currently I just found one comment on this blogpost regarding Blazor:

\n\n
Will Blazor be supported for Office Add-ins?\n\nNo, it will be a React Office.js add-in. We don’t have any plans to support Blazor yet. For that, please put a note on our UserVoice channel: https://officespdev.uservoice.com. There are several UserVoice items already on this, so know that we are listening to your feedback and prioritizing based on customer requests. The more requests we get for particular features, the more we will consider moving forward with developing it. \n
\n\n

Well… vote for it! ;)

\n","Href":"https://blog.codeinside.eu/2020/04/30/blazor-for-office-addins-first-look/","RawContent":null,"Thumbnail":null},{"Title":"Escape enviroment variables in MSIEXEC parameters","PublishedOn":"2020-03-27T23:59:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Problem

\n\n

Customers can install our product on Windows with a standard MSI package. To automate the installation administrators can use MSIEXEC and MSI parameters to configure our client.

\n\n

A simple installation can look like this:

\n\n
msiexec /qb /i \"OneOffixx.msi\" ... CACHEFOLDER=\"D:/OneOffixx/\"\n
\n\n

The “CACHEFOLDER” parameter will be written into the .exe.config file and our program reads it and stores offline content under the given location.

\n\n
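
Just to illustrate how such an MSI property is consumed afterwards - a sketch, the appSettings key name “CacheFolder” is an assumption and not necessarily what our installer writes:

\n\n
// Sketch: read the value the installer wrote into the .exe.config - the key name \"CacheFolder\" is a placeholder.\nusing System.Configuration;\n\nstring cacheFolder = ConfigurationManager.AppSettings[\"CacheFolder\"];\nConsole.WriteLine($\"Offline content goes to: {cacheFolder}\");\n
\n\n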

So far, so good.

\n\n

For Terminal Server installations or “multi-user” scenarios this will not work, because each cache is bound to a local account. To solve this we could just insert the “%username%” environment variable, right?

\n\n

Well… no… at least not with the obvious call, because this:

\n\n
msiexec /qb /i \"OneOffixx.msi\" ... CACHEFOLDER=\"D:/%username%/OneOffixx/\"\n
\n\n

will result in a call like this:

\n\n
msiexec /qb /i \"OneOffixx.msi\" ... CACHEFOLDER=\"D:/admin/OneOffixx/\"\n
\n\n

Solution

\n\n

I needed a few hours and some Google-Fu to find the answer.

\n\n

To “escape” those variables we need to invoke it like this:

\n\n
msiexec /qb /i \"OneOffixx.msi\" ... CACHEFOLDER=\"D:/%%username%%/OneOffixx/\"\n
\n\n

Be aware: This stuff is a mess. It depends on your scenario. Check out this Stackoverflow answer to learn more. The double percent did the trick for us, so I guess it is “ok-ish”.

\n\n

Update

\n\n

The above solution only works if you save the command in a file, e.g. in an install.bat file. If you want to invoke this directly in the CMD shell, use this:

\n\n
cmd /v /c msiexec /qb /i \"OneOffixx.msi\" ... CACHEFOLDER=\"%appdata%/OneOffixx\"\n
\n\n

The important parameter is “/v”, which enables delayed environment variable expansion.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/03/27/escape-environment-variables-in-msiexec-parameters/","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"YouTube":{"FeedItems":[{"Title":"Erste Schritte mit dem Azure OpenAI Service","PublishedOn":"2023-03-23T22:30:48+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=VVNHT4gVxDo","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/VVNHT4gVxDo/hqdefault.jpg"},{"Title":"Erster Schritt in die Source Control: Visual Studio Projekte auf GitHub pushen","PublishedOn":"2023-03-17T21:59:57+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=iKQS5nYbC-k","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/iKQS5nYbC-k/hqdefault.jpg"},{"Title":"Vite.js für React & TypeScript für ASP.NET Core & Visual Studio Entwickler","PublishedOn":"2023-02-12T00:25:03+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=-2iiXpBcmDY","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/-2iiXpBcmDY/hqdefault.jpg"},{"Title":"React.js mit TypeScript in ASP.NET Core mit Visual Studio & Visual Studio Code","PublishedOn":"2023-01-26T23:35:26+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=IgW79wxMO-c","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/IgW79wxMO-c/hqdefault.jpg"},{"Title":"React.js mit ASP.NET Core - ein Einstieg mit Visual Studio","PublishedOn":"2022-10-07T23:15:55+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=gIzMtWDs_QM","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/gIzMtWDs_QM/hqdefault.jpg"},{"Title":"Einstieg in die Webentwicklung mit .NET 6 & ASP.NET Core","PublishedOn":"2022-04-12T21:13:18+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=WtpzsW5Xwqo","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/WtpzsW5Xwqo/hqdefault.jpg"},{"Title":"Das erste .NET 6 Programm","PublishedOn":"2022-01-30T22:21:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=fVzo2qJubmA","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/fVzo2qJubmA/hqdefault.jpg"},{"Title":"Azure SQL - ist das echt so teuer? 
Neee...","PublishedOn":"2022-01-11T21:49:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=dNaIOGQj15M","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/dNaIOGQj15M/hqdefault.jpg"},{"Title":"Was sind \"Project Templates\" in Visual Studio?","PublishedOn":"2021-12-22T22:36:25+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=_IMabo9yHSA","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/_IMabo9yHSA/hqdefault.jpg"},{"Title":".NET Versionen - was bedeutet LTS und Current?","PublishedOn":"2021-12-21T21:06:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2ghTKF0Ey_0","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2ghTKF0Ey_0/hqdefault.jpg"},{"Title":"Einstieg in die .NET Entwicklung für Anfänger","PublishedOn":"2021-12-20T22:18:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2EcSJDX-8-s","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2EcSJDX-8-s/hqdefault.jpg"},{"Title":"Erste Schritte mit Unit Tests","PublishedOn":"2008-11-05T00:14:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=tjAv1-Qb4rY","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/tjAv1-Qb4rY/hqdefault.jpg"},{"Title":"3 Schichten Architektur","PublishedOn":"2008-10-17T22:01:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=27yknlB8xeg","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/27yknlB8xeg/hqdefault.jpg"}],"ResultType":"Feed"},"O_Blog":{"FeedItems":[{"Title":"How to build a simple hate speech detector with machine learning","PublishedOn":"2019-08-02T13:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"

Not everybody on the internet behaves nicely and some comments are just rude or offensive. If you run a web page that offers a public comment function, hate speech can be a real problem. For example in Germany, you are legally required to delete hate speech comments. This can be challenging if you have to check thousands of comments each day. \nSo wouldn’t it be nice if you could automatically check the user’s comment and give them a little hint to stay nice?\n

\n\n

The simplest thing you could do is to check if the user’s text contains offensive words. However, this approach is limited since you can offend people without using offensive words.

\n\n

This post will show you how to train a machine learning model that can detect if a comment or text is offensive. And to start you need just a few lines of Python code \\o/

\n\n

The Data

\n\n

At first, you need data. In this case, you will need a list of offensive and nonoffensive texts. I wrote this tutorial for a machine learning course in Germany, so I used German texts but you should be able to use other languages too.

\n\n

For a machine learning competition, scientists provided a list of comments labeled as offensive and nonoffensive (Germeval 2018, Subtask 1). This is perfect for us since we just can use this data.

\n\n

The Code

\n\n

To tackle this task I would first establish a baseline and then improve this solution step by step. Luckily they also published the scores of all submission, so we can get a sense of how well we are doing.

\n\n

For our baseline model we are going to use Facebook’s fastText. It’s simple to use, works with many languages and does not require any special hardware like a GPU. Oh, and it’s fast :)

\n\n

1. Load the data

\n\n

After you have downloaded the training data file germeval2018.training.txt you need to transform this data into a format that fastText can read.\nFastText’s standard format looks like this: “__label__[your label] some text”:

\n\n
__label__offensive some insults\n__label__other have a nice day\n
\n\n

2. Train the Model

\n\n

To train the model you need to install the fastText Python package.

\n\n
$ pip install fasttext\n
\n

To train the model you need just three lines of code.

\n
import fasttext\ntraining_parameters = {'epoch': 50, 'lr': 0.05, 'loss': \"ns\", 'thread': 8, 'ws': 5, 'dim': 100}    \nmodel = fasttext.supervised('fasttext.train', 'model', **training_parameters)\n
\n\n

I packed all the training parameters into a separate dictionary. To me that looks a bit cleaner but you don’t need to do that.

\n\n

3. Test your Model

\n\n

After we trained the model it is time to test how it performs. FastText provides us a handy test method to evaluate the model’s performance. To compare our model with the other models from the GermEval contest I also added a lambda which calculates the average F1 score. For now, I did not use the official test script from the contest’s repository, which you should do if you want to participate in such contests.

\n\n
def test(model):\n    f1_score = lambda precision, recall: 2 * ((precision * recall) / (precision + recall))\n    nexamples, recall, precision = model.test('fasttext.test')\n    print (f'recall: {recall}' )\n    print (f'precision: {precision}')\n    print (f'f1 score: {f1_score(precision,recall)}')\n    print (f'number of examples: {nexamples}')\n
\n\n

I don’t know about you, but I am so curious how we score. Annnnnnnd:

\n\n
recall: 0.7018686296715742\nprecision: 0.7018686296715742\nf1 score: 0.7018686296715742\nnumber of examples: 3532\n
\n\n

Looking at the results we can see that the best other model had an average F1 score of 76.77 and our model achieves - without any optimization or preprocessing - an F1 score of 70.18.

\n\n

This is pretty good since the models for these contests are usually specially optimized for the given data.

\n\n

FastText is a clever piece of software that uses some neat tricks. If you are interested in fastText you should take a look at the paper and this one. For example, fastText uses character n-grams. This approach is well suited for the German language, which uses a lot of compound words.

\n\n

Next Steps

\n\n

In this very basic tutorial, we trained a model with just a few lines of Python code. There are several things you can do to improve this model. The first step would be to preprocess your data. During preprocessing you could lower case all texts, remove URLs and special characters, correct spelling, etc. After every optimization step, you can test your model and check if your scores went up. Happy hacking :)

\n\n

Some Ideas:

\n\n
    \n
  1. Preprocess the data
  2. Optimize the parameters (number of training epochs, learning rate, embedding dims, word n-grams)
  3. Use pre-trained word vectors from the fastText website
  4. Add more data to the training set
  5. Use data augmentation.
\n\n

Here is the full code:

\n\n\n\n

Credit: Photo by Jon Tyson on Unsplash

","Href":"https://www.oliverguhr.eu/nlp/jekyll/2019/08/02/build-a-simple-hate-speech-detector-with-machine-learning.html","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"GitHubEventsUser":{"Events":[{"Id":"30305352867","Type":"IssuesEvent","CreatedAt":"2023-07-10T09:06:50","Actor":"oliverguhr","Repository":"oliverguhr/spelling","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/oliverguhr/spelling/issues/4","RelatedDescription":"Closed issue \"You hugging face shared model produces different results at my end\" (#4) at oliverguhr/spelling","RelatedBody":"Hi there,\r\n\r\nsince my graphics card is pretty slow, I started playing around with your hugging face provided version of the german and english models. If I go for the most minimalistic approach, I get different behaviours when using the english or german version.\r\n\r\n**Inputs (for both variants):**\r\nInput 1: `das idst ein neuZr test`\r\nInput 2: `Well maybler ill just write as fast as i can to get this thing stugglinh`\r\n\r\n**German results:**\r\nResult 1: `[{'generated_text': 'Das ist ein neuer Test. Das ist ein neuer Test. Das ist ein neuer Test. Das'}]`\r\nResult 2: `[{'generated_text': 'Will maybler will just write as fast as is can to get this thing st'}]`\r\n\r\n**English results:**\r\nResult 1: `[{'generated_text': 'As idest in near test.'}]`\r\nResult 2: `[{'generated_text': \"Well, maybe I'll just write as fast as I can to get this thing stuck\"}]`\r\n\r\n**Code to reproduce:**\r\n```python\r\nfrom transformers import pipeline\r\nfix_spelling = pipeline(\"text2text-generation\", model=\"oliverguhr/spelling-correction-english-base\")\r\n# fix_spelling = pipeline(\"text2text-generation\", model=\"oliverguhr/spelling-correction-german-base\")\r\nprint(fix_spelling(\"das idst ein neuZr test\"))\r\nprint(fix_spelling(\"Well maybler ill just write as fast as i can to get this thing stugglinh\"))\r\n```\r\n\r\nAm I doing something completely wrong here? Do you have some insights on why this might be? Sounds pretty strange to me, that these 2 variants differ in the length of the results they present, but I am totally new to this.\r\n\r\nThanks alot for your awesome work,\r\nMichiruf"},{"Id":"30190929118","Type":"IssuesEvent","CreatedAt":"2023-07-04T15:19:11","Actor":"oliverguhr","Repository":"oliverguhr/spelling","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/oliverguhr/spelling/issues/3","RelatedDescription":"Closed issue \"Fail to reproduce results\" (#3) at oliverguhr/spelling","RelatedBody":"I'm trying to reproduce results of the English model before making a French one.\r\nLoss stabilizes around 2.0 after 0.2 epochs, slowly decreasing to 1.7.\r\n\r\nI'm using provided data and the script generate_dataset.py for dataset generation.\r\n\r\n**Training parameters** are default ones, except for batch sizes that I had to decrease from 4 to 2:\r\n- learning_rate: 0.0003\r\n- train_batch_size: 2\r\n- eval_batch_size: 2\r\n- seed: 42\r\n- gradient_accumulation_steps: 8\r\n- total_train_batch_size: 16\r\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\r\n- lr_scheduler_type: linear\r\n- num_epochs: 1.0\r\n\r\n**Framework versions:**\r\n\r\n- Transformers 4.24.0\r\n- Pytorch 1.10.0+cu113\r\n- Datasets 2.6.1\r\n- Tokenizers 0.13.2\r\n\r\nI had to manually download model and replace name by path inside train_bart_model.sh.\r\nModel and tokenizer configurations seem to be well retrieved. 
I've checked it by comparing files config.json and tokenizer_config.json of our models, generated automatically after training.\r\nThe only change is my transformers' version being \"4.24.0\" instead of \"4.19.0.dev0\".\r\n\r\nI had to convert some files to txt format to share them.\r\n[README.md](https://github.com/oliverguhr/spelling/files/10085878/README.md)\r\n[train_bart_model.txt](https://github.com/oliverguhr/spelling/files/10085917/train_bart_model.txt)\r\n[config.txt](https://github.com/oliverguhr/spelling/files/10085892/config.txt)\r\n[tokenizer_config.txt](https://github.com/oliverguhr/spelling/files/10085893/tokenizer_config.txt)\r\n\r\nAre there things I'm doing wrong?"},{"Id":"30030779461","Type":"IssuesEvent","CreatedAt":"2023-06-27T09:26:35","Actor":"oliverguhr","Repository":"oliverguhr/transformer-time-series-prediction","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/oliverguhr/transformer-time-series-prediction/issues/24","RelatedDescription":"Closed issue \"Multistep Tranformer Input Zeroed\" (#24) at oliverguhr/transformer-time-series-prediction","RelatedBody":"I was wondering why the input to the multistep transformer has zeroes of length of the output_window. Is there a reason why we can't do it in the same way as for the single step transformer, that is, instead of [0 1 2 3 4 0 0], we have [0 1 2 3 4] for the input and [2 3 4 5 6] for the labels instead of [0 1 2 3 4 5 6]?"},{"Id":"29831660799","Type":"IssuesEvent","CreatedAt":"2023-06-18T10:55:19","Actor":"oliverguhr","Repository":"mastodon/mastodon-ios","Organization":"mastodon","RawContent":null,"RelatedAction":"opened","RelatedUrl":"https://github.com/mastodon/mastodon-ios/issues/1063","RelatedDescription":"Opened issue \"Issue when uploading images \" (#1063) at mastodon/mastodon-ios","RelatedBody":"### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nWhen I add an existing image to a post, the image selection UI does not close. When I go back to the app, the app is frozen.\r\n\r\n\n\n### Expected Behavior\n\nThe app should add the image to the post. \n\n### Steps To Reproduce\n\n1. create a new toot\r\n2. Add a image from you library \n\n### Environment\n\n```markdown\n- Device: iPhone 13\r\n- OS: iOS 16.3.1\r\n- Version: 2023.9\r\n- Build: 324\n```\n\n\n### Anything else?\n\n_No response_"},{"Id":"29064790469","Type":"IssuesEvent","CreatedAt":"2023-05-15T08:39:25","Actor":"robertmuehsig","Repository":"fluentribbon/Fluent.Ribbon","Organization":"fluentribbon","RawContent":null,"RelatedAction":"opened","RelatedUrl":"https://github.com/fluentribbon/Fluent.Ribbon/issues/1134","RelatedDescription":"Opened issue \"Selected Tab on first render has a black border\" (#1134) at fluentribbon/Fluent.Ribbon","RelatedBody":"Not sure if this counts as bug or feature, but the first impression is \"weird\". 
When you create a RibbonWindow with a selected tab, a black border appears around it:\r\n\r\n![image](https://github.com/fluentribbon/Fluent.Ribbon/assets/756703/581486c4-8e35-4051-abd8-e2facb7f9bd7)\r\n\r\nTo reproduce just press this button in your sample app:\r\n![image](https://github.com/fluentribbon/Fluent.Ribbon/assets/756703/be3352a8-b063-446a-bee0-dc3e77085f99)\r\n\r\nIMHO the black border makes sense for keyboard navigation (accessibility...), but in this case no keyboard navigation has happend and the ribbon is clearly selected anyway.\r\n\r\n---\r\n### Environment\r\n\r\n- Fluent.Ribbon __v10__\r\n- Windows __11__\r\n- .NET Framework __4.8__\r\n"},{"Id":"29005444532","Type":"IssuesEvent","CreatedAt":"2023-05-11T15:17:33","Actor":"robertmuehsig","Repository":"fluentribbon/Fluent.Ribbon","Organization":"fluentribbon","RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/fluentribbon/Fluent.Ribbon/issues/1130","RelatedDescription":"Closed issue \"Update to v10 results in a \"black screen\"\" (#1130) at fluentribbon/Fluent.Ribbon","RelatedBody":"We have a larger application, which I want to update to v10. Update via NuGet & build/compile without any errors, but the resulting window is \"black\":\r\n\r\n![image](https://github.com/fluentribbon/Fluent.Ribbon/assets/756703/50d2b1e7-c2a5-4cc1-a888-4e563f99429e)\r\n\r\nI didn't found any big difference between our application and your sample app. I guess the reason might be, that you moved some resources. \r\n\r\nI also included this in our `App.xaml`:\r\n\r\n```\r\n\r\n \r\n \r\n \r\n \r\n \r\n\r\n \r\n \r\n \r\n \r\n\r\n```\r\n\r\nThe application worked with the most recent Fluent.Ribbon - maybe you have an idea where to look.\r\n\r\n---\r\n### Environment\r\n\r\n- Fluent.Ribbon __v10__\r\n- Windows __11__\r\n- .NET Framework __4.8__\r\n"},{"Id":"28970364870","Type":"IssuesEvent","CreatedAt":"2023-05-10T10:44:48","Actor":"robertmuehsig","Repository":"fluentribbon/Fluent.Ribbon","Organization":"fluentribbon","RawContent":null,"RelatedAction":"opened","RelatedUrl":"https://github.com/fluentribbon/Fluent.Ribbon/issues/1130","RelatedDescription":"Opened issue \"Update to v10 results in a \"black screen\"\" (#1130) at fluentribbon/Fluent.Ribbon","RelatedBody":"We have a larger application, which I want to update to v10. Update via NuGet & build/compile without any errors, but the resulting window is \"black\":\r\n\r\n![image](https://github.com/fluentribbon/Fluent.Ribbon/assets/756703/50d2b1e7-c2a5-4cc1-a888-4e563f99429e)\r\n\r\nI didn't found any big difference between our application and your sample app. I guess the reason might be, that you moved some resources. \r\n\r\nI also included this in our `App.xaml`:\r\n\r\n```\r\n\r\n \r\n \r\n \r\n \r\n \r\n\r\n \r\n \r\n \r\n \r\n\r\n```\r\n\r\nThe application worked with the most recent Fluent.Ribbon - maybe you have an idea where to look.\r\n\r\n---\r\n### Environment\r\n\r\n- Fluent.Ribbon __v10__\r\n- Windows __11__\r\n- .NET Framework __4.8__\r\n"}],"ResultType":"GitHubEvent"}},"RunOn":"2023-07-22T05:30:02.6782774Z","RunDurationInMilliseconds":1370} \ No newline at end of file +{"Data":{"Blog":{"FeedItems":[{"Title":"Zip deployment failed on Azure","PublishedOn":"2023-09-05T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The Problem

\n\n

We are using Azure App Service for our application (which runs great BTW) and deploy it automatically via ZipDeploy. \nThis basic setup was running smoothly, but we noticed that at some point the deployment failed with these error messages:

\n\n
2023-08-24T20:48:56.1057054Z Deployment endpoint responded with status code 202\n2023-08-24T20:49:15.6984407Z Configuring default logging for the app, if not already enabled\n2023-08-24T20:49:18.8106651Z Zip deployment failed. {'id': 'temp-b574d768', 'status': 3, 'status_text': '', 'author_email': 'N/A', 'author': 'N/A', 'deployer': 'ZipDeploy', 'message': 'Deploying from pushed zip file', 'progress': '', 'received_time': '2023-08-24T20:48:55.8916655Z', 'start_time': '2023-08-24T20:48:55.8916655Z', 'end_time': '2023-08-24T20:49:15.3291017Z', 'last_success_end_time': None, 'complete': True, 'active': False, 'is_temp': True, 'is_readonly': False, 'url': 'https://[...].scm.azurewebsites.net/api/deployments/latest', 'log_url': 'https://[...].scm.azurewebsites.net/api/deployments/latest/log', 'site_name': '[...]', 'provisioningState': 'Failed'}. Please run the command az webapp log deployment show\n2023-08-24T20:49:18.8114319Z                            -n [...] -g production\n
\n\n

or this one (depending on how we invoked the deployment script):

\n\n
Getting scm site credentials for zip deployment\nStarting zip deployment. This operation can take a while to complete ...\nDeployment endpoint responded with status code 500\nAn error occured during deployment. Status Code: 500, Details: {\"Message\":\"An error has occurred.\",\"ExceptionMessage\":\"There is not enough space on the disk.\\r\\n\",\"ExceptionType\":\"System.IO.IOException\",\"StackTrace\":\" \n
\n\n

“There is not enough space on the disk”?

\n\n

The message There is not enough space on the disk was a good hint, but according to the File system storage everything should be fine with only 8% used.

\n\n

Be aware - this is important: We have multiple apps on the same App Service plan!

\n\n

\"x\"

\n\n

Kudu to the rescue

\n\n

The next step was to check the behind-the-scenes environment via the “Advanced Tools” (Kudu) and there it is:

\n\n

\"x\"

\n\n

There are two different storages attached to the app service:

\n\n
    \n
  • c:\\home is the “File System Storage” that you can see in the Azure Portal and is quite large. App files are located here.
  • c:\\local is a much smaller storage with ~21GB and if the space is used, then ZipDeploy will fail.
\n\n

Who is using this space?

\n\n

c:\\local stores “mostly” temporary items, e.g.:

\n\n
Directory of C:\\local\n\n08/31/2023  06:40 AM    <DIR>          .\n08/31/2023  06:40 AM    <DIR>          ..\n07/13/2023  04:29 PM    <DIR>          AppData\n07/13/2023  04:29 PM    <DIR>          ASP Compiled Templates\n08/31/2023  06:40 AM    <DIR>          Config\n07/13/2023  04:29 PM    <DIR>          DomainValidationTokens\n07/13/2023  04:29 PM    <DIR>          DynamicCache\n07/13/2023  04:29 PM    <DIR>          FrameworkJit\n07/13/2023  04:29 PM    <DIR>          IIS Temporary Compressed Files\n07/13/2023  04:29 PM    <DIR>          LocalAppData\n07/13/2023  04:29 PM    <DIR>          ProgramData\n09/05/2023  08:36 PM    <DIR>          Temp\n08/31/2023  06:40 AM    <DIR>          Temporary ASP.NET Files\n07/18/2023  04:06 AM    <DIR>          UserProfile\n08/19/2023  06:34 AM    <SYMLINKD>     VirtualDirectory0 [\\\\...\\]\n               0 File(s)              0 bytes\n              15 Dir(s)  13,334,384,640 bytes free\n
\n\n

The “biggest” item here was in our case under c:\\local\\Temp\\zipdeploy:

\n\n
 Directory of C:\\local\\Temp\\zipdeploy\n\n08/29/2023  04:52 AM    <DIR>          .\n08/29/2023  04:52 AM    <DIR>          ..\n08/29/2023  04:52 AM    <DIR>          extracted\n08/29/2023  04:52 AM       774,591,927 jiire5i5.zip\n
\n\n

This folder stores our ZipDeploy package, which is quite large with ~800MB. The folder also contains the extracted files - remember: We only have 21GB on this storage, but even if this zip file and the extracted files are ~3GB, there is still plenty of room, right?

\n\n

Shared resources

\n\n

Well… it turns out that each App Service on an App Service plan is using this storage and if you have multiple App Services on the same plan, then those 21GB might melt away.

\n\n

The “bad” part is that the space is shared, but each App Service has its own c:\\local folder (which makes sense). To free up space we had to clean up this folder on each App Service like this:

\n\n
rmdir c:\\local\\Temp\\zipdeploy /s /q\n
\n\n
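
If you don’t want to do this by hand for every App Service, the cleanup can also be triggered remotely via the Kudu command API. A rough sketch - the site name and the publishing credentials are placeholders:

\n\n
// Sketch: trigger the cleanup remotely via the Kudu command API - site name and publishing credentials are placeholders.\nusing System.Net.Http;\nusing System.Net.Http.Headers;\nusing System.Text;\nusing System.Text.Json;\n\nusing var http = new HttpClient();\nvar credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes(\"$your-app:publish-password\"));\nhttp.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(\"Basic\", credentials);\n\nvar body = JsonSerializer.Serialize(new { command = @\"rmdir c:\\local\\Temp\\zipdeploy /s /q\", dir = @\"c:\\local\\Temp\" });\nvar response = await http.PostAsync(\"https://your-app.scm.azurewebsites.net/api/command\", new StringContent(body, Encoding.UTF8, \"application/json\"));\nConsole.WriteLine(response.StatusCode);\n
\n\n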

TL;DR

\n\n

If you have problems with ZipDeploy and the error message tells you, that there is not enough space, check out the c:\\local space (and of course c:\\home as well) and delete unused files. Sometimes a reboot might help as well (to clean up temp-files), but AFAIK those ZipDeploy files will survive that.

\n\n","Href":"https://blog.codeinside.eu/2023/09/05/zip-deployment-failed-on-azure-and-how-to-fix-it/","RawContent":null,"Thumbnail":null},{"Title":"First steps with Azure OpenAI and .NET","PublishedOn":"2023-03-23T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The AI world is rising very fast these days: ChatGPT is such an awesome (and scary good?) service and Microsoft joined the ship with some partner announcements and investments. The result is of these actions is, that OpenAI is now a “first class citizen” on Azure.

\n\n

So - for the average Microsoft/.NET developer this opens up a wonderful toolbox and the first steps are really easy.

\n\n

Be aware: You need to “apply” to access the OpenAI service, but it took less then 24 hours for us to gain access to the service. I guess this is just a temporary thing.

\n\n

Disclaimer: I’m not an AI/ML engineer and I only have a very “glimpse” knowledge about the technology behind GPT3, ChatGPT and ML in general. If in doubt, I always ask my buddy Oliver Guhr, because he is much smarter in this stuff. Follow him on Twitter!

\n\n

1. Step: Go to Azure OpenAI Service

\n\n

Search for “OpenAI” and you will see the “Azure OpenAI Service” entry:

\n\n

\"x\"

\n\n

2. Step: Create a Azure OpenAI Service instance

\n\n

Create a new Azure OpenAI Service instance:

\n\n

\"x\"

\n\n

On the next page you will need to enter the subscription, resource group, region and a name (typical Azure stuff):

\n\n

\"x\"

\n\n

Be aware: If your subscription is not enabled for OpenAI, you need to apply here first.

\n\n

3. Step: Overview and create a model

\n\n

After the service is created you should see something like this:

\n\n

\"x\"

\n\n

Now go to “Model deployments” and create a model - I choosed “text-davinci-003”, because I think this is GPT3.5 (which was the initial ChatGPT release, GPT4 is currently in preview for Azure and you need to apply again.

\n\n

\"x\"

\n\n

My guess is, that you could train/deploy other, specialized models here, because this model is quite complex and you might want to tailor the model for your scenario to get faster/cheaper results… but I honestly don’t know how to do it (currently), so we just leave the default.

\n\n

4. Step: Get the endpoint and the key

\n\n

In this step we just need to copy the key and the endpoint, which can be found under “Keys and Endpoint”, simple - right?

\n\n

\"x\"

\n\n

5. Step: Hello World to our Azure OpenAI instance

\n\n

Create a .NET application and add the Azure.AI.OpenAI NuGet package (currently in preview!).

\n\n
dotnet add package Azure.AI.OpenAI --version 1.0.0-beta.5\n
\n\n

Use this code:

\n\n
using Azure.AI.OpenAI;\nusing Azure;\n\nConsole.WriteLine(\"Hello, World!\");\n\nOpenAIClient client = new OpenAIClient(\n        new Uri(\"YOUR-ENDPOINT\"),\n        new AzureKeyCredential(\"YOUR-KEY\"));\n\nstring deploymentName = \"text-davinci-003\";\nstring prompt = \"Tell us something about .NET development.\";\nConsole.Write($\"Input: {prompt}\");\n\nResponse<Completions> completionsResponse = client.GetCompletions(deploymentName, prompt);\nstring completion = completionsResponse.Value.Choices[0].Text;\n\nConsole.WriteLine(completion);\n\nConsole.ReadLine();\n\n
\n\n

Result:

\n\n
Hello, World!\nInput: Tell us something about .NET development.\n\n.NET development is a mature, feature-rich platform that enables developers to create sophisticated web applications, services, and applications for desktop, mobile, and embedded systems. Its features include full-stack programming, object-oriented data structures, security, scalability, speed, and an open source framework for distributed applications. A great advantage of .NET development is its capability to develop applications for both Windows and Linux (using .NET Core). .NET development is also compatible with other languages such as\n
\n\n

As you can see… the result is cut off, not sure why, but this is just a simple demonstration.

\n\n

Summary

\n\n

With these basic steps you can access the OpenAI development world. Azure makes it easy to integrate it into your existing Azure/Microsoft “stack”. Be aware that you could also use the same SDK with the endpoint from OpenAI. For billing reasons it is easier for us to use the Azure hosted instances.

\n\n

Hope this helps!

\n\n

Video on my YouTube Channel

\n\n

If you understand German and want to see it in action, check out my video on my Channel:

\n\n\n\n","Href":"https://blog.codeinside.eu/2023/03/23/first-steps-with-azure-openai-and-dotnet/","RawContent":null,"Thumbnail":null},{"Title":"How to fix: 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine","PublishedOn":"2023-03-18T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

In our product we can interact with different datasources and one of these datasources was a Microsoft Access DB connected via OLEDB. This is really, really old, but still works, but on one customer machine we had this issue:

\n\n
'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine\n
\n\n
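
For context, the code that runs into this error is a plain OLEDB connection like in the following sketch - the file path and the query are made up:

\n\n
// Sketch: an OLEDB connection to an Access database - this is the kind of code that fails when the ACE provider is missing.\n// The file path and the query are made up.\nusing System.Data.OleDb;\n\nusing var connection = new OleDbConnection(@\"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\\data\\customers.accdb;\");\nconnection.Open();\n\nusing var command = new OleDbCommand(\"SELECT COUNT(*) FROM Customers\", connection);\nConsole.WriteLine(command.ExecuteScalar());\n
\n\n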

Solution

\n\n

If you face this issue, you need to install the provider from here.

\n\n

Be aware: If you have a different error, you might need to install the newer provider - this is labeled as “2010 Redistributable”, but still works with all those fancy Office 365 apps out there.

\n\n

Important: You need to install the provider in the correct bit version, e.g. if you run under x64, install the x64.msi.

\n\n

The solution comes from this Stackoverflow question.

\n\n

Helper

\n\n

The best tip from Stackoverflow was these PowerShell commands to check if the provider is there or not:

\n\n
(New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION \n\nGet-OdbcDriver | select Name,Platform\n
\n\n

This will return something like this:

\n\n
PS C:\\Users\\muehsig> (New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION\n\nSOURCES_NAME               SOURCES_DESCRIPTION\n------------               -------------------\nSQLOLEDB                   Microsoft OLE DB Provider for SQL Server\nMSDataShape                MSDataShape\nMicrosoft.ACE.OLEDB.12.0   Microsoft Office 12.0 Access Database Engine OLE DB Provider\nMicrosoft.ACE.OLEDB.16.0   Microsoft Office 16.0 Access Database Engine OLE DB Provider\nADsDSOObject               OLE DB Provider for Microsoft Directory Services\nWindows Search Data Source Microsoft OLE DB Provider for Search\nMSDASQL                    Microsoft OLE DB Provider for ODBC Drivers\nMSDASQL Enumerator         Microsoft OLE DB Enumerator for ODBC Drivers\nSQLOLEDB Enumerator        Microsoft OLE DB Enumerator for SQL Server\nMSDAOSP                    Microsoft OLE DB Simple Provider\n\n\nPS C:\\Users\\muehsig> Get-OdbcDriver | select Name,Platform\n\nName                                                   Platform\n----                                                   --------\nDriver da Microsoft para arquivos texto (*.txt; *.csv) 32-bit\nDriver do Microsoft Access (*.mdb)                     32-bit\nDriver do Microsoft dBase (*.dbf)                      32-bit\nDriver do Microsoft Excel(*.xls)                       32-bit\nDriver do Microsoft Paradox (*.db )                    32-bit\nMicrosoft Access Driver (*.mdb)                        32-bit\nMicrosoft Access-Treiber (*.mdb)                       32-bit\nMicrosoft dBase Driver (*.dbf)                         32-bit\nMicrosoft dBase-Treiber (*.dbf)                        32-bit\nMicrosoft Excel Driver (*.xls)                         32-bit\nMicrosoft Excel-Treiber (*.xls)                        32-bit\nMicrosoft ODBC for Oracle                              32-bit\nMicrosoft Paradox Driver (*.db )                       32-bit\nMicrosoft Paradox-Treiber (*.db )                      32-bit\nMicrosoft Text Driver (*.txt; *.csv)                   32-bit\nMicrosoft Text-Treiber (*.txt; *.csv)                  32-bit\nSQL Server                                             32-bit\nODBC Driver 17 for SQL Server                          32-bit\nSQL Server                                             64-bit\nODBC Driver 17 for SQL Server                          64-bit\nMicrosoft Access Driver (*.mdb, *.accdb)               64-bit\nMicrosoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb) 64-bit\nMicrosoft Access Text Driver (*.txt, *.csv)            64-bit\n
\n\n

Hope this helps! (And I hope you don’t need to deal with these ancient technologies for too long 😅)

\n","Href":"https://blog.codeinside.eu/2023/03/18/microsoft-ace-oledb-12-0-provider-is-not-registered/","RawContent":null,"Thumbnail":null},{"Title":"Resource type is not supported in this subscription","PublishedOn":"2023-03-11T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

I was playing around with some Visual Studio Tooling and noticed this error during the creation of an “Azure Container Apps” app:

\n\n

Resource type is not supported in this subscription

\n\n

\"x\"

\n\n

Solution

\n\n

The solution is quite strange at first, but in the super configurable world of Azure it makes sense: You need to activate the Resource provider for this feature on your subscription. For Azure Container Apps you need the Microsoft.ContainerRegistry-resource provider registered:

\n\n

\"x\"

\n\n

It seems that you can create such resources via the Portal, but if you go via the API (which Visual Studio seems to do), the provider needs to be registered first.

\n\n
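
If you prefer to do this from code instead of the Portal, the provider can also be registered programmatically. This is an assumption on my side and not part of the original post - a minimal sketch with the Azure.Identity and Azure.ResourceManager NuGet packages (the exact method names depend on the SDK version):

\n\n
// dotnet add package Azure.Identity\n// dotnet add package Azure.ResourceManager\nusing Azure.Identity;\nusing Azure.ResourceManager;\nusing Azure.ResourceManager.Resources;\n\nvar armClient = new ArmClient(new DefaultAzureCredential());\nSubscriptionResource subscription = await armClient.GetDefaultSubscriptionAsync();\n\n// Register the resource provider on the subscription - same effect as the Portal button\nResourceProviderResource provider = await subscription.GetResourceProviderAsync(\"Microsoft.ContainerRegistry\");\nResourceProviderResource registered = await provider.RegisterAsync();\n\nConsole.WriteLine($\"{registered.Data.Namespace}: {registered.Data.RegistrationState}\");\n
\n\n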

Some resource providers are “enabled by default”, others need to be turned on manually. Check out this list for all resource providers and the related Azure services.

\n\n

Be careful: I guess you should only enable the resource providers that you really need, otherwise your attack surface will get larger.

\n\n

To be honest: This was completely new for me - I have been doing Azure for ages and never had to deal with resource providers. Always learning ;)

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/03/11/resource-type-is-not-supported-in-this-subscription/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps Server 2022 Update","PublishedOn":"2023-02-15T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Azure DevOps Server 2022 - OnPrem?

\n\n

Yes I know - you can get everything from the cloud nowadays, but we are still using our OnPrem hardware and were running the “old” Azure DevOps Server 2020. \nThe Azure DevOps Server 2022 was released last December, so an update was due.

\n\n

Requirements

\n\n

If you are running an Azure DevOps Server 2020, the requirements for the new 2022 release are “more or less” the same, except for the following important parts:

\n\n
    \n
  • Supported server operating systems: Windows Server 2022 & Windows Server 2019 - the old Azure DevOps Server 2020 could still run on Windows Server 2016
  • \n
  • Supported SQL Server versions: Azure SQL Database, SQL Managed Instance, SQL Server 2019, SQL Server 2017 - the old Azure DevOps Server still supported SQL Server 2016.
  • \n
\n\n

Make sure you have a backup

\n\n

The last requirement was a surprise for me: I thought the update would run smoothly, but the installer removed the previous version and I couldn’t complete the update, because our SQL Server was still on SQL Server 2016. Fortunately we had a VM backup and could roll back to the previous version.

\n\n

Step by Step

\n\n

The update process itself was straightforward: Download the installer and run it.

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

The screenshots are from two different sessions. If you look carefully at the clock you might notice that the dates differ - that is because of the SQL Server 2016 problem.

\n\n

As you can see - everything worked as expected, but after we updated the server the search, which is powered by ElasticSearch, was not working. The “ElasticSearch” Windows service just crashed on startup and I’m not a Java guy, so… we fixed it by removing the search feature and reinstalling it. \nWe had tried to clean the cache first, but that didn’t help. After the reinstall of the search feature the issue went away.

\n\n

Features

\n\n

Azure DevOps Server 2022 is just a minor update (at least from a typical user perspective). The biggest new feature might be “Delivery Plans”, which are nice, but not a huge benefit for small teams. Check out the release notes.

\n\n

A nice - nerdy - enhancement, and not mentioned in the release notes: “mermaid.js” is now supported in the Azure DevOps Wiki, yay!

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/02/15/azure-devops-server-2022-update/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core and React with Vite.js","PublishedOn":"2023-02-11T01:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The CRA Problem

\n\n

In my previous post I showed a simple setup with ASP.NET Core & React. The React part was created with the “CRA” tooling, which is kind of problematic. The “new” state-of-the-art React tooling seems to be vite.js - so let’s take a look at how to use it.

\n\n

\"x\"

\n\n

Step by Step

\n\n

Step 1: Create a “normal” ASP.NET Core project

\n\n

(I like the ASP.NET Core MVC template, but feel free to use something else - same as in the other blogpost)

\n\n

\"x\"

\n\n

Step 2: Install vite.js and init the template

\n\n

Now move to the root directory of your project with a shell and execute this:

\n\n
npm create vite@latest clientapp -- --template react-ts\n
\n\n

This will scaffold the latest & greatest Vite-based React app in a folder called clientapp using the react-ts template (React with TypeScript). Vite itself isn’t focused on React and supports many different frontend frameworks.

\n\n

\"x\"

\n\n

Step 3: Enable HTTPS in your vite.js

\n\n

Just like in the “CRA” setup we need to make sure that the environment is served over HTTPS. In the “CRA” world we needed two different files from the original ASP.NET Core & React template, but with vite.js there is a much simpler option available.

\n\n

Execute the following command in the clientapp directory:

\n\n
npm install --save-dev vite-plugin-mkcert\n
\n\n

Then in your vite.config.ts use this config:

\n\n
import { defineConfig } from 'vite'\nimport react from '@vitejs/plugin-react'\nimport mkcert from 'vite-plugin-mkcert'\n\n// https://vitejs.dev/config/\nexport default defineConfig({\n    base: '/app',\n    server: {\n        https: true,\n        port: 6363\n    },\n    plugins: [react(), mkcert()],\n})\n
\n\n

Be aware: The base: '/app' will be used as a sub-path.

\n\n

The important part for the HTTPS setting is that we use the mkcert() plugin and configure the server part with a port and set https to true.

\n\n

Step 4: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package

\n\n

Same as in the other blogpost, we need to add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package to glue the ASP.NET Core development and React world together. If you use .NET 7, then use the version 7.x.x, if you use .NET 6, use the version 6.x.x - etc.

\n\n

\"x\"

\n\n

Step 5: Enhance your Program.cs

\n\n

Back to the Program.cs - this is more or less the same as with the “CRA” setup:

\n\n

Add the SpaStaticFiles to the services collection like this in your Program.cs - be aware, that vite.js builds everything in a folder called dist:

\n\n
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n    configuration.RootPath = \"clientapp/dist\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
\n\n

Now we need to use the SpaServices like this:

\n\n
app.MapControllerRoute(\n    name: \"default\",\n    pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/app\";\nif (app.Environment.IsDevelopment())\n{\n    app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n    {\n        client.UseSpa(spa =>\n        {\n            spa.UseProxyToSpaDevelopmentServer(\"https://localhost:6363\");\n        });\n    });\n}\nelse\n{\n    app.Map(new PathString(spaPath), client =>\n    {\n        client.UseSpaStaticFiles();\n        client.UseSpa(spa => {\n            spa.Options.SourcePath = \"clientapp\";\n\n            // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n            // .js and other static resources are still cached by the browser\n            spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n            {\n                OnPrepareResponse = ctx =>\n                {\n                    ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n                    headers.CacheControl = new CacheControlHeaderValue\n                    {\n                        NoCache = true,\n                        NoStore = true,\n                        MustRevalidate = true\n                    };\n                }\n            };\n        });\n    });\n}\n// ↑ these lines ↑\n\napp.Run();\n
\n\n

Just like in the original blogpost: in development mode we use the UseProxyToSpaDevelopmentServer method to proxy all requests to the vite.js dev server. In production we serve the files from the dist folder.

\n\n

Step 6: Invoke npm run build during publish

\n\n

The last step completes the setup: We want to build both the ASP.NET Core app and the React app when we use dotnet publish:

\n\n

Add this to your .csproj-file and it should work:

\n\n
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)dist\\**\" />  <!-- Changed to dist! -->\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
\n\n

Result

\n\n

You should now be able to use Visual Studio Code (or something similar) and start the frontend project with the dev script (npm run dev). If you open a browser and go to https://127.0.0.1:6363/app you should see something like this:

\n\n

\"x\"

\n\n

Now start the ASP.NET Core app and go to /app and it should look like this:

\n\n

\"x\"

\n\n

Ok - this looks broken, right? Well - this is a more or less “known” problem, but it can be easily avoided. If we import the logo from the assets folder it works as expected and shouldn’t be a general problem:

\n\n

\"x\"

\n\n

Code

\n\n

The sample code can be found here.

\n\n

Video

\n\n

I made a video about this topic (in German, sorry :-/) as well - feel free to subscribe ;)

\n\n\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/02/11/aspnet-core-react-with-vitejs/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core & React together","PublishedOn":"2023-01-25T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The ASP.NET Core React template

\n\n

\"x\"

\n\n

Visual Studio (at least VS 2019 and the newer 2022) ships with an ASP.NET Core React template, which is “ok-ish”, but has some really bad problems:

\n\n

The React part of this template is scaffolded via “CRA” (which seems to be problematic as well, but that is not the point of this post) and uses JavaScript instead of TypeScript.\nAnother huge pain point (from my perspective) is that the template uses some special configuration to just host the React part - if you want to mix in some “MVC”/“Razor” stuff, you need to change some of this “magic”.

\n\n

The good parts:

\n\n

Both worlds can live together: During development time the ASP.NET Core stuff is hosted via Kestrel and the React part is hosted under the WebPack Development server. The lovely hot reload is working as expected and is really powerful.\nIf you are doing a release build, the project will take care of the npm-magic.

\n\n

But because the “bad problems” outweigh the benefits, we will try to integrate a typical React app into a “normal” ASP.NET Core app.

\n\n

Step by Step

\n\n

Step 1: Create a “normal” ASP.NET Core project

\n\n

(I like the ASP.NET Core MVC template, but feel free to use something else)

\n\n

\"x\"

\n\n

Step 2: Create a react app inside the ASP.NET Core project

\n\n

(For this blogpost I use the “Create React App”-approach, but you can use whatever you like)

\n\n

Execute this in your ASP.NET Core project (node & npm must be installed!):

\n\n
npx create-react-app clientapp --template typescript\n
\n\n

Step 3: Copy some stuff from the React template

\n\n

The react template ships with some scripts and settings that we want to preserve:

\n\n

\"x\"

\n\n

The aspnetcore-https.js and aspnetcore-react.js files are needed to set up the ASP.NET Core SSL dev certificate for the WebPack Dev Server. \nYou should also copy the .env & .env.development files to the root of your clientapp folder!

\n\n

The .env file only has this setting:

\n\n
BROWSER=none\n
\n\n

A more important setting is in the .env.development file (change the port to something different!):

\n\n
PORT=3333\nHTTPS=true\n
\n\n

The port number 3333 and the https=true will be important later, otherwise our setup will not work.

\n\n

Also, add this line to the .env-file (in theory you can use any name - for this sample we keep it spaApp):

\n\n
PUBLIC_URL=/spaApp\n
\n\n

Step 4: Add the prestart to the package.json

\n\n

In your project open the package.json and add the prestart-line like this:

\n\n
  \"scripts\": {\n    \"prestart\": \"node aspnetcore-https && node aspnetcore-react\",\n    \"start\": \"react-scripts start\",\n    \"build\": \"react-scripts build\",\n    \"test\": \"react-scripts test\",\n    \"eject\": \"react-scripts eject\"\n  },\n
\n\n

Step 5: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package

\n\n

\"x\"

\n\n

We need the Microsoft.AspNetCore.SpaServices.Extensions NuGet-package. If you use .NET 7, then use the version 7.x.x, if you use .NET 6, use the version 6.x.x - etc.

\n\n

Step 6: Enhance your Program.cs

\n\n

Add the SpaStaticFiles to the services collection like this in your Program.cs:

\n\n
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n    configuration.RootPath = \"clientapp/build\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
\n\n

Now we need to use the SpaServices like this:

\n\n
app.MapControllerRoute(\n    name: \"default\",\n    pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/spaApp\";\nif (app.Environment.IsDevelopment())\n{\n    app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n    {\n        client.UseSpa(spa =>\n        {\n            spa.UseProxyToSpaDevelopmentServer(\"https://localhost:3333\");\n        });\n    });\n}\nelse\n{\n    app.Map(new PathString(spaPath), client =>\n    {\n        client.UseSpaStaticFiles();\n        client.UseSpa(spa => {\n            spa.Options.SourcePath = \"clientapp\";\n\n            // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n            // .js and other static resources are still cached by the browser\n            spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n            {\n                OnPrepareResponse = ctx =>\n                {\n                    ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n                    headers.CacheControl = new CacheControlHeaderValue\n                    {\n                        NoCache = true,\n                        NoStore = true,\n                        MustRevalidate = true\n                    };\n                }\n            };\n        });\n    });\n}\n// ↑ these lines ↑\n\napp.Run();\n
\n\n

As you can see, we run in two different modes. \nIn development we just use the UseProxyToSpaDevelopmentServer method to proxy all requests that point to spaApp to the React WebPack DevServer (or something else). The huge benefit is that you can use the React ecosystem with all its tools. Normally we use Visual Studio Code to run our React frontend and use the ASP.NET Core app as the “Backend for frontend”.\nIn production we use the build artifacts of the React build and make sure that they are not cached. To make the deployment easier, we need to invoke npm run build when we publish this ASP.NET Core app.

\n\n

Step 7: Invoke npm run build during publish

\n\n

Add this to your .csproj-file and it should work:

\n\n
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)build\\**\" />\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
\n\n

Be aware that these instructions are copied from the original ASP.NET Core React template and slightly modified - otherwise the paths wouldn’t match.

\n\n

Result

\n\n

With this setup you can add any SPA that you like to your “normal” ASP.NET Core project.

\n\n

If everything works as expected you should be able to start the React app in Visual Studio Code like this:

\n\n

\"x\"

\n\n

Be aware of the https://localhost:3333/spaApp URL: The port and the name are important for our sample!

\n\n

Start your hosting ASP.NET Core app in Visual Studio (or any IDE that you like) and all requests that point to spaApp will use the WebPack DevServer in the background:

\n\n

\"x\"

\n\n

With this setup you can mix client & server side styles as you like - mission accomplished, and you can use any client setup (CRA or anything else) that you want.

\n\n

Code

\n\n

The code (with slightly modified values, e.g. another port) can be found here. \nBe aware that npm i needs to be run first.

\n\n

Video

\n\n

I uploaded a video on my YouTube channel (in German) about this setup:

\n\n\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/01/25/aspnet-core-and-react/","RawContent":null,"Thumbnail":null},{"Title":"Your URL is flagged as malware/phishing, now what?","PublishedOn":"2023-01-04T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Problem

\n\n

On my last day in 2022 - Friday, December 23rd - I received a support ticket from one customer that our software seemed to be offline and that our servers were not responding. I checked our monitoring and the customer’s server side and everything was fine. \nMy first thought: Maybe a misconfiguration on the customer side, but after a remote support session with the customer I saw that it “should work” - something in the customer network was blocking the requests to our services.\nNext thought: Firewall or proxy stuff. Always nasty, but we are just using port 443, so nothing too special.

\n\n

After a while I received a phone call from the customer’s firewall team and they discovered the problem: They are using a firewall solution from “Check Point” and our domain was flagged as “phishing”/“malware”. What the… \nThey even created an exception so that Check Point doesn’t block our requests, but the next problem occurred: The customer’s “Windows Defender for Office 365” had the same “flag” for our domain, so they reverted everything, because they didn’t want to change their settings too much.

\n\n

\"x\"

\n\n

Be aware that from our end everything was working “fine”: I could access the customer services and our Windows Defender didn’t have any problems with this domain.

\n\n

Solution

\n\n

Somehow our domain was flagged as malware/phishing and we needed to get this false positive listing changed. I guess there are tons of services that “track” “bad” websites and maybe they are all connected somehow. From this incident I can only suggest:

\n\n

If you have trouble with Check Point:

\n\n

Go to “URLCAT”, register an account and try to change the category of your domain. After you submit the “change request” you will get an email like this:

\n\n
Thank you for submitting your category change request.\nWe will process your request and notify you by email (to: xxx.xxx@xxx.com ).\nYou can follow the status of your request on this page.\nYour request details\nReference ID: [GUID]\nURL: https://[domain].com\nSuggested Categories: Computers / Internet,Business / Economy\nComment: [Given comment]\n
\n\n

After ~1-2 days the change was done. Not sure if this is automated or not, but it was during Christmas.

\n\n

If you have trouble with Windows Defender:

\n\n

Go to “Report submission” in your Microsoft 365 Defender setting (you will need an account with special permissions, e.g. global admin) and add the URL as “Not junk”.

\n\n

\"x\"

\n\n

I’m not really sure if this helped or not, because we didn’t have any issues with the domain on our side, and I’m not sure if those “false positive” tickets bubble up into a “global Defender catalog” or if this only affects our own tenant.

\n\n

Result

\n\n

Anyway - after those tickets were “resolved” by Check Point / Microsoft the problem on the customer side disappeared and everyone was happy. This was my first experience with such a “false positive malware report”. I’m not sure how we ended up on such a list or why only one customer was affected.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2023/01/04/checkpoint-and-defender-false-positive-url/","RawContent":null,"Thumbnail":null},{"Title":"SQLLocalDb update","PublishedOn":"2022-12-03T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Short Intro

\n\n

SqlLocalDb is a “developer” SQL server, without the “full” SQL Server (Express) installation. If you just develop on your machine and don’t want to run a “full blown” SQL Server, this is the tooling that you might need.

\n\n

From the Microsoft Docs:

\n\n
\n

Microsoft SQL Server Express LocalDB is a feature of SQL Server Express targeted to developers. It is available on SQL Server Express with Advanced Services.

\n\n

LocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. Once LocalDB is installed, you can initiate a connection using a special connection string. When connecting, the necessary SQL Server infrastructure is automatically created and started, enabling the application to use the database without complex configuration tasks. Developer Tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server.

\n
\n\n

Problem

\n\n

(I’m not really sure how I ended up with this problem, but after I solved it I put it on my “To Blog” bucket list)

\n\n

From time to time there is a new SQLLocalDb version, but upgrading an existing installation is a bit “weird”.

\n\n

Solution

\n\n

If you have installed an older SQLLocalDb version you can manage it via sqllocaldb. If you want to update, you must delete the “current” MSSQLLocalDB instance first.

\n\n

To do this use:

\n\n
sqllocaldb stop MSSQLLocalDB\nsqllocaldb delete MSSQLLocalDB\n
\n\n

Then download the newest version from Microsoft. \nIf you choose “Download Media” you should see something like this:

\n\n

\"x\"

\n\n

Download it, run it and restart your PC - after that you should be able to connect to SQLLocalDb.

\n\n
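
A quick way to verify that the fresh LocalDB instance is usable is a tiny C# console app - a minimal sketch with the Microsoft.Data.SqlClient package (not part of the original post):

\n\n
// dotnet add package Microsoft.Data.SqlClient\nusing Microsoft.Data.SqlClient;\n\n// The automatic instance is reachable via the special (localdb)\\MSSQLLocalDB server name\nusing var connection = new SqlConnection(@\"Server=(localdb)\\MSSQLLocalDB;Integrated Security=true;\");\nconnection.Open();\n\nusing var command = new SqlCommand(\"SELECT @@VERSION\", connection);\nConsole.WriteLine(command.ExecuteScalar());\n
\n\n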

We solved this issue with the help of this blogpost.

\n\n

Hope this helps! (and I can remove it now from my bucket list \\o/ )

\n","Href":"https://blog.codeinside.eu/2022/12/03/sqllocaldb-update/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps & Azure Service Connection","PublishedOn":"2022-10-04T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Today I needed to set up a new release pipeline on our Azure DevOps Server installation to deploy some stuff automatically to Azure. The UI (at least on the Azure DevOps Server 2020 (!)) is not really clear about how to connect those two worlds, and that’s why I’m writing this short blogpost.

\n\n

First - under project settings - add a new service connection. Use the Azure Resource Manager-service. Now you should see something like this:

\n\n

\"x\"

\n\n

Be aware: You will need to register an app inside your Azure AD and need permissions to set it up. If you are not able to follow these instructions, you might need to talk to your Azure subscription owner.

\n\n

Subscription id:

\n\n

Copy the id of your subscription here. It can be found in the subscription details:

\n\n

\"x\"

\n\n

Keep this tab open, because we need it later!

\n\n

Service principal id/key & tenant id:

\n\n

Now this wording about “Service principal” is technically correct, but really confusing if you are not familiar with Azure AD. A “Service principal” is like a “service user”/“app” that you need to register before you can use it.\nThe easiest route is to create an app via the Bash Azure CLI:

\n\n
az ad sp create-for-rbac --name DevOpsPipeline\n
\n\n

If this command succeeds you should see something like this:

\n\n
{\n  \"appId\": \"[...GUID..]\",\n  \"displayName\": \"DevOpsPipeline\",\n  \"password\": \"[...PASSWORD...]\",\n  \"tenant\": \"[...Tenant GUID...]\"\n}\n
\n\n

This creates a “Service principal” with a random password inside your Azure AD. The next step is to give this “Service principal” a role on your subscription, because it currently has no permissions to do anything (e.g. deploy a service etc.).

\n\n

Go to the subscription details page and then to Access control (IAM). There you can add your “DevOpsPipeline”-App as “Contributor” (Be aware that this is a “powerful role”!).

\n\n

After that use the \"appId\": \"[...GUID..]\" from the command as Service Principal Id. \nUse the \"password\": \"[...PASSWORD...]\" as Service principal key and the \"tenant\": \"[...Tenant GUID...]\" for the tenant id.

\n\n

Now you should be able to “Verify” this connection and it should work.

\n\n

Links:\nThis blogpost helped me a lot. Here you can find the official documentation.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/10/04/azure-devops-azure-service-connection/","RawContent":null,"Thumbnail":null},{"Title":"'error MSB8011: Failed to register output.' & UTF8-BOM files","PublishedOn":"2022-08-30T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Be aware: I’m not a C++ developer and this might be an “obvious” problem, but it took me a while to resolve this issue.

\n\n

In our product we have very few C++ projects. We use these projects for very special Microsoft Office COM stuff and because of COM we need to register some components during the build. Everything worked as expected, but we renamed a few files and our build broke with:

\n\n
C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2302,5): warning MSB3075: The command \"regsvr32 /s \"C:/BuildAgentV3_1/_work/67/s\\_Artifacts\\_ReleaseParts\\XXX.Client.Addin.x64-Shims\\Common\\XXX.Common.Shim.dll\"\" exited with code 5. Please verify that you have sufficient rights to run this command. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2314,5): error MSB8011: Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\n\n(xxx = redacted)\n
\n\n

The crazy part was: Using an older version of our project just worked as expected, but all changes were “fine” from my point of view.

\n\n

After many, many attempts I remembered that our diff tool doesn’t show us everything - so I checked the file encodings: UTF8-BOM

\n\n

Somehow, if you have a UTF8-BOM encoded file that your C++ project uses to register COM stuff, it will fail. I changed the encoding to UTF8 and everything worked as expected.

\n\n

What a day… lessons learned: Be aware of your file encodings.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/08/30/error-msb8011-failed-to-register-output-and-utf8bom/","RawContent":null,"Thumbnail":null},{"Title":"Which .NET Framework Version is installed on my machine?","PublishedOn":"2022-08-29T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

If you need to know which .NET Framework Version (the “legacy” .NET Framework) is installed on your machine try this handy oneliner:

\n\n
Get-ItemProperty \"HKLM:SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Full\"\n
\n\n

Result:

\n\n
CBS           : 1\nInstall       : 1\nInstallPath   : C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319\\\nRelease       : 528372\nServicing     : 0\nTargetVersion : 4.0.0\nVersion       : 4.8.04084\nPSPath        : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework\n                Setup\\NDP\\v4\\Full\nPSParentPath  : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\nPSChildName   : Full\nPSDrive       : HKLM\nPSProvider    : Microsoft.PowerShell.Core\\Registry\n
\n\n

The Version value should give you more than enough information.

\n\n
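
If you need the same check from code, you can read the Release value from that registry key. A minimal sketch (Windows only; the Release-to-version mapping below is intentionally incomplete and just an example):

\n\n
// On .NET 6+ add the Microsoft.Win32.Registry package (or target net6.0-windows)\nusing Microsoft.Win32;\n\n// Read the Release value from the same registry key as the PowerShell one-liner above\nusing RegistryKey? key = Registry.LocalMachine.OpenSubKey(@\"SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Full\");\nint release = (int?)key?.GetValue(\"Release\") ?? 0;\n\n// A few documented thresholds - not a complete list\nstring version = release switch\n{\n    >= 533320 => \"4.8.1 or later\",\n    >= 528040 => \"4.8\",\n    >= 461808 => \"4.7.2\",\n    > 0 => \"4.x (older)\",\n    _ => \".NET Framework 4.x not detected\"\n};\n\nConsole.WriteLine($\"Release {release} => {version}\");\n
\n\n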

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/08/29/which-dotnet-version-is-installed-via-powershell/","RawContent":null,"Thumbnail":null},{"Title":"How to run a Azure App Service WebJob with parameters","PublishedOn":"2022-07-22T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

We are using WebJobs in our Azure App Service deployment and they are pretty “easy” for the most part. Just register a WebJobs or deploy your .exe/.bat/.ps1/... under the \\site\\wwwroot\\app_data\\Jobs\\triggered folder and it should execute as described in the settings.job.

\n\n

\"x\"

\n\n

If you put any executable in this WebJob folder, it will be executed as planned.

\n\n

Problem: Parameters

\n\n

If you have a my-job.exe, then this will be invoked from the runtime. But what if you need to invoke it with a parameter like my-job.exe -param \"test\"?

\n\n

Solution: run.cmd

\n\n

The WebJob environment is “greedy” and will search for a run.cmd (or run.exe) - if this is found, it will be executed and it doesn’t matter if you have any other .exe files there.\nStick to the run.cmd and use it to invoke your actual executable like this:

\n\n
echo \"Invoke my-job.exe with parameters - Start\"\n\n..\\MyJob\\my-job.exe -param \"test\"\n\necho \"Invoke my-job.exe with parameters - Done\"\n
\n\n

Be aware that the path must “match”. We use this run.cmd approach in combination with the is_in_place option (see here) and are happy with the results.

\n\n

A more detailed explanation can be found here.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/07/22/how-to-run-a-azure-appservice-webjob-with-parameters/","RawContent":null,"Thumbnail":null},{"Title":"How to use IE proxy settings with HttpClient","PublishedOn":"2022-03-28T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Internet Explorer is - mostly - dead, but some weird settings are still around and “attached” to the old world, at least on Windows 10. \nIf your system administrator uses some advanced proxy settings (e.g. a PAC-file), those will be attached to the user’s IE settings.

\n\n

If you want to use this with a HttpClient you need to code something like this:

\n\n
    string target = \"https://my-target.local\";\n    var targetUri = new Uri(target);\n    var proxyAddressForThisUri = WebRequest.GetSystemWebProxy().GetProxy(targetUri);\n    if (proxyAddressForThisUri == targetUri)\n    {\n        // no proxy needed in this case\n        _httpClient = new HttpClient();\n    }\n    else\n    {\n        // proxy needed\n        _httpClient = new HttpClient(new HttpClientHandler() { Proxy = new WebProxy(proxyAddressForThisUri) { UseDefaultCredentials = true } });\n    }\n
\n\n

GetSystemWebProxy() gives access to the system proxy settings of the current user. Then we can query which proxy is needed for the target. If the result is the same address as the target, no proxy is needed. Otherwise, we inject a new WebProxy for this address.

\n\n

Hope this helps!

\n\n

Be aware: Creating new HttpClients is (at least in a server environment) not recommended. Try to reuse the same HttpClient instance!

\n\n
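
One way to follow that advice is to create the proxy-aware HttpClient once and reuse it. This is just a minimal sketch built around the code above (the class name is made up for this example):

\n\n
using System;\nusing System.Net;\nusing System.Net.Http;\n\npublic static class ProxyAwareHttpClient\n{\n    // Create the HttpClient once and reuse it for the whole application lifetime\n    private static readonly Lazy<HttpClient> _instance = new(() => Create(new Uri(\"https://my-target.local\")));\n\n    public static HttpClient Instance => _instance.Value;\n\n    private static HttpClient Create(Uri targetUri)\n    {\n        var proxyAddressForThisUri = WebRequest.GetSystemWebProxy().GetProxy(targetUri);\n        if (proxyAddressForThisUri == targetUri)\n        {\n            return new HttpClient(); // no proxy needed in this case\n        }\n\n        return new HttpClient(new HttpClientHandler\n        {\n            Proxy = new WebProxy(proxyAddressForThisUri) { UseDefaultCredentials = true }\n        });\n    }\n}\n
\n\n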

Also note: The proxy settings in Windows 11 are now built into the system settings, but the API still works :)

\n\n

\"x\"

\n","Href":"https://blog.codeinside.eu/2022/03/28/how-to-use-ie-proxy-settings-with-httpclient/","RawContent":null,"Thumbnail":null},{"Title":"Redirect to HTTPS with a simple web.config rule","PublishedOn":"2022-01-05T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The scenario is easy: My website is hosted in IIS and I would like to redirect all incoming HTTP traffic to its HTTPS counterpart.

\n\n

This is your solution - a “simple” rule:

\n\n
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<configuration>\n    <system.webServer>\n        <rewrite>\n            <rules>\n                <rule name=\"Redirect to https\" stopProcessing=\"true\">\n                    <match url=\".*\" />\n                    <conditions logicalGrouping=\"MatchAny\">\n                        <add input=\"{HTTPS}\" pattern=\"off\" />\n                    </conditions>\n                    <action type=\"Redirect\" url=\"https://{HTTP_HOST}{REQUEST_URI}\" redirectType=\"Found\" />\n                </rule>\n            </rules>\n        </rewrite>\n    </system.webServer>\n</configuration>\n
\n\n

We used this in the past to set up a “catch all” web site in IIS that redirects all incoming HTTP traffic.\nThe actual web applications only had the HTTPS binding in place.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2022/01/05/redirect-to-https-with-a-simple-webconfig-rule/","RawContent":null,"Thumbnail":null},{"Title":"Select random rows","PublishedOn":"2021-12-06T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Let’s say we have a SQL table and want to retrieve 10 rows randomly - how would you do that? Although I have been working with SQL for x years, I have never encountered that problem. The solution however is quite “simple” (at least if you are not too picky about how we define “randomness” and don’t run this on millions of rows):

\n\n

ORDER BY NEWID()

\n\n

The most boring way is to use the ORDER BY NEWID() clause:

\n\n
SELECT TOP 10 * FROM Products ORDER BY NEWID()\n
\n\n

This works, but if you do that on “large” datasets you might hit performance problems (e.g. more on that here)

\n\n
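
A small side note from me (not part of the original post): if you query SQL Server from .NET with EF Core, the same trick can be expressed in LINQ, because the SQL Server provider can translate Guid.NewGuid() into NEWID() - worth verifying on your EF Core version. A minimal sketch with a hypothetical Products table:

\n\n
// dotnet add package Microsoft.EntityFrameworkCore.SqlServer\nusing Microsoft.EntityFrameworkCore;\n\nusing var db = new ShopContext();\n\n// Should translate to something like: SELECT TOP(10) * FROM [Products] ORDER BY NEWID()\nvar randomProducts = db.Products.OrderBy(p => Guid.NewGuid()).Take(10).ToList();\n\nforeach (var product in randomProducts)\n{\n    Console.WriteLine(product.Name);\n}\n\npublic class Product\n{\n    public int Id { get; set; }\n    public string Name { get; set; } = \"\";\n}\n\npublic class ShopContext : DbContext\n{\n    public DbSet<Product> Products => Set<Product>();\n\n    protected override void OnConfiguring(DbContextOptionsBuilder options)\n        => options.UseSqlServer(@\"Server=(localdb)\\MSSQLLocalDB;Database=Shop;Integrated Security=true;\");\n}\n
\n\n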

TABLESAMPLE

\n\n

SQL Server implements the TABLESAMPLE clause, which was new to me. It seems to perform much better than the ORDER BY NEWID() clause, but behaves a bit weird. With this clause you can specify a “sample” from a table. The size of the sample can be specified as PERCENT or ROWS (which are then converted to percent internally).

\n\n

Syntax:

\n\n
SELECT TOP 10 * FROM Products TABLESAMPLE (25 PERCENT)\nSELECT TOP 10 * FROM Products TABLESAMPLE (100 ROWS)\n
\n\n

The weird part is that the given number might not match the number of rows in your result. You might get more or fewer results, and if the tablesample is too small you might even get nothing in return. There are some clever ways to work around this (e.g. using the TOP 100 statement with a much larger tablesample clause to get a guaranteed result set), but it feels “strange”.\nIf you hit limitations with the first solution you might want to read more on this blog or in the Microsoft Docs.

\n\n

Stackoverflow

\n\n

Of course there is a great Stackoverflow thread with even wilder solutions.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/12/06/select-random-rows/","RawContent":null,"Thumbnail":null},{"Title":"SQL collation problems","PublishedOn":"2021-11-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

This week I deployed a new feature and tried it on different SQL databases and was a bit surprised that on one database this error message came up:

\n\n
Cannot resolve the collation conflict between \"Latin1_General_CI_AS\" and \"SQL_Latin1_General_CP1_CI_AS\" in the equal to operation.\n
\n\n

This was strange, because - at least in theory - all databases have the same schema and I was sure that each database had the same collation setting.

\n\n

Collations on columns

\n\n

Well… my theory was wrong and this SQL statement told me that “some” columns had a different collation.

\n\n
select sc.name, sc.collation_name from sys.columns sc\ninner join sys.tables t on sc.object_id=t.object_id\nwhere t.name='TABLENAME'\n
\n\n

As it turns out, some columns had the collation Latin1_General_CI_AS and some had SQL_Latin1_General_CP1_CI_AS. I’m still not sure why, but I needed to do something.

\n\n

How to change the collation

\n\n

To change the collation you can execute something like this:

\n\n
ALTER TABLE MyTable\nALTER COLUMN [MyColumn] NVARCHAR(200) COLLATE SQL_Latin1_General_CP1_CI_AS\n
\n\n

Unfortunately there are restrictions and you can’t change the collation if the column is referenced by any one of the following:

\n\n
    \n
  • A computed column
  • \n
  • An index
  • \n
  • Distribution statistics, either generated automatically or by the CREATE STATISTICS statement
  • \n
  • A CHECK constraint
  • \n
  • A FOREIGN KEY constraint
  • \n
\n\n

Be aware: If you are not in control of the collation or if the collation is “fine” and you want to do this operation anyway, there might be a way to specify the collation in the SQL query.

\n\n

For more information you might want to check out the Microsoft Docs article “Set or Change the Column Collation”.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/11/24/sql-collations-problem/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Build 2021 session recommendations","PublishedOn":"2021-09-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

To be fair: Microsoft Build 2021 was some months ago, but the content might still be relevant today. Sooo… it took me a while, but here is a list of sessions that I found interesting. Some sessions are “better” and some “lighter”; the order doesn’t reflect that - it is just the order in which I watched the videos.

\n\n

The headline has a link to the video and below are some notes.

\n\n

Build cloud-native applications that run anywhere

\n\n
    \n
  • Azure Arc (GitHub & Policies)
  • \n
  • AKS
  • \n
\n\n

Build differentiated SaaS apps with the Microsoft Cloud

\n\n
    \n
  • Power Apps
  • \n
  • “Light” session - only if you are interested in Microsofts “Low Code” portfolio
  • \n
\n\n

Build the next generation of collaborative apps for hybrid work (https://mybuild.microsoft.com/sessions/2915b9b6-6b45-430a-9df7-2671318e2161?source=sessions)

\n\n
    \n
  • Overview Dev Platform (PowerApps, Graph…)
  • \n
  • Fluid
  • \n
  • Adaptive Cards
  • \n
  • Project.Reunion / WebView 2
  • \n
\n\n

Mark Russinovich on Azure innovation and more! (https://mybuild.microsoft.com/sessions/b7d536c1-515f-476a-83d2-85b6cf14577a?source=sessions)

\n\n
    \n
  • Dapr
  • \n
  • Story about RdcMan
  • \n
  • Sysmon on linux
  • \n
\n\n

Learn how to build exciting apps across meetings, chats, and channels within or outside Microsoft Teams (https://mybuild.microsoft.com/sessions/512470be-15d3-4b50-b180-6532c8153931?source=sessions)

\n\n
    \n
  • Microsoft Teams SDK
  • \n
  • Azure Communication Services
  • \n
  • Meeting Events, Media APIs, Share integration
  • \n
  • Teams Connect
  • \n
  • Adaptive Cards in Teams
  • \n
  • Messaging Extensions in Outlook for Web
  • \n
  • Together Mode scenes
  • \n
\n\n

What’s new for Windows desktop application development

\n\n
    \n
  • Project Reunion
  • \n
  • MAUI
  • \n
\n\n

Understand the ML process and embed models into apps (https://mybuild.microsoft.com/sessions/10930f2e-ad9c-460b-b91d-844d17a5a875?source=sessions)

\n\n
    \n
  • Azure ML
  • \n
  • “Data scientist”: VS Code Demo with Jupyter Notebooks, PyTorch, TensorBoard
  • \n
  • “MLOps”
  • \n
  • Azure Machine Learning Studio
  • \n
  • “Red/Blue”-Deployment via GitHub Actions
  • \n
\n\n

The future of modern application development with .NET (https://mybuild.microsoft.com/sessions/76ebac39-517d-44da-a58e-df4193b5efa9?source=sessions)

\n\n
    \n
  • “.NET Core Momentum”
  • \n
  • .NET Upgrade Assistant
  • \n
  • Minimal web apis
  • \n
  • MAUI
  • \n
  • Blazor in Web & Desktop
  • \n
  • Hot Reload
  • \n
\n\n

Scott Guthrie ‘Unplugged’ – Home Edition (Extended)

\n\n
    \n
  • ScottGu
  • \n
  • DevTools
  • \n
  • GitHub Actions
  • \n
  • Codespaces
  • \n
  • Cosmos DB: Serverless, Cache, Encryption, Free tier enhancements
  • \n
  • Azure AI
  • \n
\n\n

Build your first web app with Blazor & Web Assembly

\n\n
    \n
  • Learning video
  • \n
\n\n

Develop apps with the Microsoft Graph Toolkit

\n\n
    \n
  • “Low code” Learning video about the toolkit
  • \n
\n\n

Application Authentication in the Microsoft Identity platform

\n\n
    \n
  • MSAL 2.0 & Microsoft Identity Platform
  • \n
  • SPA App with JS
  • \n
  • WebApps stuff with ASP.NET Core
  • \n
  • Service apps
  • \n
\n\n

Double-click with Microsoft engineering leaders (https://mybuild.microsoft.com/sessions/08538f9b-e562-4d71-8b42-d240c3966ef0?source=sessions)

\n\n
    \n
  • “Whiteboarding-style”
  • \n
  • GitOps Concepts
  • \n
  • Velocity - Inner/Outer Loop
  • \n
  • Data Analytics with Cosmos DB
  • \n
  • Azure Cloud “overview”
  • \n
\n\n

.NET 6 deep dive; what’s new and what’s coming (https://mybuild.microsoft.com/sessions/70d379f4-1173-4941-b389-8796152ec7b8?source=sessions)

\n\n
    \n
  • .NET Momentum
  • \n
  • .NET 5 - why
  • \n
  • .NET 6 main features
  • \n
  • EF Core
  • \n
  • C# 10
  • \n
  • Minimal WebApis
  • \n
  • MAUI
  • \n
  • Blazor
  • \n
  • ASP.NET Core
  • \n
  • Edit and Continue
  • \n
\n\n

Hope this helps.

\n","Href":"https://blog.codeinside.eu/2021/09/24/build-2021-recommendation/","RawContent":null,"Thumbnail":null},{"Title":"Today I learned (sort of) 'fltmc' to inspect the IO request pipeline of Windows","PublishedOn":"2021-05-30T22:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

The headline is obviously a big lie, because I followed this twitter conversation last year, but it is still interesting to me and I wanted to write it down somewhere.

\n\n

The starting point was that Bruce Dawson (Google programmer) noticed that building Chrome on Windows is slow for various reasons:

\n\n

Based on some twitter discussion about source-file length and build times two months ago I wrote a blog post. It's got real data based on Chromium's build, and includes animations of build-time improvements:https://t.co/lsLH8BNe48

— Bruce Dawson (Antifa) (@BruceDawson0xB) March 31, 2020
\n\n\n

Trentent Tye told him to disable the “filter driver”:

\n\n

disabling the filter driver makes it dead dead dead. Might be worth testing with the number and sizes of files you are dealing with. Even half a millisecond of processing time adds up when it runs against millions and millions of files.

— Trentent Tye (@TrententTye) April 1, 2020
\n\n\n

If you have never heard of a “filter driver” (like me :)), you might want to take a look here.

\n\n

To see the loaded filter drivers on your machine, try this: Run fltmc (fltmc.exe) as admin.

\n\n

\"x\"

\n\n

Description:

\n\n

Each filter in the list sit in a pipe through which all IO requests bubble down and up. They see all IO requests, but ignore most. Ever wondered how Windows offers encrypted files, OneDrive/GDrive/DB file sync, storage quotas, system file protection, and, yes, anti-malware? ;)

— Rich Turner (@richturn_ms) April 2, 2020
\n\n\n

This makes more or less sense to me. I’m not really sure what to do with that information, but it’s cool (nerd cool, but anyway :)).

\n","Href":"https://blog.codeinside.eu/2021/05/30/fltmc-inspect-the-io-request-pipeline-of-windows/","RawContent":null,"Thumbnail":null},{"Title":"How to self host Google Fonts","PublishedOn":"2021-04-28T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Google Fonts are really nice and widely used. Typically a Google Font consists of the actual font files (e.g. woff, ttf, eot etc.) and some CSS, which points to those font files.

\n\n

In one of our applications we used an HTML/CSS/JS Bootstrap-like theme and the theme linked some Google Fonts. The problem was that we wanted to self host everything.

\n\n

After some research we discovered this tool: Google-Web-Fonts-Helper

\n\n

\"x\"

\n\n

Pick your font, select your preferred CSS option (e.g. if you need to support older browsers etc.) and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)

\n\n

The project site is on GitHub.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/04/28/how-to-self-host-google-fonts/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Graph: Read user profile and group memberships","PublishedOn":"2021-01-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

In our application we have a background service that “syncs” user data and group membership information from the Microsoft Graph to our database.

\n\n

The permission model:

\n\n

Programming against the Microsoft Graph is quite easy. There are many SDKs available, but understanding the permission model is hard.

\n\n

‘Directory.Read.All’ and ‘User.Read.All’:

\n\n

Initially we only synced the “basic” user data to our database, but then some customers wanted to reuse some other data already stored in the graph. Our app required the ‘Directory.Read.All’ permission, because we thought that this would be the “highest” permission - this is wrong!

\n\n

If you need “directory” information, e.g. memberships, the Directory.Read.All or Group.Read.All is a good starting point. But if you want to load specific user data, you might need to have the User.Read.All permission as well.

\n\n
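
For reference, such a sync can look roughly like this with the Microsoft Graph SDK and app-only authentication. This is a minimal sketch assuming Microsoft.Graph v5 and Azure.Identity (the query syntax differs in older SDK versions) and is not the actual code of our background service:

\n\n
using Azure.Identity;\nusing Microsoft.Graph;\nusing Microsoft.Graph.Models;\n\n// App-only authentication - needs the admin-consented Graph permissions described above\nvar credential = new ClientSecretCredential(\"tenant-id\", \"client-id\", \"client-secret\");\nvar graphClient = new GraphServiceClient(credential);\n\n// Load the \"basic\" user data\nvar users = await graphClient.Users.GetAsync();\nif (users?.Value == null) return;\n\nforeach (User user in users.Value)\n{\n    Console.WriteLine($\"{user.DisplayName} ({user.UserPrincipalName})\");\n\n    // Load the group memberships of this user\n    var memberships = await graphClient.Users[user.Id].MemberOf.GetAsync();\n    if (memberships?.Value == null) continue;\n\n    foreach (Group group in memberships.Value.OfType<Group>())\n    {\n        Console.WriteLine($\"  member of: {group.DisplayName}\");\n    }\n}\n
\n\n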

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2021/01/31/microsoft-graph-read-user-profile-and-group-memberships/","RawContent":null,"Thumbnail":null},{"Title":"How to get all distribution lists of a user with a single LDAP query","PublishedOn":"2020-12-31T00:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

In 2007 I wrote a blogpost about how easy it is to get all “groups” of a given user via the tokenGroups attribute.

\n\n

Last month I had the task of checking why “distribution list memberships” are not part of the result.

\n\n

The reason is simple:

\n\n

A pure distribution list (not security enabled) is not a security group, and only security groups are part of the “tokenGroups” attribute.

\n\n

After some thoughts and discussions we agreed, that it would be good if we could enhance our function and treat distribution lists like security groups.

\n\n

How to get all distribution lists of a user?

\n\n

Getting all groups of a given user might seem trivial, but the problem is that groups can contain other groups. \nAs always, there are a couple of ways to get a “full flat” list of all group memberships.

\n\n

A stupid way would be to load all groups in a recursive function - this might work, but will result in a flood of requests.

\n\n

A clever way would be to write a good LDAP query and let the Active Directory do the heavy lifting for us, right?

\n\n

1.2.840.113556.1.4.1941

\n\n

I found some sample code online with a very strange LDAP query and it turns out:\nThere is a “magic” LDAP matching rule called “LDAP_MATCHING_RULE_IN_CHAIN” and it does everything we are looking for:

\n\n
var getGroupsFilterForDn = $\"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:= {distinguishedName}))\";\n                using (var dirSearch = CreateDirectorySearcher(getGroupsFilterForDn))\n                {\n                    using (var results = dirSearch.FindAll())\n                    {\n                        foreach (SearchResult result in results)\n                        {\n                            if (result.Properties.Contains(\"name\") && result.Properties.Contains(\"objectSid\") && result.Properties.Contains(\"groupType\"))\n                                groups.Add(new GroupResult() { Name = (string)result.Properties[\"name\"][0], GroupType = (int)result.Properties[\"groupType\"][0], ObjectSid = new SecurityIdentifier((byte[])result.Properties[\"objectSid\"][0], 0).ToString() });\n                        }\n                    }\n                }\n
\n\n
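
The CreateDirectorySearcher helper is not shown in the snippet above - a minimal sketch of what it could look like with System.DirectoryServices (the LDAP path is just a placeholder, and the method belongs in the same class as the code above):

\n\n
using System.DirectoryServices;\n\nprivate static DirectorySearcher CreateDirectorySearcher(string filter)\n{\n    // Placeholder LDAP path - point this to your own domain or OU\n    var searchRoot = new DirectoryEntry(\"LDAP://DC=example,DC=local\");\n\n    return new DirectorySearcher(searchRoot)\n    {\n        Filter = filter,\n        PageSize = 1000 // enable paging, so more than 1000 results can be returned\n    };\n}\n
\n\n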

With the distinguishedName of the target user, we can load all distribution and security groups (see below…) transitively!

\n\n

Combine tokenGroups and this

\n\n

During our testing we found some minor differences between the LDAP_MATCHING_RULE_IN_CHAIN and the tokenGroups approach. Some “system-level” security groups were missing with the LDAP_MATCHING_RULE_IN_CHAIN way. In our production code we use a combination of those two approaches and it seems to work.

\n\n

A full code demo of how to get all distribution lists for a user can be found on GitHub.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/12/31/how-get-all-distribution-lists-of-a-user-with-a-single-ldap-query/","RawContent":null,"Thumbnail":null},{"Title":"Update AzureDevOps Server 2019 to AzureDevOps Server 2019 Update 1","PublishedOn":"2020-11-30T18:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

We did this update in May 2020, but I forgot to publish the blogpost… so here we are

\n\n

Last year we updated to Azure DevOps Server 2019 and it went more or less smoothly.

\n\n

In May we decided to update to the “newest” release at that time: Azure DevOps Server 2019 Update 1.1

\n\n

Setup

\n\n

Our AzureDevOps Server was running on a “new” Windows Server 2019 and everything was still kind of newish - so we just needed to update the AzureDevOps Server app.

\n\n

Update process

\n\n

The actual update was really easy, but we had some issues after the installation.

\n\n

Steps:

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

\"x\"

\n\n

Aftermath

\n\n

We had some issues with our Build Agents - they couldn’t connect to the AzureDevOps Server:

\n\n
TF400813: Resource not available for anonymous access\n
\n\n

As a first “workaround” (and a nice enhancement) we switched from HTTP to HTTPS internally, but this didn’t solve the problem.

\n\n

The real reason was that our “Azure DevOps Service User” didn’t have the required write permissions for this folder:

\n\n
C:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys\n
\n\n

The connection issue went away, but now we had introduced another problem: Our SSL certificate was “self signed” (from our Domain Controller), so we needed to register the agents like this:

\n\n
.\\config.cmd --gituseschannel --url https://.../tfs/ --auth Integrated --pool Default-VS2019 --replace --work _work\n
\n\n

The important parameter is --gituseschannel, which is needed when dealing with “self signed, but Domain ‘trusted’” certificates.

\n\n

With this setting everything seemed to work as expected.

\n\n

Only node.js projects or tooling were “problematic”, because node.js itself doesn’t use the Windows Certificate Store.

\n\n

To resolve this, the root certificate from our Domain controller must be stored on the agent.

\n\n
  [Environment]::SetEnvironmentVariable(\"NODE_EXTRA_CA_CERTS\", \"C:\\SSLCert\\root-CA.pem\", \"Machine\") \n
\n\n

Summary

\n\n

The update itself was easy, but it took us some hours to configure our Build Agents. After the initial hiccup it went smoothly from there - no issues and we are ready for the next update, which is already released.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/11/30/update-onprem-azuredevops-server-2019-to-azuredevops-server-2019-update1/","RawContent":null,"Thumbnail":null},{"Title":"DllRegisterServer 0x80020009 Error","PublishedOn":"2020-10-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Last week I had a very strange issue and the solution was really “easy”, but took me a while.

\n\n

Scenario

\n\n

For our products we build Office COM Addins with a C++ based “Shim” that boots up our .NET code (e.g. something like this).\nAs is the nature of COM: It requires some pretty dumb registry entries to work and in theory our toolchain should “build” and automatically “register” the output.

\n\n

Problem

\n\n

The registration process just failed with an error message like this:

\n\n
The module xxx.dll was loaded but the call to DllRegisterServer failed with error code 0x80020009\n
\n\n

After some research you will find some very old stuff or only some general advice like in this Stackoverflow.com question, e.g. “run it as administrator”.

\n\n

The solution

\n\n

Luckily we had another project where we use the same approach and it worked without any issues. After comparing the files I noticed some subtle differences: The file encoding was different!

\n\n

In my failing project some C++ files were encoded with UTF8-BOM. I changed everything to UTF8 and after this change it worked.

\n\n

My reaction:

\n\n
(╯°□°)╯︵ ┻━┻\n
\n\n

I’m not a C++ dev and I’m not even sure why some files had the wrong encoding in the first place. It “worked” - at least Visual Studio 2019 was able to build the stuff, but registering it with “regsvr32” just failed.

\n\n

I needed some hours to figure that out.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/10/31/dllregisterserver-0x80020009-error/","RawContent":null,"Thumbnail":null},{"Title":"How to share an Azure subscription in a team","PublishedOn":"2020-09-29T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

We at Sevitec are moving more and more workloads for us or our customers to Azure.

\n\n

So the basic question needs an answer:

\n\n

How can a team share an Azure subscription?

\n\n

Be aware: This approach works for us. There might be better options. If we do something stupid, just tell me in the comments or via email - help is appreciated.

\n\n

Step 1: Create a directory

\n\n

We have a “company directory” with a fully configured Azure Active Directory (incl. User sync between our OnPrem system, Office 365 licenses etc.).

\n\n

Our rule of thumb: We create an individual directory for each product team and all team members are invited into the new directory.

\n\n

Keep in mind: A directory itself costs you nothing but might help you to keep things manageable.

\n\n

\"Create

\n\n

Step 2: Create a group

\n\n

This step might be optional, but all team members - except the “Administrator” - have the same rights and permissions in our company. To keep things simple, we created a group with all team members.

\n\n

\"Put

\n\n

Step 3: Create a subscription

\n\n

Now create a subscription. The typical “Pay-as-you-go” offer will work. Be aware that the user who creates the subscription is initially set up as the Administrator.

\n\n

\"Create

\n\n

Step 4: “Share” the subscription

\n\n

This is the most important step:

\n\n

You need to grant the individual users or the group (from step 2) the “Contributor” role for this subscription via “Access control (IAM)”.\nThe hard part is understanding how those “Role assignments” affect the subscription. I’m not even sure if “Contributor” is the best fit, but it works for us.

\n\n

\"Pick

\n\n

Summary

\n\n

I’m not really sure why such a basic concept is labeled so poorly, but you really need to pick the correct role assignment and then the other person should be able to use the subscription.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/09/29/how-to-share-an-azure-subscription-in-a-team/","RawContent":null,"Thumbnail":null},{"Title":"How to run a legacy WCF .svc Service on Azure AppService","PublishedOn":"2020-08-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Last month we wanted to run a good old WCF powered service on Azure’s “App Service”.

\n\n

WCF… what’s that?

\n\n

If you are not familiar with WCF: Good! For the interested ones: WCF is, or was, a framework to build mostly SOAP based services in the .NET Framework 3.0 timeframe. Some parts were “good”, but most developers would call it a complex monster.

\n\n

Even in the glory days of WCF I tried to avoid it at all costs, but unfortunately I need to maintain a WCF-based service.

\n\n

For the curious: The project template and the tech is still there. Search for “WCF”.

\n\n

\"VS

\n\n

The template will produce something like that:

\n\n

The actual “service endpoint” is the Service1.svc file.

\n\n
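
The .svc file itself is just a small “marker” file. From the default template it roughly looks like this (a sketch; namespace and file names depend on your project):

\n\n
<%@ ServiceHost Language="C#" Debug="true" Service="WcfService1.Service1" CodeBehind="Service1.svc.cs" %>
\n\n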

\"WCF

\n\n

Running on Azure: The problem

\n\n

Let’s assume we have an application with a .svc endpoint. In theory we can deploy this application to a standard Windows Server/IIS without major problems.

\n\n

Now we try to deploy this very same application to Azure AppService and this is the result after we invoke the service from the browser:

\n\n
\"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.\" (HTTP Response was 404)\n
\n\n

Strange… very strange. In theory a blank HTTP 400 should appear, but not an HTTP 404. We knew the service itself was never “triggered”, because we had some logging in place - the request simply didn’t reach the actual service.

\n\n

After hours of debugging, testing and googling around I created a new “blank” WCF service from the Visual Studio template and got the same error.

\n\n

The good news: It was not just my code - something was blocking the request.

\n\n

After some hours I found a helpful switch in the Azure Portal and activated the “Failed Request tracing” feature (yeah… I could have found it sooner) and discovered this:

\n\n

\"Failed

\n\n

Running on Azure: The solution

\n\n

My initial thoughts were correct: The request was blocked. It was treated as “static content” and the actual WCF module was not mapped to the .svc extension.

\n\n

To “re-map” the .svc extension to the correct handler I needed to add this to the web.config:

\n\n
...\n<system.webServer>\n    ...\n\t<handlers>\n\t\t<remove name=\"svc-integrated\" />\n\t\t<add name=\"svc-integrated\" path=\"*.svc\" verb=\"*\" type=\"System.ServiceModel.Activation.HttpHandler\" resourceType=\"File\" preCondition=\"integratedMode\" />\n\t</handlers>\n</system.webServer>\n...\n\n
\n\n

With this configuration everything worked as expected on Azure AppService.

\n\n

Be aware:

\n\n

I’m really not 100% sure why this is needed in the first place. I’m also not 100% sure if the name svc-integrated is correct or important.

\n\n

This blogpost is a result of these tweets.

\n\n

That was a tough ride… Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/08/31/how-to-run-a-legacy-wcf-svc-service-on-azure-app-service/","RawContent":null,"Thumbnail":null},{"Title":"EWS, Exchange Online and OAuth with a Service Account","PublishedOn":"2020-07-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

This week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.

\n\n

But here is the full story:

\n\n

Our goal

\n\n

We wanted to access contact information from the organization via a web service, just like the traditional “Global Address List” in Exchange/Outlook. We knew that EWS was an option for the OnPrem Exchange, but what about Exchange Online?

\n\n

The big problem: Authentication is tricky. We wanted to use a “traditional” Service Account approach (think of username/password). Unfortunately the “basic auth” way will be blocked in the near future because of security concerns (makes sense TBH). There is an alternative approach available, but at first it seemed not to work the way we would like.

\n\n

So… what now?

\n\n

EWS is… old. Why?

\n\n

The Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and OnPrem Exchanges. On the other hand we could use the Microsoft Graph, but - at least currently - there is no single “contact” API available.

\n\n

To mimic the GAL we would need to query List Users and List orgContacts, which would be OK, but “orgContacts” has a “flaw”: \n“Hidden” contacts (“msexchhidefromaddresslists”) are returned from this API and we thought that this might be a no-go for our customers.

\n\n

Another argument for using EWS was that we could support OnPrem and Online with one code base.

\n\n

Docs from Microsoft

\n\n

The good news is that EWS and the auth problem are more or less well documented here.

\n\n

There are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.

\n\n

Delegation:

\n\n

Delegation means that we can write a desktop app and all actions are executed in the name of the signed-in user.

\n\n

Application:

\n\n

Application means that the app itself can perform certain actions without any user involved.

\n\n

EWS and the application way

\n\n

At first we thought that we might need to use the “application” way.

\n\n

The good news is that this was easy and worked. \nThe bad news is that the application needs the EWS permission “full_access_as_app”, which means our application can access all mailboxes of this tenant. This might be OK for certain apps, but it scared us.

\n\n
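
For completeness: the “application” way is basically a client credentials flow via MSAL. A minimal sketch (app id, tenant id and client secret are placeholders):

\n\n
using Microsoft.Identity.Client;

// Sketch of the "application" way (client credentials flow) - requires the
// "full_access_as_app" permission and therefore grants access to *all* mailboxes.
var cca = ConfidentialClientApplicationBuilder
    .Create("YOUR-APP-ID")
    .WithClientSecret("YOUR-CLIENT-SECRET")
    .WithTenantId("YOUR-TENANT-ID")
    .Build();

var appAuthResult = await cca.AcquireTokenForClient(
    new[] { "https://outlook.office365.com/.default" }).ExecuteAsync();
\n\n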

Back to the delegation way:

\n\n

EWS and the delegation way

\n\n

The documentation from Microsoft is good, but our “Service Account” use case was not mentioned. In the example from Microsoft a user needs to log in manually.

\n\n

Solution / TL;DR

\n\n

After some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:

\n\n
    \n
  1. Follow the normal “delegate” steps from the Microsoft Docs.
  2. Instead of this code, which will trigger the login UI:
\n\n
...\n// The permission scope required for EWS access\nvar ewsScopes = new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" };\n\n// Make the interactive token request\nvar authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();\n...\n
\n\n

Use the “AcquireTokenByUsernamePassword” method:

\n\n
...\nvar cred = new NetworkCredential(\"UserName\", \"Password\");\nvar authResult = await pca.AcquireTokenByUsernamePassword(new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" }, cred.UserName, cred.SecurePassword).ExecuteAsync();\n...\n
\n\n

To make this work you need to enable “Treat application as public client” under “Authentication” > “Advanced settings” in your AAD application, because this uses the “Resource owner password credentials flow”.

\n\n

Now you should be able to get the AccessToken and do some EWS magic.

\n\n
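
Putting it all together, a minimal sketch of the whole flow (app id, tenant id and the credentials are placeholders; the EWS part uses the Microsoft.Exchange.WebServices “Managed API” on top of the acquired token):

\n\n
using System;
using System.Net;
using Microsoft.Exchange.WebServices.Data;
using Microsoft.Identity.Client;

// Sketch: acquire a token for the service account and use it with the EWS Managed API.
var pca = PublicClientApplicationBuilder
    .Create("YOUR-APP-ID")
    .WithTenantId("YOUR-TENANT-ID")
    .Build();

var cred = new NetworkCredential("UserName", "Password");
var authResult = await pca.AcquireTokenByUsernamePassword(
    new[] { "https://outlook.office.com/EWS.AccessAsUser.All" },
    cred.UserName,
    cred.SecurePassword).ExecuteAsync();

var service = new ExchangeService(ExchangeVersion.Exchange2013_SP1)
{
    Credentials = new OAuthCredentials(authResult.AccessToken),
    Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx")
};

// e.g. resolve a name against the directory / GAL
var matches = service.ResolveName("Smith", ResolveNameSearchLocation.DirectoryOnly, false);
foreach (var match in matches)
{
    Console.WriteLine(match.Mailbox.Address);
}
\n\n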

I posted a shorter version on Stackoverflow.com

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/07/31/ews-exchange-online-oauth-with-a-service-account/","RawContent":null,"Thumbnail":null},{"Title":"Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?","PublishedOn":"2020-06-30T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Within our product we are moving more and more stuff into the .NET Core land.\nLast week we had a discussion around the required software prerequisites, and in the .NET Framework land this question was always easy to answer:

\n\n
\n

.NET Framework 4.5 or higher.

\n
\n\n

With .NET Core the answer is slightly different:

\n\n

In theory the runtime “rolls forward” within a major version, e.g. if you compiled your app against .NET Core 3.0 and a .NET Core 3.1 runtime is the only installed 3.x runtime on the machine, this runtime is used.

\n\n

This system is called “Framework-dependent apps roll forward” and sounds good.

\n\n
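
If you want to be explicit about this behavior, there is a “RollForward” setting that can be placed in the project file (or as “rollForward” in runtimeconfig.json). This is only a sketch - check the docs for the exact value that fits your scenario:

\n\n
<!-- Sketch: control the runtime roll-forward policy per project -->
<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <RollForward>LatestMinor</RollForward>
</PropertyGroup>
\n\n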

The bad part

\n\n

Unfortunately this didn’t work for us. Not sure why, but our app refused to work because a .dll was not found or missing. The reason is currently not clear. Be aware that Microsoft has written a hint that such things might occur:

\n\n
\n

It’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.

\n
\n\n

The good part

\n\n

With .NET Core we could also ship the runtime together with our app (a “self-contained deployment”) and it should run fine wherever we deploy it.

\n\n
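
Such a self-contained deployment is just a publish option, e.g. for a Windows x64 target (the runtime identifier is a placeholder for your actual target):

\n\n
dotnet publish -c Release -r win-x64 --self-contained true
\n\n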

Summary

\n\n

Read the docs about the “roll forward” approach if you have similar concerns, but make sure to test your app with that exact runtime combination.

\n\n

As a side note: .NET Core 3.0 is not supported anymore, so it would be good to upgrade to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/06/30/can-a-dotnet-core-30-compiled-app-run-with-a-dotnet-core-31-runtime/","RawContent":null,"Thumbnail":null},{"Title":"SqlBulkCopy for fast bulk inserts","PublishedOn":"2020-05-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Within our product OneOffixx we can create a “full export” of the product database. Because of limitations with normal MS SQL backups (e.g. compatibility with older SQL databases etc.), we created our own export mechanism.\nAn export can be 1 GB and more. This is nothing too serious and far from “big data”, but still not easy to handle, and we had some issues importing larger “exports”. \nOur importer was based on an Entity Framework 6 implementation and it was really slow… last month we tried to resolve this and we are quite happy with the result. Here is how we did it:

\n\n

TL;DR Problem:

\n\n

Bulk inserts with an Entity Framework-based implementation are really slow. There is at least one NuGet package which seems to help, but unfortunately we ran into some obscure issues with it. This Stackoverflow question highlights some numbers and ways of doing it.

\n\n

SqlBulkCopy to the rescue:

\n\n

After my failed attempt to tame our EF implementation I discovered the SqlBulkCopy operation. In .NET (Full Framework and .NET Standard!) the usage is simple via the “SqlBulkCopy” class.

\n\n

Our importer looks more or less like this:

\n\n
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30), TransactionScopeAsyncFlowOption.Enabled))\nusing (SqlBulkCopy bulkCopy = new SqlBulkCopy(databaseConnectionString))\n    {\n    // build an in-memory DataTable that mirrors the destination table\n    var dt = new DataTable();\n    dt.Columns.Add(\"DataColumnA\");\n    dt.Columns.Add(\"DataColumnB\");\n    dt.Columns.Add(\"DataColumnId\", typeof(Guid));\n\n    foreach (var dataEntry in data)\n    {\n        dt.Rows.Add(dataEntry.A, dataEntry.B, dataEntry.Id);\n    }\n\n    // point SqlBulkCopy at the destination table, map the columns and write everything in one go\n    bulkCopy.DestinationTableName = \"Data\";\n    bulkCopy.AutoMapColumns(dt);\n    bulkCopy.WriteToServer(dt);\n\n    scope.Complete();\n    }\n\npublic static class Extensions\n    {\n        public static void AutoMapColumns(this SqlBulkCopy sbc, DataTable dt)\n        {\n            sbc.ColumnMappings.Clear();\n\n            foreach (DataColumn column in dt.Columns)\n            {\n                sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);\n            }\n        }\n    }       \n
\n\n

Some notes:

\n\n
    \n
  • The TransactionScope is not required, but still nice.
  • The SqlBulkCopy instance just needs the databaseConnectionString.
  • A DataTable is needed and (I’m not sure why) all non-exotic SQL datatypes are magically supported, but GUIDs need to be typed explicitly.
  • Insert thousands of rows into your DataTable, point the SqlBulkCopy to your destination table, map the columns and write them to the server.
  • You can use the same instance for multiple bulk operations.
  • There is also an async implementation available (see the short sketch below).
\n\n
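
The async variant mentioned above looks roughly like this (a sketch; it assumes the same DataTable and extension method as in the snippet before):

\n\n
// Sketch: same as above, but with the async API
using (var bulkCopy = new SqlBulkCopy(databaseConnectionString))
{
    bulkCopy.DestinationTableName = "Data";
    bulkCopy.AutoMapColumns(dt);
    await bulkCopy.WriteToServerAsync(dt);
}
\n\n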

Only “downside”: SqlBulkCopy is a table-by-table insert. You need to insert your data in the correct order if you have any DB constraints in your schema.

\n\n

Result:

\n\n

We reduced the import from several minutes to seconds :)

\n\n

Hope this helps!

\n","Href":"https://blog.codeinside.eu/2020/05/31/sqlbulkcopy-for-fast-bulk-inserts/","RawContent":null,"Thumbnail":null},{"Title":"Blazor for Office Add-ins: First look","PublishedOn":"2020-04-30T21:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n

Last week I did some research and tried to build a pretty basic Office Add-in (within the “new” web-based add-in model) with Blazor.

\n\n

Side note: Last year I blogged about how to build Office Add-ins with ASP.NET Core.

\n\n

Why Blazor?

\n\n

My daily work home is in the C# and .NET land, so it would be great to use Blazor for Office Add-ins, right? \nAn Office Add-in is just a web application with a “communication tunnel” to the hosting Office application - not very different from the real web.

\n\n

What (might) work: Serverside Blazor

\n\n

My first try was with a “standard” serverside Blazor application and I just pointed the dummy Office Add-in manifest file to the site and it (obviously) worked:

\n\n
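
For reference, the relevant part of the add-in manifest is just the source location - roughly like this (a sketch; the URL is a placeholder for wherever the Blazor app is hosted):

\n\n
<!-- Sketch: point the add-in manifest at the hosted Blazor application -->
<DefaultSettings>
  <SourceLocation DefaultValue="https://localhost:5001/" />
</DefaultSettings>
\n\n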

Mhh... maybe?🤔😏#Blazor #OfficeDev pic.twitter.com/BzdVQzIeqA

— Robert Muehsig (@robert0muehsig) April 23, 2020
\n\n\n

I assume that serverside Blazor is not very “complicated” from the client’s point of view, so it would probably work.

\n\n

After my initial tweet Manuel Sidler jumped in and made a simple demo project, which also invokes the Office.js APIs from C#!

\n\n

Building an #Office Add-In based on #Blazor (Server) could be possible. Whether it's a good idea or not is another story ;) https://t.co/LdSPYl4SRh (thanks @robert0muehsig to get me jump up on this idea) pic.twitter.com/1w29212qdS

— Manuel Sidler (@manuelsidler) April 24, 2020
\n\n\n

Checkout his repository on GitHub for further information.

\n\n

What won’t work: WebAssembly (unless I’m missing something)

\n\n

Serverside Blazor is cool, but has some problems (e.g. a server connection is needed and scaling is not that easy) - what about WebAssembly?

\n\n

Well… Blazor WebAssembly is still in preview and I tried the same setup that worked for serverside Blazor.

\n\n

Result:

\n\n

The desktop PowerPoint (I tried to build a PowerPoint add-in) kept crashing after I added the add-in. On Office Online it seems to work, but not for very long:

\n\n

Blazor WebAssembly seems not to work or at least the startup is super weird :-/ pic.twitter.com/IvnecQFMj2

— Robert Muehsig (@robert0muehsig) April 27, 2020
\n\n\n

Possible reasons:

\n\n

The default Blazor WebAssembly template installs a service worker. I removed that part, but I’m not 100% sure if I did it correctly. At least service workers are currently not supported by the Office Add-in Edge WebView. My experience with Office Online and the Blazor add-in failed as well, and I don’t think that service workers are the problem there.

\n\n

I’m not really sure why it’s not working, but it’s quite early days for Blazor WebAssembly, so… time will tell.

\n\n

What does the Office Dev Team think of Blazor?

\n\n

So far I have found just one comment on this blogpost regarding Blazor:

\n\n
Will Blazor be supported for Office Add-ins?\n\nNo, it will be a React Office.js add-in. We don’t have any plans to support Blazor yet. For that, please put a note on our UserVoice channel: https://officespdev.uservoice.com. There are several UserVoice items already on this, so know that we are listening to your feedback and prioritizing based on customer requests. The more requests we get for particular features, the more we will consider moving forward with developing it. \n
\n\n

Well… vote for it! ;)

\n","Href":"https://blog.codeinside.eu/2020/04/30/blazor-for-office-addins-first-look/","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"YouTube":{"FeedItems":[{"Title":"Erste Schritte mit dem Azure OpenAI Service","PublishedOn":"2023-03-23T22:30:48+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=VVNHT4gVxDo","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/VVNHT4gVxDo/hqdefault.jpg"},{"Title":"Erster Schritt in die Source Control: Visual Studio Projekte auf GitHub pushen","PublishedOn":"2023-03-17T21:59:57+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=iKQS5nYbC-k","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/iKQS5nYbC-k/hqdefault.jpg"},{"Title":"Vite.js für React & TypeScript für ASP.NET Core & Visual Studio Entwickler","PublishedOn":"2023-02-12T00:25:03+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=-2iiXpBcmDY","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/-2iiXpBcmDY/hqdefault.jpg"},{"Title":"React.js mit TypeScript in ASP.NET Core mit Visual Studio & Visual Studio Code","PublishedOn":"2023-01-26T23:35:26+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=IgW79wxMO-c","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/IgW79wxMO-c/hqdefault.jpg"},{"Title":"React.js mit ASP.NET Core - ein Einstieg mit Visual Studio","PublishedOn":"2022-10-07T23:15:55+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=gIzMtWDs_QM","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/gIzMtWDs_QM/hqdefault.jpg"},{"Title":"Einstieg in die Webentwicklung mit .NET 6 & ASP.NET Core","PublishedOn":"2022-04-12T21:13:18+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=WtpzsW5Xwqo","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/WtpzsW5Xwqo/hqdefault.jpg"},{"Title":"Das erste .NET 6 Programm","PublishedOn":"2022-01-30T22:21:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=fVzo2qJubmA","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/fVzo2qJubmA/hqdefault.jpg"},{"Title":"Azure SQL - ist das echt so teuer? 
Neee...","PublishedOn":"2022-01-11T21:49:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=dNaIOGQj15M","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/dNaIOGQj15M/hqdefault.jpg"},{"Title":"Was sind \"Project Templates\" in Visual Studio?","PublishedOn":"2021-12-22T22:36:25+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=_IMabo9yHSA","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/_IMabo9yHSA/hqdefault.jpg"},{"Title":".NET Versionen - was bedeutet LTS und Current?","PublishedOn":"2021-12-21T21:06:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2ghTKF0Ey_0","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2ghTKF0Ey_0/hqdefault.jpg"},{"Title":"Einstieg in die .NET Entwicklung für Anfänger","PublishedOn":"2021-12-20T22:18:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2EcSJDX-8-s","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2EcSJDX-8-s/hqdefault.jpg"},{"Title":"Erste Schritte mit Unit Tests","PublishedOn":"2008-11-05T00:14:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=tjAv1-Qb4rY","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/tjAv1-Qb4rY/hqdefault.jpg"},{"Title":"3 Schichten Architektur","PublishedOn":"2008-10-17T22:01:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=27yknlB8xeg","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/27yknlB8xeg/hqdefault.jpg"}],"ResultType":"Feed"},"O_Blog":{"FeedItems":[{"Title":"How to build a simple hate speech detector with machine learning","PublishedOn":"2019-08-02T13:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"

Not everybody on the internet behaves nicely and some comments are just rude or offensive. If you run a web page that offers a public comment function, hate speech can be a real problem. For example in Germany, you are legally required to delete hate speech comments. This can be challenging if you have to check thousands of comments each day. \nSo wouldn’t it be nice if you could automatically check the user’s comment and give them a little hint to stay nice?\n

\n\n

The simplest thing you could do is to check if the user’s text contains offensive words. However, this approach is limited since you can offend people without using offensive words.

\n\n

This post will show you how to train a machine learning model that can detect if a comment or text is offensive. And to start you need just a few lines of Python code \\o/

\n\n

The Data

\n\n

At first, you need data. In this case, you will need a list of offensive and nonoffensive texts. I wrote this tutorial for a machine learning course in Germany, so I used German texts but you should be able to use other languages too.

\n\n

For a machine learning competition, scientists provided a list of comments labeled as offensive and non-offensive (Germeval 2018, Subtask 1). This is perfect for us since we can just use this data.

\n\n

The Code

\n\n

To tackle this task I would first establish a baseline and then improve this solution step by step. Luckily they also published the scores of all submissions, so we can get a sense of how well we are doing.

\n\n

For our baseline model we are going to use Facebook’s fastText. It’s simple to use, works with many languages and does not require any special hardware like a GPU. Oh, and it’s fast :)

\n\n

1. Load the data

\n\n

After you have downloaded the training data file germeval2018.training.txt you need to transform this data into a format that fastText can read.\nfastText’s standard format looks like this: “__label__[your label] some text”:

\n\n
__label__offensive some insults\n__label__other have a nice day\n
\n\n

2. Train the Model

\n\n

To train the model you need to install the fastText Python package.

\n\n
$ pip install fasttext\n
\n

To train the model you need just three lines of code.

\n
import fasttext\ntraning_parameters = {'epoch': 50, 'lr': 0.05, 'loss': \"ns\", 'thread': 8, 'ws': 5, 'dim': 100}    \nmodel = fasttext.supervised('fasttext.train', 'model', **traning_parameters)\n
\n\n

I packed all the training parameters into a separate dictionary. To me that looks a bit cleaner, but you don’t need to do that.

\n\n

3. Test your Model

\n\n

After we trained the model it is time to test how it performs. FastText provides a handy test method to evaluate the model’s performance. To compare our model with the other models from the GermEval contest I also added a lambda which calculates the average F1 score. For now, I did not use the official test script from the contest’s repository - which you should do if you want to participate in such a contest.

\n\n
def test(model):\n    f1_score = lambda precision, recall: 2 * ((precision * recall) / (precision + recall))\n    nexamples, recall, precision = model.test('fasttext.test')\n    print (f'recall: {recall}' )\n    print (f'precision: {precision}')\n    print (f'f1 score: {f1_score(precision,recall)}')\n    print (f'number of examples: {nexamples}')\n
\n\n

I don’t know about you, but I am so curious how we score. Annnnnnnd:

\n\n
recall: 0.7018686296715742\nprecision: 0.7018686296715742\nf1 score: 0.7018686296715742\nnumber of examples: 3532\n
\n\n

Looking at the results we can see that the best other model had an average F1 score of 76.77 and our model achieves - without any optimization or preprocessing - an F1 score of 70.18.

\n\n

This is pretty good since the models for these contests are usually specially optimized for the given data.

\n\n

FastText is a clever piece of software that uses some neat tricks. If you are interested in fastText you should take a look at the paper and this one. For example, fastText uses character n-grams. This approach is well suited for the German language, which uses a lot of compound words.

\n\n

Next Steps

\n\n

In this very basic tutorial, we trained a model with just a few lines of Python code. There are several things you can do to improve this model. The first step would be to preprocess your data. During preprocessing you could lower case all texts, remove URLs and special characters, correct spelling, etc. After every optimization step, you can test your model and check if your scores went up. Happy hacking :)

\n\n

Some Ideas:

\n\n
    \n
  1. Preprocess the data
  2. Optimize the parameters (number of training epochs, learning rate, embedding dims, word n-grams)
  3. Use pre-trained word vectors from the fastText website
  4. Add more data to the training set
  5. Use data augmentation
\n\n

Here is the full code:

\n\n\n\n

Credit: Photo by Jon Tyson on Unsplash

","Href":"https://www.oliverguhr.eu/nlp/jekyll/2019/08/02/build-a-simple-hate-speech-detector-with-machine-learning.html","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"GitHubEventsUser":{"Events":[{"Id":"31261163258","Type":"IssuesEvent","CreatedAt":"2023-08-21T15:35:19","Actor":"robertmuehsig","Repository":"fluentribbon/Fluent.Ribbon","Organization":"fluentribbon","RawContent":null,"RelatedAction":"opened","RelatedUrl":"https://github.com/fluentribbon/Fluent.Ribbon/issues/1161","RelatedDescription":"Opened issue \"RibbonBackButton - Localization Mix\" (#1161) at fluentribbon/Fluent.Ribbon","RelatedBody":"The \"BackButton\" on the Backstage has a flaw, that it mixes languages and the localization is \"not ideal\".\r\n\r\nThis button here:\r\n![image](https://github.com/fluentribbon/Fluent.Ribbon/assets/756703/8e239e25-7c79-45f1-b55b-005f9e561f92) \r\n... work with a screenreader since [this change](https://github.com/fluentribbon/Fluent.Ribbon/issues/1125). \r\n\r\nUnfortunately the German translation is \"not ideal\", because of the Wording \"Backstage schließen\".\r\n\"Backstage\" itself is used by Microsoft even in German support sites, but our accessibility tester doesn't allow this, because NVDA, Jaws & the Windows Narrator have a weird pronunciation of it.\r\n\r\nI checked if I could change the title myself to \"Menü schließen\", which doesn't sound too bad, but the screenreader will read \"Menü schließen - Button - Open Backstage\", which is weird.\r\n\r\nThe \"Open Backstage\" originates from the RoutedUICommand:\r\n\r\n![image](https://github.com/fluentribbon/Fluent.Ribbon/assets/756703/afba7aac-1297-4050-8876-ffc1c4718636), which is currently hardcoded:\r\n\r\n```\r\npublic static class RibbonCommands\r\n{\r\n /// \r\n /// Gets the value that represents the Open Backstage command\r\n /// \r\n public static readonly RoutedCommand OpenBackstage = new RoutedUICommand(\"Open backstage\", nameof(OpenBackstage), typeof(RibbonCommands));\r\n}\r\n```\r\n\r\nThe \"easiest\" fix would be to change the \"Backstage\" to \"Menü\" (for the German translation) and somehow use the same text for the RoutedUICommand, but I'm not even sure if this is needed or if the text could be removed anyway.\r\n\r\n---\r\n### Environment\r\n\r\n- Fluent.Ribbon __v10__\r\n- Windows __11__\r\n- .NET Framework __4.8__\r\n"},{"Id":"31096417138","Type":"PullRequestEvent","CreatedAt":"2023-08-14T09:05:21","Actor":"oliverguhr","Repository":"Donat24/FastVAD","Organization":null,"RawContent":null,"RelatedAction":"opened","RelatedUrl":"https://github.com/Donat24/FastVAD/pull/2","RelatedDescription":"Opened pull request \"added bayes filter demo\" (#2) at Donat24/FastVAD","RelatedBody":""},{"Id":"31059327908","Type":"IssuesEvent","CreatedAt":"2023-08-11T13:23:30","Actor":"oliverguhr","Repository":"pytorch/pytorch","Organization":"pytorch","RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/pytorch/pytorch/issues/70036","RelatedDescription":"Closed issue \"ROCM device not found\" (#70036) at pytorch/pytorch","RelatedBody":"### 🐛 Describe the bug\n\nHello,\r\nI installed rocm 4.2 according to the documentation. The GPU is recognized by the rocm tooling. 
However, pytorch does not detect the GPU.\r\n\r\n\r\n```\r\n>>>import torch \r\n>>> torch.tensor([0]).to(\"cuda\")\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"/home/oli/source/experiment/amd-test/venv/lib/python3.8/site-packages/torch/cuda/__init__.py\", line 214, in _lazy_init\r\n torch._C._cuda_init()\r\nRuntimeError: No HIP GPUs are available\r\n```\r\n\r\n```\r\n>>>import torch \r\n>>>torch.tensor([0]).to(\"hip\")\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\nRuntimeError: HIP error: hipErrorNoDevice\r\nHIP kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\r\nFor debugging consider passing HIP_LAUNCH_BLOCKING=1.\r\n```\r\n\r\n\r\n\r\n\r\n\n\n### Versions\n\nI am running Python 3.8.10. I installed pytorch in a virtual env today with this command:\r\n```\r\npip3 install torch torchvision==0.11.2 -f https://download.pytorch.org/whl/rocm4.2/torch_stable.html\r\n```\r\nROCM is at 4.2.0.40200-21\r\n```\r\n$apt show rocm-libs -a \r\nPackage: rocm-libs\r\nVersion: 4.2.0.40200-21\r\n```\r\n\r\nunfortunately running collect_env.py fails with\r\n\r\n```\r\npython3 collect_env.py\r\nCollecting environment information...\r\nTraceback (most recent call last):\r\n File \"collect_env.py\", line 469, in \r\n main()\r\n File \"collect_env.py\", line 452, in main\r\n output = get_pretty_env_info()\r\n File \"collect_env.py\", line 447, in get_pretty_env_info\r\n return pretty_str(get_env_info())\r\n File \"collect_env.py\", line 309, in get_env_info\r\n hip_runtime_version = [s.rsplit(None, 1)[-1] for s in cfg if 'HIP Runtime' in s][0]\r\nIndexError: list index out of range\r\n\r\n```\r\n\r\nrunning rocminfo gives me:\r\n\r\n```\r\nsudo /opt/rocm/bin/rocminfo\r\nROCk module is loaded\r\n===================== \r\nHSA System Attributes \r\n===================== \r\nRuntime Version: 1.1\r\nSystem Timestamp Freq.: 1000.000000MHz\r\nSig. Max Wait Duration: 18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)\r\nMachine Model: LARGE \r\nSystem Endianness: LITTLE \r\n\r\n========== \r\nHSA Agents \r\n========== \r\n******* \r\nAgent 1 \r\n******* \r\n Name: AMD Ryzen 7 1700 Eight-Core Processor\r\n Uuid: CPU-XX \r\n Marketing Name: AMD Ryzen 7 1700 Eight-Core Processor\r\n Vendor Name: CPU \r\n Feature: None specified \r\n Profile: FULL_PROFILE \r\n Float Round Mode: NEAR \r\n Max Queue Number: 0(0x0) \r\n Queue Min Size: 0(0x0) \r\n Queue Max Size: 0(0x0) \r\n Queue Type: MULTI \r\n Node: 0 \r\n Device Type: CPU \r\n Cache Info: \r\n L1: 32768(0x8000) KB \r\n Chip ID: 0(0x0) \r\n Cacheline Size: 64(0x40) \r\n Max Clock Freq. (MHz): 3000 \r\n BDFID: 0 \r\n Internal Node ID: 0 \r\n Compute Unit: 16 \r\n SIMDs per CU: 0 \r\n Shader Engines: 0 \r\n Shader Arrs. per Eng.: 0 \r\n WatchPts on Addr. 
Ranges:1 \r\n Features: None\r\n Pool Info: \r\n Pool 1 \r\n Segment: GLOBAL; FLAGS: FINE GRAINED \r\n Size: 32877948(0x1f5ad7c) KB \r\n Allocatable: TRUE \r\n Alloc Granule: 4KB \r\n Alloc Alignment: 4KB \r\n Accessible by all: TRUE \r\n Pool 2 \r\n Segment: GLOBAL; FLAGS: KERNARG, FINE GRAINED\r\n Size: 32877948(0x1f5ad7c) KB \r\n Allocatable: TRUE \r\n Alloc Granule: 4KB \r\n Alloc Alignment: 4KB \r\n Accessible by all: TRUE \r\n Pool 3 \r\n Segment: GLOBAL; FLAGS: COARSE GRAINED \r\n Size: 32877948(0x1f5ad7c) KB \r\n Allocatable: TRUE \r\n Alloc Granule: 4KB \r\n Alloc Alignment: 4KB \r\n Accessible by all: TRUE \r\n ISA Info: \r\n******* \r\nAgent 2 \r\n******* \r\n Name: gfx803 \r\n Uuid: GPU-XX \r\n Marketing Name: AMD Radeon (TM) RX 480 Graphics \r\n Vendor Name: AMD \r\n Feature: KERNEL_DISPATCH \r\n Profile: BASE_PROFILE \r\n Float Round Mode: NEAR \r\n Max Queue Number: 128(0x80) \r\n Queue Min Size: 4096(0x1000) \r\n Queue Max Size: 131072(0x20000) \r\n Queue Type: MULTI \r\n Node: 1 \r\n Device Type: GPU \r\n Cache Info: \r\n L1: 16(0x10) KB \r\n Chip ID: 26591(0x67df) \r\n Cacheline Size: 64(0x40) \r\n Max Clock Freq. (MHz): 1310 \r\n BDFID: 2048 \r\n Internal Node ID: 1 \r\n Compute Unit: 36 \r\n SIMDs per CU: 4 \r\n Shader Engines: 4 \r\n Shader Arrs. per Eng.: 1 \r\n WatchPts on Addr. Ranges:4 \r\n Features: KERNEL_DISPATCH \r\n Fast F16 Operation: FALSE \r\n Wavefront Size: 64(0x40) \r\n Workgroup Max Size: 1024(0x400) \r\n Workgroup Max Size per Dimension:\r\n x 1024(0x400) \r\n y 1024(0x400) \r\n z 1024(0x400) \r\n Max Waves Per CU: 40(0x28) \r\n Max Work-item Per CU: 2560(0xa00) \r\n Grid Max Size: 4294967295(0xffffffff) \r\n Grid Max Size per Dimension:\r\n x 4294967295(0xffffffff) \r\n y 4294967295(0xffffffff) \r\n z 4294967295(0xffffffff) \r\n Max fbarriers/Workgrp: 32 \r\n Pool Info: \r\n Pool 1 \r\n Segment: GLOBAL; FLAGS: COARSE GRAINED \r\n Size: 8388608(0x800000) KB \r\n Allocatable: TRUE \r\n Alloc Granule: 4KB \r\n Alloc Alignment: 4KB \r\n Accessible by all: FALSE \r\n Pool 2 \r\n Segment: GROUP \r\n Size: 64(0x40) KB \r\n Allocatable: FALSE \r\n Alloc Granule: 0KB \r\n Alloc Alignment: 0KB \r\n Accessible by all: FALSE \r\n ISA Info: \r\n ISA 1 \r\n Name: amdgcn-amd-amdhsa--gfx803 \r\n Machine Models: HSA_MACHINE_MODEL_LARGE \r\n Profiles: HSA_PROFILE_BASE \r\n Default Rounding Mode: NEAR \r\n Default Rounding Mode: NEAR \r\n Fast f16: TRUE \r\n Workgroup Max Size: 1024(0x400) \r\n Workgroup Max Size per Dimension:\r\n x 1024(0x400) \r\n y 1024(0x400) \r\n z 1024(0x400) \r\n Grid Max Size: 4294967295(0xffffffff) \r\n Grid Max Size per Dimension:\r\n x 4294967295(0xffffffff) \r\n y 4294967295(0xffffffff) \r\n z 4294967295(0xffffffff) \r\n FBarrier Max Size: 32 \r\n*** Done *** \r\n```\n\ncc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH"},{"Id":"31023518523","Type":"PullRequestEvent","CreatedAt":"2023-08-10T07:33:09","Actor":"oliverguhr","Repository":"Donat24/FastVAD","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/Donat24/FastVAD/pull/1","RelatedDescription":"Closed pull request \"a bunch of small bugfixes\" (#1) at Donat24/FastVAD","RelatedBody":"Some small fixes:\r\n\r\n- a typo\r\n- default micro has index -1\r\n- frame rate for graph 
improved"},{"Id":"31003085476","Type":"PullRequestEvent","CreatedAt":"2023-08-09T13:19:05","Actor":"oliverguhr","Repository":"Donat24/FastVAD","Organization":null,"RawContent":null,"RelatedAction":"opened","RelatedUrl":"https://github.com/Donat24/FastVAD/pull/1","RelatedDescription":"Opened pull request \"a bunch of small bugfixes\" (#1) at Donat24/FastVAD","RelatedBody":"Some small fixes:\r\n\r\n- a typo\r\n- default micro has index -1\r\n- frame rate for graph improved"}],"ResultType":"GitHubEvent"}},"RunOn":"2023-09-07T05:30:02.4006681Z","RunDurationInMilliseconds":1278} \ No newline at end of file