diff --git a/_data/Sloader.json b/_data/Sloader.json index 4ca92c8..bca7737 100644 --- a/_data/Sloader.json +++ b/_data/Sloader.json @@ -1 +1 @@ -{"Data":{"Blog":{"FeedItems":[{"Title":"Limit Active Directory property access","PublishedOn":"2023-09-20T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\n
Be aware: I’m not a full-time administrator and this post might sound stupid to you.
\n\nWe access certain Active Directory properties from our application, and on one customer's domain we couldn’t get any data out via our Active Directory component.
\n\nAfter some debugging and doubts about our own functionality, we (the customer's admin and I) found the reason:\nOur code was running under a Windows account that was very limited and couldn’t read those properties.
\n\nIf you have similar problems you might want to take a look at the AD User & Group management.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/09/20/limit-active-directory-property-access/","RawContent":null,"Thumbnail":null},{"Title":"Zip deployment failed on Azure","PublishedOn":"2023-09-05T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe are using Azure App Service for our application (which runs great BTW) and deploy it automatically via ZipDeploy. \nThis basic setup was running smoothly, but we noticed that at some point the deployment failed with these error messages:
\n\n2023-08-24T20:48:56.1057054Z Deployment endpoint responded with status code 202\n2023-08-24T20:49:15.6984407Z Configuring default logging for the app, if not already enabled\n2023-08-24T20:49:18.8106651Z Zip deployment failed. {'id': 'temp-b574d768', 'status': 3, 'status_text': '', 'author_email': 'N/A', 'author': 'N/A', 'deployer': 'ZipDeploy', 'message': 'Deploying from pushed zip file', 'progress': '', 'received_time': '2023-08-24T20:48:55.8916655Z', 'start_time': '2023-08-24T20:48:55.8916655Z', 'end_time': '2023-08-24T20:49:15.3291017Z', 'last_success_end_time': None, 'complete': True, 'active': False, 'is_temp': True, 'is_readonly': False, 'url': 'https://[...].scm.azurewebsites.net/api/deployments/latest', 'log_url': 'https://[...].scm.azurewebsites.net/api/deployments/latest/log', 'site_name': '[...]', 'provisioningState': 'Failed'}. Please run the command az webapp log deployment show\n2023-08-24T20:49:18.8114319Z -n [...] -g production\n
or this one (depending on how we invoked the deployment script):
\n\nGetting scm site credentials for zip deployment\nStarting zip deployment. This operation can take a while to complete ...\nDeployment endpoint responded with status code 500\nAn error occured during deployment. Status Code: 500, Details: {\"Message\":\"An error has occurred.\",\"ExceptionMessage\":\"There is not enough space on the disk.\\r\\n\",\"ExceptionType\":\"System.IO.IOException\",\"StackTrace\":\" \n
The message There is not enough space on the disk
was a good hint, but according to the File System Storage everything should have been fine with only 8% used.
Be aware - this is important: We have multiple apps on the same App Service plan!
\n\n\n\nThe next step was to check the behind-the-scenes environment via the “Advanced Tools” (Kudu) and there it is:
\n\n\n\nThere are two different storages attached to the app service:
\n\nc:\\home
is the “File System Storage” that you can see in the Azure Portal and is quite large. App files are located here.c:\\local
is a much smaller storage with ~21GB, and if that space is used up, ZipDeploy will fail.c:\\local
stores “mostly” temporary items, e.g.:
Directory of C:\\local\n\n08/31/2023 06:40 AM <DIR> .\n08/31/2023 06:40 AM <DIR> ..\n07/13/2023 04:29 PM <DIR> AppData\n07/13/2023 04:29 PM <DIR> ASP Compiled Templates\n08/31/2023 06:40 AM <DIR> Config\n07/13/2023 04:29 PM <DIR> DomainValidationTokens\n07/13/2023 04:29 PM <DIR> DynamicCache\n07/13/2023 04:29 PM <DIR> FrameworkJit\n07/13/2023 04:29 PM <DIR> IIS Temporary Compressed Files\n07/13/2023 04:29 PM <DIR> LocalAppData\n07/13/2023 04:29 PM <DIR> ProgramData\n09/05/2023 08:36 PM <DIR> Temp\n08/31/2023 06:40 AM <DIR> Temporary ASP.NET Files\n07/18/2023 04:06 AM <DIR> UserProfile\n08/19/2023 06:34 AM <SYMLINKD> VirtualDirectory0 [\\\\...\\]\n 0 File(s) 0 bytes\n 15 Dir(s) 13,334,384,640 bytes free\n
The “biggest” item here was in our case under c:\\local\\Temp\\zipdeploy
:
Directory of C:\\local\\Temp\\zipdeploy\n\n08/29/2023 04:52 AM <DIR> .\n08/29/2023 04:52 AM <DIR> ..\n08/29/2023 04:52 AM <DIR> extracted\n08/29/2023 04:52 AM 774,591,927 jiire5i5.zip\n
This folder stores our ZipDeploy
package, which is quite large at ~800MB. The folder also contains the extracted files - remember: We only have 21GB on this storage, but even if the zip file and the extracted files take ~3GB, there is still plenty of room, right?
Well… it turns out that each App Service on an App Service plan uses this storage, and if you have multiple App Services on the same plan, those 21GB can melt away quickly.
\n\nThe “bad” part is that the space is shared, but each App Service has its own c:\\local
folder (which makes sense). To free up space we had to clean up this folder on each App Service like this:
rmdir c:\\local\\Temp\\zipdeploy /s /q\n
If you have problems with ZipDeploy and the error message tells you that there is not enough space, check out the c:\\local
space (and of course c:\\home
as well) and delete unused files. Sometimes a reboot might help (to clean up temp files), but AFAIK those ZipDeploy files will survive it.
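Because the cleanup has to happen on each App Service, scripting it can help. Here is a rough sketch via Kudu's REST command endpoint - the app name and the publishing credentials (`DEPLOY_USER`/`DEPLOY_PASS`) are hypothetical placeholders, so treat this as a starting point, not a finished tool:

```shell
# Sketch: run the cleanup remotely via Kudu's /api/command endpoint.
# APP, DEPLOY_USER and DEPLOY_PASS are placeholders - use your app name
# and its publishing credentials. The curl call only runs if they are set.
APP="${APP:-my-app}"
PAYLOAD='{"command":"rmdir c:\\local\\Temp\\zipdeploy /s /q","dir":"c:\\local\\Temp"}'
echo "Payload for $APP: $PAYLOAD"

if [ -n "${DEPLOY_USER:-}" ] && [ -n "${DEPLOY_PASS:-}" ]; then
  curl -s -u "$DEPLOY_USER:$DEPLOY_PASS" -X POST \
       -H "Content-Type: application/json" \
       -d "$PAYLOAD" \
       "https://$APP.scm.azurewebsites.net/api/command"
fi
```

If you only have a handful of apps, running the `rmdir` directly in the Kudu debug console is just as good.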
The AI world is rising very fast these days: ChatGPT is such an awesome (and scary good?) service and Microsoft joined the ship with some partner announcements and investments. The result of these actions is that OpenAI is now a “first class citizen” on Azure.
\n\nSo - for the average Microsoft/.NET developer this opens up a wonderful toolbox and the first steps are really easy.
\n\nBe aware: You need to “apply” for access to the OpenAI service, but it took less than 24 hours for us to gain access. I guess this is just a temporary thing.
\n\nDisclaimer: I’m not an AI/ML engineer and I only have a “glimpse” of knowledge about the technology behind GPT3, ChatGPT and ML in general. If in doubt, I always ask my buddy Oliver Guhr, because he is much smarter at this stuff. Follow him on Twitter!
\n\nSearch for “OpenAI” and you will see the “Azure OpenAI Service” entry:
\n\n\n\nCreate a new Azure OpenAI Service instance:
\n\n\n\nOn the next page you will need to enter the subscription, resource group, region and a name (typical Azure stuff):
\n\n\n\nBe aware: If your subscription is not enabled for OpenAI, you need to apply here first.
\n\nAfter the service is created you should see something like this:
\n\n\n\nNow go to “Model deployments” and create a model - I chose “text-davinci-003”, because I think this is GPT3.5 (which was the initial ChatGPT release). GPT4 is currently in preview for Azure and you need to apply again.
\n\n\n\nMy guess is that you could train/deploy other, specialized models here, because this model is quite complex and you might want to tailor it to your scenario to get faster/cheaper results… but I honestly don’t know how to do that (currently), so we just leave the default.
\n\nIn this step we just need to copy the key and the endpoint, which can be found under “Keys and Endpoint” - simple, right?
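If you prefer the command line over the Portal, the key and endpoint can also be fetched via the az CLI - the resource name and resource group below are hypothetical placeholders:

```shell
# Fetch key and endpoint of an Azure OpenAI resource via the az CLI.
# NAME and RG are hypothetical - use your own resource name and group.
NAME="my-openai"
RG="my-resource-group"
echo "Fetching key and endpoint for $NAME"

# only runs if the az CLI is installed (and you are logged in)
if command -v az >/dev/null 2>&1; then
  az cognitiveservices account keys list --name "$NAME" --resource-group "$RG" --query key1 -o tsv
  az cognitiveservices account show --name "$NAME" --resource-group "$RG" --query properties.endpoint -o tsv
fi
```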
\n\n\n\nCreate a .NET application and add the Azure.AI.OpenAI NuGet package (currently in preview!).
\n\ndotnet add package Azure.AI.OpenAI --version 1.0.0-beta.5\n
Use this code:
\n\nusing Azure.AI.OpenAI;\nusing Azure;\n\nConsole.WriteLine(\"Hello, World!\");\n\nOpenAIClient client = new OpenAIClient(\n new Uri(\"YOUR-ENDPOINT\"),\n new AzureKeyCredential(\"YOUR-KEY\"));\n\nstring deploymentName = \"text-davinci-003\";\nstring prompt = \"Tell us something about .NET development.\";\nConsole.Write($\"Input: {prompt}\");\n\nResponse<Completions> completionsResponse = client.GetCompletions(deploymentName, prompt);\nstring completion = completionsResponse.Value.Choices[0].Text;\n\nConsole.WriteLine(completion);\n\nConsole.ReadLine();\n\n
Result:
\n\nHello, World!\nInput: Tell us something about .NET development.\n\n.NET development is a mature, feature-rich platform that enables developers to create sophisticated web applications, services, and applications for desktop, mobile, and embedded systems. Its features include full-stack programming, object-oriented data structures, security, scalability, speed, and an open source framework for distributed applications. A great advantage of .NET development is its capability to develop applications for both Windows and Linux (using .NET Core). .NET development is also compatible with other languages such as\n
As you can see… the result is cut off, not sure why, but this is just a simple demonstration.
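A likely reason for the cut-off is the max_tokens limit of the completions call - if it is not set, the service stops after a small default number of tokens. Here is a sketch of the same call against the Azure OpenAI REST API with an explicit max_tokens (the endpoint, key and api-version are placeholders/assumptions - check your own resource):

```shell
# Sketch: the completion call via the Azure OpenAI REST API, with an
# explicit max_tokens to avoid truncated answers. ENDPOINT and KEY are
# hypothetical placeholders; the api-version is an assumption.
ENDPOINT="${ENDPOINT:-https://my-openai.openai.azure.com}"
DEPLOYMENT="text-davinci-003"
PAYLOAD='{"prompt":"Tell us something about .NET development.","max_tokens":400}'
echo "POST $ENDPOINT/openai/deployments/$DEPLOYMENT/completions"

# the request only runs if an API key is provided
if [ -n "${KEY:-}" ]; then
  curl -s -H "api-key: $KEY" -H "Content-Type: application/json" \
       -d "$PAYLOAD" \
       "$ENDPOINT/openai/deployments/$DEPLOYMENT/completions?api-version=2022-12-01"
fi
```

The same setting is available on the SDK side as well, so you don't have to drop down to raw HTTP.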
\n\nWith these basic steps you can access the OpenAI development world. Azure makes it easy to integrate it into your existing Azure/Microsoft “stack”. Be aware that you could also use the same SDK with the endpoint from OpenAI directly. For billing reasons it is easier for us to use the Azure-hosted instances.
\n\nHope this helps!
\n\nIf you understand German and want to see it in action, check out my video on my Channel:
\n\n\n\n","Href":"https://blog.codeinside.eu/2023/03/23/first-steps-with-azure-openai-and-dotnet/","RawContent":null,"Thumbnail":null},{"Title":"How to fix: 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine","PublishedOn":"2023-03-18T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn our product we can interact with different datasources and one of these was a Microsoft Access DB connected via OLEDB
. This is really, really old, but it still works - except on one customer machine, where we had this issue:
'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine\n
If you face this issue, you need to install the provider from here.
\n\nBe aware: If you have a different error, you might need to install the newer provider - it is labeled as “2010 Redistributable”, but still works with all those fancy Office 365 apps out there.
\n\nImportant: You need to install the provider in the correct bit version, e.g. if you run under x64, install the x64.msi.
\n\nThe solution comes from this Stackoverflow question.
\n\nThe best tip from Stackoverflow was these PowerShell commands to check if the provider is there or not:
\n\n(New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION \n\nGet-OdbcDriver | select Name,Platform\n
This will return something like this:
\n\nPS C:\\Users\\muehsig> (New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION\n\nSOURCES_NAME SOURCES_DESCRIPTION\n------------ -------------------\nSQLOLEDB Microsoft OLE DB Provider for SQL Server\nMSDataShape MSDataShape\nMicrosoft.ACE.OLEDB.12.0 Microsoft Office 12.0 Access Database Engine OLE DB Provider\nMicrosoft.ACE.OLEDB.16.0 Microsoft Office 16.0 Access Database Engine OLE DB Provider\nADsDSOObject OLE DB Provider for Microsoft Directory Services\nWindows Search Data Source Microsoft OLE DB Provider for Search\nMSDASQL Microsoft OLE DB Provider for ODBC Drivers\nMSDASQL Enumerator Microsoft OLE DB Enumerator for ODBC Drivers\nSQLOLEDB Enumerator Microsoft OLE DB Enumerator for SQL Server\nMSDAOSP Microsoft OLE DB Simple Provider\n\n\nPS C:\\Users\\muehsig> Get-OdbcDriver | select Name,Platform\n\nName Platform\n---- --------\nDriver da Microsoft para arquivos texto (*.txt; *.csv) 32-bit\nDriver do Microsoft Access (*.mdb) 32-bit\nDriver do Microsoft dBase (*.dbf) 32-bit\nDriver do Microsoft Excel(*.xls) 32-bit\nDriver do Microsoft Paradox (*.db ) 32-bit\nMicrosoft Access Driver (*.mdb) 32-bit\nMicrosoft Access-Treiber (*.mdb) 32-bit\nMicrosoft dBase Driver (*.dbf) 32-bit\nMicrosoft dBase-Treiber (*.dbf) 32-bit\nMicrosoft Excel Driver (*.xls) 32-bit\nMicrosoft Excel-Treiber (*.xls) 32-bit\nMicrosoft ODBC for Oracle 32-bit\nMicrosoft Paradox Driver (*.db ) 32-bit\nMicrosoft Paradox-Treiber (*.db ) 32-bit\nMicrosoft Text Driver (*.txt; *.csv) 32-bit\nMicrosoft Text-Treiber (*.txt; *.csv) 32-bit\nSQL Server 32-bit\nODBC Driver 17 for SQL Server 32-bit\nSQL Server 64-bit\nODBC Driver 17 for SQL Server 64-bit\nMicrosoft Access Driver (*.mdb, *.accdb) 64-bit\nMicrosoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb) 64-bit\nMicrosoft Access Text Driver (*.txt, *.csv) 64-bit\n
Hope this helps! (And I hope you don’t need to deal with these ancient technologies for too long 😅)
\n","Href":"https://blog.codeinside.eu/2023/03/18/microsoft-ace-oledb-12-0-provider-is-not-registered/","RawContent":null,"Thumbnail":null},{"Title":"Resource type is not supported in this subscription","PublishedOn":"2023-03-11T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nI was playing around with some Visual Studio tooling and noticed this error during the creation of an “Azure Container Apps”-app:
\n\nResource type is not supported in this subscription
The solution is quite strange at first, but in the super-configurable world of Azure it makes sense: You need to activate the resource provider for this feature on your subscription. For Azure Container Apps
you need the Microsoft.ContainerRegistry
-resource provider registered:
It seems that you can create such resources via the Portal, but if you go via the API (which Visual Studio seems to do), the provider needs to be registered first.
\n\nSome resource providers are “enabled by default”, other providers need to be turned on manually. Check out this list of all resource providers and the related Azure services.
\n\nBe careful: I guess you should only enable the resource providers that you really need, otherwise your attack surface will get larger.
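If you prefer the az CLI over the Portal, the registration state can be checked and changed there as well (the namespace below is the one from above; the calls only run if the CLI is installed and you are logged in):

```shell
# Check and register a resource provider via the az CLI.
NS="Microsoft.ContainerRegistry"
echo "Resource provider: $NS"

if command -v az >/dev/null 2>&1; then
  # prints e.g. "Registered" or "NotRegistered"
  az provider show --namespace "$NS" --query registrationState -o tsv
  # registration runs asynchronously and can take a few minutes
  az provider register --namespace "$NS"
fi
```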
\n\nTo be honest: This was completely new to me - I have been doing Azure for ages and never had to deal with resource providers. Always learning ;)
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/03/11/resource-type-is-not-supported-in-this-subscription/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps Server 2022 Update","PublishedOn":"2023-02-15T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nYes I know - you can get everything from the cloud nowadays, but we are still using our OnPrem hardware and were running the “old” Azure DevOps Server 2020. \nThe Azure DevOps Server 2022 was released last December, so an update was due.
\n\nIf you are running an Azure DevOps Server 2020, the requirements for the new 2022 release are “more or less” the same, except for the following important parts:
\n\nThe last requirement was a surprise for me, because I thought the update would run smoothly, but the installer removed the previous version and I couldn’t update, because our SQL Server was still on SQL Server 2016. Fortunately we had a VM backup and could roll back to the previous version.
\n\nThe update process itself was straightforward: Download the installer and run it.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe screenshots are from two different sessions. If you look carefully on the clock you might see that the date is different, that is because of the SQL Server 2016 problem.
\n\nAs you can see - everything worked as expected, but after we updated the server the search, which is powered by ElasticSearch, was not working. The “ElasticSearch” Windows service just crashed on startup and I’m not a Java guy, so… we fixed it by removing the search feature and reinstalling it. \nWe tried to clean the cache, but it was still not working. After the reinstall of this feature the issue went away.
\n\nAzure DevOps Server 2022 is just a minor update (at least from a typical user perspective). The biggest new feature might be “Delivery Plans”, which are nice, but not a huge benefit for small teams. Check out the release notes.
\n\nA nice - nerdy - enhancement, and not mentioned in the release notes: “mermaid.js” is now supported in the Azure DevOps Wiki, yay!
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/02/15/azure-devops-server-2022-update/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core and React with Vite.js","PublishedOn":"2023-02-11T01:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn my previous post I showed a simple setup with ASP.NET Core & React. The React part was created with the “CRA” tooling, which is kind of problematic. The “new” state-of-the-art React tooling seems to be vite.js - so let’s take a look at how to use it.
\n\n\n\nStep 1: Create a “normal” ASP.NET Core project
\n\n(I like the ASP.NET Core MVC template, but feel free to use something else - same as in the other blogpost)
\n\n\n\nStep 2: Install vite.js and init the template
\n\nNow move to the root directory of your project with a shell and execute this:
\n\nnpm create vite@latest clientapp -- --template react-ts\n
This will scaffold the latest & greatest vite.js-based React app in a folder called clientapp
with the react-ts
template (React with TypeScript). Vite itself isn’t focused on React and supports many different frontend frameworks.
Step 3: Enable HTTPS in your vite.js
\n\nJust like in the “CRA” setup we need to make sure that the environment is served over HTTPS. In the “CRA” world we needed to copy two different files from the original ASP.NET Core & React template, but with vite.js there is a much simpler option available.
\n\nExecute the following command in the clientapp
directory:
npm install --save-dev vite-plugin-mkcert\n
Then in your vite.config.ts
use this config:
import { defineConfig } from 'vite'\nimport react from '@vitejs/plugin-react'\nimport mkcert from 'vite-plugin-mkcert'\n\n// https://vitejs.dev/config/\nexport default defineConfig({\n base: '/app',\n server: {\n https: true,\n port: 6363\n },\n plugins: [react(), mkcert()],\n})\n
Be aware: The base: '/app'
will be used as a sub-path.
The important part for the HTTPS setting is that we use the mkcert()
plugin and configure the server part with a port and set https
to true
.
Step 4: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package
\n\nSame as in the other blogpost, we need to add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package to glue the ASP.NET Core development and React world together. If you use .NET 7, then use the version 7.x.x, if you use .NET 6, use the version 6.x.x - etc.
\n\n\n\nStep 5: Enhance your Program.cs
\n\nBack to the Program.cs
- this is more or less the same as with the “CRA” setup:
Add the SpaStaticFiles
to the services collection like this in your Program.cs
- be aware that vite.js builds everything into a folder called dist
:
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n configuration.RootPath = \"clientapp/dist\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
Now we need to use the SpaServices like this:
\n\napp.MapControllerRoute(\n name: \"default\",\n pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/app\";\nif (app.Environment.IsDevelopment())\n{\n app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n {\n client.UseSpa(spa =>\n {\n spa.UseProxyToSpaDevelopmentServer(\"https://localhost:6363\");\n });\n });\n}\nelse\n{\n app.Map(new PathString(spaPath), client =>\n {\n client.UseSpaStaticFiles();\n client.UseSpa(spa => {\n spa.Options.SourcePath = \"clientapp\";\n\n // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n // .js and other static resources are still cached by the browser\n spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n {\n OnPrepareResponse = ctx =>\n {\n ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n headers.CacheControl = new CacheControlHeaderValue\n {\n NoCache = true,\n NoStore = true,\n MustRevalidate = true\n };\n }\n };\n });\n });\n}\n// ↑ these lines ↑\n\napp.Run();\n
Just like in the original blogpost: In development mode we use the UseProxyToSpaDevelopmentServer
-method to proxy all requests to the vite.js dev server. In the real world, we will use the files from the dist
folder.
Step 6: Invoke npm run build during publish
\n\nThe last step completes the setup. We want to build the ASP.NET Core app and the React app when we use dotnet publish
:
Add this to your .csproj
-file and it should work:
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)dist\\**\" /> <!-- Changed to dist! -->\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
You should now be able to use Visual Studio Code (or something similar) and start the frontend project with npm run dev
. If you open a browser and go to https://127.0.0.1:6363/app
you should see something like this:
Now start the ASP.NET Core app and go to /app
and it should look like this:
Ok - this looks broken, right? Well - this is a more or less “known” problem that can be easily avoided. If we import the logo from the assets folder it works as expected and shouldn’t be a general problem:
\n\n\n\nThe sample code can be found here.
\n\nI made a video about this topic (in German, sorry :-/) as well - feel free to subscribe ;)
\n\n\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/02/11/aspnet-core-react-with-vitejs/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core & React together","PublishedOn":"2023-01-25T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nVisual Studio (at least VS 2019 and the newer 2022) ships with an ASP.NET Core React template, which is “ok-ish”, but has some really bad problems:
\n\nThe React part of this template is scaffolded via “CRA” (which seems to be problematic as well, but is not the point of this post) and uses JavaScript instead of TypeScript.\nAnother huge pain point (from my perspective) is that the template uses some special configuration to just host the React part for users - if you want to mix in some “MVC”/”Razor” stuff, you need to change some of this “magic”.
\n\nThe good parts:
\n\nBoth worlds can live together: During development time the ASP.NET Core stuff is hosted via Kestrel and the React part is hosted under the WebPack Development server. The lovely hot reload is working as expected and is really powerful.\nIf you are doing a release build, the project will take care of the npm-magic.
\n\nBut because the “bad problems” outweigh the benefits, we will try to integrate a typical React app into a “normal” ASP.NET Core app.
\n\nStep 1: Create a “normal” ASP.NET Core project
\n\n(I like the ASP.NET Core MVC template, but feel free to use something else)
\n\n\n\nStep 2: Create a react app inside the ASP.NET Core project
\n\n(For this blogpost I use the “Create React App”-approach, but you can use whatever you like)
\n\nExecute this in your ASP.NET Core project (node & npm must be installed!):
\n\nnpx create-react-app clientapp --template typescript\n
Step 3: Copy some stuff from the React template
\n\nThe react template ships with some scripts and settings that we want to preserve:
\n\n\n\nThe aspnetcore-https.js
and aspnetcore-react.js
files are needed to set up the ASP.NET Core SSL dev certificate for the WebPack Dev Server. \nYou should also copy the .env
& .env.development
files into the root of your clientapp
-folder!
The .env
file only has this setting:
BROWSER=none\n
A more important setting is in the .env.development
file (change the port to something different!):
PORT=3333\nHTTPS=true\n
The port number 3333
and the https=true
will be important later, otherwise our setup will not work.
Also, add this line to the .env
-file (in theory you can use any name - for this sample we keep it spaApp
):
PUBLIC_URL=/spaApp\n
Step 4: Add the prestart to the package.json
\n\nIn your project open the package.json
and add the prestart
-line like this:
\"scripts\": {\n \"prestart\": \"node aspnetcore-https && node aspnetcore-react\",\n \"start\": \"react-scripts start\",\n \"build\": \"react-scripts build\",\n \"test\": \"react-scripts test\",\n \"eject\": \"react-scripts eject\"\n },\n
Step 5: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package
\n\n\n\nWe need the Microsoft.AspNetCore.SpaServices.Extensions NuGet-package. If you use .NET 7, then use the version 7.x.x, if you use .NET 6, use the version 6.x.x - etc.
\n\nStep 6: Enhance your Program.cs
\n\nAdd the SpaStaticFiles
to the services collection like this in your Program.cs
:
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n configuration.RootPath = \"clientapp/build\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
Now we need to use the SpaServices like this:
\n\napp.MapControllerRoute(\n name: \"default\",\n pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/spaApp\";\nif (app.Environment.IsDevelopment())\n{\n app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n {\n client.UseSpa(spa =>\n {\n spa.UseProxyToSpaDevelopmentServer(\"https://localhost:3333\");\n });\n });\n}\nelse\n{\n app.Map(new PathString(spaPath), client =>\n {\n client.UseSpaStaticFiles();\n client.UseSpa(spa => {\n spa.Options.SourcePath = \"clientapp\";\n\n // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n // .js and other static resources are still cached by the browser\n spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n {\n OnPrepareResponse = ctx =>\n {\n ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n headers.CacheControl = new CacheControlHeaderValue\n {\n NoCache = true,\n NoStore = true,\n MustRevalidate = true\n };\n }\n };\n });\n });\n}\n// ↑ these lines ↑\n\napp.Run();\n
As you can see, we run in two different modes. \nIn our development world we just use the UseProxyToSpaDevelopmentServer
-method to proxy all requests that point to spaApp
to the React WebPack DevServer (or something else). The huge benefit is that you can use the React ecosystem with all its tools. Normally we use Visual Studio Code to run our React frontend and use the ASP.NET Core app as the “backend for frontend”.\nIn production we use the build artefacts of the React build and make sure that they are not cached. To make the deployment easier, we need to invoke npm run build
when we publish this ASP.NET Core app.
Step 7: Invoke npm run build during publish
\n\nAdd this to your .csproj
-file and it should work:
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)build\\**\" />\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
Be aware that these instruction are copied from the original ASP.NET Core React template and are slightly modified, otherwise the path wouldn’t match.
\n\nWith this setup you can add any SPA you like to your “normal” ASP.NET Core project.
\n\nIf everything works as expected you should be able to start the React app in Visual Studio Code like this:
\n\n\n\nBe aware of the https://localhost:3333/spaApp. The port and the name is important for our sample!
\n\nStart your hosting ASP.NET Core app in Visual Studio (or in any IDE that you like) and all requests that point to spaApp
use the WebPack DevServer in the background:
With this setup you can mix client & server side styles as you like - mission accomplished, and you can use any client setup (CRA or anything else) that you would like.
\n\nThe code (with slightly modified values, e.g. another port) can be found here. \nBe aware that npm i
needs to be run first.
I uploaded a video on my YouTube channel (in German) about this setup:
\n\n\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/01/25/aspnet-core-and-react/","RawContent":null,"Thumbnail":null},{"Title":"Your URL is flagged as malware/phishing, now what?","PublishedOn":"2023-01-04T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nOn my last day in 2022 - Friday, December 23 - I received a support ticket from one customer that our software seemed to be offline and it looked like our servers were not responding. I checked our monitoring and the server side of the customer and everything was fine. \nMy first thought: Maybe a misconfiguration on the customer side, but after a remote support session with the customer I saw that it “should work”, yet something in the customer's network blocked the requests to our services.\nNext thought: Firewall or proxy stuff. Always nasty, but we are just using port 443, so nothing too special.
\n\nAfter a while I received a phone call from the customer's firewall team and they discovered the problem: They are using a firewall solution from “Check Point” and our domain was flagged as “phishing”/”malware”. What the… \nThey even created an exception so that Check Point wouldn’t block our requests, but the next problem occurred: The customer's “Windows Defender for Office 365” had the same “flag” for our domain, so they reverted everything, because they didn’t want to change their settings too much.
\n\n\n\nBe aware that from our end everything was working “fine”: I could access the customer's services and our Windows Defender didn’t have any problems with this domain.
\n\nSomehow our domain was flagged as malware/phishing and we needed to get this false positive listing changed. I guess there are tons of services that “track” “bad” websites and maybe they are all connected somehow. From this incident I can only suggest:
\n\nIf you have trouble with Check Point:
\n\nGo to “URLCAT”, register an account and try to change the category of your domain. After you submit the “change request” you will get an email like this:
\n\nThank you for submitting your category change request.\nWe will process your request and notify you by email (to: xxx.xxx@xxx.com ).\nYou can follow the status of your request on this page.\nYour request details\nReference ID: [GUID]\nURL: https://[domain].com\nSuggested Categories: Computers / Internet,Business / Economy\nComment: [Given comment]\n
After ~1-2 days the change was done. Not sure if this is automated or not, but it was during Christmas.
\n\nIf you have trouble with Windows Defender:
\n\nGo to “Report submission” in your Microsoft 365 Defender setting (you will need an account with special permissions, e.g. global admin) and add the URL as “Not junk”.
\n\n\n\nI’m not really sure if this helped or not, because we didn’t had any issues with the domain itself and I’m not sure if those “false positive” tickets bubbles up into a “global defender catalog” or if this only affects our own tenant.
\n\nAnyway - after those tickets were “resolved” by Check Point / Microsoft the problem on the customer side disappeared and everyone was happy. This was my first experience with such an “false positive malware report”. I’m not sure how we ended up on such a list and why only one customer was affected.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/01/04/checkpoint-and-defender-false-positive-url/","RawContent":null,"Thumbnail":null},{"Title":"SQLLocalDb update","PublishedOn":"2022-12-03T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nSqlLocalDb is a “developer” SQL server, without the “full” SQL Server (Express) installation. If you just develop on your machine and don’t want to run a “full blown” SQL Server, this is the tooling that you might need.
\n\nFrom the Microsoft Docs:
\n\n\n\n\nMicrosoft SQL Server Express LocalDB is a feature of SQL Server Express targeted to developers. It is available on SQL Server Express with Advanced Services.
\n\nLocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. Once LocalDB is installed, you can initiate a connection using a special connection string. When connecting, the necessary SQL Server infrastructure is automatically created and started, enabling the application to use the database without complex configuration tasks. Developer Tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server.
\n
(I’m not really sure, how I ended up on this problem, but I after I solved the problem I did it on my “To Blog”-bucket list)
\n\nFrom time to time there is a new SQLLocalDb version, but to upgrade an existing installation is a bit “weird”.
\n\nIf you have installed an older SQLLocalDb version you can manage it via sqllocaldb
. If you want to update you must delete the “current” MSSQLLocalDB in the first place.
To to this use:
\n\nsqllocaldb stop MSSQLLocalDB\nsqllocaldb delete MSSQLLocalDB\n
Then download the newest version from Microsoft. \nIf you choose “Download Media” you should see something like this:
\n\n\n\nDownload it, run it and restart your PC, after that you should be able to connect to the SQLLocalDb.
\n\nWe solved this issue with help of this blogpost.
\n\nHope this helps! (and I can remove it now from my bucket list \\o/ )
\n","Href":"https://blog.codeinside.eu/2022/12/03/sqllocaldb-update/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps & Azure Service Connection","PublishedOn":"2022-10-04T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nToday I needed to setup a new release pipeline on our Azure DevOps Server installation to deploy some stuff automatically to Azure. The UI (at least on the Azure DevOps Server 2020 (!)) is not really clear about how to connect those two worlds, and thats why I’m writing this short blogpost.
\n\nFirst - under project settings - add a new service connection. Use the Azure Resource Manager
-service. Now you should see something like this:
Be aware: You will need to register app inside your Azure AD and need permissions to setup. If you are not able to follow these instructions, you might need to talk to your Azure subscription owner.
\n\nSubscription id:
\n\nCopy here the id of your subscription. This can be found in the subscription details:
\n\n\n\nKeep this tab open, because we need it later!
\n\nService prinipal id/key & tenant id:
\n\nNow this wording about “Service principal” is technically correct, but really confusing if your are not familar with Azure AD. A “Service prinipal” is like a “service user”/”app” that you need to register to use it.\nThe easiest route is to create an app via the Bash Azure CLI:
\n\naz ad sp create-for-rbac --name DevOpsPipeline\n
If this command succeeds you should see something like this:
\n\n{\n \"appId\": \"[...GUID..]\",\n \"displayName\": \"DevOpsPipeline\",\n \"password\": \"[...PASSWORD...]\",\n \"tenant\": \"[...Tenant GUID...]\"\n}\n
This creates an “Serivce principal” with a random password inside your Azure AD. The next step is to give this “Service principal” a role on your subscription, because it has currently no permissions to do anything (e.g. deploy a service etc.).
\n\nGo to the subscription details page and then to Access control (IAM). There you can add your “DevOpsPipeline”-App as “Contributor” (Be aware that this is a “powerful role”!).
\n\nAfter that use the \"appId\": \"[...GUID..]\"
from the command as Service Principal Id. \nUse the \"password\": \"[...PASSWORD...]\"
as Service principal key and the \"tenant\": \"[...Tenant GUID...]\"
for the tenant id.
Now you should be able to “Verify” this connection and it should work.
\n\nLinks:\nThis blogpost helped me a lot. Here you can find the official documentation.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/10/04/azure-devops-azure-service-connection/","RawContent":null,"Thumbnail":null},{"Title":"'error MSB8011: Failed to register output.' & UTF8-BOM files","PublishedOn":"2022-08-30T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nBe aware: I’m not a C++ developer and this might be an “obvious” problem, but it took me a while to resolve this issue.
\n\nIn our product we have very few C++ projects. We use these projects for very special Microsoft Office COM stuff and because of COM we need to register some components during the build. Everything worked as expected, but we renamed a few files and our build broke with:
\n\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2302,5): warning MSB3075: The command \"regsvr32 /s \"C:/BuildAgentV3_1/_work/67/s\\_Artifacts\\_ReleaseParts\\XXX.Client.Addin.x64-Shims\\Common\\XXX.Common.Shim.dll\"\" exited with code 5. Please verify that you have sufficient rights to run this command. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2314,5): error MSB8011: Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\n\n(xxx = redacted)\n
The crazy part was: Using an older version of our project just worked as expected, but all changes were “fine” from my point of view.
\n\nAfter many, many attempts I remembered that our diff tool doesn’t show us everything - so I checked the file encodings: UTF8-BOM
Somehow if you have a UTF8-BOM encoded file that your C++ project uses to register COM stuff it will fail. I changed the encoding and to UTF8
and everyting worked as expected.
What a day… lessons learned: Be aware of your file encodings.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/08/30/error-msb8011-failed-to-register-output-and-utf8bom/","RawContent":null,"Thumbnail":null},{"Title":"Which .NET Framework Version is installed on my machine?","PublishedOn":"2022-08-29T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIf you need to know which .NET Framework Version (the “legacy” .NET Framework) is installed on your machine try this handy oneliner:
\n\nGet-ItemProperty \"HKLM:SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Full\"\n
Result:
\n\nCBS : 1\nInstall : 1\nInstallPath : C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319\\\nRelease : 528372\nServicing : 0\nTargetVersion : 4.0.0\nVersion : 4.8.04084\nPSPath : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework\n Setup\\NDP\\v4\\Full\nPSParentPath : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\nPSChildName : Full\nPSDrive : HKLM\nPSProvider : Microsoft.PowerShell.Core\\Registry\n
The version should give you more then enough information.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/08/29/which-dotnet-version-is-installed-via-powershell/","RawContent":null,"Thumbnail":null},{"Title":"How to run a Azure App Service WebJob with parameters","PublishedOn":"2022-07-22T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe are using WebJobs in our Azure App Service deployment and they are pretty “easy” for the most part. Just register a WebJobs or deploy your .exe/.bat/.ps1/...
under the \\site\\wwwroot\\app_data\\Jobs\\triggered
folder and it should execute as described in the settings.job
.
If you put any executable in this WebJob folder, it will be executed as planned.
\n\nProblem: Parameters
\n\nIf you have a my-job.exe
, then this will be invoked from the runtime. But what if you need to invoke it with a parameter like my-job.exe -param \"test\"
?
Solution: run.cmd
\n\nThe WebJob environment is “greedy” and will search for a run.cmd
(or run.exe
) and if this is found, it will be executed and it doesn’t matter if you have any other .exe
files there.\nStick to the run.cmd
and use this to invoke your actual executable like this:
echo \"Invoke my-job.exe with parameters - Start\"\n\n..\\MyJob\\my-job.exe -param \"test\"\n\necho \"Invoke my-job.exe with parameters - Done\"\n
Be aware, that the path must “match”. We use this run.cmd
-approach in combination with the is_in_place
-option (see here) and are happy with the results).
A more detailed explanation can be found here.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/07/22/how-to-run-a-azure-appservice-webjob-with-parameters/","RawContent":null,"Thumbnail":null},{"Title":"How to use IE proxy settings with HttpClient","PublishedOn":"2022-03-28T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nInternet Explorer is - mostly - dead, but some weird settings are still around and “attached” to the old world, at least on Windows 10. \nIf your system administrator uses some advanced proxy settings (e.g. a PAC-file), those will be attached to the users IE setting.
\n\nIf you want to use this with a HttpClient you need to code something like this:
\n\n string target = \"https://my-target.local\";\n var targetUri = new Uri(target);\n var proxyAddressForThisUri = WebRequest.GetSystemWebProxy().GetProxy(targetUri);\n if (proxyAddressForThisUri == targetUri)\n {\n // no proxy needed in this case\n _httpClient = new HttpClient();\n }\n else\n {\n // proxy needed\n _httpClient = new HttpClient(new HttpClientHandler() { Proxy = new WebProxy(proxyAddressForThisUri) { UseDefaultCredentials = true } });\n }\n
The GetSystemWebProxy() gives access to the system proxy settings from the current user. Then we can query, what proxy is needed for the target. If the result is the same address as the target, then no proxy is needed. Otherwise, we inject a new WebProxy for this address.
\n\nHope this helps!
\n\nBe aware: Creating new HttpClients is (at least in a server environment) not recommended. Try to reuse the same HttpClient instance!
\n\nAlso note: The proxy setting in Windows 11 are now built into the system settings, but the API still works :)
\n\n\n","Href":"https://blog.codeinside.eu/2022/03/28/how-to-use-ie-proxy-settings-with-httpclient/","RawContent":null,"Thumbnail":null},{"Title":"Redirect to HTTPS with a simple web.config rule","PublishedOn":"2022-01-05T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThe scenario is easy: My website is hosted in an IIS and would like to redirect all incomming HTTP traffic to the HTTPS counterpart.
\n\nThis is your solution - a “simple” rule:
\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<configuration>\n <system.webServer>\n <rewrite>\n <rules>\n <rule name=\"Redirect to https\" stopProcessing=\"true\">\n <match url=\".*\" />\n <conditions logicalGrouping=\"MatchAny\">\n <add input=\"{HTTPS}\" pattern=\"off\" />\n </conditions>\n <action type=\"Redirect\" url=\"https://{HTTP_HOST}{REQUEST_URI}\" redirectType=\"Found\" />\n </rule>\n </rules>\n </rewrite>\n </system.webServer>\n</configuration>\n
We used this in the past to setup a “catch all” web site in an IIS that redirects all incomming HTTP traffic.\nThe actual web applications had only the HTTPS binding in place.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/01/05/redirect-to-https-with-a-simple-webconfig-rule/","RawContent":null,"Thumbnail":null},{"Title":"Select random rows","PublishedOn":"2021-12-06T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nLet’s say we have a SQL table and want to retrieve 10 rows randomly - how would you do that? Although I have been working with SQL for x years, I have never encountered that problem. The solution however is quite “simple” (at least if you don’t be picky how we define “randomness” and if you try this on millions of rows):
\n\nThe most boring way is to use the ORDER BY NEWID()
clause:
SELECT TOP 10 FROM Products ORDER BY NEWID()\n
This works, but if you do that on “large” datasets you might hit performance problems (e.g. more on that here)
\n\nThe SQL Server implements the Tablesample clause
which was new to me. It seems to perform much bettern then the ORDER BY NEWID()
clause, but behaves a bit weird. With this clause you can specify the “sample” from a table. The size of the sample can be specified as PERCENT
or ROWS
(which are then converted to percent internally).
Syntax:
\n\nSELECT TOP 10 FROM Products TABLESAMPLE (25 Percent)\nSELECT TOP 10 FROM Products TABLESAMPLE (100 ROWS)\n
The weird part is that the given number might not match the number of rows of your result. You might got more or less results and if our tablesample is too small you might even got nothing in return. There are some clever ways to work around this (e.g. using the TOP 100
statement with a much larger tablesample clause to get a guaranteed result set), but it feels “strange”.\nIf you hit limitations with the first solution you might want to read more on this blog or in the Microsoft Docs.
Of course there is a great Stackoverflow thread with even wilder solutions.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2021/12/06/select-random-rows/","RawContent":null,"Thumbnail":null},{"Title":"SQL collation problems","PublishedOn":"2021-11-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThis week I deployed a new feature and tried it on different SQL databases and was a bit suprised that on one database this error message came up:
\n\nCannot resolve the collation conflict between \"Latin1_General_CI_AS\" and \"SQL_Latin1_General_CP1_CI_AS\" in the equal to operation.\n
This was strange, because - at least in theory - all databases have the same schema and I was sure that each database had the same collation setting.
\n\nWell… my theory was wrong and this SQL statement told me that “some” columns had a different collation.
\n\nselect sc.name, sc.collation_name from sys.columns sc\ninner join sys.tables t on sc.object_id=t.object_id\nwhere t.name='TABLENAME'\n
As it turns out, some columns had the collation Latin1_General_CI_AS
and some had SQL_Latin1_General_CP1_CI_AS
. I’m still not sure why, but I needed to do something.
To change the collation you can execute something like this:
\n\nALTER TABLE MyTable\nALTER COLUMN [MyColumn] NVARCHAR(200) COLLATE SQL_Latin1_General_CP1_CI_AS\n
Unfortunately there are restrictions and you can’t change the collation if the column is referenced by any one of the following:
\n\nBe aware: If you are not in control of the collation or if the collation is “fine” and you want to do this operation anyway, there might be a way to specify the collation in the SQL query.
\n\nFor more information you might want to check out this Microsoft Docs “Set or Change the Column Collation”
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2021/11/24/sql-collations-problem/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Build 2021 session recommendations","PublishedOn":"2021-09-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nTo be fair: Microsoft Build 2021 was some month ago, but the content might still be relevant today. Sooo… it took me a while, but here is a list of sessions that I found interesting. Some sessions are “better” and some “lighter”, the order doesn’t reflect that - that was just the order I watched those videos.
\n\nThe headline has a link to the video and below are some notes.
\n\n](https://mybuild.microsoft.com/sessions/2915b9b6-6b45-430a-9df7-2671318e2161?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/b7d536c1-515f-476a-83d2-85b6cf14577a?source=sessions)
\n\nhttps://mybuild.microsoft.com/sessions/512470be-15d3-4b50-b180-6532c8153931?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/10930f2e-ad9c-460b-b91d-844d17a5a875?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/76ebac39-517d-44da-a58e-df4193b5efa9?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/08538f9b-e562-4d71-8b42-d240c3966ef0?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/70d379f4-1173-4941-b389-8796152ec7b8?source=sessions)
\n\nHope this helps.
\n","Href":"https://blog.codeinside.eu/2021/09/24/build-2021-recommendation/","RawContent":null,"Thumbnail":null},{"Title":"Today I learned (sort of) 'fltmc' to inspect the IO request pipeline of Windows","PublishedOn":"2021-05-30T22:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThe headline is obviously a big lie, because I followed this twitter conversation last year, but it’s still interesting to me and I wanted to write it somewhere down.
\n\nStarting point was that Bruce Dawson (Google programmer) noticed, that building Chrome on Windows is slow for various reasons:
\n\n\n\n\nBased on some twitter discussion about source-file length and build times two months ago I wrote a blog post. It's got real data based on Chromium's build, and includes animations of build-time improvements:https://t.co/lsLH8BNe48
— Bruce Dawson (Antifa) (@BruceDawson0xB) March 31, 2020
Trentent Tye told him to disable the “filter driver”:
\n\n\n\n\ndisabling the filter driver makes it dead dead dead. Might be worth testing with the number and sizes of files you are dealing with. Even half a millisecond of processing time adds up when it runs against millions and millions of files.
— Trentent Tye (@TrententTye) April 1, 2020
If you have never heard of a “filter driver” (like me :)), you might want to take a look here.
\n\nTo see the loaded filter driver on your machine try out this: Run fltmc
(fltmc.exe) as admin.
Description:
\n\n\n\n\nEach filter in the list sit in a pipe through which all IO requests bubble down and up. They see all IO requests, but ignore most. Ever wondered how Windows offers encrypted files, OneDrive/GDrive/DB file sync, storage quotas, system file protection, and, yes, anti-malware? ;)
— Rich Turner (@richturn_ms) April 2, 2020
This makes more or less sense to me. I’m not really sure what to do with that information, but it’s cool (nerd cool, but anyway :)).
\n","Href":"https://blog.codeinside.eu/2021/05/30/fltmc-inspect-the-io-request-pipeline-of-windows/","RawContent":null,"Thumbnail":null},{"Title":"How to self host Google Fonts","PublishedOn":"2021-04-28T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nGoogle Fonts are really nice and widely used. Typically Google Fonts consistes of the actual font file (e.g. woff, ttf, eot etc.) and some CSS, which points to those font files.
\n\nIn one of our applications, we used a HTML/CSS/JS - Bootstrap like theme and the theme linked some Google Fonts. The problem was, that we wanted to self host everything.
\n\nAfter some research we discovered this tool: Google-Web-Fonts-Helper
\n\n\n\nPick your font, select your preferred CSS option (e.g. if you need to support older browsers etc.) and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)
\n\nThe project site is on GitHub.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2021/04/28/how-to-self-host-google-fonts/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Graph: Read user profile and group memberships","PublishedOn":"2021-01-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn our application we have a background service, that “syncs” user data and group membership information to our database from the Microsoft Graph.
\n\nThe permission model:
\n\nProgramming against the Microsoft Graph is quite easy. There are many SDKS available, but understanding the permission model is hard.
\n\n‘Directory.Read.All’ and ‘User.Read.All’:
\n\nInitially we only synced the “basic” user data to our database, but then some customers wanted to reuse some other data already stored in the graph. Our app required the ‘Directory.Read.All’ permission, because we thought that this would be the “highest” permission - this is wrong!
\n\nIf you need “directory” information, e.g. memberships, the Directory.Read.All
or Group.Read.All
is a good starting point. But if you want to load specific user data, you might need to have the User.Read.All
permission as well.
Hope this helps!
\n","Href":"https://blog.codeinside.eu/2021/01/31/microsoft-graph-read-user-profile-and-group-memberships/","RawContent":null,"Thumbnail":null},{"Title":"How to get all distribution lists of a user with a single LDAP query","PublishedOn":"2020-12-31T00:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn 2007 I wrote a blogpost how easy it is to get all “groups” of a given user via the tokenGroup attribute.
\n\nLast month I had the task to check why “distribution list memberships” are not part of the result.
\n\nThe reason is simple:
\n\nA pure distribution list (not security enabled) is not a security group and only security groups are part of the “tokenGroup” attribute.
\n\nAfter some thoughts and discussions we agreed, that it would be good if we could enhance our function and treat distribution lists like security groups.
\n\nThe get all groups of a given user might be seen as trivial, but the problem is, that groups can contain other groups. \nAs always, there are a couple of ways to get a “full flat” list of all group memberships.
\n\nA stupid way would be to load all groups in a recrusive function - this might work, but will result in a flood of requests.
\n\nA clever way would be to write a good LDAP query and let the Active Directory do the heavy lifting for us, right?
\n\nI found some sample code online with a very strange LDAP query and it turns out:\nThere is a “magic” ldap query called “LDAP_MATCHING_RULE_IN_CHAIN” and it does everything we are looking for:
\n\nvar getGroupsFilterForDn = $\"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:= {distinguishedName}))\";\n using (var dirSearch = CreateDirectorySearcher(getGroupsFilterForDn))\n {\n using (var results = dirSearch.FindAll())\n {\n foreach (SearchResult result in results)\n {\n if (result.Properties.Contains(\"name\") && result.Properties.Contains(\"objectSid\") && result.Properties.Contains(\"groupType\"))\n groups.Add(new GroupResult() { Name = (string)result.Properties[\"name\"][0], GroupType = (int)result.Properties[\"groupType\"][0], ObjectSid = new SecurityIdentifier((byte[])result.Properties[\"objectSid\"][0], 0).ToString() });\n }\n }\n }\n
With a given distinguishedName of the target user, we can load all distribution and security groups (see below…) transitive!
\n\nDuring our testing we found some minor differences between the LDAP_MATCHING_RULE_IN_CHAIN and the tokenGroups approach. Some “system-level” security groups were missing with the LDAP_MATCHING_RULE_IN_CHAIN way. In our production code we use a combination of those two approaches and it seems to work.
\n\nA full demo code how to get all distribution lists for a user can be found on GitHub.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/12/31/how-get-all-distribution-lists-of-a-user-with-a-single-ldap-query/","RawContent":null,"Thumbnail":null},{"Title":"Update AzureDevOps Server 2019 to AzureDevOps Server 2019 Update 1","PublishedOn":"2020-11-30T18:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe did this update in May 2020, but I forgot to publish the blogpost… so here we are
\n\nLast year we updated to Azure DevOps Server 2019 and it went more or less smooth.
\n\nIn May we decided to update to the “newest” release at that time: Azure DevOps Server 2019 Update 1.1
\n\nOur AzureDevOps Server was running on a “new” Windows Server 2019 and everything was still kind of newish - so we just needed to update the AzureDevOps Server app.
\n\nThe actual update was really easy, but we had some issues after the installation.
\n\nWe had some issues with our Build Agents - they couldn’t connect to the AzureDevOps Server:
\n\nTF400813: Resource not available for anonymous access\n
As a first “workaround” (and a nice enhancement) we switched from HTTP to HTTPS internally, but this didn’t solved the problem.
\n\nThe real reason was, that our “Azure DevOps Service User” didn’t had the required write permissions for this folder:
\n\nC:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys\n
The connection issue went away, but now we introduced another problem: Our SSL Certificate was “self signed” (from our Domain Controller), so we need to register the agents like this:
\n\n.\\config.cmd --gituseschannel --url https://.../tfs/ --auth Integrated --pool Default-VS2019 --replace --work _work\n
The important parameter is -gituseschannel, which is needed when dealing with “self signed, but Domain ‘trusted’“-certificates.
\n\nWith this setting everything seemed to work as expected.
\n\nOnly node.js projects or toolings were “problematic”, because node.js itself don’t use the Windows Certificate Store.
\n\nTo resolve this, the root certificate from our Domain controller must be stored on the agent.
\n\n [Environment]::SetEnvironmentVariable(\"NODE_EXTRA_CA_CERTS\", \"C:\\SSLCert\\root-CA.pem\", \"Machine\") \n
The update itself was easy, but it took us some hours to configure our Build Agents. After the initial hiccup it went smooth from there - no issues and we are ready for the next update, which is already released.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/11/30/update-onprem-azuredevops-server-2019-to-azuredevops-server-2019-update1/","RawContent":null,"Thumbnail":null},{"Title":"DllRegisterServer 0x80020009 Error","PublishedOn":"2020-10-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nLast week I had a very strange issue and the solution was really “easy”, but took me a while.
\n\nFor our products we build Office COM Addins with a C++ based “Shim” that boots up our .NET code (e.g. something like this.\nAs the nature of COM: It requires some pretty dumb registry entries to work and in theory our toolchain should “build” and automatically “register” the output.
\n\nThe registration process just failed with a error message like that:
\n\nThe module xxx.dll was loaded but the call to DllRegisterServer failed with error code 0x80020009\n
After some research you will find some very old stuff or only some general advises like in this Stackoverflow.com question, e.g. “run it as administrator”.
\n\nLuckily we had another project were we use the same approach and this worked without any issues. After comparing the files I notices some subtile differences: The file encoding was different!
\n\nIn my failing project some C++ files were encoded with UTF8-BOM. I changed everything to UTF8 and after this change it worked.
\n\nMy reaction:
\n\n(╯°□°)╯︵ ┻━┻\n
I’m not a C++ dev and I’m not even sure why some files had the wrong encoding in the first place. It “worked” - at least Visual Studio 2019 was able to build the stuff, but register it with “regsrv32” just failed.
\n\nI needed some hours to figure that out.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/10/31/dllregisterserver-0x80020009-error/","RawContent":null,"Thumbnail":null},{"Title":"How to share an Azure subscription in a team","PublishedOn":"2020-09-29T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe at Sevitec are moving more and more workloads for us or our customers to Azure.
\n\nSo the basic question needs an answer:
\n\nHow can a team share an Azure subscription?
\n\nBe aware: This approach works for us. There might be better options. If we do something stupid, just tell me in the comments or via email - help is appreciated.
\n\nWe have a “company directory” with a fully configured Azure Active Directory (incl. User sync between our OnPrem system, Office 365 licenses etc.).
\n\nOur rule of thumb is: We create for each product team a individual directory and all team members are invited in the new directory.
\n\nKeep in mind: A directory itself costs you nothing but might help you to keep things manageable.
\n\n\n\nThis step might be optional, but all team members - except the “Administrator” - have the same rights and permissions in our company. To keep things simple, we created a group with all team members.
\n\n\n\nNow create a subscription. The typical “Pay-as-you-go” offer will work. Be aware that the user who creates the subscription is initially setup as the Administrator.
\n\n\n\nThis is the most important step:
\n\nYou need to grant the individual users or the group (from step 2) the “Contributor” role for this subscription via the “Access control (IAM)”.\nThe hard part is to understand how those “Role assignment” affect the subscription. I’m not even sure if the “Contributor” is the best fit, but it works for us.
\n\n\n\nI’m not really sure why such a basic concept is labeled so poorly but you really need to pick the correct role assignment and the other person should be able to use the subscription.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/09/29/how-to-share-an-azure-subscription-in-a-team/","RawContent":null,"Thumbnail":null},{"Title":"How to run a legacy WCF .svc Service on Azure AppService","PublishedOn":"2020-08-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nLast month we wanted to run good old WCF powered service on Azures “App Service”.
\n\nIf you are not familiar with WCF: Good! For the interested ones: WCF is or was a framework to build mostly SOAP based services in the .NET Framework 3.0 timeframe. Some parts where “good”, but most developers would call it a complex monster.
\n\nEven in the glory days of WCF I tried to avoid it at all cost, but unfortunately I need to maintain a WCF based service.
\n\nFor the curious: The project template and the tech is still there. Search for “WCF”.
\n\n\n\nThe template will produce something like that:
\n\nThe actual “service endpoint” is the Service1.svc
file.
Let’s assume we have a application with a .svc endpoint. In theory we can deploy this application to a standard Windows Server/IIS without major problems.
\n\nNow we try to deploy this very same application to Azure AppService and this is the result after we invoke the service from the browser:
\n\n\"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.\" (HTTP Response was 404)\n
Strange… very strange. In theory a blank HTTP 400 should appear, but not a HTTP 404. The service itself was not “triggered”, because we had some logging in place, but the request didn’t get to the actual service.
\n\nAfter hours of debugging, testing and googling around I created a new “blank” WCF service from the Visual Studio template and got the same error.
\n\nThe good news: It was not just my code, something was blocking the request.
\n\nAfter some more hours I found a helpful switch in the Azure Portal and activated the “Failed Request tracing” feature (yeah… I could have found it sooner) and I discovered this:
\n\n\n\nMy initial thoughts were correct: The request was blocked. It was treated as “static content” and the actual WCF module was not mapped to the .svc extension.
\n\nTo “re-map” the .svc
extension to the correct handler I needed to add this to the web.config
:
...\n<system.webServer>\n ...\n\t<handlers>\n\t\t<remove name=\"svc-integrated\" />\n\t\t<add name=\"svc-integrated\" path=\"*.svc\" verb=\"*\" type=\"System.ServiceModel.Activation.HttpHandler\" resourceType=\"File\" preCondition=\"integratedMode\" />\n\t</handlers>\n</system.webServer>\n...\n\n
With this configuration everything worked as expected on Azure AppService.
\n\nBe aware:
\n\nI’m really not 100% sure why this is needed in the first place. I’m also not 100% sure if the name svc-integrated
is correct or important.
This blogpost is a result of these tweets.
\n\nThat was a tough ride… Hope this helps!
\n","Href":"https://blog.codeinside.eu/2020/08/31/how-to-run-a-legacy-wcf-svc-service-on-azure-app-service/","RawContent":null,"Thumbnail":null},{"Title":"EWS, Exchange Online and OAuth with a Service Account","PublishedOn":"2020-07-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThis week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.
\n\nBut here is the full story:
\n\nWe wanted to access contact information via a web service from the organization, just like the traditional “Global Address List” in Exchange/Outlook. We knew that EWS was an option for the OnPrem Exchange, but what about Exchange Online?
\n\nThe big problem: Authentication is tricky. We wanted to use a “traditional” Service Account approach (think of username/password). Unfortunately the “basic auth” way will be blocked in the near future because of security concerns (makes sense TBH). There is an alternative approach available, but at first it seemed not to work as we would like.
\n\nSo… what now?
\n\nThe Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and OnPrem Exchanges. On the other hand we could use the Microsoft Graph, but - at least currently - there is not a single “contact” API available.
\n\nTo mimic the GAL we would need to query List Users and List orgContacts, which would be ok, but “orgContacts” has a “flaw”: \n“Hidden” contacts (“msexchhidefromaddresslists”) are returned from this API, and we thought that this might be a no-go for our customers.
\n\nAnother argument for using EWS was that we could support OnPrem and Online with one code base.
\n\nThe good news is that EWS and the auth problem are more or less well documented here.
\n\nThere are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.
\n\nDelegation:
\n\nDelegation means that we can write a desktop app and all actions are executed in the name of the signed-in user.
\n\nApplication:
\n\nApplication means that the app itself can do some actions without any user involved.
\n\nAt first we thought that we might need to use the “application” way.
\n\nThe good news is that this was easy and worked. \nThe bad news is that the application needs the EWS permission “full_access_as_app”, which means that our application can access all mailboxes of this tenant. This might be ok for certain apps, but it scared us.
\n\nBack to the delegation way:
\n\nThe documentation from Microsoft is good, but our “Service Account” use case was not mentioned. In the example from Microsoft a user needs to log in manually.
\n\nAfter some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:
\n\nFollow the normal “delegate” steps from the Microsoft Docs
\nInstead of this code, which will trigger the login UI:
\n...\n// The permission scope required for EWS access\nvar ewsScopes = new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" };\n\n// Make the interactive token request\nvar authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();\n...\n
Use the “AcquireTokenByUsernamePassword” method:
\n\n...\nvar cred = new NetworkCredential(\"UserName\", \"Password\");\nvar authResult = await pca.AcquireTokenByUsernamePassword(new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" }, cred.UserName, cred.SecurePassword).ExecuteAsync();\n...\n
To make this work you need to enable “Treat application as public client” under “Authentication” > “Advanced settings” in your AAD application, because this uses the “Resource owner password credential flow”.
\n\nNow you should be able to get the AccessToken and do some EWS magic.
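Under the hood, “AcquireTokenByUsernamePassword” performs a plain POST against the AAD v2.0 token endpoint. A minimal sketch of that raw request in Python - tenant, client id and credentials are placeholders, not real values; the scope is the EWS scope from the snippets above:

```python
# Build the raw OAuth2 "Resource Owner Password Credentials" token request
# that MSAL's AcquireTokenByUsernamePassword performs under the hood.
# tenant_id / client_id / username / password are placeholders.

def build_ropc_token_request(tenant_id, client_id, username, password):
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    payload = {
        "grant_type": "password",  # the ROPC flow
        "client_id": client_id,    # must be a "public client" app (see above)
        "scope": "https://outlook.office.com/EWS.AccessAsUser.All",
        "username": username,
        "password": password,
    }
    return url, payload

url, payload = build_ropc_token_request(
    "common", "00000000-0000-0000-0000-000000000000",
    "user@example.com", "secret")
print(url)
```

Posting this payload (e.g. with any HTTP client) returns a JSON body containing the access_token, which is what the MSAL call hands back to you.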
\n\nI posted a shorter version on Stackoverflow.com
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/07/31/ews-exchange-online-oauth-with-a-service-account/","RawContent":null,"Thumbnail":null},{"Title":"Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?","PublishedOn":"2020-06-30T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWithin our product we move more and more stuff into the .NET Core land.\nLast week we had a discussion around needed software requirements, and in the .NET Framework land this question was always easy to answer:
\n\n\n\n\n.NET Framework 4.5 or higher.
\n
With .NET Core the answer is slightly different:
\n\nIn theory major versions are compatible, e.g. if you compiled your app with .NET Core 3.0 and a .NET Core runtime 3.1 is the only installed 3.X runtime on the machine, this runtime is used.
\n\nThis system is called “Framework-dependent apps roll forward” and sounds good.
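The default selection logic can be illustrated in a few lines. This is a simplified sketch of the “minor” roll-forward rule, not the runtime’s actual implementation:

```python
# Simplified sketch of .NET Core's default "minor" roll-forward policy:
# prefer the exact major.minor, otherwise the nearest higher minor
# within the same major version.

def pick_runtime(target, installed):
    t_major, t_minor = target
    same_minor = [v for v in installed if v[:2] == (t_major, t_minor)]
    if same_minor:
        return max(same_minor)  # highest patch of the exact minor
    higher_minor = [v for v in installed if v[0] == t_major and v[1] > t_minor]
    if higher_minor:
        nearest = min(v[1] for v in higher_minor)
        # latest patch of the nearest higher minor, e.g. 3.0 -> 3.1
        return max(v for v in higher_minor if v[1] == nearest)
    return None  # no compatible runtime installed

# App compiled for 3.0, only 3.1.x (and 2.1.x) runtimes on the machine:
print(pick_runtime((3, 0), [(3, 1, 4), (2, 1, 30)]))  # -> (3, 1, 4)
```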
\n\nUnfortunately this didn’t work for us. Not sure why, but our app refused to work because a .dll was not found or missing. The reason is currently not clear. Be aware that Microsoft has written a hint that such things might occur:
\n\n\n\n\nIt’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.
\n
With .NET Core we could ship the framework with our app and it should run fine wherever we deploy it.
\n\nRead the docs about the “app roll forward” approach if you have similar concerns, but test your app with that combination.
\n\nAs a side note: 3.0 is not supported anymore, so it would be good to upgrade to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/06/30/can-a-dotnet-core-30-compiled-app-run-with-a-dotnet-core-31-runtime/","RawContent":null,"Thumbnail":null},{"Title":"SqlBulkCopy for fast bulk inserts","PublishedOn":"2020-05-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWithin our product OneOffixx we can create a “full export” from the product database. Because of limitations with normal MS SQL backups (e.g. compatibility with older SQL databases etc.), we created our own export mechanic.\nAn export can be 1GB and more. This is nothing too serious and far from “big data”, but still not easy to handle, and we had some issues importing larger “exports”. \nOur importer was based on an Entity Framework 6 implementation and it was really slow… last month we tried to resolve this and we are quite happy. Here is how we did it:
\n\nTL;DR Problem:
\n\nBulk inserts with an Entity Framework-based implementation are really slow. There is at least one NuGet package which seems to help, but unfortunately we ran into some obscure issues. This Stackoverflow question highlights some numbers and ways of doing it.
\n\nSqlBulkCopy to the rescue:
\n\nAfter my failed attempt to tame our EF implementation I discovered the SqlBulkCopy operation. In .NET (Full Framework and .NET Standard!) the usage is simple via the “SqlBulkCopy” class.
\n\nOur importer looks more or less like this:
\n\nusing (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30), TransactionScopeAsyncFlowOption.Enabled))\nusing (SqlBulkCopy bulkCopy = new SqlBulkCopy(databaseConnectionString))\n {\n var dt = new DataTable();\n dt.Columns.Add(\"DataColumnA\");\n dt.Columns.Add(\"DataColumnB\");\n dt.Columns.Add(\"DataColumnId\", typeof(Guid));\n\n foreach (var dataEntry in data)\n {\n dt.Rows.Add(dataEntry.A, dataEntry.B, dataEntry.Id);\n }\n\n bulkCopy.DestinationTableName = \"Data\";\n bulkCopy.AutoMapColumns(dt);\n bulkCopy.WriteToServer(dt);\n\n scope.Complete();\n }\n\npublic static class Extensions\n {\n public static void AutoMapColumns(this SqlBulkCopy sbc, DataTable dt)\n {\n sbc.ColumnMappings.Clear();\n\n foreach (DataColumn column in dt.Columns)\n {\n sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);\n }\n }\n } \n
Some notes:
\n\nOnly “downside”: SqlBulkCopy is a table-by-table insert. You need to insert your data in the correct order if you have any db constraints in your schema.
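The “correct order” is simply a topological sort of the foreign-key graph: every table goes in after the tables it references. A small sketch with hypothetical table names:

```python
# Sort tables so that every table is inserted after the tables it
# references via foreign keys. Table names here are hypothetical.
from graphlib import TopologicalSorter

def insert_order(fk_dependencies):
    # fk_dependencies: table -> set of tables it references
    return list(TopologicalSorter(fk_dependencies).static_order())

deps = {
    "OrderItem": {"Order", "Product"},
    "Order": {"Customer"},
    "Product": set(),
    "Customer": set(),
}
print(insert_order(deps))  # e.g. ['Product', 'Customer', 'Order', 'OrderItem']
```

`graphlib` ships with Python 3.9+; for the SqlBulkCopy importer you would run the equivalent sort over your schema metadata before looping over the tables.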
\n\nResult:
\n\nWe reduced the import from several minutes to seconds :)
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/05/31/sqlbulkcopy-for-fast-bulk-inserts/","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"YouTube":{"FeedItems":[{"Title":"Erste Schritte mit dem Azure OpenAI Service","PublishedOn":"2023-03-23T22:30:48+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=VVNHT4gVxDo","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/VVNHT4gVxDo/hqdefault.jpg"},{"Title":"Erster Schritt in die Source Control: Visual Studio Projekte auf GitHub pushen","PublishedOn":"2023-03-17T21:59:57+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=iKQS5nYbC-k","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/iKQS5nYbC-k/hqdefault.jpg"},{"Title":"Vite.js für React & TypeScript für ASP.NET Core & Visual Studio Entwickler","PublishedOn":"2023-02-12T00:25:03+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=-2iiXpBcmDY","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/-2iiXpBcmDY/hqdefault.jpg"},{"Title":"React.js mit TypeScript in ASP.NET Core mit Visual Studio & Visual Studio Code","PublishedOn":"2023-01-26T23:35:26+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=IgW79wxMO-c","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/IgW79wxMO-c/hqdefault.jpg"},{"Title":"React.js mit ASP.NET Core - ein Einstieg mit Visual Studio","PublishedOn":"2022-10-07T23:15:55+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=gIzMtWDs_QM","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/gIzMtWDs_QM/hqdefault.jpg"},{"Title":"Einstieg in die Webentwicklung mit .NET 6 & ASP.NET 
Core","PublishedOn":"2022-04-12T21:13:18+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=WtpzsW5Xwqo","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/WtpzsW5Xwqo/hqdefault.jpg"},{"Title":"Das erste .NET 6 Programm","PublishedOn":"2022-01-30T22:21:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=fVzo2qJubmA","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/fVzo2qJubmA/hqdefault.jpg"},{"Title":"Azure SQL - ist das echt so teuer? Neee...","PublishedOn":"2022-01-11T21:49:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=dNaIOGQj15M","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/dNaIOGQj15M/hqdefault.jpg"},{"Title":"Was sind \"Project Templates\" in Visual Studio?","PublishedOn":"2021-12-22T22:36:25+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=_IMabo9yHSA","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/_IMabo9yHSA/hqdefault.jpg"},{"Title":".NET Versionen - was bedeutet LTS und Current?","PublishedOn":"2021-12-21T21:06:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2ghTKF0Ey_0","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2ghTKF0Ey_0/hqdefault.jpg"},{"Title":"Einstieg in die .NET Entwicklung für Anfänger","PublishedOn":"2021-12-20T22:18:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2EcSJDX-8-s","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2EcSJDX-8-s/hqdefault.jpg"},{"Title":"Erste Schritte mit Unit Tests","PublishedOn":"2008-11-05T00:14:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=tjAv1-Qb4rY","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/tjAv1-Qb4rY/hqdefault.jpg"},{"Title":"3 Schichten 
Architektur","PublishedOn":"2008-10-17T22:01:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=27yknlB8xeg","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/27yknlB8xeg/hqdefault.jpg"}],"ResultType":"Feed"},"O_Blog":{"FeedItems":[{"Title":"How to build a simple hate speech detector with machine learning","PublishedOn":"2019-08-02T13:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"Not everybody on the internet behaves nice and some comments are just rude or offending. If you run a web page that offers a public comment function hate speech can be a real problem. For example in Germany, you are legally required to delete hate speech comments. This can be challenging if you have to check thousands of comments each day. \nSo wouldn’t it be nice, if you can automatically check the user’s comment and give them a little hint to stay nice?\n
\n\nThe simplest thing you could do is to check if the user’s text contains offensive words. However, this approach is limited since you can offend people without using offensive words.
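Such a word-list check is only a few lines, which also makes its limits obvious. A minimal sketch with a hypothetical word list:

```python
# Naive baseline: flag a comment if it contains a known offensive word.
# The word list is a hypothetical placeholder, not real training data.
import re

OFFENSIVE_WORDS = {"idiot", "moron"}  # placeholder list

def is_offensive(text):
    tokens = re.findall(r"\w+", text.lower())
    return any(token in OFFENSIVE_WORDS for token in tokens)

print(is_offensive("You absolute idiot!"))  # -> True
print(is_offensive("Have a nice day"))      # -> False
# Offensive in tone, but no listed word -> the baseline misses it:
print(is_offensive("People like you should stay quiet"))  # -> False
```

The last example is exactly the failure mode mentioned above: insults without offensive words slip through, which is why we need a learned model.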
\n\nThis post will show you how to train a machine learning model that can detect if a comment or text is offensive. And to start you need just a few lines of Python code \\o/
\n\nAt first, you need data. In this case, you will need a list of offensive and non-offensive texts. I wrote this tutorial for a machine learning course in Germany, so I used German texts, but you should be able to use other languages too.
\n\nFor a machine learning competition, scientists provided a list of comments labeled as offensive and non-offensive (Germeval 2018, Subtask 1). This is perfect for us since we can just use this data.
\n\nTo tackle this task I would first establish a baseline and then improve this solution step by step. Luckily they also published the scores of all submissions, so we can get a sense of how well we are doing.
\n\nFor our baseline model we are going to use Facebook’s fastText. It’s simple to use, works with many languages and does not require any special hardware like a GPU. Oh, and it’s fast :)
\n\nAfter you downloaded the training data file germeval2018.training.txt you need to transform this data into a format that fastText can read.\nFastText’s standard format looks like this: “__label__[your label] some text”:
\n\n__label__offensive some insults\n__label__other have a nice day\n
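The transformation itself is a few lines. A Python sketch, assuming each training line is tab-separated as text, coarse label, fine label with OFFENSE/OTHER as the coarse labels - check your copy of the file:

```python
# Convert Germeval-style lines into fastText's "__label__<label> text" format.
# Assumes each input line is "<text>\t<coarse label>\t<fine label>" with the
# coarse label being OFFENSE or OTHER - verify this against your data file.

def to_fasttext(lines):
    out = []
    for line in lines:
        text, coarse = line.rstrip("\n").split("\t")[:2]
        label = "offensive" if coarse == "OFFENSE" else "other"
        out.append(f"__label__{label} {text}")
    return out

sample = ["some insults\tOFFENSE\tABUSE", "have a nice day\tOTHER\tOTHER"]
print("\n".join(to_fasttext(sample)))
```

Writing the converted lines to `fasttext.train` (and a held-out part to `fasttext.test`) gives you the two files used below.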
To train the model you need to install the fastText Python package.
\n\n$ pip install fasttext\n
To train the model you need just three lines of code.
\nimport fasttext\ntraining_parameters = {'epoch': 50, 'lr': 0.05, 'loss': \"ns\", 'thread': 8, 'ws': 5, 'dim': 100} \nmodel = fasttext.supervised('fasttext.train', 'model', **training_parameters)\n
I packed all the training parameters into a separate dictionary. To me that looks a bit cleaner, but you don’t need to do that.
\n\nAfter we trained the model it is time to test how it performs. FastText provides us a handy test method to evaluate the model’s performance. To compare our model with the other models from the GermEval contest I also added a lambda which calculates the average F1 score. For now, I did not use the official test script from the contest’s repository, which you should do if you want to participate in such contests.
\n\ndef test(model):\n f1_score = lambda precision, recall: 2 * ((precision * recall) / (precision + recall))\n nexamples, recall, precision = model.test('fasttext.test')\n print (f'recall: {recall}' )\n print (f'precision: {precision}')\n print (f'f1 score: {f1_score(precision,recall)}')\n print (f'number of examples: {nexamples}')\n
I don’t know about you, but I am so curious how we score. Annnnnnnd:
\n\nrecall: 0.7018686296715742\nprecision: 0.7018686296715742\nf1 score: 0.7018686296715742\nnumber of examples: 3532\n
Looking at the results we can see that the best other model had an average F1 score of 76.77 and our model achieves, without any optimization and preprocessing, an F1 score of 70.18.
\n\nThis is pretty good since the models for these contests are usually specially optimized for the given data.
\n\nFastText is a clever piece of software that uses some neat tricks. If you are interested in fastText you should take a look at the paper and this one. For example, fastText uses character n-grams. This approach is well suited for the German language, which uses a lot of compound words.
\n\nIn this very basic tutorial, we trained a model with just a few lines of Python code. There are several things you can do to improve this model. The first step would be to preprocess your data. During preprocessing you could lower case all texts, remove URLs and special characters, correct spelling, etc. After every optimization step, you can test your model and check if your scores went up. Happy hacking :)
\n\nSome Ideas:
\n\nHere is the full code:
\n\n\n\nCredit: Photo by Jon Tyson on Unsplash
","Href":"https://www.oliverguhr.eu/nlp/jekyll/2019/08/02/build-a-simple-hate-speech-detector-with-machine-learning.html","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"GitHubEventsUser":{"Events":[{"Id":"32654773663","Type":"IssuesEvent","CreatedAt":"2023-10-18T11:21:16","Actor":"oliverguhr","Repository":"oliverguhr/fullstop-deep-punctuation-prediction","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/oliverguhr/fullstop-deep-punctuation-prediction/issues/18","RelatedDescription":"Closed issue \"What's the word limit for the model?\" (#18) at oliverguhr/fullstop-deep-punctuation-prediction","RelatedBody":"Hi, I'm trying to parse some texts that is pretty long. I run into this error.\r\n\r\n```\r\nAssertionError Traceback (most recent call last)\r\nCell In[47], line 1\r\n----> 1 restored_text=df.loc[df['unpunc'] == True, 0].map(model.restore_punctuation)\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/pandas/core/series.py:4539, in Series.map(self, arg, na_action)\r\n 4460 def map(\r\n 4461 self,\r\n 4462 arg: Callable | Mapping | Series,\r\n 4463 na_action: Literal[\"ignore\"] | None = None,\r\n 4464 ) -> Series:\r\n 4465 \"\"\"\r\n 4466 Map values of Series according to an input mapping or function.\r\n 4467 \r\n (...)\r\n 4537 dtype: object\r\n 4538 \"\"\"\r\n-> 4539 new_values = self._map_values(arg, na_action=na_action)\r\n 4540 return self._constructor(new_values, index=self.index).__finalize__(\r\n 4541 self, method=\"map\"\r\n 4542 )\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/pandas/core/base.py:890, in IndexOpsMixin._map_values(self, mapper, na_action)\r\n 887 raise ValueError(msg)\r\n 889 # mapper is a function\r\n--> 890 new_values = map_f(values, mapper)\r\n 892 return new_values\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/pandas/_libs/lib.pyx:2924, in 
pandas._libs.lib.map_infer()\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/deepmultilingualpunctuation/punctuationmodel.py:21, in PunctuationModel.restore_punctuation(self, text)\r\n 20 def restore_punctuation(self,text): \r\n---> 21 result = self.predict(self.preprocess(text))\r\n 22 return self.prediction_to_text(result)\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/deepmultilingualpunctuation/punctuationmodel.py:49, in PunctuationModel.predict(self, words)\r\n 47 text = \" \".join(batch)\r\n 48 result = self.pipe(text) \r\n---> 49 assert len(text) == result[-1][\"end\"], \"chunk size too large, text got clipped\"\r\n 51 char_index = 0\r\n 52 result_index = 0\r\n\r\nAssertionError: chunk size too large, text got clipped\r\n```\r\n\r\nI didn't use any other config, just the default model and predict function. It looks like the texts is too long or the chunk_size is too long (which I didn't configure)? 
Is there anything I should do to have it properly function?"},{"Id":"32654773324","Type":"PullRequestEvent","CreatedAt":"2023-10-18T11:21:15","Actor":"oliverguhr","Repository":"oliverguhr/deepmultilingualpunctuation","Organization":null,"RawContent":null,"RelatedAction":"merged","RelatedUrl":"https://github.com/oliverguhr/deepmultilingualpunctuation/pull/15","RelatedDescription":"Merged pull request \"expose chunk_size as variable\" (#15) at oliverguhr/deepmultilingualpunctuation","RelatedBody":"Public API fix for https://github.com/oliverguhr/deepmultilingualpunctuation/issues/4 i.e.\r\n```\r\nTraceback (most recent call last):\r\nline 49, in predict\r\n assert len(text) == result[-1][\"end\"], \"chunk size too large, text got clipped\"\r\nAssertionError: chunk size too large, text got clipped\r\n```\r\n\r\nCloses https://github.com/oliverguhr/fullstop-deep-punctuation-prediction/issues/18"},{"Id":"32074135434","Type":"IssuesEvent","CreatedAt":"2023-09-25T09:52:58","Actor":"oliverguhr","Repository":"pbelcak/fastfeedforward","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/pbelcak/fastfeedforward/issues/2","RelatedDescription":"Closed issue \"Error: Cannot use soft decisions during evaluation.\" (#2) at pbelcak/fastfeedforward","RelatedBody":"Hello,\r\nthanks for publishing the code along with your paper. While reading your paper, I tried to run the demo notebook from this repository. I modified it slightly for colab to use the pip package instead of the local repository. However, I run into this issue:\r\n\r\n```python\r\nValueError Traceback (most recent call last)\r\nBe aware: I’m not a full time administrator and this post might sound stupid to you.
\n\nWe access certain Active Directory properties with our application and on one customer domain we couldn’t get any data out via our Active Directory component.
\n\nAfter some debugging and doubts about our functionality we (the admin of the customer and me) found the reason:\nOur code was running under a Windows Account that was very limited and couldn’t read those properties.
\n\nIf you have similar problems you might want to take a look in the AD User & Group management.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/09/20/limit-active-directory-property-access/","RawContent":null,"Thumbnail":null},{"Title":"Zip deployment failed on Azure","PublishedOn":"2023-09-05T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe are using Azure App Service for our application (which runs great BTW) and deploy it automatically via ZipDeploy. \nThis basic setup was running smoothly, but we noticed that at some point the deployment failed with these error messages:
\n\n2023-08-24T20:48:56.1057054Z Deployment endpoint responded with status code 202\n2023-08-24T20:49:15.6984407Z Configuring default logging for the app, if not already enabled\n2023-08-24T20:49:18.8106651Z Zip deployment failed. {'id': 'temp-b574d768', 'status': 3, 'status_text': '', 'author_email': 'N/A', 'author': 'N/A', 'deployer': 'ZipDeploy', 'message': 'Deploying from pushed zip file', 'progress': '', 'received_time': '2023-08-24T20:48:55.8916655Z', 'start_time': '2023-08-24T20:48:55.8916655Z', 'end_time': '2023-08-24T20:49:15.3291017Z', 'last_success_end_time': None, 'complete': True, 'active': False, 'is_temp': True, 'is_readonly': False, 'url': 'https://[...].scm.azurewebsites.net/api/deployments/latest', 'log_url': 'https://[...].scm.azurewebsites.net/api/deployments/latest/log', 'site_name': '[...]', 'provisioningState': 'Failed'}. Please run the command az webapp log deployment show\n2023-08-24T20:49:18.8114319Z -n [...] -g production\n
or this one (depending on how we invoked the deployment script):
\n\nGetting scm site credentials for zip deployment\nStarting zip deployment. This operation can take a while to complete ...\nDeployment endpoint responded with status code 500\nAn error occured during deployment. Status Code: 500, Details: {\"Message\":\"An error has occurred.\",\"ExceptionMessage\":\"There is not enough space on the disk.\\r\\n\",\"ExceptionType\":\"System.IO.IOException\",\"StackTrace\":\" \n
The message There is not enough space on the disk
was a good hint, but according to the File system storage everything should be fine with only 8% used.
Be aware - this is important: We have multiple apps on the same App Service plan!
\n\n\n\nThe next step was to check the behind-the-scenes environment via the “Advanced Tools” (Kudu) and there it is:
\n\n\n\nThere are two different storages attached to the app service:
\n\nc:\\home
is the “File System Storage” that you can see in the Azure Portal and is quite large. App files are located here.c:\\local
is a much smaller storage with ~21GB and if the space is used, then ZipDeploy will fail.c:\\local
stores “mostly” temporary items, e.g.:
Directory of C:\\local\n\n08/31/2023 06:40 AM <DIR> .\n08/31/2023 06:40 AM <DIR> ..\n07/13/2023 04:29 PM <DIR> AppData\n07/13/2023 04:29 PM <DIR> ASP Compiled Templates\n08/31/2023 06:40 AM <DIR> Config\n07/13/2023 04:29 PM <DIR> DomainValidationTokens\n07/13/2023 04:29 PM <DIR> DynamicCache\n07/13/2023 04:29 PM <DIR> FrameworkJit\n07/13/2023 04:29 PM <DIR> IIS Temporary Compressed Files\n07/13/2023 04:29 PM <DIR> LocalAppData\n07/13/2023 04:29 PM <DIR> ProgramData\n09/05/2023 08:36 PM <DIR> Temp\n08/31/2023 06:40 AM <DIR> Temporary ASP.NET Files\n07/18/2023 04:06 AM <DIR> UserProfile\n08/19/2023 06:34 AM <SYMLINKD> VirtualDirectory0 [\\\\...\\]\n 0 File(s) 0 bytes\n 15 Dir(s) 13,334,384,640 bytes free\n
The “biggest” item here was in our case under c:\\local\\Temp\\zipdeploy
:
Directory of C:\\local\\Temp\\zipdeploy\n\n08/29/2023 04:52 AM <DIR> .\n08/29/2023 04:52 AM <DIR> ..\n08/29/2023 04:52 AM <DIR> extracted\n08/29/2023 04:52 AM 774,591,927 jiire5i5.zip\n
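If you want to hunt down such space hogs yourself, a quick recursive size scan does the job. A minimal Python sketch - any path works, c:\\local is the one from this post:

```python
# Sum up the size of every subdirectory to spot space hogs such as
# Temp\zipdeploy. Point biggest_subdirs() at any path, e.g. r"C:\local".
import os

def dir_size(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # files can vanish while scanning temp folders
    return total

def biggest_subdirs(path, top=5):
    entries = [e for e in os.scandir(path) if e.is_dir()]
    sizes = {e.name: dir_size(e.path) for e in entries}
    # largest first, so the zipdeploy-style offenders show up on top
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:top]
```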
This folder stores our ZipDeploy
package, which is quite large with ~800MB. The folder also contains the extracted files - remember: We only have 21GB on this storage, but even if this zip file and the extracted files are ~3GB, there is still plenty of room, right?
Well… it turns out that each App Service on an App Service plan is using this storage, and if you have multiple App Services on the same plan, then those 21GB might melt away.
\n\nThe “bad” part is that the space is shared, but each App Service has its own c:\\local
folder (which makes sense). To free up space we had to clean up this folder on each App Service like that:
rmdir c:\\local\\Temp\\zipdeploy /s /q\n
If you have problems with ZipDeploy and the error message tells you that there is not enough space, check out the c:\\local
space (and of course c:\\home
as well) and delete unused files. Sometimes a reboot might help as well (to clean up temp-files), but AFAIK those ZipDeploy files will survive that.
The AI world is rising very fast these days: ChatGPT is such an awesome (and scary good?) service and Microsoft joined the ship with some partner announcements and investments. The result of these actions is that OpenAI is now a “first class citizen” on Azure.
\n\nSo - for the average Microsoft/.NET developer this opens up a wonderful toolbox and the first steps are really easy.
\n\nBe aware: You need to “apply” for access to the OpenAI service, but it took less than 24 hours for us to gain access. I guess this is just a temporary thing.
\n\nDisclaimer: I’m not an AI/ML engineer and I only have a glimpse of the knowledge behind GPT-3, ChatGPT and ML in general. If in doubt, I always ask my buddy Oliver Guhr, because he is much smarter about this stuff. Follow him on Twitter!
\n\nSearch for “OpenAI” and you will see the “Azure OpenAI Service” entry:
\n\n\n\nCreate a new Azure OpenAI Service instance:
\n\n\n\nOn the next page you will need to enter the subscription, resource group, region and a name (typical Azure stuff):
\n\n\n\nBe aware: If your subscription is not enabled for OpenAI, you need to apply here first.
\n\nAfter the service is created you should see something like this:
\n\n\n\nNow go to “Model deployments” and create a model - I chose “text-davinci-003”, because I think this is GPT-3.5 (which powered the initial ChatGPT release; GPT-4 is currently in preview on Azure and you need to apply again).
\n\n\n\nMy guess is that you could train/deploy other, specialized models here, because this model is quite complex and you might want to tailor the model to your scenario to get faster/cheaper results… but I honestly don’t know how to do that (currently), so we just leave the default.
\n\nIn this step we just need to copy the key and the endpoint, which can be found under “Keys and Endpoint”, simple - right?
\n\n\n\nCreate a .NET application and add the Azure.AI.OpenAI NuGet package (currently in preview!).
\n\ndotnet add package Azure.AI.OpenAI --version 1.0.0-beta.5\n
Use this code:
\n\nusing Azure.AI.OpenAI;\nusing Azure;\n\nConsole.WriteLine(\"Hello, World!\");\n\nOpenAIClient client = new OpenAIClient(\n new Uri(\"YOUR-ENDPOINT\"),\n new AzureKeyCredential(\"YOUR-KEY\"));\n\nstring deploymentName = \"text-davinci-003\";\nstring prompt = \"Tell us something about .NET development.\";\nConsole.Write($\"Input: {prompt}\");\n\nResponse<Completions> completionsResponse = client.GetCompletions(deploymentName, prompt);\nstring completion = completionsResponse.Value.Choices[0].Text;\n\nConsole.WriteLine(completion);\n\nConsole.ReadLine();\n\n
Result:
\n\nHello, World!\nInput: Tell us something about .NET development.\n\n.NET development is a mature, feature-rich platform that enables developers to create sophisticated web applications, services, and applications for desktop, mobile, and embedded systems. Its features include full-stack programming, object-oriented data structures, security, scalability, speed, and an open source framework for distributed applications. A great advantage of .NET development is its capability to develop applications for both Windows and Linux (using .NET Core). .NET development is also compatible with other languages such as\n
As you can see… the result is cut off, not sure why, but this is just a simple demonstration.
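The cut-off answer is most likely the completion token limit: if you don’t set a max token value, the service defaults to a small one. A Python sketch of the underlying REST request with an explicit max_tokens - endpoint, key and api-version are placeholder assumptions, and the C# SDK should expose the same knob via CompletionsOptions.MaxTokens:

```python
# Sketch of the raw REST request behind GetCompletions, with an explicit
# max_tokens so the answer is not cut off. Endpoint, key and api-version
# are placeholders / assumptions - check the portal and the API docs.

def build_completion_request(endpoint, api_key, deployment, prompt, max_tokens=200):
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/completions?api-version=2022-12-01")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = {"prompt": prompt, "max_tokens": max_tokens}  # default is small, so set it
    return url, headers, body
```

Sending this body via any HTTP client against your instance should return the full, untruncated completion.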
\n\nWith these basic steps you can access the OpenAI development world. Azure makes it easy to integrate into your existing Azure/Microsoft “stack”. Be aware that you could also use the same SDK with the endpoint from OpenAI directly. For billing reasons it is easier for us to use the Azure-hosted instances.
\n\nHope this helps!
\n\nIf you understand German and want to see it in action, check out my video on my Channel:
\n\n\n\n","Href":"https://blog.codeinside.eu/2023/03/23/first-steps-with-azure-openai-and-dotnet/","RawContent":null,"Thumbnail":null},{"Title":"How to fix: 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine","PublishedOn":"2023-03-18T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn our product we can interact with different datasources, and one of them was a Microsoft Access DB connected via OLEDB
. This is really, really old, but still works, but on one customer machine we had this issue:
'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine\n
If you face this issue, you need to install the provider from here.
\n\nBe aware: If you have a different error, you might need to install the newer provider - this is labeled as “2010 Redistributable”, but it still works with all those fancy Office 365 apps out there.
\n\nImportant: You need to install the provider in the correct bit version, e.g. if you run under x64, install the x64.msi.
\n\nThe solution comes from this Stackoverflow question.
\n\nThe best tip from Stackoverflow was these PowerShell commands to check whether the provider is present:
\n\n(New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION \n\nGet-OdbcDriver | select Name,Platform\n
This will return something like this:
\n\nPS C:\\Users\\muehsig> (New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION\n\nSOURCES_NAME SOURCES_DESCRIPTION\n------------ -------------------\nSQLOLEDB Microsoft OLE DB Provider for SQL Server\nMSDataShape MSDataShape\nMicrosoft.ACE.OLEDB.12.0 Microsoft Office 12.0 Access Database Engine OLE DB Provider\nMicrosoft.ACE.OLEDB.16.0 Microsoft Office 16.0 Access Database Engine OLE DB Provider\nADsDSOObject OLE DB Provider for Microsoft Directory Services\nWindows Search Data Source Microsoft OLE DB Provider for Search\nMSDASQL Microsoft OLE DB Provider for ODBC Drivers\nMSDASQL Enumerator Microsoft OLE DB Enumerator for ODBC Drivers\nSQLOLEDB Enumerator Microsoft OLE DB Enumerator for SQL Server\nMSDAOSP Microsoft OLE DB Simple Provider\n\n\nPS C:\\Users\\muehsig> Get-OdbcDriver | select Name,Platform\n\nName Platform\n---- --------\nDriver da Microsoft para arquivos texto (*.txt; *.csv) 32-bit\nDriver do Microsoft Access (*.mdb) 32-bit\nDriver do Microsoft dBase (*.dbf) 32-bit\nDriver do Microsoft Excel(*.xls) 32-bit\nDriver do Microsoft Paradox (*.db ) 32-bit\nMicrosoft Access Driver (*.mdb) 32-bit\nMicrosoft Access-Treiber (*.mdb) 32-bit\nMicrosoft dBase Driver (*.dbf) 32-bit\nMicrosoft dBase-Treiber (*.dbf) 32-bit\nMicrosoft Excel Driver (*.xls) 32-bit\nMicrosoft Excel-Treiber (*.xls) 32-bit\nMicrosoft ODBC for Oracle 32-bit\nMicrosoft Paradox Driver (*.db ) 32-bit\nMicrosoft Paradox-Treiber (*.db ) 32-bit\nMicrosoft Text Driver (*.txt; *.csv) 32-bit\nMicrosoft Text-Treiber (*.txt; *.csv) 32-bit\nSQL Server 32-bit\nODBC Driver 17 for SQL Server 32-bit\nSQL Server 64-bit\nODBC Driver 17 for SQL Server 64-bit\nMicrosoft Access Driver (*.mdb, *.accdb) 64-bit\nMicrosoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb) 64-bit\nMicrosoft Access Text Driver (*.txt, *.csv) 64-bit\n
Hope this helps! (And I hope you don’t need to deal with these ancient technologies for too long 😅)
\n","Href":"https://blog.codeinside.eu/2023/03/18/microsoft-ace-oledb-12-0-provider-is-not-registered/","RawContent":null,"Thumbnail":null},{"Title":"Resource type is not supported in this subscription","PublishedOn":"2023-03-11T23:55:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nI was playing around with some Visual Studio Tooling and noticed this error during the creation of a “Azure Container Apps”-app:
\n\nResource type is not supported in this subscription
The solution is quite strange at first, but in the super configurable world of Azure it makes sense: You need to activate the resource provider for this feature on your subscription. For Azure Container Apps you need the Microsoft.ContainerRegistry resource provider registered:
It seems that you can create such resources via the Portal, but if you go via the API (which Visual Studio seems to do) the provider needs to be registered first.
\n\nSome resource providers are “enabled by default”, while other providers need to be turned on manually. Check out this list of all resource providers and the related Azure services.
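If you prefer the command line over the Portal, the registration can also be sketched with the Azure CLI (this assumes you are already logged in via az login and the right subscription is selected; the namespace shown is the one mentioned above):

```shell
# Check the registration state of the resource provider:
az provider show --namespace Microsoft.ContainerRegistry --query registrationState

# Register it - the registration can take a few minutes to complete:
az provider register --namespace Microsoft.ContainerRegistry
```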
\n\nBe careful: I guess you should only enable the resource providers that you really need, otherwise your attack surface will get larger.
\n\nTo be honest: This was completely new for me - I have been doing Azure for ages and never had to deal with resource providers. Always learning ;)
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/03/11/resource-type-is-not-supported-in-this-subscription/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps Server 2022 Update","PublishedOn":"2023-02-15T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nYes, I know - you can get everything from the cloud nowadays, but we are still using our OnPrem hardware and were running the “old” Azure DevOps Server 2020. \nThe Azure DevOps Server 2022 was released last December, so an update was due.
\n\nIf you are running an Azure DevOps Server 2020, the requirements for the new 2022 release are “more or less” the same, except for the following important parts:
\n\nThe last requirement was a surprise for me, because I thought the update would run smoothly, but the installer removed the previous version and I couldn’t update, because our SQL Server was still on SQL Server 2016. Fortunately we had a VM backup and could roll back to the previous version.
\n\nThe update process itself was straightforward: Download the installer and run it.
\n\nThe screenshots are from two different sessions. If you look carefully at the clock you might see that the date is different - that is because of the SQL Server 2016 problem.
\n\nAs you can see - everything worked as expected, but after we updated the server the search, which is powered by ElasticSearch, was not working. The “ElasticSearch” Windows service just crashed on startup, and I’m not a Java guy, so… we tried to clean the cache, but it was still not working. In the end we fixed it by removing the search feature and reinstalling it - after that, the issue went away.
\n\nAzure DevOps Server 2022 is just a minor update (at least from a typical user perspective). The biggest new feature might be “Delivery Plans”, which are nice, but not a huge benefit for small teams. Check out the release notes.
\n\nA nice - nerdy - enhancement, and not mentioned in the release notes: “mermaid.js” is now supported in the Azure DevOps Wiki, yay!
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/02/15/azure-devops-server-2022-update/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core and React with Vite.js","PublishedOn":"2023-02-11T01:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn my previous post I showed a simple setup with ASP.NET Core & React. The React part was created with the “CRA”-Tooling, which is kind of problematic. The “new” state of the art React tooling seems to be vite.js - so let’s take a look how to use this.
\n\n\n\nStep 1: Create a “normal” ASP.NET Core project
\n\n(I like the ASP.NET Core MVC template, but feel free to use something else - same as in the other blogpost)
\n\n\n\nStep 2: Install vite.js and init the template
\n\nNow move to the root directory of your project with a shell and execute this:
\n\nnpm create vite@latest clientapp -- --template react-ts\n
This will install the latest & greatest Vite-based React app in a folder called clientapp with the react-ts template (React with TypeScript). Vite itself isn’t focused on React and supports many other frontend frameworks.
Step 3: Enable HTTPS in your vite.js
\n\nJust like in the “CRA” setup we need to make sure that the environment is served under HTTPS. In the “CRA” world we needed two different files from the original ASP.NET Core & React template, but with vite.js there is a much simpler option available.
\n\nExecute the following command in the clientapp
directory:
npm install --save-dev vite-plugin-mkcert\n
Then in your vite.config.ts
use this config:
import { defineConfig } from 'vite'\nimport react from '@vitejs/plugin-react'\nimport mkcert from 'vite-plugin-mkcert'\n\n// https://vitejs.dev/config/\nexport default defineConfig({\n base: '/app',\n server: {\n https: true,\n port: 6363\n },\n plugins: [react(), mkcert()],\n})\n
Be aware: The base: '/app'
will be used as a sub-path.
The important part for the HTTPS setting is that we use the mkcert()
plugin and configure the server part with a port and set https
to true
.
Step 4: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package
\n\nSame as in the other blogpost, we need to add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package to glue the ASP.NET Core development and React world together. If you use .NET 7, then use the version 7.x.x, if you use .NET 6, use the version 6.x.x - etc.
\n\n\n\nStep 5: Enhance your Program.cs
\n\nBack to the Program.cs
- this is more or less the same as with the “CRA” setup:
Add the SpaStaticFiles
to the services collection like this in your Program.cs
- be aware, that vite.js builds everything in a folder called dist
:
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n configuration.RootPath = \"clientapp/dist\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
Now we need to use the SpaServices like this:
\n\napp.MapControllerRoute(\n name: \"default\",\n pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/app\";\nif (app.Environment.IsDevelopment())\n{\n app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n {\n client.UseSpa(spa =>\n {\n spa.UseProxyToSpaDevelopmentServer(\"https://localhost:6363\");\n });\n });\n}\nelse\n{\n app.Map(new PathString(spaPath), client =>\n {\n client.UseSpaStaticFiles();\n client.UseSpa(spa => {\n spa.Options.SourcePath = \"clientapp\";\n\n // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n // .js and other static resources are still cached by the browser\n spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n {\n OnPrepareResponse = ctx =>\n {\n ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n headers.CacheControl = new CacheControlHeaderValue\n {\n NoCache = true,\n NoStore = true,\n MustRevalidate = true\n };\n }\n };\n });\n });\n}\n// ↑ these lines ↑\n\napp.Run();\n
Just like in the original blogpost. In the development mode we use the UseProxyToSpaDevelopmentServer
-method to proxy all requests to the vite.js dev server. In the real world, we will use the files from the dist
folder.
Step 6: Invoke npm run build during publish
\n\nThe last step is to complete the setup. We want to build the ASP.NET Core app and the React app, when we use dotnet publish
:
Add this to your .csproj
-file and it should work:
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)dist\\**\" /> <!-- Changed to dist! -->\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
You should now be able to use Visual Studio Code (or something similar) and start the frontend project with npm run dev
. If you open a browser and go to https://127.0.0.1:6363/app
you should see something like this:
Now start the ASP.NET Core app and go to /app
and it should look like this:
Ok - this looks broken, right? Well - this is a more or less “known” problem, which can easily be avoided. If we import the logo from the assets it works as expected and shouldn’t be a general problem:
\n\n\n\nThe sample code can be found here.
\n\nI made a video about this topic (in German, sorry :-/) as well - feel free to subscribe ;)
\n\n\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/02/11/aspnet-core-react-with-vitejs/","RawContent":null,"Thumbnail":null},{"Title":"Use ASP.NET Core & React together","PublishedOn":"2023-01-25T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nVisual Studio (at least VS 2019 and the newer 2022) ships with an ASP.NET Core React template, which is “ok-ish”, but has some really bad problems:
\n\nThe React part of this template is scaffolded via “CRA” (which seems to be problematic as well, but is not the point of this post) and uses JavaScript instead of TypeScript.\nAnother huge pain point (from my perspective) is that the template uses some special configurations to just host the react part for users - if you want to mix in some “MVC”/”Razor” stuff, you need to change some of this “magic”.
\n\nThe good parts:
\n\nBoth worlds can live together: During development time the ASP.NET Core stuff is hosted via Kestrel and the React part is hosted under the WebPack Development server. The lovely hot reload is working as expected and is really powerful.\nIf you are doing a release build, the project will take care of the npm-magic.
\n\nBut because the “bad problems” outweigh the benefits, we try to integrate a typical React app into a “normal” ASP.NET Core app.
\n\nStep 1: Create a “normal” ASP.NET Core project
\n\n(I like the ASP.NET Core MVC template, but feel free to use something else)
\n\n\n\nStep 2: Create a react app inside the ASP.NET Core project
\n\n(For this blogpost I use the “Create React App”-approach, but you can use whatever you like)
\n\nExecute this in your ASP.NET Core template (node & npm must be installed!):
\n\nnpx create-react-app clientapp --template typescript\n
Step 3: Copy some stuff from the React template
\n\nThe react template ships with some scripts and settings that we want to preserve:
\n\n\n\nThe aspnetcore-https.js and aspnetcore-react.js files are needed to set up the ASP.NET Core SSL dev certificate for the WebPack Dev Server. \nYou should also copy the .env & .env.development files into the root of your clientapp folder!
The .env
file only has this setting:
BROWSER=none\n
A more important setting is in the .env.development
file (change the port to something different!):
PORT=3333\nHTTPS=true\n
The port number 3333
and the https=true
will be important later, otherwise our setup will not work.
Also, add this line to the .env
-file (in theory you can use any name - for this sample we keep it spaApp
):
PUBLIC_URL=/spaApp\n
Step 4: Add the prestart to the package.json
\n\nIn your project open the package.json
and add the prestart
-line like this:
\"scripts\": {\n \"prestart\": \"node aspnetcore-https && node aspnetcore-react\",\n \"start\": \"react-scripts start\",\n \"build\": \"react-scripts build\",\n \"test\": \"react-scripts test\",\n \"eject\": \"react-scripts eject\"\n },\n
Step 5: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package
\n\n\n\nWe need the Microsoft.AspNetCore.SpaServices.Extensions NuGet-package. If you use .NET 7, then use the version 7.x.x, if you use .NET 6, use the version 6.x.x - etc.
\n\nStep 6: Enhance your Program.cs
\n\nAdd the SpaStaticFiles
to the services collection like this in your Program.cs
:
var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\nbuilder.Services.AddControllersWithViews();\n\n// ↓ Add the following lines: ↓\nbuilder.Services.AddSpaStaticFiles(configuration => {\n configuration.RootPath = \"clientapp/build\";\n});\n// ↑ these lines ↑\n\nvar app = builder.Build();\n
Now we need to use the SpaServices like this:
\n\napp.MapControllerRoute(\n name: \"default\",\n pattern: \"{controller=Home}/{action=Index}/{id?}\");\n\n// ↓ Add the following lines: ↓\nvar spaPath = \"/spaApp\";\nif (app.Environment.IsDevelopment())\n{\n app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>\n {\n client.UseSpa(spa =>\n {\n spa.UseProxyToSpaDevelopmentServer(\"https://localhost:3333\");\n });\n });\n}\nelse\n{\n app.Map(new PathString(spaPath), client =>\n {\n client.UseSpaStaticFiles();\n client.UseSpa(spa => {\n spa.Options.SourcePath = \"clientapp\";\n\n // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)\n // .js and other static resources are still cached by the browser\n spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions\n {\n OnPrepareResponse = ctx =>\n {\n ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();\n headers.CacheControl = new CacheControlHeaderValue\n {\n NoCache = true,\n NoStore = true,\n MustRevalidate = true\n };\n }\n };\n });\n });\n}\n// ↑ these lines ↑\n\napp.Run();\n
As you can see, we run in two different modes. \nIn our development world we just use the UseProxyToSpaDevelopmentServer
-method to proxy all requests that point to spaApp
to the React WebPack DevServer (or something else). The huge benefit is that you can use the React ecosystem with all its tools. Normally we use Visual Studio Code to run our React frontend and use the ASP.NET Core app as the “backend for frontend”.\nIn production we use the build artifacts of the React build and make sure that they are not cached. To make the deployment easier, we need to invoke npm run build
when we publish this ASP.NET Core app.
Step 7: Invoke npm run build during publish
\n\nAdd this to your .csproj
-file and it should work:
\t<PropertyGroup>\n\t\t<SpaRoot>clientapp\\</SpaRoot>\n\t</PropertyGroup>\n\n\t<Target Name=\"PublishRunWebpack\" AfterTargets=\"ComputeFilesToPublish\">\n\t\t<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm install\" />\n\t\t<Exec WorkingDirectory=\"$(SpaRoot)\" Command=\"npm run build\" />\n\n\t\t<!-- Include the newly-built files in the publish output -->\n\t\t<ItemGroup>\n\t\t\t<DistFiles Include=\"$(SpaRoot)build\\**\" />\n\t\t\t<ResolvedFileToPublish Include=\"@(DistFiles->'%(FullPath)')\" Exclude=\"@(ResolvedFileToPublish)\">\n\t\t\t\t<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->\n\t\t\t\t<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>\n\t\t\t\t<ExcludeFromSingleFile>true</ExcludeFromSingleFile>\n\t\t\t</ResolvedFileToPublish>\n\t\t</ItemGroup>\n\t</Target>\n
Be aware that these instructions are copied from the original ASP.NET Core React template and slightly modified, otherwise the paths wouldn’t match.
\n\nWith this setup you can add any spa app that you would like to add to your “normal” ASP.NET Core project.
\n\nIf everything works as expected you should be able to start the React app in Visual Studio Code like this:
\n\n\n\nBe aware of the https://localhost:3333/spaApp. The port and the name are important for our sample!
\n\nStart your hosting ASP.NET Core app in Visual Studio (or in any IDE that you like) and all requests that point to spaApp will use the WebPack DevServer in the background:
With this setup you can mix all client & server side styles as you like - mission succeeded, and you can use any client setup (CRA or anything else) that you would like to.
\n\nThe code (with slightly modified values, e.g. another port) can be found here. \nBe aware that npm i
needs to be run first.
I uploaded a video on my YouTube channel (in German) about this setup:
\n\n\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/01/25/aspnet-core-and-react/","RawContent":null,"Thumbnail":null},{"Title":"Your URL is flagged as malware/phishing, now what?","PublishedOn":"2023-01-04T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nOn my last day in 2022 - Friday, 23. December, I received a support ticket from one customer, that our software seems to be offline and it looks like that our servers are not responding. I checked our monitoring and the server side of the customer and everything was fine. \nMy first thought: Maybe a misconfiguration on the customer side, but after a remote support session with the customer, I saw that it “should work”, but something in the customer network blocks the requests to our services.\nNext thought: Firewall or proxy stuff. Always nasty, but we are just using port 443, so nothing too special.
\n\nAfter a while I received a phone call from the customer’s firewall team and they discovered the problem: They are using a firewall solution from “Check Point” and our domain was flagged as “phishing”/“malware”. What the… \nThey even created an exception so that Check Point doesn’t block our requests, but then the next problem occurred: The customer’s “Windows Defender for Office 365” had the same “flag” for our domain, so they reverted everything, because they didn’t want to change their settings too much.
\n\n\n\nBe aware that from our end everything was working “fine”: I could access the customer services, and our own Windows Defender didn’t have any problems with this domain.
\n\nSomehow our domain was flagged as malware/phishing and we needed to get this false positive listing changed. I guess there are tons of services that “track” “bad” websites, and maybe they are all connected somehow. From this incident I can only suggest:
\n\nIf you have trouble with Check Point:
\n\nGo to “URLCAT”, register an account and try to change the category of your domain. After you submit the “change request” you will get an email like this:
\n\nThank you for submitting your category change request.\nWe will process your request and notify you by email (to: xxx.xxx@xxx.com ).\nYou can follow the status of your request on this page.\nYour request details\nReference ID: [GUID]\nURL: https://[domain].com\nSuggested Categories: Computers / Internet,Business / Economy\nComment: [Given comment]\n
After ~1-2 days the change was done. Not sure if this is automated or not, but it was during Christmas.
\n\nIf you have trouble with Windows Defender:
\n\nGo to “Report submission” in your Microsoft 365 Defender setting (you will need an account with special permissions, e.g. global admin) and add the URL as “Not junk”.
\n\n\n\nI’m not really sure if this helped or not, because we didn’t have any issues with the domain itself, and I’m not sure if those “false positive” tickets bubble up into a “global Defender catalog” or if this only affects our own tenant.
\n\nAnyway - after those tickets were “resolved” by Check Point / Microsoft, the problem on the customer side disappeared and everyone was happy. This was my first experience with such a “false positive malware report”. I’m not sure how we ended up on such a list or why only one customer was affected.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2023/01/04/checkpoint-and-defender-false-positive-url/","RawContent":null,"Thumbnail":null},{"Title":"SQLLocalDb update","PublishedOn":"2022-12-03T22:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nSqlLocalDb is a “developer” SQL server, without the “full” SQL Server (Express) installation. If you just develop on your machine and don’t want to run a “full blown” SQL Server, this is the tooling that you might need.
\n\nFrom the Microsoft Docs:
\n\n\n\n\nMicrosoft SQL Server Express LocalDB is a feature of SQL Server Express targeted to developers. It is available on SQL Server Express with Advanced Services.
\n\nLocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. Once LocalDB is installed, you can initiate a connection using a special connection string. When connecting, the necessary SQL Server infrastructure is automatically created and started, enabling the application to use the database without complex configuration tasks. Developer Tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server.
\n
(I’m not really sure, how I ended up on this problem, but I after I solved the problem I did it on my “To Blog”-bucket list)
\n\nFrom time to time there is a new SQLLocalDb version, but upgrading an existing installation is a bit “weird”.
\n\nIf you have installed an older SQLLocalDb version you can manage it via sqllocaldb
. If you want to update you must delete the “current” MSSQLLocalDB in the first place.
To do this, use:
\n\nsqllocaldb stop MSSQLLocalDB\nsqllocaldb delete MSSQLLocalDB\n
Then download the newest version from Microsoft. \nIf you choose “Download Media” you should see something like this:
\n\n\n\nDownload it, run it and restart your PC - after that you should be able to connect to SQLLocalDb.
\n\nWe solved this issue with help of this blogpost.
\n\nHope this helps! (and I can remove it now from my bucket list \\o/ )
\n","Href":"https://blog.codeinside.eu/2022/12/03/sqllocaldb-update/","RawContent":null,"Thumbnail":null},{"Title":"Azure DevOps & Azure Service Connection","PublishedOn":"2022-10-04T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nToday I needed to set up a new release pipeline on our Azure DevOps Server installation to deploy some stuff automatically to Azure. The UI (at least on the Azure DevOps Server 2020 (!)) is not really clear about how to connect those two worlds, and that’s why I’m writing this short blogpost.
\n\nFirst - under project settings - add a new service connection. Use the Azure Resource Manager
-service. Now you should see something like this:
Be aware: You will need to register an app inside your Azure AD and need permissions to set it up. If you are not able to follow these instructions, you might need to talk to your Azure subscription owner.
\n\nSubscription id:
\n\nCopy the id of your subscription here. It can be found in the subscription details:
\n\n\n\nKeep this tab open, because we need it later!
\n\nService principal id/key & tenant id:
\n\nNow this wording about “Service principal” is technically correct, but really confusing if you are not familiar with Azure AD. A “Service principal” is like a “service user”/“app” that you need to register before you can use it.\nThe easiest route is to create such an app via the Azure CLI (Bash):
\n\naz ad sp create-for-rbac --name DevOpsPipeline\n
If this command succeeds you should see something like this:
\n\n{\n \"appId\": \"[...GUID..]\",\n \"displayName\": \"DevOpsPipeline\",\n \"password\": \"[...PASSWORD...]\",\n \"tenant\": \"[...Tenant GUID...]\"\n}\n
This creates an “Serivce principal” with a random password inside your Azure AD. The next step is to give this “Service principal” a role on your subscription, because it has currently no permissions to do anything (e.g. deploy a service etc.).
\n\nGo to the subscription details page and then to Access control (IAM). There you can add your “DevOpsPipeline”-App as “Contributor” (Be aware that this is a “powerful role”!).
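If you prefer the CLI over the Portal, the same role assignment can be sketched with the Azure CLI - the GUID placeholders below stand for the appId from the output above and your subscription id, and, as mentioned, “Contributor” on the whole subscription is a powerful role:

```shell
# Assign the "Contributor" role to the service principal on the whole subscription.
# Replace the placeholders with your own appId and subscription id.
az role assignment create \
  --assignee "[...appId GUID...]" \
  --role "Contributor" \
  --scope "/subscriptions/[...subscription id...]"
```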
\n\nAfter that use the \"appId\": \"[...GUID..]\"
from the command as Service Principal Id. \nUse the \"password\": \"[...PASSWORD...]\"
as Service principal key and the \"tenant\": \"[...Tenant GUID...]\"
for the tenant id.
Now you should be able to “Verify” this connection and it should work.
\n\nLinks:\nThis blogpost helped me a lot. Here you can find the official documentation.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/10/04/azure-devops-azure-service-connection/","RawContent":null,"Thumbnail":null},{"Title":"'error MSB8011: Failed to register output.' & UTF8-BOM files","PublishedOn":"2022-08-30T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nBe aware: I’m not a C++ developer and this might be an “obvious” problem, but it took me a while to resolve this issue.
\n\nIn our product we have very few C++ projects. We use these projects for very special Microsoft Office COM stuff and because of COM we need to register some components during the build. Everything worked as expected, but we renamed a few files and our build broke with:
\n\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2302,5): warning MSB3075: The command \"regsvr32 /s \"C:/BuildAgentV3_1/_work/67/s\\_Artifacts\\_ReleaseParts\\XXX.Client.Addin.x64-Shims\\Common\\XXX.Common.Shim.dll\"\" exited with code 5. Please verify that you have sufficient rights to run this command. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\MSBuild\\Microsoft\\VC\\v170\\Microsoft.CppCommon.targets(2314,5): error MSB8011: Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions. [C:\\BuildAgentV3_1\\_work\\67\\s\\XXX.Common.Shim\\XXX.Common.Shim.vcxproj]\n\n(xxx = redacted)\n
The crazy part was: Using an older version of our project just worked as expected, but all changes were “fine” from my point of view.
\n\nAfter many, many attempts I remembered that our diff tool doesn’t show us everything - so I checked the file encodings: UTF8-BOM
Somehow, if you have a UTF8-BOM encoded file that your C++ project uses to register COM stuff, it will fail. I changed the encoding to UTF8
and everything worked as expected.
What a day… lessons learned: Be aware of your file encodings.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/08/30/error-msb8011-failed-to-register-output-and-utf8bom/","RawContent":null,"Thumbnail":null},{"Title":"Which .NET Framework Version is installed on my machine?","PublishedOn":"2022-08-29T23:15:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIf you need to know which .NET Framework Version (the “legacy” .NET Framework) is installed on your machine try this handy oneliner:
\n\nGet-ItemProperty \"HKLM:SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Full\"\n
Result:
\n\nCBS : 1\nInstall : 1\nInstallPath : C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319\\\nRelease : 528372\nServicing : 0\nTargetVersion : 4.0.0\nVersion : 4.8.04084\nPSPath : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework\n Setup\\NDP\\v4\\Full\nPSParentPath : Microsoft.PowerShell.Core\\Registry::HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\nPSChildName : Full\nPSDrive : HKLM\nPSProvider : Microsoft.PowerShell.Core\\Registry\n
The version should give you more than enough information.
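If you want to translate the Release value into a version number, Microsoft documents the mapping; a rough sketch with a few selected thresholds (not exhaustive - check the official docs for the full table):

```powershell
# Map the "Release" DWORD to a .NET Framework version (selected thresholds
# from Microsoft's documentation, e.g. 528372 falls into the 4.8 range).
$release = (Get-ItemProperty "HKLM:SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full").Release
if     ($release -ge 533320) { ".NET Framework 4.8.1 or later" }
elseif ($release -ge 528040) { ".NET Framework 4.8" }
elseif ($release -ge 461808) { ".NET Framework 4.7.2" }
elseif ($release -ge 461308) { ".NET Framework 4.7.1" }
else                         { ".NET Framework 4.7 or older" }
```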
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/08/29/which-dotnet-version-is-installed-via-powershell/","RawContent":null,"Thumbnail":null},{"Title":"How to run an Azure App Service WebJob with parameters","PublishedOn":"2022-07-22T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe are using WebJobs in our Azure App Service deployment and they are pretty “easy” for the most part. Just register a WebJob or deploy your .exe/.bat/.ps1/...
under the \\site\\wwwroot\\app_data\\Jobs\\triggered
folder and it should execute as described in the settings.job
.
If you put any executable in this WebJob folder, it will be executed as planned.
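For a triggered WebJob the schedule lives in that settings.job file. As a minimal sketch (the "schedule" property takes a six-field NCRONTAB expression; the interval here is just an example), a job that runs every 15 minutes could look like this:

```json
{
  "schedule": "0 */15 * * * *"
}
```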
\n\nProblem: Parameters
\n\nIf you have a my-job.exe
, then this will be invoked from the runtime. But what if you need to invoke it with a parameter like my-job.exe -param \"test\"
?
Solution: run.cmd
\n\nThe WebJob environment is “greedy” and will search for a run.cmd
(or run.exe
) and if this is found, it will be executed and it doesn’t matter if you have any other .exe
files there.\nStick to the run.cmd
and use this to invoke your actual executable like this:
echo \"Invoke my-job.exe with parameters - Start\"\n\n..\\MyJob\\my-job.exe -param \"test\"\n\necho \"Invoke my-job.exe with parameters - Done\"\n
Be aware that the path must “match”. We use this run.cmd
-approach in combination with the is_in_place
-option (see here) and are happy with the results.
A more detailed explanation can be found here.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/07/22/how-to-run-a-azure-appservice-webjob-with-parameters/","RawContent":null,"Thumbnail":null},{"Title":"How to use IE proxy settings with HttpClient","PublishedOn":"2022-03-28T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nInternet Explorer is - mostly - dead, but some weird settings are still around and “attached” to the old world, at least on Windows 10. \nIf your system administrator uses some advanced proxy settings (e.g. a PAC-file), those will be attached to the user’s IE settings.
\n\nIf you want to use this with an HttpClient, you need to code something like this:
\n\n string target = \"https://my-target.local\";\n var targetUri = new Uri(target);\n var proxyAddressForThisUri = WebRequest.GetSystemWebProxy().GetProxy(targetUri);\n if (proxyAddressForThisUri == targetUri)\n {\n // no proxy needed in this case\n _httpClient = new HttpClient();\n }\n else\n {\n // proxy needed\n _httpClient = new HttpClient(new HttpClientHandler() { Proxy = new WebProxy(proxyAddressForThisUri) { UseDefaultCredentials = true } });\n }\n
The GetSystemWebProxy() call gives access to the current user’s system proxy settings. Then we can query which proxy is needed for the target. If the result is the same address as the target, no proxy is needed. Otherwise, we inject a new WebProxy for this address.
\n\nHope this helps!
\n\nBe aware: Creating new HttpClients is (at least in a server environment) not recommended. Try to reuse the same HttpClient instance!
\n\nAlso note: The proxy settings in Windows 11 are now built into the system settings, but the API still works :)
\n\n\n","Href":"https://blog.codeinside.eu/2022/03/28/how-to-use-ie-proxy-settings-with-httpclient/","RawContent":null,"Thumbnail":null},{"Title":"Redirect to HTTPS with a simple web.config rule","PublishedOn":"2022-01-05T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThe scenario is easy: My website is hosted in IIS and I would like to redirect all incoming HTTP traffic to the HTTPS counterpart.
\n\nThis is your solution - a “simple” rule:
\n\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<configuration>\n <system.webServer>\n <rewrite>\n <rules>\n <rule name=\"Redirect to https\" stopProcessing=\"true\">\n <match url=\".*\" />\n <conditions logicalGrouping=\"MatchAny\">\n <add input=\"{HTTPS}\" pattern=\"off\" />\n </conditions>\n <action type=\"Redirect\" url=\"https://{HTTP_HOST}{REQUEST_URI}\" redirectType=\"Found\" />\n </rule>\n </rules>\n </rewrite>\n </system.webServer>\n</configuration>\n
We used this in the past to set up a “catch all” web site in IIS that redirects all incoming HTTP traffic.\nThe actual web applications had only the HTTPS binding in place.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2022/01/05/redirect-to-https-with-a-simple-webconfig-rule/","RawContent":null,"Thumbnail":null},{"Title":"Select random rows","PublishedOn":"2021-12-06T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nLet’s say we have a SQL table and want to retrieve 10 rows randomly - how would you do that? Although I have been working with SQL for x years, I have never encountered that problem. The solution however is quite “simple” (at least if you aren’t picky about how we define “randomness” and don’t try this on millions of rows):
\n\nThe most boring way is to use the ORDER BY NEWID()
clause:
SELECT TOP 10 * FROM Products ORDER BY NEWID()\n
This works, but if you do that on “large” datasets you might hit performance problems (e.g. more on that here)
\n\nSQL Server implements the TABLESAMPLE clause
which was new to me. It seems to perform much better than the ORDER BY NEWID()
clause, but behaves a bit weird. With this clause you can specify the “sample” from a table. The size of the sample can be specified as PERCENT
or ROWS
(which are then converted to percent internally).
Syntax:
\n\nSELECT TOP 10 * FROM Products TABLESAMPLE (25 PERCENT)\nSELECT TOP 10 * FROM Products TABLESAMPLE (100 ROWS)\n
The weird part is that the given number might not match the number of rows of your result. You might get more or fewer results, and if your tablesample is too small you might even get nothing in return. There are some clever ways to work around this (e.g. using the TOP 100
statement with a much larger tablesample clause to get a guaranteed result set), but it feels “strange”.\nIf you hit limitations with the first solution you might want to read more on this blog or in the Microsoft Docs.
Of course there is a great Stackoverflow thread with even wilder solutions.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2021/12/06/select-random-rows/","RawContent":null,"Thumbnail":null},{"Title":"SQL collation problems","PublishedOn":"2021-11-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThis week I deployed a new feature and tried it on different SQL databases and was a bit surprised that on one database this error message came up:
\n\nCannot resolve the collation conflict between \"Latin1_General_CI_AS\" and \"SQL_Latin1_General_CP1_CI_AS\" in the equal to operation.\n
This was strange, because - at least in theory - all databases have the same schema and I was sure that each database had the same collation setting.
\n\nWell… my theory was wrong and this SQL statement told me that “some” columns had a different collation.
\n\nselect sc.name, sc.collation_name from sys.columns sc\ninner join sys.tables t on sc.object_id=t.object_id\nwhere t.name='TABLENAME'\n
As it turns out, some columns had the collation Latin1_General_CI_AS
and some had SQL_Latin1_General_CP1_CI_AS
. I’m still not sure why, but I needed to do something.
To change the collation you can execute something like this:
\n\nALTER TABLE MyTable\nALTER COLUMN [MyColumn] NVARCHAR(200) COLLATE SQL_Latin1_General_CP1_CI_AS\n
Unfortunately there are restrictions and you can’t change the collation if the column is referenced by any one of the following:
\n\nBe aware: If you are not in control of the collation or if the collation is “fine” and you want to do this operation anyway, there might be a way to specify the collation in the SQL query.
\n\nFor more information you might want to check out this Microsoft Docs “Set or Change the Column Collation”
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2021/11/24/sql-collations-problem/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Build 2021 session recommendations","PublishedOn":"2021-09-24T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nTo be fair: Microsoft Build 2021 was some months ago, but the content might still be relevant today. Sooo… it took me a while, but here is a list of sessions that I found interesting. Some sessions are “better” and some “lighter”; the order doesn’t reflect that - that was just the order I watched those videos.
\n\nThe headline has a link to the video and below are some notes.
\n\n](https://mybuild.microsoft.com/sessions/2915b9b6-6b45-430a-9df7-2671318e2161?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/b7d536c1-515f-476a-83d2-85b6cf14577a?source=sessions)
\n\nhttps://mybuild.microsoft.com/sessions/512470be-15d3-4b50-b180-6532c8153931?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/10930f2e-ad9c-460b-b91d-844d17a5a875?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/76ebac39-517d-44da-a58e-df4193b5efa9?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/08538f9b-e562-4d71-8b42-d240c3966ef0?source=sessions)
\n\n](https://mybuild.microsoft.com/sessions/70d379f4-1173-4941-b389-8796152ec7b8?source=sessions)
\n\nHope this helps.
\n","Href":"https://blog.codeinside.eu/2021/09/24/build-2021-recommendation/","RawContent":null,"Thumbnail":null},{"Title":"Today I learned (sort of) 'fltmc' to inspect the IO request pipeline of Windows","PublishedOn":"2021-05-30T22:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThe headline is obviously a big lie, because I followed this twitter conversation last year, but it’s still interesting to me and I wanted to write it down somewhere.
\n\nThe starting point was that Bruce Dawson (a Google programmer) noticed that building Chrome on Windows is slow for various reasons:
\n\n\n\n\nBased on some twitter discussion about source-file length and build times two months ago I wrote a blog post. It's got real data based on Chromium's build, and includes animations of build-time improvements:https://t.co/lsLH8BNe48
— Bruce Dawson (Antifa) (@BruceDawson0xB) March 31, 2020
Trentent Tye told him to disable the “filter driver”:
\n\n\n\n\ndisabling the filter driver makes it dead dead dead. Might be worth testing with the number and sizes of files you are dealing with. Even half a millisecond of processing time adds up when it runs against millions and millions of files.
— Trentent Tye (@TrententTye) April 1, 2020
If you have never heard of a “filter driver” (like me :)), you might want to take a look here.
\n\nTo see the loaded filter drivers on your machine, try this: run fltmc
(fltmc.exe) as admin.
Description:
\n\n\n\n\nEach filter in the list sit in a pipe through which all IO requests bubble down and up. They see all IO requests, but ignore most. Ever wondered how Windows offers encrypted files, OneDrive/GDrive/DB file sync, storage quotas, system file protection, and, yes, anti-malware? ;)
— Rich Turner (@richturn_ms) April 2, 2020
This makes more or less sense to me. I’m not really sure what to do with that information, but it’s cool (nerd cool, but anyway :)).
\n","Href":"https://blog.codeinside.eu/2021/05/30/fltmc-inspect-the-io-request-pipeline-of-windows/","RawContent":null,"Thumbnail":null},{"Title":"How to self host Google Fonts","PublishedOn":"2021-04-28T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nGoogle Fonts are really nice and widely used. Typically Google Fonts consist of the actual font files (e.g. woff, ttf, eot etc.) and some CSS, which points to those font files.
\n\nIn one of our applications, we used an HTML/CSS/JS Bootstrap-like theme and the theme linked some Google Fonts. The problem was that we wanted to self host everything.
\n\nAfter some research we discovered this tool: Google-Web-Fonts-Helper
\n\n\n\nPick your font, select your preferred CSS option (e.g. if you need to support older browsers etc.) and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)
\n\nThe project site is on GitHub.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2021/04/28/how-to-self-host-google-fonts/","RawContent":null,"Thumbnail":null},{"Title":"Microsoft Graph: Read user profile and group memberships","PublishedOn":"2021-01-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn our application we have a background service, that “syncs” user data and group membership information to our database from the Microsoft Graph.
\n\nThe permission model:
\n\nProgramming against the Microsoft Graph is quite easy. There are many SDKs available, but understanding the permission model is hard.
\n\n‘Directory.Read.All’ and ‘User.Read.All’:
\n\nInitially we only synced the “basic” user data to our database, but then some customers wanted to reuse some other data already stored in the graph. Our app required the ‘Directory.Read.All’ permission, because we thought that this would be the “highest” permission - this is wrong!
\n\nIf you need “directory” information, e.g. memberships, the Directory.Read.All
or Group.Read.All
is a good starting point. But if you want to load specific user data, you might need to have the User.Read.All
permission as well.
Hope this helps!
\n","Href":"https://blog.codeinside.eu/2021/01/31/microsoft-graph-read-user-profile-and-group-memberships/","RawContent":null,"Thumbnail":null},{"Title":"How to get all distribution lists of a user with a single LDAP query","PublishedOn":"2020-12-31T00:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nIn 2007 I wrote a blogpost about how easy it is to get all “groups” of a given user via the tokenGroups attribute.
\n\nLast month I had the task to check why “distribution list memberships” are not part of the result.
\n\nThe reason is simple:
\n\nA pure distribution list (not security enabled) is not a security group, and only security groups are part of the “tokenGroups” attribute.
\n\nAfter some thoughts and discussions we agreed, that it would be good if we could enhance our function and treat distribution lists like security groups.
\n\nGetting all groups of a given user might seem trivial, but the problem is that groups can contain other groups. \nAs always, there are a couple of ways to get a “full flat” list of all group memberships.
\n\nA stupid way would be to load all groups in a recursive function - this might work, but will result in a flood of requests.
\n\nA clever way would be to write a good LDAP query and let the Active Directory do the heavy lifting for us, right?
\n\nI found some sample code online with a very strange LDAP query and it turns out:\nThere is a “magic” LDAP query called “LDAP_MATCHING_RULE_IN_CHAIN” and it does everything we are looking for:
\n\nvar getGroupsFilterForDn = $\"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={distinguishedName}))\";\n using (var dirSearch = CreateDirectorySearcher(getGroupsFilterForDn))\n {\n using (var results = dirSearch.FindAll())\n {\n foreach (SearchResult result in results)\n {\n if (result.Properties.Contains(\"name\") && result.Properties.Contains(\"objectSid\") && result.Properties.Contains(\"groupType\"))\n groups.Add(new GroupResult() { Name = (string)result.Properties[\"name\"][0], GroupType = (int)result.Properties[\"groupType\"][0], ObjectSid = new SecurityIdentifier((byte[])result.Properties[\"objectSid\"][0], 0).ToString() });\n }\n }\n }\n
With a given distinguishedName of the target user, we can load all distribution and security groups (see below…) transitively!
\n\nDuring our testing we found some minor differences between the LDAP_MATCHING_RULE_IN_CHAIN and the tokenGroups approach. Some “system-level” security groups were missing with the LDAP_MATCHING_RULE_IN_CHAIN way. In our production code we use a combination of those two approaches and it seems to work.
\n\nA full demo code how to get all distribution lists for a user can be found on GitHub.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/12/31/how-get-all-distribution-lists-of-a-user-with-a-single-ldap-query/","RawContent":null,"Thumbnail":null},{"Title":"Update AzureDevOps Server 2019 to AzureDevOps Server 2019 Update 1","PublishedOn":"2020-11-30T18:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe did this update in May 2020, but I forgot to publish the blogpost… so here we are
\n\nLast year we updated to Azure DevOps Server 2019 and it went more or less smooth.
\n\nIn May we decided to update to the “newest” release at that time: Azure DevOps Server 2019 Update 1.1
\n\nOur AzureDevOps Server was running on a “new” Windows Server 2019 and everything was still kind of newish - so we just needed to update the AzureDevOps Server app.
\n\nThe actual update was really easy, but we had some issues after the installation.
\n\nWe had some issues with our Build Agents - they couldn’t connect to the AzureDevOps Server:
\n\nTF400813: Resource not available for anonymous access\n
As a first “workaround” (and a nice enhancement) we switched from HTTP to HTTPS internally, but this didn’t solve the problem.
\n\nThe real reason was that our “Azure DevOps Service User” didn’t have the required write permissions for this folder:
\n\nC:\\ProgramData\\Microsoft\\Crypto\\RSA\\MachineKeys\n
The connection issue went away, but now we had introduced another problem: Our SSL Certificate was “self signed” (from our Domain Controller), so we needed to register the agents like this:
\n\n.\\config.cmd --gituseschannel --url https://.../tfs/ --auth Integrated --pool Default-VS2019 --replace --work _work\n
The important parameter is -gituseschannel, which is needed when dealing with “self signed, but Domain ‘trusted’“-certificates.
\n\nWith this setting everything seemed to work as expected.
\n\nOnly node.js projects or tooling were “problematic”, because node.js itself doesn’t use the Windows Certificate Store.
\n\nTo resolve this, the root certificate from our Domain controller must be stored on the agent.
\n\n [Environment]::SetEnvironmentVariable(\"NODE_EXTRA_CA_CERTS\", \"C:\\SSLCert\\root-CA.pem\", \"Machine\") \n
The update itself was easy, but it took us some hours to configure our Build Agents. After the initial hiccup it went smooth from there - no issues and we are ready for the next update, which is already released.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/11/30/update-onprem-azuredevops-server-2019-to-azuredevops-server-2019-update1/","RawContent":null,"Thumbnail":null},{"Title":"DllRegisterServer 0x80020009 Error","PublishedOn":"2020-10-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nLast week I had a very strange issue and the solution was really “easy”, but took me a while.
\n\nFor our products we build Office COM Addins with a C++ based “Shim” that boots up our .NET code (e.g. something like this).\nAs is the nature of COM: it requires some pretty dumb registry entries to work, and in theory our toolchain should “build” and automatically “register” the output.
\n\nThe registration process just failed with an error message like this:
\n\nThe module xxx.dll was loaded but the call to DllRegisterServer failed with error code 0x80020009\n
After some research you will find some very old stuff or only general advice like in this Stackoverflow.com question, e.g. “run it as administrator”.
\n\nLuckily we had another project where we used the same approach and it worked without any issues. After comparing the files I noticed some subtle differences: the file encoding was different!
\n\nIn my failing project some C++ files were encoded with UTF8-BOM. I changed everything to UTF8 and after this change it worked.
\n\nMy reaction:
\n\n(╯°□°)╯︵ ┻━┻\n
I’m not a C++ dev and I’m not even sure why some files had the wrong encoding in the first place. It “worked” - at least Visual Studio 2019 was able to build the stuff, but registering it with “regsvr32” just failed.
\n\nI needed some hours to figure that out.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/10/31/dllregisterserver-0x80020009-error/","RawContent":null,"Thumbnail":null},{"Title":"How to share an Azure subscription in a team","PublishedOn":"2020-09-29T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWe at Sevitec are moving more and more workloads for us or our customers to Azure.
\n\nSo the basic question needs an answer:
\n\nHow can a team share an Azure subscription?
\n\nBe aware: This approach works for us. There might be better options. If we do something stupid, just tell me in the comments or via email - help is appreciated.
\n\nWe have a “company directory” with a fully configured Azure Active Directory (incl. User sync between our OnPrem system, Office 365 licenses etc.).
\n\nOur rule of thumb is: We create an individual directory for each product team and all team members are invited into the new directory.
\n\nKeep in mind: A directory itself costs you nothing but might help you to keep things manageable.
\n\n\n\nThis step might be optional, but all team members - except the “Administrator” - have the same rights and permissions in our company. To keep things simple, we created a group with all team members.
\n\n\n\nNow create a subscription. The typical “Pay-as-you-go” offer will work. Be aware that the user who creates the subscription is initially set up as the Administrator.
\n\n\n\nThis is the most important step:
\n\nYou need to grant the individual users or the group (from step 2) the “Contributor” role for this subscription via the “Access control (IAM)”.\nThe hard part is to understand how those “Role assignments” affect the subscription. I’m not even sure if the “Contributor” is the best fit, but it works for us.
\n\n\n\nI’m not really sure why such a basic concept is labeled so poorly, but once you pick the correct role assignment, the other team members should be able to use the subscription.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/09/29/how-to-share-an-azure-subscription-in-a-team/","RawContent":null,"Thumbnail":null},{"Title":"How to run a legacy WCF .svc Service on Azure AppService","PublishedOn":"2020-08-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nLast month we wanted to run a good old WCF-powered service on Azure’s “App Service”.
\n\nIf you are not familiar with WCF: Good! For the interested ones: WCF is or was a framework to build mostly SOAP-based services in the .NET Framework 3.0 timeframe. Some parts were “good”, but most developers would call it a complex monster.
\n\nEven in the glory days of WCF I tried to avoid it at all cost, but unfortunately I need to maintain a WCF based service.
\n\nFor the curious: The project template and the tech is still there. Search for “WCF”.
\n\n\n\nThe template will produce something like that:
\n\nThe actual “service endpoint” is the Service1.svc
file.
Let’s assume we have an application with a .svc endpoint. In theory we can deploy this application to a standard Windows Server/IIS without major problems.
\n\nNow we try to deploy this very same application to Azure AppService and this is the result after we invoke the service from the browser:
\n\n\"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.\" (HTTP Response was 404)\n
Strange… very strange. In theory a blank HTTP 400 should appear, but not an HTTP 404. We knew the service itself was not “triggered”, because we had some logging in place, and the request didn’t get to the actual service.
\n\nAfter hours of debugging, testing and googling around I created a new “blank” WCF service from the Visual Studio template and got the same error.
\n\nThe good news: It was not just my code - something was blocking the request.
\n\nAfter some hours I found a helpful switch in the Azure Portal and activated the “Failed Request tracing” feature (yeah… I could have found it sooner) and I discovered this:
\n\n\n\nMy initial thoughts were correct: The request was blocked. It was treated as “static content” and the actual WCF module was not mapped to the .svc extension.
\n\nTo “re-map” the .svc
extension to the correct handler I needed to add this to the web.config
:
...\n<system.webServer>\n ...\n\t<handlers>\n\t\t<remove name=\"svc-integrated\" />\n\t\t<add name=\"svc-integrated\" path=\"*.svc\" verb=\"*\" type=\"System.ServiceModel.Activation.HttpHandler\" resourceType=\"File\" preCondition=\"integratedMode\" />\n\t</handlers>\n</system.webServer>\n...\n\n
With this configuration everything worked as expected on Azure AppService.
\n\nBe aware:
\n\nI’m really not 100% sure why this is needed in the first place. I’m also not 100% sure if the name svc-integrated
is correct or important.
This blogpost is a result of these tweets.
\n\nThat was a tough ride… Hope this helps!
\n","Href":"https://blog.codeinside.eu/2020/08/31/how-to-run-a-legacy-wcf-svc-service-on-azure-app-service/","RawContent":null,"Thumbnail":null},{"Title":"EWS, Exchange Online and OAuth with a Service Account","PublishedOn":"2020-07-31T23:45:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nThis week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.
\n\nBut here is the full story:
\n\nWe wanted to access contact information via a web service from the organization, just like the traditional “Global Address List” in Exchange/Outlook. We knew that EWS was an option for the OnPrem Exchange, but what about Exchange Online?
\n\nThe big problem: Authentication is tricky. We wanted to use a “traditional” Service Account approach (think of username/password). Unfortunately the “basic auth” way will be blocked in the near future because of security concerns (makes sense TBH). There is an alternative approach available, but at first it seems not to work as we would like.
\n\nSo… what now?
\n\nThe Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and OnPrem Exchanges. On the other hand we could use the Microsoft Graph, but - at least currently - there is not a single “contact” API available.
\n\nTo mimic the GAL we would need to query List Users and List orgContacts, which would be ok, but the “orgContacts” has a “flaw”. \n“Hidden” contacts (“msexchhidefromaddresslists”) are returned from this API and we thought that this might be a NoGo for our customers.
\n\nAnother argument for using EWS was, that we could support OnPrem and Online with one code base.
\n\nThe good news is that EWS and the auth problem are more or less well documented here.
\n\nThere are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.
\n\nDelegation:
\n\nDelegation means, that we can write a desktop app and all actions are executed in the name of the signed in user.
\n\nApplication:
\n\nApplication means, that the app itself can do some actions without any user involved.
\n\nAt first we thought that we might need to use the “application” way.
\n\nThe good news is, that this was easy and worked. \nThe bad news is, that the application needs the EWS permission “full_access_as_app”, which means that our application can access all mailboxes from this tenant. This might be ok for certain apps, but this scared us.
\n\nBack to the delegation way:
\n\nThe documentation from Microsoft is good, but our “Service Account” use case was not mentioned. In the example from Microsoft a user needs to log in manually.
\n\nAfter some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:
\n\nFollow the normal “delegate” steps from the Microsoft Docs
\nInstead of this code, which will trigger the login UI:
\n...\n// The permission scope required for EWS access\nvar ewsScopes = new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" };\n\n// Make the interactive token request\nvar authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();\n...\n
Use the “AcquireTokenByUsernamePassword” method:
\n\n...\nvar cred = new NetworkCredential(\"UserName\", \"Password\");\nvar authResult = await pca.AcquireTokenByUsernamePassword(new string[] { \"https://outlook.office.com/EWS.AccessAsUser.All\" }, cred.UserName, cred.SecurePassword).ExecuteAsync();\n...\n
To make this work you need to enable the “Treat application as public client” under “Authentication” > “Advanced settings” in our AAD Application because this uses the “Resource owner password credential flow”.
\n\nNow you should be able to get the AccessToken and do some EWS magic.
\n\nI posted a shorter version on Stackoverflow.com
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/07/31/ews-exchange-online-oauth-with-a-service-account/","RawContent":null,"Thumbnail":null},{"Title":"Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?","PublishedOn":"2020-06-30T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWithin our product we move more and more stuff into the .NET Core land.\nLast week we had a discussion around the needed software requirements, and in the .NET Framework land this question was always easy to answer:
\n\n\n\n\n.NET Framework 4.5 or higher.
\n
With .NET Core the answer is slightly different:
\n\nIn theory major versions are compatible, e.g. if you compiled your app with .NET Core 3.0 and the .NET Core 3.1 runtime is the only installed 3.x runtime on the machine, that runtime is used.
\n\nThis system is called “Framework-dependent apps roll forward” and sounds good.
\n\nUnfortunately this didn’t work for us. Not sure why, but our app refused to work because a .dll was not found or missing. The reason is currently not clear. Be aware that Microsoft has written a hint that such things might occur:
\n\n\n\n\nIt’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.
\n
With .NET Core we could ship the framework with our app and it should run fine wherever we deploy it.
\n\nRead the docs about the “app roll forward” approach if you have similar concerns, but test your app with that combination.
\n\nAs a sidenote: 3.0 is not supported anymore, so it would be good to upgrade it to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/06/30/can-a-dotnet-core-30-compiled-app-run-with-a-dotnet-core-31-runtime/","RawContent":null,"Thumbnail":null},{"Title":"SqlBulkCopy for fast bulk inserts","PublishedOn":"2020-05-31T23:30:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"\nWithin our product OneOffixx we can create a “full export” from the product database. Because of limitations with normal MS SQL backups (e.g. compatibility with older SQL databases etc.), we created our own export mechanism.\nAn export can be up to 1GB and more. This is nothing too serious and far from “big data”, but still not easy to handle, and we had some issues importing larger “exports”. \nOur importer was based on an Entity Framework 6 implementation and it was really slow… last month we tried to resolve this and we are quite happy. Here is how we did it:
\n\nTL;DR Problem:
\n\nBulk Insert with an Entity Framework based implementation is really slow. There is at least one NuGet package which seems to help, but unfortunately we ran into some obscure issues. This Stackoverflow question highlights some numbers and ways of doing it.
\n\nSqlBulkCopy to the rescue:
\n\nAfter my failed attempt to tame our EF implementation I discovered the SqlBulkCopy operation. In .NET (Full Framework and .NET Standard!) the usage is simple via the “SqlBulkCopy” class.
\n\nOur importer looks more or less like this:
\n\nusing (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30), TransactionScopeAsyncFlowOption.Enabled))\nusing (SqlBulkCopy sqlBulk = new SqlBulkCopy(databaseConnectionString))\n {\n var dt = new DataTable();\n dt.Columns.Add(\"DataColumnA\");\n dt.Columns.Add(\"DataColumnB\");\n dt.Columns.Add(\"DataColumnId\", typeof(Guid));\n\n foreach (var dataEntry in data)\n {\n dt.Rows.Add(dataEntry.A, dataEntry.B, dataEntry.Id);\n }\n\n sqlBulk.DestinationTableName = \"Data\";\n sqlBulk.AutoMapColumns(dt);\n sqlBulk.WriteToServer(dt);\n\n scope.Complete();\n }\n\npublic static class Extensions\n {\n public static void AutoMapColumns(this SqlBulkCopy sbc, DataTable dt)\n {\n sbc.ColumnMappings.Clear();\n\n foreach (DataColumn column in dt.Columns)\n {\n sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);\n }\n }\n } \n
Some notes:
\n\nOnly “downside”: SqlBulkCopy is a table by table insert. You need to insert your data in the correct order if you have any db constraints in your schema.
\n\nResult:
\n\nWe reduced the import from several minutes to seconds :)
\n\nHope this helps!
\n","Href":"https://blog.codeinside.eu/2020/05/31/sqlbulkcopy-for-fast-bulk-inserts/","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"YouTube":{"FeedItems":[{"Title":"Erste Schritte mit dem Azure OpenAI Service","PublishedOn":"2023-03-23T22:30:48+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=VVNHT4gVxDo","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/VVNHT4gVxDo/hqdefault.jpg"},{"Title":"Erster Schritt in die Source Control: Visual Studio Projekte auf GitHub pushen","PublishedOn":"2023-03-17T21:59:57+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=iKQS5nYbC-k","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/iKQS5nYbC-k/hqdefault.jpg"},{"Title":"Vite.js für React & TypeScript für ASP.NET Core & Visual Studio Entwickler","PublishedOn":"2023-02-12T00:25:03+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=-2iiXpBcmDY","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/-2iiXpBcmDY/hqdefault.jpg"},{"Title":"React.js mit TypeScript in ASP.NET Core mit Visual Studio & Visual Studio Code","PublishedOn":"2023-01-26T23:35:26+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=IgW79wxMO-c","RawContent":null,"Thumbnail":"https://i2.ytimg.com/vi/IgW79wxMO-c/hqdefault.jpg"},{"Title":"React.js mit ASP.NET Core - ein Einstieg mit Visual Studio","PublishedOn":"2022-10-07T23:15:55+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=gIzMtWDs_QM","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/gIzMtWDs_QM/hqdefault.jpg"},{"Title":"Einstieg in die Webentwicklung mit .NET 6 & ASP.NET 
Core","PublishedOn":"2022-04-12T21:13:18+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=WtpzsW5Xwqo","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/WtpzsW5Xwqo/hqdefault.jpg"},{"Title":"Das erste .NET 6 Programm","PublishedOn":"2022-01-30T22:21:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=fVzo2qJubmA","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/fVzo2qJubmA/hqdefault.jpg"},{"Title":"Azure SQL - ist das echt so teuer? Neee...","PublishedOn":"2022-01-11T21:49:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=dNaIOGQj15M","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/dNaIOGQj15M/hqdefault.jpg"},{"Title":"Was sind \"Project Templates\" in Visual Studio?","PublishedOn":"2021-12-22T22:36:25+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=_IMabo9yHSA","RawContent":null,"Thumbnail":"https://i4.ytimg.com/vi/_IMabo9yHSA/hqdefault.jpg"},{"Title":".NET Versionen - was bedeutet LTS und Current?","PublishedOn":"2021-12-21T21:06:29+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2ghTKF0Ey_0","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2ghTKF0Ey_0/hqdefault.jpg"},{"Title":"Einstieg in die .NET Entwicklung für Anfänger","PublishedOn":"2021-12-20T22:18:35+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=2EcSJDX-8-s","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/2EcSJDX-8-s/hqdefault.jpg"},{"Title":"Erste Schritte mit Unit Tests","PublishedOn":"2008-11-05T00:14:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=tjAv1-Qb4rY","RawContent":null,"Thumbnail":"https://i1.ytimg.com/vi/tjAv1-Qb4rY/hqdefault.jpg"},{"Title":"3 Schichten 
Architektur","PublishedOn":"2008-10-17T22:01:06+00:00","CommentsCount":0,"FacebookCount":0,"Summary":null,"Href":"https://www.youtube.com/watch?v=27yknlB8xeg","RawContent":null,"Thumbnail":"https://i3.ytimg.com/vi/27yknlB8xeg/hqdefault.jpg"}],"ResultType":"Feed"},"O_Blog":{"FeedItems":[{"Title":"How to build a simple hate speech detector with machine learning","PublishedOn":"2019-08-02T13:00:00+00:00","CommentsCount":0,"FacebookCount":0,"Summary":"Not everybody on the internet behaves nicely and some comments are just rude or offensive. If you run a web page that offers a public comment function, hate speech can be a real problem. For example in Germany, you are legally required to delete hate speech comments. This can be challenging if you have to check thousands of comments each day. \nSo wouldn’t it be nice if you could automatically check the user’s comment and give them a little hint to stay nice?\n
\n\nThe simplest thing you could do is to check if the user’s text contains offensive words. However, this approach is limited since you can offend people without using offensive words.
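As a hypothetical illustration of this naive word-list approach (the word list below is a placeholder I made up, not a real lexicon):

```python
# Naive baseline sketch: flag a comment if it contains any word from a
# blocklist. OFFENSIVE_WORDS is a placeholder set, not a real lexicon.
OFFENSIVE_WORDS = {"insult", "idiot"}

def contains_offensive_word(comment: str) -> bool:
    # Lowercase and split on whitespace; real tokenization would be smarter.
    return any(word in OFFENSIVE_WORDS for word in comment.lower().split())
```

This check misses offensive comments that avoid the listed words entirely, which is exactly why we train a model instead.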
\n\nThis post will show you how to train a machine learning model that can detect if a comment or text is offensive. To get started, you need just a few lines of Python code \\o/
\n\nFirst, you need data. In this case, you will need a list of offensive and non-offensive texts. I wrote this tutorial for a machine learning course in Germany, so I used German texts, but you should be able to use other languages too.
\n\nFor a machine learning competition, scientists provided a list of comments labeled as offensive and non-offensive (GermEval 2018, Subtask 1). This is perfect for us since we can just use this data.
\n\nTo tackle this task, I would first establish a baseline and then improve this solution step by step. Luckily, they also published the scores of all submissions, so we can get a sense of how well we are doing.
\n\nFor our baseline model we are going to use Facebook’s fastText. It’s simple to use, works with many languages and does not require any special hardware like a GPU. Oh, and it’s fast :)
\n\nAfter you have downloaded the training data file germeval2018.training.txt, you need to transform this data into a format that fastText can read.\nfastText’s standard format is “__label__[your label] some text”:
\n\n__label__offensive some insults\n__label__other have a nice day\n
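One way to sketch this conversion step, assuming the GermEval training file is tab-separated with the tweet text in the first column and the coarse label (OFFENSE or OTHER) in the second:

```python
# Sketch of the conversion, assuming tab-separated input lines of the form
# "<text>\t<coarse label>\t<fine label>" with coarse labels OFFENSE / OTHER.

def to_fasttext_line(raw_line: str) -> str:
    text, coarse_label = raw_line.rstrip("\n").split("\t")[:2]
    # Map GermEval's coarse labels to the labels used in this post.
    label = "offensive" if coarse_label == "OFFENSE" else "other"
    return f"__label__{label} {text}"

def convert(src_path: str, dst_path: str) -> None:
    # Rewrite the whole training file into fastText's expected format.
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(to_fasttext_line(line) + "\n")
```

The output of `convert` can then serve as the `fasttext.train` file used below.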
To train the model you need to install the fastText Python package.
\n\n$ pip install fasttext\n
To train the model you need just three lines of code.
\nimport fasttext\ntraining_parameters = {'epoch': 50, 'lr': 0.05, 'loss': \"ns\", 'thread': 8, 'ws': 5, 'dim': 100}\nmodel = fasttext.supervised('fasttext.train', 'model', **training_parameters)\n
I packed all the training parameters into a separate dictionary. To me that looks a bit cleaner, but you don’t need to do that.
\n\nAfter we have trained the model, it is time to test how it performs. fastText provides a handy test method to evaluate the model’s performance. To compare our model with the other models from the GermEval contest, I also added a lambda which calculates the average F1 score. For now, I did not use the official test script from the contest’s repository, which you should do if you want to participate in such a contest.
\n\ndef test(model):\n f1_score = lambda precision, recall: 2 * ((precision * recall) / (precision + recall))\n nexamples, recall, precision = model.test('fasttext.test')\n print(f'recall: {recall}')\n print(f'precision: {precision}')\n print(f'f1 score: {f1_score(precision, recall)}')\n print(f'number of examples: {nexamples}')\n
I don’t know about you, but I am so curious how we score. Annnnnnnd:
\n\nrecall: 0.7018686296715742\nprecision: 0.7018686296715742\nf1 score: 0.7018686296715742\nnumber of examples: 3532\n
Looking at the results, we can see that the best competing model had an average F1 score of 76.77, while our model achieves, without any optimization and preprocessing, an F1 score of 70.18.
\n\nThis is pretty good since the models for these contests are usually specially optimized for the given data.
\n\nfastText is a clever piece of software that uses some neat tricks. If you are interested in fastText, you should take a look at the paper and this one. For example, fastText uses character n-grams. This approach is well suited for the German language, which uses a lot of compound words.
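To get an intuition for character n-grams, here is a rough illustration of how a word decomposes into 3-grams, using the "<" and ">" boundary markers from the fastText paper (this is a sketch for intuition, not fastText’s actual implementation):

```python
# Illustration only: decompose a word into character 3-grams, with "<"
# and ">" marking the word boundaries as in the fastText paper.
def char_ngrams(word, n=3):
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]
```

A compound like “Hassrede” shares many of its n-grams with “Hass” and “Rede”, so the model can relate compound words to their parts.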
\n\nIn this very basic tutorial, we trained a model with just a few lines of Python code. There are several things you can do to improve this model. The first step would be to preprocess your data. During preprocessing you could lowercase all texts, remove URLs and special characters, correct spelling, etc. After every optimization step, you can test your model and check if your scores went up. Happy hacking :)
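A minimal preprocessing sketch along those lines (the exact cleaning rules here are my assumption; tune them to your data):

```python
import re

# Minimal preprocessing sketch: lowercase the text, strip URLs, and drop
# everything that is not a letter, digit or whitespace. Python's \w matches
# Unicode letters by default, so German umlauts are kept.
def preprocess(text: str) -> str:
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = re.sub(r"[^\w\s]", " ", text)       # remove special characters
    return " ".join(text.split())              # collapse whitespace
```

You would run this over both the training and the test texts before handing them to fastText, then re-run the test method to see whether the score improves.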
\n\nSome Ideas:
\n\nHere is the full code:
\n\n\n\nCredit: Photo by Jon Tyson on Unsplash
","Href":"https://www.oliverguhr.eu/nlp/jekyll/2019/08/02/build-a-simple-hate-speech-detector-with-machine-learning.html","RawContent":null,"Thumbnail":null}],"ResultType":"Feed"},"GitHubEventsUser":{"Events":[{"Id":"32654773663","Type":"IssuesEvent","CreatedAt":"2023-10-18T11:21:16","Actor":"oliverguhr","Repository":"oliverguhr/fullstop-deep-punctuation-prediction","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/oliverguhr/fullstop-deep-punctuation-prediction/issues/18","RelatedDescription":"Closed issue \"What's the word limit for the model?\" (#18) at oliverguhr/fullstop-deep-punctuation-prediction","RelatedBody":"Hi, I'm trying to parse some texts that is pretty long. I run into this error.\r\n\r\n```\r\nAssertionError Traceback (most recent call last)\r\nCell In[47], line 1\r\n----> 1 restored_text=df.loc[df['unpunc'] == True, 0].map(model.restore_punctuation)\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/pandas/core/series.py:4539, in Series.map(self, arg, na_action)\r\n 4460 def map(\r\n 4461 self,\r\n 4462 arg: Callable | Mapping | Series,\r\n 4463 na_action: Literal[\"ignore\"] | None = None,\r\n 4464 ) -> Series:\r\n 4465 \"\"\"\r\n 4466 Map values of Series according to an input mapping or function.\r\n 4467 \r\n (...)\r\n 4537 dtype: object\r\n 4538 \"\"\"\r\n-> 4539 new_values = self._map_values(arg, na_action=na_action)\r\n 4540 return self._constructor(new_values, index=self.index).__finalize__(\r\n 4541 self, method=\"map\"\r\n 4542 )\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/pandas/core/base.py:890, in IndexOpsMixin._map_values(self, mapper, na_action)\r\n 887 raise ValueError(msg)\r\n 889 # mapper is a function\r\n--> 890 new_values = map_f(values, mapper)\r\n 892 return new_values\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/pandas/_libs/lib.pyx:2924, in 
pandas._libs.lib.map_infer()\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/deepmultilingualpunctuation/punctuationmodel.py:21, in PunctuationModel.restore_punctuation(self, text)\r\n 20 def restore_punctuation(self,text): \r\n---> 21 result = self.predict(self.preprocess(text))\r\n 22 return self.prediction_to_text(result)\r\n\r\nFile /work/reddit-policomp/miniconda3/envs/my-py310env/lib/python3.10/site-packages/deepmultilingualpunctuation/punctuationmodel.py:49, in PunctuationModel.predict(self, words)\r\n 47 text = \" \".join(batch)\r\n 48 result = self.pipe(text) \r\n---> 49 assert len(text) == result[-1][\"end\"], \"chunk size too large, text got clipped\"\r\n 51 char_index = 0\r\n 52 result_index = 0\r\n\r\nAssertionError: chunk size too large, text got clipped\r\n```\r\n\r\nI didn't use any other config, just the default model and predict function. It looks like the texts is too long or the chunk_size is too long (which I didn't configure)? 
Is there anything I should do to have it properly function?"},{"Id":"32654773324","Type":"PullRequestEvent","CreatedAt":"2023-10-18T11:21:15","Actor":"oliverguhr","Repository":"oliverguhr/deepmultilingualpunctuation","Organization":null,"RawContent":null,"RelatedAction":"merged","RelatedUrl":"https://github.com/oliverguhr/deepmultilingualpunctuation/pull/15","RelatedDescription":"Merged pull request \"expose chunk_size as variable\" (#15) at oliverguhr/deepmultilingualpunctuation","RelatedBody":"Public API fix for https://github.com/oliverguhr/deepmultilingualpunctuation/issues/4 i.e.\r\n```\r\nTraceback (most recent call last):\r\nline 49, in predict\r\n assert len(text) == result[-1][\"end\"], \"chunk size too large, text got clipped\"\r\nAssertionError: chunk size too large, text got clipped\r\n```\r\n\r\nCloses https://github.com/oliverguhr/fullstop-deep-punctuation-prediction/issues/18"},{"Id":"32074135434","Type":"IssuesEvent","CreatedAt":"2023-09-25T09:52:58","Actor":"oliverguhr","Repository":"pbelcak/fastfeedforward","Organization":null,"RawContent":null,"RelatedAction":"closed","RelatedUrl":"https://github.com/pbelcak/fastfeedforward/issues/2","RelatedDescription":"Closed issue \"Error: Cannot use soft decisions during evaluation.\" (#2) at pbelcak/fastfeedforward","RelatedBody":"Hello,\r\nthanks for publishing the code along with your paper. While reading your paper, I tried to run the demo notebook from this repository. I modified it slightly for colab to use the pip package instead of the local repository. However, I run into this issue:\r\n\r\n```python\r\nValueError Traceback (most recent call last)\r\n