
GitAuto: [FEATURE] Add Integration Tests project with WireMock fixture #406

Closed
wants to merge 9 commits into main from gitauto/issue-173-20241227-002452

Conversation

gitauto-ai[bot]

@gitauto-ai gitauto-ai bot commented Dec 27, 2024

Resolves #173

What is the feature

We are adding a comprehensive integration tests project for the C# library that integrates with the VTEX HTTP REST API. The goal is to verify that the library correctly handles success responses, error responses, and edge cases when communicating with the VTEX APIs.

Where / How to implement the feature and why

  1. Set Up Integration Tests Project:

    • Create a New Test Project: Add a new test project named VTEX.Integration.Tests within the Tests directory.
    • Add Dependencies: Update the .csproj file to include the necessary packages:
      <ItemGroup>
        <PackageReference Include="xunit" Version="2.4.1" />
        <PackageReference Include="WireMock.Net" Version="3.1.0" />
        <PackageReference Include="Snapshooter" Version="1.0.0" />
        <PackageReference Include="NSubstitute" Version="4.2.2" />
        <PackageReference Include="Bogus" Version="33.1.0" />
      </ItemGroup>
    • Reason: These tools provide a robust framework for testing, mocking HTTP responses, snapshot comparison, dependency mocking, and generating realistic fake data essential for comprehensive integration testing.
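The project scaffold described in step 1 could be created from the CLI roughly as follows (a sketch only — the project-reference path is an assumption, and pinning versions should follow the repository's conventions):

```shell
# Create the test project inside the Tests directory (path is illustrative).
dotnet new xunit -o Tests/VTEX.Integration.Tests
cd Tests/VTEX.Integration.Tests

# Add the proposed test dependencies (omit --version to take the latest).
dotnet add package WireMock.Net
dotnet add package Snapshooter.Xunit
dotnet add package NSubstitute
dotnet add package Bogus

# Reference the library under test (this path is an assumption).
dotnet add reference ../../Src/VTEX/VTEX.csproj
```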
  2. Configure WireMock for Fake HTTP Requests:

    • Implement WireMock Fixtures: Create a WireMockServerFixture class to manage the lifecycle of the WireMock server during tests.
    • Simulate ERP Service Responses: Write test methods that use WireMock to simulate various HTTP responses from the ERP service, enabling the testing of different scenarios.
    • Example:
      public class ErpServiceTests : IClassFixture<WireMockServerFixture>
      {
          private readonly WireMockServerFixture _fixture;
          private readonly ErpService _service;
      
          public ErpServiceTests(WireMockServerFixture fixture)
          {
              _fixture = fixture;
              // Point the service under test at the WireMock server
              // (assumes ErpService accepts an HttpClient; adjust to the real constructor).
              _service = new ErpService(new HttpClient { BaseAddress = new Uri(_fixture.Server.Urls[0]) });
          }
      
          [Fact]
          public async Task ShouldHandleSuccessfulResponse()
          {
              // Arrange
              _fixture.Server
                  .Given(Request.Create().WithPath("/api/orders").UsingGet())
                  .RespondWith(Response.Create().WithStatusCode(200).WithBody("{ \"orderId\": \"12345\" }"));
      
              // Act
              var result = await _service.GetOrderAsync("12345");
      
              // Assert
              Assert.Equal("12345", result.OrderId);
          }
      }
    • Reason: Using WireMock allows for reliable simulation of the ERP service, ensuring that tests are not dependent on the actual service's availability and behavior.
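The WireMockServerFixture referenced above could look like this minimal sketch (using WireMock.Net's `WireMockServer.Start()`; the port is chosen automatically, and the class name matches the test example):

```csharp
using System;
using WireMock.Server;

// Minimal sketch of the fixture referenced by IClassFixture<WireMockServerFixture>.
public sealed class WireMockServerFixture : IDisposable
{
    // WireMock.Net server instance shared by all tests in the class.
    public WireMockServer Server { get; }

    public WireMockServerFixture()
    {
        // Start on a random free port; tests read the base address from Server.Urls[0].
        Server = WireMockServer.Start();
    }

    public void Dispose()
    {
        // xUnit calls Dispose once after the last test in the class has run.
        Server.Stop();
        Server.Dispose();
    }
}
```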
  3. Use Snapshooter for Response Comparison:

    • Integrate Snapshot Testing: Utilize Snapshooter to capture and compare actual responses against expected snapshots.
    • Example:
      [Fact]
      public async Task ShouldMatchExpectedResponse()
      {
          // Act
          var actualResponse = await _service.GetOrderAsync("12345");
      
          // Assert: Snapshooter.Xunit compares the response against a stored snapshot file
          // (created on the first run, then versioned alongside the tests).
          Snapshot.Match(actualResponse);
      }
    • Reason: Snapshot testing ensures that responses remain consistent over time, making it easier to detect unintended changes or regressions.
  4. Generate Fake Data with Bogus:

    • Create Data Generators: Implement classes using Bogus to generate realistic fake data for tests.
    • Example:
      public class OrderGenerator
      {
          public static Order CreateFakeOrder()
          {
              var faker = new Faker<Order>()
                  .RuleFor(o => o.OrderId, f => f.Random.Guid().ToString())
                  .RuleFor(o => o.CustomerName, f => f.Name.FullName())
                  .RuleFor(o => o.Amount, f => f.Finance.Amount());
      
              return faker.Generate();
          }
      }
    • Reason: Generating realistic data helps in creating more meaningful and effective test scenarios, enhancing the quality of tests.
  5. Mock Dependencies with NSubstitute:

    • Implement Dependency Mocks: Use NSubstitute to mock dependencies that are not directly related to ERP service interactions.
    • Example:
      public class OrderServiceTests
      {
          private readonly IOrderRepository _orderRepository;
          private readonly OrderService _orderService;
      
          public OrderServiceTests()
          {
              _orderRepository = Substitute.For<IOrderRepository>();
              _orderService = new OrderService(_orderRepository);
          }
      
          [Fact]
          public async Task ShouldReturnOrder()
          {
              // Arrange
              var fakeOrder = OrderGenerator.CreateFakeOrder();
              _orderRepository.GetOrderAsync(Arg.Any<string>()).Returns(Task.FromResult(fakeOrder));
      
              // Act
              var result = await _orderService.GetOrderAsync(fakeOrder.OrderId);
      
              // Assert
              Assert.Equal(fakeOrder.OrderId, result.OrderId);
          }
      }
    • Reason: Mocking dependencies allows for isolated testing of the integration library, ensuring that tests are focused and reliable.
  6. Implement and Run Tests:

    • Develop Test Cases: Create comprehensive test cases covering various scenarios, including successful responses, error handling, and edge cases.
    • Execute Tests: Run all integration tests to validate functionality and ensure coverage.
    • Review and Refine: Continuously review and refine tests based on results and team feedback.
    • Reason: Regular execution and refinement of tests ensure that the integration library remains robust and reliable over time.
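Step 6 can be executed with the standard .NET CLI (a sketch; the filter expression is an illustration, not a required convention):

```shell
# Run only the integration test project.
dotnet test Tests/VTEX.Integration.Tests/VTEX.Integration.Tests.csproj

# Or run a single scenario while iterating.
dotnet test --filter "FullyQualifiedName~ErpServiceTests"
```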

Anything the issuer needs to do

No action required.

Test these changes locally

git fetch origin
git checkout gitauto/issue-173-20241227-002452
git pull origin gitauto/issue-173-20241227-002452

Summary by Sourcery

Tests:

  • Add a new integration test project to verify interactions with the VTEX HTTP REST API.


korbit-ai bot commented Dec 27, 2024

By default, I don't review pull requests opened by bots. If you would like me to review this pull request anyway, you can request a review via the /korbit-review command in a comment.

Contributor

sourcery-ai bot commented Dec 27, 2024

Reviewer's Guide by Sourcery

This pull request introduces a new integration tests project for the C# integration library. It leverages xunit, WireMock.Net, Snapshooter, NSubstitute, and Bogus to provide a robust testing environment. The project includes tests for the ErpService and OrderService, utilizing WireMock for mocking HTTP responses, Snapshooter for response comparison, Bogus for fake data generation, and NSubstitute for mocking dependencies.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Change Details Files
Added a new integration tests project named VTEX.Integration.Tests.
  • Created the project file and added necessary NuGet packages for xunit, WireMock.Net, Snapshooter, NSubstitute, and Bogus.
  • Configured the test project to target the integration library and set up basic test infrastructure.
Tests/VTEX.Integration.Tests/VTEX.Integration.Tests.csproj
Implemented integration tests for the ErpService using WireMock for simulating HTTP responses.
  • Created the ErpServiceTests class and used the WireMockServerFixture to manage the WireMock server.
  • Wrote test methods to simulate various HTTP responses from the ERP service, covering success and error scenarios.
Tests/VTEX.Integration.Tests/ErpServiceTests.cs
Implemented tests for the OrderService using NSubstitute for mocking dependencies and Bogus for generating fake data.
  • Created the OrderServiceTests class and mocked the IOrderRepository dependency using NSubstitute.
  • Used the OrderGenerator class to create realistic fake order data for testing.
  • Wrote test methods to verify the interaction between the OrderService and its dependencies.
Tests/VTEX.Integration.Tests/OrderServiceTests.cs
Tests/VTEX.Integration.Tests/OrderGenerator.cs
Implemented snapshot tests using Snapshooter to ensure consistent responses.
  • Created the SnapshotTests class and used Snapshooter to capture and compare actual responses against expected snapshots.
  • Configured Snapshooter to store snapshots and update them as needed.
Tests/VTEX.Integration.Tests/SnapshotTests.cs
Created a WireMock server fixture to manage the WireMock server lifecycle during tests.
  • Created the WireMockServerFixture class to start and stop the WireMock server.
  • Configured the fixture to use a specific port and enable the admin interface.
Tests/VTEX.Integration.Tests/WireMockServerFixture.cs

Assessment against linked issues

Issue Objective Addressed Explanation
#173 Create a new integration test project with required test dependencies (XUnit, WireMock, Snapshooter, NSubstitute, Bogus)
#173 Implement test infrastructure including WireMock server fixture and test data generators
#173 Create comprehensive test cases using all required testing approaches (WireMock for HTTP mocking, Snapshooter for response comparison, NSubstitute for dependency mocking, Bogus for fake data)


coderabbitai bot commented Dec 27, 2024

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



Contributor

@sourcery-ai sourcery-ai bot left a comment


We have skipped reviewing this pull request. It seems to have been created by a bot (hey, gitauto-ai[bot]!). We assume it knows what it's doing!

@gstraccini gstraccini bot added .NET Pull requests that update .net code dependencies Pull requests that update a dependency file enhancement New feature or request gitauto GitAuto label to trigger the app in an issue. resilience tests ⚙️ CI/CD Continuous Integration/Continuous Deployment processes 📝 documentation Tasks related to writing or updating documentation 🚨 security Security-related issues or improvements labels Dec 27, 2024
@github-actions github-actions bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Dec 27, 2024
@gstraccini gstraccini bot added 🚦 awaiting triage Items that are awaiting triage or categorization 🤖 bot Automated processes or integrations labels Dec 27, 2024

codacy-production bot commented Dec 27, 2024

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation Diff coverage
+0.00% (target: -1.00%)
Coverage variation details
Coverable lines Covered lines Coverage
Common ancestor commit (7b0e7f0) 1777 4 0.23%
Head commit (384c4b3) 1777 (+0) 4 (+0) 0.23% (+0.00%)

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details
Coverable lines Covered lines Diff coverage
Pull request (#406) 0 0 ∅ (not applicable)

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%



@github-advanced-security github-advanced-security bot left a comment


Sonarcsharp (reported by Codacy) found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.

@AppVeyorBot

Build VTEX-SDK-dotnet 2.4.289 completed (commit 3ac732bff8 by @gitauto-ai[bot])


codecov bot commented Dec 27, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 0.22%. Comparing base (3a07b95) to head (fdfc4a6).

Additional details and impacted files
@@          Coverage Diff          @@
##            main    #406   +/-   ##
=====================================
  Coverage   0.22%   0.22%           
=====================================
  Files        117     117           
  Lines       1777    1777           
  Branches      75      75           
=====================================
  Hits           4       4           
+ Misses      1773    1771    -2     
- Partials       0       2    +2     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@AppVeyorBot

Build VTEX-SDK-dotnet 2.4.300 completed (commit fb0b073778 by @gstraccini[bot])

Contributor

Infisical secrets check: ✅ No secrets leaked!

💻 Scan logs
1:38AM INF scanning for exposed secrets...
1:38AM INF 355 commits scanned.
1:38AM INF scan completed in 295ms
1:38AM INF no leaks found


codacy-production bot commented Dec 30, 2024

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation Diff coverage
+0.00% (target: -1.00%)
Coverage variation details
Coverable lines Covered lines Coverage
Common ancestor commit (3a07b95) 1777 4 0.23%
Head commit (fdfc4a6) 1777 (+0) 4 (+0) 0.23% (+0.00%)


Diff coverage details
Coverable lines Covered lines Diff coverage
Pull request (#406) 0 0 ∅ (not applicable)



@AppVeyorBot

Build VTEX-SDK-dotnet 2.4.305 completed (commit aead4eb215 by @gstraccini[bot])


sonarqubecloud bot commented Jan 6, 2025

@AppVeyorBot

Build VTEX-SDK-dotnet 2.4.318 completed (commit 3dc8926d3d by @gstraccini[bot])

@guibranco guibranco closed this Jan 6, 2025
@guibranco guibranco deleted the gitauto/issue-173-20241227-002452 branch January 6, 2025 02:02