
fix(logging): handle RPC call to set V1 log level #717

Open · james-munson wants to merge 1 commit into master

Conversation

james-munson (Contributor)

Which issue(s) this PR fixes:

Issue longhorn/longhorn#6702

What this PR does / why we need it:

Add a handler for RPC call to set log level.

Special notes for your reviewer:

Additional documentation or context

coderabbitai bot commented Nov 19, 2024

Walkthrough

The pull request enhances the logging functionality in pkg/instance/log.go and pkg/client/instance.go. A new helper function, logSetLevel, is introduced to manage log levels dynamically. The LogSetLevel and LogGetLevel methods of both V1DataEngineInstanceOps and V2DataEngineInstanceOps are updated to incorporate component-specific logic, and the client methods in pkg/client/instance.go receive simplified comments that clarify their purpose without altering functionality.
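For reference, the logSetLevel helper reads as follows once the diff markers from the hunks quoted later in this review are stripped. The package clause and logrus import are added here only to make the excerpt self-contained; the package name for pkg/instance/log.go is assumed.

package instance // assumed package name for pkg/instance

import (
	"github.com/sirupsen/logrus"
)

// logSetLevel sets the instance-manager (logrus) log level, skipping the
// update when the level is unchanged.
func logSetLevel(level string) error {
	// Set instance-manager log level. We expect a string such as "debug", "info", or "warn".
	newLevel, err := logrus.ParseLevel(level)
	if err != nil {
		return err
	}

	oldLevel := logrus.GetLevel()
	if oldLevel != newLevel {
		logrus.Warnf("Updating log level from %v to %v", oldLevel, newLevel)
		logrus.SetLevel(newLevel)
	}

	return nil
}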

Changes

pkg/instance/log.go:
  • Added logSetLevel helper function for dynamic log level management.
  • Updated LogSetLevel and LogGetLevel methods for V1 and V2 to handle log levels.
  • Clarified LogSetFlags and LogGetFlags methods for V1 regarding implementation.

pkg/client/instance.go:
  • Updated comments for the LogSetLevel, LogSetFlags, LogGetLevel, and LogGetFlags methods.

Assessment against linked issues

  • Objective: Configure log level for all components via RPC (Issue #6702)

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
pkg/instance/log.go (2)

24-34: LGTM with suggestions for robustness

The implementation correctly handles log level changes with proper validation and logging. Consider two minor improvements:

  1. Add validation for empty level string
  2. Consider using the provided context for potential future timeout handling
 func (ops V1DataEngineInstanceOps) LogSetLevel(ctx context.Context, req *rpc.LogSetLevelRequest) (resp *emptypb.Empty, err error) {
+	if req.Level == "" {
+		return nil, grpcstatus.Error(grpccodes.InvalidArgument, "log level cannot be empty")
+	}
+
 	oldLevel := logrus.GetLevel()
 	newLevel, err := logrus.ParseLevel(req.Level)

Line range hint 1-135: Well-structured implementation with clear V1/V2 separation

The overall implementation successfully achieves the PR objectives by:

  1. Maintaining clear separation between V1 and V2 data engine capabilities
  2. Implementing proper error handling and validation
  3. Providing clear documentation for unsupported features

The architecture allows for easy extension of logging capabilities while maintaining backward compatibility.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 56ebc33 and fbb3522.

📒 Files selected for processing (1)
  • pkg/instance/log.go (5 hunks)
🔇 Additional comments (4)
pkg/instance/log.go (4)

5-5: LGTM: Required import for logging functionality

The addition of the logrus import is appropriate for implementing the logging functionality.


61-61: LGTM: Clear documentation of V1 limitations

The comment clearly explains that log flags are not applicable for V1, which is correct and well-documented.


118-118: LGTM: Consistent with V1 limitations

The implementation correctly returns an empty response, maintaining consistency with V1's lack of log flags support.


88-90: LGTM: Verify integration with log level changes

The implementation correctly retrieves and returns the current log level.

✅ Verification successful

Based on the search results, I can see that the LogGetLevel and LogSetLevel implementations are properly integrated across the codebase:

  1. The LogGetLevel implementation in pkg/instance/log.go correctly returns the current log level for both V1 and V2 data engines
  2. The LogSetLevel implementation exists and is properly connected to both engines
  3. The RPC service definitions show proper integration between these methods
  4. The client implementations show proper handling of both methods

LGTM: Log level operations are properly integrated

The implementation correctly retrieves the log level and is properly integrated with the log level change functionality across both V1 and V2 data engines. The RPC service definitions and client implementations confirm this integration.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify that LogGetLevel is called after LogSetLevel changes
ast-grep --pattern 'LogSetLevel.*LogGetLevel'

Length of output: 47


Script:

#!/bin/bash
# Search for test files that verify log level changes
rg -l "LogSetLevel.*LogGetLevel" --type go

# Check implementation of LogSetLevel and LogGetLevel
ast-grep --pattern 'func.*LogSetLevel'
ast-grep --pattern 'func.*LogGetLevel'

# Look for any log level related tests
rg "Test.*Log.*Level" --type go

Length of output: 160


Script:

#!/bin/bash
# Search for test files containing log level related tests with more context
rg -A 5 -B 5 "LogGetLevel|LogSetLevel" --type go

# Look for any RPC test files
fd ".*_test.go" | grep -i "rpc\|grpc"

# Check for any log level related integration tests
rg "log.*level" -i --type go test/

Length of output: 99272

Comment on lines 24 to 37
	// Set instance-manager log level. We expect a string such as "debug", "info", or "warn".
	oldLevel := logrus.GetLevel()
	newLevel, err := logrus.ParseLevel(req.Level)
	if err != nil {
		return nil, err
	}
	if oldLevel != newLevel {
		logrus.Warnf("Updating log level from %v to %v", oldLevel, newLevel)
		logrus.SetLevel(newLevel)
	}

Member commented:

v2's LogSetLevel only handles spdk_tgt's log level, so the PR needs to update the instance-manager pod's log level as well.

james-munson (Contributor, Author) commented on Nov 20, 2024:

Got it, thanks.

james-munson (Contributor, Author) commented:

This is going to require a change to the imrpc protobuf definition. The existing LogSetLevelRequest does not have a field for "service" or "component". The InstanceServiceClient implementation just ignores the "service" parameter.
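For illustration only, a minimal client-side sketch of what the call could look like once the request message carries a component. The Component and Level fields and the imrpc import path are taken from the diffs quoted later in this conversation; the DataEngine field, the generated InstanceServiceClient interface name, and the helper itself are assumptions, not the PR's code.

package client // hypothetical placement; sketch only

import (
	"context"

	rpc "github.com/longhorn/types/pkg/generated/imrpc"
)

// setComponentLogLevel shows the intended request shape once LogSetLevelRequest
// carries a Component field alongside Level (and, assumed here, DataEngine).
func setComponentLogLevel(ctx context.Context, c rpc.InstanceServiceClient,
	dataEngine rpc.DataEngine, component, level string) error {
	_, err := c.LogSetLevel(ctx, &rpc.LogSetLevelRequest{
		DataEngine: dataEngine, // assumed field, by analogy with LogGetFlagsRequest
		Component:  component,  // e.g. "instance-manager" or "spdk_tgt"
		Level:      level,      // e.g. "debug", "info", or "warn"
	})
	return err
}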

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (3)
pkg/instance/log.go (2)

23-37: Consider enhancing input validation and logging strategy

The helper function is well-structured but could benefit from two improvements:

  1. Add explicit input validation before parsing
  2. Consider using Info level for log changes in production
 func logSetLevel(level string) error {
+    // Validate input early
+    if level == "" {
+        return errors.New("log level cannot be empty")
+    }
+
     // Set instance-manager log level.  We expect a string such as "debug", "info", or "warn".
     newLevel, err := logrus.ParseLevel(level)
     if err != nil {
         return err
     }

     oldLevel := logrus.GetLevel()
     if oldLevel != newLevel {
-        logrus.Warnf("Updating log level from %v to %v", oldLevel, newLevel)
+        logrus.Infof("Updating log level from %v to %v", oldLevel, newLevel)
         logrus.SetLevel(newLevel)
     }

     return nil
 }

78-78: Improve documentation for V1 implementation limitations

The comments for V1 implementations could be more descriptive about why flags are not supported and what users should expect.

-    // There is no V1 implementation.  Log flags are not a thing as they are for SPDK.
+    // V1 data engine does not support log flags configuration as this feature is specific to SPDK.
+    // This method exists only for interface compatibility.
-    // No implementation necessary.
+    // V1 data engine does not support log flags retrieval as this feature is specific to SPDK.
+    // Returns empty response for interface compatibility.

Also applies to: 142-142

pkg/client/instance.go (1)

Line range hint 489-547: Ensure consistent parameter usage across logging methods

The changes correctly add component support to the RPC requests, which aligns with the PR objective to handle component-specific log levels. However, all four logging methods (LogSetLevel, LogSetFlags, LogGetLevel, LogGetFlags) need their signatures updated to accept the new component parameter. Currently, they're using the service parameter as the component value, which could lead to confusion.

Consider:

  1. Updating all method signatures to explicitly accept the component parameter
  2. Adding validation for the component parameter
  3. Adding documentation about valid component values
  4. Potentially creating constants for supported component types
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between fbb3522 and 0aa21cd.

⛔ Files ignored due to path filters (4)
  • go.mod is excluded by !go.mod
  • go.sum is excluded by !**/*.sum, !go.sum
  • vendor/github.com/longhorn/types/pkg/generated/imrpc/instance.pb.go is excluded by !**/*.pb.go, !**/generated/**, !vendor/**
  • vendor/modules.txt is excluded by !vendor/**
📒 Files selected for processing (2)
  • pkg/client/instance.go (4 hunks)
  • pkg/instance/log.go (5 hunks)
🔇 Additional comments (4)
pkg/instance/log.go (1)

112-125: Standardize component handling and error patterns

The codebase would benefit from:

  1. Centralized component validation
  2. Consistent error handling patterns
  3. Better documentation of supported components

Consider introducing:

  1. A component registry or enum
  2. Centralized validation function
  3. Consistent error handling helpers

Example implementation:

// Component represents a loggable component in the system
type Component string

const (
    ComponentInstanceManager Component = "instance-manager"
    ComponentSPDKTarget      Component = "spdk_tgt"
)

// validateComponent ensures the component is supported
func (ops V2DataEngineInstanceOps) validateComponent(component string) error {
    switch Component(component) {
    case ComponentInstanceManager, ComponentSPDKTarget:
        return nil
    default:
        return errors.Errorf("unsupported component: %s", component)
    }
}

// wrapError consistently wraps errors with gRPC status
func wrapError(err error, msg string, args ...interface{}) error {
    return grpcstatus.Error(grpccodes.Internal, 
        errors.Wrapf(err, msg, args...).Error())
}

Let's verify the component usage across the codebase:

Also applies to: 147-147

pkg/client/instance.go (3)

526-526: ⚠️ Potential issue

Method signature needs to be updated to match the new functionality

The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated.

Apply this diff:

-func (c *InstanceServiceClient) LogGetLevel(dataEngine, service string) (string, error) {
+func (c *InstanceServiceClient) LogGetLevel(dataEngine, service string, component string) (string, error) {

508-508: ⚠️ Potential issue

Method signature needs to be updated and comment typo fixed

The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated. Also, there's a typo in the method comment.

Apply these diffs:

-// LogSetFlags sets the log flags of the service.x
+// LogSetFlags sets the log flags of the service.
-func (c *InstanceServiceClient) LogSetFlags(dataEngine, service, flags string) error {
+func (c *InstanceServiceClient) LogSetFlags(dataEngine, service, flags string, component string) error {

489-489: ⚠️ Potential issue

Method signature needs to be updated to match the new functionality

The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated to accept this parameter.

Apply this diff to update the method signature:

-func (c *InstanceServiceClient) LogSetLevel(dataEngine, service, level string) error {
+func (c *InstanceServiceClient) LogSetLevel(dataEngine, service, level string, component string) error {

Comment on lines 48 to 59
	if req.Component == "spdk_tgt" {
		c, err := spdkclient.NewSPDKClient(ops.spdkServiceAddress)
		if err != nil {
			return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to create SPDK client").Error())
		}
		defer c.Close()

		err = c.LogSetLevel(req.Level)
		if err != nil {
			return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to set SPDK log level").Error())
		}
		return &emptypb.Empty{}, nil

🛠️ Refactor suggestion

Consider improving component handling and client lifecycle

The current implementation has several areas for improvement:

  1. The "spdk_tgt" string is hardcoded
  2. SPDK client is created for each call
  3. Error messages could be more specific

Consider these improvements:

+// Define constants for supported components
+const (
+    ComponentSPDKTarget = "spdk_tgt"
+)

 func (ops V2DataEngineInstanceOps) LogSetLevel(ctx context.Context, req *rpc.LogSetLevelRequest) (resp *emptypb.Empty, err error) {
-    if req.Component == "spdk_tgt" {
+    if req.Component == ComponentSPDKTarget {
         c, err := spdkclient.NewSPDKClient(ops.spdkServiceAddress)
         if err != nil {
-            return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to create SPDK client").Error())
+            return nil, grpcstatus.Error(grpccodes.Internal, 
+                errors.Wrapf(err, "failed to create SPDK client for component %s", req.Component).Error())
         }
         defer c.Close()

         err = c.LogSetLevel(req.Level)
         if err != nil {
-            return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to set SPDK log level").Error())
+            return nil, grpcstatus.Error(grpccodes.Internal, 
+                errors.Wrapf(err, "failed to set log level for component %s", req.Component).Error())
         }
         return &emptypb.Empty{}, nil
     }

Consider caching the SPDK client in the ops struct to avoid creating a new client for each call. This would require proper lifecycle management but could improve performance.

Committable suggestion skipped: line range outside the PR's diff.

@@ -541,6 +544,7 @@ func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string) (string,

	resp, err := client.LogGetFlags(ctx, &rpc.LogGetFlagsRequest{
		DataEngine: rpc.DataEngine(driver),
		Component:  service,

💡 Codebase verification

⚠️ Potential issue

Method signature update is required to match the RPC request structure

The verification confirms that:

  1. The LogGetFlagsRequest struct in the RPC definition includes both DataEngine and Component fields
  2. The current implementation in pkg/client/instance.go is already passing the Component parameter in the RPC request, but it's not part of the method signature
  3. No other callers of this client method were found in the codebase, making this a safe change

Apply this diff:

-func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string) (string, error) {
+func (c *InstanceServiceClient) LogGetFlags(dataEngine, service, component string) (string, error) {
🔗 Analysis chain

Method signature needs to be updated to match the new functionality

The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated.

Apply this diff:

-func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string) (string, error) {
+func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string, component string) (string, error) {

Let me analyze the findings from the rg search to verify the method signature update requirement.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Find all callers of LogGetFlags to ensure they're updated for the new signature
rg -A 2 "LogGetFlags\(" --type go

Length of output: 9202


Script:

#!/bin/bash
# Let's check the LogGetFlagsRequest definition to understand its structure
ast-grep --pattern 'type LogGetFlagsRequest struct {
  $$$
}'

Length of output: 1201

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (2)
pkg/instance/log.go (2)

25-39: Add input validation and documentation to logSetLevel

The helper function looks good but could benefit from some improvements:

  1. Add validation for empty string input
  2. Document valid log level values in the function comment

Consider this enhancement:

+// logSetLevel sets the instance-manager log level.
+// Valid levels are: "panic", "fatal", "error", "warn", "info", "debug", "trace"
 func logSetLevel(level string) error {
+    if strings.TrimSpace(level) == "" {
+        return errors.New("log level cannot be empty")
+    }
     // Set instance-manager log level.  We expect a string such as "debug", "info", or "warn".
     newLevel, err := logrus.ParseLevel(level)

59-69: Consider caching SPDK client connection

Creating a new SPDK client for each call could impact performance.

Consider maintaining a cached SPDK client in the V2DataEngineInstanceOps struct with proper lifecycle management. This would require:

  1. Adding client initialization during ops creation
  2. Adding proper cleanup during shutdown
  3. Implementing connection health checks and reconnection logic

Would you like me to provide a detailed implementation proposal?
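To make that suggestion concrete, here is a rough sketch of one possible caching approach. The spdkclient import path, the SPDKClient type name, and the reuse/shutdown policy are assumptions for illustration; health checks and reconnection are deliberately left out.

package instance // assumed placement; sketch only

import (
	"sync"

	// NOTE: import path and client type name are assumptions.
	spdkclient "github.com/longhorn/longhorn-spdk-engine/pkg/client"
)

// spdkClientCache lazily dials the SPDK service once and reuses the client
// across RPC handlers instead of creating a new client per call.
type spdkClientCache struct {
	serviceAddress string

	mu     sync.Mutex
	client *spdkclient.SPDKClient
}

func (s *spdkClientCache) get() (*spdkclient.SPDKClient, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.client != nil {
		return s.client, nil
	}
	c, err := spdkclient.NewSPDKClient(s.serviceAddress)
	if err != nil {
		return nil, err
	}
	s.client = c
	return c, nil
}

// close releases the cached client during shutdown.
func (s *spdkClientCache) close() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.client != nil {
		s.client.Close()
		s.client = nil
	}
}

Handlers such as V2's LogSetLevel could then call get() instead of spdkclient.NewSPDKClient on every request, and close() would run during instance-manager shutdown.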

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 0aa21cd and abcb04c.

📒 Files selected for processing (2)
  • pkg/client/instance.go (4 hunks)
  • pkg/instance/log.go (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/client/instance.go
🔇 Additional comments (1)
pkg/instance/log.go (1)

41-47: LGTM: V1 implementation looks good

The V1 implementation correctly:

  • Uses the new helper function for setting log level
  • Returns the actual log level instead of a placeholder
  • Handles errors appropriately

Also applies to: 109-111

Comment on lines +54 to +58
	// Also set level for spdk_tgt. There is no "trace" level for SPDK.
	spdkLevel := strings.ToUpper(req.Level)
	if spdkLevel == "TRACE" {
		spdkLevel = "DEBUG"
	}
coderabbitai bot commented on Nov 21, 2024:

🛠️ Refactor suggestion

Improve component handling with constants and validation

The hardcoded "TRACE" string and lack of component validation could lead to maintenance issues.

Consider these improvements:

+const (
+    ComponentSPDKTarget = "spdk_tgt"
+    SPDKLogLevelTrace = "TRACE"
+    SPDKLogLevelDebug = "DEBUG"
+)
+
 // Also set level for spdk_tgt.  There is no "trace" level for SPDK.
+if req.Component != ComponentSPDKTarget {
+    return nil, grpcstatus.Error(grpccodes.InvalidArgument, 
+        errors.Errorf("unsupported component: %s", req.Component).Error())
+}
 spdkLevel := strings.ToUpper(req.Level)
-if spdkLevel == "TRACE" {
-    spdkLevel = "DEBUG"
+if spdkLevel == SPDKLogLevelTrace {
+    spdkLevel = SPDKLogLevelDebug
 }

Committable suggestion skipped: line range outside the PR's diff.

coderabbitai bot commented:

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

james-munson marked this pull request as draft on November 21, 2024 at 21:03.
PhanLe1010 (Contributor) left a comment:

LGTM

james-munson marked this pull request as ready for review on November 21, 2024 at 22:18.