fix(logging): handle RPC call to set V1 log level #717
base: master
Conversation
Walkthrough: The changes in this pull request enhance the logging functionality of the instance manager.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
pkg/instance/log.go (2)
Lines 24-34: LGTM with suggestions for robustness

The implementation correctly handles log level changes with proper validation and logging. Consider two minor improvements:
- Add validation for empty level string
- Consider using the provided context for potential future timeout handling
```diff
 func (ops V1DataEngineInstanceOps) LogSetLevel(ctx context.Context, req *rpc.LogSetLevelRequest) (resp *emptypb.Empty, err error) {
+	if req.Level == "" {
+		return nil, grpcstatus.Error(grpccodes.InvalidArgument, "log level cannot be empty")
+	}
+
 	oldLevel := logrus.GetLevel()
 	newLevel, err := logrus.ParseLevel(req.Level)
```
Line range hint 1-135: Well-structured implementation with clear V1/V2 separation

The overall implementation successfully achieves the PR objectives by:
- Maintaining clear separation between V1 and V2 data engine capabilities
- Implementing proper error handling and validation
- Providing clear documentation for unsupported features
The architecture allows for easy extension of logging capabilities while maintaining backward compatibility.
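As a dependency-free illustration of the empty-level validation suggested in this review, the check can be done before touching any logger state. This is a sketch: `validateLevel` and `validLevels` are hypothetical names, not part of the PR, and the level set simply mirrors the strings logrus accepts.

```go
package main

import (
	"fmt"
	"strings"
)

// validLevels mirrors the level strings logrus.ParseLevel accepts.
var validLevels = map[string]bool{
	"panic": true, "fatal": true, "error": true,
	"warn": true, "info": true, "debug": true, "trace": true,
}

// validateLevel rejects empty or unknown level strings up front,
// before any logger state is changed.
func validateLevel(level string) error {
	if strings.TrimSpace(level) == "" {
		return fmt.Errorf("log level cannot be empty")
	}
	if !validLevels[strings.ToLower(level)] {
		return fmt.Errorf("unknown log level %q", level)
	}
	return nil
}

func main() {
	fmt.Println(validateLevel("debug"))
	fmt.Println(validateLevel(""))
	fmt.Println(validateLevel("loud"))
}
```

In the real handler the same check would run before `logrus.ParseLevel`, turning a malformed request into an `InvalidArgument` gRPC status instead of an internal parse error.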
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)

- pkg/instance/log.go (5 hunks)
🔇 Additional comments (4)
pkg/instance/log.go (4)
Line 5: LGTM: Required import for logging functionality
The addition of the logrus import is appropriate for implementing the logging functionality.
Line 61: LGTM: Clear documentation of V1 limitations
The comment clearly explains that log flags are not applicable for V1, which is correct and well-documented.
Line 118: LGTM: Consistent with V1 limitations
The implementation correctly returns an empty response, maintaining consistency with V1's lack of log flags support.
Lines 88-90: LGTM: Verify integration with log level changes
The implementation correctly retrieves and returns the current log level.
✅ Verification successful
Based on the search results, I can see that the LogGetLevel and LogSetLevel implementations are properly integrated across the codebase:
- The LogGetLevel implementation in pkg/instance/log.go correctly returns the current log level for both V1 and V2 data engines
- The LogSetLevel implementation exists and is properly connected to both engines
- The RPC service definitions show proper integration between these methods
- The client implementations show proper handling of both methods
LGTM: Log level operations are properly integrated
The implementation correctly retrieves the log level and is properly integrated with the log level change functionality across both V1 and V2 data engines. The RPC service definitions and client implementations confirm this integration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Verify that LogGetLevel is called after LogSetLevel changes
ast-grep --pattern 'LogSetLevel.*LogGetLevel'
```
Length of output: 47
Script:
```shell
#!/bin/bash
# Search for test files that verify log level changes
rg -l "LogSetLevel.*LogGetLevel" --type go
# Check implementation of LogSetLevel and LogGetLevel
ast-grep --pattern 'func.*LogSetLevel'
ast-grep --pattern 'func.*LogGetLevel'
# Look for any log level related tests
rg "Test.*Log.*Level" --type go
```
Length of output: 160
Script:
```shell
#!/bin/bash
# Search for test files containing log level related tests with more context
rg -A 5 -B 5 "LogGetLevel|LogSetLevel" --type go
# Look for any RPC test files
fd ".*_test.go" | grep -i "rpc\|grpc"
# Check for any log level related integration tests
rg "log.*level" -i --type go test/
```
Length of output: 99272
pkg/instance/log.go
Outdated
```go
	// Set instance-manager log level. We expect a string such as "debug", "info", or "warn".
	oldLevel := logrus.GetLevel()
	newLevel, err := logrus.ParseLevel(req.Level)
	if err != nil {
		return nil, err
	}
	if oldLevel != newLevel {
		logrus.Warnf("Updating log level from %v to %v", oldLevel, newLevel)
		logrus.SetLevel(newLevel)
	}
```
v2's LogSetLevel only handles spdk_tgt's log level, so the PR needs to update the instance-manager pod's log level as well.
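One way to structure that is to route the request so the instance-manager process logger and spdk_tgt can each be updated by component. The sketch below uses hypothetical names (`dispatchLogSetLevel`, `setLevelFn`) and function values standing in for logrus and the SPDK client; it is an illustration of the routing, not the PR's actual code.

```go
package main

import "fmt"

// setLevelFn abstracts a level setter, e.g. a wrapper around
// logrus.SetLevel for the instance-manager process, or an SPDK
// client call for spdk_tgt.
type setLevelFn func(level string) error

// dispatchLogSetLevel applies the level to the setter matching the
// requested component; an empty component updates the local logger.
func dispatchLogSetLevel(component, level string, local, spdk setLevelFn) error {
	switch component {
	case "", "instance-manager":
		return local(level)
	case "spdk_tgt":
		return spdk(level)
	default:
		return fmt.Errorf("unsupported component: %s", component)
	}
}

func main() {
	var localLevel, spdkLevel string
	local := func(l string) error { localLevel = l; return nil }
	spdk := func(l string) error { spdkLevel = l; return nil }

	_ = dispatchLogSetLevel("instance-manager", "debug", local, spdk)
	_ = dispatchLogSetLevel("spdk_tgt", "INFO", local, spdk)
	fmt.Println(localLevel, spdkLevel)
}
```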
Got it, thanks.
This is going to require a change to the imrpc protobuf definition. The existing LogSetLevelRequest does not have a field for "service" or "component". The InstanceServiceClient implementation just ignores the "service" parameter.
Force-pushed from fbb3522 to 0aa21cd.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (3)
pkg/instance/log.go (2)
Lines 23-37: Consider enhancing input validation and logging strategy

The helper function is well-structured but could benefit from two improvements:
- Add explicit input validation before parsing
- Consider using Info level for log changes in production
```diff
 func logSetLevel(level string) error {
+	// Validate input early
+	if level == "" {
+		return errors.New("log level cannot be empty")
+	}
+
 	// Set instance-manager log level. We expect a string such as "debug", "info", or "warn".
 	newLevel, err := logrus.ParseLevel(level)
 	if err != nil {
 		return err
 	}
 	oldLevel := logrus.GetLevel()
 	if oldLevel != newLevel {
-		logrus.Warnf("Updating log level from %v to %v", oldLevel, newLevel)
+		logrus.Infof("Updating log level from %v to %v", oldLevel, newLevel)
 		logrus.SetLevel(newLevel)
 	}
 	return nil
 }
```
Line 78: Improve documentation for V1 implementation limitations

The comments for V1 implementations could be more descriptive about why flags are not supported and what users should expect.

```diff
-// There is no V1 implementation. Log flags are not a thing as they are for SPDK.
+// V1 data engine does not support log flags configuration as this feature is specific to SPDK.
+// This method exists only for interface compatibility.
```

```diff
-// No implementation necessary.
+// V1 data engine does not support log flags retrieval as this feature is specific to SPDK.
+// Returns empty response for interface compatibility.
```

Also applies to: 142-142
pkg/client/instance.go (1)
Line range hint 489-547: Ensure consistent parameter usage across logging methods

The changes correctly add component support to the RPC requests, which aligns with the PR objective to handle component-specific log levels. However, all four logging methods (LogSetLevel, LogSetFlags, LogGetLevel, LogGetFlags) need their signatures updated to accept the new component parameter. Currently, they're using the service parameter as the component value, which could lead to confusion.
Consider:
- Updating all method signatures to explicitly accept the component parameter
- Adding validation for the component parameter
- Adding documentation about valid component values
- Potentially creating constants for supported component types
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (4)

- go.mod is excluded by `!go.mod`
- go.sum is excluded by `!**/*.sum`, `!go.sum`
- vendor/github.com/longhorn/types/pkg/generated/imrpc/instance.pb.go is excluded by `!**/*.pb.go`, `!**/generated/**`, `!vendor/**`
- vendor/modules.txt is excluded by `!vendor/**`
📒 Files selected for processing (2)

- pkg/client/instance.go (4 hunks)
- pkg/instance/log.go (5 hunks)
🔇 Additional comments (4)
pkg/instance/log.go (1)
Lines 112-125: Standardize component handling and error patterns
The codebase would benefit from:
- Centralized component validation
- Consistent error handling patterns
- Better documentation of supported components
Consider introducing:
- A component registry or enum
- Centralized validation function
- Consistent error handling helpers
Example implementation:

```go
// Component represents a loggable component in the system
type Component string

const (
	ComponentInstanceManager Component = "instance-manager"
	ComponentSPDKTarget      Component = "spdk_tgt"
)

// validateComponent ensures the component is supported
func (ops V2DataEngineInstanceOps) validateComponent(component string) error {
	switch Component(component) {
	case ComponentInstanceManager, ComponentSPDKTarget:
		return nil
	default:
		return errors.Errorf("unsupported component: %s", component)
	}
}

// wrapError consistently wraps errors with gRPC status
func wrapError(err error, msg string, args ...interface{}) error {
	return grpcstatus.Error(grpccodes.Internal,
		errors.Wrapf(err, msg, args...).Error())
}
```
Let's verify the component usage across the codebase:
Also applies to: 147-147
pkg/client/instance.go (3)
Line 526: Method signature needs to be updated to match the new functionality
The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated.
Apply this diff:
```diff
-func (c *InstanceServiceClient) LogGetLevel(dataEngine, service string) (string, error) {
+func (c *InstanceServiceClient) LogGetLevel(dataEngine, service string, component string) (string, error) {
```
Line 508: Method signature needs to be updated and comment typo fixed
The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated. Also, there's a typo in the method comment.
Apply these diffs:
```diff
-// LogSetFlags sets the log flags of the service.x
+// LogSetFlags sets the log flags of the service.
```

```diff
-func (c *InstanceServiceClient) LogSetFlags(dataEngine, service, flags string) error {
+func (c *InstanceServiceClient) LogSetFlags(dataEngine, service, flags string, component string) error {
```
Line 489: Method signature needs to be updated to match the new functionality
The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated to accept this parameter.
Apply this diff to update the method signature:
```diff
-func (c *InstanceServiceClient) LogSetLevel(dataEngine, service, level string) error {
+func (c *InstanceServiceClient) LogSetLevel(dataEngine, service, level string, component string) error {
```
pkg/instance/log.go
Outdated
```go
	if req.Component == "spdk_tgt" {
		c, err := spdkclient.NewSPDKClient(ops.spdkServiceAddress)
		if err != nil {
			return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to create SPDK client").Error())
		}
		defer c.Close()

		err = c.LogSetLevel(req.Level)
		if err != nil {
			return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to set SPDK log level").Error())
		}
		return &emptypb.Empty{}, nil
```
🛠️ Refactor suggestion
Consider improving component handling and client lifecycle
The current implementation has several areas for improvement:
- The "spdk_tgt" string is hardcoded
- SPDK client is created for each call
- Error messages could be more specific
Consider these improvements:
```diff
+// Define constants for supported components
+const (
+	ComponentSPDKTarget = "spdk_tgt"
+)
+
 func (ops V2DataEngineInstanceOps) LogSetLevel(ctx context.Context, req *rpc.LogSetLevelRequest) (resp *emptypb.Empty, err error) {
-	if req.Component == "spdk_tgt" {
+	if req.Component == ComponentSPDKTarget {
 		c, err := spdkclient.NewSPDKClient(ops.spdkServiceAddress)
 		if err != nil {
-			return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to create SPDK client").Error())
+			return nil, grpcstatus.Error(grpccodes.Internal,
+				errors.Wrapf(err, "failed to create SPDK client for component %s", req.Component).Error())
 		}
 		defer c.Close()
 		err = c.LogSetLevel(req.Level)
 		if err != nil {
-			return nil, grpcstatus.Error(grpccodes.Internal, errors.Wrapf(err, "failed to set SPDK log level").Error())
+			return nil, grpcstatus.Error(grpccodes.Internal,
+				errors.Wrapf(err, "failed to set log level for component %s", req.Component).Error())
 		}
 		return &emptypb.Empty{}, nil
 	}
```
Consider caching the SPDK client in the ops struct to avoid creating a new client for each call. This would require proper lifecycle management but could improve performance.
Committable suggestion skipped: line range outside the PR's diff.
pkg/client/instance.go
Outdated
```diff
@@ -541,6 +544,7 @@ func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string) (string,
 	resp, err := client.LogGetFlags(ctx, &rpc.LogGetFlagsRequest{
 		DataEngine: rpc.DataEngine(driver),
+		Component:  service,
```
💡 Codebase verification
Method signature update is required to match the RPC request structure
The verification confirms that:

- The LogGetFlagsRequest struct in the RPC definition includes both DataEngine and Component fields
- The current implementation in pkg/client/instance.go is already passing the Component parameter in the RPC request, but it's not part of the method signature
- No other callers of this client method were found in the codebase, making this a safe change
Apply this diff:
```diff
-func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string) (string, error) {
+func (c *InstanceServiceClient) LogGetFlags(dataEngine, service, component string) (string, error) {
```
🔗 Analysis chain
Method signature needs to be updated to match the new functionality
The method is now passing a component parameter to the RPC request, but the method signature hasn't been updated.
Apply this diff:
-func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string) (string, error) {
+func (c *InstanceServiceClient) LogGetFlags(dataEngine, service string, component string) (string, error) {
Let me analyze the findings from the `rg` search to verify the method signature update requirement.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Find all callers of LogGetFlags to ensure they're updated for the new signature
rg -A 2 "LogGetFlags\(" --type go
```
Length of output: 9202
Script:
```shell
#!/bin/bash
# Let's check the LogGetFlagsRequest definition to understand its structure
ast-grep --pattern 'type LogGetFlagsRequest struct {
  $$$
}'
```
Length of output: 1201
Signed-off-by: James Munson <[email protected]>
Force-pushed from 0aa21cd to abcb04c.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
pkg/instance/log.go (2)
Lines 25-39: Add input validation and documentation to logSetLevel

The helper function looks good but could benefit from some improvements:
- Add validation for empty string input
- Document valid log level values in the function comment
Consider this enhancement:
```diff
+// logSetLevel sets the instance-manager log level.
+// Valid levels are: "panic", "fatal", "error", "warn", "info", "debug", "trace"
 func logSetLevel(level string) error {
+	if strings.TrimSpace(level) == "" {
+		return errors.New("log level cannot be empty")
+	}
 	// Set instance-manager log level. We expect a string such as "debug", "info", or "warn".
 	newLevel, err := logrus.ParseLevel(level)
```
Lines 59-69: Consider caching SPDK client connection

Creating a new SPDK client for each call could impact performance.
Consider maintaining a cached SPDK client in the V2DataEngineInstanceOps struct with proper lifecycle management. This would require:
- Adding client initialization during ops creation
- Adding proper cleanup during shutdown
- Implementing connection health checks and reconnection logic
Would you like me to provide a detailed implementation proposal?
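A minimal sketch of that caching idea, under stated assumptions: `cachedOps`, `levelSetter`, and `dial` are hypothetical names, the interface stands in for the subset of the SPDK client used here, and shutdown cleanup and health checks are deliberately omitted. The point shown is only the lazy, mutex-guarded reuse of a single client across calls.

```go
package main

import (
	"fmt"
	"sync"
)

// levelSetter stands in for the subset of the SPDK client used here.
type levelSetter interface {
	LogSetLevel(level string) error
}

// cachedOps lazily creates one client via dial and reuses it across
// calls, guarded by a mutex so concurrent RPCs share the connection.
type cachedOps struct {
	mu     sync.Mutex
	client levelSetter
	dial   func() (levelSetter, error)
}

func (o *cachedOps) setLevel(level string) error {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.client == nil {
		c, err := o.dial()
		if err != nil {
			return err
		}
		o.client = c
	}
	return o.client.LogSetLevel(level)
}

// fakeClient is a test double recording the last level it was given.
type fakeClient struct{ last string }

func (f *fakeClient) LogSetLevel(level string) error { f.last = level; return nil }

func main() {
	dials := 0
	f := &fakeClient{}
	ops := &cachedOps{dial: func() (levelSetter, error) { dials++; return f, nil }}
	_ = ops.setLevel("debug")
	_ = ops.setLevel("info")
	fmt.Println(dials, f.last)
}
```

A production version would also need a Close method invoked during shutdown and reconnection on error, as the review notes.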
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)

- pkg/client/instance.go (4 hunks)
- pkg/instance/log.go (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pkg/client/instance.go
🔇 Additional comments (1)
pkg/instance/log.go (1)
Lines 41-47: LGTM: V1 implementation looks good
The V1 implementation correctly:
- Uses the new helper function for setting log level
- Returns the actual log level instead of a placeholder
- Handles errors appropriately
Also applies to: 109-111
```go
	// Also set level for spdk_tgt. There is no "trace" level for SPDK.
	spdkLevel := strings.ToUpper(req.Level)
	if spdkLevel == "TRACE" {
		spdkLevel = "DEBUG"
	}
```
🛠️ Refactor suggestion
Improve component handling with constants and validation
The hardcoded "TRACE" string and lack of component validation could lead to maintenance issues.
Consider these improvements:
```diff
+const (
+	ComponentSPDKTarget = "spdk_tgt"
+	SPDKLogLevelTrace   = "TRACE"
+	SPDKLogLevelDebug   = "DEBUG"
+)
+
 // Also set level for spdk_tgt. There is no "trace" level for SPDK.
+if req.Component != ComponentSPDKTarget {
+	return nil, grpcstatus.Error(grpccodes.InvalidArgument,
+		errors.Errorf("unsupported component: %s", req.Component).Error())
+}
 spdkLevel := strings.ToUpper(req.Level)
-if spdkLevel == "TRACE" {
-	spdkLevel = "DEBUG"
+if spdkLevel == SPDKLogLevelTrace {
+	spdkLevel = SPDKLogLevelDebug
 }
```
Committable suggestion skipped: line range outside the PR's diff.
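For illustration, the trace-to-debug mapping discussed above can be isolated into a small pure helper, which makes it trivial to unit-test. This is a sketch; `toSPDKLevel` is a hypothetical name, not a function in the PR.

```go
package main

import (
	"fmt"
	"strings"
)

// toSPDKLevel uppercases an instance-manager level string for SPDK
// and maps "trace", which SPDK lacks, down to "DEBUG".
func toSPDKLevel(level string) string {
	spdkLevel := strings.ToUpper(level)
	if spdkLevel == "TRACE" {
		spdkLevel = "DEBUG"
	}
	return spdkLevel
}

func main() {
	fmt.Println(toSPDKLevel("trace"), toSPDKLevel("info"))
}
```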
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
LGTM
Which issue(s) this PR fixes:
Issue longhorn/longhorn#6702
What this PR does / why we need it:
Add a handler for RPC call to set log level.
Special notes for your reviewer:
Additional documentation or context