
Feature Request: Support max_lifetime. #359

Open
clia opened this issue Oct 9, 2024 · 2 comments
Labels: A-core (Area: Core / deadpool), enhancement (New feature or request)

Comments


clia commented Oct 9, 2024

Description

First of all, thank you for maintaining the deadpool project. It's a valuable tool for connection pooling in Rust applications. I'd like to request a new feature that would enhance the functionality of deadpool, particularly for PostgreSQL connections.

Feature Request

I would like to propose adding support for a max_lifetime feature in deadpool, specifically for PostgreSQL connections. This feature would allow users to set a maximum lifetime for connections in the pool, after which they would be automatically closed and replaced with fresh connections.

Rationale

  1. Connection Hygiene: Some database systems, including PostgreSQL, can benefit from periodically refreshing connections to ensure optimal performance and resource utilization.
  2. Consistency with Other Pools: Many other connection pool implementations (e.g., HikariCP for Java) offer this feature, making it a common expectation among developers.
  3. Enhanced Control: This feature would provide users with more fine-grained control over connection lifecycle management.

Proposed Implementation

  • Add a max_lifetime configuration option to the pool settings.
  • Implement a mechanism to track the age of each connection.
  • Automatically close and replace connections that exceed the specified max_lifetime.

Example Usage

    let mut cfg = Config::new();
    cfg.dbname = Some("deadpool".to_string());
    cfg.manager = Some(ManagerConfig {
        recycling_method: RecyclingMethod::Fast,
    });
    cfg.max_lifetime = Some(Duration::from_secs(3600)); // Set max lifetime to 1 hour
    let pool = cfg.create_pool(Some(Runtime::Tokio1), NoTls).unwrap();

Questions

  1. Is this feature something you'd consider adding to deadpool?
  2. Are there any potential drawbacks or implementation challenges you foresee?
  3. Would this feature be specific to PostgreSQL, or could it be implemented generically for all supported databases?

Thank you for considering this feature request. I'm happy to provide any additional information or clarification if needed.

bikeshedder (Owner) commented Oct 9, 2024

This is related to the following issues:

Right now deadpool is completely passive. Unless you call Pool::get, it doesn't perform any background tasks whatsoever.

Via the Object::metrics method you can already get the age of a connection. The documentation provides an example of how you can periodically remove old connections from the pool via the Pool::retain method:

let interval = Duration::from_secs(30);
let max_age = Duration::from_secs(60);
tokio::spawn(async move {
    loop {
        tokio::time::sleep(interval).await;
        // Keep only connections that were used within the last `max_age`;
        // everything else is dropped from the pool.
        pool.retain(|_, metrics| metrics.last_used() < max_age);
    }
});

As this won't catch connections which are in-flight at the time this job runs, you should also add a pre_recycle hook that discards connections before they are recycled:

Pool::builder(...)
    // ...
    .pre_recycle(Hook::sync_fn(|_, metrics| {
        // Returning an error here makes the pool discard the
        // connection instead of recycling it.
        if metrics.age() > max_age {
            Err(HookError::message("Max age reached"))
        } else {
            Ok(())
        }
    }))
    // ...

While this is a working fix for #299, it doesn't provide ahead-of-time recycling or a minimum-size feature.

Yes, this is a feature planned for the future. I haven't gotten around to writing it. The documentation of the hooks lacks good examples, and I haven't gotten around to designing the API for making ahead-of-time recycling possible.

I'm open to suggestions on what kind of additional APIs deadpool needs to provide to make this a reality. I just want to keep the core of deadpool light and passive. I'm confident that a passive pool implementation as the basis, with hooks and a plugin interface, is more flexible and robust at the same time.

bikeshedder added the enhancement (New feature or request) and A-core (Area: Core / deadpool) labels on Oct 9, 2024
bikeshedder (Owner) commented
Oh, and regarding your questions...

Q: Is this feature something you'd consider adding to deadpool?
A: Yes.

Q: Are there any potential drawbacks or implementation challenges you foresee?
A: Not that I'm aware of.

Q: Would this feature be specific to PostgreSQL, or could it be implemented generically for all supported databases?
A: It's a feature that should be added to the deadpool core. The MySQL pool would benefit from this a lot, as MySQL connections can be very slow to establish.
