
Cannot proxy to Mariadb instance #228

Closed · KevinBLT opened this issue Aug 5, 2024 · 8 comments

Comments


KevinBLT commented Aug 5, 2024

Hey... maybe I am missing something, but the L4 module seems to wait for a byte from the client and does not open the proxy connection until that moment. Because of that it never receives the MariaDB greeting, so the client also waits, until the connection is closed due to a timeout because neither side ever sent anything.

I tested with telnet:

kb@MacBook-Air ~ % telnet 127.0.0.1 3306
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.

kb@MacBook-Air ~ % telnet 127.0.0.1 3306
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
test
Z
11.2.4-MariaDB-ubu220'0YQjCL7??-j4al|I8{&Fmumysql_native_password5#HY000Proxy header is not accepted from 172.19.0.3
Connection closed by foreign host.

When I just connect and wait, the connection is closed after:
WARN layer4 matching connection {"remote": "192.168.65.1:53364", "error": "aborted matching according to timeout"}

If I am fast enough to send some random bytes (like the text above), MariaDB answers with its greeting, but of course it terminates the connection because, from its point of view, I just sent garbage.

What can I do? Is there a buffer value? Anything related to it?

Caddyfile:

{
  layer4 {
    :3306 {
      route {
        proxy {
          upstream mariadb:3306
        }
      }
    }
  }
}

mholt (Owner) commented Aug 5, 2024

@vnxme Just a quick thought as a side note: I wonder if, instead of upstream, we should call it to like we do in the HTTP proxy (I can't remember if we already talked about this)?

@ydylla Do you think this is fixed by one of the proposed patches? Seems like it shouldn't be waiting to match in this case.

vnxme (Collaborator) commented Aug 5, 2024

@mholt, I've certainly thought about it. I decided to avoid using to and keep upstream, because upstream here, unlike in the http app, may have multiple inner options, such as max_connections and the tls_* ones, defined per upstream. See more here.
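For illustration only, a fuller per-upstream form might look roughly like the sketch below. The inner option names and the placement of the address are assumptions based on the options mentioned above (max_connections and the tls_* family), so check the module's documentation for the exact spellings:

{
  layer4 {
    :3306 {
      route {
        proxy {
          upstream mariadb:3306 {
            # hypothetical inner options; verify exact names against the docs
            max_connections 10
            tls_insecure_skip_verify
          }
        }
      }
    }
  }
}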

In this case we could also use a shorter syntax:

{
  layer4 {
    :3306 {
      route {
        proxy mariadb:3306
      }
    }
  }
}

ydylla (Collaborator) commented Aug 5, 2024

@mholt:

"Do you think this is fixed by one of the proposed patches?"

No. It's essentially a duplicate of #212. prefetch is called eagerly right now so all protocols where the server speaks first will not work. I did not consider these cases during the rewrite.

I will try to make prefetch lazy again.
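
To make the difference concrete, below is a minimal, self-contained Go sketch. It is not caddy-l4's actual code, and all names in it (Matcher, eagerMatch, lazyMatch, silentClient) are hypothetical. It only illustrates why prefetching eagerly stalls server-speaks-first protocols such as MariaDB, while reading lazily lets a match-anything route start proxying immediately:

package main

// Hypothetical sketch, not caddy-l4's real internals: it contrasts eager
// prefetch (read from the client before matching) with lazy prefetch
// (read only when a matcher actually asks for bytes).

import (
	"bytes"
	"fmt"
	"io"
)

// Matcher inspects a connection; it may call peek to look at client bytes.
type Matcher func(peek func(n int) ([]byte, error)) (bool, error)

// matchAlways stands in for a route with no matchers: it never peeks.
func matchAlways(_ func(n int) ([]byte, error)) (bool, error) { return true, nil }

// eagerMatch reads from the client before consulting the matcher. A MySQL or
// MariaDB client sends nothing until it has seen the server greeting, so this
// read yields no data and matching is aborted (in the real module: a timeout).
func eagerMatch(client io.Reader, m Matcher) (bool, error) {
	buf := make([]byte, 1)
	if _, err := client.Read(buf); err != nil {
		return false, fmt.Errorf("aborted matching: %w", err)
	}
	// Simplified: hand the single prefetched byte to the matcher.
	return m(func(int) ([]byte, error) { return buf, nil })
}

// lazyMatch hands the matcher a peek callback and reads from the client only
// on demand, so a matcher that needs no bytes lets proxying start right away
// and the server greeting can flow to the client.
func lazyMatch(client io.Reader, m Matcher) (bool, error) {
	var buffered bytes.Buffer
	peek := func(n int) ([]byte, error) {
		if buffered.Len() < n {
			if _, err := io.CopyN(&buffered, client, int64(n-buffered.Len())); err != nil {
				return buffered.Bytes(), err
			}
		}
		return buffered.Bytes()[:n], nil
	}
	return m(peek)
}

// silentClient models a client that sends nothing; EOF stands in for
// "blocks until timeout" so the example terminates.
type silentClient struct{}

func (silentClient) Read([]byte) (int, error) { return 0, io.EOF }

func main() {
	ok, err := lazyMatch(silentClient{}, matchAlways)
	fmt.Println("lazy: ", ok, err) // lazy:  true <nil>  -> proxying starts immediately

	ok, err = eagerMatch(silentClient{}, matchAlways)
	fmt.Println("eager:", ok, err) // eager: false aborted matching: EOF
}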

mholt (Owner) commented Aug 5, 2024

Thanks. 😃 With all the big changes lately I'm a little behind on the side effects. I appreciate your help!

KevinBLT (Author) commented Aug 6, 2024

Could you add a boolean flag lazy? When it is set, things work like they do now: if an application only connects and does not transmit anything right away, the upstream is chosen on the first byte. Without lazy, the whole connection would be established immediately.

I think it would prevent other users from running into this unexpected situation. I spent time debugging whether the containers were connected, whether there were network errors, etc.

If lazy is set explicitly, the user has to know what they are doing.

ydylla (Collaborator) commented Aug 6, 2024

@KevinBLT Please try it with the lazy-prefetch branch from #229.
There is no need for a lazy flag: reading lazily from the connection during matching works for all cases, and it was also the old behavior before my rewrite. I just forgot to think about server-speaks-first configs 😅

WeidiDeng (Contributor) commented:

@KevinBLT FYI, it's also fixed here; build Caddy with: xcaddy build --with github.com/mholt/caddy-l4=github.com/WeidiDeng/caddy-l4@routes-fixes

KevinBLT (Author) commented Aug 7, 2024

Yes, it is working! Thanks :)
