Bellon/getjerry customize branch test #2

Open · wants to merge 72 commits into master

Changes from all commits (72 commits)
8bde676
getjerry customize branch
bellondr Dec 13, 2024
43bd780
getjerry customize branch
bellondr Dec 14, 2024
324f0b6
log config change
bellondr Dec 14, 2024
682e4e8
getjerry customize branch
bellondr Dec 16, 2024
ff74a64
getjerry customize branch
bellondr Dec 16, 2024
43749bb
getjerry customize branch
bellondr Dec 16, 2024
e60a25e
getjerry customize branch
bellondr Dec 16, 2024
f7ff383
getjerry customize branch
bellondr Dec 16, 2024
b96e23c
fix issue
bellondr Dec 16, 2024
42a3469
fix issue
bellondr Dec 16, 2024
aaee599
fix issue
bellondr Dec 16, 2024
0ad3169
fix issue
bellondr Dec 16, 2024
1591e4d
fix issue
bellondr Dec 16, 2024
34ba83f
fix issue
bellondr Dec 16, 2024
b436eb6
fix issue
bellondr Dec 16, 2024
218c791
fix issue
bellondr Dec 16, 2024
331f64d
fix issue
bellondr Dec 16, 2024
e5b19ac
fix issue
bellondr Dec 16, 2024
f1a84c3
fix issue
bellondr Dec 16, 2024
110ad03
fix issue
bellondr Dec 16, 2024
fe01adb
fix issue
bellondr Dec 17, 2024
b08be7b
fix issue
bellondr Dec 17, 2024
c469855
fix issue
bellondr Dec 17, 2024
9cbd369
fix issue
bellondr Dec 17, 2024
70f1c43
fix issue
bellondr Dec 17, 2024
d0aa19e
fix issue
bellondr Dec 17, 2024
f19e14a
fix issue
bellondr Dec 17, 2024
0673161
add readiness script
bellondr Dec 17, 2024
c82e24b
fix issue
bellondr Dec 18, 2024
52f3ce3
fix issue
bellondr Dec 18, 2024
32584cf
fix issue
bellondr Dec 18, 2024
64a546a
fix issue
bellondr Dec 18, 2024
6243f1b
fix issue
bellondr Dec 18, 2024
e53eacb
fix issue
bellondr Dec 19, 2024
5ed7200
enlarge bw service size
bellondr Dec 22, 2024
c1b8f31
enlarge bw service size
bellondr Dec 22, 2024
a7c5846
enable metrics
bellondr Dec 22, 2024
7b8fcc1
for lua memory usage test
bellondr Dec 23, 2024
2b78518
reduce memory usage
bellondr Dec 25, 2024
3dd2d9f
reduce memory usage
bellondr Dec 25, 2024
da66a09
this is for test
bellondr Dec 27, 2024
d7cb5ed
this is for test
bellondr Dec 27, 2024
66afafa
this is for test
bellondr Dec 27, 2024
afc53fa
this is for test
bellondr Dec 27, 2024
92203c3
this is for test
bellondr Dec 28, 2024
9fea405
this is for test
bellondr Dec 28, 2024
3f43210
this is for test
bellondr Dec 28, 2024
3042a9d
this is for test
bellondr Dec 28, 2024
62fc791
modsecurity refactor
bellondr Jan 6, 2025
ac6eb1d
modsecurity refactor
bellondr Jan 6, 2025
cadfb69
modsecurity refactor
bellondr Jan 6, 2025
5f4bf2b
modsecurity refactor
bellondr Jan 7, 2025
17e9885
modsecurity refactor
bellondr Jan 7, 2025
b849028
modsecurity refactor
bellondr Jan 7, 2025
c835aac
modsecurity refactor
bellondr Jan 8, 2025
5941d5a
modsecurity refactor
bellondr Jan 8, 2025
5e0861e
modsecurity refactor
bellondr Jan 8, 2025
75a9d6b
ingress controller support large domain
bellondr Jan 8, 2025
0bf6d70
ingress controller support large domain
bellondr Jan 8, 2025
ad1d54c
ingress controller support large domain
bellondr Jan 8, 2025
ef99a90
remove get instance env value from ingress and change ingress annotat…
bellondr Jan 8, 2025
8cb996a
jerry test
bellondr Jan 21, 2025
7bb2d32
jerry test
bellondr Jan 21, 2025
1eeea19
jerry test
bellondr Jan 21, 2025
d02a3f9
HA mode in kubernetes
bellondr Jan 27, 2025
268f1ef
HA mode in kubernetes
bellondr Jan 27, 2025
804ba45
HA mode in kubernetes
bellondr Jan 27, 2025
fea0992
HA mode in kubernetes
bellondr Jan 27, 2025
6a706eb
HA mode in kubernetes
bellondr Jan 27, 2025
cd28a98
HA mode in kubernetes
bellondr Jan 27, 2025
1064507
HA mode in kubernetes
bellondr Jan 27, 2025
fb99077
HA mode in kubernetes
bellondr Jan 27, 2025
2 changes: 1 addition & 1 deletion src/autoconf/Dockerfile
@@ -72,7 +72,7 @@ VOLUME /data

WORKDIR /usr/share/bunkerweb/autoconf

USER autoconf:autoconf
USER root

🟠 Code Vulnerability

last user should not be root

Do not make root the last USER in the Dockerfile, because the container will then run as root. Always run as a user with lower privileges.



⚠️ Potential issue

Critical: Running container as root poses security risks.

Running the container as root significantly increases security risks and violates container security best practices. This change:

  1. Increases the attack surface
  2. Violates the principle of least privilege
  3. Could lead to host system compromise if container is breached

Consider reverting to non-root user:

-USER root
+USER autoconf:autoconf

If root access is absolutely necessary for specific operations, consider:

  1. Using gosu or similar tools to drop privileges after root operations
  2. Documenting why root access is required
  3. Adding additional security controls
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
USER root
USER autoconf:autoconf


HEALTHCHECK --interval=10s --timeout=10s --start-period=60s --retries=6 CMD /usr/share/bunkerweb/helpers/healthcheck-autoconf.sh

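On the review comments above about the USER directive: the snippet below is a generic, hypothetical sketch (not BunkerWeb code) of how a Python entrypoint started as root can finish its privileged setup and then drop to an unprivileged account before doing real work. The "autoconf" account name is assumed from the original USER autoconf:autoconf directive.

# Hypothetical sketch only: drop root privileges from a Python entrypoint
# after any setup that genuinely needs root. Assumes an unprivileged
# "autoconf" user and group exist in the image, as in the original Dockerfile.
import os
import pwd
import grp

def drop_privileges(user: str = "autoconf", group: str = "autoconf") -> None:
    if os.getuid() != 0:
        return  # already unprivileged, nothing to do
    uid = pwd.getpwnam(user).pw_uid
    gid = grp.getgrnam(group).gr_gid
    os.setgroups([])  # clear supplementary groups inherited from root
    os.setgid(gid)    # switch group first, while we still have permission
    os.setuid(uid)    # then switch user; root access is gone from here on
    os.umask(0o077)

# drop_privileges() would be called right after the root-only setup,
# before the autoconf service starts processing events.
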
57 changes: 9 additions & 48 deletions src/autoconf/IngressController.py
@@ -47,15 +47,6 @@ def _to_instances(self, controller_instance) -> List[dict]:
else:
for env in pod.env:
instance["env"][env.name] = env.value or ""
for controller_service in self._get_controller_services():
if controller_service.metadata.annotations:
for (
annotation,
value,
) in controller_service.metadata.annotations.items():
if not annotation.startswith("bunkerweb.io/"):
continue
instance["env"][annotation.replace("bunkerweb.io/", "", 1)] = value
return [instance]

def _get_controller_services(self) -> list:
@@ -66,6 +57,8 @@ def _to_services(self, controller_service) -> List[dict]:
return []
namespace = controller_service.metadata.namespace
services = []
if controller_service.metadata.annotations is None or "bunkerweb.io" not in controller_service.metadata.annotations:
return []
# parse rules
for rule in controller_service.spec.rules:
if not rule.host:
@@ -79,45 +72,13 @@ def _to_services(self, controller_service) -> List[dict]:
services.append(service)
continue
location = 1
for path in rule.http.paths:
if not path.path:
self._logger.warning(
"Ignoring unsupported ingress rule without path.",
)
continue
elif not path.backend.service:
self._logger.warning(
"Ignoring unsupported ingress rule without backend service.",
)
continue
elif not path.backend.service.port:
self._logger.warning(
"Ignoring unsupported ingress rule without backend service port.",
)
continue
elif not path.backend.service.port.number:
self._logger.warning(
"Ignoring unsupported ingress rule without backend service port number.",
)
continue

service_list = self.__corev1.list_service_for_all_namespaces(
watch=False,
field_selector=f"metadata.name={path.backend.service.name},metadata.namespace={namespace}",
).items

if not service_list:
self._logger.warning(
f"Ignoring ingress rule with service {path.backend.service.name} : service not found.",
)
continue

reverse_proxy_host = f"http://{path.backend.service.name}.{namespace}.svc.cluster.local:{path.backend.service.port.number}"
if len(rule.http.paths) > 0:
reverse_proxy_host = "http://localhost:80"
service.update(
{
"USE_REVERSE_PROXY": "yes",
f"REVERSE_PROXY_HOST_{location}": reverse_proxy_host,
f"REVERSE_PROXY_URL_{location}": path.path,
f"REVERSE_PROXY_URL_{location}": "/",
}
)
location += 1
@@ -132,12 +93,12 @@ def _to_services(self, controller_service) -> List[dict]:
) in controller_service.metadata.annotations.items():
if not annotation.startswith("bunkerweb.io/"):
continue

variable = annotation.replace("bunkerweb.io/", "", 1)
server_name = service["SERVER_NAME"].strip().split(" ")[0]
if not variable.startswith(f"{server_name}_"):
continue
service[variable.replace(f"{server_name}_", "", 1)] = value
service[variable] = value
else:
service[variable.replace(f"{server_name}_", "", 1)] = value

# parse tls
if controller_service.spec.tls:
@@ -210,7 +171,7 @@ def __process_event(self, event):
if obj.kind == "Pod":
return annotations and "bunkerweb.io/INSTANCE" in annotations
if obj.kind == "Ingress":
return True
return annotations and "bunkerweb.io" in annotations
if obj.kind == "ConfigMap":
return annotations and "bunkerweb.io/CONFIG_TYPE" in annotations
if obj.kind == "Service":
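
For readers following the annotation handling in the hunks above: the standalone sketch below restates the mapping this branch appears to implement, where bunkerweb.io/ annotations become per-service settings and an optional server-name prefix is stripped. The function name and example values are illustrative, not taken from the PR.

# Illustrative sketch of the annotation-to-setting mapping shown in this diff.
# Not the PR's code; names and example values are made up.
def annotations_to_settings(annotations: dict, server_name: str) -> dict:
    settings = {}
    for annotation, value in (annotations or {}).items():
        if not annotation.startswith("bunkerweb.io/"):
            continue  # only bunkerweb.io/ annotations carry configuration
        variable = annotation.replace("bunkerweb.io/", "", 1)
        if variable.startswith(f"{server_name}_"):
            # e.g. www.example.com_USE_MODSECURITY -> USE_MODSECURITY
            settings[variable.replace(f"{server_name}_", "", 1)] = value
        else:
            # the customized branch also keeps annotations without the prefix
            settings[variable] = value
    return settings

# annotations_to_settings(
#     {"bunkerweb.io/www.example.com_USE_MODSECURITY": "yes"},
#     "www.example.com",
# )  # -> {"USE_MODSECURITY": "yes"}
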
81 changes: 63 additions & 18 deletions src/autoconf/main.py
@@ -15,13 +15,22 @@
from SwarmController import SwarmController
from IngressController import IngressController
from DockerController import DockerController
import uuid
from kubernetes import config
from kubernetes.leaderelection import leaderelection
from kubernetes.leaderelection.resourcelock.configmaplock import ConfigMapLock
from kubernetes.leaderelection import electionconfig

# Get variables
logger = setup_logger("Autoconf", getenv("LOG_LEVEL", "INFO"))
swarm = getenv("SWARM_MODE", "no").lower() == "yes"
kubernetes = getenv("KUBERNETES_MODE", "no").lower() == "yes"
docker_host = getenv("DOCKER_HOST", "unix:///var/run/docker.sock")
wait_retry_interval = getenv("WAIT_RETRY_INTERVAL", "5")
namespace = getenv("NAMESPACE", "default")
pod_name = getenv("POD_NAME", f'auto-{uuid.uuid4()}')
# Authenticate using config file
config.load_incluster_config()
Comment on lines +32 to +33

🛠️ Refactor suggestion

Move Kubernetes config loading into Kubernetes-specific code path

The Kubernetes config loading is currently done globally, which could cause issues in non-Kubernetes modes. Consider moving it into the Kubernetes-specific code path.

-# Authenticate using config file
-config.load_incluster_config()
+if kubernetes:
+    # Authenticate using config file
+    config.load_incluster_config()

Committable suggestion skipped: line range outside the PR's diff.
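
As a reference for the refactor suggested above, here is a minimal, hypothetical sketch (not part of this PR) of loading the Kubernetes configuration only when Kubernetes mode is enabled, with an out-of-cluster fallback for local development.

# Hypothetical sketch, not part of the PR: guard the credential loading so
# Docker and Swarm modes never touch the Kubernetes client.
from os import getenv
from kubernetes import config
from kubernetes.config import ConfigException

kubernetes_mode = getenv("KUBERNETES_MODE", "no").lower() == "yes"

if kubernetes_mode:
    try:
        # Inside a pod: use the mounted service-account token.
        config.load_incluster_config()
    except ConfigException:
        # Outside a cluster (local development): fall back to ~/.kube/config.
        config.load_kube_config()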


if not wait_retry_interval.isdigit():
logger.error("Invalid WAIT_RETRY_INTERVAL value, must be an integer")
@@ -38,19 +47,34 @@ def exit_handler(signum, frame):
signal(SIGINT, exit_handler)
signal(SIGTERM, exit_handler)

try:
# Instantiate the controller
if swarm:
logger.info("Swarm mode detected")
controller = SwarmController(docker_host)
elif kubernetes:
logger.info("Kubernetes mode detected")
controller = IngressController()
else:
logger.info("Docker mode detected")
controller = DockerController(docker_host)

# Wait for instances

def run_on_kubernetes_ha_mode():
lock_name = "autoconfig-election"

config = electionconfig.Config(
ConfigMapLock(lock_name, namespace, pod_name),
lease_duration=17,
renew_deadline=15,
retry_period=5,
onstarted_leading=kubernetes_start,
onstopped_leading=onstopped_leading
)
logger.info(f'I am {pod_name} with {lock_name} in namespace {namespace}')
# Enter leader election
leaderelection.LeaderElection(config).run()


def onstopped_leading():
logger.info(f'{pod_name} is follower')


def kubernetes_start():
logger.info(f'{pod_name} is leader')
controller = IngressController()
start(controller=controller)


def start(controller):
logger.info("Waiting for BunkerWeb instances ...")
instances = controller.wait(wait_retry_interval)
logger.info("BunkerWeb instances are ready 🚀")
@@ -65,8 +89,29 @@ def exit_handler(signum, frame):
Path(sep, "var", "tmp", "bunkerweb", "autoconf.healthy").write_text("ok")
logger.info("Processing events ...")
controller.process_events()
except:
logger.error(f"Exception while running autoconf :\n{format_exc()}")
sys_exit(1)
finally:
Path(sep, "var", "tmp", "bunkerweb", "autoconf.healthy").unlink(missing_ok=True)


def start_server():
try:
# Instantiate the controller
if kubernetes:
run_on_kubernetes_ha_mode()
elif swarm:
logger.info("Swarm mode detected")
controller = SwarmController(docker_host)
else:
logger.info("Docker mode detected")
controller = DockerController(docker_host)

if not kubernetes:
start(controller=controller)

except:
logger.error(f"Exception while running autoconf :\n{format_exc()}")
sys_exit(1)
Comment on lines +109 to +111


🟠 Code Quality Violation

no bare except

Avoid bare except. Try to always use specialized exception names in except blocks.


Comment on lines +109 to +111

⚠️ Potential issue

Improve exception handling

The bare except clause makes it difficult to debug specific issues and could mask important errors.

-    except:
+    except Exception as e:
         logger.error(f"Exception while running autoconf :\n{format_exc()}")
+        logger.error(f"Error type: {type(e).__name__}")
         sys_exit(1)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
except:
logger.error(f"Exception while running autoconf :\n{format_exc()}")
sys_exit(1)
except Exception as e:
logger.error(f"Exception while running autoconf :\n{format_exc()}")
logger.error(f"Error type: {type(e).__name__}")
sys_exit(1)
🧰 Tools
🪛 Ruff (0.8.2)

111-111: Do not use bare except

(E722)

finally:
Path(sep, "var", "tmp", "bunkerweb", "autoconf.healthy").unlink(missing_ok=True)


if __name__ == '__main__':
start_server()
5 changes: 4 additions & 1 deletion src/bw/Dockerfile
@@ -19,11 +19,13 @@ WORKDIR /usr/share/bunkerweb
# Copy python requirements
COPY src/deps/requirements.txt /tmp/requirements-deps.txt
COPY src/common/gen/requirements.txt deps/requirements-gen.txt
COPY src/common/db/requirements.txt deps/requirements-db.txt

# Install python requirements
RUN export MAKEFLAGS="-j$(nproc)" && \
pip install --break-system-packages --no-cache-dir --require-hashes --ignore-installed -r /tmp/requirements-deps.txt && \
pip install --break-system-packages --no-cache-dir --require-hashes --target deps/python -r deps/requirements-gen.txt
pip install --break-system-packages --no-cache-dir --require-hashes --target deps/python -r deps/requirements-gen.txt && \
pip install --break-system-packages --no-cache-dir --require-hashes --target deps/python -r deps/requirements-db.txt

# Copy files
# can't exclude deps from . so we are copying everything by hand
@@ -36,6 +38,7 @@ COPY src/common/cli cli
COPY src/common/confs confs
COPY src/common/core core
COPY src/common/gen gen
COPY src/common/db db
COPY src/common/helpers helpers
COPY src/common/settings.json settings.json
COPY src/common/utils utils
20 changes: 19 additions & 1 deletion src/common/confs/default-server-http.conf
@@ -97,6 +97,25 @@ server {
})
}
}
{% else +%}
location / {
etag off;
proxy_pass "http://localhost:80";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-Host $http_host;

proxy_set_header X-Forwarded-Prefix "/";

proxy_buffering on;

proxy_connect_timeout 60s;
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
{% endif %}

# include core and plugins default-server configurations
@@ -189,5 +208,4 @@ server {
logger:log(INFO, "log_default phase ended")

}

}
6 changes: 6 additions & 0 deletions src/common/confs/healthcheck.conf
@@ -15,6 +15,12 @@ server {
}
}

location /nginx_status {
stub_status on;
allow 127.0.0.1;
deny all;
}

# disable logging
access_log off;

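Since this hunk only adds the /nginx_status endpoint, a small, hypothetical example of consuming it may help: the script below parses nginx's stub_status output into a dict. The URL and port are placeholders, not the address BunkerWeb actually uses; adjust them to wherever the healthcheck server block listens.

# Hypothetical consumer of the new /nginx_status endpoint (stub_status format).
# The URL below is a placeholder; only 127.0.0.1 is allowed by the config.
import re
from urllib.request import urlopen

def read_stub_status(url: str = "http://127.0.0.1:8080/nginx_status") -> dict:
    text = urlopen(url, timeout=5).read().decode()
    active = int(re.search(r"Active connections:\s*(\d+)", text).group(1))
    accepts, handled, requests = (
        int(n) for n in re.search(r"\n\s*(\d+)\s+(\d+)\s+(\d+)", text).groups()
    )
    reading, writing, waiting = (
        int(n)
        for n in re.search(
            r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting:\s*(\d+)", text
        ).groups()
    )
    return {
        "active": active, "accepts": accepts, "handled": handled,
        "requests": requests, "reading": reading, "writing": writing,
        "waiting": waiting,
    }
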
9 changes: 9 additions & 0 deletions src/common/confs/http-modsec-crs/http-http3.conf
@@ -0,0 +1,9 @@
{% if USE_MODSECURITY == "yes" and MODSECURITY_CRS_VERSION == "3" and HTTP3 == "yes" +%}
SecAction \
"id:900230,\
phase:1,\
nolog,\
pass,\
t:none,\
setvar:'tx.allowed_http_versions=HTTP/1.0 HTTP/1.1 HTTP/2 HTTP/2.0 HTTP/3 HTTP/3.0'"
{% endif %}
4 changes: 4 additions & 0 deletions src/common/confs/http-modsecurity/http-modsecurity.conf
@@ -0,0 +1,4 @@
{% if USE_MODSECURITY == "yes" +%}

⚠️ Potential issue

Fix template syntax error

The template syntax +%} appears incorrect. The plus sign should be removed as it's not valid in standard template syntax.

-{% if USE_MODSECURITY == "yes" +%}
+{% if USE_MODSECURITY == "yes" %}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
{% if USE_MODSECURITY == "yes" +%}
{% if USE_MODSECURITY == "yes" %}

modsecurity on;
modsecurity_rules_file /etc/nginx/http-modsecurity/modsecurity-rules.conf.modsec;
{% endif %}