Compress rules, jobs for cos-proxy #634
base: main
Conversation
This was prompted by charmcraft pack failing with "error: can't find Rust compiler".
```python
def _decode_content(encoded_content: str) -> str:
    try:
        # Assuming content is encoded, try decoding it.
        return lzma.decompress(base64.b64decode(encoded_content.encode("utf-8"))).decode()
    except (binascii.Error, lzma.LZMAError):
        # Failed to base64-decode or decompress, so probably not encoded. Return as is.
        return encoded_content
```
Since we are assuming the content is encoded, shouldn't we split this except
so we can log meaningful messages to juju debug-log in case our assumption is False?
```python
RELATION_INTERFACE_NAME = "prometheus_scrape"

DEFAULT_ALERT_RULES_RELATIVE_PATH = "./src/prometheus_alert_rules"
```
Suggested change — introduce a module-level constant for the encoding:
```python
ENCODING = "utf-8"
```
```python
def _encode_content(content: Union[str, bytes]) -> str:
    if isinstance(content, str):
        content = bytes(content, "utf-8")

    return base64.b64encode(lzma.compress(content)).decode("utf-8")
```
Suggested change — use the `ENCODING` constant in both places:
```python
def _encode_content(content: Union[str, bytes]) -> str:
    if isinstance(content, str):
        content = bytes(content, ENCODING)

    return base64.b64encode(lzma.compress(content)).decode(ENCODING)
```
```python
def _decode_content(encoded_content: str) -> str:
    try:
        # Assuming content is encoded, try decoding it.
        return lzma.decompress(base64.b64decode(encoded_content.encode("utf-8"))).decode()
```
Suggested change:
```python
        return lzma.decompress(base64.b64decode(encoded_content.encode(ENCODING))).decode()
```
```python
def _exec(self, cmd) -> str:
    result = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return result.stdout.decode("utf-8").strip()
```
Suggested change:
```python
    return result.stdout.decode(ENCODING).strip()
```
Issue
Some production deployments have several megabytes of relation data just for alert rules and scrape jobs.
Contributing factors to the high volume of relation data:
As a result, every nrpe relation-changed event cos-proxy receives results in reading and writing several megabytes of data, taking several seconds to complete. When many units are at play, the model takes a long time to settle.
In addition, the relation data limit is 16M, and in common environments we already approach 25% of that.
Solution
TODO:
Context
Testing Instructions
Upgrade Notes