
lib: Memory spike reduction for sh cmds at scale #1

Conversation

raja-rajasekar

The output buffer vty->obuf is a linked list where each element is 4 KB.
Currently, when a heavy show command such as `show ip route json` is executed at large scale, all the vty_out calls are processed and the entire output is accumulated in memory. Only after the command finishes does vtysh_flush process this data and write it to the socket (131 KB at a time).

The problem is the memory spike this causes for such heavy-duty show commands.

The fix is to chunk the output on the VTY shell by flushing it intermittently, once for every 128 KB of output accumulated, and freeing the memory allocated for the drained buffer data.

This way, we achieve a ~25-30% reduction in the memory spike.
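
For illustration, here is a minimal standalone C sketch of the chunked-flush idea. It is not FRR's actual vty/buffer code; the types and names (`obuf`, `obuf_append`, `FLUSH_THRESHOLD`) are hypothetical, and error handling is elided. Output accumulates in 4 KB list elements, and once 128 KB has built up it is written to the socket and the drained elements are freed, so peak memory stays bounded instead of growing with the full output of the show command:

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK_SIZE (4 * 1024)        /* one list element, as in vty->obuf */
#define FLUSH_THRESHOLD (128 * 1024) /* flush once this much accumulates */

struct obuf_chunk {
	char data[CHUNK_SIZE];
	size_t used;
	struct obuf_chunk *next;
};

struct obuf {
	struct obuf_chunk *head, *tail;
	size_t total; /* bytes accumulated since the last flush */
	int fd;       /* socket to flush into */
};

/* Write all buffered chunks to the socket and free them, so the
 * buffer's memory footprint drops back to zero after every flush. */
static void obuf_flush(struct obuf *ob)
{
	struct obuf_chunk *c = ob->head;

	while (c) {
		struct obuf_chunk *next = c->next;

		/* Partial writes and write errors elided in this sketch. */
		(void)write(ob->fd, c->data, c->used);
		free(c);
		c = next;
	}
	ob->head = ob->tail = NULL;
	ob->total = 0;
}

/* Append output, flushing whenever 128 KB has built up, so the buffer
 * never grows to the full size of the show command's output. */
static void obuf_append(struct obuf *ob, const char *buf, size_t len)
{
	while (len > 0) {
		if (!ob->tail || ob->tail->used == CHUNK_SIZE) {
			struct obuf_chunk *c = calloc(1, sizeof(*c));

			if (ob->tail)
				ob->tail->next = c;
			else
				ob->head = c;
			ob->tail = c;
		}

		size_t room = CHUNK_SIZE - ob->tail->used;
		size_t n = len < room ? len : room;

		memcpy(ob->tail->data + ob->tail->used, buf, n);
		ob->tail->used += n;
		ob->total += n;
		buf += n;
		len -= n;

		if (ob->total >= FLUSH_THRESHOLD)
			obuf_flush(ob);
	}
}
```

Flushing at a 128 KB boundary (rather than per 4 KB element) keeps the number of socket writes low while still capping how many list elements can pile up at once.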

Signed-off-by: Srujana <[email protected]>

Signed-off-by: Rajasekar Raja <[email protected]>
@raja-rajasekar raja-rajasekar deleted the raja_srujana branch August 27, 2024 19:07