[Bug]: After table compression database size increased a lot #5850
Comments
Hi @yinan8128, thank you for filing the issue! As explained in #5582, this could happen if the compression errored out for some reason before reclaiming the space. Running […] Alternatively, you can continue using version […] Can you try either of these and let us know if that helps improve your situation?
Hi, thank you for your reply.
Vacuum script: […]
Vacuum log in Ansible: […]
We compared the database size and no space was released. Should we run VACUUM FULL, or is a plain VACUUM okay?
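For reference, a minimal sketch of how the sizes could be compared, assuming a hypothetical hypertable named `metrics` (not the reporter's real table); `hypertable_size()` and the `timescaledb_information.chunks` view are standard TimescaleDB helpers:

```sql
-- Overall database size vs. hypertable size ("metrics" is a hypothetical name).
SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;
SELECT pg_size_pretty(hypertable_size('metrics'))           AS hypertable_size;

-- Per-chunk view, including whether each chunk is compressed.
SELECT chunk_schema, chunk_name, range_start, range_end, is_compressed
FROM   timescaledb_information.chunks
WHERE  hypertable_name = 'metrics'
ORDER  BY range_start;
```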
Hi @yinan8128, you can try […]
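The suggestion above was truncated in this copy of the thread. As general PostgreSQL background (not necessarily the reply that was given), here is the difference between the two options asked about, again using the hypothetical table name `metrics`:

```sql
-- Plain VACUUM marks dead tuples as reusable but usually does not shrink files on disk.
VACUUM (VERBOSE, ANALYZE) metrics;

-- VACUUM FULL rewrites the table and returns space to the OS, but it takes an
-- ACCESS EXCLUSIVE lock for the duration, so schedule it in a maintenance window.
VACUUM FULL metrics;
```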
Hi, thank you for your reply.
Hi @yinan8128, any errors in the jobs will be recorded in the job_errors table, so if you don't see anything there, it is highly unlikely that the compression job failed. But since your database size increased after compression, the most likely explanation is that the old uncompressed rows have not been vacuumed yet. It could be that vacuum has not freed those rows because of some long-running transaction. You can follow this guide - https://emmer.dev/blog/finding-long-running-queries-in-postgresql/ - to check whether you have any of those. If you do have such a transaction running, cancel it and then retry the vacuum. As for why this happened, it could be that you have hit a bug, but without an error log or an entry in job_errors this is harder to investigate.
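Along the lines of the linked article, a hedged sketch for spotting long-running transactions that can keep VACUUM from removing dead rows (the one-hour threshold is arbitrary):

```sql
-- Transactions that have been open for more than an hour.
SELECT pid,
       now() - xact_start AS xact_age,
       state,
       query
FROM   pg_stat_activity
WHERE  xact_start IS NOT NULL
  AND  now() - xact_start > interval '1 hour'
ORDER  BY xact_age DESC;

-- If one of them is stale, cancel its current query or terminate the backend:
-- SELECT pg_cancel_backend(<pid>);
-- SELECT pg_terminate_backend(<pid>);
```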
Hi @yinan8128, any chance you can re-run your setup on TimescaleDB 2.11 and see whether the issue is resolved?
Hi, thank you for the update. We have already disabled compression in that production environment; we will check another production environment that has the problem and report back to you.
Hello @yinan8128, some more questions: […]
Hello, there are retention jobs and compression jobs, and there are no records in timescaledb_information.job_errors, which should mean no job failed, correct? From the attached image you can see that retention worked (the database size went down 4 times), while compression caused the database size to increase 7 times.
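For completeness, a sketch of how the job history can be double-checked (the `timescaledb_information.job_errors` view is available from TimescaleDB 2.9 onward, so it exists on 2.10.1):

```sql
-- Background jobs currently configured (retention, compression, ...).
SELECT job_id, proc_name, scheduled, config
FROM   timescaledb_information.jobs;

-- Any recorded failures of those jobs.
SELECT job_id, proc_name, start_time, finish_time, sqlerrcode, err_message
FROM   timescaledb_information.job_errors
ORDER  BY start_time DESC;
```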
@yinan8128 Note that you need to upgrade to 2.11 to allow a VACUUM on the hypertable to release disk space. Your earlier vacuum script won't work because the informational view you used doesn't show the "internal" compressed chunks. You need to query the […]
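A hedged sketch of what querying those internal objects could look like, assuming the compressed data is stored in tables named `compress_hyper_%` inside the `_timescaledb_internal` schema (how current TimescaleDB versions lay it out; verify against your installation before running anything):

```sql
-- Generate VACUUM statements for the internal compressed-chunk tables.
SELECT format('VACUUM (VERBOSE) %I.%I;', schemaname, tablename) AS vacuum_cmd
FROM   pg_tables
WHERE  schemaname = '_timescaledb_internal'
  AND  tablename LIKE 'compress_hyper_%';
-- Execute the generated statements; on 2.11+ a VACUUM on the hypertable itself
-- should cover these internal chunks, as noted above.
```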
Hi, we found that manual compression can also make the chunk and table size increase. We added reproduction steps to ticket #5961; could you please help check?
Hi @yinan8128, we replied to the questions asked as part of issue #5961. Please let us know if you have any further questions.
Dear Author, This issue has been automatically marked as stale due to lack of activity. With only the issue description that is currently provided, we do not have enough information to take action. If you have or find the answers we would need, please reach out. Otherwise, this issue will be closed in 30 days. Thank you!
What is the relationship between compression ratio, query efficiency, and the data?
Which cases give a good compression ratio and good query efficiency? Will case 3 give the best compression ratio but the worst query efficiency, since queries need to decompress the compressed chunks first? Thanks.
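One way to compare the cases empirically would be to measure the ratio each configuration actually achieves; a sketch using the hypothetical hypertable `metrics`:

```sql
-- Before/after sizes for the compressed chunks of a hypertable.
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM   hypertable_compression_stats('metrics');
```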
Hi, I believe this was answered in your other issue. Thanks.
Dear Author, We are closing this issue due to lack of activity. Feel free to add a comment to this issue if you can provide more information and we will re-open it. Thank you!
What type of bug is this?
Other
What subsystems and features are affected?
Compression, Other
What happened?
Our production environment encountered an issue where, after table compression, the table size was reduced but the overall database size increased a lot.
Table and compression definition: […]
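The actual definition did not survive in this copy of the report. Purely for context, a hedged sketch of a typical hypertable-plus-compression setup; the table, columns, and intervals below are hypothetical, not the reporter's real schema:

```sql
-- Hypothetical example schema, not the one from the report.
CREATE TABLE metrics (
    ts        timestamptz      NOT NULL,
    device_id integer          NOT NULL,
    value     double precision
);
SELECT create_hypertable('metrics', 'ts');

ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'ts DESC'
);
SELECT add_compression_policy('metrics', INTERVAL '7 days');
```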
Table size before compression: 3.23 GB
Table size after compression: 847 MB
Total database size before compression: 93.4 GB
Total database size after compression: 101 GB
No other table compressions were happening at the same time.
There is another ticket, #5582; in that case space was not reclaimed, but the database size did not increase.
Please advise, thanks.
TimescaleDB version affected
2.10.1
PostgreSQL version used
14.6
What operating system did you use?
CentOS Linux release 7.9.2009
What installation method did you use?
RPM
What platform did you run on?
On prem/Self-hosted
Relevant log output and stack trace
No response
How can we reproduce the bug?