Remove/increase the record size limit #7332
Conversation
src/jrd/val.h (Outdated)
```diff
 };

-const ULONG MAX_RECORD_SIZE = 65535;
+const ULONG MAX_RECORD_SIZE = 1000000; // just to protect from misuse
```
Wouldn't 1048576 (1 MB) be easier to document/explain?
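For reference, a minimal sketch of the suggested spelling, assuming the same declaration site in src/jrd/val.h (ULONG being Firebird's own typedef):

```cpp
// Hypothetical variant of the new limit: a power of two, so it documents
// cleanly as exactly 1 MiB rather than an arbitrary decimal million.
const ULONG MAX_RECORD_SIZE = 1024 * 1024;  // 1 MiB, just to protect from misuse
```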
Agreed. But my primary worry is whether we can foresee any other problems with this change. Increased tempspace usage is bad, but it's only a performance issue (those using very long records should keep that in mind). Longer records will also increase memory usage. For very complex queries (those near the 255-context limit), if we imagine that e.g. every second stream has its rpb_record, then the maximum memory usage per query (worst case) increases from 8MB to 128MB. With many compiled statements being cached, this may become a problem, although in practice we shouldn't expect all tables to be that wide. Or we should release the rpb's records of cached requests when their use count goes to zero. Any other issue you can think of?
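To make the worst-case arithmetic above concrete, here is a back-of-envelope check using round power-of-two figures (128 of the 255 contexts, i.e. "every second stream", each holding one rpb_record at the maximum size):

```cpp
#include <cstdio>

int main()
{
    const unsigned long streams  = 128;            // ~every second of 255 contexts
    const unsigned long oldLimit = 64UL * 1024;    // former 64KB record size limit
    const unsigned long newLimit = 1024UL * 1024;  // proposed 1MB limit

    // Worst case: one materialized rpb_record per stream.
    std::printf("old: %lu MB\n", streams * oldLimit / (1024 * 1024));  // prints 8
    std::printf("new: %lu MB\n", streams * newLimit / (1024 * 1024));  // prints 128
}
```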
Although the memory usage issue is not only about user statements but also about procedures/functions/triggers, which are also cached. Maybe EXE_unwind() should delete all rpb_record's after closing the rsb's and releasing the local tables? Or should it be done by RecordStream::invalidateRecords()?
As Firebird is used in LibreOffice, I suppose that 10MB would be a more reasonable limit for them.
BTW, is there the same sanity check for the result set record size, or is it completely unlimited?
Unlimited.
The patch passed all the CI tests successfully (except the explicit checks for the max record size). The sorting module switches to the "refetch" mode while processing long records, so the memory consumption remains low. Hash joins are slightly affected from the memory consumption POV, but the effect is limited to the right part of the join, which usually has low cardinality. Merge joins may be more affected, but this just means switching to the temp files earlier; the maximum memory usage is still restricted by the … I still suppose that it makes sense to release the rpb_record's of cached requests, as discussed above.
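For readers unfamiliar with the "refetch" mode mentioned above, the idea (sketched here with invented names and types, not Firebird's actual sort code) is to keep only the sort key and a row locator in sort memory and re-read the full record once the order is established:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical illustration of key-plus-locator ("refetch") sorting: long
// records are never copied into the sort buffers, so sort memory stays
// proportional to the key size, not the record size.
struct SortEntry
{
    int64_t  key;    // the sort key (assumed comparable by value)
    uint64_t rowId;  // locator used to refetch the full record afterwards
};

void sortEntries(std::vector<SortEntry>& entries)
{
    std::sort(entries.begin(), entries.end(),
              [](const SortEntry& a, const SortEntry& b) { return a.key < b.key; });
    // The caller then walks 'entries' in order, refetching each record by rowId.
}
```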
This addresses ticket #1130. After the compression improvements, the storage overhead is no longer an issue. I think we should still preserve some safety limit, e.g. 1MB. This change suggests some other improvements too, like compression of the stored temporary records (sorts, record buffers), but they may be addressed separately.