merge: featurebase merge updates for 2022-10-28 (#2188)
* use t.Fatal(f) to abort tests, not panic
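For reference, a minimal sketch of the pattern (doRestore is a hypothetical stand-in): a panic tears down the entire test binary, while t.Fatal/t.Fatalf fails and aborts only the current test, and still runs its deferred cleanup.

```go
package example

import "testing"

// doRestore stands in for whatever operation a test exercises.
func doRestore() error { return nil }

func TestRestore(t *testing.T) {
	if err := doRestore(); err != nil {
		// panic(err) would abort the whole test binary with a goroutine
		// dump; t.Fatalf fails and stops only this test (via
		// runtime.Goexit, so deferred cleanup still runs) and lets the
		// rest of the run continue.
		t.Fatalf("restore failed: %v", err)
	}
}
```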
* make perf-able run at all, make it easier to debug
switch perf-able to using the same node type we use for other spot instances,
because otherwise it never finds any available capacity.
we switch the perf-able script to use the standard get_value function
instead of direct jq calls.
we try to grab server logs if the restore fails, in the hopes of finding
out why the restore very occasionally fails.
* Fix some issues with running IDK tests in docker. (#2248)
* Stop running TestKafkaSourceIntegration with t.Parallel()
This test can't be run in parallel as it's currently written. Doing so
allows messages from different tests to interleave on the same Kafka
topic.
I didn't attempt to modify the test so it could be run in parallel. That
could be done, but is left for someone more ambitious.
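If someone does pick that up, the usual approach is to give each parallel test its own topic so messages can't interleave. A rough sketch under that assumption (newTestTopic is a hypothetical helper, and the real test would still need to create the topic on the broker):

```go
package kafka_test

import (
	"fmt"
	"strings"
	"testing"
	"time"
)

// newTestTopic builds a topic name unique to this test invocation so
// that parallel tests never share a topic. Hypothetical helper.
func newTestTopic(t *testing.T) string {
	name := strings.ReplaceAll(t.Name(), "/", "-") // Kafka forbids '/' in topic names
	return fmt.Sprintf("idk-test-%s-%d", name, time.Now().UnixNano())
}

func TestKafkaSourceIntegration(t *testing.T) {
	t.Parallel() // safe only once every test writes to its own topic
	topic := newTestTopic(t)
	_ = topic // produce to and consume from this topic only
}
```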
* Remove idk/testenv/certs which got accidentally committed.
also update .gitignore so they stay out of the repo.
* changes to add bool support in idk (#2240)
* initial changes to add bool support in idk
* modifying some default parameters for testing, will revert them later
* adding support for bool in the fragment-making function
* implementing boolean values, without support for empty or null values at this point
* Implement bool support in batch using a map (and a slice for nulls) (#2247)
* Implement bool support in batch using a map (and a slice for nulls)
* Keep the PackBools default for now
But set it explicitly in the ingest tests which rely on it.
* Modify batch to construct bool update like mutex
The code in API.ImportRoaringShard has a switch statement which causes
bool fields to be handled like mutex fields. This means that the
viewUpdate.Clear value should only contain data in the first "row" of
the fragment, which it will treat as records to clear for *all* rows.
This makes more sense for mutex fields; for bool fields, there's only
one other row to clear. But since the code is currently handling them
the same, we need to construct viewUpdate.Clear such that it conforms to
that pattern.
This commit also adds a test which covers this logic.
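As a loose illustration of the convention described above (buildBoolClear is a hypothetical helper; the position math follows the usual fragment layout of rowID*shardWidth + column%shardWidth):

```go
package sketch

// shardWidth is the number of columns per shard (1<<20 in FeatureBase).
const shardWidth = 1 << 20

// buildBoolClear returns bit positions for a mutex-style clear fragment:
// every updated column is encoded into row 0, which ImportRoaringShard
// treats as "clear this record in all rows of the field".
func buildBoolClear(updatedCols []uint64) []uint64 {
	positions := make([]uint64, 0, len(updatedCols))
	for _, col := range updatedCols {
		// position = rowID*shardWidth + (column % shardWidth); rowID is 0.
		positions = append(positions, 0*shardWidth+col%shardWidth)
	}
	return positions
}
```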
* Remove commented code; revert config for testing
This commit also removes the DELETE_SENTINEL case for non-packed bools,
since that isn't supported anyway.
* Revert default setting
* remove inconsistent type scope
* correcting the logic of string conversion to bool (see the ParseBool sketch after this list)
* resolving an error in a test
* adding tests to cover the bool-support code in the batch.go and interface.go files
* modifying interfaces test
* added one more test case
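Regarding the string-to-bool conversion above: for reference, this is the standard-library behavior in Go (the IDK code may accept a different set of spellings):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	for _, s := range []string{"true", "0", "T", "yes"} {
		b, err := strconv.ParseBool(s)
		if err != nil {
			// ParseBool accepts only 1, t, T, TRUE, true, True,
			// 0, f, F, FALSE, false, False; "yes" is an error.
			fmt.Printf("%q: %v\n", s, err)
			continue
		}
		fmt.Printf("%q -> %v\n", s, b)
	}
}
```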
Co-authored-by: Travis Turner <[email protected]>
* resolving bool null field ingestion error (#2254)
* resolving bool null field ingestion error
* testing issues
* adding null support for bools
* updating the null bool field ingestion
* trying to resolve an issue when ingesting null values for the bool type
* adding clearing support for the bool type
* resolving issues with bool null value ingestion
* updating the jwt go package version and removing changes made in the docker compose file
* reverting the jwt go version
* removing v4 of jwt
* adding a comment in the test file to see if SonarCloud accepts this file
* don't obtain stack traces on rbf.Tx creation
We thought stack traces were mildly expensive. We were very wrong.
Due to a complicated issue in the Go runtime, simultaneous requests
for stack traces end up contending on a lock even when they're not
actually contending on any resources. I've filed a ticket in the
Go issue tracker for this:
golang/go#56400
In the meantime: Under some workloads, we were seeing 85% of all
CPU time go into the stack backtraces, of which 81% went into the
contention on those locks. But even if you take away the contention,
that leaves us with 4/19 of all CPU time in our code going into
building those stack backtraces. That's a lot of overhead for a
feature we virtually never use.
We might consider adding backtrace functionality here, possibly
using `runtime.Callers` which is much lower overhead, and allows us
to generate a backtrace on demand (no argument values available,
but then, we never read those because they're unformatted hex
values), but I don't think it's actually very informative to know
what the stack traces were of the Tx; they don't necessarily reflect
the current state of any ongoing use of the Tx, so we can't necessarily
correlate them to goroutine stack dumps, and so on.
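For reference, the lower-overhead alternative mentioned above looks roughly like this: capture raw PCs cheaply with runtime.Callers, and only symbolize them if the trace is ever actually needed.

```go
package trace

import (
	"fmt"
	"runtime"
	"strings"
)

// capture records up to 32 raw program counters. This is the cheap part:
// no symbolization and no formatted-backtrace lock contention.
func capture() []uintptr {
	pcs := make([]uintptr, 32)
	n := runtime.Callers(2, pcs) // skip runtime.Callers and capture itself
	return pcs[:n]
}

// format symbolizes the saved PCs only when someone actually asks.
func format(pcs []uintptr) string {
	var b strings.Builder
	frames := runtime.CallersFrames(pcs)
	for {
		frame, more := frames.Next()
		fmt.Fprintf(&b, "%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
		if !more {
			break
		}
	}
	return b.String()
}
```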
* fb-1729 Enriched Table Metadata (#2255)
enriched metadata for tables
added support for the concept of table and field owners in metadata; a mechanism to derive the owner from HTTP request metadata; and metadata for table descriptions
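A hypothetical sketch of the shape this suggests; the struct fields, header name, and fallback are all assumptions for illustration, not the actual implementation:

```go
package metadata

import "net/http"

// TableMetadata is a hypothetical shape for the enriched metadata:
// a description plus owners for the table and its fields.
type TableMetadata struct {
	Description string
	Owner       string
	FieldOwners map[string]string
}

// ownerFromRequest derives an owner from HTTP request metadata. The
// header name here is an assumption; the real code presumably reads
// whatever identity the auth layer attaches to the request.
func ownerFromRequest(r *http.Request) string {
	if u := r.Header.Get("X-Featurebase-User"); u != "" {
		return u
	}
	return "unknown"
}
```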
* tightened up is/is not null filter expressions (FB-1741) (#2260)
Covers tightening up the handling of filter expressions that contain is/is not null ops. These filters may have to be translated into PQL calls to be passed to the executor. Even though the sql3 language supports nullability for any data type, currently only BSI fields are nullable at the storage engine level (there is a ticket to add support for non-BSI fields, FB-1689: IS SQL Argument returns incorrect error), so when these fields are used in filter conditions we need to handle BSI and non-BSI fields differently.
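A loose sketch of the dispatch this implies; the field-type list, function names, and PQL spelling are all assumptions for illustration, not the actual sql3 planner code:

```go
package sketch

import "fmt"

// fieldIsBSI reports whether a field type is stored as BSI (int, decimal,
// timestamp) rather than as plain bitmap rows. Hypothetical helper.
func fieldIsBSI(fieldType string) bool {
	switch fieldType {
	case "int", "decimal", "timestamp":
		return true
	}
	return false
}

// translateIsNull turns `<field> IS NULL` into a PQL call. Only BSI
// fields are nullable at the storage layer today, so non-BSI fields
// get an error instead.
func translateIsNull(fieldName, fieldType string) (string, error) {
	if !fieldIsBSI(fieldType) {
		return "", fmt.Errorf("field %q: IS NULL not supported for non-BSI fields (FB-1689)", fieldName)
	}
	// The PQL spelling here is illustrative, not the exact planner output.
	return fmt.Sprintf("Row(%s == null)", fieldName), nil
}
```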
* added a test to cover the keyword REPLACE as being synonymous with INSERT (#2261)
* update molecula references to featurebase (#2262)
Co-authored-by: Seebs <[email protected]>
Co-authored-by: Travis Turner <[email protected]>
Co-authored-by: Pranitha-malae <[email protected]>
Co-authored-by: pokeeffe-molecula <[email protected]>
Co-authored-by: Stephanie Yang <[email protected]>