🎄 Defined Schemas with auto migration #7091
Conversation
OK, the implementation concepts are done; only the testing part (and docs) is missing.
Codecov Report
@@ Coverage Diff @@
## master #7091 +/- ##
==========================================
- Coverage 93.92% 84.79% -9.13%
==========================================
Files 181 182 +1
Lines 13209 13365 +156
==========================================
- Hits 12406 11333 -1073
- Misses 803 2032 +1229
Continue to review full report at Codecov.
@Moumouls I played with your Gist snippet and the schema generation works pretty well, congrats on the nice job!
It seems that enabling the single schema cache does not fix the issue :(
Any ideas?
Thanks @L3K0V. During this implementation I discovered that a field option change (like adding/modifying …). The Parse Server onboarded implementation will have better stability and many little improvements! Thanks for your report about the … I hope we can get reviews from @mtrezza @dplewis @davimacedo to merge this one ASAP :) (As several contributors are on holiday, I think we will be able to merge this one at the beginning of January.)
Did you try to start Parse Server without the Redis adapter?
@L3K0V could you post your issue with |
Great job, @Moumouls! I have some questions.
@@ -763,7 +763,7 @@ describe('schemas', () => {
    });
  });

-  it('refuses to put to existing fields, even if it would not be a change', done => {
+  it('refuses to put to existing fields with different type, even if it would not be a change', done => {
I am wondering if it can cause some side effects (probably in the dashboard?)
No side effect, it's just to support field options.
  this.resetSchemaOps(schema);
}

async execute() {
Why can't we just clean up the _Session collection and populate it with the startup data?
Not sure I understand; what do you mean by clean up the _Session?
Sorry. Clean up the _SCHEMA collection.
I'm not sure it will work, since cleaning up the schema will not perform field deletion or index deletion on the database.
I'm not sure if we should delete fields/indexes. Deleting a column using the dashboard currently does not perform these operations. Also, it could take a long time for a huge class. Why don't we just leave the data over there?
I share the concerns regarding auto-deletion of field data and indexes. How would a field of a large collection be deleted without impacting DB performance? See #6823.
Hmm, to be honest I currently don't know the answer, nor the solution. I have checked your linked issue/PR.
I suppose you have already encountered a performance issue on your projects with a field deletion? Deleting a field takes 100% of the database processor and also slows down other requests?
I don't know if Mongo/Postgres support field delete/unset in the background without eating all the CPU, or if we should manage this on our side.
The PR describes that deleting a field in Parse Dashboard (as it works today) can be resource intensive for large collections. In the PR, the deletion of a field takes advantage of an index, if there is one, which has to be manually created and removed. The reason the index is not auto-created is that index creation can also be resource intensive for large collections.
So here it seems we are blocked :/
I think as a first step we can add documentation on the issue of running field deletion on large collections, and also raise a warning on field deletion with a link to the docs.
Currently I do not see a solution that ensures both data consistency and operation performance for this case :/
I'm totally open to implementation suggestions.
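As a sketch of the warning idea above (the helper name, message wording, and logger parameter are hypothetical, not part of this PR's actual code):

```javascript
// Hypothetical helper: warn before a migration deletes fields from a class.
// `existingFields` comes from the database schema, `definedFields` from the
// predefined schema; any field present in the DB but absent from the
// definition is about to be deleted.
function warnOnFieldDeletion(className, existingFields, definedFields, logger = console) {
  const defined = new Set(Object.keys(definedFields));
  const toDelete = Object.keys(existingFields).filter(f => !defined.has(f));
  for (const field of toDelete) {
    logger.warn(
      `Deleting field "${field}" on class "${className}". ` +
        'On large collections this operation can be resource intensive; ' +
        'see the docs on schema migrations before running it in production.'
    );
  }
  return toDelete;
}
```

For example, `warnOnFieldDeletion('Post', { title: {}, legacyFlag: {} }, { title: {} })` would warn about and return `['legacyFlag']`.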
Yes, I think a prominent warning in the docs would be helpful. This topic gets further complicated when looking at the differences in index building processes between MongoDB <4.4 and >=4.4.
A pragmatic solution could be to define a step-by-step process for the developer: manually create the additional required indices, then deploy with the changed schema, then manually delete the unused indices. Though I believe a more sophisticated and automated solution is possible with a deeper look into the MongoDB index building process.
@Moumouls I have very limited time (and about 6 PRs to complete for Parse JS + Server), but I'll try to implement the suggestions I made above. @++
No problem @sadortun, I've also had some time issues working properly on my Parse Server PRs. It's just to know if you want to contribute in the future with me on the schema feature (since it's a huge one with many improvements planned ahaha).
@Moumouls Just to avoid duplicate work... Do you plan on rebasing your PR on master? If so, when? Any other changes/updates planned in the short term? If you answer, I'll also try to add type definitions to the config structure. @++
@sadortun hello there, so currently I'm having a hard time working on my Parse Server PRs 🙁 I'll try at least to sync this PR with master again; then only some little adjustments will be needed.
# Conflicts:
#   CHANGELOG.md
#   spec/ParseFile.spec.js
#   src/Config.js
#   src/Options/Definitions.js
#   src/Options/docs.js
#   src/ParseServer.js
I think the current process is: run Prettier, then lint. The result is what the CI expects.
@Moumouls awesome. I'm working on a different project for about another week; after that I'll be able to work on this.
Is anyone willing to pick up this PR, so we can get it merged?
@mtrezza yes, I can get this PR ready. The only issue blocking me now is that the new project I'll be integrating will be used in production soon; how stable is …? (Timeframe is roughly ~2 weeks to get my project into production, so I could work on this this week.)
Ouch! We are still working on the alpha / beta release channels for Parse Server, so master is currently as stable as it gets without significant user feedback and testing. Big pain point. I think for now we just have to assume that it is stable (enough).
OK, thanks! I'll complete the implementation in two steps.
Hi there @mtrezza @sadortun, this PR is synced with my fork, which has been used in production for more than a year now, so I can say it's pretty stable. The last missing point here may be to allow the user to choose the migration policy: automatically at server startup, or manually at a given time.
What is your idea here? 🙂 We need to be careful here: the current config structure (class definition) is ISO with the Parse REST / JS SDK. We need to avoid a new interface, since it could lead to hard maintenance: each change to Parse.Schema could require a specific adjustment to the predefined schema. Currently, any change/feature on Parse.Schema can be used directly in the predefined schema 🙂
I'm not touching the actual schema structure. I was thinking of a few things, mostly related to when a field is missing in the schema but defined on the server: having some logging that tells the user what is missing from their schema definitions. Basically, if you have defined your object in the Parse Dashboard and then run the migration tool, it should tell you "hey, I found this class defined in Parse, but not in your schema, and it contains X, Y, Z...". Since you're around, what do you think about moving buildSchema() into Parse Server? @++
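A rough sketch of the logging idea described above (the function name, message wording, and logger parameter are hypothetical, not an API of this PR):

```javascript
// Hypothetical helper: report classes that exist on the server but are
// missing from the local schema definitions, so the developer can add them.
// `serverSchemas` and `definedSchemas` are arrays of objects following the
// Parse Schema REST format ({ className, fields, ... }).
function reportMissingClasses(serverSchemas, definedSchemas, logger = console) {
  const definedNames = new Set(definedSchemas.map(s => s.className));
  const missing = serverSchemas.filter(s => !definedNames.has(s.className));
  for (const schema of missing) {
    const fields = Object.keys(schema.fields || {}).join(', ');
    logger.info(
      `Found class "${schema.className}" on the server but not in your ` +
        `schema definitions; it contains: ${fields}`
    );
  }
  return missing.map(s => s.className);
}
```

Running this at migration time would give the developer the "hey, I found this class..." message for each class that only exists on the server.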
const CLPKeys = ['find', 'count', 'get', 'create', 'update', 'delete', 'addField'];
CLPKeys.forEach(key => {
  if (!clp[key]) {
    clp[key] = cloudCLP[key] || { '*': true };
@Moumouls @mtrezza Please correct me if I'm wrong, but it seems from this that when no CLP is defined in the schema, the default is Read/Write/Add for anonymous?
From a security perspective, I think this should have no permissions by default, or throw an error: Missing CLP for "ObjectName".
I understand the behaviour from Parse Dashboard, but in the dashboard, when the user adds a class, it's created public by default, BUT there is a clear indication at the top that the class is public...
With the schema migration behaviour being accessible by default to anonymous users, the developer will probably have no idea their class is public. It seems dangerous, and not a "secure by default" approach.
I think it would be reasonable to do either of the following:
- Set everything to private and display a warning: CLP is not defined for "ObjectName", the CLP has been set to PRIVATE and the objects will only be accessible using a MASTER_KEY. Please see [link to docs] for details
- Throw an error until any kind of CLP is set in the schema: Missing CLP for "ObjectName"
@++
Samuel
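A minimal sketch of the "private by default with a warning" option suggested above (this is a hypothetical variant, not the PR's actual code; it relies on the convention that an empty CLP object means master-key-only access):

```javascript
// Hypothetical secure-by-default variant of the CLP merge: instead of falling
// back to public access ({ '*': true }) when a key is missing, fall back to an
// empty CLP (master key only) and warn the developer.
const CLP_KEYS = ['find', 'count', 'get', 'create', 'update', 'delete', 'addField'];

function mergeCLPSecureByDefault(className, clp = {}, cloudCLP = {}, logger = console) {
  const merged = { ...clp };
  for (const key of CLP_KEYS) {
    if (!merged[key]) {
      if (cloudCLP[key]) {
        merged[key] = cloudCLP[key];
      } else {
        // Empty object: only the master key can perform this operation.
        merged[key] = {};
        logger.warn(
          `CLP "${key}" is not defined for "${className}"; it has been set ` +
            'to private and is only accessible using the master key.'
        );
      }
    }
  }
  return merged;
}
```

With this approach, a class whose schema omits `create` would silently reject anonymous writes instead of silently accepting them.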
Changing to draft; the PR needs a rebase and the open questions in previous review feedback need resolving.
@Moumouls I noticed that you assign your own PRs to yourself. It's not clear to me why, but I removed your assignments from this and your other PRs, because we don't use this feature and it is not clear what it implies for other contributors. Every PR can be picked up by anyone who wants to contribute. Especially in the case of PRs that are inactive for months, I am afraid it may cause others to think that the PR or issue is somehow "reserved" and may deter them from further working on the PR.
Closing, continued in #7418 |
New Pull Request Checklist
Issue Description
Allow starting Parse Server with predefined and locked schemas.
This feature seems to be wanted, since the two community discussions on this subject have gathered 263 views 👍
Related issue: #7063
Approach
As discussed on the Parse Community forum, it's a first, simple implementation outside of the schema controller:
https://community.parseplatform.org/t/possibility-to-set-class-level-permissions-via-file/1061/23
This is a simple implementation that runs after Parse Server initialization. The script performs operations depending on the fields found in the DB.
Important note: predefined schemas will block all Schema routes except the delete one, to avoid a dual source of truth (and a huge mess). The delete route is preserved to avoid unwanted data loss; the developer will have the choice to delete the class through the Parse Dashboard or a DB shell.
Usage example:
Schemas follow the Parse Schema REST JSON structure:
Full example here: https://github.com/Moumouls/next-atomic-gql-server/blob/master/src/schema/schemas/User.ts
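For illustration, a definition following that structure might look like the sketch below. The field names, index name, and CLP choices are made up for this example, and the exact way the object is passed to Parse Server may differ from this sketch; see the full example linked above for the real shape.

```javascript
// Hypothetical predefined schema following the Parse Schema REST JSON
// structure: className, fields, indexes, classLevelPermissions.
const UserSchema = {
  className: '_User',
  fields: {
    // Field types use the Parse Schema REST format.
    firstname: { type: 'String' },
    birthDate: { type: 'Date' },
    tags: { type: 'Array' },
  },
  indexes: {
    tagsIndex: { tags: 1 },
  },
  classLevelPermissions: {
    find: { requiresAuthentication: true },
    get: { requiresAuthentication: true },
    create: { '*': true },
    update: { requiresAuthentication: true },
    delete: {}, // empty CLP: master key only
  },
};
```

Because the structure is ISO with Parse.Schema, anything expressible through the REST schema API can be expressed in such a definition.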
TODOs before merging