Persisting dfs / Prefect Results to DB:
- For writing to SQL we are already explicitly specifying the column types for the database, using dtypes from the SQLAlchemy metadata object that's generated by `Package.to_sql()`:
```python
# Load any tables that exist in our dictionary of dataframes into the
# corresponding table in the newly created database:
for table in md.sorted_tables:
    dfs[table.name].to_sql(
        table.name,
        engine,
        if_exists="append",
        index=False,
        dtype={c.name: c.type for c in table.columns},
    )
```
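For context, here's a minimal self-contained sketch of the objects the loop above assumes. In reality `md` comes from `Package.to_sql()`; here it's built by hand, and the table name, columns, and sample rows are purely illustrative:

```python
import pandas as pd
import sqlalchemy as sa

# Stand-in for the metadata object Package.to_sql() would generate.
# Defining the Table registers it on md:
md = sa.MetaData()
sa.Table(
    "plants", md,
    sa.Column("plant_id", sa.Integer, primary_key=True),
    sa.Column("plant_name", sa.String),
    sa.Column("capacity_mw", sa.Float),
)

engine = sa.create_engine("sqlite:///:memory:")
md.create_all(engine)  # create the (still empty) tables defined above

# Dictionary of dataframes keyed by table name, as used in the loop:
dfs = {
    "plants": pd.DataFrame(
        {
            "plant_id": [1, 2],
            "plant_name": ["Plant A", "Plant B"],
            "capacity_mw": [850.0, 280.0],
        }
    ),
}
```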
- Pandas added a `dtype` argument to `read_sql_query()` in v1.3.0, which does not exist in the `read_sql()` wrapper or the `read_sql_table()` function. So if we are using an SQL query rather than trying to read the whole table, we can specify what data types we get in the resulting dataframe using the same metadata structures we've already defined (see the sketch below).
- Though this would mean that we need to define metadata for the intermediary tables and columns as well (anything that's going to get persisted and read back out). But if the intent is for these tables to stick around for reference and re-use, then that's something we'd already be doing.
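A minimal sketch of what that read-side dtype specification could look like, continuing from the setup above. One assumption worth flagging: pandas' `dtype=` argument expects pandas/numpy dtypes rather than SQLAlchemy types, so the `SA_TO_PD` mapping and `pandas_dtypes()` helper below are hypothetical glue, not an existing API:

```python
import pandas as pd
import sqlalchemy as sa

# Hypothetical translation from SQLAlchemy column types to pandas dtypes,
# since read_sql_query(dtype=...) wants pandas/numpy dtypes:
SA_TO_PD = {
    sa.Integer: "Int64",    # nullable integer
    sa.Float: "float64",
    sa.String: "string",
    sa.Boolean: "boolean",  # nullable boolean
}

def pandas_dtypes(table: sa.Table) -> dict:
    """Translate a table's SQLAlchemy column types into pandas dtypes."""
    return {
        c.name: SA_TO_PD[type(c.type)]
        for c in table.columns
        if type(c.type) in SA_TO_PD
    }

# Reuse the metadata we already defined on the write side (md and engine
# as in the earlier sketch); requires pandas >= 1.3.0:
plants = md.tables["plants"]
df = pd.read_sql_query(
    "SELECT * FROM plants WHERE capacity_mw > 500",
    engine,
    dtype=pandas_dtypes(plants),
)
```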
- For many of these additional tables we would not need to have descriptions, foreign key relationships, primary keys, etc. Those would only make sense in the context of the "real" normalized database tables (see the sketch below).
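To illustrate the difference, the intermediate tables' metadata could be as bare as column names and types, while the normalized core tables carry the keys and constraints (all table and column names here are hypothetical):

```python
import sqlalchemy as sa

md = sa.MetaData()

# A "real" normalized table: primary key, NOT NULL, etc.
utilities = sa.Table(
    "utilities", md,
    sa.Column("utility_id", sa.Integer, primary_key=True),
    sa.Column("utility_name", sa.String, nullable=False),
)

# An intermediate/output table: just enough metadata to round-trip
# dtypes to the database and back, with no keys or constraints.
mcoe_monthly = sa.Table(
    "mcoe_monthly", md,
    sa.Column("utility_id", sa.Integer),
    sa.Column("report_month", sa.Date),
    sa.Column("mcoe_per_mwh", sa.Float),
)
```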
- Question: is it just me or does it seem like well structured database tables with clear primary keys, constraints, foreign key relations, good normalization etc. are getting kinda kicked to the curb in the "Modern Data Stack" universe?
Compare with Prefect+dbt+SQL architecture