Preserving existing tables during Flask data migrations
When running a data migration with Flask-Migrate, tables in the database that are not part of the model metadata can get dropped. This is Alembic's default behavior: if the target database contains tables that do not belong to the metadata, the autogenerate process normally assumes they are extraneous and emits an Operations.drop_table() operation for each one.
To prevent this, filter the candidate names with the EnvironmentContext.configure.include_name hook:
(From the Alembic documentation: The EnvironmentContext.configure.include_name hook is also most appropriate to limit the names of tables in the target database to be considered. If a target database has many tables that are not part of the MetaData, the autogenerate process will normally assume these are extraneous tables in the database to be dropped, and it will generate an Operations.drop_table() operation for each. To prevent this, the EnvironmentContext.configure.include_name hook may be used to search for each name within the tables collection of the MetaData object and ensure names which aren't present are not included.)
from flask import current_app

# target_metadata holds the metadata for every table defined in the
# current project. include_name looks each name up in the MetaData
# object's table collection and excludes names that are not present.
target_metadata = current_app.extensions['migrate'].db.metadata

def include_name(name, type_, parent_names):
    if type_ == "table":
        return name in target_metadata.tables
    else:
        return True

# run_migrations_online() runs by default; add the following options
# to its context.configure() call:
context.configure(
    # ...
    target_metadata=target_metadata,
    include_name=include_name,
    include_schemas=False,
)
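The filter itself can be exercised outside of any Flask app or migration run. The sketch below (the "users" table is a hypothetical stand-in for a project model) shows which names the hook keeps and which it excludes:

```python
# Minimal, standalone sketch of the include_name filter, assuming a
# project whose metadata contains a single hypothetical "users" table.
from sqlalchemy import MetaData, Table, Column, Integer

target_metadata = MetaData()
Table("users", target_metadata, Column("id", Integer, primary_key=True))

def include_name(name, type_, parent_names):
    # Only table names are filtered; schemas, columns, etc. pass through.
    if type_ == "table":
        return name in target_metadata.tables
    return True

print(include_name("users", "table", {}))         # True  -> considered by autogenerate
print(include_name("legacy_audit", "table", {}))  # False -> no drop_table() is generated
print(include_name("public", "schema", {}))       # True  -> non-table names always pass
```

Because "legacy_audit" is not in `target_metadata.tables`, autogenerate never sees it, so no drop operation is produced for it.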
The complete env.py after the change:
from __future__ import with_statement

import logging
from logging.config import fileConfig

from flask import current_app
from alembic import context

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
config.set_main_option(
    'sqlalchemy.url',
    str(current_app.extensions['migrate'].db.engine.url).replace('%', '%%'))
target_metadata = current_app.extensions['migrate'].db.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True
    )

    with context.begin_transaction():
        context.run_migrations()


def include_name(name, type_, parent_names):
    if type_ == "table":
        return name in target_metadata.tables
    else:
        return True


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """

    # this callback is used to prevent an auto-migration from being generated
    # when there are no changes to the schema
    # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
    def process_revision_directives(context, revision, directives):
        if getattr(config.cmd_opts, 'autogenerate', False):
            script = directives[0]
            if script.upgrade_ops.is_empty():
                directives[:] = []
                logger.info('No changes in schema detected.')

    connectable = current_app.extensions['migrate'].db.engine

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            compare_type=True,
            compare_server_default=True,
            target_metadata=target_metadata,
            process_revision_directives=process_revision_directives,
            **current_app.extensions['migrate'].configure_args,
            include_name=include_name,
            include_schemas=False,
        )

        with context.begin_transaction():
            context.run_migrations()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()