The Singleton Pattern
The singleton pattern is a commonly used software design pattern whose main purpose is to ensure that a class has exactly one instance. Whenever you need at most one instance of a class across the entire system, a singleton is the right tool.
For example, suppose a program's configuration is stored in a file, and clients read it through an `Appconfig` class. If many parts of the program use the configuration while it runs, each of them would otherwise create its own `Appconfig` instance, leaving many copies in the system and wasting memory, especially when the configuration file is large. For a class like `Appconfig`, we really want only one instance to exist for the lifetime of the program.
Ways to Implement the Singleton Pattern
1. Module-based implementation
Python modules are natural singletons: a module's code executes only on the first import, after which the initialized module object is cached in `sys.modules`, and every later import simply returns that cached object. (A `.pyc` file is just a bytecode cache that speeds up loading; it is the `sys.modules` cache that prevents re-execution.) So we can simply define the functions and data we need in a module and use the module itself as a singleton. If we really want a singleton class, we can do it like this:
mysingleton.py
```python
class Mysingleton:
    def foo(self):
        pass


singleton_obj = Mysingleton()
```
Save the file. Wherever the singleton is needed, import the object from this module; that object is the singleton:
```python
from mysingleton import singleton_obj
```
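The caching behavior can be verified with any module, not just `mysingleton`; here a standard-library module stands in so the sketch is self-contained:

```python
import sys
import json

# A repeated import does not re-execute the module body; it returns
# the object already cached in sys.modules.
import json as json_again

print(json is json_again)      # the exact same module object
print("json" in sys.modules)   # the cache entry that makes this work
```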
2. Decorator-based implementation
```python
def Singleton(cls):
    _instance = {}

    def _singleton(*args, **kwargs):
        # Create the instance on the first call, then always return it.
        if cls not in _instance:
            _instance[cls] = cls(*args, **kwargs)
        return _instance[cls]

    return _singleton


@Singleton
class A:
    a = 1

    def __init__(self, x=0):
        self.x = x


a1 = A(2)
a2 = A(4)
```
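Note that this decorator is not thread-safe: two threads can both pass the `if cls not in _instance` check before either has stored an instance, and each will create one. A minimal lock-protected variant (the names `singleton`, `B`, and the double-checked structure are my own additions, not from the original):

```python
import threading


def singleton(cls):
    _instances = {}
    _lock = threading.Lock()

    def wrapper(*args, **kwargs):
        # Double-checked locking: take the lock only until the first
        # instance exists; later calls hit the cheap dict lookup.
        if cls not in _instances:
            with _lock:
                if cls not in _instances:
                    _instances[cls] = cls(*args, **kwargs)
        return _instances[cls]

    return wrapper


@singleton
class B:
    def __init__(self, x=0):
        self.x = x


b1 = B(2)
b2 = B(4)
print(b1 is b2)  # True: the second call returns the cached instance
print(b1.x)      # 2: arguments of later calls are ignored
```

As with the original decorator, the arguments of every call after the first are silently discarded, which is worth documenting in real code.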
3. Class-based implementation (`__new__` with an init flag)
```python
class Player(object):
    _instance = None
    _flag = False

    def __new__(cls, *args, **kwargs):
        print('__new__ runs')
        # Create the instance only once; afterwards keep returning it.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # __init__ runs on every call; the flag makes its body run once.
        if not Player._flag:
            print('init')
            Player._flag = True


if __name__ == '__main__':
    video = Player()
    print(video)
    music = Player()
    print(music)
```
4. Implementation based on `__new__`
As we know, when we instantiate an object, the class's `__new__` method runs first to create the object (if we don't define one, `object.__new__` is called by default); then the class's `__init__` method runs to initialize it. We can therefore build a singleton on top of `__new__`. The built-in `hasattr` checks whether an object has a given attribute or method; for example, `hasattr(cls, '_instance')` checks whether the class already has an `_instance` attribute.
```python
class Singleton_:
    def __new__(cls, *args, **kwargs):
        if not hasattr(cls, '_instance'):
            cls._instance = super(Singleton_, cls).__new__(cls)
        return cls._instance
```
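One caveat of `__new__`-only singletons: `__init__` still runs on every call, so a later call can silently reset the instance's state. A small demonstration of the pitfall (the `Counter` class is illustrative, not from the original):

```python
class Counter:
    def __new__(cls, *args, **kwargs):
        if not hasattr(cls, '_instance'):
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self, start=0):
        # Runs on *every* Counter(...) call, even though __new__
        # keeps returning the same object.
        self.value = start


c1 = Counter(10)
c2 = Counter(0)   # same object, but __init__ resets value to 0
print(c1 is c2)   # True
print(c1.value)   # 0, not 10 -- state was overwritten
```

This is exactly why the `Player` example above guards `__init__` with a flag, and why the metaclass approach below avoids the problem entirely by never re-invoking `__init__`.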
5. Metaclass-based implementation
```python
class SingletonType(type):
    def __init__(cls, name, bases, attrs):
        super(SingletonType, cls).__init__(name, bases, attrs)
        cls.instance = None

    def __call__(cls, *args, **kwargs):
        if cls.instance is None:
            cls.instance = super(SingletonType, cls).__call__(*args, **kwargs)
        return cls.instance


class Singleton(metaclass=SingletonType):
    pass


singleton_one = Singleton()
singleton_two = Singleton()

print(singleton_one)
print(singleton_two)
print(singleton_one is singleton_two)

singleton_three = Singleton()
print(singleton_one is singleton_three)
```
We define a metaclass whose `__call__` method checks whether the instance already exists; if not, it calls the parent class's `__call__` to create it, and then returns it.
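Because `SingletonType.__init__` runs for every class created with this metaclass, each subclass gets its own `instance` attribute: subclasses are singletons too, but independent ones. A quick check of that behavior (the `Base`/`Child` names are mine):

```python
class SingletonType(type):
    def __init__(cls, name, bases, attrs):
        super(SingletonType, cls).__init__(name, bases, attrs)
        cls.instance = None  # each class gets its own slot

    def __call__(cls, *args, **kwargs):
        if cls.instance is None:
            cls.instance = super(SingletonType, cls).__call__(*args, **kwargs)
        return cls.instance


class Base(metaclass=SingletonType):
    pass


class Child(Base):
    pass


b1, b2 = Base(), Base()
c1, c2 = Child(), Child()
print(b1 is b2)  # True: Base is a singleton
print(c1 is c2)  # True: Child is a singleton
print(b1 is c1)  # False: each class holds its own instance
```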
Use Cases
1. Configuration
- A project's configuration is stored in a config file and read through a `Config` class.
- If many parts of the program need the configuration at runtime, each would otherwise create its own `Config` instance, leaving multiple copies in the system and wasting memory, especially when the config file is large.
- For a class like `Config`, we want only one instance to exist while the program runs.
```python
import configparser
import threading


class Config:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: cheap check first, take the lock
        # only while creating the first (and only) instance.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super(Config, cls).__new__(cls)
                    cls._instance.load_config()
        return cls._instance

    def load_config(self):
        self.config = configparser.ConfigParser()
        if not self.config.read('config.ini'):
            # Fall back to defaults when the file is missing or empty.
            self.set_default_config()

    def set_default_config(self):
        self.config['Section1'] = {'key1': 'default_value1', 'key2': 'default_value2'}

    def get_config(self, section, key):
        return self.config.get(section, key)

    def update_config(self, section, key, value):
        self.config.set(section, key, value)
        with open('config.ini', 'w') as config_file:
            self.config.write(config_file)


if __name__ == "__main__":
    config = Config()

    # Read a value.
    value1 = config.get_config("Section1", "key1")
    print(value1)

    # Update a value and persist it back to config.ini.
    config.update_config("Section1", "key1", "new_value")

    updated_value1 = config.get_config("Section1", "key1")
    print(updated_value1)
```
2. Distributed IDs (snowflake algorithm)
- Distributed ID generation is a common requirement. Below is a Python example that implements snowflake-style ID generation and combines it with the singleton pattern: a singleton wrapper class holds the generator instance and ensures that only one exists.
```python
import threading
import time


class SnowflakeIDGenerator:
    def __init__(self, worker_id, datacenter_id):
        # Bit widths: 41-bit timestamp, 10-bit worker id, 12-bit sequence.
        self.timestamp_bits = 41
        self.worker_id_bits = 10
        self.sequence_bits = 12

        # Largest values representable in the given widths.
        self.max_worker_id = -1 ^ (-1 << self.worker_id_bits)
        self.max_sequence = -1 ^ (-1 << self.sequence_bits)

        # Shift amounts for assembling the final ID.
        self.timestamp_shift = self.worker_id_bits + self.sequence_bits
        self.worker_id_shift = self.sequence_bits

        self.worker_id = worker_id
        # Note: in this simplified layout the datacenter id is validated
        # but not encoded into the generated ID.
        self.datacenter_id = datacenter_id

        # Per-millisecond sequence counter and the last timestamp seen.
        self.sequence = 0
        self.last_timestamp = -1

        # Lock so IDs can be generated safely from multiple threads.
        self.lock = threading.Lock()

        if self.worker_id < 0 or self.worker_id > self.max_worker_id:
            raise ValueError(f"Worker ID must be between 0 and {self.max_worker_id}")
        if self.datacenter_id < 0 or self.datacenter_id > self.max_worker_id:
            raise ValueError(f"Datacenter ID must be between 0 and {self.max_worker_id}")

    def _current_timestamp(self):
        return int(time.time() * 1000)

    def _wait_for_next_timestamp(self, last_timestamp):
        # Busy-wait until the clock advances past last_timestamp.
        timestamp = self._current_timestamp()
        while timestamp <= last_timestamp:
            timestamp = self._current_timestamp()
        return timestamp

    def generate_id(self):
        with self.lock:
            current_timestamp = self._current_timestamp()
            if current_timestamp < self.last_timestamp:
                raise ValueError("Clock moved backwards. Refusing to generate ID.")

            if current_timestamp == self.last_timestamp:
                # Same millisecond: bump the sequence; on overflow,
                # wait for the next millisecond.
                self.sequence = (self.sequence + 1) & self.max_sequence
                if self.sequence == 0:
                    current_timestamp = self._wait_for_next_timestamp(self.last_timestamp)
            else:
                self.sequence = 0

            self.last_timestamp = current_timestamp

            # Assemble the ID: timestamp | worker id | sequence.
            timestamp_part = current_timestamp << self.timestamp_shift
            worker_id_part = self.worker_id << self.worker_id_shift
            return timestamp_part | worker_id_part | self.sequence


class SingletonSnowflakeGenerator:
    _instance_lock = threading.Lock()
    _instance = None

    def __new__(cls, worker_id, datacenter_id):
        # Double-checked locking; note that the arguments of every call
        # after the first are ignored.
        if cls._instance is None:
            with cls._instance_lock:
                if cls._instance is None:
                    cls._instance = SnowflakeIDGenerator(worker_id, datacenter_id)
        return cls._instance


if __name__ == "__main__":
    generator1 = SingletonSnowflakeGenerator(worker_id=1, datacenter_id=1)
    generator2 = SingletonSnowflakeGenerator(worker_id=2, datacenter_id=2)

    print(generator1 is generator2)

    id1 = generator1.generate_id()
    id2 = generator2.generate_id()

    print(id1)
    print(id2)
```
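With the bit layout above (12 sequence bits in the low positions, 10 worker bits next, timestamp in the high bits), an ID can be unpacked again by masking and shifting. A sketch using the same constants (the `decode_id` helper is my own addition):

```python
SEQUENCE_BITS = 12
WORKER_ID_BITS = 10
WORKER_ID_SHIFT = SEQUENCE_BITS                    # 12
TIMESTAMP_SHIFT = WORKER_ID_BITS + SEQUENCE_BITS   # 22


def decode_id(snowflake_id):
    """Split an ID back into (timestamp_ms, worker_id, sequence)."""
    sequence = snowflake_id & ((1 << SEQUENCE_BITS) - 1)
    worker_id = (snowflake_id >> WORKER_ID_SHIFT) & ((1 << WORKER_ID_BITS) - 1)
    timestamp_ms = snowflake_id >> TIMESTAMP_SHIFT
    return timestamp_ms, worker_id, sequence


# Round-trip check with hand-packed fields:
packed = (1700000000000 << TIMESTAMP_SHIFT) | (1 << WORKER_ID_SHIFT) | 7
print(decode_id(packed))  # (1700000000000, 1, 7)
```

Sorting such IDs numerically sorts them by generation time, which is the main practical appeal of the layout.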
3. Database connection pool
- Ensure that only one database connection pool instance exists in the application, to improve performance and resource utilization.
```python
import threading

import pymysql
from dbutils.pooled_db import PooledDB


class DatabaseConnectionPoolProxy:
    _instance_lock = threading.Lock()

    def __new__(cls, *args, **kwargs):
        if not hasattr(DatabaseConnectionPoolProxy, "_instance"):
            with DatabaseConnectionPoolProxy._instance_lock:
                if not hasattr(DatabaseConnectionPoolProxy, "_instance"):
                    DatabaseConnectionPoolProxy._instance = object.__new__(cls)
                    cls._instance.initialize_pool()
        return DatabaseConnectionPoolProxy._instance

    def initialize_pool(self):
        self.pool = PooledDB(
            creator=pymysql,    # DB-API module used to create connections
            maxconnections=6,   # maximum connections allowed in the pool
            mincached=2,        # idle connections opened at startup
            maxcached=5,        # maximum idle connections kept in the pool
            maxshared=3,        # maximum shared connections
            blocking=True,      # block instead of raising when exhausted
            maxusage=None,      # unlimited reuse of a single connection
            host='192.168.91.1',
            port=3306,
            user='root',
            password='root',
            database='inventory',
            charset='utf8'
        )

    def get_connection(self):
        if self.pool:
            return self.pool.connection()

    def execute_query(self, query, params=None):
        conn = self.get_connection()
        if conn:
            cursor = conn.cursor()
            try:
                cursor.execute(query, params)
                return cursor.fetchall()
            finally:
                cursor.close()
                conn.close()  # returns the connection to the pool


if __name__ == "__main__":
    db_proxy = DatabaseConnectionPoolProxy()
    result = db_proxy.execute_query("SELECT * FROM inventory WHERE id=%s", [5])
    print(result)

    db_proxy1 = DatabaseConnectionPoolProxy()
    db_proxy2 = DatabaseConnectionPoolProxy()
    print(db_proxy1 is db_proxy2)
```
4. Cache management
- Manage the application's cached data through a single cache-manager instance to avoid data-consistency problems.
```python
import threading


class CacheManager:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super(CacheManager, cls).__new__(cls)
                    cls._instance.initialize_cache()
        return cls._instance

    def initialize_cache(self):
        self.cache_data = {}

    def get_data(self, key):
        return self.cache_data.get(key)

    def set_data(self, key, value):
        with self._lock:
            self.cache_data[key] = value


if __name__ == "__main__":
    cache_manager1 = CacheManager()
    cache_manager1.set_data("key1", "value1")

    cache_manager2 = CacheManager()
    value = cache_manager2.get_data("key1")
    print(value)

    print(cache_manager1 is cache_manager2)
```
5. Thread pool
- Ensure that only one thread pool instance exists in the application, to manage the execution of concurrent tasks.
```python
import threading
from concurrent.futures import ThreadPoolExecutor


class ThreadPoolManager:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super(ThreadPoolManager, cls).__new__(cls)
                    cls._instance.initialize_thread_pool()
        return cls._instance

    def initialize_thread_pool(self):
        self.thread_pool = ThreadPoolExecutor(max_workers=4)

    def submit_task(self, task_function, *args, **kwargs):
        return self.thread_pool.submit(task_function, *args, **kwargs)


if __name__ == "__main__":
    thread_pool_manager1 = ThreadPoolManager()

    def sample_task(x):
        return x * 2

    future = thread_pool_manager1.submit_task(sample_task, 5)
    result = future.result()
    print(result)

    thread_pool_manager2 = ThreadPoolManager()
    print(thread_pool_manager1 is thread_pool_manager2)
```