A structural design pattern proposes a way of composing objects to provide new functionality.
• The adapter pattern
The adapter pattern is a structural design pattern that helps us make two incompatible interfaces compatible. What does that really mean? If we have an old component and we want to use it in a new system, or a new component that we want to use in an old system, the two can rarely communicate without requiring any code changes. But changing the code is not always possible, either because we don’t have access to it, or because it is impractical. In such cases, we can write an extra layer that makes all the required modifications for enabling communication between the two interfaces. This layer is called an adapter.
In general, if you want to use an interface that expects function_a(), but you only have function_b(), you can use an adapter to convert (adapt) function_b() to function_a().
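A minimal sketch of that idea might look like the following (the names `function_a` and `function_b` are illustrative, matching the sentence above, not any real library):

```python
class Adaptee:
    """Provides function_b(), the interface we actually have."""

    def function_b(self):
        return "result from function_b"


class Adapter:
    """Exposes function_a(), the interface the client expects."""

    def __init__(self, adaptee):
        self.adaptee = adaptee

    def function_a(self):
        # Delegate to the incompatible method we actually have
        return self.adaptee.function_b()


adapter = Adapter(Adaptee())
print(adapter.function_a())  # prints: result from function_b
```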
class OldPaymentSystem:
    def __init__(self, currency):
        self.currency = currency

    def make_payment(self, amount):
        print(f"[OLD] Pay {amount} {self.currency}")


class NewPaymentGateway:
    def __init__(self, currency):
        self.currency = currency

    def execute_payment(self, amount):
        print(f"Execute payment of {amount} {self.currency}")


class PaymentAdapter:
    def __init__(self, system):
        self.system = system

    def make_payment(self, amount):
        self.system.execute_payment(amount)


def main():
    new_system = NewPaymentGateway("euro")
    print(new_system)
    adapter = PaymentAdapter(new_system)
    adapter.make_payment(100)
Implementing the adapter pattern – adapt several classes into a unified interface
Let’s look at another application to illustrate adaptation: a club’s activities. Our club has two main activities:
• Hire talented artists to perform in the club
• Organize performances and events to entertain its clients
At the core, we have a Club class that represents the club where hired artists perform some evenings. The organize_performance() method is the main action that the club can perform. The code
is as follows:
class Club:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f"the club {self.name}"

    def organize_performance(self):
        return "hires an artist to perform"
Most of the time, our club hires a DJ to perform, but our application should make it possible to organize a diversity of performances: by a musician or music band, by a dancer, a one-man or one-woman show, and so on.
Via our research to try and reuse existing code, we find an open source contributed library that brings us two interesting classes: Musician and Dancer. In the Musician class, the main action is performed by the play() method. In the Dancer class, it is performed by the dance() method.
class Musician:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f"the musician {self.name}"

    def play(self):
        return "plays music"


class Dancer:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f"the dancer {self.name}"

    def dance(self):
        return "does a dance performance"
The code we are writing, to use these two classes from the external library, only knows how to call the organize_performance() method (on the Club class); it has no idea about the play() or dance() methods (on the respective classes).
How can we make the code work without changing the Musician and Dancer classes?
Adapters to the rescue! We create a generic Adapter class that allows us to adapt a number of objects with different interfaces into one unified interface. The obj argument of the __init__() method is the object that we want to adapt, and adapted_methods is a dictionary containing key/value pairs matching the method the client calls and the method that should be called. The code for that class is as follows:
class Adapter:
    def __init__(self, obj, adapted_methods):
        self.obj = obj
        self.__dict__.update(adapted_methods)

    def __str__(self):
        return str(self.obj)
When dealing with the instances of the different classes, we have two cases:
• The compatible object that belongs to the Club class needs no adaptation. We can treat it as is.
• The incompatible objects need to be adapted first, using the Adapter class.
The result is that the client code can continue using the known organize_performance() method on all objects without the need to be aware of any interface differences. Consider the following main() function code to prove that the design works as expected:
def main():
    objects = [
        Club("Jazz Cafe"),
        Musician("Roy Ayers"),
        Dancer("Shane Sparks"),
    ]

    for obj in objects:
        if hasattr(obj, "play") or hasattr(obj, "dance"):
            if hasattr(obj, "play"):
                adapted_methods = dict(organize_performance=obj.play)
            elif hasattr(obj, "dance"):
                adapted_methods = dict(organize_performance=obj.dance)
            obj = Adapter(obj, adapted_methods)
        print(f"{obj} {obj.organize_performance()}")
• The decorator pattern
A second interesting structural pattern to learn about is the decorator pattern, which allows a programmer to add responsibilities to an object dynamically, and in a transparent manner (without affecting other objects).
As Python developers, we can write decorators in a Pythonic way (meaning using the language’s features), thanks to the built-in decorator feature.
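As a quick reminder of the built-in syntax, here is a minimal sketch (the `uppercase` decorator and `greet` function are made-up names for illustration):

```python
import functools


def uppercase(func):
    @functools.wraps(func)  # preserve the wrapped function's metadata
    def wrapper(*args, **kwargs):
        # Add behavior transparently, without modifying func itself
        return func(*args, **kwargs).upper()
    return wrapper


@uppercase
def greet(name):
    return f"hello, {name}"


print(greet("world"))  # prints: HELLO, WORLD
```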
Use cases for the decorator pattern
The decorator pattern shines when used for implementing cross-cutting concerns, such as the following:
• Data validation
• Caching
• Logging
• Monitoring
• Debugging
• Business rules
• Encryption
In general, all parts of an application that are generic and can be applied to many other parts of it are considered to be cross-cutting concerns.
Another popular example of using the decorator pattern is in graphical user interface (GUI) toolkits. In a GUI toolkit, we want to be able to add features such as borders, shadows, colors, and scrolling to individual components/widgets.
Implementing the decorator pattern
import functools


def memoize(func):
    cache = {}

    @functools.wraps(func)
    def memoizer(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return memoizer


@memoize
def number_sum(n):
    if n == 0:
        return 0
    else:
        return n + number_sum(n - 1)


@memoize
def fibonacci(n):
    if n in (0, 1):
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
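To confirm that the decorator caches as expected, the following self-contained check counts how many times a wrapped body actually executes (`slow_square` and `call_count` are illustrative names; `memoize` is repeated so the snippet runs on its own):

```python
import functools


def memoize(func):
    cache = {}

    @functools.wraps(func)
    def memoizer(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return memoizer


call_count = 0


@memoize
def slow_square(n):
    global call_count
    call_count += 1  # count real computations
    return n * n


print(slow_square(4))  # prints: 16
print(slow_square(4))  # prints: 16 (served from the cache)
print(call_count)      # prints: 1 (the body ran only once)
```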
• The bridge pattern
A third structural pattern to look at is the bridge pattern. The bridge and adapter patterns are worth comparing: the adapter pattern is applied after the fact, to make existing, unrelated classes work together (as we saw in the implementation example in The adapter pattern section), whereas the bridge pattern is designed up front to decouple an implementation from its abstraction, as we are going to see.
Real-world examples
In our modern, everyday lives, an example of the bridge pattern that comes to mind is from the digital economy: information products. Nowadays, the information product, or infoproduct, is part of the resources one can find online for training, self-improvement, or developing one's ideas and business. The purpose of an information product found on a marketplace or a provider's website is to deliver information on a given topic in a way that is easy to access and consume. The provided material can be a PDF document or ebook, an ebook series, a video, a video series, an online course, a subscription-based newsletter, or a combination of those formats.
In the software realm, we can find two examples:
• Device drivers: OS developers define an interface that device vendors (for example, printer vendors) implement
• Payment gateways: Different payment gateways can have different implementations, but the checkout process remains consistent
Use cases for the bridge pattern
Using the bridge pattern is a good idea when you want to share an implementation among multiple objects. Basically, instead of implementing several specialized classes, and defining all that is required within each class, you can define the following special components:
• An abstraction that applies to all the classes
• A separate interface for the different objects involved
Implementing the bridge pattern
Let’s assume we are building an application where the user is going to manage and deliver content after fetching it from diverse sources, which could be the following:
• A web page (based on its URL)
• A resource accessed on an FTP server
• A file on the local filesystem
• A database server
So, here is the idea: instead of implementing several content classes, each holding the methods responsible for getting the content pieces, assembling them, and showing them inside the application, we can define an abstraction for the Resource Content and a separate interface for the objects that are responsible for fetching the content. Let’s try it!
We begin with the interface for the implementation classes that help fetch content – that is, the ResourceContentFetcher class. This concept is called the Implementor. Let’s use Python’s protocols feature, as follows:
from typing import Protocol


class ResourceContentFetcher(Protocol):
    def fetch(self, path: str) -> str:
        ...
Then, we define the class for our Resource Content abstraction, called ResourceContent. The first trick we use here is that, via an attribute (_imp) on the ResourceContent class, we maintain a reference to the object that represents the Implementor (fulfilling the ResourceContentFetcher interface). The code is as follows:
class ResourceContent:
    def __init__(self, imp: ResourceContentFetcher):
        self._imp = imp

    def get_content(self, path):
        return self._imp.fetch(path)
Now we can add an implementation class to fetch content from a web page or resource:
import urllib.request


class URLFetcher:
    def fetch(self, path):
        res = ""
        req = urllib.request.Request(path)
        with urllib.request.urlopen(req) as response:
            if response.code == 200:
                res = response.read()
        return res
We can also add an implementation class to fetch content from a file on the local filesystem:
class LocalFileFetcher:
    def fetch(self, path):
        with open(path) as f:
            res = f.read()
        return res
Based on that, a main function with some testing code to show content using both content fetchers could look like the following:
import os


def main():
    url_fetcher = URLFetcher()
    rc = ResourceContent(url_fetcher)
    res = rc.get_content("http://python.org")
    print(f"Fetched content with {len(res)} characters")

    localfs_fetcher = LocalFileFetcher()
    rc = ResourceContent(localfs_fetcher)
    pathname = os.path.abspath(__file__)
    dir_path = os.path.split(pathname)[0]
    path = os.path.join(dir_path, "file.txt")
    res = rc.get_content(path)
    print(f"Fetched content with {len(res)} characters")
• The facade pattern
As systems evolve, they can get very complex. It is not unusual to end up with a very large (and sometimes confusing) collection of classes and interactions. In many cases, we don’t want to expose this complexity to the client. This is where our next structural pattern comes to the rescue: facade.
The facade design pattern helps us hide the internal complexity of our systems and expose only what is necessary to the client through a simplified interface. In essence, facade is an abstraction layer implemented over an existing complex system.
Let’s take the example of the computer to illustrate things. A computer is a complex machine that depends on several parts to be fully functional. To keep things simple, the word “computer,” in this case, refers to an IBM derivative that uses a von Neumann architecture. Booting a computer is a particularly complex procedure. The CPU, main memory, and hard disk need to be up and running, the boot loader must be loaded from the hard disk to the main memory, the CPU must boot the operating system kernel, and so forth. Instead of exposing all this complexity to the client, we create a facade that encapsulates the whole procedure, making sure that all steps are executed in the right order.
In terms of object design and programming, we should have several classes, but only the Computer class needs to be exposed to the client code. The client will only have to execute the start() method of the Computer class, for example, and all the other complex parts are taken care of by the facade Computer class.
Real-world examples
The facade pattern is quite common in life. When you call a bank or a company, you are usually first connected to the customer service department. The customer service employee acts as a facade between you and the actual department (billing, technical support, general assistance, and so on), where an employee will help you with your specific problem.
As another example, a key used to turn on a car or motorcycle can also be considered a facade. It is a simple way of activating a system that is very complex internally. And, of course, the same is true for other complex electronic devices that we can activate with a single button, such as computers.
The Requests library is another great example of the facade pattern. It simplifies sending HTTP requests and handling responses, abstracting the complexities of the HTTP protocol. Developers can easily make HTTP requests without dealing with the intricacies of sockets or the underlying HTTP methods.
Use cases for the facade pattern
The most usual reason to use the facade pattern is to provide a single, simple entry point to a complex system. By introducing facade, the client code can use a system by simply calling a single method/function. At the same time, the internal system does not lose any functionality, it just encapsulates it.
Not exposing the internal functionality of a system to the client code gives us an extra benefit: we can introduce changes to the system, but the client code remains unaware of and unaffected by the changes. No modifications are required to the client code.
Facade is also useful if you have more than one layer in your system. You can introduce one facade entry point per layer and let all layers communicate with each other through their facades. That promotes loose coupling and keeps the layers as independent as possible.
Implementing the facade pattern
Assume that we want to create an operating system using a multi-server approach, similar to how it is done in MINIX 3 or GNU Hurd. A multi-server operating system has a minimal kernel, called the microkernel, which runs in privileged mode. All the other services of the system are following a server architecture (driver server, process server, file server, and so forth). Each server belongs to a different memory address space and runs on top of the microkernel in user mode. The pros of this approach
are that the operating system can become more fault-tolerant, reliable, and secure. For example, since all drivers are running in user mode on a driver server, a bug in a driver cannot crash the whole system, nor can it affect the other servers. The cons of this approach are the performance overhead and the complexity of system programming, because the communication between a server and the microkernel, as well as between the independent servers, happens using message passing. Message
passing is more complex than the shared memory model used in monolithic kernels such as Linux.
We begin with a Server interface. An Enum describes the different possible states of a server. We use the ABC module to forbid direct instantiation of the Server interface and to make the fundamental boot() and kill() methods mandatory, assuming that different actions need to be taken for booting, killing, and restarting each server. Here is the code for these elements, the first important bits to support our implementation:
from abc import ABC, abstractmethod
from enum import Enum

State = Enum(
    "State",
    "NEW RUNNING SLEEPING RESTART ZOMBIE",
)

# ...


class Server(ABC):
    @abstractmethod
    def __init__(self):
        pass

    def __str__(self):
        return self.name

    @abstractmethod
    def boot(self):
        pass

    @abstractmethod
    def kill(self, restart=True):
        pass
A modular operating system can have a great number of interesting servers: a file server, a process server, an authentication server, a network server, a graphical/window server, and so forth. The following example includes two stub servers: FileServer and ProcessServer. Apart from the boot() and kill() methods all servers have, FileServer has a create_file() method for creating files, and ProcessServer has a create_process() method for creating processes.
The FileServer class is as follows:
class FileServer(Server):
    def __init__(self):
        self.name = "FileServer"
        self.state = State.NEW

    def boot(self):
        print(f"booting the {self}")
        self.state = State.RUNNING

    def kill(self, restart=True):
        print(f"Killing {self}")
        self.state = State.RESTART if restart else State.ZOMBIE

    def create_file(self, user, name, perms):
        msg = (
            f"trying to create file '{name}' "
            f"for user '{user}' "
            f"with permissions {perms}"
        )
        print(msg)
The ProcessServer class is as follows:
class ProcessServer(Server):
    def __init__(self):
        self.name = "ProcessServer"
        self.state = State.NEW

    def boot(self):
        print(f"booting the {self}")
        self.state = State.RUNNING

    def kill(self, restart=True):
        print(f"Killing {self}")
        self.state = State.RESTART if restart else State.ZOMBIE

    def create_process(self, user, name):
        msg = (
            f"trying to create process '{name}' "
            f"for user '{user}'"
        )
        print(msg)
The OperatingSystem class is a facade. In its __init__(), all the necessary server instances are created. The start() method, used by the client code, is the entry point to the system. More wrapper methods can be added, if necessary, as access points to the services of the servers, such as the wrappers, create_file() and create_process(). From the client’s point of view, all those services are provided by the OperatingSystem class. The client should not be confused by unnecessary details such as the existence of servers and the responsibility of each server.
The code for the OperatingSystem class is as follows:
class OperatingSystem:
    """The Facade"""

    def __init__(self):
        self.fs = FileServer()
        self.ps = ProcessServer()

    def start(self):
        for server in (self.fs, self.ps):
            server.boot()

    def create_file(self, user, name, perms):
        return self.fs.create_file(user, name, perms)

    def create_process(self, user, name):
        return self.ps.create_process(user, name)
As you are going to see in a minute, when we present a summary of the example, there are many dummy classes and servers. They are there to give you an idea about the required abstractions (User, Process, File, and so forth) and servers (WindowServer, NetworkServer, and so forth)
for making the system functional.
Finally, we add our main code for testing the design, as follows:
def main():
    os = OperatingSystem()
    os.start()
    os.create_file("foo", "hello.txt", "-rw-r-r")
    os.create_process("bar", "ls /tmp")
• The flyweight pattern
The flyweight design pattern is a technique used to minimize memory usage and improve performance by introducing data sharing between similar objects. A flyweight is a shared object that contains state-independent, immutable (also known as intrinsic) data. The state-dependent, mutable (also known as extrinsic) data should not be part of flyweight because this is information that cannot be shared, since it differs per object. If flyweight needs extrinsic data, it should be provided explicitly by the client code.
An example might help to clarify how the flyweight pattern can be used practically. Let’s assume that we are creating a performance-critical game – for example, a first-person shooter (FPS). In FPS games, the players (soldiers) share some states, such as representation and behavior. In Counter-Strike, for instance, all soldiers on the same team (counter-terrorists versus terrorists) look the same (representation). In the same game, all soldiers (on both teams) have some common actions, such as jump, duck, and so forth (behavior). This means that we can create a flyweight that will contain all of the common data. Of course, the soldiers also have a lot of data that is different per soldier and will not be a part of the flyweight, such as weapons, health, location, and so on.
Use cases for the flyweight pattern
The Gang of Four (GoF) book lists the following requirements that need to be satisfied to effectively use the flyweight pattern:
• The application needs to use a large number of objects.
• There are so many objects that it’s too expensive to store/render them. Once the mutable state is removed (because if it is required, it should be passed explicitly to flyweight by the client code), many groups of distinct objects can be replaced by relatively few shared objects.
• Object identity is not important for the application. We cannot rely on object identity because object sharing causes identity comparisons to fail (objects that appear different to the client code end up having the same identity).
Implementing the flyweight pattern
We will create a small car park to illustrate the idea, making sure that the whole output is readable in a single terminal page. However, no matter how large you make the car park, the memory allocation stays the same.
First, we need an Enum parameter that describes the three different types of car that are in the car park:
from enum import Enum

CarType = Enum(
    "CarType",
    "SUBCOMPACT COMPACT SUV",
)
Then, we will define the class at the core of our implementation: Car. The pool variable is the object pool (in other words, our cache). Notice that pool is a class attribute (a variable shared by all instances).
Using the __new__() special method, which is called before __init__(), we customize how instances of the Car class are created; the cls argument references the Car class itself. When the client code creates an instance of Car, they pass the type of the car as car_type. The type of the car is used to check whether a car of the same type has already been created. If that’s the case, the previously created object is returned; otherwise, the new car type is added to the pool and returned:
class Car:
    pool = dict()

    def __new__(cls, car_type):
        obj = cls.pool.get(car_type, None)
        if not obj:
            obj = object.__new__(cls)
            cls.pool[car_type] = obj
            obj.car_type = car_type
        return obj
The render() method is what will be used to render a car on the screen. Notice how all the mutable (extrinsic) information not known to the flyweight needs to be explicitly passed by the client code. In this case, a random color and the coordinates of a location (of the form x, y) are used for each car.
Also, note that to make render() more useful, it is necessary to ensure that no cars are rendered on top of each other. Consider this as an exercise. If you want to make rendering more fun, you can use a graphics toolkit such as Tkinter, Pygame, or Kivy.
The render() method is defined as follows:
    def render(self, color, x, y):
        car_type = self.car_type
        msg = f"render a {color} {car_type.name} car at ({x}, {y})"
        print(msg)
The main() function shows how we can use the flyweight pattern. The color of a car is a random value from a predefined list of colors. The coordinates are random values between 0 and 100. Although 18 cars are rendered, memory is allocated for only 3. The last line of the output proves that, when using flyweight, we cannot rely on object identity. The id() function returns a unique identifier for each object (in CPython, this is actually the object's memory address). In our case, even if two objects appear to be different, they have the same identity if they belong to the same flyweight family (here, the family is defined by car_type). Of course, identity comparisons can still be used for objects of different families, but that is possible only if the client knows the implementation details.
import random


def main():
    rnd = random.Random()
    colors = [
        "white",
        "black",
        "silver",
        "gray",
        "red",
        "blue",
        "brown",
        "beige",
        "yellow",
        "green",
    ]
    min_point, max_point = 0, 100
    car_counter = 0

    for _ in range(10):
        c1 = Car(CarType.SUBCOMPACT)
        c1.render(
            random.choice(colors),
            rnd.randint(min_point, max_point),
            rnd.randint(min_point, max_point),
        )
        car_counter += 1

    for _ in range(3):
        c2 = Car(CarType.COMPACT)
        c2.render(
            random.choice(colors),
            rnd.randint(min_point, max_point),
            rnd.randint(min_point, max_point),
        )
        car_counter += 1

    for _ in range(5):
        c3 = Car(CarType.SUV)
        c3.render(
            random.choice(colors),
            rnd.randint(min_point, max_point),
            rnd.randint(min_point, max_point),
        )
        car_counter += 1

    print(f"cars rendered: {car_counter}")
    print(f"cars actually created: {len(Car.pool)}")

    c4 = Car(CarType.SUBCOMPACT)
    c5 = Car(CarType.SUBCOMPACT)
    c6 = Car(CarType.SUV)
    print(f"{id(c4)} == {id(c5)}? {id(c4) == id(c5)}")
    print(f"{id(c5)} == {id(c6)}? {id(c5) == id(c6)}")
• The proxy pattern
The proxy design pattern gets its name from the proxy (also known as surrogate) object used to perform an important action before accessing the actual object. There are four well-known types of proxy. They are as follows:
1. A virtual proxy, which uses lazy initialization to defer the creation of a computationally expensive object until the moment it is actually needed.
2. A protection/protective proxy, which controls access to a sensitive object.
3. A remote proxy, which acts as the local representation of an object that really exists in a different address space (for example, a network server).
4. A smart (reference) proxy, which performs extra actions when an object is accessed. Examples of such actions are reference counting and thread-safety checks.
Implementing the proxy pattern – a virtual proxy
The defining trait of a virtual proxy is lazy initialization: the proxy stands in for an expensive object and only creates it the first time it is actually used.
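Here is a minimal sketch of that idea (the ExpensiveObject class and its supposed cost are hypothetical, purely for illustration):

```python
class ExpensiveObject:
    """Stands in for an object that is costly to create."""

    def __init__(self):
        print("ExpensiveObject created (imagine a slow step here)")

    def process(self):
        return "processing done"


class VirtualProxy:
    """Creates the real object only on first use."""

    def __init__(self):
        self._real = None

    def process(self):
        if self._real is None:  # lazy initialization
            self._real = ExpensiveObject()
        return self._real.process()


proxy = VirtualProxy()  # nothing expensive happens yet
print(proxy.process())  # prints: processing done
```

Note that the expensive constructor runs only on the first process() call; subsequent calls reuse the already created object.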
Implementing the proxy pattern – a protection proxy
As a second example, let’s implement a simple protection proxy to view and add users. The service provides two options:
• Viewing the list of users: This operation does not require special privileges
• Adding a new user: This operation requires the client to provide a special secret message
The SensitiveInfo class contains the information that we want to protect. The users variable is the list of existing users. The read() method prints the list of the users. The add() method adds a new user to the list. The code for that class is as follows:
class SensitiveInfo:
    def __init__(self):
        self.users = ["nick", "tom", "ben", "mike"]

    def read(self):
        nb = len(self.users)
        print(f"There are {nb} users: {' '.join(self.users)}")

    def add(self, user):
        self.users.append(user)
        print(f"Added user {user}")
The Info class is a protection proxy of SensitiveInfo. The secret variable is the message required to be known/provided by the client code to add a new user.
Note that this is just an example. In reality, you should never do the following:
• Store passwords in the source code
• Store passwords in clear-text form
• Use a weak (for example, MD5) or custom form of encryption
In the Info class, as we can see next, the read() method is a wrapper around SensitiveInfo.read(), and the add() method ensures that a new user can be added only if the client code knows the secret message:
class Info:
    def __init__(self):
        self.protected = SensitiveInfo()
        self.secret = "0xdeadbeef"

    def read(self):
        self.protected.read()

    def add(self, user):
        sec = input("what is the secret? ")
        if sec == self.secret:
            self.protected.add(user)
        else:
            print("That's wrong!")
The main() function shows how the proxy pattern can be used by the client code. The client code creates an instance of the Info class and uses the displayed menu to read the list, add a new user, or exit the application. Let’s consider the following code:
def main():
    info = Info()

    while True:
        print("1. read list |==| 2. add user |==| 3. quit")
        key = input("choose option: ")
        if key == "1":
            info.read()
        elif key == "2":
            name = input("choose username: ")
            info.add(name)
        elif key == "3":
            exit()
        else:
            print(f"unknown option: {key}")
Implementing the proxy pattern – a remote proxy
Imagine we are building a file management system where clients can perform operations on files stored on a remote server. The operations might include reading a file, writing to a file, and deleting a file. The remote proxy hides the complexity of network requests from the client.
We start by creating an interface that defines the operations that can be performed on the remote server, RemoteServiceInterface, and the class that implements it to provide the actual service, RemoteService.
The interface is defined as follows:
from abc import ABC, abstractmethod


class RemoteServiceInterface(ABC):
    @abstractmethod
    def read_file(self, file_name):
        pass

    @abstractmethod
    def write_file(self, file_name, contents):
        pass

    @abstractmethod
    def delete_file(self, file_name):
        pass
The RemoteService class is defined as follows (the methods just return a string, for the sake of simplicity, but normally, you would have specific code for the file handling on the remote service):
class RemoteService(RemoteServiceInterface):
    def read_file(self, file_name):
        # Implementation for reading a file from the server
        return "Reading file from remote server"

    def write_file(self, file_name, contents):
        # Implementation for writing to a file on the server
        return "Writing to file on remote server"

    def delete_file(self, file_name):
        # Implementation for deleting a file from the server
        return "Deleting file from remote server"
Then, we define ProxyService for the proxy. It implements the RemoteServiceInterface interface and acts as a surrogate for RemoteService, handling all communication with the latter:
class ProxyService(RemoteServiceInterface):
    def __init__(self):
        self.remote_service = RemoteService()

    def read_file(self, file_name):
        print("Proxy: Forwarding read request to RemoteService")
        return self.remote_service.read_file(file_name)

    def write_file(self, file_name, contents):
        print("Proxy: Forwarding write request to RemoteService")
        return self.remote_service.write_file(file_name, contents)

    def delete_file(self, file_name):
        print("Proxy: Forwarding delete request to RemoteService")
        return self.remote_service.delete_file(file_name)
Clients interact with the ProxyService component as if it were the RemoteService one, unaware of the remote nature of the actual service. The proxy handles the communication with the remote service, potentially adding logging, access control, or caching. To test things, we can add the following code, based on creating an instance of ProxyService:
if __name__ == "__main__":
    proxy = ProxyService()
    print(proxy.read_file("example.txt"))
Implementing the proxy pattern – a smart proxy
Let’s consider a scenario where you have a shared resource in your application, such as a database connection. Every time an object accesses this resource, you want to keep track of how many references to the resource exist. Once there are no more references, the resource can be safely released or closed. A smart proxy will help manage the reference counting for this database connection, ensuring it’s only closed once all references to it are released.
As in the previous example, we will need an interface, DBConnectionInterface, defining operations for accessing the database, and a class that represents the actual database connection, DBConnection.
For the interface, let’s use Protocol (to change from the ABC way):
from typing import Protocol


class DBConnectionInterface(Protocol):
    def exec_query(self, query):
        ...
The class for the database connection is as follows:
class DBConnection:
    def __init__(self):
        print("DB connection created")

    def exec_query(self, query):
        return f"Executing query: {query}"

    def close(self):
        print("DB connection closed")
Then, we define the SmartProxy class; it also implements the DBConnectionInterface interface (see the exec_query() method). We use this class to manage reference counting and access to the DBConnection object. It ensures that the DBConnection object is created on demand when the first query is executed and is only closed when there are no more references to it. The code is as follows:
class SmartProxy:
    def __init__(self):
        self.cnx = None
        self.ref_count = 0

    def access_resource(self):
        if self.cnx is None:
            self.cnx = DBConnection()
        self.ref_count += 1
        print(f"DB connection now has {self.ref_count} references.")

    def exec_query(self, query):
        if self.cnx is None:
            # Ensure the connection is created if not already
            self.access_resource()
        result = self.cnx.exec_query(query)
        print(result)
        # Decrement reference count after executing query
        self.release_resource()
        return result

    def release_resource(self):
        if self.ref_count > 0:
            self.ref_count -= 1
            print("Reference released...")
            print(f"{self.ref_count} remaining refs.")
        if self.ref_count == 0 and self.cnx is not None:
            self.cnx.close()
            self.cnx = None
Now, we can add some code to test the implementation:
if __name__ == "__main__":
    proxy = SmartProxy()
    proxy.exec_query("SELECT * FROM users")
    proxy.exec_query(
        "UPDATE users SET name = 'John Doe' WHERE id = 1"
    )