Architectural design patterns provide a template for solving common architectural problems, facilitating the development of scalable, maintainable, and reusable systems.
Technical requirements
• For the Microservices pattern section, install the following:
- gRPC, using the following command: python -m pip install grpcio
- gRPC-tools, using the following command: python -m pip install grpcio-tools
• For the Serverless pattern section, install the following:
- Docker
- LocalStack, for testing AWS Lambda locally, using the following command: python -m pip install localstack (Note that this is not compatible with Python 3.12, at the time of writing. You may use Python 3.11 instead in this case.)
- awscli-local, using the command: python -m pip install awscli-local
- awscli, using the command: python -m pip install awscli
• For the Event Sourcing section, install the following:
- eventsourcing, using the command: python -m pip install eventsourcing
The Model-View-Controller (MVC) pattern
The MVC pattern is another application of the loose coupling principle. The name of the pattern comes from the three main components used to split a software application: the model, the view, and the controller.
The model is the core component. It represents knowledge. It contains and manages the (business) logic, data, state, and rules of an application. The view is a visual representation of the model. Examples of views are a computer GUI, the text output of a computer terminal, a smartphone’s application GUI, a PDF document, a pie chart, a bar chart, and so forth. The view only displays the data; it doesn’t handle it. The controller is the link/glue between the model and the view. All communication between the model and the view happens through a controller.
A typical flow in an application that uses MVC, after the initial screen is rendered to the user, is as follows:
1. The user triggers a view by clicking (typing, touching, and so on) a button.
2. The view informs the controller of the user’s action.
3. The controller processes the user input and interacts with the model.
4. The model performs all the necessary validation and state changes and informs the controller about what should be done.
5. The controller instructs the view to update and display the output appropriately, following the model’s instructions.
But is the controller part necessary? Can’t we just skip it? We could, but then we would lose a big benefit that MVC provides: the ability to use more than one view (even at the same time, if that’s what we want) without modifying the model. To achieve decoupling between the model and its representation, every view typically needs its own controller. If the model communicated directly with a specific view, we wouldn’t be able to use multiple views (or, at least, not in a clean and modular way).
Real-world examples
In web development, several frameworks use the MVC idea, for example:
• The Web2py framework is a lightweight Python framework that embraces the MVC pattern.
• Django (https://www.djangoproject.com/) is also an MVC framework, although it uses different naming conventions. The controller is called view, and the view is called template. Django uses the name Model-View Template (MVT). According to the designers of Django, the view describes what data is seen by the user, and therefore, it uses the name view as the Python callback function for a particular URL. The term “template” in Django is used to separate content from representation. It describes how the data is seen by the user, not which data is seen.
Use cases for the MVC pattern
MVC is a very generic and useful design pattern. In fact, all popular web frameworks (Django, Rails, Symfony, and Yii) and application frameworks (iPhone SDK, Android, and QT) make use of MVC or a variation of it (model-view-adapter (MVA), model-view-presenter (MVP), or MVT, for example). However, even if we don’t use any of these frameworks, it makes sense to implement the pattern on our own because of the benefits it provides, which are as follows:
• The separation between the view and model allows graphics designers to focus on the user interface (UI) part and programmers to focus on development, without interfering with each other.
• Because of the loose coupling between the view and model, each part can be modified/extended without affecting the other. For example, adding a new view is trivial. Just implement a new controller for it.
• Maintaining each part is easier because the responsibilities are clear.
When implementing MVC from scratch, be sure that you create smart models, thin controllers, and dumb views.
A model is considered smart because it does the following:
• It contains all the validation/business rules/logic
• It handles the state of the application
• It has access to application data (database, cloud, and so on)
• It does not depend on the UI
A controller is considered thin because it does the following:
• It updates the model when the user interacts with the view
• It updates the view when the model changes
• It processes the data before delivering it to the model/view, if necessary
• It does not display the data
• It does not access the application data directly
• It does not contain validation/business rules/logic
A view is considered dumb because it does the following:
• It displays the data
• It allows the user to interact with it
• It does only minimal processing, usually provided by a template language (for example, using simple variables and loop controls)
• It does not store any data
• It does not access the application data directly
• It does not contain validation/business rules/logic
If you are implementing MVC from scratch and want to find out whether you did it right, you can try answering some key questions:
• If your application has a GUI, is it skinnable? How easily can you change the skin/look and feel of it? Can you give the user the ability to change the skin of your application during runtime? If this is not simple, it means that something is going wrong with your MVC implementation.
• If your application has no GUI (for instance, if it’s a terminal application), how hard is it to add GUI support? Or, if adding a GUI is irrelevant, is it easy to add views to display the results in a chart (pie chart, bar chart, and so on) or a document (PDF, spreadsheet, and so on)? If these changes are not trivial (a matter of creating a new controller with a view attached to it, without modifying the model), MVC is not implemented properly.
If you make sure that these conditions are satisfied, your application will be more flexible and maintainable compared to an application that does not use MVC.
Implementing the MVC pattern
I could use any of the common frameworks to demonstrate how to use MVC, but I feel that the picture will be incomplete. So, I decided to show you how to implement MVC from scratch, using a very simple example: a quote printer. The idea is extremely simple. The user enters a number and sees the quote related to that number. The quotes are stored in a quotes tuple. This is the data that normally exists in a database, file, and so on, and only the model has direct access to it.
Let’s consider this example for the quotes tuple:
quotes = (
    "A man is not complete until he is married. Then he is finished.",
    "As I said before, I never repeat myself.",
    "Behind a successful man is an exhausted woman.",
    "Black holes really suck...",
    "Facts are stubborn things.",
)
The model is minimalistic; it only has a get_quote() method that returns the quote (string) of the quotes tuple based on its index, n. The model class is as follows:
class QuoteModel:
    def get_quote(self, n):
        try:
            value = quotes[n]
        except IndexError:
            value = "Not found!"
        return value
The view has three methods: show(), which is used to print a quote (or the Not found! message) on the screen; error(), which is used to print an error message on the screen; and select_quote(), which reads the user’s selection. This can be seen in the following code:
class QuoteTerminalView:
    def show(self, quote):
        print(f'And the quote is: "{quote}"')

    def error(self, msg):
        print(f"Error: {msg}")

    def select_quote(self):
        return input("Which quote number would you like to see? ")
The controller does the coordination. The __init__() method initializes the model and view. The run() method validates the quote index given by the user, gets the quote from the model, and passes it back to the view to be displayed, as shown in the following code:
class QuoteTerminalController:
    def __init__(self):
        self.model = QuoteModel()
        self.view = QuoteTerminalView()

    def run(self):
        valid_input = False
        while not valid_input:
            try:
                n = self.view.select_quote()
                n = int(n)
                valid_input = True
            except ValueError:
                self.view.error(f"Incorrect index '{n}'")
        quote = self.model.get_quote(n)
        self.view.show(quote)
Finally, the main() function initializes and fires the controller, as shown in the following code:
def main():
    controller = QuoteTerminalController()
    while True:
        controller.run()

if __name__ == "__main__":
    main()
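To see the promised decoupling in practice, we can sketch a second view that reuses the model unchanged. The following is an illustration only, not part of the original example; the QuoteHtmlView and QuoteHtmlController names are our own. Note how adding an HTML rendering of the quotes requires a new view and a new controller, but zero changes to QuoteModel:

```python
quotes = (
    "A man is not complete until he is married. Then he is finished.",
    "As I said before, I never repeat myself.",
)

# The model, exactly as before: it is not modified in any way.
class QuoteModel:
    def get_quote(self, n):
        try:
            return quotes[n]
        except IndexError:
            return "Not found!"

# A hypothetical second view that renders HTML instead of terminal text.
class QuoteHtmlView:
    def render(self, quote):
        return f"<p>{quote}</p>"

# A matching controller; every view gets its own controller.
class QuoteHtmlController:
    def __init__(self):
        self.model = QuoteModel()
        self.view = QuoteHtmlView()

    def get(self, n):
        return self.view.render(self.model.get_quote(n))

controller = QuoteHtmlController()
print(controller.get(1))   # <p>As I said before, I never repeat myself.</p>
print(controller.get(99))  # <p>Not found!</p>
```

Both the terminal view and this HTML view could even be active at the same time, each driven by its own controller, which is exactly the benefit discussed earlier.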
The Microservices pattern
Implementing the Microservices pattern – a payment service using gRPC
Let’s briefly talk about software installation and application deployment in the Microservices world. Switching from deploying a single application to deploying many small services means that the number of things that need to be handled increases drastically. While you might have been fine with a single application server and a few runtime dependencies, with Microservices the number of dependencies grows quickly. For example, one service could benefit from a relational database while another needs Elasticsearch. You may need a service that uses MySQL and another one that uses the Redis server. So, using the Microservices approach also means you will need to use containers.
Thanks to Docker, things have become easier, since we can run those services as containers. The idea is that your application server, dependencies and runtime libraries, compiled code, configurations, and so on, are inside those containers. Then, all you must do is run services packed as containers and make sure that they can communicate with each other.
You can implement the Microservices pattern, for a web app or an API, directly using Django, Flask, or FastAPI. However, to quickly show a working example, we are going to use gRPC, a high-performance universal RPC framework that uses Protocol Buffers (protobuf) as its interface description language. Its efficiency and cross-language support make it an ideal candidate for communication between microservices.
Imagine a scenario where your application architecture includes a microservice dedicated to handling payment processing. This microservice (let’s call it PaymentService) is responsible for processing payments and interacts with other services such as OrderService and AccountService. We are going to focus on the implementation of such a service using gRPC.
First, we define the service and its methods using protobuf, in the payment.proto file. This includes specifying request and response message formats:
syntax = "proto3"; package payment; service PaymentService { rpc ProcessPayment (PaymentRequest) returns (PaymentResponse) {} } message PaymentRequest { string order_id = 1; double amount = 2; string currency = 3; string user_id = 4; } message PaymentResponse { string payment_id = 1; string status = 2; // e.g., "SUCCESS", "FAILED" }
Then, you must compile the payment.proto file into Python code using the protobuf compiler (protoc). For that, you need to use a specific command line that invokes protoc with the appropriate plugins and options for Python.
Here is the general form of the command line for compiling .proto files for use with gRPC in Python:
python -m grpc_tools.protoc -I<PROTO_DIR> --python_out=<OUTPUT_DIR> --grpc_python_out=<OUTPUT_DIR> <PROTO_FILES>
In this case, we make sure we change the directory to be under the right path (for example, by doing cd ch06/microservices/grpc), and then we run the following command:
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. payment.proto
This will generate two files in the current directory: payment_pb2.py and payment_pb2_grpc.py. Those files are not to be manually edited.
Next, we implement the service by subclassing the generated PaymentServiceServicer base class and overriding the ProcessPayment() method:

from concurrent.futures import ThreadPoolExecutor

import grpc

import payment_pb2
import payment_pb2_grpc

class PaymentServiceImpl(payment_pb2_grpc.PaymentServiceServicer):
    def ProcessPayment(self, request, context):
        return payment_pb2.PaymentResponse(
            payment_id="12345",
            status="SUCCESS",
        )
Then, we have the main() function, with the code needed to start the service: we create the server by calling grpc.server(ThreadPoolExecutor(max_workers=10)), register our implementation with it, and listen on port 50051. The code of the function is as follows:
def main():
    print("Payment Processing Service ready!")
    server = grpc.server(ThreadPoolExecutor(max_workers=10))
    payment_pb2_grpc.add_PaymentServiceServicer_to_server(
        PaymentServiceImpl(), server
    )
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()
With that, the service is done and ready to be tested. We can write a test client with code that calls the service using gRPC, with the following code (client.py):
import grpc

import payment_pb2
import payment_pb2_grpc

with grpc.insecure_channel("localhost:50051") as chan:
    stub = payment_pb2_grpc.PaymentServiceStub(chan)
    resp = stub.ProcessPayment(
        payment_pb2.PaymentRequest(
            order_id="order123",
            amount=99.99,
            currency="USD",
            user_id="user456",
        )
    )
    print("Payment Service responded.")
    print(f"Response status: {resp.status}")
The Serverless pattern
The Serverless pattern abstracts server management, allowing developers to focus solely on code. Cloud providers handle the scaling and execution based on event triggers, such as HTTP requests, file uploads, or database modifications.
The Serverless pattern is particularly useful for Microservices, APIs, and event-driven architectures.
Use cases for the Serverless pattern
There are two main types of use cases for the Serverless pattern.
First, Serverless is useful for handling event-driven architectures where specific functions need to be executed in response to events, such as doing image processing (cropping, resizing) or dynamic PDF generation.
The second type of architecture where Serverless can be used is Microservices. Each microservice can be a serverless function, making it easier to manage and scale.
Implementing the Serverless pattern
Let’s see a simple example using AWS Lambda to create a function that squares a number. AWS Lambda is Amazon’s serverless compute service, which runs code in response to triggers such as changes in data, shifts in system state, or actions by users.
First, we need to write the Python code for the function. We create a lambda_handler() function, which takes two parameters, event and context. In our case, the input number is accessed as the value of the "number" key in the event dictionary. We compute the square of that value and return a string containing the expected result. The code is as follows:
def lambda_handler(event, context):
    number = event["number"]
    squared = number * number
    return f"The square of {number} is {squared}."
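Before deploying anything, the handler can be sanity-checked as a plain Python function, outside any AWS tooling (a quick local check, not part of the deployment flow):

```python
def lambda_handler(event, context):
    number = event["number"]
    squared = number * number
    return f"The square of {number} is {squared}."

# Invoke the handler directly, with a sample event dictionary and no context.
result = lambda_handler({"number": 6}, None)
print(result)  # The square of 6 is 36.
```

This kind of direct call is also a convenient way to unit-test Lambda handlers before packaging them.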
Once we have the Python function, we need to deploy it so that it can be invoked as an AWS Lambda function. For learning purposes, instead of going through the procedure of deploying to the real AWS infrastructure, we can test things locally. This is what the LocalStack Python package allows us to do. Once it is installed in your environment, you can start LocalStack inside a Docker container by running the available executable, using the command:
zzh@ZZHPC:/zdata/Github/ztest$ localstack start -d

💻 LocalStack CLI 3.6.0
👤 Profile: default

[10:44:18] starting LocalStack in Docker mode 🐳    localstack.py:503
           preparing environment                    bootstrap.py:1283
           configuring container                    bootstrap.py:1291
           starting container                       bootstrap.py:1301
[10:44:19] detaching                                bootstrap.py:1305

Note that, on the first run, if the localstack/localstack Docker image is not found on the host, it is pulled first, which can take a while.
Then, we compress our Python code file (lambda_function_square.py) into a ZIP file, for example, by using the zip program as follows:
zip lambda.zip lambda_function_square.py
The other tool we are going to use here is awslocal (provided by the awscli-local Python package we installed earlier). Once installed, we can use this program to deploy the Lambda function into the locally emulated AWS infrastructure. This is done, in our case, using the following command:
zzh@ZZHPC:/zdata/Github/ztest$ awslocal lambda create-function \
    --function-name lambda_function_square \
    --runtime python3.12 \
    --zip-file fileb://lambda.zip \
    --handler lambda_function_square.lambda_handler \
    --role arn:aws:iam::000000000000:role/lambda-role
{
    "FunctionName": "lambda_function_square",
    "FunctionArn": "arn:aws:lambda:ap-southeast-1:000000000000:function:lambda_function_square",
    "Runtime": "python3.12",
    "Role": "arn:aws:iam::000000000000:role/lambda-role",
    "Handler": "lambda_function_square.lambda_handler",
    "CodeSize": 316,
    "Description": "",
    "Timeout": 3,
    "MemorySize": 128,
    "LastModified": "2024-08-23T03:35:52.892394+0000",
    "CodeSha256": "HSCEdNyhHyIBz00P7zJTUwUygFcZDz1+oTRjjqHSYVE=",
    "Version": "$LATEST",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "ac34c577-2752-4304-a175-ca16108f18b3",
    "State": "Pending",
    "StateReason": "The function is being created.",
    "StateReasonCode": "Creating",
    "PackageType": "Zip",
    "Architectures": [
        "x86_64"
    ],
    "EphemeralStorage": {
        "Size": 512
    },
    "SnapStart": {
        "ApplyOn": "None",
        "OptimizationStatus": "Off"
    },
    "RuntimeVersionConfig": {
        "RuntimeVersionArn": "arn:aws:lambda:ap-southeast-1::runtime:8eeff65f6809a3ce81507fe733fe09b835899b99481ba22fd75b5a7338290ec1"
    },
    "LoggingConfig": {
        "LogFormat": "Text",
        "LogGroup": "/aws/lambda/lambda_function_square"
    }
}
You can test the Lambda function, providing an input using the payload.json file, using the command:
awslocal lambda invoke --function-name lambda_function_square --payload file://payload.json output.txt
zzh@ZZHPC:/zdata/Github/ztest$ cat payload.json {"number": 6}
zzh@ZZHPC:/zdata/Github/ztest$ awslocal lambda invoke --function-name lambda_function_square --payload '{"number": 6}' output.txt
An error occurred (ResourceConflictException) when calling the Invoke operation: The operation cannot be performed at this time. The function is currently in the following state: Pending
After a while, try again:
zzh@ZZHPC:/zdata/Github/ztest$ awslocal lambda invoke --function-name lambda_function_square --payload file://payload.json output.txt
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
zzh@ZZHPC:/zdata/Github/ztest$ cat output.txt
"The square of 6 is 36."
zzh@ZZHPC:/zdata/Github/ztest$ awslocal lambda invoke --function-name lambda_function_square --payload '{"number": 4}' output.txt
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
zzh@ZZHPC:/zdata/Github/ztest$ cat output.txt
"The square of 4 is 16."
Okay, this was a preliminary test, but we can go further: we can create a URL for the Lambda function, again thanks to awslocal, by running the following command:
zzh@ZZHPC:/zdata/Github/ztest$ awslocal lambda create-function-url-config \
    --function-name lambda_function_square \
    --auth-type NONE
{
    "FunctionUrl": "http://63nifrqm4oomdo99puffmbb7m6pomwxv.lambda-url.ap-southeast-1.localhost.localstack.cloud:4566/",
    "FunctionArn": "arn:aws:lambda:ap-southeast-1:000000000000:function:lambda_function_square",
    "AuthType": "NONE",
    "CreationTime": "2024-08-23T03:56:50.765366+0000"
}
This will generate a URL that can be used to invoke the Lambda function. The URL is in the http://<XXXXXXXX>.lambda-url.<region>.localhost.localstack.cloud:4566 format.
Now, for example, we can trigger the Lambda function through its URL using curl:
zzh@ZZHPC:/zdata/Github/ztest$ curl -X POST \
'http://63nifrqm4oomdo99puffmbb7m6pomwxv.lambda-url.ap-southeast-1.localhost.localstack.cloud:4566/' \
-H 'Content-Type: application/json' \
-d '{"number": 6}'
Internal Server Error
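The Internal Server Error response is not surprising: when a Lambda function is invoked through a function URL, the payload arrives wrapped in an HTTP-style event, where the JSON body is a string under the event's "body" key rather than a set of top-level keys, so event["number"] fails. A sketch of a handler adapted for both invocation styles might look like the following (an illustration based on the documented function URL event shape, not the book's code):

```python
import json

def lambda_handler(event, context):
    if "body" in event:
        # Function URL invocation: the request body is a JSON string.
        payload = json.loads(event["body"])
    else:
        # Direct invocation, as with 'awslocal lambda invoke'.
        payload = event
    number = payload["number"]
    squared = number * number
    return f"The square of {number} is {squared}."

# Direct-invocation style event:
print(lambda_handler({"number": 6}, None))
# Function-URL style event:
print(lambda_handler({"body": '{"number": 6}'}, None))
```

Both calls print the same result, since the handler now normalizes the two event shapes before doing its work.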
This was a minimal example. Another example of a serverless application could be a function that generates PDF receipts for a business. This would allow the business to not worry about server management and only pay for the computing time that is consumed.
The Event Sourcing pattern
The Event Sourcing pattern stores state changes as a sequence of events, allowing the reconstruction of past states and providing an audit trail. This pattern is particularly useful in systems where the state is complex and the business rules governing state transitions are non-trivial.
As we will see in implementation examples later, the Event Sourcing pattern emphasizes the importance of capturing all changes to an application state as a sequence of events. An outcome of this is that the application state can be reconstructed at any point in time by replaying these events.
Real-world examples
There are several real-world examples in the software category:
• Audit trails: Keeping a record of all changes made to a database for compliance
• Collaborative editing: Allowing multiple users to edit a document simultaneously
• Undo/redo features: Providing the ability to undo or redo actions in an application
Use cases for the Event Sourcing pattern
There are several use cases for the Event Sourcing pattern. Let’s consider the following three:
• Financial transactions: Event Sourcing can be used to record every change to an account’s balance as a chronological series of immutable events. This method ensures that every deposit, withdrawal, or transfer is captured as a distinct event. This way, we can provide a transparent, auditable, and secure ledger of all financial activities.
• Inventory management: Within inventory management contexts, Event Sourcing helps in tracking each item’s life cycle by logging all changes as events. This enables businesses to maintain accurate and up-to-date records of stock levels, identify patterns in item usage or sales, and predict future inventory needs. It also facilitates tracing the history of any item, aiding in recall processes or quality assurance investigations.
• Customer behavior tracking: Event Sourcing plays a critical role in capturing and storing every interaction a customer has with a platform, from browsing history and cart modifications to purchases and returns. This wealth of data, structured as a series of events, becomes a valuable resource for analyzing customer behavior, personalizing marketing strategies, enhancing user experience, and improving product recommendations.
Implementing the Event Sourcing pattern – the manual way
Let’s start with some definitions. The components of the Event Sourcing pattern implementation are as follows:
• Event: A representation of a state change, typically containing the type of event and the data associated with that event. Once an event is created and applied, it cannot be changed.
• Aggregate: An object (or group of objects) that represents a single unit of business logic or data. It keeps track of things, and every time something changes (an event), it makes a record of it.
• Event store: A collection of all the events that have occurred.
By handling state changes through events, the business logic becomes more flexible and easier to extend. For example, adding new types of events or modifying the handling of existing events can be done with minimal impact on the rest of the system.
In this first example, covering the bank account use case, we will see how to implement the Event Sourcing pattern in a manual way. In such an implementation, you typically define your event types and manually write the logic to apply these events to your aggregates. Let’s see that.
We start by defining an Account class representing a bank account with a balance and a list of events attached to it, for the operations on the account. This class acts as the aggregate. Its events attribute represents the event store. Here, an event will be represented by a dictionary containing the type of operation (“deposited” or “withdrawn”) and the amount value.
We then add the apply_event() method taking an event as the input. Depending on event["type"], we increment or decrement the account balance by the event’s amount, and we add the event to the events list, effectively storing the event:
class Account:
    def __init__(self):
        self.balance = 0
        self.events = []

    def apply_event(self, event):
        if event["type"] == "deposited":
            self.balance += event["amount"]
        elif event["type"] == "withdrawn":
            self.balance -= event["amount"]
        self.events.append(event)
Then, we add a deposit() method and a withdraw() method, which both call the apply_event() method, as follows:
    def deposit(self, amount):
        event = {"type": "deposited", "amount": amount}
        self.apply_event(event)

    def withdraw(self, amount):
        event = {"type": "withdrawn", "amount": amount}
        self.apply_event(event)
Finally, we add the main() function, as follows:
def main():
    account = Account()
    account.deposit(100)
    account.deposit(50)
    account.withdraw(30)
    account.deposit(30)

    for evt in account.events:
        print(evt)
    print(f"Balance: {account.balance}")
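A key property claimed earlier, that the application state can be reconstructed at any point by replaying the events, is easy to check with this design. The following sketch reuses the Account class from above and adds a replay() helper of our own (not part of the original example):

```python
class Account:
    def __init__(self):
        self.balance = 0
        self.events = []

    def apply_event(self, event):
        if event["type"] == "deposited":
            self.balance += event["amount"]
        elif event["type"] == "withdrawn":
            self.balance -= event["amount"]
        self.events.append(event)

    def deposit(self, amount):
        self.apply_event({"type": "deposited", "amount": amount})

    def withdraw(self, amount):
        self.apply_event({"type": "withdrawn", "amount": amount})

# Hypothetical helper: rebuild an Account from a stored event sequence.
def replay(events):
    account = Account()
    for event in events:
        account.apply_event(event)
    return account

source = Account()
source.deposit(100)
source.withdraw(30)

# Replaying all stored events yields the same final state.
restored = replay(source.events)
print(restored.balance)  # 70

# Replaying only a prefix of the events reconstructs a past state.
past = replay(source.events[:1])
print(past.balance)  # 100
```

This ability to rebuild any past state from a prefix of the event log is what makes the pattern so valuable for audit trails and undo features.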
Implementing the Event Sourcing pattern – using a library
In this second example, we will use the eventsourcing library to implement the Event Sourcing pattern. Let’s consider an inventory management system where we track the quantity of items.
We start by importing what we need, as follows:
from eventsourcing.domain import Aggregate, event
from eventsourcing.application import Application
Then, we define the class for the aggregate object, InventoryItem, by inheriting from the Aggregate class. The class has an increase_quantity() method and a decrease_quantity() method, each decorated with the @event decorator. The code for this class is as follows:
class InventoryItem(Aggregate):
    @event("ItemCreated")
    def __init__(self, name, quantity=0):
        self.name = name
        self.quantity = quantity

    @event("QuantityIncreased")
    def increase_quantity(self, amount):
        self.quantity += amount

    @event("QuantityDecreased")
    def decrease_quantity(self, amount):
        self.quantity -= amount
Next, we create our inventory application’s class, InventoryApp, inheriting from the eventsourcing library’s Application class. Its first method, create_item(), handles the creation of an item: it creates an InventoryItem instance (item) and calls the save() method on the InventoryApp object with it. But what exactly does the save() method do? It collects pending events from the given aggregates and puts them in the application’s event store. The definition of the class starts as follows:
class InventoryApp(Application):
    def create_item(self, name, quantity):
        item = InventoryItem(name, quantity)
        self.save(item)
        return item.id
Next, similarly to what we did in the previous example, we add an increase_item_quantity() method, which handles the increase of the item’s quantity (for the aggregate object) and then saves the aggregate object on the application, followed by the corresponding decrease_item_quantity() method, for the decreasing action, as follows:
    def increase_item_quantity(self, item_id, amount):
        item = self.repository.get(item_id)
        item.increase_quantity(amount)
        self.save(item)

    def decrease_item_quantity(self, item_id, amount):
        item = self.repository.get(item_id)
        item.decrease_quantity(amount)
        self.save(item)
Finally, we add the main() function, with some code to test our design, as follows:
def main():
    app = InventoryApp()

    # Create a new item
    item_id = app.create_item("Laptop", 10)

    # Increase quantity
    app.increase_item_quantity(item_id, 5)

    # Decrease quantity
    app.decrease_item_quantity(item_id, 3)

    notifs = app.notification_log.select(start=1, limit=5)
    notifs = [notif.state for notif in notifs]
    for notif in notifs:
        print(notif.decode())
Other architectural design patterns
You may encounter documentation about other architectural design patterns. Here are three other patterns:
• Event-Driven Architecture (EDA): This pattern emphasizes the production, detection, consumption of, and reaction to events. EDA is highly adaptable and scalable, making it suitable for environments where systems need to react to significant events in real time.
• Command Query Responsibility Segregation (CQRS): This pattern separates the models for reading and writing data, allowing for more scalable and maintainable architectures, especially when there are clear distinctions between operations that mutate data and those that only read data.
• Clean Architecture: This pattern proposes a way to organize code such that it encapsulates the business logic but keeps it separate from the interfaces through which the application is exposed to users or other systems. It emphasizes the use of dependency inversion to drive the decoupling of software components.
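To make the CQRS idea slightly more concrete, here is a minimal sketch (all names are our own, not tied to any specific framework): writes go through a command handler that records immutable events, while reads are served from a separately maintained read model optimized for queries:

```python
# Write side: commands append immutable events to an event log.
class CommandHandler:
    def __init__(self, event_log, read_model):
        self.event_log = event_log
        self.read_model = read_model

    def handle_deposit(self, account_id, amount):
        event = {"type": "deposited", "account_id": account_id, "amount": amount}
        self.event_log.append(event)
        # Propagate the change to the read side (synchronously, for simplicity;
        # real systems often do this asynchronously).
        self.read_model.apply(event)

# Read side: a denormalized view optimized for queries.
class ReadModel:
    def __init__(self):
        self.balances = {}

    def apply(self, event):
        if event["type"] == "deposited":
            acc = event["account_id"]
            self.balances[acc] = self.balances.get(acc, 0) + event["amount"]

    def get_balance(self, account_id):
        return self.balances.get(account_id, 0)

event_log = []
read_model = ReadModel()
commands = CommandHandler(event_log, read_model)

commands.handle_deposit("acc1", 100)
commands.handle_deposit("acc1", 50)
print(read_model.get_balance("acc1"))  # 150
print(len(event_log))  # 2
```

Because the two sides share nothing but the events, each can be scaled, stored, and evolved independently, which is the core appeal of CQRS, and also why it pairs naturally with Event Sourcing.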