Software Architecture: Ensuring Scalability and Reliability Through Strategic Design Choices
Hello there! In my previous article, I discussed Test-Driven Development and its impact on creating a robust, low-error software system. We even had periods of zero bugs for several months, despite ongoing updates and changes. Building on that, this article will explore another vital element for creating scalable, reliable, and adaptable software. We’re turning our attention to software architecture, but not in the usual way. It’s often thought that software architecture is mainly about cloud technology or choosing the right tech stack. Instead, we’ll focus on how to use these technologies effectively without letting our code become too dependent on them. This strategy is essential for keeping our software agile and responsive in the fast-changing tech world.
Decoding the Essence of Software Architecture
During my journey as a software engineer, and through books, videos, and other resources on software architecture, I’ve encountered numerous definitions and opinions about what software architecture entails. Some view it as collaborating with people to design a system, which is a significant aspect. Others emphasize selecting tools, defining technologies, choosing cloud services, frameworks, and programming languages, and making decisions about microservices, SOA, or monolithic systems. However, one of my favorite descriptions comes from Uncle Bob, who describes software architecture as ‘the whole enchilada,’ highlighting an important point: good architecture should keep options open. But what does this mean? Let’s delve deeper.
Keeping Options Open: A Feature of Good Architecture
What does it mean to keep options open? Let’s look at a common scenario I’ve seen in my career, a mistake many of us make. Early in new software development, we choose a database, decide if it’s cloud-based, or pick the frameworks to use. Without careful planning, this can lead the system to become too dependent on that specific database or technology. When the technology becomes outdated or a cheaper alternative appears, replacing it can be nearly impossible and very expensive for the company. Often, teams end up rebuilding the product from scratch, usually making the same mistakes because the same programmers are doing the replacement. I plan to explore this further in a future article, focusing on working with legacy code and refactoring. This issue can lead to major financial problems and could even bankrupt companies that rely heavily on software.
So, how can we keep these options open and avoid these scenarios? In short, keeping options open means our architecture is flexible and easy to change. As mentioned, it gives us the ability to replace modules and technologies in our system without affecting other layers, such as the domain layer (which should be isolated from all of this).

But how do we decouple ourselves from the database? This is where Object-Oriented Programming, and one of its properties, polymorphism, helps us keep these options open. The Dependency Inversion Principle (DIP), one of Uncle Bob’s SOLID principles, tells us that high-level modules should communicate with low-level modules through abstractions. What does this mean? Suppose we have a use case that needs to save information in some storage mechanism. How can we keep the option open to replace that storage mechanism or technology without affecting the use case, which resides in the domain code? We can create an interface. An interface, as we know, is a kind of contract or blueprint: any class implementing it must also implement its methods. Our interface can therefore define the methods we need to fulfill the use case, such as storing and/or retrieving information. This way, we can have multiple implementations of the interface, which can be injected into the domain class through dependency injection. We can see this in the following code, where we have a class called AddIncomeTransaction, implementing the use case of adding income to an account (the full code project can be found here).
public class AddIncomeTransaction : IAddIncomeTransactionInput
{
    private readonly IAccountQueriesRepository _accountQueriesRepository;
    private readonly IAccountCommandsRepository _accountCommandsRepository;

    public AddIncomeTransaction(IAccountQueriesRepository accountQueriesRepository, IAccountCommandsRepository accountCommandsRepository)
    {
        _accountQueriesRepository = accountQueriesRepository;
        _accountCommandsRepository = accountCommandsRepository;
    }

    /// <exception cref="CategoryNotFoundException"></exception>
    /// <exception cref="AccountNotFoundException"></exception>
    /// <exception cref="AccountNameException">
    /// Name contains invalid values, is null or empty.
    /// Name length is greater than 155.
    /// </exception>
    public void Execute(AddTransactionRequest request, IAddIncomeTransactionOutput output)
    {
        var account = _accountQueriesRepository.GetAccountWithoutTransactions(request.AccountId, request.OwnerId);
        if (account == null) throw new AccountNotFoundException();

        var newAccountBalance = account.AddIncomeTransaction(request.Amount, request.Description, request.Category);
        var transaction = account.Transactions.GetLastTransaction()!;
        _accountCommandsRepository.StoreTransaction(account.Id, newAccountBalance, transaction);

        var response = new AddTransactionResponse(transaction.Id, newAccountBalance.Amount);
        output.Results(response);
    }
}
In the constructor of the class, we can see that two interfaces are being injected: IAccountQueriesRepository and IAccountCommandsRepository. Let’s focus on the command interface, which has the following structure:
public interface IAccountCommandsRepository
{
    public void OpenAccount(FinancialAccount account);
    public void StoreTransaction(string accountId, Balance accountBalance, Transaction transaction);
}
Now, let’s look at the interfaces the use case depends on, which have implementations in different layers. One lives in the testing layer and is just a mock, as we can see below:
public class AddIncomeTransactionOutputMock : IAddIncomeTransactionOutput
{
    public string? TransactionId { get; private set; }
    public decimal AccountBalance { get; private set; }

    public void Results(AddTransactionResponse response)
    {
        TransactionId = response.TransactionId;
        AccountBalance = response.AccountBalance;
    }
}
The other two implementations correspond to the two databases in this system: an in-memory database, which is just a hashtable (a dictionary in C#), and an implementation backed by Cosmos DB.
In-memory (hashtable) class:
public class AccountsMemoryRepository : IAccountCommandsRepository, IAccountQueriesRepository
{
    public MemoryDb DataBase { get; }

    public AccountsMemoryRepository() : this(new MemoryDb())
    {
    }

    public AccountsMemoryRepository(MemoryDb memoryDb)
    {
        DataBase = memoryDb;
    }

    public AccountsMemoryRepository(FinancialAccountModel financialAccountModel)
    {
        DataBase = new MemoryDb(financialAccountModel);
    }

    public void OpenAccount(FinancialAccount account)
    {
        DataBase.FinancialAccounts.Add(account.Id, FinancialAccountModel.CreateFromFinancialAccount(account));
    }

    public void StoreTransaction(string accountId, Balance accountBalance, Transaction transaction)
    {
        DataBase.FinancialAccounts[accountId].Balance = accountBalance.Amount;
        DataBase.FinancialAccounts[accountId].Transactions.Add(transaction);
    }

    // ... query methods elided ...
}
Cosmos DB class:
public class AccountCommandsRepository : IAccountCommandsRepository
{
    private readonly Container _container;
    private readonly TaskFactory _taskFactory;

    public AccountCommandsRepository(Container container)
    {
        _container = container;
        _taskFactory = new TaskFactory(CancellationToken.None,
            TaskCreationOptions.None,
            TaskContinuationOptions.None,
            TaskScheduler.Default);
    }

    public void OpenAccount(FinancialAccount account)
    {
        var accountDto = FinancialAccountDto.FromFinancialAccount(account);
        var resultTask = _container.CreateItemAsync(accountDto);
        var result = _taskFactory
            .StartNew(() => resultTask)
            .Unwrap()
            .GetAwaiter()
            .GetResult();
        if (result.StatusCode != HttpStatusCode.Created) throw new Exception();
    }

    public void StoreTransaction(string accountId, Balance accountBalance, Transaction transaction)
    {
        var transactionDto = new TransactionDto(transaction.Id, transaction.Amount, transaction.Description.Value,
            transaction.Category.Value, transaction.TimeStamp.Value);
        var result = _taskFactory.StartNew(() =>
                _container.PatchItemAsync<FinancialAccountDto>(accountId, new PartitionKey(accountId),
                    new[]
                    {
                        PatchOperation.Set("/balance", accountBalance.Amount),
                        PatchOperation.Add("/transactions/0", transactionDto),
                    }))
            .Unwrap()
            .GetAwaiter()
            .GetResult();
        if (result.StatusCode != HttpStatusCode.OK) throw new Exception();
    }
}
This approach ensures that the use case, and therefore the business logic, remains decoupled from the implementation. This same principle can be applied to various scenarios. For example, we can choose whether to deploy applications on Azure Functions or AWS Lambda, or consider replacing a REST API with a service bus. By following this principle, we can create applications or modules that are capable of transitioning from a monolithic to a microservice architecture, or vice versa, depending on the business needs. The most important aspect, however, is that we are decoupling from most technologies, which allows us to keep our options open. But how do we determine what to define behind an interface? We’ll explore that in the next section.
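To make the swapping concrete, here is a minimal, self-contained sketch of the same idea with hypothetical names (a tiny IGreetingStore port instead of the article’s repositories, so it compiles on its own): the high-level class depends only on the interface, and only the composition root knows the concrete type.

using System;
using System.Collections.Generic;

// Hypothetical port: the only contract the use case knows about.
public interface IGreetingStore
{
    void Save(string greeting);
    IReadOnlyList<string> All();
}

// Low-level detail: an in-memory implementation, analogous to the
// article's dictionary-backed repository. A Cosmos DB-backed class
// could implement the same interface without touching the use case.
public class InMemoryGreetingStore : IGreetingStore
{
    private readonly List<string> _items = new List<string>();
    public void Save(string greeting) => _items.Add(greeting);
    public IReadOnlyList<string> All() => _items;
}

// High-level module: depends only on the abstraction,
// received through constructor injection.
public class GreetUser
{
    private readonly IGreetingStore _store;
    public GreetUser(IGreetingStore store) => _store = store;
    public void Execute(string name) => _store.Save($"Hello, {name}!");
}

public static class Program
{
    public static void Main()
    {
        // Composition root: the single place that picks the concrete type.
        IGreetingStore store = new InMemoryGreetingStore();
        var useCase = new GreetUser(store);
        useCase.Execute("Ada");
        Console.WriteLine(store.All()[0]); // prints "Hello, Ada!"
    }
}

Replacing the storage technology then means writing one new class and changing one line in the composition root; GreetUser is never modified.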
Defining Layers and Domains in Our Architecture
As mentioned, DIP tells us how different modules should communicate, but how can we identify which is the high-level module and which is the low-level one? This is something that personally took me a while to understand correctly. In Clean Architecture, Hexagonal Architecture, and Onion Architecture, we are told that we have a domain layer, where our business and application logic should live. We need to analyze our domain and identify what operations we perform and what rules we have. For example, business logic is usually a mathematical operation or a validation, while an application rule can be a use case, like the one we saw in the previous section: a use case that needs to verify that the account exists, add the new transaction, and send the information to be stored in the data access layer.

Therefore, the high-level modules are the use cases and the entities where the business logic lives, while the low-level modules are the classes that implement the interfaces of the high-level modules, like my mocks and the Cosmos DB and in-memory repositories. This way, we can define the layers of our architecture, keep them decoupled, and change them when necessary, without needing to modify the other layers. We can also add or remove layers as the business requires, keeping costs low and making changes a quantifiable, achievable task.
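For illustration, here is a hypothetical, stripped-down domain entity (not the project’s actual FinancialAccount) showing what “business logic as a mathematical operation or a validation” looks like inside the high-level module; note that it knows nothing about storage, frameworks, or the cloud.

using System;

// Hypothetical domain entity: the business rule is the validation
// plus the balance arithmetic, and nothing else.
public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance) => Balance = openingBalance;

    // Business logic: a validation and a mathematical operation.
    public decimal AddIncome(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount), "Income must be positive.");
        Balance += amount;
        return Balance;
    }
}

public static class Program
{
    public static void Main()
    {
        var account = new Account(100m);
        Console.WriteLine(account.AddIncome(50m)); // prints 150
    }
}

Persisting the result is the low-level module’s job: a use case would call this entity first, then hand the new balance to whatever class implements the repository interface.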
Conclusion
The critical takeaway from our exploration of software architecture is the paramount importance of decoupling from specific technologies. This approach not only ensures flexibility and adaptability in a constantly evolving tech landscape but also safeguards against the risks associated with technological obsolescence. By prioritizing a decoupled architecture, we empower our systems to seamlessly integrate new technologies, respond to changing market demands, and reduce dependency on any single tool or platform. Such an architecture, grounded in principles like the Dependency Inversion Principle and leveraging techniques like polymorphism, enables organizations to stay agile and competitive. It’s a strategic investment in the sustainability and scalability of software solutions, ensuring that they remain robust and relevant over time. This philosophy of decoupling from technology is not merely a technical decision; it’s a business strategy that aligns software development with long-term organizational goals.
References
- “Domain-Driven Design: Tackling Complexity in the Heart of Software” by Eric Evans — Often referred to as the “Blue Book”, this seminal work lays the foundation for understanding domain-driven design, an approach that emphasizes the importance of a domain-centric architecture in software development.
- “Clean Architecture: A Craftsman’s Guide to Software Structure and Design” by Robert C. Martin (Uncle Bob) — This book offers a deep dive into the principles of clean architecture, highlighting the significance of creating systems that are independent of frameworks, UI, and databases. It guides the reader towards architecture that is both testable and easy to understand, which inherently supports the concept of technology decoupling.