tl;dr
Instead of forcing your application into a prescriptive template like Clean or Hexagonal Architectures, get back to basics and use patterns from Modular Software Design. Divide the application into independent modules, each containing business logic representing a specific process. For modules with complex business logic, extract the infrastructure-related code into separate Infrastructure-Modules. This will enable you to build an application characterized by low cognitive load, high maintainability, and high extensibility.
This approach is called MIM AA (Module Infrastructure-Module Application Architecture).

Important remark: if you’re new to Modular Software Design, I really encourage you to first read the appendix on fundamentals: 8. Appendix - Introduction to Modular Design.
Why MIM AA?
In this article, I’d like to present a generic application architecture that can be used in a wide range of software types, whether it’s an enterprise system or a console app. It’s the result of my research on modernizing Modular Software Design with the essence of Clean/Hex/Onion Architectures.
The approach presented here isn’t revolutionary, nor did I invent it. Preparing this paper was more about connecting the dots, polishing patterns, and providing a practical example of how it can be used. Sadly, this design is relatively unknown (especially when compared to the alternatives), and what’s worse, it seems nameless. So I decided to call it a “Module Infrastructure-Module Application Architecture” or just MIM for short.
The beauty of this architecture is that it’s a natural consequence of Modular Design patterns:
- High Cohesion,
- Low Coupling,
- Information Hiding.
Additionally, it encompasses the essence of the Clean/Hex/Onion Architectures.
This Application Architecture aims to lower the cognitive load required to work with the design. At the same time, it’s simple and straightforward. With testability as a first-class citizen, it’s compatible with advanced techniques like Test-Driven Development and the Chicago School of Unit Tests. Overall, it can easily compete with the Clean/Hex/Onion Architecture trio.
Why is this text so long?
Although MIM is simple, I felt that the benefits of the new approach might not be obvious without good examples. Additionally, I realized I couldn’t just use terms and concepts from Modular Software Design without explanation, because they have lost their original meaning and, for many people, become mere clichés. This is because there aren’t really many good, modern resources on this subject. These were the reasons that convinced me to introduce the lengthy “Example application” and “Introduction to Modular Design” chapters.
In this text, I also wanted to tackle the “gray area” between High-Level Design (System Design/Architecture) and Low-Level Design (Patterns/Principles/Classes). This is the stage where we consider the decomposition of the application into coarse-grained units (or grouping classes into larger units). In my opinion, there’s a gap in the resources available for this design area and it can be filled precisely with Modular Software Design.
Where it can be applied
MIM is a basic concept, so it can be used almost everywhere:
- Microservices,
- Monoliths and Modular Monoliths,
- CLI console apps,
- Domain-Driven Design,
- Non-Domain-Driven Design,
- Anything that is non-trivial and doesn’t have strict memory/CPU constraints (so it’s not a fit for, e.g., low-level embedded systems).
Modular Software Design is the foundation of MIM. This application architecture assumes that designers begin with - and in tough cases also fall back to - the characteristics, patterns, and heuristics from Modular Design (see 8. Appendix - Introduction to Modular Design).
To modernize Modular Software Design, I propose two extensions: Business-Modules and Infrastructure-Modules.
Business-Modules
First of all, when it comes to application architecture, forget about layers. Assume it’s an archaic way to arrange the design, and that includes layers in circles.
Instead of focusing on prescriptive arrangements, design the application around the processes it fulfills. That’s where modules come into play. Modules should be your building blocks.
Characteristics of a proper module:
- It has a clear public API (so you can easily check what you can do with it).
- It encapsulates its data, which is accessible only through the public API.
- It has clear responsibilities that reflect the process, not a design layer.
- It is independent, except for responsibilities carried out for or delegated to other modules.
Examples of proper modules:

Examples of bad modules:

In short, a proper module is a black box responsible for an independent flow (i.e. process). Other modules act upon it via its programming API (i.e. calls to class methods), and it doesn’t allow other modules to query its data source directly (bypassing the API).
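Such a black-box module can be sketched in a few lines of Python. This is only an illustration of the characteristics listed above; the order-handling domain and every name in it are hypothetical.

```python
class OrdersModule:
    """A 'proper module': clear public API, encapsulated data,
    responsibilities reflecting a process (placing orders)."""

    def __init__(self) -> None:
        # Encapsulated data - other modules never touch this directly.
        self._orders: dict[str, str] = {}

    def place_order(self, order_id: str, item: str) -> None:
        """Public API: runs the whole 'place order' flow."""
        self._orders[order_id] = item

    def order_status(self, order_id: str) -> str:
        """Even reads go through the API, never through the data itself."""
        return "PLACED" if order_id in self._orders else "UNKNOWN"
```

Callers interact with the module only through `place_order` and `order_status`; how (or where) the orders are stored remains the module’s own business.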
That’s the classical Modular Software Design. What it misses, though, is testability. If a business-module has a lot of complex business logic, it cannot be easily tested, since the business logic is mixed with the untestable infrastructure code (e.g. file system or network calls) that lies in the same module. So we need to introduce a separation, and that’s where Infrastructure-Modules come into play.
Infrastructure-Modules (aka Infra-Modules)
An Infrastructure-Module is a module that contains only infrastructure code without any business logic. It serves as a subordinate to a Business-Module.
Characteristics of Infrastructure-Modules:
- Belong to one Business-Module.
- Contain no business logic.
- Adhere to the Dependency Inversion Principle (DIP) with regard to their Business-Modules. This means that such a module can have a compile-time dependency on a Business-Module by implementing its interfaces. But no Business-Module can have a compile-time dependency on Infra-Modules.
After moving the Infra code away, Business-Modules will be free from untestable code. That design decision brings us testability, so we are able to fully test all the complex business logic via the Business-Module’s public API alone. As a side effect, we also get Separation of Concerns with its benefits.
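The DIP relationship between the two kinds of modules can be sketched like this. The repository interface and class names are made up for the example; in a real system the infrastructure side would wrap an actual database driver.

```python
from typing import Protocol


# --- Business-Module side: it owns the interface it *requires*. ---
class EntityRepository(Protocol):
    def save(self, entity_id: str, data: dict) -> None: ...


class BusinessModule:
    def __init__(self, repository: EntityRepository) -> None:
        # Compile-time dependency on the abstraction only.
        self._repository = repository

    def register(self, entity_id: str, data: dict) -> None:
        # ...complex business logic would live here...
        self._repository.save(entity_id, data)


# --- Infrastructure-Module side: implements the Business-Module's interface. ---
class InMemoryRepository:
    def __init__(self) -> None:
        self.rows: dict[str, dict] = {}

    def save(self, entity_id: str, data: dict) -> None:
        self.rows[entity_id] = data
```

The dependency points from `InMemoryRepository` to the `EntityRepository` contract, never the other way around, which is exactly what keeps the Business-Module testable.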
What goes to the Infra-Modules:
- HTTP handlers (exposed APIs, but also server-side rendering),
- HTTP clients,
- Database connectivity,
- Message bus code (incoming and outgoing, no need for segregation),
- File system operations and all other I/O.
Here’s an example of a Business-Module with its companion (subordinate) Infrastructure-Module:

The image above shows a pair of modules, where the green one is the Business-Module with complex logic. Whenever it needs to invoke code external to itself (e.g. to save an entity or send a message to a message bus), it exposes a public interface and invokes its methods instead. Thanks to that, it has no code dependencies (i.e. compile-time dependencies) on the Infrastructure-Module. The blue Infra-Module has a dependency on the Business-Module, because it implements the Business-Module’s interface. This module contains all the code related to database connectivity and RabbitMq handling. But it also bootstraps the Business-Module; for instance, it hooks up its classes as implementations of the Business-Module’s interfaces in the Dependency Injection container (DI/IoC container).
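The bootstrap step can be sketched without any DI framework. In a real application the Infra-Module would register these bindings in a DI/IoC container; the message bus interface and all names below are hypothetical.

```python
from typing import Protocol


class IMessageBus(Protocol):
    """Required interface, owned by the Business-Module."""
    def publish(self, message: str) -> None: ...


class BusinessModule:
    def __init__(self, bus: IMessageBus) -> None:
        self._bus = bus

    def complete_process(self) -> None:
        self._bus.publish("process-completed")


# --- Infrastructure-Module ---
class RabbitMqBus:
    """Would wrap a real RabbitMQ client; here it just records messages."""
    def __init__(self) -> None:
        self.published: list[str] = []

    def publish(self, message: str) -> None:
        self.published.append(message)


def bootstrap_business_module() -> BusinessModule:
    """The Infra-Module wires its concrete classes into the Business-Module."""
    return BusinessModule(bus=RabbitMqBus())
```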
The following image shows what it could look like in a more detailed quasi-UML diagram:

- Infrastructure-Modules are not layers. An infra-module should be used only by its Business-Module. If there’s code you would like to share among many modules, move it to a stand-alone module or a library.
- They’re not mandatory. Not all modules have infrastructure code, and not all have business logic complex enough to justify the separation. The most important hint for introducing an infra-module is the need to unit test the Business-Module.
If you know the Clean, Hexagonal, or Onion Architecture, you’ve probably already spotted some similarities, or you may wonder whether MIM is too simplistic compared to them. This chapter will explore these matters.
The core concept of the mentioned application architectures is to make the business logic independent from communication with the outside world (disk, network, UI, etc). It’s achieved by applying the Dependency Inversion Principle (DIP) at the architectural level. Exactly the same concept is used in MIM (Module Infrastructure-Module Application Architecture). Thanks to that, all these architectures solved the problem of low testability, which was intrinsic to the classical Three-tier Architecture.
So let’s list characteristics MIM and circular-layers architectures share:
- Separation of business and infrastructure logic,
- Allowing only a compile-time dependency from infrastructure code to the business logic code (never the reverse). That means that the business logic exposes public methods (which could be invoked from outside) and/or exposes required interfaces, and the infrastructure layer/module implements them.
But when we look at all the extra stuff in the Clean/Hex/Onion trio that has been added around that core concept, these architectures don’t look so simple anymore (see images below). I must admit that the Hexagonal one, at least in the original paper, looks the most straightforward. However, just like for the others, there are long-running debates across the Internet about how these architectures should be implemented, what each element means, how to implement each layer, etc. A lot of failed or overengineered implementations happened due to such misunderstandings.
Here are the canonical diagrams representing each of the application architectures:
Clean Architecture (from blog.cleancoder.com)

Hexagonal Architecture (aka Ports & Adapters)
Meanwhile, MIM AA

Let’s go through some of the problems people encounter when implementing circular-layers architectures and see how MIM helps with them:
- The original writings on circular-layers architectures focus on the notion of an application, as if the entire monolith or service was to be fit into these 3-5 layers. On the other hand, MIM relies on Modular Design and shows how to compose the application out of modules. This makes MIM a more holistic approach to application design.
- All the circular-layers architectures prescribe some layers of code, no matter if you need them or not. In MIM there are no layers, just modules and optional separation of concerns. So MIM is more adaptable. And it’s up to the designer how the inner implementation of modules will look.
- In the Hexagonal Architecture there is an artificial split between Driven and Driving Adapters; in MIM you can place them in one module (and save yourself the trouble of figuring out where to put duplex I/O).
- In MIM, each module can be unit tested separately. In the circular-layers architectures… it’s hard to tell, because there are many community interpretations of what goes into the “core”. With literal implementations, everything is mixed in the “core” or “application” (and you don’t get clear testing boundaries per process/domain). One of the popular community interpretations, i.e. “layers per domain”, does give you good separation, but at the cost of duplicated layers you don’t need.
To summarize, MIM shares the dependency-inversion concept with the Clean/Hexagonal/Onion Architectures. But MIM also proposes approaches for the gray areas of the overall design process that developers need to address to complete a project. Also, this application architecture is less prescriptive, thus more universal. Of course, there are many successful projects built with, e.g., Hexagonal or Clean/Onion Architectures, but in most cases I’ve seen, the designers first had to address the ambiguities and unanswered questions themselves in order to succeed. For small projects, a design typical of Hexagonal Architecture (or maybe even Clean Architecture) might be better. At least as long as it stays small.
In this chapter, we are going to see how to split an exemplary, “big ball of mud” application into a modular one that adheres to MIM AA. I hope this exercise will complement my descriptions from previous chapters, making MIM concepts more tangible.
Description
Let’s describe our imaginary application:
- Name: H&V Server
- Purpose: To control Heat and Ventilation in an experimental greenhouse (it’s a custom HVAC solution).
- External components:
- SPA web app - the UI that talks to the app via HTTP.
- Embedded IoT Devices with sensors that send a massive amount of signals (our app is just one of the systems that use these signals).
- A NoSQL database.
- Independent Ventilation Scheduler - an external system which has a priority in controlling the ventilation.
- Alarm system - that needs to be notified if human intervention is required.
- Device API - each device can be accessed via an IP and controlled with a custom protocol over TCP.
- Requirements: There are a lot. We will go over them later on.
- Application architecture: In the beginning, it was to be a classic layered architecture. But when programmers realized that there were no blueprints for such a mix of external components, it all collapsed into just one project, where everything is mixed together. So at the root you just see directories like: “Model”, “Services”, “Interfaces”, “Entities”, etc. It’s compiled into a single, massive executable file.
A diagram representing this application could look like this:

What are the possible problems with this application:
- It’s hard to reason about any feature without performing a detailed code inspection.
- It’s hard to implement proper Unit Tests (i.e. tests based on behaviors).
- During the period of initial development, it was hard to split work among teams or developers (the “everyone must know everything” syndrome).
Unfortunately, when modernizing such a system we need to understand the codebase and all requirements (not only initial requirements, but also actual behaviors not documented anywhere). This process will allow us to build a list of the Responsibilities the application fulfills.
In this exercise I will take some shortcuts to keep this chapter at a reasonable size.
First iteration
Let’s say that we have begun the first iteration by identifying the following high-level responsibilities:

As you can see, they are totally unrelated to each other. It means that according to the High Cohesion pattern, the code responsible for each of them should not reside together.
So let’s start the refactoring by splitting the app based on these responsibilities:

The first iteration revealed what the real, external dependencies of “high-level responsibilities” are. For instance, now it’s visible that only one module needs a database.
In this diagram, shared dependencies mean module coupling. We’ll tackle this later, but let’s first go module by module and try to see if there are any more hidden responsibilities, and maybe figure out what patterns we can apply to make the codebase more modular.
Battery Alarms Module
Now, let’s assume the Battery Alarms module has some additional responsibilities (they were already extracted with all the alarms code from the original codebase):
- Raising the alarm only once per 15 minutes (this requirement is allowed to be broken after application reboot, so the state is held only in memory).
- Support for many alarm types in the future.
Luckily, all these responsibilities already fit nicely into our Battery Alarms module. So, this module adheres to fundamental principles of modularity (High Cohesion, Low Coupling, Information Hiding), but it ignores testability. Right now, we can only test Alarms using Integration Tests. (In theory, we could analyze the codebase, find boundaries, segregate classes internally, and introduce seams. But in practice, it will start to rot over time, especially when new people are introduced to the project).
That’s where MIM comes into play. To make the Battery Alarms module really testable, we need to extract all the infrastructure-related code: the IoT handler and the HTTP client to the external system:

Here is an example of the quasi-uml low-level design for the Battery Alarms:

There are some interesting points in this diagram that I need to highlight:
- All relations point from the Infrastructure-Module to the Business-Module. The Business-Module is fully independent, thus fully testable via Chicago School of TDD (more on that later on).
- Actions that the Business-Module has to perform on the outside world are represented by an interface. That interface is implemented in the Infra-Module (see IAlarms and Alarms).
- Actions that originate in the Infra-Module (i.e. incoming IoT signals) invoke classes in the Business-Module directly. There’s no need for any interface here. The Infrastructure-Module bootstraps the Business-Module, so it can just as well inject a real implementation into its classes. (In rare situations, an extra abstraction layer might be needed, but it should not be the default option).
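The points above can be sketched in code. `IAlarms`/`Alarms` come from the diagram; the 15-minute window comes from the stated requirement; the battery threshold and the rest of the names are illustrative assumptions.

```python
from typing import Protocol


# --- Business-Module ---
class IAlarms(Protocol):
    """Outgoing action, required by the Business-Module (implemented in Infra)."""
    def raise_alarm(self, device_id: str) -> None: ...


class BatteryAlarms:
    """Holds the 15-minute throttle state in memory, as the requirement allows."""

    WINDOW_SECONDS = 15 * 60

    def __init__(self, alarms: IAlarms) -> None:
        self._alarms = alarms
        self._last_raised: dict[str, float] = {}

    def on_low_battery(self, device_id: str, now: float) -> None:
        last = self._last_raised.get(device_id)
        if last is None or now - last >= self.WINDOW_SECONDS:
            self._last_raised[device_id] = now
            self._alarms.raise_alarm(device_id)


# --- Infrastructure-Module ---
class IotSignalHandler:
    """Incoming direction: calls the Business-Module directly, no interface."""

    def __init__(self, battery_alarms: BatteryAlarms) -> None:
        self._battery_alarms = battery_alarms

    def handle(self, device_id: str, battery_level: int, now: float) -> None:
        if battery_level < 10:  # illustrative threshold
            self._battery_alarms.on_low_battery(device_id, now)
```

Note how the dependencies match the diagram: `IotSignalHandler` depends on `BatteryAlarms`, and `BatteryAlarms` depends only on its own `IAlarms` interface.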
Firmware Dispatcher Module
Again, let’s say that during the code inspection of the original code, we found some more responsibilities:
- Controlling percentage-based releases of the new version.
- Firmware rollback.
And, just like previously, even with all these responsibilities, having just one Business-Module is fine. It will still be simple, and it nicely represents the main high-level responsibility.
But there’s some complex infra code around that logic, and that’s a basis for creating an Infrastructure-Module, which will handle the following technical responsibilities:
- Setup and operation of an HTTP endpoint.
- Talking to devices via a custom protocol.
Such a separation will make the business logic easier to comprehend (and also testable).

The diagram above shows what the design on the module level looks like for the Firmware Dispatcher module. Its low-level design will be similar to that of the Battery Alarms Module.
H&V Controller
This module will be the most complex one. In this chapter, I’m simulating a design process, so please bear in mind that such a process is not only incremental, but also iterative and adaptive. That’s why during this iteration we will see how the design of this module affects the two previous module designs.
The previously discovered responsibility was:
- Controlling Heat and Ventilation.
Now, let’s assume that during the code inspection, we found these additional responsibilities:
- Performing simulations that will determine the optimal heat and ventilation parameters.
- Scheduling the process of sending parameters to the devices.
- Including external Ventilation Schedule in simulation and scheduling.
I’m already anticipating some issues, but for the time being, let’s just apply the MIM pattern to this module and see what we will achieve. The result will look like this:

It might look fine on the surface, but there are two problems visible on this diagram (that were not the case in the designs of the former modules):
- The H&V Controller module has too many high-level responsibilities assigned to it.
- It duplicates logic among modules that is related to communication with external subsystems (e.g. handling Devices).
The first problem is hard to quantify. It’s a designer’s call what counts as “too many responsibilities” (well, only until we see it in code; then it becomes more apparent). If you were to say “one responsibility per module”, you would often end up with a plethora of small modules. Such a situation increases the cognitive load of the design. The same happens if you assign too many responsibilities to each module. Of course, the High Cohesion pattern comes in handy here.
In this case my judgement is that I will extract the following modules:
- H&V Simulation - because it’s a complicated, self-contained algorithm. It can be implemented by a separate person in parallel, and it will surely have its own set of unit tests (and, by the way, those are two more hints for module extraction).
- H&V Scheduler - it’s an execution module with independent business logic. Its infrastructure code will be extracted to the companion “infra” module.
The second problem with the design above was the duplication of code for some of the infrastructure components. Don’t get me wrong, not all duplication is bad (as Rob Pike said: “A little copying is better than a little dependency”). But in this case it’s a significant piece of logic - a custom implementation of a TCP protocol in one case, and filtering of high-throughput data in the second. That’s why we will add the following standalone modules:
- “Device API” - which will contain the “Device API” code from “Firmware Dispatcher.Infra” (which we designed previously).
- “Signals Filter” - which will contain the “IOT” code from “Battery Alarms.Infra” (likewise, designed previously).
Another decision to make is where to put the infrastructure code for “Independent Ventilation Scheduler” (see the diagram above). The default choice would be to use the MIM pattern and put it into the Infrastructure-Module of the “H&V Controller” module (i.e. “H&V Controller.Infra”), just as we did with H&V Scheduler’s infra code. But anticipating that it could also be reused, or maybe just to show how flexible the design process is, I will also extract it as a standalone “infrastructure-only” module.
When we apply all these decisions to the design, we are going to achieve H&V Controller module that looks like this:

This is one possible design. The image might look a bit untidy, and there are certainly other ways to arrange the modules and spread the responsibilities among them. But I think this one showcases the benefits of MIM well.
The advantages of MIM visible in the above design for the H&V Controller are:
- Responsibilities are clearly separated (I didn’t even have to label them on the diagram).
- It’s easy to reason about the flow of logic.
- Adding new features or updating existing ones will only happen in modules related to the affected responsibilities (by contrast, in circular-layers architectures even simple changes often affect all layers: domain, application, and repositories).
- As you can see, the MIM approach is quite flexible. In places where MIM doesn’t fit, we use standard Modular Design practices. E.g. Ventilation Schedule is a standalone read-model. I color-coded this element in blue, because it contains only infrastructure code. It’s still subordinate to the H&V Controller, hence the direction of the dependency.
Integrating the modules
When we bring all modules together, the overall design will look like in the image below. (Just a reminder: the “dep.” on the diagram means compile-time dependency; it points to a module that exposes a required interface. This interface is implemented by the module from which the dependency originates).

The layout of elements in the image isn’t ideal (I wanted it to be mobile-phone friendly), but nevertheless, just by reading the names, I think we achieved the “screaming architecture” level. Everything is obvious, especially when you compare it to the “big ball of mud” we started with.
- You can easily see which modules depend on which external dependencies (and which ones are shared).
- Each module is self-contained. E.g. HTTP handlers for Battery Alarms features are located in the “Battery Alarms.Infra” module.
- Such a design is “teamwork” friendly. Each module could be implemented by a separate person on a shared repo, without stepping on each other’s toes.
At the bottom of the image, you can see the Entrypoint module. It’s the starting point of the application. Such a module should bootstrap other modules (in most cases, it has dependencies on almost all other modules, except the Business-Modules, which are bootstrapped by their Infra-Modules). It also contains code for shared infrastructure such as observability, setup of authorization, etc. (See FAQ for more information on that).
This whole application would probably be compiled to a single executable (but that depends on your runtime/framework; e.g. in C# it would be an executable and a DLL for each module; in Go it would be a single executable, and in Java a set of JAR files).
Our example ends here. But if I were to carry this further I would move on to verifying architectural drivers that were selected upfront (high availability, resilience, security, etc). For instance, do I need to run the application (service) on many instances (pods)? If so, would the design allow for that? How about ensuring that the alarm goes off only once? All of these could be requirements (NFRs/architectural characteristics) that the design must support.
The proposed application architecture uses modules as building blocks, because they allow us to lower the overall complexity of the solution. Modules in MIM should represent business capabilities, processes, flows or “significant pieces of logic”. They hide the information required to carry out the logic as well as the underlying complexity. Modules then expose public APIs that let us use them as black boxes. That’s nothing else but High Cohesion and Information Hiding in practice. Or as J. B. Rainsberger put it, by using modules you “hide details in a box”.
Moreover, when modules are properly split by business capabilities we achieve the “Screaming Architecture” (the term coined by Robert C. Martin). It means that it’s easier to understand and navigate the codebase, because its structure guides you through itself (which is not the case when all you see are modules or directories like: “Domain”, “Core”, “Persistence”, “Infrastructure”, etc).
Other architectures that are based on layers put more emphasis on Separation of Concerns. Thus, every layer is dedicated only to a specific technical function (e.g. all domain code, all use case code, all infrastructure code). But as a result they mix in one layer all the different business processes and domains. As the code grows, it gets harder to reason about.
Many systems and monoliths built long ago were also modular, but their modules were mostly based on the classical Three-tier Architecture. Such a setup had low testability, because the Business layer required the Data Access Layer (DAL) to operate even in unit tests (or you had to perform code surgery to find seams, if there were any).
To achieve testability in modular software we need to leverage the Dependency Inversion Principle (DIP) up to the level of application architecture. That was the reasoning behind introducing the concept of Infrastructure-Modules. When we extract all the infrastructure code from Business-Modules, they can stay focused on the business logic and have no design/compile-time dependencies on any untestable infrastructure code. As a side effect, we achieve Separation of Concerns.
You might wonder why not just put everything that is “infrastructure related” in a dedicated directory inside the Business-Module. That’s the approach often taken in many designs in the wild, but the problem with such a weak separation is that it tends to erode (and after many months you discover that a business class peeks at messages in a broker). Another problem is that it’s much harder to find the boundary for unit tests (whereas with BM and IM separated, you can just assume that the public API of the BM is what should be unit tested).
Just to emphasize once more, Infrastructure-Modules are optional. They make sense only when the module contains non-trivial business logic. In other cases, e.g. when designing CRUD-based modules, Infrastructure-Modules would be just a burden, and you should fall back to the classic Modular Design practices.
Despite being late in this paper, this chapter is one of the most important ones. It describes the Adaptive Testing Strategy. It’s a recommended test approach, a sensible default, if you will, when working with MIM. Please note that Adaptive Testing Strategy is not a part of MIM and can also be used with different application architectures.
Testing is crucial, and without testing no architecture will help you in the long run. That’s why testability is at the core of MIM. But it doesn’t mean you should do excessive, knee-jerk testing of every little method. A testing strategy must be adaptive the same way the architecture is.
The Adaptive Testing Strategy encompasses the following types of tests.
- [Optional] Integration Tests (also known as Service Tests or Component Tests) - i.e. testing all the application’s modules combined together, including dependencies the application owns (e.g. a database). External dependencies should be substituted by test doubles (fakes or mocks).
- [Optional] Sociable Unit Tests for the Business-Modules - for testing the business logic.
- [Optional] Overlapping Unit Tests for Classes - where individual classes or groups of classes are tested. I call them “overlapping”, because in most cases they will test the same code elements as Sociable Unit Tests, but they come in handy in rare cases when better coverage is needed.
As you can see, I’ve highlighted that every level of the tests is optional. That’s because it should be intentionally deliberated on which levels of tests will bring the most value in a given application. Writing tests is a cost, and bad tests are a waste. Duplicated coverage might seem like an extra protection, but it will be a burden during refactoring.

So it’s essential to properly balance the levels of tests to maintain a high quality and speed of development.
Here are some rules of thumb when to use each level of tests:
- If there is at least one Infrastructure-Module, Integration Tests will be worthwhile. The scope of the integration tests depends on whether or not other types of tests are present. If there’s good coverage with Sociable Unit Tests for the Business-Modules, it will be enough for Integration-Tests to cover only happy-paths. On the other hand, CRUD apps will benefit more from having a comprehensive suite of Integration Tests than from unit tests.
- If there are Business-Modules with complex logic, each of them should have a dedicated, independent suite of Sociable Unit Tests. Thanks to the Dependency Inversion Principle (DIP) between BMs and Infrastructure-Modules, this kind of test will be easy to write. The more complex the logic, the bigger Return-on-Investment the tests will yield. If you don’t know what “Sociable Unit Tests” means, please read on.
- If there’s a complex algorithm or a piece of logic which is not easily testable through Sociable Unit Tests, then you should unit test this piece with Overlapping Unit Tests. I will explain what it means in a moment.
Please remember that these are just guidelines, not rules. I wrote previously “intentionally deliberated on” not without a reason.
Characteristics of good unit tests
Let’s just briefly recall what good unit tests should look like:
- They are a safety net - it means that you can refactor the code (e.g. split or merge classes or functions) and tests still pass. If you can’t refactor anything without breaking the tests, something is not right.
- They help in finding and fixing bugs - tests can expose paths in the logic that were overlooked during implementation. Also an existing suite of tests should allow you to reproduce a found bug before fixing it.
- They’re fast - i.e. feedback in seconds.
- They give confidence - so that the software can be released without hesitation.
Sociable Unit Tests
I wrote an article Unit Tests - from waste to asset that explains how to write unit tests that bring the most value. I suggest reading it as a complement for this section.
When you test a module, especially when you use TDD to drive the design of the code, you shouldn’t test classes or functions in isolation. Going “too low” with the unit tests (class by class, method by method) takes away almost all the “Characteristics of good unit tests” described above. That’s because you don’t test the key element of the design, that is, the interactions between classes/functions. Such a test suite exhibits excessive usage of a mocking framework (a sign of a bad test suite; probably a large number of tests only check that method A invokes method B).
Instead, you should treat Business-Modules as the units you want to test. A module has a clear, explicit API (public methods as well as required interfaces). It means that all the results and side effects can be observed via the API. So in tests you can apply stimulus to the module and assert the results/side effects - that should cover all possible paths (except for time based logic, but there are other methods to cover these cases).
The following illustration shows the scope of Sociable Unit Tests, and for contrast, the scope of Overlapping Unit Tests.

If your Business-Module cooperates with other Business-Modules, you should by default intercept the communication in tests by using a Fake (a kind of test double). That’s because the module is the “unit” you test. (If, for some reason, you decide to test two modules together, nothing will explode - but remember that integration tests exist for exactly that purpose.)
These tests are also more business-oriented, because they focus on testing external, visible behaviors (a kind of “BDD in code”). You don’t have to think about internal implementation details anymore. In most cases you don’t even need mocks; hand-written Test Doubles/Fakes/Stubs (e.g. an in-memory list that simulates a database, a fake time provider, etc.) are enough, and tests with Fakes are much cleaner.
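To make this concrete, here is a minimal Java sketch of a Sociable Unit Test (all names are hypothetical, invented for illustration): a small Business-Module is exercised only through its public API, and a hand-written in-memory Fake stands in for the persistence interface the module requires.

```java
import java.util.ArrayList;
import java.util.List;

// Required interface of the module (implemented by an Infrastructure-Module in production).
interface AlarmStore {
    void save(String alarmCode);
    List<String> loadAll();
}

// The Business-Module under test: all behavior is reachable through its public API.
class BatteryAlarmsModule {
    private final AlarmStore store; // injected dependency, thanks to DIP

    BatteryAlarmsModule(AlarmStore store) { this.store = store; }

    // Business rule (made up for the sketch): voltages below 3.0 V raise an alarm.
    void onVoltageSample(double volts) {
        if (volts < 3.0) store.save("LOW_VOLTAGE");
    }

    List<String> activeAlarms() { return store.loadAll(); }
}

// Hand-written Fake: an in-memory list simulating the database.
class InMemoryAlarmStore implements AlarmStore {
    private final List<String> alarms = new ArrayList<>();
    public void save(String alarmCode) { alarms.add(alarmCode); }
    public List<String> loadAll() { return new ArrayList<>(alarms); }
}
```

A Sociable Unit Test then applies a stimulus and asserts the observable result, without peeking at any internals: feed in voltage samples through the API and check which alarms became visible.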
Side note: I’ve found that some people don’t like or don’t understand Test-Driven Development, because they assume that in TDD you’re supposed to test every implementation detail in isolation. That would be indeed a waste. But that’s not how people do TDD (especially in Chicago/Detroit School of TDD). Previous paragraphs have already explained what the suggested approach looks like.
Overlapping Unit Tests
This level of testing is the most optional of all. In most cases, Sociable Unit Tests should cover all required scenarios, and we can assume that paths that can’t be invoked via the module’s API also won’t be invoked in production.
Yet in practice, it sometimes turns out that setting up some complex scenarios, especially corner cases, takes too much effort/code. Or there are algorithms that, for the time being, are used only with limited parameters, but whose whole implementation should be exercised in tests. There might be a similar situation with complex Value Objects. In such cases it’s worthwhile to go one level below Sociable Unit Tests and create a suite of Overlapping Unit Tests.
The concept was already illustrated above on the image with Sociable Unit Tests. Also the name is likely self-explanatory and there is a good article online about it, so I won’t be too exhaustive here (see: Resources). The crux is that it’s acceptable - if circumstances require it - to add an extra suite of tests for particular classes, methods, or functions. This does not violate the rules of Sociable Unit Tests (it will just supplement them).
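As an illustration, here is a minimal Java sketch (a hypothetical Value Object, invented for this example) of the kind of corner-case-heavy code that can justify an extra Overlapping suite, tested directly alongside the module-level Sociable tests:

```java
// Hypothetical Value Object with corner cases (clamping, NaN) worth exercising directly.
final class OpeningRatio {
    private final double value; // invariant: always within [0.0, 1.0]

    OpeningRatio(double value) {
        if (Double.isNaN(value)) throw new IllegalArgumentException("NaN ratio");
        // Out-of-range input is clamped rather than rejected (a design choice for the sketch).
        this.value = Math.min(1.0, Math.max(0.0, value));
    }

    double value() { return value; }
}
```

An Overlapping suite would hammer the constructor with boundary values directly, while the Sociable suite keeps testing the module that happens to use this type.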
Is MIM compliant with DDD?
Yes, you can use it with projects that follow Domain Driven Design.
What if I don’t use DDD?
It doesn’t matter. MIM is a more fundamental concept, closer to Low-Level Design than to a high-level software development approach.
Can I use it with CQRS or Event Sourcing?
Yes, you can even mix them with other approaches, so that every module uses the technique that is most suitable for it. For CQRS or ES, the implementation for one domain should be concentrated in one module. If the business logic is considerable, the infrastructure part of the mechanism should be extracted into an Infrastructure-Module.
Can I use it with Event Storming/Event Modeling/Event Streaming and whatnot?
These concepts are orthogonal to MIM, so yes.
Is it AI friendly?
One of the benefits of MIM is lowering cognitive load on developers. The same effect makes this application architecture AI friendly. With clear boundaries and limited scope, it’s easier to fit the problem into an AI’s context window.
Is it really something new? (I think I saw it before)
MIM is not revolutionary. It’s just Modular Design married to a simplification of Clean/Hex/Onion Architectures (i.e. an elevated DIP). It’s also influenced by the “Imperative Shell, Functional Core” pattern and James Shore’s A-Frame. You may also find it similar to J.B. Rainsberger’s “Universal Architecture” (although that one uses circular layers, its simplicity is refreshing).
You may have already seen designs resembling MIM in the wild, especially in projects that wanted to be modular but also testable, or in projects that started with the Clean/Hex/Onion Architecture but got rid of the artificial layers.
But while modular design and DIP on an architectural level are well-established, the simplified combination described here is rather niche. And since I haven’t found any longer articles for such a design, I decided to write my own detailed description (along with the design tutorial).
A Modular Monolith could use MIM AA. But to be honest, “Modular Monolith” is just a buzzword that emphasizes the need to modularize monoliths. Proper monoliths were always modularized (see, for instance, the Linux codebase). MIM AA can be seen as an approach to monoliths that prevents people from creating “Big Ball of Mud” monoliths.
How does MIM compare to Vertical Slice Architecture (VSA) or “Package by Feature”?
VSA and “Package by Feature” are patterns for organizing code around features, where each slice/package contains “all layers” the feature needs to operate. In this regard, they’re quite similar to classical Modular Design, except for the scope. There are no hard rules, but intuitively “features” are usually smaller than “processes” (on which modules should be based). In practice, though, I see developers organize slices into groups, which then resemble modules.
So when it comes to MIM the main differences would be:
- Size/scope - as described above;
- MIM separates the infrastructure code, while VSA/“Package by Feature” does not (at least not in the original definitions); hence it’s harder to unit test slices, and people are told to rely more on integration tests.
But in practice (at least from what I’ve seen on the Internet), even with VSA some developers extract infrastructure code into a separate module (which resembles MIM) or layer (which resembles the Hexagonal Architecture). That is another example of what I wrote previously: you may have already seen something similar to MIM in the wild.
What about SPA web applications?
Such applications could be viewed as any other, so MIM can be applied to them as well. But keep in mind that if the SPA webapp is just a thin client for the backend, separating Infrastructure-Modules might not be worthwhile. (In such a case, according to Adaptive Testing Strategy, blackbox Integration Tests with a faked backend could be just enough. That’s because testability of individual modules would be less important).
How Bounded Contexts relate to Modules
A Bounded Context is a logical group. Any big application (a monolith) will have at least a few of them. Each Bounded Context groups one or more Modules, but one module must belong to only one Bounded Context.
What do modules look like on the code level?
Although some programming languages support the concept of modules more than others, there are no hard rules and MIM doesn’t enforce anything at the code level as long as Modular Design is preserved. It’s helpful though when the compiler assists with keeping the boundaries right and when connections between modules are as explicit as they can be.
I can suggest the following setups:
- In C#/.NET you can use .csproj as a module boundary (e.g. HVServer.BatteryAlarms.csproj plus a companion HVServer.BatteryAlarms.Infra.csproj);
- For JVM-based languages (Java, Kotlin) you can utilise jar files;
- Go (golang) has native module support already.
The application module that contains the main() function - I call it the Entrypoint module - should be used to connect modules together, so other modules don’t need to know how to wire up modules they depend on. In most cases, it would mean using some kind of Dependency Injection framework, though such a framework is not strictly required (e.g. Golang does just fine without it). The Entrypoint can also be used to provide cross-cutting concerns (authorization, observability, etc) to other modules.
Some other advice:
- It’s worthwhile to prepare a single “Bootstrap” place for each module, so the executable’s Entrypoint (mentioned above) can easily compose the application out of modules without needing to know how to wire the modules’ internals. For instance, in the SignalFilter module I could create a SignalFilterModule.cs file with the bootstrap code. For FirmwareDispatcher, though, I would place this code in the companion Infrastructure-Module, i.e. in FirmwareDispatcherInfraModule.cs (so there is just one piece of bootstrap code for the pair);
- Avoid internal namespaces/directories like Interfaces/Helpers/Extensions/etc. Use domain language and remember the fractal nature of modules;
- Be very strict about which code elements have public scope. Use private (or internal/package-private) access modifiers by default.
How should Business-Modules communicate?
In most cases, plain old method invocations will do. If two Business-Modules are on the same level (like “H&V Controller” and “H&V Simulator” from the example in point 4), invoking a public method of a class is just fine. It doesn’t even have to be behind an interface, although in some cases that’s handy.
The crux is to introduce a fancier mechanism only if you can prove it’s required. Otherwise you’re overengineering the solution. For instance, using a mediator pattern or event-based communication between modules raises complexity and makes it harder to reason about the code. And despite the popular claim, these techniques don’t remove coupling - they just make it less explicit.
The same goes for asynchronous communication between modules. Especially when considering modules that are run in the same process, introducing async communication might cause us problems that are intrinsic to Distributed Systems.
In short, such fancy mechanisms may be beneficial in some cases, but they shouldn’t be a default choice.
Isn’t the Infrastructure-Module the same as the infrastructure layer?
The key difference is that there is just one “infrastructure layer” per application. It contains the entire infra code for all features. It is not a problem in small apps. But as the number of features (processes) grows, having all the infrastructure code tangled together gets harder to maintain.
Meanwhile, with MIM there can be many Infrastructure-Modules in the application, each dedicated to a particular Business-Module. That segregation makes it easier to track dependencies, increases cohesion, and also makes removal easier.
In the best scenario, removing a feature - e.g. the Dispatching Firmware responsibility from our example - could be as easy as removing the modules responsible for it: “FirmwareDispatcher” and “FirmwareDispatcher.Infra” (and their wiring in the Entrypoint module).
Does MIM support Evolutionary Architecture?
Yes, a great deal. One of the characteristics of a Module is replaceability. With the right cohesion and explicit API and dependencies, it’s quite easy to - for instance - extract a module and move it to a separate application. It’s far easier to replace a module with a new implementation when you know that the whole process it’s responsible for is concentrated in one place.
8. Appendix - Introduction to Modular Design
The topic of Modular Design doesn’t have many modern resources; most of what exists is scattered and dated. For the sake of MIM I wanted to get everyone on the same page about Modular Design, so I’ve prepared this chapter as a brief introduction.
What is a module
The simplest definition says that a module is a logical group of code (e.g. a namespace, a package, an isolated group of classes or even a single class). It shouldn’t be confused with a component, which is a physical group of code (e.g. a library, an executable or a service). Although not everybody agrees with this definition (see Vlad Khononov’s Balancing Coupling book), it’s still practical enough to be useful.
One remark though, the definition allows us to call even a single class “a module”, but in practice it’s almost never used that way. The most common understanding of a module is something in between a class and an application.
The purpose of grouping code into modules is to lower the overall complexity of the solution. If done properly, it’s easier to reason about an application composed of modules without being overwhelmed by details. One of the means to this goal is assigning responsibilities: each module gets one or more responsibilities from the set of responsibilities of the whole application and hides how those responsibilities are fulfilled.
Let’s illustrate the concept of modules with an example. Assume there’s a requirement for a rich-client application to perform an auto-update process. During the design process, this requirement becomes one of the application’s responsibilities. If we assign it to an Auto-Updater Module, fulfillment of this responsibility lies solely with that module. Non-modular approaches often scatter responsibility among so-called “modules” like “model”, “database”, “infrastructure”, “domain”. In the modular approach, by contrast, if it’s decided that auto-update is no longer needed, we should be able to get rid of this responsibility by deleting just one module (and its wiring to other modules), without shotgun surgery throughout the layers.
A Module in the Modular Design
Not all modules are created equal. There are desired characteristics of modules that, when combined, give us so-called Modular Design. Likewise, when the characteristics are not met, we are likely to end up with the “Spaghetti Code” or “Big Ball of Mud” antipatterns, no matter whether we group code into modules or not.
A word of caution: this chapter presents ideal modules, but in real-life designs, trade-offs will come into play and sometimes you will have to loosen some characteristics (that’s normal).
Responsible for a process
A module should be responsible for one or more business processes (or subprocesses), features (or sets of features), or business capabilities. The goal is to have the business logic, for whatever it’s responsible for, concentrated in one place. That makes it easier to comprehend and maintain, but also helps with removability. In an ideal situation, you should be able to remove a feature by removing just one module from the application.
This characteristic also implies that modules have responsibilities and carry them out. Other parts of the system should be able to trust that the responsibilities will be fulfilled well, and no other module may be assigned the same responsibility.
As you can see, modules in Modular Design are not layers. Names like “Domain”, “Application” are not desirable, because they clearly favor a technical separation. In such designs the business logic related to one process is often scattered and at the same time unrelated logic is squeezed into one “module”. Both cases make it harder to reason about the system.
It’s a similar situation with modules that resemble entities. “Order Module”, “User Module” are common anti-examples. In most cases, they’re just CRUD wrappers for DB tables, and real processes are dispersed over the codebase.
Explicit, public API
API in terms of Software Design has nothing to do with HTTP or any interprocess communication matters. API is rather a set of code elements that are exposed to the user (i.e. developer) of your module. It defines inputs as well as outputs. (There are also parts of APIs that are not elements of the code, but it’s not important here.)
When preparing a module, special care should be taken to explicitly define the public API. That doesn’t mean you have to use the “interface” keyword. It’s enough to ensure that developers who might want to use your module know which elements are meant to be used and which are not (i.e. what is an input and what is an expected output).
In most cases, it’s as simple as making sure that types, interfaces, classes, methods and functions that should not be used outside of the module have internal or private access scope. The compiler (or interpreter) will make sure that users won’t break encapsulation. Alternatively, for small and medium modules, you can prepare a facade object that will be used as an entrypoint to the module.
You also want to ensure that the API is clean, small, and easy to use. A bloated API increases the cognitive load on its users (developers). A good check (and good practice) is to unit test all the public elements of the module. If you find that something is hard to test, the API is not good enough (or you’re trying to unit test something that shouldn’t be exposed).
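A minimal Java sketch of an explicit public API via a facade (all names and the tax rate are invented): only the facade is meant to be visible outside the module, while the helper classes stay package-private.

```java
// Internals: package-private, invisible outside the module's package.
class TaxCalculator {
    double taxFor(double net) { return net * 0.23; } // invented flat rate
}

class OrderRepositoryStub {
    void persist(double gross) { /* persistence elided */ }
}

// The facade IS the module's explicit API: one entrypoint, a small surface.
// In a real module this would be the only class with public visibility.
class OrderingFacade {
    private final TaxCalculator tax = new TaxCalculator();
    private final OrderRepositoryStub repo = new OrderRepositoryStub();

    double placeOrder(double netAmount) {
        double gross = netAmount + tax.taxFor(netAmount);
        repo.persist(gross);
        return gross;
    }
}
```

The compiler now enforces the boundary: callers can invoke placeOrder(), but cannot reach TaxCalculator directly, so the internals stay free to change.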
Encapsulates its data
It means that a module fully manages its data, i.e. any data that the module owns and operates on can be altered only by the module’s own code. If there’s a need to manipulate the data from outside, it’s done only via the module’s public API. This guarantees that there are no unauthorized changes to the data from other modules (i.e. distant, unrelated code that is not tested together with it). Encapsulation is also used to maintain invariants.
The benefits are:
- Centralized invariants of business logic. If two modules are found to keep the same invariant, they should be merged. It’s difficult to keep invariants in sync between modules.
- No data corruption. Let’s say you’ve introduced a restriction on the voltage value of some parameter. Even though you’ve corrected all the code in your module, if other modules can make direct changes to your data, they can easily bypass the restriction.
One of the most profound implications is that if a module has a database, it should not be shared with other modules. Of course, it doesn’t mean a separate database instance, a private schema will do. But keeping Foreign Keys between modules should be avoided. This implication also applies to file schemas, network protocols, etc.
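A tiny Java sketch of data encapsulation, reusing the voltage-restriction scenario above (the 0-48 V safety range is an invented example): the field is private, so the invariant can only be enforced, never bypassed.

```java
// The module fully owns its parameter data; other modules cannot mutate it directly.
class VoltageSettings {
    private double requestedVolts; // private: no direct access from outside the module

    // The only way in: the public API enforces the invariant (invented 0-48 V safe range).
    void setRequestedVolts(double volts) {
        if (volts < 0.0 || volts > 48.0)
            throw new IllegalArgumentException("voltage outside safe range: " + volts);
        this.requestedVolts = volts;
    }

    double requestedVolts() { return requestedVolts; }
}
```

Because every write goes through setRequestedVolts(), the restriction holds no matter which module initiates the change.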
Self-contained
Everything that is required for a module to carry out its responsibilities should be embedded inside the module. In other words, it should be as discrete and independent as possible, like a stand-alone unit.
In an ideal situation, such a module has all the parts of code it needs to operate (logic, infrastructure, database access layer, UI). Especially, there should be no shared business logic. Also a module should be able to operate when other modules enter fault mode. In real designs, though, it often turns out that some parts have to be extracted - most often it’s the UI or some infrastructure code or some shared libraries.
It’s worth highlighting that “self-contained” doesn’t mean “isolated”. It’s desirable for a module to collaborate with other modules, but the communication should represent business processes interacting with each other (or a process with subprocesses).
Minimal communication
Modules shouldn’t be chatty, and the list of collaborators should be kept short. Otherwise we could suspect the module of feature envy on steroids, which more or less implies that the other characteristics and patterns are not met.
Note: “minimal communication” is not related to the bandwidth of the communication, but rather its quality.
Replaceable
As was already mentioned several times, a module should be easily replaceable. Even if several other modules depend on it, having a clear, public API enables us to replace the module with a new implementation. There are also some other considerations, e.g. related to the bootstrapping of the module, but the clear API is by far the most important one.
An example use case: let’s say we have a module that is responsible for synchronizing data with an external system. We could implement the first version with support only for direct, synchronous HTTP calls. If the module’s interface is well-designed, we would be able to substitute it later for a new version that utilizes an outbox pattern (where the synchronization is paused when there’s no internet connection).
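A rough Java sketch of that use case (names invented; the HTTP call is stubbed out to keep the sketch self-contained): both versions satisfy the same public interface, so callers don’t change when the implementation is swapped.

```java
import java.util.ArrayList;
import java.util.List;

// The module's public API: callers depend only on this interface.
interface DataSynchronizer {
    void sync(String payload);
}

// Version 1: direct, synchronous delivery (stands in for an HTTP POST).
class DirectSynchronizer implements DataSynchronizer {
    final List<String> delivered = new ArrayList<>();
    public void sync(String payload) { delivered.add(payload); }
}

// Version 2: an outbox - payloads queue up while offline and flush once connected.
class OutboxSynchronizer implements DataSynchronizer {
    final List<String> outbox = new ArrayList<>();
    final List<String> delivered = new ArrayList<>();
    boolean online = false;

    public void sync(String payload) {
        outbox.add(payload);       // synchronization pauses when there's no connection
        if (online) flush();
    }

    void flush() { delivered.addAll(outbox); outbox.clear(); }
}
```

Swapping DirectSynchronizer for OutboxSynchronizer is invisible to the rest of the system, which is exactly the replaceability the text describes.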
Design Patterns
There are some well established patterns that can be used to design applications in a modular way. These concepts can be used to enforce the characteristics described above, so don’t be surprised they’re highly related.
Remember that patterns are not rules or principles, but I think the ones listed below can be assumed to be safe defaults for Modular Design.
Information Hiding
That’s the cornerstone of Modular Design and that’s why it was already mentioned several times in this article. It’s about “hiding in a box” how something is implemented without requiring other developers, who just use your module, to know any of the inner workings. It hides inner complexity. That implies that the public API of the module is on the higher level of abstraction (agnostic of the details). That’s desirable because it lowers the cognitive load on the users (they don’t have to be experts in whatever the module is doing as long as the public API is easy). Another benefit is that it allows you to change the implementation without breaking other modules.
Information Hiding regards the module’s data as well as functionality.
For instance, let’s say we’re programming a module that controls the electrical gate of a canal lock. If the public API required the voltage value needed for the engine to lift the gate, that would violate Information Hiding a great deal. First, the user of the module would need to know the details of the engine. Second, changing the gate to a pneumatic one would completely break all the code that uses the module. (The software-design equivalent of this violation would be catching a SqlException in a Business-Module.) An implementation that leverages Information Hiding would instead accept an opening ratio.
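A sketch of the two API styles in Java (the gate, the engine formula, and all numbers are invented): the module accepts an opening ratio and derives the engine voltage internally.

```java
// Leaky API (violates Information Hiding): callers must know the engine's electrics.
//     void liftGate(double engineVolts);

// Hiding the details: callers state intent; the module derives the engine specifics.
class CanalLockGate {
    private double position; // 0.0 = closed, 1.0 = fully open

    void open(double ratio) {                 // the API speaks the domain language
        double clamped = Math.min(1.0, Math.max(0.0, ratio));
        double volts = 12.0 + clamped * 36.0; // engine detail, hidden inside (made-up formula)
        drive(volts);
        position = clamped;
    }

    private void drive(double volts) { /* talk to the engine; elided */ }

    double position() { return position; }
}
```

Replacing the electrical engine with a pneumatic one now only changes the private drive() path; every caller of open() keeps working.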
High Cohesion
That’s another well-established pattern in Modular Design. It’s about putting together code that is highly related, and by implication separating unrelated code into other modules. High Cohesion discourages spreading knowledge (e.g. of particular logic) across many modules, because that increases complexity and cognitive load. It’s easier to reason about code that lives close together than about code scattered throughout the system (coupling inside a module is not such a problem).
When a module has many responsibilities, they should be coherent.
Let’s reuse the example of the electrical gate controller module. Assume there’s another module that for reporting purposes computes how much electrical power was used to open the gate. It takes the opening ratio, converts it to voltage and then uses some configured engine parameters to compute the power. In such a design, we would violate the High Cohesion pattern, because the knowledge about engine design and usage would be dispersed in two modules. If a developer had adjusted the voltage conversion logic, would he/she remember to adjust the second module as well? If not, a bug would be introduced.
Low Coupling
One of the goals of Modular Design is to minimize coupling between Modules (i.e. interactions between modules). Complexity increases with every introduced coupling. And uncontrolled coupling makes it harder to reason about the system. Yet we need modules to interact, because putting everything into one procedure is not an option. Interactions just need to be kept reasonably low.
Low Coupling is not an inversion of High Cohesion, but it’s often the case that if modules are very chatty, it means that High Cohesion is broken and the module isn’t truly independent.
We can illustrate a violation of this pattern with two modules, Order and User, where Order’s facade exposes many methods like GetProduct(), GetTax(), GetAvailability(), SetBasket(), SaveOrder(), and the User module invokes all of them (in the correct order!). The communication is very intense, and the User module acts as a coordinator for something that looks like a “CRUD” Order module. To reduce coupling, we could move the coordination logic into the Order module itself and expose just one PlaceOrder() method.
By the way, changing method invocations to events doesn’t remove the interactions, so coupling is still there. It’s just harder to follow and debug.
Deep Modules
This pattern puts weight on the ratio between the module’s API interface and the functionality it provides. In a nutshell, it’s best to have powerful functionality hidden behind a simple interface.
If the interface is complex, yet the features the module provides are simple, the overall complexity of the system is not decreased (and may even increase), so there’s no benefit in introducing such a module. Small modules tend to be shallow.
Password Hashers are good examples of deep modules. The underlying implementation is not trivial, but their interface is just input and output strings.
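For instance, a password hasher can be sketched in a few lines of Java using the JDK’s built-in PBKDF2 support: salting, key stretching, and encoding are all hidden behind a string-in/string-out interface (the salt size, iteration count, and "salt$hash" storage format are illustrative choices, not recommendations).

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// A deep module: powerful functionality behind a two-method, strings-only interface.
class PasswordHasher {
    String hash(String password) {
        byte[] salt = new byte[16];                 // random per-password salt
        new SecureRandom().nextBytes(salt);
        return Base64.getEncoder().encodeToString(salt) + "$" + derive(password, salt);
    }

    boolean verify(String password, String stored) {
        String[] parts = stored.split("\\$");       // "salt$hash" (Base64 never contains '$')
        byte[] salt = Base64.getDecoder().decode(parts[0]);
        return derive(password, salt).equals(parts[1]);
    }

    // Key stretching via PBKDF2 - the non-trivial part, fully hidden from callers.
    private String derive(String password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, 100_000, 256);
            byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(key);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Callers see only hash() and verify(); the salt handling, iteration count, and algorithm choice can all change without touching a single caller.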
Acyclic dependencies
The graph of modules should avoid circular dependencies. They can introduce compilation issues, and some of the resulting problems may not surface until runtime.
If it’s hard to break the circular dependency, it might be a sign that two or more modules should be merged.
Balancing Coupling
That’s not really a pattern, more like a group of techniques from Vlad Khononov’s Balancing Coupling book. They can be summarized as follows: when making a design it’s important to balance coupling, by taking into account strength, distance and volatility of the relation.
For example: if there’s an old, battle-tested module that’s unlikely to change, a direct coupling with it shouldn’t be as problematic as coupling with a new, beta-version module.
Design Heuristics
“Because design is nondeterministic, skillful application of an effective set of heuristics is the core activity in good software design.”, Code Complete, Steve McConnell
Besides patterns, there is a large set of heuristics one can use in Software Design. Heuristics are just rules of thumb, but they are still useful.
The list below is not complete by any means.
Avoid fine grained modules
Modules should be responsible for a process or a subprocess, and it is advised to break down large responsibilities. But taking this advice too far will lead to modules that are not responsible for a subprocess, but a mere action.
As Tesler’s Law says, a system exhibits a certain amount of complexity that cannot be reduced - it can only be shifted around. With Modular Design we move global complexity to modules, but with too fine-grained modules, the effect is negated, because small modules hide too little.
For perspective, a not-so-large microservice would be typically composed of 5 to 10 modules. Having 50+ modules in such a microservice would probably be excessive (it sounds more like a small monolith). Unfortunately, there are no exact rules.
Avoid dependencies on lower-level modules
Higher-level modules are the ones that operate on a higher conceptual level (i.e. algorithms, domain logic, business rules, etc). In other words, they’re more abstract and they govern “what” the application is supposed to do. On the other hand, modules that contain more implementation-specific code (e.g. hardware, I/O, network, etc) are said to be lower-level.
In Modular Design higher-level modules shouldn’t depend on lower-level modules. That’s because abstract modules shouldn’t be limited by concrete solutions. Another reason is testability - modules that have direct dependencies to lower-level modules are harder to test.
There are some techniques that can be used to reverse a relationship if the flow of logic requires it. For instance, the higher-level module could expose a required interface which will be implemented by the lower-level module. That’s the Dependency Inversion Principle on the module level. One of the other possibilities is to introduce a mediator pattern.
This heuristic can be illustrated by the following design. Let’s say we have an application that generates a report, compresses it with zip, and saves it to the disk. With this heuristic in mind, we wouldn’t want the report generation to depend on the zip algorithm. A change from zip to LZMA algorithm shouldn’t affect the report generation. Likewise, the compression module shouldn’t depend on the file storage module. If we follow that heuristic, we will be able to change file storage to network storage without affecting compression or the report module. (By the way, this design might not be ideal, for instance, these tasks are too small to be modules, but I hope it conveys the meaning of this heuristic).
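The report example can be sketched in Java with the DIP applied between modules (names invented; java.util.zip’s Deflater stands in for “zip”): the higher-level report module declares the interfaces it requires, and the lower-level modules implement them.

```java
import java.util.Arrays;
import java.util.zip.Deflater;

// High-level module: defines the interfaces it REQUIRES; knows nothing about zip or disks.
interface Compressor { byte[] compress(byte[] data); }
interface ReportSink { void store(String name, byte[] data); }

class ReportGenerator {
    private final Compressor compressor;
    private final ReportSink sink;

    ReportGenerator(Compressor compressor, ReportSink sink) {
        this.compressor = compressor;
        this.sink = sink;
    }

    void generate(String name) {
        byte[] report = ("report: " + name).getBytes(); // report content elided
        sink.store(name, compressor.compress(report));
    }
}

// Lower-level module implements the required interface (DIP: the dependency points upward).
class DeflateCompressor implements Compressor {
    public byte[] compress(byte[] data) {
        Deflater deflater = new Deflater();
        deflater.setInput(data);
        deflater.finish();
        byte[] buffer = new byte[data.length + 64]; // enough headroom for tiny inputs
        int n = deflater.deflate(buffer);
        return Arrays.copyOf(buffer, n);
    }
}
```

Swapping Deflate for LZMA, or a disk sink for a network sink, now only means passing a different implementation to the ReportGenerator constructor.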
Make dependencies explicit
The list of dependencies on other modules should be clear and obvious. Hidden dependencies (e.g. other modules loaded at runtime via service discovery, or a hardcoded URL to an HTTP endpoint) can become a real hassle during maintenance.
Anyone new to the module should be able to check the list of dependencies easily. Pointing them to the documentation is naive - docs are rarely up to date. It’s better if they can check the module’s bootstrap code instead (in some languages, also the required public interfaces).
A candidate for a microservice
There are many misconceptions around microservices, but one of the original characteristics of microservices is being “Organized around Business Capabilities” (see James Lewis and Martin Fowler’s “Microservices” article). It’s no coincidence that it’s also a characteristic of a proper module (but on a smaller scale).
The quality of the design will increase if the module is designed as if it were meant to be extracted into a separate microservice in the future. It doesn’t matter when, or whether, it’s ever going to be pulled out. It’s also not about preparing the technical infrastructure for the extraction itself (e.g. HTTP or event queues between modules) - that would be overkill if applied to all modules. This heuristic is about the quality of the choices of boundaries and responsibilities.
“Module per Developer”
I haven’t found this heuristic documented anywhere, but I’ve decided to put it here, because I’ve used it and I can bet I’m not the only one. This heuristic is a scaled down version of the “Service per team” pattern from Microservices.
Modules are a fine way to split work among Software Developers. If modules are designed appropriately, people should be able to work in parallel without stepping on each other’s toes.
It’s also worthwhile for a module to have its owner, i.e. a programmer that will be the main developer and maintainer.
Submodules
Software Design is fractal in nature. Systems are composed of subsystems, and services are composed of modules. Modules not only form a graph - each module can also have its own hierarchy of submodules.
We can use submodules to distribute the responsibilities of the parent module among them and thus lower its complexity. Most of the patterns of Modular Design apply here as well, but on the lower scale.
Side note: submodules are not well supported by most programming languages, but often can be emulated (e.g. by namespaces).
Modular Design and MIM
One might wonder whether the Module Infrastructure-Module Application Architecture (MIM), which is the main topic of this paper, really preserves all the patterns and characteristics described in this chapter. In fact, extracting the infrastructure code to a companion “Infrastructure-Module” disregards at least the High Cohesion pattern for the sake of testability.
However, if we treat a Module and its Infrastructure-Module as a unit (or a supermodule), then all characteristics and patterns are preserved once again. This approach makes sense, because the Infrastructure-Module is more like a sidecar than a standalone module in its own right.
Modular Design
- [Text] The Dependency Inversion Principle, Robert Martin http://www.objectmentor.com/resources/articles/dip.pdf
  - The original text on DIP from 1994.
  - The DIP is the only “principle” that didn’t exist before SOLID.
- [Book] A Philosophy of Software Design, John Ousterhout
- [Book] Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development, Craig Larman
- [Book] Clean Architecture, Robert Martin
  - This is one of the few modern books that address the topic of Modularity. While it also presents some patterns I found less practical (e.g. SOLID), I highly recommend reading it.
- [Book] Balancing Coupling in Software Design, Vlad Khononov
- [Book] Code Complete, Steve McConnell
- [Video] Modular Monoliths • Simon Brown • GOTO 2018 https://www.youtube.com/watch?v=5OjqD-ow8GE
  - A must-watch. Simon Brown goes through several application architectures (layers/hex/vertical slice) and compares them to his “package by component” approach (read: modular design). Brown nails down hard problems of many codebases.
- [Video] What I wish I knew when I started designing systems years ago, Jakub Nabrdalik https://www.youtube.com/watch?v=1HJJhGHC2A4
  - Nabrdalik also has other talks on modularity, though they are always mixed with other topics.
- [Video] A Contrarian View of Software Architecture - Jeremy Miller - NDC Oslo 2023 https://www.youtube.com/watch?v=ttYQzHPe5s4
- [Text] Package by Component and Architecturally-aligned Testing, Simon Brown https://dzone.com/articles/package-component-and
- [Text] Patterns of Modular Architecture, Kirk Knoernschild https://dzone.com/refcardz/patterns-modular-architecture
  - YouTube talks are also available.
- [Book] Object-Oriented Software Construction, Bertrand Meyer
Other application architectures
- [Text] The Clean Architecture, Robert C. Martin (Uncle Bob) https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html
  - The Clean Architecture was an attempt to unify Hex, Onion and other application architectures.
  - The topic is elaborated further in the Clean Architecture book.
- [Text] Hexagonal Architecture, the original 2005 article, Alistair Cockburn https://alistair.cockburn.us/hexagonal-architecture
  - This was the first application architecture that referenced Robert Martin’s DIP to remove the dependencies from the “Core” to the I/O.
- [Text] DDD, Hexagonal, Onion, Clean, CQRS, … How I put it all together, Herberto Graça https://herbertograca.com/2017/11/16/explicit-architecture-01-ddd-hexagonal-onion-clean-cqrs-how-i-put-it-all-together/
  - A popular community interpretation of Hex Architecture.
- [Text] The Onion Architecture, Jeffrey Palermo https://jeffreypalermo.com/2008/07/the-onion-architecture-part-1/
Tests
Other
- 2025.11.17 - first release.
Andrzej Nowik, v1/2025.11.17