
Case in point – architecture of OpenStack
The preceding decoupling concepts become evident when we look at the architecture of the OpenStack system. It is a distributed application architecture with excellent scalability and robustness.
OpenStack is a combination of several individual projects that come together to perform a certain function. Some of these projects are:
- Nova: Compute as a Service (and some basic network features)
- Swift: Object Storage as a Service
- Cinder: Block Storage as a Service
- Glance: Image repository
- Neutron: Network as a Service (including security groups, Load Balancer as a Service, VPN as a Service, and so on)
- Keystone: Authentication and Authorization
- Horizon: Dashboard
There are several others, but these are enough to make our point. Say we request a virtual machine through Horizon after authenticating through Keystone; Nova will then boot the instance after requesting the image from Glance, which in turn serves the image stored in Swift. After booting, Nova will request networking from Neutron, and may attach a persistent volume served by Cinder.
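To make the flow above concrete, here is a toy simulation in plain Python. Each function stands in for one OpenStack project; all function names, arguments, and return values here are invented for illustration and are not the real OpenStack APIs.

```python
# Toy simulation of the VM provisioning flow. Every name here is
# illustrative only -- the real projects expose REST APIs, not these calls.

def keystone_auth(user, password):
    # Authentication and Authorization: issue a token for later requests.
    return {"user": user, "token": "tok-123"}

def swift_fetch(object_name):
    # Object Storage: return the stored bits.
    return f"<bits of {object_name}>"

def glance_get_image(image_name):
    # Image repository: serve an image whose bits actually live in Swift.
    return swift_fetch(f"images/{image_name}")

def neutron_wire(instance):
    # Network as a Service: attach the instance to a network.
    instance["network"] = "net-1"
    return instance

def cinder_attach(instance, size_gb):
    # Block Storage as a Service: attach a persistent volume.
    instance["volume"] = f"{size_gb}GB persistent volume"
    return instance

def nova_boot(token, image_name):
    # Compute: fetch the image from Glance, boot, then ask Neutron for
    # networking and (optionally) Cinder for a volume.
    image = glance_get_image(image_name)
    instance = {"id": "vm-1", "image": image, "token": token["token"]}
    instance = neutron_wire(instance)
    instance = cinder_attach(instance, 10)
    return instance

token = keystone_auth("alice", "secret")   # initiated via Horizon in practice
vm = nova_boot(token, "ubuntu-22.04")
print(vm["network"])   # net-1
print(vm["volume"])    # 10GB persistent volume
```

The point of the sketch is the call chain, not the details: each project does one job and hands off to the next.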
Confused? Don't worry; all we need to know right now is that all of this works together like a well-oiled machine, even though each of these projects is developed by a separate, isolated team, and it is the architecture that makes it all work.
Each of the projects has services within it. For example, there is a (project name)-api service, a scheduler service, and so on. The naming at this level is not consistent, as each project may need a different set of services.
Intra-project communication happens over a message queue, while inter-project communication happens through the API (which means you can put a load balancer in front of it). The following diagram shows a small part of the overall architecture, but this design principle is used throughout all the projects:

As you can see, we can now simply increase the number of schedulers, conductors, and so on, and no other changes need to be made, as each worker's configuration already points it to its queue.
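This scaling property can be sketched with a shared work queue: every worker consumes from the same queue, so adding capacity is just starting more workers. This is only a minimal stand-in; real OpenStack services use AMQP (for example, RabbitMQ) via oslo.messaging rather than Python's in-process `queue.Queue`.

```python
# Workers scale out by pointing at the same queue -- nothing else changes.
import queue
import threading

task_queue = queue.Queue()
results = []
lock = threading.Lock()

def scheduler_worker(worker_id):
    # Each worker pulls tasks until it sees the shutdown sentinel.
    while True:
        task = task_queue.get()
        if task is None:
            task_queue.task_done()
            break
        with lock:
            results.append((worker_id, task))
        task_queue.task_done()

# "Adding a scheduler" == starting another thread on the same queue.
workers = [threading.Thread(target=scheduler_worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

for i in range(10):
    task_queue.put(f"schedule-vm-{i}")
for _ in workers:
    task_queue.put(None)   # one sentinel per worker

task_queue.join()
for w in workers:
    w.join()

print(len(results))  # 10 -- every task handled exactly once
```

Note that the producers never needed to know how many workers exist; that is exactly why a new scheduler requires no reconfiguration elsewhere.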
If we add new API nodes, we will need to tell the load balancer about them so that traffic is distributed across all the nodes.
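The contrast with the queue case can be shown with a minimal round-robin balancer sketch (the class and node names are invented for illustration): a new API node receives traffic only after it is registered with the balancer.

```python
# Minimal round-robin load balancer: it can only route to nodes it knows.

class RoundRobinBalancer:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._next = 0

    def register(self, node):
        # A new API node takes effect only once the balancer learns of it.
        self.nodes.append(node)

    def route(self, request):
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return f"{node} handled {request}"

lb = RoundRobinBalancer(["api-1", "api-2"])
print(lb.route("GET /servers"))   # api-1 handled GET /servers
lb.register("api-3")              # the new node now gets a share of traffic
print(lb.route("GET /servers"))   # api-2 handled GET /servers
print(lb.route("GET /servers"))   # api-3 handled GET /servers
```

Unlike queue workers, which pull work for themselves, API nodes have work pushed to them, so the pushing component must be updated.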
Also, since all of these projects form the application (equivalent) tier in OpenStack's n-tier architecture, they can be placed on different machines and everything will work just fine.
Hypothetically, part of the application tier could even reside on a completely different cloud, requiring only network connectivity.