Abstracting External Services with I/O Interfaces - Part 1
Updated: Feb 2, 2022
While this is a pattern I've used at many clients, I've never seen it described elsewhere. Because of this, all of the terminology used in this post is mine. I would love to find more official or accepted terminology for these ideas, or a more established version of this pattern, if anyone has come across them!
Whenever a non-trivial system is architected, there will be at least one external system you rely on. This might be a file system, a database, a notification service, or any number of other services. Oftentimes, the system doesn't need to deal with the inner workings of the service, nor does it depend on that particular service. Instead, it relies only on the general contours of the service.
For example, a microservices architecture generally will not care whether you are using Kinesis, Kafka, or some more esoteric system as your messaging service. At the code level, it may not even care whether you are using a queuing system like RabbitMQ or an event notification service like SNS, even though the two have major differences. We could say similar things about file stores, key-value databases, RDBMSes, notification systems (such as email, Slack messages, etc.), and many others.
Abstracting and Swapping
Using this idea, we can see how we could create a generalized interface for each of these, which I call an I/O Interface. Each interface will have one or more implementations for a given system that can be swapped for each other easily, based on anything from what environment we are running in, to what a configuration file tells us, to what time of day it is. This allows nearly the entire system to be agnostic to the platform we are running on while still interacting with that platform. Each concrete version of an interface is called an Implementation or a Service.
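As a minimal sketch of the idea (the class and method names here are hypothetical, not from any established library), an I/O Interface in Python might be an abstract base class with interchangeable Implementations:

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """An I/O Interface: the general contours of a notification service."""

    @abstractmethod
    def send(self, message: str) -> None:
        ...

class ConsoleNotifier(Notifier):
    """One Implementation: prints to stdout, handy for local runs."""

    def send(self, message: str) -> None:
        print(f"[notify] {message}")

class RecordingNotifier(Notifier):
    """Another Implementation: records messages in memory, handy for tests."""

    def __init__(self) -> None:
        self.sent = []

    def send(self, message: str) -> None:
        self.sent.append(message)

def alert(notifier: Notifier, message: str) -> None:
    # Business code depends only on the interface, never on a concrete
    # service, so the Implementation can be swapped by configuration.
    notifier.send(message)
```

A production Implementation wrapping, say, the Slack API would subclass the same `Notifier` without touching any calling code.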
Finally, these Implementations can be grouped into sets that work together, making it easier to pass them throughout the system. You can almost think of such a set as a distribution (e.g., a Hadoop distribution or a Linux distribution): a collection of Implementations known to work well together, assembled for a specific use case. These groupings are called Environments. So, you might have a production environment, a test environment, etc. You can then pass this Environment instance around the system, and a function that needs to interact with the outside world can pull what it needs from the Environment without changing any interfaces.
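One way to sketch an Environment is a small dataclass bundling one Implementation per interface. Everything below (the names, the two interfaces, the structure) is a hypothetical illustration, not a fixed API:

```python
from dataclasses import dataclass
from typing import Protocol

class FileStore(Protocol):
    def upload(self, path: str, data: bytes) -> None: ...

class Notifier(Protocol):
    def send(self, message: str) -> None: ...

@dataclass(frozen=True)
class Environment:
    """A 'distribution' of Implementations known to work together."""
    file_store: FileStore
    notifier: Notifier

class InMemoryFileStore:
    """A test-friendly Implementation of the FileStore interface."""
    def __init__(self) -> None:
        self.files = {}
    def upload(self, path: str, data: bytes) -> None:
        self.files[path] = data

class ConsoleNotifier:
    """A trivial Implementation of the Notifier interface."""
    def send(self, message: str) -> None:
        print(message)

def publish_report(env: Environment, path: str, data: bytes) -> None:
    # The function's signature never changes when Implementations are
    # swapped; it just pulls what it needs from the Environment.
    env.file_store.upload(path, data)
    env.notifier.send(f"Report available at {path}")

test_env = Environment(file_store=InMemoryFileStore(), notifier=ConsoleNotifier())
```

A production Environment would be built the same way, just constructed with the cloud-backed Implementations instead.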
Just because we can do this doesn't mean we should. However, there are a few good reasons to abstract away the exact service, especially in data- and cloud-centric systems.
The first is to allow easier testing. If I build a system that reads messages from Kinesis, enriches the messages with data from DynamoDB, and then outputs a message to Slack with the results, writing unit tests may be easy enough, but writing higher-level tests that do not have to use AWS or Slack is tricky without some abstraction. Creating I/O Interfaces for the messaging service, the operational datastore, and the notification service allows the majority of the system to be tested using one set of implementations that mocks everything out. We then just need to implement the real I/O Interfaces used in the production system and keep a smaller set of tests that exercises that functionality locally. This restricts the scope of the sometimes-finicky process of mocking out complex libraries like boto3 or requests. It doesn't absolve you of the need for end-to-end tests on AWS itself, but it does make more of your tests simpler, which is always a good goal for maintainability.
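To make the testing benefit concrete, here is a sketch of that Kinesis/DynamoDB/Slack pipeline with everything mocked by in-memory Implementations. The interface and field names are hypothetical; production Implementations would wrap boto3 and the Slack API behind the same interfaces:

```python
from typing import Protocol

class MessageSource(Protocol):
    def read(self): ...          # production: Kinesis via boto3

class KeyValueStore(Protocol):
    def get(self, key): ...      # production: DynamoDB via boto3

class Notifier(Protocol):
    def send(self, text): ...    # production: Slack API

class ListSource:
    """In-memory MessageSource for tests."""
    def __init__(self, messages):
        self.messages = messages
    def read(self):
        return self.messages

class DictStore:
    """In-memory KeyValueStore for tests."""
    def __init__(self, data):
        self.data = data
    def get(self, key):
        return self.data[key]

class CapturingNotifier:
    """In-memory Notifier for tests."""
    def __init__(self):
        self.sent = []
    def send(self, text):
        self.sent.append(text)

def enrich_and_notify(source, store, notifier):
    # The core logic never imports boto3 or requests, so higher-level
    # tests can run it entirely in memory.
    for msg in source.read():
        extra = store.get(msg["id"])
        notifier.send(f"{msg['id']}: {extra['status']}")
```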
Another reason this can be appealing is when you're building with services that may be changed by business decisions later, whether wholesale or bit by bit. The obvious example is the business deciding it wants a different platform, such as Azure instead of AWS. In the example above, that just requires writing new Implementations of the I/O Interfaces for Azure Service Bus and Cosmos DB, testing those, and maybe tweaking the function definitions for the entry points. We can even do this while still using AWS in production, allowing us to build side-by-side for a while. Without this abstraction, assumptions about the platform can spread throughout the code base, and building side-by-side as other dependencies are completed around you is nearly impossible.
While switching from one cloud to another may not happen all that often, smaller changes happen with some regularity. As an example, at a client of mine, we had a system which uploaded reports to S3 for business users. After some time, it was determined that using object URLs was not secure enough, and the workflow for business users to read reports from S3 without them was too unwieldy. Because of that, it was requested that the reports instead be uploaded to SharePoint, where things like SSO could be used directly. Without an approach like I/O Interfaces, we would've needed to scour the system for any assumptions we were making that we were sending reports to S3. This may have included code where we were building S3 paths or calling boto3 functions directly for small one-off operations like fixing permissions. Instead, we could just write a new Implementation of the same I/O Interface (one specifically for file storage) targeting SharePoint and use it in this instance. Once tests were written for that Implementation, everything was ready to go in much less time. The S3 Implementation could even still be used to upload more technical files that we were fine requiring the user to log into AWS for.
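A sketch of how that swap might look, assuming a hypothetical `ReportStore` interface: only the Implementation handed in changes, not the reporting code. The bucket name is illustrative, and boto3 is imported lazily so the in-memory Implementation works without it:

```python
from typing import Protocol

class ReportStore(Protocol):
    """The file-storage I/O Interface the reporting code depends on."""
    def upload_report(self, name: str, content: bytes) -> str: ...

class S3ReportStore:
    """Production Implementation wrapping boto3."""
    def __init__(self, bucket: str) -> None:
        self.bucket = bucket

    def upload_report(self, name: str, content: bytes) -> str:
        import boto3  # imported lazily so tests need not install it
        boto3.client("s3").put_object(Bucket=self.bucket, Key=name, Body=content)
        return f"s3://{self.bucket}/{name}"

class InMemoryReportStore:
    """Test Implementation; a SharePoint Implementation would slot in the
    same way, wrapping whatever client library the team chooses."""
    def __init__(self) -> None:
        self.reports = {}

    def upload_report(self, name: str, content: bytes) -> str:
        self.reports[name] = content
        return f"memory://{name}"

def publish(store: ReportStore, name: str, content: bytes) -> str:
    # This function stays untouched when the destination changes.
    return store.upload_report(name, content)
```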
A final reason I've seen is if different executions using the same code base have different packages available. For example, if we are building a microservices architecture, each Lambda function might use the same code base, with a different entry point. Each function may need different dependencies, but since we are using a shared code base, it makes sense to share those dependencies when possible.
One Lambda function, though, may have a dependency that the others don't, one that causes problems if included in every function. We can use Lambda layers to ensure that only that one function has the dependency, but we need to make sure the other functions aren't pulling it in accidentally through an import. This is where having separate Environments comes into play, since you can have two production Environments, one with the dependency and one without. Each function picks the Environment it needs; the other is never loaded, so no problematic dependencies are included.
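One way to keep the problem dependency out of the other functions is to defer its import into the Environment builder that needs it. This is a sketch, with the stdlib `json` module standing in for a hypothetical heavy third-party package:

```python
class ConsoleNotifier:
    """A minimal shared Implementation available to every Environment."""
    def send(self, message: str) -> None:
        print(message)

def build_light_environment() -> dict:
    # Environment for the Lambda functions that must avoid the heavy package.
    return {"notifier": ConsoleNotifier()}

def build_heavy_environment() -> dict:
    # Environment for the one function that needs the extra package. The
    # import is deferred to call time, so functions using the light
    # Environment never trigger it ('json' is only a stand-in here).
    import json
    return {"notifier": ConsoleNotifier(), "codec": json}
```

Each Lambda entry point calls only its own builder, so the dependency is loaded in exactly one function's runtime.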
That wraps up why something like I/O Interfaces might be useful and how we can use them. Next week, we'll dive into the technical details and discuss the architectural design patterns at play here. Stay tuned!