The API Lifecycle
This article is an excerpt from the book Enterprise API Management written by Luis Weir. This book explores the architectural decisions, implementation patterns, and management practices for successful enterprise APIs. This article covers essential processes and methods required throughout the entire API life cycle.
This is the main, or core, cycle: it triggers the chain of activities that, ideally, should never end (as long as the product is successful):
Figure 1: The core API life cycle
Note that the implementation of a full API life cycle should always be preceded by the creation of an API strategy that doesn't just bring business context, but also defines clear goals and objectives for why APIs are being delivered in the first place. This is important as it gives the process relevance and provides an intrinsic justification for each of its steps.
API ideation and planning
Ideation is a creative process whereby new and innovative ideas are generated and captured. It typically consists of sessions where brainstorming, sketching, and even quick prototyping (as is the case with hackathons) take place. During the creative process, good ideas are shortlisted and should, in principle, become candidates for implementation, at which point planning takes place.
The API life cycle is the main flow from which all activities are derived. It not only initiates the chain of events that ultimately results in the API being designed and delivered, but it also triggers related iterations around the API design cycle, service implementation, and even consuming applications.
Figure 2: The ideation process in action
In the context of APIs, a series of ideation workshops that bring together business and IT stakeholders (and, when applicable, end users) can be planned and executed with the objective of collectively identifying new APIs that have good potential to deliver customer and business value, and that can thus be packaged, marketed, and sold as products.
Part of the planning process should also involve creating introductory content that participants can easily understand and relate to; for example, describing APIs that may already exist in the functional domain and explaining, in simple terms, how they help the business and add value to its consumers.
The best API ideas can then be shortlisted based on their implementation feasibility (the availability of business and technical capabilities to deliver the API), their uniqueness (no similar API can easily be found in an initial search), and their potential business and customer benefits.
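To make the shortlisting step concrete, these criteria can be captured as a lightweight scoring model. The following is a minimal sketch in Python; the weights, rating scale, and candidate ideas are illustrative assumptions, not a prescribed method:

```python
# Minimal sketch of scoring API ideas against the three criteria above.
# Weights, ratings, and sample ideas are illustrative assumptions.

CRITERIA_WEIGHTS = {"feasibility": 0.4, "uniqueness": 0.3, "benefit": 0.3}

def score_idea(ratings: dict) -> float:
    """Weighted score from 0-5 ratings captured in an ideation workshop."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

ideas = {
    "Order tracking API": {"feasibility": 4, "uniqueness": 3, "benefit": 5},
    "Store locator API": {"feasibility": 5, "uniqueness": 2, "benefit": 3},
}

# Rank the candidates; the highest scorers become implementation candidates.
for name, ratings in sorted(ideas.items(), key=lambda kv: -score_idea(kv[1])):
    print(f"{name}: {score_idea(ratings):.1f}")
```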
Design
This is the stage of the life cycle where requirements are translated into something tangible that can be built and delivered. It requires understanding all functional and non-functional requirements. A domain model and a conceptual design are produced based on a series of well-thought-out design decisions.
The concept design should, among other things, answer questions such as: What is the business domain of the API and its bounded context? What business capability does the API offer? Which API architectural style should be adopted (for example, GraphQL for a public interface or gRPC for inter-service communication)? What does the end-to-end solution look like, including the patterns (for example, API aggregator and CQRS) and technical capabilities required, naming conventions, and documentation?
Figure 3: The API design process
The design phase should ideally consist of the following activities:
Analysis
This is the process of understanding all the business requirements in the backlog and, if necessary, organizing question and answer (Q&A) sessions to clarify doubts and ensure all needs are well understood and nothing is left out. If needed, the backlog items can be further refined. The analysis activity is typically carried out by API architects, with input from the product owner and the business analysts who contributed to the creation of the backlog.
Domain and concept design
A domain design is a domain model describing the business domain and the problem being solved in a notation that both business and IT teams can relate to. The model should therefore act as a ubiquitous language, as it reflects a shared understanding of the domain.
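One way to keep the model close to the ubiquitous language is to express a fragment of it directly in code. The following is a minimal sketch for a hypothetical ordering domain; the entity names (Order, OrderLine) and the business rule are illustrative assumptions:

```python
# Minimal sketch of a domain model fragment for a hypothetical ordering domain.
# The entity names are the ubiquitous language shared by business and IT;
# they are illustrative, not taken from the book.
from dataclasses import dataclass, field

@dataclass
class OrderLine:
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:
    order_id: str
    customer_id: str
    lines: list[OrderLine] = field(default_factory=list)

    def total(self) -> float:
        # Business rule expressed in domain terms: order total = sum of line totals.
        return sum(line.quantity * line.unit_price for line in self.lines)
```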
Key design decisions (KDDs)
KDDs are design choices that have a notable impact on the way a product is realized or on the final product itself. They are decisions because multiple viable options exist and a choice therefore has to be made (if only one viable option exists, there is no design decision to make).
Discovery
During this activity, existing business capabilities that offer functionality similar to the items in the backlog are searched for. The objective is, ultimately, to avoid reinventing the wheel. If an API already exists that addresses requirements in the backlog, reuse should be considered instead of duplicating functionality. The discovery activity is also typically carried out by API architects.
API specification
This is a document that describes the technical contract of an interface, such as all the methods/operations supported by the interface and its technical constraints. An API specification is defined in accordance with the interface description language (IDL) for the chosen API architectural style, and depending on it, different tools and techniques can be adopted, some more dynamic than others.
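For REST APIs, a widely used IDL is the OpenAPI Specification. As a minimal illustration, the sketch below assembles a tiny, hypothetical OpenAPI 3.0 document in Python and serializes it to JSON; the paths, parameters, and responses are assumptions for a fictional Orders API:

```python
# Minimal sketch of an OpenAPI 3.0 specification for a hypothetical Orders API.
# The paths and schemas are illustrative assumptions.
import json

spec = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "summary": "Retrieve an order by its identifier",
                "parameters": [{
                    "name": "orderId", "in": "path",
                    "required": True, "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "The order"},
                    "404": {"description": "Order not found"},
                },
            }
        }
    },
}

print(json.dumps(spec, indent=2))  # feed this to tooling (docs, mocks, tests)
```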
API page
This is a web page that, in addition to incorporating the specification of an API, also includes a human-readable (meaning less technical) description of the API, covering what it offers (in terms of functionality) and giving plenty of examples of how to use it.
Mock and try
API mocking is a technique by which the methods/operations specified within an IDL are simulated by means of a mocking server. The idea behind this approach is to enable API consumers and developers to try the API through its mock before the actual implementation takes place.
Figure 4: API mocking
As the preceding diagram illustrates, an API mock should behave in accordance with the API specification itself (the IDL). As long as the mock makes use of representative (sample) data, API consumers can use it to try out the API early in the life cycle.
This is useful for many reasons, but especially because it can speed up the development process and improve the quality of the API by collecting feedback early in the life cycle. Doing so means that, in theory, the API should not undergo many changes later in the life cycle as a consequence of mismatched requirements.
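As a minimal illustration of the idea, the sketch below uses only Python's standard library to serve canned, representative data for a hypothetical GET /orders/{orderId} operation; in practice, dedicated tools (for example, Prism or WireMock) can generate such mocks directly from the IDL:

```python
# Minimal sketch of an API mock: canned responses that honor the specification.
# Real projects typically generate mocks from the IDL with dedicated tooling.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SAMPLE_ORDER = {"orderId": "42", "status": "SHIPPED", "total": 19.99}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/orders/"):
            body = json.dumps(SAMPLE_ORDER).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)  # unknown paths behave as specified
            self.end_headers()

if __name__ == "__main__":
    # Consumers can now GET http://localhost:8080/orders/42 long before
    # the real service exists.
    HTTPServer(("localhost", 8080), MockHandler).serve_forever()
```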
Create/configure
Once an API mock is available, or a service has been implemented and/or iterated on (for example, a new version or enhancement), an API can be created using API management capabilities so that different policies, such as authorization, rate limiting, monetization plans, and mediation, can be applied to it.
This step is typically carried out by developers and/or hands-on API architects. It involves using a policy editor (typically a centralized web application in newer tools) in order to:
- Create and edit an API and its metadata (for example, its description, stage in the life cycle, and so on).
- Define the version of the API.
- Attach any related API specification and/or documentation.
- Define the service endpoints. In some cases, this could be just the API mock if the service is under development.
- Apply, edit, or remove API policies, such as OAuth 2.0 authorization, API-key validation, throttling, and rate limiting (a simple rate limiter is sketched after this list).
- Configure environment-specific properties as required.
Depending on the number of API policies to be applied, this process can be very quick or time-consuming.
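To give a flavor of what a rate-limiting policy does under the hood, here is a minimal token-bucket sketch, one common mechanism behind such policies. It is purely illustrative; in practice, the policy is configured in the API management platform rather than hand-coded:

```python
# Illustrative token-bucket rate limiter, the mechanism behind many
# rate-limiting policies. In practice this is configured in the gateway.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec      # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Replenish tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: the gateway would return HTTP 429

bucket = TokenBucket(rate_per_sec=2, capacity=5)  # 2 req/s, bursts of 5
print([bucket.allow() for _ in range(7)])  # first 5 pass, the rest are throttled
```

The capacity parameter controls how large a burst a consumer can send before throttling kicks in, independently of the sustained rate.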
Deploy
In simple terms, deployment is a process by which a code artifact (typically referred to as a deployment unit), such as an API and/or service, is prepared and moved into a runtime environment for its execution and use. However, in actual practice, deployment is carried out within a continuous integration and continuous deployment (CICD) pipeline that takes care of the packaging, verification, regression testing, and deployment of the code.
Figure 5: A CICD pipeline
As illustrated in the preceding diagram, the process typically involves adopting a continuous integration tool that, based on preconfigured conditions (such as a merge into a main development branch, a scheduled task, or even a manual action), executes a series of tasks. The tasks are as follows.
Pull
This means pulling the code from a specific branch in a code repository, which could be a source control system such as Git (for example, GitHub, GitLab, or Bitbucket), or even older repositories such as Subversion (SVN) and Concurrent Versions System (CVS).
Inspect
This means inspecting the code and its dependencies in search of potential errors and/or vulnerabilities. This can be done with tools such as SonarQube (sonarqube.org), Coverity (scan.coverity.com), or Fortify (microfocus.com).
Build and package
This involves compiling and packaging the code, along with its dependencies, into a release package, tagged with its respective version. For example, in the case of Java, this typically involves using tools like Maven (maven.apache.org) or Gradle (gradle.org) to generate .jar files.
Quality assurance (QA)
This consists of deploying the released artifacts into a QA environment with the objective of conducting a series of tests, such as:
- Interface testing: This verifies that an interface matches its interface definition.
- Functional testing: This is done by making a series of predefined API calls to verify that the API behaves as expected (see the sketch after this list).
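A functional test in such a pipeline might look like the following minimal sketch, written in Python using the third-party requests library; the base URL, endpoint, and expected fields are assumptions for the same fictional Orders API:

```python
# Minimal sketch of functional API tests run against the QA environment.
# The base URL, endpoint, and expected fields are illustrative assumptions.
import requests

BASE_URL = "https://qa.example.com/api/v1"

def test_get_order_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/orders/42", timeout=5)
    assert resp.status_code == 200
    order = resp.json()
    assert order["orderId"] == "42"               # behaves as specified
    assert "status" in order and "total" in order

def test_unknown_order_returns_404():
    resp = requests.get(f"{BASE_URL}/orders/does-not-exist", timeout=5)
    assert resp.status_code == 404
```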
Deploy
Should no issues be encountered during the tests, the solution is deployed into the production environment.
Rollback
Should any issues be encountered post-deployment, there should be the ability to roll back to the previous working version. The creation of the deployment pipeline is typically carried out by platform engineers with support from developers as required.
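Tying these tasks together, the following sketch shows the skeleton of such a pipeline as a plain Python script. Real pipelines are declared in the CI tool's own format; the commands assume a hypothetical Maven-based Java service and a made-up deploy.sh script:

```python
# Skeleton of a CICD pipeline expressed as a plain script. Real pipelines are
# declared in the CI tool's own format; these commands assume a Maven project.
import subprocess
import sys

STAGES = [
    ("pull",    ["git", "pull", "origin", "main"]),
    ("inspect", ["mvn", "verify", "sonar:sonar"]),  # static analysis
    ("package", ["mvn", "package"]),                # build and tag the .jar
    ("qa",      ["mvn", "test"]),                   # run the test suite
    ("deploy",  ["./deploy.sh", "production"]),     # hypothetical deploy script
]

for name, cmd in STAGES:
    print(f"--- stage: {name} ---")
    if subprocess.run(cmd).returncode != 0:
        # A failed stage stops the pipeline; a failed deploy triggers rollback.
        sys.exit(f"stage '{name}' failed; aborting (or rolling back)")
```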
Promote, deprecate, and retire
As APIs and services evolve through multiple design and development iterations, naturally new versions are created to reflect the fact that changes have taken place, such as new features being added, improvements to existing features, or even just bug fixes. Regardless of the type of change, the fact remains that handling versions is a critical aspect of any software development life cycle.
However, having a strategy in place to handle API versioning isn't enough. If the process of rolling out changes isn't carefully thought through, such as how to deal with non-backward-compatible versions, there can be negative repercussions, such as breaking the API consumer's code or even ending up with too many versions of the same API in production, thus adding additional complexity and costs.
Such a fate can be avoided by having the ability to promote, deprecate, and retire different versions of the same API and its corresponding service.
Figure 6: Promote, deprecate, and retire examples
For example, as illustrated in the preceding diagram, a REST API that adopts a URI versioning approach could have concurrent versions of its services running in production, though a rule of thumb is to have no more than two: a deprecated version and a new version. API consumers bound to the deprecated version are given a notice period (for example, three months) to switch to the new API before the old one is retired, after which calls to it return an error.
A header-based approach could also be adopted, even in the case of versionless APIs (for example, GraphQL), especially in scenarios where backward compatibility simply isn't possible. In this case, the API gateway takes care of routing to the right service version based on a version HTTP header. Consumers could then be notified that the current API (accessed without any version header) is deprecated and that they have a period of time to switch to the new one by adding a version HTTP header.
Once the deprecated API is retired, the default API becomes the newer one, at which point consumers that didn't switch may face issues when calling it. Needless to say, this approach is more intrusive and requires careful consideration. This task is typically carried out by platform engineers with input from API architects and developers as required.
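The following toy sketch illustrates the header-based routing rule described above; the header name (X-API-Version) and upstream addresses are illustrative assumptions, and a real gateway would express this as configuration rather than code:

```python
# Toy illustration of header-based version routing at the gateway.
# Header name and upstream addresses are illustrative assumptions.
UPSTREAMS = {
    "1": "http://orders-service-v1.internal",  # deprecated version
    "2": "http://orders-service-v2.internal",  # current version
}
DEFAULT_VERSION = "1"  # flips to "2" once v1 is retired

def route(headers: dict) -> str:
    version = headers.get("X-API-Version", DEFAULT_VERSION)
    try:
        return UPSTREAMS[version]
    except KeyError:
        # An unsupported version would map to an HTTP 4xx error at the gateway.
        raise ValueError(f"Unsupported API version: {version}")

print(route({}))                      # deprecated default until retirement
print(route({"X-API-Version": "2"}))  # consumers opting in to the new version
```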
Observe
In control theory, observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. In other words, it is the ability of a system to externalize its internal state data, namely logs (verbose, text-based representations of system events), traces (data representing a specific event that occurred within the application), and metrics (numeric representations of point-in-time data, such as counters and gauges, for example, CPU, RAM, and disk usage). These are referred to as the three pillars of observability, and they are needed for monitoring and analyzing the whole system.
Figure 7: End-to-end observability
The preceding diagram illustrates multiple layers of an application stack that have been instrumented in order to send traces, metrics, and logs into centralized storage for analysis and visualization.
Although observability isn't new (logging, for example, has been around since the beginning of programming), in distributed systems, such as microservices architectures, it has become paramount. Without properly instrumenting all components of the stack (as illustrated), tasks such as understanding the overall health of the system, debugging/troubleshooting issues, identifying potential performance bottlenecks, monitoring compliance against important service-level agreements (SLAs) (for example, availability and throughput), and even discovering potential security breaches can become very difficult, if not impossible.
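As a minimal illustration of the three pillars, the sketch below emits a structured log, a counter metric, and a trace identifier for a single request using only Python's standard library; production systems would instead use dedicated tooling such as OpenTelemetry:

```python
# Minimal illustration of the three pillars for a single request handler,
# using only the standard library; real systems use dedicated tooling.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
REQUEST_COUNT = 0  # metric: a simple counter

def handle_request(path: str) -> None:
    global REQUEST_COUNT
    trace_id = str(uuid.uuid4())  # trace: correlates this event end to end
    start = time.monotonic()
    # ... actual request handling would happen here ...
    duration_ms = (time.monotonic() - start) * 1000
    REQUEST_COUNT += 1            # metric: point-in-time numeric data
    logging.info(json.dumps({     # log: structured record of the event
        "trace_id": trace_id,
        "path": path,
        "duration_ms": round(duration_ms, 2),
        "request_count": REQUEST_COUNT,
    }))

handle_request("/orders/42")
```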
This article delivered a comprehensive overview of not just the entire API life cycle, but also related cycles that are derived from it. Define the right organization model for business-driven APIs with Luis Weir’s latest release Enterprise API Management.
About the Author
Luis Augusto Weir is a Director of Software Development at Oracle, a former Chief Architect at Capgemini, an Oracle ACE Director, and an Oracle Groundbreaker Ambassador. An API management and microservices evangelist, Luis has over 17 years of experience implementing complex distributed systems around the world.