Fifteen Factor App

Vikas Gupta
8 min read · Jun 24, 2021
The Fifteen Factors

In the continuously evolving world of technology, the most pressing requirement for any software application is to be performant, scalable, easily manageable, and resilient. In 2012, programmers at Heroku debuted the Twelve-Factor App methodology, which lays out general principles and guidelines for creating robust enterprise applications.

Modern application architectures are highly complex, and the containerized approach along with cloud-native applications, especially the microservices architecture, makes the complete environment highly dynamic. Though the Twelve-Factor principles are equally effective today and do align with this changing environment, the need to extend them is increasingly felt.

This article focuses on the additional factors that are already very much in force and are widely discussed across the industry nowadays.

The Methodology — Fifteen Factor App

The Fifteen Factors are an extension of, and suggested additions to, the twelve factors, supporting modern application architectures. Looking at the complexity and architectural changes witnessed in modern applications, the existing principles are elaborated further, and the key suggested additions are as below –

  1. API First Approach
  2. Telemetry
  3. Security (Authentication and Authorization)

The methodology remains aligned with the principles suggested in the core Twelve-Factor App methodology, extended with the additional factors –

  • Use Declarative Formats for setup automation, to minimize time and cost for new developers joining the project
  • Have a Clean Contract, offering maximum portability between execution environments
  • Are suitable for deployment on modern Cloud Platforms, obviating the need for servers and systems administration
  • Minimize Divergence between development and production, enabling continuous deployment for maximum agility
  • Are dynamically scalable without significant changes to tooling, architecture, or development practices
  • Follow a service-driven approach, making the contract available for consumption by a front-end client application, routing gateway, or any other downstream system
  • Monitor distributed application deployments for domain/application-specific logs/data, health information, and further statistics on modern cloud platforms
  • Address security (authentication and authorization) appropriately so that an identity is established for each request

The Fifteen Factors

I. Codebase

One codebase is tracked in version control. There is only one codebase per application, but there will be many deploys of the application.

The codebase is the same across all deploys, although different versions may be active in each deployment. For example, a developer has some commits not yet deployed to staging; staging has some commits not yet deployed to production.

II. Dependencies

The principle states that the application never relies on the implicit existence of system-wide packages. It declares all dependencies, completely and exactly, via a dependency declaration manifest. Furthermore, it uses a dependency isolation tool during execution to ensure that no implicit dependencies “leak in” from the surrounding system.

For example, from a Java perspective, Maven and Gradle are dependency managers that allow users to declare dependencies explicitly and to exclude transitive dependencies that should not be part of the system.

III. Config

Everything that varies between different deploys across environments (staging, production, developer environments, etc.) is categorized as configuration. This includes -

  • Database, Distributed Cache, and other backing services configuration
  • Credentials to external services such as Azure Event Hub, Amazon S3, or Media Server
  • Application connectivity information such as IP addresses, ports, and hostnames

Application configuration should never be hardcoded. The principle suggests saving configuration values in environment variables or otherwise externalizing the configuration from the application. In the Java world, one example of externalizing application configuration is the use of Spring Cloud Config.
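As a minimal sketch of reading such values from the environment at startup (the variable names here are illustrative), nothing environment-specific lives in the code itself:

    import java.util.Objects;

    public class AppConfig {

        // Connection details come from the deploy's environment, not from code.
        public static String databaseUrl() {
            return require("DATABASE_URL");
        }

        public static String cacheHost() {
            return require("CACHE_HOST");
        }

        private static String require(String name) {
            // Fail fast if a deploy is missing a required setting.
            return Objects.requireNonNull(System.getenv(name),
                    "Missing required environment variable: " + name);
        }
    }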

IV. Backing Services

The principle states that all backing services, whether local or third-party, should be treated as attached resources. The principle makes no distinction between local and third-party services. To the application, all are attached resources, accessed via a URL or other locator/credentials stored in the config. An attached resource can be swapped at any point in time without any code change to the application.

Examples of the attached resources can be data-stores (such as Oracle, MongoDB, MySQL), messaging/queueing systems (such as ActiveMQ or RabbitMQ), SMTP services for outbound email (such as Postfix), and caching systems (such as Redis or Memcached).
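For instance, a relational datastore can be attached purely through a locator held in config; swapping a local MySQL instance for a managed one is then a config change, not a code change. A sketch using plain JDBC (the environment variable names are assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class BackingServices {

        // The database is just an attached resource located by a URL from config.
        public static Connection openDatabase() throws SQLException {
            String url = System.getenv("DATABASE_URL");        // e.g. jdbc:mysql://host:3306/app
            String user = System.getenv("DATABASE_USER");
            String password = System.getenv("DATABASE_PASSWORD");
            return DriverManager.getConnection(url, user, password);
        }
    }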

V. Build, Release, Run

The principle enforces a strict separation between the build, release, and run stages. For instance, code changes are not possible at runtime, as there is no way to propagate those changes back to the build stage.

Automation using CI/CD tools (like Jenkins) facilitates the build and deployment process. Containerization tools (like Docker) further make it easy to keep the build, release, and run stages cleanly separated.

VI. Processes

Execute the app as one or more stateless processes — all processes are stateless and share nothing. Any data that needs to be persisted must be stored in a stateful backing service (such as a datastore). For instance, by keeping applications aligned with the stateless behavior of REST, services can be scaled horizontally as required without any impact.
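A small sketch of the idea (the SessionStore interface below is hypothetical and stands in for a backing service such as Redis): the handler keeps nothing in process memory between requests, so any instance can serve any request:

    public class CartHandler {

        // Hypothetical abstraction over a stateful backing service (e.g. Redis).
        public interface SessionStore {
            String get(String key);
            void put(String key, String value);
        }

        private final SessionStore store;

        public CartHandler(SessionStore store) {
            this.store = store;
        }

        // Everything that must survive the request lives in the backing service,
        // not in fields of this process.
        public void addItem(String sessionId, String itemId) {
            String cart = store.get("cart:" + sessionId);
            String updated = (cart == null || cart.isEmpty()) ? itemId : cart + "," + itemId;
            store.put("cart:" + sessionId, updated);
        }
    }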

VII. Port binding

Export services via port binding — the application is completely self-contained and does not rely on the runtime injection of a web server into the execution environment to create a web-facing service. From a Java perspective, Spring Boot is one example, as it ships with an embedded server by default.
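As an illustrative, framework-free sketch, the JDK's built-in HttpServer shows the idea: the process binds a port taken from config and serves HTTP itself, with nothing injected at runtime:

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class SelfContainedService {

        public static void main(String[] args) throws IOException {
            // The port comes from config (factor III), with a fallback for local runs.
            int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

            // The app binds the port and serves HTTP directly; no servlet container is injected.
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/hello", exchange -> {
                byte[] body = "Hello from a self-contained process".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }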

VIII. Concurrency

The principle suggests scaling out via the process model. Applications should be designed to distribute their workload across multiple processes. Individual processes can, however, use a concurrency model such as threads internally. The share-nothing, horizontally partitionable nature of application processes means that adding more concurrency is a simple and reliable operation.
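A brief sketch of that distinction: each process stays share-nothing towards its siblings, while internally using a thread pool to work concurrently; scaling out means running more copies of the whole process:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class Worker {

        public static void main(String[] args) {
            // Internal concurrency: a thread pool inside a single process.
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (String job : List.of("job-1", "job-2", "job-3", "job-4")) {
                pool.submit(() -> process(job));
            }
            pool.shutdown();
            // Scaling out happens by starting more instances of this process,
            // not by growing shared state inside one of them.
        }

        private static void process(String job) {
            System.out.println("Processing " + job);
        }
    }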

IX. Disposability

Maximize robustness with fast startup and graceful shutdown — the processes are disposable, meaning they can be started or stopped at a moment’s notice. Graceful shutdowns are very important, and the system must be left in a correct state. In the world of containerisation and microservices, the deployment process already follows this principle to a great extent.
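A minimal sketch of the shutdown side in plain Java: a JVM shutdown hook reacts to the platform's termination signal, finishes in-flight work, and releases resources before the process exits:

    public class DisposableWorker {

        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            // Runs when the platform stops the process (e.g. scale-down or redeploy).
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                running = false;
                System.out.println("Draining in-flight work and closing connections...");
            }));

            // Fast startup: no heavy warm-up before the process is ready for work.
            while (running) {
                Thread.sleep(1000);   // stand-in for real work
            }
        }
    }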

X. Dev/Prod Parity

Keep Development, Staging, and Production as Similar as Possible

The application is designed for continuous deployment by keeping the gap between development and production small. The principle also resists using different backing services between development and production, even when adapters theoretically abstract away any differences between them. Backing services, such as the app’s database, queueing system, or cache, are one area where dev/prod parity is important.

Additionally, having the right processes like Continuous Integration (CI) and Continuous Delivery (CD) in place facilitates bridging this gap further.

XI. Logs (Treat logs as event streams)

Logs provide visibility into the behavior of a running app. Logs are the stream of aggregated, time-ordered events collected from the output streams of all running processes and backing services. The principle suggests separating the two concerns — generating logs and processing the log information.

A tool such as Fluentd can collect the stream of logs and feed it into Elasticsearch for storage and indexing; meaningful dashboards can then be rendered in Kibana for visualisation.
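In code, this simply means the application writes each event to stdout and never manages log files or routing itself; a rough sketch (the event fields are illustrative):

    import java.time.Instant;

    public class EventLog {

        // Emit one event per line to stdout; the execution environment
        // (Docker, Kubernetes, Fluentd, ...) captures and routes the stream.
        public static void emit(String level, String event, String detail) {
            System.out.println(String.format(
                    "{\"ts\":\"%s\",\"level\":\"%s\",\"event\":\"%s\",\"detail\":\"%s\"}",
                    Instant.now(), level, event, detail));
        }

        public static void main(String[] args) {
            emit("INFO", "order.created", "order-42 accepted");
            emit("WARN", "payment.retry", "gateway timeout, attempt 2");
        }
    }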

XII. Admin Processes

The principle suggests running admin/management tasks as one-off processes. One-off admin processes should run in an environment identical to that of the app’s regular long-running processes. Admin code must ship with application code to avoid synchronization issues. Developers need to make sure that one-off scripts are automated rather than executed manually before releasing a build.
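A sketch of the idea (the class and SQL below are hypothetical): a migration ships in the same codebase as the app and runs as a separate one-off process against the same release, config, and backing services, reusing the BackingServices helper from the sketch in factor IV:

    public class MigrateSchema {

        // Shipped alongside the application code and run as a one-off process,
        // e.g. java -cp app.jar MigrateSchema, against the same release and config.
        public static void main(String[] args) throws Exception {
            try (var connection = BackingServices.openDatabase();
                 var statement = connection.createStatement()) {
                statement.execute("ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMP NULL");
                System.out.println("Migration applied, exiting.");
            }
        }
    }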

XIII. API First

The API First principle implies defining the service contract first, so that consumers understand what the request and response communication is expected to look like. Service consumers can then work in parallel to develop the consuming applications. All of this can happen even before the actual service contract is implemented and made available.

A few of the major advantages of the API First approach are –

  • Cross-Platform Compatible — Facilitates discussions with various stakeholders whether those are part of an internal team, customers, or third-party systems who want to integrate with the APIs
  • Teams Can Develop in Parallel and Know What to Expect
  • Enables Reuse of Schemas / Contract Definitions

The principle further helps in avoiding the bottlenecks of a waterfall-style flow, as multiple stakeholders can work in parallel, ultimately cutting down implementation and integration time.

Further, it facilitates virtualization of the APIs so that integrating systems can start running tests against mocks. This pattern is an extension of the contract-first development pattern, where developers concentrate on building the edges or seams of their application first.

There are numerous tools available to reap the benefits of the API-First Approach. For instance, Apiary provides things like GitHub integration and server mocks. Prism is another open-source HTTP mock server that can mimic your API’s behavior as if you already built it.
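As a minimal contract-first sketch in plain Java (all names here are illustrative), the provider and its consumers can both compile against the agreed interface, and consumers can integrate against a throwaway mock long before the real implementation exists:

    import java.util.Optional;

    public class ApiFirstExample {

        // The contract: agreed upon first, implemented later.
        public record Order(String id, String status) {}

        public interface OrderApi {
            Order createOrder(String customerId);
            Optional<Order> findOrder(String id);
        }

        // A mock that consumers can develop and test against immediately.
        static class MockOrderApi implements OrderApi {
            public Order createOrder(String customerId) {
                return new Order("order-1", "CREATED");
            }
            public Optional<Order> findOrder(String id) {
                return Optional.of(new Order(id, "CREATED"));
            }
        }

        public static void main(String[] args) {
            OrderApi api = new MockOrderApi();   // later swapped for the real implementation
            System.out.println(api.createOrder("customer-7"));
        }
    }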

XIV. Telemetry

The principle focuses on designing the application to collect monitoring data: domain/application-specific logs and data, health information, and further statistics on modern cloud platforms. With the increasing dynamism of deployments, especially in Cloud Native environments, it has become essential to address this factor. Monitoring of a typical cloud-native application can include (but is not limited to) -

  • Application performance monitoring (APM) — stream of events monitoring the application’s performance
  • Domain-specific telemetry — stream of events and data for analytics and reporting
  • Health and system logs — stream of events, such as application start, shutdown, scaling, web request tracing, and the results of periodic health checks

It is important to have the right telemetry controls in place for the success of an application in a Cloud Native environment. Plenty of tools are already available to facilitate this. To name a few — cloud provider tools for health and system logs, ELK, Jaeger, Grafana, Prometheus, APM tools like AppDynamics, and the monitoring capabilities built into container orchestration platforms.
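As a minimal, framework-free sketch of the health and metrics side (the paths and metric name are illustrative; real setups would typically use Micrometer, Prometheus, or an APM agent), a process can expose its own health and a simple counter over HTTP for the platform to check and scrape:

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.util.concurrent.atomic.AtomicLong;

    public class Telemetry {

        private static final AtomicLong requestCount = new AtomicLong();

        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);

            // Health endpoint for periodic checks by the platform or orchestrator.
            server.createContext("/health", exchange -> respond(exchange, "{\"status\":\"UP\"}"));

            // A simple application metric exposed for scraping.
            server.createContext("/metrics", exchange ->
                    respond(exchange, "app_requests_total " + requestCount.get()));

            // Application traffic that increments the metric.
            server.createContext("/work", exchange -> {
                requestCount.incrementAndGet();
                respond(exchange, "done");
            });

            server.start();
        }

        private static void respond(HttpExchange exchange, String body) throws IOException {
            byte[] bytes = body.getBytes();
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        }
    }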

XV. Authentication and Authorisation

Security is a very important aspect of any application. At times, while capturing the business requirements, people tend to overlook security, but once its need is realized it has to be addressed the right way. Applications, and especially Cloud-Native applications, should secure their APIs, commonly using Role-Based Access Control (RBAC).

The simplest reason for user-level security is to maintain an audit trail of the events that happened in a user session, or of which user made which data changes.

There are numerous solutions and standards available, such as OAuth2, OpenID Connect, and various SSO servers, that can simplify the Authentication and Authorization implementation for an application.
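As a deliberately simplified, framework-free sketch of role-based access control (the roles, users, and action are illustrative; a real system would delegate authentication to OAuth2/OpenID Connect and enforcement to a library such as Spring Security):

    import java.util.Set;

    public class RbacExample {

        record User(String name, Set<String> roles) {}

        // Every request carries an identity; the action is allowed only if that
        // identity holds a role which grants it.
        static void requireRole(User user, String requiredRole) {
            if (!user.roles().contains(requiredRole)) {
                throw new SecurityException(user.name() + " lacks required role: " + requiredRole);
            }
        }

        static void deleteOrder(User caller, String orderId) {
            requireRole(caller, "ORDER_ADMIN");
            System.out.println(caller.name() + " deleted " + orderId);   // doubles as the audit trail entry
        }

        public static void main(String[] args) {
            deleteOrder(new User("alice", Set.of("ORDER_ADMIN")), "order-42");   // allowed
            deleteOrder(new User("bob", Set.of("ORDER_VIEWER")), "order-43");    // throws SecurityException
        }
    }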

Fifteen Factors — GIT Reference

More examples and references for the various factors can be accessed from the Git repository — https://github.com/vikasg11/fifteen-factor-app
