one codebase tracked in revision control, many deploys
one-to-one correlation between the codebase and the app
- Multiple apps sharing the same code is a violation of twelve-factor. The solution here is to factor shared code into libraries which can be included through the dependency manager.
- If there are multiple codebases, it’s not an app – it’s a distributed system. Each component in a distributed system is an app, and each can individually comply with twelve-factor.
- A deploy is a running instance of the app: production, staging, and each developer's local copy all qualify as deploys.
- The codebase is the same across all deploys, although different versions may be active in each deploy.
Explicitly declare and isolate dependencies
- Most languages offer a packaging system for distributing support libraries (e.g. CPAN for Perl, Rubygems for Ruby).
A twelve-factor app never relies on implicit existence of system-wide packages.
- declares all dependencies, completely and exactly, via a dependency declaration manifest
- uses a dependency isolation tool during execution to ensure no implicit dependencies leak in
- dependency declaration and isolation must always be used together
Twelve-factor apps also do not rely on the implicit existence of any system tools (such as shelling out to curl or ImageMagick).
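As a sketch of how explicit declaration pays off, an app can verify its declared dependencies at startup instead of assuming system-wide packages exist. The manifest format below (one `name==version` per line) mirrors pip's requirements.txt; the startup check itself is illustrative, not part of twelve-factor:

```python
# Illustrative check: fail fast when a declared dependency is missing,
# rather than relying on the implicit presence of system-wide packages.
from importlib import metadata

def missing_dependencies(manifest_lines):
    """Return declared package names that are not installed."""
    missing = []
    for line in manifest_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments, as pip does
        name = line.split("==")[0]
        try:
            metadata.version(name)
        except metadata.PackageNotFoundError:
            missing.append(name)
    return missing
```

In practice the dependency isolation tool (e.g. virtualenv, bundle exec) does this job; the point is that the full dependency set is knowable from the manifest alone.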
Store config in the environment
- Resource handles to the database, Memcached, and other backing services
- Credentials to external services such as Amazon S3 or Twitter
- Per-deploy values such as the canonical hostname for the deploy
- strict separation of config from code. Config varies substantially across deploys, code does not.
- Another approach is config files not checked into revision control (e.g. database settings in Rails' config/database.yml), but these are easy to check in by mistake and tend to be scattered across formats and locations.
- The twelve-factor app stores config in environment variables
- Another aspect of config management is grouping. Sometimes apps batch config into named groups (often called “environments”) named after specific deploys, such as the development, test, and production environments in Rails. This method does not scale cleanly: as more deploys are created, new environment names are needed, leading to a combinatorial explosion of config.
- env vars are granular controls, each fully orthogonal to other env vars. They are never grouped together as “environments”, but instead are independently managed for each deploy.
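A minimal sketch of env-var config in Python. The variable names (DATABASE_URL, MEMCACHED_SERVERS, CANONICAL_HOST) are illustrative, and the in-code defaults are a local-development convenience, not something twelve-factor prescribes:

```python
# Sketch: each config value is an independent, orthogonal env var;
# there is no grouping into named "environments".
import os

def get_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "memcached_servers": env.get("MEMCACHED_SERVERS", "localhost:11211"),
        "canonical_host": env.get("CANONICAL_HOST", "localhost"),
    }
```

Because each var is set independently per deploy, a new deploy needs no new "environment name", only its own values.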
Treat backing services as attached resources
A backing service is any service the app consumes over the network as part of its normal operation.
Backing services may be locally managed (e.g. a local database) or provided and managed by third parties, such as:
- New Relic
- Amazon S3
- Google Maps
The code for a twelve-factor app makes no distinction between local and third-party services. Each is an attached resource:
- accessed via a URL or other locator/credentials stored in the config.
- should be able to swap out a local MySQL database with one managed by a third party (such as Amazon RDS) without any changes to the app’s code.
- only the resource handle in the config needs to change.
- Resources are loosely coupled to the deploy they are attached to, so they can be attached and detached at will.
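For example, swapping a local MySQL for a managed one changes only a URL in config. A sketch, where DATABASE_URL is an illustrative variable name:

```python
# Sketch: the attached database is identified purely by a resource
# handle (URL) held in config; swapping a local MySQL for e.g. Amazon
# RDS changes only that URL, never the code.
import os
from urllib.parse import urlparse

def database_handle(env=os.environ):
    url = urlparse(env["DATABASE_URL"])
    return {
        "host": url.hostname,
        "port": url.port or 3306,   # MySQL default port
        "user": url.username,
        "database": url.path.lstrip("/"),
    }
```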
Build, release, run
Strictly separate build and run stages
- The build stage converts a code repo into an executable bundle known as a build: it fetches and vendors dependencies and compiles binaries and assets.
- The release stage takes the build produced by the build stage and combines it with the deploy’s current config, producing a release ready for immediate execution.
- The run stage runs the app in the execution environment, by launching some set of the app’s processes against a selected release.
- it is impossible to make changes to the code at runtime, since there is no way to propagate those changes back to the build stage.
Deployment tools typically offer release management tools, most notably the ability to roll back to a previous release.
- Every release must have a unique release ID, such as a timestamp or an incrementing number; releases are append-only and cannot be mutated once created.
- the run stage should be kept to as few moving parts as possible
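A sketch of the release data model (illustrative, not any real deploy tool): each release pairs a build with the deploy's current config under a unique, append-only ID, which is what makes rollback possible:

```python
# Sketch: releases are immutable and append-only; rollback just selects
# an earlier entry rather than mutating anything.
import itertools

class ReleaseStore:
    def __init__(self):
        self._counter = itertools.count(1)
        self.releases = []  # append-only release history

    def release(self, build, config):
        """Release stage: combine a build with the current config."""
        rel = {"id": f"v{next(self._counter)}", "build": build, "config": config}
        self.releases.append(rel)
        return rel

    def rollback(self):
        """Return the previous release; nothing is modified."""
        return self.releases[-2]
```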
Execute the app as one or more stateless processes
- processes are stateless and share-nothing.
- Any data that needs to persist must be stored in a stateful backing service
- never assumes that anything cached in memory or on disk will be available on a future request or job
- Asset compilation that uses the filesystem as a cache should happen during the build stage rather than at runtime.
- Session state data is a good candidate for a datastore that offers time-expiration, such as Memcached or Redis.
Export services via port binding
- completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service.
- one app can become the backing service for another app.
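A minimal self-contained web process using only Python's standard library, binding HTTP to a port itself. PORT is assumed to be supplied as config by the execution environment; 5000 is just a local default:

```python
# Sketch: the web process exports HTTP by binding to a port itself;
# no webserver is injected into the execution environment.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello, world!"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging for this sketch

def make_server(port=None):
    if port is None:
        port = int(os.environ.get("PORT", "5000"))
    return HTTPServer(("127.0.0.1", port), Hello)
```

Running `make_server().serve_forever()` makes this process a web-facing service on its own; with its URL placed in another app's config, it equally serves as that app's backing service.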
Scale out via the process model
- processes are first-class citizens
- the developer can architect the app to handle diverse workloads by assigning each type of work to a process type
- HTTP requests may be handled by a web process, and long-running background tasks handled by a worker process.
- an individual VM can only grow so large (vertical scale), so the application must also be able to span multiple processes running on multiple physical machines.
- The array of process types and number of processes of each type is known as the process formation.
- processes should never daemonize or write PID files.
- The app should rely on the operating system’s process manager to manage output streams, respond to crashed processes, and handle user-initiated restarts and shutdowns.
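Process types are typically declared in a Procfile (the format used by Foreman and Heroku); the commands below are illustrative:

```
web: gunicorn app:wsgi
worker: python worker.py
```

The process formation is then a count per type, e.g. two web processes and one worker, scaled independently per deploy.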
Maximize robustness with fast startup and graceful shutdown
- Processes should strive to minimize startup time.
- Processes shut down gracefully when they receive a SIGTERM signal from the process manager:
- for a web process: ceasing to listen on the service port (thereby refusing any new requests), allowing any current requests to finish, and then exiting.
- for long polling, the client should seamlessly attempt to reconnect when the connection is lost.
- for a worker process: returning the current job to the work queue.
- Processes should also be robust against sudden death, such as an underlying hardware failure.
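A sketch of a worker process that handles SIGTERM gracefully; the plain list below stands in for a real work queue:

```python
# Sketch: on SIGTERM, stop taking new jobs; an in-flight job either
# completes or is returned to the work queue.
import signal

class Worker:
    def __init__(self):
        self.running = True
        signal.signal(signal.SIGTERM, self._handle_sigterm)

    def _handle_sigterm(self, signum, frame):
        self.running = False  # finish/return the current job, then exit

    def run(self, queue, process):
        while self.running and queue:
            job = queue.pop(0)
            try:
                process(job)
            except Exception:
                queue.append(job)  # return the job to the work queue
                raise
```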
Keep development, staging, and production as similar as possible
gaps between development and production
the time gap
- Traditionally, a developer may work on code that takes days, weeks, or even months to go into production.
- The twelve-factor developer may write code and have it deployed hours or even just minutes later.
the personnel gap
- Traditionally, developers write code and ops engineers deploy it.
- In a twelve-factor app, the developers who wrote code are closely involved in deploying it and watching its behavior in production.
the tools gap
- Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.
- keep development and production as similar as possible.
- Adapter libraries make it easy to target different backing services in development and production.
- Even so, the twelve-factor developer resists the urge to use different backing services between development and production.
- Modern packaging systems, declarative provisioning tools (such as Chef and Puppet), and lightweight virtual environments (such as Vagrant) make it cheap to run a local environment that closely approximates production.
- all deploys of the app should be using the same type and version of each of the backing services.
Treat logs as event streams
A twelve-factor app never concerns itself with routing or storage of its output stream.
- It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout.
In staging or production deploys:
- each process’ stream will be captured by the execution environment
- collated together with all other streams from the app
- routed to one or more final destinations for viewing and long-term archival
- destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
- a log indexing and analysis system (such as Splunk)
- a general-purpose data warehousing system (such as Hadoop/Hive)
- Finding specific events in the past
- Large-scale graphing of trends (such as requests per minute)
- Active alerting according to user-defined heuristics (such as an alert when the quantity of errors per minute exceeds a certain threshold)
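A sketch of a process emitting its event stream: one line per event, unbuffered, to stdout. The key=value line format is only an illustration, not something twelve-factor mandates:

```python
# Sketch: the app writes events to stdout and never manages logfiles;
# routing and storage are the execution environment's job.
import sys

def log_event(event, stream=sys.stdout, **fields):
    parts = [f"event={event}"] + [f"{k}={v}" for k, v in sorted(fields.items())]
    line = " ".join(parts)
    stream.write(line + "\n")
    stream.flush()  # unbuffered: flush after every event
    return line
```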
Run admin/management tasks as one-off processes
- Running database migrations
- Running a console (REPL shell) to run arbitrary code or inspect the app’s models against the live database
- Running one-time scripts committed into the app’s repo
One-off admin processes should be run in an identical environment to the regular long-running processes of the app.
- Admin code must ship with application code to avoid synchronization issues.
- The same dependency isolation techniques should be used on all process types.
- strongly favors languages which provide a REPL shell out of the box
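A sketch of a one-off admin task that loads the same code and config as the app's long-running processes; `run_migrations` and the config names are hypothetical:

```python
# Sketch: admin code ships in the same repo and runs against the same
# config as the app's regular processes, avoiding synchronization issues.
import os

def load_config(env=None):
    """The same config loading used by the long-running processes."""
    env = os.environ if env is None else env
    return {"database_url": env.get("DATABASE_URL", "sqlite:///dev.db")}

def run_migrations(config):
    return f"migrating {config['database_url']}"

if __name__ == "__main__":
    # Invoked as a one-off process, in an environment identical to the
    # app's regular processes (same release, same dependency isolation).
    print(run_migrations(load_config()))
```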