
12 Factor App

Added by dinos80152 | May 09, 2017
1 Reference
1.1 https://12factor.net/
2 Codebase
2.1 one codebase tracked in revision control, many deploys
2.1.1 one-to-one correlation between the codebase and the app
2.1.1.1 Multiple apps sharing the same code is a violation of twelve-factor. The solution here is to factor shared code into libraries which can be included through the dependency manager.
2.1.1.2 If there are multiple codebases, it’s not an app – it’s a distributed system. Each component in a distributed system is an app, and each can individually comply with twelve-factor.
2.1.2 deploys
2.1.2.1 production
2.1.2.2 stage
2.1.2.3 internal testing
2.1.2.4 developer1
2.1.2.5 developer2
2.1.3 The codebase is the same across all deploys, although different versions may be active in each deploy.
3 Dependencies
3.1 Explicitly declare and isolate dependencies
3.1.1 packaging system
3.1.2 A twelve-factor app never relies on implicit existence of system-wide packages.
3.1.2.1 dependency declaration manifest
3.1.2.1.1 pip
3.1.2.2 dependency isolation tool
3.1.2.2.1 Virtualenv
3.1.3 dependency declaration and isolation must always be used together
3.1.4 Twelve-factor apps also do not rely on the implicit existence of any system tools
3.1.4.1 curl
3.1.4.2 ImageMagick
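As a sketch of a pip dependency declaration manifest (package names and pinned versions here are only illustrative), a requirements.txt declares every dependency exactly and completely:

```
# requirements.txt -- explicit, complete declaration of all dependencies
Django==1.11.1
requests==2.14.2
```

Combined with Virtualenv for isolation, `pip install -r requirements.txt` inside an activated virtualenv relies on no system-wide packages.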
4 Config
4.1 Store config in the environment
4.1.1 includes
4.1.1.1 Resource handles to the database, Memcached, and other backing services
4.1.1.2 Credentials to external services such as Amazon S3 or Twitter
4.1.1.3 Per-deploy values such as the canonical hostname for the deploy
4.1.2 strict separation of config from code. Config varies substantially across deploys, code does not.
4.1.3 Another approach is config files not checked into revision control; these are easy to check in by mistake and tend to be scattered across locations and formats.
4.1.3.1 database settings
4.1.4 The twelve-factor app stores config in environment variables
4.1.5 Another aspect of config management is grouping. Sometimes apps batch config into named groups (often called “environments”) named after specific deploys, such as the development, test, and production environments in Rails. This method does not scale cleanly
4.1.6 env vars are granular controls, each fully orthogonal to other env vars. They are never grouped together as “environments”, but instead are independently managed for each deploy.
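A minimal sketch of reading per-deploy config from env vars in Python (the variable names and defaults are illustrative, not prescribed by twelve-factor):

```python
import os

def load_config(env=os.environ):
    """Each setting is an independent env var; nothing is grouped into named 'environments'."""
    return {
        # backing-service resource handle (hypothetical variable name)
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        # external-service credential (hypothetical variable name)
        "s3_bucket": env.get("S3_BUCKET", ""),
        # per-deploy value (hypothetical variable name)
        "canonical_host": env.get("CANONICAL_HOST", "localhost"),
    }
```

Each deploy sets its own values independently; code and config never mix.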
5 Backing services
5.1 Treat backing services as attached resources
5.1.1 A backing service is any service the app consumes over the network as part of its normal operation.
5.1.1.1 datastores
5.1.1.1.1 mysql
5.1.1.2 messaging/queueing system
5.1.1.2.1 RabbitMQ
5.1.1.3 SMTP
5.1.1.3.1 Postfix
5.1.1.4 caching system
5.1.1.4.1 redis
5.1.2 services provided and managed by third parties
5.1.2.1 SMTP
5.1.2.1.1 PostMark
5.1.2.2 metrics-gathering
5.1.2.2.1 New Relic
5.1.2.3 binary asset
5.1.2.3.1 Amazon S3
5.1.2.4 API
5.1.2.4.1 Twitter
5.1.2.4.2 Google Maps
5.1.3 no distinction between local and third party services.
5.1.3.1 accessed via a URL or other locator/credentials stored in the config.
5.1.3.2 should be able to swap out a local MySQL database with one managed by a third party (such as Amazon RDS) without any changes to the app’s code.
5.1.3.3 only the resource handle in the config needs to change.
5.1.3.4 resources are loosely coupled to the deploys they are attached to.
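The swap described in 5.1.3.2 can be sketched as follows: the app locates its database purely through a URL in config (the DATABASE_URL name is a common convention, assumed here), so pointing it at Amazon RDS instead of a local MySQL changes no code:

```python
import os
from urllib.parse import urlparse

def database_handle(url=None):
    """Locate the attached database from a resource handle in config."""
    url = url or os.environ.get("DATABASE_URL", "mysql://localhost/dev")
    parts = urlparse(url)
    # A real app would hand these values to a MySQL driver here.
    return {"host": parts.hostname, "db": parts.path.lstrip("/")}
```

Swapping local for third-party means changing only the URL in config.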
6 Build, release, run
6.1 Strictly separate build and run stages
6.1.1 stages
6.1.1.1 build
6.1.1.1.1 converts a code repo into an executable bundle known as a build.
6.1.1.1.2 fetches and vendors dependencies, and compiles binaries and assets.
6.1.1.2 release
6.1.1.2.1 takes the build produced by the build stage and combines it with the deploy’s current config.
6.1.1.3 run
6.1.1.3.1 runs the app in the execution environment, by launching some set of the app’s processes against a selected release.
6.1.2 it is impossible to make changes to the code at runtime, since there is no way to propagate those changes back to the build stage.
6.1.3 Deployment tools typically offer release management tools, most notably the ability to roll back to a previous release.
6.1.3.1 Capistrano
6.1.4 Every release should always have a unique release ID
6.1.5 the run stage should be kept to as few moving parts as possible
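A minimal sketch of the release step: an immutable build plus the deploy's current config yields a release with a unique ID (the timestamp ID scheme here is an assumption; any unique, ordered identifier works):

```python
import datetime

def make_release(build, config):
    """Combine a build with the deploy's current config into a release.
    Every release gets a unique ID; releases cannot be mutated, only superseded."""
    release_id = datetime.datetime.now(datetime.timezone.utc).strftime("v%Y%m%d%H%M%S")
    return {"id": release_id, "build": build, "config": dict(config)}
```

Rollback is then just running a previously recorded release.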
7 Processes
7.1 Execute the app as one or more stateless processes
7.2 processes are stateless and share-nothing.
7.3 Any data that needs to persist must be stored in a stateful backing service
7.4 never assumes that anything cached in memory or on disk will be available on a future request or job
7.5 prefer to use the filesystem as a cache for compiled assets during the build stage, rather than at runtime.
7.6 Session state data is a good candidate for a datastore that offers time-expiration, such as Memcached or Redis.
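As an illustration of 7.6, here is a toy time-expiring store standing in for Memcached or Redis (in a real twelve-factor app the store is an attached backing service, never in-process memory):

```python
import time

class ExpiringStore:
    """Toy stand-in for a TTL datastore such as Memcached or Redis."""
    def __init__(self):
        self._data = {}

    def set(self, key, value, ttl=3600):
        # the value lives until now + ttl seconds
        self._data[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        value, expires = self._data.get(key, (None, 0.0))
        return value if time.monotonic() < expires else None
```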
8 Port binding
8.1 Export services via port binding
8.1.1 completely self-contained and does not rely on runtime injection of a webserver into the execution environment to create a web-facing service.
8.1.2 one app can become the backing service for another app.
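A self-contained web process using only Python's standard library: the app binds the port itself rather than relying on a webserver being injected into the execution environment (the PORT env var name is a common convention, not mandated):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello from a self-contained web process\n")

def make_server():
    # The app exports HTTP as a service by binding a port itself.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("", port), Hello)

# make_server().serve_forever()  # uncomment to run
```

Because the service is just a URL and a port, it can in turn be the backing service for another app.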
9 Concurrency
9.1 Scale out via the process model
9.1.1 processes are first-class citizens.
9.1.2 the developer can architect their app to handle diverse workloads by assigning each type of work to a process type.
9.1.2.1 HTTP requests may be handled by a web process, and long-running background tasks handled by a worker process.
9.1.3 an individual VM can only grow so large (vertical scale), so the application must also be able to span multiple processes running on multiple physical machines.
9.1.4 The array of process types and number of processes of each type is known as the process formation.
9.1.5 processes should never daemonize or write PID files.
9.1.6 App should rely on the operating system’s process manager to manage output streams, respond to crashed processes, and handle user-initiated restarts and shutdowns.
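A sketch of a process formation (the process-type names and counts here are purely illustrative):

```python
# Process formation: how many processes of each process type to run.
FORMATION = {"web": 2, "worker": 4, "clock": 1}

def total_processes(formation):
    """Scaling out horizontally is just larger counts (or new types) in the formation."""
    return sum(formation.values())
```

A process manager in the execution environment launches this formation across machines; the app itself never daemonizes.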
10 Disposability
10.1 Maximize robustness with fast startup and graceful shutdown
10.1.1 Processes should strive to minimize startup time.
10.1.2 Processes shut down gracefully when they receive a SIGTERM signal from the process manager.
10.1.2.1 Web Process
10.1.2.1.1 ceasing to listen on the service port (thereby refusing any new requests), allowing any current requests to finish, and then exiting.
10.1.2.2 long polling
10.1.2.2.1 the client should seamlessly attempt to reconnect when the connection is lost.
10.1.2.3 worker process
10.1.2.3.1 returning the current job to the work queue.
10.1.3 Processes should also be robust against sudden death
11 Dev/prod parity
11.1 Keep development, staging, and production as similar as possible
11.1.1 gaps between development and production
11.1.1.1 the time gap
11.1.1.1.1 A developer may work on code that takes days, weeks, or even months to go into production.
11.1.1.1.2 a developer may write code and have it deployed hours or even just minutes later.
11.1.1.2 the personnel gap
11.1.1.2.1 Developers write code, ops engineers deploy it.
11.1.1.2.2 developers who wrote code are closely involved in deploying it and watching its behavior in production.
11.1.1.3 the tools gap
11.1.1.3.1 Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.
11.1.1.3.2 keep development and production as similar as possible.
11.1.1.3.2.1 use libraries as adapters over the different backing services used in development and production
11.1.2 resists the urge to use different backing services between development and production
11.1.3 use packaging systems, declarative provisioning tools, and virtual environments to keep the local environment close to the production environment.
11.1.4 all deploys of the app should be using the same type and version of each of the backing services.
12 Logs
12.1 Treat logs as event streams
12.1.1 never concerns itself with routing or storage of its output stream.
12.1.1.1 It should not attempt to write to or manage logfiles. Instead, each running process writes its event stream, unbuffered, to stdout.
12.1.2 staging or production deploys
12.1.2.1 each process’ stream will be captured by the execution environment
12.1.2.2 collated together with all other streams from the app
12.1.2.3 routed to one or more final destinations for viewing and long-term archival
12.1.2.4 destinations are not visible to or configurable by the app, and instead are completely managed by the execution environment.
12.1.2.5 example
12.1.2.5.1 Logplex
12.1.2.5.2 Fluentd
12.1.3 log analysis
12.1.3.1 example
12.1.3.1.1 log indexing and analysis system
12.1.3.1.1.1 splunk
12.1.3.1.2 data warehousing system
12.1.3.1.2.1 Hadoop
12.1.3.1.2.2 Hive
12.1.3.2 Finding specific events in the past
12.1.3.3 Large-scale graphing of trends (such as requests per minute)
12.1.3.4 Active alerting according to user-defined heuristics (such as an alert when the quantity of errors per minute exceeds a certain threshold)
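A sketch of 12.1.1: each event is written as one line to stdout, unbuffered; the JSON line shape is an assumption for illustration, not part of twelve-factor:

```python
import json
import sys
import time

def log_event(event, **fields):
    """Write one event to stdout; routing and storage belong to the execution environment."""
    record = {"ts": time.time(), "event": event, **fields}
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()  # unbuffered, so the environment can capture the stream live
```

The process never opens, rotates, or routes a logfile itself.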
13 Admin Processes
13.1 Run admin/management tasks as one-off processes
13.1.1 one-off tasks
13.1.1.1 Running database migrations
13.1.1.2 Running a console
13.1.1.2.1 run arbitrary code
13.1.1.2.2 inspect the app’s models against the live database
13.1.1.3 Running one-time scripts committed into the app’s repo
13.1.2 One-off admin processes should be run in an identical environment as the regular long-running processes of the app.
13.1.2.1 Admin code must ship with application code to avoid synchronization issues.
13.1.3 The same dependency isolation techniques should be used on all process types.
13.1.4 strongly favors languages which provide a REPL shell out of the box
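A sketch of a one-off admin process: it ships in the app's repo and runs against the same config mechanism and dependencies as the long-running processes (the DATABASE_URL name and the defaults are illustrative):

```python
import os

def run_one_off(task, env=os.environ):
    """Run an admin task (e.g. a migration) in the same environment as regular processes."""
    db_url = env.get("DATABASE_URL", "sqlite:///dev.db")
    return task(db_url)

# e.g. run_one_off(lambda url: f"migrating {url}")
```

Because the task reads the same env vars, it cannot drift out of sync with the deploy it administers.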
