Fly Launch configuration (fly.toml)

All new apps on the Fly Platform are V2 Apps, running on Fly Machines. Our docs apply to V2 Apps, but we still include legacy V1 Apps info where appropriate.

We’re migrating all V1 Apps to V2 in phases. Learn more about how and why we’re getting off Nomad.

You can also migrate your V1 app yourself using our migration and migration troubleshooting tools, or migrate your V1 app manually.

You can configure an app for deployment on Fly.io using a fly.toml file. Configuration for builds, environment variables, internet-exposed services, disk mounts, and release commands goes here.

TOML is a simple configuration file format. Here’s a useful introduction to its syntax.

You don’t need to create a fly.toml file by hand. Running flyctl launch will create one for you. You can also generate one from an existing app by running flyctl config save.

VSCode users: Install the Even Better TOML extension for automatic fly.toml validation and hints drawn from this documentation.

The app name

The first key/value in any fly.toml file is the application name. This will also be used to create the host name that the application will use by default. For example:

app = "restless-fire-6276"

Whenever flyctl is run, it will look for a fly.toml file in the current directory and use the application name in that file. This behavior can be overridden by using the -a flag to set the application name, or on some commands (such as deploy) by using a -c flag to point to a different fly.toml file.

Primary Region

Apps V2 only: fly deploy under Apps V2 uses the primary_region option to determine where to create new Machines, as well as to set the PRIMARY_REGION environment variable within the Machines it deploys. Replace ord with the three-letter code for your Fly Region of choice.

primary_region = "ord"

Runtime options

The following options are available to control the lifecycle of a running application. These are optional and can be placed at the top level of the fly.toml file:

kill_signal option

When shutting down a Fly app instance, by default, Fly sends a SIGINT signal to the running process. Typically this triggers a hard shutdown on most applications. The kill_signal option allows you to change what signal is sent so that you can trigger a softer, less disruptive shutdown. Options are SIGINT (the default), SIGTERM, SIGQUIT, SIGUSR1, SIGUSR2, SIGKILL, or SIGSTOP. For example, to set the kill signal to SIGTERM, you would add:

kill_signal = "SIGTERM"

kill_timeout option

The time to wait, in seconds, before stopping a Machine after sending the SIGINT signal or the signal set by kill_signal. Set kill_timeout to a value that gives your app enough time to exit gracefully. The default setting is 5 seconds. The maximum kill_timeout is 300 seconds (five minutes).

For example, to set the timeout to two minutes:

kill_timeout = 120

Console command

The command to run when you run the fly console command. Configure the console_command field with the command that opens your framework’s console, and then fly console will run that command automatically in a new, dedicated Machine. The new Machine is configured with the image and environment from your app’s latest release, but your app isn’t started, and no traffic will be routed to it. The Machine gets destroyed when you exit the console.

Here’s an example of a console command for Django:

console_command = "/code/manage.py shell"

swap_size_mb option

Setting this option enables Linux swap on Machines. A swap partition is created with size swap_size_mb, expressed in megabytes.

Swapping to disk can help avoid out-of-memory crashes on brief spikes in memory use. Swap is much slower than RAM, so if performance is important, a better solution is to increase the Machine memory with fly scale memory.

swap_size_mb = 512

The build section

The optional build section contains key/values concerned with how the application should be built. You can read more about builders in Builders and Fly.

builder

[build]
  builder = "paketobuildpacks/builder:base"
  buildpacks = ["gcr.io/paketo-buildpacks/nodejs"]

The builder setting uses Cloud Native Buildpacks (CNB) and Builders to create the application image. These are third-party toolkits that can use Heroku-compatible build processes or other tools. The tooling is managed by the buildpacks themselves, and buildpacks are assembled into CNB Builders: images complete with the buildpacks and the OS needed to run the toolchains.

In our example above, the builder is being set to use Paketo’s all-purpose builder with the NodeJS buildpack.

Specify a Docker image

[build]
  image = "flyio/hellofly:latest"

The image builder is used when you want to immediately deploy an existing public image. When deployed, there will be no build process; the image will be prepared and uploaded to Fly.io infrastructure as is. This option is useful if you already have a working Docker image you want to deploy on Fly.io, or you want to run a well-known Docker image from a repository.

Specify a Dockerfile

[build]
  dockerfile = "Dockerfile.test"

dockerfile accepts a relative path to a Dockerfile, or a URL. By default, flyctl looks for Dockerfile in the application root.

Gotchas:

  • This option will not change the Docker context path, which is set to the project root directory by default. If you want the Dockerfile to use its containing directory as the context root, use fly deploy <directory>.
  • When specifying a local Dockerfile, make sure it’s not excluded from the Docker build context in your .dockerignore.

Specify a Docker ignore file

[build]
  ignorefile = "/path/.dockerignore"

ignorefile accepts a relative path to a .dockerignore file. By default, flyctl looks for the .dockerignore file in the working directory.

Specify a multistage Docker build target

[build]
  build-target = "test"

If your Dockerfile has multiple stages, you can specify one as the target for deployment. The target stage must have a CMD or ENTRYPOINT set.

Specify Docker build arguments

You can pass build-time arguments to both Dockerfile and Buildpack builds using the [build.args] sub-section:

[build.args]
  USER="plugh"
  MODE="production"

You can also pass build arguments to flyctl deploy using --build-arg. Command line args take priority over args with the same name in fly.toml.

Note that build arguments are not available in the runtime container. If you need build information at runtime - like a Git revision - store it in a file at build time, like:

RUN echo $GIT_REVISION > REVISION
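For example, a minimal sketch assuming the revision is passed in as a build argument (the value here is a placeholder, and your Dockerfile also needs a matching ARG GIT_REVISION declaration for the variable to be visible during the build):

[build.args]
  GIT_REVISION = "abc1234"  # placeholder value; set this to your actual revision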

Likewise, application environment variables and secrets are not available to builds.

The deploy section

This section configures deployment-related settings such as the release command or deployment strategy.

Run one-off commands before releasing a deployment

To run a command in a temporary VM—using the app’s successfully built Docker image—before the release is deployed, define a release_command:

[deploy]
  release_command = "bin/rails db:prepare"

The release_command value replaces CMD in the temporary VM. This is useful for running database migrations before app VMs are created or updated with the new release. Note that the Docker image’s ENTRYPOINT is not overridden by the release_command; ENTRYPOINT always runs.

The temporary VM has full access to the network, environment variables and secrets, but not to persistent volumes. Changes made to the filesystem on the temporary VM will not be retained or deployed. The building/compiling of your project should be done in your Dockerfile. If you need to modify persistent volumes or configure your application, consider making use of CMD or ENTRYPOINT in your Dockerfile, or of a process group command.

As of flyctl v0.0.508, the temporary VM inherits its size from the largest Machine in the app’s default process group. (On new or empty apps, it also defaults to a larger size, using the shared-cpu-2x Machine preset.)

A non-zero exit status from this command will stop the deployment. fly deploy will display logs from the command. Logs are available via fly logs as well.

To ensure the command runs in a specific region, such as dfw, set PRIMARY_REGION = 'dfw' in your application environment in fly.toml or with fly deploy -e PRIMARY_REGION=dfw. Setting PRIMARY_REGION is important if you’re running database replicas in multiple regions.
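For example, a minimal [env] entry using the dfw region from the example above:

[env]
  PRIMARY_REGION = "dfw"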

The environment variable RELEASE_COMMAND=1 is set within the temporary release VM. You can use this to define behavior in your Dockerfile’s ENTRYPOINT that’s conditional on being run on a release VM.

Picking a deployment strategy

[deploy]
  strategy = "bluegreen"

strategy controls the way a new release replaces the previous release. Different strategies offer trade-offs between downtime and reliability.

strategy may also be specified at deploy time with flyctl deploy --strategy.

The available strategies are:

rolling: The default strategy for apps with or without volumes. One by one, each running Machine is taken down and replaced by a new release Machine.

immediate: Replace all Machines with new releases immediately without waiting for health checks to pass. This is useful in emergency situations where you’re confident about release health and can’t wait for health checks.

canary: Boots a single, new Machine with the new release, verifies its health, and then proceeds with a rolling restart strategy.

bluegreen: For every running Machine, a new one is booted alongside it in the same region. Once all of the new Machines pass health checks, traffic gets migrated to the new Machines and the old Machines are destroyed. If your app is scaled to 2 or more Machines, this strategy can reduce deploy time by running tasks in parallel. You need to configure at least one health check to use the bluegreen strategy. The bluegreen strategy doesn’t work if Machines have volumes attached.

Note: If max-per-region is set to 1, then the default strategy is set to rolling. This happens because canary needs to temporarily run more than one VM to work correctly. The bluegreen strategy will behave similarly with max-per-region set to 1.

The env variables section

[env]
  LOG_LEVEL = "debug"
  RAILS_ENV = "production"
  S3_BUCKET = "my-app-production"

This optional section allows setting non-sensitive information as environment variables in the application’s runtime environment.

For sensitive information, such as credentials or passwords, use the secrets CLI command.

Env variable names are strictly case-sensitive and cannot begin with FLY_ (as this could clash with the runtime environment). Env values can only be strings.

Secrets take precedence over env variables with the same name.

Note: In Apps V2, the primary_region option sets the PRIMARY_REGION environment variable within a Machine and overrides any value set in the [env] section.

The http_service section

Apps V2 only: For apps that only need HTTP and HTTPS services, you can replace the [[services]] section with this simpler alternative.

An [http_service] section defines a service that listens on ports 80 and 443. Port 80 will have an HTTP handler. Port 443 will have a TLS and HTTP handler. You can configure additional services on different ports by adding [[services]] sections.

As with the more verbose [[services]], you can specify a list of processes to limit the service definition to Machines belonging to process groups that should receive HTTP requests.

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0
  [http_service.concurrency]
    type = "requests"
    soft_limit = 200
    hard_limit = 250
  • processes: For apps with multiple processes. The process group that this service belongs to. Define process groups in the processes section.
  • internal_port: The port this service (and application) will use to communicate with clients. The default is 8080. We recommend applications use the default.
  • force_https: A boolean which determines whether to enforce HTTP to HTTPS redirects.
  • auto_stop_machines: Whether to automatically stop an application’s machines when there’s excess capacity, per region. If there’s only one machine in a region, then the machine is stopped if it has no traffic. The Fly Proxy runs a process to automatically stop machines every few minutes. The default is true.
  • auto_start_machines: Whether to automatically start an application’s machines when a new request is made to the application and there’s no excess capacity, per region. If there’s only one machine in a region, then it’s started whenever a request is made to the application. The default is true.
  • min_machines_running: The number of Machines to keep running, in the primary region only, when auto_stop_machines = true.

We recommend setting auto_stop_machines and auto_start_machines to the same value to avoid having machines that either never start or never stop. Learn more about automatically starting and stopping V2 app Fly machines, including how Fly Proxy determines excess capacity in a region using the soft_limit setting.

http_service.concurrency

The [http_service.concurrency] section has the same settings as [services.concurrency], which are:

  • type specifies what metric is used to determine when to scale up and down, or when a given instance should receive more or less traffic (load balancing). The two supported values are connections and requests.

connections: Load balance and scale based on the number of concurrent tcp connections. This is the default when unspecified. This is also the default when fly.toml is created with fly launch.

requests: Load balance and scale based on the number of http requests. This is recommended for web services, since multiple requests can be sent over a single tcp connection.

  • hard_limit : When an application instance is at or over this number of concurrent connections or requests, the system will stop sending new traffic to that application instance. The system will bring up another instance if the auto scaling policy supports doing so.
  • soft_limit : When an application instance is at or over this number of concurrent connections or requests, the system will deprioritize sending new traffic to that application instance and only send it to that application instance if all other instances are also at or above their soft_limit. The system will likely bring up another instance if the auto scaling policy for the application supports doing so.

http_service.http_options.response.headers

Add or remove HTTP response headers.

In the following example, Example-Header will be removed from the final response that Fly Proxy sends, while Example-Header-1 and Example-Multi-Value and their values will be added to the response header.

  [http_service.http_options.response.headers]
    Example-Header = false
    Example-Header-1 = "value"
    Example-Multi-Value = ["value1", "value2"]

http_service.tls_options

Configure the TLS versions and ALPN protocols that Fly’s edge will use to terminate TLS for your application with:

[http_service.tls_options]
  alpn = ["h2", "http/1.1"]
  versions = ["TLSv1.2", "TLSv1.3"]
  default_self_signed = false
  • alpn: Array of strings indicating how to handle ALPN negotiations with clients.
  • versions: Array of strings indicating which TLS versions are allowed.
  • default_self_signed: When true, serve a self-signed certificate if no certificate exists. Default is false.

Fly.io can also terminate TLS only and pass through directly to your service. For more information, refer to services.ports.tls_options.

http_service.checks

To configure health checks for your http service, you can use the http_service.checks section. These checks expect a successful HTTP status in response (i.e., 2xx). Here is an example of an http_service.checks section:

[[http_service.checks]]
  grace_period = "10s"
  interval = "30s"
  method = "GET"
  timeout = "5s"
  path = "/"

Roughly translated, this section says every thirty seconds, perform an HTTP GET on the root path (e.g. http://appname.fly.dev/) looking for it to return an HTTP 200 status within five seconds. The parameters of a check are listed below.

Times are in milliseconds unless units are specified.

  • grace_period: The time to wait after a VM starts before checking its health. Make sure this is long enough for your app to start up. For example, if your app takes 2 seconds to start up, give it some runway by setting grace_period to at least 3 seconds.
  • interval: The time between connectivity checks. Balance the interval against the grace period: if the interval is long and your grace_period is shorter than your app’s startup time, the health check will take longer and add to your deployment time.
  • timeout: The maximum time a connection can take before being reported as failing its health check.
  • restart_limit: Only applicable to V1 (Nomad) apps. The number of consecutive HTTP check failures to allow before attempting to restart the VM. The default is 0, which disables restarts based on failed HTTP health checks.
  • method: The HTTP method to be used for the check.
  • path: The path of the URL to be requested.
  • protocol: The protocol to be used (http or https).
  • tls_server_name: If the protocol is https, the hostname to use for TLS certificate validation.
  • tls_skip_verify: When true (and using HTTPS protocol) skip verifying the certificates sent by the server.
  • http_service.checks.headers: A sub-section of http_service.checks. Its key/value pairs specify the headers and header values passed with the check request, as in the sketch below.
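A sketch of a check with a headers sub-section; the header name and value are placeholders:

[[http_service.checks]]
  interval = "30s"
  method = "GET"
  timeout = "5s"
  path = "/"
  [http_service.checks.headers]
    X-Custom-Check = "expected-value"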

The services sections

The services sections define how Fly Proxy connects requests that hit an app’s public Anycast or private Flycast address to services running within Machines, and configure other Fly Proxy behavior for a service. If a service only needs HTTP and HTTPS on standard ports, you can use the less verbose http_service.

A services section itself is a table of tables in TOML, so it is delimited with double square brackets. Tables within [[services]] configure additional features and behaviors for the service, and are described in subsequent sections.

At least one services.ports entry is required for each services section, to set a port and handler for incoming requests.

An app can have:

  • No services section (and no http_service section): The application cannot be reached from the public internet, nor by a Flycast address; this is typical of apps that talk only over 6PN private networking to other apps on the same network.
  • One services section (or an http_service section): One internal port mapped to one or more external ports.
  • Multiple services sections (or an http_service section and one or more services sections): Mapping multiple internal ports to multiple external ports.

Example (note the double square brackets):

[[services]]
  internal_port = 8080
  protocol = "tcp"
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0

Settings:

  • processes: For apps with multiple process groups, the process group or groups that this service belongs to. Define process groups in the processes section.
  • internal_port : The port this service (and application) will use to communicate with clients. The default is 8080. We recommend applications use the default.
  • protocol : The protocol that this service will use to communicate. Typically tcp for most applications, but can also be udp.
  • auto_stop_machines: Whether to automatically stop an application’s machines when there’s excess capacity, per region. If there’s only one machine in a region, then the machine is stopped if it has no traffic. The Fly Proxy runs the checks to automatically stop machines every few minutes. The default is true.
  • auto_start_machines: Whether to automatically start an application’s machines when a new request is made to the application and there’s no excess capacity, per region. If there’s only one machine in a region, then it’s started whenever a request is made to the application. The default is true.
  • min_machines_running: The number of Machines to keep running, in the primary region only, when auto_stop_machines = true.

We recommend setting auto_stop_machines and auto_start_machines to the same value to avoid having machines that either never start or never stop. Learn more about automatically starting and stopping V2 app Fly machines, including how Fly Proxy determines excess capacity in a region using the soft_limit setting.

services.ports

For each services section in your fly.toml, i.e. for each external port you want to accept connections on, you need a services.ports section. The section is denoted by double square brackets like this:

  [[services.ports]]
    handlers = ["http"]
    port = 80
    force_https = true  # optional

This example defines an HTTP handler on port 80.

  • handlers : An array of strings, each string selecting a handler process to terminate the connection with at the edge. Here, the http handler will accept HTTP traffic and pass it on to the internal_port of the application, which we defined in the services section above. For the list of available handlers, and how they manage network traffic, see the Public Network Services documentation.
  • port : An integer representing the external port to listen on (ports 1-65535).
  • start_port : For a port range, the first port to listen on.
  • end_port : For a port range, the last port to listen on.
  • force_https: A boolean which determines whether to enforce HTTP to HTTPS redirects.

You can have more than one services.ports section. The default configuration, for example, contains two. We’ve already seen one above. The second one defines an external port 443 for secure connections, using the tls handler.

  [[services.ports]]
    handlers = ["tls", "http"]
    port = "443"

For UDP applications, make sure to bind the application to the same port as defined in the relevant services.ports section. For example, the configuration below will listen on port 5000 and the application will need to bind to fly-global-services:5000 to receive traffic. Leave handlers unset for UDP services.

[[services]]
  internal_port = 5000
  protocol = "udp"
  [[services.ports]]
    port = 5000

Instead of using multiple port definitions, you can specify a range of ports. For example, the configuration below will listen on the ports 8080, 8081, 8082, 8083, 8084 and 8085:

[[services.ports]]
  handlers = ["tls"]
  start_port = 8080
  end_port = 8085

services.ports.http_options.response.headers

Add or remove HTTP response headers.

In the following example, Example-Header will be removed from the final response that Fly Proxy sends, while Example-Header-1 and Example-Multi-Value and their values will be added to the response header.

  [services.ports.http_options.response.headers]
    Example-Header = false
    Example-Header-1 = "value"
    Example-Multi-Value = ["value1", "value2"]

services.ports.proxy_proto_options

Configure the version of the PROXY protocol that your app accepts. Version 1 is the default.

For example:

[[services.ports]]
  handlers = ["proxy_proto"]
  port = "5000"
  proxy_proto_options = { version = "v2" }
  • version : A string to indicate that the TCP connection uses PROXY protocol version 2. The default when not set is version 1.

services.ports.tls_options

Configure the TLS versions and ALPN protocols that Fly’s edge will use to terminate TLS for your application with:

  [[services.ports]]
    handlers = ["tls", "http"]
    port = "443"
    tls_options = { "alpn" = ["h2", "http/1.1"], "versions" = ["TLSv1.2", "TLSv1.3"] }
  • alpn : Array of strings indicating how to handle ALPN negotiations with clients.
  • versions : Array of strings indicating which TLS versions are allowed.
  • default_self_signed: When true, serve a self-signed certificate if no certificate exists. Default is false.

Fly.io can also terminate TLS only and pass through directly to your service. This works for a variety of applications that can benefit from offloading TLS termination and accept the unencrypted connection.

One use case is applications using HTTP/2, like gRPC. Fly’s edge terminates TLS and sends h2c (HTTP/2 without TLS) directly to your application through our backhaul. The config below will negotiate HTTP/2 with clients, and then send h2c to the application:

  [[services.ports]]
    handlers = ["tls"]
    port = "443"
    tls_options = { "alpn" = ["h2"] }

services.concurrency

The services concurrency sub-section configures how to measure load for an application to inform load balancing and, for legacy apps only, scaling.

This section is a simple list of key/values, so the section is denoted with single square brackets like so:

  [services.concurrency]
    type = "connections"
    hard_limit = 25
    soft_limit = 20

type specifies what metric is used to determine when a given instance has reached a concurrency limit. The two supported values are connections and requests.

connections: Load balance based on the number of concurrent tcp connections. This is the default when unspecified. This is also the default when fly.toml is created with fly launch.

requests: Load balance based on the number of http requests. This is recommended for web services, since multiple requests can be sent over a single tcp connection.

  • hard_limit : When an application instance is at or over this number of concurrent connections or requests, the system will stop sending new traffic to that application instance. For Nomad apps only, the system will bring up another instance if the autoscaling policy supports doing so.
  • soft_limit : When an application instance is at or over this number of concurrent connections or requests, the system will deprioritize sending new traffic to that application instance and only send it to that application instance if all other instances are also at or above their soft_limit. For Nomad apps only, the system will likely bring up another instance if the auto scaling policy for the application supports doing so.

services.tcp_checks

When a service is running, Fly Proxy can check up on it by connecting to a port. The services.tcp_checks section defines parameters for those checks. For example, the default tcp_checks looks like this:

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"

Times are in milliseconds unless units are specified.

  • grace_period: The time to wait after a VM starts before checking its health. Make sure this is long enough for your application to start up. For example, if your app takes 2 seconds to start up, give it some runway by setting this to at least 3 seconds.
  • interval: The time between connectivity checks. If the interval is long, and the grace_period is shorter than your app’s startup time, then the health check will take longer and will add to your deployment time.
  • restart_limit: Only applicable to V1 (Nomad) apps. The number of consecutive TCP check failures to allow before attempting to restart the VM. The default is 0, which disables restarts based on failed TCP health checks.
  • timeout: The maximum time a connection can take before being reported as failing its health check.

services.http_checks

Another way of checking that a service is running is through HTTP checks, as defined in the services.http_checks section. These checks are more thorough than services.tcp_checks, as they require not just a connection but a successful HTTP status in response (i.e., 2xx). Here is an example of a services.http_checks section:

  [[services.http_checks]]
    interval = 10000
    grace_period = "5s"
    method = "get"
    path = "/"
    protocol = "http"
    timeout = 2000
    tls_skip_verify = false
    [services.http_checks.headers]

Roughly translated, this section says every ten seconds, perform an HTTP GET on the root path (e.g. http://appname.fly.dev/) looking for it to return an HTTP 200 status within two seconds. The parameters of an http_check are listed below.

Times are in milliseconds unless units are specified.

  • grace_period: The time to wait after a VM starts before checking its health. Make sure this is long enough for your app to start up. For example, if your app takes 2 seconds to start up, give it some runway by setting grace_period to at least 3 seconds.
  • interval: The time between connectivity checks. Balance the interval against the grace period: if the interval is long and your grace_period is shorter than your app’s startup time, the health check will take longer and add to your deployment time.
  • timeout: The maximum time a connection can take before being reported as failing its health check.
  • restart_limit: Only applicable to V1 (Nomad) apps. The number of consecutive HTTP check failures to allow before attempting to restart the VM. The default is 0, which disables restarts based on failed HTTP health checks.
  • method: The HTTP method to be used for the check.
  • path: The path of the URL to be requested.
  • protocol: The protocol to be used (http or https).
  • tls_skip_verify: When true (and using HTTPS protocol) skip verifying the certificates sent by the server.
  • services.http_checks.headers: A sub-section of services.http_checks. Its key/value pairs specify the headers and header values passed with the http_check call; see the example below.
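For example, a populated version of the headers sub-section shown empty above, with a placeholder header name and value:

  [[services.http_checks]]
    interval = 10000
    method = "get"
    path = "/"
    protocol = "http"
    timeout = 2000
    [services.http_checks.headers]
      X-Custom-Check = "expected-value"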

Note: The services.http_checks section is optional and not generated in the default fly.toml file.

The mounts section

This section supports the Volumes feature for persistent storage. The section has two required entries: source and destination.

[mounts]
  source = "myapp_data"
  destination = "/data"
  processes= ["disk"] # optional - attach volumes to Machines that belong to one or more process groups

source

The source is a volume name that this app should mount. Any volume with this name, in the same region as the app and that isn’t already mounted, may be mounted. A volume of this name must exist in some region for the application to deploy.

destination

The destination is the directory where the source volume should be mounted on the running app.

processes

Optionally, you can specify a list of processes to limit volume mounts by process group. You can even specify two different [[mounts]] sections to mount different source volumes to VMs in different process groups, as in the sketch below. Note that you need to use double brackets when there are multiple [[mounts]] sections defined.
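For example, a sketch using two hypothetical volumes (web_data and worker_data) and two hypothetical process groups (web and worker):

[[mounts]]
  source = "web_data"
  destination = "/data"
  processes = ["web"]

[[mounts]]
  source = "worker_data"
  destination = "/data"
  processes = ["worker"]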

The checks section

If your app doesn’t have public-facing services, or you want independent health checks that don’t affect request routing, use this top-level checks section instead of [[services.checks]].

Unlike service-level checks, top-level checks require:

  • A port number
  • A unique name
  • The type of check (http or tcp)
[checks]
  [checks.name_of_your_http_check]
    grace_period = "30s"
    interval = "15s"
    method = "get"
    path = "/path/to/status"
    port = 5500
    timeout = "10s"
    type = "http"
    [checks.name_of_your_http_check.headers]
      Content-Type = "application/json"
      Authorization = "super-secret"

  [checks.name_of_your_tcp_check]
    grace_period = "30s"
    interval = "15s"
    port = 1234
    timeout = "10s"
    type = "tcp"

Fields are very similar to [[services.checks]]:

  • port: Internal port to connect to. Needs to be available on 0.0.0.0. Required.
  • type: Either tcp or http. Required.
  • grace_period: The time to wait after a VM starts before checking its health. Make sure you give your app enough time to start up. For example, if your app takes 2 seconds to start up, give it some runway by setting this to at least 3 seconds.
  • interval: The time between check runs. If your grace_period is shorter than your app’s startup time, and interval is too long, checks will increase deployment times.
  • processes: For apps with multiple processes. The process group to apply the health checks to, as in the sketch after this list. Define process groups in the processes section.

For http checks only:

  • method: The HTTP method to be used for the check.
  • path: The path of the URL to be requested.
  • timeout: The maximum time a connection can take before being reported as failing its health check.
  • headers: Key/value pairs that will be passed as HTTP headers on the check request.

Again, times are in milliseconds unless units are specified.
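For example, a sketch of a TCP check scoped to a hypothetical web process group (the check name, port, and times are placeholders):

[checks]
  [checks.web_alive]
    grace_period = "30s"
    interval = "15s"
    port = 8080
    timeout = "10s"
    type = "tcp"
    processes = ["web"]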

The processes section

The processes section allows you to define process groups to be run on separate VMs within a single app. Learn more about running multiple process groups in an app.

Each machine can run a different command on start, allowing you to reuse your code base for different tasks (web server, queue worker, etc.).

To run multiple processes, you’ll need a [processes] block containing a map of names to the commands that start each process.

[processes]
  web = "bundle exec rails server -b [::] -p 8080"
  worker = "bundle exec sidekiqswarm"

Furthermore, you can “match” a specific process (or processes) to an [http_service] or [[services]] configuration. For example:

[http_service]
  processes = ["web"] # this service only applies to the web process
  internal_port = 8080
  force_https = true

Volumes can also be assigned to specific processes; use double brackets to include more than one [[mounts]] section, and mount differently named volumes to VMs in different process groups:

[[mounts]]
  source = "data"
  destination = "/data"
  processes = ["app"]

The metrics section

Fly apps can be configured to export custom metrics to the Fly.io-hosted Prometheus service. Add this section to fly.toml.

[metrics]
port = 9091       # default for most prometheus clients
path = "/metrics" # default for most prometheus clients

Check out Metrics on Fly.io for more information about collecting metrics for your apps.

The statics sections

When statics are set, requests under url_prefix that are present as files in guest_path will be delivered directly to clients, bypassing your web server. These assets are extracted from your Docker image and delivered directly from our proxy on worker hosts.

[[statics]]
  guest_path = "/app/public"
  url_prefix = "/public"

Each [[statics]] block maps a URL prefix to a path inside your container. You can have up to 10 mappings in an application.

The “guest path” — the path inside your container where the files to serve are located — can overlap with other static mappings; the URL prefix should not (so, two mappings to /public/foo and /public/bar are fine, but two mappings to /public are not).
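For example, a sketch of two valid mappings whose guest paths overlap but whose URL prefixes don’t:

[[statics]]
  guest_path = "/app/public"
  url_prefix = "/public/foo"

[[statics]]
  guest_path = "/app/public"
  url_prefix = "/public/bar"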

Caveats

This feature should not be compared directly with a CDN, for the following reasons:

  • This feature does not exempt you from having to run a web service in your VM.

  • Statics will not find index.html at the root. The full path must be requested.

  • You can’t set Cache-Control or any other headers on assets. If you need those, you’ll need to deliver them from your application and set the relevant headers.

  • Assets are not delivered, by default, from all edge Fly.io regions. Rather, assets are delivered from the regions the application is deployed in.

  • statics does not honor symlinks. So, if /app/public in your container is actually a symlink to something like /app-39403/public, you’ll want to use the absolute original path in your statics configuration.

The files section

When files are set, the contents from one of raw_value, local_path, or secret_name will be written to the Machine at the provided guest_path. The contents of both raw_value and secret_name must be base64 encoded.

Examples:

[[files]]
  guest_path = "/path/to/hello.txt"
  raw_value = "aGVsbG8gd29ybGQK"

[[files]]
  guest_path = "/path/to/secret.txt"
  secret_name = "SUPER_SECRET"

You can optionally restrict which Machine(s) contain the file using the processes field (see the processes section). For example:

[[files]]
  guest_path = "/path/to/config.yaml"
  local_path = "/local/path/config.yaml"
  processes = ["web"]

The experimental section

This section is for flags and feature settings which have yet to be promoted into the main configuration.

[experimental]
cmd = ["path/to/command", "arg1", "arg2"]
entrypoint = ["path/to/command", "arg1", "arg2"]
exec = ["path/to/command", "arg1", "arg2"]

cmd

This overrides the CMD set by the Dockerfile. It should be specified as an array of strings, as seen in the example above.

entrypoint

This overrides the ENTRYPOINT set by the Dockerfile. It should be specified as an array of strings, as seen in the example above.