Server-Side Swift with Vapor

Third Edition · Early Access 1 · iOS 13 · Swift 5.2 · Vapor 4 Framework · Xcode 11.4


34. Production Concerns & Redis
Written by Tanner Nelson


Note: This update is an early-access release. This chapter has not yet been updated to Vapor 4.

One of the most exciting parts of programming is sharing what you’ve created with the world. For web applications, this usually means deploying your project to a server that is accessible via the Internet.

Web servers can be dedicated machines in a data center, containers in a cloud, or even a Raspberry Pi sitting in your closet. As long as your server can run Swift and has a connection to the Internet, you can use it to deploy Vapor applications.

In this chapter, you’ll learn the advantages and disadvantages of some common deployment methods for Vapor. You’ll also learn how to properly optimize, configure, and monitor your applications to increase efficiency and uptime.

Using environments

Every instance of Application has an associated Environment. Each environment has a String name and a Bool isRelease flag. Common environments include production, development, and testing. The current environment is stored in the environment property of all Containers.

print(req.environment) // "production"

For the most part, the container environment is there for you to use as you wish while configuring your application.

However, some parts of Vapor will behave differently when running in a release environment. Some differences include hiding debug information in 500 errors and reducing the verbosity of error logs.

Because of this, make sure you are using the production environment when running your application in production.
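For example, here is a minimal sketch — not the book’s exact code — of branching on the environment inside Vapor 3’s standard configure(_:_:_:) function:

public func configure(
  _ config: inout Config,
  _ env: inout Environment,
  _ services: inout Services
) throws {
  if env.isRelease {
    // Production-only settings, e.g. quieter logging.
  } else {
    // Development conveniences, e.g. verbose logging.
  }
  // ...
}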

Choosing an environment

Most templates include code to detect the current environment when the application runs. If you open main.swift in your project’s Run module, you’ll see something similar to the following:

import App

try app(.detect()).run()

Environment.detect() reads the environment from the --env (or -e) command-line flag, defaulting to development when no flag is passed. You choose the environment when launching the executable:

swift run Run --env development
swift run Run -e prod

Dynamic services

To better understand how you can make use of environments, take a look at the following example, which dynamically configures MySQL database credentials.

// 1
services.register { container -> MySQLDatabaseConfig in
  // 2
  switch container.environment {
  case .production:
    // Use production credentials, ideally read from environment variables.
    return MySQLDatabaseConfig(
      hostname: Environment.get("DATABASE_HOSTNAME") ?? "localhost",
      username: Environment.get("DATABASE_USER") ?? "vapor",
      password: Environment.get("DATABASE_PASSWORD") ?? "password",
      database: Environment.get("DATABASE_DB") ?? "vapor")
  default:
    // Use development credentials.
    return MySQLDatabaseConfig(
      hostname: "localhost", username: "vapor",
      password: "password", database: "vapor")
  }
}

Compiling with optimizations

While developing your application, you’ll usually compile code using Swift’s debug build mode. Debug build mode compiles quickly and includes useful debug information in the resulting binary. Xcode can use this information to provide richer diagnostics for fatal errors and to support breakpoint debugging.

Building release in Xcode

You enable release build mode in Xcode using the scheme editor. To build in release mode, edit the scheme for your app’s executable target — this is usually Run. Then, select Release under Build Configuration.

Building release using SPM

When deploying to Linux, you’ll need to use SPM to compile release executables since Xcode is not available. By default, SPM compiles in debug build mode. To specify release mode, append -c release to your build command.

swift build -c release
swift test -c release

Note on testing

Building and testing your code regularly in production-like environments is important for catching issues early. Some modules you will use, like Foundation, have different implementations depending on the platform. Subtle differences in implementation can cause bugs in your code. Sometimes, an API’s implementation may not yet exist for a platform. Container environments like Docker help you address this by making it easy to test your code on platforms different from your host machine, such as testing on Linux while developing on macOS.
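For example, one quick way to run your test suite on Linux from a macOS host is the official Swift Docker image (a sketch; the image tag is an assumption — pick one matching your Swift version):

docker run --rm -v "$(pwd)":/app -w /app swift:5.2 swift test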

Using Docker

Docker is a great tool for testing and deploying your Vapor applications. Deployment steps are coded into a Dockerfile you can commit to source control alongside your project. You can execute this Dockerfile to build and run instances of your app locally for testing, or on your deployment server for production. This has the advantage of making it easy to test deployments, create new ones, and track changes to how you deploy your code.
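As a rough sketch — not the book’s exact Dockerfile; the image tag, port and paths are assumptions — a minimal Dockerfile for a Vapor 3 app whose executable target is named Run might look like this:

# Assumption: the official Swift image; pick a tag matching your toolchain.
FROM swift:5.2
WORKDIR /app
COPY . .
# Compile in release mode, as described above.
RUN swift build -c release
# Vapor listens on port 8080 by default.
EXPOSE 8080
# Bind to 0.0.0.0 so the app is reachable from outside the container.
CMD [".build/release/Run", "serve", "-e", "prod", "--hostname", "0.0.0.0"]

You could then build and run it with docker build -t my-app . followed by docker run -p 8080:8080 my-app.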

Process monitoring

To run a Vapor application, you simply need to launch the executable generated by SPM.

swift build -c release
.build/release/Run serve -e prod

However, if the process crashes or the server reboots, nothing will start your app again automatically. That’s where a process monitor comes in.

Supervisor

Supervisor, also called supervisord, is a popular process monitor for Linux. This program allows you to register processes that you would like to start and stop on demand. If one of those processes crashes, Supervisor will automatically restart it for you. It also makes it easy to store the process’s stdout and stderr in /var/log for easy access.

Install Supervisor using APT, then restart the service:

apt-get install supervisor
systemctl restart supervisor

Next, create a configuration file for your app, for example /etc/supervisor/conf.d/my-app.conf:

// 1
[program:my-app]
command=/path/to/my-app/.build/release/Run serve -e prod
// 2
autostart=true
autorestart=true
// 3
stderr_logfile=/var/log/my-app.err.log
stdout_logfile=/var/log/my-app.out.log

Finally, tell Supervisor to pick up the new configuration:

supervisorctl reread
supervisorctl update

Systemd

Another alternative that doesn’t require you to install additional software is systemd. It’s a standard part of the Linux distributions that Swift supports. For more on how to configure your app using systemd, see the chapter “Deploying with AWS”.
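As a rough sketch — paths and names are placeholders, not the book’s configuration — a minimal unit file at /etc/systemd/system/my-app.service might look like this:

[Unit]
Description=My Vapor app
After=network.target

[Service]
ExecStart=/path/to/my-app/.build/release/Run serve -e prod
Restart=always

[Install]
WantedBy=multi-user.target

You would then enable and start it with systemctl enable my-app and systemctl start my-app.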

Reverse Proxies

Regardless of where or how you deploy your Vapor application, it’s usually a good idea to host it behind a reverse proxy like nginx. nginx is an extremely fast, battle-tested and easy-to-configure HTTP server and proxy. While Vapor supports serving HTTP requests directly, proxying behind nginx can provide increased performance, security, and ease of use. For example, nginx can add support for TLS (SSL), public file serving and HTTP/2.

Installing Nginx

nginx is usually installed using APT on Ubuntu, but the exact steps may vary depending on your deployment method.

apt-get update
apt-get install nginx

Once installed, you can start, restart and stop the nginx service with systemctl:

systemctl start nginx
systemctl restart nginx
systemctl stop nginx

Next, add a server block for your app to nginx’s configuration, for example in /etc/nginx/sites-enabled/default:
server {
  ## 1
  server_name hello.com;

  ## 2
  listen 80;

  ## 3
  root /home/vapor/Hello/Public/;
  try_files $uri @proxy;
  
  ## 4
  location @proxy {
    ## 5
    proxy_pass http://127.0.0.1:8080;

    ## 6
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    ## 7
    proxy_connect_timeout 3s;
    proxy_read_timeout 10s;
  }
}
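After saving the configuration, you can validate it and reload nginx; these are standard nginx and systemd commands rather than anything Vapor-specific:

nginx -t
systemctl reload nginx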

Logging

Using Swift’s print function for logging is great during development and can even be a suitable option for some production use cases. Programs like Supervisor help aggregate your application’s print output into files on your server that you can access as needed.

For more control, Vapor provides a Logger service that you can resolve from any container. For example, the following route logs a message at the info level:

router.get("log-test") { req -> HTTPStatus in
    try req.make(Logger.self).info("The route was called")
    return .ok
}

You can also supply your own Logger implementation. Assuming you have a type named MyLogger that conforms to Logger, register it as a service and prefer it over the default:

services.register(Logger.self) { container in
    return MyLogger()
}
config.prefer(MyLogger.self, for: Logger.self)

Horizontal scalability

Finally, one of the most important concerns in designing a production-ready app is scalability. As your application’s user base grows and traffic increases, how will you keep up with demand? What will be your bottlenecks? When first starting out, a reasonable solution can be to increase your server’s resources as traffic grows by adding RAM, a better CPU, more disk space and so on. This is commonly referred to as scaling vertically. Scaling horizontally, by contrast, means running your application on additional servers and spreading traffic across them.

Load balancing

Now that you understand some of the benefits of horizontal scaling, you may be wondering how it actually works. The key to this concept is load balancers. Load balancers are light-weight, fast programs that sit in front of your application’s servers. When a new request comes in, the load balancer chooses one of your servers to send the request to.
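nginx itself can act as a simple load balancer. As a rough sketch — the upstream addresses and ports are placeholders — an upstream block spreads incoming requests across several instances of your app:

upstream vapor_app {
  ## Hypothetical app servers, each running the Vapor executable on port 8080.
  server 10.0.0.1:8080;
  server 10.0.0.2:8080;
}

server {
  listen 80;

  location / {
    proxy_pass http://vapor_app;
    proxy_set_header Host $host;
  }
}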

Sessions with Redis

To demonstrate how this works in an app, download the starter project for this chapter. The project is based on the TIL app from the first sections of this book. Open the project in Xcode and build the application. When a user logs in to the website, the application stores the user’s ID in an associated session. Currently, the application stores sessions in memory. This presents a couple of problems: sessions disappear whenever the application restarts, and they can’t be shared between multiple instances of the app running behind a load balancer. Storing sessions in Redis solves both problems.

In configure.swift, import Redis at the top of the file and register the Redis provider:

import Redis

try services.register(RedisProvider())

Next, configure a Redis database and prefer Redis as the backing store for the keyed cache that sessions use:

// 1
var redisConfig = RedisClientConfig()
// 2
if let redisHostname = Environment.get("REDIS_HOSTNAME") {
  redisConfig.hostname = redisHostname
}
// 3
let redis = try RedisDatabase(config: redisConfig)
// 4
databases.add(database: redis, as: .redis)

config.prefer(RedisCache.self, for: KeyedCache.self)

Finally, the application needs running PostgreSQL and Redis instances to connect to. You can start both locally with Docker:

# 1
docker run --name postgres -e POSTGRES_DB=vapor \
  -e POSTGRES_USER=vapor -e POSTGRES_PASSWORD=password \
  -p 5432:5432 -d postgres
# 2
docker run --name redis -p 6379:6379 -d redis
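To confirm the Redis container is accepting connections, you can ping it with redis-cli inside the container — a standard Redis command, not specific to the book:

docker exec -it redis redis-cli ping

If everything is running, the command prints PONG.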

Where to go from here?

You now understand the common pitfalls to avoid when moving your Swift web application to production. It’s time to put the best practices and useful tools listed here to use. Here are some additional resources that should prove invaluable as you continue to hone your skills:

Have a technical question? Want to report a bug? You can ask questions and report bugs to the book authors in our official book forum here.
© 2024 Kodeco Inc.
