When I first started developing on Pantheon I was intimidated by adding another command line tool to my developer’s tool belt: Terminus. Terminus is a powerful utility that assists you in managing all facets of your Pantheon accounts and environments from the comfort of your command line. In this blog, I will share commands and recipes I use most frequently. 

The first step is installing and running Terminus, which Pantheon’s docs have covered.

Drush on Pantheon

Terminus does not replace Drush. To use Drush on Pantheon, you pass your Drush command through Terminus. Run terminus list to see the exhaustive list of commands available to you. As I mentioned earlier, Terminus can do almost anything, but for now we are focused on running our very first Drush command. Note: depending on the access level you have been granted in Pantheon, you may have to download Drush aliases via terminus aliases.

The best way to find out how to run a command is to pass “-h” to show help information. Let’s try running terminus remote:drush -h.  

As you can see under the usage, this command has an alias: drush. This means we can run terminus drush rather than terminus remote:drush. We also see that the command takes two arguments: site_env_id and drush_command. Each site on Pantheon has a unique site name, and each of its environments has an environment id. Out of the box you get dev, test, and live, and you can also create Multidev environments, which we will cover later.

Now, if you have organizational access you might have multiple sites, so it is best to find the right site first and then get the environment id. We can do this with the following commands:

terminus site:list --field=name
terminus env:list site_name_from_previous_command --fields=id,domain

For both commands we passed the field/fields option; I do this to limit the amount of information returned, since I have access to a lot of sites. Feel free to try both without the option.
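If you want to see everything at once, the two commands above can be combined in a small shell loop. This is just a sketch: the list_all_envs name is mine, it assumes Terminus is installed and you are logged in, and the ${TERMINUS:-terminus} indirection exists only so the command can be swapped out or stubbed.

```shell
# Hypothetical helper: print every environment id for every site you can access.
# Assumes terminus is installed and authenticated; the TERMINUS variable is
# only there so the command can be stubbed for testing.
list_all_envs() {
  for site in $("${TERMINUS:-terminus}" site:list --field=name); do
    echo "# $site"
    "${TERMINUS:-terminus}" env:list "$site" --field=id
  done
}
# usage: list_all_envs
```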

Now that we have both the site name and the environment id, we can run our Drush command against a specific environment. The site_env_id argument for terminus drush is the site name and the environment id joined by a period. For instance, if I had a site named “foo” and wanted to run a Drush command on live, my command would look like this:

terminus drush foo.live uli

We aren’t done yet. While this works for the example above, you will find that for more complicated Drush commands only the first word after the site_env_id argument gets passed along. To have Terminus pass everything after site_env_id, add “--”. Here is how you can log in as the user “foo_bar” in the live environment:

terminus drush foo.live -- uli foo_bar
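As a sketch of this pattern, here is a hypothetical helper that runs the same Drush command against every standard environment of a site. The run_everywhere name is mine, it assumes Terminus is installed and authenticated, and the ${TERMINUS:-terminus} indirection is only there so the call can be stubbed.

```shell
# Hypothetical helper: run one Drush command on dev, test, and live.
# Assumes terminus is installed and authenticated; TERMINUS exists only
# so the call can be stubbed for testing.
run_everywhere() {
  site="$1"; shift
  for env in dev test live; do
    "${TERMINUS:-terminus}" drush "$site.$env" -- "$@"
  done
}
# usage: run_everywhere foo cr   (clears caches in every environment)
```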

Next up: how to move “content,” i.e., databases and files, between environments.

Moving Content

Pantheon comes with three environments: dev, test, and live. The idea is that no feature or bug fix should reach production without first being tested in the lower environments. These environments will become stale, however, and you will have to import the database and files from live to keep them up to date. Run terminus list to see all the commands available to you. Which do you think is the right one? If you found env:content-clone, you are correct.

Content clone takes two arguments: the source and the target. The source must be the full site_name.env_id, while the target is only the env_id. Here is an example of copying both the database and files from live to dev:

terminus env:content-clone foo.live dev

There will be scenarios where you don’t need both the database and the files. For example, you might only need files when reproducing a gnarly front-end bug. To import only the files, pass the files-only option. This copies just the files down from live to dev:

terminus env:content-clone foo.live dev --files-only

This is great for working with the latest backup, but a situation might arise where you need a previous database backup. Getting an old backup and importing it requires a few commands:

1. Determine what backup you want to import

terminus backup:list foo.live --element=db

This will return a table of all backups for the environment. We need the file name for the backup we want to import.

2. Get the import link for the backup.

Now that we have the file name we can get the information for the backup.

terminus backup:info foo.live --file=database_file_name.sql.gz --element=db

Replace “database_file_name” with the file name of the backup you want to import. This will return several fields, but we only care about the URL.

3. Import the backup

terminus import:database foo.dev url

Take the URL from step two and substitute it for the url placeholder. This imports the specific database backup into our dev environment.
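The three steps above can be glued together in one small function. This is a sketch: the import_old_backup name is mine, it assumes Terminus is installed and authenticated, it assumes backup:info accepts the --field output option (like other Terminus info commands) to print just the URL, and the TERMINUS indirection exists only for stubbing.

```shell
# Hypothetical helper: look up the URL of a named live database backup and
# import it into dev. Assumes terminus is installed and authenticated, and
# that backup:info accepts --field=url to print only the URL.
import_old_backup() {
  site="$1"; file="$2"
  url=$("${TERMINUS:-terminus}" backup:info "$site.live" --file="$file" --element=db --field=url)
  "${TERMINUS:-terminus}" import:database "$site.dev" "$url"
}
# usage: import_old_backup foo 2024-01-01_foo_live.sql.gz
```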

Pantheon’s three starting environments (dev, test, and live) all point to the master branch at different tags. You may be asking yourself, “how can I test my features or bug fixes before a merge to master?” and the answer is Pantheon’s Multidev system. Multidev allows you to quickly spin up new site instances that are incredibly helpful for automated CI/CD or testing code. And like all things in Pantheon, if you can do it in the UI you can do it in Terminus.

Creating and Managing Multidev Environments

Multidev environments are crucial to any development workflow, so let’s get started with the commands for spinning one up. Here is how we create a new multidev called “new_feature” on our foo site:

terminus multidev:create foo new_feature

If you run that command, you will get an error message because we are not following Pantheon’s branch naming convention. Take a look at the warning in the getting started section. To fix this, change new_feature to new-feature.

The multidev:create command performs three actions:

  1. Creates the new environment.
  2. Syncs the environment to the branch “new-feature.” If the branch doesn’t exist, it creates one.
  3. Clones down the database and files.

If you have an enterprise role, your aliases should already sync; if not, you will have to download the new aliases via terminus aliases.

Since multidev instances are identical to live, it is best practice to lock down the environment so no malicious user can take advantage of a bug introduced in new code. Locking down adds HTTP basic auth to the environment, forcing users to provide a username and password to visit the site. Here is an example of locking down our new-feature multidev environment with an almost uncrackable username and password:

terminus lock:enable foo.new-feature foo bar
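Since creating and locking tend to go together, the two commands can be wrapped in one helper. A sketch only: the new_locked_multidev name is mine, it assumes Terminus is installed and authenticated, and the TERMINUS indirection exists only for stubbing.

```shell
# Hypothetical helper: create a multidev environment and immediately lock it
# behind HTTP basic auth. Assumes terminus is installed and authenticated.
new_locked_multidev() {
  site="$1"; branch="$2"; user="$3"; pass="$4"
  "${TERMINUS:-terminus}" multidev:create "$site" "$branch"
  "${TERMINUS:-terminus}" lock:enable "$site.$branch" "$user" "$pass"
}
# usage: new_locked_multidev foo new-feature foo bar
```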

Armed with this knowledge, we will review how to release code through the dev, test, and live workflow.

Release Workflow

To get our feature or bug fix code deployed to dev we are going to have to merge to master. Behind the scenes, Pantheon has a listener for merges on master and will create the environment-specific tag that will end up on the server. We can start this process through terminus. The following command will merge our new-feature multidev environment into dev:

terminus multidev:merge-to-dev foo.new-feature

Now that the code is in master, we can use a separate set of commands to move the code from dev to test and finally test to live.

terminus env:deploy foo.test --note="Release 1.x.x"

The note option allows us to leave a commit message on the merge. We typically default to the semantic version of the release so we can easily trace the code change back to our version control system. Once testing is completed and we are happy there are no bugs, we can deploy to live:

terminus env:deploy foo.live --note="1.x.x"

Again, when working with Terminus you can always pass “-h” to learn more about a command. For example, both env:deploy and multidev:merge-to-dev have an updatedb option that runs any database updates as part of the command. Note: if any of your code changes require a config import, you will have to run that manually through Drush.

Now that we’ve covered how to use Terminus in Pantheon I wanted to leave you with one last recipe for getting your local site up and running with the latest content using Terminus.

Local Development

Getting your local site in sync with the hosted live site usually involves two pieces: the database and the files. To get a local copy of the database, I follow these three steps (continuing with the example site “foo”):

1. Create a database backup

terminus backup:create foo.live --element=db

This creates a database backup for the live environment and saves it to the cloud.

2. Download the database locally

terminus backup:get foo.live --element=db --to=./db.sql.gz

This retrieves the most recent live database backup and saves it locally to db.sql.gz.

3. Using drush, import the database

gunzip -c ./db.sql.gz | drush @alias sqlc

This unzips the database backup and pipes it straight into Drush’s SQL client (sqlc is the alias for sql-cli).
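The three steps above combine into one refresh function. This is a sketch: the refresh_local_db name is mine, it assumes Terminus is installed and authenticated and that the given Drush alias points at your local site, and the TERMINUS/DRUSH indirections exist only for stubbing.

```shell
# Hypothetical helper: pull a fresh live database into a local site.
# Assumes terminus is installed and authenticated, and that the Drush
# alias passed in points at your local site.
refresh_local_db() {
  site="$1"; alias="$2"
  "${TERMINUS:-terminus}" backup:create "$site.live" --element=db
  "${TERMINUS:-terminus}" backup:get "$site.live" --element=db --to=./db.sql.gz
  gunzip -c ./db.sql.gz | "${DRUSH:-drush}" "$alias" sqlc
}
# usage: refresh_local_db foo @foo.local
```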

Unfortunately, files are a different story, as they can vary so drastically from site to site. For instance, I work on several sites with 100 GB of files. If I were to download those locally for all my projects, not only would it take a huge amount of time, but I would also use up all my local disk space. There are two solutions for getting files locally. The first is the stage_file_proxy module, which links all files to your hosted environment, saving valuable local disk space.

If it is absolutely necessary, you can get all files locally in a manageable, scalable way through the Terminus rsync plugin. It wraps rsync, so once you’ve downloaded the initial files you can pass the ignore-existing option and copy down only the files that don’t yet exist locally.

terminus rsync foo.live:files .

This downloads all files from the live environment locally, keeping the same tree structure. Huzzah!

You should now feel comfortable using Terminus for your day-to-day development tasks. We’ve covered a lot of commands and recipes, but always remember the help option: “-h” really does help when determining how to correctly use a Terminus command. Hopefully you found this useful. If I missed your favorite Terminus command, recipe, or workflow, please leave a comment below.