Using Teleport with OpenSSH
Teleport is fully compatible with OpenSSH and can be quickly set up to record and
audit all SSH activity. Using Teleport with OpenSSH has the advantage of getting you up
and running quickly, but in the long run, we recommend replacing OpenSSH with Teleport SSH.
We've outlined the reasons in OpenSSH vs Teleport SSH for Servers.
Teleport is a standards-compliant SSH proxy and can work in environments with existing SSH implementations, such as OpenSSH. This section will cover:
- Configuring the OpenSSH server `sshd` to join a Teleport cluster. Existing fleets of OpenSSH servers can be configured to accept SSH certificates dynamically issued by a Teleport CA.
- Configuring the OpenSSH client `ssh` to log in to nodes inside a Teleport cluster.
OpenSSH 6.9 is the minimum OpenSSH version compatible with Teleport. View your OpenSSH version with the command:
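The version can be checked with the standard `-V` flag:

```sh
# Print the OpenSSH client version (output goes to stderr)
ssh -V
```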
The recording proxy mode, although less secure, was added to allow Teleport users
to enable session recording for OpenSSH servers running `sshd`, which is helpful
when gradually transitioning large server fleets to Teleport.
We consider the "recording proxy mode" to be less secure for two reasons:
- It grants additional privileges to the Teleport proxy. In the default "node recording" mode, the proxy stores no secrets and cannot "see" the decrypted data. This makes a proxy less critical to the security of the overall cluster. But if an attacker gains physical access to a proxy node running in the "recording" mode, they will be able to see the decrypted traffic and client keys stored in the proxy's process memory.
- Recording proxy mode requires the use of SSH agent forwarding. Agent forwarding is required because, without it, a proxy would not be able to establish the second connection to the destination node.
Teleport proxy should be available to clients and be set up with TLS.
The examples below may include the use of the `sudo` keyword, token UUIDs, and users with
admin privileges to make following each step easier when creating resources from scratch.
- We discourage using `sudo` in production environments unless it's needed.
- We encourage creating new, non-root users or new test instances for experimenting with Teleport.
- We encourage adherence to the Principle of Least Privilege (PoLP) and Zero Admin best practices. Don't give users the `admin` role when the more restrictive `access` and `editor` roles will do instead.
- We encourage saving tokens into a file rather than sharing tokens directly as strings.
Learn more about Teleport Role-Based Access Control best practices.
Backing up production instances, environments, and/or settings before making permanent modifications is encouraged as a best practice. Doing so allows you to roll back to an existing state if needed.
To enable session recording for `sshd` nodes, the cluster must be switched to
"recording proxy" mode.
In this mode, the recording will be done on the proxy level:
```yaml
# snippet from /etc/teleport.yaml
auth_service:
  # Session recording must be set to "proxy" to work with OpenSSH
  session_recording: "proxy" # can also be "off" or "node" (default)
```
sshd must be told to allow users to log in with certificates generated
by the Teleport User CA. Start by exporting the Teleport CA public key:
Export the Teleport Certificate Authority certificate into a file and update SSH configuration to trust Teleport's CA:
```sh
# tctl needs to be run on the auth server
sudo tctl auth export --type=user | sed s/cert-authority\ // > teleport_user_ca.pub
sudo mv ./teleport_user_ca.pub /etc/ssh/teleport_user_ca.pub
echo "TrustedUserCAKeys /etc/ssh/teleport_user_ca.pub" | sudo tee -a /etc/ssh/sshd_config
```
Restart the SSH daemon. From then on, `sshd` will trust users who present a Teleport-issued certificate.
The next step is to configure host authentication.
When in recording mode, Teleport will check that the host certificate of any node a user connects to is signed by a Teleport CA. By default, this is a strict check. If the node presents just a key, or a certificate signed by a different CA, Teleport will reject the connection with an error message saying "ssh: handshake failed: remote host presented a public key, expected a host certificate".
You can disable strict host checks as shown below. However, this opens the possibility for Man-in-the-Middle (MITM) attacks and is not recommended.
```yaml
# snippet from /etc/teleport.yaml
auth_service:
  proxy_checks_host_keys: no
```
The recommended solution is to ask Teleport to issue valid host certificates for all OpenSSH nodes. To generate a host certificate, run this on your Teleport auth server:
```sh
# Creating host certs, with a list of every host to be accessed.
# Wildcard certs aren't supported by OpenSSH; each name must be a full FQDN.
# Management of the host certificates can become complex; this is another
# reason we recommend using Teleport SSH on nodes.
sudo tctl auth sign \
  --host=api.example.com,ssh.example.com,22.214.171.124,126.96.36.199 \
  --format=openssh \
  --out=api.example.com
```
The credentials have been written to api.example.com, api.example.com-cert.pub
You can use `ssh-keygen` to verify the contents:

```sh
ssh-keygen -L -f api.example.com-cert.pub
```
```
Type: ssh-rsa-cert-v01@openssh.com host certificate
Public key: RSA-CERT SHA256:ireEc5HWFjhYPUhmztaFud7EgsopO8l+GpxNMd3wMSk
Signing CA: RSA SHA256:/6HSHsoU5u+r85M26Ut+M9gl+HventwSwrbTvP/cmvo
Key ID: ""
Valid: after 2020-07-29T20:26:24
Critical Options: (none)
x-teleport-authority UNKNOWN OPTION (len 47)
x-teleport-role UNKNOWN OPTION (len 8)
```
Then add the following lines to `/etc/ssh/sshd_config` on all OpenSSH nodes, and restart `sshd`:
```
HostKey /etc/ssh/api.example.com
HostCertificate /etc/ssh/api.example.com-cert.pub
```
Now you can use `tsh ssh --port=22 user@api.example.com` to log in to any
`sshd` node in the cluster, and the session will be recorded.
If you want to use the OpenSSH `ssh` client for logging in to `sshd` servers behind a proxy
in "recording mode", you have to tell the `ssh` client to use the jump host and
enable SSH agent forwarding; otherwise, the recording proxy will not be able to
terminate the SSH connection to record it:
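A sketch of the invocation, with illustrative names (user `joe`, node `node1.example.com`, proxy `proxy.example.com` on the default proxy SSH port 3023). The commented form is the actual login command; the `-G` variant below it only prints the client options OpenSSH would apply, without connecting, so you can confirm agent forwarding is on:

```sh
# Actual login (requires a reachable cluster):
#   ssh -o "ForwardAgent yes" \
#       -o "ProxyCommand ssh -A -p 3023 %r@proxy.example.com -s proxy:%h:%p" \
#       -p 22 joe@node1.example.com
# Dry-run the same options with -G; no connection is made:
ssh -G -o "ForwardAgent yes" \
    -o "ProxyCommand ssh -A -p 3023 %r@proxy.example.com -s proxy:%h:%p" \
    -p 22 joe@node1.example.com | grep "^forwardagent"
```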
Note that agent forwarding is enabled twice: once from the client to the proxy
(mandatory if using a recording proxy), and then optionally from the proxy
to the end server, depending on whether you want your agent available on the end server.
To avoid typing all of this and use the usual `ssh user@host.example.com` form, users can update their
local SSH configuration file (`~/.ssh/config`).
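A minimal sketch of such a `~/.ssh/config` entry, with illustrative names (nodes matching `node*.example.com`, proxy at `proxy.example.com`). Note the host pattern must not match the proxy itself, or the `ProxyCommand` would recurse:

```
Host node*.example.com
    ForwardAgent yes
    ProxyCommand ssh -A -p 3023 %r@proxy.example.com -s proxy:%h:%p
```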
It's important to remember that SSH agent forwarding must be enabled on the client. Verify that a Teleport certificate is loaded into the agent after logging in:
```sh
# Login as Joe
tsh login --proxy=proxy.example.com --user=joe
# See if the certificate is present (look for "teleport:joe" at the end of the cert)
ssh-add -L
```
It is well-known that the Gnome Keyring SSH agent, used by many popular Linux desktops like Ubuntu, and
gpg-agent from GnuPG do not support SSH
certificates. We recommend using the
ssh-agent from OpenSSH.
Alternatively, you can disable SSH agent integration entirely using the
`--no-use-local-ssh-agent` flag or by setting the `TELEPORT_USE_LOCAL_SSH_AGENT`
environment variable to `false` when running `tsh`.
It is possible to use the OpenSSH client
ssh to connect to nodes within a
Teleport cluster. Teleport supports SSH subsystems and includes a
`proxy` subsystem that can be used, much as `netcat` is used with
`ProxyCommand`, to connect
through a jump host.
OpenSSH client configuration may be generated automatically by
tsh, or it can
be configured manually. In either case, make sure you are running OpenSSH's
ssh-agent, and have logged in to the Teleport proxy:
```sh
eval `ssh-agent`
tsh --proxy=root.example.com login
```
`ssh-agent` will print environment variables into the console. Either `eval` the
output as in the example above, or copy and paste the output into the shell you
will be using to connect to a Teleport node. The output exports the
`SSH_AUTH_SOCK` and `SSH_AGENT_PID` environment variables that allow OpenSSH
clients to find the SSH agent.
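For illustration, you can start an agent and confirm both variables are set (the socket path and PID will differ on your machine):

```sh
# Start an agent and export SSH_AUTH_SOCK / SSH_AGENT_PID into this shell
eval `ssh-agent`
# Both should now be non-empty
echo "$SSH_AUTH_SOCK"
echo "$SSH_AGENT_PID"
```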
Automatic OpenSSH client configuration is supported on Linux and macOS as of Teleport 7.0 and on Windows as of Teleport 7.2.
tsh can automatically generate the necessary OpenSSH client configuration to
connect using the standard OpenSSH client:
```sh
# on the machine where you want to run the ssh client
tsh --proxy=root.example.com config
```
This will generate an OpenSSH client configuration block for the root cluster
and all currently-known leaf clusters. Append this to your local OpenSSH config file
(`~/.ssh/config`) using your text editor of choice.
Once configured, log in to any node in the `root.example.com` cluster as any
principal listed in your Teleport profile, for example `ssh user@node1.root.example.com`.
If any trusted clusters exist, they are also configured, so their nodes can be reached the same way, for example `ssh user@node2.leaf1.example.com`.
When connecting to nodes where the Teleport daemon runs on a non-standard port
(other than the default 3022), the port may be specified, for example `ssh -p 4022 user@node1.root.example.com`.
If you switch between multiple Teleport proxy servers, you'll need to re-run
tsh config for each to generate the cluster-specific configuration.
Similarly, if trusted clusters are added or removed, be sure to re-run the above command and replace the previous configuration.
On your client machine, you need to import the public key of Teleport's host certificate. This will allow your OpenSSH client to verify that host certificates are signed by Teleport's trusted host CA:
```sh
# on the Teleport auth server
tctl auth export --type=host > teleport_host_ca.pub
# on the machine where you want to run the ssh client
cat teleport_host_ca.pub >> ~/.ssh/known_hosts
```
If you have multiple Teleport clusters, you have to export and set up these certificate authorities for each cluster individually.
If you use recording proxy mode and trusted clusters,
you need to set up the certificate authority from
the root cluster to match all nodes, even those that belong to leaf
clusters. For example, if your node naming scheme is
*.leaf2.example.com, then the
@certificate-authority entry should match
*.example.com and use the CA
from the root auth server only.
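For reference, the exported host CA appears in `~/.ssh/known_hosts` as a `@cert-authority` line; widening its host pattern as described above might look like this (key material elided, names illustrative):

```
# Matches nodes in the root cluster and in all leaf clusters
@cert-authority *.example.com ssh-rsa AAAA... type=host
```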
Lastly, configure the OpenSSH client to use the Teleport proxy when connecting
to nodes with matching names. Edit
~/.ssh/config for your user or
/etc/ssh/ssh_config for global changes:
```
# root.example.com is the jump host (proxy). credentials will be obtained from the
# openssh agent.
Host root.example.com
    HostName 192.168.1.2
    Port 3023

# connect to nodes in the root.example.com cluster through the jump
# host (proxy) using the same. credentials will be obtained from the
# openssh agent.
Host *.root.example.com
    HostName %h
    Port 3022
    ProxyCommand ssh -p 3023 %r@root.example.com -s proxy:%h:%p

# when connecting to a node within a trusted cluster with the name "leaf1.example.com",
# add the name of the cluster to the invocation of the proxy subsystem.
Host *.leaf1.example.com
    HostName %h
    Port 3022
    ProxyCommand ssh -p 3023 %r@root.example.com -s proxy:%h:%p@leaf1.example.com
```
When everything is configured properly, you can use `ssh` to connect to any node in the cluster.
Teleport uses OpenSSH certificates instead of keys, which means you cannot ordinarily connect to a Teleport node by IP address; you have to connect by
DNS name. This is because OpenSSH checks that the DNS name of the node you are connecting to is listed under the
Principals section of the OpenSSH certificate, to verify you are connecting to the correct node.
To connect to an OpenSSH server, pass `--port=<ssh port>` to the
`tsh ssh` command. Example: ssh as `root` to an OpenSSH server listening on port 22,
via Teleport (hostname illustrative): `tsh ssh --port=22 root@host.example.com`.
The principal/username (`root` in the example above) being used to connect must be listed in the Teleport user/role configuration.
When using a Teleport proxy in "recording mode", be aware of OpenSSH's built-in rate limiting. On large numbers of proxy connections, you may encounter errors like:

```
channel 0: open failed: connect failed: ssh: handshake failed: EOF
```

This is caused by the `MaxStartups` setting described in
`man sshd_config`. By default, OpenSSH only allows 10 unauthenticated connections at a time and starts
dropping connections 30% of the time when the number of connections goes over 10.
When it hits 100 unauthenticated connections, all new connections are
dropped. To increase the concurrency level, increase the value to something like
`MaxStartups 50:30:100`. This allows 50 concurrent unauthenticated connections before throttling begins and a hard maximum of 100.
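The same limit expressed as an `sshd_config` fragment (the three fields are start:rate:full):

```
# /etc/ssh/sshd_config
# Begin randomly dropping 30% of new unauthenticated connections
# once there are 50; refuse all once there are 100.
MaxStartups 50:30:100
```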
To revoke the current Teleport CA and generate a new one, run
`tctl auth rotate`. Unless you've highly automated your
infrastructure, we suggest proceeding with caution, as this will invalidate both the user
and host CAs, meaning that the new CAs will need to be exported to every OpenSSH-based machine again using
`tctl auth export` as above.