TCP Endpoints
Overview
ngrok's TCP endpoints enable you to deliver any network service that uses a TCP-based protocol. For example, they are commonly used to establish connectivity for:
- Remote access protocols like SSH, VNC and RDP
- Databases like MySQL, Postgres and MSSQL
- IoT protocols like MQTT
- Gaming servers like Minecraft
If the service you are delivering uses TLS, prefer to create a TLS Endpoint.
Because the TCP protocol is low-level, ngrok offers very little functionality to manipulate TCP traffic beyond restricting access via IP Restrictions.
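For example, here is a minimal sketch using the Python agent SDK that exposes local SSH over a TCP endpoint and allows connections only from a single source CIDR. It assumes the SDK exposes the IP Restrictions module via an ip_restriction_allow_cidrs option; the CIDR value shown is illustrative.
import ngrok

# Expose local SSH over TCP, but accept connections only from one CIDR
# (option name and CIDR are illustrative; see the IP Restrictions docs).
listener = ngrok.forward("localhost:22", authtoken_from_env=True,
                         proto="tcp",
                         ip_restriction_allow_cidrs="203.0.113.0/24")
print(f"Ingress established at: {listener.url()}")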
Example Usage
Random Address
Listen on a random TCP address.
- Agent CLI
- Agent Config
- SSH
- Go
- Javascript
- Python
- Rust
- Kubernetes Controller
ngrok tcp 22
tunnels:
example:
proto: tcp
addr: 22
ssh -R 0:localhost:22 v2@connect.ngrok-agent.com tcp
import (
"context"
"net"
"golang.ngrok.com/ngrok"
"golang.ngrok.com/ngrok/config"
)
func ngrokListener(ctx context.Context) (net.Listener, error) {
return ngrok.Listen(ctx,
config.TCPEndpoint(),
ngrok.WithAuthtokenFromEnv(),
)
}
Go Package Docs:
const ngrok = require("@ngrok/ngrok");
(async function () {
const listener = await ngrok.forward({
addr: 8080,
authtoken_from_env: true,
proto: "tcp",
});
console.log(`Ingress established at: ${listener.url()}`);
})();
Javascript SDK Docs:
import ngrok
listener = ngrok.forward("localhost:8080", authtoken_from_env=True,
proto="tcp")
print(f"Ingress established at: {listener.url()}");
Python SDK Docs:
use ngrok::prelude::*;
async fn listen_ngrok() -> anyhow::Result<impl Tunnel> {
let sess = ngrok::Session::builder()
.authtoken_from_env()
.connect()
.await?;
let tun = sess
.tcp_endpoint()
.listen()
.await?;
println!("Listening on URL: {:?}", tun.url());
Ok(tun)
}
Rust Crate Docs:
TCP Endpoints are not supported via the Kubernetes Ingress Controller
Fixed Address
Listen on the TCP Address 1.tcp.eu.ngrok.io:12345. You must create this TCP address ahead of time; see TCP Addresses.
- Agent CLI
- Agent Config
- SSH
- Go
- Javascript
- Python
- Rust
- Kubernetes Controller
ngrok tcp 3389 --remote-addr 1.tcp.eu.ngrok.io:12345
tunnels:
example:
proto: tcp
addr: 3389
remote_addr: 1.tcp.eu.ngrok.io:12345
ssh -R 1.tcp.eu.ngrok.io:12345:localhost:3389 v2@connect.eu.ngrok-agent.com tcp
import (
"context"
"net"
"golang.ngrok.com/ngrok"
"golang.ngrok.com/ngrok/config"
)
func ngrokListener(ctx context.Context) (net.Listener, error) {
return ngrok.Listen(ctx,
config.TCPEndpoint(
config.WithRemoteAddr("1.tcp.eu.ngrok.io:12345"),
),
ngrok.WithRegion("eu"),
ngrok.WithAuthtokenFromEnv(),
)
}
Go Package Docs:
const ngrok = require("@ngrok/ngrok");
(async function () {
const listener = await ngrok.forward({
addr: 8080,
authtoken_from_env: true,
proto: "tcp",
remote_addr: "1.tcp.eu.ngrok.io:12345",
});
console.log(`Ingress established at: ${listener.url()}`);
})();
Javascript SDK Docs:
import ngrok
listener = ngrok.forward("localhost:8080", authtoken_from_env=True,
proto="tcp",
remote_addr="1.tcp.eu.ngrok.io:12345")
print(f"Ingress established at: {listener.url()}");
Python SDK Docs:
use ngrok::prelude::*;
async fn listen_ngrok() -> anyhow::Result<impl Tunnel> {
let sess = ngrok::Session::builder()
.authtoken_from_env()
.connect()
.await?;
let tun = sess
.tcp_endpoint()
.remote_addr("1.tcp.eu.ngrok.io:12345")
.listen()
.await?;
println!("Listening on URL: {:?}", tun.url());
Ok(tun)
}
Rust Crate Docs:
TCP Endpoints are not supported via the Kubernetes Ingress Controller
Forward to non-local service
Forward to a non-local Postgres instance listening on your network at 192.168.1.2:5432.
- Agent CLI
- Agent Config
- SSH
- Go
- Javascript
- Python
- Rust
- Kubernetes Controller
ngrok tcp 192.168.1.2:5432
tunnels:
example:
proto: tcp
addr: 192.168.1.2:5432
ssh -R 0:192.168.1.2:5432 v2@connect.ngrok-agent.com tcp
Forwarding to a non-local address is not supported by the Go SDK
const ngrok = require("@ngrok/ngrok");
(async function () {
const listener = await ngrok.forward({
addr: "192.168.1.2:5432",
authtoken_from_env: true,
proto: "tcp",
});
console.log(`Ingress established at: ${listener.url()}`);
})();
Javascript SDK Docs:
import ngrok
listener = ngrok.forward("192.168.1.2:5432", authtoken_from_env=True,
proto="tcp")
print(f"Ingress established at: {listener.url()}");
Python SDK Docs:
Forwarding to a non-local address is not supported by the Rust SDK
TCP Endpoints are not supported via the Kubernetes Ingress Controller
PROXY Protocol
Add a PROXY protocol header to each connection to your upstream service. This transmits connection information, such as the original client IP address, to the upstream (a sketch of reading this header on the upstream side follows the examples below).
- Agent CLI
- Agent Config
- SSH
- Go
- Javascript
- Python
- Rust
- Kubernetes Controller
ngrok tcp 22 --proxy-proto=2
tunnels:
example:
proto: tcp
addr: 22
proxy_proto: 2
PROXY protocol is not supported via SSH.
import (
"context"
"net"
"golang.ngrok.com/ngrok"
"golang.ngrok.com/ngrok/config"
)
func ngrokListener(ctx context.Context) (net.Listener, error) {
return ngrok.Listen(ctx,
config.TCPEndpoint(
config.WithProxyProto(2),
),
ngrok.WithAuthtokenFromEnv(),
)
}
Go Package Docs:
const ngrok = require("@ngrok/ngrok");
(async function () {
const listener = await ngrok.forward({
addr: 8080,
authtoken_from_env: true,
proto: "tcp",
proxy_proto: "2",
});
console.log(`Ingress established at: ${listener.url()}`);
})();
Javascript SDK Docs:
import ngrok
listener = ngrok.forward("localhost:8080", authtoken_from_env=True,
proto="tcp",
proxy_proto="2")
print(f"Ingress established at: {listener.url()}");
Python SDK Docs:
use ngrok::config::ProxyProto;
use ngrok::prelude::*;
async fn listen_ngrok() -> anyhow::Result<impl Tunnel> {
let sess = ngrok::Session::builder()
.authtoken_from_env()
.connect()
.await?;
let tun = sess
.tcp_endpoint()
.proxy_proto(ProxyProto::V2)
.listen()
.await?;
println!("Listening on URL: {:?}", tun.url());
Ok(tun)
}
Rust Crate Docs:
TCP Endpoints are not supported via the Kubernetes Ingress Controller
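For reference, here is a minimal, illustrative Python sketch of how an upstream service might read the PROXY protocol v2 header that the examples above enable. It handles IPv4 only, omits most error handling, and the local port (22022) is an assumption for the example, not something ngrok requires.
import socket
import struct

# 12-byte PROXY protocol v2 signature that precedes every header.
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def read_proxy_v2(conn):
    """Consume a PROXY protocol v2 header; return (client_ip, client_port) or None."""
    header = conn.recv(16, socket.MSG_WAITALL)  # signature + ver/cmd + family/proto + length
    if len(header) < 16 or not header.startswith(PP2_SIGNATURE):
        return None
    family = header[13] >> 4                          # 0x1 = AF_INET (IPv4)
    addr_len = struct.unpack("!H", header[14:16])[0]  # length of the address block
    addrs = conn.recv(addr_len, socket.MSG_WAITALL)
    if family == 0x1 and addr_len >= 12:
        src_ip, _dst_ip, src_port, _dst_port = struct.unpack("!4s4sHH", addrs[:12])
        return socket.inet_ntoa(src_ip), src_port
    return None                                       # IPv6/UNIX not handled in this sketch

# Tiny upstream server on an illustrative port: print the original client, then read data.
with socket.create_server(("127.0.0.1", 22022)) as server:
    conn, _ = server.accept()
    with conn:
        print("original client:", read_proxy_v2(conn))
        payload = conn.recv(4096)  # the remainder of the stream is the normal TCP payload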
Behavior
Endpoints
When your TCP endpoint is online, it will be available as an Endpoint resource. Endpoints have URLs, but because there is no standard URL scheme for TCP, ngrok renders them with a tcp:// scheme.
TCP Addresses
If you would like your TCP endpoints to be fixed, you must first provision a TCP address. TCP Addresses include a hostname and port component and look like 1.tcp.ngrok.io:12345. When you provision a TCP address, a random address will be assigned to you. If you delete a TCP address, there is no way to provision the same one again. TCP addresses may be managed via the dashboard and via the API.
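For example, here is a minimal sketch that reserves a TCP address through the HTTP API's /reserved_addrs endpoint using the requests library; it assumes an API key in the NGROK_API_KEY environment variable, and the description value is illustrative.
import os
import requests

# Reserve a new TCP address; ngrok assigns the hostname and port for you.
resp = requests.post(
    "https://api.ngrok.com/reserved_addrs",
    headers={
        "Authorization": f"Bearer {os.environ['NGROK_API_KEY']}",
        "Ngrok-Version": "2",
        "Content-Type": "application/json",
    },
    json={"description": "SSH for my example device"},  # description is illustrative
)
resp.raise_for_status()
print(resp.json()["addr"])  # e.g. 1.tcp.ngrok.io:12345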
Customization
TCP Addresses are assigned on an ngrok-controlled hostname with a randomly-assigned port; you may not choose either the hostname or the port.
You may, however, simulate a customized hostname by creating a CNAME record to the hostname of your assigned TCP address. If you do so, be aware that all ports on that hostname, even those provisioned to other accounts, will then be available on your domain.
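For example, assuming you created a CNAME from the hypothetical domain tcp.example.com to the hostname of your assigned address, clients would connect using your domain together with the assigned port:
import socket

# tcp.example.com is a hypothetical CNAME pointing at the hostname of your
# assigned TCP address (e.g. 1.tcp.ngrok.io); the port is still the assigned one.
with socket.create_connection(("tcp.example.com", 12345)) as conn:
    conn.sendall(b"hello\n")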
Reference
Edges
Edges enable you to centrally manage your endpoints' Module configurations in the ngrok dashboard or API instead of defining them via an Agent or Agent SDK.
- A TCP Edge is attached to one or more TCP Addresses. For each TCP Address, it creates a TCP Endpoint that it listens for traffic on.
- When a TCP Address is associated with a TCP Edge, agents may no longer start endpoints on that TCP Address. You can always detach a TCP Address from your Edge if you want to create Endpoints on it from an Agent or Agent SDK.
- Modules on a TCP Edge are attached directly to the edge itself. There are no Routes.
- When you create a TCP Edge via the dashboard, it will automatically create a new TCP Address and assign it to your Edge.
- When you create a TCP Edge via the dashboard, it will automatically create a tunnel group backend with a unique label.
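As an illustration, a TCP Edge can also be created with the API's /edges/tcp endpoint and attached to an already-reserved TCP address. This sketch uses the requests library and assumes an API key in NGROK_API_KEY; the address and description values are placeholders.
import os
import requests

headers = {
    "Authorization": f"Bearer {os.environ['NGROK_API_KEY']}",
    "Ngrok-Version": "2",
    "Content-Type": "application/json",
}

# Create a TCP Edge bound to an already-reserved TCP address (value illustrative).
edge = requests.post(
    "https://api.ngrok.com/edges/tcp",
    headers=headers,
    json={"description": "example TCP edge", "hostports": ["1.tcp.ngrok.io:12345"]},
)
edge.raise_for_status()
print(edge.json()["id"])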
Modules
Use modules to modify the behavior of traffic flowing through your endpoints.
| Module | Description |
| --- | --- |
| IP Restrictions | Allow or deny traffic based on the source IP of connections |
Observability
Use ngrok's events system to capture logs of TCP connections to your endpoints.
When TCP connections to your endpoints are closed, tcp_connection_closed.v0 events are published.
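As an illustration, the sketch below creates an event subscription for tcp_connection_closed.v0 via the API's /event_subscriptions endpoint. It assumes an API key in NGROK_API_KEY and an existing event destination; the destination ID shown is a placeholder.
import os
import requests

headers = {
    "Authorization": f"Bearer {os.environ['NGROK_API_KEY']}",
    "Ngrok-Version": "2",
    "Content-Type": "application/json",
}

# Subscribe tcp_connection_closed.v0 events to an existing event destination;
# the destination ID below is a placeholder.
sub = requests.post(
    "https://api.ngrok.com/event_subscriptions",
    headers=headers,
    json={
        "description": "log closed TCP connections",
        "sources": [{"type": "tcp_connection_closed.v0"}],
        "destination_ids": ["ed_EXAMPLE"],
    },
)
sub.raise_for_status()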
Errors
If an error is encountered while handling a TCP connection to one of your endpoints (e.g. no available backends, a module rejected the connection, or an internal server error), the connection is closed. Because of the low-level nature of the TCP protocol, there is no mechanism to transmit an error code back to the client.
You can use the observability primitives above to understand how errors were handled for a given connection.
Pricing
TCP endpoints are available on all plans.
Fixed TCP Addresses are available on the Pro and Enterprise plans.