    Applicable to:
    • Twingate Component: Connector
    • Platform: Linux, Docker, other container services

    Overview

    Twingate Technical Support will often request Twingate Connector debug level logs to troubleshoot reported issues. Because debug level logs are extremely verbose, the Twingate Connector is configured for error level logging by default.

    The process below covers how to enable debug logging for each deployment method and how to collect the logs to provide to Twingate Technical Support.

     

    Process

    The process for enabling debug level logging and collecting the logs depends on how the Twingate Connector was deployed. The two deployment methods are containerized and systemd, and the applicable steps for each are broken down below.

     

    Systemd Service (Linux or AWS AMI deployment)

    If you are using the Twingate systemd service, for example with our AMI deployment process, you need to add the TWINGATE_LOG_LEVEL variable to the Connector configuration file and then restart the service. Follow the steps below to do so; a brief verification sketch follows the steps.

    1. Enable debug level logging
      1. Add the line TWINGATE_LOG_LEVEL=7 to the /etc/twingate/connector.conf file:
        echo "TWINGATE_LOG_LEVEL=7" | sudo tee -a /etc/twingate/connector.conf
      2. Restart the Twingate Connector service:
        sudo systemctl restart twingate-connector
    2. Export Twingate Connector logs to file
      After allowing sufficient time or reproducing the issue on the applicable Twingate Connector, run the command below to export the Twingate Connector logs to a compressed file, /tmp/<hostname>_<timestamp>.log.gz, where <hostname> and <timestamp> are replaced with the actual values.
      ts=$(date -d "today" +"%Y%m%d%H%M") && sudo journalctl --utc -u twingate-connector | tee /tmp/$(hostname -s)_$ts.log && sudo gzip /tmp/$(hostname -s)_$ts.log
    3. Disable debug level logging (restore to Error level)
      Note:
      It is best practice to disable debug level logging when not actively troubleshooting Twingate Connector issues. Debug level logging left in place long term can result in unnecessary disk utilization.
      sudo sed -i '/TWINGATE_LOG_LEVEL=7/d' /etc/twingate/connector.conf && sudo systemctl restart twingate-connector
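
    At any point, you can check whether debug level logging is currently enabled and watch the live Connector output. The commands below are a quick verification sketch only; they assume the default /etc/twingate/connector.conf path used above.
      grep TWINGATE_LOG_LEVEL /etc/twingate/connector.conf
      sudo journalctl -u twingate-connector -f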

     

    Container Deployments

    If you are using Docker or another container service, you need to set the TWINGATE_LOG_LEVEL environment variable and redeploy the container. TWINGATE_LOG_LEVEL should be set to the value 7 to enable debug logs.

    For example, adding --env TWINGATE_LOG_LEVEL=7 to a docker run command enables debug logging, as sketched below. Specific instructions for each deployment type follow.
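
    For reference, a new Connector container can be started with debug logging enabled from the outset by including the variable in the run command. The example below is only a sketch: the twingate/connector image and the TWINGATE_NETWORK, TWINGATE_ACCESS_TOKEN, and TWINGATE_REFRESH_TOKEN variables are assumed to match the deployment command generated for your Connector, and the placeholders must be replaced with your own values.
      sudo docker run -d \
        --name twingate-connector \
        --restart unless-stopped \
        --env TWINGATE_NETWORK="<network name>" \
        --env TWINGATE_ACCESS_TOKEN="<access token>" \
        --env TWINGATE_REFRESH_TOKEN="<refresh token>" \
        --env TWINGATE_LOG_LEVEL=7 \
        twingate/connector:1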

     

    Docker Containers (Linux / macOS)

    1. Enable debug level logging
      For existing Docker containers, you can use the script below on the Docker host to enable debug level logging. It adds the environment variable TWINGATE_LOG_LEVEL=7 to the existing variables and starts a new container instance with the previous container's configuration. (You can verify the current log level at any point, as shown after these steps.)

      curl -s https://binaries.twingate.com/connector/docker-change-log-level.sh | sudo bash -s 7
    2. Export Twingate Connector logs to file
      After allowing sufficient time or reproducing the issue on the applicable Twingate Connector, run the command below on the Docker host to save all of the container's logs to a compressed file in the current directory:

      cont=<container ID or name> && ts=$(date -d "today" +"%Y%m%d%H%M") && sudo docker logs -t $cont 2>&1 | sudo tee ${cont}_${ts}.log && sudo gzip ${cont}_${ts}.log

      Note: Replace <container ID or name> with the Docker container ID or name. Additionally, 2>&1 is required in the command above to capture the full logging output from Docker (both stdout and stderr).

    3. Disable debug level logging (restore to Error level)
      Note: It is best practice to disable debug level logging when not actively troubleshooting Twingate Connector issues. Debug level logging left in place long term can result in unnecessary log disk utilization. To disable debug level logging, run the command below.

      curl -s https://binaries.twingate.com/connector/docker-change-log-level.sh | sudo bash
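
    At any point, you can confirm whether the running container has debug level logging enabled by inspecting its environment variables. This is a quick verification sketch only; replace <container ID or name> with your Connector container.
      sudo docker inspect <container ID or name> | grep TWINGATE_LOG_LEVEL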

     

    Other Container Deployments (ECS, Azure Container Instance, Kubernetes, etc.)

    1. Enable debug level logging
      1. Add the environment variable TWINGATE_LOG_LEVEL=7 to the Twingate Connector deployment YAML (see the Kubernetes sketch after this list for an example).
      2. Redeploy the container instance so the variable takes effect.
    2. Export Twingate Connector logs to file
      After sufficient time or reproducing the issue on the applicable Twingate Connector, export the Twingate Connector logs to a compressed file.

      The commands used to export the logs will vary depending on the underlying container infrastructure. Ensure that both stdout and stderr, along with timestamps, are exported, and zip the Connector logs before providing them to Twingate Technical Support.
      1. ECS - Refer to the below linked AWS documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/logs.html
      2. Azure Container Instance (ACI) - Refer to the below linked Azure documentation:
        https://docs.microsoft.com/en-us/azure/container-instances/container-instances-get-logs
        az container logs --resource-group <resource-group> --name <container-name>
      3. Kubernetes - Refer to the below linked Kubernetes documentation:
        https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs 
    3. Disable debug level logging (restore to Error level)
      Note:
      It is best practice to disable debug level logging when not actively troubleshooting Twingate Connector issues. Debug level logging left in place long term can result in unnecessary log disk utilization.

      Remove the environment variable TWINGATE_LOG_LEVEL=7 from the Twingate Connector deployment YAML and redeploy the container instance.
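
    For Kubernetes specifically, a minimal sketch of both steps might look like the following. The Deployment name twingate-connector is an assumption; adjust the resource name and namespace to match your environment.
      1. Add the variable to the container's env list in the Deployment spec:
        env:
          - name: TWINGATE_LOG_LEVEL
            value: "7"
      2. Export timestamped logs from the Connector and compress them before providing to Twingate Technical Support:
        ts=$(date +"%Y%m%d%H%M") && kubectl logs deployment/twingate-connector --timestamps > twingate-connector_$ts.log && gzip twingate-connector_$ts.log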