Ternary Self-Hosted

Learn how Ternary Self-Hosted is deployed within your environment, including architecture requirements, provisioning steps, network considerations, update strategy, and billing responsibilities.

Ternary Self-Hosted is a dedicated deployment of the full Ternary platform within a separate Google Cloud environment, isolated from Ternary’s primary SaaS infrastructure. This deployment can reside either in a dedicated project maintained by Ternary or within your own Google Cloud organization, depending on your security and governance requirements.

When deployed, Ternary Self-Hosted provides the complete functionality of the Ternary platform, including reporting, optimization, and administrative features, while ensuring that all infrastructure and data remain within your controlled environment. Customers continue to receive feature updates on a cadence aligned with the main product, either through automated or manual update processes.

Self-Hosted is typically selected by organizations with strict security, compliance, or data residency requirements. It provides greater control over infrastructure, stronger isolation from public SaaS environments, and the ability to align the deployment with enterprise networking, IAM, and policy standards.

The following sections outline the architecture, provisioning responsibilities, network considerations, update strategy, and operational cost guidance required for a successful initial and ongoing deployment.

Architecture Overview of Ternary Self-Hosted

Ternary Self-Hosted requires two Google Cloud projects to operate: Core and Access. Core contains most of the components of the solution, including BigQuery datasets, Cloud Run and Kubernetes-based container workloads, a Cloud SQL instance, and more. The Access project primarily centralizes the IAM identities used to access your cloud resources and is the project from which BigQuery jobs are launched against the data assets in Core. In total, the following services are used:

  • BigQuery
  • Cloud Tasks
  • Cloud Scheduler
  • Cloud Firestore
  • Cloud Logging
  • Cloud Storage
  • Cloud DNS
  • Cloud Memorystore (Redis)
  • Cloud Pub/Sub
  • Cloud SQL
  • Compute Engine
  • Kubernetes Engine
  • Secret Manager

Ternary requires Owner access on both projects to perform Terraform provisioning. Once provisioning is complete, Owner access can be revoked until the next time Terraform updates are required.
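As a convenience, the services listed above can be enabled on both projects with gcloud before provisioning begins. This is a sketch, not part of Ternary's Terraform automation; the API service names are the standard Google Cloud identifiers, and the project IDs below are the examples used in this guide:

```shell
# Enable the required Google Cloud APIs on both projects.
# Project IDs are the example names from this guide; substitute your own.
for project in ternary-core-123456 ternary-access-123456; do
  gcloud services enable \
    bigquery.googleapis.com \
    cloudtasks.googleapis.com \
    cloudscheduler.googleapis.com \
    firestore.googleapis.com \
    logging.googleapis.com \
    storage.googleapis.com \
    dns.googleapis.com \
    redis.googleapis.com \
    pubsub.googleapis.com \
    sqladmin.googleapis.com \
    compute.googleapis.com \
    container.googleapis.com \
    secretmanager.googleapis.com \
    --project "$project"
done
```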

How do you provision Ternary Self-Hosted in Google Cloud?

Provisioning Ternary Self-Hosted requires preparing two dedicated Google Cloud projects, granting temporary Terraform access, and ensuring networking is ready for Kubernetes deployment.

Step 1: Create the Required Google Cloud Projects

Create two Google Cloud Platform projects, preferably within a folder dedicated to Ternary inside your organization:

  • ternary-core-123456
  • ternary-access-123456

Replace 123456 with appropriate organizational identifiers. The Core project will host the primary infrastructure components, while the Access project centralizes IAM identities and launches BigQuery jobs against Core datasets.
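If you use the gcloud CLI, the two projects can be created under a dedicated folder as follows. This is a sketch; FOLDER_ID is a placeholder for your organization's folder, and billing must also be linked per your normal process:

```shell
# Create both projects under a folder dedicated to Ternary.
# FOLDER_ID is a placeholder; replace 123456 per your naming convention.
gcloud projects create ternary-core-123456 --folder=FOLDER_ID
gcloud projects create ternary-access-123456 --folder=FOLDER_ID
```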

Step 2: Grant Terraform Access and Share Project Details

Grant Owner access to the following service account at the folder level or directly on both projects: [email protected]

After access is granted, provide Ternary with the newly created project IDs and project numbers.
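A folder-level grant and the project detail lookup can be sketched with gcloud as follows. SERVICE_ACCOUNT_EMAIL stands in for the Terraform service account address Ternary provides:

```shell
# Grant temporary Owner access at the folder level.
# SERVICE_ACCOUNT_EMAIL is the address provided by Ternary; FOLDER_ID is yours.
gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
  --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --role="roles/owner"

# Retrieve the project numbers to share with Ternary.
gcloud projects describe ternary-core-123456 --format='value(projectNumber)'
gcloud projects describe ternary-access-123456 --format='value(projectNumber)'
```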

If a VPN is required to reach Google APIs within these projects, provision VPN accounts accordingly. For operational continuity, granting VPN access to multiple Ternary engineers is recommended.

Step 3: Prepare Networking for Kubernetes Engine

For Kubernetes Engine deployment, a VPC network, a primary subnet, and secondary IP ranges (for Pods and Services) must be available.

If project-level Owner permissions do not allow network creation, the internal networking team should provision the required subnetwork and secondary ranges in advance. Once networking resources are available, Ternary will create the Kubernetes cluster using Terraform.

If specific regions or instance types are required for compliance or performance reasons, these preferences should be defined before provisioning begins.
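The networking prerequisites for the cluster can be sketched with gcloud as follows. The network and subnet names, region, and CIDR ranges are placeholders; align them with your internal IP plan before provisioning:

```shell
# Example VPC and subnet with secondary ranges for GKE Pods and Services.
# All names, the region, and the CIDRs below are placeholders.
gcloud compute networks create ternary-vpc \
  --project=ternary-core-123456 \
  --subnet-mode=custom

gcloud compute networks subnets create ternary-gke-subnet \
  --project=ternary-core-123456 \
  --network=ternary-vpc \
  --region=us-central1 \
  --range=10.10.0.0/20 \
  --secondary-range=pods=10.20.0.0/16,services=10.30.0.0/20
```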

Step 4: Configure VPC Service Controls and Restricted Domains (If Applicable)

If VPC Service Controls are required, configure policies that allow Ternary to host an internal web application service within the Core project. Both Core and Access projects must be permitted to access the Google Cloud services listed in the Architecture Overview section, potentially through Private Google Access.

Browser access to the hosted web application must also be possible for Ternary operators, typically via a client VPN, to ensure the environment can be monitored and supported effectively.

If Restricted Domains are enforced, add the Ternary Google Workspace organization identifier C01kf7ryi to the allow list. This is preferred because it allows Ternary to use its terraform@ service account identity for automation operations within the projects. If this is not permitted, a different cloud identity within your organization can be used instead.
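Updating the allow list is done through the Domain Restricted Sharing organization policy constraint. A sketch using gcloud is shown below; ORG_ID and your own Workspace customer ID (shown as C0xxxxxxx) are placeholders:

```shell
# Allow-list sketch for the Domain Restricted Sharing constraint.
# ORG_ID and C0xxxxxxx (your Workspace customer ID) are placeholders;
# C01kf7ryi is Ternary's Workspace organization identifier.
cat > allowed-domains.yaml <<'EOF'
name: organizations/ORG_ID/policies/iam.allowedPolicyMemberDomains
spec:
  rules:
    - values:
        allowedValues:
          - C0xxxxxxx   # your organization's Workspace customer ID
          - C01kf7ryi   # Ternary's Workspace customer ID
EOF

gcloud org-policies set-policy allowed-domains.yaml
```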

Step 5: Define Ingress and DNS Configuration

Create a dedicated subdomain for the Ternary deployment, such as: ternary.mycompany.com

Provide DNS management access or identify an internal resource responsible for updating DNS records as needed.

Ternary will provision an HTTPS Load Balancer. If the load balancer must use a corporate SSL certificate and corporate IP space, ensure that:

  • The Core project is attached to the organization’s Shared VPC
  • Access to both the private and public components of the SSL certificate is available
  • Any firewall or web application firewall (WAF) requirements are defined prior to implementation

Network security constraints should be finalized before deployment to prevent reconfiguration later.
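Once the load balancer is provisioned, pointing the dedicated subdomain at it is a single DNS record update. The sketch below assumes a Cloud DNS managed zone; the zone name and IP address are placeholders, and the same record can instead be created in whatever DNS system your organization uses:

```shell
# Point the dedicated subdomain at the load balancer's IP.
# Zone name and IP address are placeholders.
gcloud dns record-sets create ternary.mycompany.com. \
  --project=ternary-core-123456 \
  --zone=mycompany-zone \
  --type=A \
  --ttl=300 \
  --rrdatas=203.0.113.10
```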

Step 6: Define Update Strategy

Determine how software updates will be delivered to the environment. If permitted under internal security policy, access can be granted to our CI/CD automation service account: [email protected]

This enables inclusion in weekly automated deployments. If automated access is not acceptable, updates can be applied manually at a lower cadence, typically monthly.

All updates are delivered through a Helm chart repository and container images accessible to the organization.
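For the manual cadence, an update amounts to pulling the latest chart and upgrading the release. The sketch below uses placeholder repository, chart, release, and namespace names; the actual values are provided by Ternary with the chart repository:

```shell
# Manual update sketch: refresh the chart repo and upgrade the release.
# Repo URL, chart, release, and namespace names are placeholders.
helm repo add ternary https://charts.example.com/ternary
helm repo update

helm upgrade ternary-platform ternary/ternary \
  --namespace ternary \
  --values values.yaml
```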

Billing Considerations for Ternary Self-Hosted

Ternary Self-Hosted is deployed within the organization’s own Google Cloud environment, and all infrastructure costs are incurred directly by the organization. While the deployment is designed to be cost-conscious and aligned with FinOps best practices, ongoing infrastructure spend remains the customer’s responsibility.

It is recommended to establish a dedicated budget for the Ternary operating stack after understanding the expected daily and weekly run rate. Ongoing costs can be monitored directly within the Ternary instance once provisioning is complete.
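A dedicated budget scoped to the two Ternary projects can be created with gcloud once the run rate is understood. This is a sketch; the billing account ID and budget amount are placeholders to be replaced with your own values:

```shell
# Budget sketch scoped to the two Ternary projects.
# BILLING_ACCOUNT_ID and the amount are placeholders.
gcloud billing budgets create \
  --billing-account=BILLING_ACCOUNT_ID \
  --display-name="Ternary Self-Hosted" \
  --budget-amount=3000USD \
  --filter-projects=projects/ternary-core-123456,projects/ternary-access-123456
```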

Estimated Retail Operating Costs (Without Rate Optimization)

The following estimates reflect approximate monthly infrastructure costs under standard retail pricing, assuming no commitment discounts or rate optimization:

  • $1M monthly cloud spend or below: approximately $1,000 USD or less per month
  • $1M–$5M monthly cloud spend: approximately $3,000 USD or less per month
  • Above $5M monthly cloud spend: approximately $5,000 USD or more per month

Actual costs vary based on billing volume, number of connected accounts, and data processing complexity.

Impact of Rate Optimization

Operating costs can be significantly reduced by aligning the deployment with existing rate optimization strategies, such as:

  • Purchasing resource-based or spend-based commitments so that the Ternary infrastructure benefits from discounted usage
  • Aligning Kubernetes Engine instance types with commitments already owned by the organization
  • Purchasing BigQuery annual slot commitments (1-year or 3-year terms)
  • Allowing Ternary workloads to consume pre-purchased BigQuery slot capacity

Integrating Ternary Self-Hosted into existing commitment and optimization strategies can materially lower the effective operating cost of the environment.