Overview

Data Theorem provides several integrations that collect information about your GKE deployments to help you manage your Kubernetes security posture. As with the data collected from all Data Theorem integrations, your GKE information is sent to our analyzer to build a deep, full-stack understanding of your applications and the resources they rely on.

The following integrations collect KSPM information:

  • GCP Account Integration

  • GCP Load Balancer Log Analysis Integration

  • Kubernetes In-Cluster Helm Chart Integration

KSPM Integrations

GCP Account Integration

The close integration of GKE and Google Cloud means that simply by onboarding your GCP account we gain good visibility into your GKE clusters and the GCP resources they use.

How to enable this integration: https://datatheorem.atlassian.net/wiki/x/AoBQAg

GCP Load Balancer Log Analysis Integration

The Data Theorem GCP Load Balancer integration works by forwarding HTTP request logs from your GCP load balancers to a log sink that publishes HTTP request metadata to a Data Theorem Pub/Sub queue.

Cloud Logging sinks can be created at multiple levels within GCP, and where the sink is created determines which logs it can forward: a sink created within a project can only forward logs from that project, whereas a sink created at the organization level or in a folder containing GCP projects can forward logs from any project within that organization or folder.

Data Theorem strongly recommends creating the sink at the organization level to maximize discovery, and to then use the sink’s log filter to limit which logs are sent to Data Theorem.

Information Collected

This integration collects non-sensitive runtime information about requests made to your load balancers. No request or response bodies or sensitive headers are collected.

Here is an example:

Screenshot 2024-05-17 at 10.52.44.png
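
If you want to inspect the kind of request metadata that will be forwarded before setting up the sink, you can query Cloud Logging directly. The sketch below is illustrative and assumes the gcloud CLI is installed and authenticated; [PROJECT_ID] is a placeholder:

# Preview HTTP load balancer log entries; the httpRequest field contains
# metadata such as requestMethod, requestUrl, status, userAgent and remoteIp
gcloud logging read 'resource.type="http_load_balancer"' \
    --project=[PROJECT_ID] \
    --limit=1 \
    --format=json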

Pre-requisites

  • Make sure that Logging is enabled on the Load Balancer Backend Service configuration (a gcloud sketch is shown after this list)

  • See the Google Cloud documentation for more information on how to enable Logging on the Load Balancer Backend Service
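
If you prefer the CLI, the following sketch shows one way to enable logging on a backend service with gcloud; the backend service name and scope are placeholders, not part of the official onboarding steps:

# Enable request logging on a global backend service, sampling 100% of requests
gcloud compute backend-services update [BACKEND_SERVICE_NAME] \
    --global \
    --enable-logging \
    --logging-sample-rate=1.0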

Create a Pub/Sub Topic For The Logs Routing Sink

If you are creating the logs routing sink at the organization or folder level, this should be your Data Theorem integration project; otherwise, it can be the same project where you plan to create the sink.

  1. In the GCP console, switch to the project where you will create the Pub/Sub topic

  2. Using the left-hand side menu, select Pub/Sub (in the Analytics section), then select Topics and click Create Topic

  3. In the Create Topic flow:

  4. Use datatheorem-logs-processing as the topic ID

  5. Uncheck "Add a default subscription" and confirm no other boxes are checked

  6. Click Create

STEP_1_CREATE_PUBSUB_TOPIC.png
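
As an alternative to the console flow above, a rough gcloud equivalent (placeholder in brackets) is:

# Create the topic; gcloud does not add a default subscription
gcloud pubsub topics create datatheorem-logs-processing \
    --project=[PROJECT_ID]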

Create The Cloud Logging Sink

If creating the sink at the organization (or folder) level, switch from the project to your organization (or folder)

  1. Using the left-hand side menu, select Logging (in the Observability section), then within the Configure subsection, select Log router

  2. Click Create Sink

  3. In the Sink details section, input datatheorem-logs-processing as the sink name, and click Next

  4. You will have to fill in the full ID of the sink destination. For a Pub/Sub topic, it must be formatted as pubsub.googleapis.com/projects/[PROJECT_ID]/topics/[TOPIC_ID] (replace [PROJECT_ID] and [TOPIC_ID] with your topic's information)

  5. Click Next

STEP_1_CREATE_SINK_1.png

Choose Logs to Include in Sink

You can click on Preview logs to see which logs will be included

  1. In the Choose logs to include section, add the following inclusion filter: resource.type="http_load_balancer"

  2. Click Create sink

STEP_1_CREATE_SINK_3.png
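
The console steps above can also be approximated with gcloud. This sketch creates an organization-level sink with the same inclusion filter; [ORG_ID] and [PROJECT_ID] are placeholders, and note that when using the CLI you typically have to grant the sink's writer identity permission to publish to the topic yourself (the console handles this for you):

# Create the sink at the organization level, forwarding only load balancer logs
gcloud logging sinks create datatheorem-logs-processing \
    pubsub.googleapis.com/projects/[PROJECT_ID]/topics/datatheorem-logs-processing \
    --organization=[ORG_ID] \
    --include-children \
    --log-filter='resource.type="http_load_balancer"'

# Allow the sink's writer identity to publish to the topic
WRITER_IDENTITY=$(gcloud logging sinks describe datatheorem-logs-processing \
    --organization=[ORG_ID] --format='value(writerIdentity)')
gcloud pubsub topics add-iam-policy-binding datatheorem-logs-processing \
    --project=[PROJECT_ID] \
    --member="$WRITER_IDENTITY" \
    --role='roles/pubsub.publisher'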

Create a Service Account To Authenticate Log Forwarding

  1. In the GCP console, switch back to the GCP project where the Pub/Sub topic was created

  2. Then, using the left-hand side menu, select the IAM & Admin section, and then select Service Accounts

  3. Click on Create Service Account at the top

  4. In the Service account details section, input datatheorem-logs-processing as the name

  5. Click CREATE AND CONTINUE

Allow Service Account to Assume Role To Authenticate Log Forwarding

  1. In the Grant this service account access to project section, click Select a role

  2. Filter for “token creator” in the role filter

  3. Select Service Account OpenID Connect Identity Token Creator role to allow Pub/Sub to generate OIDC tokens that will be used to authenticate requests

  4. Complete the service account creation by clicking on Done

STEP_2_CREATE_SA_2.png
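
A gcloud sketch mirroring the two console sections above (create the service account, then grant it the token creator role on the project); [PROJECT_ID] is a placeholder:

# Create the service account in the project that holds the Pub/Sub topic
gcloud iam service-accounts create datatheorem-logs-processing \
    --project=[PROJECT_ID]

# Grant it the Service Account OpenID Connect Identity Token Creator role
gcloud projects add-iam-policy-binding [PROJECT_ID] \
    --member="serviceAccount:datatheorem-logs-processing@[PROJECT_ID].iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountOpenIdTokenCreator"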

Collect Service Account’s OAuth2 ClientId

  1. On the service account listing, use the filter above the table and input datatheorem-logs-processing to find the newly created service account

  2. Copy the value from the OAuth 2 Client ID column and register it below

STEP_2_CREATE_SA_3.png
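
The same value can also be read from the CLI; this sketch assumes the service account was created in [PROJECT_ID]:

# Print the OAuth 2 Client ID of the service account
gcloud iam service-accounts describe \
    datatheorem-logs-processing@[PROJECT_ID].iam.gserviceaccount.com \
    --format='value(oauth2ClientId)'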

Create a Pub/Sub Subscription In The Same GCP project As The Pub/Sub Topic

  1. Using the left-hand side menu, select Pub/Sub (in the Analytics section), then within the PUB/SUB subsection, select Subscriptions

  2. Click on CREATE SUBSCRIPTION at the top

  3. Input datatheorem-logs-processing as the subscription ID

  4. Click on Select a Cloud Pub/Sub topic and input datatheorem to filter for, and select, the previously created Pub/Sub topic

  5. In the Delivery type section, select Push

  6. In the Endpoint URL text box, input https://api-protect-api.securetheorem.com/logs/v1/ingest/gcp_load_balancers

  7. Check the Enable authentication checkbox below the Endpoint URL, and select the previously created service account

  8. In the Retry policy section at the bottom, change the retry policy in the subscription to exponential backoff instead of immediate retry

  9. Click CREATE

STEP_3_CREATE_SUBSCRIPTION_2.png
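
For reference, a gcloud sketch that approximates the subscription configuration above (placeholders in brackets; the min/max retry delays give the exponential backoff behaviour the console step asks for):

# Create a push subscription that authenticates to Data Theorem with the
# previously created service account and retries with exponential backoff
gcloud pubsub subscriptions create datatheorem-logs-processing \
    --project=[PROJECT_ID] \
    --topic=datatheorem-logs-processing \
    --push-endpoint=https://api-protect-api.securetheorem.com/logs/v1/ingest/gcp_load_balancers \
    --push-auth-service-account=datatheorem-logs-processing@[PROJECT_ID].iam.gserviceaccount.com \
    --min-retry-delay=10s \
    --max-retry-delay=600s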

Kubernetes In-Cluster Helm Chart Integration

Overview

This integration uses a Helm chart to create a discovery deployment in the datatheorem namespace in your Kubernetes cluster. The deployment is not “in-line” for any of your cluster’s services. The application is stateless and designed to consume almost no resources, and it should not require any autoscaling.

It uses the datatheorem-service-account bound to the datatheorem-cluster-role with the following permissions for read-only access on a limited set of cluster resources:

rules:
  - apiGroups:
      - "*"
    resources:
      - deployments
      - pods/log
      - pods
      - services
      - endpoints
      - persistentvolumeclaims
      - ingresses
      - gateways
    verbs:
      - list
      - get
      - watch
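
Once the chart is installed (see the Installation section below), you can spot-check that the role only grants read access. The commands below are an illustrative sketch assuming the service account and namespace names shown above:

# Read access should be allowed...
kubectl auth can-i list pods \
    --as=system:serviceaccount:datatheorem:datatheorem-service-account

# ...while write access should be denied
kubectl auth can-i delete pods \
    --as=system:serviceaccount:datatheorem:datatheorem-service-account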

Installation

Step 1 : Extract all the items which you should have received during the onboarding process.
unzip DataTheorem-APIProtect-K8S_PROTECT.zip

Step 2 : Verify you are configured for the correct Kubernetes cluster
kubectl config current-context

Step 3 : Install API Protect

Add mirroring to the chosen endpoint. This step must be repeated for each endpoint.

helm install k8s-protect    \
    ./k8s-protect           \
    --create-namespace      \
    --namespace datatheorem \
    --wait

Step 4 : Verify the deployment

It should look something like this:

helm list -n datatheorem

NAME        NAMESPACE   REVISION UPDATED                                 STATUS   CHART             APP VERSION
k8s-protect datatheorem 1        2023-06-20 11:56:08.223009524 +0100 CET deployed k8s_protect-1.0.0 1.0.5

Test the deployment

helm test -n datatheorem k8s-protect

Finished.
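
You can also confirm the discovery pod is running in the datatheorem namespace:

kubectl get pods -n datatheorem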

Uninstallation, should it be required

helm uninstall -n datatheorem k8s-protect
