. . .

Kubernetes Deployment Setup


Introduction

One feature of the Executor is to act as a facade for a Kubernetes deployment environment, helping you bring your generated microservices into your Kubernetes environment.

For testing purposes and proofs of concept (POC) you can use the Executor to deploy your generated microservices directly to a Kubernetes instance.
However, this type of deployment is not recommended in productive Kubernetes clusters, because it conflicts with several Kubernetes best practices.

This page describes the configuration steps needed to set up the Executor in a non-productive and in a productive environment.


Setup Declarative Deployment (productive)

It is planned that the Executor will provide interfaces to generate a Kubernetes deployment set for each generated service, which can then be used in productive environments. Currently the focus is on the direct deployment described in the next chapter. We will provide more information about the declarative deployment in future releases.

Setup Direct Deployment (non-productive)

The general service configuration of the Executor is described in [Executor] Installation. This section contains the current Kubernetes-specific configuration for the Executor. It is recommended to use the following configuration only in POCs or testing stages, because it may conflict with your productive Kubernetes requirements.

The setup guide for deploying generated services into Kubernetes is split into two alternative parts. The first part describes the setup when your Executor is running inside of Kubernetes. The second part describes the setup when your Executor is running outside of Kubernetes (e.g. as a Debian package installation).

Executor running inside of Kubernetes

If your Executor is to be started inside of Kubernetes and you want to deploy generated services directly into the same Kubernetes cluster, you first need the following manifest files to set up your Executor.

configMap.yml

The ConfigMap object is used to inject the application.yml into the Spring Boot application container and will be referenced within the deployment.yml later on:

configMap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: executor
  namespace: default
data:
  application.yml: |
    spring:
      jmx:
        enabled: false
      cloud:
        consul:
          host:
          port:
          discovery:
            hostname: executor
            fail-fast: false
            health-check-path: "${server.servlet.context-path}/actuator/health"
            health-check-critical-timeout: 2m
            health-check-interval: 20s
            tags: "contextPath=${server.servlet.context-path}, version='@project.version@', notRegisterOnGateway, hiddenAPI"
      zipkin:
        enabled: false
      sleuth:
        enabled: false
      aop:
        auto: false
    server:
      port: 8070
      servlet:
        context-path: /executor
    executor:
      engine: kubernetes
      properties:
        count: 1
        memory: 300
        disk: 300
        cpu: 500
        javaJvmOpts: "-Xmx76M -Xss256k -XX:+UseG1GC"
        javaExecArgs: "--spring.cloud.consul.host=${spring.cloud.consul.host} --spring.cloud.consul.port=${spring.cloud.consul.port}"
        dockerEnvVars:
          spring_cloud_consul_host: ${spring.cloud.consul.host}
          spring_cloud_consul_port: ${spring.cloud.consul.port}
        resourceDir: ""
      docker:
        repository:
        email:
        username:
        password:
      kubernetes:
        namespace: default
        restartPolicy: Always
        strategyType: RollingUpdate
        rollingUpdateMaxSurge: 1
        rollingUpdateMaxUnavailable: 0
        imagePullSecret: <IMAGE_PULL_SECRET>
        discovery:
          ipRegister: true
          podRegister: true
          podClusterDomainSuffix: svc.cluster.local
      external-config-service:
        name: Gearhead
        pathprefix: /config
    feign:
      client:
        config:
          Gearhead:
            connectTimeout: 5000
            readTimeout: 5000

To apply the ConfigMap object you can use the command: kubectl apply -f configMap.yml
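
To verify that the ConfigMap exists and contains the expected application.yml, you can use the following standard kubectl commands (using the default namespace from the example above):

kubectl -n default get configmap executor
kubectl -n default get configmap executor -o yaml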

Further details about every configuration property like the <IMAGE_PULL_SECRET> can be found in the last section of this page.


rbac.yml

In order for the Executor to be able to deploy a generated service into its own cluster, we need to set permissions that allow the Executor to interact with the Kubernetes API. In Kubernetes, permissions are handled through role-based access control (RBAC), which regulates resource access based on the roles of individual users within your cluster.

First, however, we need to create a Kubernetes object of type Secret that contains the Docker credentials to access the Executor image. This Secret object is then mapped to the ServiceAccount so that Kubernetes is able to pull the image during the deployment process.

This imagePullSecret is not the same as the one that was defined in application.yml/configMap.yml before. This Secret is used to retrieve the Executor image. The Secret from configMap.yml is used to push and pull images of the generated services.



To create the Secret Object you can use the following command:

# create a Kubernetes Secret named 'executor-image-access' containing your Docker login for pulling the Executor image (for Docker Hub the login server is https://index.docker.io/v1/)
kubectl create secret docker-registry executor-image-access \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="<USERNAME>" \
  --docker-password="<PASSWORD>" \
  --docker-email="<MAIL>"


For more detailed information about creating imagePullSecrets, visit the official Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
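
If you prefer to keep this Secret as a manifest file next to the other manifest files, the same kubectl command can also render the object as YAML instead of creating it directly. This is a sketch that assumes a kubectl version supporting --dry-run=client; the file name imagePullSecret.yml is only an example:

kubectl create secret docker-registry executor-image-access \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="<USERNAME>" \
  --docker-password="<PASSWORD>" \
  --docker-email="<MAIL>" \
  --dry-run=client -o yaml > imagePullSecret.yml

kubectl apply -f imagePullSecret.yml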


The following rbac.yml file creates a new ServiceAccount for the Executor and a new Role, which are bound together by a RoleBinding. The ServiceAccount will be referenced in the pod template within the deployment.yml later on:

rbac.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: executor
  namespace: default
imagePullSecrets:
- name: executor-image-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: executor-role
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: [ "deployments"]
    verbs: ["create", "update", "delete", "list"]
  - apiGroups: [ "" ]
    resources: [ "services" ]
    verbs: [ "create", "update", "delete", "list"  ]
  - apiGroups: [ "" ]
    resources: [ "pods" ]
    verbs: [ "list" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: executor-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: executor
roleRef:
  kind: Role
  name: executor-role
  apiGroup: rbac.authorization.k8s.io

To apply the objects ServiceAccount, Role and RoleBinding you can use the command: kubectl apply -f rbac.yml
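
To check that the permissions are effective, you can impersonate the ServiceAccount with kubectl auth can-i (the namespace and names match the manifests above):

# should print "yes"
kubectl -n default auth can-i create deployments.apps --as=system:serviceaccount:default:executor
kubectl -n default auth can-i create services --as=system:serviceaccount:default:executor
kubectl -n default auth can-i list pods --as=system:serviceaccount:default:executor

# should print "no", since it is not part of the role
kubectl -n default auth can-i delete nodes --as=system:serviceaccount:default:executor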


service.yml

The Service object is the representation of the Executor service on the Kubernetes side. It is needed for the Kubernetes deployment and proxy mechanism.

service.yml
apiVersion: v1
kind: Service
metadata:
  name: executor
  namespace: default
spec:
  clusterIP: None
  selector:
    app.kubernetes.io/name: executor
  ports:
    - protocol: TCP
      port: 8070

To apply the Service object you can use the command: kubectl apply -f service.yml


deployment.yml

The Deployment object bundles every previous Kubernetes object and contains the template definition of the Pod setup.


deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: executor
  namespace: default
  labels:
    app.kubernetes.io/name: executor
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: executor
  template:
    metadata:
      labels:
        app.kubernetes.io/name: executor
    spec:
      serviceAccountName: executor
      volumes:
        - name: config
          configMap:
            name: executor
      containers:
        - name: executor
          image: apiomat/executor:
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: config
              mountPath: "/config/application.yml"
              subPath: "application.yml"
          resources:
            limits:
              cpu: "1"
              memory: "256M"
            requests:
              cpu: "0.1"
              memory: "256M"
      terminationGracePeriodSeconds: 60
      restartPolicy: Always


To apply the Deployment object you can use the command: kubectl apply -f deployment.yml
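
After applying the Deployment you can watch the rollout and inspect the Executor pod with the usual kubectl commands:

kubectl -n default rollout status deployment/executor
kubectl -n default get pods -l app.kubernetes.io/name=executor
kubectl -n default logs deployment/executor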

Executor running outside of Kubernetes

If the Executor is running outside of Kubernetes, but you want to deploy generated services into a Kubernetes cluster, you need to make some changes to the application.yml.
Besides the Consul and Docker specific properties (described in the general configuration in [Executor] Installation), you additionally need to specify executor.kubernetes.kubeConfigFile.
For more information, read the Configuration Details section below.

application.yml
spring:
  application:
    name: '@name@'
    version: '@project.version@'
  jmx:
    enabled: false
  cloud:
    consul:
      host:
      port:
      discovery:
        fail-fast: false
        health-check-path: "${server.servlet.context-path}/actuator/health"
        health-check-critical-timeout: 2m
        health-check-interval: 20s
        tags: "contextPath=${server.servlet.context-path}, version='@project.version@', notRegisterOnGateway, hiddenAPI"
  zipkin:
    enabled: false
  sleuth:
    enabled: false
  aop:
    auto: false
server:
  port: 8070
  servlet:
    context-path: /executor
executor:
  engine: kubernetes
  properties:
    count: 1
    memory: 300
    disk: 300
    cpu: 500
    javaJvmOpts: "-Xmx76M -Xss256k -XX:+UseG1GC"
    javaExecArgs: "--spring.cloud.consul.host=${spring.cloud.consul.host} --spring.cloud.consul.port=${spring.cloud.consul.port}"
    dockerEnvVars:
      spring_cloud_consul_host: ${spring.cloud.consul.host}
      spring_cloud_consul_port: ${spring.cloud.consul.port}
    resourceDir: ""
  docker:
    repository:
    email:
    username:
    password:
  kubernetes:
    kubeConfigFile:
    namespace: default
    restartPolicy: Always
    strategyType: RollingUpdate
    rollingUpdateMaxSurge: 1
    rollingUpdateMaxUnavailable: 0
    imagePullSecret: <IMAGE_PULL_SECRET>
    discovery:
      ipRegister: true
      podRegister: true
      podClusterDomainSuffix: svc.cluster.local
  external-config-service:
    name: Gearhead
    pathprefix: /config
feign:
  client:
    config:
      Gearhead:
        connectTimeout: 5000
        readTimeout: 5000
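
How the Executor picks up this application.yml depends on your installation type (see [Executor] Installation). As a sketch, assuming the Executor is started as a plain Spring Boot jar named executor.jar (name and path are examples only), the file can be supplied via the standard Spring Boot configuration location mechanism:

java -jar executor.jar \
  --spring.config.additional-location=file:/etc/executor/application.yml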

Configuration Details

The general configuration properties were already described on the [Executor] Installation page. The Kubernetes specific configuration properties are the following:

Connection and Authentication

The Executor uses the official Kubernetes Java API client library, which currently provides different ways to authenticate a Kubernetes API client against the Kubernetes API server.

If the Executor is running inside of Kubernetes, you do not need to configure the connection and authentication, since it is already handled via the ServiceAccount token mechanism.
If the Executor is running outside of Kubernetes, you need to configure the Kubernetes connection in the Executor via a kubeconfig file.

Key: executor.kubernetes.kubeConfigFile
Description: Path to the Kubernetes config file. The configuration file must describe your cluster, the user and the context. For more information see the official Kubernetes documentation: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
Leave this property empty if you have already set up the ServiceAccount of an Executor running inside of Kubernetes. A minimal kubeconfig sketch is shown after this table.
Example: /home/user/.kube/config
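
A minimal kubeconfig sketch could look like the following. All values are placeholders and must match your cluster; instead of a token, a client certificate or any other authentication method supported by your cluster can be used:

kubeconfig (example)
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://<K8S_API_SERVER>:6443
      certificate-authority: /path/to/ca.crt
users:
  - name: executor-user
    user:
      token: <SERVICE_ACCOUNT_TOKEN>
contexts:
  - name: executor-context
    context:
      cluster: my-cluster
      user: executor-user
current-context: executor-context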

Deployment

In Kubernetes you have different strategies to update your containers. For the direct deployment of your generated microservices you can configure the following options:

Key: executor.kubernetes.namespace
Description: The Kubernetes namespace to which the received services should be deployed.
Example: namespace-xyz

Key: executor.kubernetes.labels
Description: Define custom Kubernetes labels as a map<String, String> which will be set on all received services that should be deployed. This is useful to identify deployments, service definitions and pods of generated services that were deployed automatically (e.g. for setting up dynamic network policies). An example is sketched after this table.
Example: generated_service: "true" (default)

Key: executor.kubernetes.restartPolicy
Description: Define under which condition your pods should be restarted: 'Always', 'OnFailure', 'Never'.
Example: Always (default)

Key: executor.kubernetes.strategyType
Description: Supported strategies to deploy services on Kubernetes: 'ReCreate', 'RollingUpdate'.
Example: RollingUpdate (default)

Key: executor.kubernetes.rollingUpdateMaxSurge
Description: Amount of pods that can be added at a time when RollingUpdate is selected as strategy.
Example: 1 (default)

Key: executor.kubernetes.rollingUpdateMaxUnavailable
Description: Defines how many pods can be unavailable during the rolling update.
Example: 0 (default)

Key: executor.kubernetes.imagePullSecret
Description: The name of a Kubernetes Secret used for accessing private Docker repositories that contain your generated service images. An example of how to create such a Secret is shown in the section about rbac.yml above. Alternatively, see https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ to learn more about defining image pull secrets on your Kubernetes instance.
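
For illustration, custom labels could be configured in the application.yml of the Executor like this (assuming the property is bound as a plain YAML map, as described above; the second label is purely an example):

executor:
  kubernetes:
    labels:
      generated_service: "true"
      deployed-by: "executor"   # example entry, choose your own keys and values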


Discovery

The services in the ApiOmat universe can find each other via the service discovery provided by Consul. That means, in order for the generated services to communicate with YAMBAS, we need to register them with Consul.
There are different methods of registering with the service discovery, each with its own advantages, controlled by the following Executor configuration properties:

Key: executor.kubernetes.discovery.ipRegister
Description: If set to true, the deployed services will be registered to the service discovery with their IP address (default). If set to false, they will be registered with their service names. In that case, keep in mind to add the option endpoint_pod_names to the ConfigMap of your CoreDNS, so the DNS entry is accessible by other pods (see https://coredns.io/plugins/kubernetes); a sketch of the required CoreDNS configuration is shown after this table.
Example: true

Key: executor.kubernetes.discovery.podRegister
Description: If set to true, the deployed services will be registered to the service discovery as pods (default). If set to false, they will be registered as a service, which means that kube-proxy is used for load balancing.
Example: true

Key: executor.kubernetes.discovery.podClusterDomainSuffix
Description: Cluster domain suffix that is used to construct the reachable pod hostname (default: svc.cluster.local). The cluster domain suffix is only used when podRegister is enabled and ipRegister is disabled.
Example: svc.cluster.local
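
As mentioned for ipRegister, registering by service name requires the endpoint_pod_names option inside the kubernetes plugin block of your CoreDNS Corefile. The following is a sketch of the coredns ConfigMap in the kube-system namespace; the rest of the Corefile depends on your CoreDNS version and should be kept as in your existing installation:

coredns ConfigMap (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            endpoint_pod_names
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }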

One advantage of registering your service pods with Consul is that all pod instances are visible in the service discovery and can be load-balanced by an additional API gateway service like Dispatcher. The disadvantage is that the kube-proxy functionality is bypassed.
If you want to let Kubernetes take care of the load balancing, it is recommended to register your Kubernetes service with Consul by setting executor.kubernetes.discovery.podRegister in the application.yml of the Executor to false. In that case only one running instance is visible in Consul, even if there are multiple pods running in the background. Additionally, the service needs to take care of re-registering itself with the service discovery from time to time to ensure that downscaling of the service works properly.

Network policy

Kubernetes NetworkPolicies give you the possibility to control the traffic flow. The default configuration allows traffic from and to all pods, which means any pod can connect to any other pod.
To isolate your pods, declare a network policy and apply it to your cluster. See the official Kubernetes documentation for an example of what a network policy looks like and how it is declared.
The official documentation also provides recipes for common scenarios like restricting access to a namespace.
The tables below show which pod of an ApiOmat service needs to interact with which others. This helps you to define your own network policy; a minimal example is sketched below.
In case of a direct deployment, you can change the namespace of your generated services in the Executor by changing the property executor.kubernetes.namespace.
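
The following is a minimal sketch of such a policy. It assumes that the generated services carry the default label generated_service: "true" (see executor.kubernetes.labels above) and that the ApiOmat services run in a namespace labeled name: apiomat; both the namespaces and the labels are examples and must be adapted to your cluster:

networkPolicy.yml (sketch)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: generated-services-allow-apiomat
  namespace: default            # namespace of the generated services (executor.kubernetes.namespace)
spec:
  podSelector:
    matchLabels:
      generated_service: "true" # default label set on generated services
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: apiomat     # example label, adapt to your ApiOmat namespace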

A generated microservice needs to interact with the following ApiOmat services:

YAMBAS: e.g. handles non-transient objects and Basic authentication
Consul: e.g. performs the health check and makes sure that other services are able to interact with this service
Dashboard: e.g. the Dashboard calls service endpoints, as done in the data editor or the service configuration
Explorer: calls service endpoints to get the API documentation (Swagger)
Gearhead: the generated service pulls service configuration values from Gearhead
Bouncer: handles token-based authentication


The following overview shows which ApiOmat service sends requests to which other services:

Executor
  Consul: to register the service for the health check and so that other services are able to interact with it
  Innkeeper: to retrieve the generated service artifact

Innkeeper
  Consul: to register the service for the health check and so that other services are able to interact with it
  YAMBAS: to handle Basic authentication and announce the service artifact
  Bouncer: to handle JWT authentication

Gearhead
  Consul: to register the service for the health check and so that other services are able to interact with it

Dashboard
  Consul: to register the service for the health check and so that other services are able to interact with it
  YAMBAS: backend for the Dashboard
  Gearhead: to load the configuration values for a generated service
  Executor: to start the deployment of a service
  Bouncer: to handle JWT authentication
  Brewer: to trigger the service generation

Brewer
  Consul: to register the service for the health check and so that other services are able to interact with it
  YAMBAS: to retrieve the meta model information of a service
  Innkeeper: to store the service artifacts

Explorer
  Interacts with all services to provide a central UI for the accessible REST interfaces of ApiOmat.

YAMBAS
  Consul: to register the service for the health check and so that other services are able to interact with it
  Bouncer: to handle JWT authentication
  Innkeeper: to get the artifact from Innkeeper after the announcement

Consul
  Interacts with all services; is used for the health check of each service.

Bouncer
  Consul: to register the service for the health check and so that other services are able to interact with it
  All services: to return validated JWTs