Introduction
Common tasks tied to application deployment and configuration within a cluster are:
git push eyk
eyk config:set
Both report a success or failure message back to the user as part of the command output. There will be cases where you want to retrieve those messages for troubleshooting purposes, or where the customer would like to review them as part of their own troubleshooting sessions.
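For reference, a typical interaction with both looks like the following (a minimal sketch, assuming the application's git remote is named eyk and the branch being deployed is main):
# Deploy the latest commit to EYK
git push eyk main
# Update an environment variable on the application
eyk config:set RAILS_ENV=production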
Kibana comes to the rescue. As with every other action within a cluster, the above commands log to stdout and get pushed to the Elasticsearch instance running on the cluster. With a default log retention of 30 days, either party can review those messages without having to scroll back to the output of each command.
Background information
Kibana uses the ky_logs* index pattern, created from the automatically generated Elasticsearch indices. In order to display the logging information related to deployment and configuration failures, we need to narrow down our search to specific field values. As with any other dashboard creation in Kibana, once we have the desired search results, we can save the search and then use it to create the corresponding dashboard.
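If you want to confirm which daily indices sit behind that pattern, the standard Elasticsearch _cat API can be queried directly. This is a sketch only; it assumes you port-forward the in-cluster Elasticsearch service to localhost:9200, and the namespace and service name below are assumptions that may differ on your cluster:
# Forward the in-cluster Elasticsearch service to the local machine (namespace/service name assumed)
kubectl -n deis port-forward svc/elasticsearch 9200:9200 &
# List the daily indices matched by the ky_logs* pattern
curl -s 'http://localhost:9200/_cat/indices/ky_logs*?v'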
Build Process: git push eyk
EYK image builds are based either on buildpacks or Dockerfiles. Simply put, buildpacks are used when there is no Dockerfile present at the root of the application when pushed to EYK's git remote.
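A quick way to confirm which build path a push will take is to check for a Dockerfile at the repository root before pushing (a sketch, run from the application checkout):
# If a Dockerfile exists at the root, EYK performs a Dockerfile build; otherwise it falls back to buildpacks
test -f Dockerfile && echo "Dockerfile build" || echo "Buildpack build"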
Buildpacks
As with most Hephy application-related pods, buildpack logs can be found under the deis namespace:
(output can be inspected by using the JSON tab of a specific log line)
"kubernetes": {
"pod_name": "slugbuild-todo-af9bc2f6-f854c1f5",
"namespace_name": "deis",
"pod_id": "70905690-9f7d-437f-8aea-6713bf135f47",
"labels": {
"heritage": "slugbuild-todo-af9bc2f6-f854c1f5"
},
"annotations": {
"kubernetes_io/psp": "eks.privileged"
},
"host": "ip-10-101-107-34.ap-southeast-2.compute.internal",
"container_name": "deis-slugbuilder",
"docker_id": "65dde6d1f1044ed9b5807156d0d5b402f4cab3a9c06f1574114bfa68d98cc28d",
"container_hash": "hephy/slugbuilder@sha256:7648de797582251b71c26283966647950adc49a0de6d56adff884b58b555b7bc",
"container_image": "hephy/slugbuilder:v2.7.8"
}
In our case we will narrow down the search to the following field values:
"namespace_name": "deis"
"container_name": "deis-slugbuilder"
1. Create the search filters
- Click the top left menu button (3 parallel lines)
- Click Discover
- Optional - edit the time range to include a known logged event
- Click the + Add filter button below the search line
- Select the Field from the dropdown menu
- Select the Operator (is)
- Update the Value to match the desired one
- Click the Save button
For our case we used:
kubernetes.namespace_name: deis
and
kubernetes.container_name: deis-slugbuilder
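The same filters can also be expressed as a single query typed directly into the Kibana search bar (a sketch using standard KQL syntax and the field names shown in the JSON above):
kubernetes.namespace_name : "deis" and kubernetes.container_name : "deis-slugbuilder"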
Example screenshot:
Hint: We are all accustomed to reading logs in ascending order. Make sure to select the sort button right next to the Time column heading to display the logs accordingly.
2. Save the Search
- Click on the Save link at the top of the page (not the floppy icon)
- Type a Title for the search
- Click the Save Button
3. Create a dashboard
- Click the top left menu button (3 parallel lines)
- Click Dashboard
- Click Create dashboard button
- Click on Add an existing or new object to this dashboard link
- Select the Saved Search depicted by the magnifying glass icon to the left of its name
- Click on Save at the top of the page
- Add a title for the dashboard and optionally a description
- Optionally - select to save the time range (defaults to 15 minutes)
- Click the save button
Sample screenshot:
Dockerfiles
As in the buildpacks example above, Dockerfile logs can be found under the deis namespace:
{
  "_index": "ky_logs-2021.09.01",
  "_type": "fluentd",
  "_id": "c9147c37-6c16-3ba5-8906-574b140a6015",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2021-09-01T10:43:41.577Z",
    "_fluent_tag": "kube.var.log.containers.dockerbuild-todo-ea35f3ec-271c1beb_deis_deis-dockerbuilder-93a6d04098c8e849d519f45f6bbc78a365293f433e699dc2bdac54744365b3cd.log",
    "log": "Step 2/18 : RUN apt-get update && apt-get install -y imagemagick libsqlite3-dev\n",
    "stream": "stdout",
    "time": "2021-09-01T10:43:41.577125134Z",
    "kubernetes": {
      "pod_name": "dockerbuild-todo-ea35f3ec-271c1beb",
      "namespace_name": "deis",
      "pod_id": "25cad44f-2026-4c0c-8ec1-92b38750c0d9",
      "labels": {
        "heritage": "dockerbuild-todo-ea35f3ec-271c1beb"
      },
      "annotations": {
        "kubernetes_io/psp": "eks.privileged"
      },
      "host": "ip-10-101-107-34.ap-southeast-2.compute.internal",
      "container_name": "deis-dockerbuilder",
      "docker_id": "93a6d04098c8e849d519f45f6bbc78a365293f433e699dc2bdac54744365b3cd",
      "container_hash": "hephy/dockerbuilder@sha256:02741ddda8f3f895ac2429704ebfcc62fc335a6071f9c881540f0a7aa35ed13e",
      "container_image": "hephy/dockerbuilder:v2.8.3"
    }
  },
  "fields": {
    "@timestamp": [
      "2021-09-01T10:43:41.577Z"
    ],
    "time": [
      "2021-09-01T10:43:41.577Z"
    ]
  },
  "highlight": {
    "log": [
      "Step 2/18 : RUN apt-get update && apt-get install -y @kibana-highlighted-field@imagemagick@/kibana-highlighted-field@ libsqlite3-dev"
    ],
    "kubernetes.namespace_name": [
      "@kibana-highlighted-field@deis@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1630493021577
  ]
}
In our case we will narrow down the search to the following field values:
"namespace_name": "deis"
"container_name": "deis-dockerbuilder"
Using the 3 step process mentioned above:
- Create the search filters
- Save the Search
- Create a dashboard
Hint: In step 3 you can reuse the same dashboard.
To do so:
- Click the top left menu button (3 parallel lines)
- Click Dashboard
- Click your previously created dashboard
- Click on the Edit link at the top of the page
- Click the Add link at the top of the page and continue accordingly
We created the following search results and dashboard:
Environment Variables: eyk config:set
For a failure to be raised and subsequently logged when updating environment variables, the application needs health checks in place, e.g. a simple httpGet check against an application listening on port 5000:
eyk healthchecks:set readiness httpGet 5000 --type web --success-threshold=3
eyk healthchecks:set liveness httpGet 5000 --type web
With the health checks set, we can then attempt a config update, e.g. changing RAILS_ENV from development to doesntexist:
eyk config:set RAILS_ENV=doesntexist
In this case, given that we haven't configured the application to run in a doesntexist Rails environment, we would expect it to fail. In practice, Hephy will deploy the updated environment variables and start routing traffic to the now-failing pods. Those pods only receive traffic momentarily, while the previous version of the application is silently started and traffic is routed back to it:
eyk config:set RAILS_ENV=doesntexist
Creating config... done

=== testapp Config
DEIS_DOCKER_BUILD_ARGS_ENABLED      1
RAILS_ENV                           doesntexist
db_yml_adapter                      sqlite3
db_yml_database                     db/development.sqlite3
config.eager_load is set to nil. Please update your config/environments/*.rb files accordingly:
  * development - set it to false
  * test - set it to false (unless you use a tool that preloads your test environment)
  * production - set it to true
rake aborted!
ActiveRecord::AdapterNotSpecified: 'doesntexist' database is not configured. Available: ["development"]
kubectl -n testapp get pods
NAME                           READY   STATUS             RESTARTS   AGE
testapp-web-6f7d4b444c-l6222   0/1     CrashLoopBackOff   2          48s
testapp-web-6f7d4b444c-t5hp6   0/1     CrashLoopBackOff   2          48s
testapp-web-84cb477874-tf4pb   1/1     Running            0          3m
testapp-web-84cb477874-vbh9h   1/1     Running            0          3m

testapp-web-6f7d4b444c-l6222   0/1     Running            6          5m52s
testapp-web-6f7d4b444c-t5hp6   0/1     CrashLoopBackOff   5          5m52s
testapp-web-84cb477874-9kqqg   1/1     Running            0          2m59s
testapp-web-84cb477874-wf7p5   1/1     Running            0          2m57s
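To confirm what is failing inside the CrashLoopBackOff pods, the same rake error can also be pulled straight from a pod with kubectl (a sketch, using one of the pod names listed above):
# Show the container logs of one of the failing pods
kubectl -n testapp logs testapp-web-6f7d4b444c-l6222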
Although eyk will report the pods in an error state:
=== testapp Processes
--- web:
testapp-web-6f7d4b444c-l6222 error (v42)
testapp-web-6f7d4b444c-t5hp6 error (v42)
the release will be considered successful:
=== testapp Releases
v42    2021-09-02T06:45:45Z    itsouvalas@engineyard.com changed RAILS_ENV
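For reference, the two listings above can be retrieved at any time with the corresponding eyk subcommands (assuming the eyk CLI mirrors the Hephy Workflow ps and releases subcommands):
# Run from the application checkout so the CLI can infer the application
eyk ps
eyk releases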
Important
The only case where an error will be logged, both to the user issuing the command and to Kibana, is when the release already running is also failing, in other words, when there is no working release to fall back to. When that happens, and given the resulting race condition, the only way to update the environment variables to a working configuration is to (see the sketch after this list):
- scale the pods to 0
- update the environment variables to a working version
- scale back up
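A minimal recovery sketch, assuming the application's process type is web, it normally runs two pods, and the eyk CLI mirrors the Hephy Workflow ps:scale subcommand:
eyk ps:scale web=0                       # stop the failing pods
eyk config:set RAILS_ENV=development     # restore a working configuration
eyk ps:scale web=2                       # scale back up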
Logging EYK interactions/Deployments
eyk config:set, like any other eyk command, interacts with the controller. The controller is also responsible for managing releases. That being said, overall monitoring of cluster interactions can be achieved by inspecting the logs produced by the controller:
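Outside of Kibana, the same controller logs can also be tailed directly with kubectl (a sketch, assuming the default Hephy Workflow deployment name):
kubectl -n deis logs -f deployment/deis-controller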
As done previously we will filter our results to the following values:
"namespace_name": "deis"
"container_name": "deis-controller"
To view the logs better we need to filter out the noise as follows:
- From the left-hand pane browse through Available fields
- Hover over log and click Add
The updated view should look something like this:
We can see the following repeating:
- DEBUG request is not refreshable
- DEBUG:mozilla_django_oidc.middleware:request is not refreshable etc.
Hover over the log line and click the magnifying glass with the minus (-) sign inside to filter them, or any others, out. The simpler version should look similar to this:
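The same exclusion can also be expressed directly in the query bar instead of via a filter (a sketch, assuming the repeating message text shown above):
not log : "request is not refreshable"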
The example above focuses on the environment update process. Below, we focus on info log-level messages alone, and in this case we see the output of a deploy:
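One way to narrow the view down to info-level messages is a wildcard query on the log field (a sketch, assuming the controller prefixes the level in each message, e.g. INFO):
kubernetes.container_name : "deis-controller" and log : INFO*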
Finally, as in our previous examples, we save the search and add it to our dashboard: