
8/24/2020


RyanQuey/cassandra-in-kibana

by RyanQuey


# raises vm.max_map_count on the host, which actually sets it for the docker container as well
# (Elasticsearch requires at least 262144 and will fail its bootstrap checks otherwise)
sudo sysctl -w vm.max_map_count=262144
docker-compose up

See here for more info on how it works.
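For orientation, the stack these commands assume could be wired together roughly like this (a sketch only: the repo's actual docker-compose.yml may differ, and the sebp/elk image is an assumption inferred from the /etc/logstash/conf.d and /opt/logstash paths used below):

version: "3"
services:
  elk:
    image: sebp/elk
    container_name: elk
    ports:
      - "5601:5601"   # Kibana
      - "9200:9200"   # Elasticsearch
      - "5044:5044"   # Logstash beats input
  filebeat:
    build:
      context: .
      dockerfile: Dockerfile.filebeat
    container_name: filebeat
    depends_on:
      - elk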

Test that it is working

Add a dummy log (see instructions here)

# start logstash cli session in docker container
sudo docker exec -it elk /opt/logstash/bin/logstash --path.data /tmp/logstash/data \
    -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'

Wait for a message like The stdin plugin is now waiting for input:, then add some logs. Whatever you type becomes a log entry in logstash.

demo log entry

Check that it shows up in ES by hitting http://localhost:9200/logstash-*/_search?pretty&size=1000. It should show up somewhere in one of the entries. (This fetches records from all indices whose names start with logstash-.)
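The same check works from a terminal; a minimal sketch that searches specifically for the entry typed above (logstash's stdin input puts the typed text in the message field):

curl "http://localhost:9200/logstash-*/_search?pretty" \
  -H "Content-Type: application/json" \
  -d '{"query": {"match": {"message": "demo log entry"}}}'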

Set up Filebeat

Processing (i.e., parsing) Cassandra logs

Since filebeat doesn't currently have a Cassandra module, we have to add our own filebeat processors to filter and enrich the logs (a sketch follows below).

For this demonstration we are going to borrow largely from Anant's NodeAnalyzer tool, which has a sample filebeat config that provides processor settings.

We also made use of The Last Pickle's filebeat.yml from their docker Cassandra bootstrap project.
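As a rough illustration of what those processor settings do, a filebeat.yml fragment along these lines would parse Cassandra's system.log format (a sketch, not the exact NodeAnalyzer config: ingest.loglevel matches the Kibana queries further down, but the other ingest.* field names and the multiline pattern are assumptions):

filebeat.inputs:
  - type: log
    paths:
      - /var/log/cassandra/*.log
    # Cassandra log lines start with the level, e.g.
    # "INFO  [main] 2020-08-04 17:02:33,123 CassandraDaemon.java:507 - ..."
    # Treat any line that doesn't (stack traces) as a continuation of the previous event.
    multiline:
      pattern: '^(INFO|WARN|ERROR|DEBUG|TRACE|FATAL)'
      negate: true
      match: after

processors:
  # Split the standard Cassandra log line into ingest.* fields
  - dissect:
      tokenizer: "%{ingest.loglevel->} [%{ingest.thread}] %{ingest.date} %{ingest.time} %{ingest.source} - %{ingest.message}"
      field: "message"
      target_prefix: ""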

Container directory organization

elk container

  • logstash configs (e.g., beats-input.conf): /etc/logstash/conf.d
  • logstash binaries: /opt/logstash/bin/

filebeat container

  • filebeat.yml: /usr/share/filebeat/filebeat.yml
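To double-check this layout on a running stack, you can inspect the files in place:

docker exec elk ls /etc/logstash/conf.d
docker exec elk cat /etc/logstash/conf.d/beats-input.conf
docker exec filebeat cat /usr/share/filebeat/filebeat.yml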

Note that the beats host and port have to line up between the filebeat.yml in the filebeat container and the logstash beats-input.conf file in the elk container (a sketch of a matching pair follows below):

  • The port where logstash listens for beats input is set in beats-input.conf
  • The port where beats sends to is set in filebeat.yml

If they don't match, the filebeat logs will show an error like:

Failed to connect to backoff(async(tcp://filebeat:5044)): dial tcp 172.23.0.3:5044: connect: connection refused
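For reference, a matching pair could look like this (a sketch; using elk as the logstash host assumes the compose service name from the sketch above):

# /etc/logstash/conf.d/beats-input.conf (elk container)
input {
  beats {
    port => 5044
  }
}

# /usr/share/filebeat/filebeat.yml (filebeat container)
output.logstash:
  hosts: ["elk:5044"]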

Instructions/References

  • Following these instructions might be helpful, but we are using the official ES filebeat image for now.
  • We could do a volume-based config system, but we want to be consistent across environments without any additional setup, so we do it within docker.

Sample Scripts

Make sure filebeat is connecting to ES

To make sure it worked, try: http://localhost:9200/_cat/indices. You should see an index like filebeat-2020.08.04. Check it out by hitting: http://localhost:9200/filebeat-*/_search?pretty&size=1000

Throw it some C* logs from the host

docker cp /var/log/cassandra/ filebeat:/var/log/cassandra/

Change the config and restart

The easiest approach right now is just to rebuild:

docker-compose up --build -d

Then you will need to throw it some logs again, since everything else was reset; see the docker cp command above for how.

NOTE: You can't just copy in a new yml, since it won't have the right permissions.

This WON'T work, since the permissions need to be changed first (see Dockerfile.filebeat for an example of what needs to be run so that the proper permissions are set on the filebeat.yml file):

docker cp configs/filebeat.yml filebeat:/usr/share/filebeat/filebeat.yml
docker restart filebeat
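For reference, the permission handling in question usually amounts to a couple of lines in the Dockerfile; a sketch assuming the official filebeat image (the image tag and config path are assumptions):

FROM docker.elastic.co/beats/filebeat:7.8.0
COPY configs/filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
# filebeat refuses to load a config that is not owned by root (or the beat user)
# or that is writable by group/others, hence the chown and chmod
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml && \
    chmod go-w /usr/share/filebeat/filebeat.yml
USER filebeat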

Set up Kibana dashboards for filebeat

https://www.elastic.co/guide/en/beats/filebeat/current/load-kibana-dashboards.html

docker exec filebeat filebeat setup --dashboards

It will take a few minutes, showing just Loading dashboards (Kibana must be running and reachable).

This is a CLI way of setting up dashboards, rather than just setting them up from the config using setup.dashboards.enabled: true or using other settings as described here.
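For comparison, the config-based version is just a couple of lines in filebeat.yml (a sketch; the host assumes Kibana is reachable from the filebeat container at elk:5601):

setup.dashboards.enabled: true
setup.kibana:
  host: "elk:5601"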

You should now be able to view the Kibana filebeat dashboards in the Discover view (following these instructions). If you don't see any, make sure that the time filter is set around the time frame when the logs were added into filebeat, NOT when the log event happened.

Sample Queries/Filters in Kibana for Cassandra

Get ERROR level logs for the past 90 days

http://localhost:5601/app/kibana#/discover?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-90d,to:now))&_a=(columns:!(ingest.loglevel),filters:!(),index:'filebeat-*',interval:auto,query:(language:lucene,query:'ingest.loglevel:ERROR'),sort:!())

Filters using the lucene query: ingest.loglevel:ERROR
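A few more lucene queries in the same vein (ingest.loglevel comes from the processor config; message is filebeat's default field for the raw log line):

ingest.loglevel:(ERROR OR WARN)
ingest.loglevel:ERROR AND message:compaction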

Save progress (export kibana data to version control)

./scripts/export-kibana-dashboards.sh

TODO: make an import script using this
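A possible starting point for that import script, assuming the export produced a saved-objects ndjson file and Kibana 7.x is reachable on localhost:5601 (the file path below is hypothetical):

#!/usr/bin/env bash
# scripts/import-kibana-dashboards.sh (sketch)
# The kbn-xsrf header is required by Kibana's saved objects API.
curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" \
  -H "kbn-xsrf: true" \
  --form file=@./exports/kibana-dashboards.ndjson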

Copyright (c) 2020 Ryan Quey.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
