Step by Step Graylog and Elasticsearch Log Management Configuration on CentOS 7


Graylog2 Server Configuration on CentOS 7


Graylog is a free, open source, and powerful log management server. Together with Elasticsearch and MongoDB it provides centralised logging, open source log analysis, and log monitoring, and with rsyslog on the clients it can collect logs from remote hosts. This guide covers the full Graylog2 server configuration and a client configuration for sending logs to Graylog.




Components:




1. MongoDB – Stores the configurations and meta information.




2. Elasticsearch – Stores the log messages and offers a search facility; these nodes should have plenty of memory, as all the I/O operations happen here.




3. Graylog – Log parser; it collects the logs from various inputs.




File Details




Configuration               /etc/graylog/server/server.conf

Logging configuration       /etc/graylog/server/log4j2.xml

Plugins                     /usr/share/graylog-server/plugin

JVM settings                /etc/default/graylog-server

Message journal files       /var/lib/graylog-server/journal

Log files                   /var/log/graylog-server/












Requirements:




Install Java


Install & configure Elasticsearch


Install MongoDB


Install & Configure Graylog2


Install & configure rsyslog on the client to send logs to Graylog (a sample client configuration is shown at the end of this guide)






Step 1 :




Install Java




yum install java-1.8.0-openjdk.x86_64




graylog-yum.png


 


Verify Java




java -version




graylog2.png




Step 2:




Downloading and Installing Elasticsearch




rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch





wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.4.0/elasticsearch-2.4.0.rpm





rpm -ivh elasticsearch-2.4.0.rpm









Open the file, then uncomment and edit the following settings (cluster name, network host, HTTP port, and discovery hosts):




vi /etc/elasticsearch/elasticsearch.yml




               


# ---------------------------------- Cluster -----------------------------------


#


# Use a descriptive name for your cluster:


#


#cluster.name: my-application


cluster.name: graylogserver


#




# ---------------------------------- Network -----------------------------------


#


# Set the bind address to a specific IP (IPv4 or IPv6):


#


#network.host: 192.168.0.1


network.host: 0.0.0.0


#


# Set a custom port for HTTP:


#


http.port: 9200




# --------------------------------- Discovery ----------------------------------


#


# Pass an initial list of hosts to perform discovery when new node is started:


# The default list of hosts is ["127.0.0.1", "[::1]"]


#


#discovery.zen.ping.unicast.hosts: ["host1", "host2"]


discovery.zen.ping.unicast.hosts: ["0.0.0.0"]


#
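Before starting the service, it can help to confirm the edits took effect. A quick check (just a sketch, assuming only the settings above were changed):

grep -E '^(cluster\.name|network\.host|http\.port|discovery\.zen\.ping\.unicast\.hosts)' /etc/elasticsearch/elasticsearch.yml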






Start the Elasticsearch service by executing:




systemctl start elasticsearch.service




Enable the Elasticsearch service to start automatically using systemd:




systemctl daemon-reload

systemctl enable elasticsearch.service




Check status




curl -XGET http://localhost:9200




graylog5.png




Check Health of Cluster




curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'




graylog6.png
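A healthy single-node setup returns JSON roughly like the following (field values such as the status colour and shard counts will vary per environment):

{
  "cluster_name" : "graylogserver",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}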




Step 3:




Installation of MongoDB




Create the repo file:




vim /etc/yum.repos.d/mongodb-org-3.0.repo




[mongodb-org-3.0]


name=MongoDB Repository


baseurl=http://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.0/x86_64/


gpgcheck=0


enabled=1




graylog7.png




yum install mongodb-org




Configure MongoDB


       


vi /etc/mongod.conf




# network interfaces
net:
  #port: 27017
  port: 27020
  #bindIp: 0.0.0.0   # Commented out to listen on all interfaces; set to 127.0.0.1 to listen on the local interface only.




Step 4:




Create the /data/db directory and set permissions:




mkdir /data/db -p




chmod -R 777 /data




chown -R mongod:mongod /data




Restart Service




systemctl restart mongod.service




systemctl enable mongod.service
OR
/sbin/chkconfig mongod on
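To confirm that MongoDB is up and listening on the custom port, run a quick check (a sketch, assuming the mongo shell installed with mongodb-org):

mongo --port 27020 --eval 'db.runCommand({ ping: 1 })'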




Step 5:




Install & Configure Graylog2




rpm -ivh https://packages.graylog2.org/repo/packages/graylog-2.1-repository_latest.rpm




OR




Create a file named /etc/yum.repos.d/graylog.repo with the following content:




[graylog]


name=graylog


baseurl=https://packages.graylog2.org/repo/el/stable/2.1/$basearch/


gpgcheck=1


gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-graylog


               


yum install graylog-server






graylog9.png




Step 6:




Get the sha256 sum of your account's password:




echo -n yourpassword | sha256sum






graylog10.png




Configure Graylog




vi /etc/graylog/server/server.conf




               


# Generate one by using for example: pwgen -N 1 -s 96




password_secret = YyP89em7SxDM0NWjFSzJpcFiPXl7DrVC77Cxz17yIOxKYj






# Create one by using for example: echo -n yourpassword | shasum -a 256


# and put the resulting hash value into the following line




root_password_sha2 = b3eacd33433b31b5252351032c9b3e7a2e7aa7738d5de




# The email address of the root user.


# Default is empty


root_email = "root@yourdomain"




# The time zone setting of the root user. See http://www.joda.org/joda-time/timezones.html for a list of valid time zones.


# Default is UTC


#root_timezone = UTC


root_timezone = Asia/Kolkata




# REST API listen URI. Must be reachable by other Graylog server nodes if you run a cluster.


# When using Graylog Collectors, this URI will be used to receive heartbeat messages and must be accessible for all collectors.


#rest_listen_uri = http://localhost:9000/api/


rest_listen_uri = http://0.0.0.0:9000/api/




# Web interface listen URI.


# Configuring a path for the URI here effectively prefixes all URIs in the web interface. This is a replacement


# for the application.context configuration parameter in pre-2.0 versions of the Graylog web interface.


#web_listen_uri = http://127.0.0.1:9000/


web_listen_uri = http://0.0.0.0:9000/




# settings to be passed to elasticsearch's client (overriding those in the provided elasticsearch_config_file)


# all these


# this must be the same as for your Elasticsearch cluster


elasticsearch_cluster_name = graylogserver




# A comma-separated list of Elasticsearch nodes which Graylog is using to connect to the Elasticsearch cluster,


# see https://www.elastic.co/guide/en/elasticsearch/reference/2.3/modules-discovery-zen.html for details.


# Default: 127.0.0.1


#elasticsearch_discovery_zen_ping_unicast_hosts = 127.0.0.1:9300


elasticsearch_discovery_zen_ping_unicast_hosts = 0.0.0.0:9300




# MongoDB connection string


# See https://docs.mongodb.com/manual/reference/connection-string/ for details


mongodb_uri = mongodb://127.0.0.1:27020/graylog




# Email transport


transport_email_enabled = true


transport_email_hostname = localhost


transport_email_port = 25


transport_email_use_auth = false


transport_email_use_tls = false


transport_email_use_ssl = false


transport_email_auth_username = graylog1


transport_email_auth_password = server


transport_email_subject_prefix = [graylog]


transport_email_from_email = graylog1@linuxtopic.com




# This should define the fully qualified base url to your web interface exactly the same way as it is accessed by your users.


transport_email_web_interface_url = http://172.17.20.100:9000




Step 7:


               


Create the graylog1 user and set a password:




useradd graylog1




passwd graylog1




Restart Graylog Service




systemctl restart graylog-server.service




systemctl enable graylog-server.service






Check the log:


   


tail -f /var/log/graylog-server/server.log
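Once the log shows that the server has started, the REST API can be queried as a quick sanity check; a sketch, assuming the rest_listen_uri configured above:

curl http://127.0.0.1:9000/api/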




Step 8:




               


Access the Graylog server using the URL:




http://172.17.20.100:9000
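If the page does not load and firewalld is active (the CentOS 7 default), the ports used in this guide may need to be opened first. A sketch, assuming the default firewalld zone (9000/tcp for the web interface and REST API, 5555/udp for the syslog input created in Step 9):

firewall-cmd --permanent --add-port=9000/tcp

firewall-cmd --permanent --add-port=5555/udp

firewall-cmd --reload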










               


Dashboard Window








Step 9:


           


Create an Input:




Go to System -> Inputs








               


Choose Syslog UDP from the drop-down and click on Launch new input.








Fill in the Launch new Syslog UDP input form:





  1. Select Node     - graylog2-linuxtopic.com    # Select your node
  2. Title           - linuxtopic                 # Choose a title
  3. Bind Address    - 172.17.20.100              # Your Graylog server IP
  4. Port            - 5555                       # Enter any free port
















Press “Save”








Your Input is configured.
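To complete the rsyslog client configuration from the requirements list, drop a minimal forwarding rule onto each client. This is a sketch, assuming the Graylog server at 172.17.20.100 and the Syslog UDP input on port 5555 created above; the file name /etc/rsyslog.d/graylog.conf is only an example:

vi /etc/rsyslog.d/graylog.conf

# Forward all logs to the Graylog Syslog UDP input (a single @ means UDP)
*.* @172.17.20.100:5555;RSYSLOG_SyslogProtocol23Format

Restart rsyslog on the client and send a test message; it should appear under the new input in the Graylog web interface:

systemctl restart rsyslog

logger "test message from rsyslog client"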


