Elasticsearch is built on Lucene; it hides Lucene's complexity behind simple, easy-to-use RESTful and Java API interfaces.
Elasticsearch is a real-time distributed search and analytics engine, used for full-text search, structured search, and analytics.
An index holds a collection of documents with a similar structure; one index contains many documents and represents one class of similar or identical documents.
Each index can have one or more types. A type is a logical data category within an index; all documents under one type share the same fields, and each type contains many documents. (Note: in Elasticsearch 7.x, types are deprecated; each index effectively has the single type _doc.)
A document is the smallest unit of data that Elasticsearch indexes; each type within an index can store many documents.
A field is the smallest unit within a document; a document consists of multiple fields, each of which is one data attribute.
Data is stored into an index through a mapping, which defines how each field is indexed.
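As a concrete sketch of these concepts (assuming a node reachable at http://192.168.138.130:9200, the address used in the cluster configuration below; the index name blog and its fields are illustrative, not from the source), an index with a mapping and one document can be created like this:

```shell
# Create an index named "blog"; the mapping declares each field and its data type.
curl -s -X PUT 'http://192.168.138.130:9200/blog' \
  -H 'Content-Type: application/json' -d '{
  "mappings": {
    "properties": {
      "title": { "type": "text" },
      "views": { "type": "integer" }
    }
  }
}'

# Index one document (the smallest indexable unit); each JSON key is a field.
# In 7.x the type in the URL is always the placeholder _doc.
curl -s -X PUT 'http://192.168.138.130:9200/blog/_doc/1' \
  -H 'Content-Type: application/json' \
  -d '{"title": "hello elasticsearch", "views": 100}'
```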
(1) Download the installation package from the Elasticsearch site: https://www.elastic.co/cn/downloads/elasticsearch
(2) Unpack the elasticsearch-7.1.0-linux-x86_64.tar.gz archive
tar -xvzf elasticsearch-7.1.0-linux-x86_64.tar.gz
(3) Change the owner of the elasticsearch-7.1.0 directory
chown -R destiny elasticsearch-7.1.0
(4) Create the data and logs directories
mkdir -p data/data
mkdir -p data/logs
(5) Change the owner of the data directory
chown -R destiny data
(6) Switch to that user
su destiny
(7) Edit the elasticsearch.yml file under the config directory
Configuration for the master node
# ---------------------------------- Cluster -----------------------------------
# cluster.name must be identical on every node in the cluster
cluster.name: elasticsearch
# ------------------------------------ Node ------------------------------------
# node.name must be unique per node
node.name: master
# ----------------------------------- Paths ------------------------------------
path.data: /usr/local/elasticsearch/data/data
path.logs: /usr/local/elasticsearch/data/logs
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.138.130
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["192.168.138.130","192.168.138.129","192.168.138.128"]
cluster.initial_master_nodes: ["master"]
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"
Configuration for the slave1 node
# ---------------------------------- Cluster -----------------------------------
# cluster.name must be identical on every node in the cluster
cluster.name: elasticsearch
# ------------------------------------ Node ------------------------------------
# node.name must be unique per node
node.name: slave1
# ----------------------------------- Paths ------------------------------------
path.data: /usr/local/elasticsearch/data/data
path.logs: /usr/local/elasticsearch/data/logs
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.138.129
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["192.168.138.130","192.168.138.129","192.168.138.128"]
cluster.initial_master_nodes: ["master"]
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"
Configuration for the slave2 node
# ---------------------------------- Cluster -----------------------------------
# cluster.name must be identical on every node in the cluster
cluster.name: elasticsearch
# ------------------------------------ Node ------------------------------------
# node.name must be unique per node
node.name: slave2
# ----------------------------------- Paths ------------------------------------
path.data: /usr/local/elasticsearch/data/data
path.logs: /usr/local/elasticsearch/data/logs
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.138.128
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["192.168.138.130","192.168.138.129","192.168.138.128"]
cluster.initial_master_nodes: ["master"]
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"
(8) Switch to the root user and add the following to /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096
(9) Edit /etc/security/limits.d/20-nproc.conf
* soft nproc 4096
(10) Add the following to /etc/sysctl.conf
vm.max_map_count=655360
(11) Apply the kernel settings
sysctl -p
(12) Start Elasticsearch
./bin/elasticsearch
# run in the background
./bin/elasticsearch -d
(13) Test Elasticsearch
curl http://hadoop1:9200
curl -XGET '192.168.138.130:9200/_cat/health?v&pretty'
(14) Copy the Elasticsearch files and directories to the other servers
scp -r elasticsearch hadoop2:$PWD
scp /etc/security/limits.conf hadoop2:/etc/security
scp /etc/profile hadoop2:/etc/
scp /etc/security/limits.d/20-nproc.conf hadoop2:/etc/security/
(1) Download the head plugin from https://github.com/mobz/elasticsearch-head/archive/master.zip
(2) Unpack the elasticsearch-head-master.zip archive
unzip elasticsearch-head-master.zip
(3) Install Node.js
curl -sL https://rpm.nodesource.com/setup_8.x | bash -
yum install -y nodejs
(4) Verify the installation
node -v
npm -v
(5) In the elasticsearch-head-master directory, install grunt
npm install -g grunt-cli
npm install phantomjs-prebuilt@2.1.14 --ignore-scripts
npm install
(6) Edit the Gruntfile.js file
connect: {
    server: {
        options: {
            port: 9100,
            hostname: '*',
            base: '.',
            keepalive: true
        }
    }
}
(7) Edit the _site/app.js file
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.138.130:9200";
(8) Start head
grunt server &
(9) Open http://192.168.138.130:9100/ in a browser
(1) Download the IK analysis plugin from https://github.com/medcl/elasticsearch-analysis-ik/releases/tag/v7.1.0
(2) Create an elasticsearch-analysis-ik-7.1.0 directory under the Elasticsearch plugins directory
mkdir elasticsearch-analysis-ik-7.1.0
(3) Unpack the elasticsearch-analysis-ik-7.1.0.zip archive into it
unzip elasticsearch-analysis-ik-7.1.0.zip
(4) Restart Elasticsearch
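After the restart, the IK plugin can be sanity-checked through the _analyze API (a sketch, assuming the node address from the configuration above; ik_max_word and ik_smart are the two analyzers the plugin registers):

```shell
# Fine-grained segmentation: splits the text into as many terms as possible.
curl -s -X POST 'http://192.168.138.130:9200/_analyze' \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'

# Coarse-grained segmentation: fewer, longer terms.
curl -s -X POST 'http://192.168.138.130:9200/_analyze' \
  -H 'Content-Type: application/json' \
  -d '{"analyzer": "ik_smart", "text": "中华人民共和国"}'
```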
(1) Download Kibana from https://www.elastic.co/cn/downloads/past-releases/kibana-7-1-0
(2) Unpack the kibana-7.1.0-linux-x86_64.tar.gz archive
tar -xvzf kibana-7.1.0-linux-x86_64.tar.gz
(3) Edit the kibana.yml configuration file
server.port: 5601
server.host: "192.168.138.130"
elasticsearch.hosts: ["http://192.168.138.130:9200"]
kibana.index: ".kibana"
(4) Start the Elasticsearch cluster
elasticsearch
(5) Start Kibana
kibana
(6) Open http://192.168.138.130:5601 in a browser
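Besides the browser, Kibana's availability can be checked from the shell (a sketch; api/status is Kibana's status endpoint, and the host and port come from the kibana.yml settings above):

```shell
# Returns a JSON status report once Kibana has finished starting up.
curl -s http://192.168.138.130:5601/api/status
```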