Hive MetaStore and HiveServer2 High Availability Configuration

1. Differences Between the Metastore and HiveServer2 Services

1.1 The Metastore Service

How the Metastore service accesses metadata

bin/hive (the CLI accesses metadata) --> MetaStore server --> MySQL

How to start the Metastore service

Start the Metastore service on the server:
hive --service metastore

Connect from a client with the Hive CLI:
hive

1.2 The HiveServer2 Service

How the HiveServer2 service accesses metadata

bin/beeline (accesses metadata over JDBC) --> HiveServer2 --> MetaStore server --> MySQL

How to start the HiveServer2 service

Start the HiveServer2 service on the server:
hive --service hiveserver2

Connect from a client with beeline or through Java JDBC code (a minimal Java sketch follows the command below):
beeline -u jdbc:hive2://hiveserver2_ip:10000 -n hadoop
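
For the Java JDBC route, here is a minimal sketch. It assumes the Hive JDBC driver (the hive-jdbc artifact and its dependencies) is on the classpath; the host, port, and user mirror the beeline command above, and the empty password is a placeholder for an unsecured cluster.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcClient {
    public static void main(String[] args) throws Exception {
        // Load the Hive JDBC driver (provided by the hive-jdbc artifact)
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Same host, port, and user as the beeline command above
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hiveserver2_ip:10000/default", "hadoop", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("show databases")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}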

2. Hive Metastore High Availability Configuration

2.1 How It Works

Regular connection
(Figure 1)

MetaStore HA connection
(Figure 2)

2.2 MetaStore High Availability Configuration

Prerequisites

Hadoop and Hive must already be up and working.

Service layout

Hostname    Service
hadoop1     MetaStore
hadoop2     hive (client)
hadoop3     MetaStore

Add the Hive and Hadoop environment variables

[hadoop@hadoop1 ~]$ vim .bash_profile
export HADOOP_HOME=/home/hadoop/hadoop-2.7.2
export HIVE_HOME=/home/hadoop/apache-hive-2.3.9-bin
export PATH=$HIVE_HOME/bin:$PATH:$HADOOP_HOME/bin

[hadoop@hadoop1 ~]$ source .bash_profile

2.2.1 Hive Server MetaStore Configuration

Edit hive-site.xml; the configuration is the same as for a single node, except that the MetaStore is started on multiple servers.

[hadoop@hadoop1 ~]$ vim apache-hive-2.3.9-bin/conf/hive-site.xml
[hadoop@hadoop3 ~]$ vim apache-hive-2.3.9-bin/conf/hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop1:3306/hive_metadata?createDatabaseIfNotExist=true</value>
  <description>Use MySQL for the metastore; by default Hive uses the embedded Derby database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Hive metastore database user</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>passwd</value>
  <description>Hive metastore database password</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/hive</value>
  <description>Warehouse location on HDFS; the default is /user/hive/warehouse</description>
</property>

Change the temporary file directory; it is recommended to do this on every Hive node.

<!-- Put the following properties at the very top of hive-site.xml -->
<property>
  <name>system:java.io.tmpdir</name>
  <value>/home/hadoop/apache-hive-2.3.9-bin/tmp</value>
</property>
<property>
  <name>system:user.name</name>
  <value>${user.name}</value>
</property>

2.2.2 Hive Client HA Configuration

Edit hive-site.xml:

[hadoop@hadoop2 ~]$ vim apache-hive-2.3.9-bin/conf/hive-site.xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop1:9083,thrift://hadoop3:9083</value>
</property>

2.3 Starting the MetaStore

Start the metastore service on both nodes:

[hadoop@hadoop1 ~]$ hive --service metastore
[hadoop@hadoop3 ~]$ hive --service metastore
# Start in the background
nohup hive --service metastore >> apache-hive-2.3.9-bin/metastore.log 2>&1 &

2.4 Verifying MetaStore High Availability

First, query some data from the Hive CLI:

[hadoop@hadoop2 ~]$ hive
hive> show databases;
OK
default
Time taken: 1.437 seconds, Fetched: 1 row(s)
hive> show tables;
OK
t1
Time taken: 0.102 seconds, Fetched: 1 row(s)
hive> select * from t1;
OK
1 qb
Time taken: 2.357 seconds, Fetched: 1 row(s)

Kill the first MetaStore service, the one the client connects to by default:

[hadoop@hadoop1 ~]$ jps
20464 DFSZKFailoverController
14977 ResourceManager
11858 HRegionServer
20034 DataNode
19909 NameNode
5190 RunJar
1656 QuorumPeerMain
20251 JournalNode
11581 HMaster
15101 NodeManager
5486 Jps
[hadoop@hadoop1 ~]$ kill 5190

Query the data again:

hive> select * from t1;
OK
1 qb
Time taken: 0.343 seconds, Fetched: 1 row(s)

The query still works, which shows that MetaStore high availability is configured correctly.

3. HiveServer2 High Availability Configuration

Starting with Hive 0.14, HiveServer2 HA is implemented on top of ZooKeeper (ZooKeeper Service Discovery): instead of specifying a particular host and port, the client connects through a namespace registered in ZooKeeper.
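
With service discovery enabled, a JDBC client names the ZooKeeper quorum and namespace rather than a single HiveServer2 host. A minimal sketch, assuming the same hive-jdbc driver as in section 1.2 and the hosts and namespace that section 3.2 configures below:

import java.sql.Connection;
import java.sql.DriverManager;

public class HiveServer2HaClient {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // The driver asks ZooKeeper for an active HiveServer2 instance
        // registered under the given namespace, instead of a fixed host:port.
        String url = "jdbc:hive2://hadoop1:2181,hadoop2:2181,hadoop3:2181/"
                + ";serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2";
        try (Connection conn = DriverManager.getConnection(url, "hadoop", "")) {
            System.out.println("Connected: " + conn.getMetaData().getURL());
        }
    }
}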

When running Hive in production, it is strongly recommended to serve queries through HiveServer2, for several reasons:

  1. No Hadoop or Hive client needs to be deployed on the application side;
  2. Unlike the hive CLI, HiveServer2 does not expose HDFS and the Metastore directly to users;
  3. It has an authentication mechanism and supports custom authorization checks;
  4. It has an HA mechanism, which addresses concurrency and load balancing on the application side;
  5. JDBC access works from any language, making it easy for applications to exchange data;
  6. Since 2.0, HiveServer2 provides a web UI.

3.1 How It Works

If client concurrency against HiveServer2 is low, a single HiveServer2 instance is sufficient
(Figure 3)

Two HiveServer2 instances can be started and made highly available through ZooKeeper
(Figure 4)

3.2 HiveServer2 High Availability Configuration

Prerequisites

Hadoop, ZooKeeper, and Hive must already be up and working.

Add the Hive and Hadoop environment variables

export HADOOP_HOME=/home/hadoop/hadoop-2.7.2
export HIVE_HOME=/home/hadoop/apache-hive-2.3.9-bin
export PATH=$HIVE_HOME/bin:$PATH:$HADOOP_HOME/bin

[hadoop@hadoop1 ~]$ source .bash_profile

Service layout

Hostname    Service        Depends on
hadoop1     HiveServer2    ZooKeeper
hadoop2     beeline        ZooKeeper
hadoop3     HiveServer2    ZooKeeper

Prerequisites

The ZooKeeper service must be up and running.

HiveServer2 server configuration: edit hive-site.xml on each HiveServer2 node

[hadoop@hadoop1 ~]$ vim apache-hive-2.3.9-bin/conf/hive-site.xml
[hadoop@hadoop3 ~]$ vim apache-hive-2.3.9-bin/conf/hive-site.xml
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>hadoop1,hadoop2,hadoop3</value>
</property>
<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>hadoop3</value>
  <description>The HiveServer2 address that ZooKeeper hands back to beeline; set this to the local hostname on each HiveServer2 HA node</description>
</property>
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>

Note: hive.server2.thrift.bind.host must be adjusted on each node.

3.3 Starting HiveServer2

[hadoop@hadoop1 ~]$ hive --service hiveserver2
[hadoop@hadoop3 ~]$ hive --service hiveserver2
# Start in the background
nohup hive --service hiveserver2 >> apache-hive-2.3.9-bin/hiveserver2.log 2>&1 &

After startup, both HiveServer2 instances can be seen registered in ZooKeeper:

[hadoop@hadoop1 ~/zookeeper-3.4.8/bin]$ ./zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[serverUri=hadoop1:10000;version=2.3.9;sequence=0000000001, serverUri=hadoop3:10000;version=2.3.9;sequence=0000000000]

3.4 Verifying HiveServer2 High Availability

First, connect with beeline and query some data:

[hadoop@hadoop2 ~]$ beeline -u "jdbc:hive2://hadoop1:2181,hadoop2:2181,hadoop3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" -n hadoop
Connecting to jdbc:hive2://hadoop1:2181,hadoop2:2181,hadoop3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
21/10/26 18:28:56 [main]: INFO jdbc.HiveConnection: Connected to hadoop1:10000
Connected to: Apache Hive (version 2.3.9)
Driver: Hive JDBC (version 2.3.9)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.9 by Apache Hive
0: jdbc:hive2://hadoop1:2181,hadoop2:2181,had> show databases;
+----------------+
| database_name  |
+----------------+
| default        |
+----------------+
1 row selected (1.327 seconds)
0: jdbc:hive2://hadoop1:2181,hadoop2:2181,had> show tables;
+-----------+
| tab_name  |
+-----------+
| t1        |
+-----------+
1 row selected (0.173 seconds)
0: jdbc:hive2://hadoop1:2181,hadoop2:2181,had> select * from t1;
+--------+----------+
| t1.id  | t1.name  |
+--------+----------+
| 1      | qb       |
+--------+----------+
1 row selected (1.625 seconds)

Kill the HiveServer2 service on hadoop1, the one the client is connected to:

[hadoop@hadoop1 ~/zookeeper-3.4.8/bin]$ jps
20464 DFSZKFailoverController
14977 ResourceManager
11858 HRegionServer
20034 DataNode
19909 NameNode
1656 QuorumPeerMain
11049 Jps
20251 JournalNode
11581 HMaster
15101 NodeManager
10895 RunJar
[hadoop@hadoop1 ~/zookeeper-3.4.8/bin]$ kill 10895

Query the data again:

0: jdbc:hive2://hadoop1:2181,hadoop2:2181,had> select * from t1;
Unexpected end of file when reading from HS2 server. The root cause might be too many concurrent connections. Please ask the administrator to check the number of active connections, and adjust hive.server2.thrift.max.worker.threads if applicable.
Error: org.apache.thrift.transport.TTransportException (state=08S01,code=0)

The query fails; reconnecting to HiveServer2 resolves it:

[hadoop@hadoop2 ~]$ beeline
beeline> !connect jdbc:hive2://hadoop1:2181,hadoop2:2181,hadoop3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to jdbc:hive2://hadoop1:2181,hadoop2:2181,hadoop3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://hadoop1:2181,hadoop2:2181,hadoop3:2181/: hadoop
Enter password for jdbc:hive2://hadoop1:2181,hadoop2:2181,hadoop3:2181/: ********
21/10/27 11:01:12 [main]: INFO jdbc.HiveConnection: Connected to hadoop3:10000
Connected to: Apache Hive (version 2.3.9)
Driver: Hive JDBC (version 2.3.9)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop1:2181,hadoop2:2181,had> select * from t1;
+--------+----------+
| t1.id  | t1.name  |
+--------+----------+
| 1      | qb       |
+--------+----------+
1 row selected (2.537 seconds)