Kafka Deployment

1. Installation

1.1 Download and Install

Kafka download page: https://kafka.apache.org/downloads
Note that Kafka depends on ZooKeeper; for a standalone ZooKeeper deployment, see the earlier post "ZooKeeper Introduction and Basic Deployment". Kafka also ships with a ZooKeeper startup script, zookeeper-server-start.sh, in the bin directory of the Kafka installation; if you do not want a separate ZooKeeper install, you can use that script directly, as sketched below.
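
For reference, a minimal sketch of starting the bundled single-node ZooKeeper (suitable for testing only), using the config/zookeeper.properties file that ships with the Kafka distribution:

# Start the bundled ZooKeeper in the background
/usr/local/kafka/bin/zookeeper-server-start.sh -daemon /usr/local/kafka/config/zookeeper.properties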

wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.2.0/kafka_2.12-2.2.0.tgz
tar xf kafka_2.12-2.2.0.tgz -C /usr/local/
cd /usr/local
ln -s kafka_2.12-2.2.0 kafka

1.2 Configuration

Kafka's main configuration file is /usr/local/kafka/config/server.properties. An example configuration:

broker.id=0
listeners=PLAINTEXT://10.1.60.29:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.1.60.29:2181,10.1.61.195:2181,10.1.61.27:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
auto.create.topics.enable=true
delete.topic.enable=true

Parameter notes:

  • broker.id: unique identifier of each broker in the cluster, a non-negative integer. If the server's IP address changes but broker.id does not, consumers are unaffected.
  • listeners: the address and port Kafka listens on. In our testing, using 0.0.0.0 here caused an error.
  • num.network.threads: number of threads Kafka uses to handle network requests.
  • num.io.threads: number of threads Kafka uses for disk I/O.
  • socket.send.buffer.bytes: send buffer size.
  • socket.receive.buffer.bytes: receive buffer size.
  • socket.request.max.bytes: maximum accepted request size (guards against OOM from oversized requests).
  • log.dirs: directory where Kafka stores its data; all messages are written here. Multiple comma-separated paths may be given, and Kafka places each new partition in the least-used directory. Note that "least used" is decided by the number of partitions already assigned to a directory, not by free disk space.
  • num.partitions: default number of partitions for newly created topics.
  • num.recovery.threads.per.data.dir: number of threads used per data directory for log recovery.
  • log.retention.hours: how long messages are retained. log.retention.minutes and log.retention.ms are also supported; if several are set, the shortest one wins. Default: 7 days.
  • log.retention.check.interval.ms: how often data is checked for expiry.
  • log.segment.bytes: size of each segment data file within a partition, 1 GB by default; once a segment exceeds this size, a new segment file is created automatically.
  • zookeeper.connect: the ZooKeeper connection string; ZooKeeper stores the brokers' metadata. Multiple comma-separated values are allowed. Format: hostname:port/path, where hostname and port locate a ZooKeeper node, and /path is the znode under which Kafka stores its metadata (the root by default).
  • zookeeper.connection.timeout.ms: timeout for Kafka's connection to ZooKeeper.
  • group.initial.rebalance.delay.ms: in practice, every consumer that joins an empty consumer group triggers a rebalance of partition assignments; with 100 consumers joining, that means 100 rebalances, which gets very slow. This parameter delays the rebalance: if 100 consumers all join a group within 10 s, setting it to 10 s means only a single rebalance is performed, after 10 s. The default of 0 disables the feature.
  • auto.create.topics.enable: whether a topic is created automatically when a producer writes to a topic that does not exist.
  • delete.topic.enable: Kafka supports deleting topics, but by default the data is not physically removed. Set this to true so that deleting a topic also deletes its data files.

Note that Kafka brokers form a cluster through ZooKeeper, so no special clustering configuration is needed: each node only needs a distinct broker.id and the same ZooKeeper ensemble in zookeeper.connect, as sketched below.
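
For illustration, assuming the three ZooKeeper addresses above are also the three Kafka hosts kafka1/kafka2/kafka3 (an assumption for this sketch), the only lines that differ between the nodes' server.properties would be:

# kafka1 (10.1.60.29)
broker.id=0
listeners=PLAINTEXT://10.1.60.29:9092

# kafka2 (10.1.61.195)
broker.id=1
listeners=PLAINTEXT://10.1.61.195:9092

# kafka3 (10.1.61.27)
broker.id=2
listeners=PLAINTEXT://10.1.61.27:9092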

1.3 Start and Stop

# Start
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

# Check the Java processes
# jps
1394 QuorumPeerMain
13586 Logstash
27591 Kafka
27693 Jps

# Stop
/usr/local/kafka/bin/kafka-server-stop.sh
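
If you would rather drive these scripts through systemd, a sketch of a unit file is below (the paths match this installation; the ZooKeeper unit name and any Environment/User settings are assumptions to adapt):

[Unit]
Description=Apache Kafka broker
After=network.target zookeeper.service

[Service]
# kafka-server-start.sh -daemon forks into the background
Type=forking
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

Place it at /etc/systemd/system/kafka.service, then run systemctl daemon-reload && systemctl start kafka.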

1.4 Verification

Kafka's metadata can be inspected through ZooKeeper:

# Connect to ZooKeeper with the zkCli client

../zookeeper/bin/zkCli.sh

# The root now contains a number of Kafka-related znodes
[zk: localhost:2181(CONNECTED) 1] ls /
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, consumers, log_dir_event_notification, latest_producer_id_block, config]

# Check /brokers/ids: all three brokers have joined
[zk: localhost:2181(CONNECTED) 8] ls /brokers/ids
[0, 1, 2]

# Check /brokers/topics: empty for now, so no topic has been created yet
[zk: localhost:2181(CONNECTED) 3] ls /brokers/topics
[]
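
An individual broker's registration can be inspected as well; the JSON below is abridged and illustrative of its general shape, not verbatim output:

# View broker 0's registration data
[zk: localhost:2181(CONNECTED) 4] get /brokers/ids/0
{"endpoints":["PLAINTEXT://10.1.60.29:9092"],"host":"10.1.60.29","port":9092,...}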

1.5 Check Cluster Status

The status of each ZooKeeper node backing the Kafka cluster can be queried with echo status | nc localhost 2181 (ZooKeeper's stat four-letter command):

[admin@kafka1 ~]$ echo status | nc localhost 2181
Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT
Clients:
/127.0.0.1:34368[1](queued=0,recved=1,sent=0)
/127.0.0.1:34382[1](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/2
Received: 36
Sent: 36
Connections: 18
Outstanding: 0
Zxid: 0x8000000f7
Mode: follower
Node count: 2728
[admin@kafka2 ~]$ echo status | nc localhost 2181
Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT
Clients:
/192.168.6.219:54658[1](queued=0,recved=1,sent=0)
/127.0.0.1:47144[0](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/56
Received: 36
Sent: 36
Connections: 21
Outstanding: 0
Zxid: 0x8000000f7
Mode: follower
Node count: 2728
[admin@kafka3 ~]$ echo status | nc localhost 2181
Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT
Clients:
/127.0.0.1:34368[0](queued=0,recved=1,sent=0)
/192.168.6.223:48022[1](queued=0,recved=1,sent=0)

Latency min/avg/max: 0/0/33
Received: 36
Sent: 36
Connections: 8
Outstanding: 0
Zxid: 0x8000000f7
Mode: leader
Node count: 2728

As the Mode lines show, the ZooKeeper instance on kafka3 is currently the leader.
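
To check all three nodes in one go, a small loop works (assuming the hostnames kafka1, kafka2, and kafka3 resolve; substitute the IPs otherwise):

for h in kafka1 kafka2 kafka3; do
  echo -n "$h -> "; echo status | nc $h 2181 | grep Mode
done

# Expected output (illustrative):
# kafka1 -> Mode: follower
# kafka2 -> Mode: follower
# kafka3 -> Mode: leader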

2. Key Kafka Configuration Parameters

2.1 Broker Config

The main broker parameters (property, default, description):

  • broker.id (required): unique identifier of the broker.
  • log.dirs (default: /tmp/kafka-logs): directories where Kafka stores data, comma-separated; a new partition is placed in the directory currently holding the fewest partitions.
  • port (default: 9092): port on which the broker accepts client connections.
  • zookeeper.connect (default: null): ZooKeeper connection string, in the form hostname1:port1,hostname2:port2,hostname3:port3. One entry suffices, but listing several improves reliability. A chroot path may be appended (hostname1:port1,hostname2:port2,hostname3:port3/chroot/path) to keep this cluster's data separate from other applications sharing the ensemble; consumers must then use the same value.
  • message.max.bytes (default: 1000000): largest message the server will accept. Keep it consistent with the consumer's fetch.message.max.bytes, or producers may write messages too large for consumers to fetch.
  • num.io.threads (default: 8): I/O threads serving read/write requests; should be at least the number of disks on the server.
  • queued.max.requests (default: 500): size of the queue of requests awaiting the I/O threads; when it fills, the network threads stop accepting new requests.
  • socket.send.buffer.bytes (default: 100 * 1024): the SO_SNDBUF the server prefers for socket connections.
  • socket.receive.buffer.bytes (default: 100 * 1024): the SO_RCVBUF the server prefers for socket connections.
  • socket.request.max.bytes (default: 100 * 1024 * 1024): maximum request size, a guard against memory exhaustion; should be smaller than the Java heap size.
  • num.partitions (default: 1): partition count used when a topic is created without one; raising it (e.g. to 5) is recommended.
  • log.segment.bytes (default: 1024 * 1024 * 1024): segment file size; a new segment is rolled once this is exceeded. Can be overridden per topic.
  • log.roll.{ms,hours} (default: 24 * 7 hours): maximum age of a segment before a new one is rolled. Can be overridden per topic.
  • log.retention.{ms,minutes,hours} (default: 7 days): retention period of segment logs; logs older than this are deleted. Can be overridden per topic; consider lowering it for large data volumes.
  • log.retention.bytes (default: -1): maximum size per partition; data beyond it is deleted. Note this applies per partition, not per topic. Can be overridden per topic.
  • log.retention.check.interval.ms (default: 5 minutes): how often the retention policies are checked.
  • auto.create.topics.enable (default: true): automatic topic creation; setting it to false is recommended so that topics are managed strictly and producers cannot write to mistyped topic names.
  • default.replication.factor (default: 1): default replica count; 2 is recommended.
  • replica.lag.time.max.ms (default: 10000): if no fetch request arrives from a follower within this window, the leader removes it from the ISR (in-sync replicas).
  • replica.lag.max.messages (default: 4000): a replica lagging the leader by more than this many messages is removed from the ISR.
  • replica.socket.timeout.ms (default: 30 * 1000): timeout for replica requests to the leader.
  • replica.socket.receive.buffer.bytes (default: 64 * 1024): socket receive buffer for replication requests to the leader.
  • replica.fetch.max.bytes (default: 1024 * 1024): number of bytes to attempt to fetch per partition in the replicas' fetch requests to the leader.
  • replica.fetch.wait.max.ms (default: 500): maximum time a replica fetch request waits for data to arrive on the leader.
  • num.replica.fetchers (default: 1): threads used to replicate messages from leaders; increasing it raises I/O parallelism in the follower broker.
  • fetch.purgatory.purge.interval.requests (default: 1000): purge interval (in number of requests) of the fetch request purgatory.
  • zookeeper.session.timeout.ms (default: 6000): ZooKeeper session timeout; a broker that fails to heartbeat within it is considered dead. Too low, and brokers are marked dead spuriously; too high, and real failures are detected late.
  • zookeeper.connection.timeout.ms (default: 6000): timeout for establishing a connection to ZooKeeper.
  • zookeeper.sync.time.ms (default: 2000): how far a ZooKeeper follower may lag the ZooKeeper leader.
  • controlled.shutdown.enable (default: true): on shutdown, the broker first transfers all its leaderships to other brokers; recommended, as it improves cluster stability.
  • auto.leader.rebalance.enable (default: true): the controller periodically returns partition leadership to each partition's "preferred" replica when it is available.
  • leader.imbalance.per.broker.percentage (default: 10): leader imbalance allowed per broker; the controller rebalances leadership when this ratio is exceeded.
  • leader.imbalance.check.interval.seconds (default: 300): how often leader imbalance is checked.
  • offset.metadata.max.bytes (default: 4096): maximum metadata clients may store with their offsets.
  • connections.max.idle.ms (default: 600000): idle connection timeout; the socket processor threads close connections idle longer than this.
  • num.recovery.threads.per.data.dir (default: 1): threads per data directory for log recovery at startup and flushing at shutdown.
  • unclean.leader.election.enable (default: true): whether replicas outside the ISR may be elected leader as a last resort, at the risk of data loss.
  • delete.topic.enable (default: false): enables topic deletion; setting it to true is recommended.
  • offsets.topic.num.partitions (default: 50): partitions of the offsets commit topic; changing it after deployment is currently unsupported, so a higher value (e.g. 100-200) is recommended for production.
  • offsets.topic.retention.minutes (default: 1440): offsets older than this are marked for deletion; the actual purge happens when the log cleaner compacts the offsets topic.
  • offsets.retention.check.interval.ms (default: 600000): how often the offset manager checks for stale offsets.
  • offsets.topic.replication.factor (default: 3): replication factor of the offsets commit topic; a higher value (e.g. 3 or 4) is recommended for availability. If fewer brokers exist when the topic is created, it is created with fewer replicas.
  • offsets.topic.segment.bytes (default: 104857600): segment size of the offsets topic; since it is compacted, keep this relatively low for faster compaction and loading.
  • offsets.load.buffer.size (default: 5242880): batch size (in bytes) used to read offsets segments into the offset manager's cache when a broker becomes the offset manager for a set of consumer groups (i.e. leader of an offsets-topic partition).
  • offsets.commit.required.acks (default: -1): acknowledgements required before an offset commit is accepted, analogous to the producer's ack setting; in general the default should not be overridden.
  • offsets.commit.timeout.ms (default: 5000): an offset commit is delayed until this timeout or until the required number of replicas receive it, analogous to the producer request timeout.

2.2 Producer Config

The main producer parameters (property, default, description):

  • metadata.broker.list (no default): broker list the producer queries at startup; may be any subset of the cluster. It is used only to fetch topic metadata, from which the producer picks suitable brokers to open socket connections to. Format: host1:port1,host2:port2.
  • request.required.acks (default: 0): acknowledgements required per request: 0 = do not wait for an ack, 1 = wait for the leader, -1 = wait for all in-sync replicas.
  • request.timeout.ms (default: 10000): how long the broker may take to ack before an error is returned to the client.
  • producer.type (default: sync): sync or async. In async mode the producer pushes data in batches, which greatly improves broker throughput; async is recommended.
  • serializer.class (default: kafka.serializer.DefaultEncoder): message serializer; the default serializes to byte[].
  • key.serializer.class (no default): key serializer; defaults to the same as serializer.class.
  • partitioner.class (default: kafka.producer.DefaultPartitioner): partitioner; the default hashes the key.
  • compression.codec (default: none): compression format for producer messages: "none", "gzip", or "snappy".
  • compressed.topics (default: null): topics to compress. If a codec is selected above, compression applies only to the topics listed here; if empty, it applies to all topics.
  • message.send.max.retries (default: 3): retries when a send fails; network problems can cause repeated retries.
  • retry.backoff.ms (default: 100): before each retry, the producer refreshes the metadata of the relevant topics to see whether a new leader has been elected; since leader election takes a moment, this is how long the producer waits before refreshing.
  • topic.metadata.refresh.interval.ms (default: 600 * 1000): the producer refreshes topic metadata on failures (partition missing, leader not available, ...) and also polls regularly (default: every 10 min, i.e. 600000 ms). A negative value refreshes only on failure; zero refreshes after every message sent (not recommended). Note the refresh happens only after a message is sent, so a producer that never sends never refreshes.
  • queue.buffering.max.ms (default: 5000): in async mode, how long messages are buffered; e.g. with 1000, one second's worth of data is sent at a time, which greatly increases broker throughput at the cost of latency.
  • queue.buffering.max.messages (default: 10000): maximum messages buffered in async mode; beyond it the producer either blocks or drops messages.
  • queue.enqueue.timeout.ms (default: -1): how long the producer blocks when the limit above is reached. 0: never block, drop the message when the buffer is full; -1: block indefinitely, never drop.
  • batch.num.messages (default: 200): batch size in async mode; the producer sends once this many messages accumulate.
  • send.buffer.bytes (default: 100 * 1024): socket write buffer size.
  • client.id (default: ""): user-specified string sent with each request to help trace calls; should logically identify the application making the request.

2.3 Consumer Config

The main consumer parameters (property, default, description):

  • group.id (no default): consumer group ID; consumers with the same group.id belong to the same group.
  • zookeeper.connect (no default): the consumer's ZooKeeper connection string; must match the broker's configuration (including any chroot path).
  • consumer.id (default: null): generated automatically if unset.
  • socket.timeout.ms (default: 30 * 1000): socket timeout for network requests; the effective timeout is max.fetch.wait + socket.timeout.ms.
  • socket.receive.buffer.bytes (default: 64 * 1024): socket receive buffer for network requests.
  • fetch.message.max.bytes (default: 1024 * 1024): maximum message size fetched per topic partition. The consumer buffers up to this much per partition in memory, so the value bounds the consumer's memory use. It must be at least the server's maximum allowed message size, or producers may write messages the consumer cannot fetch.
  • num.consumer.fetchers (default: 1): number of fetcher threads.
  • auto.commit.enable (default: true): periodically save the current consumption offset to ZooKeeper; after a failure and restart, consumption resumes from the saved offset.
  • auto.commit.interval.ms (default: 60 * 1000): how often offsets are committed to ZooKeeper.
  • queued.max.message.chunks (default: 2): number of message chunks buffered for consumption; each chunk can hold up to fetch.message.max.bytes of data.
  • rebalance.max.retries (default: 4): when a new consumer joins a group, the consumers rebalance to assign partitions to each consumer; if group membership changes during the assignment, the rebalance fails and is retried, up to this many attempts.
  • fetch.min.bytes (default: 1): minimum data the server returns for a fetch request; the request waits until that much accumulates.
  • fetch.wait.max.ms (default: 100): maximum time the server blocks before answering a fetch when fetch.min.bytes is not yet satisfied.
  • rebalance.backoff.ms (default: 2000): backoff between rebalance retries.
  • refresh.leader.backoff.ms (default: 200): backoff before re-determining the leader of a partition that has just lost it.
  • auto.offset.reset (default: largest): what to do when ZooKeeper holds no initial offset or the offset is out of range. smallest: reset to the smallest offset; largest: reset to the largest offset; anything else: throw an exception to the consumer.
  • consumer.timeout.ms (default: -1): throw a timeout exception if no message arrives within this period.
  • exclude.internal.topics (default: true): whether messages from internal topics (such as offsets) should be exposed to the consumer.
  • zookeeper.session.timeout.ms (default: 6000): ZooKeeper session timeout; a consumer failing to heartbeat for this long is considered dead and a rebalance occurs.
  • zookeeper.connection.timeout.ms (default: 6000): maximum time the client waits to establish a ZooKeeper connection.
  • zookeeper.sync.time.ms (default: 2000): how far a ZooKeeper follower may lag the ZooKeeper leader.

2.4 Topic Configs

Many of the broker-level log settings above (log.segment.bytes, log.retention.ms, log.roll.ms, and so on) can also be set per topic, in which case the topic-level value overrides the broker default.
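
A sketch of setting, inspecting, and removing a per-topic override with kafka-configs.sh (the topic name and retention value are just examples):

# Override retention for one topic to 1 day
# ./bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name myfirsttopic --add-config retention.ms=86400000

# Show the topic's current overrides
# ./bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type topics --entity-name myfirsttopic

# Remove the override, falling back to the broker default
# ./bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name myfirsttopic --delete-config retention.ms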

3. Basic Operations

3.1 Create a Topic

With Kafka deployed, the verification above showed that no topics exist yet, so create one:

# ./bin/kafka-topics.sh --create --zookeeper localhost:2181  --replication-factor 2 --partitions 3 --topic myfirsttopic 
Created topic myfirsttopic.

Parameters:

  • --create: create a topic
  • --zookeeper: ZooKeeper host list, comma-separated for multiple servers; here Kafka and ZooKeeper run on the same server, so we connect to the local port 2181
  • --replication-factor: number of replicas for the topic (cannot exceed the broker count; see the example below)
  • --partitions: number of partitions for the topic
  • --topic: topic name
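
Since the replication factor cannot exceed the number of available brokers, asking for four replicas on this three-broker cluster fails (error text abridged):

# ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 4 --partitions 3 --topic badtopic
Error while executing topic command : Replication factor: 4 larger than available brokers: 3.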

3.2 List Existing Topics

Topic information can be seen through ZooKeeper as shown earlier; from here on we use the Kafka command-line tools directly:

# ./bin/kafka-topics.sh --zookeeper localhost:2181 --list                                                               
myfirsttopic

3.3 Describe a Topic

# Show details of myfirsttopic
# ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic myfirsttopic
Topic:myfirsttopic PartitionCount:3 ReplicationFactor:2 Configs:
Topic: myfirsttopic Partition: 0 Leader: 0 Replicas: 0,2 Isr: 0
Topic: myfirsttopic Partition: 1 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: myfirsttopic Partition: 2 Leader: 2 Replicas: 2,1 Isr: 2,1

Parameters:

  • --describe: show detailed information about a topic

Output fields:

  • Leader: the broker currently serving reads and writes for this partition
  • Replicas: the brokers holding a replica of this partition
  • Isr: the in-sync replicas, i.e. the subset of Replicas currently caught up with the leader

3.4 Increase a Topic's Partition Count

# ./bin/kafka-topics.sh --zookeeper localhost:2181 --alter --partitions 6 --topic myfirsttopic
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!

# ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic myfirsttopic
Topic:myfirsttopic PartitionCount:6 ReplicationFactor:2 Configs:
Topic: myfirsttopic Partition: 0 Leader: 0 Replicas: 0,2 Isr: 0
Topic: myfirsttopic Partition: 1 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: myfirsttopic Partition: 2 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: myfirsttopic Partition: 3 Leader: 0 Replicas: 0,2 Isr: 0,2
Topic: myfirsttopic Partition: 4 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: myfirsttopic Partition: 5 Leader: 2 Replicas: 2,1 Isr: 2,1
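
Note that the partition count can only grow; attempting to shrink it is rejected (error text abridged and illustrative):

# ./bin/kafka-topics.sh --zookeeper localhost:2181 --alter --partitions 3 --topic myfirsttopic
Error while executing topic command : The number of partitions for a topic can only be increased. ...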

3.5 Change a Topic's Replication Factor

# Create a topic named mysecondtopic with 2 partitions and 1 replica
# ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 2 --topic mysecondtopic
Created topic mysecondtopic.

# Show details of the new topic
# ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mysecondtopic
Topic:mysecondtopic PartitionCount:2 ReplicationFactor:1 Configs:
Topic: mysecondtopic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: mysecondtopic Partition: 1 Leader: 1 Replicas: 1 Isr: 1

# Expand the replica set of the partition led by broker 0 from [0] to [0,2], and that of the partition led by broker 1 from [1] to [1,2].
# This requires a JSON file describing the target assignment; create it first:
# cat partitions-to-move.json
{
  "partitions": [
    {
      "topic": "mysecondtopic",
      "partition": 0,
      "replicas": [0,2]
    },
    {
      "topic": "mysecondtopic",
      "partition": 1,
      "replicas": [1,2]
    }
  ],
  "version": 1
}

# Execute the reassignment
# ./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file ./partitions-to-move.json --execute
Current partition replica assignment

{"version":1,"partitions":[{"topic":"mysecondtopic","partition":1,"replicas":[1],"log_dirs":["any"]},{"topic":"mysecondtopic","partition":0,"replicas":[0],"log_dirs":["any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.

# Describe the topic again: the replication factor has changed as expected
# ./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mysecondtopic
Topic:mysecondtopic PartitionCount:2 ReplicationFactor:2 Configs:
Topic: mysecondtopic Partition: 0 Leader: 0 Replicas: 0,2 Isr: 0
Topic: mysecondtopic Partition: 1 Leader: 1 Replicas: 1,2 Isr: 1
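
The reassignment runs asynchronously; the same JSON file can be passed back with --verify to check progress (output illustrative):

# ./bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file ./partitions-to-move.json --verify
Status of partition reassignment:
Reassignment of partition mysecondtopic-0 completed successfully
Reassignment of partition mysecondtopic-1 completed successfully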

3.6 Delete a Topic

# Delete the topic
# ./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic myfirsttopic
Topic myfirsttopic is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

# List topics: myfirsttopic has been deleted
# ./bin/kafka-topics.sh --zookeeper localhost:2181 --list
__consumer_offsets
mysecondtopic

3.7 Produce Messages with the Console Producer

# ./bin/kafka-console-producer.sh  --broker-list 10.1.60.29:9092 --topic mysecondtopic                                                
>hello kafka!
>hello world!
>just a test!
>
>hi world!
>hahahaha!
>
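
By default the console producer sends unkeyed messages, which are spread across partitions. To send keyed messages (so that the key-based ordering caveat from 3.4 applies), the console producer's parse.key and key.separator properties can be used; the key/value lines below are just examples:

# ./bin/kafka-console-producer.sh --broker-list 10.1.60.29:9092 --topic mysecondtopic --property "parse.key=true" --property "key.separator=:"
>user1:hello with a key
>user1:same key, same partition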

3.8 Consume Messages with the Console Consumer

# ./bin/kafka-console-consumer.sh --bootstrap-server 10.1.60.29:9092 --topic mysecondtopic --from-beginning       
hello kafka!
just a test!
hi world!
hello world!

hahahaha!
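
Notice that the messages do not come back in the exact order they were produced: Kafka guarantees ordering only within a single partition, and mysecondtopic has two. To consume as a named group and then inspect that group's committed offsets and lag, a sketch (the group name is just an example):

# ./bin/kafka-console-consumer.sh --bootstrap-server 10.1.60.29:9092 --topic mysecondtopic --group mygroup

# Show the group's partitions, committed offsets, and lag
# ./bin/kafka-consumer-groups.sh --bootstrap-server 10.1.60.29:9092 --describe --group mygroup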