Welcome to the Spring Boot + Apache Kafka tutorial series. In the previous lecture, we implemented the Wikimedia producer and its event handler. In this lecture, we will run and test the Wikimedia producer.
Lecture - #18 - Run and Test Wikimedia Producer
Transcript:
Well, in order to run this, we need to call the producer's sendMessage() method. So go to the main class of this project, that is, the Spring Boot application class, and let's implement the CommandLineRunner interface. It provides a run() method, and this method is executed whenever the application starts. Here, what I want to do is inject a private WikimediaChangesProducer field and mark it with the @Autowired annotation. Now that we have injected this producer, we simply call its sendMessage() method inside run(). So whenever we run this Spring Boot application through the main class, the producer object is instantiated and its sendMessage() method gets called (a minimal sketch of this main class is shown right after this walkthrough). Now let's run the Spring Boot application and see how this producer works.

Well, you can see in the console that we haven't started the Kafka broker. If the Kafka broker is not running on your local machine, you will get these kinds of warning messages. So let's go and start the ZooKeeper service as well as the Kafka broker. Let me open the terminal and start both services. I am using a Mac, so I open the Terminal, but if you are using Windows, make sure you open the Command Prompt. Let me maximize it a bit and zoom in. First, go into the Kafka installation directory and run the commands from there. In order to run ZooKeeper, we need to trigger this command: bin/zookeeper-server-start.sh config/zookeeper.properties. Just hit enter, and now our ZooKeeper service is up and running on the local machine.

Similarly, let's go ahead and start the Kafka broker service. To do that, open a new shell (Shell, then New Window); if you are using Windows, make sure you open a new Command Prompt. Let me maximize it a bit, zoom in, and go into the Kafka directory. In order to run the Kafka broker, we need to run this command, passing the server properties file: bin/kafka-server-start.sh config/server.properties. Hit enter, and now our Kafka broker is up and running on port 9092. In order to confirm whether the broker is running or not, we can look for this log line: "Recorded new controller, from now on will use broker localhost:9092". Once you see this log, you can be sure that your Kafka broker is running on port 9092.

All right, now let's go back to our Spring Boot project, run this application, and see how this producer works. So let me start this Spring Boot application, and again it fails with an error. Let me stop the project and check what the error is. It says an unexpected error occurred, failed to construct. There must be some issue in the configuration. So let me go to the application.properties file and check what is missing. You can see the bootstrap server address here; there is a typo, and it should be localhost. Let me fix this file. Now let's run the Spring Boot application and see how this works. And there we go: you can see in the console that event data is being retrieved from the source; you can see the "event data" log here.
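For reference, here is a minimal sketch of what that main class can look like after this change. It assumes the producer bean and method names from the previous lecture (a WikimediaChangesProducer class with a sendMessage() method) and uses an illustrative application class name, so adjust the names to match your own project:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootProducerApplication implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootProducerApplication.class, args);
    }

    // Producer implemented in the previous lecture (assumed to live in the same base package)
    @Autowired
    private WikimediaChangesProducer wikimediaChangesProducer;

    // CommandLineRunner's run() is called once the application context has started
    @Override
    public void run(String... args) throws Exception {
        // Kick off the producer so it starts streaming Wikimedia events to the Kafka topic
        wikimediaChangesProducer.sendMessage();
    }
}

Because the main class implements CommandLineRunner, Spring Boot invokes run() automatically right after startup, which is what triggers the producer.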
So in the Wikimedia event handler we logged this statement, "event data" followed by the message. It means that the producer we have written to retrieve real-time event data from Wikimedia is working as expected; you can see a lot of data coming in, basically a huge amount of real-time Wikimedia stream data. Okay, so let me stop this, otherwise it will run indefinitely. Let me stop the server.

Now, what we will do is verify whether the producer we have written has actually sent the real-time stream data to the Kafka topic. In order to verify that, we can trigger a command from the command line. Basically, from the next lecture onwards we will create a Kafka consumer to read the real-time stream data from the Kafka topic, but to verify it quickly we can check from the command line. So go to the terminal, open a new shell, zoom in a bit, and go into the Kafka directory. Then run the Kafka console consumer script, bin/kafka-console-consumer.sh, and pass the topic name. Let me quickly change the topic name in this command; the topic name we have given is wikimedia_recentchange (the full command is shown at the end of this section). And here you can see that the consumer is reading real-time streaming data from the Kafka topic.

Now let's go ahead and run the Spring Boot project again and see how the Kafka producer reads the real-time stream data from Wikimedia and writes it to the Kafka topic, and then we'll watch this terminal. So let's go to the Spring Boot project and run it, and you can see the Kafka producer starts reading the real-time stream data from Wikimedia. Similarly, if you go to the terminal, you can see the consumer reading those events, the real-time stream data, from the Kafka topic. Just take a look at the console as well as the terminal: as soon as the Kafka producer reads the real-time stream data from Wikimedia and writes it to the Kafka topic, the console consumer reads that data from the Kafka topic and prints it here in the terminal. So it means that the Kafka producer that reads the real-time stream data from Wikimedia is working as expected. In the next section of lectures, we will start implementing a Kafka consumer to read this data from the Kafka topic, and then we'll see how we can write that data to the database. All right, I will see you in the next section of the lectures.
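For a quick reference, the console consumer command used for this check looks roughly like the following. It assumes the topic name wikimedia_recentchange from the earlier lectures and a broker listening on localhost:9092, so adjust both to your setup:

bin/kafka-console-consumer.sh --topic wikimedia_recentchange --from-beginning --bootstrap-server localhost:9092

The --from-beginning flag tells the consumer to print everything already stored in the topic rather than only new events, which makes it easier to confirm that the producer really wrote data.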