Welcome to the Spring Boot + Apache Kafka Tutorial series. In this lecture, we will write the code to save Wikimedia data into the MySQL database.
Lecture - #22 - Save Wikimedia Data into MySQL Database
Transcript:
Welcome back. In this lecture we'll continue with our Spring Boot Kafka project and see how to save the Wikimedia data into the MySQL database. We have already built a Kafka consumer which consumes the data from the topic; next, we need to save that data in the MySQL database. So in this lecture, let's see how to save that Wikimedia data in the database table.

Let's head over to IntelliJ IDEA. First, let's quickly create a JPA entity to store the records in the database table. Go to the kafka-consumer-database project, go to the main package, right-click, choose New and then Package, and name the package "entity". Within this entity package, create a class and name it WikimediaData. Then add two fields: private Long id and private String wikiEventData.

Now let's annotate this class. In order to make it a JPA entity, we have to annotate it with the @Entity annotation from JPA. Next, let's use the @Table annotation to provide the table details; let's name the table wikimedia_recent_change. Then add the @Id annotation to make the id field the primary key, and use the @GeneratedValue annotation to provide the primary key generation strategy; here we'll use IDENTITY. On the event data field we also need the @Lob annotation, because the Wikimedia event payload is quite large, and @Lob lets us store large values.

Next we need getter and setter methods. We have already added the Lombok library, so we can leverage the Lombok annotations to generate them automatically: add @Getter to create getter methods for these fields, and also add @Setter. These two annotations will basically create the getter and setter methods for the two private fields.

Now let's create a Spring Data JPA repository for this entity. Go to the main package, right-click, choose New and then Package, and name it "repository". Within this repository package, create an interface; let's call it WikimediaDataRepository. Extend this interface from the JpaRepository interface and pass both type arguments: the entity type, WikimediaData, followed by Long as the ID type. Now we have created a Spring Data repository, and it will basically give us CRUD methods to perform database operations on the given entity.

Next, go to the Kafka consumer class, KafkaDatabaseConsumer. Here we'll inject the Spring Data JPA repository and then call its save() method to save the event data. Declare the repository field of type WikimediaDataRepository, call it dataRepository, and use constructor-based dependency injection, so generate the constructor here. We don't need to add the @Autowired annotation, because this bean contains only a single constructor. Now that we have injected the WikimediaDataRepository, in the consume() method create an object of the WikimediaData entity with new WikimediaData(), set the event message on it, and then call dataRepository.save() and pass the wikimediaData object. Minimal sketches of the entity, the repository, and the consumer are shown below.
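Below is a minimal sketch of the three pieces described above, assuming the package names com.example.* and the class names used in this series (WikimediaData, WikimediaDataRepository, KafkaDatabaseConsumer); adjust the packages, topic name, and consumer group id to match your own project. Depending on your Spring Boot version, the persistence imports may be javax.persistence instead of jakarta.persistence.

First, the JPA entity mapped to the wikimedia_recent_change table:

```java
package com.example.entity; // assumed package name

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Lob;
import jakarta.persistence.Table;
import lombok.Getter;
import lombok.Setter;

// JPA entity for storing Wikimedia recent-change events
@Entity
@Table(name = "wikimedia_recent_change")
@Getter
@Setter
public class WikimediaData {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // @Lob because a single Wikimedia event payload can be quite large
    @Lob
    private String wikiEventData;
}
```

Next, the Spring Data JPA repository, which provides the CRUD methods (including save()) without any implementation code:

```java
package com.example.repository; // assumed package name

import com.example.entity.WikimediaData;
import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data JPA repository for the WikimediaData entity
public interface WikimediaDataRepository extends JpaRepository<WikimediaData, Long> {
}
```

Finally, the consumer with constructor-based injection of the repository. The topic name and group id shown here are assumptions carried over from the earlier lectures; use whatever values your project is configured with:

```java
package com.example.consumer; // assumed package name

import com.example.entity.WikimediaData;
import com.example.repository.WikimediaDataRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

// Consumes Wikimedia events from the Kafka topic and persists them to MySQL
@Service
public class KafkaDatabaseConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaDatabaseConsumer.class);

    private final WikimediaDataRepository dataRepository;

    // Single constructor, so @Autowired is not required for injection
    public KafkaDatabaseConsumer(WikimediaDataRepository dataRepository) {
        this.dataRepository = dataRepository;
    }

    @KafkaListener(topics = "wikimedia_recentchange", groupId = "myGroup") // assumed topic/group
    public void consume(String eventMessage) {
        LOGGER.info("Event message received -> {}", eventMessage);

        WikimediaData wikimediaData = new WikimediaData();
        wikimediaData.setWikiEventData(eventMessage);

        dataRepository.save(wikimediaData);
    }
}
```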
Perfect. So, what we have done: we have injected the WikimediaDataRepository and then called its save() method to save the WikimediaData object.

Now let's go ahead and run both Spring Boot projects and see how the data gets stored in the MySQL database. Go to the main entry class of the consumer project, SpringBootConsumerApplication, and run the Spring Boot project from there; you can see that the Kafka consumer is running. Similarly, go to the producer project, kafka-producer-wikimedia, open its main entry class, SpringBootProducerApplication, and run that project as well. It will basically retrieve the real-time stream data from Wikimedia, and you can see the logs in the console. In the Spring Boot consumer application you can see the data being stored in the database: the console shows the insert statements, insert into wikimedia_recent_change, along with the values being passed. Let me stop both instances. Looking at the logs, you can see the insert statements for this table, which means we are successfully storing the Wikimedia data in the database.

Now let's go to MySQL Workbench, refresh the schema, go to Tables, and select the rows from the table. You can see the Wikimedia event data successfully stored in this table. (An optional programmatic row-count check is sketched after the recap below.)

Let me recap what we have done so far. We created a multi-module Maven project, and within it we created two modules: one for the Kafka producer and another for the Kafka consumer. The Kafka producer project implements a Kafka producer that reads the real-time stream data from Wikimedia and writes that data to the Kafka topic. The Kafka consumer project consumes the real-time stream data from the Kafka topic and writes that data to the MySQL database. I hope you understood how to use Apache Kafka as a broker to exchange messages between a producer and a consumer.
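As a purely optional extra (not part of the lecture), if you want a quick programmatic check in addition to MySQL Workbench, a hypothetical CommandLineRunner like the sketch below could log how many rows the consumer has persisted. It reuses the WikimediaDataRepository shown earlier; the class and bean names here are illustrative only.

```java
package com.example.consumer; // assumed package name

import com.example.repository.WikimediaDataRepository;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Optional, illustrative check: logs the number of persisted Wikimedia events at startup
@Configuration
public class RowCountLogger {

    @Bean
    CommandLineRunner logRowCount(WikimediaDataRepository dataRepository) {
        return args -> System.out.println(
                "Rows in wikimedia_recent_change: " + dataRepository.count());
    }
}
```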