
Aggregation and Windowing Operation in Spark Streaming

In this blog, let us see how we can perform aggregation and windowing operations on an Apache Spark Streaming DStream.

Let us consider an example where we will be receiving data from Twitter based on some keyword filters; for now, consider “Trump” as the keyword. We will receive a continuous stream of tweets whose message text contains the keyword “Trump”.
Now we have to find out the location from which we are receiving the maximum number of tweets matching the filter condition, i.e., we aggregate the tweets by location and find the location with the highest count.

We also want to know the counts within a time window. For example, let’s use a window length of 30 seconds and a sliding interval of 10 seconds. This means that every 10 seconds we should get the counts (and the maximum) for the last 30 seconds.

Format : HH:MM:SS
00:00:00 – 00:00:10 => 1st window
00:00:00 – 00:00:20 => 2nd window
00:00:00 – 00:00:30 => 3rd window
00:00:10 – 00:00:40 => 4th window
00:00:20 – 00:00:50 => 5th window
00:00:30 – 00:01:00 => 6th window

(The first three windows are shorter than 30 seconds because the stream has just started; from the 3rd window onward, each window covers a full 30 seconds.)

Now let us write code.

First, we are going to import all the required packages.
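
Below is a minimal sketch of the imports such a job typically needs, assuming the spark-streaming-twitter integration and the twitter4j library are on the classpath:

// Spark Streaming, the Twitter receiver, twitter4j auth classes, and log4j
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils
import twitter4j.conf.ConfigurationBuilder
import twitter4j.auth.OAuthAuthorization
import org.apache.log4j.{Level, Logger}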

Now we are going to set the log level to “error” so that it does not generate too many logs, and then create a Spark streaming context.
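
A sketch of this step is shown below; the application name, local master, and the 10-second batch interval are illustrative choices (the batch interval only needs to divide the 10-second sliding interval evenly):

// Reduce log noise, then create a StreamingContext with a 10-second batch interval
Logger.getRootLogger.setLevel(Level.ERROR)

val conf = new SparkConf().setAppName("TwitterAggregation").setMaster("local[*]")
val ssc  = new StreamingContext(conf, Seconds(10))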

Now let us create a configuration builder and set all our access keys and access token secret keys on it. Let’s pass this configuration builder object to OAuth and then create a Twitter stream using it.
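
A hedged sketch of this step follows; the placeholder strings must be replaced with your own Twitter developer credentials:

// Build the twitter4j configuration with your own credentials (placeholders below)
val cb = new ConfigurationBuilder()
cb.setDebugEnabled(true)
  .setOAuthConsumerKey("<your-consumer-key>")
  .setOAuthConsumerSecret("<your-consumer-secret>")
  .setOAuthAccessToken("<your-access-token>")
  .setOAuthAccessTokenSecret("<your-access-token-secret>")

// Wrap the configuration in an OAuth authorization and open the filtered stream
val auth    = new OAuthAuthorization(cb.build())
val filters = Seq("Trump")
val tweets  = TwitterUtils.createStream(ssc, Some(auth), filters)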

Note: Please don’t reuse the keys shown in the screenshot; they have already been deleted. Create your own keys by signing up for a Twitter developer account.

Now let’s filter out to get only English language tweets.

Now let us filter out those tweets whose location is null and map each remaining tweet to a tuple of (location, 1). Then let us apply reduceByKeyAndWindow, providing the window length and the sliding interval.
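
A sketch of this aggregation, building on the englishTweets stream from the previous step:

// Drop tweets with no user location, map to (location, 1) pairs, and
// aggregate the counts over a 30-second window that slides every 10 seconds
val locationCounts = englishTweets
  .filter(status => status.getUser != null && status.getUser.getLocation != null)
  .map(status => (status.getUser.getLocation, 1))
  .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))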

Then let us find out where we are getting the most tweets from within each 30-second window, sliding every 10 seconds, by ordering the aggregated data.
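
One way to do this ordering is with transform, which lets us apply RDD operations inside the DStream and sort each windowed RDD by count:

// Sort each windowed RDD by count in descending order so the top location comes first
val topLocations = locationCounts.transform(rdd => rdd.sortBy(_._2, ascending = false))
topLocations.print()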

Now let us start streaming and make the application wait until the streaming is terminated, so that it keeps receiving Twitter data and processing it.
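
This final step is simply:

// Start the streaming computation and block until it is terminated
ssc.start()
ssc.awaitTermination()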

Now, when we run the application, every 10 seconds we can see the tweet counts per location for the last 30 seconds.

Happy Learning!

Divyesh Aegis
Divyesh Aegis is a technical writer at Aegis Softtech with more than 8 years of experience writing about technologies such as ASP.NET, Java, Big Data, Hadoop, Dynamics AX, and CRM, and has a working knowledge of computer programming.
