Aggregation and Windowing Operation in Spark Streaming

In this blog, let us see how we can perform aggregations and windowing operations on an Apache Spark Streaming DStream.

Let us consider an example where we receive data from Twitter based on some keyword filters. You can consider “Trump” as the keyword for now. We will be receiving a continuous stream of tweets that contain the keyword “Trump” in their text.
Now we have to find out the location from which we are receiving the maximum number of tweets matching the filter condition, i.e., we aggregate the tweets by location and pick the location with the highest count.

We also want the counts per time window. For example, let’s consider a window of 30 seconds and a sliding interval of 10 seconds. This means that every 10 seconds we should get the counts (and the maximum) over the last 30 seconds.

Format: HH:MM:SS
00:00:00 – 00:00:10 => 1st window
00:00:00 – 00:00:20 => 2nd window
00:00:00 – 00:00:30 => 3rd window
00:00:10 – 00:00:40 => 4th window
00:00:20 – 00:00:50 => 5th window
00:00:30 – 00:01:00 => 6th window

(The first three windows are partial because fewer than 30 seconds of data have arrived yet.)

Now let us write code.

First, we are going to import all the required packages.
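The exact imports depend on your build, but with the spark-streaming-twitter connector and twitter4j they would look roughly like this:

```scala
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils
import twitter4j.auth.OAuthAuthorization
import twitter4j.conf.ConfigurationBuilder
```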

Now we are going to set the log level to ERROR so that Spark does not generate so many logs, and then create a Spark Streaming context.
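A minimal sketch of this step; the application name and the local master are my own choices, and the batch interval here is 10 seconds:

```scala
// Keep the console clean: print only errors.
Logger.getRootLogger.setLevel(Level.ERROR)

// Local Spark configuration; "local[*]" gives the receiver and the
// processing tasks enough threads to run in parallel.
val sparkConf = new SparkConf().setMaster("local[*]").setAppName("TwitterWindowDemo")

// Streaming context with a 10-second batch interval.
val ssc = new StreamingContext(sparkConf, Seconds(10))
```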

Now let us create a ConfigurationBuilder and set all our access keys and access token secret keys on it. Let’s pass this ConfigurationBuilder object to OAuth and then create a Twitter stream using it.

Note: Please don’t use the keys available in the screenshot. I have already deleted them. Please create your own keys by signing up for a twitter developer account.

Now let’s filter out to get only English language tweets.

Now let us filter out those tweets whose location is null and create a tuple of (location, 1). Then let us apply reduceByKeyAndWindow, providing the window duration and the slide duration.
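A sketch of this step, with the 30-second window and 10-second slide from the example above (the variable names are mine):

```scala
// Drop tweets without a user location, then pair each remaining
// tweet's location with a count of 1.
val locationPairs = englishTweets
  .filter { status =>
    status.getUser != null &&
    status.getUser.getLocation != null &&
    status.getUser.getLocation.nonEmpty
  }
  .map(status => (status.getUser.getLocation, 1))

// Sum the counts per location over the last 30 seconds,
// recomputed every 10 seconds.
val locationCounts = locationPairs.reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b, // reduce function
  Seconds(30),               // window duration
  Seconds(10)                // slide duration
)
```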

Then let us find out where we are getting the maximum number of tweets over the last 30 seconds, sliding every 10 seconds, by ordering the aggregated data.
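One way to do the ordering is to sort each windowed RDD by count; printing the top entries then shows the location with the maximum tweets. The use of transform and sortBy here is my choice of implementation:

```scala
// Sort every windowed RDD by count in descending order, so the
// location with the most tweets in the last 30 seconds comes first.
val topLocations = locationCounts.transform(rdd => rdd.sortBy(_._2, ascending = false))

// Print the top 10 (location, count) pairs for each window.
topLocations.print(10)
```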

Now let us start the streaming context and make it wait until streaming is terminated, so that it keeps consuming Twitter data and processing it.
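The closing lines would look like this:

```scala
// Start receiving and processing the stream, and block until the
// streaming application is stopped or fails.
ssc.start()
ssc.awaitTermination()
```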

Now let us see how many counts we get for the last 30 seconds, refreshed every 10 seconds.

Happy Learning!
