Oliver Sarfas β€’ September 1, 2019

laravel, programming, web scraper, google chrome

I was recently involved in a hackathon, more specifically LaraHack. The theme was Community. Many took this as a prompt to bring people together through a social media or chat interface. I went a different route, and found a use for a tool that I'd never considered before.


I decided to build a site in tribute to the US TV show, Community. For this, I needed a database full of every scripted line from the show, along with the episode and season that the line was said in. After some looking around, I googled 'Community TV show full scripts', clicked the first link 😆, and came across this.

πŸŽ‰ Jackpot πŸŽ‰

All the lines, episode names, and seasons. Now, all I need to do is get that data. There is no way that I was going to click through every single episode, copy and paste the script, then extract EACH AND EVERY LINE.

Automating the 'extraction'

At a previous role, I had to automate a web-based process: logging into a solution, doing some administration, then logging out. We achieved this using Selenium and a couple of other tools. I'd always wanted to achieve this with PHP, however, as it's my preferred language.

This made me consider using Laravel Dusk to automate / scrape the data from my script pages.

Scraper

Dusk allows developers to build browser-based tests to ensure that pages and solutions work as intended throughout the development process. For instance, you can tell your test to go to a page and make sure that the <h1> tag says 'Expected Title'. Taking this to the next level, you can also get the test to pull out the content of any given query selector.

So by using this, I can automate the process of hitting each script page and pulling the content from a specific element on the page. This even works for SPAs, as you can tell the browser to wait for the JavaScript to finish.
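For illustration, the core Dusk pattern looks roughly like this - a sketch with a placeholder URL and selectors, inside a test class that has `Laravel\Dusk\Browser` imported:

```php
$this->browse(function (Browser $browser) {
    $browser->visit('https://example.com/some-page')
            ->waitFor('.page-content'); // wait for JS-rendered content (SPAs)

    // Pull the text of any element by CSS selector
    $heading = $browser->text('h1');
    $content = $browser->text('.page-content');
});
```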

What pages do we hit?


There is a little human requirement to this: determining the pages to be hit. In my case, I wanted to hit a URL with a query string of the episode and season number. So I had to establish how many seasons there were, and how many episodes each season had.

I knew from personal experience that there are 6 seasons of the hit show; I just needed to know how many episodes. So I went on IMDB and jotted the numbers down. I created some seeders to populate some models and, bosh, I was ready to go.

Seeders

Here's my seeder, which populated my database ready for the main data extraction.
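Something along these lines, at least - a minimal sketch, where the `Season` / `Episode` model names are assumptions and the episode counts are the ones IMDB lists for the show:

```php
use App\Episode;
use App\Season;
use Illuminate\Database\Seeder;

class SeasonsAndEpisodesSeeder extends Seeder
{
    // Episode counts per season, jotted down from IMDB
    protected $seasons = [1 => 25, 2 => 24, 3 => 22, 4 => 13, 5 => 13, 6 => 13];

    public function run()
    {
        foreach ($this->seasons as $number => $count) {
            // Assumes the relevant columns are fillable on both models
            $season = Season::create(['number' => $number]);

            for ($i = 1; $i <= $count; $i++) {
                Episode::create([
                    'season_id'      => $season->id,
                    'episode_number' => $i,
                ]);
            }
        }
    }
}
```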

The Dusk Test

So, I wrote myself some test logic that creates Lines for each episode and stores them in the database.
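Here it is in outline - a minimal sketch, where the script site's URL and the `.script-container` selector are placeholders, and the `lines()` relationship and `content` column on the `Line` model are assumptions:

```php
namespace Tests\Browser;

use App\Episode;
use Laravel\Dusk\Browser;
use Tests\DuskTestCase;

class ScrapeScriptsTest extends DuskTestCase
{
    public function testScrapeEpisodeScripts()
    {
        $this->browse(function (Browser $browser) {
            foreach (Episode::all() as $episode) {
                // Pad to two digits so the query string reads s01e01, not s1e1
                $s = str_pad($episode->season_id, 2, '0', STR_PAD_LEFT);
                $e = str_pad($episode->episode_number, 2, '0', STR_PAD_LEFT);

                // Placeholder URL - the real site took the season/episode as a query string
                $browser->visit("https://scripts.example.com/episode?ref=s{$s}e{$e}");

                // Grab the full script text from its container element
                $script = $browser->text('.script-container');

                // Break it into lines and save each one against the episode
                foreach (array_filter(array_map('trim', explode("\n", $script))) as $text) {
                    $episode->lines()->create(['content' => $text]);
                }
            }
        });
    }
}
```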

Explanation of Logic

```php
$s = str_pad($episode->season_id, 2, '0', STR_PAD_LEFT);
$e = str_pad($episode->episode_number, 2, '0', STR_PAD_LEFT);
```

Here I pad the episode and season numbers with a leading 0 (where necessary). This is due to the way the script website accepts its parameters, i.e. I want to request S01E01, not S1E1.

We grab the script for the episode, break it into lines, and iterate over them, saving each line to the database and associating it with the episode we're on.


Using Laravel Dusk has its perks for this kind of thing; however, it does have its drawbacks:

  • You can't use Laravel Dusk in a production environment.
  • Getting the selectors for your browser elements can be a PITA, especially if the site doesn't use HTML IDs or classes.
  • If the site changes, you need to update your tests.


However, I must say I like this method as an 'initial hit' for a database build, followed by copying / exporting the database to a production environment. Using this method I can get ALL the episodes, lines, and episode names in under 2 minutes - far faster than doing it manually.

Questions? Want to talk? Here are all my social channels