Hi guys,
I'm not even close to great with PHP; I mostly just tinker around when I get a chance. I'm hoping this is something super simple I'm just not catching on to.
So, quick question here about scraping. Most scripts I see (and ones I've successfully made/frankenscripted myself) focus on scraping a single page, which is fine, but what if you want a single script to scrape multiple pages?
Say I have a list of URLs in a text document - nothing massive, maybe 5-10. What would be the best way to scrape the HTML from each one and dump it all into a single text file, appended one after another?
What I've tried is reading the list into an array and then using foreach to fetch the HTML and fwrite() it, but needless to say I'm having some difficulties.
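To give you an idea, here's the rough shape of what I've been aiming for with the plain file_get_contents() route - urls.txt and scraped.txt are just placeholder names I made up. Is this anywhere close?

<?php
// Read the URL list into an array, one URL per line
$urls = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($urls as $url) {
    // file_get_contents() returns false on failure, so check before writing
    $html = file_get_contents($url);
    if ($html === false) {
        echo "Couldn't fetch $url\n";
        continue;
    }

    // FILE_APPEND keeps adding to the same file instead of overwriting it
    file_put_contents('scraped.txt', $html . "\n", FILE_APPEND);
}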
Yes, I know I should be getting comfortable with cURL; that's my next step.
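I'm guessing the cURL version would look something like this (same placeholder file names), but correct me if I'm way off:

<?php
$urls = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$out  = fopen('scraped.txt', 'a'); // 'a' = append mode

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // hand back the HTML instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // give up on a dead URL after 10 seconds

    $html = curl_exec($ch);
    if ($html === false) {
        echo "cURL error for $url: " . curl_error($ch) . "\n";
    } else {
        fwrite($out, $html . "\n");
    }
    curl_close($ch);
}

fclose($out);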
Any tips or info on this would be great though, thanks!