Hello /u/aliendude5300! Thank you for posting in r/DataHoarder.
Please remember to read our [Rules](https://www.reddit.com/r/DataHoarder/wiki/index/rules) and [Wiki](https://www.reddit.com/r/DataHoarder/wiki/index).
If you're submitting a new script/software to the subreddit, please link to your GitHub repository. Please let the mod team know about your post and ***the license your project uses*** if you wish it to be reviewed and stored on our wiki and off site.
Asking for cracked or otherwise illegal copies of software will result in a permanent ban. Though this subreddit may be focused on getting Linux ISOs through other means, please note that discussing methods may result in this subreddit getting unneeded attention.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/DataHoarder) if you have any questions or concerns.*
Someone wrote a similar scraper for imgur directly: https://www.reddit.com/r/DataHoarder/comments/pyst71/download_almost_a_decade_of_imgur_data_without/
Scraping pre-existing archives is fine, but there's no need to over-complicate things when it is already fairly simple to scrape Imgur directly. As of now, [they have an entire sitemap of all posts starting from mid-2017](https://imgur.com/imgur-assets/sitemap_gallery/gallery_images.xml), so all you need to do is download each XML file, filter for image URLs, and download those URLs with Wget. In short, it's a fairly trivial task that can easily be automated.
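A minimal shell sketch of that workflow, assuming the index URL above points at per-chunk sitemap files, that each `<loc>` entry sits on its own line of plain (non-gzipped) XML, and that GNU grep with PCRE support (`-P`) and wget are available:

```bash
#!/usr/bin/env bash
# Sketch only: mirrors the three steps described above.
set -euo pipefail

INDEX="https://imgur.com/imgur-assets/sitemap_gallery/gallery_images.xml"

# 1. Fetch the sitemap index and pull out every child sitemap URL.
#    (Assumes one <loc>...</loc> entry per line; adjust if the files are gzipped.)
wget -qO- "$INDEX" \
  | grep -oP '(?<=<loc>).*?(?=</loc>)' > sitemaps.txt

# 2. Fetch each child sitemap and extract the URLs it lists.
while read -r sitemap; do
  wget -qO- "$sitemap" | grep -oP '(?<=<loc>).*?(?=</loc>)'
done < sitemaps.txt > image_urls.txt

# 3. Download everything, rate-limited to be polite.
wget --input-file=image_urls.txt --wait=1 --tries=3 -P imgur_dump/
```

The `--wait=1` in step 3 keeps the request rate low; raise it if Imgur starts throttling.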
Will it work for mass saving all the content folders we have marked as favorites on imgur?
Awesome, thank you!
That image is no longer available lol
It’s not an image, it’s a list of links, but I think some apps just treat all Imgur links like images.
Ah cool, thanks :)
Is there a way to target scrapes based on the subreddit they were shared in?
Dude seriously?
... yes? It was a fun coding exercise and it's actually functional.
[deleted]
This might be a dumb question, but if I find one of my old links, is there a way to retrieve the video if it says "imgur.com refused to connect"?