from simple_image_download import Downloader

API

class simple_image_download.Downloader(extensions=None)
extensions - the types of file extensions allowed to be downloaded; if left None, the library's defaults are used.

Functions

search_urls(keywords, limit, verbose=False, cache=True, timer=None)
This function returns and caches URLs of pictures based on:
- the file extensions of the class instance you define
- how many pictures per keyword you need, set with the limit parameter

cache => if set to False, the URLs won't be stored in the class instance; default is True.
verbose => outputs the links in the terminal in real time.
timer => default is 100_000; defines how many of the webpage's 'chunks' will be searched. Useful when a picture cannot be found, so the search won't loop indefinitely: in the function scan_webpage, a timer of 100_000 means it will loop up to 100_000 times before stopping.

download(keywords, limit, verbose=False, cache=True, download_cache=False, timer=None)
This function downloads pictures into the defined class instance's directory. The directory is named after the keyword.
download_cache => downloads all the URLs stored in the Downloader instance's cache. Pictures get a unique ID, so multiple downloads can be performed. Remember to clear the cache afterwards with Downloader.flush_cache.
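The pieces above can be combined into a short usage sketch. The keyword "kittens" and the helper name `fetch` are illustrative, the import path follows the doc's own import line, and the import is guarded since the package may not be installed:

```python
# Usage sketch, assuming the package is installed and exposes the API
# documented above (Downloader, search_urls, download, flush_cache).
try:
    from simple_image_download import Downloader
    HAVE_LIB = True
except ImportError:
    HAVE_LIB = False  # package not installed; sketch is illustrative only


def fetch(keyword, limit=5):
    """Cache URLs for `keyword`, download them, then clear the cache."""
    d = Downloader()  # extensions=None -> library default extensions
    urls = d.search_urls(keyword, limit, verbose=True)  # cache=True by default
    d.download(keyword, limit)  # saves into a directory named after the keyword
    d.flush_cache()             # per the docs: clear the cache afterwards
    return urls


if __name__ == "__main__" and HAVE_LIB:
    fetch("kittens", limit=3)
```

Calling download with download_cache=True instead would pull every URL already stored by earlier search_urls calls.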