gwon713/Web_image_crawler

Custom crawler, based on https://github.com/YoongiKim/AutoCrawler

AutoCrawler

Google, Naver multiprocess image crawler (High Quality & Speed & Customizable)

How to use

Prerequisites: Python and pip must be installed.

  1. Install Chrome. // the Chrome web browser is required

  2. Install the Python packages the crawler needs. // requires python and pip

pip install -r requirements.txt
or
pip3 install -r requirements.txt

  3. Write the search keywords in keywords.txt. // list the keywords you want to crawl

  4. Run "main.py" from the folder where you downloaded the crawler.

python main.py
or
python3 main.py

  5. Files will be downloaded to the 'download' directory.
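keywords.txt simply lists the search terms to crawl; an illustrative example (the keywords themselves are placeholders, one per line as AutoCrawler expects):

```
cat
dog
golden retriever
```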

Options

usage:

python3 main.py [--skip true] [--threads 4] [--google true] [--naver true] [--full false] [--face false] [--no_gui auto] [--limit 0]
--skip true        Skip a keyword if its download directory already exists. Useful when re-running the crawler. // skips keywords that have already been downloaded

--threads 4        Number of download threads.

--google true      Download from google.com (boolean). // use Google search

--naver true       Download from naver.com (boolean). // use Naver search

--full false       Download full-resolution images instead of thumbnails (slow).

--face false       Face search mode.

--no_gui auto      No-GUI (headless) mode. Speeds up full-resolution mode, but is unstable in thumbnail mode.
                   Default: "auto" - false if full=false, true if full=true.
                   (Can be used on Docker/Linux systems.)

--limit 0          Maximum number of images to download per site. (0 = no limit) // set the download count; 0 means unlimited

--proxy-list ''    Comma-separated proxy list, e.g. "socks://127.0.0.1:1080,http://127.0.0.1:1081".
                   Each thread randomly chooses one proxy from the list.
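The boolean options above are passed as the strings "true"/"false" on the command line. A minimal sketch of how such flags can be parsed with Python's argparse (the option names mirror the list above; the `str2bool` helper and defaults are illustrative, not taken from the repository):

```python
import argparse

def str2bool(value: str) -> bool:
    # Interpret "true"-like strings as booleans
    return str(value).lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument('--skip', type=str2bool, default=True)
parser.add_argument('--threads', type=int, default=4)
parser.add_argument('--google', type=str2bool, default=True)
parser.add_argument('--naver', type=str2bool, default=True)
parser.add_argument('--full', type=str2bool, default=False)
parser.add_argument('--limit', type=int, default=0)

# Simulate: python3 main.py --full true --threads 8
args = parser.parse_args(['--full', 'true', '--threads', '8'])
print(args.full, args.threads)  # True 8
```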

Full Resolution Mode

You can download full-resolution JPG, GIF, and PNG images by specifying --full true.

Data Imbalance Detection

Detects data imbalance based on the number of files per keyword.

When crawling ends, a message lists the directories that contain fewer than 50% of the average file count.

I recommend removing those directories and re-downloading them.
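The check described above can be sketched in a few lines; this is a minimal illustration of the rule (function name and the example counts are hypothetical, not from the repository):

```python
def find_imbalanced(counts: dict) -> list:
    # Return directories holding fewer than 50% of the average file count
    average = sum(counts.values()) / len(counts)
    return [name for name, n in counts.items() if n < average * 0.5]

# Average of the example below is 70, so anything under 35 is flagged
print(find_imbalanced({'cat': 100, 'dog': 90, 'bird': 20}))  # ['bird']
```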

Remote crawling through SSH on your server

sudo apt-get install xvfb <- This installs a virtual display.

sudo apt-get install screen <- This lets you close the SSH terminal while the crawler keeps running.

screen -S s1

Xvfb :99 -ac & DISPLAY=:99 python3 main.py

Customize

You can make your own crawler by changing collect_links.py
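collect_links.py is where the site-specific link collection lives. As a hypothetical sketch of the kind of helper you might add there (the function name and extension-based heuristic are illustrative, not part of the repository):

```python
def filter_image_links(links: list) -> list:
    # Keep only URLs that point directly at image files,
    # ignoring any query string after '?'
    image_exts = ('.jpg', '.jpeg', '.png', '.gif')
    return [url for url in links
            if url.lower().split('?')[0].endswith(image_exts)]

print(filter_image_links([
    'https://example.com/a.jpg',
    'https://example.com/page.html',
    'https://example.com/b.PNG?size=large',
]))  # ['https://example.com/a.jpg', 'https://example.com/b.PNG?size=large']
```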

Issues

Since Google's site changes frequently, please open an issue if the crawler stops working.
