Create agents that monitor and act on your behalf. Your agents are standing by!
A visual, no-code web crawler/spider. 易采集 (EasySpider) is a visual browser-automation testing, data-collection, and crawling tool that lets you design and run scraping tasks graphically, without writing code. Also known as ServiceWrapper, an intelligent service-wrapping system for web applications.
The fast, flexible, and elegant library for parsing and manipulating HTML and XML.
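As a rough sketch of how such a parsing library is typically used (this tagline matches Cheerio; the inline HTML snippet and selectors below are illustrative assumptions):

```ts
import * as cheerio from 'cheerio';

// Small inline HTML snippet, just for illustration.
const html = `
  <ul id="repos">
    <li class="repo">crawlee</li>
    <li class="repo">cheerio</li>
  </ul>
`;

// Parse the markup and query it with jQuery-like selectors.
const $ = cheerio.load(html);
const names = $('#repos .repo')
  .map((_, el) => $(el).text().trim())
  .get();

console.log(names); // [ 'crawlee', 'cheerio' ]
```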
Auto_Jobs_Applier by AIHawk is an agent that automates the job application process. Using artificial intelligence, it enables users to apply for multiple jobs in an automated, personalized way.
🔥 Turn entire websites into LLM-ready markdown or structured data. Scrape, crawl and extract with a single API.
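A minimal sketch of calling such a scrape-to-markdown service over HTTP. It assumes Firecrawl's hosted v1 `/scrape` endpoint and an API key in a `FIRECRAWL_API_KEY` environment variable; the exact request and response shape may differ between API versions, so check the current docs.

```ts
// Hedged sketch: assumes the v1 /scrape endpoint and a FIRECRAWL_API_KEY
// environment variable; field names are taken from the v1 API and may change.
const res = await fetch('https://api.firecrawl.dev/v1/scrape', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
  },
  body: JSON.stringify({
    url: 'https://example.com',   // placeholder target URL
    formats: ['markdown'],        // ask for LLM-ready markdown
  }),
});

const { data } = await res.json();
console.log(data?.markdown); // page content as markdown
```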
Crawlee—A web scraping and browser automation library for Node.js to build reliable crawlers. In JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP. Both headful and headless mode. With proxy rotation.
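A minimal Crawlee crawler, as a sketch of the library's request-handler style; it assumes the `CheerioCrawler` class exported by the `crawlee` package and uses a placeholder start URL.

```ts
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
  // Called once per fetched page; `$` is a Cheerio handle on the page HTML.
  async requestHandler({ request, $, enqueueLinks, pushData }) {
    // Store one record per page in the default dataset.
    await pushData({ url: request.loadedUrl, title: $('title').text() });
    // Queue links discovered on the page so the crawl continues.
    await enqueueLinks();
  },
});

// Placeholder start URL, just for illustration.
await crawler.run(['https://example.com']);
```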
📙 Chinese Xinhua Dictionary database, covering xiehouyu (two-part allegorical sayings), idioms, words, and Chinese characters.
Adult video film management system with crawlers for avmoo, javbus, and javlibrary; an online AV film library and AV magnet-link database (Japanese Adult Video Library, Adult Video Magnet Links - Japanese Adult Video Database).
🚀 Douyin_TikTok_Download_API is an out-of-the-box, high-performance asynchronous data-scraping tool for Douyin, Kuaishou, TikTok, and Bilibili. It supports API calls as well as online batch parsing and downloading.
A collection of awesome web crawlers and spiders in different languages.
A Smart, Automatic, Fast and Lightweight Web Scraper for Python
Declarative web scraping
A Chrome DevTools Protocol driver for web automation and scraping.
YouTube video downloader in JavaScript.
Crawlee—A web scraping and browser automation library for Python to build reliable crawlers. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with BeautifulSoup, Playwright, and raw HTTP. Both headful and headless mode. With proxy rotation.
A social networking service scraper in Python